# help
g
Hi 👋 I see that a number of people have used Typeorm with SST. When using Typeorm we are seeing lambda cold start times of ~3.5s. Has anyone else experienced this issue or have any suggestions?
r
May or may not help but worth seeing if marking it as external and putting the dependency in a layer instead speeds things up
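(For reference, a minimal sketch of what that could look like, assuming SST v1's `Function` construct with esbuild bundle options; the `layers/typeorm` path and construct names are placeholders:)
```ts
import * as lambda from "aws-cdk-lib/aws-lambda";
import { Function, StackContext } from "@serverless-stack/resources";

export function ApiStack({ stack }: StackContext) {
  // Layer built out-of-band containing node_modules/typeorm
  // ("layers/typeorm" is a placeholder for wherever you build it).
  const typeormLayer = new lambda.LayerVersion(stack, "TypeormLayer", {
    code: lambda.Code.fromAsset("layers/typeorm"),
  });

  new Function(stack, "ApiFn", {
    handler: "src/handler.main",
    layers: [typeormLayer],
    bundle: {
      // Keep typeorm out of the esbuild bundle; at runtime Node
      // resolves it from the layer's node_modules instead.
      externalModules: ["typeorm"],
    },
  });
}
```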
g
Thanks will definitely give that a try
f
@Gethyn Jones r u using the `RDS` construct?
g
Yes we are using rds construct
f
Yeah Typeorm is pretty large in size, hence adding to the cold start time.
If you are using the `RDS` construct, we recommend using Kysely to interact with the DB through the Data API. @thdxr shared some reasoning in this thread https://serverless-stack.slack.com/archives/C01JG3B20RY/p1649089211861559
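(For anyone following along, that setup looks roughly like the below. Note it only applies to Aurora Serverless with the Data API enabled; the `Database` interface and env var names are placeholders:)
```ts
import { Kysely } from "kysely";
import { DataApiDialect } from "kysely-data-api";
import RDSDataService from "aws-sdk/clients/rdsdataservice";

// Placeholder schema so queries are type-checked.
interface Database {
  person: { id: number; name: string };
}

// No TCP connection here: queries go over HTTPS via the Data API,
// which avoids holding database connections open across invocations.
const db = new Kysely<Database>({
  dialect: new DataApiDialect({
    mode: "postgres",
    driver: {
      client: new RDSDataService(),
      database: process.env.RDS_DATABASE!,
      secretArn: process.env.RDS_SECRET_ARN!,
      resourceArn: process.env.RDS_CLUSTER_ARN!,
    },
  }),
});

export async function handler() {
  const people = await db.selectFrom("person").selectAll().execute();
  return { statusCode: 200, body: JSON.stringify(people) };
}
```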
g
Ah sorry, I got confused. We are using the CDK RDS construct, not SST's RDS.
But Kysely looks good 👍
f
r u using the RDS Serverless?
g
No. We have a postgres instance running
f
Ah i see. Gotcha.
g
@Ross Coundon thanks for the suggestion. I got typeorm working in an AWS layer. It reduced the Lambda size but had no effect on cold start time. It feels like Typeorm is ultimately too big to play nice with serverless. Might need to rethink our architecture.
f
what’s the diff in Lambda size after moving the ORM to the layer?
g
Approx 18MB I think. Although that sounds crazy, might need to double-check that.
f
how big is it now? (after moving it to the layer)
g
37kb
f
18MB is quite large. Were you setting typeorm as an external, or was it bundled/tree-shaken?
g
We are listing it in `nodeModules` for the function. I listed it in `externalModules` when using typeorm in a layer.
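(For context, the difference between those two bundle options, as I understand SST v1's semantics — function names below are placeholders: `nodeModules` still ships the package inside the zip, just unbundled, while `externalModules` leaves it out entirely:)
```ts
import { Function, StackContext } from "@serverless-stack/resources";

export function Example({ stack }: StackContext) {
  // Variant A: esbuild skips typeorm, but SST installs it into the
  // zip's node_modules, so the ~18 MB still ships with the function.
  new Function(stack, "WithNodeModules", {
    handler: "src/handler.main",
    bundle: { nodeModules: ["typeorm"] },
  });

  // Variant B: typeorm is neither bundled nor packaged; it has to be
  // provided at runtime some other way, e.g. by a Lambda layer.
  new Function(stack, "WithExternalModules", {
    handler: "src/handler.main",
    bundle: { externalModules: ["typeorm"] },
  });
}
```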
t
I don't think you're going to get better performance by shuffling it around - ultimately all of that has to be loaded when your function cold starts. I found anything over 3-4MB starts to become unacceptable. Unfortunately, with serverless you do have to pick libraries well designed for it. You can try analyzing typeorm to see why it results in such a big bundle, but I don't think it will yield anything useful.
g
Yeah I think I've come to this conclusion
> unfortunately with serverless you do have to pick libraries well designed for it
☝️ exactly this. I don't think Typeorm is a good choice for serverless
t
yeah, and it's surprising how popular, seemingly well maintained libraries aren't a good fit; we've discovered both Prisma and Apollo do not work well in Lambda
r
really interesting this
trying to get my head around exactly what a cold start is
presumably some copying of files from slow -> fast disk
all the blocking `require('...')`
t
the gist of it is: when your function is invoked, AWS spins up a fast-starting container. The slowest part is downloading your code so it can be loaded; the bigger that is, the slower it goes
r
until it's in memory
t
Yeah
r
so if it was a vanilla node process
it wouldn't take 5 seconds to load 18MB
t
But surprisingly it's more the downloading of the code than the loading into memory; both have an impact, but the transfer is slower
r
(or would it...?)
t
It would
your code is stored on S3; there's a limit on how fast it can be shipped to the container
r
if I ran `node server.js` from my laptop it would be < 1 second
kk.. so it's a preliminary step on lambda
was wondering if it was all the blocking `require()` on a slow disk
```
1. load 18MB of stuff from S3
2. read + parse 18MB of JS into memory
3. start some sort of TCP connection ready to accept data
```
dunno if that's right...
but does a hot start resume between 2 and 3?
guessing there's some magic container stuff going on to freeze memory in between
soz for questions btw...just trying to separate lambda node from sans-lambda node in my head
r
Subsequent invocations after a cold start reuse the provisioned runtime and connectivity so there's almost no latency
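(A sketch of what that reuse means in code, assuming typeorm 0.3's `DataSource` API; the env var is a placeholder. Anything at module scope, plus anything cached across calls, is paid for once per container:)
```ts
import { DataSource } from "typeorm";

// Module scope runs once per container, during the cold start.
const dataSource = new DataSource({
  type: "postgres",
  url: process.env.DATABASE_URL, // placeholder env var
});

let ready: Promise<DataSource> | undefined;

export async function handler() {
  // Cache the initialization promise so warm invocations reuse the
  // already-open connection instead of reconnecting.
  ready ??= dataSource.initialize();
  await ready;
  return { statusCode: 200, body: "ok" };
}
```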
r
gotcha thx
funny one this:
```ts
export async function handler(event: any) {
  return {
    statusCode: 200,
    body: JSON.stringify(["hello"]),
  };
}
```

18MB bundle
cold start times via ab:

```
  50%    852
  66%    852
  75%    852
  80%    861
  90%    862
  95%    862
  98%    862
  99%    862
 100%    862
```
```ts
require("typeorm");

export async function handler(event: any) {
  return {
    statusCode: 200,
    body: JSON.stringify(["hello"]),
  };
}
```

18MB bundle
cold start times via ab:

```
  50%    997
  66%   1046
  75%   2283
  80%   2297
  90%   3398
  95%   3398
  98%   3398
  99%   3398
 100%   3398
```
t
it's not the require call; you can ship an 18MB dummy file and it'll slow cold starts as well (seen this in practice with big Prisma binaries)
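(If anyone wants to reproduce that, a rough sketch: generate a padding file that is never require()'d and ship it alongside the handler. The file name and SST `copyFiles` option are just one way to include it:)
```ts
import { writeFileSync } from "fs";

// 18 MB of zeros; nothing ever requires this file, so any cold start
// slowdown it causes comes purely from shipping the extra bytes.
writeFileSync("padding.bin", Buffer.alloc(18 * 1024 * 1024));

// Then include it in the function package, e.g. with SST:
//   bundle: { copyFiles: [{ from: "padding.bin" }] }
```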
r
the 18MB bundle without the require call is fast tho
appreciate it doesn't change things.. just trying to understand the root cause
is it the IO + initial parsing that's the slow bit 🤷
```ts
console.log("pre require");

require("typeorm");

console.log("post require");

export async function handler(event: any) {
  console.log("handler");
  return {
    statusCode: 200,
    body: JSON.stringify(["hello"]),
  };
}
```

cold start:

Response time 2.83 s

logs:

```
2022-04-12T21:48:05.331Z undefined INFO pre require
2022-04-12T21:48:07.569Z undefined INFO post require
2022-04-12T21:48:07.587Z f1b67089-98fc-47a8-8a1d-b1e200fee327 INFO handler
```
based on that it looks like the latency mostly occurs after the bundle has been pulled from S3 and the node process has been invoked
bit more on this 😆
even just importing these modules:
```
"pg"
"aws-sdk"  << this is an external
"pino"
"zod"
"kysely"
```
we're seeing cold start times of ~1.2s, which is too long really
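(One way to break that 1.2s down further, in the spirit of the pre/post logs above: time each require individually at module scope so the cost shows up in the cold start logs. Module list copied from above; this is just a rough harness:)
```ts
// Runs during cold start init, before the handler is invoked.
// aws-sdk is external but still loadable (it ships with the runtime).
for (const mod of ["pg", "aws-sdk", "pino", "zod", "kysely"]) {
  console.time(`require ${mod}`);
  require(mod);
  console.timeEnd(`require ${mod}`);
}

export async function handler() {
  return { statusCode: 200, body: "ok" };
}
```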