# help
k
Hi everyone, I am trying to wrap my head around cold start speed and how I could optimize it. My functions are around 1.8 MB and I am seeing >1s cold starts. I did some tests running 100 req/s against a function and I was getting
P99th percentile: 2.042189291s
P95th percentile: 2.011754874s
P90th percentile: 1.985379375s
P50th percentile: 1.683923479s
Max : 2.042189291s
Mean : 1.571575707s
Min : 884.344166ms
The time with warm lambdas is around 100-300ms. Is 1.8 MB too much? The environment is Node 14 with 1024 MB memory and arm64 arch. I am really wondering if this has to do with the size of the function or something else.
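As a side note, percentiles like the ones quoted above can be reproduced from raw latency samples with the nearest-rank method. A minimal sketch (function name and sample shape are mine, not from any load-testing tool):

```typescript
// Nearest-rank percentile over latency samples (assumed to be in ms).
// percentile(s, 50) is the median, percentile(s, 99) the P99, etc.
function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// Example: summarize a run the way the numbers above are reported.
const latencies = [884, 1200, 1450, 1600, 1684, 1750, 1850, 1985, 2012, 2042];
const summary = {
  p99: percentile(latencies, 99),
  p95: percentile(latencies, 95),
  p50: percentile(latencies, 50),
  mean: latencies.reduce((a, b) => a + b, 0) / latencies.length,
};
```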
t
I suspect it might be something besides the bundle size
Are there things you do when the function is starting?
k
So there is actually no payload, this is the speed for a GET request
The weird thing is that I don't really think I do a lot outside of the lambda functions
it's basic stuff like the Dynamo client, SNS client, stuff like that…
I use class-validator and class-transformer
I am thinking maybe that is the issue
This is very weird 😂
t
By payload I meant bundle sorry
k
Ah ok.. so I need to see how to get that down 😂
t
2s does seem pretty high for 1.8mb, I feel like I've seen < 1s for that size
k
Yeah, seems very unreasonable 😅
wait, this is silly, this includes API Gateway latency and stuff 😂
But still I need to find out how to improve this
a
I use class-validator and class-transformer
I don't recognize these. Is this input validation?
k
These basically help me validate and transform JSON (payload) data to my DTOs and the other way around
They are super useful 😂
a
Yup. That might be your cold-start cost, though. It's processing all those decorators and metadata during initialization. These systems are usually designed for long-running processes, where an extra second or two to front-load the processing is no big deal.
You either accept the performance hit or choose something else. I went with ts-auto-guard. It isn't nearly as complete as other validation libraries, but all the pre-processing is done at compile time, not runtime.
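For context, the guards ts-auto-guard emits are plain functions, so there is no decorator or metadata work at init time. A hand-written equivalent looks roughly like this (the DTO shape here is invented for illustration, not from the thread):

```typescript
// Hypothetical DTO; field names are illustrative only.
interface CreateOrderDto {
  productId: string;
  quantity: number;
}

// A plain type guard of the kind ts-auto-guard generates: no decorators,
// no reflect-metadata, so nothing extra runs during Lambda initialization.
function isCreateOrderDto(obj: unknown): obj is CreateOrderDto {
  const o = obj as Record<string, unknown>;
  return (
    typeof obj === "object" &&
    obj !== null &&
    typeof o.productId === "string" &&
    typeof o.quantity === "number"
  );
}
```

The trade-off is exactly as described above: all the shape knowledge is baked in at compile time, so you lose the richer runtime rules (min/max, nested transforms) that class-validator and class-transformer provide.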
k
Ahh makes a lot of sense
I will see how ts-auto-guard works together with esbuild + sst
a
It's separate. I just run it manually when I make a change to the models.
Not saying you shouldn't use your snazzy libraries. They are nice. You just have to account for them causing slower cold starts.
k
No honestly it is more important for me to have faster cold starts
Thanks for validating my concerns
a
You can validate that that really is the cause by turning them off. That's a likely suspect, but we haven't proven it.
k
I use some other decorators that I wrote for some data mapping at the DB level, so I would not be able to disable decorators completely, but I will try to just remove them
I think I tried removing them from one lambda and my cold starts went to 400ms
which is much more acceptable
actually 400ms with everything, API Gateway and all that
Thanks a lot Adam
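One way to confirm the real init cost, independent of API Gateway latency, is the `Init Duration` field Lambda writes on the `REPORT` line of cold-start invocations in CloudWatch. A small parser sketch (the log field is real; the function and regex are mine):

```typescript
// Extracts "Init Duration" (cold-start initialization time, in ms) from a
// Lambda REPORT log line. Warm invocations have no Init Duration field,
// so this returns null for them.
function initDurationMs(reportLine: string): number | null {
  const m = reportLine.match(/Init Duration:\s*([\d.]+)\s*ms/);
  return m ? parseFloat(m[1]) : null;
}
```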
t
fwiw I never use classes
a
I, on the other hand, am very classy. 😉