# orm-help
m
I am using Aurora Serverless Postgres with Prisma right now for my application. I need to release it to the public soon, and the Prisma cold starts make my app slow enough to be unusable. In order to make it faster, I'm evaluating my options. Would my best hope be to convert my DB from Aurora Serverless to regular Aurora and use the Data Proxy? Can I set up some sort of tunnel from the internet to my Aurora DB inside my VPC? Since I think part or most of the cold start time is related to the parsing of my dmmfString, will the Data Proxy even help? I also just discovered I can't combine interactive transactions and the Data Proxy, which is a problem. I'm happy to spend some time myself trying to speed up the Prisma Node client initialization if it's something I can help with, but I'd need someone to point me in the right direction. I need to get this fixed. Any other suggestions?
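[Editor's note: one common mitigation for per-invocation init cost, not mentioned in the thread, is to construct the client once at module scope so warm Lambda invocations reuse it and only the first call per container pays the startup cost. A minimal sketch of that pattern; `ExpensiveClient` is a stand-in for something like `PrismaClient`, and all names here are illustrative:]

```typescript
// Sketch: cache an expensive-to-construct client at module scope so warm
// Lambda invocations reuse it instead of re-initializing on every request.
// `ExpensiveClient` stands in for PrismaClient; names are illustrative.

let initCount = 0;

class ExpensiveClient {
  constructor() {
    initCount++; // simulates costly startup (schema parsing, connections)
  }
  query(sql: string): string {
    return `result of ${sql}`;
  }
}

// Module-scope singleton: created lazily on first use, then reused for the
// lifetime of the container. Only cold starts pay the constructor cost.
let client: ExpensiveClient | undefined;
function getClient(): ExpensiveClient {
  if (!client) client = new ExpensiveClient();
  return client;
}

// A handler in the style of a Lambda resolver.
function handler(): string {
  return getClient().query("SELECT 1");
}
```

This doesn't remove the cold start itself, but it keeps the init cost from repeating on warm invocations.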
l
I'm not familiar with Aurora, but on GCP it is possible to set up a timed trigger, e.g. once a minute, that makes a request to any service that has cold start issues. This should ensure that an instance is available at all times (although ramping up could still be slow). There are also options, at least for Cloud Run, to have permanently live instances.
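[Editor's note: the periodic-ping idea above can be sketched as a tiny timer loop. In production the timer would be Cloud Scheduler or EventBridge and the ping an HTTPS call to the function's URL; here the ping is a stub so the sketch is self-contained, and the interval/count are placeholders:]

```typescript
// Sketch of a keep-warm pinger: call `ping` on a fixed interval so the
// platform keeps at least one instance alive. The ping is a stub here; a
// real one would be e.g. fetch("https://my-fn.example.com/health").

type Ping = () => Promise<void>;

function keepWarm(ping: Ping, intervalMs: number, times: number): Promise<number> {
  return new Promise((resolve) => {
    let sent = 0;
    const timer = setInterval(async () => {
      await ping();
      sent++;
      if (sent >= times) {
        // Bounded so the sketch terminates; a real pinger runs forever.
        clearInterval(timer);
        resolve(sent);
      }
    }, intervalMs);
  });
}

// Usage: ping a stub three times, 10 ms apart.
let pings = 0;
keepWarm(async () => { pings++; }, 10, 3).then((n) => {
  console.log(`sent ${n} keep-warm pings`);
});
```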
m
I have dozens of functions though, that's not a great option for me
c
How are you running those functions? I presume Lambda, but via Next.js API routes on Vercel, or something else?
m
Via AWS AppSync - it's a bunch of lambda graphql resolvers
e
Hi guys! I’m also looking into architecting using serverless and I’m new to this. Would the cold start issue be mitigated by calling the function periodically? Or is this not a guarantee?
l
@Eu Jin Kim The type of serverless you are looking into should have detailed documentation on the instance lifecycle. On GCP, it is stated that Cloud Functions instances will be available for up to 9 minutes. I haven't actually implemented such pinging, but I have seen that it behaves this way by observing delays on various requests.
@Eu Jin Kim Actually, I may have mis-remembered the 9 minute part; it seems that is the max timeout of a request, not the instance lifetime. What the docs DO say is that as long as there are steady requests, the function won't be scaled down to 0, so there will be a live instance (where information cached in previous calls is still available). The exact algorithm isn't stated. But every scale-up (due to increased demand) will cause a cold start. I would probably test to see what timings work and what don't. https://cloud.google.com/functions/docs/concepts/exec
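[Editor's note: the "test what timings work" suggestion can be sketched as timing repeated calls and flagging ones much slower than the fastest call as likely cold starts. The 5x threshold is an arbitrary assumption to tune against real measurements, and `fn` below is a stub rather than a real HTTPS request:]

```typescript
// Sketch: time repeated calls and flag outliers as likely cold starts.
// In a real test, `fn` would hit the deployed function over HTTPS.

async function measure(fn: () => Promise<void>): Promise<number> {
  const start = performance.now();
  await fn();
  return performance.now() - start;
}

async function findColdStarts(
  fn: () => Promise<void>,
  calls: number,
  factor = 5, // assumed threshold: "cold" = 5x slower than the fastest call
): Promise<number[]> {
  const durations: number[] = [];
  for (let i = 0; i < calls; i++) durations.push(await measure(fn));
  const baseline = Math.min(...durations);
  // Return the indices of calls that look like cold starts.
  return durations.flatMap((d, i) => (d > baseline * factor ? [i] : []));
}
```

Running this at different request intervals (30s, 60s, 5min, ...) would show roughly how long an idle instance survives before the next call hits a cold start.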
e
@Lars Ivar Igesund Gotcha. I’m also reading the AWS Lambda docs. This is definitely helping me understand the issue better. So pinging periodically is likely to mitigate cold starts, but you can’t avoid them entirely. Thanks so much!