#random

thdxr

07/27/2021, 3:47 PM
What are people seeing in terms of response times from api gateway + lambda? I have an endpoint that's only doing a simple read from Dynamo and I'm seeing it take 100-200ms. Was really hoping for much faster
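A minimal sketch of the kind of endpoint being described (hypothetical table and key names; the DynamoDB client is injected so the handler shape can be exercised locally with a stub instead of AWS):

```javascript
// Lambda-style handler doing a single DynamoDB read.
// `ddb` is a DocumentClient-like object passed in so a stub can
// stand in for the real client when testing without AWS.
async function getItemHandler(event, ddb) {
  const id = event.pathParameters && event.pathParameters.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ error: "missing id" }) };
  }
  // "Items" is a hypothetical table name; a real handler would
  // read it from environment config.
  const result = await ddb.get({ TableName: "Items", Key: { id } }).promise();
  if (!result.Item) {
    return { statusCode: 404, body: JSON.stringify({ error: "not found" }) };
  }
  return { statusCode: 200, body: JSON.stringify(result.Item) };
}
```

Even with a handler this thin, the observed end-to-end time includes API Gateway overhead and network round trips on top of the function's own duration.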
Ross Coundon

07/27/2021, 4:00 PM
Same for us. Although in the app I'm looking at, it's the old version of API Gateway, i.e. not the HTTP API
thdxr

07/27/2021, 4:07 PM
I started looking at just the Lambda invocation times and it's kind of disappointing. Even a warm lambda doing nothing but returning a 200 is close to 100ms
Ross Coundon

07/27/2021, 4:07 PM
APIG definitely adds overhead
thdxr

07/27/2021, 4:09 PM
wait hmm I stripped it all down and was able to get a 12ms lambda invocation for just a 200
need to do more investigating. ApiGateway overhead still an issue though 😞
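One way to separate the function's own runtime from gateway overhead is to time a bare handler that only returns a 200, as described above. A rough local sketch (for a real invocation, the authoritative number is the Duration field on the REPORT line in CloudWatch logs):

```javascript
// Bare handler that does nothing but return a 200, mirroring the
// stripped-down test described above.
async function bareHandler() {
  return { statusCode: 200, body: "" };
}

// Rough local timing harness using a monotonic clock; this measures
// only the handler body, not API Gateway or network latency.
async function timeHandler(handler, event) {
  const start = process.hrtime.bigint();
  const res = await handler(event);
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  return { res, ms };
}
```

If the bare handler measures in single-digit milliseconds but the client sees 100ms+, the difference is coming from outside the function.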
Ross Coundon

07/27/2021, 4:10 PM
are you using HTTP API or the old/original one?
thdxr

07/27/2021, 4:10 PM
Using the sst.Api construct, which I believe is HTTP API
Ross Coundon

07/27/2021, 4:11 PM
ok, yeah, that's right, and that's supposed to be faster than the original one 😞
thdxr

07/27/2021, 4:12 PM
That image you sent me is that from xray?
Ross Coundon

07/27/2021, 4:13 PM
no, it's Epsagon
thdxr

07/27/2021, 4:14 PM
ah cool
Frank

07/27/2021, 7:25 PM
As a side note, HTTP API doesn’t support xray yet.
thdxr

07/27/2021, 7:26 PM
They don't want us to know how slow it is
^ Following up on this: I set up tracing with Datadog and noticed a 100ms POST request at the end of every single Lambda invocation. Turns out I had Sentry's tracing sample rate set to 1.0, which means it was posting a trace to Sentry at the end of every invocation. Removed that and now the lambda runs for < 20ms and often < 10ms. However, latency from API Gateway is still enormous
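The setting in question is Sentry's tracesSampleRate: at 1.0 every invocation ships a trace (the extra 100ms POST per call seen above), while a fractional rate samples only some invocations. The decision can be sketched as (a hypothetical helper illustrating the idea, not Sentry's actual internals):

```javascript
// Rate-based trace sampling: 1.0 sends a trace on every invocation,
// 0.0 sends none, and e.g. 0.1 sends roughly 1 in 10.
// `rng` is injectable so the decision can be tested deterministically.
function shouldSendTrace(sampleRate, rng = Math.random) {
  if (sampleRate <= 0) return false;
  if (sampleRate >= 1) return true;
  return rng() < sampleRate;
}
```

Lowering the rate (or disabling tracing entirely) removes the synchronous trace upload from the hot path of each invocation.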
ok I must be doing something wrong, their docs claim:
With that in mind, HTTP APIs is built to reduce the latency overhead of the API Gateway service. Combining both the request and response, 99% of all requests (p99) have less than 10 ms of additional latency from HTTP API.
Hmm, CloudWatch logs show very little added latency from HTTP API. I wonder if this is just the reality of being in us-east-2 from NY.