# help
s
is it generally known how much latency is added when running in live debug mode, and having a local front end client make API calls?
f
I’m on the east coast, app deployed to `us-east-1`; the round trip AWS <-> my MBP plus invoking a local process to spawn the function adds up to ~300ms when I last benchmarked.
t
I've seen similar numbers
f
@thdxr proposed keeping the spawned process around, which could potentially shave off 100-150ms.
s
that sounds about right. I’m seeing like 1.5 to 2 seconds total for my API calls
I haven’t compared it to a production call yet
wow. production is like 100-300ms range
so this is significant
those are API endpoints deployed w/ Serverless Framework, but exactly the same Lambda code
I’ll kill the debug proc & deploy, then see how fast it is
f
Yeah, if you restart the `sst start` process, make 1 request, and DM me the `.build/sst-debug.log`, I can take a quick look at what’s taking long.
s
sure thing! one sec
sorry that took so long, Internet kept dropping out mid-deploy, so I had to restart.. twice. 😕 ok, so it’s about 150ms for one particular endpoint (once it’s warm). then, same endpoint after `yarn start`: about 1500ms 😬
for reference: I’m in Chicago, Internet is fiber - 300Mbps up/down, 2ms latency
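For anyone wanting to reproduce this kind of measurement, here is a minimal sketch of a warm-endpoint benchmark, assuming Node 18+ (for the built-in `fetch`). `timeEndpoint` is a hypothetical helper, and the URL would be your own API endpoint:

```javascript
// Time repeated requests to an endpoint and average the warm samples.
async function timeEndpoint(url, runs = 5) {
  const samples = [];
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now();
    await fetch(url);
    samples.push(performance.now() - t0);
  }
  // Drop the first (cold) sample so cold-start cost doesn't skew the average.
  const warm = samples.slice(1);
  return warm.reduce((a, b) => a + b, 0) / warm.length;
}
```

Running this once against the deployed endpoint and once against the local `sst start` endpoint gives a like-for-like comparison of the ~150ms vs ~1500ms numbers above.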
lemme know if you need help debugging this. this is sort of a big deal, as it really slows down the dev process overall