# general
j
Woohoo it has landed
m
@thdxr `ApolloApi` doesn't appear to work with udp. When I load up the output `ApolloApi` URL I get `{"message":"Internal Server Error"}`
t
is it specifically apolloapi or any function?
m
Specifically `ApolloApi`
t
weird ok will look into this
Think maybe issues with larger payloads
Figured it out, I forgot to implement payload splitting for responses, only did it for requests - will fix tomorrow
k
I don't use Apollo but it did not work for me either, I suppose it is a general response issue 👍
t
moral of the story is, don't try to cut corners by using a uint8 when a uint16 is only one byte bigger
a
@thdxr Excited to give this a try with our ~500 API integration tests that are all IO-heavy. We used to be able to run them all with `sst start`, but needed to switch to `sst deploy` as it became too slow / ran into timeouts. I'll hopefully be able to provide some decent numbers to see if this changes anything.
t
The slowness came from the function taking too long to execute?
a
The tests started failing due to timeouts, but yeah even when we dropped parallelism they were slow. Plus APIs sent events which triggered other lambdas causing a bit of an invoke storm.
t
interesting ok
I think I got the issue fixed will be doing a release shortly
m
Thank you @thdxr! We appreciate you!
t
It's fixed in 0.51.1 lmk if it works
m
@thdxr Thanks! Checking it out now.
a
@thdxr never mind my comment about not being able to run all lambdas with `sst start`. It seems like the recent changes have significantly improved the speed at which lambdas execute locally (this is without the udp flag)! I just hadn't checked recently. Nice job! 🎉
Here are my results with our integration test suite. Some context around my benchmarks:

• We run integration tests with jest and control parallelism with `maxWorkers`; for example in CI we run with a parallelism of 40 (one of the super cool things about serverless is that you can just hammer your services without worrying about cost/scale 😄).
• We have about 100 files, each file containing ~5 tests, and each test typically does one API request which hits our graphql API lambda.
• I'll provide some benchmarks with different `maxWorkers` settings to see how chattiness from the lambdas to my macbook changes the numbers.
• My internet and wifi is pretty good: 600mbps up/down, unifi AP/switches. Using cloudping.info I'm getting 10-15ms for eu-west-2 where our services run, so RTT is pretty low for me. I can try using a VPN from another country to add some artificial RTT if that would be interesting for you.
• These timings are from second/third runs, so lambda stubs are all warm and any local cache is also warm.

maxWorkers=2 benchmarks:

Without udp flag:
```
Test Suites: 98 passed, 98 total
Tests:       513 passed, 513 total
Snapshots:   0 total
Time:        440.215 s, estimated 739 s
Ran all test suites.
```
With udp flag:
```
Test Suites: 98 passed, 98 total
Tests:       513 passed, 513 total
Snapshots:   0 total
Time:        402.35 s, estimated 424 s
Ran all test suites.
```
Result: ~10% improvement

maxWorkers=10 benchmarks:

Without udp flag:
```
Test Suites: 98 passed, 98 total
Tests:       513 passed, 513 total
Snapshots:   0 total
Time:        97.648 s, estimated 100 s
Ran all test suites.
```
With udp flag:
```
Test Suites: 98 passed, 98 total
Tests:       513 passed, 513 total
Snapshots:   0 total
Time:        90.263 s, estimated 95 s
Ran all test suites.
```
Result: ~10% improvement.

maxWorkers=20 benchmarks:

Without udp flag:
```
Test Suites: 98 passed, 98 total
Tests:       513 passed, 513 total
Snapshots:   0 total
Time:        82.769 s
Ran all test suites.
```
With udp flag:
```
Test Suites: 98 passed, 98 total
Tests:       513 passed, 513 total
Snapshots:   0 total
Time:        62.965 s
Ran all test suites.
```
Result: ~25% improvement

Take these numbers with a grain of salt as they are integration tests that hit tons of AWS services, so there could be some variability. But what definitely seems true is that there's an improvement across the board 🎉 so nice job! 🚀
t
Wow, real nice - thanks for sharing that in detail, super helpful
k
@thdxr I tried 0.51.1 on an existing stack and for some reason every time I run it I see updates, and it is also not working - requests are not hitting my env...
t
Can you do me a favor and invoke a function from the aws console? It should time out but print a log - can you share that log here
It's possible this new udp mode doesn't work for everyone's network setup - but gave it to a few people where it did work so thought it would be fine
k
I will get that log to you later tonight
```
START RequestId: e52f55dc-e2e1-4ac1-8398-2905b0f40c13 Version: $LATEST
2021/11/12 22:13:23 Listening...
2021/11/12 22:13:23 Registering 35.158.130.208:10280
2021/11/12 22:13:23 Waiting for first ping
END RequestId: e52f55dc-e2e1-4ac1-8398-2905b0f40c13
REPORT RequestId: e52f55dc-e2e1-4ac1-8398-2905b0f40c13	Duration: 10003.06 ms	Billed Duration: 10000 ms	Memory Size: 1024 MB	Max Memory Used: 29 MB
XRAY TraceId: 1-618ee6f8-72b05fc20afdfed73412887f	SegmentId: 37a1af7b101cd04e	Sampled: true
2021-11-12T22:13:33.288Z e52f55dc-e2e1-4ac1-8398-2905b0f40c13 Task timed out after 10.00 seconds
```
@thdxr this is all I get 😅
t
Ok that's helpful. Looks like it gets stuck trying to initialize a udp path to your local machine. Are you on normal home internet or something more complex?
k
Should be very simple 🤷
I did not look at the code, but is it maybe expecting a static IP or something for the local env...?
I also noticed that it was running updates on the functions every time I run it
t
No, what it tries to do is use UDP hole punching to allow packets to flow in both directions between your computer and the lambda function. This doesn't always work, but it often does. It might be that something about the way your router works is preventing this approach, which is why we're probably going to keep `--udp` as an optional flag
Updating the function every time is strange... but could happen if your IP is rapidly changing, which would be bizarre