MyZeD [03/15/2022, 5:07 PM]
Vitali [03/15/2022, 6:32 PM]
andrew [03/15/2022, 11:29 PM]
Ben Hong [03/16/2022, 4:04 AM]
Vitali [03/16/2022, 2:38 PM]
andrew [03/16/2022, 8:51 PM]
darrennotfound [03/17/2022, 4:55 AM]
Stew [03/17/2022, 12:11 PM]
andrew [03/17/2022, 12:13 PM]
James [03/17/2022, 2:34 PM]
john.spurlock [03/17/2022, 4:28 PM]
Vitali [03/17/2022, 8:08 PM]
James [03/17/2022, 8:11 PM]
john.spurlock [03/17/2022, 8:12 PM]
Vitali [03/17/2022, 8:13 PM]
Vitali [03/17/2022, 8:14 PM]
john.spurlock [03/17/2022, 8:14 PM]
James [03/17/2022, 8:15 PM]
Vitali [03/17/2022, 8:16 PM]
Vitali [03/17/2022, 8:16 PM]
john.spurlock [03/17/2022, 8:21 PM]
Vitali [03/17/2022, 10:10 PM]
(message bodies not captured in the export)

john.spurlock [03/18/2022, 4:11 PM]
accept-encoding
Came up almost instantly for anyone serving static assets to browsers; you always had to put something in front of S3 to apply a compression stream (now CloudFront can perform this role as well, but not S3 itself). R2 has all of the info per request (content-length, content-type, the content bytes, and accept-encoding on the incoming request) to do this too.

Vitali [03/18/2022, 5:15 PM]
CompressionStream,
but you'll have to handle what happens if you have too many concurrent requests at once, or a poorly compressible file, and suddenly you're potentially hitting the Workers memory limit.
I want to evolve R2 here to be much more flexible, but that feature is definitely a couple of years out, if not longer.

john.spurlock [03/18/2022, 5:21 PM]
accept-encoding)

Epailes [03/18/2022, 5:36 PM]
zegevlier [03/18/2022, 7:43 PM]
kian [03/18/2022, 8:18 PM]
Isaac McFadyen | YYZ01 [03/18/2022, 9:11 PM]
Isaac McFadyen | YYZ01 [03/18/2022, 9:12 PM]
(message bodies not captured in the export)