# r2
  • m

    MyZeD

    03/15/2022, 5:07 PM
    a whole S3 importer would be great 😋
  • v

    Vitali

    03/15/2022, 6:32 PM
    Don't expect it for Open Beta though.
  • a

    andrew

    03/15/2022, 11:29 PM
    so it’s possible to e.g. upload a 10GB object using multipart, then retrieve it with a single GetObject request (but not from a worker)?
  • b

    Ben Hong

    03/16/2022, 4:04 AM
    Will there be a minimum storage duration? For example, if I upload an object for 10 minutes and then delete it, will I be billed for the 10 minutes or a longer period of time?
  • v

    Vitali

    03/16/2022, 2:38 PM
    @User You can't upload more than 500MB at a time to a worker (without getting a plan adjustment, for now). I don't believe retrieval has such a limitation, but we haven't yet tested that use case.
    @User We try to stop billing instantly, but if the bucket is particularly busy we may miss events and only notice the deletion at a coarser granularity (e.g. 10 minutes). We hope to keep improving how reliably billing stops immediately.
  • a

    andrew

    03/16/2022, 8:51 PM
    @Vitali thanks! i don’t mean uploading to a worker, i just mean uploading direct to R2…
  • d

    darrennotfound

    03/17/2022, 4:55 AM
    If I use B2, is that pull free?
  • s

    Stew

    03/17/2022, 12:11 PM
    I think the only way to interact with R2 is via a worker?
  • a

    andrew

    03/17/2022, 12:13 PM
    @User this message - https://discord.com/channels/595317990191398933/940663374377783388/953334866991276112 - suggests that there are two ways to interact with R2:
    * S3 endpoint
    * via worker runtime bindings
  • j

    James

    03/17/2022, 2:34 PM
    Last I heard, the S3 endpoints were still a WIP. The workers bindings are very simple to use though! 😀
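A minimal sketch of what the binding usage looks like, assuming the R2 binding API as it later shipped and a hypothetical binding named MY_BUCKET configured in wrangler.toml:

```ts
interface Env {
  // Bound in wrangler.toml via [[r2_buckets]]; the name MY_BUCKET is just an example.
  MY_BUCKET: R2Bucket; // type provided by @cloudflare/workers-types
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Use the URL path (minus the leading slash) as the object key.
    const key = new URL(request.url).pathname.slice(1);

    const object = await env.MY_BUCKET.get(key);
    if (object === null) {
      return new Response('Object not found', { status: 404 });
    }

    // Copy the stored content-type and other HTTP metadata onto the response.
    const headers = new Headers();
    object.writeHttpMetadata(headers);
    headers.set('etag', object.httpEtag);

    return new Response(object.body, { headers });
  },
};
```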
  • j

    john.spurlock

    03/17/2022, 4:28 PM
    will R2 have strong consistency by default at launch? (unlike s3 at launch, but like s3 today: https://aws.amazon.com/s3/consistency/)
  • v

    Vitali

    03/17/2022, 8:08 PM
    @User You can upload through a worker or the S3 endpoint. The 500MB upload limit applies to Workers, so if you bind an R2 bucket into your worker, you'll have that 500MB limit to deal with for your worker (it's a Workers limit, not an R2 one). The S3 endpoint doesn't have these limits.
    @User I would not classify the S3 endpoint as WIP. There are some pieces of functionality we were never targeting for Open Beta (SSE-C, ListUploads, ListParts, Websites). We're finishing off SigV4 authorization, which was the biggest "WIP" piece for everyone in the internal beta.
    @User R2 is strongly consistent.
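For comparison, a rough sketch of the S3-endpoint path using the AWS SDK; the account-scoped endpoint format and the environment-variable names are assumptions, since the public details weren't published at the time of this conversation:

```ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

// SigV4 credentials and the account ID are read from the environment purely for illustration.
const s3 = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Going through the S3 endpoint avoids the Workers 500MB request-body limit entirely,
// because no worker sits in the upload path.
await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',       // placeholder bucket name
  Key: 'large-upload.bin',   // placeholder key
  Body: Buffer.alloc(1024),  // stand-in payload; a stream or file would be typical
}));
```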
  • j

    James

    03/17/2022, 8:11 PM
    Ah that's good to know - I should probably poke some folks and get some updated info then - all of our testing thus far has been purely via the workers bindings 😅 Sorry for any confusion!
  • j

    john.spurlock

    03/17/2022, 8:12 PM
    Cool, thanks - one last thing: any plans on supporting conditional PUT requests? (PUT if-match) Something sorely missing from S3
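To make the request concrete, here is a sketch of the conditional-write semantics being asked for; it illustrates standard HTTP If-Match preconditions against a placeholder URL, not an API that R2 or S3 offered at the time:

```ts
// ETag obtained from a previous GET or HEAD of the object (value is hypothetical).
const lastSeenEtag = '"3858f62230ac3c915f300c664312c63f"';

const response = await fetch('https://example-bucket.example-endpoint.dev/config.json', {
  method: 'PUT',
  headers: {
    'If-Match': lastSeenEtag,            // only write if the object is unchanged
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ version: 2 }),
});

if (response.status === 412) {
  // Precondition Failed: someone else wrote the object since we read it.
  // Re-read, merge, and retry rather than silently clobbering their write.
}
```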
  • v

    Vitali

    03/17/2022, 8:13 PM
    I don't know if we've onboarded anyone outside of CF onto the S3 endpoint because of the authorization piece that was missing until recently.
  • v

    Vitali

    03/17/2022, 8:14 PM
    @User in the worker bindings or the S3 endpoint?
  • j

    john.spurlock

    03/17/2022, 8:14 PM
    Both? still waiting on R2 to see the bindings : )
  • j

    James

    03/17/2022, 8:15 PM
    Aha, that makes sense! 👍
  • v

    Vitali

    03/17/2022, 8:16 PM
    Makes sense. I don't see any technical reason why not.
  • v

    Vitali

    03/17/2022, 8:16 PM
    Will bring it up with the team
  • j

    john.spurlock

    03/17/2022, 8:21 PM
    nice - would be a great competitive advantage for CF, there are AWS forum threads asking for this from as far back as 2006! (but S3 wasn't strongly consistent back then)
  • v

    Vitali

    03/17/2022, 10:10 PM
    Any other useful features you saw being asked for that still don't exist?
  • j

    john.spurlock

    03/18/2022, 4:11 PM
    Transparent compression: i.e. the user uploads content uncompressed, and R2 automatically applies gzip or brotli per request based on `accept-encoding`. Came up almost instantly for anyone serving static assets to browsers; you always had to put something in front of S3 to apply a compression stream (now CloudFront can perform this role as well, but not S3 itself). R2 has all of the info per request (content-length, content-type, the content bytes themselves, accept-encoding on the incoming request) to do this too.
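As a sketch of the "put something in front of the bucket" workaround, assuming an R2 binding named MY_BUCKET and the binding API as it later shipped (the Workers runtime also does its own encoding negotiation at the edge, so treat this as an illustration of the idea rather than production code):

```ts
interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);
    const object = await env.MY_BUCKET.get(key);
    if (object === null) return new Response('Object not found', { status: 404 });

    const headers = new Headers();
    object.writeHttpMetadata(headers);

    // Only compress when the client says it can handle gzip.
    const acceptsGzip = (request.headers.get('accept-encoding') ?? '').includes('gzip');
    if (!acceptsGzip) return new Response(object.body, { headers });

    headers.set('content-encoding', 'gzip');
    headers.delete('content-length'); // length is unknown once we compress on the fly
    return new Response(object.body.pipeThrough(new CompressionStream('gzip')), { headers });
  },
};
```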
  • v

    Vitali

    03/18/2022, 5:15 PM
    It's something that was already on my backlog, but there are a bunch of technical and non-technical headwinds, so I don't think it's something we can support any time soon. The main obstacle is that all entrypoints into R2 require that you specify the size of the object you're giving us (which, AFAIK, isn't different from any other object storage system). Practically this means you'd need to buffer the compressed file in memory (so that the length becomes known), which then puts a limit on the size of file you can compress. You can kind of do this via a Worker binding by using `CompressionStream`, but you have to handle what happens if you have too many concurrent requests at once or a poorly compressible file, and suddenly you're potentially hitting the Workers memory limit. I want to evolve R2 here to be much more flexible, but that feature is definitely a couple of years out, if not longer.
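A sketch of the workaround being described here: compressing an upload in a worker with CompressionStream and buffering the result so the length is known before the put (the binding name and the R2 binding API are assumptions; the buffering step is exactly where the memory-limit concern bites):

```ts
interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);

    // Pipe the incoming body through gzip; CompressionStream is available in Workers.
    const compressed = request.body!.pipeThrough(new CompressionStream('gzip'));

    // R2 needs the object size up front, so buffer the compressed bytes in memory.
    // A large or poorly compressible upload can blow past the worker's memory limit here.
    const buffered = await new Response(compressed).arrayBuffer();

    await env.MY_BUCKET.put(key, buffered, {
      httpMetadata: {
        contentEncoding: 'gzip',
        contentType: request.headers.get('content-type') ?? 'application/octet-stream',
      },
    });

    return new Response(`Stored ${key} (${buffered.byteLength} bytes compressed)`);
  },
};
```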
  • j

    john.spurlock

    03/18/2022, 5:21 PM
    Oh, I meant on the way out of R2, not on the way in. Assuming there will be some sort of public-read equivalent of R2's GET Object call, let's say from a browser, it would be ideal if R2 compressed the response sent to the client based on what the client supports (in `accept-encoding`).
  • e

    Epailes

    03/18/2022, 5:36 PM
    How does the 500MB file limit work when bound to Workers, given that Workers themselves have a 128MB memory limit? (Shared between all instances of that worker in a particular datacentre as well, so if there are 10 running they should really each be using 12.8MB or less.)
  • z

    zegevlier

    03/18/2022, 7:43 PM
    That's the first time I've heard that - where did you get that from? Edit: just to clarify, I mean the bit about multiple worker instances sharing memory.
  • k

    kian

    03/18/2022, 8:18 PM
  • i

    Isaac McFadyen | YYZ01

    03/18/2022, 9:11 PM
    That's for RAM; files can be streamed as long as you don't call certain methods on them.
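A short sketch of the distinction being drawn, assuming an R2 binding named MY_BUCKET and a PUT request with a body: the request-size limit and the RAM limit are different things as long as the body is streamed rather than buffered.

```ts
interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);

    // Streaming: chunks flow from the client straight into R2 without being held in RAM,
    // so the upload doesn't have to fit in the worker's memory. (The incoming
    // Content-Length supplies the object size R2 needs.)
    // Calling `await request.arrayBuffer()` / `.text()` / `.json()` instead would buffer
    // the whole body in memory; those are the methods to avoid.
    await env.MY_BUCKET.put(key, request.body);

    return new Response(`Stored ${key}`);
  },
};
```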
  • i

    Isaac McFadyen | YYZ01

    03/18/2022, 9:12 PM
    https://discord.com/channels/595317990191398933/779390076219686943/952301598971940934