# r2
  • v

    Vitali

    04/17/2022, 2:23 AM
    If it’s already cached then there’s no R2 cost. You only pay if the request hits R2. I can’t recall if you can put Argo Tiered Cache in front of R2 yet though.
  • m

    Marcelino Franchini

    04/17/2022, 4:30 AM
    Previously I wrongly assumed that every cached response counted as a read operation, and I calculated the breakeven point between egress-priced and request-priced CDNs on that basis. If there's caching in between, that logic falls down. Make sure to explain this in the marketing stuff: a read operation is a cold hit, and cached responses are free.
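    For illustration, a rough sketch of that breakeven logic (the function and all price parameters are hypothetical placeholders, not Cloudflare's or anyone's actual rates): with a cache in front, only misses become R2 read operations, which is what shifts the comparison against an egress-billed CDN.

    ```ts
    // Hypothetical breakeven sketch: only cache misses count as R2 reads,
    // while an egress-billed CDN charges for every byte served.
    function monthlyCost(opts: {
      requests: number;            // total requests per month
      objectSizeGB: number;        // average response size in GB
      cacheHitRatio: number;       // fraction of requests served from cache (0..1)
      readPricePerMillion: number; // placeholder price per 1M R2 read operations
      egressPricePerGB: number;    // placeholder egress price of a bandwidth-billed CDN
    }) {
      const misses = opts.requests * (1 - opts.cacheHitRatio);
      const r2Cost = (misses / 1_000_000) * opts.readPricePerMillion; // egress itself is free on R2
      const egressCdnCost = opts.requests * opts.objectSizeGB * opts.egressPricePerGB;
      return { r2Cost, egressCdnCost };
    }
    ```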
  • b

    Burrito

    04/17/2022, 7:39 AM
    Does this apply to free tier caching as well?
  • s

    Stew

    04/17/2022, 9:33 AM
    I think they mean that if we can store the file in either the Workers cache or the CDN cache, then those hits don't count as reads. A cold hit and a hit on R2's internal cache (which would give a faster response time) would both count as a read, I imagine (happy if I'm wrong).
  • s

    stupefied | AS204829

    04/17/2022, 1:34 PM
    Would be amazing if you can
  • v

    Vitali

    04/17/2022, 4:04 PM
    We don’t yet know what the caching story will look like. Ideally we would figure out a way to get Argo Tiered Cache to “just work”, but there are technical challenges (as I understand it, even if you set up a virtual CNAME to R2, it’s not possible to make it work because of “technical details my brain refuses to remember”). If I recall correctly, what does work is setting up your own Worker and putting tiered caching in front of it. We’re definitely planning on bringing the “Worker in front” piece closer to feature parity with the S3 endpoint, so that you can choose to turn on unbuffered uploads and raise the upload limit on your route (to bypass the 500MB limit your Worker would normally be under). You’d also lose the ability to write WAF rules that act on the body, but that’s the trade-off, and it’s a feature that seems rarely used in practice. Another way to make things work, even without all that, is to rearchitect your client to do multipart uploads.
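    A minimal sketch of the “Worker in front of R2” pattern mentioned above, assuming a module Worker with an R2 bucket bound as BUCKET (the binding name, cache TTL, and key scheme are illustrative; error handling and range requests are omitted):

    ```ts
    export default {
      async fetch(request: Request, env: { BUCKET: R2Bucket }, ctx: ExecutionContext) {
        const cache = caches.default;
        const cached = await cache.match(request);
        if (cached) return cached; // cache hit: no R2 read operation billed

        const key = new URL(request.url).pathname.slice(1);
        const object = await env.BUCKET.get(key); // cold hit: one read operation
        if (!object) return new Response("Not found", { status: 404 });

        const headers = new Headers({ "Cache-Control": "public, max-age=3600" });
        object.writeHttpMetadata(headers);
        const response = new Response(object.body, { headers });
        ctx.waitUntil(cache.put(request, response.clone())); // populate cache without blocking the response
        return response;
      },
    };
    ```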
  • k

    kavinplays

    04/17/2022, 7:26 PM
    I believe they can be, but cache reduces strong consistency
  • k

    kavinplays

    04/17/2022, 7:26 PM
    (moved from #812577823599755274)
  • v

    Vitali

    04/17/2022, 8:29 PM
    Oh for sure. Caching usually gives up strong consistency. But sometimes that’s OK for the problem domain, and that’s what HTTP cache headers try to expose so that you can communicate it (plus non-standard tools CDN providers offer, like proactively purging the cache). There are also content-addressed use cases (e.g. how GWT worked) where caching actually doesn’t reduce strong consistency: there you basically just download the small Merkle tree of resources uncached, but all resources are content-addressed, so caching is perfectly safe and strongly consistent. GWT didn’t use Merkle trees at the time IIRC, and it’s likely overkill for that, but hey - Merkle trees are the 1T software algorithm you sprinkle in everywhere and say “blockchain”.
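    A small sketch of the content-addressed idea above: if the object key embeds a hash of the content, the URL never changes meaning, so aggressive caching cannot serve stale data (the key scheme and helper below are illustrative assumptions, not an R2 feature):

    ```ts
    // Derive a key from the content itself; such objects can safely be served
    // with "Cache-Control: public, max-age=31536000, immutable".
    async function contentAddressedKey(data: ArrayBuffer): Promise<string> {
      const digest = await crypto.subtle.digest("SHA-256", data);
      const hex = [...new Uint8Array(digest)]
        .map(b => b.toString(16).padStart(2, "0"))
        .join("");
      return `assets/${hex}`;
    }
    ```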
  • b

    Ben Hong

    04/17/2022, 8:41 PM
    Is there a way to pre cache an R2 object from a worker if you know a user will request it for sure in say, half a second, but it probably isn't in cache?
  • k

    kavinplays

    04/17/2022, 8:46 PM
    presumably, you need to do it yourself through a request
  • k

    kavinplays

    04/17/2022, 8:46 PM
    and wouldn't happen automatically
  • a

    albert

    04/17/2022, 8:46 PM
    Just request the object? That should cause it to be cached.
  • a

    albert

    04/17/2022, 8:47 PM
    (assuming you have configured caching for R2 - not yet sure how that works)
  • b

    Ben Hong

    04/17/2022, 8:51 PM
    Oh cool. What if the request hasn't finished before the user requests it?
  • v

    Vitali

    04/17/2022, 8:53 PM
    Cache doesn’t really work that way. There’s a request and if it’s missing from the cache then you make the call to the origin
  • v

    Vitali

    04/17/2022, 8:55 PM
    So prepopulating in such a world would mean that you just put the object into the cache at the same time as the response. So if it hasn’t finished, it’s not in the cache. Such a mental model is generally a bad fit for how CDN caches work, though.
  • v

    Vitali

    04/17/2022, 8:55 PM
    You have absolutely no idea where in the world the request will come from so how are you prepopulating in an intelligent way?
  • k

    kavinplays

    04/17/2022, 8:56 PM
    i think by making a request when one is headed to the download page?
  • k

    kavinplays

    04/17/2022, 8:56 PM
    doesn't seem ideal or efficient
  • b

    Ben Hong

    04/17/2022, 9:12 PM
    I was thinking that when someone loads a video player in an iframe, a worker responds to the HTML request, and simultaneously starts loading the first segment (a few mb) so that when the client requests it, it won't be a cache miss. Just wanna shave off a few milliseconds before the video plays.
  • b

    Ben Hong

    04/17/2022, 9:12 PM
    Is this possible?
  • k

    kavinplays

    04/17/2022, 9:16 PM
    it wouldn't be guaranteed
  • k

    kavinplays

    04/17/2022, 9:16 PM
    like a person could go from loading it to clicking the play button very fast
  • v

    Vitali

    04/17/2022, 9:22 PM
    That might work? Not sure. Whether it’s a good idea to also depends on what the point of the site is. Are you just playing around? Are you building a business? The latter is probably best served by focusing on the business unless you have enough scale to conclude this is the bottleneck. The former can be served by just trying ideas out.
  • b

    Ben Hong

    04/17/2022, 9:25 PM
    Just playing around haha. I guess it's time to start testing.
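    For what it’s worth, a rough sketch of the idea Ben describes, assuming a module Worker that serves the player HTML (the segment URL and markup are illustrative); as discussed above, nothing guarantees the warm-up fetch finishes, or even populates the right colo’s cache, before the client asks for the segment:

    ```ts
    export default {
      async fetch(request: Request, env: unknown, ctx: ExecutionContext) {
        // Kick off a warm-up request for the first segment without blocking
        // the HTML response; draining the body pulls it through the cache.
        const firstSegment = new URL("/videos/example/segment-0.ts", request.url).toString();
        ctx.waitUntil(fetch(firstSegment).then(r => r.arrayBuffer()));

        return new Response("<html><!-- video player markup --></html>", {
          headers: { "Content-Type": "text/html" },
        });
      },
    };
    ```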
  • a

    Abelia

    04/18/2022, 1:24 AM
    Will there be any extra overhead when using the S3 API in a worker instead of R2 bindings? (Context: we currently have Workers that retrieve objects from Wasabi, so if there is no overhead, we would like to reuse them.)
  • v

    Vitali

    04/18/2022, 1:57 AM
    There may be some for now, but if you're fine with S3 I certainly wouldn't expect a regression. Any overheads that exist today should come down over time as the overall Workers platform & R2 are optimized. The main reason for R2 bindings is a more developer-friendly API that's easier to use and gives us the opportunity to innovate on better security practices (e.g. binding exactly what you need vs your Worker having credentials to all buckets in your account). My genuine hope is that we figure out a production evolution where the S3 part of the code becomes an open source wrapper around the Workers bindings that you can just install as a library into your code. Or you just click a button that says "create my own custom S3 endpoint that has buckets A, B & C bound in" and that Worker is auto-generated for you and installed into your account. That way we make it much easier for our developers to have a better security posture that's easier to audit.
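    A rough sketch of the binding model described above (the ASSETS binding name is illustrative): the Worker can only touch the buckets explicitly bound to it, rather than holding account-wide S3 credentials.

    ```ts
    export default {
      async fetch(request: Request, env: { ASSETS: R2Bucket }) {
        const key = new URL(request.url).pathname.slice(1);
        const object = await env.ASSETS.get(key); // only the bound bucket is reachable
        if (!object) return new Response("Not found", { status: 404 });
        return new Response(object.body, {
          headers: { etag: object.httpEtag },
        });
      },
    };
    ```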
  • a

    andrew

    04/18/2022, 2:42 AM
    i wonder how tricky that will be though, given the request size limits of workers, and the much higher request size limits of S3
  • m

    Max Elia

    04/18/2022, 11:06 AM
    Hello, I wonder how you get selected for access to the private beta.