# r2
  • Vitali

    04/19/2022, 4:19 PM
    Your multipart upload can remain unpublished however long you want. You'll be paying for its storage but otherwise there's no time at which we go and cancel your multipart uploads.
  • Isaac McFadyen | YYZ01

    04/19/2022, 4:20 PM
    Oh, awesome! Thanks
  • Isaac McFadyen | YYZ01

    04/19/2022, 4:21 PM
    So would that be a realistic use-case? Can you think of anything I'm not thinking of in there?
  • Isaac McFadyen | YYZ01

    04/19/2022, 4:22 PM
    I'm trying to keep my DB as in-sync with my uploads as I can. Previously I just uploaded the object, but found sometimes the DB would fail for one reason or another and I'd have out-of-sync uploads unless I manually deleted them.
  • Vitali

    04/19/2022, 4:32 PM
    One problem you'll want to consider is whether there can be concurrent uploads to the same file. For example:
    1. DB query succeeds for upload 1 to file X
    2. Publish upload 1 for file X
    3. DB query succeeds for upload 2 to file X
    4. Publish upload 2 for file X
    Conceivably, without knowing more about your solution, you might instead get:
    1. DB query succeeds for upload 1 to file X
    2. DB query succeeds for upload 2 to file X
    3. Publish upload 2 for file X
    4. Publish upload 1 for file X
    If the uploads are single-threaded, that's not really possible. If they're concurrent, however, it seems possible, because all R2 promises is that once a request succeeds, future requests see that state. Two concurrent state modifications are racy, and there's no way R2 can solve that for you. It sounds like you actually need 2PC (two-phase commit), where you:
    * DB query commits a "publish in progress" state for file X if its current state is "published" or file X doesn't exist
    * Publish the upload for file X
    * DB query commits a "published" state for file X
    Then you have a guarantee that things are also strongly consistent in your application. There's a lot of error handling needed to actually make this work in practice: you probably want to abort the R2 upload however makes sense in your app to avoid leaks, and you need to handle retries of the "publish" step correctly if the "published" commit to the DB fails (i.e. "no such upload" = already published).
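The two-phase publish flow Vitali outlines can be sketched in a few lines. This is a hypothetical illustration only: a plain dict stands in for the real database, and `complete_multipart_upload` is a stub for the R2 call, not a real API.

```python
# Sketch of the 2PC publish flow: phase 1 commits "publishing" in the DB,
# the upload is completed, then phase 2 commits "published".
# All names here are illustrative stand-ins, not a real API.

db = {}  # file key -> state: "publishing" or "published"

class ConcurrentPublish(Exception):
    """Raised when another publish for the same key is already in flight."""

def begin_publish(key):
    # Phase 1: commit "publish in progress" only if the current state is
    # "published" or the file doesn't exist yet.
    if db.get(key) == "publishing":
        raise ConcurrentPublish(f"another publish for {key} is in flight")
    db[key] = "publishing"

def complete_multipart_upload(key, upload_id):
    # Stand-in for R2's complete call. A retry of this step may get
    # "no such upload", which should be treated as "already published".
    return "published"

def finish_publish(key, upload_id):
    try:
        begin_publish(key)
    except ConcurrentPublish:
        return False
    complete_multipart_upload(key, upload_id)
    # Phase 2: commit the final state. If this DB write fails, the retry
    # path must tolerate the upload having already been completed.
    db[key] = "published"
    return True

print(finish_publish("file-x", "upload-1"))  # True
```

The real error-handling surface (aborting leaked uploads, retrying either commit) is exactly the part this sketch elides.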
  • Isaac McFadyen | YYZ01

    04/19/2022, 4:33 PM
    Oh wow, lots of detail! Yeah I definitely didn't think of that and will have to consider it... thanks for your input!
  • Vitali

    04/19/2022, 4:34 PM
    Np. Good luck!
  • Vitali

    04/19/2022, 4:44 PM
    (2PC comes up a lot in R2, as you can imagine, since we have a similar problem between our metadata layer and storage nodes)
  • Isaac McFadyen | YYZ01

    04/19/2022, 4:51 PM
    Oh yeah, interesting.
  • Wallacy

    04/19/2022, 6:32 PM
    I do multipart uploads that I only confirm several seconds after the last part, and it works all the time. (using S3 over B2)
  • Seb

    04/19/2022, 8:51 PM
    any news on when r2 will possibly launch? 😄
  • kavinplays

    04/19/2022, 8:58 PM
    Open beta in Q2, and GA after that (I think, not very sure)
  • kavinplays

    04/19/2022, 8:59 PM
    GA is second half of this year, so before 2022 ends
  • Isaac McFadyen | YYZ01

    04/19/2022, 9:08 PM
    The 2nd quarter of the year is open beta, the 2nd half of the year is general availability.
  • lpellegr

    04/19/2022, 10:27 PM
    Let's guess: May 5 is financial results for the quarter, May 12 is a Cloudflare event (https://www.cloudflare.com/fr-fr/connect2022/), so my bet is a beta launch for R2 around that time. Really hoping I'm right 😄
  • HardAtWork

    04/21/2022, 12:57 AM
    So... Maybe I can attend this?
  • kavinplays

    04/21/2022, 9:24 AM
    are you french?
  • kavinplays

    04/21/2022, 9:24 AM
    (assuming fr is for france/french)
  • zegevlier

    04/21/2022, 10:28 AM
    https://www.cloudflare.com/connect2022/ is the same link, so I don't think it's only for french people.
  • testzw1

    04/21/2022, 7:45 PM
    Is it possible to access (read/write) R2 objects directly over the Internet via a REST API like S3 (i.e. not from a Worker)? Thanks!
  • yj

    04/21/2022, 9:33 PM
    :cough: the channel topic
  • yj

    04/21/2022, 9:33 PM
    🙂
  • yj

    04/21/2022, 9:33 PM
    I'm sure they don't really mind 😅
  • Isaac McFadyen | YYZ01

    04/21/2022, 9:35 PM
    Right now it's fairly restricted... they announce giveaways in #909458221419356210, but only a few (like 20 last time) at a time, since it's not yet in open beta.
  • Isaac McFadyen | YYZ01

    04/21/2022, 9:38 PM
    You should be fairly safe transitioning from S3; there's a feature that will lazily pull from AWS to R2 on first access and then serve from R2 on subsequent accesses.
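The lazy-pull migration Isaac describes is essentially a read-through copy. A rough sketch of the idea, with plain dicts standing in for the S3 source and the R2 destination (the real feature works inside Cloudflare's storage layer; this only illustrates the access pattern):

```python
# Read-through migration sketch: first access pulls from the S3 source
# and copies into R2; later accesses are served from R2 directly.
# The dicts are stand-ins, not real buckets.

s3_source = {"cat.png": b"\x89PNG..."}  # existing bucket (stand-in)
r2_bucket = {}                          # migration target (stand-in)

def get_object(key):
    if key in r2_bucket:          # subsequent accesses: served from R2
        return r2_bucket[key]
    body = s3_source[key]         # first access: lazily pulled from S3
    r2_bucket[key] = body         # ...and copied into R2 for next time
    return body

get_object("cat.png")   # pulls from S3 and populates R2
get_object("cat.png")   # now served from R2
```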
  • Isaac McFadyen | YYZ01

    04/21/2022, 9:38 PM
    https://blog.cloudflare.com/introducing-r2-object-storage/
  • kian

    04/21/2022, 9:40 PM
    https://developers.cloudflare.com/r2/platform/s3-compatibility/s3-compatibility/ also documents which S3 APIs aren’t currently supported
  • kian

    04/21/2022, 9:40 PM
    so steer clear of those and the migration should be seamless
  • Vitali

    04/21/2022, 11:40 PM
    Yes. We have fairly fleshed-out S3 compatibility (see https://developers.cloudflare.com/r2/platform/s3-compatibility/s3-compatibility/). Like all Cloudflare APIs we also have an APIv4 endpoint, which just isn't documented at the moment (& has other restrictions/limitations that we'll be lifting by GA)
  • lmtr0

    04/22/2022, 12:00 AM
    I'm not very familiar with the S3 API, so this may be a stupid question. From my understanding S3 objects can be JSON documents; can we get a specific field inside one, to save bandwidth and response time? Like, for the object:
    ```json
    {
      "owner": "user::id",
      "data": "base64(...)",
      "meta": {
        "something": "other",
        "hello": "world"
      }
    }
    ```
    is it possible to do something like
    GET /file::id?path=meta
    ```json
    {
       "something": "other",
       "hello": "world"
    }
    ```
    or
    GET /file::id?path=meta.hello
    ```json
    "world"
    ```
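The question goes unanswered in the thread, but one way to get this behaviour is to do the selection yourself in front of the bucket (e.g. in a Worker that fetches the object). A minimal sketch of the dot-path lookup being asked about, assuming the whole object has already been fetched and parsed:

```python
# Dot-path selection over a parsed JSON document, mirroring the
# hypothetical GET /file::id?path=meta.hello from the question.
import json

doc = json.loads("""
{
  "owner": "user::id",
  "data": "base64(...)",
  "meta": {"something": "other", "hello": "world"}
}
""")

def select(obj, path):
    # Walk dot-separated keys: "meta.hello" -> obj["meta"]["hello"]
    for key in path.split("."):
        obj = obj[key]
    return obj

print(select(doc, "meta"))        # {'something': 'other', 'hello': 'world'}
print(select(doc, "meta.hello"))  # world
```

Note this still transfers the whole object from storage before selecting, so it saves response size to the end client, not bandwidth between the bucket and the selecting layer.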