# r2
  • kian (04/27/2023, 10:53 PM)
    R2 currently only supports migrating the entire bucket
  • wegemit (04/28/2023, 1:45 AM)
    Hi, is there a scoped role for giving a user access only to R2?
  • honzasterba (04/28/2023, 7:55 AM)
    Is it possible to get an R2 object's public URL via the API? I did not find it in the HEAD request response.
  • Karew (04/28/2023, 8:35 AM)
    You can connect multiple subdomains or r2.dev to your bucket; the API doesn't return full URLs. You just need to concatenate:
    https://<subdomain>/<object-key>
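    (A minimal TypeScript sketch of the concatenation Karew describes; the function name and the per-segment encoding are illustrative, not part of any R2 API.)

    ```typescript
    // Build a public R2 object URL from a connected (sub)domain and an object key.
    // Encoding each path segment keeps keys with spaces or special characters
    // valid, while the slashes between segments stay path separators.
    function publicUrl(subdomain: string, objectKey: string): string {
      const path = objectKey.split("/").map(encodeURIComponent).join("/");
      return `https://${subdomain}/${path}`;
    }

    // e.g. publicUrl("files.example.com", "videos/my video.mp4")
    //      → "https://files.example.com/videos/my%20video.mp4"
    ```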
  • PeterC (04/28/2023, 9:51 AM)
    Hey, I'm having an issue with R2 when downloading large files: the transfer gets interrupted around the 1 TB mark
  • PeterC (04/28/2023, 9:51 AM)
    Tried wget and curl; neither of them worked
  • HardAtWork (04/28/2023, 9:52 AM)
    For large files, it might be beneficial to use Range requests to download the file in chunks.
  • PeterC (04/28/2023, 9:53 AM)
    Chunking doesn't work for me because I'm piping the download straight into tar, like:
        wget -qO- ... | tar -xv
  • HardAtWork (04/28/2023, 9:55 AM)
    Not sure what else you can do. As with uploads, we recommend downloading in chunks to hedge against network issues. Even if your ISP and Cloudflare are performing perfectly, there is always a chance of a hiccup in some intermediary, and it is a lot easier to recover from that if you download in smaller bites. Maybe you could try downloading the chunks manually, and then run it through tar?
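    (The chunked approach HardAtWork suggests can be sketched in TypeScript on Node. This is an illustration, not R2-specific: the 64 MiB chunk size, the three-retry limit, and the function names are all assumptions.)

    ```typescript
    // Yield inclusive [start, end] byte ranges covering totalSize bytes.
    function* byteRanges(totalSize: number, chunkSize: number): Generator<[number, number]> {
      for (let start = 0; start < totalSize; start += chunkSize) {
        yield [start, Math.min(start + chunkSize, totalSize) - 1];
      }
    }

    // Stream an object to stdout one Range request at a time, retrying each
    // chunk independently, so a mid-transfer hiccup only costs one chunk.
    async function downloadInChunks(url: string, chunkSize = 64 * 1024 * 1024): Promise<void> {
      const head = await fetch(url, { method: "HEAD" });
      const total = Number(head.headers.get("content-length"));
      for (const [start, end] of byteRanges(total, chunkSize)) {
        for (let attempt = 1; ; attempt++) {
          try {
            const res = await fetch(url, { headers: { Range: `bytes=${start}-${end}` } });
            process.stdout.write(new Uint8Array(await res.arrayBuffer()));
            break;
          } catch (err) {
            if (attempt >= 3) throw err; // give up on this chunk after 3 tries
          }
        }
      }
    }
    ```

    Piped into `tar -xv`, this would keep PeterC's no-temp-file pipeline while making each network failure recoverable per chunk rather than restarting the whole multi-terabyte transfer.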
  • PeterC (04/28/2023, 9:55 AM)
    Problem is I don't have a large enough disk; it's a 3 TB file and my drive is like 3.5 TB lol
  • HardAtWork (04/28/2023, 9:55 AM)
    This should apply to any storage provider, not just Cloudflare
  • PeterC (04/28/2023, 9:56 AM)
    S3 always worked without issue, though
  • HardAtWork (04/28/2023, 9:56 AM)
    🤷
  • raiyansarker (04/28/2023, 10:57 AM)
    ?r2-roadmap
  • Helpflare (04/28/2023, 10:57 AM)
    > These features are now available!
    > - **Object Lifecycles**
    > - Public buckets, with custom domains
    > - Presigned URLs in the S3-compatible API
    >
    > Future roadmap
    > Read more:
    > - Jurisdictional Restrictions (e.g. 'EU')
    > - Live Migration without Downtime (S3->R2)
  • Marty (04/28/2023, 12:02 PM)
    Hi, I would like to try the R2 Migrator, but there is one thing: "The cloud storage bucket you are migrating consists primarily of objects less than 10 GB (1000³ bytes). Objects greater than 10 GB will be skipped and need to be copied separately." Does it mean that one object can't be greater than 10 GB, or all objects together?
  • HardAtWork (04/28/2023, 12:03 PM)
    No single object may be greater than 10 GB
  • Marty (04/28/2023, 12:23 PM)
    First, I would like to use R2 as backup storage for my current S3 storage (DO Spaces). Is it possible to set up a worker to sync R2 with my current S3 storage? Or is the only solution a small VPS with rclone?
  • magicthib (04/28/2023, 12:32 PM)
    Hi folks. Is there any plan to support 404 pages for R2 public buckets with custom domains soon? Thanks!
  • zendev (04/28/2023, 3:07 PM)
    Hi, I'm trying to configure public access to my R2 bucket with a custom domain. I recently transferred this domain over to Cloudflare, and it is now added as one of my sites on Cloudflare and everything seems in order. However, when I try to connect the domain to my R2 bucket, I keep getting this error: "DNS record for this domain already exists on zone. (Code: 10056)"
  • danny.m (04/28/2023, 4:31 PM)
    Hi all. I'm looking into enabling users to create buckets that only they can access. I was thinking of doing this via the binding to Workers. Is that a reasonable way to go about this? It looks like you might need to know the name of a bucket before binding it to a Worker, so I'm not sure if there's a way to dynamically bind buckets to Workers, or if you would have to create a new Worker per bucket. Or maybe I'll have to use presigned URLs or some other alternative mechanism for this?
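    (An editor's aside: a common alternative to a bucket per user is one bucket with per-user key prefixes, enforced by the Worker that holds the single binding. The helper below is a hypothetical sketch of that key-scoping step; the binding name `MY_BUCKET` and the `authenticate` helper are not from this thread.)

    ```typescript
    // Scope every object key under the requesting user's ID, so one R2 binding
    // can serve all users while the Worker decides who may touch which prefix.
    function objectKeyFor(userId: string, pathname: string): string {
      return `${userId}/${pathname.replace(/^\/+/, "")}`;
    }

    // Inside a Worker fetch handler (sketch; `authenticate` is hypothetical):
    //   const userId = await authenticate(request);
    //   const obj = await env.MY_BUCKET.get(objectKeyFor(userId, new URL(request.url).pathname));
    ```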
  • Karew (04/28/2023, 5:23 PM)
    You already have a record set up for whatever subdomain you're trying to use. For example, if you're trying to use files.example.com, you need to remove the CNAME or whatever you might have set for files already
  • Dave © (04/28/2023, 5:42 PM)
    Is there any chance of increasing the bucket limit per account? I'd like to use presigned URLs for users to edit a set of objects, and I'd rather have many buckets containing only a few objects than a few buckets with huge amounts of objects.
  • zendev (04/28/2023, 7:34 PM)
    Nvm, I figured it out! Thanks
  • knpwrs (04/29/2023, 3:04 AM)
    I think I've encountered a bug in R2. When I use the CopyObjectCommand to copy an object and attempt to set ContentDisposition on the new object as part of the CopyObjectCommand, the new object exists, but the ContentDisposition is not set. This is my copy code:
        await client.send(
          new CopyObjectCommand({
            Bucket: bucket,
            CopySource: `${bucket}/${tmp}`,
            Key: key,
            ContentDisposition: contentDisposition,
          }),
        );
    The new object at key exists, but when I use HeadObjectCommand I don't see a content disposition. This is the response:
        {
          "$metadata": {
            "httpStatusCode": 200,
            "attempts": 1,
            "totalRetryDelay": 0
          },
          "AcceptRanges": "bytes",
          "LastModified": "2023-04-29T02:42:09.000Z",
          "ContentLength": 4662718,
          "ETag": "\"a8b4bcf0648ec79e2e79cb116fda9e7c\"",
          "ContentType": "video/mp4",
          "Metadata": {}
        }
    The AWS S3 documentation suggests that ContentDisposition should be returned: https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/preview/client/s3/command/HeadObjectCommand/ I've confirmed through debugging that the value for ContentDisposition is a string such as: attachment; filename=This%20is%20an%20oink%20moo.mp4;
  • Karew (04/29/2023, 3:09 AM)
    You need to specify MetadataDirective: "REPLACE" and set all of the metadata to the new values it should have
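    (Applied to knpwrs's snippet above, Karew's fix would look roughly like this; the helper function and the re-set ContentType are illustrative. The key point is that REPLACE tells the S3-compatible API to take metadata from the request rather than from the source object, so everything must be supplied again.)

    ```typescript
    // Build the corrected CopyObjectCommand input. Without MetadataDirective:
    // "REPLACE", the default COPY semantics ignore metadata fields like
    // ContentDisposition on the request and copy them from the source object.
    function buildCopyInput(bucket: string, tmp: string, key: string, contentDisposition: string) {
      return {
        Bucket: bucket,
        CopySource: `${bucket}/${tmp}`,
        Key: key,
        MetadataDirective: "REPLACE",
        ContentDisposition: contentDisposition,
        ContentType: "video/mp4", // REPLACE drops source metadata, so re-set it all
      };
    }

    // Usage: await client.send(new CopyObjectCommand(buildCopyInput(bucket, tmp, key, cd)));
    ```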
  • Ambyjkl (04/29/2023, 9:21 AM)
    Hi, I was wondering why the public URL only supports HTTP/1.1
  • Ambyjkl (04/29/2023, 9:21 AM)
    HTTP/2 would be quite beneficial for my use case
  • Karew (04/29/2023, 9:22 AM)
    R2 public domains are HTTP/2 now, but if you happen to have one that was created a long time ago, you may have to remove and re-add it to get HTTP/2