# r2
  • HardAtWork (03/28/2023, 7:14 PM)
    What’s the error/issue you are seeing?
  • GeorgeTailor (03/28/2023, 7:21 PM)
    I guess there's a bug somewhere in Cloudflare, it refuses to accept files as-is from `FormData` when I call `BUCKET.put(key, file)`. From what I found it is quite a common issue: the worker awaits the response from R2 until timeout. The TypeScript types suggest that you can `put` a Blob perfectly fine, but in reality you cannot (https://community.cloudflare.com/t/r2-put-object-throws-network-connection-lost-error/430109). This error was reported multiple times in various places, even on SO. Maybe just remove `Blob` from the accepted types and mention in the docs that R2 expects a stream rather than a file?
  • Dani Foldi (03/28/2023, 7:25 PM)
    if you're using a worker, do you have a compatibility date, or the flag for formdata files set?
  • GeorgeTailor (03/28/2023, 7:25 PM)
    Also, how does one replicate the behaviour of the preview or prod environments when it comes to accessing stuff in the bucket from the browser? For example, I upload an image from a worker to R2, and my R2 instance has a custom domain attached to it, so I can just do `<img src="https://r2.example.com/image-uuid"/>`. How do I do that locally?
  • Karthik (03/28/2023, 7:28 PM)
    Hi all, this is Karthik.
    Context: We are using R2 storage for storing files and downloading them from an Android app. We migrated from S3 to R2; before this we had S3 as storage with CloudFront serving the files from it. Note: we don't have a custom domain set up on our R2 bucket.
    Issue: After migrating to R2, we observed that our Android app is sometimes not able to download the files. Unfortunately, we do not have logs from our Android team confirming that we are getting 429 Too Many Requests. One of the possible reasons is that we are hitting the limit on the number of requests. The Cloudflare docs on R2 mention a limitation if we do not set up a custom domain, and that caching is not available, but there is no mention of a concrete metric, such as the number of Class A and Class B operations allowed. It would be great if someone could shed some light on what this limitation is and the respective numbers, if available. It would really help us decide on a custom domain and upgrades if necessary. We appreciate your response. Thanks in advance 🙇‍♂️
  • GeorgeTailor (03/28/2023, 7:28 PM)
    There is a bug in wrangler somewhere: if you set `"moduleResolution": "node"`, you cannot change `"types": ["@cloudflare/workers-types/experimental"]`; it always points to the oldest compat date (the one in the root). Without `moduleResolution` I have problems with importing anything that is not ts or js. But I also checked in the experimental folder and it also has this:
    ```ts
    put(
      key: string,
      value:
        | ReadableStream
        | ArrayBuffer
        | ArrayBufferView
        | string
        | null
        | Blob,
      options?: R2PutOptions
    ): Promise<R2Object>;
    ```
  • kian (03/28/2023, 7:29 PM)
    Do `file.stream()`
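A minimal sketch of that workaround in a Worker, assuming an R2 binding named `BUCKET`, a form field named `file`, and a compatibility date (or the `formdata_parser_supports_files` flag) recent enough for form entries to arrive as `File` objects; the binding and field names are assumptions:

```ts
export default {
  async fetch(request: Request, env: { BUCKET: R2Bucket }): Promise<Response> {
    // Parse the multipart body; "file" is an assumed field name.
    const form = await request.formData();
    const file = form.get("file");
    if (!(file instanceof File)) {
      return new Response("expected a file upload", { status: 400 });
    }
    // Hand R2 the underlying stream rather than the Blob/File itself,
    // which is the workaround discussed above.
    await env.BUCKET.put(file.name, file.stream(), {
      httpMetadata: { contentType: file.type },
    });
    return new Response("uploaded", { status: 201 });
  },
};
```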
  • GeorgeTailor (03/28/2023, 7:29 PM)
    yeah I know, this is what was recommended in the link I shared.
  • GeorgeTailor (03/28/2023, 8:27 PM)
    OK, so I think I've got this. I just move the `state` folder from the `.wrangler` folder in the root of my project to some place inside my local serve folder. Some additional logic is also required in the frontend JS to determine which origin to use, `localhost` or my `r2.example.com`, and when I run `wrangler pages dev public` the `--persist-to` should be set to `public/state`. However, due to broken CI on Cloudflare I need to deploy manually. Is there an option somewhere I can pass to wrangler so that it ignores a specific folder when publishing with `wrangler pages publish public`?
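A rough sketch of the origin-switching logic described above, assuming the production bucket is served from `r2.example.com` and the local copy of the persisted state ends up under `public/state`; the hostname check, dev-server port, and paths are all assumptions:

```ts
// Pick where <img> sources and other object URLs should point,
// depending on whether the page is running locally or in production.
const R2_ORIGIN =
  location.hostname === "localhost"
    ? "http://localhost:8788/state" // assumed wrangler pages dev port and local state path
    : "https://r2.example.com";

export function objectUrl(key: string): string {
  return `${R2_ORIGIN}/${encodeURIComponent(key)}`;
}

// Usage: document.querySelector("img")!.src = objectUrl("image-uuid");
```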
  • GeorgeTailor (03/28/2023, 8:41 PM)
    ```toml
    [site]
    bucket = "./public"
    exclude = ["local_state"]
    ```
    This, from https://developers.cloudflare.com/workers/wrangler/configuration/#workers-sites, doesn't work.
  • levifig (03/28/2023, 10:50 PM)
    It's not a very specific error, but this webpage, which loads a different page and works on CloudFront+S3, isn't really working on R2: it's basically getting a weird (seemingly) CORS error, even though we've matched the CORS rules (even setting AllowedOrigins to "*", just to make sure).
  • shirt (03/28/2023, 11:37 PM)
    My R2 bucket is redirecting to https://www.cloudflare-terms-of-service-abuse.com/stream.mp4 when on my domain
  • shirt (03/28/2023, 11:38 PM)
    I used to cache mp4s, which I think was the root cause of this, but I stopped doing that many months ago.
  • Walshy | Pages (03/28/2023, 11:40 PM)
    Can you please make a ticket? Make sure to include some URLs which result in this, and mention that these are coming from R2.
  • Jeff12345 (03/28/2023, 11:43 PM)
    https://developers.cloudflare.com/r2/api/workers/workers-api-usage/#4-bind-your-bucket-to-a-worker Is there a benefit to doing it this way rather than using something like aws4fetch?
  • kian (03/28/2023, 11:44 PM)
    Entirely up to you
  • kian (03/28/2023, 11:44 PM)
    Whether you want to use the S3 API or the Workers bindings.
  • kian (03/28/2023, 11:45 PM)
    Overall there's basically no difference in perf/options, although new features tend to end up in one and eventually make their way over to the other
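For context, a rough sketch of the two approaches being compared, assuming a binding named `BUCKET`, secrets named `R2_ACCESS_KEY_ID`/`R2_SECRET_ACCESS_KEY`, an account ID variable, and a bucket called `my-bucket` (all assumed names):

```ts
import { AwsClient } from "aws4fetch";

interface Env {
  BUCKET: R2Bucket;             // Workers binding
  R2_ACCOUNT_ID: string;        // assumed variable/secret names for the S3 API route
  R2_ACCESS_KEY_ID: string;
  R2_SECRET_ACCESS_KEY: string;
}

// Option 1: Workers binding, no credentials in the Worker, direct object access.
async function getViaBinding(env: Env, key: string): Promise<Response> {
  const object = await env.BUCKET.get(key);
  if (!object) return new Response("not found", { status: 404 });
  return new Response(object.body);
}

// Option 2: S3 API via aws4fetch, signed requests against the R2 S3 endpoint.
async function getViaS3Api(env: Env, key: string): Promise<Response> {
  const client = new AwsClient({
    accessKeyId: env.R2_ACCESS_KEY_ID,
    secretAccessKey: env.R2_SECRET_ACCESS_KEY,
    service: "s3",
    region: "auto",
  });
  const url = `https://${env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com/my-bucket/${key}`;
  return client.fetch(url);
}
```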
  • Jeff12345 (03/28/2023, 11:45 PM)
    Ok cool, my existing code uses aws4fetch with multiple buckets so it's easier for me to stick with that
  • Jeff12345 (03/28/2023, 11:45 PM)
    Thanks
  • kevfly (03/29/2023, 12:10 AM)
    Hi, I've been trying to quickly transfer some large files (100+ GB) from my current S3 bucket to R2. I'm nearly done but have been getting some errors about multipart requests failing. I think it might have something to do with the concurrency, but I'm not sure; I'm trying to use as much of the network bandwidth as I have on an EC2 instance. I created a 24-hour key earlier today and just saw that the key has now been deleted. I'm assuming I'm hitting a threshold, getting throttled, and potentially got my key deleted by Cloudflare systems. Should I wait a while and upload the rest a bit later, is the issue related to something else, or should I just try again now?
  • kevfly (03/29/2023, 12:52 AM)
    To follow up here with a solution that worked for me: I created a new API key and kept lowering the concurrency until it finally stopped failing.
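A generic sketch of the kind of concurrency cap that helped here, independent of whichever transfer tool was used; the `uploadPart` callback and part list are assumptions:

```ts
// Run part uploads with at most `limit` requests in flight at once.
// Lowering `limit` is the "kept lowering the concurrency" knob mentioned above.
async function uploadWithConcurrency<T>(
  parts: T[],
  limit: number,
  uploadPart: (part: T) => Promise<void>,
): Promise<void> {
  let next = 0;
  const workers = Array.from(
    { length: Math.min(limit, parts.length) },
    async () => {
      // Each worker repeatedly claims the next unclaimed part index.
      while (next < parts.length) {
        const index = next++;
        await uploadPart(parts[index]);
      }
    },
  );
  await Promise.all(workers);
}

// Example: await uploadWithConcurrency(partList, 4, (p) => putPartSomehow(p));
```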
  • shirt (03/29/2023, 1:09 AM)
    I actually did make a ticket but haven't gotten a response in 4 days, ticket #2745303
  • Walshy | Pages (03/29/2023, 1:09 AM)
    Thanks, I'll get it escalated
  • shirt (03/29/2023, 1:09 AM)
    appreciate it
  • Walshy | Pages (03/29/2023, 1:53 AM)
    Your zone is shirt.rip, right?
  • shirt (03/29/2023, 8:21 AM)
    Yes
  • koreyoshi_re (03/29/2023, 8:46 AM)
    There was an exact limit during the beta, but it can't be found now. According to the current documentation, the "r2.dev subdomain is not intended for production usage". Maybe using a custom domain is the best resolution.
  • oc (03/29/2023, 2:26 PM)
    hello.
  • oc (03/29/2023, 2:27 PM)
    The R2 DNS record cannot be deleted after removing the R2 bucket, does anybody know how to solve this? thx