# r2
  • DBlessDev.eth

    04/12/2023, 10:01 PM
Is there a way to update R2 buckets from a Google Cloud Function? I have a Python cron job that won't work in JS, and I don't think Workers can run Python code?
  • DBlessDev.eth

    04/12/2023, 10:01 PM
Could the Worker call the Google Cloud Function and then update the R2 bucket?
  • Karew

    04/12/2023, 10:38 PM
Google Cloud Functions have access to boto, right? You should be able to connect to R2 using boto; R2 has an S3-compatible API.
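    For reference, a minimal sketch of that approach, assuming boto3; the account ID, credentials, and bucket/key names are placeholders:

    ```python
    # Minimal sketch: talking to R2 through its S3-compatible API with boto3.
    # <ACCOUNT_ID> and the credentials below are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
        aws_access_key_id="<R2_ACCESS_KEY_ID>",
        aws_secret_access_key="<R2_SECRET_ACCESS_KEY>",
        region_name="auto",  # R2 accepts "auto" as the region
    )

    # e.g. the cron job writes its output into the bucket
    s3.put_object(Bucket="my-bucket", Key="cron/output.json", Body=b'{"ok": true}')
    ```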
  • png-rafaellucio

    04/12/2023, 11:31 PM
    Hi team, can I create a domain record to R2 using terraform?
  • png-rafaellucio

    04/13/2023, 12:15 AM
Looking at the documentation, I haven't found any information about this: https://developers.cloudflare.com/r2/ or https://developers.cloudflare.com/r2/examples/terraform/. I tried to create this using cloudflare_record but it doesn't work 😢
  • Ambyjkl

    04/13/2023, 2:32 AM
what are the GDPR implications of the Cloudflare R2 "world" region?
  • Ambyjkl

    04/13/2023, 2:33 AM
GDPR doesn't consider the US to be "private" enough, for instance
  • Rush

    04/13/2023, 3:08 AM
    It seems to be inefficient but maybe it's not a problem.
  • Rush

    04/13/2023, 3:19 AM
Actually - when I increase the number of connections I often get more errors and I need solid retry logic ... "We encountered an internal error. Please try again". But perhaps the same thing would happen with HTTP/2? Maybe it's related to some internal R2 stuff.
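    For reference, a sketch of retry logic around those transient "internal error" responses, assuming an S3 client like boto3; the backoff values, error-code check, and placeholder credentials are illustrative:

    ```python
    # Sketch: retry R2 uploads on transient InternalError responses,
    # with exponential backoff on top of botocore's built-in retries.
    import time

    import boto3
    from botocore.config import Config
    from botocore.exceptions import ClientError

    s3 = boto3.client(
        "s3",
        endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
        aws_access_key_id="<R2_ACCESS_KEY_ID>",
        aws_secret_access_key="<R2_SECRET_ACCESS_KEY>",
        region_name="auto",
        config=Config(retries={"max_attempts": 10, "mode": "adaptive"}),
    )

    def put_with_retry(bucket: str, key: str, body: bytes, attempts: int = 5):
        """Retry only on the server-side InternalError code; re-raise anything else."""
        for attempt in range(attempts):
            try:
                return s3.put_object(Bucket=bucket, Key=key, Body=body)
            except ClientError as err:
                code = err.response.get("Error", {}).get("Code", "")
                if code != "InternalError" or attempt == attempts - 1:
                    raise
                time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...
    ```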
  • KRGaming

    04/13/2023, 4:02 AM
Question: I'm using rclone to upload to an R2 bucket. I've found that smaller files upload no problem, but a 23GB file, for instance, won't upload. Rclone will just sit there at 0B/23GiB. Am I missing something?
  • chientrm

    04/13/2023, 4:08 AM
can you retry with the aws cli?
  • KRGaming

    04/13/2023, 4:09 AM
    Through a setting in Rclone or Amazon?
  • chientrm

    04/13/2023, 4:10 AM
    download and then reupload?
  • KRGaming

    04/13/2023, 4:18 AM
Tried it; downloading files seems fine. It's just uploading that appears to be a problem.
  • KRGaming

    04/13/2023, 4:42 AM
    Found my issue.
  • vvo

    04/13/2023, 7:15 AM
Thanks 🙂
  • vvo

    04/13/2023, 7:46 AM
Multiple questions for R2 folks that I couldn't find answers to in the docs:
    - are there rate limits on reads/s (public bucket, and API calls) and writes/s?
    - what's the hard limit on the number of buckets on an account? It's said to be 1,000 and we're wondering if this can be raised to, let's say, 1,000,000?
    - can a worker be bound to multiple buckets? What's the hard limit on the number of buckets connected to a single worker?
    Thanks 🙏
  • kian

    04/13/2023, 7:48 AM
1) r2.dev URLs have rate limits, so you should use a custom domain.
    2) No idea; fill out https://forms.gle/ukpeZVLWLnKeixDu7 with your use-case and they'll tell you.
    3) Yes. It'll probably start timing out when you hit ~120 buckets, as it'll get progressively slower to publish.
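    For reference, a sketch of how multiple bucket bindings are declared in a Worker's wrangler.toml, one [[r2_buckets]] entry per bucket; the binding and bucket names here are placeholders:

    ```toml
    # Sketch: binding two R2 buckets to a single Worker.
    [[r2_buckets]]
    binding = "BUCKET_A"
    bucket_name = "customer-a"

    [[r2_buckets]]
    binding = "BUCKET_B"
    bucket_name = "customer-b"
    ```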
  • vvo

    04/13/2023, 7:48 AM
Thanks, any idea what the limits are when using a custom domain?
  • kian

    04/13/2023, 7:49 AM
    Nope, there are none.
  • Sid | R2

    04/13/2023, 11:28 AM
Currently not, unfortunately. There's more to pointing a custom domain at a bucket than a CNAME, so what you're doing will not work.
  • Sid | R2

    04/13/2023, 11:29 AM
For this kind of thing, you'd use an EU jurisdiction, I suppose, which will make sure that your bucket and all processing happens in Europe. Jurisdictions should be out soon, but they do not exist yet.
  • Sid | R2

    04/13/2023, 11:30 AM
What was it? 😄 Was it a bug on our end?
  • Sid | R2

    04/13/2023, 11:33 AM
    Are you planning on having a "one-bucket-per-customer" kind of setup?
  • KRGaming

    04/13/2023, 11:35 AM
    Nope, turns out it needed more time than I was giving it to actually start the transfer. Completely my fault.
  • Sid | R2

    04/13/2023, 11:36 AM
Oh I see. I believe rclone calculates a file hash before uploading, so that explains the delay.
  • KRGaming

    04/13/2023, 11:40 AM
    Quite possibly, I was thinking it could have been some sort of pre-allocation but now that you mention it, that seems more likely. I appreciate your follow-up though! There is a good chance I'll be back here in the future as I dive deeper into R2 and its many uses.
  • vvo

    04/13/2023, 11:47 AM
Yup, something like that. For now we planned to have a single bucket and create subfolders. But we were curious about any hard limits if we were to use one bucket per customer, and about your advice.
  • Sid | R2

    04/13/2023, 11:55 AM
    A single bucket with sub-directories is probably the right choice TBH. It's going to make managing things a lot simpler (although I might be wrong, I don't know what your setup looks like :D). I was mostly concerned because I've seen people attempt to allocate a bucket per customer, and then distribute auth tokens for each bucket around. There are no real hard limits on the number of buckets an account can have, but generally if you find yourself needing more, it's a good idea to talk to someone first!
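    For reference, a sketch of the single-bucket, per-customer-prefix layout being discussed, assuming boto3; the bucket, customer, and file names are illustrative:

    ```python
    # Sketch: one bucket, per-customer "sub-directories" via key prefixes.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://<ACCOUNT_ID>.r2.cloudflarestorage.com",
        aws_access_key_id="<R2_ACCESS_KEY_ID>",
        aws_secret_access_key="<R2_SECRET_ACCESS_KEY>",
        region_name="auto",
    )

    def customer_key(customer_id: str, filename: str) -> str:
        """Namespace every object under its customer's prefix."""
        return f"customers/{customer_id}/{filename}"

    s3.put_object(Bucket="app-data", Key=customer_key("acme", "invoice.pdf"), Body=b"...")

    # List a single customer's objects by prefix
    resp = s3.list_objects_v2(Bucket="app-data", Prefix="customers/acme/")
    for obj in resp.get("Contents", []):
        print(obj["Key"])
    ```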
  • O Rato

    04/13/2023, 2:06 PM
Hi guys, I have a question about the usage of R2 and cache. I'm developing a solution that will use R2 for storage, and we will deliver the content through an API so we are able to control access and rights to files using revalidation cache headers and tokens. We will be using the DASH protocol, which generates lots of small files, so having a cache would be good for us (for speed and costs). From what I understood, serving cached files from R2 is not a problem, but I think that serving an API which delivers the media is an issue because of the ToS. We didn't find that the CF Stream service meets our needs. Since most of the files are expected to be media files, would it be against the ToS to use the cache service? Sorry if someone already asked the same question before. Our setup would be R2 => API => CACHE (Cloudflare or someone else, and I'd be happy to pay Cloudflare for the cache).