# r2
  • Karew

    05/24/2023, 4:48 AM
    This maybe?
  • I love cf

    05/24/2023, 5:13 AM
    R2Bucket.get returns null when running the worker on local wrangler. It returns the correct image when I turn off local mode. Is this the desired behavior? If so, how do I run R2 locally?
  • Skye

    05/24/2023, 8:46 AM
    The data stored for local mode is on your disk, unlike the data stored in production
  • Skye

    05/24/2023, 8:47 AM
    You can either keep local mode off, or put the data on your disk too for a local emulation
  • I love cf

    05/24/2023, 8:56 AM
    Thanks. Does this mean there's a way to run an R2 dev server locally? (I'll probably just turn off local mode when testing R2-related functions)
  • Skye

    05/24/2023, 8:57 AM
    Yes, when using local mode, your Worker can put and get from the bucket as normal; it's just that the local data isn't the same as in production
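For reference, local mode still reads the bucket binding from wrangler.toml; a minimal sketch of that config, where the binding and bucket names are illustrative:

```toml
[[r2_buckets]]
binding = "MY_BUCKET"                       # exposed as env.MY_BUCKET in the Worker
bucket_name = "my-bucket"                   # production bucket
preview_bucket_name = "my-bucket-preview"   # bucket used for dev/preview
```

With this in place, env.MY_BUCKET.put() and env.MY_BUCKET.get() work under local mode too, just against a separate on-disk store rather than the production bucket.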
  • Chaz

    05/24/2023, 9:26 AM
    hey, I uploaded to an R2 bucket but the public URL is still not working after allowing access, any ideas?
  • Chaz

    05/24/2023, 9:29 AM

    https://cdn.discordapp.com/attachments/940663374377783388/1110862155336650762/image.png

  • Chaz

    05/24/2023, 9:30 AM

    https://cdn.discordapp.com/attachments/940663374377783388/1110862339307221044/image.png

  • frydim1

    05/24/2023, 9:34 AM
    what is the best way to test different R2 locations for latency and performance?
  • frydim1

    05/24/2023, 9:34 AM
    use rclone
  • Erisa | Support Engineer

    05/24/2023, 11:23 AM
    Are you accessing a file's filename, or just the domain on its own? There's no index, so you'll have to add a path/filename
  • Chaz

    05/24/2023, 11:23 AM
    just the domain
  • Chaz

    05/24/2023, 11:23 AM
    ok it works nevermind
  • Chaz

    05/24/2023, 11:23 AM
    thanks
  • Skye

    05/24/2023, 11:26 AM
    For future reference - please don't tag people specifically for help - especially employees
  • Skye

    05/24/2023, 11:26 AM
    (it's worse to delete it afterwards, too)
  • miguelff

    05/24/2023, 11:52 AM
    At Prisma we migrated all our engine distribution infrastructure from AWS to CF, and now we use R2 to store and distribute our assets. Some of our Chinese user base created a continuously updated mirror of those assets to overcome network reliability problems. To do that, they used to hit the HTTP index of our former S3 endpoint (https://prisma-builds.s3-eu-west-1.amazonaws.com/?delimiter=/&prefix=), getting an XML listing with the objects in the bucket and/or their prefixes. To the best of my knowledge this public HTTP endpoint doesn't exist in R2, even for public buckets. Am I right? If so, I was thinking of providing a presigned URL for the bucket's ListObjects operation with a long expiration. Is that a sensible alternative approach?
  • Sid | R2

    05/24/2023, 12:00 PM
    How were they authenticating against S3 before? Judging by that URL, it looks like they were issuing a ListObjects? Presigned URLs would work, but you'll have to keep regenerating & redistributing them as they expire. An alternative here could be deploying a simple Worker that can expose the HTTP endpoint you want (using the R2 binding's .list() method)
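The Worker approach could be sketched like this: a hypothetical helper that renders an R2 listing in the S3-style ListBucketResult XML that the mirror tooling already parses. The names (listToXml, the BUCKET binding, the comment's usage) are illustrative, not Cloudflare APIs; only the .list() call in the comment is the real R2 binding method.

```typescript
// Hypothetical sketch: serve an S3-style ListObjects XML document from a
// Worker, so tooling that parsed the old S3 listing keeps working.

interface ListedObject {
  key: string;
  size: number;
}

// Render keys as a minimal ListBucketResult, the shape S3 returns for
// ListObjects. (A real implementation would also XML-escape keys and
// surface truncation/cursor info.)
function listToXml(bucketName: string, objects: ListedObject[]): string {
  const contents = objects
    .map((o) => `  <Contents><Key>${o.key}</Key><Size>${o.size}</Size></Contents>`)
    .join("\n");
  return [
    `<?xml version="1.0" encoding="UTF-8"?>`,
    `<ListBucketResult>`,
    `  <Name>${bucketName}</Name>`,
    contents,
    `</ListBucketResult>`,
  ].join("\n");
}

// Inside a Worker's fetch handler, this would be fed from the R2 binding:
//   const listing = await env.BUCKET.list({ prefix, delimiter: "/" });
//   return new Response(listToXml("prisma-builds", listing.objects), {
//     headers: { "content-type": "application/xml" },
//   });
```

Unlike presigned URLs, the Worker endpoint never expires, and it can be restricted (auth headers, allowed prefixes) however you like.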
  • Karew

    05/24/2023, 12:01 PM
    1. You could provide a public-read R2 key for this purpose. R2 keys can't currently be scoped to just one bucket, though, so the key would be able to read all of your buckets. Unsure if that is feasible.
    2. You could provide your own CF Worker or HTTP endpoint that returns all of the bucket contents yourself by proxying ListObjects for them.
    3. If you already keep track of the files in prisma-builds in a database or something, you could create your own listing method that uses the database?
  • miguelff

    05/24/2023, 12:02 PM
    > How were they authenticating against S3 before? Judging by that URL, it looks like they were issuing a ListObjects?

    Public access is enabled for the bucket, and this API is implicitly provided by S3.
  • Chaz

    05/24/2023, 12:15 PM
    how do you delete with rclone?
  • Chaz

    05/24/2023, 1:05 PM
    rclone rmdir cloudflare:\foldername isn't working
  • Chaz

    05/24/2023, 1:05 PM
    also rclone tree cloudflare:\ gives me 0 results
  • Chaz

    05/24/2023, 1:05 PM
    only rclone copy cloudflare:\ works
  • Karew

    05/24/2023, 1:10 PM
    You have to use rclone delete or rclone purge (read the docs carefully for each); there is no concept of folders on R2
  • HardAtWork

    05/24/2023, 1:10 PM
    To remove a folder, you need to use rclone purge. Be careful though, as this can delete your entire bucket if used incorrectly
  • Chaz

    05/24/2023, 1:11 PM
    so rclone purge cloudflare:\foldername
  • Chaz

    05/24/2023, 1:11 PM
    ?
  • HardAtWork

    05/24/2023, 1:11 PM
    rclone purge cloudflare:/bucketname/foldername