# r2
  • p

    PaCmEn

    04/12/2023, 8:12 AM
Is there a way to make unique encryption for each bucket? Or is it still not implemented?
  • h

    HardAtWork

    04/12/2023, 8:17 AM
    Buckets are transparently encrypted, if that is what you mean. And, if you want something along the lines of a bucket-scoped token, you could probably run every request through a Worker, and then handle auth yourself?
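A minimal sketch of that suggestion: a Worker sits in front of R2 and maps each API token to the one bucket it may touch. Everything here (token values, bucket names, the `handle` shape) is hypothetical; a real Worker would take `(request, env)` and call `env.<binding>.get(key)` on an R2 binding instead of returning a bare status code.

```typescript
// Sketch: put a Worker in front of R2 and handle per-bucket auth yourself.
// Token values and bucket names below are made up for illustration.
const TOKEN_TO_BUCKET: Record<string, string> = {
  "token-for-alpha": "alpha-bucket",
  "token-for-beta": "beta-bucket",
};

// Pure helper: which bucket, if any, a bearer token is scoped to.
function bucketForToken(authHeader: string | null): string | null {
  if (authHeader === null || !authHeader.startsWith("Bearer ")) return null;
  return TOKEN_TO_BUCKET[authHeader.slice("Bearer ".length)] ?? null;
}

// Simplified handler in the shape of a Worker fetch handler; a real Worker
// would proxy the request to the R2 binding for the allowed bucket.
function handle(authHeader: string | null, bucketWanted: string): number {
  const allowed = bucketForToken(authHeader);
  if (allowed === null) return 401;          // no token, or unknown token
  if (allowed !== bucketWanted) return 403;  // token is scoped to another bucket
  return 200;                                // would stream the R2 object here
}
```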
  • p

    PaCmEn

    04/12/2023, 8:19 AM
    let me check the Worker thing
  • p

    PaCmEn

    04/12/2023, 8:21 AM
I want to use unique encryption for each bucket, on the assumption that all API tokens can access all buckets but each token will only be able to decrypt its associated bucket.
  • p

    PaCmEn

    04/12/2023, 8:23 AM
And I also don't want to be able to "understand" my clients'/customers' content.
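One way to get both properties without waiting on an R2 feature is client-side encryption with a per-bucket key: derive a distinct key per bucket from one master secret, encrypt before upload, decrypt after download. This is a sketch under that assumption, not anything R2 does for you; names are illustrative.

```typescript
// Sketch: client-side per-bucket encryption so neither other tokens nor the
// storage provider can read a bucket's content. Uses only node:crypto.
import { createCipheriv, createDecipheriv, hkdfSync, randomBytes } from "node:crypto";

// Derive a distinct 256-bit key per bucket from one master secret via HKDF.
function bucketKey(masterSecret: Buffer, bucketName: string): Buffer {
  return Buffer.from(hkdfSync("sha256", masterSecret, Buffer.alloc(0), `bucket:${bucketName}`, 32));
}

// Encrypt an object body before uploading; prepend the IV and GCM auth tag.
function encryptObject(key: Buffer, plaintext: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), body]);
}

// Decrypt after downloading; throws unless the caller holds that bucket's key.
function decryptObject(key: Buffer, blob: Buffer): Buffer {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(blob.subarray(28)), decipher.final()]);
}
```

Because the key derivation is per bucket, handing a customer only their bucket's key means a leaked token (which can list every bucket) still can't decrypt anyone else's objects.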
  • p

    PaCmEn

    04/12/2023, 8:24 AM
Workers seem like a bit of overhead for my POC. Thanks for the suggestion!!!
  • k

    kian

    04/12/2023, 8:25 AM
    As it stands, any given token can access every bucket on your account
  • p

    PaCmEn

    04/12/2023, 8:26 AM
    😦
  • p

    PaCmEn

    04/12/2023, 8:26 AM
    guys!!! thank you for your support!!! you really helped me!
  • k

    kian

    04/12/2023, 8:27 AM
    Bucket-scoped tokens are supposed to be happening by the end of the quarter
  • k

    kian

    04/12/2023, 8:27 AM
    So, it could be 2.5 months till they happen
  • k

    kian

    04/12/2023, 8:27 AM
    or longer if that isn't hit
  • p

    PaCmEn

    04/12/2023, 8:27 AM
and most importantly, I've learned new things
  • p

    PaCmEn

    04/12/2023, 8:28 AM
This is good news!
  • y

    yashaggarwal212

    04/12/2023, 8:41 AM
Hi Team, is there any API available through which we can attach a custom domain to an S3 bucket directly, without any manual intervention?
  • h

    HardAtWork

    04/12/2023, 8:44 AM
    There is the API used by the Dash, but it may change/disappear entirely without notice.
  • o

    oldmanmeta

    04/12/2023, 8:52 AM
@kian - found the issue was with my R2 API key: I had added my v4 IP address to the allowlist, as the page indicates to do.
  • o

    oldmanmeta

    04/12/2023, 8:52 AM
That's either broken or misleading, for sure.
  • b

    Bas Z

    04/12/2023, 10:39 AM
When using the Terraform `aws_s3_bucket` resource to create an R2 bucket, everything is just fine. However, deleting the created resources fails with:
```
Error: emptying S3 Bucket (<bucket_name>): listing S3 Bucket (<bucket_name>) object versions: NotImplemented: ListObjectVersions not implemented status code: 501, request id: , host id:
```
This only happens when there are objects in the bucket. Did anyone else run into this problem? Is there a workaround?
  • k

    kian

    04/12/2023, 10:46 AM
At a glance, it doesn't look like there's a way to tell it that versioning isn't implemented.
  • s

    Sid | R2

    04/12/2023, 11:06 AM
    How do you mean? You should be able to leave that empty if you do not need IP filtering FWIW
  • s

    Sid | R2

    04/12/2023, 11:54 AM
    The good news is that we might have an R2 resource on the cloudflare provider soon so you should be able to switch
  • b

    Bas Z

    04/12/2023, 12:10 PM
    Oh cool! Is there a way to track the progress?
  • l

    levifig

    04/12/2023, 12:12 PM
    Where can I monitor my R2 bill? 😅
  • v

    vvo

    04/12/2023, 12:34 PM
👋 Are public R2 buckets following the Cloudflare rules for compression? As seen here: https://developers.cloudflare.com/support/speed/optimization-file-size/what-will-cloudflare-compress/
  • k

    Karew

    04/12/2023, 12:59 PM
    Yes
  • k

    Karew

    04/12/2023, 1:00 PM
Although you can also upload your own gzipped files and they will be handled correctly now (a recent fix, I think?)
  • o

    oldmanmeta

    04/12/2023, 2:45 PM
The page read to me as though I needed to add an IP address for it to be included in the allowlist, which should work either way. Adding my local IP breaks it.
  • o

    oldmanmeta

    04/12/2023, 5:01 PM
Thanks for the heads-up with the upload - I got that to work nicely, but as you say, using a presigned URL is the end of the rainbow for this. I can't find any documentation that confirms it - my thought being that the S3Client config endpoint would be the target, while removing the credentials. Have you had any luck making that work?
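For reference, a presigned URL for R2 is plain SigV4 query-string signing against the S3-compatible endpoint (region `auto`, service `s3`), which the AWS SDK's presigner normally does for you. This is a hand-rolled sketch of that process; the host, path, and credentials are placeholders, not real values.

```typescript
// Sketch: build a SigV4 presigned GET URL by hand, using only node:crypto.
// Roughly what @aws-sdk/s3-request-presigner produces for an R2 endpoint.
import { createHash, createHmac } from "node:crypto";

const sha256hex = (s: string) => createHash("sha256").update(s).digest("hex");
const hmac = (key: Buffer | string, s: string) => createHmac("sha256", key).update(s).digest();

function presignGet(opts: {
  accessKeyId: string; secretAccessKey: string;
  host: string;           // e.g. "<account_id>.r2.cloudflarestorage.com" (placeholder)
  path: string;           // e.g. "/my-bucket/my-object"
  expiresSeconds: number;
  now?: Date;
}): string {
  const now = opts.now ?? new Date();
  const amzDate = now.toISOString().replace(/[-:]/g, "").replace(/\.\d{3}/, ""); // YYYYMMDDTHHMMSSZ
  const date = amzDate.slice(0, 8);
  const scope = `${date}/auto/s3/aws4_request`;          // R2 uses region "auto"
  const query = [
    ["X-Amz-Algorithm", "AWS4-HMAC-SHA256"],
    ["X-Amz-Credential", `${opts.accessKeyId}/${scope}`],
    ["X-Amz-Date", amzDate],
    ["X-Amz-Expires", String(opts.expiresSeconds)],
    ["X-Amz-SignedHeaders", "host"],
  ]
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .sort()
    .join("&");
  // Canonical request: method, path, query, headers (host only), payload hash.
  const canonicalRequest = ["GET", opts.path, query, `host:${opts.host}\n`, "host", "UNSIGNED-PAYLOAD"].join("\n");
  const stringToSign = ["AWS4-HMAC-SHA256", amzDate, scope, sha256hex(canonicalRequest)].join("\n");
  // Key derivation chain: kSecret -> kDate -> kRegion -> kService -> kSigning.
  const kSigning = hmac(hmac(hmac(hmac(`AWS4${opts.secretAccessKey}`, date), "auto"), "s3"), "aws4_request");
  const signature = createHmac("sha256", kSigning).update(stringToSign).digest("hex");
  return `https://${opts.host}${opts.path}?${query}&X-Amz-Signature=${signature}`;
}
```

The resulting URL carries its own credentials in the query string, so a client can GET it with no `Authorization` header at all - which matches the idea above of pointing at the S3 endpoint while removing the credentials from the client config.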