# r2
  • denchi (03/16/2023, 9:19 PM)
    This saves me so much work.
  • denchi (03/16/2023, 9:19 PM)
    Just set it up, guess I'll see in 1-2 days if that worked šŸ˜„
  • denchi (03/16/2023, 9:20 PM)
    Do you have plans to expose this on the dashboard as well?
  • Harshal (03/16/2023, 9:28 PM)
    Yep, we plan on following up with a UI; we just wanted to get the API out first and into customer hands šŸ˜„
  • denchi (03/16/2023, 9:29 PM)
    Really appreciate that. I was gonna start implementing my own lifecycle management later today.
  • Plotzes (03/16/2023, 9:44 PM)
    yeah, I would also love a dashboard UI, because I don't use the S3 API to interact with my buckets šŸ˜„
  • denchi (03/16/2023, 9:54 PM)
    If I have an object with a certain key and upload another object with the same key, it'll overwrite the old object. That would also reset the expiry time for that object, right?
  • Harshal (03/16/2023, 10:10 PM)
    Yep. And if all you're looking to do is reset the expiry based on the configuration, you can just copy the object to itself.
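A minimal sketch of the self-copy trick Harshal describes, using aws-sdk-go-v2 (the Go SDK that comes up later in this thread). The bucket and key names are placeholders, and the `MetadataDirective` setting is an assumption: AWS S3 rejects a no-op self-copy, so `REPLACE` is set as a precaution; whether R2 requires it isn't confirmed above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

func main() {
	ctx := context.Background()

	// Credentials and the R2 S3 endpoint are assumed to come from the
	// environment (e.g. AWS_ENDPOINT_URL pointing at the bucket's
	// r2.cloudflarestorage.com endpoint).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	bucket, key := "my-bucket", "my-object" // placeholder names

	// Copy the object onto itself; per the discussion above, this should
	// reset the lifecycle expiry the same way a fresh upload would.
	// Note: REPLACE also clears user metadata, so re-supply any metadata
	// you need to keep.
	_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
		Bucket:            aws.String(bucket),
		CopySource:        aws.String(bucket + "/" + key),
		Key:               aws.String(key),
		MetadataDirective: types.MetadataDirectiveReplace,
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expiry reset for", key)
}
```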
  • denchi (03/16/2023, 10:10 PM)
    Got it
  • jonjohnson (03/16/2023, 11:44 PM)
    šŸ‘‹ hello! I tried and failed to google for this, so maybe someone can confirm if this is expected... I want to receive an upload from users and compute the sha256 hash of the content before serving it anywhere. FWIW, I am using the S3 API from Go. Trying to set `ChecksumAlgorithm` (https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3#PutObjectInput.ChecksumAlgorithm) so that R2 will compute it for me fails with `STREAMING-UNSIGNED-PAYLOAD-TRAILER not implemented`. That's fine. I tried to work around this by computing the sha256 myself while writing the object to a temporary key, then calling `CopyObject` once I have confirmed the sha256. This worked, but was really slow (took ~22s, IIRC). I would expect `CopyObject` to be a pretty fast metadata-only operation that creates (effectively) a symlink or something. Am I doing something wrong here, or should I expect `CopyObject` to be slow?
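A sketch of the workaround jonjohnson describes, assuming a recent aws-sdk-go-v2: `io.TeeReader` feeds a sha256 hash while the bytes stream to a temporary key, and `CopyObject` promotes the object once the digest is confirmed. The bucket, keys, and the `incoming` body are placeholders; a real handler would stream the request body and take the length from its Content-Length header.

```go
package main

import (
	"context"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	incoming := strings.NewReader("user upload body") // stand-in for the user's upload
	h := sha256.New()

	// Stream to the temporary key while feeding the same bytes to the hash.
	// The tee'd stream is not seekable, so the length is set explicitly.
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:        aws.String("my-bucket"),
		Key:           aws.String("tmp/upload"),
		Body:          io.TeeReader(incoming, h),
		ContentLength: aws.Int64(incoming.Size()),
	})
	if err != nil {
		log.Fatal(err)
	}
	sum := hex.EncodeToString(h.Sum(nil))
	fmt.Println("sha256:", sum)

	// Once the digest checks out, promote to the content-addressed key.
	_, err = client.CopyObject(ctx, &s3.CopyObjectInput{
		Bucket:     aws.String("my-bucket"),
		CopySource: aws.String("my-bucket/tmp/upload"),
		Key:        aws.String("sha256/" + sum),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```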
  • Frederik (03/17/2023, 12:14 AM)
    Changelog for a release that went out today:
    - HTTP/2 is now enabled by default for new custom domains linked to R2 buckets. (This only applies to newly added domains; if you must have HTTP/2 on an existing domain, consider removing and re-adding that domain. Long term we will look into providing functionality to edit settings like this for existing domains.)
    - Bug fix: requests to public buckets now return the Content-Encoding header for gzip files when Accept-Encoding: gzip is used.
    Also, the ListParts API was made available in a previous release, which Sid already mentioned in passing šŸŽ‰ (Workers bindings to come later)
  • Sid | R2 (03/17/2023, 12:39 AM)
    A CopyObject isn’t necessarily just a metadata operation; it will copy actual bytes, so it can be ā€œslowā€ depending on your file’s size. 22s seems excessive though, how large is your file?
  • jonjohnson (03/17/2023, 1:21 AM)
    About 100MB
  • Sid | R2 (03/17/2023, 2:13 AM)
    Are you sure it’s the CopyObject that’s taking that long, and not:
    - your file being uploaded to the temporary key
    - your file being downloaded from the temporary key
    - your file being hashed?
  • Karew (03/17/2023, 2:39 AM)
    Sorry, by removing and re-adding, do you mean removing the R2 custom domain and re-adding it? Or redoing the entire Cloudflare zone?
  • Zeblote (03/17/2023, 2:41 AM)
    Why do we need to do that, instead of you doing something to enable it for all existing custom domains?
  • rrgeorge (03/17/2023, 2:46 AM)
    Hello, I've been searching through the documentation and cannot seem to find an answer. According to the docs, R2 should support presigned URLs, but when I generate a presigned URL and use it, I receive a 401 Unauthorized response.
  • Erisa | Support Engineer (03/17/2023, 2:48 AM)
    Just the custom domain, not the zone.
  • Karew (03/17/2023, 2:49 AM)
    And I guess: why doesn't R2 just follow my zone settings like all the other products, instead of this mysterious behavior? TLS 1.2 settings are also still not enforced on R2 domains (I don't have a compliance requirement for that, but someone surely does).
  • rrgeorge (03/17/2023, 3:06 AM)
    Okay. It seems that the presigned URL is only accessible from the IP address that requested it?
  • Rush (03/17/2023, 3:35 AM)
    Is HTTP/2 also available for the S3-compatible API?
  • Rush (03/17/2023, 3:35 AM)
    Currently the only way to make a lot of requests is to open many dozens of keep-alive HTTP/1 connections.
  • Rush (03/17/2023, 3:35 AM)
    HTTP/2 would likely make it much more efficient.
  • Rush (03/17/2023, 3:50 AM)
    Just did a quick check: HTTP/2 is still not available for the S3-compatible API.
  • NT261 (03/17/2023, 3:58 AM)
    I'm having a lot of `ERR_CONTENT_DECODING_FAILED` today, only on files with `content-encoding: gzip` (no changes on my side recently).
  • Sid | R2 (03/17/2023, 4:07 AM)
    That doesn’t sound right, how are you creating the presigned URL?
  • rrgeorge (03/17/2023, 4:08 AM)
    I finally figured it out. My S3 token was restricted to a certain IP, so the presigned URL had the same restriction.
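For reference, a minimal sketch of generating a presigned GET URL for R2 with aws-sdk-go-v2 (assuming a recent SDK with `BaseEndpoint`; the account ID, bucket, and key are placeholders). As rrgeorge found, the resulting URL inherits whatever restrictions, such as an IP allowlist, are attached to the token whose credentials sign it.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("auto"))
	if err != nil {
		log.Fatal(err)
	}
	// Point the client at R2's S3-compatible endpoint (placeholder account ID).
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("https://ACCOUNT_ID.r2.cloudflarestorage.com")
	})

	// Sign a GET for my-bucket/my-object, valid for 15 minutes.
	presigner := s3.NewPresignClient(client)
	req, err := presigner.PresignGetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String("my-bucket"),
		Key:    aws.String("my-object"),
	}, s3.WithPresignExpires(15*time.Minute))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(req.URL)
}
```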
  • Sid | R2 (03/17/2023, 4:11 AM)
    There was a change today wrt gzip. I’m assuming the uploaded files are actually gzipped?
  • NT261 (03/17/2023, 4:11 AM)
    Yes, it's gzipped before upload.
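A sketch of the upload pattern under discussion, with placeholder bucket and key names: gzip the payload client-side and store it with matching Content-Encoding and Content-Type, which is what the Accept-Encoding fix in Frederik's changelog above acts on.

```go
package main

import (
	"bytes"
	"compress/gzip"
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Gzip the body before upload.
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	if _, err := zw.Write([]byte(`{"hello":"world"}`)); err != nil {
		log.Fatal(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}

	// Store the compressed bytes with matching encoding/type headers so a
	// public bucket can serve them as gzip to clients that accept it.
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket:          aws.String("my-bucket"),
		Key:             aws.String("data.json"),
		Body:            bytes.NewReader(buf.Bytes()),
		ContentEncoding: aws.String("gzip"),
		ContentType:     aws.String("application/json"),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```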
  • jonjohnson (03/17/2023, 5:25 AM)
    I am hashing the file while uploading it to the temporary key, and I have instrumentation timing just the CopyObject call, so I am pretty sure that's what is slow, but I will confirm with another experiment tomorrow.
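A trivial sketch of that kind of instrumentation, timing only the CopyObject call so upload and hashing time can't be mistaken for copy time; the helper name and package are hypothetical, and the client is assumed to be set up as in the earlier sketches.

```go
package r2util

import (
	"context"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// timedCopy wraps only the CopyObject call in a timer.
func timedCopy(ctx context.Context, client *s3.Client, bucket, src, dst string) error {
	start := time.Now()
	_, err := client.CopyObject(ctx, &s3.CopyObjectInput{
		Bucket:     aws.String(bucket),
		CopySource: aws.String(bucket + "/" + src),
		Key:        aws.String(dst),
	})
	log.Printf("CopyObject %s -> %s took %s", src, dst, time.Since(start))
	return err
}
```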