# r2
  • Craiggles

    05/05/2022, 3:46 AM
    but doesn't r2 limit the number of parallel requests to less than 5? is that going to stay or just another milestone to overcome?
  • andrew

    05/05/2022, 4:08 AM
    huh... first i've heard of a 5 parallel request limit, interesting if true. that's per object?
  • Craiggles

    05/05/2022, 4:26 AM
    5 parallel uploads at a time*. With a suggested max of 3, otherwise multipart uploads fail.
  • Vitali

    05/05/2022, 4:46 AM
    It's 3 concurrent uploads to the same (multipart) upload id. There's no problem uploading unrelated keys concurrently (currently we seem to cap out at ~200-300)
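    A minimal sketch of staying within that cap (assuming the AWS SDK v3 against an R2 S3 endpoint; the bucket, key, and endpoint here are hypothetical): keep at most 3 UploadPart requests in flight per upload id, while unrelated keys can still be uploaded concurrently.

    import {
      S3Client,
      CreateMultipartUploadCommand,
      UploadPartCommand,
      CompleteMultipartUploadCommand,
    } from "@aws-sdk/client-s3";

    // Credentials come from the environment; the endpoint is a placeholder.
    const s3 = new S3Client({
      region: "auto",
      endpoint: "https://YOURCUSTOMERID.r2.cloudflarestorage.com",
    });

    // Upload all parts of one multipart upload with at most `limit`
    // concurrent UploadPart requests against the same upload id.
    async function uploadParts(
      bucket: string,
      key: string,
      parts: Uint8Array[],
      limit = 3 // the per-upload-id cap mentioned above
    ) {
      const { UploadId } = await s3.send(
        new CreateMultipartUploadCommand({ Bucket: bucket, Key: key })
      );
      const etags: { ETag?: string; PartNumber: number }[] = [];
      let next = 0;
      // Worker pool: each worker claims the next part index synchronously,
      // so no two workers ever upload the same part.
      await Promise.all(
        Array.from({ length: Math.min(limit, parts.length) }, async () => {
          while (next < parts.length) {
            const partNumber = ++next; // part numbers are 1-based
            const res = await s3.send(
              new UploadPartCommand({
                Bucket: bucket,
                Key: key,
                UploadId,
                PartNumber: partNumber,
                Body: parts[partNumber - 1],
              })
            );
            etags.push({ ETag: res.ETag, PartNumber: partNumber });
          }
        })
      );
      etags.sort((a, b) => a.PartNumber - b.PartNumber);
      return s3.send(
        new CompleteMultipartUploadCommand({
          Bucket: bucket,
          Key: key,
          UploadId,
          MultipartUpload: { Parts: etags },
        })
      );
    }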
  • andrew

    05/05/2022, 6:49 AM
    oh... i'm talking about downloads (GET requests)
  • albert

    05/05/2022, 7:51 AM
    There's no limit on parallel GET requests as far as I'm aware, and the limit on parallel part uploads is only temporary (it will be raised before open beta, iirc). I was able to achieve ~5K GET req/s (although with an empty body) during testing.
  • Dani Foldi

    05/05/2022, 7:59 AM
    *will be raised for everyone except @albert 😅 <3
  • itsmatteomanf

    05/05/2022, 9:01 AM
    And Synology now works! You can re-update the Gist, @john.spurlock ahah
  • itsmatteomanf

    05/05/2022, 9:02 AM
    Is the "There was an error deleting this bucket" error known?
  • itsmatteomanf

    05/05/2022, 9:03 AM
    It should at least show the actual error message that the bucket is not empty...
  • ncw

    05/05/2022, 10:05 AM
    > S3: Fix handling escaped entities being sent (e.g. rclone CompleteMultipartUpload).
    I can confirm this is working with rclone now - thank you @Vitali. Will now try rclone's integration test suite.
  • Sid | R2

    05/05/2022, 10:23 AM
    Heyo, this came up recently, we're going to throw a more meaningful error when you try to delete a bucket that is not empty
  • ncw

    05/05/2022, 10:46 AM
    Initial results of rclone's integration tests:
    - R2 seems to change CR at the start or end of a file name into LF! It otherwise copes perfectly with control characters in file names.
    - When trying to copy files between buckets I get:
      NotImplemented: Copying from a different account/bucket/object not implemented
    - When trying to copy a file to itself to update the metadata:
      InvalidArgument: Invalid Argument: copy source bucket name
      This is easy to replicate with
      rclone touch -vv --dump bodies --low-level-retries 1 --retries 1 r2:rclone/file.txt
    - Range requests seem incompatible with S3:
      Invalid Argument: range must be in format bytes=start-end
      This is easy to replicate with
      rclone cat -vv --tail 5 --low-level-retries 1 --retries 1 r2:rclone/file.txt --dump bodies
      which shows rclone is sending
      Range: bytes=21-
      which is RFC compliant and AWS compatible.
    - Streaming uploads are failing with
      InternalError: We encountered an internal error. Please try again
      I haven't managed to reproduce this outside the test suite yet.
    The integration test suite didn't finish, so there is probably more to come. Would you like me to investigate more of those? Or post the transcript somewhere?
    If you want to run rclone's test suite yourself:
    - Check out the rclone source, branch fix-5422-s3-putobject from github.com/rclone/rclone
    - cd rclone/rclone/backend/s3
    - Create a remote for testing - I called mine r2 - see below
    - go test -list-retries 1 -v -remote r2: -timeout 30m
      (add -verbose -dump-bodies if you want to see the HTTP transactions)
    The rclone config needs to look something like
    [r2]
    type = s3
    provider = Other
    access_key_id = YOURACCESSKEY
    secret_access_key = YOURSECRETACCESSKEY
    endpoint = https://YOURCUSTOMERID.r2.cloudflarestorage.com
    region = auto
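    A quick way to sanity-check a remote configured like the above (assuming it is named r2 as in the config, and that a bucket named rclone already exists - both hypothetical here):
    rclone lsd r2:
    rclone copy /tmp/file.txt r2:rclone/
    The first lists your buckets through the S3 endpoint; the second pushes a single test object.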
  • Vitali

    05/05/2022, 10:56 AM
    Via which endpoint? S3: no, not known. In the UI? Known. In the bindings? Known - hopefully fixed in today's runtime release
  • Vitali

    05/05/2022, 11:03 AM
    > when trying to copy files between buckets get: NotImplemented: Copying from a different account/bucket/object not implemented
    This is surprising. The only thing we don't support is cross-account copies (no ACLs). Is this NotImplemented error description from R2 or rclone? I can't find anything in our codebase that has that description, so I think it's just tripping up on cross-account copies (in which case it should just be skipped for R2).
    > when trying to copy a file to itself to update the metadata: InvalidArgument: Invalid Argument: copy source bucket name. This is easy to replicate with rclone touch -vv --dump bodies --low-level-retries 1 --retries 1 r2:rclone/file.txt
    Can you share the actual x-amz-copy-source directive you put on the wire? I believe we recently changed to require a leading slash, but that may have been premature if rclone expects to be able to not send it. Alternatively, maybe you're eliding the bucket altogether? I don't see anything in the spec that lets you do that...
    > Range requests seem incompatible with S3: Invalid Argument: range must be in format bytes=start-end ... which shows rclone is sending Range: bytes=21- which is RFC compliant and AWS compatible.
    Yeah, unfortunately we don't currently parse the implicit begin/end range args.
    > streaming uploads are failing with InternalError: We encountered an internal error. Please try again. I haven't managed to reproduce this outside the test suite yet.
    What are "streaming uploads"? Just a regular PutObject? Multipart? GCS has a concept of actual streaming uploads, but that can't be what you mean here because it's not implemented by AWS.
    Thanks for the integration test suite. We've been meaning to run some kind of integration test suite regularly to make sure we're passing. I'll make sure to note that rclone is another candidate.
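    For reference, this is roughly how a same-bucket copy is issued from the AWS SDK v3 (names are hypothetical; the SDK forwards CopySource into the x-amz-copy-source header, so whether it carries a leading slash is up to the caller):

    import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({
      region: "auto",
      endpoint: "https://YOURCUSTOMERID.r2.cloudflarestorage.com",
    });

    // Copy an object onto itself to rewrite its metadata. This goes on the
    // wire as `x-amz-copy-source: rclone/file.txt` - bucket included, no
    // leading slash - which is the form under discussion here.
    await s3.send(
      new CopyObjectCommand({
        Bucket: "rclone",
        Key: "file.txt",
        CopySource: "rclone/file.txt", // "<source-bucket>/<source-key>"
        MetadataDirective: "REPLACE",
        Metadata: { mtime: "1651749394.777997661" },
      })
    );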
  • Vitali

    05/05/2022, 11:11 AM
    > I don't see anything in the spec that lets you do that...
    Not an excuse for any failures on our end, just a frustrating statement of fact that the S3 spec is more of a guideline.
  • itsmatteomanf

    05/05/2022, 11:13 AM
    Sorry, didn't specify 😦 UI 🙂
  • itsmatteomanf

    05/05/2022, 11:13 AM
    Any specific rate limits on deletion requests?
  • Vitali

    05/05/2022, 11:14 AM
    Deletions are similar to writes
  • itsmatteomanf

    05/05/2022, 11:15 AM
    Might be a limitation with Cyberduck, I guess then. It took ~1s per deletion.
  • Vitali

    05/05/2022, 11:16 AM
    That's a little surprising.
  • Vitali

    05/05/2022, 11:17 AM
    I would expect it to be ~300-400ms typically. But 1s could be the case if the backing DO for your bucket is actually very far away for some reason....
  • itsmatteomanf

    05/05/2022, 11:18 AM
    Any way to check, just out of curiosity? Account ID?
  • Vitali

    05/05/2022, 11:18 AM
    Not at this time
  • itsmatteomanf

    05/05/2022, 11:19 AM
    Even bribing one of you...? 😛
  • Vitali

    05/05/2022, 11:19 AM
    Even bribing. The debug endpoint isn't written yet & I don't know if we'll provide that kind of support for non-ENT users
  • itsmatteomanf

    05/05/2022, 11:20 AM
    I see, no problem... it's an ENT account, though 🙂
  • Vitali

    05/05/2022, 11:20 AM
    Then reach out to your customer support person or @Greg-McKeon 🙂
  • ncw

    05/05/2022, 11:21 AM
    > This is surprising. The only thing we don't support is cross-account copies (no ACLs). Is this NotImplemented error description from R2 or rclone? I can't find anything in our codebase that has that description, so I think it's just tripping up on cross-account copies (in which case it should just be skipped for R2).
    Here is the HTTP transaction - you can see the error message in the response
    2022/05/05 12:13:03 DEBUG : PUT /rclone-test-galumiw1yewasey5xaxumey1/hello%3F%20sausage/%C3%AA%C3%A9/Hello%2C%20%E4%B8%96%E7%95%8C/%20%22%20%27%20%40%20%3C%20%3E%20%26%20%3F%20%2B%20%E2%89%A0/z.txt-copy HTTP/1.1
    Host: 14aad7c9ed489151b51557e321b246cf.r2.cloudflarestorage.com
    User-Agent: rclone/v1.59.0-DEV
    Content-Length: 0
    Authorization: XXXX
    X-Amz-Acl: private
    X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    X-Amz-Copy-Source: rclone-test-galumiw1yewasey5xaxumey1/hello%3F%20sausage/%C3%AA%C3%A9/Hello,%20%E4%B8%96%E7%95%8C/%20%22%20%27%20@%20%3C%20%3E%20&%20%3F%20%2B%20%E2%89%A0/z.txt
    X-Amz-Date: 20220505T111303Z
    X-Amz-Metadata-Directive: COPY
    Accept-Encoding: gzip
    Response
    2022/05/05 12:13:04 DEBUG : HTTP RESPONSE (req 0xc000538900)
    2022/05/05 12:13:04 DEBUG : HTTP/2.0 501 Not Implemented
    Content-Length: 123
    Cf-Ray: 70690b6f0ec77714-LHR
    Content-Type: text/plain;charset=UTF-8
    Date: Thu, 05 May 2022 11:13:03 GMT
    Expect-Ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
    Server: cloudflare
    Vary: Accept-Encoding
    
    <Error><Code>NotImplemented</Code><Message>Copying from a different account/bucket/object not implemented</Message></Error>
  • ncw

    05/05/2022, 11:22 AM
    > Can you share the actual x-amz-copy-source directive you put on the wire? I believe we recently changed to require a leading slash, but that may have been premature if rclone expects to be able to not send it. Alternatively, maybe you're eliding the bucket altogether? I don't see anything in the spec that lets you do that...
    Here is the request
    2022/05/05 12:16:35 DEBUG : PUT /rclone/file.txt HTTP/1.1
    Host: 14aad7c9ed489151b51557e321b246cf.r2.cloudflarestorage.com
    User-Agent: rclone/v1.59.0-beta.6116.781bff280.fix-5422-s3-putobject
    Content-Length: 0
    Authorization: XXXX
    Content-Type: text/plain; charset=utf-8
    X-Amz-Acl: private
    X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    X-Amz-Copy-Source: rclone/file.txt
    X-Amz-Date: 20220505T111635Z
    X-Amz-Meta-Mtime: 1651749394.777997661
    X-Amz-Metadata-Directive: REPLACE
    Accept-Encoding: gzip
    and here is the response
    2022/05/05 12:16:35 DEBUG : HTTP RESPONSE (req 0xc000672300)
    2022/05/05 12:16:35 DEBUG : HTTP/2.0 400 Bad Request
    Content-Length: 103
    Cf-Ray: 70691098ae2e8898-LHR
    Content-Type: text/plain;charset=UTF-8
    Date: Thu, 05 May 2022 11:16:35 GMT
    Expect-Ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
    Server: cloudflare
    Vary: Accept-Encoding
    
    <Error><Code>InvalidArgument</Code><Message>Invalid Argument: copy source bucket name</Message></Error>
    > Yeah, unfortunately we don't currently parse the implicit begin/end range args.
    I can work around this in rclone if necessary - it isn't a big deal.
    > What are "streaming uploads"? Just a regular PutObject? Multipart? GCS has a concept of actual streaming uploads, but that can't be what you mean here because it's not implemented by AWS.
    Streaming uploads are multipart uploads where you don't know the size of the file in advance. All the ones I tried manually worked fine though.
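    The workaround alluded to can be sketched like this (assuming the AWS SDK v3; bucket, key, and endpoint are hypothetical): resolve the object's size first, then send an explicit end offset so the request stays within the bytes=start-end form R2 currently parses, instead of the open-ended bytes=21-.

    import {
      S3Client,
      HeadObjectCommand,
      GetObjectCommand,
    } from "@aws-sdk/client-s3";

    const s3 = new S3Client({
      region: "auto",
      endpoint: "https://YOURCUSTOMERID.r2.cloudflarestorage.com",
    });

    // Read `key` from byte `start` to EOF without an open-ended range header.
    async function getFrom(bucket: string, key: string, start: number) {
      const head = await s3.send(
        new HeadObjectCommand({ Bucket: bucket, Key: key })
      );
      const size = head.ContentLength ?? 0;
      if (start >= size) throw new Error(`start ${start} is past EOF (${size})`);
      return s3.send(
        new GetObjectCommand({
          Bucket: bucket,
          Key: key,
          Range: `bytes=${start}-${size - 1}`, // explicit end offset
        })
      );
    }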