# r2
  • u

    Unsmart | Tech debt

    04/05/2023, 8:09 PM
    🤷
  • a

    azat

    04/05/2023, 8:15 PM
    Seems that this is true only for S3 Glacier - https://docs.aws.amazon.com/amazonglacier/latest/dev/api-multipart-initiate-upload.html, and Standard S3 does not seem to have such a limitation - https://docs.aws.amazon.com/AmazonS3/latest/userguide/qfacts.html
  • u

    Unsmart | Tech debt

    04/05/2023, 8:20 PM
    Either way, I doubt it will change, and every library I know makes each part the same size. Not sure why you would even want differing part sizes.
  • a

    azat

    04/05/2023, 8:28 PM
    Ok, thanks for the clarification (it is not a problem to fix the caller).
  • f

    Frederik

    04/05/2023, 8:31 PM
    Can you DM me your accountId? I can look into this further.
  • a

    azat

    04/05/2023, 8:32 PM
    BTW, here is one use case: there is a limit on the number of parts in a multipart upload, and if you start uploading with a smaller part size you can eventually hit this limit for big uploads, whereas if you could scale the part size on the fly this would not be a problem.
  • u

    Unsmart | Tech debt

    04/05/2023, 8:35 PM
    You would just use a part size high enough at the start. For example, if you have a 5TB file you can use 10,000 parts of 500MB.
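
A quick sketch of the arithmetic suggested above: given the 10,000-part cap per multipart upload (from the S3 limits page linked earlier), the smallest uniform part size for a known object size can be picked up front. The 5 MiB minimum part size is an assumption taken from the standard S3 limits; the 5 TB / 500 MB figures match the message above.

```python
import math

MAX_PARTS = 10_000           # part-count cap per multipart upload (per the S3 limits page linked above)
MIN_PART_SIZE = 5 * 1024**2  # assumed 5 MiB minimum for every part except the last

def choose_part_size(object_size: int) -> int:
    """Pick a single uniform part size that keeps the upload under MAX_PARTS."""
    # Smallest part size that still fits the whole object into MAX_PARTS parts.
    size = math.ceil(object_size / MAX_PARTS)
    return max(size, MIN_PART_SIZE)

# Example from the message above: a 5 TB object fits in 10,000 parts of ~500 MB each.
five_tb = 5 * 1000**4
print(choose_part_size(five_tb))  # -> 500000000 bytes (~500 MB)
```
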
  • a

    azat

    04/05/2023, 8:37 PM
    @Unsmart | Tech debt Sometimes you may not know the input size
  • a

    azat

    04/05/2023, 8:57 PM
    And just FYI, uploading parts of different sizes is not a theoretical issue; I was hit by this while testing R2 for ClickHouse. I can fix it, but this will be one more quirk, so it would be nice to have such support from R2.
  • a

    azat

    04/05/2023, 8:58 PM
    Also, I've just finished a simple patch, and indeed the problem was the different sizes of the first parts; after fixing this, the issue went away.
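
For reference, a minimal sketch of a multipart upload in which every part except the last has the same size, which is the constraint being described here. The endpoint, credentials, bucket, key, and 100 MiB part size are all placeholders; this assumes R2's S3-compatible API via boto3, not azat's actual ClickHouse patch.

```python
import boto3

# Placeholder endpoint and credentials; R2 exposes an S3-compatible API at
# https://<accountid>.r2.cloudflarestorage.com
s3 = boto3.client(
    "s3",
    endpoint_url="https://<accountid>.r2.cloudflarestorage.com",
    region_name="auto",
    aws_access_key_id="<access_key_id>",
    aws_secret_access_key="<secret_access_key>",
)

PART_SIZE = 100 * 1024**2  # 100 MiB; every part except the last uses this exact size

def upload_file_multipart(bucket: str, key: str, path: str) -> None:
    upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
    upload_id = upload["UploadId"]
    parts = []
    try:
        with open(path, "rb") as f:
            part_number = 1
            while True:
                chunk = f.read(PART_SIZE)  # equal-sized chunks; only the final one may be smaller
                if not chunk:
                    break
                resp = s3.upload_part(
                    Bucket=bucket, Key=key, UploadId=upload_id,
                    PartNumber=part_number, Body=chunk,
                )
                parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
                part_number += 1
        s3.complete_multipart_upload(
            Bucket=bucket, Key=key, UploadId=upload_id,
            MultipartUpload={"Parts": parts},
        )
    except Exception:
        s3.abort_multipart_upload(Bucket=bucket, Key=key, UploadId=upload_id)
        raise
```
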
  • i

    itsezc

    04/05/2023, 9:05 PM
    Does R2 have any hard limitations on egress? Say I was to upload a 10 MB file that could potentially generate 100 TB in egress, would that violate any TOS?
  • h

    HardAtWork

    04/05/2023, 9:06 PM
    Nope, though extremely high bandwidth may prompt a chat from sales, asking you to upgrade to Enterprise
  • h

    HardAtWork

    04/06/2023, 8:48 AM
    message has been deleted
  • t

    ToetMats

    04/06/2023, 9:06 AM
    Heya folks! I ran into some vague wording regarding CF’s policy and R2, does anyone here perhaps know more about it (if it ever happened before, for example)? https://community.cloudflare.com/t/user-complaints-for-data-held-in-r2-storage/493887 (I’m sorry if crossposting is against the Discord rules, I’ll just delete it and we can pretend it never happened.)
  • t

    ToetMats

    04/06/2023, 11:40 AM
    (If Cloudflare handles it and only deletes the file, can I just link the Cloudflare DMCA takedown without risking my entire account, or should it be handled entirely on my end?)
  • u

    ucinteractivesl

    04/06/2023, 11:49 AM
    I uploaded a few videos in HLS (master.m3u8) format to R2 for testing
  • u

    ucinteractivesl

    04/06/2023, 11:50 AM
    I only did around 1200 requests and class B operations increased from 9k to 150k
  • u

    ucinteractivesl

    04/06/2023, 11:50 AM
    How can that be possible?
  • w

    Walshy | Pages

    04/06/2023, 11:52 AM
    Multipart?
  • t

    ToetMats

    04/06/2023, 11:55 AM
    Isn't m3u8 a live format, and thus constantly streaming/ingesting?
  • s

    Sid | R2

    04/06/2023, 12:00 PM
    Let me raise this internally and I'll get you an answer
  • u

    ucinteractivesl

    04/06/2023, 12:04 PM
    Yes it is, but every part is 90 MB. I uploaded a video of 1.4 GB, and I only had 1200 requests
  • u

    ucinteractivesl

    04/06/2023, 12:05 PM
    Also, I was doing tests on my own, watching videos of different lengths and sizes, and by my testing every watch counted as 10 operations
  • u

    ucinteractivesl

    04/06/2023, 12:05 PM
    Now that I've made it public it's getting crazy
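
A rough, back-of-the-envelope sketch of why a single HLS playback shows up as several class B operations: the player fetches the playlist(s) plus one GET per segment, and each GET against the bucket counts as a class B operation. The numbers below are taken from this thread (1.4 GB video, ~90 MB segments); the playlist-fetch count and the assumption that a partial watch fetches fewer segments (which would explain the ~10 operations per watch observed above) are assumptions, not measured values.

```python
# Rough estimate only; actual request counts depend on the player, the playlist
# layout, and how much of the video the viewer actually watches.
video_size_mb = 1.4 * 1000    # ~1.4 GB video from the messages above
segment_size_mb = 90          # "every part is 90 MB"

segments = -(-video_size_mb // segment_size_mb)  # ceil division -> ~16 segments
playlist_fetches = 2                             # master.m3u8 + media playlist (assumption)

# Each GET is one class B operation, so a complete playback would be roughly:
print(int(segments + playlist_fetches))          # ~18 GETs; partial watches cost fewer
```
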
  • t

    ToetMats

    04/06/2023, 12:06 PM
    thank you so much
  • a

    alex85

    04/06/2023, 12:15 PM
    Hi, I would like to use pre-signed URLs for a bucket with a custom domain while keeping the bucket private. Is there any way to do that? I've tried to create a proxied site which points to .r2.cloudflarestorage.com but requests fail with "1014 CNAME Cross User Banned Error". Also, are requests to the .r2.cloudflarestorage.com domain rate limited like r2.dev? Can that domain be used for production?
  • h

    HardAtWork

    04/06/2023, 12:16 PM
    Pre-signed URLs do not work on custom domains
  • a

    alex85

    04/06/2023, 12:19 PM
    So I guess I can safely use the ".r2.cloudflarestorage.com" domain for production and requests will not be blocked, right?
  • s

    Sid | R2

    04/06/2023, 12:20 PM
    The *.r2.cloudflarestorage.com domain hosts the S3-compat API. It is not rate limited, but it will need credentials for access
  • s

    Sid | R2

    04/06/2023, 12:21 PM
    If you're generating presigned URLs, they'll be on the *.r2.cloudflarestorage.com domain, and in that case, you can use it for production!