# r2
  • z

    Zeblote

    02/26/2023, 5:55 PM
    but it doesn't know in advance that the file is too large to be cached
  • z

    Zeblote

    02/26/2023, 5:56 PM
    you can fix™️ it by placing files too large to be cached in a separate folder with a separate cache rule that turns it off
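A minimal sketch of that workaround on the upload side, assuming boto3 and a hypothetical nocache/ prefix that a separate Cache Rule is configured to bypass caching for (the size threshold is just an example):

```python
import os
import boto3

# Placeholder R2 endpoint and credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<account_id>.r2.cloudflarestorage.com",
    aws_access_key_id="<access_key>",
    aws_secret_access_key="<secret_key>",
)

CACHE_LIMIT = 512 * 1024 * 1024  # example threshold for "too large to cache"

def upload(bucket: str, key: str, path: str) -> str:
    """Put files above the cache limit under nocache/ so the Cache Rule skips them."""
    if os.path.getsize(path) > CACHE_LIMIT:
        key = f"nocache/{key}"  # hypothetical prefix matched by a bypass Cache Rule
    s3.upload_file(path, bucket, key)
    return key
```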
  • z

    Zeblote

    02/26/2023, 5:56 PM
    (or avoid such files)
  • a

    aeharding

    02/26/2023, 6:15 PM
Unfortunately I don't have control over this because this site is using the PeerTube software. Maybe it would be possible to develop a workaround in the CF cache specifically for R2 buckets, since there is tight integration and the CF cache is being encouraged for custom domains to remove rate-limiting 🙂
  • a

    aeharding

    02/26/2023, 6:18 PM
    But I'm happy since there is a workaround at least for now. Thanks again @Erisa | Support Engineer for the super fast response. I've really been blown away with the response times and help from the Cloudflare Discord! It's super.
  • e

    elithrar

    02/26/2023, 6:23 PM
    Does PeerTube not allow you to slice up larger videos into actual chunks, per typical HLS/DASH delivery? Does it just expect you to serve a big blob?
  • a

    aeharding

    02/26/2023, 6:26 PM
    It's fragmented HLS with byte range segmenting
  • e

    elithrar

    02/26/2023, 6:28 PM
    Right, but even at 2160p I would expect much smaller chunks. Byte range segmenting has challenges across many CDN providers due to how Range fetches work in practice.
  • a

    aeharding

    02/26/2023, 6:32 PM
hmmm, I don't think splitting it up into separate files is supported with PeerTube, but I could be wrong. I'll look into it more. Do you have any links on video segmenting best practices for CDNs? I can open an issue with the PeerTube software, they're pretty receptive to feedback 🙂
  • e

    elithrar

    02/26/2023, 6:40 PM
    They don’t seem to expose settings but being able to set a max segment size for HLS is very useful — almost all modern HLS delivery has segment sizes of 4-20MB, and avoids big files with byte range fetching: https://github.com/Chocobozzz/PeerTube/blob/develop/config/production.yaml.example (I don’t see any segment config exposed)
  • e

    elithrar

    02/26/2023, 6:42 PM
    Since they lean on ffmpeg - allowing you to split by time (6s - 12s segments) is likely to be a straightforward improvement on their side. 6s of 15Mbps 2160p = 11.25MB.
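A quick check of that segment-size arithmetic, as a sketch; the bitrate and segment durations are just the numbers from the message above:

```python
# Rough HLS segment size for a given bitrate and segment duration.
def segment_size_mb(bitrate_mbps: float, seconds: float) -> float:
    megabits = bitrate_mbps * seconds
    return megabits / 8  # 8 bits per byte

for secs in (6, 12):
    print(f"{secs}s of 15 Mbps 2160p ≈ {segment_size_mb(15, secs):.2f} MB")
# 6s  -> 11.25 MB
# 12s -> 22.50 MB
```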
  • l

    lignol23

    02/26/2023, 7:42 PM
hi guys, I am a Python developer and I am testing writing some data (a pandas dataframe) to an R2 bucket. Writing small dataframes works for me, but for larger dataframes botocore seems to use multipart upload, which fails with:
OSError: [Errno 22] There was a problem with the multipart upload.
It works without any problems with my self-hosted MinIO. Are there any other Python developers with this problem?
  • e

    Erisa | Support Engineer

    02/26/2023, 7:48 PM
    what python library are you using to interact with R2?
  • l

    lignol23

    02/26/2023, 9:05 PM
    I am using s3fs and also tried pyarrow's filesystem. Here is some example code:
    ```python
    import pyarrow.fs as pafs
    import s3fs
    import pyarrow.parquet as pq
    import pyarrow as pa

    table_small = pq.read_table("small_data.parquet")
    table_large = pq.read_table("large_data.parquet")

    fs1 = s3fs.S3FileSystem(
        key="some_key",
        secret="some_secret",
        client_kwargs=dict(endpoint_url="https://123456.r2.cloudflarestorage.com"),
        s3_additional_kwargs=dict(ACL="private")  # <- this is necessary for writing.
    )

    fs2 = pafs.S3FileSystem(
        access_key="some_key",
        secret_key="some_secret",
        endpoint_override="https://123456.r2.cloudflarestorage.com"
    )

    pq.write_table(table_small, "test/test.parquet", filesystem=fs1)  # <- works
    pq.write_table(table_small, "test/test.parquet", filesystem=fs2)  # <- works

    # failed with OSError: [Errno 22] There was a problem with the multipart upload.
    pq.write_table(table_large, "test/test.parquet", filesystem=fs1)

    # failed with OSError: When initiating multiple part upload for key 'test.parquet' in bucket 'test':
    # AWS Error NETWORK_CONNECTION during CreateMultipartUpload operation: curlCode: 28, Timeout was reached
    pq.write_table(table_large, "test/test.parquet", filesystem=fs2)
    ```
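One possible workaround sketch, untested and using the same placeholder endpoint and credentials as the snippet above: write the Parquet file locally and upload it with boto3, controlling the multipart threshold and part size explicitly via TransferConfig rather than relying on the filesystem layer's defaults.

```python
import boto3
import pyarrow.parquet as pq
from boto3.s3.transfer import TransferConfig

table_large = pq.read_table("large_data.parquet")
pq.write_table(table_large, "large_data_out.parquet")  # write locally first

s3 = boto3.client(
    "s3",
    endpoint_url="https://123456.r2.cloudflarestorage.com",
    aws_access_key_id="some_key",
    aws_secret_access_key="some_secret",
)

# Explicit multipart sizing; 100 MB parts are an arbitrary example.
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
)
s3.upload_file("large_data_out.parquet", "test", "test.parquet", Config=config)
```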
  • r

    rez0n

    02/26/2023, 9:20 PM
I said that it started working, but after a few hours it stopped working again (at least one in five page loads in incognito returns a CORS error, wtf)
  • t

    Till

    02/26/2023, 9:42 PM
I’m trying to back up R2 via Synology’s Cloud Sync, but while it lists buckets correctly for the access key and secret, the authentication then fails. Has anybody got this working?
  • e

    elithrar

    02/26/2023, 9:49 PM
    Purge your cache, wait 1 min, and report back. It’s possible you had cached versions of those files BEFORE you configured CORS, which means some cached objects will NOT have the CORS headers cached.
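A hedged way to check whether a stale cached copy is the problem, using requests against a hypothetical asset URL: look at cf-cache-status alongside the Access-Control-Allow-Origin header.

```python
import requests

# Hypothetical asset URL and origin; adjust to the real site.
url = "https://assets.example.com/some-object.png"
origin = "https://cors-test-page.quaded.com"

r = requests.get(url, headers={"Origin": origin})
print("cf-cache-status:", r.headers.get("cf-cache-status"))
print("access-control-allow-origin:", r.headers.get("access-control-allow-origin"))
# A cache HIT with no access-control-allow-origin suggests the object was cached
# before CORS was configured and needs to be purged.
```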
  • r

    rez0n

    02/26/2023, 9:55 PM
After purging the cache it's back in a broken state. I'm keeping the updated CORS settings on the demo page https://cors-test-page.quaded.com/index.html Look at the dev console and you'll see this strange behaviour
  • c

    conqr

    02/26/2023, 11:31 PM
    Huge W for cat lovers
  • m

    matthew t healy

    02/26/2023, 11:33 PM
    hmm that’s interesting
  • a

    aeharding

    02/26/2023, 11:50 PM
Not supported, since Cloud Sync uses subdomain bucket naming only. You can install rsync and then set up a scheduled task as a workaround
  • e

    Erisa | Support Engineer

    02/26/2023, 11:51 PM
    R2 supports subdomain bucket naming
  • e

    Erisa | Support Engineer

    02/26/2023, 11:51 PM
    it not working here is likely a different incompatibility issue
  • a

    andrew

    02/26/2023, 11:56 PM
    ooof that’s a nasty one
  • z

    Zeblote

    02/26/2023, 11:58 PM
    lots of little things one has to account for to build working systems on r2
  • e

    elithrar

    02/26/2023, 11:58 PM
    Remove the trailing slash from AllowedOrigins - an Origin cannot include a path component. Should be:
    ```json
    "AllowedOrigins": [
      "https://cors-test-page.quaded.com"
    ],
    ```
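For reference, the same rule can be applied over the S3 API with boto3's put_bucket_cors; this is a sketch with placeholder bucket name, methods, and credentials:

```python
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://<account_id>.r2.cloudflarestorage.com",
    aws_access_key_id="<access_key>",
    aws_secret_access_key="<secret_key>",
)

s3.put_bucket_cors(
    Bucket="my-bucket",  # placeholder
    CORSConfiguration={
        "CORSRules": [
            {
                # Origin only: scheme + host, no trailing slash or path.
                "AllowedOrigins": ["https://cors-test-page.quaded.com"],
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["*"],
            }
        ]
    },
)
```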
  • r

    rez0n

    02/27/2023, 12:06 AM
It works right now. Strange that I had the same configuration (without the trailing slash) in the production env where I first saw this issue, but maybe that was due to the cache. I will see how it goes and let you know if I notice any issues. Thanks!
  • e

    elithrar

    02/27/2023, 12:07 AM
    Super common mistake. S3, GCS and most web frameworks "allow" you to do that, but it's not a valid Origin header value.
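A small illustration of what a valid Origin value looks like, derived from a full URL with the standard library (the URL is just an example):

```python
from urllib.parse import urlsplit

url = "https://cors-test-page.quaded.com/index.html"
parts = urlsplit(url)

# An Origin is scheme://host[:port] only; no path, query, or trailing slash.
origin = f"{parts.scheme}://{parts.netloc}"
print(origin)  # https://cors-test-page.quaded.com
```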
  • a

    aeharding

    02/27/2023, 12:12 AM
    Lol, I had this exact same problem a couple days ago. Cloudflare docs have a comment on the same line as the key
  • e

    Erisa | Support Engineer

    02/27/2023, 12:19 AM
May be worth PRing the docs to remove the comment if it's causing more confusion than it's worth