# r2
  • e

    edgaras

    04/30/2023, 1:33 PM
    aws s3api sync https://xxx.r2.cloudflarestorage.com/files /local/path --endpoint-url https://xxx.r2.cloudflarestorage.com
    This doesn't seem to work
  • e

    edgaras

    04/30/2023, 1:40 PM
    OK figured out, this works:
    ```shell
    # Template
    aws s3 sync s3://<MY_BUCKET_NAME> /my_local/path --endpoint-url https://<ACCOUNT_ID>.r2.cloudflarestorage.com

    # Example
    aws s3 sync s3://<MY_BUCKET_NAME> ./ --endpoint-url https://0azdeb1dc4523dada6cd845c17dia629.r2.cloudflarestorage.com
    ```
  • o

    oldmanmeta

    04/30/2023, 3:24 PM
    Can I generate a presigned URL for a specific file in a specific bucket that has no expiry?
  • o

    oldmanmeta

    04/30/2023, 3:24 PM
    and will that URL still be valid for any updated versions of that file?
  • c

    chientrm

    04/30/2023, 3:24 PM
    yep.
  • c

    chientrm

    04/30/2023, 3:25 PM
    the url'll still be valid as long as the API token is not revoked
  • o

    oldmanmeta

    04/30/2023, 3:25 PM
    Perfect - thanks for the quick response!
  • o

    oldmanmeta

    04/30/2023, 3:30 PM
    leads me to the next question - is the expiresIn value required, and if so, how do I specify to never expire? This is the example code I am working off:
    ```js
    await getSignedUrl(S3, new GetObjectCommand({ Bucket: 'my-bucket-name', Key: 'dog.png' }), { expiresIn: 3600 })
    ```
  • c

    chientrm

    04/30/2023, 3:32 PM
    nope. maximum 7 days expires.
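The 7-day cap chientrm mentions matches the SigV4 presigning limit, so the largest `expiresIn` you can pass is 604,800 seconds. A quick sketch of that number (the `getSignedUrl` call in the comment is the same one from the example above, shown here only as a hypothetical usage, not re-run):

```javascript
// SigV4 presigned URLs (R2 included) cap expiry at 7 days.
const MAX_EXPIRES_SECONDS = 7 * 24 * 60 * 60;
console.log(MAX_EXPIRES_SECONDS); // 604800

// Hypothetical usage with @aws-sdk/s3-request-presigner:
//   await getSignedUrl(S3, command, { expiresIn: MAX_EXPIRES_SECONDS })
```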
  • o

    oldmanmeta

    04/30/2023, 3:36 PM
    ok - so I think your previous response of 'yep' meaning the - no expiry part of my question 😉 ... um... any thoughts on this? I understand why it would have an expiry, but in my use case, I have CORS blocks on various assets in a bucket, and other items I generally want them to be public via a pre-signed URL. If I have to create another bucket just for this, that would be pretty rubbish.
  • c

    chientrm

    04/30/2023, 3:37 PM
    you can write a worker to manage ACL :v
  • o

    oldmanmeta

    04/30/2023, 3:37 PM
    Yes - I could create a worker to return the URL
  • c

    chientrm

    04/30/2023, 3:37 PM
    you can return the stream directly, not the url.
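A minimal sketch of what chientrm is suggesting: a Worker that streams the object straight out of a bound R2 bucket instead of handing out a presigned URL. The binding name `MY_BUCKET` is hypothetical; it comes from your wrangler.toml config.

```javascript
// Minimal sketch: serve an R2 object directly from a Worker.
// env.MY_BUCKET is an R2 binding (hypothetical name, set in wrangler.toml).
const worker = {
  async fetch(request, env) {
    const key = new URL(request.url).pathname.slice(1); // "/dog.png" -> "dog.png"
    const object = await env.MY_BUCKET.get(key);
    if (object === null) {
      return new Response("Not found", { status: 404 });
    }
    const headers = new Headers();
    object.writeHttpMetadata(headers); // copies the stored Content-Type etc.
    headers.set("etag", object.httpEtag);
    return new Response(object.body, { headers }); // object.body is a stream
  },
};
// In an actual Worker script: export default worker;
```

This also sidesteps the expiry question entirely, since access control becomes ordinary request handling inside the Worker.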
  • o

    oldmanmeta

    04/30/2023, 3:38 PM
    I see what you are saying - I think you've just helped me see the light 😉
  • o

    oldmanmeta

    04/30/2023, 3:38 PM
    Thanks @chientrm
  • o

    oldmanmeta

    04/30/2023, 3:44 PM
    Can I ask for some further clarity here - I'm a noob to workers and most other things, sorry if this is a stupid question - do I still use the S3 client inside the worker to access the file in the bucket?
  • c

    chientrm

    04/30/2023, 3:44 PM
    nope. it's built-in
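The "built-in" part is an R2 bucket binding declared in wrangler.toml; no S3 client or credentials needed inside the Worker. A sketch, with the worker, binding, and bucket names all illustrative:

```toml
# wrangler.toml -- names here are hypothetical
name = "my-worker"
main = "src/index.js"
compatibility_date = "2023-04-30"

[[r2_buckets]]
binding = "MY_BUCKET"          # exposed as env.MY_BUCKET in the Worker
bucket_name = "my-bucket-name"
```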
  • o

    oldmanmeta

    04/30/2023, 3:45 PM
    v cool - just trying to find some docs - have sooo many tabs open, it's killing me
  • o

    oldmanmeta

    04/30/2023, 3:46 PM
    and there be some more light - now I get the bindings part 😉 going to need another coffee on this
  • o

    oldmanmeta

    04/30/2023, 3:50 PM
    Is the idea that I create a new worker for each function I want to execute, or do I branch from a single worker to perform various functions? I'm coming from a position of say a Controller in a rest API where we would have multiple functions callable, but I'm not clear on if this is intended best practice with a worker?
  • c

    chientrm

    04/30/2023, 3:50 PM
    that's up to ur case.
  • c

    chientrm

    04/30/2023, 3:51 PM
    my advice is to put everything into a single script first 😐
  • o

    oldmanmeta

    04/30/2023, 3:52 PM
    ok cool - so perhaps use something like a param to act as a quasi controller path? e.g. domain.com/path/function and perhaps push values in a body, or just the body if its simple
  • c

    chientrm

    04/30/2023, 3:52 PM
    can use hono or itty-router.
  • o

    oldmanmeta

    04/30/2023, 3:53 PM
    oh v cool - I will check those out for sure. Thanks for the help, have so much to learn and this has been great.
  • o

    oldmanmeta

    04/30/2023, 3:53 PM
    I'll try and smash out a worker and see if I can make this work
  • o

    oldmanmeta

    04/30/2023, 4:20 PM
    Does the worker need to be published in order to access the R2 bucket, or should it be able to access it from the dev environment?
  • h

    HardAtWork

    04/30/2023, 4:21 PM
    wrangler can emulate a bucket in local dev, but if you want to access the files already in the bucket, you need to deploy
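In CLI terms, the two modes HardAtWork describes look like this (the local simulation starts with an empty bucket; remote mode runs against the real one):

```shell
# Local simulation of the bucket (empty until you put objects into it):
npx wrangler dev --local

# Run against the real, deployed bucket contents:
npx wrangler dev --remote
```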
  • h

    HardAtWork

    04/30/2023, 4:21 PM
    Or remote dev might work
  • o

    oldmanmeta

    04/30/2023, 4:21 PM
    ok - cool, thanks for clarifying that