# off-topic
j
storage is in the database, right? so could I have a foreign key to an image in storage, then have my own images table to add things like alt text, width, and height that references the image in storage, and maybe even cascade delete it if the image is deleted?
g
You can certainly link to the storage objects table row, but you can't delete with a cascade: the delete of the file data in S3 storage is done by the storage API, which then deletes the objects table row. You would orphan the actual file in storage if you just deleted the row in the objects table. I had to create a trigger function which called the storage API through the http extension to delete the files "attached" to a note object when it was deleted in my app.
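For illustration, a minimal sketch of that kind of trigger, assuming the pgsql-http extension is enabled; the notes and note_files tables, the bucket name, and the placeholder project URL and service-role key are all hypothetical:

```sql
-- Hypothetical sketch: remove storage files when a note row is deleted.
-- Assumes note_files(note_id, object_path) tracks which storage objects are
-- "attached" to a note, and that the pgsql-http extension is installed (on
-- Supabase its functions may live in the extensions schema).
create or replace function delete_note_files() returns trigger as $$
declare
  f record;
begin
  for f in select object_path from note_files where note_id = old.id loop
    -- Call the Storage API so the file is removed from S3 as well as the
    -- storage.objects row (deleting the row alone would orphan the file).
    perform http((
      'DELETE',
      'https://<project-ref>.supabase.co/storage/v1/object/posts/' || f.object_path,
      array[http_header('Authorization', 'Bearer <service-role-key>')],
      null,
      null
    )::http_request);
  end loop;
  return old;
end;
$$ language plpgsql security definer;

create trigger notes_delete_files
  before delete on notes
  for each row execute function delete_note_files();
```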
j
ok, and for making a policy, what are all the fields I get? like, I want to limit uploads to certain image types, for example
```sql
((bucket_id = 'posts'::text) AND (storage.extension(name) = 'webp'::text))
```
but I can simply name the file test.webp and provide a jpg image to get past this, so I want the actual file type, not just the name's extension
when I list files I see

```
metadata:
  cacheControl: "max-age=3600"
  mimetype: "image/webp"
  size: 279338
```

perhaps somehow getting mimetype and size would be useful to limit on as well
g
You can get to those in RLS policies. If you go to the dashboard table UI and select storage for the schema, you can see the objects table and the columns you can use. The metadata is stored as JSON.
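If you'd rather check from SQL than the dashboard, something like this lists the columns a policy on storage.objects can reference:

```sql
-- List the columns of storage.objects (metadata among them, stored as JSON).
select column_name, data_type
from information_schema.columns
where table_schema = 'storage'
  and table_name = 'objects';
```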
j
Ok I am trying like this
```sql
(
  (bucket_id = 'posts'::text) AND
  (
    ('private'::text = (storage.foldername(name))[1]) OR
    ('public'::text = (storage.foldername(name))[1])
  ) AND
  ((uid())::text = (storage.foldername(name))[2]) AND
  (
    (
      (storage.extension(name) = 'webp'::text) AND
      (json_extract_path(storage.metadata, '{mimetype}') = 'image/webp'::text) AND
      (json_extract_path(storage.metadata, '{size}') < 999999)
    ) OR
    (storage.extension(name) = 'mp4'::text) OR
    (storage.extension(name) = 'pdf'::text) OR
    (storage.extension(name) = 'gif'::text)
  )
)
```
and I get an error on save:

```
Error updating policy: missing FROM-clause entry for table "storage"
```

can I access the metadata like that, or is this saying I need some sort of FROM (storage) clause?

This is for an insert policy, btw. And if I don't assign target roles, I'm assuming it just activates for all inserts.
g
There is no storage.metadata function like foldername and extension; you have to do SQL: select metadata from storage.objects where ..., and then use Postgres JSON operators to get to your JSON field on that result.
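A sketch of what that looks like, using standard Postgres JSON operators on the metadata column (whether this works inside the policy editor is exactly what the rest of the thread gets into):

```sql
-- g's suggestion, roughly: read the metadata column from storage.objects
-- and use JSON operators on the result. ->> extracts a field as text.
select metadata ->> 'mimetype' as mimetype,
       (metadata ->> 'size')::bigint as size_bytes
from storage.objects
where bucket_id = 'posts';

-- The failing fragment above would become checks along these lines:
--   (metadata ->> 'mimetype') = 'image/webp'
--   ((metadata ->> 'size')::bigint) < 999999
```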
j
area of focus being
```sql
(json_extract_path(storage.metadata, '{mimetype}') = 'image/webp'::text)
```
ohhh ok let me try that
I can do declare and begin and end in these checks then, I'm assuming. I'll try that.
g
What table is this insert policy on?
j
It's on storage.objects, for when someone uploads a file. I made it so there is a posts bucket (not public), then a public folder and a private folder, and the next folder in each is the uid of the uploader. I want the public ones readable by everyone, and then all insert, update, delete for a user on their own files. Then only allow specified file types under a specified size.
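A hedged sketch of that layout as explicit policies (names invented; auth.uid() as in standard Supabase policies). On the target-roles question above: a policy with no TO clause applies to all roles by default:

```sql
-- Anyone may read files under public/<uid>/... in the posts bucket.
create policy "public folder readable by anyone"
on storage.objects for select
using (
  bucket_id = 'posts'
  and (storage.foldername(name))[1] = 'public'
);

-- Owners get select/insert/update/delete on their own files in either folder.
create policy "owners manage their own files"
on storage.objects for all
using (
  bucket_id = 'posts'
  and (storage.foldername(name))[1] in ('public', 'private')
  and (storage.foldername(name))[2] = auth.uid()::text
)
with check (
  bucket_id = 'posts'
  and (storage.foldername(name))[1] in ('public', 'private')
  and (storage.foldername(name))[2] = auth.uid()::text
);
```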
g
Ahh, so there is likely a problem then...
The storage API does two calls to the objects table when a file is uploaded. The first is an insert with basic information, and that's when the policy is checked. Then, AFTER it stores the file to S3, it updates the row and adds the metadata...
j
ohh ok, so there is really no way to check what is being uploaded or its size until after it's too late. That's odd. In general, is this a good approach I am taking?
g
Yeah, so I think you just have the name of the file at insert time. I can't remember what storage-api does if the 2nd update to the db fails.
j
Ok, I'll look into that. Also, with the read policy I have for the public folder, that means I can use getPublicUrl to get URLs, right? I seem to be having an issue. And createSignedUrl() for private ones.
```js
supabase
  .storage
  .from('posts')
  .list('public/' + user.id, {
    limit: 100,
    offset: 0,
    sortBy: { column: 'name', order: 'asc' },
  })
```
gets me the list of files, and then I need URLs from the existing items
```js
publicData.map(async (publicFile) => {
  const { publicURL, error } = await supabase
    .storage
    .from('posts')
    // note: list() above used 'public/' + user.id, but this path omits the 'public/' folder
    .getPublicUrl(`${user.id}/${publicFile.name}`, 60)
  return publicURL
})
```
And it gives me URLs, but they don't work: https://cnxmxunyrouuwprrtvst.supabase.co/storage/v1/object/public/posts/e8527e42-34de-4840-a378-c35f93a10570/test1.webp
g
getPublicUrl does not make an API call. It just builds a path based on constants and parameters. Then when you use that URL it does not get impacted by RLS policies at all as it goes straight to s3 storage.
To make a bucket public you need to actually select the "make public" option on the bucket.
j
oh, so I guess instead of having a private posts bucket with public and private folders, I should instead do public and private buckets, with the folders being uid/filename
g
Yes, if you want to have long-lived URLs without a token, then you need the bucket to be public.
j
ok, I see what you mean. I have it working properly just using a token as-is, even for signed-out users, but there may be advantages to switching to public anyway. I'm guessing that with public buckets and, as you called them, "long lived urls", these might be better for SEO, since I'm guessing the tokenized way wouldn't show up in Google Images, for example. I don't know all the pros and cons of each, but I might give it a bit of thinking tonight.
If I have, say, a tweet and it's a 'draft', I would want that image private until send, and then could use move() to make it public on send. But I don't believe I can move from one bucket to another, so I would just need to make sure that when they have a draft the img URL is still public somewhere, even if not convenient in the front end.
g
Move only works within a bucket. If your URLs are not predictable, then until you actually post the URL somewhere public the file is pretty secure, as you can't list a public bucket without permission. Of course, it depends on the security level you need for unpublished info.