Migrate data listeners from Firestore to Supabase?
# help-and-questions
m
Our team is considering moving our Firestore application to Supabase. However, we currently use the onSnapshot functionality (https://firebase.google.com/docs/firestore/query-data/listen) extensively throughout our application. We use the Firebase web SDK to listen to real-time changes on data. Every time the app boots up, we start listeners on collections for changes, allowing our app to basically have a real-time sync of data for all users in the web browser. I know that Supabase has a "changes API", but this looks like it isn't exactly what we need: when a user loads up the browser, they need to have an "initial snapshot" of the data and then start listening to changes from that snapshot going forward. The changes API appears to only give you changes, not an initial starting snapshot of the data. Is there a way to get this behavior with Supabase? I'm happy to explain further if the above explanation is not helpful. Thanks! -Michael
g
You have to create the "snapshot" table yourself with initial database queries, then use Realtime to get change events and keep it updated based on those events. Realtime does not run queries; it delivers single-row changes from a table, with one filter of the several types shown at the link below. You have to deal with joins either with a database call or with multiple Realtime subscriptions. Depending on the complexity of your snapshots, this can range from fairly easy to hard. https://supabase.com/docs/guides/realtime/extensions/postgres-changes describes what is available.
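Roughly, the pattern is one initial `select` to build the snapshot, plus a `postgres_changes` subscription to patch it. A minimal sketch of the in-memory cache side (the row shape, the `id` primary key, and the `todos` table named in the comment are illustrative assumptions, not Supabase specifics):

```typescript
// Minimal sketch: an in-memory "snapshot" cache keyed by primary key,
// seeded by an initial query and patched by realtime change events.
// The row shape and `id` primary key are assumptions.

type Row = { id: number; [key: string]: unknown };

type ChangeEvent =
  | { eventType: "INSERT" | "UPDATE"; new: Row }
  | { eventType: "DELETE"; old: Row };

const cache = new Map<number, Row>();

// Seed the cache from the initial select (the "snapshot").
function applySnapshot(rows: Row[]): void {
  for (const row of rows) cache.set(row.id, row);
}

// Patch the cache from a single postgres_changes payload.
function applyChange(event: ChangeEvent): void {
  if (event.eventType === "DELETE") cache.delete(event.old.id);
  else cache.set(event.new.id, event.new);
}
```

With supabase-js v2, `applySnapshot` would consume the result of `await supabase.from('todos').select('*')`, and `applyChange` would be the handler passed to `channel.on('postgres_changes', …)`.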
m
How do I create the snapshot and atomically ensure that I get the changes starting from the time of the snapshot?
g
I do the subscription first, then when it reports connected, I load the initial data in that handler. I've not done multiple subscriptions on a data set before, though.
This user has generated some code that might be useful; I've not followed it in a while... https://code.build/p/GZ6ioN6YzcpDwNwGNnDpEn/supabase-subscriptions-just-got-easier
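A sketch of that "subscribe first, then load" ordering, with the client reduced to a small stand-in interface so the ordering logic stands alone (with supabase-js v2, `subscribe` here corresponds to `channel.on('postgres_changes', …).subscribe(statusCallback)` and `loadAll` to `supabase.from(table).select('*')`; all names in this sketch are illustrative assumptions):

```typescript
// "Subscribe first, then load": start the realtime subscription, and only
// once it reports SUBSCRIBED run the initial query, so no change can slip in
// between the query and the start of the subscription. The interface below
// is a stand-in for the supabase-js channel.

type Status = "SUBSCRIBED" | "CHANNEL_ERROR" | "TIMED_OUT" | "CLOSED";
type Row = { id: number };

interface ChangesChannel {
  // Register a change handler and open the channel; supabase-js folds these
  // into channel.on(...).subscribe(statusCallback).
  subscribe(onChange: (row: Row) => void, onStatus: (s: Status) => void): void;
}

function startSync(
  channel: ChangesChannel,
  loadAll: () => Promise<Row[]>,
  onChange: (row: Row) => void,
  onSnapshot: (rows: Row[]) => void,
): void {
  channel.subscribe(onChange, async (status) => {
    // Run the initial query only after the channel reports it is live.
    if (status === "SUBSCRIBED") onSnapshot(await loadAll());
  });
}
```

Note this only guarantees you don't *miss* events; as discussed below, an event can still interleave with the snapshot load itself.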
m
This approach seems rife with potential race conditions, no? You start the listener first, and then you load the initial data... so what if the initial data overwrites events already received by the listener?
g
https://supabase.com/docs/guides/realtime/extensions/postgres-changes#combination-changes This example seems to imply you can have multiple tables and a single status to trigger on.
m
You could have writes arriving on the realtime listener while you are still loading the initial data.
So then you'd have... old data.
You get what I'm saying, Gary? The Firestore listener starts with a snapshot, and then notifies of live events from that logical point in time on, starting exactly from the snapshot of the data.
1) Register the listener.
2) You receive the full snapshot.
3) A change event comes in for document A.
4) The change event for document A gets processed.
5) The full initial data for document A gets loaded.
You end up with old data for document A.
g
The opposite is also true. You can't fetch the data and then start realtime; that is worse, as you could lose seconds of data while it connects. I understand your case. You certainly want to know all the events coming in, and I need to think through how replication works between the table read you get and a change before or after your read, as to whether it is a realistic race or not. It could be. There is no information from Supabase in the guides or docs on dealing with this that I know of, and I likely would have seen it.
You might generate an issue on supabase/realtime and see if one of the devs has a comment also. This is mainly a user helping user forum.
Another approach is to just rerun the query on every relevant change from realtime 🤨 Edit: this may not be as bad as it seems, since you already have to do a reload on any connection error (power-saving modes on devices, a tab going to the background for a few minutes, an internet interruption, etc.). It just depends on how often these query updates happen...
... The race is: you read the table and the database starts sending back the response. Then a change occurs; database replication processes it and sends out a message. Supabase's realtime server gets notice of the change, then reads the database to verify RLS and the filter. Based on that result getting back to it, it then sends the websocket message down. At least that is my understanding of the new realtime.
m
There's really no way to guarantee that the data will be accurate. Even if the realtime listener and the snapshot are delivered consistently, once it gets to application-level code, depending on how it's processed, one callback might get ordered before the other, and that could lead to data that doesn't look right. The only way to solve this is to have some explicit logical ordering so that updates that have been overwritten are explicitly dropped, on both the snapshot loader AND the realtime listener.
So it appears, based on what we are saying here, Gary, that this isn't something that is supported by Supabase.
g
There certainly is nothing built in. And I agree there is a race, unless you can just reload the query each time there is an update, which works for some cases. Also, dealing with an event that arrives before your initial data has finished loading is possible with a queue, but it's messy and may need to involve an updated_at field...
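A sketch of that combination, the queue plus an `updated_at` guard on both paths (the `updated_at` column, ISO-8601 strings that compare lexicographically, and upsert-only events are assumptions; DELETE events would need extra care and are left out):

```typescript
// Explicit logical ordering: both the snapshot loader and the realtime
// handler drop any row older than what is already cached, and events that
// arrive before the snapshot finishes are buffered and replayed. Assumes an
// `updated_at` ISO-8601 column (lexicographic compare works for one fixed
// format) and ignores DELETE events for brevity.

type Row = { id: number; updated_at: string };

const cache = new Map<number, Row>();
const pending: Row[] = [];
let snapshotLoaded = false;

// Apply a row only if it is at least as new as the cached version.
function upsertIfNewer(row: Row): void {
  const existing = cache.get(row.id);
  if (!existing || row.updated_at >= existing.updated_at) cache.set(row.id, row);
}

// postgres_changes handler: buffer until the snapshot is in, then apply.
function onRealtimeRow(row: Row): void {
  if (!snapshotLoaded) pending.push(row);
  else upsertIfNewer(row);
}

// Initial query handler: load the snapshot, then replay buffered events.
function onSnapshot(rows: Row[]): void {
  for (const row of rows) upsertIfNewer(row);
  snapshotLoaded = true;
  for (const row of pending) upsertIfNewer(row);
  pending.length = 0;
}
```

With the version guard on both paths, it no longer matters whether a stale snapshot row lands before or after a newer realtime event for the same key; the older write is dropped either way.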
Just an update... it looked like there was no way to avoid lost data, but this is solvable, at least as far as keeping a set of table data updated. (This is not the same as a Firestore query snapshot.) This has a few details and backup data, and soon code (that is working... after cleanup): https://github.com/GaryAustin1/Realtime2 The table updating works like the Stream realtime mode in Flutter. You have to have a primary key to keep your in-memory data array current.

https://cdn.discordapp.com/attachments/1111039543756472452/1112219778778665062/image.png