# help
b
Whenever I try to push a large dataset to my database using supabase-py, the request times out and my dashboard becomes unavailable for several minutes. Any ideas on what’s going on, and how to prevent this (besides chunking my data)?
n
Hello @ben! This thread has been automatically created from your message in #843999948717555735 a few seconds ago. We have already mentioned the @User so that they can see your message and help you as soon as possible! Want to unsubscribe from this thread? Right-click the thread in Discord (or use the ``...`` menu) and select "Leave Thread" to unsubscribe from future updates. Want to change the title? Use the ``/title`` command! We have solved your problem? Click the button below to archive it.
t
Have you got any triggers on the tables you are inserting into? Perhaps it's a blocking query on insert!
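For reference, a quick way to confirm whether a table has user-defined triggers is to query `pg_trigger`; a sketch, with `public.my_table` standing in for the table being inserted into:

```sql
-- List non-internal (user-defined) triggers on a table.
-- 'public.my_table' is a placeholder for the table being inserted into.
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'public.my_table'::regclass
  AND NOT tgisinternal;
```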
n
ben (2022-05-08)
t
What's the result of:
```sql
SELECT pid,
       now() - pg_stat_activity.query_start AS duration,
       query,
       state,
       *
FROM pg_stat_activity
WHERE state = 'active';
```
b
I don't have any triggers on the tables; they're freshly created with nothing but RLS added to allow read/write. I shouldn't have any data issues either. I fixed my client-side timeout, and now I get an upstream timeout message:
```
postgrest.exceptions.APIError: {'message': 'The upstream server is timing out'}
```
The query gives me a table with some rows in it, and the contents change depending on whether I run it while my connection is timing out or while it's idle. Which field(s) are of interest?
t
We are looking for blocked queries that are hanging around waiting for something else to finish.
How simple is your RLS policy?
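As a sketch of the blocked-query check mentioned above, `pg_blocking_pids()` (available since Postgres 9.6) pairs each blocked session with the session blocking it:

```sql
-- For each blocked session, show its query alongside the query of the session blocking it.
SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY (pg_blocking_pids(blocked.pid));
```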
b
All access for all users, plus another that's read access for all users. When I'm done making changes I go delete the "all access" one
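For context, policies like the ones described usually look roughly like this; the table and policy names below are made up for illustration:

```sql
-- Hypothetical names. The permissive "all access" policy is the one deleted after bulk changes;
-- the read-only policy stays in place.
CREATE POLICY "all_access_for_all_users" ON public.my_table
  FOR ALL USING (true) WITH CHECK (true);

CREATE POLICY "read_access_for_all_users" ON public.my_table
  FOR SELECT USING (true);
```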
Here's the query that turns up in the output of that script when I'm pushing enough data to choke things up:
```sql
with pub as (
  select concat_ws(
           ',',
           case when bool_or(pubinsert) then 'insert' else null end,
           case when bool_or(pubupdate) then 'update' else null end,
           case when bool_or(pubdelete) then 'delete' else null end
         ) as w2j_actions,
         coalesce(
           string_agg(
             realtime.quote_wal2json(format('%I.%I', schemaname, tablename)::regclass),
             ','
           ) filter (where ppt.tablename is not null),
           ''
         ) w2j_add_tables
  from pg_publication pp
  left join pg_publication_tables ppt on pp.pubname = ppt.pubname
  where pp.pubname = $1
  group by pp.pubname
  limit 1
), w2j as (
  select x.*, pub.w2j_add_tables
  from pub,
       pg_logical_slot_get_changes(
         $2, null, $3,
         'include-pk', '1',
         'include-transaction', 'false',
```
But the state remains active
Is it just a timeout on the backend? I haven't changed the timeout length and I'm just using the anon key
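On the backend-timeout question, a sketch for checking what statement timeout is in effect and whether the API roles carry their own overrides (the role names here assume the standard Supabase `anon`/`authenticated`/`authenticator` roles):

```sql
-- Timeout in effect for the current session.
SHOW statement_timeout;

-- Any role-level overrides (e.g. a statement_timeout entry) show up in rolconfig.
SELECT rolname, rolconfig
FROM pg_roles
WHERE rolname IN ('anon', 'authenticated', 'authenticator');
```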
t
Interesting. Looks like something with Realtime.
If so, try disabling it and see whether the problem still occurs. That way you've at least narrowed down the problem.
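One way to see what Realtime is actually watching, and to drop a table from it, is via the publication it reads from (assumed here to be the standard `supabase_realtime`; the table name is a placeholder):

```sql
-- Tables currently included in the Realtime publication.
SELECT schemaname, tablename
FROM pg_publication_tables
WHERE pubname = 'supabase_realtime';

-- Remove a placeholder table from the publication.
ALTER PUBLICATION supabase_realtime DROP TABLE public.my_table;
```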
b
It's on but not for any tables
Well, disabling it changed the error message at least. Instead of a timeout I'm now getting:
```
postgrest.exceptions.APIError: {'message': 'An invalid response was received from the upstream server'}
```
Maybe I didn't disable it properly. The query still has realtime in it:
```sql
with pub as (
  select concat_ws(
           ',',
           case when bool_or(pubinsert) then 'insert' else null end,
           case when bool_or(pubupdate) then 'update' else null end,
           case when bool_or(pubdelete) then 'delete' else null end
         ) as w2j_actions,
         coalesce(
           string_agg(
             realtime.quote_wal2json(format('%I.%I', schemaname, tablename)::regclass),
             ','
           ) filter (where ppt.tablename is not null),
           ''
         ) w2j_add_tables
  from pg_publication pp
  left join pg_publication_tables ppt on pp.pubname = ppt.pubname
  where pp.pubname = $1
  group by pp.pubname
  limit 1
), w2j as (
  select x.*, pub.w2j_add_tables
  from pub,
       pg_logical_slot_get_changes(
         $2, null, $3,
         'include-pk', '1',
         'include-transaction', 'false',
```
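If that query keeps appearing after Realtime is switched off, one thing worth checking (a sketch, not a definitive diagnosis) is whether a logical replication slot for it is still present and active:

```sql
-- A lingering Realtime slot would show up here as an active logical slot.
SELECT slot_name, plugin, slot_type, active
FROM pg_replication_slots;
```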
It's also saying:
```
You are passing an API error to the data field. (type=value_error)
```
Is that an error in my data? I've gotten different error codes before when my data had issues.