# off-topic
j
Hey all! I'm new to Supabase and have enjoyed migrating our data to it. However, my one concern is speed. Basically, I've used two different serverless GraphQL products to create our API (StepZen and Hasura), but with both it's taking up to two seconds to get data from one table. This leads me to believe the speed issue is with Supabase. I have two questions: 1. Should I expect the free tier of Supabase to be faster, or is it essential to move to the $50 tier if I want sub-second responses? 2. Is there a better service for accessing Supabase via GraphQL? Thanks all!
s
Supabase is based on Postgres, so it can scale to as many rows as you need. The overhead of Supabase is minimal (e.g. 1-2% compared to querying Postgres directly). If I were to guess, I'd say you've got hundreds of thousands, or maybe even millions of rows, and don't have any table partitions or indexes set up. Without those in place, Postgres will essentially perform the equivalent of a linear search through the table, since it doesn't know where the data is unless you have an index or a partitioned table (or both, which can be even better than just one or the other). An index gives PG a way to find data (similar to an index in a book). A partitioned table splits your data into smaller hidden tables based on a column (or columns), so PG knows where to search when it sees a query filtering on that column. You still query the same table name, but PG knows which of those hidden tables to search. As an example, with 40 million rows in a table, I have set up partitions, indexes, and some optimised queries, and I can query data in about 200-300 ms. Upgrading plans wouldn't bring a huge benefit.
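To make the index/partition idea concrete, here's a minimal sketch of what that setup could look like in Postgres. The table and column names (`events`, `user_id`, `created_at`) are hypothetical, not from the conversation:

```sql
-- Hypothetical "events" table, range-partitioned by month on created_at.
-- Queries that filter on created_at are routed to the matching partition
-- instead of scanning everything.
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    user_id    bigint NOT NULL,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

-- One hidden child table per month.
CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- A b-tree index so lookups by user_id don't scan the whole table.
CREATE INDEX events_user_id_idx ON events (user_id);

-- You still query the parent table; Postgres picks the partition(s).
SELECT * FROM events
WHERE created_at >= '2024-01-05' AND user_id = 42;
```

With both in place, a filtered query touches only one month's partition and then uses the index within it, which is how multi-million-row tables stay in the low-hundreds-of-ms range.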
s
For more concrete info, we publish our benchmarks in this repo: https://github.com/supabase/benchmarks for REST this is more relevant: https://github.com/supabase/benchmarks/issues/2 As Scott mentioned, it really depends on how you structure your DB and what your queries look like, but a subsecond response time shouldn't be hard to reach.
j
Thanks @User & @User. Right now I'm getting the building blocks of our tech stack in place and am only testing Supabase with 15 rows of data, joined to a table with just 2 rows. Eventually, this might be 3-4k rows, but nothing crazy. Do you think indexing would help me pick up the speed?
s
That seems strange. Did you set your Supabase project up in a region nearest to you? Also, any front-end which is connecting to it, is that hosted in the same region? I know from a recent experience that when I hosted my API (which pulls data from Supabase) on Railway, it was taking around 1-2 seconds to bring back even simple queries because Railway was hosted in the middle of the US, while my Supabase instance was hosted in the UK (and I'm based in the UK too). The round trip latency was extremely high, even though the actual time running the query was around 100 ms.
j
I thought it seemed strange too. Supabase is in London, the GraphQL API server is in New York, and I'm testing from the UK. Naturally, it would be better to move the data closer to the API layer to avoid two trips over the Atlantic. Still, it's feeling more sluggish than I expected. I'm trying to figure out how to measure the speed of the response.
2.6 seconds.
s
If it's UK to GQL in NY and Supabase in London, it's actually more than 2 trips: [You, request - UK] --> [GQL request - NY] --> [Supabase request - London] --[Supabase processing query]--> [Supabase response - London] --> [GQL response - NY] --> [You, response, UK]
If you run the same query directly against Supabase (using the query editor in the dashboard or PGAdmin), is the response still slow?
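One way to separate server-side query time from network/tooling overhead is `EXPLAIN ANALYZE`, which you can run in the Supabase SQL editor or PGAdmin. A sketch, using a hypothetical table name (`my_table`):

```sql
-- EXPLAIN ANALYZE actually executes the query and reports timing
-- measured on the database server itself.
EXPLAIN ANALYZE
SELECT * FROM my_table;

-- Check the "Execution Time" line at the bottom of the output.
-- If it's a few ms there but the client still sees 2+ seconds,
-- the slowness is in the network or the tooling, not Postgres.
```

This makes it easy to tell whether the 2-second responses are coming from the query itself or from the round trip.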
j
It's good to know that Supabase should be faster. I'll move now and test; should be retestable in 5 mins.
2.55 secs
s
Hmm, that's very strange. Just tested on my minimal Supabase project, and I'm getting similar results. Usually it'd complete in under 200 ms
This definitely looks like an issue with the dashboard or something in that area - currently taking 2500ms to run. If I run the same query directly against the DB with PGAdmin, it completes in 150ms.
j
Just loaded the same data into a new Supabase project on the US east coast. Response at 2.42 seconds (via the Supabase SQL editor). Could my data have something to do with it? For example, my ID is a string/text.
s
An auto-generated UUID as a primary key might help. It's possible it'll make a difference, but it seems unlikely with such a small dataset, especially since you're selecting * without any filters.
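For reference, here's a minimal sketch of a table with a server-generated UUID primary key. The table name (`items`) is hypothetical; `gen_random_uuid()` is built into Postgres 13+ (older versions can get it from the pgcrypto extension):

```sql
-- Hypothetical table with an auto-generated UUID primary key,
-- instead of a free-form text ID.
CREATE TABLE items (
    id   uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    name text NOT NULL
);

-- The id is generated server-side; the client never supplies it.
INSERT INTO items (name) VALUES ('first item');
```

A UUID primary key gets an index automatically (all primary keys do), though as noted above, with only a handful of rows and no filters it's unlikely to change the timings here.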
j
I'll keep trying different things, but it's good to know to expect up to 150 ms responses.
OK, something is messed up. I've created a new table with 5 rows and 2 columns and performed the most basic SQL query. Still getting over 2 second responses. I'm feeling a bit unlucky with Supabase. The results are the same from here in the UK and from my virtual desktop hosted in Germany.
c
@Scott P Curious to learn how you solved the Railway issue? Their service looks interesting ☺️
s
Only way to solve the latency issue was to migrate back to DigitalOcean and set up Portainer + Docker, instead of using Railway.
j
Thanks for your help over the weekend @User. I've decided to drop GraphQL and use the Supabase API. The main reason is that it means I can locate the Supabase DB in London, which through PGAdmin tools is 180 ms vs 700 ms if the DB is in the US.
s
Glad to hear you got something figured out 🙂 One other option, if possible, might be to move the GQL API to London, but if you've no need for GQL specifically, then the Supabase API is a great option too.
h
I'm in India and my Supabase is in the US. It takes just 200 ms.
j
What was @User's issue?
Seems weird. Did it have anything to do with Supabase, or just location?