Jesse
05/17/2023, 7:48 PM
osamita
05/17/2023, 8:00 PM
https://cdn.discordapp.com/attachments/1108484119626780783/1108484119773593651/image.png
fekdaoui
05/17/2023, 8:27 PM
P-a
05/17/2023, 9:10 PM
Parrybro
05/17/2023, 9:16 PM
I get Failed to run sql query: function cube(vector) does not exist when I try to run a query with a cube_distance function. Has anyone run into this issue before?
happenator
05/17/2023, 9:26 PM
I'd like to keep provider_access_token and provider_refresh_token updated in the DB on auth/re-auth. It seems I could do this by adding logic to the client to detect when these values have changed and send them back to the server, but I'd love to find a simpler, more reliable, and secure way by simply triggering on something server side. Has anyone found a way to do this?
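For reference, supabase-js v2 exposes these values on the session object (as provider_token and provider_refresh_token) and emits an auth event on sign-in and token refresh, so one client-side option is an onAuthStateChange listener that forwards them to the server; note that the provider token is generally only present in the session immediately after an OAuth sign-in. A minimal sketch — the /api/store-provider-tokens endpoint is a hypothetical one you would implement yourself:

```js
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY);

// Forward the provider tokens to the server whenever the session changes.
// "/api/store-provider-tokens" is a hypothetical endpoint, not a Supabase API.
supabase.auth.onAuthStateChange((event, session) => {
  if ((event === "SIGNED_IN" || event === "TOKEN_REFRESHED") && session?.provider_token) {
    fetch("/api/store-provider-tokens", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        provider_token: session.provider_token,
        provider_refresh_token: session.provider_refresh_token,
      }),
    });
  }
});
```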
phil
05/17/2023, 10:05 PM
import { createMiddlewareSupabaseClient } from "@supabase/auth-helpers-nextjs";
import { NextResponse } from "next/server";
export async function middleware(req) {
const res = NextResponse.next();
const supabase = createMiddlewareSupabaseClient({ req, res });
const { data: { session } } = await supabase.auth.getSession();
if (session === null) return NextResponse.redirect(new URL("/login", req.nextUrl));
return res;
}
export const config = {
matcher: [],
};
J21TheSender
05/17/2023, 10:44 PM
RubensNobre
05/17/2023, 10:54 PM
zardoru / Agka
05/18/2023, 12:44 AM
https://cdn.discordapp.com/attachments/1108555581314306098/1108555581553377280/image.png
Marty
05/18/2023, 1:29 AM
supabase.channel('realtime slots')
.on(
'postgres_changes',
{ event: '*', schema: 'public', table: 'Time Slots' },
(payload) => {
console.log('Change received!', payload)
}
)
.subscribe()
Is there anything I'm missing? The Supabase dashboard is receiving the realtime requests as well.
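A small debugging aid, independent of this particular table: subscribe() accepts a status callback, which at least shows whether the channel actually reaches SUBSCRIBED or fails with CHANNEL_ERROR / TIMED_OUT, narrowing the problem down to either the subscription itself or the table's replication settings. A sketch using the same channel as above:

```js
// `supabase` is the same client used in the snippet above.
supabase
  .channel("realtime slots")
  .on(
    "postgres_changes",
    { event: "*", schema: "public", table: "Time Slots" },
    (payload) => console.log("Change received!", payload)
  )
  // Reports SUBSCRIBED, TIMED_OUT, CLOSED or CHANNEL_ERROR.
  .subscribe((status, err) => console.log("channel status:", status, err ?? ""));
```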
Baorong Huang
05/18/2023, 2:51 AM
The useUser hook does not exist in supabase-js
https://cdn.discordapp.com/attachments/1108587735578578994/1108587735729590292/image.png
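For reference, useUser is a React hook from the Supabase auth-helpers packages (e.g. @supabase/auth-helpers-react), not part of supabase-js itself; with plain supabase-js v2 the closest equivalent is auth.getUser() (or auth.getSession() for the cached session). A minimal sketch:

```js
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY);

// Fetches the currently authenticated user, validating the JWT with the Auth server.
const {
  data: { user },
  error,
} = await supabase.auth.getUser();

console.log(user ?? "not signed in", error ?? "");
```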
zadlan
05/18/2023, 3:19 AM
Akkilah
05/18/2023, 3:51 AM
curl -X DELETE 'https://liiusngzzaswexdpunjg.supabase.co/rest/v1/airport?idairport=eq.28' -H "apikey: SUPABASE_KEY" -H "Authorization: Bearer CLIENT_JWT_TOKEN"
However, I'm facing an issue. I couldn't find a way in Supabase to set the client JWT for createClient(). Currently, I can only use either the anonymous key or the server key. I've searched and tried using the following code snippet:
const { data: d, error: e } = supabase.auth.setSession({
access_token: da.session.access_token,
refresh_token: da.session.refresh_token
});
But it doesn't seem to have any effect. I also tried looking for other documentation, but I couldn't find anything specific (it seems that there was a way in SDK V1, but not in V2).
Additionally, after an extensive search, I thought that maybe when I use the following code snippet:
let { data: da, error: er } = await supabase.auth.signInWithPassword({
email: 'akkilah@gmail.com',
password: 'xxxxxxxx'
});
Supabase adds the client JWT to the Supabase client internally. However, it still doesn't behave as expected even when I tried doing this from inside the API middleware (meaning logging in from within the middleware).
Any advice on how to achieve what I want?
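One approach that works in supabase-js v2 for making PostgREST requests under a specific user's JWT (mirroring the curl call above) is to pass an Authorization header when creating the client; alternatively, note that setSession() returns a promise and has to be awaited before issuing queries. A minimal sketch of the header approach — clientJwt is assumed to be the access_token returned by signInWithPassword:

```js
import { createClient } from "@supabase/supabase-js";

// clientJwt: the access_token obtained from signInWithPassword (an assumption here).
const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY, {
  global: {
    headers: { Authorization: `Bearer ${clientJwt}` },
  },
});

// RLS now evaluates this request with the user's role and claims.
const { error } = await supabase.from("airport").delete().eq("idairport", 28);
```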
auser
05/18/2023, 5:13 AM
Can I call the auth endpoint with fetch()? I assumed that would work, but I keep getting a 400/Bad Request when I try to call it that way.
Is it possible to handle a "redirection" for the auth endpoint?
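It isn't clear from the message which auth endpoint is being called, but for reference the Supabase Auth (GoTrue) REST endpoints can be called directly with fetch(); a 400 is most often a missing apikey header, a missing grant_type query parameter, or a body that isn't JSON. A sketch of the password grant as an example:

```js
// Direct call to the password-grant endpoint; SUPABASE_URL and SUPABASE_ANON_KEY
// are assumed environment values, and the credentials are placeholders.
const res = await fetch(`${process.env.SUPABASE_URL}/auth/v1/token?grant_type=password`, {
  method: "POST",
  headers: {
    apikey: process.env.SUPABASE_ANON_KEY,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ email: "user@example.com", password: "secret" }),
});

console.log(res.status, await res.json());
```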
azza
05/18/2023, 5:26 AM
const { data, error } = await supabase.from("cities").select();
and combine this data with my new entries from the API.
Then I filter out the ones that have an id and insert the new data into Supabase.
This worked well initially, but I found that the select call has a max of 1000 records, so I am now getting duplicate records with my current logic.
What is a better way to scale this to ensure I don't get duplicates here?
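Two ways to avoid diffing against a capped select: let the database detect the duplicates with an upsert (assuming id has a unique constraint), or page through the existing rows with range() instead of one select. A sketch of the upsert route — newEntries stands for the rows built from the external API:

```js
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY);

// newEntries: rows built from the external API, each carrying its id (assumed).
// Requires a unique constraint on "id" so conflicts can be detected;
// ignoreDuplicates makes this an ON CONFLICT DO NOTHING insert.
const { error } = await supabase
  .from("cities")
  .upsert(newEntries, { onConflict: "id", ignoreDuplicates: true });

if (error) console.error("upsert failed:", error);
```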
OyugoObonyo
05/18/2023, 5:28 AM
Elofin
05/18/2023, 7:03 AM
https://cdn.discordapp.com/attachments/1108651063143321640/1108651063684382760/Screenshot_2023-05-18_at_09.58.58.png
akub
05/18/2023, 8:10 AM
DECLARE
request_id bigint;
payload jsonb;
url text := TG_ARGV[0]::text;
method text := TG_ARGV[1]::text;
headers jsonb DEFAULT '{}'::jsonb;
params jsonb DEFAULT '{}'::jsonb;
keys text[] := string_to_array(TG_ARGV[3]::text, ',');
timeout_ms integer DEFAULT 1000;
BEGIN
IF url IS NULL OR url = 'null' THEN
RAISE EXCEPTION 'url argument is missing';
END IF;
IF method IS NULL OR method = 'null' THEN
RAISE EXCEPTION 'method argument is missing';
END IF;
IF TG_ARGV[2] IS NULL OR TG_ARGV[2] = 'null' THEN
headers = '{"Content-Type": "application/json"}'::jsonb;
ELSE
headers = TG_ARGV[2]::jsonb;
END IF;
IF keys IS NULL THEN
params = '{}'::jsonb;
ELSE
params = get_query_parameters(VARIADIC keys);
END IF;
IF TG_ARGV[4] IS NULL OR TG_ARGV[4] = 'null' THEN
timeout_ms = 1000;
ELSE
timeout_ms = TG_ARGV[4]::integer;
END IF;
url := get_env(url);
CASE
WHEN method = 'GET' THEN
SELECT http_get INTO request_id FROM net.http_get(
url,
params,
headers,
timeout_ms
);
WHEN method = 'POST' THEN
payload = jsonb_build_object(
'old_record', OLD,
'record', NEW,
'type', TG_OP,
'table', TG_TABLE_NAME,
'schema', TG_TABLE_SCHEMA
);
SELECT http_post INTO request_id FROM net.http_post(
url,
payload,
params,
headers,
timeout_ms
);
ELSE
RAISE EXCEPTION 'method argument % is invalid', method;
END CASE;
INSERT INTO supabase_functions.hooks
(hook_table_id, hook_name, request_id)
VALUES
(TG_RELID, TG_NAME, request_id);
RETURN NEW;
END
https://cdn.discordapp.com/attachments/1108667802111459369/1108667802757374022/image.png
https://cdn.discordapp.com/attachments/1108667802111459369/1108667803155824710/image.png
Mhamad Othman
05/18/2023, 11:11 AM
https://${projectId}.supabase.co/storage/v1/upload/resumable
OyugoObonyo
05/18/2023, 12:08 PM
create or replace function public.handle_new_user()
returns trigger
language plpgsql
security definer
as $$
begin
insert into public.users(id, phone_number, password_hash, created_at, updated_at)
values (new.id, new.phone, new.encrypted_password, new.created_at, new.updated_at);
return new;
end;
$$;
drop trigger if exists on_auth_user_created on auth.users;
create trigger on_auth_user_created
after insert on auth.users
for each row execute procedure public.handle_new_user();
Also, I have another trigger that fires when auth.users is updated. The trigger propagates any update on auth.users to public.users. The update trigger is as follows:
create or replace function public.handle_user_update()
returns trigger
language plpgsql
security definer
as $$
begin
update public.users
set
first_name=new.raw_user_meta_data ->> 'first_name',
last_name=new.raw_user_meta_data ->> 'last_name',
-- designation_id=new.raw_user_meta_data ->> 'designation_id',
-- organization_id=new.raw_user_meta_data ->> 'organization_id'
updated_at=new.updated_at
where id=new.id;
return new;
end;
$$;
drop trigger if exists on_auth_user_updated on auth.users;
create trigger on_auth_user_updated
after update on auth.users
for each row execute procedure public.handle_user_update();
This trigger works until I uncomment designation_id=new.raw_user_meta_data ->> 'designation_id'
or organization_id=new.raw_user_meta_data ->> 'organization_id'.
Upon uncommenting either of them, the update user logic returns {"user": null}.
What could be the reason my trigger stops working as expected as soon as I include either of them? Is it because they're foreign keys and I'm not handling them in the right manner?
https://cdn.discordapp.com/attachments/1108727740644995252/1108727740770828411/public_users_schema.png
brain
05/18/2023, 1:05 PM
I'm running into the errors Cannot find name 'SupabaseListener' and Cannot find name 'session' in Layout.tsx. I've been unable to find any documentation on the SupabaseListener component. Any help would be appreciated.
https://supabase.com/docs/guides/auth/auth-helpers/nextjs-server-components
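As far as I can tell, SupabaseListener in that guide is a component defined in the guide's example code rather than something exported by @supabase/auth-helpers-nextjs, hence the "Cannot find name" errors. A minimal sketch of what such a listener typically does — the file name, prop, and exact wiring are illustrative, not the verbatim docs example:

```js
// app/supabase-listener.jsx — illustrative client component.
"use client";

import { useEffect } from "react";
import { useRouter } from "next/navigation";
import { createBrowserSupabaseClient } from "@supabase/auth-helpers-nextjs";

export default function SupabaseListener({ serverAccessToken }) {
  const router = useRouter();

  useEffect(() => {
    const supabase = createBrowserSupabaseClient();
    // Re-render server components whenever the browser-side session changes.
    const {
      data: { subscription },
    } = supabase.auth.onAuthStateChange((_event, session) => {
      if (session?.access_token !== serverAccessToken) {
        router.refresh();
      }
    });
    return () => subscription.unsubscribe();
  }, [serverAccessToken, router]);

  return null;
}
```

The session referenced in Layout.tsx would come from a server-side getSession() call in the layout, with the listener rendered as <SupabaseListener serverAccessToken={session?.access_token} />.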
<hmmhmmhm/>
05/18/2023, 1:42 PM
KennStack01
05/18/2023, 2:25 PM
Viky
05/18/2023, 3:24 PM
WaviestBalloon
05/18/2023, 3:51 PM
{
messageCount: x,
commandCount: x <-- only want to modify this element
}
I have a very inefficient way of combating this issue: fetching the JSON from the row, changing the elements I need to change, and then writing it back to the row.
This works decently, even if it sends about 2-3 completely unnecessary requests on every requested change, but I can live with it.
Another issue pops up: sometimes two changes are made, and they conflict and overwrite each other:
self: { commandCount: 14, messageCount: 1355 }
self: { commandCount: 15, messageCount: 1356 }
self: { commandCount: 14, messageCount: 1357 }
First time posting here, sorry if I got anything wrong! Appreciated
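One way to avoid both the extra round-trips and the lost-update race is to do the increment inside the database in a single statement and call it via rpc(); the function name, argument, and column layout below are hypothetical and would have to be defined to match the actual table (for example with jsonb_set, or by moving the counters into plain integer columns). A sketch of the call site:

```js
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL, process.env.SUPABASE_ANON_KEY);

// rowId: the target row's id (assumed to exist in the surrounding code).
// Hypothetical Postgres function increment_command_count(row_id bigint), e.g.
//   UPDATE stats SET data = jsonb_set(data, '{commandCount}',
//     to_jsonb((data->>'commandCount')::int + 1)) WHERE id = row_id;
// so the read-modify-write happens atomically in one statement.
const { error } = await supabase.rpc("increment_command_count", { row_id: rowId });
if (error) console.error("increment failed:", error);
```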
bombillazo
05/18/2023, 3:53 PM
We'd rather not run supabase db reset because it destroys the local DB; we simply want to run the missing migrations on our local so we can be on par with our deployed schema.
Miles
05/18/2023, 4:19 PM
GoTrue.GoTrueError.APIError(message: nil, msg: nil, code: nil, error: Optional("invalid_grant"), errorDescription: Optional("Invalid Refresh Token: Refresh Token Not Found")
What does refresh token not found mean exactly? How could I go about debugging why this happens, as the client library should be handling refreshing automatically? Before any authenticated request I call let session = try await supabaseClient.auth.session, and this error happens on that line.
P-a
05/18/2023, 4:54 PM
Dan Rumney
05/18/2023, 5:08 PM
I'm running a fuzzer against my edge functions, served locally with supabase functions serve --env-file .functions.env
The fuzzing works fine, but I invariably end up with this error:
<--- Last few GCs --->
[1:0xfffe394987d0] 272 ms: Scavenge 13.3 (14.3) -> 13.1 (14.3) MB, 0.8 / 0.0 ms (average mu = 1.000, current mu = 1.000) allocation failure;
[1:0xfffe394987d0] 280 ms: Scavenge 13.3 (14.3) -> 13.2 (14.3) MB, 2.2 / 0.0 ms (average mu = 1.000, current mu = 1.000) allocation failure;
[1:0xfffe394987d0] 284 ms: Scavenge 13.2 (14.3) -> 13.2 (14.3) MB, 3.0 / 0.0 ms (average mu = 1.000, current mu = 1.000) task;
<--- JS stacktrace --->
#
# Fatal javascript OOM in NewSpace::EnsureCurrentCapacity
#
Error: error running container: exit 133
I don't think the fuzzer is submitting any requests that would, on their own, result in OOM.
Has anyone else seen anything like this while serving their functions locally for an extended period of time or after a certain number of requests?
Are there any known memory leaks in the local implementation?