# orm-help
Hi all, I have a quick question that I want to make sure works with Prisma before I execute it. I have a few massive tables (~20GB) with new columns that I added recently that I need to backfill. After doing some research, the fastest strategy I saw was to essentially create a clone of that table, insert all the rows into the cloned table with the right data in the new columns, drop the old table, then rename the cloned table to the same name as the original. Will this cause any issues with Prisma? Or will it just work? All of the same indexes/foreign key constraints will exist on the cloned table as before.
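For reference, the clone-backfill-swap described above might look roughly like this in raw PostgreSQL. This is only a sketch: `big_table`, `new_col`, and `compute_backfill(...)` are placeholder names standing in for the real table, column, and backfill logic.

```sql
BEGIN;

-- Clone the structure. Note: LIKE ... INCLUDING ALL copies columns, defaults,
-- and indexes, but NOT foreign key constraints -- those must be re-added by hand.
CREATE TABLE big_table_new (LIKE big_table INCLUDING ALL);

-- Copy the rows, filling in the new column as we go.
-- The column list and backfill expression here are placeholders.
INSERT INTO big_table_new (id, existing_col, new_col)
SELECT t.id, t.existing_col, compute_backfill(t.id)
FROM big_table t;

-- Swap the tables.
DROP TABLE big_table;
ALTER TABLE big_table_new RENAME TO big_table;

COMMIT;
```

Since the final table keeps the original name and (after re-adding constraints) the same shape, the Prisma schema itself would not need to change.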
The most important part of this equation: do you think you will do this more than once?

If your big table has foreign keys pointing to other records, you should be fine. Where you would have problems is if other tables have foreign keys pointing into YOUR BIG TABLE(S); those would NOT transport. However, if the keys are the same, you might be able to redefine the schema to point to the new table after the fact. This is a great example of how you can do big messy things, but there will always be a lot of moving parts to track.

One possible consideration: if you have FKs in other schemas that point back to the big table, you could in theory create a second set of FKs that point to the new table, so your remote tables would have schema pointers to both versions of the big table at the same time.

I.e., given:

BIGTABLE_ALPHA [child_table_id: int]
BIGTABLE_EXTENDED [child_table_id: int]
REFTABLE [big_table_alpha_id: int]
CHILD_TABLE [id]

CHILD_TABLE is going to be perfectly fine; the FKs in both BIGTABLE versions will transport easily. REFTABLE is a bit more work. You could alter REFTABLE:

REFTABLE [big_table_alpha_id: int, big_table_extended_id: int]

and use raw SQL to self-copy big_table_alpha_id into big_table_extended_id, then manually validate that all the linkages work. If this is a once-only transport, you can then delete the references to BIGTABLE_ALPHA, and finally delete BIGTABLE_ALPHA entirely.

The TLDR is,
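The REFTABLE rework above could be sketched like this, using the example's table names (lowercased) and assuming an `id` primary key on the big tables. Note the self-copy is an UPDATE within each existing row, not an INSERT of new rows.

```sql
-- Add a second FK column on reftable pointing at the new table.
ALTER TABLE reftable
  ADD COLUMN big_table_extended_id int
  REFERENCES bigtable_extended (id);

-- Self-copy the old reference into the new column (same row, new column).
UPDATE reftable SET big_table_extended_id = big_table_alpha_id;

-- Manually validate the linkages: this should return zero rows.
SELECT r.*
FROM reftable r
LEFT JOIN bigtable_extended b ON b.id = r.big_table_extended_id
WHERE b.id IS NULL;

-- Once-only transport: retire the old references and the old table.
ALTER TABLE reftable DROP COLUMN big_table_alpha_id;
DROP TABLE bigtable_alpha;
```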
Hi Dave, thanks for the very comprehensive answer. My plan to handle the foreign keys into the table is to (within one transaction) drop those foreign keys, recreate them with NOT VALID, then run VALIDATE CONSTRAINT later to clean up our data. I just wanted to confirm that Prisma wouldn't have any issues with this, because the underlying table is "technically" different but named the same as before, and I was planning on running these commands using a script as opposed to a migration.
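The drop / recreate-as-NOT-VALID / validate-later flow might look like this. The constraint and column names here are hypothetical; substitute your real ones.

```sql
BEGIN;

-- Drop the old FK and recreate it unvalidated. NOT VALID means existing rows
-- are not checked now; only new/updated rows are checked going forward.
ALTER TABLE reftable DROP CONSTRAINT reftable_big_table_id_fkey;
ALTER TABLE reftable
  ADD CONSTRAINT reftable_big_table_id_fkey
  FOREIGN KEY (big_table_id) REFERENCES big_table (id) NOT VALID;

COMMIT;

-- Later, outside the hot path: scan existing rows and mark the constraint valid.
-- VALIDATE CONSTRAINT takes a weaker lock than adding a validated FK would,
-- so writes to the table can continue while it runs.
ALTER TABLE reftable VALIDATE CONSTRAINT reftable_big_table_id_fkey;
```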
@Nicolas Rodriguez if you are on postgresql, and a more recent version, and the vast majority of those backfill values are the default, then when you add that column via SQL, set the default for the column and you will have many, many fewer rows to update.

Also, the table swap operation will have to take into account rows that have been updated / deleted / added in the original table while the new table is being created. So you'll either have to (a) lock the original table for writes while doing the swap, (b) migrate the added/updated/deleted rows after the fact, or (c) turn off access to the database while doing this migration. And if you are doing (c), you might as well do the regular migration instead of a table swap.
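The default-column trick above relies on PostgreSQL 11+, where adding a column with a constant default stores the default as metadata instead of rewriting every row. A sketch, with placeholder names:

```sql
-- On PostgreSQL 11+, this is a metadata-only change: no table rewrite,
-- every existing row reads new_col as 0 without being touched.
ALTER TABLE big_table ADD COLUMN new_col int NOT NULL DEFAULT 0;

-- Only the minority of rows whose real value differs from the default
-- need an explicit backfill (predicate and value are placeholders).
UPDATE big_table
SET new_col = 1
WHERE needs_special_value;
```

This avoids the table swap entirely when most rows share the default value.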