# orm-help
t
Hi, I'm working with PostgreSQL. I'm trying to insert 100k rows into an empty table with about 60 columns using createMany, and it takes around 30 seconds. Is that normal speed, or is there room for improvement? Or should I consider redesigning my schema? Logging each query, createMany seems to issue about 40 queries to complete all the inserts.
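For context on the ~40 queries: createMany batches rows into multi-row INSERT statements, so 100k rows arrive as a series of statements shaped roughly like this hedged sketch (the table, columns, and batch size here are stand-ins, not Prisma's actual internals):
```sql
-- Each batch is one INSERT carrying many VALUES tuples. With ~60
-- columns, every row costs ~60 bound parameters, which limits how
-- many rows fit into a single statement.
CREATE TABLE demo (a integer, b text);

INSERT INTO demo (a, b)
VALUES
    (1, 'first'),
    (2, 'second'),
    (3, 'third');
```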
j
60 columns is a lot of columns. I would definitely look at your data model and see whether it needs to be normalized in some way, especially if you have sets of columns that only apply to certain subsets of rows, and/or lots of columns that are mostly null.
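For illustration, a hedged sketch of that kind of split; the table and column names are made up, not from the schema in question:
```sql
-- Attributes present on every row stay in the main table.
CREATE TABLE measurement (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    taken_at  timestamptz NOT NULL,
    device_id integer NOT NULL
);

-- Columns that only apply to some rows move to a side table, so the
-- common rows stay narrow and the mostly-null columns disappear.
CREATE TABLE measurement_gps (
    measurement_id bigint PRIMARY KEY REFERENCES measurement (id),
    latitude       double precision NOT NULL,
    longitude      double precision NOT NULL
);
```
Narrower rows also mean fewer bound parameters per inserted row, which helps bulk inserts.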
t
@Jeremy Hinegardner thank you for your advice. I made a table with a single string column and inserted 100k rows; it took about 20 secs. Is this normal, or is it some hardware problem?
j
That would totally depend on your complete architecture: whether your insertions are happening over a network, what that network is, etc. For a pure internal PostgreSQL insert, try making a table and then using `generate_series` for integers and/or timestamps. As an example, on my laptop (M1 Mac):
```
test_insert=# create table t1(i integer);
CREATE TABLE
test_insert=# \timing
Timing is on.
test_insert=# insert into t1(i) select * from generate_series(1, 100000);
INSERT 0 100000
Time: 157.004 ms
```
Inserting 100k integers took 157ms.
And doing a timestamp table:
```
test_insert=# create table t2(t timestamp);
CREATE TABLE
test_insert=# insert into t2(t) select * from generate_series('2022-01-01T00:00:00Z'::timestamp, '2022-12-31T23:59:59Z'::timestamp, '1 second');
INSERT 0 31536000
Time: 44852.734 ms (00:44.853)
```
44.8s for inserting 31 million rows, which works out to roughly 700k rows/second and gives a rough baseline for a local, index-free insert.
n
You could also compare the time difference by inserting 100K records through a raw query versus a createMany call. As Jeremy mentioned, it totally depends on your architecture. If the columns in your table have indexes, inserts can take more time than if there are no indexes.
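A hedged sketch of the SQL side of that comparison, using a stand-in table (`bulk_demo` is made up); the ORM side would push the same data through createMany:
```sql
-- Stand-in table; swap in the real one.
CREATE TABLE bulk_demo (id integer, label text);

\timing on

-- Raw bulk insert of 100k rows generated server-side, as a baseline.
INSERT INTO bulk_demo (id, label)
SELECT g, md5(g::text)
FROM generate_series(1, 100000) AS g;

-- Every index is maintained on each insert; list them to see what
-- the table pays per row.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'bulk_demo';
```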