What's the fastest way to do a bulk insert into PostgreSQL?

Posted 2019-01-01 06:34

I need to programmatically insert tens of millions of records into a Postgres database. At the moment I am executing thousands of INSERT statements in a single "query".

Is there a better way to do this, some bulk insert statement I don't know about?

8 answers
琉璃瓶的回忆
#2 · 2019-01-01 07:02

It mostly depends on the other activity in the database. Operations like this can effectively freeze the entire database for other sessions. Another consideration is the data model and the presence of constraints, triggers, etc.

My first approach is always: create a (temp) table with a structure similar to the target table (create table tmp AS select * from target where 1=0), and start by reading the file into the temp table. Then I check what can be checked: duplicates, keys that already exist in the target, etc.

Then I just do an "insert into target select * from tmp" or similar.

If this fails, or takes too long, I abort it and consider other methods (temporarily dropping indexes/constraints, etc.).
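
A minimal sketch of that staging workflow, assuming a hypothetical target table target(id, payload) and a CSV file at /tmp/data.csv:

-- Structure-only copy of the target: no rows, no constraints, no indexes.
CREATE TEMP TABLE tmp_target AS SELECT * FROM target WHERE 1 = 0;

-- Load the raw file into the staging table (use \copy from psql if the file lives on the client).
COPY tmp_target FROM '/tmp/data.csv' WITH (FORMAT csv, HEADER true);

-- Example check: keys that already exist in the target.
SELECT t.id FROM tmp_target t JOIN target tgt ON tgt.id = t.id;

-- Move the validated rows across.
INSERT INTO target SELECT * FROM tmp_target;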

忆尘夕之涩
#3 · 2019-01-01 07:03

I just ran into this issue and would recommend csvsql for bulk imports into Postgres. To perform a bulk import you simply run createdb and then use csvsql, which connects to your database and creates individual tables for an entire folder of CSVs.

$ createdb test 
$ csvsql --db postgresql:///test --insert examples/*.csv
裙下三千臣
#4 · 2019-01-01 07:06

There is an alternative to using COPY, which is the multirow values syntax that Postgres supports. From the documentation:

INSERT INTO films (code, title, did, date_prod, kind) VALUES
    ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy'),
    ('HG120', 'The Dinner Game', 140, DEFAULT, 'Comedy');

The above code inserts two rows, but you can extend the VALUES list arbitrarily, until you hit a limit on the statement: if your driver sends the values as bind parameters, PostgreSQL's wire protocol caps a single statement at 65535 parameters, so with five columns per row that is roughly 13,000 rows per statement. Sometimes one cannot use COPY, and this is a worthy replacement for those situations.
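To make that parameter limit concrete, here is a sketch of the same two-row insert written as a prepared statement with numbered parameters (the films table and columns are from the example above; the statement name bulk_films is made up). Each row consumes one parameter per column, which is where the per-statement ceiling comes from:

PREPARE bulk_films (text, text, int, date, text, text, text, int, date, text) AS
    INSERT INTO films (code, title, did, date_prod, kind) VALUES
        ($1, $2, $3, $4, $5),
        ($6, $7, $8, $9, $10);

EXECUTE bulk_films ('B6717', 'Tampopo', 110, '1985-02-10', 'Comedy',
                    'HG120', 'The Dinner Game', 140, NULL, 'Comedy');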

还给你的自由
#5 · 2019-01-01 07:07

You can use COPY table FROM ... WITH BINARY, which is "somewhat faster than the text and CSV formats." Only do this if you have millions of rows to insert and if you are comfortable with binary data.

Here is an example recipe in Python, using psycopg2 with binary input.
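
A minimal sketch of the server-side load, assuming a hypothetical table items and a file /tmp/items.bin already written in PostgreSQL's binary COPY format (for example by an earlier COPY ... TO ... WITH (FORMAT binary) or by a client library):

-- Load a pre-built binary dump; the file must match the table's column layout exactly.
COPY items FROM '/tmp/items.bin' WITH (FORMAT binary);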

梦寄多情
#6 · 2019-01-01 07:15

The UNNEST function with arrays can be used along with the multirow VALUES syntax. I think this method is slower than using COPY, but it is useful to me when working with psycopg and Python (a Python list passed to cursor.execute becomes a Postgres ARRAY):

INSERT INTO tablename (fieldname1, fieldname2, fieldname3)
VALUES (
    UNNEST(ARRAY[1, 2, 3]), 
    UNNEST(ARRAY[100, 200, 300]), 
    UNNEST(ARRAY['a', 'b', 'c'])
);

Without VALUES, using a subselect with an additional existence check:

INSERT INTO tablename (fieldname1, fieldname2, fieldname3)
SELECT * FROM (
    SELECT UNNEST(ARRAY[1, 2, 3]) AS fieldname1,
           UNNEST(ARRAY[100, 200, 300]) AS fieldname2,
           UNNEST(ARRAY['a', 'b', 'c']) AS fieldname3
) AS temptable
WHERE NOT EXISTS (
    SELECT 1 FROM tablename tt
    WHERE tt.fieldname1=temptable.fieldname1
);

The same technique works for bulk updates:

UPDATE tablename
SET fieldname1=temptable.data
FROM (
    SELECT UNNEST(ARRAY[1,2]) AS id,
           UNNEST(ARRAY['a', 'b']) AS data
) AS temptable
WHERE tablename.id=temptable.id;
浮光初槿花落
#7 · 2019-01-01 07:20

One way to speed things up is to explicitly perform multiple INSERTs or COPYs within a single transaction (say, 1000 at a time). Postgres's default behavior is to commit after each statement, so by batching the commits you avoid much of that overhead. As the guide in Daniel's answer says, you may have to disable autocommit for this to work. Also note the comment at the bottom suggesting that increasing wal_buffers to 16 MB may also help.
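
A minimal sketch of explicit batching in plain SQL, assuming a hypothetical table target(id, payload); everything between BEGIN and COMMIT is flushed as a single commit instead of one commit per statement:

BEGIN;
INSERT INTO target (id, payload) VALUES (1, 'a');
INSERT INTO target (id, payload) VALUES (2, 'b');
-- ... repeat up to roughly 1000 statements per batch ...
COMMIT;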
