I'm using Postgres and would like to make a big update query that picks values up from a CSV file. Let's say I have a table with (id, banana, apple).
I'd like to run an update that changes the bananas and not the apples; each new banana and its id would be in the CSV file.
I tried looking at the Postgres site but the examples are killing me.
I would COPY the file to a temporary table and update the actual table from there. It could look like this:
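A minimal sketch, assuming the table from the question is named tbl and the CSV holds only id and banana; tmp_x, the column types, and the file path are placeholders:

    CREATE TEMP TABLE tmp_x (id int, banana text);  -- or see the shortcut below

    COPY tmp_x FROM '/absolute/path/to/file.csv' (FORMAT csv);

    UPDATE tbl
    SET    banana = tmp_x.banana
    FROM   tmp_x
    WHERE  tbl.id = tmp_x.id;

    DROP TABLE tmp_x;  -- optional; temp tables are dropped automatically at end of session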
If the imported table matches the table to be updated exactly, this shortcut may be convenient: create an empty temporary table matching the structure of the existing table, without constraints.
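A sketch, with the same placeholder names:

    CREATE TEMP TABLE tmp_x AS
    SELECT * FROM tbl LIMIT 0;  -- copies column names and types; no rows, no constraints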
Privileges

SQL COPY requires superuser privileges for this (see the manual). The psql meta-command \copy works for any db role, because the file is read by the psql client and piped to the server. The scope of temporary tables is limited to a single session of a single role, so the above has to be executed in the same psql session:
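For example, all in one session (same placeholders as above):

    CREATE TEMP TABLE tmp_x (id int, banana text);
    \copy tmp_x FROM '/absolute/path/to/file.csv' (FORMAT csv)
    UPDATE tbl
    SET    banana = tmp_x.banana
    FROM   tmp_x
    WHERE  tbl.id = tmp_x.id;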
If you are scripting this in a bash command, be sure to wrap it all in a single psql call. Like:
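One way to do that is a heredoc, which keeps everything in a single psql session (mydb and the path are placeholders):

    psql mydb <<'SQL'
    CREATE TEMP TABLE tmp_x (id int, banana text);
    \copy tmp_x FROM '/absolute/path/to/file.csv' (FORMAT csv)
    UPDATE tbl
    SET    banana = tmp_x.banana
    FROM   tmp_x
    WHERE  tbl.id = tmp_x.id;
    SQL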
Normally, you need the meta-command \\ to switch between psql meta-commands and SQL commands in psql, but \copy is an exception to this rule (the manual covers this, too).

Big tables
If the import table is big, it may pay to increase temp_buffers temporarily for the session (first thing in the session), and to add an index to the temporary table; both are sketched below.
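A sketch of both; the 500MB figure is an arbitrary placeholder, so size it to your data:

    SET temp_buffers = '500MB';  -- must run before the temp table is first accessed

    CREATE INDEX tmp_x_id_idx ON tmp_x (id);  -- speeds up the join in the UPDATE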
And run ANALYZE on the temporary table manually, since temporary tables are not covered by autovacuum / auto-analyze.
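For example, with the same placeholder table name:

    ANALYZE tmp_x;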
You can try the below code written in Python. The input file is the CSV file whose contents you want to update into the table. Each row is split on commas, so for each row, row[0] is the value under the first column, row[1] is the value under the second column, and so on.
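A sketch of that approach, assuming psycopg2, the (id, banana) layout from the question, and placeholder connection details and file name:

    import csv
    import psycopg2

    # Placeholder connection parameters -- adjust to your environment.
    conn = psycopg2.connect(dbname="mydb", user="myuser")

    # The connection context manager wraps the loop in one transaction
    # and commits it on success.
    with conn, conn.cursor() as cur, open("bananas.csv", newline="") as f:
        for row in csv.reader(f):  # row[0] = id, row[1] = new banana value
            cur.execute(
                "UPDATE tbl SET banana = %s WHERE id = %s",
                (row[1], row[0]),
            )

    conn.close()

Note that this issues one UPDATE per CSV row, which is fine for small files but much slower than the COPY-to-temp-table approach above for big ones.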