Generate SQL statements from a Pandas Dataframe

Published 2020-05-23 03:21

I am loading data from various sources (CSV, XLS, JSON, etc.) into pandas dataframes, and I would like to generate statements to create and fill a SQL database with this data. Does anyone know of a way to do this?

I know pandas has a to_sql function, but that only works with a database connection; it cannot generate a plain SQL string.

Example

What I would like is to take a dataframe like so:

import pandas as pd
import numpy as np

dates = pd.date_range('20130101',periods=6)
df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD'))

And a function that would generate this (this example is PostgreSQL but any would be fine):

CREATE TABLE data
(
  index timestamp with time zone,
  "A" double precision,
  "B" double precision,
  "C" double precision,
  "D" double precision
)

3 Answers
在下西门庆
#2 · 2020-05-23 03:56

GENERATE SQL CREATE STATEMENT FROM DATAFRAME

def SQL_CREATE_STATEMENT_FROM_DATAFRAME(SOURCE, TARGET):
    # SOURCE: source dataframe
    # TARGET: name of the table to be created in the database
    import pandas as pd
    sql_text = pd.io.sql.get_schema(SOURCE.reset_index(), TARGET)
    return sql_text

sql_text = SQL_CREATE_STATEMENT_FROM_DATAFRAME(df, 'data')

Check the SQL CREATE TABLE statement string

print(sql_text)

GENERATE SQL INSERT STATEMENT FROM DATAFRAME

def SQL_INSERT_STATEMENT_FROM_DATAFRAME(SOURCE, TARGET):
    sql_texts = []
    for index, row in SOURCE.iterrows():
        sql_texts.append('INSERT INTO ' + TARGET + ' (' + ', '.join(SOURCE.columns) + ') VALUES ' + str(tuple(row.values)))
    return sql_texts

sql_texts = SQL_INSERT_STATEMENT_FROM_DATAFRAME(df, 'data')

Check the SQL INSERT INTO Statement String

print('\n\n'.join(sql_texts))
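A caveat on the tuple-based formatting above: it breaks on string values containing single quotes and renders NaN as the literal `nan`, which most databases reject. A more defensive sketch (the `sql_insert_statements` name is my own, not from the answer) could be:

```python
import numpy as np
import pandas as pd

def sql_insert_statements(df, table):
    """Generate one INSERT statement per row, rendering NaN as NULL
    and escaping single quotes inside string values."""
    statements = []
    cols = ', '.join(df.columns)
    for _, row in df.iterrows():
        rendered = []
        for v in row.values:
            if isinstance(v, float) and np.isnan(v):
                rendered.append('NULL')
            elif isinstance(v, str):
                rendered.append("'" + v.replace("'", "''") + "'")
            else:
                rendered.append(str(v))
        statements.append('INSERT INTO %s (%s) VALUES (%s);'
                          % (table, cols, ', '.join(rendered)))
    return statements
```

This is still string building, not proper parameter binding, so treat it as a sketch for trusted data only.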
chillily
#3 · 2020-05-23 04:05

If you only want the CREATE TABLE SQL (and not the insertion of the data), you can use the get_schema function of the pandas.io.sql module:

In [10]: print(pd.io.sql.get_schema(df.reset_index(), 'data'))
CREATE TABLE "data" (
  "index" TIMESTAMP,
  "A" REAL,
  "B" REAL,
  "C" REAL,
  "D" REAL
)

Some notes:

  • I had to use reset_index because otherwise the index was not included in the schema
  • If you provide a SQLAlchemy engine for a certain database flavor, the result will be adjusted to that flavor (e.g. the data type names).
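If you don't have SQLAlchemy available, you can still steer individual type names via get_schema's `dtype` argument, which maps column names to SQL type strings (here PostgreSQL's `double precision`, as in the question). This is a sketch assuming a pandas version where `get_schema` accepts `dtype`:

```python
import pandas as pd
import numpy as np

dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))

# Pin each data column to an explicit SQL type name in the generated DDL
ddl = pd.io.sql.get_schema(
    df.reset_index(), 'data',
    dtype={c: 'double precision' for c in 'ABCD'},
)
print(ddl)
```

Without the `dtype` override (and without an engine), pandas falls back to generic/SQLite type names such as REAL, as shown in the output above.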
神经病院院长
#4 · 2020-05-23 04:17

If you want to write the file yourself, you can also retrieve the column names and dtypes and build a dictionary to map pandas data types to SQL data types.

As an example:

import pandas as pd
import numpy as np

dates = pd.date_range('20130101',periods=6)
df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD'))

tableName = 'table'
columnNames = df.columns.values.tolist()
columnTypes = [x.name for x in df.dtypes.values]

# Storing column names and dtypes in a dataframe

tableDef = pd.DataFrame(index = range(len(df.columns) + 1), columns=['cols', 'dtypes'])

tableDef.iloc[0]           = ['index', df.index.dtype.name]
tableDef.loc[1:, 'cols']   = columnNames
tableDef.loc[1:, 'dtypes'] = columnTypes

# Defining a dictionary to convert dtypes

conversion = {'datetime64[ns]':'timestamp with time zone', 'float64':'double precision'}

# Writing the sql to a file

with open('yourdir/%s.sql' % tableName, 'w') as f:
    f.write('CREATE TABLE %s\n' % tableName)
    f.write('(\n')

    for i, row in tableDef.iterrows():
        sep = ",\n" if i < tableDef.index[-1] else "\n"
        f.write('\t"%s" %s%s' % (row['cols'], conversion[row['dtypes']], sep))

    f.write(')')

You can populate the table the same way, writing INSERT INTO statements built from the rows.
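As a minimal sketch of that last step, the statements below are built with plain string formatting (fine for the all-float example from the question, but not a substitute for proper escaping with arbitrary data); you could write the resulting string to the same file:

```python
import pandas as pd
import numpy as np

dates = pd.date_range('20130101', periods=6)
df = pd.DataFrame(np.random.randn(6, 4), index=dates, columns=list('ABCD'))

tableName = 'table'
lines = []
for idx, row in df.iterrows():
    # Quote the timestamp index; format the float values with str()
    values = ', '.join(["'%s'" % idx] + ['%s' % v for v in row.values])
    lines.append('INSERT INTO %s ("index", %s) VALUES (%s);'
                 % (tableName, ', '.join(df.columns), values))

sql = '\n'.join(lines)
print(sql)
```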
