I am trying to insert some data into a table I have created. I have a data frame that looks like this:
I created a table:
create table online.ds_attribution_probabilities
(
attribution_type text,
channel text,
date date,
value float
)
And I am running this python script:
engine = create_engine("postgresql://@e.eu-central-1.redshift.amazonaws.com:5439/mdh?client_encoding=utf8")
connection = engine.raw_connection()
result.to_sql('online.ds_attribution_probabilities', con=engine, index = False, if_exists = 'append')
I get no error, but when I check there is no data in my table. What can be wrong? Do I have to commit or do an extra step?
This could happen because to_sql defaults to the public schema, so there is probably a table with that name under public, with your data in it.
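For example, a minimal sketch reusing the engine and dataframe from the question (the schema parameter of to_sql is the key part; 'online' is the schema from the CREATE TABLE above):

result.to_sql(
    'ds_attribution_probabilities',  # table name only, no schema prefix
    con=engine,
    schema='online',                 # write into the online schema instead of public
    index=False,
    if_exists='append'
)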
@MaxU's answer helps in some cases but not in others. If it does not work for you, here is something else you can try: when you create the engine, specify the schema name there.
Link: https://stackoverflow.com/a/49930672/8656608
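The linked answer boils down to something like this (a sketch with placeholder credentials; the -csearch_path option is passed through to the Postgres driver so that unqualified table names resolve to the online schema):

from sqlalchemy import create_engine

engine = create_engine(
    'postgresql://user:password@host:5439/mydb',       # placeholder connection string
    connect_args={'options': '-csearch_path=online'}   # make online the default schema
)
result.to_sql('ds_attribution_probabilities', con=engine, index=False, if_exists='append')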
I have a similar situation: I am not able to insert a pandas dataframe into an Oracle table. I am using cx_Oracle and SQLAlchemy for the Oracle connection. There is no error. I am using the following command:

Hopefully this helps someone else.
to_sql will fail silently, in the form of what looks like a successful insert, if you pass a connection object. This is definitely true for Postgres, but I assume the same holds for other databases, based on the method docs. This got me because the type hints state Union[Engine, Connection], which is "technically" true. If you have a SQLAlchemy session, try passing con=session.get_bind().
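A sketch of that suggestion, assuming you already have a SQLAlchemy Session bound to the engine (dataframe and table names carried over from the question):

result.to_sql(
    'ds_attribution_probabilities',
    con=session.get_bind(),   # returns the underlying Engine rather than a raw Connection
    schema='online',
    index=False,
    if_exists='append'
)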
Check the autocommit setting: https://docs.sqlalchemy.org/en/latest/core/connections.html#understanding-autocommit
I faced the same problem when I used .connect() and .begin(). Just remove the .connect() and .begin() and it will work.

I had a similar issue, caused by the fact that I was passing a SQLAlchemy connection object instead of an engine object to the con parameter. In my case the tables were created but left empty.
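To illustrate the last two answers with the names from the question (a sketch; the exact behaviour depends on your pandas and SQLAlchemy versions):

# Pattern that can insert nothing without an error: the connection's
# transaction is never committed before it is closed.
with engine.connect() as connection:
    result.to_sql('ds_attribution_probabilities', con=connection,
                  schema='online', index=False, if_exists='append')

# Passing the engine instead lets pandas open, commit and close its own connection.
result.to_sql('ds_attribution_probabilities', con=engine,
              schema='online', index=False, if_exists='append')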