SQLAlchemy StaleDataError on deleting items after insert

Posted 2020-08-09 11:00

Question:

I'm having an issue where I get an error such as this one:

"MyPyramidApplication Error"<class 'sqlalchemy.orm.exc.StaleDataError'>: DELETE statement on table 'page_view' expected to delete 6 row(s); Only 0 were matched.

So, I have a good idea what is causing the issue but I have been unable to solve it.

I have a page_view model, that has a foreign key on page_id and a user_id.

Here's what the model looks like:

page_view_table = sa.Table(
    'page_view',
    metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('page_id', sa.Integer, sa.ForeignKey('guide.id')),
    sa.Column('user_id', sa.Integer, sa.ForeignKey('user.id')),
    sa.Column('last_view', sa.DateTime, nullable=False),
    sa.UniqueConstraint('user_id', 'page_id'),
    mysql_engine='InnoDB',
    mysql_charset='utf8mb4'
)

Here's what the relationship looks like:

orm.mapper(Page, page_table,
    properties = {
        'users_viewed': sa.orm.relation(
            User,
            secondary=page_view_table,
            backref='page'),
    }
)

I am adding some items to my database using an insert statement, something similar to this:

ins = model.page_view_table.insert()
sql = str(ins)
sql += ' ON DUPLICATE KEY UPDATE last_view = :last_view'
session = model.Session()
session.execute(sql, page_views)
mark_changed(session)
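As an aside, the same upsert can be built without string concatenation by using the MySQL dialect's insert() construct, which generates the ON DUPLICATE KEY UPDATE clause itself. A minimal sketch, assuming SQLAlchemy 1.2 or later (table definition abbreviated from the one above; compiled here against the MySQL dialect without a live connection):

```python
import sqlalchemy as sa
from sqlalchemy.dialects import mysql
from sqlalchemy.dialects.mysql import insert

metadata = sa.MetaData()
page_view_table = sa.Table(
    'page_view',
    metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('page_id', sa.Integer),
    sa.Column('user_id', sa.Integer),
    sa.Column('last_view', sa.DateTime, nullable=False),
    sa.UniqueConstraint('user_id', 'page_id'),
)

ins = insert(page_view_table)
# On a duplicate (user_id, page_id) pair, only refresh the timestamp.
upsert = ins.on_duplicate_key_update(last_view=ins.inserted.last_view)

sql = str(upsert.compile(dialect=mysql.dialect()))
print(sql)
```

The statement can then be executed with a list of parameter dicts, e.g. `session.execute(upsert, page_views)`, just like the hand-built string.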

As far as I can tell from the logs, the transaction gets committed properly and I see the items in the DB.

However, when I try to delete the page item using the ORM, I get the StaleDataError exception. Looking at the logs, I see the ORM issuing a delete statement but then rolling back due to the error.

I have tried experimenting with session.expire_all() as well as session.expunge_all() right after the insert statement, but they weren't very helpful and I still see the error.
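For context on what expiring is supposed to do here: a relationship collection loaded before a raw-SQL insert stays stale until it is expired, after which the next access reloads it from the database. A minimal sketch against an in-memory SQLite database, assuming SQLAlchemy 1.4+ imperative mapping (the Page/User classes and tables are stand-ins for the app's real ones):

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, registry, relationship

class Page:  # hypothetical stand-in for the app's mapped class
    pass

class User:  # hypothetical stand-in
    pass

reg = registry()
metadata = reg.metadata

page_table = sa.Table('page', metadata, sa.Column('id', sa.Integer, primary_key=True))
user_table = sa.Table('user', metadata, sa.Column('id', sa.Integer, primary_key=True))
page_view_table = sa.Table(
    'page_view', metadata,
    sa.Column('id', sa.Integer, primary_key=True),
    sa.Column('page_id', sa.Integer, sa.ForeignKey('page.id')),
    sa.Column('user_id', sa.Integer, sa.ForeignKey('user.id')),
    sa.Column('last_view', sa.DateTime, nullable=False),
    sa.UniqueConstraint('user_id', 'page_id'),
)

reg.map_imperatively(Page, page_table, properties={
    'users_viewed': relationship(User, secondary=page_view_table, backref='pages'),
})
reg.map_imperatively(User, user_table)

engine = sa.create_engine('sqlite://')
metadata.create_all(engine)

session = Session(engine)
page, user = Page(), User()
session.add_all([page, user])
session.commit()

_ = page.users_viewed  # collection is now loaded (empty)

# Raw SQL insert behind the ORM's back, as in the question.
session.execute(
    sa.text("INSERT INTO page_view (page_id, user_id, last_view) "
            "VALUES (:p, :u, :t)"),
    {'p': page.id, 'u': user.id, 't': '2011-11-05 18:06:08'},
)

stale = len(page.users_viewed)   # still 0: the loaded collection is stale

session.expire(page, ['users_viewed'])
fresh = len(page.users_viewed)   # 1: reloaded from the database
```

Note that expiring only fixes staleness of in-memory state; if the association table contains rows the ORM's bookkeeping doesn't expect (e.g. duplicates), the StaleDataError can persist regardless.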

Here's what I see in the SQLAlchemy logs.

2011-11-05 18:06:08,031 INFO  [sqlalchemy.engine.base.Engine][worker 3] DELETE FROM page_view WHERE page_view.page_id = %s AND page_view.user_id = %s
2011-11-05 18:06:08,031 INFO  [sqlalchemy.engine.base.Engine][worker 3] (13818L, 259L)
2011-11-05 18:06:08,032 INFO  [sqlalchemy.engine.base.Engine][worker 3] DELETE FROM page_view WHERE page_view.page_id = %s AND page_view.user_id = %s
2011-11-05 18:06:08,033 INFO  [sqlalchemy.engine.base.Engine][worker 3] (13818L, 259L)
2011-11-05 18:06:08,033 INFO  [sqlalchemy.engine.base.Engine][worker 3] ROLLBACK

I thought the double delete statement was suspect, maybe pointing to a misconfigured ORM relation, but I don't think that's the case.

Answer 1:

I guess I can give a hint on this problem. The short version is: "You will probably have to modify data in the Database manually to solve the issue".

The longer version: I had a similar issue with SQLite. I had the following table mapped:

ingredients = Table('ingredients', metadata,
    Column('recipe_title', Unicode, ForeignKey('recipes.title'), primary_key=True),
    Column('product_title', Unicode, ForeignKey('products.title'), primary_key=True),
    Column('amount', Integer, nullable=False),
    Column('unit_title', Unicode, ForeignKey('units.title')))

See that composite primary key? I somehow managed to insert two rows with the same recipe_title/product_title pair. I was surprised to find out that there was not a single constraint on the SQLite side for this table (no primary key, no foreign key - it was just a plain vanilla table), but well - that's the way sqlalchemy goes, not my business.

Then when I tried to delete a persisted object involving those two rows, sqlalchemy saw that its constraints were violated and it threw the StaleDataError. Finally I just had to remove one duplicating row manually from the SQLite table.
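The manual cleanup can be done in plain SQL: group by the key columns to find pairs that occur more than once, then delete all but one row per pair. A sketch using the stdlib sqlite3 module, with hypothetical sample data seeded to reproduce the bad state (SQLite's implicit rowid picks which duplicate survives):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE ingredients (
        recipe_title TEXT,
        product_title TEXT,
        amount INTEGER NOT NULL,
        unit_title TEXT
    )
""")
# Two rows with the same "primary key" pair -- exactly the bad state.
conn.executemany(
    "INSERT INTO ingredients VALUES (?, ?, ?, ?)",
    [('soup', 'carrot', 2, 'pcs'),
     ('soup', 'carrot', 3, 'pcs'),
     ('soup', 'onion', 1, 'pcs')],
)

# Find the offending pairs.
dupes = conn.execute("""
    SELECT recipe_title, product_title, COUNT(*) AS n
    FROM ingredients
    GROUP BY recipe_title, product_title
    HAVING n > 1
""").fetchall()
print(dupes)  # [('soup', 'carrot', 2)]

# Keep one row per pair, delete the rest.
conn.execute("""
    DELETE FROM ingredients
    WHERE rowid NOT IN (
        SELECT MIN(rowid) FROM ingredients
        GROUP BY recipe_title, product_title
    )
""")
remaining = conn.execute("SELECT COUNT(*) FROM ingredients").fetchone()[0]
```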

Answer 2:

Although columns can be marked as primary_key, make sure this is enforced on the database level as well (for example, when the database was created by some different tool). In MySQL this means ensuring they are PRIMARY KEY and not only KEY.

In my case there were 2 columns marked as primary_key (composite), but there were multiple rows containing the same (supposedly) unique IDs.
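One way to check what the database itself enforces, rather than what the Python-side Table declares, is SQLAlchemy's inspector. A sketch against SQLite with hypothetical column names (for MySQL, the manual equivalent is `SHOW CREATE TABLE page_view`):

```python
import sqlalchemy as sa

engine = sa.create_engine('sqlite://')
metadata = sa.MetaData()
sa.Table(
    'page_view', metadata,
    sa.Column('page_id', sa.Integer, primary_key=True),
    sa.Column('user_id', sa.Integer, primary_key=True),
)
metadata.create_all(engine)

# Ask the database, not the metadata, which columns form the primary key.
pk = sa.inspect(engine).get_pk_constraint('page_view')
print(pk['constrained_columns'])  # ['page_id', 'user_id']
```

If the reported list is empty while the mapping declares a composite key, the constraint exists only in Python and duplicates can slip in at the SQL level.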