Phantom Read anomaly in Oracle and PostgreSQL does not roll back the transaction

Published 2019-03-10 08:33

Question:

I noticed the following occurrence in both Oracle and PostgreSQL.

Considering we have the following database schema:

create table post (
    id int8 not null, 
    title varchar(255), 
    version int4 not null, 
    primary key (id));    

create table post_comment (
    id int8 not null, 
    review varchar(255), 
    version int4 not null, 
    post_id int8, 
    primary key (id));

alter table post_comment 
    add constraint FKna4y825fdc5hw8aow65ijexm0 
    foreign key (post_id) references post;  

With the following data:

insert into post (title, version, id) values ('Transactions', 0, 1);
insert into post_comment (post_id, review, version, id) 
    values (1, 'Post comment 1', 459, 0);
insert into post_comment (post_id, review, version, id) 
    values (1, 'Post comment 2', 537, 1);
insert into post_comment (post_id, review, version, id) 
    values (1, 'Post comment 3', 689, 2); 

If I open two separate SQL consoles and execute the following statements:

TX1: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

TX2: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;

TX1: SELECT COUNT(*) FROM post_comment where post_id = 1;

TX1: > 3

TX1: UPDATE post_comment SET version = 100 WHERE post_id = 1;

TX2: INSERT INTO post_comment (post_id, review, version, id) VALUES (1, 'Phantom', 0, 1000);

TX2: COMMIT;

TX1: SELECT COUNT(*) FROM post_comment where post_id = 1;

TX1: > 3

TX1: COMMIT;

TX3: SELECT * from post_comment;

     > 0;"Post comment 1";100;1
       1;"Post comment 2";100;1
       2;"Post comment 3";100;1
       1000;"Phantom";0;1

As expected, the SERIALIZABLE isolation level has kept the snapshot data from the beginning of the TX1 transaction and TX1 only sees 3 post_comment records.

Because of the MVCC model in Oracle and PostgreSQL, TX2 is allowed to insert a new record and commit.

Why is TX1 allowed to commit? Because this is a phantom read anomaly, I was expecting TX1 to be rolled back with a "Serialization failure exception" or something similar.

Does the MVCC Serializable model in PostgreSQL and Oracle only offer a snapshot isolation guarantee but no phantom read anomaly detection?
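
(A side note for anyone reproducing this: the BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE syntax above is PostgreSQL's. In Oracle, as far as I know, the equivalent is to set the isolation level as the first statement of the transaction, or once per session:)

    -- Oracle equivalent of the PostgreSQL BEGIN statement above
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    -- or, for every transaction in the session:
    ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;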

UPDATE

I even changed Tx1 to issue an UPDATE statement that changes the version column for all post_comment records belonging to the same post.

This way, Tx2 creates a new record and Tx1 is going to commit without knowing that a new record has been added that satisfies the UPDATE filtering criteria.

Actually, the only way to make it fail on PostgreSQL is if we execute the following COUNT query in Tx2, prior to inserting the phantom record:

TX2: SELECT COUNT(*) FROM post_comment where post_id = 1 and version = 0;

TX2: INSERT INTO post_comment (post_id, review, version, id) VALUES (1, 'Phantom', 0, 1000);

TX2: COMMIT;

Then Tx1 is going to be rolled back with:

org.postgresql.util.PSQLException: ERROR: could not serialize access due to read/write dependencies among transactions
  Detail: Reason code: Canceled on identification as a pivot, during conflict out checking.
  Hint: The transaction might succeed if retried.

Most likely, PostgreSQL's write-skew prevention mechanism (Serializable Snapshot Isolation) detected this conflict and rolled back the transaction.
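
Since the Hint suggests retrying, here is a minimal sketch of what the retry looks like on the TX1 console (assuming Tx2's insert has committed and no other conflicting transaction runs in the meantime):

    TX1: ROLLBACK;
    TX1: BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    TX1: SELECT COUNT(*) FROM post_comment where post_id = 1;
    TX1: > 4
    TX1: UPDATE post_comment SET version = 100 WHERE post_id = 1;
    TX1: COMMIT;

This time the COUNT query sees the Phantom row as well, so the retried transaction observes a consistent state and commits.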

Interestingly, Oracle does not seem to be bothered by this anomaly: since it does not prevent write skew from happening, Tx1 just commits fine.

By the way, you can run all these examples yourself since they are on GitHub.

Answer 1:

What you observe is not a phantom read. That would be the case if a new row showed up when the query is issued the second time (phantoms appear unexpectedly).

You are protected from phantom reads in both Oracle and PostgreSQL with SERIALIZABLE isolation.

The difference between Oracle and PostgreSQL is that SERIALIZABLE isolation level in Oracle only offers snapshot isolation (which is good enough to keep phantoms from appearing), while in PostgreSQL it will guarantee true serializability (i.e., there always exists a serialization of the SQL statements that leads to the same results). If you want to get the same thing in Oracle and PostgreSQL, use REPEATABLE READ isolation in PostgreSQL.
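
In other words, to get Oracle's behavior out of PostgreSQL, the two consoles from the question would simply start like this (a sketch; under REPEATABLE READ, PostgreSQL provides plain snapshot isolation and performs none of the serializability dependency checks):

    TX1: BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    TX2: BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;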



Answer 2:

I love this question because it demonstrates that the Phantom Read definition in the SQL Standard only pictures the effect without stating the root cause of this data anomaly:

P3 ("Phantom"): SQL-transaction T1 reads the set of rows N that satisfy some . SQL-transaction T2 then executes SQL-statements that generate one or more rows that satisfy the used by SQL-transaction T1. If SQL-transaction T1 then repeats the initial read with the same , it obtains a different collection of rows.

In the 1995 paper, A Critique of ANSI SQL Isolation Levels, Jim Gray and co. described the Phantom Read as:

P3: r1[P]...w2[y in P]...(c1 or a1) (Phantom)
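
To make the notation concrete, here is how the question's transcript maps onto that history, with P being the predicate post_id = 1 (my mapping, not the paper's):

    r1[P]       TX1: SELECT COUNT(*) FROM post_comment where post_id = 1
    w2[y in P]  TX2: INSERT INTO post_comment (post_id, review, version, id) VALUES (1, 'Phantom', 0, 1000)
    c1          TX1: COMMIT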

One important note is that ANSI SQL P3 only prohibits inserts (and updates, according to some interpretations) to a predicate whereas the definition of P3 above prohibits any write satisfying the predicate once the predicate has been read — the write could be an insert, update, or delete.

Therefore, preventing the Phantom Read does not mean that you can simply return a snapshot as of the start of the currently running transaction and pretend that serving the same result for a repeated query protects you against the actual Phantom Read anomaly.

In the original SQL Server 2PL (Two-Phase Locking) implementation, returning the same result for a query implied Predicate Locks.

The MVCC (Multi-Version Concurrency Control) Snapshot Isolation (wrongly named Serializable in Oracle) does not actually prevent other transactions from inserting/deleting rows that match the filtering criteria of a query that has already executed and returned a result set in the currently running transaction.

For this reason, we can imagine the following scenario in which we want to apply a raise to all employees:

  1. Tx1: SELECT SUM(salary) FROM employee where company_id = 1;
  2. Tx2: INSERT INTO employee (id, name, company_id, salary) VALUES (100, 'John Doe', 1, 100000);
  3. Tx1: UPDATE employee SET salary = salary * 1.1 WHERE company_id = 1;
  4. Tx2: COMMIT;
  5. Tx1: COMMIT;

In this scenario, the CEO runs the first transaction (Tx1), so:

  1. She first checks the sum of all salaries in her company.
  2. Meanwhile, the HR department runs the second transaction (Tx2), as they have just managed to hire John Doe and give him a $100k salary.
  3. The CEO decides that a 10% raise is feasible taking into account the total sum of salaries, being unaware that the salary sum has just increased by $100k.
  4. Meanwhile, the HR transaction Tx2 is committed.
  5. The CEO transaction Tx1 is committed.

Boom! The CEO has taken a decision on an old snapshot, giving a raise that might not be sustained by the current updated salary budget.
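
If you cannot rely on true Serializable isolation, one way to rule this scenario out under MVCC is an explicit table lock that conflicts with concurrent INSERTs. A minimal sketch, assuming the employee table from above and PostgreSQL-style console syntax (Oracle supports the same LOCK TABLE statement; SHARE ROW EXCLUSIVE is used here because, being self-exclusive, it also avoids the lock-upgrade deadlock that plain SHARE mode can cause):

    Tx1: BEGIN;
    Tx1: LOCK TABLE employee IN SHARE ROW EXCLUSIVE MODE;
    Tx1: SELECT SUM(salary) FROM employee where company_id = 1;
    Tx1: UPDATE employee SET salary = salary * 1.1 WHERE company_id = 1;
    Tx1: COMMIT;

    -- Tx2's INSERT now blocks until Tx1 commits, so the sum Tx1 based its
    -- decision on cannot be invalidated by a concurrent hire.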

You can view a detailed explanation of this use case (with lots of diagrams) in the following post.

Is this a Phantom Read or a Write Skew?

According to Jim Gray and co., this is a Phantom Read, since Write Skew is defined as:

A5B Write Skew Suppose T1 reads x and y, which are consistent with C(), and then a T2 reads x and y, writes x, and commits. Then T1 writes y. If there were a constraint between x and y, it might be violated. In terms of histories:

A5B: r1[x]...r2[y]...w1[y]...w2[x]...(c1 and c2 occur)

In Oracle, the Transaction Manager might or might not detect the anomaly above because it does not use predicate locks or index range locks (next-key locks), as MySQL does.

PostgreSQL manages to catch this anomaly only if the second transaction (Tx2) issues a read against the employee table; otherwise, the phenomenon is not prevented.

UPDATE

Initially, I was assuming that Serializability would imply a time ordering as well. However, as very well explained by Peter Bailis, wall-clock ordering or Linearizability is only assumed for Strict Serializability.

Therefore, my assumptions were made for a Strict Serializable system. But that's not what Serializable is supposed to offer. The Serializable isolation model makes no guarantees about time, and operations are allowed to be reordered as long as the outcome is equivalent to some serial execution.

Therefore, according to the Serializable definition, such a Phantom Read can occur if the second transaction does not issue any read. But in a Strict Serializable model, the one offered by 2PL, the Phantom Read would be prevented even if the second transaction does not issue a read against the entries we are trying to guard against phantoms.
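
For comparison, here is a sketch of how the original scenario plays out under a lock-based SERIALIZABLE implementation such as SQL Server's default locking engine, which takes key-range locks on the predicate that was read:

    TX1: SELECT COUNT(*) FROM post_comment where post_id = 1;
         -- acquires shared key-range locks covering post_id = 1
    TX2: INSERT INTO post_comment (post_id, review, version, id) VALUES (1, 'Phantom', 0, 1000);
         -- blocks: the new row falls into TX1's locked range
    TX1: COMMIT;
         -- releases the range locks; only now does TX2's insert proceed,
         -- so TX2 is serialized strictly after TX1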



Answer 3:

The Postgres documentation defines a phantom read as:

A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.

Because your select returns the same value both before and after the other transaction committed, it does not meet the criteria for a phantom read.
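
For contrast, here is a transcript that does meet the criteria, a minimal sketch using the same tables at READ COMMITTED (PostgreSQL console syntax, as in the question), where the second read sees the phantom row in both Oracle and PostgreSQL:

    TX1: BEGIN TRANSACTION ISOLATION LEVEL READ COMMITTED;
    TX1: SELECT COUNT(*) FROM post_comment where post_id = 1;
    TX1: > 3
    TX2: INSERT INTO post_comment (post_id, review, version, id) VALUES (1, 'Phantom', 0, 1000);
    TX2: COMMIT;
    TX1: SELECT COUNT(*) FROM post_comment where post_id = 1;
    TX1: > 4
    TX1: COMMIT;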



Answer 4:

I just wanted to point out that Vlad Mihalcea's answer is plain wrong.

Is this a Phantom Read or a Write Skew?

Neither of those -- there is no anomaly here; the transactions are serializable as Tx1 -> Tx2.

SQL standard states: "A serializable execution is defined to be an execution of the operations of concurrently executing SQL-transactions that produces the same effect as some serial execution of those same SQL-transactions."

PostgreSQL manages to catch this anomaly only if the second transaction (Tx2) issues a read against the employee table; otherwise, the phenomenon is not prevented.

PostgreSQL's behavior here is 100% correct; it just "flips" the apparent transaction order.
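
To see why, replay the two transactions serially, Tx1 first and then Tx2; the end state is exactly what TX3 observed after the interleaved run:

    -- Serial execution Tx1 -> Tx2
    TX1: SELECT COUNT(*) FROM post_comment where post_id = 1;   -- > 3
    TX1: UPDATE post_comment SET version = 100 WHERE post_id = 1;
    TX1: COMMIT;
    TX2: INSERT INTO post_comment (post_id, review, version, id) VALUES (1, 'Phantom', 0, 1000);
    TX2: COMMIT;

    -- Final state: the three original comments with version = 100, plus the
    -- 'Phantom' row with version = 0, identical to the interleaved outcome.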