I have a Python script that executes multiple SQL scripts, one after another, against Redshift. Some of the tables touched by these scripts are referenced more than once; for example, table t1 might be SELECTed in one script and dropped and recreated in another. The whole process runs in a single transaction.

Sometimes I get a "deadlock detected" error and the whole transaction is rolled back. If there is a deadlock on a table, I would like to wait for the table's lock to be released and then retry the SQL execution. For other types of errors, I would like to roll back the transaction. From the documentation, it looks like a table lock isn't released until the end of the transaction.

I want all-or-nothing data changes (which is what the transaction gives me), but I also want to handle deadlocks. Any suggestions on how this can be accomplished?
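A minimal sketch of the setup described above, assuming psycopg2 as the driver; the connection details and script file names are placeholders, not my actual code:

```python
import psycopg2

SQL_SCRIPTS = ["script1.sql", "script2.sql", "script3.sql"]  # placeholder file names

conn = psycopg2.connect(
    host="my-cluster.example.redshift.amazonaws.com",  # placeholder connection details
    port=5439, dbname="dev", user="user", password="password",
)
conn.autocommit = False  # run everything in one transaction

try:
    with conn.cursor() as cur:
        for path in SQL_SCRIPTS:
            with open(path) as f:
                cur.execute(f.read())  # a deadlock here aborts the whole transaction
    conn.commit()
except psycopg2.Error:
    conn.rollback()  # all-or-nothing: every script's changes are undone
    raise
finally:
    conn.close()
```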
I would execute all of the SQL you are referring to in one transaction with a retry loop. Below is the logic I use to handle concurrency issues and retry (pseudocode for brevity). I do not have the system wait indefinitely for the lock to be released. Instead I handle it in the application by retrying over time.
The key components are catching every type of error, knowing which cases require a rollback, and having an exponential backoff for retries.
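A minimal sketch of that retry loop, assuming psycopg2; the script list, attempt limit, and backoff constants are illustrative placeholders:

```python
import time
import psycopg2

DEADLOCK_SQLSTATE = "40P01"   # SQLSTATE reported for "deadlock detected"
MAX_ATTEMPTS = 5              # placeholder retry limit
BASE_DELAY_SECONDS = 2        # placeholder backoff base


def run_scripts_with_retry(connect, script_paths):
    """Run all scripts in one transaction; retry the whole batch on deadlock."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        conn = connect()
        conn.autocommit = False
        try:
            with conn.cursor() as cur:
                for path in script_paths:
                    with open(path) as f:
                        cur.execute(f.read())
            conn.commit()
            return  # success: all changes committed together
        except psycopg2.Error as e:
            conn.rollback()  # every error path requires a rollback first
            if e.pgcode == DEADLOCK_SQLSTATE and attempt < MAX_ATTEMPTS:
                # Deadlock: the other transaction holds its locks until it
                # finishes, so back off exponentially and rerun the batch.
                time.sleep(BASE_DELAY_SECONDS * 2 ** (attempt - 1))
                continue
            raise  # non-deadlock error, or retries exhausted: propagate
        finally:
            conn.close()
```

Called, for example, as `run_scripts_with_retry(lambda: psycopg2.connect(...), ["s1.sql", "s2.sql"])`. Because the whole batch is rerun from the top, the all-or-nothing guarantee is preserved: either every script commits in one transaction or nothing does.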