Is there a solution for batch inserts via Hibernate into a partitioned PostgreSQL table? Currently I'm getting an error like this...
ERROR org.hibernate.jdbc.AbstractBatcher - Exception executing batch:
org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1
at org.hibernate.jdbc.Expectations$BasicExpectation.checkBatched(Expectations.java:61)
at org.hibernate.jdbc.Expectations$BasicExpectation.verifyOutcome(Expectations.java:46)
at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:68)....
I have found this link http://lists.jboss.org/pipermail/hibernate-dev/2007-October/002771.html but I can't find anywhere on the web whether this problem has been solved or how it can be worked around.
I faced the same problem while inserting documents through Hibernate. After a lot of searching I found that Hibernate expects the number of updated rows to be reported back, so in the trigger procedure change RETURN NULL to RETURN NEW, which resolves the problem, as shown below:
RETURN NEW
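For illustration, a minimal sketch of such a partition-routing trigger; the table tablename, the child partition tablename_2013_01 and everything except the RETURN NEW are made up here:

    CREATE OR REPLACE FUNCTION partitioned_insert_trigger()
    RETURNS trigger AS $$
    BEGIN
        -- route the row into the (hypothetical) child partition
        INSERT INTO tablename_2013_01 VALUES (NEW.*);
        -- RETURN NEW instead of RETURN NULL makes PostgreSQL report a row
        -- count of 1, which satisfies Hibernate's batch check; the trade-off
        -- is that the row is also stored in the master table and has to be
        -- cleaned up again, as discussed further down in this thread
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER tablename_insert_trigger
        BEFORE INSERT ON tablename
        FOR EACH ROW EXECUTE PROCEDURE partitioned_insert_trigger();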
It appears that if you can use RULEs instead of triggers for the insert, the right row count is returned, but only with a single RULE without a WHERE clause (a sketch follows the refs below).
refs: ref1, ref2, ref3
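For example, a single unconditional rule along these lines (table, partition and column names are made up) reports the expected count of 1:

    -- one DO INSTEAD rule with no WHERE clause: PostgreSQL then reports the
    -- row count of the rewritten insert, which is what Hibernate expects
    CREATE RULE tablename_insert AS
        ON INSERT TO tablename
        DO INSTEAD
        INSERT INTO tablename_2013_01 (logdate, peaktemp, id)
        VALUES (NEW.logdate, NEW.peaktemp, NEW.id);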
Another option may be to create a view that 'wraps' the partitioned table; the trigger on the view then returns the NEW row to indicate a successful insert, without accidentally adding an extra unwanted row to the master table.
ref: http://www.postgresql.org/docs/9.2/static/trigger-definition.html
If you go the view wrapper route, one option is to also define trivial "instead of" triggers for delete and update as well; then you can just use the name of the view in place of your normal table in all transactions.
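A sketch of that view wrapper, with made-up names again (INSTEAD OF triggers on views need PostgreSQL 9.1 or later):

    CREATE VIEW tablename_view AS
        SELECT * FROM tablename;

    CREATE OR REPLACE FUNCTION insert_view_trigger_fn()
    RETURNS trigger AS $$
    BEGIN
        -- do the real insert into the proper partition (simplified to a
        -- single child here); the master table is never touched
        INSERT INTO tablename_2013_01 VALUES (NEW.*);
        -- returning NEW reports a row count of 1 back to Hibernate
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER insert_view_trigger
        INSTEAD OF INSERT ON tablename_view
        FOR EACH ROW EXECUTE PROCEDURE insert_view_trigger_fn();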
Another option that uses the view is to create an insert rule so that any inserts on the main table go to the view (which uses its trigger), e.g. assuming you already have partitioned_insert_trigger, tablename_view and insert_view_trigger created as listed above. Then it will use your new working view wrapper insert; see the sketch below.
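Roughly, such a redirect rule (using the same made-up names and columns) could look like this:

    -- every insert against the master table is rewritten into an insert on
    -- the view, whose INSTEAD OF trigger does the actual partition routing
    CREATE RULE redirect_insert AS
        ON INSERT TO tablename
        DO INSTEAD
        INSERT INTO tablename_view (logdate, peaktemp, id)
        VALUES (NEW.logdate, NEW.peaktemp, NEW.id);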
I found another solution for the same problem on this webpage:
This suggests the same solution that @rogerdpack described, changing the RETURN NULL to RETURN NEW, and adding a new trigger that deletes the duplicated tuple in the master with a query like the following:
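Roughly, such a cleanup trigger can look like this (the id column and all names are made up):

    CREATE OR REPLACE FUNCTION delete_master_duplicate()
    RETURNS trigger AS $$
    BEGIN
        -- the BEFORE trigger returned NEW, so a copy of the row was also
        -- written to the master table; delete that copy (ONLY keeps the
        -- delete away from the child partitions)
        DELETE FROM ONLY tablename WHERE id = NEW.id;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER delete_master_duplicate_trigger
        AFTER INSERT ON tablename
        FOR EACH ROW EXECUTE PROCEDURE delete_master_duplicate();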
Thanks! It did the trick, no problems popped up so far :) ... One thing though: I had to implement a BatcherFactory class and put it into the persistence.xml file (see the sketch below); from that factory I've called my batcher implementation with the code above.
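Roughly, the relevant persistence.xml entry (the factory class name here is just a placeholder for your own implementation):

    <persistence-unit name="my-unit">
        <properties>
            <!-- point Hibernate at the custom BatcherFactory implementation -->
            <property name="hibernate.jdbc.factory_class"
                      value="com.example.dao.PartitionTolerantBatcherFactory"/>
        </properties>
    </persistence-unit>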
PS: Hibernate Core 3.2.6 GA. Thanks once again!
You might want to try using a custom Batcher by setting the hibernate.jdbc.factory_class property. Making sure Hibernate won't check the update count of batch operations might fix your problem; you can achieve that by making your custom Batcher extend the class BatchingBatcher and overriding the method doExecuteBatch(...) so that the failed row-count check no longer aborts the batch.
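A sketch of such a factory/batcher pair for Hibernate Core 3.2.x (class names are made up, and instead of reimplementing BatchingBatcher's internal bookkeeping this variant simply lets the parent run the batch and ignores the StaleStateException raised by the failed row-count check):

    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    import org.hibernate.HibernateException;
    import org.hibernate.Interceptor;
    import org.hibernate.StaleStateException;
    import org.hibernate.jdbc.Batcher;
    import org.hibernate.jdbc.BatcherFactory;
    import org.hibernate.jdbc.BatchingBatcher;
    import org.hibernate.jdbc.ConnectionManager;

    // Factory referenced by the hibernate.jdbc.factory_class property.
    public class PartitionTolerantBatcherFactory implements BatcherFactory {

        public Batcher createBatcher(ConnectionManager connectionManager, Interceptor interceptor) {
            return new PartitionTolerantBatcher( connectionManager, interceptor );
        }

        // Batcher that tolerates the row count of 0 reported when the
        // partition trigger returns NULL, instead of failing the whole batch.
        private static class PartitionTolerantBatcher extends BatchingBatcher {

            PartitionTolerantBatcher(ConnectionManager connectionManager, Interceptor interceptor) {
                super( connectionManager, interceptor );
            }

            @Override
            protected void doExecuteBatch(PreparedStatement ps) throws SQLException, HibernateException {
                try {
                    // let BatchingBatcher execute the batch and reset its state
                    super.doExecuteBatch( ps );
                }
                catch ( StaleStateException e ) {
                    // the row-count check failed; swallow it (note that this
                    // also hides genuine stale-state failures for batched
                    // updates and deletes)
                }
            }
        }
    }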
Note that with this change the results of executing the prepared statements are no longer verified. Keep in mind that making this change might affect Hibernate in some unexpected way (or maybe not).
They say to use two triggers in a partitioned table or the @SQLInsert annotation here: http://www.redhat.com/f/pdf/jbw/jmlodgenski_940_scaling_hibernate.pdf pages 21-26 (it also mentions an @SQLInsert specifying a custom SQL string).
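For reference, a sketch of the annotation route on a made-up entity; the interesting part is check = ResultCheckStyle.NONE, which tells Hibernate not to verify the affected row count of the custom insert statement:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;

    import org.hibernate.annotations.ResultCheckStyle;
    import org.hibernate.annotations.SQLInsert;

    // hypothetical entity mapped onto the partitioned master table; the column
    // order in the SQL string has to match the order in which Hibernate binds
    // the parameters for this entity
    @Entity
    @Table(name = "tablename")
    @SQLInsert(
        sql = "insert into tablename (peaktemp, id) values (?, ?)",
        check = ResultCheckStyle.NONE)
    public class Measurement {

        @Id
        private Long id;

        private Integer peaktemp;

        // getters and setters omitted
    }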
Here is an example with an after trigger to delete the extra row in the master: https://gist.github.com/copiousfreetime/59067