I am working with a Spring Batch data loader. I have 15,000 files and am processing one file at a time using MultiResourcePartitioner. It seems a table lock occurs when inserting data into the table. There is no parallel step defined, and file processing is very slow. Below is the code snippet for the chunk reader and writer, plus the SQL output showing the table lock.
Spring config file
<step id="filestep" xmlns="http://www.springframework.org/schema/batch">
    <tasklet allow-start-if-complete="true" transaction-manager="ratransactionManager">
        <chunk reader="fileItemReader" processor="itemProcessor" writer="jdbcItenWriter" commit-interval="500" retry-limit="2">
            <retryable-exception-classes>
                <include class="org.springframework.dao.DeadlockLoserDataAccessException"/>
            </retryable-exception-classes>
        </chunk>
        <listeners>
            <listener ref="customStepExecutionListener"/>
        </listeners>
    </tasklet>
    <end on="FAILED"/>
</step>
Oracle Locks
select
    object_name,
    object_type,
    session_id,
    type,      -- type of lock (system/user)
    lmode,     -- lock mode in which the session holds the lock
    request,
    block,
    ctime,     -- time since the current mode was granted
    owner,
    id1, id2
from
    v$locked_object, all_objects, v$lock
where
    v$locked_object.object_id = all_objects.object_id
    and v$lock.id1 = all_objects.object_id
    and v$lock.sid = v$locked_object.session_id
    and all_objects.object_name like '%MSC%'
order by
    session_id, ctime desc, object_name;
OBJECT_NAME        OBJECT_TYPE  SESSION_ID  TYPE  LMODE  REQUEST  BLOCK  CTIME  OWNER  ID1      ID2
STAGING_TABLE_MSC  TABLE        137         TM    3      0        0      39     CDRR   9289370  0
LMODE 3 is a row-exclusive (RX) lock, the normal lock taken by ordinary DML such as a plain INSERT, and shouldn't be a problem. If you had seen LMODE 6, that would mean the entire table was exclusively locked, for example by a direct-path write from an INSERT /*+ APPEND */. See the V$LOCK documentation.
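For quick reference, the LMODE numbers in V$LOCK map to named lock modes. A small illustrative sketch in Java (since this is a Java stack) — the numeric mapping comes from the Oracle V$LOCK documentation, while the class and method names here are my own:

```java
import java.util.Map;

// Illustrative helper: decode Oracle V$LOCK.LMODE numbers into lock-mode names.
// The numeric mapping is from the V$LOCK documentation; the class itself is hypothetical.
public class OracleLockModes {
    static final Map<Integer, String> LMODE_NAMES = Map.of(
            0, "none",
            1, "null (NULL)",
            2, "row share (RS)",
            3, "row exclusive (RX)",
            4, "share (S)",
            5, "share row exclusive (SRX)",
            6, "exclusive (X)");

    public static String describe(int lmode) {
        return LMODE_NAMES.getOrDefault(lmode, "unknown");
    }

    public static void main(String[] args) {
        System.out.println(describe(3)); // normal DML insert -> row exclusive (RX)
        System.out.println(describe(6)); // direct-path INSERT /*+ APPEND */ -> exclusive (X)
    }
}
```

So the lock you captured (LMODE 3 on the TM lock for STAGING_TABLE_MSC) is what any concurrent insert would hold; it is not the cause of the slowness by itself.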