DB2 keeping n columns in sync between two tables

Published 2019-08-24 16:13

Question:

I have two tables (actually two sets of tables) that may need to be kept in sync (all of the data in the first set needs to be replicated to the second set, where it will be a subset).

If the reason why matters (perhaps because it is obviously not good form to have replicated data), see the 'PS' section at the very bottom of this other question: DB2 update and insert trigger, references a lot of fields can I use * to shorten them (in iSeries but might not matter)

To do this I'm placing insert, update and delete triggers on what I'll call the 'primary set', which will insert, update and delete records in the 'replicated set'.

My question is: what if I also want to keep the primary set in sync with the replicated set? I'm looking at documentation right now to find the best way to do this. I don't want an insert trigger to try to insert into whichever table was initially inserted into, as I expect this will cause an error that would be reported back to the program making the insert (unless there is a way to suppress such reporting inside the trigger?). The best approach at this point looks like checking whether the record exists before the insert, and likewise before the delete.
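The exists-check I have in mind would look roughly like this, inside an insert trigger body where n is the new-row correlation (a sketch only; toSchema/tName, pkCol and col1 are placeholder names):

 insert into toSchema/tName (pkCol, col1)
  select n.pkCol, n.col1
    from sysibm/sysdummy1
   where not exists (select 1 from toSchema/tName t
                      where t.pkCol = n.pkCol);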

Then there is the issue of recursion on an update trigger; I was wondering if there is a more elegant solution than checking whether the values are the same and skipping the update if they are.

The primary set cannot be changed in any way (except for triggers that do not modify the table's input/output in any way), while the schema of the replicated set is completely flexible.

Optimize for laziness (I'm probably going to automate the creation of these triggers, but if you have suggestions on how to keep them compact it will speed up my development).

The system is not heavily used and has a ton of free resources; there is little need to be conservative with memory, processing or disk space.

Answer 1:

Instead of using replicated tables, you're going to be far better off using views. This will allow you to completely decouple your database access from your underlying database design. With a combination of updatable/deletable views and 'instead-of' triggers, you should be able to do things nicely. Keep in mind that this is the recommended setup even if you're using only SQL (and in any language), as it allows you to decouple program design from database design. The fact that it also lets you decouple from iSeries PFs is just a bonus.
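A minimal sketch of that setup, assuming a hypothetical base table mySchema/baseTable with columns id and name (the view exposes whatever subset you need):

 create view mySchema/subsetView as
  select id, name
    from mySchema/baseTable;

 create trigger subsetViewInsert
  instead of insert on mySchema/subsetView
  referencing new as n
  for each row mode db2sql
  begin atomic
     insert into mySchema/baseTable (id, name)
      values (n.id, n.name);
 end

Programs then read and write subsetView, and the instead-of trigger decides how the change lands in the base table, which is what keeps program design decoupled from database design.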

Good news: you don't have to switch everything to SQL just yet. The iSeries is actually set up to allow you to access SQL objects (like views and triggers) as if they were physical files, with native RPG file access.
Bad news: unlike logical files, views can't be ordered. If you're using a lot of keyed access over logical files, your efforts could be complicated... I haven't tried accessing an SQL index with a CHAIN operation to see if it gives me access to the rest of the record, so I have no idea whether that will work.


EDIT:

Finally got around to testing this. It turns out that accessing an SQL index for its ordering (as if it were a keyed logical file) does indeed grant access to the rest of the record. Not sure if that helps you or not.



Answer 2:

I'm posting this as an answer to make what I intend to do at this point very clear. Any improvement will be accepted as an answer.

Implement after triggers for insert, update and delete on both tables. This way referential constraints will still kick in and be reported back to the user accurately.

The delete and insert triggers must check whether the record already exists in the other table, to avoid generating an error that would be sent back to the user.

The update trigger must check whether the record actually needs updating before applying the update, to prevent recursion.

This is the form of the triggers I intend to use:

Update trigger which will sync two tables (when applied to each table); it only performs the update if the matching record is out of sync:

create trigger tableUpdate after update on fromSchema/tName
 referencing new as n
 for each row mode db2sql
 begin atomic
    update toSchema/tName target
       set (<-- all target.columns = n.columns -->)
     where <-- target.pkColumns = n.pkColumns -->
       and <-- any target.column <> n.column -->;
end

Insert trigger which will sync two tables (when applied to each table); it only inserts if the record does not already exist:

create trigger tableInsert after insert on fromSchema/tName
 referencing new as n
 for each row mode db2sql
 begin atomic
    insert into toSchema/tName
    select t1.*
      from fromSchema/tName as t1 left join toSchema/tName as t2
        on <-- t1.pkColumns = t2.pkColumns -->
     where <-- t2.pkColumns IS NULL -->;
end

Delete trigger which will sync two tables (when applied to each table); it only deletes where the record exists:

create trigger tableDelete after delete on fromSchema/tName
 referencing old as o
 for each row mode db2sql
 begin atomic
    delete from toSchema/tName target
     where <-- target.keyColumns = o.keyColumns -->;
end
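If compactness is the goal, the insert and update cases could possibly each be collapsed into a single MERGE statement inside the trigger body, with n as the new-row correlation (a sketch only, assuming a DB2 version that supports MERGE; pkCol and col1 are placeholder columns):

 merge into toSchema/tName t
  using (values (n.pkCol, n.col1)) as s (pkCol, col1)
     on t.pkCol = s.pkCol
  when matched and t.col1 <> s.col1 then
     update set col1 = s.col1
  when not matched then
     insert (pkCol, col1) values (s.pkCol, s.col1);

The 'when matched and ... <>' guard plays the same role as the out-of-sync check described above, and the 'when not matched' branch replaces the separate existence check for inserts.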


Tags: db2