The database schemas (source and target) are very large (each has over 350 tables). I have been given the task of somehow merging these two schemas into one. The data itself (what is in the tables) has to be migrated as well. I have to make sure there are no duplicate primary key entries before or while merging the schemata. Has anybody ever done this before and could provide their solution, or could anyone help me find an approach to the task? All my approaches have failed, and my advisor just tells me to get help online :/
To my approach: I have tried using the "all_constraints" view to get all PKs from my DB.
SELECT cols.table_name, cols.column_name, cols.position, cons.status, cons.owner
  FROM all_constraints cons
  JOIN all_cons_columns cols
    ON cons.constraint_name = cols.constraint_name
   AND cons.owner = cols.owner
 WHERE cols.owner = 'DB'
   AND cons.constraint_type = 'P'
 ORDER BY cols.table_name, cols.position;
I also "know" that I will need a sequence to generate new values for the primary keys:
CREATE SEQUENCE seq_pk_addition
MINVALUE 1
MAXVALUE 99999999999999999999
START WITH 1
INCREMENT BY 1
CACHE 20;
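My guess is that I would then use it like this, to give colliding rows new keys before the merge (table and column names are only placeholders):

-- Re-key source rows whose primary key already exists in the target;
-- note: any child tables referencing these keys would need the same fix
UPDATE source_schema.some_table s
   SET s.pk_column = seq_pk_addition.NEXTVAL
 WHERE EXISTS (SELECT 1
                 FROM target_schema.some_table t
                WHERE t.pk_column = s.pk_column);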
Because I am a noob when it comes to PL/SQL (or SQL in general), I really have no idea what I should do next :/
It would be awesome if anybody could help me :)
Thanks in advance for reading this endless post, and thanks to anybody willing to help :D
(P.S.: Please ignore any grammar mistakes. I am a native speaker of German.)
Here is a link for an ERD of the database: https://ufile.io/9tdoj
virus scan: https://www.virustotal.com/#/file/dbe5f418115e50313a2268fb33a924cc8cb57a43bc85b3bbf5f6a571b184627e/detection
First of all, for 350 tables you will most probably need dynamic SQL: a CURSOR or a COLLECTION (a table of VARCHAR2) with all the table names, then a loop through the entire list of table names that, for each table, generates a string which is executed as SQL with the EXECUTE IMMEDIATE command.
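A minimal sketch of that pattern (SOURCE_SCHEMA and TARGET_SCHEMA are placeholder owner names; the plain INSERT ... SELECT is only there to show the dynamic-SQL mechanics and would still fail on duplicate keys):

DECLARE
  -- cursor over all table names of the source schema
  CURSOR c_tables IS
    SELECT table_name
      FROM all_tables
     WHERE owner = 'SOURCE_SCHEMA';
  v_sql VARCHAR2(4000);
BEGIN
  FOR r IN c_tables LOOP
    -- build one statement per table and run it dynamically
    v_sql := 'INSERT INTO TARGET_SCHEMA.' || r.table_name
          || ' SELECT * FROM SOURCE_SCHEMA.' || r.table_name;
    EXECUTE IMMEDIATE v_sql;
  END LOOP;
END;
/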
Oh dear! Normally, such a question would be quickly closed as "too broad", but we need to support victims of evil advisors!
As for the effort: I would estimate a full week for an experienced expert, plus two days of quality checking by an experienced QA engineer.
First of all, there is no way that such a complex data merge will work on the first try. That means you'll need test copies of both schemas that can be easily rebuilt, and you'll need a place to try it out. Normally this is done with an export of both schemas and an empty dev database.
Next, you need both schemas close enough together to be able to compare the data. I'd do it with an import of the export files mentioned above. If the schema names are identical, then rename one during import.
Next, I'd double-check whether the structure is really identical, with queries like this:
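For example (a sketch, assuming the two schemas were imported as SRC and TGT; run it a second time with the owners swapped):

-- columns that exist in SRC but are missing (or differ) in TGT
SELECT table_name, column_name, data_type, data_length
  FROM all_tab_columns
 WHERE owner = 'SRC'
MINUS
SELECT table_name, column_name, data_type, data_length
  FROM all_tab_columns
 WHERE owner = 'TGT';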
Next, I'd check if the primary and unique keys have overlaps:
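For a single table, this could look like the following (a sketch; SRC, TGT, some_table, and id are placeholders):

-- primary key values that exist in both schemas
SELECT s.id
  FROM src.some_table s
  JOIN tgt.some_table t
    ON t.id = s.id;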
As there are 300+ tables, I'd generate those queries:
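A sketch of such a generator, driven by the data dictionary (it assumes schemas SRC and TGT and, for brevity, single-column primary keys):

-- emits one overlap-check query per table with a primary key
SELECT 'SELECT ''' || cols.table_name || ''' AS tab, COUNT(*) AS overlaps'
    || ' FROM src.' || cols.table_name || ' s'
    || ' JOIN tgt.' || cols.table_name || ' t'
    || ' ON t.' || cols.column_name || ' = s.' || cols.column_name || ';' AS check_sql
  FROM all_constraints cons
  JOIN all_cons_columns cols
    ON cols.constraint_name = cons.constraint_name
   AND cols.owner = cons.owner
 WHERE cons.owner = 'SRC'
   AND cons.constraint_type = 'P';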
As promised in my comment, I have prepared dynamic code which you can try to get the data merged between the source and target tables. The logic is as below:
Step 1: Get all the table names from the SOURCE schema. In the query below you need to replace the schema (owner) name accordingly. For testing purposes I had taken only one table, so when you run it, remove the table-name filter clause.
Step 2: Get the constrained column names for the table. These are used to prepare the ON clause, which is later used in the MERGE statement.
Step 3: Get the non-constrained column names for the table. These are used in the UPDATE clause of the MERGE.
Step 4: Prepare the INSERT list for when the data does not match the ON condition of the MERGE statement.
Read my inline comments to understand each step.
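A sketch of that logic is below (the owner names SOURCE_SCHEMA and TARGET_SCHEMA and the test table name EMPLOYEE are placeholders you would replace; the block builds each MERGE and prints it via DBMS_OUTPUT so it can be reviewed before executing anything):

DECLARE
  v_on_clause     VARCHAR2(4000);
  v_update_clause VARCHAR2(4000);
  v_insert_cols   VARCHAR2(4000);
  v_sql           VARCHAR2(32767);
BEGIN
  -- Step 1: all tables of the source schema (filtered to one table for testing)
  FOR t IN (SELECT table_name
              FROM all_tables
             WHERE owner = 'SOURCE_SCHEMA'
               AND table_name = 'EMPLOYEE') LOOP

    -- Step 2: constrained (primary/unique key) columns build the ON clause
    SELECT LISTAGG('t.' || cols.column_name || ' = s.' || cols.column_name, ' AND ')
             WITHIN GROUP (ORDER BY cols.position)
      INTO v_on_clause
      FROM all_constraints cons
      JOIN all_cons_columns cols
        ON cols.constraint_name = cons.constraint_name
       AND cols.owner = cons.owner
     WHERE cons.owner = 'SOURCE_SCHEMA'
       AND cons.table_name = t.table_name
       AND cons.constraint_type IN ('P', 'U');

    -- Step 3: non-constrained columns build the UPDATE SET clause
    -- (assumes every table has at least one non-key column)
    SELECT LISTAGG('t.' || column_name || ' = s.' || column_name, ', ')
             WITHIN GROUP (ORDER BY column_id)
      INTO v_update_clause
      FROM all_tab_columns
     WHERE owner = 'SOURCE_SCHEMA'
       AND table_name = t.table_name
       AND column_name NOT IN (SELECT cols.column_name
                                 FROM all_constraints cons
                                 JOIN all_cons_columns cols
                                   ON cols.constraint_name = cons.constraint_name
                                  AND cols.owner = cons.owner
                                WHERE cons.owner = 'SOURCE_SCHEMA'
                                  AND cons.table_name = t.table_name
                                  AND cons.constraint_type IN ('P', 'U'));

    -- Step 4: the full column list for the INSERT branch
    SELECT LISTAGG(column_name, ', ') WITHIN GROUP (ORDER BY column_id)
      INTO v_insert_cols
      FROM all_tab_columns
     WHERE owner = 'SOURCE_SCHEMA'
       AND table_name = t.table_name;

    v_sql := 'MERGE INTO TARGET_SCHEMA.' || t.table_name || ' t'
          || ' USING SOURCE_SCHEMA.' || t.table_name || ' s'
          || ' ON (' || v_on_clause || ')'
          || ' WHEN MATCHED THEN UPDATE SET ' || v_update_clause
          || ' WHEN NOT MATCHED THEN INSERT (' || v_insert_cols || ')'
          || ' VALUES (s.' || REPLACE(v_insert_cols, ', ', ', s.') || ')';

    DBMS_OUTPUT.put_line(v_sql);
    -- EXECUTE IMMEDIATE v_sql;  -- uncomment once the printed statement looks right
  END LOOP;
END;
/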
Execution:
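To run the sketch above, enable server output first so the generated statement is visible (in SQL*Plus or SQL Developer):

SET SERVEROUTPUT ON SIZE UNLIMITED
-- then execute the anonymous block from above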
Output:
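For a two-column table like the test table below, the printed statement would look roughly like this (illustrative only, with the placeholder names):

MERGE INTO TARGET_SCHEMA.EMPLOYEE t USING SOURCE_SCHEMA.EMPLOYEE s
ON (t.EMP_ID = s.EMP_ID)
WHEN MATCHED THEN UPDATE SET t.EMP_NAME = s.EMP_NAME
WHEN NOT MATCHED THEN INSERT (EMP_ID, EMP_NAME) VALUES (s.EMP_ID, s.EMP_NAME)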
PS: I have tested this with a table with 2 columns, one of which had a unique key constraint. The DDL of the table is as below:
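A two-column table of that shape could look like this (placeholder names matching the sketch above):

CREATE TABLE employee (
  emp_id   NUMBER        CONSTRAINT employee_uk UNIQUE,
  emp_name VARCHAR2(100)
);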
In the end, I hope you can understand my code (you being a noob) and implement something similar if the above fails for your requirement.