I have the following tables:
DataValue
DateStamp ItemId Value
---------- ------ -----
2012-05-22 1 6541
2012-05-22 2 12321
2012-05-21 3 32
tmp_holding_DataValue
DateStamp ItemId Value
---------- ------ -----
2012-05-22 1 6541
2012-05-22 4 87
2012-05-21 5 234
DateStamp and ItemId are the primary key columns.
I'm doing an insert which runs periodically throughout the day (in a stored procedure):
insert into DataValue(DateStamp, ItemId, Value)
select DateStamp, ItemId, Value from tmp_holding_DataValue;
This moves data from the holding table (tmp_holding_DataValue) into the main data table (DataValue). The holding table is then truncated.
The problem is that, as in the example, the holding table can contain items which already exist in the main table. Since the primary key will not allow duplicate values, the procedure will fail.
One option would be to put a WHERE clause in the insert procedure, but the main data table has 10 million+ rows, and this could take a long time.
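(For reference, the WHERE-clause variant described above might look something like this sketch, filtering on the primary key with NOT EXISTS:)

```sql
insert into DataValue (DateStamp, ItemId, Value)
select t.DateStamp, t.ItemId, t.Value
from tmp_holding_DataValue t
where not exists (
    -- skip any row whose key is already in the main table
    select 1
    from DataValue d
    where d.DateStamp = t.DateStamp
      and d.ItemId    = t.ItemId
);
```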
Is there any other way to get the procedure to just skip-over/ignore the duplicates as it tries to insert?
In SQL Server 2008+: I ran into a similar requirement that ended up throwing the same duplicate key error. The idea was to select distinct values of the primary key columns while also returning the other columns.
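One SQL Server 2008+ option is the MERGE statement, which can insert only the rows whose keys are not already present in the target. A minimal sketch, assuming the tables from the question:

```sql
merge DataValue as target
using tmp_holding_DataValue as source
    on  target.DateStamp = source.DateStamp
    and target.ItemId    = source.ItemId
when not matched by target then
    -- only keys absent from DataValue reach this branch
    insert (DateStamp, ItemId, Value)
    values (source.DateStamp, source.ItemId, source.Value);
```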
In fact, the goal can be accomplished without DISTINCT as well, since the aggregate function MAX will pick a single value per group.
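Applied to the tables in the question, that approach might look like the following sketch. Note that GROUP BY with MAX only collapses rows that share a key *within* the holding table; rows whose keys already exist in the main table would still need filtering (for example with NOT EXISTS):

```sql
insert into DataValue (DateStamp, ItemId, Value)
select DateStamp, ItemId, max(Value)   -- MAX arbitrarily resolves same-key rows
from tmp_holding_DataValue
group by DateStamp, ItemId;
```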
You could create the primary key with IGNORE_DUP_KEY = ON. Then it will just give a "Duplicate key was ignored" warning and continue. I am not guessing; I tested this.
What I found is that I cannot change this setting in SSMS; you have to drop and recreate the index via script. But you can right-click on the index, script it as drop and create, and then just change IGNORE_DUP_KEY to ON. For me, SSMS did not immediately show the change.
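The drop-and-recreate script might look like this sketch (the constraint name PK_DataValue is an assumption; use the actual name from your database):

```sql
-- PK_DataValue is an assumed constraint name
alter table DataValue drop constraint PK_DataValue;

alter table DataValue
add constraint PK_DataValue
    primary key clustered (DateStamp, ItemId)
    with (ignore_dup_key = on);
```

With this in place, the original INSERT...SELECT runs to completion, inserting the new rows and raising only a warning for the duplicates.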
Or I think you could use an outer join:
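A sketch of that outer-join version, keeping only the holding-table rows with no match in the main table:

```sql
insert into DataValue (DateStamp, ItemId, Value)
select t.DateStamp, t.ItemId, t.Value
from tmp_holding_DataValue t
left outer join DataValue d
    on  d.DateStamp = t.DateStamp
    and d.ItemId    = t.ItemId
where d.ItemId is null;   -- NULL means no matching key in DataValue
```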