I have a two-column primary key on a table. I have attempted to alter it to set the ignore_dup_key
to on with this command:
ALTER INDEX PK_mypk on MyTable
SET (IGNORE_DUP_KEY = ON);
But I get this error:
Cannot use index option ignore_dup_key to alter index 'PK_mypk' as it enforces a primary or unique constraint.
How else should I set IGNORE_DUP_KEY
to on?
It's not documented in Books Online, but I've found that while IGNORE_DUP_KEY is valid for primary keys, you can't change it with ALTER INDEX: you'll have to drop and re-create the primary key.
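A minimal sketch of the drop-and-recreate approach (the key column names Col1 and Col2 are placeholders, since the question doesn't show the table definition):

```sql
-- Drop the existing primary key constraint
ALTER TABLE MyTable DROP CONSTRAINT PK_mypk;

-- Re-create it with IGNORE_DUP_KEY enabled
ALTER TABLE MyTable ADD CONSTRAINT PK_mypk
    PRIMARY KEY (Col1, Col2)
    WITH (IGNORE_DUP_KEY = ON);
```

Do this inside a transaction (or a maintenance window) if other sessions may write to the table, since the table is unprotected between the DROP and the ADD.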
Keep in mind that this flag doesn't allow you to actually store duplicate rows, it simply changes the error that results:
ON
A warning message will occur when duplicate key values are inserted into a unique
index. Only the rows violating the uniqueness constraint will fail.
OFF
An error message will occur when duplicate key values are inserted into a
unique index. The entire INSERT operation will be rolled back.
From http://msdn.microsoft.com/en-us/library/ms175132.aspx
ALTER TABLE [TableName] REBUILD WITH (IGNORE_DUP_KEY = ON)
It only determines what happens when you insert duplicates. See the index_option section of ALTER TABLE:
Specifies the error response when an insert operation attempts to insert duplicate key values into a unique index. The IGNORE_DUP_KEY option applies only to insert operations after the index is created or rebuilt. The option has no effect when executing CREATE INDEX, ALTER INDEX, or UPDATE.
...and it does not apply to PKs.
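The "no effect when executing UPDATE" point can be illustrated with a unique index (table and column names here are made up for the demo): a duplicate INSERT is quietly discarded with a warning, but an UPDATE that would produce the same duplicate still fails with a constraint-violation error.

```sql
CREATE TABLE dbo.demo (id int NOT NULL);
CREATE UNIQUE INDEX ux_demo ON dbo.demo (id)
    WITH (IGNORE_DUP_KEY = ON);

INSERT dbo.demo VALUES (1), (2);

-- Discarded with "Duplicate key was ignored." -- just a warning
INSERT dbo.demo VALUES (1);

-- Still raises a duplicate-key error; IGNORE_DUP_KEY does not cover UPDATE
UPDATE dbo.demo SET id = 1 WHERE id = 2;
```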
The BOL comment for ALTER TABLE about this and "backwards compatibility" is somewhat confusing. I just tried it and BradC is correct.
CREATE TABLE dbo.foo (bar int PRIMARY KEY WITH (FILLFACTOR=90, IGNORE_DUP_KEY = ON))
GO
INSERT dbo.foo VALUES (1)
GO
INSERT dbo.foo VALUES (1)
GO
--gives
(1 row(s) affected)
Duplicate key was ignored.
(0 row(s) affected)
Personally, I never want it to ignore duplicates. If there is a duplicate value for a primary key, it needs to be fixed. I don't want it ignored and the other records inserted, because the user might then think they all got inserted. This setting is a cover-up for a bad insert process. A well-designed process doesn't need it: it cleans the data before inserting (or uses an upsert to update existing rows and insert new ones), and it either sends the bad records to a table where they can be fixed and reinserted, or sends an error back to the user so that they know their record was not inserted.
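The upsert approach mentioned above can be sketched with MERGE (all table and column names here are hypothetical):

```sql
-- Update rows that already exist, insert the rest,
-- instead of relying on IGNORE_DUP_KEY to discard duplicates
MERGE dbo.Target AS t
USING dbo.Staging AS s
    ON t.Col1 = s.Col1 AND t.Col2 = s.Col2
WHEN MATCHED THEN
    UPDATE SET t.Value = s.Value
WHEN NOT MATCHED THEN
    INSERT (Col1, Col2, Value)
    VALUES (s.Col1, s.Col2, s.Value);
```

This way every staged row is accounted for explicitly, rather than being silently dropped.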
Note that this setting only affects what happens if you try to insert a duplicate key, it won't allow you to insert a duplicate key.
If you're attempting to insert duplicate keys, you could drop the primary key index, insert your records, fix up the data (remove duplicates, etc.), then recreate the index.
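That drop/load/fix/recreate sequence might look like the following (names are hypothetical; the ROW_NUMBER dedupe keeps one arbitrary row per key, so add an ORDER BY on a real column if you care which one survives):

```sql
-- 1. Drop the primary key so the load can proceed
ALTER TABLE MyTable DROP CONSTRAINT PK_mypk;

-- 2. ...bulk insert the records here...

-- 3. Remove duplicates, keeping one row per key
WITH numbered AS (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY Col1, Col2
               ORDER BY (SELECT NULL)
           ) AS rn
    FROM MyTable
)
DELETE FROM numbered WHERE rn > 1;

-- 4. Re-create the primary key
ALTER TABLE MyTable ADD CONSTRAINT PK_mypk
    PRIMARY KEY (Col1, Col2);
```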