I am implementing an A/B/View scenario, meaning that the View points to table A, while table B is updated, then a switch occurs and the view points to table B while table A is loaded.
The switch occurs daily. There are millions of rows to update and thousands of users looking at the view. I am on SQL Server 2012.
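The daily switch I have in mind looks roughly like this (just a sketch; the view and table names are placeholders):

```sql
-- Sketch of the daily switch: repoint the view from table A to table B.
-- dbo.CurrentData, dbo.TableA and dbo.TableB are placeholder names.
ALTER VIEW dbo.CurrentData
AS
    SELECT *          -- same column list as the previous definition
    FROM dbo.TableB;  -- was dbo.TableA before the switch
GO
```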
My questions are:
- how do I insert data into a table from another table in the fastest possible way? (within a stored proc)
- Is there any way to use BULK INSERT? Or, is using regular insert/select the fastest way to go?
I'd be inclined to use SSIS.
Make table A an OLE DB source and table B an OLE DB destination. You will bypass the transaction log, which reduces the load on the DB. The only way (I can think of) to do this using T-SQL is to change the recovery model for your entire database, which is far from ideal because it affects logging for every transaction, not just the ones for your transfer.
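For comparison, the T-SQL route I mean would look something like this (a sketch only; YourDatabase and the table names are placeholders, and the recovery model change applies to the whole database):

```sql
-- Sketch of the recovery-model approach mentioned above.
-- YourDatabase, dbo.TableA and dbo.TableB are placeholders.
ALTER DATABASE YourDatabase SET RECOVERY BULK_LOGGED;

INSERT INTO dbo.TableB WITH (TABLOCK)   -- TABLOCK helps the insert qualify for minimal logging
SELECT *
FROM dbo.TableA;

ALTER DATABASE YourDatabase SET RECOVERY FULL;  -- take a log backup afterwards
```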
Setting up SSIS Transfer
Create a new project and drag a dataflow task to your design surface
Double click on your dataflow task, which will take you through to the Data Flow tab. Then drag and drop an OLE DB source from the "Data Flow Sources" menu, and an OLE DB destination from the "Data Flow Destinations" menu.
Double click on the OLE DB source, set up the connection to your server, choose the table you want to load from and click OK. Drag the green arrow from the OLE DB source to the destination then double click on the destination. Set up your connection manager, destination table name and column mappings and you should be good to go.
OLE DB Source docs on MSDN
OLE DB Destination docs on MSDN
You could do a SELECT ColA, ColB INTO DestTable_New FROM SrcTable. Once DestTable_New is loaded, recreate indexes and constraints.
Then rename DestTable to DestTable_Old and rename DestTable_New to DestTable. Renaming is extremely quick. If something turns out to have gone wrong, you also have a backup of the previous table close by (DestTable_Old).
I did this scenario once where we had to have the system running 24/7 and needed to load tens of millions of rows each day.
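A rough sketch of that pattern (table, column and index names are placeholders):

```sql
-- Sketch of the load-and-rename pattern described above; all names are placeholders.
SELECT ColA, ColB
INTO dbo.DestTable_New
FROM dbo.SrcTable;

-- Recreate indexes/constraints on the freshly loaded copy.
CREATE CLUSTERED INDEX IX_DestTable_ColA ON dbo.DestTable_New (ColA);

-- Swap the tables; the renames themselves are metadata-only and very fast.
BEGIN TRAN;
    EXEC sp_rename 'dbo.DestTable',     'DestTable_Old';
    EXEC sp_rename 'dbo.DestTable_New', 'DestTable';
COMMIT;
```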
INSERT ... SELECT functions fairly similarly to BULK INSERT. You could use SSIS, like @GarethD says, but that might be overly complex if you're just copying rows from table1 to table2.
If you are copying serious quantities of data, keep an eye on the transaction log -- it can bloat up pretty fast when doing huge inserts. One work-around is to "chunkify" the data you are inserting, by looping over an insert statement that processes, say, only 100,000 or 10,000 rows at a time (depending on how wide your rows are, i.e. how many MB per pass).
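A sketch of what I mean by chunkifying, assuming an ascending integer key named ID to track progress (all names and the batch size are placeholders):

```sql
-- Sketch of a chunked INSERT ... SELECT; table names, key column and batch size are placeholders.
DECLARE @BatchSize int = 100000;   -- tune to your row width (MB per pass)
DECLARE @LastID    int = 0;        -- assumes an ascending integer key named ID

WHILE 1 = 1
BEGIN
    INSERT INTO dbo.Table2 (ID, ColA, ColB)
    SELECT TOP (@BatchSize) s.ID, s.ColA, s.ColB
    FROM dbo.Table1 AS s
    WHERE s.ID > @LastID
    ORDER BY s.ID;

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to copy

    -- Remember where this batch ended (assumes Table2 only holds rows copied by this loop).
    SELECT @LastID = MAX(ID) FROM dbo.Table2;
END;
```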
(Just curious, are you doing ALTER VIEW to reset the view? I did something similar once, though we had to have four tables and four views to support past/present/next/swap sets.)

You can simply do it like this:
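A minimal sketch (the WHERE clause is just a placeholder for your criteria):

```sql
-- Sketch only: copy rows from B into A; the filter column is a placeholder.
INSERT INTO A
SELECT *
FROM B
WHERE B.BatchDate = CAST(GETDATE() AS date);  -- replace with your actual criteria
```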
This will select the data from B based on the criteria and insert it into A, provided the columns are the same; otherwise you can specify column names instead of *.
I know the question is old, but I was hunting for an answer to the same question and didn't find anything really helpful. Yes the SSIS approach is a possibility, but the question wanted a stored proc.
To my delight I have discovered (almost) the solution that the original question wanted; you can do it with a CLR SP.
Select the data from TableA into a DataTable and then use the WriteToServer(DataTable dt) method of the SqlBulkCopy class with TableB as the DestinationTableName.
The only slight drawback is that the CLR procedure must use external access in order to use SqlBulkCopy, and does not work with the context connection, so you need to fiddle a little bit with permissions and connection strings. But hey, nothing is ever perfect.
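Registering the procedure looks roughly like this (a sketch; the assembly path, class and method names are hypothetical, and EXTERNAL_ACCESS means the assembly has to be signed with an appropriate key or the database marked TRUSTWORTHY):

```sql
-- Sketch only: assembly path, class and method names are hypothetical.
-- EXTERNAL_ACCESS is what forces the extra fiddling with permissions.
CREATE ASSEMBLY BulkCopyProcs
FROM 'C:\clr\BulkCopyProcs.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;

CREATE PROCEDURE dbo.usp_CopyTableAToTableB
AS EXTERNAL NAME BulkCopyProcs.[StoredProcedures].CopyTableAToTableB;
```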
You could do the SELECT ... INTO as a couple of answers suggest; that should be as fast as it can get (depending on how many indexes you'd need to recreate, etc.).
But I would suggest using synonyms in order to change the pointer from one table to another. They're very transparent and, in my opinion, cleaner than updating the view or renaming tables.
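A sketch of the synonym switch (names are placeholders; a synonym can't be altered, so the switch is a drop-and-recreate):

```sql
-- Sketch only: dbo.CurrentData, dbo.TableA and dbo.TableB are placeholder names.
-- Synonyms cannot be altered, so the switch is a drop-and-recreate inside a transaction.
BEGIN TRAN;
    DROP SYNONYM dbo.CurrentData;
    CREATE SYNONYM dbo.CurrentData FOR dbo.TableB;  -- was dbo.TableA before the switch
COMMIT;
```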