Simple scenario. Stored procedure sp_Task1 calls sp_Task2, and sp_Task2 has an output temp table called #Task2_Output. I want sp_Task1 to take #Task2_Output and perform further transactions with it.
How do I get these two things to happen:
1) Make sure sp_Task1 waits until sp_Task2 is completed before moving on to the next line of code.
2) How to check that #Task2_Output was successfully created from code within sp_Task1.
Since you don't mention which DBMS, I assume SQL Server, as these kinds of situations usually arise there.
For Question 1:
In SQL Server 2005, for a temporary table to be accessible to the calling stored procedure, you may have to use ##Task2_Output (that's right, two hashes) rather than #Task2_Output, since temporary table names with the ## prefix are global temporary tables: they are available to the calling procedure but destroyed after their last use. However, you may have to watch out for name conflicts if the table names are not dynamically assigned.
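For illustration, a minimal sketch of that approach as seen from sp_Task1 (the DROP at the end is not from the question, just one way to reduce the name-conflict risk):
-- Inside sp_Task1:
EXEC sp_Task2;                    -- sp_Task2 creates and fills ##Task2_Output
SELECT * FROM ##Task2_Output;     -- visible here because of the ## prefix
DROP TABLE ##Task2_Output;        -- clean up so the next caller does not collide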
For Question 2:
Get sp_Task2 to return a return code, with a RETURN statement.
If the table creation succeeds, say, Return 1.
If the table creation fails (known by checking @@ERROR immediately after the statement), then, say, Return -99.
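A rough sketch of what that could look like inside sp_Task2 (the column list is just a placeholder, not from the question):
CREATE TABLE ##Task2_Output (SomeID int, SomeValue varchar(50));
IF @@ERROR <> 0
    RETURN -99;   -- table creation failed
-- ... populate ##Task2_Output here ...
RETURN 1;         -- table created successfully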
In sp_Task1, call sp_Task2 like the following:
Declare @MyRetCode_Recd_In_Task1 int
EXECUTE @MyRetCode_Recd_In_Task1 = sp_Task2   -- with calling parameters, if any
When you call a stored procedure in the ReturnCode = StoredProcName fashion, the return code returned by sp_Task2 is received by sp_Task1. You can then check the return code in sp_Task1 to see if all is well.
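For example, sp_Task1 could branch on that value along these lines (the error message text is only illustrative):
IF @MyRetCode_Recd_In_Task1 = 1
BEGIN
    SELECT * FROM ##Task2_Output;   -- table exists, carry on with further work
END
ELSE
BEGIN
    RAISERROR('sp_Task2 did not create ##Task2_Output.', 16, 1);
END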
The accepted answer is not correct.
For Question 1:
Make sure sp_Task1 waits until sp_Task2 is completed to move on to the next line of code.
There is nothing you need to do. Calling Stored Procedures and Functions is not asynchronous. If you have the following inside of sp_Task1:
... other SQL commands
EXEC sp_Task2;
... more SQL commands
then the "more SQL commands" will not execute until sp_Task2;
completes (whether successfully or due to an error).
For Question 2:
How to check that #Task2_Output was successfully created from code within sp_Task1
As has already been stated, local temporary tables (i.e. prefix = #) created in a sub-process will cease to exist once that sub-process (i.e. the call to sp_Task2) ends. You could use a global temporary table (i.e. prefix = ##), but then sp_Task1 can only be called by one session/connection at a time, else you will run into a name conflict on ##Task2_Output. But that can easily be avoided by simply creating the local temp table in sp_Task1. The name of that temp table has to be known anyway, as it will be accessed in sp_Task1 after sp_Task2 completes. So there is no reason not to create it first:
CREATE TABLE #Task2_Output (columns...);
... other SQL commands
EXEC sp_Task2;
... more SQL commands
SELECT ...
FROM #Task2_Output;
And then you don't need to test for it anyway.
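For completeness, the corresponding part of sp_Task2 is then just an insert into that table, since a local temp table created in sp_Task1 is visible to the procedures it calls (the column names and source table here are hypothetical):
-- Inside sp_Task2: #Task2_Output was already created by the caller (sp_Task1)
INSERT INTO #Task2_Output (SomeID, SomeValue)
SELECT src.ID, src.Value
FROM dbo.SomeSourceTable src;   -- hypothetical source of the output rows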
Additionally:
If you do want to test for the existence of a temp table, use the following:
IF (OBJECT_ID(N'tempdb..#Task2_Output') IS NOT NULL)
BEGIN
-- non-NULL value from OBJECT_ID() means the table exists
..do something;
END;
You really, really should not prefix stored procedure names with sp_. That prefix is reserved for system procs and is handled as a special case by SQL Server: it will first check for that proc in the [master] database and then in the local, or specified, database. Hence it is an unnecessary performance hit.
Rather than using @@ERROR, use the TRY / CATCH construct; it is much more effective.
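A short sketch of that approach inside sp_Task2 (the insert and error handling are illustrative only, using the same hypothetical columns as above):
BEGIN TRY
    -- work that populates #Task2_Output (created by sp_Task1 in the pattern above)
    INSERT INTO #Task2_Output (SomeID, SomeValue)
    SELECT src.ID, src.Value
    FROM dbo.SomeSourceTable src;
END TRY
BEGIN CATCH
    -- surface the error to the caller instead of relying on @@ERROR checks
    DECLARE @ErrMsg nvarchar(4000);
    SET @ErrMsg = ERROR_MESSAGE();
    RAISERROR(@ErrMsg, 16, 1);
    RETURN;
END CATCH;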