I've generated an SQL file full of INSERT statements but can't find any documentation on executing this script from a stored procedure - sql

I'm creating a stored procedure that will delete all the data in my database and then insert the data from my SQL file. The reason I am using delete-and-insert instead of a restore is that a restore requires that no one is connected to the database, whereas deleting and inserting allows people to stay connected.
Stored Procedure:
CREATE PROCEDURE DropAndRestore
    -- Add the parameters for the stored procedure here
    @FilePath nvarchar(200)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    EXEC sp_MSFOREACHTABLE 'delete from ?'

    RESTORE DATABASE [landofbeds] -- These lines are what needs to be replaced
    FROM DISK = @FilePath         --
END
GO

The reason I am using the delete and insert instead of a restore is
because a restore requires that no one is connected to the database,
whereas deleting and inserting allows people to stay connected
If all you need is minimum downtime, you can restore your database as db_copy, then drop your db and rename db_copy to db.
Yes, you will need to disconnect all users to drop your db, but that takes minimal time, whereas if you delete your data the tables will be unavailable for the whole duration of the delete, and since a delete is always fully logged your users will wait.
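A minimal sketch of that restore-and-swap approach (the backup path and logical file names are illustrative and must match your actual backup):
RESTORE DATABASE landofbeds_copy
FROM DISK = N'C:\backups\landofbeds.bak'
WITH MOVE 'landofbeds' TO N'C:\data\landofbeds_copy.mdf',
     MOVE 'landofbeds_log' TO N'C:\data\landofbeds_copy.ldf';
-- kick out remaining connections, then swap names
ALTER DATABASE landofbeds SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE landofbeds;
ALTER DATABASE landofbeds_copy MODIFY NAME = landofbeds;
ALTER DATABASE landofbeds SET MULTI_USER;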
To launch your script you can use xp_cmdshell to call sqlcmd with -i, but that's not a good idea: you have no control over the script's execution, and if something goes wrong you will have even more downtime for your users.
Do your tables have FKs defined?
Exec sp_MSFOREACHTABLE 'delete from ?'
will try to delete everything in whatever order it decides, and you may end up with errors when you try to delete rows that are referenced by other tables.

To execute your SQL file from a stored procedure you can use xp_cmdshell. See the steps below.
First create a batch file (C:\testApps\test.bat) that executes your SQL file, e.g.:
osql -S TestSQlServer -E -i C:\testApps\test.sql > C:\testApps\tlog.txt
Then add this line to your calling stored procedure:
exec xp_cmdshell 'C:\testApps\test.bat'
Execute your procedure.
Please note you will need to enable xp_cmdshell.
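For reference, xp_cmdshell is disabled by default and is enabled through sp_configure (requires sysadmin); a minimal sketch:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;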

You can use bulk insert like this:
BULK INSERT landofbeds.dbo.SalesOrderDetail
FROM '\\computer\share\folder\neworders.txt'
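Note that BULK INSERT loads a data file rather than running INSERT statements, so it only fits if your file contains delimited data. A hedged sketch with explicit terminators (the path and table are the placeholders from the line above):
BULK INSERT landofbeds.dbo.SalesOrderDetail
FROM '\\computer\share\folder\neworders.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');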

Related

Loop Through All SSMS Databases without Recreating Stored Procedure

Background Information:
In Python, I might write something like this if I want to apply the same logic to different values in a list.
database_list = ["db_1", "db_2", "db_3"]
for database_name in database_list:
    print("the database name is " + database_name)
What I am trying to do:
What I am trying to do in SSMS is pull a list of DB objects for each database. I created a stored procedure that pulls exactly what I want, but I have to run it against each database, so 10 databases means running it 10 times.
My goal is to do this with a T-SQL query instead of Python.
I tried doing something like this:
exec sp_MSforeachdb 'USE ?; EXEC [dbo].[my_stored_procedure]';
The problem with this is, [dbo].[my_stored_procedure] has to exist in every database I want to do this in.
How can I create the stored procedure in 1 database, but execute it for all databases or a list of databases that I choose?
I know what you are trying to do and if it's what I think (you seem reluctant to actually say!) you can do the following:
In the master database, create your procedure. Normally you wouldn't do this, but in this case you must, and you must prefix its name with sp_
use master
go
create procedure sp_testproc as
select top 10 * from sys.tables
go
Now if you run this, it will return tables from the master database.
If you switch context to another database and exec master.dbo.sp_testproc, it will still return tables from the master database.
In master, run
EXEC sys.sp_MS_marksystemobject sp_testproc
Now switch context to a different database and exec master.dbo.sp_testproc
It will return tables from the database you are using.
Try creating your sproc in master and naming it with an sp_ prefix:
USE master
GO
CREATE PROCEDURE sp_sproc_name
AS
BEGIN
...
END
GO
-- You *may* need to mark it as a system object
EXEC sys.sp_MS_marksystemobject sp_sproc_name
See: https://nickstips.wordpress.com/2010/10/18/sql-making-a-stored-procedure-available-to-all-databases/
It should then be available in all dbs
Create the stored procedure in the Master database with the sp_ prefix, and use dynamic SQL in the stored procedure so it resolves object names relative to the current database, rather than the database which contains the stored procedure.
EG
use master
go
CREATE OR ALTER PROCEDURE [dbo].[sp_getobjects]
AS
exec ('
select *
from [sys].[objects]
where is_ms_shipped = 0
order by type, name
')
go
use AdventureWorks2017
exec sp_getobjects
@LunchBox - it's your single stored procedure (the one you create in one database) that is actually going to need to contain the "EXEC sp_MSforeachdb ..." command, and instead of the command to be executed being "EXEC [dbo].[my_stored_procedure]", it will need to be the actual SQL that you were going to put into the stored proc.
Eg. (inside your single stored procedure)
EXEC sp_MSforeachdb 'USE ?; SELECT * FROM <table>; UPDATE <another table> SET ...';
Think of the stored procedure (that you put into one database) as being no different than your Python code file - if you had actually wanted to achieve the same thing in Python, you would have either needed to create the stored proc in each database, or build the SQL statement string in Python and execute it against each database.
I understand what you thought you might be able to achieve with SQL, but stored procedures really don't work the way you were expecting. Even when you're in the context of a different database, if you run EXEC <different_db>..stored_proc, that stored proc runs in the context of the database in which it exists (not your current database).
Now, the one issue you may come up against is that the standard sp_MSforeachdb stored proc has a limit of 2000 characters for the command that can be executed (although it does have multiple "command" parameters, this may not be practical if you were planning on running a very large code block, perhaps with variables that carry all the way through). If this might impact what you're intending to do, you could search online for "sp_MSforeachdb alternatives" - there seem to be a handful that people have created where the command parameter can contain a larger string.
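If that limit does bite, a hand-rolled loop over sys.databases avoids sp_MSforeachdb entirely. A hedged sketch (the database list and the inner query are illustrative; substitute your own):
DECLARE @db sysname, @cmd nvarchar(max);
DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE name IN ('db_1', 'db_2', 'db_3'); -- your list of databases
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- the USE only applies inside this dynamic batch
    SET @cmd = N'USE ' + QUOTENAME(@db) + N'; '
             + N'SELECT DB_NAME() AS db_name, name, type_desc '
             + N'FROM sys.objects WHERE is_ms_shipped = 0;';
    EXEC (@cmd);
    FETCH NEXT FROM db_cursor INTO @db;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;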

Stored procedure with multiple 'INSERT INTO Table_Variable EXECUTE stored_procedure' statements [duplicate]

I have three stored procedures Sp1, Sp2 and Sp3.
The first one (Sp1) executes the second one (Sp2) and saves the returned data into #tempTB1, and the second one executes the third one (Sp3) and saves its data into #tempTB2.
If I execute Sp2 it works and returns all my data from Sp3, but the problem is in Sp1: when I execute it, it displays this error:
INSERT EXEC statement cannot be nested
I tried moving the EXEC Sp2 call and it displayed another error:
Cannot use the ROLLBACK statement
within an INSERT-EXEC statement.
This is a common issue when attempting to 'bubble up' data from a chain of stored procedures. A restriction in SQL Server is that you can only have one INSERT-EXEC active at a time. I recommend looking at How to Share Data Between Stored Procedures, which is a very thorough article on patterns to work around this type of problem.
For example, a workaround could be to turn Sp3 into a table-valued function.
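For instance, a hypothetical inline table-valued function replacing an Sp3 that returns two fixed rows might look like this (the function name and columns are illustrative):
CREATE FUNCTION dbo.fn_Sp3Data()
RETURNS TABLE
AS
RETURN
    SELECT 1 AS ID, 'Data1' AS Data
    UNION ALL
    SELECT 2, 'Data2';
GO
-- Sp2 can then load the rows without INSERT-EXEC:
-- INSERT INTO #tempTB2 SELECT ID, Data FROM dbo.fn_Sp3Data();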
This is the only "simple" way to do this in SQL Server without some giant convoluted created function or executed sql string call, both of which are terrible solutions:
create a temp table
openrowset your stored procedure data into it
EXAMPLE:
INSERT INTO #YOUR_TEMP_TABLE
SELECT * FROM OPENROWSET('SQLOLEDB', 'Server=(local);TRUSTED_CONNECTION=YES;', 'set fmtonly off EXEC [DatabaseName].dbo.[StoredProcedureName] 1,2,3')
Note: you MUST use 'set fmtonly off', AND you CANNOT add dynamic SQL to this inside the OPENROWSET call, either for the string containing your stored procedure parameters or for the table name. That's why you have to use a temp table rather than a table variable, which would have been better, as table variables outperform temp tables in most cases.
OK, encouraged by jimhark, here is an example of the old single-hash-table approach:
CREATE PROCEDURE SP3 as
BEGIN
SELECT 1, 'Data1'
UNION ALL
SELECT 2, 'Data2'
END
go
CREATE PROCEDURE SP2 as
BEGIN
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
INSERT INTO #tmp1
EXEC SP3
else
EXEC SP3
END
go
CREATE PROCEDURE SP1 as
BEGIN
EXEC SP2
END
GO
/*
--I want some data back from SP3
-- Just run the SP1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--Try run this - get an error - can't nest Execs
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
INSERT INTO #tmp1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--However, if we run this single hash temp table it is in scope anyway so
--no need for the exec insert
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
EXEC SP1
SELECT * FROM #tmp1
*/
My work around for this problem has always been to use the principle that single hash temp tables are in scope to any called procs. So, I have an option switch in the proc parameters (default set to off). If this is switched on, the called proc will insert the results into the temp table created in the calling proc. I think in the past I have taken it a step further and put some code in the called proc to check if the single hash table exists in scope, if it does then insert the code, otherwise return the result set. Seems to work well - best way of passing large data sets between procs.
This trick works for me.
You don't have this problem on a remote server, because on a remote server the last INSERT command waits for the result of the previous command to execute. That's not the case on the same server.
Take advantage of that situation for a workaround.
If you have the right permission to create a Linked Server, do it.
Create the same server as a linked server:
in SSMS, log into your server
go to "Server Objects"
right-click "Linked Servers", then "New Linked Server"
in the dialog, give your linked server any name, e.g. THISSERVER
server type is "Other data source"
Provider: Microsoft OLE DB Provider for SQL Server
Data source: your IP; it can also be just a dot (.), because it's localhost
go to the "Security" tab and choose the third option, "Be made using the login's current security context"
you can edit the server options (third tab) if you want
press OK; your linked server is created
Now your SQL command in SP1 is
insert into #myTempTable
exec THISSERVER.MY_DATABASE_NAME.MY_SCHEMA.SP2
Believe me, it works even if you have a dynamic insert in SP2.
A workaround I found is to convert one of the procs into a table-valued function. I realize that is not always possible, and it introduces its own limitations. However, I have always been able to find at least one of the procedures that is a good candidate for this. I like this solution because it doesn't introduce any "hacks".
I encountered this issue when trying to import the results of a stored proc into a temp table while that stored proc itself inserted into a temp table as part of its own operation. The underlying problem is that SQL Server does not allow the same process to have two INSERT-EXEC operations active at the same time.
The accepted OPENROWSET answer works fine, but I needed to avoid using any Dynamic SQL or an external OLE provider in my process, so I went a different route.
One easy workaround I found was to change the temporary table in my stored procedure to a table variable. It works exactly the same as it did with a temp table, but no longer conflicts with my other temp table insert.
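A minimal sketch of that change, with hypothetical names (the source table is illustrative):
-- Before, inside the stored procedure:
-- CREATE TABLE #work (ID int, Data varchar(20));
-- INSERT INTO #work (ID, Data) SELECT ID, Data FROM dbo.SourceTable;
-- After: same shape, declared as a table variable
DECLARE @work TABLE (ID int, Data varchar(20));
INSERT INTO @work (ID, Data) SELECT ID, Data FROM dbo.SourceTable;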
Just to head off the comment I know that a few of you are about to write, warning me off Table Variables as performance killers... All I can say to you is that in 2020 it pays dividends not to be afraid of Table Variables. If this was 2008 and my Database was hosted on a server with 16GB RAM and running off 5400RPM HDDs, I might agree with you. But it's 2020 and I have an SSD array as my primary storage and hundreds of gigs of RAM. I could load my entire company's database to a table variable and still have plenty of RAM to spare.
Table Variables are back on the menu!
I recommend reading this entire article. Below is the most relevant section of that article that addresses your question:
Rollback and Error Handling is Difficult
In my articles on Error and Transaction Handling in SQL Server, I suggest that you should always have an error handler like
BEGIN CATCH
    IF @@trancount > 0 ROLLBACK TRANSACTION
    EXEC error_handler_sp
    RETURN 55555
END CATCH
The idea is that even if you do not start a transaction in the procedure, you should always include a ROLLBACK, because if you were not able to fulfil your contract, the transaction is not valid.
Unfortunately, this does not work well with INSERT-EXEC. If the called procedure executes a ROLLBACK statement, this happens:
Msg 3915, Level 16, State 0, Procedure SalesByStore, Line 9 Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
The execution of the stored procedure is aborted. If there is no CATCH handler anywhere, the entire batch is aborted, and the transaction is rolled back. If the INSERT-EXEC is inside TRY-CATCH, that CATCH handler will fire, but the transaction is doomed, that is, you must roll it back. The net effect is that the rollback is achieved as requested, but the original error message that triggered the rollback is lost. That may seem like a small thing, but it makes troubleshooting much more difficult, because when you see this error, all you know is that something went wrong, but you don't know what.
I had the same issue and concern over duplicate code in two or more sprocs. I ended up adding an additional attribute for "mode". This allowed common code to exist inside one sproc and the mode directed flow and result set of the sproc.
What about just storing the output in a static table? Like:
-- SubProcedure: subProcedureName
---------------------------------
-- Save the value
DELETE lastValue_subProcedureName
INSERT INTO lastValue_subProcedureName (Value)
SELECT @Value
-- Return the value
SELECT @Value
-- Procedure
--------------------------------------------
-- get last value of subProcedureName
SELECT Value FROM lastValue_subProcedureName
It's not ideal, but it's simple and you don't need to rewrite everything.
UPDATE:
The previous solution does not work well with parallel queries (async and multi-user access), so now I am using temp tables.
-- A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished.
-- The table can be referenced by any nested stored procedures executed by the stored procedure that created the table.
-- The table cannot be referenced by the process that called the stored procedure that created the table.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NULL
CREATE TABLE #lastValue_spGetData (Value INT)
-- trigger stored procedure with special silent parameter
EXEC dbo.spGetData 1 --silent mode parameter
The nested spGetData stored procedure's content:
-- Save the output if temporary table exists.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NOT NULL
BEGIN
DELETE #lastValue_spGetData
INSERT INTO #lastValue_spGetData(Value)
SELECT Col1 FROM dbo.Table1
END
-- stored procedure return
IF @silentMode = 0
SELECT Col1 FROM dbo.Table1
Declare an output cursor variable in the inner sp:
@c CURSOR VARYING OUTPUT
Then declare a cursor c to the select you want to return.
Then open the cursor.
Then set the reference:
DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
SELECT ...
OPEN c
SET @c = c
DO NOT close or deallocate.
Now call the inner sp from the outer one, supplying a cursor parameter like:
exec sp_abc a, b, c, @cOUT OUTPUT
Once the inner sp executes, your @cOUT is ready to fetch. Loop, then close and deallocate.
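A hedged end-to-end sketch of this pattern (the proc name and inner query are hypothetical):
CREATE PROCEDURE dbo.sp_inner @c CURSOR VARYING OUTPUT
AS
BEGIN
    DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
        SELECT name FROM sys.objects;
    OPEN c;
    SET @c = c; -- hand the open cursor back to the caller
END
GO
DECLARE @cOUT CURSOR, @name sysname;
EXEC dbo.sp_inner @cOUT OUTPUT;
FETCH NEXT FROM @cOUT INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name;
    FETCH NEXT FROM @cOUT INTO @name;
END
CLOSE @cOUT;
DEALLOCATE @cOUT;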
If you are able to use other associated technologies such as C#, I suggest using the built in SQL command with Transaction parameter.
var sqlCommand = new SqlCommand(commandText, connection, transaction);
I've created a simple Console App that demonstrates this ability which can be found here:
https://github.com/hecked12/SQL-Transaction-Using-C-Sharp
In short, C# allows you to overcome this limitation where you can inspect the output of each stored procedure and use that output however you like, for example you can feed it to another stored procedure. If the output is ok, you can commit the transaction, otherwise, you can revert the changes using rollback.
On SQL Server 2008 R2, I had a mismatch in table columns that caused the Rollback error. It went away when I fixed my sqlcmd table variable, populated by the INSERT-EXEC statement, to match what the stored proc returned; it was missing org_code. In a Windows cmd file, this loads the result of the stored procedure and selects it.
set SQLTXT= declare @resets as table (org_id nvarchar(9), org_code char(4), ^
tin char(9), old_strt_dt char(10), strt_dt char(10)); ^
insert @resets exec rsp_reset; ^
select * from @resets;
sqlcmd -U user -P pass -d database -S server -Q "%SQLTXT%" -o "OrgReport.txt"

Alter or create multiple stored procedures at once from multiple files in SQL Server 2008

I have a large number of stored procedures that I update often and then transfer to a duplicate database on another server. I have been opening each "storedproc.sql" file from within SQL Server Management Studio 2008 and then clicking Execute in the toolbar, which will either create or alter an existing stored procedure. I have been doing this for each stored procedure.
I am looking for a script (or another way) that will let me alter all of the stored procedures on the database, at one time, with the ones located in a folder. I am basically looking for a script that will do something similar to the pseudo-code-like text below.
USE [DatabaseName]
UPDATE [StoredProcName]
USING [directory\file\path\fileName.sql]
UPDATE [StoredProcNameN]
USING [directory\file\path\fileNameN.sql]
…
Not the cleanest pseudo-code but hopefully you understand the idea. I would even be willing to drop all of the stored procedures (based on name) and then create the same stored procedures again on the database. If you need more clarity don’t hesitate to comment, I thank you in advance.
To further explain:
I am changing every reporting stored procedure for an SSRS conversion project. Once the report is developed, I move the report and the stored procedure to a server. I then have to manually run (ALTER or CREATE) each stored procedure against the duplicated database so the database will now be able to support the report on the server. So far this has not been too much trouble, but I will eventually have 65 to 85 stored procedures; and if I have to add one dataset field to each one, then I will have to run each one manually to update the duplicate database.
What I want to be able to do is have a SQL script that says: For this database, ALTER/CREATE this named stored procedure and you can find that .sql text file with the details in this folder.
Here is some code that I use to move all stored procedures from one database to another:
DECLARE @SPBody nvarchar(max);
DECLARE @SPName nvarchar(4000);
DECLARE @query nvarchar(max);  -- used below for the DROP statement
DECLARE @ErrMsg nvarchar(max); -- used below for error logging
DECLARE @SPCursor CURSOR;
SET @SPCursor = CURSOR FOR
    SELECT ao.name, sm.definition
    FROM <SOURCE DATABASE>.sys.all_objects ao JOIN
         <SOURCE DATABASE>.sys.sql_modules sm
         ON sm.object_id = ao.object_id
    WHERE ao.type = 'P' and SCHEMA_NAME(ao.schema_id) = 'dbo'
    ORDER BY 1;
OPEN @SPCursor;
FETCH NEXT FROM @SPCursor INTO @SPName, @SPBody;
WHILE @@FETCH_STATUS = 0
BEGIN
    if exists(select * from <DESTINATION DATABASE>.INFORMATION_SCHEMA.Routines r where r.ROUTINE_NAME = @SPName)
    BEGIN
        SET @query = N'DROP PROCEDURE '+@SPName;
        exec <DESTINATION DATABASE>..sp_executesql @query;
    END;
    BEGIN TRY
        exec <DESTINATION DATABASE>..sp_executesql @SPBody;
    END TRY
    BEGIN CATCH
        select @ErrMsg = 'Error creating '+@SPName+': "'+Error_Message()+'" ('+@SPBody+')';
        --exec sp__LogInfo @ProcName, @ErrMsg, @ProductionRunId;
    END CATCH;
    FETCH NEXT FROM @SPCursor INTO @SPName, @SPBody;
END;
You need to put in <SOURCE DATABASE> and <DESTINATION DATABASE> as appropriate.
For Reference
c:\>for %f in (*.sql) do sqlcmd /S <servername> /d <dbname> /E /i "%f"
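Note: that form is for typing at the command prompt; inside a .bat file you would double the percent signs:
for %%f in (*.sql) do sqlcmd /S <servername> /d <dbname> /E /i "%%f"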
I recommend saving all your stored procedure script files starting with if exists(...) drop procedure followed by the create procedure section. Optionally include a go statement at the end depending on your needs.
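A minimal sketch of such a script file header (the procedure name is hypothetical):
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'dbo.MyProc') AND type = 'P')
    DROP PROCEDURE dbo.MyProc;
GO
CREATE PROCEDURE dbo.MyProc
AS
BEGIN
    SET NOCOUNT ON;
    -- procedure body here
END
GO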
You can then use a tool to concatenate all the files together into a single script file.
I use a custom tool for this that allows me to define dependency order, specify batch separators, script types, include folders, etc. Some text editors, such as UltraEdit have this capability.
You can also use the Microsoft Database Project to select batches of script files, and execute them against one or more database connections stored in the project. This is a good starting place that doesn't require any extra software, but can be a bit of a pain regarding adding and managing folders and files within the project.
Using a schema comparison tool such as RedGate's SQL Compare can be useful to synchronize the schema and/or objects of two databases. I don't recommend using this as a best practice deployment or "promote to production" tool though.

Can we delete the physical file from server when I delete the corresponding entry from database?

Can we delete the physical file from server when I delete the corresponding entry from database?
i.e., C:\Test\Test.txt -> when deleting this record from database, i need to delete the corresponding Test.txt file from mentioned location.
Is there a way? I'm using SQL 2008.
Any help would be highly appreciated.
Thanks
The ways are:
use the xp_cmdshell proc (exec master..xp_cmdshell 'del C:\Test\Test.txt')
use a .NET CLR unsafe proc (you need to write it in a .NET language and deploy it to SQL Server; it's a long story)
Both ways are ugly.
And once again, it is the worst practice. A server should not delete user files, or any files, if they are not an integral part of its database.
You could use CREATE TRIGGER ... FOR DELETE to create a trigger that runs when rows are deleted. The SQL statement that runs upon deletion can walk through the deleted table to get the deleted rows, and for each row it can exec xp_cmdshell. xp_cmdshell is disabled by default, so you must enable it first using exec sp_configure.
I didn't test this, but I think it should work.
Try writing a stored procedure which takes the filename as a parameter and deletes the file using:
exec master.dbo.xp_cmdshell 'del <Filename>'
Then create an AFTER DELETE trigger on the table containing the filenames which calls the stored procedure, providing the filename from the deleted table; or maybe you can run the exec master.dbo.xp_cmdshell 'del <Filename>' command directly from the trigger, as in the sketch below.
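A hedged sketch of that trigger (the table and column names are hypothetical, and xp_cmdshell must be enabled):
CREATE TRIGGER trg_Files_Delete ON dbo.Files
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @path nvarchar(260), @cmd varchar(8000);
    -- deleted holds one row per removed record
    DECLARE del_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT FilePath FROM deleted;
    OPEN del_cursor;
    FETCH NEXT FROM del_cursor INTO @path;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @cmd = 'del "' + @path + '"';
        EXEC master.dbo.xp_cmdshell @cmd, no_output;
        FETCH NEXT FROM del_cursor INTO @path;
    END
    CLOSE del_cursor;
    DEALLOCATE del_cursor;
END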
The better way would be to store the files as objects in the database instead of file paths; then when deleting the record you just delete the file object with it.

SQL Server Transactions how can I commit my transaction

I have a SQL Server 2005 stored procedure. Someone is calling my stored procedure within a transaction. In my stored proc I'm logging some information (an insert into a table). When the higher-level transaction rolls back, it removes my insert.
Is there anyway I can commit my insert and prevent the higher level rollback from removing my insert?
Thanks
Even if you start a new transaction, it will be nested within the outer transaction. SQL Server guarantees that a rollback will result in an unmodified database state. So there is no way you can insert a row inside an aborted transaction.
Here's a way around it; it's a bit of a trick. Create a linked server with rpc out = true and remote proc transaction promotion = false. The linked server can point to the same server your procedure is running on. Then, you can use execute (<query>) at <server> to execute something in a new transaction.
if OBJECT_ID('logs') is not null drop table logs
create table logs (id int primary key identity, msg varchar(max))
if OBJECT_ID('TestSp') is not null drop procedure TestSp
go
create procedure TestSp as
execute ('insert into dbo.logs (msg) values (''test message'')') at LINKEDSERVER
go
begin transaction
exec TestSp
rollback transaction
select top 10 * from logs
This will end with a row in the log table, even though the transaction was rolled back.
Here's example code to create such a linked server:
IF EXISTS (SELECT srv.name FROM sys.servers srv WHERE srv.server_id != 0 AND
    srv.name = N'LINKEDSERVER')
EXEC master.dbo.sp_dropserver @server=N'LINKEDSERVER',
    @droplogins='droplogins'
EXEC master.dbo.sp_addlinkedserver @server = N'LINKEDSERVER',
    @srvproduct=N'LOCALHOST', @provider=N'SQLNCLI', @datasrc=N'LOCALHOST',
    @catalog=N'DatabaseName'
EXEC master.dbo.sp_serveroption @server=N'LINKEDSERVER', @optname=N'rpc out',
    @optvalue=N'true'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'LINKEDSERVER',
    @useself=N'True', @locallogin=NULL, @rmtuser=NULL, @rmtpassword=NULL
EXEC master.dbo.sp_serveroption @server=N'LINKEDSERVER',
    @optname=N'remote proc transaction promotion', @optvalue=N'false'
In Oracle you would use autonomous transactions for that, however, SQL Server does not support them.
It is possible to declare a table variable and return its contents from your stored procedure.
Table variables survive a ROLLBACK; however, the calling code has to be modified to read the variable and store its data permanently.
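A minimal sketch of why this works (dbo.logs is the table from the example above; the failing work is illustrative):
DECLARE @log TABLE (msg varchar(max));
BEGIN TRANSACTION
INSERT INTO @log (msg) VALUES ('step 1 completed');
-- ... work that fails ...
ROLLBACK TRANSACTION
-- the table variable still holds its row; persist it outside the transaction
INSERT INTO dbo.logs (msg)
SELECT msg FROM @log;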
Depending on permissions, you could call out using xp_cmdshell to OSQL thereby creating an entirely separate connection. You might be able to do something similar with the CLR, although I've never tried it. However, I strongly advise against doing something like this.
Your best bet is to establish what the conventions are for your code and the calling code - what kind of a contract is supported between the two. You could make it a rule that your code is never called within another transaction (probably not a good idea) or you could give requirements on what the calling code is responsible for when an error occurs.
Anything inside of a transaction will be part of that transaction. If you don't want it to be part of that transaction then do not put it inside.