I have a script in T-SQL that goes like this:
create table TableName (...)
SET IDENTITY_INSERT TableName ON
On the second line I get this error:
Cannot find the object "TableName" because it does not exist or you do not have permissions.
I execute it from Management Studio 2005. When I put "GO" between these two lines, it works. But what I would like to accomplish is to avoid using "GO", because I want to place this code in my application once it is finished.
So my question is: how do I make this work without using "GO", so that I can run it programmatically from my C# application?
Without using GO, programmatically, you would need to make 2 separate database calls.
Run the two scripts one after the other - using two calls from your application.
You should only run the second script once the first has run successfully anyway, so run the first script and, on success, run the second. The table has to exist before you can use it, which is why you need the GO in Management Studio.
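For example, a minimal C# sketch of the two calls (the column list is hypothetical, since the question elides it, and an open SqlConnection named conn is assumed):

using (var cmd = new SqlCommand(
    "create table TableName (Id int identity(1,1), Name nvarchar(50))", conn))
{
    cmd.ExecuteNonQuery();   // first batch: create the table
}
using (var cmd = new SqlCommand("SET IDENTITY_INSERT TableName ON", conn))
{
    cmd.ExecuteNonQuery();   // second batch: by now the table exists
}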
From the BOL: "SQL Server utilities interpret GO as a signal that they should send the current batch of Transact-SQL statements to SQL Server". Therefore, as Jose Basilio already pointed out, you have to make separate database calls.
If this can help: I was faced with the same problem, and I had to write a little (very basic) parser to split each script into a bunch of mini-scripts, which are sent, one at a time, to the database.
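A very basic splitter along those lines might look like this in C# (a sketch only: it splits on lines consisting solely of GO and will not handle GO inside strings or comments, or the "GO n" repeat count):

using System.Data.SqlClient;
using System.Text.RegularExpressions;

static void ExecuteBatches(SqlConnection conn, string script)
{
    // split on lines that contain only GO (case-insensitive)
    foreach (var batch in Regex.Split(script, @"^\s*GO\s*$",
                                      RegexOptions.Multiline | RegexOptions.IgnoreCase))
    {
        if (batch.Trim().Length == 0) continue;   // skip empty fragments
        using (var cmd = new SqlCommand(batch, conn))
            cmd.ExecuteNonQuery();                // one round trip per batch
    }
}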
Something even better than tpdi's temp table is a table variable. They run lightning fast and are dropped automatically once out of scope.
This is how you make one:
declare @TableName table (ColumnName int, ColumnName2 nvarchar(50))
Then to insert you just do this:
insert into @TableName (ColumnName, ColumnName2)
select 1, 'A'
Consider writing a stored proc that creates a temporary table and does whatever it needs to with that. If you create a real table, your app won't be able to run the script more than once, unless it also drops the table -- in which case, you have exactly the functionality of a temp table.
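A minimal sketch of that approach (all names hypothetical):

CREATE PROC dbo.LoadSampleData
AS
BEGIN
    -- #Work exists only for the duration of this procedure
    CREATE TABLE #Work (Id int identity(1,1), Name nvarchar(128));

    INSERT INTO #Work (Name)
    SELECT name FROM sys.objects;   -- stand-in for the real work

    SELECT Id, Name FROM #Work;
END   -- #Work is dropped automatically here, so the proc can be re-run freely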
Related
I'm trying to write a T-SQL script to create a database and the corresponding tables. I'm having a problem where the USE statement complains that the database that I just "created" doesn't exist. If I run the script within SQL Server Management Studio so that I can make use of the GO statement, I don't get this issue.
Is there a T-SQL equivalent of GO that I can use to make sure the CREATE DATABASE gets executed before the USE?
I've tried BEGIN/COMMIT TRANSACTION and BEGIN/END but they didn't help.
Is there a T-SQL equivalent of GO that I can use to make sure the CREATE DATABASE gets executed before the USE?
Yes. Dynamic SQL. Each dynamic SQL invocation is parsed, compiled, and executed as a separate batch.
E.g.:
exec ('
create database foo
')
exec ('
use foo
create table bar(id int)
')
Note that when used in dynamic SQL, USE database only changes the database context for the dynamic batch. When control returns to the calling batch, the database context is restored.
In C# you should use a separate SqlCommand call for each batch.
High-level steps:
Open a connection to master.
Create the new database (just the CREATE DATABASE statement).
Instead of USE, call the SqlConnection.ChangeDatabase(String) method.
Execute the remaining batches.
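A sketch of those steps, reusing the foo/bar example from above (the connection string is an assumption):

using System.Data.SqlClient;

using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
{
    conn.Open();
    using (var cmd = new SqlCommand("create database foo", conn))
        cmd.ExecuteNonQuery();            // batch 1: create the database
    conn.ChangeDatabase("foo");           // instead of USE foo
    using (var cmd = new SqlCommand("create table bar(id int)", conn))
        cmd.ExecuteNonQuery();            // batch 2: runs in the new database
}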
I'm pretty new to SQL and SQL Server. I'm trying to run an ALTER PROCEDURE query from a .sql file called through C# code. Before I move on to making sure my query does what it's supposed to do, I want to verify that my ALTER PROCEDURE query actually altered the procedure, but I don't know how to verify that.
For example, in SQL Server, I can see where the stored procedure I'm trying to edit lives, in:
- database-name
  - Programmability/
    - Stored Procedures/
      - dbo.MyStoredProcedure
If my ALTER TABLE query worked correctly, would I be able to see my procedure code here, or would I check somewhere else? Or am I thinking about this the wrong way?
Generally, we rely on error and exception messages to tell us when something like this has not worked. However, I suppose it is possible that the procedure that was ALTERed was not the one that was intended (implying bugs in the name/path/call construction, of course).
In that case, you can get the current text of any SQL module (procedure, view, trigger, etc.; anything script-based) from the sys.sql_modules catalog view:
SELECT definition FROM sys.sql_modules
WHERE object_id=OBJECT_ID('dbo.UserSamples_Insert')
You should note that usually when something like this happens without an error message it is because either:
You are executing in the wrong database (like PROD when you meant to be in DEV or vice-versa), or
You are not using the correct Schema (because you can make and use schemas other than 'dbo').
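To rule out both causes, a quick check you can run from the same connection your application uses (DB_NAME and SCHEMA_NAME are standard built-ins; called with no argument, SCHEMA_NAME returns the caller's default schema):

SELECT DB_NAME() AS current_database, SCHEMA_NAME() AS default_schema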
Wait, you say ALTER PROCEDURE twice, but then the third time you say ALTER TABLE. Which is it? I ask because, unlike almost every other SQL object, tables are not script-based, and their definition cannot be found in any of the SQL script repositories like sys.sql_modules. I actually use either SMO (from a client) or a tool that @SeanLange wrote years ago for that (from the server itself).
I'm using SQL Server 2012, and I'm debugging a stored procedure that does some INSERT INTO #temporal ... SELECT statements.
Is there any way to view the data selected by the command (the subquery of the INSERT INTO)?
Is there any way to view the data that was inserted, and/or the temporary table the insert made its changes to?
It doesn't matter if it's all the rows at once rather than one by one.
UPDATE:
Requirements from AT Compliance and Company Policy dictate that no modification can be made during the testing process, and it's probable this will be managed by another team. Is there any way to avoid making any change to the script?
The main idea is that the AT user checks the outputs on their desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be returned to the processing application for use in such things as confirmation messages, archiving, and other such application requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
Will give you all the rows from the INSERT statement that were inserted into the temporal table, which were selected from the other table.
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer is that you get a bit more control here. And if you're in a situation where you have multiple inserts into your temp table (hopefully you aren't), you can do a single select to see all of the inserted rows at once.
I would still probably do it the other way, though (now that I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that allows you to put in the data tests you will want. Make it the last parameter with a default value of 0, and you won't even have to change your current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions, so that a bug will not affect the data.
IF @debug = 1
BEGIN
SELECT * FROM #temp
END
I have a stored procedure that causes blocking on my SQL server database. Whenever it does block for more than X amount of seconds we get notified with what query is being run, and it looks similar to below.
CREATE PROC [dbo].[sp_problemprocedure] (
    @orderid INT
)
AS
--procedure code

How can I tell what the value is for @orderid? I'd like to know the value because this procedure runs 100+ times a day but only causes blocking a handful of times, and if we can find some sort of pattern between the order ids, maybe I'd be able to track down the problem.
The procedure is being called from a .NET application if that helps.
Have you tried printing it from inside the procedure?
http://msdn.microsoft.com/en-us/library/ms176047.aspx
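For instance, a sketch against the procedure from the question (note that PRINT output shows up in the SSMS Messages tab and in a Profiler trace, not in the calling application by default):

ALTER PROC [dbo].[sp_problemprocedure] (
    @orderid INT
)
AS
-- log the parameter value at the top of the procedure
PRINT 'sp_problemprocedure called with @orderid = ' + CAST(@orderid AS varchar(12));
--procedure code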
If it's being called from a .NET application, you could easily log the parameter being passed from the .NET app. But if you don't have access, you can also use SQL Server profiling. Filters can be set on the command type (i.e. procs only) as well as on the database being hit; otherwise you will be overwhelmed with all the information a trace can produce.
Link: Using SQL Server Profiler
Rename the procedure.
Create a logging table.
Create a new proc (same signature/params) which calls the original, but first logs the params and the starting timestamp, and then logs the end timestamp after the call finishes.
Create a synonym for this new proc with the name of the original.
Now you have a log for all calls made by whatever app...
You can disable/enable the logging at any time by simply redefining the synonym to point to the logging wrapper or to the original...
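A sketch of the whole arrangement, using the procedure name from the question (the log table, wrapper, and column names are hypothetical):

-- 1) rename the original
EXEC sp_rename 'dbo.sp_problemprocedure', 'sp_problemprocedure_orig';
GO
-- 2) a logging table
CREATE TABLE dbo.ProcCallLog (
    log_id int identity(1,1) PRIMARY KEY,
    orderid int,
    started datetime DEFAULT GETDATE(),
    finished datetime NULL
);
GO
-- 3) the wrapper: same signature, logs around the real call
CREATE PROC dbo.sp_problemprocedure_logged (
    @orderid INT
)
AS
BEGIN
    DECLARE @log_id int;
    INSERT INTO dbo.ProcCallLog (orderid) VALUES (@orderid);   -- params + start time
    SET @log_id = SCOPE_IDENTITY();

    EXEC dbo.sp_problemprocedure_orig @orderid;

    UPDATE dbo.ProcCallLog SET finished = GETDATE()            -- end time
    WHERE log_id = @log_id;
END
GO
-- 4) a synonym with the original name, so callers are unchanged
CREATE SYNONYM dbo.sp_problemprocedure FOR dbo.sp_problemprocedure_logged;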
The easiest way would be to run a profiler trace. You'll want to capture calls to the stored procedure.
Really though, that is only going to tell you part of the story. Personally, I would start with the code. Try to batch big updates into smaller batches. Try to avoid long-running explicit transactions if they're not necessary. Look at your triggers (if any) and cascading foreign keys and make sure those are efficient.
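For example, a common way to break a big update into smaller batches (table and column names are hypothetical):

DECLARE @rows int;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    -- touch at most 5000 rows per statement to keep locks short-lived
    UPDATE TOP (5000) dbo.BigTable
    SET Processed = 1
    WHERE Processed = 0;

    SET @rows = @@ROWCOUNT;
END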
The easiest way is to do the following:
1) In .NET, grab the date-time just before running the procedure.
2) In .NET, grab the date-time after the procedure completes.
3) In .NET, do some date-time math, and if it is "slow", write to a file (log) those start and end date-times, user info, all the parameters, etc.
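A sketch of those three steps (the connection string, "slow" threshold, and log path are assumptions):

using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;
using System.IO;

static void RunAndLogIfSlow(string connectionString, int orderId)
{
    var sw = Stopwatch.StartNew();                        // 1) start time
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.sp_problemprocedure", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@orderid", orderId);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    sw.Stop();                                            // 2) end time

    if (sw.Elapsed > TimeSpan.FromSeconds(5))             // 3) threshold is an assumption
        File.AppendAllText("slow-calls.log",
            string.Format("{0:o} @orderid={1} took {2}{3}",
                          DateTime.Now, orderId, sw.Elapsed, Environment.NewLine));
}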
What is the best way to troubleshoot a stored procedure in SQL Server? I mean, where do you start, etc.?
Test each SELECT statement (if any) outside of your stored procedure to see whether it returns the expected results;
Make INSERT and UPDATE statements as simple as possible;
Try to test your inserts and updates outside of your SP so that you can check they give the expected results;
Use the debugger provided with SSMS Express 2008.
Visual Studio 2008 / 2010 has a debug facility. Simply connect to your SQL Server instance in 'Server Explorer' and browse to your stored procedure.
Visual Studio 'Test Edition' also can generate Unit Tests around your stored procedures.
Troubleshooting a complex stored proc is far more than just determining whether you can get it to run or not and finding the step which won't run. What is most critical is whether it actually returns the correct results or performs the correct actions.
There are two kinds of stored procs that need extensive abilities to troubleshoot. First is the proc which creates dynamic SQL. I never create one of these without an input parameter of @debug. When this parameter is set, I have the proc print the SQL statement as it would have run, and not run it. Almost every time, this leads you right to the problem, as you can then see the syntax error in the generated SQL code. You can also run this SQL code to see if it returns the records you expect.
Now with complex procs that have many steps that affect data, I always use a @test input parameter. There are two things I do with the @test parameter: first, I make it roll back the actions so that a mistake in development won't mess up the data; second, I have it display the data before it rolls back, to see what the results would have been. (These actually appear in the reverse order in the proc; I just think of them in this order.)
Now I can see what would have gone into the table or been deleted from the tables without affecting the data permanently. Sometimes I might start with a select of the data as it was before any actions and then compare it to a select run afterwards.
Finally, I often want to log the actions of a complex proc and see exactly what steps happened. I don't want those logs to get rolled back if the proc hits an error, so I set up a table variable for the logging information I want at the start of the proc. After each step (or after an error, depending on what I want to log), I insert into this table variable. After the rollback or commit statement, I select the results of the table variable, or use those results to log to a permanent logging table. This can be especially nice if you are using dynamic SQL, because you can log the SQL that was run; then, when something strange fails on prod, you have a record of which statement was run when it failed. You use a table variable because its contents are not lost in a rollback.
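Putting the @test parameter, the rollback, and the table-variable log together, a skeleton might look like this (all object names are hypothetical):

CREATE PROC dbo.ComplexProc (@test bit = 0)
AS
BEGIN
    -- table variables survive a rollback, so the log does too
    DECLARE @log table (step varchar(100), logged_at datetime DEFAULT GETDATE());

    BEGIN TRANSACTION;

    INSERT INTO @log (step) VALUES ('before update');

    UPDATE dbo.Orders SET Status = 2 WHERE Status = 1;   -- stand-in for the real work

    INSERT INTO @log (step) VALUES ('after update');

    IF @test = 1
    BEGIN
        SELECT * FROM dbo.Orders;    -- show what the results would have been
        ROLLBACK TRANSACTION;        -- ...then undo everything in test mode
    END
    ELSE
        COMMIT TRANSACTION;

    SELECT * FROM @log;              -- still populated even after the rollback
END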
In SSMS, you can simply start by opening the proc., and clicking on the check mark button (Parse) next to the Execute button on the menu bar. It reports any errors it finds.
If there are no errors there, and your stored procedure is harmless to run (you're not inserting into tables, just creating a temp table, for example), then comment out the CREATE PROCEDURE x (or ALTER PROCEDURE x) line, declare all the parameters by copying that part, and define them with valid values. Then run it to see what happens.
Maybe this is simple, but it's a place to start.