USE DB that may not exist - sql

I have a script that has a USE DATABASE statement.
The script runs perfectly fine if the database exists. If it doesn't exist, it fails with the message "the database doesn't exist", which makes perfect sense.
Now, I don't want it to fail, so I added a check against sys.databases to see whether the DB exists (which I will represent here with an IF 1=1 check for the sake of simplicity): if the DB exists, run the USE statement.
To my surprise, the script kept failing. So I tried to add a TRY CATCH block. Same result.
It seems that the USE statement is evaluated prior to anything else, which is quite annoying because now my script may break.
So my question is: how can I have a USE statement in a script for a database that may not exist?
BEGIN TRY
    IF (1=1) BEGIN -- if DB exists
        USE DB_THAT_MAY_NOT_EXIST
    END
END TRY
BEGIN CATCH
END CATCH

I don't believe you can do what you want to do. The documentation specifies that USE is executed at both compile time and execution time.
As such, a USE of a database that does not exist produces a compile-time error, and I am not aware of a way to bypass compile-time errors.
As another answer suggests, use the database qualifier in all your names.
You can also check if a database exists, without switching to it. Here is one way:
begin try
    exec('use dum');
    print 'database exists'
end try
begin catch
    print 'database does not exist'
end catch

How about this? Maybe you could check it this way:
if db_id('dbname') is not null
-- do stuff
or try this:
if not exists(select * from sys.databases where name = 'dbname')
-- do stuff
So for table:
if object_id('schemaname.objectname', 'U') is not null -- 'U' = user table; see the type codes in sys.objects
or
exec sp_msforeachdb 'select * from ?.sys.tables'

Off the top of my head, you could fully qualify all your references to avoid the USE statement.
I hope someone comes up with a solution that requires less PT.
After doing your check to see if the DB exists, instead of
SELECT Moo FROM MyTable
use
SELECT Moo FROM MyDB.MySchema.MyTable

Related

GO statement behaves differently when called in sql job

Could you please explain the behavior of the GO statement?
I have two T-SQL statements:
truncate table CustomerDetails
GO
truncate table CustomerDetails_Log
GO
The table CustomerDetails does not exist. When I run this, the Messages pane of SQL Server displays "CustomerDetails table does not exist", but it still goes on to the next statement and truncates CustomerDetails_Log.
If I place the same set of SQL statements in a SQL job, it fails at the first statement and does not proceed to the next one.
Can anyone explain this behavior: why does GO behave differently in a job than in other T-SQL contexts?
Thanks
P.S.: I do understand that I have not understood the concept of GO properly; any good links would also be very helpful.
To make this work in a SQL job you can wrap each step in TRY/CATCH blocks. This has the added benefit of allowing you to handle/log any problems.
BEGIN TRY
TRUNCATE TABLE CustomerDetails
END TRY
BEGIN CATCH
-- Log error
END CATCH
GO
BEGIN TRY
TRUNCATE TABLE CustomerDetails_Log
END TRY
BEGIN CATCH
-- Log error
END CATCH
GO
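For background: GO is not a T-SQL statement at all; it is a batch separator recognized by client tools such as SSMS and sqlcmd. Each batch is compiled and executed independently, which is why an error in one batch does not stop the next one in SSMS (the table name below is made up):

```sql
TRUNCATE TABLE NoSuchTable;        -- batch 1 fails: the table does not exist
GO
PRINT 'second batch still runs';   -- batch 2 executes anyway in SSMS
GO
```

SQL Server Agent, by contrast, treats the first failing batch as a failed job step by default, which explains the difference you are seeing.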

Detect if SQL statement is correct

Question: Is there any way to detect if an SQL statement is syntactically correct?
Explanation:
I have a very complex application, which, at some point, need very specific (and different) processing for different cases.
The solution was to have a table where there is a record for each condition, and an SQL command that is to be executed.
That table is not accessible to normal users, only to system admins who define those cases when a new special case occurs. So far, a new record was added directly to the table.
However, from time to time there was typos, and the SQL was malformed, causing issues.
What I want to accomplish is to create a UI for managing that module, where to let admins to type the SQL command, and validate it before save.
My idea was to simply run the statement in a throw block and then capture the result (exception, if any), but I'm wondering of there is a more unobtrusive approach.
Any suggestion on this validation?
Thanks
PS. I'm aware of risk of SQL injection here, but it's not the case - the persons who have access to this are strictly controlled, and they are DBA or developers - so the risk of SQL injection here is the same as the risk to having access to Enterprise Manager
You can use SET PARSEONLY ON at the top of the query. Keep in mind that this will only check if the query is syntactically correct, and will not catch things like misspelled tables, insufficient permissions, etc.
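A minimal sketch of what SET PARSEONLY does (statements are checked for syntax only and never executed; note that object names are not validated):

```sql
SET PARSEONLY ON;
SELECT * FROM NoSuchTable;   -- passes: valid syntax, names are not checked
SELEC * FROM dbo.SomeTable;  -- fails: incorrect syntax near 'SELEC'
SET PARSEONLY OFF;
```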
Looking at the page here, you can modify the stored procedure to take a parameter:
CREATE PROC TestValid @stmt NVARCHAR(MAX)
AS
BEGIN
    IF EXISTS (
        SELECT 1 FROM sys.dm_exec_describe_first_result_set(@stmt, NULL, 0)
        WHERE error_message IS NOT NULL
          AND error_number IS NOT NULL
          AND error_severity IS NOT NULL
          AND error_state IS NOT NULL
          AND error_type IS NOT NULL
          AND error_type_desc IS NOT NULL )
    BEGIN
        SELECT error_message
        FROM sys.dm_exec_describe_first_result_set(@stmt, NULL, 0)
        WHERE column_ordinal = 0
    END
END
GO
This will return an error if one exists and nothing otherwise.
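A usage sketch, assuming the procedure's parameter is declared as @stmt (T-SQL parameters use @; the table names are made up):

```sql
-- Returns the error message for a statement that cannot be bound:
EXEC TestValid @stmt = N'SELECT * FROM dbo.NoSuchTable';

-- Returns nothing for a statement that parses and binds cleanly:
EXEC TestValid @stmt = N'SELECT 1 AS x';
```

Unlike SET PARSEONLY, sys.dm_exec_describe_first_result_set also catches binding problems such as misspelled table or column names.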

Error in unreachable SQL Code

The following tsql fails:
IF OBJECT_ID('FDSCorp.XLFILES') IS NOT NULL
BEGIN
DELETE FROM FDSCorp.XLFILES;
INSERT INTO FDSCorp.XLFILES
SELECT DISTINCT * FROM dbo.XLFILES;
END
ELSE
exec sp_changeobjectowner XLFILES, FDSCorp;
Error:
The image data type cannot be selected as DISTINCT because it is not comparable.
Yes, XLFILES has an image column, but in this case FDSCorp.XLFILES doesn't exist, so that DISTINCT code would never get to run.
This code is generated for each table in the database, and I know that this section of the code will never run on a table where it could fail due to the DISTINCT issue.
I really don't want to overcomplicate the code by checking for types that DISTINCT can't handle when that scenario could never happen in a real situation.
Is there some way I can bypass this check?
The only way to avoid the error is for you to prevent the server from "seeing" the code you don't want it to compile. Each batch is compiled entirely (including every statement, ignoring control flow) before execution starts:
IF OBJECT_ID('FDSCorp.XLFILES') IS NOT NULL
BEGIN
DELETE FROM FDSCorp.XLFILES;
exec sp_executesql N'INSERT INTO FDSCorp.XLFILES
SELECT DISTINCT * FROM dbo.XLFILES;';
END
ELSE
exec sp_changeobjectowner XLFILES, FDSCorp;
Now, when this batch is compiled, it won't attempt to compile the INSERT, since so far as this batch is concerned, it's just a string literal.

SQL Try Catch the exact errors caused by the recent variables

Query:
BEGIN TRY
    SELECT @AccountNumber,
           @AccountSuffix,
           @Sedat,
           @Dedo,
           @Payalo,
           @Artisto
    FROM SWORDBROS
    WHERE AMAZING ='HAPPENS'
END TRY
BEGIN CATCH
    Print @Sedat
END CATCH
How can I get the @Sedat? Is it possible?
SQL 2005 , it will be in an SP
Like this, no?
BEGIN TRY
    SELECT @AccountNumber,
           @AccountSuffix,
           @Sedat,
           @Dedo,
           @Payalo,
           @Artisto
    FROM SWORDBROS
    WHERE AMAZING ='HAPPENS'
END TRY
BEGIN CATCH
    --error handling only
END CATCH
--There is no finally block like .net
Print @Sedat
In a proc, when I want to trap the exact values that caused an error, this is what I do. I declare a table variable (very important: it must be a table variable, not a temp table) that has the fields I want information on, and I populate it with records as I go. In a multistep proc, I would add one record for each step if I wanted to see the whole process, or only a record when I hit an error (which in that case I would typically populate in the catch block).
Then, in the catch block, I roll back the transaction and insert the contents of the table variable into a permanent exception-processing table. You could also just select from the table variable if you wanted, but if I'm going to this much trouble it is usually for an automated process, where I need to be able to research the problem later rather than see it when it hits, because I'm not running it on my machine or anywhere I could see a select or print statement.
Because the table variable stays in scope even after the rollback, my information is still available to log in my exception-logging table. But it is important that you write to any permanent table after the rollback, or the logging will be rolled back with everything else.
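A minimal sketch of that pattern (table and column names are illustrative, and dbo.ExceptionLog is assumed to exist); the table variable survives the ROLLBACK, so its contents can be written to a permanent table afterwards:

```sql
DECLARE @Errors TABLE (StepName sysname, KeyValue int, ErrorMessage nvarchar(4000));

BEGIN TRY
    BEGIN TRANSACTION;
    -- ... the real work goes here, one step at a time ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Capture the values that caused the failure.
    INSERT INTO @Errors (StepName, KeyValue, ErrorMessage)
    VALUES ('step 1', 42, ERROR_MESSAGE());

    ROLLBACK TRANSACTION;   -- the table variable keeps its rows

    -- Log AFTER the rollback, or the log rows would be rolled back too.
    INSERT INTO dbo.ExceptionLog (StepName, KeyValue, ErrorMessage, LoggedAt)
    SELECT StepName, KeyValue, ErrorMessage, GETDATE()
    FROM @Errors;
END CATCH
```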
which database are you using?
also, which programming language is this?
usually there would be an INTO clause and some local variables declared.
your query should also have a FROM clause at a minimum
It is not clear whether you expect the returned values to be placed into the @ variables or whether you are trying to dynamically specify which columns you want selected. In a SQL Server stored procedure you usually return a result set, not a bunch of individual variables. The syntax you have will not work if you want column values returned, since it would dynamically choose which columns are wanted based on the names passed into the stored procedure, and the stored procedure must know which columns you are going after when it is analyzed as it is stored. The CATCH block will be triggered if there is a problem reading from the database (communication down, disk error, etc.), in which case none of the column values will be known.
Use the Sql Query Analyzer tool (under the "Tools" menu in SqlManager after you have selected a database) to define your stored procedure and test it. If you installed the documentation when you installed SqlManager go to Start>Programs>Microsoft Sql Server>Books Online and open the "Transact-SQL Reference" node for documentation on what can be done.

Nervous Running Queries (SQL Server): What do you think about this?

I'm a frequent SQL Server Management Studio user. Sometimes I'm in situations where I have an update or delete query to run, but I'm afraid some typo or logic error on my part is going to cause me to make undesired, massive changes to a table (like change 1000 rows when I meant to change 2).
In the past, I would just clench my fists and hold my breath, but then I wondered if I could do something like this before running a possibly catastrophic query:
1) Run below
begin transaction
(my update/insert/delete statement I want to run)
2) If I'm satisfied, call:
commit transaction
3) Or, if I've fouled something up, just call:
rollback transaction
Is my idea sound, or am I missing something fundamental? I know I could always restore my database, but that seems like overkill compared to above.
EDITS:
1) I agree with testing on a test site before doing anything, but there's still a chance for a problem happening on the production server. Maybe some condition is true on the test server that's not true on production.
2) I'm also used to writing my where first, or doing a select with my where first to ensure I'm isolating the correct rows, but again, something can always go wrong.
Run your WHERE statement as SELECT before you run it as UPDATE or DELETE
Yes, you absolutely can do this. Be aware that you are putting a lock on the table(s) in question, which might interfere with other database activity.
This particular statement has saved my butt at least twice.
SELECT * INTO Table2_Backup FROM Table1
I also agree wholeheartedly with Manu. SELECT before UPDATE or DELETE
Sounds pretty good to me - I basically use this default try/catch query for most of my heavy-lifting; works just as you sketched out, plus it gives you error info if something does go wrong:
BEGIN TRANSACTION
BEGIN TRY
-- do your work here
COMMIT TRANSACTION
END TRY
BEGIN CATCH
SELECT
ERROR_NUMBER() AS ErrorNumber,
ERROR_SEVERITY() AS ErrorSeverity,
ERROR_STATE() AS ErrorState,
ERROR_PROCEDURE() AS ErrorProcedure,
ERROR_LINE() AS ErrorLine,
ERROR_MESSAGE() AS ErrorMessage
ROLLBACK TRANSACTION
END CATCH
Marc
The most frequent cause of this fear is being forced to work on Production databases by hand. If that's the case...might be better to get some dev boxes. If not, I think you're fine...
One other thing you can do, though it will take practice: always write your WHERE clause first, so you never have to worry about running an UPDATE or DELETE on all rows.
Here is what I do when writing code to be run from a query window, which I have done to fix bad data sent to us by clients in an import (I always do this on dev first):
begin tran
delete mt
--select *
from mytable mt where <some condition>
--commit tran
--rollback tran
begin tran
update mt
set myfield = replace(myfield, 'some random text', 'some other random text')
--select myid, myfield, replace(myfield, 'some random text', 'some other random text')
from mytable mt where <some condition>
--commit tran
--rollback tran
Note that this way, I can run the select part of the query first to see the records that will be affected and how. Note the where clause is on the same line as the table (or the last join if I had multiple joins); this prevents the "oops, I forgot to highlight the whole thing" problem. Note the delete uses an alias, so if you run just the first line by accident, you don't delete the whole table.
I've found it best to write scripts so they can be run without highlighting (except when I highlight just the select part to see what records I'm affecting). Save the script in source control. If you have tested the script by running it on dev and QA, you should be fine to run it on prod without the selects. If I am doing a large update or delete on a table, I almost always copy those records to a work table first so that I can go back immediately if there is a problem.
If you want to be (or have to be) really paranoid about this, you should have some kind of log table for old/new values (table/column/old value/new value, maybe user and a timestamp) and fill it with a trigger on insert/update/delete. It's a real hassle to restore old values from this, but it may be helpful if all else goes horribly wrong. There is a pretty big performance impact, though.
SAP uses this approach (called change docs in SAP parlance) for changes made through its GUI, and gives programmers a way to hook into it for changes done through "programs" (although you have to call this explicitly).
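As a sketch, the trigger-based change log described above might look like this (all names are illustrative; inserted and deleted are the pseudo-tables SQL Server exposes inside triggers):

```sql
CREATE TABLE dbo.MyTable_Audit (
    AuditId   int IDENTITY(1,1) PRIMARY KEY,
    RowId     int,
    OldValue  nvarchar(100),
    NewValue  nvarchar(100),
    ChangedBy sysname  NOT NULL DEFAULT SUSER_SNAME(),
    ChangedAt datetime NOT NULL DEFAULT GETDATE()
);
GO

CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    -- One audit row per updated row: old value from deleted, new from inserted.
    INSERT INTO dbo.MyTable_Audit (RowId, OldValue, NewValue)
    SELECT d.Id, d.MyColumn, i.MyColumn
    FROM deleted d
    JOIN inserted i ON i.Id = d.Id;
END
```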