I'm trying to clear out a lot of clutter in various databases on a SQL Server that were left behind by some sloppy prior employees. My boss is afraid to let me delete unused tables because he believes it might not generate an error message, but might simply cause wrong data to be returned. I don't see how; if a table is joined in a SQL statement and the table doesn't exist, I can't imagine it not raising an error. Is this possible?
Under normal circumstances you'd always get an error; however, these may not be normal circumstances.
It's unlikely, but if you have procedures that build dynamic SQL using the INFORMATION_SCHEMA views to get table names, you could potentially end up with SQL that fails silently when a table isn't there.
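For example, a pattern like the following would do that; this is only a sketch, and the 'Staging%' naming convention and ArchivedDate column are made up for illustration. It builds its statements only from tables that still exist, so a dropped table is simply skipped rather than raising an error.

DECLARE @sql nvarchar(max);
SET @sql = N'';

-- Build one DELETE per matching table; a table that has been dropped
-- just isn't picked up, so no error is ever raised for it.
SELECT @sql = @sql
    + N'DELETE FROM ' + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME)
    + N' WHERE ArchivedDate < DATEADD(year, -1, GETDATE());'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
  AND TABLE_NAME LIKE 'Staging%';

EXEC sp_executesql @sql;  -- the work quietly shrinks instead of failing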
It's technically possible, depending on how you access the data and how any stored procedures querying said table were written. For example, if there are TRY...CATCH blocks in your T-SQL that discard the error, it could happen. But that's mostly theoretical. In practice, you'll get really ugly errors if you delete a table.
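For instance, here is one way a TRY...CATCH could end up hiding the problem (dbo.SomeDroppedTable is a made-up name):

BEGIN TRY
    -- The dynamic SQL compiles in a child scope, so the missing-table error
    -- becomes catchable here rather than aborting the whole batch.
    EXEC sp_executesql N'SELECT * FROM dbo.SomeDroppedTable;';
END TRY
BEGIN CATCH
    -- Swallow the error (a real proc might log it somewhere and move on),
    -- so the caller simply sees no result set instead of a failure.
    DECLARE @ignored nvarchar(4000);
    SET @ignored = ERROR_MESSAGE();
END CATCH;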
What I would do in the interim is rename the table and then run every unit test and integration test you can find. If something references the table, rename it back to what it was; otherwise, drop it.
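A rough sketch of that approach (dbo.SuspectTable is a placeholder name):

-- Rename instead of dropping, then run your tests and wait a while.
EXEC sp_rename 'dbo.SuspectTable', 'SuspectTable_obsolete';

-- If something breaks, put it back:
-- EXEC sp_rename 'dbo.SuspectTable_obsolete', 'SuspectTable';

-- If nothing has complained after a full test pass, drop it:
-- DROP TABLE dbo.SuspectTable_obsolete;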
We had another developer come through and complete some work for us. Unfortunately he didn’t work well within our team and management let him go.
Because of this, I'm now stuck debugging his code and undoing work that was done. He did not document his code (one of the reasons he was let go) and rarely annotated anything, so I have no idea where to begin looking.
When I run a basic SELECT on two specific tables in our DB:
SELECT * FROM table_name
Using SQL Server Management Studio I get this...
Msg 207, Level 16, State 1, Line 1
Invalid column name 'eventTime'.
There was an eventTime column, but it wasn't necessary and wasn't being used in any PHP file; yet something still seems to be tied to the table, and I have no idea where to look to find it. The error message points to my SELECT statement, but there is nothing wrong with it, nor does it even reference the eventTime column.
I’ve looked and there don’t seem to be any triggers or stored procedures referencing this table. Is there another way I can try to track this down?
This sounds like a hard-ish problem. Here are some ideas.
My first thought is that table_name is a view, and somehow the view has gotten out-of-sync with the underlying table definitions. I have seen problems with types in some circumstances. I imagine the same could happen with column names.
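If that is the case, something along these lines may confirm it and rebuild the view's metadata against the current table definitions (sp_refreshview applies only to non-schema-bound views):

-- Is table_name actually a view?
SELECT OBJECTPROPERTY(OBJECT_ID('dbo.table_name'), 'IsView') AS IsView;

-- If so, refresh its metadata.
EXEC sp_refreshview 'dbo.table_name';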
The next thought is that table_name has computed columns. In this case, the computed columns could be using a function and the function call could be generating the error. I cannot think of any other way to run code with a simple select.
I wouldn't normally expect a foreign key constraint to be the problem for a plain SELECT. Still, a third option is that a foreign key constraint is referencing a table in the same database but in a different schema, and the different schema could have permissions that make the table inaccessible.
For any of these, scripting out the definition in SSMS will help you fix the problem.
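A metadata search can also help track down where eventTime is still referenced (adjust the schema if the table isn't under dbo):

-- Any view, trigger, function or procedure whose definition mentions the column
SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name                   AS object_name,
       o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%eventTime%';

-- Any computed columns on the table and the expressions behind them
SELECT cc.name, cc.definition
FROM sys.computed_columns AS cc
WHERE cc.object_id = OBJECT_ID('dbo.table_name');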
I am currently trying to clean up some stored procedures. There are about 20 of them that look very similar and do many of the same things, but take in slightly different parameters and filter data differently.
For the most part, all of the stored procs start by loading some data into one or two table variables (which is generally where the procs differ). After that, all of the code for each sproc is more or less the same: they perform some logging and apply some additional common filters.
I wanted to at least turn the common pieces into stored procs so that the code would be easier to read and we wouldn't have to open 20 procedures to update the same line of SQL, but the use of table variables prevents it. We are using SQL Server 2005, and to my knowledge we cannot use table-valued parameters in our stored procedures.
We could, however, change all of the table variables to temp tables and reference them in the new common stored procedures. I am assuming that is a fairly common practice, but wanted to know if it was actually a good idea.
In the nested stored procedures, should I assume that a temp table has already been created elsewhere and just query off of it? I can test whether or not the table exists, but what if it doesn't? Is there a good alternative to this in 2005? Is it confusing for other developers who open one of the nested stored procs and see a temp table that is created elsewhere? Do I just need to add lots of informative comments?
In your nested proc, to be safe, you can check whether the temp table exists or not; if it doesn't exist, RAISERROR with some error message. Since you are using SQL Server 2005, a #temptable is the option you have. Commenting your code for readability is never a bad practice.
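A minimal sketch of that check inside the nested proc (the procedure name is just an example):

CREATE PROCEDURE dbo.usp_ApplyCommonFilters
AS
BEGIN
    -- Fail loudly if the calling procedure did not create the expected temp table.
    IF OBJECT_ID('tempdb..#temptable') IS NULL
    BEGIN
        RAISERROR('Expected #temptable to be created by the calling procedure.', 16, 1);
        RETURN;
    END

    -- ...common logging and filtering against #temptable goes here...
END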
As for naming conventions, follow whatever convention fits best in your organization (I just give the stored procedure a name that reflects its purpose). If the code changes, updating the comments accordingly is the developer's responsibility. Other than that, whatever you are doing looks correct to me.
At work we have a table to hold settings which essentially contains the following columns:
PARAMNAME
VALUE
Most of the time new settings are added, but on rare occasions settings are removed. Unfortunately, this means that any scripts which previously updated such a value will keep trying to do so, even though the update now results in "0 rows updated", which leads to unexpected behaviour.
This situation was picked up recently by a regression test failure but only after much investigation into why the data in the system was different.
So my question is: Is there a way to generate an error condition when an update results in zero rows updated?
Here are some options I have thought of, but none of them are really all that desirable:
1) A PL/SQL wrapper which notices the failed update and throws an exception.
   - Not ideal, as it doesn't stop anyone (or a script) from manually doing an update.
2) A trigger on the table which throws an exception.
   - Goes against our current policy of phasing out triggers.
   - Requires updating the trigger every time a setting is removed and maintaining a list of obsolete settings (if doing exclusion).
   - Might have problems with a mutating table (if doing inclusion by querying what settings currently exist).
A PL/SQL wrapper seems like the best option to me. Triggers are a great thing to phase out, with the exception of generating sequences and inserting history records.
If you're concerned about someone manually updating rather than using the PL/SQL wrapper, just restrict the user role so that it does not have UPDATE privileges on the table but has EXECUTE privileges on the procedure.
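A minimal sketch of that combination, using the settings table described above (the procedure and role names are made up):

CREATE OR REPLACE PROCEDURE set_param (p_name IN VARCHAR2, p_value IN VARCHAR2) AS
BEGIN
    UPDATE settings
       SET value = p_value
     WHERE paramname = p_name;

    -- Zero rows means the setting no longer exists: fail instead of silently doing nothing.
    IF SQL%ROWCOUNT = 0 THEN
        RAISE_APPLICATION_ERROR(-20001, 'Unknown setting: ' || p_name);
    END IF;
END set_param;
/

-- Then lock down direct access:
-- REVOKE UPDATE ON settings FROM app_role;
-- GRANT EXECUTE ON set_param TO app_role;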
Not really a solution but a method to organize things a bit:
Create a separate table with the parameter definitions and link to that table from the parameter value table. Make the reference to the parameter definition required (nulls not allowed).
Definition table PARAMS (ID, NAME)
Actual settings table PARAM_VALUES (PARAM_ID, VALUE)
(Changing your table structure is also a very effective way to surface errors in scripts that have not been updated...)
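A sketch of that structure in Oracle DDL (the column types are chosen arbitrarily):

CREATE TABLE params (
    id   NUMBER        PRIMARY KEY,
    name VARCHAR2(100) NOT NULL UNIQUE
);

CREATE TABLE param_values (
    param_id NUMBER         NOT NULL REFERENCES params (id),
    value    VARCHAR2(4000)
);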
Maybe you can use a MERGE statement; here is a link for it:
http://www.oracle-developer.net/display.php?id=203
The MERGE statement allows you to combine an insert and an update in the same query, so if the desired row does not exist you can insert a record in a buffer table to indicate that the row does not exist, or else you can update the required record.
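A minimal MERGE sketch against the settings table from the question (the parameter name and value are placeholders); here the NOT MATCHED branch simply re-creates the setting, but you can adapt that branch to whatever "row was missing" handling you prefer:

MERGE INTO settings s
USING (SELECT 'SOME_PARAM' AS paramname, 'new value' AS param_value FROM dual) src
   ON (s.paramname = src.paramname)
WHEN MATCHED THEN
    UPDATE SET s.value = src.param_value
WHEN NOT MATCHED THEN
    INSERT (paramname, value)
    VALUES (src.paramname, src.param_value);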
Hope it helps
I've written a SQL query for a report that creates a permanent table and then performs a bunch of inserts and updates to get all the data, according to company policy. It runs fine in SQL Server Management Studio and in Crystal Reports 2008 on my machine. However, when I schedule it to run on the server with SAP BusinessObjects Central Management Console, it fails with the error "Associated statement not prepared."
I have found that changing this permanent table to be a temp table makes the query work. Why would this be?
Some research shows that this error is sometimes sent instead of the true error. Other people reporting it talk of foreign key and (I would also assume) duplicate key errors.
Things I would check:
Does your permanent table have any unique constraints that might be violated? Or any foreign key constraints?
Are you creating indexes on the table after it has been created?
Are you creating any views over this permanent table?
What happens if the table already exists before the job is run?
What happens to the table if the job fails?
Are there any intermediate steps (such as within a stored procedure) that might involve additional temp or permanent tables?
ETA: Also check what schema the permanent table belongs to: is it usually created with "dbo"? Are you specifying that explicitly? Is there any chance that there might be a permissions problem?
That is often a generic error. Are you able to run it on the server as the account that it is scheduled to run as? It is most likely a permission error or constraint issue.
Assuming you really need a regular table, why not create the permanent table once, instead of creating it every time you run the query?
Recreating a regular user table each time the query runs does not seem right. But to make it work, you may try to recreate the table in a separate batch or query (e.g. put GO in the script; that splits it into separate batches).
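Something like this keeps the CREATE in its own batch so the later statements compile against the new table (the table and column names here are invented):

-- Batch 1: drop and recreate the report table
IF OBJECT_ID('dbo.ReportData', 'U') IS NOT NULL
    DROP TABLE dbo.ReportData;
GO

CREATE TABLE dbo.ReportData (
    ReportID   int           NOT NULL,
    ReportDate datetime      NOT NULL,
    Amount     decimal(18,2) NULL
);
GO

-- Batch 2 onwards: the inserts and updates that populate the report
INSERT INTO dbo.ReportData (ReportID, ReportDate, Amount)
SELECT 1, GETDATE(), 0;  -- placeholder for the real population logic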
Regarding why it happens, I'm thinking about statement caching. The server compiles the query and stores the plan for some time in case the same query has to run again. So my speculation is that it tries to run the compiled query, which refers to the table you have already dropped and recreated under the same name. The name is the same, but physically it's a new table. You could hit some bug in the server this way. That's just speculation; it could be a different kind of problem.
Without seeing code it's a guess, but since you are creating a permanent table every time you run the report, I assume you must be dropping the table at some point? (Or you'd have a LOT of tables building up over time.)
I suggest a couple of angles to consider:
1) Make certain to prefix the tables (perhaps with a session ID or something) if you are concerned about concurrency/locking issues and the like, so each report run has a table exclusive to itself.
2) If you are dropping the table at the end, instead adjust your logic to leave the table be. Write code that drops it when you (re)start the operation. It's possible the report is still clinging to the table and you are destroying it prematurely.
I need help writing a TSQL script to modify two columns' data type.
We are changing two columns:
uniqueidentifier -> varchar(36) (this column has a primary key constraint)
xml -> nvarchar(4000)
My main concern is production deployment of the script...
The table is actively used by a public website that gets thousands of hits per hour. Consequently, we need the script to run quickly, without affecting service on the front end. Also, we need to be able to automatically rollback the transaction if an error occurs.
Fortunately, the table only contains about 25 rows, so I am guessing the update will be quick.
This database is SQL Server 2005.
(FYI - the type changes are required because of a 3rd-party tool which is not compatible with SQL Server's xml and uniqueidentifier types. We've already tested the change in dev and there are no functional issues with the change.)
As David said, executing a script against a production database without taking a backup or stopping the site is not the best idea. That said, if you want to change only one table with a small number of rows, you can prepare a script to:
1) Begin a transaction
2) Create a new table with the final structure you want
3) Copy the data from the original table to the new table
4) Rename the old table to, for example, original_name_old
5) Rename the new table to original_table_name
6) End the transaction
You will end up with a table that has the original name but the new structure you want, and in addition you keep the original table under a backup name; so if you want to roll back the change, you can create a script that simply drops the new table and renames the original one back.
If the table has foreign keys the script will be a little more complicated, but it is still possible without much work.
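A rough sketch of that script for the two columns in the question (dbo.MyTable, Id and Payload are placeholder names):

BEGIN TRANSACTION;

-- New table with the target data types
CREATE TABLE dbo.MyTable_new (
    Id      varchar(36)    NOT NULL CONSTRAINT PK_MyTable_new PRIMARY KEY,
    Payload nvarchar(4000) NULL
);

-- Copy the ~25 rows, converting as we go
INSERT INTO dbo.MyTable_new (Id, Payload)
SELECT CONVERT(varchar(36), Id), CONVERT(nvarchar(4000), Payload)
FROM dbo.MyTable;

-- Swap the tables; the old one stays around as a backup
EXEC sp_rename 'dbo.MyTable', 'MyTable_old';
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';

COMMIT TRANSACTION;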
Consequently, we need the script to run quickly, without affecting service on the front end.
This is just an opinion, but it's based on experience: that's a bad idea. It's better to have a short (pre-announced if possible) scheduled downtime than to take the risk.
The only exception is if you really don't care if the data in these tables gets corrupted, and you can be down for an extended period.
In this situation, based on the types of changes you're making and the testing you've already performed, it sounds like the risk is very minimal, since you've tested the changes and you SHOULD be able to do it safely, but nothing is guaranteed.
First, you need to have a fall-back plan in case something goes wrong. The short version of a MINIMAL reasonable plan would include:
Shut down the website
Make a backup of the database
Run your script
Test the DB for integrity
Bring the website back online
It would be very unwise to attempt to make such an update while the website is live. You run the risk of being down for an extended period if something goes wrong.
A GOOD plan would also have you testing this against a copy of the database and a copy of the website (a test/staging environment) first and then taking the steps outlined above for the live server update. You have already done this. Kudos to you!
There are even better methods for making such an update, but the trade-off of down time for safety is a no-brainer in most cases.
And if you absolutely need to do this live, then you might consider this:
1) Build an offline version of the table with the new datatypes and copied data.
2) Build all the required keys and indexes on the offline tables.
3) Swap the tables out in a transaction. You could rename the old table to something else as an emergency backup.
sp_help 'sp_rename'
But TEST FIRST all of this in a prod-like environment. And make sure your backups are up to date. AND do this when you are least busy.