Renaming a column without breaking scripts and stored procedures - SQL

I want to rename a column in a table, but the problem is that I then have to manually change the column name wherever it appears in triggers or SPs.
Is there any better way of doing this?
To rename the column I am using this:
sp_RENAME 'Tablename.old_Column', 'new_column', 'COLUMN';
Similarly, how can I do it for triggers or SPs without opening each script?

Well, there are a bunch of 3rd-party tools that promise this type of "safe rename", some free and some not:
ApexSQL has a free tool for that, as MWillemse wrote in his answer.
RedGate has a commercial tool called SQL Prompt that also has a safe-rename feature; however, it is far from free.
Microsoft has a Visual Studio add-in called SQL Server Data Tools (SSDT for short), as Dan Guzman wrote in his comment.
I have to say I've never tried any of these specific tools for that specific task, but I do have some experience with SSDT and some of RedGate's products and I consider them to be very good tools. I know nothing about ApexSQL.
Another option is to try and write the SQL script yourself; however, there are a couple of things to take into consideration before you start:
Can your table be accessed directly from outside SQL Server? I mean, is it possible that some software is executing SQL statements directly against that table? If so, you might break it when you rename that column, and no SQL tool will help in this situation.
Are your sql scripting skills really that good? I consider myself to be fairly experienced with sql server, but I think writing a script like that is beyond my skills. Not that it's impossible for me, but it will probably take too much time and effort for something I can get for free.
Should you decide to write it yourself, there are a few articles that might help you in that task:
First, Microsoft's official documentation of sys.sql_expression_dependencies (a minimal query sketch based on it follows this list).
Second, an article called Different Ways to Find SQL Server Object Dependencies, written by a DBA with 13 years of experience,
and last but not least, a related question on StackExchange's Database Administrator's website.
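If you decide to write it yourself, a minimal sketch of the dependency lookup might look like this (the table and column names are placeholders; referenced_minor_id is the column ID when SQL Server recorded column-level detail, and 0 when it only recorded a dependency on the table as a whole):
SELECT DISTINCT
    OBJECT_SCHEMA_NAME(d.referencing_id) + '.' + OBJECT_NAME(d.referencing_id) AS referencing_object
FROM sys.sql_expression_dependencies AS d
WHERE d.referenced_id = OBJECT_ID('dbo.Tablename')   -- placeholder table
  AND (d.referenced_minor_id = COLUMNPROPERTY(d.referenced_id, 'old_Column', 'ColumnId')
       OR d.referenced_minor_id = 0);                -- dependency recorded at table level only
This only tells you which triggers, procedures and views to look at; actually rewriting their definitions is the hard part.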
You could, of course, go with the safe way Gordon Linoff suggested in his comment, or use synonyms as destination-data suggested in his answer, but then you will still have to modify all of the column's dependencies manually, and from what I understand, that is what you want to avoid.

Renaming a table column
Deleting a table column
Altering table keys
The best way is to use Database Projects in Visual Studio.
Refer to these links:
link 1
link 2

You can do what @Gordon suggested.
Apart from this, you can also play with this query:
SELECT o.name, sc.*
FROM sys.syscomments sc
INNER JOIN sys.objects o ON sc.id = o.object_id
WHERE sc.text LIKE '%oldcolumnname%'
This will return a list of all the procedures and triggers that mention the old column name. You can also adjust the filter to get a more exact list; then it will be quite easy for you to modify them manually.
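A variant of the same text-search idea, assuming SQL Server 2005 or later: sys.sql_modules holds each module's full definition in one row, whereas sys.syscomments splits long definitions into 4000-character chunks, so a name that happens to straddle a chunk boundary can be missed by the LIKE above.
SELECT OBJECT_SCHEMA_NAME(m.object_id) + '.' + OBJECT_NAME(m.object_id) AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%oldcolumnname%';   -- placeholder for the old column name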
But whatever you decide, don't simply drop the old column.
To be safe, keep a backup as well.

This suggestion relates to Oracle DB, however there may be equivalent solutions in other DBMS's.
A temporary solution to your issue is to create a pseudocolumn. This solution looks a little hacky because the syntax for a pseudocolumn requires an expression. The simplest expression I can think of is the CASE statement below. Let me know if you can make it simpler.
ALTER TABLE <<tablename>> ADD (
    <<new_column_name>> AS (
        CASE
            WHEN 1 = 1 THEN <<tablename>>.<<old_column_name>>
        END)
);
This strategy basically creates a new column on the fly by evaluating the CASE statement and copying the value of <<old_column_name>> into <<new_column_name>>. Because you are dynamically interpolating this column, there is a performance penalty versus just selecting the original column.
The one gotcha is that this will only work if you are duplicating a column once. Multiple pseudocolumns cannot contain duplicate expressions in Oracle.
The other strategy you can consider is to create a view, in which you can name the columns whatever you want. You can even execute DML (INSERT/UPDATE/DELETE) against views, but this gives you a whole new table name, not just a new column. You could, however, rename the old table and give your view the same name the old table had. This also has a performance penalty versus accessing the underlying table directly.
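A rough sketch of that view-based idea (all names here are placeholders): rename the physical table, then expose it through a view that carries the old table name but the new column name. Simple single-table views like this remain updatable in Oracle.
ALTER TABLE my_table RENAME TO my_table_base;

CREATE OR REPLACE VIEW my_table AS
SELECT old_column AS new_column,
       other_column
FROM my_table_base;
Any code that still refers to the old column name through the old table will of course need updating.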

You might want to replace that text in the definition directly. However, you will need a dedicated administrator connection (DAC) in SQL Server, and how you set one up varies between versions: add ;-T7806 as a startup parameter under Advanced, and prefix the server name with Admin: when logging in. You may then be able to modify the value of the definition.

Related

How to copy a table design from one database to another - vb.net / Access

The aim is, when updating the application, to update the Access database without altering the data, so each update should add only the new tables or new columns. I want to copy the exact table with its structure to the old database (VB.NET with an Access database).
What I've tried is detecting the differences between the old database and the new one: combobox1 lists the tables missing from the old database, and combobox2 lists the columns missing from a table that already exists in both databases, together with each column's data type.
So I want to copy the entire missing table, and for existing tables create only the missing columns.
Thank you.
There is no built-in tool to do this.
But, worse yet, there is no "generate change scripts" feature in Access
(like there is, say, with SQL Server).
So, how do you approach this issue? What do the accounting systems or commercial programs that use ms-access as the database do?
Well, you have to build a kind of "upgrade" system into your software.
This means two things:
To add a new column to a table (for example), you NEVER go and open up the Access database with Access, but "add" or "write" the code to add the field in question.
In fact, I had an application deployed out in the field - many desktops.
So, I had a code module called upgrade. And each time I needed a new field or whatever, I would write the code to add that new column.
AS LONG as I always added things through that code module, I was OK (never break the rule: for adding new fields or tables, or even increasing the length of some field, use code).
And it became quite easy after I had some code written. I would in fact often cut + paste a previous bit of code to add a new column to a table.
However, after about 5 years, that messy code module had 800+ lines of code in it!!!
But, I ALSO realized that MOST things like adding a new column or whatever? Same code over and over.
So, what I did next was build an "upgrade" table. It looked like this:

Version   Action     SQL                           RunCode
2.5       AddTable   tblCustomers
2.5       AddField   "sql here to add the field"
etc. etc.
So, I had a version number, and then I compared it against the upgrade table. I had an "action" column, and the code would simply loop through this table and do whatever was needed.
So, for example, to add a field, you can use an Access "DDL" command (data definition language - most SQL systems support this, and so does Access).
So, say, like this:
' any new table code goes here:
If lngVer < 1148 Then
    ' add event invoice text option
    ExecuteSQLNR "ALTER TABLE dbo.Events ADD InvoiceText ntext NULL"
    ExecuteSQLNR "ALTER TABLE dbo.Events ADD HideEventDate bit NULL default 0"
End If
Or, say, to increase a column length from 50 to 255:
db.Execute "ALTER TABLE tblGroupRemind ALTER COLUMN Anotes text(255)", dbFailOnError
As noted, since so many of the commands were VERY similar, I started putting that information into a table, and then I would execute the required upgrades in a loop.
For a whole new table? Well, I thought that was too much code, so I always included a blank, empty database - and for new tables, I would place them in that upgrade.accDB file and "transfer/copy" the table from that upgrade database to the real one. That way, I could with great ease create a whole new table in the Access designers, and then add/copy that table to the upgrade.accDB database.
As noted, the above ideas and approaches work quite well.
In fact, over time, I found it LESS hassle, while coding away, to add the new column or whatever in code than to open up ms-access, then the table, then the designer, and make the changes there.
However, the BIG issue with the above?
Well, you have to get all users at least upgraded to your EXISTING schema, and there are no automated tools for that.
In fact, before I had any automated tools, I would open up Notepad, and if I added some field to some table, I would simply type into Notepad that the new field in such-and-such table was required.
Then, when on a customer site, I would open up their database and go look at the Notepad document for the list of changes I was to make. (That is what I was doing before I started automating the process - and of course it is not always practical to be "on site" or to have the customer's database.)
But, ONCE I had all of the above working?
Then during development, I would open up my "upgrade" database and add the new row and action (new table, new column, and more).
I even had a column that defined the function to run AFTER that one command. I mean, quite often when you add a new column or change something in a table, you need to copy data, or at least process some data, after you make that change.
Once you get the above going?
Then you simply NEVER make changes in the data tables directly, but use your "system" for this. And that works REALLY well.
For one, a customer could open up an older data file - say one from 4 or 5 years ago. The application version number would be detected, and then the upgrade code would run through all the versions to update that database. (And I did this automatically on startup - so they never even knew such an upgrade had occurred.)
So, you just have to make sure that for each change you make, you put that code in your upgrade system, and you are done.
But, for existing systems? You have to look at what changes you made since the last deploy, and write out the "DDL" commands (the ALTER TABLE SQL commands).
There is no automated way of doing this.
As FYI?
One of the BEST and most valuable free tools in Visual Studio is the SQL Server compare utility. It will not only automatically detect and show you the changes between two SQL Server databases, but will also upgrade the target for you (very nice).
But such a tool is not available for Access. In fact, that utility is so valuable for SQL Server that you might consider upgrading from Access to SQL Server for this application. With that utility, I can work locally and add fields, columns, tables and even stored procedures to that SQL database. When I am on site (or even over VPN), I run that compare tool - it shows the changes, and ALSO has a button to update the target schema.
I don't know of an automated "schema" checker and updater for Access.
So, what I suggest above ONLY works if you put such a system in place, and THEN, as a developer, always make your schema changes through your upgrade system, and never directly in the database with ms-access.

SQL data mapping & migration tool

I am developing an automation tool for data migration from one table to another. Here I am looking for a function or SP to which I will pass a source column and a destination column as input parameters. I want an output parameter that returns true when the source column's data is compatible to copy to the destination column, and false if it is not.
For example, if a source column is varchar and a destination column is integer, the script should check whether all the data in the source column is good enough to move to an integer column and return the output flag accordingly. I want a script that works like this for all data types. Any suggestions will be helpful.
If you're on SQL Server 2012 you have TRY_CAST(), TRY_CONVERT() and TRY_PARSE() at your disposal (see this post by Biz Nigatu of blog.dbandbi for a comparison).
That said, you still need to check for truncation errors, e.g. by converting to the target datatype and back, then comparing the original value with the one after the conversions.
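A minimal sketch of that approach, assuming SQL Server 2012+ and made-up table and column names, checking a varchar source column against an int destination:
-- Rows that cannot be converted to int at all:
SELECT COUNT(*) AS failed_conversions
FROM dbo.SourceTable
WHERE SourceCol IS NOT NULL
  AND TRY_CAST(SourceCol AS int) IS NULL;

-- Rows that convert but do not survive the round trip (e.g. '007' -> 7 -> '7'):
SELECT COUNT(*) AS lossy_conversions
FROM dbo.SourceTable
WHERE TRY_CAST(SourceCol AS int) IS NOT NULL
  AND CAST(TRY_CAST(SourceCol AS int) AS varchar(50)) <> SourceCol;
If both counts are zero, the column can be considered safe to migrate; wrapping this pattern in a procedure that takes the table, column and target type as parameters (building the statement dynamically) gives you the true/false flag the question asks for.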
I've seen similar tools in the past, might be a good idea to see if one isn't already available online for free. Even a purchase might be less expensive than the time you put into developing and troubleshooting your own tool.
I have an SSIS solution for this that I put together using EzAPI. I've got it posted to GitHub, so feel free to look:
https://github.com/thevinnie/SyncDatabases
Now, the part of that which is relevant to you is where I use the information schema to make sure that the source and destination are a schema match. If not, a C# script task will generate the statement to create, alter or drop the necessary column(s).
The EzAPI part is cool with SSIS because it allows you to programmatically generate an SSIS package. For the project requirements, I needed to be able to load data every time and not let schema changes in the source break the process.
Comments and recommendations are welcome. Hopefully it'll help, but I think you'll be looking at the INFORMATION_SCHEMA.COLUMNS either way.
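For what it's worth, a rough sketch of that kind of INFORMATION_SCHEMA.COLUMNS comparison, with hypothetical source and destination table names; it lists columns that exist in both tables but differ in type or length:
SELECT s.COLUMN_NAME,
       s.DATA_TYPE AS source_type,
       d.DATA_TYPE AS dest_type,
       s.CHARACTER_MAXIMUM_LENGTH AS source_len,
       d.CHARACTER_MAXIMUM_LENGTH AS dest_len
FROM INFORMATION_SCHEMA.COLUMNS AS s
JOIN INFORMATION_SCHEMA.COLUMNS AS d
  ON d.COLUMN_NAME = s.COLUMN_NAME
WHERE s.TABLE_NAME = 'SourceTable'
  AND d.TABLE_NAME = 'DestTable'
  AND (s.DATA_TYPE <> d.DATA_TYPE
       OR ISNULL(s.CHARACTER_MAXIMUM_LENGTH, -1) <> ISNULL(d.CHARACTER_MAXIMUM_LENGTH, -1));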

Is it safe to change the name column in msdb..sysjobs?

I need to rename a lot of similarly named jobs, so I want to run an UPDATE statement to change the name column of msdb..sysjobs.
I am well aware of the risks of editing system tables, but changing job names doesn't feel that dangerous, because I think the job_id is what counts. Is it safe to do it in this case?
I am using SQL Server 2008.
When you have the same jobs running in multiple environments, it's a better practice to refer to jobs by name rather than by ID, because a job with the same name will be created with a different ID in every environment.
If you change the name of the jobs, make sure that any scripts you use to manipulate those jobs also use the new name.
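As a rough sketch (the job names are made up), the supported route is msdb.dbo.sp_update_job, which renames a job without touching the table directly; a raw UPDATE of the name column amounts to the same thing but bypasses the Agent's stored procedures, so back up msdb first if you go that way:
-- Supported procedure:
EXEC msdb.dbo.sp_update_job
     @job_name = N'Nightly Backup - OLD',
     @new_name = N'Nightly Backup';

-- What the question describes (direct update of the system table):
UPDATE msdb.dbo.sysjobs
SET    name = REPLACE(name, N' - OLD', N'')
WHERE  name LIKE N'% - OLD';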
I think there is nothing to worry about, you can modify it.
Please find details about sysjobs here.
It is NOT safe to just rename it. Using SQL Profiler shows that renaming a package using Object Explorer will run several procedures: it will create a new package using sp_add_dtspackage and drop the old one later using sp_drop_dtspackage. The package ID is set through a parameter, though, so setting it to something random really seems to be a potential risk.

Getting around cyclical foreign key errors when trying to generate insert data scripts in SQL 2008

I am trying to generate some insert scripts using the SQL Server 2008 Script Wizard. Upon generating the scripts, I get the following error:
"The selected database contains foreign keys that create a cycle. Publishing data only is not supported for databases with cyclical foreign key relationships."
I've attempted to disable and remove all constraints in the database. The error is still occurring. Is there any way to get around this? Possibly make SQL ignore the constraints while generating the scripts.
On the Wizard page where you choose the radio button to select All Database Objects or Specific Objects, make sure to select All Database Objects. For some reason the tool needs something in there to generate even if you just want the table insert script.
Once I changed that radio button to All Database Objects, and selected the Advanced option to generate Type of script = Data Only, it worked all the way through.
I had the same problem as the OP. Then I tried again, this time in the advanced options, for the "types of data to script" option, I selected "schema and data" rather than data only. Then it worked for me without complaining about cyclical keys.
I was having the same issue, and I discovered today that you can use SQL Server Management Studio 2012 against a 2008 R2 DB and you won't get the error:
Sql Server Scripting Data Only: Workaround for CyclicalForeignKeyException?
Saving to file vs. to a new Query editor window seems to make it work for me on Management Studio 2008 :\
First off IMHO HLGEM's response is a bit cavalier--there are valid reasons at times to have cyclic references.
That said, I think the script generator is hyper-sensitive. It seems to think just about any PK/FK pair is "cyclic", and I ended up having to use a copy of my database from which I'd stripped all keys to get the export past the "cyclical" error. A script like the following can help you drop keys globally, but of course be careful!
SELECT 'ALTER TABLE ' + OBJECT_NAME(parent_obj) + ' DROP CONSTRAINT ' + [name] AS Script
FROM sysobjects
WHERE xtype IN ('F')
[I didn't write this. See http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=46682]
Further the tool is pretty useless in terms of feedback since its report doesn't provide enough detail to narrow down where the supposed cyclical references exist.
Finally, I found the tool to be pretty flaky in that I get random timeouts. One other observation that I haven't researched extensively: I think the tool may require you to start from scratch after the cyclical error to clear its cache, since I see different behavior when I use the Previous button vs. starting afresh.
You can export the data by setting the script option "Script Check Constraints" to False.
Sorry, this will not work :(
You will have to determine which table is causing the issue.
I was getting the same error because I didn't have a table selected in the object list (one big table I wanted to create in another script). Selecting all of the tables solved the problem.
PS: Maybe a little bit late, but this question is the first Google result for CyclicalForeignKeyException.

INSERT vs INSERT INTO

I have been working with T-SQL in MS SQL for some time now, and somehow whenever I have to insert data into a table I tend to use the syntax:
INSERT INTO myTable <something here>
I understand that the keyword INTO is optional here and I do not have to use it, but somehow it has grown into a habit in my case.
My question is:
Are there any implications of using INSERT syntax versus INSERT INTO?
Which one complies fully with the standard?
Are they both valid in other implementations of SQL standard?
INSERT INTO is the standard. Even though INTO is optional in most implementations, it's required in a few, so it's a good idea to include it if you want your code to be portable.
You can find links to several versions of the SQL standard here. I found an HTML version of an older standard here.
They are the same thing, INTO is completely optional in T-SQL (other SQL dialects may differ).
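For example, in T-SQL both of these statements are accepted and behave identically (the table and column are made up):
INSERT INTO dbo.Customer (Name) VALUES ('Alice');
INSERT dbo.Customer (Name) VALUES ('Bob');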
Contrary to the other answers, I think it impairs readability to use INTO.
I think it is a conceptual thing: in my perception, I am not inserting a row into a table named "Customer", but I am inserting a Customer. (This is connected to the fact that I tend to name my tables in the singular, not the plural.)
If you follow the first concept, INSERT INTO Customer would most likely "feel right" for you.
If you follow the second concept, it would most likely be INSERT Customer for you.
It may be optional in MySQL, but it is mandatory in some other DBMSs, for example Oracle. So your SQL will potentially be more portable with the INTO keyword, for what it's worth.
In SQL Server 2005, you could have something in between INSERT and INTO like this:
INSERT top(5) INTO tTable1 SELECT * FROM tTable2;
Though it works without the INTO, I prefer using INTO for readability.
One lesson I learned about this issue is that you should always keep it consistent! If you use INSERT INTO, don't use plain INSERT as well. If you mix them, some programmers may end up asking this same question again.
Here is another related example case of mine: I had a chance to update a very, very long stored procedure in MS SQL 2005. The problem was that too much data was being inserted into a result table, and I had to find out where the data came from. I tried to find out where new records were added. At the beginning of the SP, I saw several INSERT INTOs, so I searched for "INSERT INTO" and updated them, but I missed one place where only "INSERT" was used. That one actually inserted 4k+ rows with empty data in some columns! Of course, I should just have searched for INSERT. However, that happened to me. I blame the previous programmer :):)
They both do the same thing. INTO is optional (in SQL Server's T-SQL) but aids readability.
I started writing SQL on Oracle, so when I see code without INTO it just looks 'broken' and confusing.
Yes, it is just my opinion, and I'm not saying you should always use INTO. But if you don't, you should be aware that many other people will probably think the same thing, especially if they haven't started scripting with newer implementations.
With SQL I think it's also very important to realise that you ARE adding a ROW to a TABLE, and not working with objects. I think it would be unhelpful to a new developer to think of SQL table rows/entries as objects. Again, just my opinion.
INSERT INTO is SQL standard while INSERT without INTO is not SQL standard.
I experimented with them on SQL Server, MySQL, PostgreSQL and SQLite, as shown below.

Database      INSERT INTO    INSERT
SQL Server    Possible       Possible
MySQL         Possible       Possible
PostgreSQL    Possible       Impossible
SQLite        Possible       Impossible
In addition, I also experimented with DELETE FROM and DELETE without FROM on SQL Server, MySQL, PostgreSQL and SQLite, as shown below:

Database      DELETE FROM    DELETE
SQL Server    Possible       Possible
MySQL         Possible       Impossible
PostgreSQL    Possible       Impossible
SQLite        Possible       Impossible
I prefer using it. It maintains the same syntax delineation feel and readability as other parts of the SQL language, like GROUP BY and ORDER BY.
If available, use the standard form. It's not that you will ever need portability for your particular database, but chances are you will need portability for your SQL knowledge.
A particularly nasty T-SQL example is the use of ISNULL; use COALESCE instead!