Change an attribute's data type in a database table when it is already filled with records - SQL

Is it possible to change an attribute's data type in SQL when the database table already has records?
I am using Microsoft SQL Server Management Studio 2008. The error that I am getting is:
** Error converting data type nvarchar to float. **

In short: it is possible with the ALTER COLUMN command ONLY if the existing data is compatible with the new data type. In addition, it is recommended to do it inside a transaction.
For example, you may change a column from varchar(50) to nvarchar(200) with the script below.
alter table TableName
alter column ColumnName nvarchar(200)
Edit: Regarding your posted error while altering column type.
** Error converting data type nvarchar to float. **
One way would be to create a new column, convert all the good (convertible and compatible) records into it, clean up the bad records that do not convert, delete the old column, and rename the newly added and populated column back to the original name. Important: try all of these manipulations in a testing environment first; experimenting directly on production tables is a reliable way to screw things up.
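A rough sketch of that procedure, assuming a hypothetical table dbo.MyTable with an nvarchar column OldCol that should become float (run it step by step and check the leftover rows before dropping anything):
ALTER TABLE dbo.MyTable ADD NewCol float NULL;
GO
-- ISNUMERIC is only a rough filter (it also accepts values like '$' that may still fail the CAST);
-- on SQL Server 2012+ TRY_CONVERT(float, OldCol) is the safer choice.
UPDATE dbo.MyTable
SET NewCol = CAST(OldCol AS float)
WHERE ISNUMERIC(OldCol) = 1;
GO
-- Inspect the rows where NewCol is still NULL (the bad records), fix or delete them, then:
ALTER TABLE dbo.MyTable DROP COLUMN OldCol;
GO
EXEC sp_rename 'dbo.MyTable.NewCol', 'OldCol', 'COLUMN';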
References to look for more discussions on similar SE posts:
Change column types in a huge table
How to change column datatype in SQL Server database without losing data

Obviously, there is no default conversion to your new datatype. One solution could be to create a second column with the requested type, and write your own conversion function. Once this is done, delete the first column and rename the second one to the original name.

Things to consider: how big your table is, and what data type you are changing to. You then use the ALTER TABLE syntax; since we do not know your target type, just as an example with ALTER COLUMN:
Alter Table [yourTable] Alter column [yourColumn] varchar(15)
You could also add a new column, update that column from your old column, drop the old column, and rename the new one. This is the safer way, because at times the data that you hold might not react well to the new data type...
Posts to look into for ideas: Change column types in a huge table, How to change column datatype in SQL database without losing data

Alter the data type of that column. In general, the SSMS table designer won't allow the change and will prompt you to drop and re-create the table, but there is a setting to get around that.
Go to Tools -> Options -> Designers -> Table and Database Designers and uncheck the "Prevent saving changes that require table re-creation" option. I am talking about SQL Server 2008 R2. Now you can easily alter the data type from the designer.

Related

Change column type from bigint to numeric(18,0) in sql server

I have around 10 tables which have data in them. I need to change the fields which have data type bigint to numeric(18,0).
We have verified the data in our DB; there will not be any data loss. In our lower environments, what we have done is:
Take a backup of the existing table and rename it temporarily
Create a new table with the numeric data type
Populate it with the data from the backup table
If everything is okay, delete the backup table
The above is the process we have followed in lower environments.
But we cannot follow the above procedure when it comes to prod. We would like to make the change using an ALTER statement. Since it is the PROD environment, we have to be careful with changes. As I said earlier, there will not be any data loss.
But I still wanted to know - what happens internally when we execute the ALTER statement?
Will it drop the table, recreate it with the new definitions and populate the data back? If so, are there any risks associated with this?
Any thoughts on how this could be properly handled in PROD would be appreciated.
I might suggest an approach that doesn't rebuild the data. Use a computed column instead. Something like this:
sp_rename 'dbo.[table].col', '_col', 'COLUMN';
alter table dbo.[table] add col as (cast(_col as numeric(18, 0)));
You can then access col as the type that you want. You will not have to rewrite any data, so there will not be any locks or other issues with performance. Of course, select * will be a bit redundant, but you probably shouldn't be doing that anyway.
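A usage sketch with the same made-up names ([table], col, _col): reads go through the computed column, while writes must target the renamed base column.
SELECT col FROM dbo.[table];        -- col now appears as numeric(18, 0)
UPDATE dbo.[table] SET _col = 42;   -- writes go to the underlying bigint column
                                    -- (add a WHERE clause; as written this touches every row)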

How can I alter a UDT in HSQLDB?

In HSQLDB v 2.3.1 there is a create type clause for defining UDTs. But there appears to be no alter type clause, as far as the docs are concerned (and the db returns an unexpected token error if I try this).
Is it possible to amend/drop a UDT in HSQLDB? What would be the best practice, if for example I originally created
create type CURRENCY_ID as char(3)
because I decide I'm going to use ISO codes. But then I actually decide that I'm going to store the codes as integers instead. What is the best way to modify the schema in my db? (this is a synthetic example, obviously I wouldn't use integers in this case).
I guess I might do
alter table inventory alter column ccy set data type int
drop type CURRENCY_ID
create type CURRENCY_ID as int
alter table inventory alter column ccy set data type CURRENCY_ID
but is there a better way to do this?
After trying various methods, I ended up writing a script to edit the *.script file of the database directly. It's a plain text file with SQL commands that recreates the DB programmatically. In detail:
open db, shutdown compact
Edit the script file: replace the type definition, e.g. change create type XXX as int to create type XXX as char(4)
For each table, replace the insert into table XXX values (i,...) with insert into table XXX values('str',...). This was done with a script that held the mappings from the old (int) values to the new (char) values.
In my particular case, I was changing a primary key, so I had to remove the identity directive from the create table statement, and I also had to remove a line reading alter table XXX alter column YYY restart sequence 123.
save and close script file, open db, shutdown compact
This isn't great, but it worked. Advantages:
Ability to re-define UDT.
Ability to map the table values programmatically.
Method is generic and can be used for other schema changes, beside UDTs.
Cons:
No checking that schema is consistent (although it does throw up errors if it can't read the script).
Dangerous when treating the file as plain text, e.g. what if I have a VARCHAR column with newlines in it? When I parse the script file and write it back, this would need to be escaped.
Not sure if this works with non-memory DBs, i.e. those that don't keep everything in a *.script file when shut down.
Probably not efficient for large DBs. My DB was small ~ 1MB.

Change datatype varchar to nvarchar in existing SQL Server 2005 database. Any issues?

I need to change column datatypes in a database table from varchar to nvarchar in order to support Chinese characters (currently, the varchar fields that have these characters are only showing question marks).
I know how to change the values, but I want to see if it's safe to do so. Is there anything to look out for before I do the changing? Thanks!
Note that this change is a size-of-data update, see SQL Server table columns under the hood. The change will add a new NVARCHAR column, update each row copying the data from the old VARCHAR to the new NVARCHAR column, and then mark the old VARCHAR column as dropped. If the table is large, this will generate a large log, so be prepared for it. After the update, run DBCC CLEANTABLE to reclaim the space used by the former VARCHAR column. If you can afford it, better run ALTER TABLE ... REBUILD, which will not only reclaim the space but also completely remove the physically deleted VARCHAR column. The linked article at the beginning has more details.
You may also be interested in enabling Unicode Compression for your table.
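A hedged sketch of the change plus the clean-up steps mentioned above, using placeholder names (MyDatabase, dbo.MyTable, MyColumn) and an example length:
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn nvarchar(100) NULL;
-- Reclaim the space left behind by the dropped varchar column:
DBCC CLEANTABLE ('MyDatabase', 'dbo.MyTable');
-- Or, if you can afford it, rebuild the table; on SQL Server 2008 R2 and later,
-- ROW compression also enables Unicode compression for nvarchar columns:
ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = ROW);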
You can do this on non-primary-key fields:
ALTER TABLE [TableName]
ALTER COLUMN [ColumnName] nvarchar(N) null
On primary key fields it will not work - you will have to recreate the table.
Make sure that the length doesn't exceed 4000, since the maximum for VARCHAR is 8000 characters while NVARCHAR is capped at 4000.
The table will get bigger. Each character in the column will take twice the space to store. You might not notice unless the table is really big.
Stored procedures/views/queries that work with the column data might need to be modified to deal with the nvarchar.
Check all the dependencies for this table as stored procs, functions, temp tables based on this table and variables used for inserts/updates etc may also need to be updated to NVARCHAR.
Also check if the table is in replication! That could cause you a new set of problems!
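One way to find those dependencies is the built-in dependency function available from SQL Server 2008 onwards (a sketch; dbo.YourTable is a placeholder, and replication and temp tables still need to be checked separately):
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities('dbo.YourTable', 'OBJECT');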

A special case when modifying the database

Sometimes I face the following case in my database design, and I want to know the best practice to handle it:
For example, I have a specific table and, after a while, when the database is in operation and some real data has already been entered, I need to add some required fields (that are not supposed to accept null).
What is the best practice in this situation?
Make the field accept null (since some data is already in the table, sacrificing the important constraint) and try to force the user to enter this field through validation in the code?
Truncate all the entered data and re-enter it again (tedious work)?
Any other suggestions about this issue?
It depends on requirements. If the data to populate existing rows for the new column isn't available immediately then I would generally prefer to create a new table and just populate new rows when the data exists. If and when you have all the data for every row then put the new column into the original table.
If possible I would set a default value for the new column.
e.g. for varchar:
alter table table_name
add column_name varchar(10) not null
constraint column_name_default default ('Test')
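Existing rows are filled with the placeholder default; a hedged sketch of replacing it with real data (the names and values are just the placeholders from the snippet above):
UPDATE table_name
SET column_name = 'RealValue'       -- whatever the correct per-row value actually is
WHERE column_name = 'Test';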
After you have updated the existing rows, you can then drop the default:
alter table table_name
drop constraint column_name_default
A lot will come down to your requirements.
It depends on your application, your database schema, your entities.
The best way to go about it is to truncate the data and re-enter it again, but it need not be too tedious a task. Temporary tables and table variables can assist a great deal with this issue. A simple procedure comes to mind:
In SQL Server Management Studio, right-click on the table you wish to modify and select Script Table As > CREATE To > New Query Editor Window.
Add a # in front of the table name in the CREATE statement.
Move all records into the temporary table, using something to the effect of:
INSERT INTO #temp SELECT * FROM original
Then run the script so that all your records are kept in the temporary table.
Truncate your original table, and make any changes necessary.
Right-click on the table and select Script Table As > INSERT To > Clipboard, paste it into your query editor window and modify it to read records from the temporary table, using INSERT .. SELECT.
That's it. Admittedly not quite straightforward, but a well-kept database is almost always worth a slight hassle.
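A condensed sketch of those steps in T-SQL, using SELECT ... INTO instead of a scripted CREATE; the table original, its columns col1 and col2, and the new required field are all made up:
SELECT * INTO #temp FROM original;                    -- copy the existing rows out
TRUNCATE TABLE original;                              -- empty the original table
ALTER TABLE original ADD new_field varchar(10) NOT NULL
    CONSTRAINT df_new_field DEFAULT ('n/a');          -- the new required column
GO
INSERT INTO original (col1, col2, new_field)          -- reload, supplying a value for the new field
SELECT col1, col2, 'n/a' FROM #temp;
DROP TABLE #temp;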

Changing the size of a column referenced by a schema-bound view in SQL Server

I'm trying to change the size of a column in sql server using:
ALTER TABLE [dbo].[Address]
ALTER COLUMN [Addr1] [nvarchar](80) NULL
where the length of Addr1 was originally 40.
It failed, raising this error:
The object 'Address_e' is dependent on column 'Addr1'.
ALTER TABLE ALTER COLUMN Addr1 failed because one or more objects access this column.
I've tried to read up on it, and it seems that because some views reference this column, SQL Server is effectively trying to drop the column, which raised the error.
Address_e is a view created by the previous DB Administrator.
Is there any other way I can change the size of the column?
ALTER TABLE [table_name] ALTER COLUMN [column_name] varchar(150)
The views were probably created using the WITH SCHEMABINDING option, which means they are explicitly wired up to prevent such changes. Looks like the schemabinding worked and prevented you from breaking those views, lucky day, heh? Contact your database administrator and ask him to make the change, after he assesses the impact on the database.
From MSDN:
SCHEMABINDING
Binds the view to the schema of the underlying table or tables. When SCHEMABINDING is specified, the base table or tables cannot be modified in a way that would affect the view definition. The view definition itself must first be modified or dropped to remove dependencies on the table that is to be modified.
If anyone wants to "increase the column width of a replicated table" in SQL Server 2008, there is no need to change the "replicate_ddl=1" property. Simply follow the steps below:
Open SSMS
Connect to Publisher database
run command -- ALTER TABLE [Table_Name] ALTER COLUMN [Column_Name] varchar(22)
It will increase the column width from varchar(x) to varchar(22), and you can see the same change on the subscriber (the transaction got replicated), so there is no need to re-initialize the replication.
Hope this will help all who are looking for it.
See this link:
Resize or Modify a MS SQL Server Table Column with Default Constraint using T-SQL Commands
The solution for such a SQL Server problem is going to be:
Dropping or disabling the DEFAULT Constraint on the table column.
Modifying the table column data type and/or data size.
Re-creating or enabling the default constraint back on the sql table column.
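A hedged sketch of those three steps (the constraint, table, and column names are placeholders):
ALTER TABLE dbo.MyTable DROP CONSTRAINT DF_MyTable_MyColumn;         -- 1. drop the default
ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn varchar(100) NOT NULL; -- 2. change the type/size
ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_MyTable_MyColumn
    DEFAULT ('') FOR MyColumn;                                       -- 3. re-create the default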
Bye
Here is what works with the version of the program that I'm using; it may work for you too. I will just give the instructions and the commands that do it. class is the name of the table. With this method you change the table itself, not just the result returned by the query.
view the table class
select * from class
change the length of the columns FacID (seen as "faci") and classnumber (seen as "classnu") to fit the whole labels.
alter table class modify facid varchar (5);
alter table class modify classnumber varchar(11);
view table again to see the difference
select * from class;
(run the command again to see the difference)
This changes the actual table for good, and for the better.
P.S. I made these instructions up as a note for the commands. This is not a test, but can help on one :)
Check the column collation. This script might change the collation to the database default; add the current collation to the ALTER COLUMN statement if you need to keep it.
You can change the size of the column in 3 steps:
Alter the view Address_e and comment out the column (/*Addr1*/)
Run your script:
ALTER TABLE [dbo].[Address]
ALTER COLUMN [Addr1] [nvarchar](80) NULL
Then alter the view Address_e again, to uncomment the column Addr1.
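A hedged sketch of those three steps, assuming Address_e is a simple schema-bound view; the column list in the view body is made up:
ALTER VIEW dbo.Address_e WITH SCHEMABINDING  -- 1. redefine the view without Addr1
AS
SELECT AddressID /*, Addr1 */ FROM dbo.[Address];
GO
ALTER TABLE dbo.[Address] ALTER COLUMN [Addr1] [nvarchar](80) NULL;  -- 2. widen the column
GO
ALTER VIEW dbo.Address_e WITH SCHEMABINDING  -- 3. put Addr1 back into the view
AS
SELECT AddressID, Addr1 FROM dbo.[Address];
GO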