different collation on different tables - sql

I have a table (table1) with collation Latin1_General_CP1_CI_AS. I have written a procedure which creates an insert script for newly inserted data in that table (table1). I am using this script to insert the data into another table (table2) with the same structure but a different collation, SQL_Latin1_General_CP1_CI_AS. So far it is working fine. I want to know whether it can create any problems in the future and, if so, in what scenarios.

Collation affects only how data are sorted and compared, so it cannot cause any problem with inserts. But it can cause problems when comparing the column values of the two tables. For multi-language data, consider using nvarchar instead of varchar.
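For example, a join that compares character columns across the two tables can fail with a collation-conflict error unless one collation is forced explicitly. A minimal sketch, assuming a hypothetical name column in both tables:
-- may fail with "Cannot resolve the collation conflict ... in the equal to operation"
select *
from table1 t1
join table2 t2 on t1.name = t2.name

-- workaround: force one collation in the comparison
select *
from table1 t1
join table2 t2 on t1.name = t2.name collate Latin1_General_CP1_CI_AS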

Related

column name or number ... not match table definition in table created by create script. Error displayed while inserting rows from one table to another

I have two identical tables in different server instances. One server is production and the other one is for testing. The testing tables were created using scripts generated by SQL Server Management Studio (right-click on the table --> Script Table as --> Create). To move test data I am using a linked server and the following code:
set identity_insert <Server>.<DB>.<schema>.<SomeID> ON
insert into <Server>.<DB>.<schema>.<TestTb>
select top 100 * from <Server>.<DB>.<schema>.<ProdTB>
set identity_insert <Server>.<DB>.<schema>.<SomeID> OFF
The above worked for a couple of the tables I created. On the last one, I get the "column name or number of supplied values does not match table definition" error in the table created by the create script. I have checked the columns' collation and everything is OK.
The only difference is that I haven't created all the indexes found in the production environment, but I don't really think this causes the error.
I'm working on SQL Server 2008.
Always specify the column list in insert statements, and in insert...select you must specify it twice - both in the insert clause and in the select clause.
Also, SQL Server will raise an error if you use set identity_insert on without explicitly specifying the column list in the insert clause, so even if you did get all the columns in the correct order, you would still get an error in this case.
For more information, read Aaron Bertrand's Bad habits to kick: SELECT or INSERT without a column list which Shnugo linked to in his comment.
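A minimal sketch of the safer pattern (the table and column names below are hypothetical stand-ins for the asker's placeholders):
set identity_insert dbo.TestTb ON
insert into dbo.TestTb (SomeID, ColA, ColB)  -- explicit column list in the insert clause
select top 100 SomeID, ColA, ColB            -- and again in the select clause
from dbo.ProdTB
set identity_insert dbo.TestTb OFF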

DB2: How to add new column between existing columns?

I have an existing DB2 database and a table named employee with columns id, e_name, e_mobile_no, e_dob, e_address.
How can I add a new column e_father_name before e_mobile_no?
You should try using the ADMIN_MOVE_TABLE procedure, which allows you to change the table structure.
ALTER TABLE only allows adding columns to the end of the table. The reason is that inserting a column in the middle would change the physical structure of the table, i.e., each row would need to be adapted to the new format, which would be quite expensive.
Using the mentioned ADMIN_MOVE_TABLE procedure, you would copy the entire table and change its structure during that process. It requires a significant amount of space and time.
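A hedged sketch of such a call on DB2 LUW, assuming the eleven-argument form of the procedure whose ninth argument takes the new column definition (the schema name, the empty tablespace arguments, and the column types are all assumptions):
call SYSPROC.ADMIN_MOVE_TABLE(
    'MYSCHEMA', 'EMPLOYEE',
    '', '', '',     -- keep the current data, index and LOB table spaces
    '', '', '',     -- no MDC, partitioning-key or data-partition changes
    'id INT, e_name VARCHAR(50), e_father_name VARCHAR(50), e_mobile_no VARCHAR(15), e_dob DATE, e_address VARCHAR(100)',
    '', 'MOVE')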
On DB2 for IBM i 7.1 you can do it directly; try it on your DB2 version:
alter table yourtable
add column e_father_name varchar(10) before e_mobile_no
I always do the following --
1. Take a backup/dump of the table data and a db2look.
(If you dump to a CSV file, as I do, I suggest dumping in the new format - for example, put null for the new column in the right place.)
2. Drop the table and indexes.
3. Create the table with the new column.
4. Load the data with the old values.
5. Recreate all indexes and runstats.
Once you have done it a few times it becomes old hat.
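A hedged sketch of that sequence for the employee example (the column types are assumptions, and EXPORT/IMPORT are DB2 CLP commands):
-- dump in the new format, with null in the new column's position
EXPORT TO employee.csv OF DEL
    SELECT id, e_name, CAST(NULL AS VARCHAR(50)), e_mobile_no, e_dob, e_address
    FROM employee

DROP TABLE employee

CREATE TABLE employee (
    id            INT,
    e_name        VARCHAR(50),
    e_father_name VARCHAR(50),
    e_mobile_no   VARCHAR(15),
    e_dob         DATE,
    e_address     VARCHAR(100)
)

IMPORT FROM employee.csv OF DEL INSERT INTO employee
-- then recreate the indexes and run RUNSTATS ON TABLE <schema>.employee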

change attribute's data type in a database table when it is already filled with records

Can we change an attribute's data type in SQL when the database table already has records?
I am using Microsoft SQL Server Management Studio 2008. The error that I am getting is:
Error converting data type nvarchar to float.
In short: it is possible with the alter column command ONLY if the existing data type is compatible with the newly modified one. In addition, it is recommended to do this within a transaction.
For example, you may change a column from varchar(50) to nvarchar(200) with the script below.
alter table TableName
alter column ColumnName nvarchar(200)
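As the note about transactions suggests, you can wrap the change so it can be checked and rolled back; a minimal sketch:
begin transaction

alter table TableName
alter column ColumnName nvarchar(200)

-- run sanity checks here, then:
commit transaction
-- or: rollback transaction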
Edit: Regarding your posted error while altering column type.
Error converting data type nvarchar to float.
One way would be to create a new column and convert all the good (convertible and compatible) records to the new column. After that you may want to clean up the bad records that do not convert, delete the old column, and rename your newly added and populated column back to the original name. Important: use a testing environment for all these manipulations first. Playing with production tables is usually a bad practice and an easy way to screw things up.
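A minimal sketch of that approach, assuming a hypothetical dbo.MyTable with an nvarchar column MyColumn that should become float (note that isnumeric accepts a few values, e.g. '$' or '.', that still fail to convert, so review the results before dropping anything; on SQL Server 2012+ try_convert is the safer choice):
alter table dbo.MyTable add MyColumn_new float null

update dbo.MyTable
set MyColumn_new = case when isnumeric(MyColumn) = 1
                        then convert(float, MyColumn) end

-- review rows where MyColumn_new is null but MyColumn is not, then:
alter table dbo.MyTable drop column MyColumn
exec sp_rename 'dbo.MyTable.MyColumn_new', 'MyColumn', 'COLUMN'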
References to look for more discussions on similar SE posts:
Change column types in a huge table
How to change column datatype in SQL Server database without losing data
Obviously, there is no default conversion to your new data type. One solution could be to create a second column with the requested type and write your own conversion function. Once this is done, delete the first column and rename the second one to the original name.
Things to consider: how big your table is. You then use the alter table syntax. We do not know which data type you want to change to, so just as an example, alter the column:
Alter Table [yourTable] Alter column [yourColumn] varchar(15)
You could also try to add a new column, then update that column using your old column, drop the old column, and rename the new column. This is a safer way, because at times the data that you hold might not react well to the new data type...
Posts to look into for ideas: Change column types in a huge table, How to change column datatype in SQL database without losing data
Alter the data type of that column. But in general SQL Server won't allow the change; it will prompt you to drop that column. There is a setting to achieve this.
Go to Tools -> Options -> Designers -> Table and Database Designers and uncheck the "Prevent saving changes that require table re-creation" option. I am talking about SQL Server 2008 R2. Now you can easily alter the data type.

Change datatype varchar to nvarchar in existing SQL Server 2005 database. Any issues?

I need to change column datatypes in a database table from varchar to nvarchar in order to support Chinese characters (currently, the varchar fields that have these characters are only showing question marks).
I know how to change the values, but I want to see if it's safe to do so. Is there anything to look out for before I do the changing? Thanks!
Note that this change is a size-of-data update, see SQL Server table columns under the hood. The change will add a new NVARCHAR column, update each row copying the data from the old VARCHAR column to the new NVARCHAR column, and then mark the old VARCHAR column as dropped. If the table is large, this will generate a large log, so be prepared for it. After the update, run DBCC CLEANTABLE to reclaim the space used by the former VARCHAR column. If you can afford it, better to run ALTER TABLE ... REBUILD, which will not only reclaim the space but also completely remove the physically deleted VARCHAR column. The linked article at the beginning has more details.
You may also be interested in enabling Unicode Compression for your table.
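A hedged sketch of that sequence for a hypothetical dbo.MyTable with a Comments column (ALTER TABLE ... REBUILD requires SQL Server 2008 and Unicode compression requires 2008 R2, so on a 2005 instance only the first two statements apply):
alter table dbo.MyTable alter column Comments nvarchar(500) not null

-- reclaim the space left by the dropped varchar column
dbcc cleantable ('MyDatabase', 'dbo.MyTable')

-- or rebuild to remove the dropped column entirely (SQL Server 2008+);
-- ROW compression also enables Unicode compression (2008 R2+)
alter table dbo.MyTable rebuild with (data_compression = row)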
You can do this on non-primary-key fields:
ALTER TABLE [TableName]
ALTER COLUMN [ColumnName] nvarchar(N) null
On the primary key fields it will not work - you will have to recreate the table
Make sure that the length doesn't exceed 4000, since the maximum for VARCHAR is 8000 characters while NVARCHAR is only 4000.
The table will get bigger. Each character in the column will take twice the space to store. You might not notice unless the table is really big.
Stored procedures/views/queries that work with the column data might need to be modified to deal with the nvarchar.
Check all the dependencies for this table as stored procs, functions, temp tables based on this table and variables used for inserts/updates etc may also need to be updated to NVARCHAR.
Also check if the table is in replication! That could cause you a new set of problems!

SQL: Insert all records from one table to another table without specifying the columns

I want to insert all the records from the backup table foo_bk into the foo table without specifying the columns.
If I try this query
INSERT INTO foo
SELECT *
FROM foo_bk
I'll get the error "Insert Error: Column name or number of supplied values does not match table definition."
Is it possible to do a bulk insert from one table to another without supplying the column names?
I've googled it but can't seem to find an answer. All the answers require specifying the columns.
You should not ever want to do this. Select * should not be used as the basis for an insert, as the columns may get moved around and break your insert (or worse, not break your insert but mess up your data). Suppose someone adds a column to the table in the select but not to the other table; your code will break. Or suppose someone, for reasons that surpass understanding but that frequently happen, decides to do a drop and recreate on a table and move the columns around to a different order. Now your last_name is in the place first_name was in originally, and select * will put it in the wrong column in the other table. It is an extremely poor practice to fail to specify columns and the specific mapping of each column to the column you want in the table you are interested in.
Right now you may have several problems: first, the two structures don't match directly; or second, the table being inserted into has an identity column, so even though the insertable columns are a direct match, the table being inserted into has one more column than the other, and by not specifying columns the database assumes you are going to try to insert into that column as well. Or you might have the same number of columns but one is an identity and thus can't be inserted into (although I think that would give a different error message).
Per this other post: Insert all values of a..., you can do the following:
INSERT INTO new_table (Foo, Bar, Fizz, Buzz)
SELECT Foo, Bar, Fizz, Buzz
FROM initial_table
It's important to specify the column names as indicated by the other answers.
Use this
SELECT *
INTO new_table_name
FROM current_table_name
You need to have at least the same number of columns, and each column has to be defined in exactly the same way, i.e. a varchar column can't be inserted into an int column.
For bulk transfer, check the documentation for the SQL implementation you're using. There are often tools available to bulk transfer data from one table to another. For SQL Server 2005, for example, you could use the SQL Server Import and Export Wizard. Right-click on the database you're trying to move data around in and click Export to access it.
SQL 2008 allows you to forgo specifying column names in your SELECT if you use SELECT INTO rather than INSERT INTO / SELECT:
SELECT *
INTO Foo
FROM Bar
WHERE x=y
The INTO clause does exist in SQL Server 2000-2005, but still requires specifying column names. 2008 appears to add the ability to use SELECT *.
See the MSDN articles on INTO (SQL2005), (SQL2008) for details.
The INTO clause only works if the destination table does not yet exist, however. If you're looking to add records to an existing table, this won't help.
All the answers above, for some reason or another, did not work for me on SQL Server 2012. My situation was that I accidentally deleted all rows instead of just one row. After our DBA restored the table to dbo.foo_bak, I used the below to restore. NOTE: This only works if the backup table (represented by dbo.foo_bak) and the table that you are writing to (dbo.foo) have exactly the same column names.
This is what worked for me using a hybrid of a bunch of different answers:
USE [database_name];
GO
SET IDENTITY_INSERT dbo.foo ON;
GO
INSERT INTO [dbo].[foo]
([row0]
,[row1]
,[row2]
,[row3]
,...
,[rown])
SELECT * FROM [dbo].[foo_bak];
GO
SET IDENTITY_INSERT dbo.foo OFF;
GO
This version of my answer is helpful if you have primary and foreign keys.
As you probably understood from previous answers, you can't really do what you're after.
I think you can understand the problem SQL Server is experiencing with not knowing how to map the additional/missing columns.
That said, since you mention that the purpose of what you're trying to do here is backup, maybe we can work with SQL Server and work around the issue.
Not knowing your exact scenario makes it impossible to give an exact answer here, but I assume the following:
You wish to manage a backup/audit process for a table.
You probably have a few of those and wish to avoid altering dependent objects on every column addition/removal.
The backup table may contain additional columns for auditing purposes.
I wish to suggest two options for you:
The efficient practice (IMO) for this can be to detect schema changes using DDL triggers and use them to alter the backup table accordingly. This will enable you to use the 'select * from...' approach, because the column list will be consistent between the two tables.
I have used this approach successfully and you can leverage it to have DDL triggers automatically manage your auditing tables. In my case, I used a naming convention for a table requiring audits and the DDL trigger just managed it on the fly.
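A much simplified sketch of the DDL-trigger idea (the trigger name, the _bk naming convention, and the PRINT placeholder are all assumptions; a real implementation would parse EVENTDATA() further and replay the change on the backup table):
create trigger trg_sync_backup_schema
on database
for ALTER_TABLE
as
begin
    declare @evt xml = EVENTDATA()
    declare @tbl sysname = @evt.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)')

    -- hypothetical convention: tables with a _bk twin get flagged so the
    -- same column change can be applied (or replayed) on the backup table
    if object_id(@tbl + '_bk') is not null
        print 'Schema change on ' + @tbl + ' - apply it to ' + @tbl + '_bk too'
end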
Another option that might be useful for your specific scenario is to create a supporting view for each of the tables, aligning the column lists. Here's a quick example:
create table foo (id int, name varchar(50))
create table foo_bk (id int, name varchar(50), tagid int)
go
create view vw_foo as select id,name from foo
go
create view vw_foo_bk as select id,name from foo_bk
go
insert into vw_foo
select * from vw_foo_bk
go
drop view vw_foo
drop view vw_foo_bk
drop table foo
drop table foo_bk
go
I hope this helps :)
You could try this:
SELECT * INTO foo FROM foo_bk
This is a valid question, for example when you want to append newly imported rows from a CSV file of the same raw structure into an existing table which may have DB constraints set up, such as PKs and FKs.
I would simply do the following, for example:
INSERT INTO roles select * from new_imported_roles_from_csv_file
I also like that if any new rows violate uniqueness during this operation, the INSERT will fail, insert nothing, and in a way 'protect' the target table from bad inbound data.