Is it possible to alter a table and add columns with a dynamic name/data type based on the results of a previous SELECT query?
The pseudo equivalent for what I'm looking to do in SQL would be:
foreach row in tableA
{
alter tableB add row.name row.datatype
}
This is for SQL Server.
As mentioned, you can do this with dynamic SQL. Something along these lines (note that SQL Server's syntax is ADD, not ADD COLUMN):
DECLARE @SQL1 nvarchar(4000)
SELECT @SQL1 = N'ALTER TABLE mytable' + NCHAR(13) + NCHAR(10)
    + N' ADD ' + @my_new_column_name + N' varchar(25)' + NCHAR(13) + NCHAR(10)
-- SELECT LEN(@SQL1), @SQL1
EXECUTE (@SQL1)
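To mirror the foreach in the question, here is a minimal set-based sketch; it assumes tableA has name and datatype columns holding trusted identifiers and type definitions (QUOTENAME guards the column names, but the type string is concatenated as-is, so validate it in real code):
DECLARE @SQL nvarchar(max)
SET @SQL = N''
-- concatenate one ALTER statement per row of tableA
SELECT @SQL = @SQL + N'ALTER TABLE tableB ADD ' + QUOTENAME(name)
    + N' ' + datatype + N';' + NCHAR(13) + NCHAR(10)
FROM tableA
EXEC sp_executesql @SQL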
Apart from the fact that this is messy, error-prone, a security risk, requires high authorization to execute, and needs multiple variables for batches bigger than 4,000 characters, it is usually also a bad idea from a design point of view (depending on when/why you are doing this).
Sure, you can do this with dynamic SQL.
I have the following code
SELECT tA.FieldName As [Field Name],
COALESCE(tO_A.[desc], tO_B.[desc], tO_C.Name, tA.OldVAlue) AS [Old Value],
COALESCE(tN_A.[desc], tN_B.[desc], tN_C.Name, tA.NewValue) AS [New Value],
U.UserName AS [User Name],
CONVERT(varchar, tA.ChangeDate) AS [Change Date]
FROM D tA
JOIN
[DRTS].[dbo].[User] U
ON tA.UserID = U.UserID
LEFT JOIN
A tO_A
on tA.FieldName = 'AID'
AND tA.oldValue = CONVERT(VARCHAR, tO_A.ID)
LEFT JOIN
A tN_A
on tA.FieldName = 'AID'
AND tA.newValue = CONVERT(VARCHAR, tN_A.ID)
LEFT JOIN
B tO_B
on tA.FieldName = 'BID'
AND tA.oldValue = CONVERT(VARCHAR, tO_B.ID)
LEFT JOIN
B tN_B
on tA.FieldName = 'BID'
AND tA.newValue = CONVERT(VARCHAR, tN_B.ID)
LEFT JOIN
C tO_C
on tA.FieldName = 'CID'
AND tA.oldValue = tO_C.Name
LEFT JOIN
C tN_C
on tA.FieldName = 'CID'
AND tA.newValue = tN_C.Name
WHERE U.Fullname = @SearchTerm
ORDER BY tA.ChangeDate
When running the code I am getting the error pasted in the title after adding the two joins for table C. I think this may have something to do with the fact that I'm using SQL Server 2008 and have restored a copy of this db onto my machine, which runs 2005.
I do the following:
...WHERE
fieldname COLLATE DATABASE_DEFAULT = otherfieldname COLLATE DATABASE_DEFAULT
Works every time. :)
You have a mismatch of two different collations in your table. You can check what collations each column in your table(s) has by using this query:
SELECT
col.name, col.collation_name
FROM
sys.columns col
WHERE
object_id = OBJECT_ID('YourTableName')
Collations are needed and used when ordering and comparing strings. It's generally a good idea to have a single, unique collation used throughout your database - don't use different collations within a single table or database - you're only asking for trouble.
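To see the default collation of the database itself (which new columns inherit unless one is specified explicitly), you can also run this - DATABASEPROPERTYEX is the documented function for it; substitute your own database name:
SELECT DATABASEPROPERTYEX('YourDatabaseName', 'Collation') AS DatabaseCollation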
Once you've settled for one single collation, you can change those tables / columns that don't match yet using this command:
ALTER TABLE YourTableName
ALTER COLUMN OffendingColumn
VARCHAR(100) COLLATE Latin1_General_CI_AS NOT NULL
To find the fulltext indices in your database, use this query here:
SELECT
fti.object_Id,
OBJECT_NAME(fti.object_id) 'Fulltext index',
fti.is_enabled,
i.name 'Index name',
OBJECT_NAME(i.object_id) 'Table name'
FROM
sys.fulltext_indexes fti
INNER JOIN
    sys.indexes i ON fti.object_id = i.object_id AND fti.unique_index_id = i.index_id
You can then drop the fulltext index using:
DROP FULLTEXT INDEX ON (tablename)
Use the collate clause in your query:
LEFT JOIN C tO_C on tA.FieldName = 'CID' AND tA.oldValue COLLATE Latin1_General_CI_AS = tO_C.Name
I may not have the syntax exactly right (check BOL), but you can do this to change the collation on-the-fly for the query - you may need to add the clause for each join.
edit: I realized this was not quite right - the collate clause goes after the field you need to change - in this example I changed the collation on the tA.oldValue field.
Identify the fields for which it is throwing this error and add following to them:
COLLATE DATABASE_DEFAULT
There are two tables joined on the Code field:
...
and table1.Code = table2.Code
...
Update your query to:
...
and table1.Code COLLATE DATABASE_DEFAULT = table2.Code COLLATE DATABASE_DEFAULT
...
This can easily happen when you have two different databases, and especially two databases from two different servers. The best option is to convert to a common collation and do the join or comparison.
SELECT
*
FROM sd
INNER JOIN pd ON sd.SCaseflowID COLLATE Latin1_General_CS_AS = pd.PDebt_code COLLATE Latin1_General_CS_AS
@Valkyrie, awesome answer. Thought I'd put in here a case performing the same with a subquery inside a stored procedure, as I wondered whether your answer would work in that case, and it worked great.
...WHERE fieldname COLLATE DATABASE_DEFAULT in (
SELECT DISTINCT otherfieldname COLLATE DATABASE_DEFAULT
FROM ...
WHERE ...
)
In the WHERE criteria, add COLLATE SQL_Latin1_General_CP1_CI_AS.
This works for me.
WHERE U.Fullname = @SearchTerm COLLATE SQL_Latin1_General_CP1_CI_AS
To resolve this problem in the query without changing either database, you can cast the expressions on the other side of the "=" sign with
COLLATE SQL_Latin1_General_CP1_CI_AS
as suggested here.
The root cause is that the SQL Server database you took the schema from has a collation that differs from your local installation's. If you don't want to worry about collation, reinstall SQL Server locally using the same collation as the SQL Server 2008 database.
The error (Cannot resolve the collation conflict between ....) usually occurs when comparing data from multiple databases.
Since you cannot change the collation of the databases now, use COLLATE DATABASE_DEFAULT:
AND db1.tbl1.field1 COLLATE DATABASE_DEFAULT = db2.tbl2.field2 COLLATE DATABASE_DEFAULT
I have had something like this before, and what we found was that the collation was different between the two tables.
Check that these are the same.
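A quick way to spot the mismatch, as a sketch with placeholder table names, is to list both tables' column collations side by side:
SELECT OBJECT_NAME(object_id) AS TableName, name AS ColumnName, collation_name
FROM sys.columns
WHERE object_id IN (OBJECT_ID('dbo.TableOne'), OBJECT_ID('dbo.TableTwo'))
  AND collation_name IS NOT NULL
ORDER BY TableName, ColumnName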
Thanks to marc_s's answer I solved my original problem, and was inspired to take it a step further and post one approach to transforming a whole table at a time: a T-SQL script to generate the ALTER COLUMN statements:
DECLARE @tableName VARCHAR(MAX)
SET @tableName = 'affiliate'
--EXEC sp_columns @tableName
SELECT 'Alter table ' + @tableName + ' alter column ' + col.name
    + CASE ( col.user_type_id )
        WHEN 231
        THEN ' nvarchar(' + CAST(col.max_length / 2 AS VARCHAR) + ') '
      END + 'collate Latin1_General_CI_AS ' + CASE ( col.is_nullable )
        WHEN 0 THEN ' not null'
        WHEN 1 THEN ' null'
      END
FROM sys.columns col
WHERE object_id = OBJECT_ID(@tableName)
gets:
ALTER TABLE Affiliate ALTER COLUMN myTable NVARCHAR(4000) COLLATE Latin1_General_CI_AS NOT NULL
I'll admit to being puzzled by the need for col.max_length / 2 at first - it turns out sys.columns reports max_length in bytes, and nvarchar stores two bytes per character, so the byte count has to be halved to get back to the declared character length.
Check the level of collation that is mismatched (server, database,table,column,character).
If it is the server, these steps helped me once:
Stop the server
Find your sqlservr.exe tool
Run this command:
sqlservr -m -T4022 -T3659 -s "name_of_instance" -q "name_of_collation"
Start your sql server:
net start name_of_instance
Check the collation of your server again.
Here is more info:
https://www.mssqltips.com/sqlservertip/3519/changing-sql-server-collation-after-installation/
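For the final check (step 6 above), SERVERPROPERTY returns the server-level collation:
SELECT SERVERPROPERTY('Collation') AS ServerCollation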
I have used the content from this site to create the following script, which generates the statements to change the collation of all columns in all tables:
CREATE PROCEDURE [dbo].[sz_pipeline001_collation]
-- Add the parameters for the stored procedure here
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
SELECT 'ALTER TABLE [' + SYSOBJECTS.Name + '] ALTER COLUMN [' + SYSCOLUMNS.Name + '] ' +
SYSTYPES.name +
CASE systypes.NAME
WHEN 'text' THEN ' '
ELSE
'(' + RTRIM(CASE SYSCOLUMNS.length
    WHEN -1 THEN 'MAX'
    -- syscolumns.length is in bytes, so halve it for the two-byte-per-character types
    ELSE CONVERT(VARCHAR(10), CASE WHEN SYSTYPES.name IN ('nvarchar', 'nchar')
                                   THEN SYSCOLUMNS.length / 2
                                   ELSE SYSCOLUMNS.length END)
    END) + ') '
END
+ ' ' + ' COLLATE Latin1_General_CI_AS ' + CASE ISNULLABLE WHEN 0 THEN 'NOT NULL' ELSE 'NULL' END
FROM SYSCOLUMNS , SYSOBJECTS , SYSTYPES
WHERE SYSCOLUMNS.ID = SYSOBJECTS.ID
AND SYSOBJECTS.TYPE = 'U'
AND SYSTYPES.Xtype = SYSCOLUMNS.xtype
AND SYSCOLUMNS.COLLATION IS NOT NULL
AND NOT ( sysobjects.NAME LIKE 'sys%' )
AND NOT ( SYSTYPES.name LIKE 'sys%' )
END
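Note that as written the procedure only returns the ALTER statements as a result set rather than executing them. A usage sketch:
EXEC [dbo].[sz_pipeline001_collation]
-- then copy the generated ALTER TABLE statements from the results pane and run them in a new query window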
If this occurs across the whole of your DB then it's better to change your DB collation like so:
USE master;
GO
ALTER DATABASE MyOptionsTest
COLLATE << INSERT COLLATION REQUIRED >> ;
GO
--Verify the collation setting.
SELECT name, collation_name
FROM sys.databases
WHERE name = N'<< INSERT DATABASE NAME >>';
GO
Reference here
For those who have a CREATE DATABASE script (as was my case) for the database that is causing this issue, you can use the following CREATE script to match the collation:
-- Create Case Sensitive Database
CREATE DATABASE CaseSensitiveDatabase
COLLATE SQL_Latin1_General_CP1_CS_AS -- or any collation you require
GO
USE CaseSensitiveDatabase
GO
SELECT *
FROM sys.types
GO
--rest of your script here
or
-- Create Case In-Sensitive Database
CREATE DATABASE CaseInSensitiveDatabase
COLLATE SQL_Latin1_General_CP1_CI_AS -- or any collation you require
GO
USE CaseInSensitiveDatabase
GO
SELECT *
FROM sys.types
GO
--rest of your script here
This applies the desired collation to all the tables, which was just what I needed. It is ideal to try and keep the collation the same for all databases on a server.
Hope this helps.
More info on the following link: SQL SERVER – Creating Database with Different Collation on Server
You can do this in four easy steps:
Back up your database, just in case.
Change the database collation: right-click the database, select Properties, go to Options, and change the collation to the required collation.
Generate a script to drop and recreate all your database objects: right-click your database, select Tasks, select Generate Scripts... (make sure you select Drop & Create in the Advanced options of the wizard, and also select Schema & Data).
Run the script generated above.
INSERT INTO eSSLSmartOfficeSource2.[dbo].DeviceLogs (DeviceId, UserId, LogDate, UpdateFlag)
SELECT DL1.DeviceId, DL1.UserId COLLATE DATABASE_DEFAULT, DL1.LogDate, 0
FROM eSSLSmartOffice.[dbo].DeviceLogs DL1
WHERE NOT EXISTS
    (SELECT DL2.DeviceId, DL2.UserId COLLATE DATABASE_DEFAULT,
            DL2.LogDate, DL2.UpdateFlag
     FROM eSSLSmartOfficeSource2.[dbo].DeviceLogs DL2
     WHERE DL1.DeviceId = DL2.DeviceId
       AND DL1.UserId COLLATE Latin1_General_CS_AS = DL2.UserId COLLATE Latin1_General_CS_AS
       AND DL1.LogDate = DL2.LogDate)
Added code to @JustSteve's answer to deal with varchar and varchar(MAX) columns:
DECLARE @tableName VARCHAR(MAX)
SET @tableName = 'first_notes'
--EXEC sp_columns @tableName
SELECT 'Alter table ' + @tableName + ' alter column ' + col.name
    + CASE ( col.user_type_id )
        WHEN 231
        THEN ' nvarchar(' + CAST(col.max_length / 2 AS VARCHAR) + ') '
        WHEN 167
        THEN ' varchar(' + CASE col.max_length
                             WHEN -1 THEN 'MAX'
                             ELSE CAST(col.max_length AS VARCHAR)
                           END + ') '
      END + 'collate Latin1_General_CI_AS ' + CASE ( col.is_nullable )
        WHEN 0 THEN ' not null'
        WHEN 1 THEN ' null'
      END
FROM sys.columns col
WHERE object_id = OBJECT_ID(@tableName)
I had a similar error (Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "SQL_Latin1_General_CP1250_CI_AS" in the INTERSECT operation) when I used an old JDBC driver.
I resolved this by downloading a new driver from Microsoft or from the open-source jTDS project.
Here is what we did. In our situation we needed an ad hoc query, defined in a table, to be executed on demand with a date restriction.
Our new query needed to match data between different databases and include data from both of them.
It seems that the collation differs between the db that imports data from the iSeries/AS400 system and our reporting database - this could be because of the specific data types (such as Greek accents on names and so on).
So we used the below join clause:
...LEFT Outer join ImportDB..C4CTP C4 on C4.C4CTP COLLATE Latin1_General_CS_AS=CUS_Type COLLATE Latin1_General_CS_AS
You may not have any collation issues in your database whatsoever, but if you restored a copy of your database from a backup taken on a server with a different collation than the origin, and your code creates temporary tables, those temporary tables inherit their collation from the server and there will be conflicts with your database.
ALTER DATABASE test2 --put your database name here
COLLATE Latin1_General_CS_AS --replace with the collation you need
I had a similar requirement; documenting my approach here for anyone with a similar scenario...
Scenario
I have a database from a clean install with the correct collations.
I have another database which has the wrong collations.
I need to update the latter to use the collations defined on the former.
Solution
Use SQL Server Schema Comparison (from SQL Server Data Tools / Visual Studio) to compare source (clean install) with destination (the db with invalid collation).
In my case I compared the two DBs directly; though you could work via a project to allow you to manually tweak pieces in between...
Run Visual Studio
Create a new SQL Server Data Project
Click Tools, SQL Server, New Schema Comparison
Select the source database
Select the target database
Click options (⚙)
Under Object Types select only those types you're interested in (for me it was only Views and Tables)
Under General select:
Block on possible data loss
Disable & reenable DDL triggers
Ignore cryptographic provider file path
Ignore File & Log File Path
Ignore file size
Ignore filegroup placement
Ignore full text catalog file path
Ignore keyword casing
Ignore login SIDs
Ignore quoted identifiers
Ignore route lifetime
Ignore semicolon between statements
Ignore whitespace
Script refresh module
Script validation for new constraints
Verify collation compatibility
Verify deployment
Click Compare
Uncheck any objects flagged for deletion (NB: those may still have collation issues; but since they're not defined in our source/template db we don't know; either way, we don't want to lose things if we're only targeting collation changes). You can uncheck all at once by right clicking on the DELETE folder and selecting EXCLUDE.
Likewise exclude any CREATE objects (since they don't exist in the target they can't have the wrong collation there; whether they should exist is a question for another topic).
Click on each object under CHANGE to see the script for that object. Use the diff to ensure that we're only changing the collation (any other differences you detect manually you'll likely want to exclude and handle those objects by hand).
Click Update to push changes
This does still involve some manual effort (e.g. checking that you're only impacting the collation) - but it handles dependencies for you.
Also you can keep a database project of the valid schema so you can use a universal template for your DBs should you have more than 1 to update, assuming all target DBs should end up with the same schema.
You can also use find/replace on the files in a database project should you wish to mass amend settings there (e.g. so you could create the project from the invalid database using schema compare, amend the project files, then toggle the source/target in the schema compare to push your changes back to the DB).
I read practically every answer and comment here so far, and combining the responses got me to an easy solution. Here is how I resolved it:
Create a script of the database. Right-click database > Tasks > Generate Script. Be sure to include Schema and Data
Delete the database after you have saved the script. Right-click database > Delete
Remove the part of the script that will recreate the database, i.e., delete everything up to the first line that starts with:
USE < DATABASENAME >
GO
Create the database 'manually', i.e., right-click on Databases > New Database...
Run the script that sets the default collation that you want for the new empty database.
USE master;
GO
ALTER DATABASE << DatabaseName >>
COLLATE << INSERT COLLATION REQUIRED >> ;
GO
Run the script you saved to recreate the database
Credit to
@Justin for providing the script to check the collation on the database, and how to update it
@RockScience for mentioning that the change of collation will only apply to new tables/objects
@Felix Mwiti Mugambi (acknowledging my fellow Kenyan :) ) for indicating the need to recreate the database. (I usually avoid dropping and creating for complex databases)
(1) Is there a good/reliable way to query the system catalogue in order
to find all stored procedures which create some temporary tables in their
source code bodies but which don't drop them at the end of their bodies?
(2) In general, can creating temp tables in a SP and not dropping
them in the same SP cause some problems and if so, what problems?
I am asking this question mostly in the context of
SQL Server 2008 R2 and SQL Server 2012.
Many thanks in advance.
Not 100% sure if this is accurate as I don't have a good set of test data to work with. First you need a function to count occurrences of a string (shamelessly stolen from here):
CREATE FUNCTION dbo.CountOccurancesOfString
(
    @searchString nvarchar(max),
    @searchTerm nvarchar(max)
)
RETURNS INT
AS
BEGIN
    -- remove every occurrence of the term, then divide the length difference by the term's length
    RETURN (LEN(@searchString) - LEN(REPLACE(@searchString, @searchTerm, ''))) / LEN(@searchTerm)
END
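For example, this should return 2, since the search term occurs twice:
SELECT dbo.CountOccurancesOfString(N'CREATE TABLE #a; CREATE TABLE #b;', N'CREATE TABLE #')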
Next make use of the function like this. It searches the procedure text for the strings and reports when the number of creates doesn't match the number of drops:
WITH CreatesAndDrops AS (
SELECT procedures.name,
dbo.CountOccurancesOfString(UPPER(syscomments.text), 'CREATE TABLE #') AS Creates,
dbo.CountOccurancesOfString(UPPER(syscomments.text), 'DROP TABLE #') AS Drops
FROM sys.procedures
JOIN sys.syscomments
ON procedures.object_id = syscomments.id
)
SELECT * FROM CreatesAndDrops
WHERE Creates <> Drops
1) Probably no good / reliable way -- though you can extract the text of sps using some arcane ways that you can find in other places.
2) In general, no - this causes no problems -- temp tables (#tables) are scope-limited and will be flagged for removal when their scope disappears.
and table variables likewise
an exception is for global temp tables (##tables) which are cleaned up when no scope holds a reference to them. Avoid those guys -- there are usually (read almost always) better ways to do something than with a global temp table.
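A quick sketch illustrating that scoping (the procedure and table names here are made up for the demo):
CREATE PROCEDURE dbo.TempScopeDemo
AS
BEGIN
    CREATE TABLE #scratch (id int)
    INSERT INTO #scratch VALUES (1)
    -- no DROP here: #scratch is cleaned up automatically when this scope ends
END
GO
EXEC dbo.TempScopeDemo
SELECT * FROM #scratch   -- fails: Invalid object name '#scratch'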
Sigh -- if you want to go down the (1) path, then be aware that there are lots of pitfalls in looking at code inside SQL Server -- many of the helper functions and information tables will truncate the actual code down to NVARCHAR(4000).
If you look at the code of sp_helptext you'll see a really horrible cursor that pulls the actual text.
I wrote this a long time ago to look for strings in code - you could run it on your database -- look for 'CREATE TABLE #' and 'DROP TABLE #' and compare the outputs....
DECLARE @SearchString VARCHAR(255) = 'DELETE FROM'
SELECT
[ObjectName]
, [ObjectText]
FROM
(
SELECT
so.[name] AS [ObjectName]
, REPLACE(comments.[c], '&#x0D;', '') AS [ObjectText]
FROM
sys.objects AS so
CROSS APPLY (
SELECT CAST([text] AS NVARCHAR(MAX))
FROM syscomments AS sc
WHERE sc.[id] = so.[object_id]
FOR XML PATH('')
)
AS comments ([c])
WHERE
so.[is_ms_shipped] = 0
AND so.[type] = 'P'
)
AS spText
WHERE
spText.[ObjectText] LIKE '%' + @SearchString + '%'
Or much better - use whatever tool of choice you like on your codebase - you've got all your sp's etc scripted out into source control somewhere, right.....?
I think the SQL Search tool from Red Gate would come in handy in this case. You can download it from here. This tool will find SQL text within stored procedures, functions, views, etc.
Just install this plugin and you can find SQL text easily from SSMS.
I believe what I am attempting to achieve may only be done through the use of Dynamic SQL. However, I have tried a couple of things without success.
I have a table in database DB1 (let's say DB1.dbo.table1, on an MS SQL Server instance) that contains the names of other databases on the server (DB2, DB3, etc). Now, all the dbs listed in that table contain a particular table (let's call it desiredTable) which I want to query. So what I'm looking for is a way of creating a stored procedure/script/whatever that queries DB1.dbo.table1 for the other DBs and then runs a statement on each of the dbs retrieved, something like:
@DBNAME = select dbName from DB1.dbo.table1
select value1 from @DBNAME.dbo.desiredTable
Is that possible? I'm planning on running the sp/script in various systems DB1.dbo.table1 being a constant.
You need to build a query dynamically and then execute it. Something like this:
DECLARE @MyDynamicQuery VARCHAR(MAX)
DECLARE @MyDynamicDBName VARCHAR(20)
SELECT @MyDynamicDBName = dbName
FROM DB1.dbo.table1
SET @MyDynamicQuery = 'SELECT value1 FROM ' + @MyDynamicDBName + '.dbo.desiredTable'
EXEC(@MyDynamicQuery)
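Note the SELECT above assigns only one dbName (effectively an arbitrary row if the table has several); if you need to hit every database listed, here is a cursor-based sketch along the same lines:
DECLARE @dbName sysname
DECLARE @sql nvarchar(max)

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT dbName FROM DB1.dbo.table1

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @dbName
WHILE @@FETCH_STATUS = 0
BEGIN
    -- QUOTENAME guards against odd characters in the database name
    SET @sql = N'SELECT value1 FROM ' + QUOTENAME(@dbName) + N'.dbo.desiredTable'
    EXEC sp_executesql @sql
    FETCH NEXT FROM db_cursor INTO @dbName
END

CLOSE db_cursor
DEALLOCATE db_cursor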
You can use the undocumented stored procedure, sp_MSForEachDB. The usual warnings about using an undocumented stored procedure apply though. Here's an example of how you might use it in your case:
EXEC sp_MSForEachDB 'SELECT value1 FROM ?.dbo.desiredTable'
Notice the use of ? in place of the DB name.
I'm not sure how you would limit it to only DBs in your own table. If I come up with something, then I'll post it here.
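One possible way to limit it, as an untested sketch (the ? placeholder is substituted textually, so it can also be used inside a string literal):
EXEC sp_MSForEachDB 'IF EXISTS (SELECT 1 FROM DB1.dbo.table1 WHERE dbName = ''?'')
    SELECT value1 FROM [?].dbo.desiredTable'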
I have an application that needs to return search results from a SQL Server 2008 database. I would like to use a single stored procedure to return the results but I am finding that as I build the stored procedure it is full of many Else .. Else If statements with the query repeated over and over with slight variations depending on the users search criteria.
Is there a better way to go about this? I am trying to avoid writing dynamic SQL because I would like the benefits of an execution plan but I am thinking there must be a better way. Does anyone have any suggestions or perhaps examples of how best to design a stored procedure that has to deal with many search parameters, many of which may be NULL? Thank you.
Not really.
With SQL Server 2005 and above, statement-level recompilation means there is less of a penalty with OR clauses - just maintenance complexity.
Using Richard Harrison's approach makes it worse, because OR is non-sargable: it runs slowly and most likely won't use indexes.
Dynamic SQL opens up SQL injection, quoting and caching issues.
This leaves sp_executesql, as per CountZero's answer, which still requires building up strings.
The solution may not be code based... do you really need to search on all fields at any one time? I'd try to split into simple and advanced searches, or work out what the most common are and try to cover these queries.
I've always done this by using default values and conditions; e.g.
CREATE PROCEDURE [dbo].[searchForElement]
(
    @Town nvarchar(100) = '',
    @County nvarchar(100) = '',
    @postcode nvarchar(100) = ''
)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT <fields>
    FROM table
    WHERE
        (@Town = '' OR Town LIKE '%' + @Town + '%')
        AND (@County = '' OR County LIKE '%' + @County + '%')
        AND (@postcode = '' OR postcode LIKE '%' + @postcode + '%')
END
Edit:
As @gbn correctly advises, the above will result in an index scan, which may be a problem for large tables. If this is a problem, the solution below uses ISNULL and the fact that concatenating NULL with anything results in NULL; this allows an index seek because the '%' pattern is understood by the optimiser (tested on SQL 2008). This may be less readable, but it makes better use of the indexes.
CREATE PROCEDURE [dbo].[searchForElement]
(
    @Town nvarchar(100) = NULL,
    @County nvarchar(100) = NULL,
    @postcode nvarchar(100) = NULL
)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT <fields>
    FROM table
    WHERE Town LIKE ISNULL('%' + @Town + '%', '%')
      AND County LIKE ISNULL('%' + @County + '%', '%')
      AND Postcode LIKE ISNULL('%' + @postcode + '%', '%')
END
I always run into this problem myself. I tend to use dynamic SQL; as long as you use sp_executesql, the optimizer will try to reuse the execution plan.
http://ayyanar.blogspot.com/2007/11/performance-difference-between-exec-and.html
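For illustration, a minimal sketch of the sp_executesql approach applied to the search procedure above (same <fields>/table placeholders as earlier; the values stay parameterized, so plans can be reused and injection is avoided):
DECLARE @sql nvarchar(max)
DECLARE @params nvarchar(500)

SET @sql = N'SELECT <fields> FROM table WHERE 1 = 1'
-- only append the predicates the caller actually supplied
IF @Town IS NOT NULL     SET @sql = @sql + N' AND Town LIKE ''%'' + @Town + ''%'''
IF @County IS NOT NULL   SET @sql = @sql + N' AND County LIKE ''%'' + @County + ''%'''
IF @postcode IS NOT NULL SET @sql = @sql + N' AND Postcode LIKE ''%'' + @postcode + ''%'''

SET @params = N'@Town nvarchar(100), @County nvarchar(100), @postcode nvarchar(100)'

EXEC sp_executesql @sql, @params, @Town = @Town, @County = @County, @postcode = @postcode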