Is it possible to alter a table and add columns with a dynamic name/data type based on the results of a previous SELECT query?
The pseudo equivalent for what I'm looking to do in SQL would be:
foreach row in tableA
{
alter tableB add row.name row.datatype
}
This is for SQL Server.
As mentioned, you can do this with dynamic sql. Something along these lines:
Declare @SQL1 nvarchar(4000)
Declare @my_new_column_name sysname = N'NewColumn'  -- the dynamic column name
-- note: SQL Server syntax is ADD, not ADD COLUMN
SELECT @SQL1=N'ALTER TABLE mytable'+NCHAR(13)+NCHAR(10)
+N' ADD '+ QUOTENAME(@my_new_column_name) + N' varchar(25)'+NCHAR(13)+NCHAR(10)
-- SELECT LEN(@SQL1), @SQL1   -- uncomment to inspect the generated statement
EXECUTE (@SQL1)
Apart from the fact that this is messy, error-prone, a security risk, requires elevated permissions to execute, and needs multiple variables (or nvarchar(max)) for batches bigger than 4,000 characters, it is usually also a bad idea from a design point of view (depending on when/why you are doing this).
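That said, a sketch closer to the question's pseudocode, assuming dbo.tableA carries the new column definitions in columns called name and datatype (hypothetical names), would build one ALTER per row:
DECLARE @sql nvarchar(max) = N'';
-- concatenate one ALTER statement per row of tableA;
-- [datatype] cannot be QUOTENAMEd, so it must come from a trusted source
SELECT @sql = @sql
+ N'ALTER TABLE dbo.tableB ADD ' + QUOTENAME([name]) + N' ' + [datatype] + N';'
+ NCHAR(13) + NCHAR(10)
FROM dbo.tableA;
EXEC sp_executesql @sql;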
Sure, you can do this with dynamic sql.
Is there a way to create a dynamic temp table? The SQL code below declares a variable @tic. I am planning to insert the contents of table1 into a temp table #df, so instead of hard-coding #df, I am passing the name as a variable. But the code below is not successful. Can anyone help me here?
declare @tic as varchar(100) = 'df'
select *
into '#' + @tic from (
select * from [dbo].[table1])
select * from #df
Is there a way? Well, I think of the answer as "yes and no and maybe".
As far as I know, there is no way to do this using a local temporary table. As Stu explains in the comment, you would need dynamic SQL to define the table name and then the table would not be visible in the outer scope, because it is a local temporary table.
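A quick illustration of that scoping problem (assuming a table1 exists):
declare @sql nvarchar(max) = N'select * into #df from table1';
exec sp_executesql @sql;
select * from #df; -- fails: #df was dropped when the dynamic batch ended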
The "yes" is because one type of temporary table are global temporary tables. These are tables that persist across different scopes. And they are defined using ## instead of # as the prefix. So this works:
declare @tic as varchar(100) = 'df'
declare @sql nvarchar(max);
set @sql = 'select * into ##' + @tic + ' from table1';
select @sql;   -- inspect the generated statement
exec sp_executesql @sql;
select * from ##df;
(Here is a db<>fiddle.)
The "maybe" is because I'm quite skeptical that you really need this. Dynamic table names are rarely useful in SQL systems, precisely because they depend on dynamic SQL. Introducing dynamic names into SQL (whether columns or tables) is dangerous, both because of the danger of SQL injection and also because it can introduce hard-to-debug syntax errors.
If you are trying to solve a real problem, there might be alternative approaches that are better suited to SQL Server.
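For instance, if the goal is just to stage table1 somewhere temporarily, a fixed-name local temp table needs no dynamic SQL at all:
select * into #df from dbo.table1;
select * from #df;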
I have a single table that contains questions with corresponding references to another table and field that contain the answers. Something like: Questions(QID, QuestionText, ResponseTable, ResponseField).
I would like to query the questions table and return QID, QuestionText, and the value contained in [ResponseTable].[ResponseField] for each QID. The design seemed flexible at the time. However, the app developer is expecting a stored procedure, and the SQL developer was counting on an in-app solution for this issue.
I am at the end of my rope trying to build this query. How would you suggest accomplishing this task?
I don't think you'll like hearing this answer because it will likely mean some major rework, but I think it's the right answer. Get rid of the questions table and put the questions into new Question fields in the Client1, Client9, and Jobs tables; one for each response.
For example the Client1 table will have these fields:
ColorPref
ColorPrefQuestion
Rating
RatingQuestion
...and so on
Working around that design will be manageable, whereas working around the design you have now will be a headache.
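For example, under that layout, pulling question/answer pairs out of Client1 becomes a plain query (a sketch using the column names above):
SELECT ColorPrefQuestion AS QuestionText, ColorPref AS Response FROM Client1
UNION ALL
SELECT RatingQuestion, CAST(Rating AS varchar(100)) FROM Client1; -- cast in case Rating is numeric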
It sounds like a redesign should be considered (storing all responses in one table, for example), but if that's not a possibility then dynamic SQL (using sp_executesql) can be used. However, it can be dangerous to use as it is vulnerable to SQL injection. There are some precautions that can be taken, such as using QUOTENAME on table and column names. This is also a good read before using dynamic SQL: The Curse and Blessings of Dynamic SQL.
DECLARE @tableName NVARCHAR(50)
DECLARE @columnName NVARCHAR(50)
DECLARE @query NVARCHAR(MAX)
SET @tableName = 'Client1'
SET @columnName = 'ColorPref'
SET @query = 'SELECT ' + QUOTENAME(@columnName) + ' FROM ' + QUOTENAME(@tableName)
EXEC sp_executesql @query
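If a filter is needed as well, the value can travel as a real parameter rather than being concatenated in (QID and the value 42 are just for illustration):
SET @query = 'SELECT ' + QUOTENAME(@columnName) + ' FROM ' + QUOTENAME(@tableName)
+ ' WHERE QID = @QID'
EXEC sp_executesql @query, N'@QID INT', @QID = 42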
Until you get to the rewrite you mentioned, consider the idea of using a view to bring these response tables together.
CREATE VIEW ClientResponses AS
SELECT QID, ResponseField FROM [Client1]
UNION
SELECT QID, ResponseField FROM [Jobs]
UNION
SELECT QID, ResponseField FROM [Client9]
-- ..... add the new tables as they are created
This will
Avoid dynamic SQL
Give you a single place to maintain querying
Provide a pretty simple, readable way to cobble this together
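For example, the original goal (QID, QuestionText, and the response value) then collapses to a join against the questions table (assuming it is named Questions, per the structure above):
SELECT q.QID, q.QuestionText, r.ResponseField
FROM Questions q
JOIN ClientResponses r ON r.QID = q.QID;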
I am working on a reporting project based in SQL but I have restricted access to the DB; I can only make SELECT Queries and insert the data I retrieve into temp Tables/table variables. I cannot create/execute stored procedures or any sort of functions.
The query I am running is meant to pool together all Engineers and the different key skills that they have so that we can later on see what Skills each engineer has or which Engineers fall under a certain skill.
To this end, I am trying to create a table variable/temp table with a flexible structure, a structure based on previously obtained values in the same query.
For example:
1st Output:
Adam
Brad
Julio
Martinez
2nd Output (Skill separated by white space):
VOIP
TTS
DBA
Exchange
Server
Create a temp table/table variable that uses the 1st output as rows and the 2nd output as columns, or vice versa. I will then populate this new table according to different values on the main DB.
Please advise how this can be done, or provide any other solution to this problem.
Thank you
I believe you can.
First of all you need to create a temp table with a dynamic structure based on a query. It can be done like this:
Declare a script template:
Set @ScriptTmpl = 'Alter table #tempTable Add [?] varchar(100);'
Build a script that adds the columns you need, based on your query:
Select @TableScript = @TableScript + Replace(@ScriptTmpl, '?',
ColumnName) From ... Where ...
Then execute the script and fill your new table with values from the second query.
UPD:
Here is the full sample of dynamic temporary table creation. I used a global temporary table in my sample:
declare @scriptTemplate nvarchar(MAX)
declare @script nvarchar(MAX)
declare @tableTemplate nvarchar(MAX)
SET @tableTemplate = 'create table ##tmptable (?)'
SET @scriptTemplate = '? nvarchar(500),'
SET @script = ''
-- drop the global temp table if it is left over from a previous run
IF OBJECT_ID('tempdb..##tmptable') IS NOT NULL
Drop table ##tmptable
-- build one column definition per matching row
Select @script = @script + Replace(@scriptTemplate, '?', [Name])
From Account
Where name like 'ES_%'
SET @script = LEFT(@script, LEN(@script) - 1) -- trim the trailing comma
SET @script = Replace(@tableTemplate, '?', @script)
Select @script -- inspect the generated statement
exec(@script)
Select * from ##tmptable
Firstly, you may be able to achieve what you want through pivots, rather than temporary tables.
Secondly, if you really want to create a table with column name "Adam Brad", the solution is dynamic SQL, which you may not be able to do based on your permissions.
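On the first point, a rough PIVOT sketch, assuming the two outputs can be staged as (Engineer, Skill) pairs (all names hypothetical):
SELECT Engineer, [VOIP], [TTS], [DBA], [Exchange], [Server]
FROM (
SELECT Engineer, Skill, 1 AS HasSkill
FROM #EngineerSkills -- staged (Engineer, Skill) pairs from the two outputs
) AS src
PIVOT (
MAX(HasSkill) FOR Skill IN ([VOIP], [TTS], [DBA], [Exchange], [Server])
) AS p;
Note the skill list in the IN clause is still static; a fully dynamic column list brings you back to dynamic SQL.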
I'm wondering if copying an existing table into a temporary table results in worse performance compared to dynamic SQL.
To be concrete, I wonder if I should expect different performance between the following two SQL Server stored procedures:
CREATE PROCEDURE UsingDynamicSQL
(
@ID INT ,
@Tablename VARCHAR(100)
)
AS
BEGIN
DECLARE @SQL VARCHAR(MAX)
-- @ID must be cast; concatenating an INT into a string is a conversion error
SELECT @SQL = 'Insert into Table2 Select Sum(ValColumn) From '
+ @Tablename + ' Where ID=' + CAST(@ID AS VARCHAR(10))
EXEC(@SQL)
END
CREATE PROCEDURE UsingTempTable
(
@ID INT ,
@Tablename VARCHAR(100)
)
AS
BEGIN
Create Table #TempTable (ValColumn float, ID int)
DECLARE @SQL VARCHAR(MAX)
SELECT @SQL = 'Select ValColumn, ID From ' + @Tablename
+ ' Where ID=' + CAST(@ID AS VARCHAR(10))
INSERT INTO #TempTable
EXEC ( @SQL );
INSERT INTO Table2
SELECT SUM(ValColumn)
FROM #TempTable;
DROP TABLE #TempTable;
END
I'm asking this since I'm currently using a procedure built in the latter style, where I create many temporary tables at the beginning as simple extracts of existing tables and afterwards work with these temporary tables.
Could I improve the performance of the stored procedure by getting rid of the temporary tables and using dynamic SQL instead? In my opinion the dynamic SQL version is a lot uglier to program, which is why I used temporary tables in the first place.
Table variables suffer performance problems because the query optimizer always assumes there will be exactly one row in them. If you have table variables holding > 100 rows, I'd switch them to temp tables.
Using dynamic sql with EXEC(@sql) instead of exec sp_executesql @sql will prevent the query plan from being cached, which will probably hurt performance.
However, you are using dynamic sql on both queries. The only difference is that the second query has the unnecessary step of loading to a table variable first, then loading into the final table. Go with the first stored procedure you have, but switch to sp_executesql.
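A sketch of that rework (procedure name hypothetical; QUOTENAME guards the table name, and @ID travels as a real parameter so plans can be cached and reused):
CREATE PROCEDURE UsingDynamicSQL_Cached
(
@ID INT ,
@Tablename SYSNAME
)
AS
BEGIN
DECLARE @SQL NVARCHAR(MAX)
SET @SQL = N'Insert into Table2 Select Sum(ValColumn) From '
+ QUOTENAME(@Tablename) + N' Where ID = @ID'
EXEC sp_executesql @SQL, N'@ID INT', @ID = @ID
END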
In the posted query the temporary table is an extra write. It is not going to help.
Don't just time a query; look at the query plan. If you have two queries, the query plan will tell you the split.
And there is a difference between a table variable and a temp table: the temp table is faster, because the query optimizer does more with a temp table.
A temporary table can help in a few situations:
The output from a select is going to be used more than once. You materialize the output so it is only executed once. Where you see this is with an expensive CTE that is evaluated many times; people often falsely think a CTE is executed just once, but no, it is just syntax.
The query optimizer needs help. An example: you are doing a self join on a large table with multiple conditions, and some of the conditions eliminate most of the rows. A query into a #temp can filter the rows and also reduce the number of join conditions, as in the sketch below.
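A minimal sketch of that materialization idea (table and column names are hypothetical):
-- evaluate the expensive filter once and materialize the result
SELECT ID, ValColumn
INTO #filtered
FROM dbo.BigTable
WHERE ExpensiveCondition = 1;
-- the self join now runs against the much smaller temp table
SELECT a.ID, a.ValColumn, b.ValColumn AS NextVal
FROM #filtered a
JOIN #filtered b ON b.ID = a.ID + 1;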
I agree with everyone else that you always need to test both... I'm putting it in an answer here so it's more clear.
If you have an index setup that is perfect for the final query, going to temp tables could be nothing but extra work.
If that's not the case, pre-filtering to a temp table may or may not be faster.
You can predict it at the extremes - if you're filtering down from a million to a dozen rows, I would bet it helps.
But otherwise it can be genuinely difficult to know without trying.
I agree with you that maintenance is also an issue and lots of dynamic sql is a maintenance cost to consider.
Is there a clean way of cloning a record in SQL that has an auto-increment index? I want to clone all the fields except the index. I currently have to enumerate every field and use that in an INSERT ... SELECT, and I would rather not explicitly list all of the fields, as they may change over time.
Not unless you want to get into dynamic SQL. Since you wrote "clean", I'll assume not.
Edit: Since he asked for a dynamic SQL example, I'll take a stab at it. I'm not connected to any databases at the moment, so this is off the top of my head and will almost certainly need revision. But hopefully it captures the spirit of things:
-- Get list of columns in table (SELECT INTO can't consume EXEC sp_columns
-- directly, so read the catalog view instead)
SELECT COLUMN_NAME
INTO #t
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = N'TargetTable'
-- Create a comma-delimited string excluding the identity column
DECLARE @cols varchar(MAX)
SELECT @cols = COALESCE(@cols+',' ,'') + COLUMN_NAME FROM #t WHERE COLUMN_NAME <> 'id'
-- Construct dynamic SQL statement
DECLARE @sql varchar(MAX)
SET @sql = 'INSERT INTO TargetTable (' + @cols + ') ' +
'SELECT ' + @cols + ' FROM TargetTable WHERE SomeCondition'
PRINT @sql -- for debugging
EXEC(@sql)
There's no easy and clean way that I can think of off the top of my head, but from a few items in your question I'd be concerned about your underlying architecture. Maybe you have an absolutely legitimate reason for wanting to do this, but usually you want to try to avoid duplicates in a database, not make them easier to cause. Also, explicitly naming columns is usually a good idea. If you're linking to outside code, it makes sure that you don't break that link when you add a new column. If you're not (and it sounds like you probably aren't in this scenario) I still prefer to have the columns listed out because it forces me to review the effects of the change/new column - even if it's just to look at the code and decide that adding the new column is not a problem.
IF OBJECT_ID('tempdb..#tmp_MyTable') IS NOT NULL
DROP TABLE #tmp_MyTable
SELECT * INTO #tmp_MyTable
FROM MyTable
WHERE MyIndentID = 165
ALTER TABLE #tmp_MyTable
DROP Column MyIndentID
-- with the identity column gone, MyTable assigns a fresh value on insert
INSERT INTO MyTable
SELECT *
FROM #tmp_MyTable
This also deals with a unique key, projectnum, as well as the primary key (this example is MySQL syntax):
CREATE TEMPORARY TABLE projecttemp SELECT * FROM project WHERE projectid='6';
ALTER TABLE projecttemp DROP COLUMN projectid;
UPDATE projecttemp SET projectnum = CONCAT(projectnum, ' CLONED');
-- the leading NULL lets auto_increment assign a fresh projectid
INSERT INTO project SELECT NULL,projecttemp.* FROM projecttemp;
You could create an insert trigger to do this, however, you would lose the ability to do an insert with an explicit ID. It would, instead, always use the value from the sequence.
You could create a trigger to do it for you. To make sure that trigger only works for cloning, you could create a separate username CLONE and log in with it. Or, even better, if your DBMS supports it, create a role named CLONE and any user can log in using that role and do the cloning. The trigger code would be something like:
if (CURRENT_ROLE = 'CLONE') then
new.ID = assign new id from generator/sequence
Of course, you would grant that role only to the users who are allowed to clone records.