How to alter table add column, then update, within a single script? (SQL Server)

We have a custom database updater which runs various SQL scripts on SQL Server. Some of the scripts need to add a new column to a table, then populate the values, in a single script, within a transaction:
using (var scope = new TransactionScope()) {
    ... alter table MyTable add FOOBAR int;
    ... update MyTable set FOOBAR = 1;
}
Problem is, SQL spits back "Invalid column name 'FOOBAR'" because the alter table command hasn't taken effect yet. Wrapping it in exec() makes no difference:
... exec('alter table MyTable add FOOBAR int;')
... update MyTable set FOOBAR = 1;
It works OK within SQL Server Management Studio because it splits the script up at GO commands (which I know are not valid T-SQL commands).
I'd prefer not to create any new dependencies in the project.
My preference is not to split the schema & data scripts as this doubles the number of scripts for no reason other than to make SQL happy.
How can I solve this?

You can't do this exactly in a single statement (or batch) and it seems the tool you are using does not support GO as a batch delimiter.
You can use EXEC to run it in a child batch though.
ALTER TABLE A
ADD c1 INT, c2 VARCHAR(10);
EXEC('
UPDATE A
SET c1 = 23,
c2 = ''ZZXX'';
');
NB: All single quotes in the query need to be doubled up as above to escape them inside a string literal.
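For the scenario in the question, the same pattern inside a transaction might look like the sketch below (table and column names taken from the question):
BEGIN TRANSACTION;

ALTER TABLE MyTable ADD FOOBAR int;

-- The UPDATE is compiled in a child batch, after the ALTER has taken effect.
EXEC('UPDATE MyTable SET FOOBAR = 1;');

COMMIT;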
I tried this approach and it works.

Our solution was to sprinkle "GO" commands throughout the scripts, and customize our script execution function to split them up and run them separately.
This mimics what SQL management studio does and seems to work.

Related

Using foreach loop container to create new columns in a table - can't solve "Incorrect syntax near '@P1'" error

I'm trying to use a foreach loop container to take row values and make them into fields. But for some reason I can't get it to work without running into this error:
alter table /databasename/.dbo.cp_hh_foo..." failed with the following error: "Incorrect syntax near '@P1'.
The problem appears to be in the final execute SQL statement. The first two execute SQL statements work fine. I think I've made a mistake in my variable/parameter mappings, but I'm not sure.
My data flow looks like this (screenshot not included); basically what's going on is this:
First Execute SQL Task creates the new table
Second Execute SQL Task selects a table with full result set going into an object-type variable "AllocItems"
Foreach Loop container (configured as an ADO enumerator) maps a value from each row of "AllocItems" onto the variable "AllocItemsSQL1". These are the strings which should become field names in the table I'm creating
Execute SQL Task within foreach loop container alters the table. The SQL query: alter table MIT_Client_Profitability.dbo.cp_hh_footprint add ? varchar(255)
Things I've tried:
within the final execute sql task, adding parentheses around the parameter: "(?)" instead of "?"
within final execute sql task, changing parameter name to "Param1"
within final execute sql task, changing parameter size
within second execute sql task, changing "result name"
within final execute sql task, changing query to "declare @SQL varchar(255) set @SQL = 'alter table MIT_Client_Profitability.dbo.cp_hh_footprint add ? varchar(255)' exec(@SQL)"
Thanks in advance for any suggestions!
To build on David's answer
Create a new SSIS variable, @[User::Sql] of type String, and in the Expression box add the following syntax:
"alter table MIT_Client_Profitability.dbo.cp_hh_footprint add " + @[User::AllocItemSQL1] + " varchar(255);"
The nice thing about this approach is that you can put a breakpoint on the Execute SQL Task and see what the statement looks like prior to the task attempting to execute it. And then modify the Execute SQL Task to use the new variable and remove the parameter.
Otherwise, the dynamic T-SQL approach ought to have worked; you just needed to modify the syntax. The token replacement won't work inside the string. Something more like this should work, according to my mental model:
declare @newcolumn varchar(255) = ?;
declare @SQL varchar(255) = 'alter table MIT_Client_Profitability.dbo.cp_hh_footprint add ' + @newcolumn + ' varchar(255)';
exec(@SQL);
This
alter table MIT_Client_Profitability.dbo.cp_hh_footprint add ? varchar(255)
is a Data Definition Language (DDL) statement. DDL cannot be parameterized. You'll have to create the statement with string concatenation.
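As a hedged variation on the sketch above (QUOTENAME is my addition, not part of the original answer), bracketing the spliced-in name guards against column names that need quoting:
declare @newcolumn sysname = ?;  -- ? is the OLE DB parameter marker from the Execute SQL Task
declare @SQL nvarchar(1000) =
    N'alter table MIT_Client_Profitability.dbo.cp_hh_footprint add '
    + QUOTENAME(@newcolumn) + N' varchar(255);';
exec(@SQL);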

Error in creating 2 tables in Green Screen STRSQL

I'm getting an error while trying to create 2 tables in Green Screen STRSQL.
CREATE TABLE QTEMP/CUSTOMER AS (SELECT * FROM CBHHUBFP/SSCUSTP)
CREATE TABLE QTEMP/ADDRESS AS (SELECT * FROM QTEMP/CUSTOMER)
ERROR: Keyword CREATE not expected.
Valid tokens: End-of-Statement.
Am I missing something here?
Using STRSQL you can only execute one SQL statement at a time.
Re my comment to the accepted answer by @dcieslak: the following is an example of a Dynamic Compound Statement (DCS), with syntax that should be valid under the *SYS naming option (slash-qualified names) on any system [level of DB2 for IBM i] since the DCS feature became available. Notice the addition of the WITH DATA clause to make each statement syntactically correct, and that the two semicolon-separated CREATE TABLE statements are enclosed inside the BEGIN and END:
begin
CREATE TABLE QTEMP/CUSTOMER AS (SELECT * FROM qiws/qcustcdt )
with data
;
CREATE TABLE QTEMP/ADDRESS AS (SELECT * FROM QTEMP/CUSTOMER)
with data
;
end
-- Table ADDRESS created in QTEMP. /* <-- feedback of final rqs */
While that is possible to enter as a single request, there is likely no point in coding it that way, given the extra overhead; perhaps if it were run under isolation, doing more work, with exception handling coded, there would be value. In other words, the Start Interactive SQL Session (STRSQL) scripting environment already provides isolation and lets the user react to exceptions when the statements are entered individually, one after another, pressing Enter after each.
So unless the idea is to test what might be written in a routine [as a compound statement, i.e. statements between a BEGIN-END pair] without actually coding the CREATE PROCEDURE [or CREATE FUNCTION, or perhaps CREATE TRIGGER] with a routine-body, the implicitly created routine [a procedure] that is created, run, and deleted to implement the DCS is probably just a bunch of extra, unnecessary work.

SQL - How to: IF then GO (execute new query) in the same script without dynamic SQL?

In short, I'm managing a bunch of versioned SQL Scripts where one requirement is that they need to be sort of backwards compatible in that the same scripts can be executed multiple times, still guaranteeing the same end result for the latest version. Basically, say we are on version 3. We need to be able to run scripts for versions 1, 2 and 3 over and over, without errors, and still guarantee that the end result is the same complete version 3.
Now this is easy in normal scenarios (just check whether the column / table / type is right and create / modify if not), but how do you deal with, for instance, a trigger that's way over 8000 characters long and can't be executed as dynamic SQL? As version 2 is installed, the triggers are dropped; at the end, new ones are created to match v2's data model. But if v3 removed one of the columns referred to by the v2 trigger, that trigger will now fail.
I can't make any kind of IF checks to see if our log has v3 scripts, or if the datamodel doesn't match the requirements. I'd hate to make others do manual labor to do something I'm sure can be automated one way or another. So is there any nice gimmick, trick or just something I missed that could help?
Thanks for the help. :)
but how do you deal with for instance a trigger that's way over 8000
characters long and can't be executed as dynamic SQL?
It can be executed using sp_executesql, for which the size of the SQL statement is limited only by available database server memory (the statement is passed as nvarchar(max)).
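A minimal sketch of that, assuming the long trigger definition has been assembled into an nvarchar(max) variable (the trigger and table names are hypothetical):
DECLARE @sql nvarchar(max);

-- Build the full CREATE TRIGGER text; nvarchar(max) is not subject to the 8000-character limit.
SET @sql = N'CREATE TRIGGER dbo.trAudit ON dbo.MyTable AFTER UPDATE AS
BEGIN
    SET NOCOUNT ON;
    /* ...a body that may run well past 8000 characters... */
END';

-- sp_executesql runs the text in its own batch, as CREATE TRIGGER requires.
EXEC sp_executesql @sql;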
You need to check whether each object exists, then create it if needed or drop it otherwise.
if object_id(N'your_table_name', 'U') is null
    CREATE TABLE ...
GO
/* add column */
if not exists (select * from sys.columns
               where object_id = object_id('TableName', 'U') and name = 'ColumnName')
    ALTER TABLE TableName
        ADD ColumnName int
GO
/* create a stored procedure as a stub, then ALTER it with the real body */
if object_id(N'ProcedureName', 'P') is null
    EXEC sp_executesql N'CREATE PROCEDURE ProcedureName AS print 1'
GO
ALTER PROCEDURE ProcedureName
AS
    /* your actual code here */
GO
/* and so on */
The object types accepted by the object_id function are listed in the SQL Server documentation.
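Applied to the trigger scenario from the question, the same existence-check idea might look like this sketch (names are hypothetical):
/* drop and re-create a trigger */
if object_id(N'dbo.trAudit', N'TR') is not null
    DROP TRIGGER dbo.trAudit
GO
EXEC sp_executesql N'CREATE TRIGGER dbo.trAudit ON dbo.MyTable AFTER UPDATE AS
BEGIN
    SET NOCOUNT ON;
END'
GO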

Using table just after creating it: object does not exist

I have a script in T-SQL that goes like this:
create table TableName (...)
SET IDENTITY_INSERT TableName ON
And on second line I get error:
Cannot find the object "TableName" because it does not exist or you do not have permissions.
I execute it from Management Studio 2005. When I put "GO" between these two lines, it works. But what I would like to accomplish is not to use "GO" because I would like to place this code in my application when it is finished.
So my question is how to make this work without using "GO" so that I can run it programmatically from my C# application.
Without using GO, programmatically, you would need to make 2 separate database calls.
Run the two scripts one after the other - using two calls from your application.
You should only run the second once the first has successfully run anyway, so you could run the first script and on success run the second script. The table has to have been created before you can use it, which is why you need the GO in management studio.
From the BOL: "SQL Server utilities interpret GO as a signal that they should send the current batch of Transact-SQL statements to SQL Server". Therefore, as Jose Basilio already pointed out, you have to make separate database calls.
If this can help: I was faced with the same problem, and I had to write a little (very basic) parser to split every single script into a bunch of mini-scripts, which are sent one at a time to the database.
Something even better than tpdi's temp table is a table variable. They run lightning fast and are dropped automatically once out of scope.
This is how you make one:
declare @TableName table (ColumnName int, ColumnName2 nvarchar(50))
Then to insert you just do this:
insert into @TableName (ColumnName, ColumnName2)
select 1, 'A'
Consider writing a stored proc that creates a temporary table and does whatever it needs to with that. If you create a real table, your app won't be able to run the script more than once, unless it also drops the table -- in which case, you have exactly the functionality of a temp table.
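A minimal sketch of that idea (the procedure, table, and column names are invented for illustration):
CREATE PROCEDURE dbo.usp_DoWork
AS
BEGIN
    -- The temp table lives only for the duration of this procedure,
    -- so the script can be run any number of times.
    CREATE TABLE #Work (Id int, Name nvarchar(50));

    INSERT INTO #Work (Id, Name) VALUES (1, N'A');

    SELECT * FROM #Work;
END;  -- #Work is dropped automatically when the procedure ends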

How to see the values of a table variable at debug time in T-SQL?

Can we see the values (rows and cells) in a table valued variable in SQL Server Management Studio (SSMS) during debug time? If yes, how?
DECLARE @v XML = (SELECT * FROM <tablename> FOR XML AUTO)
Insert the above statement at the point where you want to view the table's contents. The contents will be rendered as XML in the Locals window, or you can add @v to the Watch window.
That's not yet implemented, according to this Microsoft Connect item.
This project https://github.com/FilipDeVos/sp_select has a stored procedure sp_select which allows for selecting from a temp table.
Usage:
exec sp_select 'tempDb..#myTempTable'
While debugging a stored procedure you can open a new tab and run this command to see the contents of the temp table.
In the Stored Procedure create a global temporary table ##temptable and write an insert query within your stored procedure which inserts the data in your table into this temporary table.
Once this is done you can check the content of the temporary table by opening a new query window.
Just use "select * from ##temptable"
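A minimal sketch of that step inside the procedure (assuming a table variable named @TableVariable; all names here are hypothetical):
-- Inside the stored procedure, copy the table variable out for inspection:
IF OBJECT_ID('tempdb..##temptable') IS NOT NULL
    DROP TABLE ##temptable;
SELECT * INTO ##temptable FROM @TableVariable;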
If you are using SQL Server 2016 or newer, you can also select it as a JSON result and display it in a JSON visualizer; it's much easier to read than XML and allows you to filter results.
DECLARE @v nvarchar(max) = (SELECT * FROM Suppliers FOR JSON AUTO)
I have come to the conclusion that this is not possible without any plugins.
SQL Server Profiler 2014 lists the content of a table-valued parameter. This might work in previous versions too.
Enable the SP:Starting or RPC:Completed event in the Stored Procedures group and the TextData column, and when you click on an entry in the log you'll have the insert statements for the table variable.
You can then copy the text and run in Management Studio.
Sample output:
declare @p1 dbo.TableType
insert into @p1 values(N'A',N'B')
insert into @p1 values(N'C',N'D')
exec uspWhatever @PARAM=@p1
Why not just select the table variable and view it that way?
SELECT * FROM @d
Sorry guys, I'm a little late to the party but for anyone that stumbles across this question at a later date, I've found the easiest way to do this in a stored procedure is to:
Create a new query with any procedure parameters declared and initialised at the top.
Paste in the body of your procedure.
Add a good old fashioned select query immediately after your table variable is initialised with data.
If 3. is not the last statement in the procedure, set a breakpoint on the same line, start debugging and continue straight to your breakpoint.
Profit!!
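Put together, the scaffold might look like the following sketch (the parameter and table names are invented):
-- 1. Procedure parameters, declared and initialised at the top
DECLARE @CustomerId int = 42;

-- 2. The pasted procedure body, which fills a table variable
DECLARE @Results table (Id int, Total money);
INSERT INTO @Results (Id, Total) VALUES (@CustomerId, 10.00);

-- 3. Inspection query added right after the table variable is filled
SELECT * FROM @Results;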
messi19's answer should be the accepted one IMHO, since it is simpler than mine and does the job most of the time, but if you're like me and have a table variable inside a loop that you want to inspect, this does the job nicely without too much effort or external SSMS plugins.