Azure Data Factory Script Activity does not like the keyword GO - azure-sql-database

If I create a script, e.g.
print 'hello'
GO
print 'cats'
GO
Then the script errors when I try to run my ADF pipeline:
Operation on target GreetCatsActivity failed: Incorrect syntax near 'GO'.
Is GO not allowed in scripts? The issue is that I need it to run a gigantic, autogenerated script that has tons of GO statements in it. Parts of the script may reference objects created earlier in the script, so I suspect the GO statements are important to ensure those objects are created before they are used later on.
Could I be doing something wrong or is there another way to handle this?

Just thought of sharing this as I think it may be helpful.
You can use a Mapping Data Flow pipeline to replace the "GO".
What I tried: on the source side I added the scripts (files with the extension .sql; I am assuming you must have something similar), like the one shared with "GO" above, and I used a Filter to get rid of all the GO's, and on the sink I wrote the scripts back (without GO) to a blob.
Now I wanted to automate the execution of the commands using the Script activity in ADF.
For executing the queries we will use a Lookup which reads the file we created in the last step, a ForEach loop, and a Script activity inside that.
My lookup output looks something like this:
and so when you are setting up the Script activity, please pass the dynamic value as I did.
HTH
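For context, GO is a batch separator understood by client tools such as SSMS and sqlcmd rather than a T-SQL statement, which is why the Script activity rejects it. If splitting the script is not practical, one alternative (a minimal sketch only; dbo.GreetView is a hypothetical object name) is to wrap statements that must start their own batch, such as CREATE VIEW, in EXEC so they run as nested batches and no GO separator is needed:

-- Sketch only: dbo.GreetView is a hypothetical object name.
PRINT 'hello';

-- CREATE VIEW must be the first statement in a batch; running it through
-- EXEC executes it as its own nested batch, so no GO is required.
EXEC ('CREATE VIEW dbo.GreetView AS SELECT ''cats'' AS Greeting;');

-- The view exists once EXEC returns, so later statements can reference it.
SELECT Greeting FROM dbo.GreetView;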

Related

SSIS: Excel data source - if column not exists use other column

I am using a select statement in the Excel source to import just specific columns of data from Excel.
But I am wondering, is it possible to select the data in such a way that I select, for example, the column named Column_1, but if this column does not exist in the Excel file, it tries to select the column named Column_2 instead? Currently, if Column_1 is missing, the data flow task fails.
Use a Script Task and write .NET code to read the Excel file and then check whether Column_1 is available in the file. If the column is not present, then use Column_2 as input. A Script Component in SSIS can act as a source.
SSIS is metadata-based and will not support dynamic metadata; however, you can use a Script Component as @nitin-raj suggested to handle all known source columns. There is a good post below on how it can be done.
Dynamic File Connections
If you have many such files that can have varying columns, then it is better to create a custom component. However, you cannot have dynamic metadata even with a custom component; the set of columns must be known upfront by SSIS.
If the list of columns keeps changing and you cannot know in advance what the expected columns are, then you are better off handling the entire thing in C#/VB.NET using a Script Task in the control flow.
As a best practice, because SSIS metadata is static, any data quality and formatting issues in source files should be corrected before the SSIS data flow task runs.
I have seen this situation before and there is a very simple fix. At the beginning of your SSIS package, use a File System Task to create a copy of the source Excel file, and then run a C# script or execute a PowerShell script to rename the columns, so that if Column_1 does not exist, it is either added at the appropriate spot in the Excel file or, in case the column name is wrong, it is corrected.
As a result of this, you will not need to refresh your SSIS metadata every time it fails. This is a standard data standardization practice.
The easiest way is to add two Data Flow Tasks, one data flow for each Excel source select statement, and use precedence constraints to execute the second data flow when the first one fails.
The disadvantage of this approach is that if the first data flow task fails for another reason, it will also try to execute the second one. You will need some advanced error handling to check whether the error was thrown due to missing columns or not.
But if I had a similar situation, I would use a Script Task to check if the column exists and build the SQL command dynamically. Note that this SQL command must always return the same metadata (you must use aliases).
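To illustrate the alias point with a minimal sketch (the sheet name Sheet1$ and the output names SourceValue/OtherColumn are made up): whichever column the Script Task finds, the query it builds should expose the same output names, so the downstream metadata never changes. The first query would be used when Column_1 exists, the second when it is missing:

SELECT [Column_1] AS SourceValue, [OtherColumn] FROM [Sheet1$]

SELECT [Column_2] AS SourceValue, [OtherColumn] FROM [Sheet1$]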
Helpful links
Overview of SSIS Precedence Constraints
Working with Precedence Constraints in SQL Server Integration Services
Precedence Constraints

Multiple outputs from single SP - SQL Server 2008

I've been testing multiple theories but having issues. I've created an SP and a BAT command to export a file via BCP and send it to a third party. Normally I would BCP it within the SP, but due to server-to-folder connectivity, I'm trying to handle the %OUTFILE% within the BAT. (If I'm overcomplicating it, let me know.)
I can't post the entire code, so I'll pseudo-replace it.
CREATE PROCEDURE dbo.SPICreated
AS
BEGIN
    {{{populates a temp table}}}
    SELECT {requirements} FROM #table;
    SELECT {requirements2} FROM #table;
    SELECT {requirements3} FROM #table;
END
Now this works in live form, just fine.
The BAT file I sent the client is
SET hourVAR
SET OUTFILE="{FileDirectory}"
bcp "exec SPICreated" queryout %OUTFILE% params
Normally I would do this either within a multiple step job (I can't do a job for them, though) or I would make the BAT file include the entire BCP "SELECT FROM" but the select is ~30 columns long, and due to the 3rd party vendor I'm trying to put all the 'bulk' in the SP.
Can anyone provide insight on how I may better do this? If I assign a variable to the "SELECT" portion, can I call it from the BAT file? SQL Server is not my forte.
(Trying not to create 3 duplicated SPs, and trying to avoid a ~100 line BAT file.)
--- For those wondering, running the SP with all 3 SELECTs caused it to break during compilation.
Additional Info: This is all from the same table, but I need 3 different data sets in 3 different documents.
Data Resembles:
1|2|3|4|5
A|B|C|D|E
Z|X|Y|V|C
AA|BB|3|D|5
I need Document One to be
1|2|3
A|B|C
Z|X|Y
AA|BB|3
I need document Two to be
1|5
A|E
Z|C
AA|5
I need document Three to be
1|4
A|D
Z|V
AA|D
EDIT: Added data examples to assist query. Queries already work to get the data within a view, but not for BCP
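One way to keep the bulk in the SP while still letting BCP export each document separately (bcp queryout expects a single result set per call) is to give the procedure a parameter that picks which column set to return, and then call it three times from the BAT file. A rough sketch only; the @DocType parameter and the Col1..Col5 column names are hypothetical, and the placeholders mirror the pseudo-code above:

CREATE PROCEDURE dbo.SPICreated
    @DocType tinyint   -- 1, 2 or 3: which document's column set to return
AS
BEGIN
    SET NOCOUNT ON;    -- suppresses extra "rows affected" messages in the bcp output

    {{{populates a temp table}}}

    IF @DocType = 1
        SELECT Col1, Col2, Col3 FROM #table;
    ELSE IF @DocType = 2
        SELECT Col1, Col5 FROM #table;
    ELSE
        SELECT Col1, Col4 FROM #table;
END

-- The BAT file would then run bcp once per document, e.g.
-- bcp "exec dbo.SPICreated 1" queryout %OUTFILE% params

The trade-off is a single procedure instead of three near-duplicates, at the cost of one extra parameter per bcp call.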

SSIS Script Component - only to change variables

I have a series of tasks that are very similar:
SELECT a,b FROM c
Lookup in another table and change the value in column b.
Save the new value back to c and, if there is no match, send the result on to an error table.
That part is pretty straightforward and illustrated here:
Source ==> Lookup =match=> SQL Update command
=No match=> SQL Save Error command
(Hope you understand what I mean - but it works!)
I now have to repeat this a number of times, where my source SQL changes. So what I want to do is insert a Script Component in front of the Source and set my User::Sql variable like:
Variables.Sql = "SELECT d, e FROM f"
All of the above is contained in a Data Flow. When I have created one, I can then copy it, change only the Sql variable in the script, and then it should all work.
My problem is: when I insert the Script Component, it asks me whether it is a Source, Destination or Transformation script. And by only setting the variable, it does not produce any rows for output and cannot connect to my Source.
Anyone know how to make that work?
(I have simplified the above. I actually want to update multiple variables and use those in my Source, Lookup and Error update as well - therefore it is not more simple just to change the SQL script in the initial Source! But being able to do the above, I will be able to achieve what I want :-))
You should set your variable containing the SQL query in the control flow, before you execute the data flow.
Then you need to use that variable as an expression in your data flow. You can parametrize the query used in the Lookup or any other parameters of your data flow.
If your data flows really always have the same structure, you could even generate a list of queries and call your data flow task in a loop, avoiding duplication of the same tasks.
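For example (a sketch built from the queries in the question; the alias names KeyColumn and ValueColumn are made up), each iteration of the loop would only swap the value of User::Sql, while the aliases keep the data flow's metadata identical:

-- Iteration 1:
SELECT a AS KeyColumn, b AS ValueColumn FROM c

-- Iteration 2:
SELECT d AS KeyColumn, e AS ValueColumn FROM f

Because both queries expose the same output names and types, the Lookup and the update/error commands downstream do not need to change between iterations.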

create function sql server 2005 disable errors

I'm trying to write a *.bat file which runs all SQL scripts in a given folder (every file in this folder contains a CREATE FUNCTION script):
for /r "%~dp0\Production\Functions" %%X in (*.sql) do (
sqlcmd -S%1 -d%2 -b -i "%%X"
)
But some functions in the folder depend on others, so I get an "Invalid object name" error. Is there a way to disable this error?
Rename your files so that they're listed in the correct order of precedence. So, for example, if FuncA.sql uses FuncB.sql, then rename the files as 001-FuncB.sql, 002-FuncA.sql.
It is not possible to disable errors generated by SQL when you run (what I think of as) code-based objects: stored procedures, functions, views, triggers, and anything else that has to be the sole object of a batch submitted to SQL.
It is also awkward at best to work around this problem. Some options:
One way, as Joe Stefanelli recommends, is to name your files such that they get executed in proper order (by name, or perhaps by date created or something more esoteric).
Another way is to group related functions in single scripts, such that referenced objects must be created before referencing objects.
Or combine the above two, putting all your dependent objects in one script you can guarantee will always run first. Not so useful if you have nested references.
A last (and more kludgy) way is to iterate over your scripts several times (assuming your "create" script will properly deal with an object that already exists), until a given pass raises no errors.
For development purposes, we store code-based objects in individual files, but when it comes time to wrap the code up for push to Production systems, I glom the files together, test it, and shuffle the contents around and retest until no more errors are generated.
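To make the multi-pass option workable, each file has to cope with the object already existing from an earlier pass. A minimal sketch of such a re-runnable script for SQL Server 2005 (the FuncA/FuncB names follow the earlier answer; the function bodies are made up):

IF OBJECT_ID('dbo.FuncA', 'FN') IS NOT NULL
    DROP FUNCTION dbo.FuncA;
GO
CREATE FUNCTION dbo.FuncA (@x int)
RETURNS int
AS
BEGIN
    -- depends on dbo.FuncB, which lives in another file
    RETURN dbo.FuncB(@x) + 1;
END
GO

On the first pass this file fails while dbo.FuncB does not exist yet; on a later pass, once FuncB.sql has run, it succeeds, so repeating the loop until a pass raises no errors eventually creates everything.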

Mysql: How to call sql script file from other sql script file?

Suppose I have written a script Table_ABC.sql which creates table ABC, and I have created many such scripts, one for each of the required tables. Now I want to write a script that calls all of these script files in sequence; basically I want another script file, createTables.sql. MySQL provides an option to execute a script file from the "mysql" shell application, but I could not find a command like exec c:/myscripts/mytable.sql. Please tell me if there is any command that can be written in an SQL script itself to call another one in the latest MySQL versions, or an alternative for the same.
Thanks
You can use the source command. So your script will be something like:
use your_db;
source script/s1.sql;
source script/s2.sql;
-- so on, so forth
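Applied to the folder from the question, createTables.sql is just another script that sources each per-table file (Table_ABC.sql and c:/myscripts are from the question; any further file names are placeholders):

-- c:/myscripts/createTables.sql
use your_db;
source c:/myscripts/Table_ABC.sql;
-- ...one source line per table script, in dependency order

and the wrapper itself can then be run the same way from an interactive mysql session:

source c:/myscripts/createTables.sql;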