Regex to split SQL script but ignore splitting GO under commented script - sql

I'm trying to parse a fairly big SQL script containing commands such as CREATE FUNCTION and CREATE PROCEDURE statements. I want to split the file into strings whenever I find a GO statement (I want to execute it using plain ADO.NET instead of SMO).
The problem is that I haven't found a suitable regular expression for that so far. A simple case-insensitive \bGO\b will split it, but it will also split on every GO inside a comment like
/*****************************\
sql statement 1
GO
sql statement 2
GO
\****************************/
My requirement is: do not split the script on a GO statement that appears inside a comment. Suppose my entire script is as below:
sql statement 1
GO
/*****************************\
sql statement 2
GO
sql statement 3
GO
\****************************/
sql statement 4
The expected output should be:
First command :
sql statement 1
Second command :
/*****************************\
sql statement 2
GO
sql statement 3
GO
\****************************/
sql statement 4
Any ideas on this?
Thanks in advance. :)

You can remove all the comments first and then split by GO:
/\/\**\\[^\\]*?\\\**\// # matches the whole comment
\/\**\\ # matches /*****\
[^\\]*? # any text within the comment (lazy, no backslashes)
\\\**\/ # matches \*****/
Remove the above matches, then split the result by GO.
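A quick sketch of this approach in Python (the question is about .NET, but the regex idea carries over; note that this removes the comment text itself rather than keeping it in the output, as the question's expected output would):

```python
import re

script = """sql statement 1
GO
/*****************************\\
sql statement 2
GO
sql statement 3
GO
\\****************************/
sql statement 4"""

# Strip comments of the form /****\ ... \****/ first ...
no_comments = re.sub(r'/\*+\\[^\\]*?\\\*+/', '', script)

# ... then split on lines containing only GO, ignoring case
batches = [b.strip() for b in re.split(r'(?im)^\s*GO\s*$', no_comments)
           if b.strip()]
# batches -> ['sql statement 1', 'sql statement 4']
```

Splitting on whole GO lines (rather than a bare \bGO\b) also avoids false hits on words like GOTO inside statements.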

Related

SQL - Remove trailing " where it exists in column

I have a SQL table that will be receiving new data daily. At times, the data in 3 of the 10 columns contains a trailing double quote ("). Is there an easy way to remove that final quote where it exists? Whatever the query or procedure is, I will be running it from either python or vba, depending on where this project goes in the next 2 weeks - but I think if I can get it to work from Microsoft SQL Server Mgmt Studio, I'll be able to modify for either one.
Try:
UPDATE table
SET column = LEFT(column, LEN(column) - 1)
WHERE column LIKE '%"'
(In SQL Server the length function is LEN; LENGTH is the spelling used by MySQL and others.)
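Since the question mentions driving this from Python, here is a sanity check of the same UPDATE against an in-memory SQLite database (SQLite spells the functions SUBSTR and LENGTH where SQL Server uses LEFT and LEN, but the logic is identical):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (col TEXT)")
con.executemany("INSERT INTO t VALUES (?)", [('abc"',), ("def",)])

# Trim the last character only on rows that end with a double quote
con.execute("""UPDATE t SET col = SUBSTR(col, 1, LENGTH(col) - 1)
               WHERE col LIKE '%"'""")

rows = [r[0] for r in con.execute("SELECT col FROM t ORDER BY col")]
# rows -> ['abc', 'def']
```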

Execute SQL step in pentaho

I have created a transformation which includes a Table Input step, an Execute SQL step and an Excel Output step.
Table Input --> runs a query and returns the field "query", which contains the SQL query select * from dual
Execute SQL step --> dynamically passes that query field using '?' with variable substitution enabled
Excel Output --> the expectation is that the SQL query is executed and its result lands in the Excel output
But I can't get the fields from the Execute SQL step. How can I do this?
Thanks
Kavitha S
Use the Database Join step instead of the Execute SQL step. The Database Join step allows you to run a query against a database using data obtained from previous steps.
Database Join input: you can pass any data you want from the previous step using the ? notation in the SQL query defined inside the step.
Database Join output: it executes the parameterized SQL query and adds the returned fields to the output row.
This step is what you need for your 2nd step. See more info about the Database Join step in the documentation.
In PDI, the Execute SQL step is not meant for generating rows; it will not add any extra rows to the data stream. The Table Input step is what generates rows.
What you can try as an alternative is to break the transformation into two parts.
Part 1: Table Input step > (query rows are generated) >> use "Set variables" or "Copy rows to result" to set the query into a variable, e.g. query.
Part 2: Take another Table Input step (in a next .ktr file) and use variable substitution of ${query} >> finally write the result set to the Excel output.
For dynamic SQL queries, you can read this blog.
In case you have lookups to do with the generated query, you can use the Dynamic SQL row step to generate the rows.
Hope it helps :)

From unix to Sql

I am writing a shell script that reads inputs from one file. The file contains the data
123
1234
121
I am reading the inputs from this file using a while read line loop and putting all the inputs into SQL statements. In my shell script I then go to the SQL prompt and run some queries. In one condition, I am using an EXECUTE IMMEDIATE statement in SQL,
as
EXECUTE IMMEDIATE 'CREATE TABLE BKP_ACDAGENT4 as SELECT * FROM BKP_ACDAGENT WHERE DATASOURCEAGENTID IN ('123','1234','121')';
I want this to execute, but somehow it's not working.
Can anyone help me in executing it?
You need to escape the single quotes used around the values in your IN list; that is, the single quotes in
WHERE DATASOURCEAGENTID IN ('123','1234','121')';
are causing the issue here. Escape each of those single quotes by doubling it:
EXECUTE IMMEDIATE 'CREATE TABLE BKP_ACDAGENT4 as SELECT * FROM BKP_ACDAGENT WHERE DATASOURCEAGENTID IN (''123'',''1234'',''121'')';
The above will work on all Oracle versions.
If you're on Oracle 10g or above, you can use the q quoting syntax instead:
EXECUTE IMMEDIATE q'[CREATE TABLE BKP_ACDAGENT4 as SELECT * FROM BKP_ACDAGENT WHERE DATASOURCEAGENTID IN ('123','1234','121')]';
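The doubling rule is easy to apply mechanically. A small Python sketch (illustrative only, not part of the original shell script) that doubles the quotes before wrapping the statement for EXECUTE IMMEDIATE:

```python
# The statement as it should finally reach Oracle
inner = ("CREATE TABLE BKP_ACDAGENT4 as SELECT * FROM BKP_ACDAGENT "
         "WHERE DATASOURCEAGENTID IN ('123','1234','121')")

# Inside an Oracle string literal, each single quote must be doubled
stmt = "EXECUTE IMMEDIATE '" + inner.replace("'", "''") + "';"
# stmt now contains: ... IN (''123'',''1234'',''121'')';
```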

Removing unwanted SQL queries based on a condition

I have no experience with SQL queries or SQL databases, so please excuse me if my terminology is wrong.
I have a file containing around 17,000 SQL INSERT statements that enter data for 5 columns/attributes in a database. Of those 17,000 statements, only around 1,200 have data for all 5 columns/attributes, while the rest have data for only 4 columns. I need to delete all the unwanted statements (the ones that don't have data for all 5 columns).
Is there a simple way/process to do this other than going through one by one and deleting? If so, it would be great if someone could help me out with it.
A different approach from my fine colleagues here would be to load the file into a staging/disposable database. Use the DELETE that @Rob called out in his response to pare the table down to the desired dataset. Then use an excellent, free tool like SSMS Tools Pack to reverse-engineer those insert statements.
I can think of two approaches:
1: Using SQL: insert all the data and then run a query that removes any record that does not have all of the necessary data. If the table is not currently empty, keep track of the ID where your current data "ends" so that your query can use it in a WHERE clause.
DELETE FROM myTable WHERE a IS NULL OR b IS NULL /* etc. */
2: Process the SQL file with a regular expression: use a text editor or the command line to match either "bad" records or "good" records. Most text editors have a find-and-replace that accepts regular expressions; on the command line you can use grep or other tools. Or even a script that parses the file in your language of choice, for that matter.
Open the file in Notepad++ and replace all "bad" lines using regular expressions.
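The script route can be sketched in a few lines of Python. This is a naive sketch: it assumes one INSERT per line and that the values themselves contain no commas or parentheses; adjust the regex if your data is messier.

```python
import re

def has_five_values(stmt):
    """True if the INSERT statement supplies exactly five values.
    Naive: assumes the values contain no commas or parentheses."""
    m = re.search(r'VALUES\s*\((.*)\)', stmt, re.IGNORECASE)
    return bool(m) and len(m.group(1).split(",")) == 5

def keep_complete(lines):
    """Keep only the statements that fill all five columns."""
    return [line for line in lines if has_five_values(line)]
```

Running keep_complete over the 17,000 lines leaves only the ~1,200 complete statements, which can then be written back out to a new file.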

tsql : outputting each record to their own text file

Is there a simple way to output each record in a SELECT statement to its own file?
For example, if you have this T-SQL query in SQL Server 2005,
select top 10 items, names + ':' + address from book
you would end up with 10 text files, each containing one name and address.
Is there a way to do this without writing an extensive spWriteStringToFile procedure? I'm hoping there is some kind of output setting or something in the SELECT statement.
Thanks in advance
SQL Server returns the result set first; there's no opportunity to write records to specific files until afterwards.
Being SQL Server 2005, it's possible you could use a SQLCLR (.NET 2.0 code) function in a SQL statement without having to make a separate application.
In SSMS you can send results to a file, but that wouldn't split each record out into its own file. I'm pretty sure you cannot do this out of the box, so it sounds like you will be rolling your own solution.
You'd typically do this in some client, be it Java, VBA or SSIS.
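As a sketch of the client-side route: once the rows are fetched (hard-coded here; a real client would get them via ADO.NET, pyodbc, etc.), writing one file per record is trivial:

```python
import os
import tempfile

# Stand-in for the fetched result set of "select ... names + ':' + address"
rows = ["Alice:1 Main St", "Bob:2 Oak Ave"]

out_dir = tempfile.mkdtemp()
for i, record in enumerate(rows, start=1):
    # One file per record: record_1.txt, record_2.txt, ...
    with open(os.path.join(out_dir, "record_%d.txt" % i), "w") as f:
        f.write(record + "\n")
```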