SQL batch variable

I need to automate a query so I can run it on all servers on our network. The thing is, more servers are added constantly, so I need to keep it dynamic. In a table I have a list of the currently active servers, but each one appears several times because each has different data. Also, the table stores only a number, while the server name follows a specific format. I did this to solve it:
select distinct ('swp0'+ cast(rtl_loc_id as nvarchar(4000)) +'r01')
from basename..inv_valid_destinations
I wrote this to an output file, and now I want to use it as input to sqlcmd: each server name (each line of the previous output) should be used as the -S argument. I have tried different ways of doing this, to no avail. It should be something like this:
SQLCMD -Sswp0241r01 -Uswpos -isalto_folio.sql -osalto_folio.txt
As I said, more servers appear constantly, and we need to run a query on all the servers active at the time and produce an output file. Could you help me out?

If you really want to do this in batch you can use a for /F loop, where yourfile.txt is the file containing the list of servers (one per line). Note that the server name is appended to the output file name; otherwise each pass of the loop would overwrite the previous server's results. Use %A instead of %%A if you type the command directly at the prompt rather than running it from a .bat file.
for /F "tokens=*" %%A in (yourfile.txt) do (
    SQLCMD -S%%A -Uswpos -isalto_folio.sql -osalto_folio_%%A.txt
)

Related

How do I repeatedly run a Hive query using each line of a multi-line input as the parameter?

Using Hue, I've got a Hive query that will take an input (e.g. an ID number) and return a record based on it. I need to handle multiple numbers to look up in one go (in serial or parallel) and collate the results (i.e. list the records for each, one after the other), so the input might be:
1234567890
45345353
32423422
1323122
etc...
I've got access to Hue (which I'm supposed to use), Hive, Oozie and Beeline. How do I:
1) extract the number from each line
2) repeatedly call my HiveQL query, passing in each number in turn
3) supply the total output to the user in one go
I don't know Python if that's relevant but could attempt a shell script.
I'm guessing one way might be to get the multi-line user input via Oozie (can it prompt a user for input?), then pass that to a shell script which extracts the number from each line and uses beeline to repeatedly run my Hive query with the next number as the parameter?
Thanks
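For what it's worth, here is a rough shell sketch of the beeline approach described above. It assumes a hypothetical ids.txt with one ID per line, a query file lookup.hql that reads the ID as ${hivevar:id}, and a JDBC URL for your HiveServer2 instance, so adjust all of those to your environment.
# run the query once per ID from ids.txt, appending every result
# to a single combined output file
while read id; do
    beeline -u "jdbc:hive2://your-hiveserver:10000/default" \
            --hivevar id="$id" \
            -f lookup.hql >> all_results.txt
done < ids.txt
Inside lookup.hql the ID would then be referenced as ${hivevar:id} in the WHERE clause.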

Multiple outputs from single SP - SQL Server 2008

I've been testing multiple theories but having issues. I've created an SP and a BAT command to export a file via BCP and send it to a third party. Normally I would BCP it within the SP, but due to server-to-folder connectivity, I'm trying to handle the %OUTFILE% within the BAT (if I'm overcomplicating it, let me know).
I can't post the entire code, so I'll replace it with pseudocode.
CREATE PROCEDURE
{{{populates a temp table}}}
SELECT {requirements} FROM #table;
SELECT {requirements2} FROM #table;
SELECT {requirements3} FROM #table;
END
Now this works in live form, just fine.
The BAT file I sent the client is
SET hourVAR
SET OUTFILE="{FileDirectory}"
bcp "exec SPICreated" queryout %OUTFILE% params
Normally I would do this either within a multiple-step job (I can't set up a job for them, though), or I would make the BAT file include the entire BCP "SELECT ... FROM" statement, but the select is ~30 columns long, and because of the 3rd-party vendor I'm trying to keep all the 'bulk' in the SP.
Can anyone provide insight on how I may better do this? If I assign a variable to the "SELECT" portion, can I call it from the BAT file? SQL Server is not my forte.
(Trying not to create 3 duplicated SPs, and trying to avoid a ~100 line BAT file.)
--- For those wondering: running the SP with all 3 SELECTs caused it to break in compilation.
Additional Info: This is all from the same table, but I need 3 different data sets in 3 different documents.
Data Resembles:
1|2|3|4|5
A|B|C|D|E
Z|X|Y|V|C
AA|BB|3|D|5
I need Document One to be
1|2|3
A|B|C
Z|X|Y
AA|BB|D
I need document Two to be
1|5
A|E
Z|C
AA|3
I need document Three to be
1|3
A|D
Z|V
AA|D
EDIT: Added data examples to help with the query. The queries already work for getting the data within a view, but not for BCP.

Write results of SQL query to multiple files based on field value

My team uses a query that generates a text file over 500MB in size.
The query is executed from a Korn Shell script on an AIX server connecting to DB2.
The results are ordered and grouped by a specific field.
My question: Is it possible, using SQL, to write all rows with this specific field value to its own text file?
For example: All rows with field VENDORID = 1 would go to 1.txt, VENDORID = 2 to 2.txt, etc.
The field in question currently has 1000+ different values, so I would expect the same number of text files.
Here is an alternative approach that gets each file directly from the database.
You can use the DB2 export command to generate each file. Something like this should be able to create one file:
db2 export to 1.txt of DEL select * from table where vendorid = 1
I would use a shell script or something like Perl to automate the execution of such a command for each value.
Depending on how fancy you want to get, you could just hard-code the set of vendorid values, or you could first get the list of distinct vendorids from the table and use that.
This method might scale a bit better than extracting one huge text file first.
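As a rough illustration of that loop, here is a ksh sketch. The table name mytable is a placeholder, and it assumes a db2 connection has already been established in the script:
# fetch the distinct vendor ids once (-x suppresses column headers),
# then run one export per id, writing each group to <vendorid>.txt
for vendorid in $(db2 -x "select distinct vendorid from mytable"); do
    db2 "export to ${vendorid}.txt of del select * from mytable where vendorid = ${vendorid}"
done
With 1000+ distinct values this runs 1000+ separate exports, so it trades one huge extract for many small ones, as described above.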

While taking a table backup using the mysqldump command, can we skip a particular column?

I need to know: is there any option to skip a particular column and back up the rest of the table using the mysqldump command? If yes, please let me know.
I wanted to move a table from one host to another but only include some of the columns and replace others with dummy data (like password columns). So I made a shell script that makes it possible to run a SELECT query and get INSERT statements as the result.
You can find the script here: https://gist.github.com/1239299
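If you'd rather not depend on an external script, a minimal sketch of the same idea (not the linked gist) is to copy the wanted columns into a scratch table, dump that, and drop it again. The database, table and column names below (mydb, users, id, name, email, password) are placeholders, and credentials are assumed to come from your .my.cnf:
# copy everything except the real password values into a scratch table,
# dump the scratch table, then clean it up
mysql mydb -e "CREATE TABLE users_export AS SELECT id, name, email, 'xxx' AS password FROM users;"
mysqldump mydb users_export > users_export.sql
mysql mydb -e "DROP TABLE users_export;"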

Unable to update the table of SQL Server with BCP utility

We have a database table that we pre-populate with data as part of our deployment procedure. Since one of the columns is binary (it's a binary serialized object) we use BCP to copy the data into the table.
So far this has worked very well, however, today we tried this technique on a Windows Server 2008 machine for the first time and noticed that not all of the columns were being updated. Out of the 31 rows that are normally inserted as part of this operation, only 2 rows actually had their binary columns populated correctly. The other 29 rows simply had null values for their binary column. This is the first situation where we've seen an issue like this and this is the same .dat file that we use for all of our deployments.
Has anyone else ever encountered this issue before or have any insight as to what the issue could be?
Thanks in advance,
Jeremy
My guess is that you're using -c or -w to dump as text, and it's choking on a particular combination of characters it doesn't like and subbing in a NULL. This can also happen in Native mode if there's no format file. Try the following and see if it helps. (Obviously, you'll need to add the server and login switches yourself.)
bcp MyDatabase.dbo.MyTable format nul -f MyTable.fmt -n
bcp MyDatabase.dbo.MyTable out MyTable.dat -f MyTable.fmt -k -E -b 1000 -h "TABLOCK"
This'll dump the table data as straight binary with a format file, NULLs, and identity values to make absolutely sure everything lines up. In addition, it'll use batches of 1000 to optimize the data dump. Then, to insert it back:
bcp MySecondData.dbo.MyTable in MyTable.dat -f MyTable.fmt -n -b 1000
...which will use the format file, data file, and set batching to increase the speed a little. If you need more speed than that, you'll want to look at BULK INSERT, FirstRow/LastRow, and loading in parallel, but that's a bit beyond the scope of this question. :)