BCP output file not getting generated - bcp

I am running the query below in SSMS 2012:
exec master.dbo.xp_cmdshell 'bcp uctconfiguration.dbo.requirement out D:\requirement.txt -w -T -S "servername"'
Below is the log
NULL
Starting copy...
SQLState = S1000, NativeError = 0
Error = [Microsoft][SQL Server Native Client 11.0]Warning: BCP import with a format file will convert empty strings in delimited columns to NULL.
1000 rows successfully bulk-copied to host-file. Total received: 1000
1000 rows successfully bulk-copied to host-file. Total received: 2000
1000 rows successfully bulk-copied to host-file. Total received: 3000
NULL
3148 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) Total : 297   Average : (10599.33 rows per sec.)
NULL
The data is getting copied, but when I go to the location specified in the BCP command, I am not able to find the output file.

When running commands through xp_cmdshell, paths are relative to the server, not the client: the command executes on the SQL Server machine, so D:\requirement.txt is created on the server's D: drive, not on the workstation where SSMS is running.
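One quick way to confirm this, and a possible workaround, still using xp_cmdshell (the UNC path below is only a placeholder, and the SQL Server service account would need write access to whatever share you point it at):

-- List the file on the server's own D: drive; it should be there
exec master.dbo.xp_cmdshell 'dir D:\requirement.txt'

-- Or export straight to a share that the client machine can also reach
exec master.dbo.xp_cmdshell 'bcp uctconfiguration.dbo.requirement out \\yourworkstation\share\requirement.txt -w -T -S "servername"'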


integer expression expected shell scripting - BASH

I am trying to get the LAG between the Primary & Standby database using the shell script below. The query works fine, returning "DATABASE IS OUTOFSYNC" or "DATABASE IS INSYNC" for an instance that has one node, which returns a single value, but I get the error "[: 0 1: integer expression expected" for an instance that has two nodes, which returns two values for the LAG, one for the first node and one for the second node.
So here is the code:
#!/bin/bash
get_status=$(sqlplus -s "/as sysdba" <<EOF
set pagesize 0 feedback off verify off heading off echo off;
SELECT prim.seq - tgt.seq seq_gap
FROM
(
SELECT thread#, MAX(sequence#) seq, MAX(completion_time) tm
FROM
v\$archived_log
GROUP BY
thread#
)
prim,
(
SELECT thread#, MAX(sequence#) seq, MAX(completion_time) tm
FROM
v\$archived_log
WHERE
dest_id IN
(
SELECT
dest_id
FROM
v\$archive_dest
WHERE
target = 'STANDBY'
)
AND
applied = 'YES'
GROUP BY
thread#
)
tgt
WHERE
prim.thread# = tgt.thread#;
exit;
EOF
)
if [ "$get_status" -ge 5 ]; then
echo "DATABASE IS OUTOFSYNC"
else
echo "DATABASE IS INSYNC"
fi
Is there a better way to write this script?
After adding typeset -p get_status after the query and before the if statement, I get the results below:
declare -- get_status=" 1
0"
./dgtest2.sh: line 41: [: 1
0: integer expression expected
DATABASE IS INSYNC
The query is returning more than one value/string (for two nodes or threads), as shown in the screenshot, and it seems my script is only coded to handle a single value/string returned by the query.
Is there a way to modify the script to handle multiple values/strings generated by the query?
The logic should be: if the values returned are -ge 5, report "DATABASE IS OUTOFSYNC"; if all the values returned are -lt 5, report "DATABASE IS INSYNC".
Logic written for just one value -lt 5 and one value -ge 5 would not suffice, as the values constantly change on the database.
Any value from 0 to 4 that the database returns, from either node, should report "DATABASE IS INSYNC", and any value of 5 or above, from either node, should report "DATABASE IS OUTOFSYNC".
One idea would be to capture the status values (returned by the sqlplus script) into an array and then loop through the array testing said status values.
Instead of:
variable=$(sqlplus ...)
We want:
variable=( $(sqlplus ...) )
For OP's current scripting, with a name change for the variable, we will replace this:
get_status=$(sqlplus -s "/as sysdba" <<EOF
set pagesize 0 feedback off verify off heading off echo off;
SELECT prim.seq - tgt.seq seq_gap
...
exit;
EOF
)
With this:
status_array=( $(sqlplus -s "/as sysdba" <<EOF
set pagesize 0 feedback off verify off heading off echo off;
SELECT prim.seq - tgt.seq seq_gap
...
exit;
EOF
) )
One idea for the follow-on logic testing:
default database status is INSYNC
if any status values are -ge 5 then set database status to OUTOFSYNC
The code for this looks like:
db_status='INSYNC'
for status in "${status_array[@]}"
do
[[ "${status}" -ge 5 ]] && db_status='OUTOFSYNC' && break
done
echo "DATABASE IS ${db_status}"
I'm not set up to run the sqlplus script, but I should be able to simulate the results with the following array assignments:
status_array=(1)
status_array=(7)
status_array=(0 1)
status_array=(5 7)
status_array=(5 3)
Running our code for each of these array assignments gives us:
##################### status_array=(1)
DATABASE IS INSYNC
##################### status_array=(7)
DATABASE IS OUTOFSYNC
##################### status_array=(0 1)
DATABASE IS INSYNC
##################### status_array=(5 7)
DATABASE IS OUTOFSYNC
##################### status_array=(5 3)
DATABASE IS OUTOFSYNC
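If you want to reproduce those runs yourself, a small throwaway harness like this (purely illustrative) loops over the same test arrays:

#!/bin/bash
# Re-run the status check for each simulated sqlplus result set.
check_status() {
    local db_status='INSYNC'
    local status
    for status in "$@"
    do
        [[ "${status}" -ge 5 ]] && db_status='OUTOFSYNC' && break
    done
    echo "DATABASE IS ${db_status}"
}

for test in "1" "7" "0 1" "5 7" "5 3"
do
    echo "##################### status_array=(${test})"
    check_status ${test}    # left unquoted on purpose so it splits into separate values
done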

SAS YYMMDD10. works but YYMMDDn10 doesn't

This is my data (sorry for no script, it's just proc create table from mssql):
testdb.testtable
id - numeric
date_from - numeric (datetime from mssql)
date_to - numeric (datetime from mssql)
base_id - numeric
base_id2 - string (length 64)
What I tried to do was:
proc sql;
update testdb.testtable tt
set base_id2 = CATX('_',
('data from other table'),
put(datepart(date_from),yymmddn10.),
put(datepart(date_to),yymmddn10.),
put(base_id,z4.)
)
where (....)
;
quit;
And I get this error:
The width value for YYMMDDN is out of bounds. It should be between 2 and 8.
The width value for YYMMDDN is out of bounds. It should be between 2 and 8.
What I really don't understand is that when I use format with separators, YYMMDD10., it works.
When I run:
proc sql;
select datepart(date_from) format=yymmddn10. from testdb.testtable;
quit;
It returns 20191227, which is great. But when I run:
proc sql;
select put(datepart(date_from),yymmddn10.) from testdb.testtable;
quit;
It fails with the same error.
What am I missing?
There seems to be a bug in PROC SQL that allows you to attach a format that cannot work (the maximum width needed for a date without separators is 8 bytes).
It is also interesting that PROC PRINT (and the simple SELECT query in PROC SQL, like in your example) do not mind that the format width is invalid.
542 data test1;
543 now=date();
544 run;
NOTE: The data set WORK.TEST1 has 1 observations and 1 variables.
545
546 data test2 ;
547 set test1;
548 format now yymmddn10.;
----------
29
ERROR 29-185: Width specified for format YYMMDDN is invalid.
549 run;
NOTE: The SAS System stopped processing this step because of errors.
WARNING: The data set WORK.TEST2 may be incomplete. When this step was stopped there were 0 observations
and 1 variables.
WARNING: Data set WORK.TEST2 was not replaced because this step was stopped.
550
551 proc sql;
552 create table test2 as select now format=yymmddn10. from test1;
NOTE: Table WORK.TEST2 created, with 1 rows and 1 columns.
553 select * from test2;
554 quit;
555
556 proc print data=test2;
557 run;
NOTE: There were 1 observations read from the data set WORK.TEST2.
558
559 data test3;
560 set test2;
561 run;
ERROR: Width specified for format YYMMDDN is invalid.
NOTE: The SAS System stopped processing this step because of errors.
WARNING: The data set WORK.TEST3 may be incomplete. When this step was stopped there were 0 observations
and 1 variables.
WARNING: Data set WORK.TEST3 was not replaced because this step was stopped.
Also interesting is that if you use that format specification in PROC FREQ
proc freq data=test2; tables now; run;
it adds a space and a 'F7'x character in front of the data string.
The FREQ Procedure

                                        Cumulative    Cumulative
         now    Frequency     Percent    Frequency       Percent
-----------------------------------------------------------------
   ÷20200218            1      100.00            1        100.00
The number in the format is the width you give it. The YYMMDDN format is at most 8 characters wide, so I should have used YYMMDDN8., and that worked.
I struggled with this for a long time, and I still don't understand why it worked in format= but not in put().
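For completeness, a minimal sketch of the corrected PUT call with the width reduced to 8 (a standalone DATA step example rather than the original UPDATE, so the variable names here are illustrative):

/* YYMMDDN never needs more than 8 characters, so yymmddn8. is wide enough */
data _null_;
    dt = datetime();                                  /* stand-in for date_from */
    str = catx('_', put(datepart(dt), yymmddn8.), put(1, z4.));
    put str=;                                         /* e.g. str=20191227_0001 */
run;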

SQLCMD - SQL Server Job

Suppose below is my query:
DECLARE @INT INT
SET @INT = (SELECT COUNT(1) FROM MyTable)
IF (@INT = 0)
RAISERROR('Fail',16,1)
I want to use this query and create a SQL Server job using "Operating System (CmdExec)" type.
I want the SQL job step to be successful if @INT <> 0 and to fail if @INT = 0.
How can I write the cmd?
My attempt (which doesn't work) is below; the server is different from the one where I will be setting up the job.
sqlcmd -S 123.45.67.890 -E -V15 -d MyDB -Q "DECLARE @INT INT
SET @INT = (SELECT COUNT(1) FROM MyTable)
IF (@INT = 0)
RAISERROR('Fail',16,1)" -b
Job failure message:
Executed as user: LocalSVR\USER. Msg 105, Level 15, State 1, Server RemoteSVR, Line 1 Unclosed quotation mark after the character string 'DECLARE @INT INT '. Process Exit Code 15. The step failed.
Please note, the step fails for the same reason when MyTable has 1+ record(s).
Update: As per Scott's suggestion, I changed the formatting (removed the line breaks) and it seems to work. @Scott, I'm not sure how to mark your comment as the answer; if you write a response, I will mark it as the answer. Cheers!
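For reference, here is roughly what the reformatted single-line command could look like (same placeholder server, database and table names as above; semicolons separate the statements so the whole batch fits on one line):

sqlcmd -S 123.45.67.890 -E -V15 -d MyDB -Q "DECLARE @INT INT; SET @INT = (SELECT COUNT(1) FROM MyTable); IF (@INT = 0) RAISERROR('Fail',16,1)" -b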
You're trying to execute multiple lines as opposed to a single query. You can get round that by using IF NOT EXISTS
SQLCMD -b -S"MyServer" -d"MyDatabase" -Q"IF NOT EXISTS(SELECT * FROM MyTable) RAISERROR('Fail',16,1)"
Note that the -b (on error batch abort) option means that SQLCMD itself will fail, giving you a trappable ERRORLEVEL.
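As a small illustration of that trappable ERRORLEVEL, a hypothetical wrapper batch file (not part of the original answer) could check it explicitly:

REM hypothetical wrapper.cmd, not part of the original answer
sqlcmd -b -S"MyServer" -d"MyDatabase" -Q"IF NOT EXISTS(SELECT * FROM MyTable) RAISERROR('Fail',16,1)"
IF ERRORLEVEL 1 (
    ECHO MyTable is empty - a CmdExec job step running this exits nonzero and is marked as failed
    EXIT /B 1
)
ECHO MyTable has rows - the job step succeeds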

BCP/ Bulk Insert Fails (tab delimited file)

I have been trying to import data (tab delimited) into SQL server. The source data is exported from IBM Cognos. Data can be downloaded from: sample data
I have tried BCP / Bulk Insert, but it did not help. The original datafile contains a header row (which needs to be skipped).
==================================
Schema:
CREATE TABLE [dbo].[DIM_Assessment](
[QueryType] [nvarchar](4000) NULL,
[QueryDate] [nvarchar](4000) NULL,
[APUID] [nvarchar](4000) NULL,
[AssessmentID] [nvarchar](4000) NULL,
[ICDCode] [nvarchar](4000) NULL,
[ICDName] [nvarchar](4000) NULL,
[LoadDate] [nvarchar](4000) NULL
) ON [PRIMARY]
GO
=============================
Format File generated using the following command
bcp [dbname].dbo.dim_assessment format nul -c -f C:\config\dim_assessment.Fmt -S <IP> -U sa -P Pwd
Content of the format file:
11.0
7
1 SQLCHAR 0 8000 "\t" 1 QueryType SQL_Latin1_General_CP1_CI_AS
2 SQLCHAR 0 8000 "\t" 2 QueryDate SQL_Latin1_General_CP1_CI_AS
3 SQLCHAR 0 8000 "\t" 3 APUID SQL_Latin1_General_CP1_CI_AS
4 SQLCHAR 0 8000 "\t" 4 AssessmentID SQL_Latin1_General_CP1_CI_AS
5 SQLCHAR 0 8000 "\t" 5 ICDCode SQL_Latin1_General_CP1_CI_AS
6 SQLCHAR 0 8000 "\t" 6 ICDName SQL_Latin1_General_CP1_CI_AS
7 SQLCHAR 0 8000 "\r\n" 7 LoadDate SQL_Latin1_General_CP1_CI_AS
=============================
I tried importing the data using BCP / BULK INSERT; however, neither of them worked.
bcp [dbname].dbo.dim_assessment IN C:\dim_assessment.dat -f C:\config\dim_assessment.Fmt -S <IP> -U sa -P Pwd
BULK INSERT dim_assessment FROM '\\dbserver\DIM_Assessment.dat'
WITH (
DATAFILETYPE = 'char',
FIELDTERMINATOR = '\t',
ROWTERMINATOR = '\r\n'
);
GO
Thank you in advance for your help.
Your input file is in a terrible format.
Your format file and your BULK INSERT command both state that the end of a row should be a carriage return and line feed combination, and that there are seven columns of data. However if you open your CSV file in Notepad you will quickly see that the carriage returns and line feeds are not observed correctly in Windows (meaning they must be something other than precisely \r\n). You can also see that there aren't actually seven columns of data, but five:
QueryType QueryDate APUID AssessmentID ICDCode ICDName LoadDate
PPIC 2013-11-20 10:23:14 11431 10963 Tremors
PPIC 2013-11-20 10:23:14 11431 11299 THUMB PAIN
PPIC 2013-11-20 10:23:14 11431 11348 Environmental allergies
...
Just looking at it visually you can tell it isn't right, and you need to get a better source file before throwing it over the wall at SQL Server and expecting it to handle it smoothly.
I just saved your file as .CSV and bulk inserted it with the following statement:
BULK INSERT dim_assessment FROM 'C:\Blabla\TestFile.csv'
WITH (
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
);
GO
Returned Message
(22587 row(s) affected)
Loaded Data
Just notice that some data from ICDName has overflowed into the LoadDate column. Use the pipe character | as the delimiter, re-run the same BULK INSERT statement with FIELDTERMINATOR = '|', and happy days.
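For example, assuming the file has been re-exported with | as the delimiter (the path below is just a placeholder), the statement might look like:

BULK INSERT dim_assessment FROM 'C:\Blabla\TestFile_pipe.txt'
WITH (
    FIRSTROW = 2,            -- skip the header row
    FIELDTERMINATOR = '|',   -- pipe-delimited columns
    ROWTERMINATOR = '\n'
);
GO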
Opening the file via Excel shows the following:
There are indeed 7 row headers
Only the first six of them are populated
Columns 1, 2 and 3 hold identical values
There is some confusing data, where the fifth column can be either empty, or filled with numbers, or filled with text.
I guess that, in these conditions, bulk insert might not work properly. As Excel seems to manage your file in quite a clean way, you should think about an extra step, from CSV to Excel and then to your database.
OK, so this was a seemingly simple task: push delimited data from a flat file to SQL Server. I thought BCP was the way to go (I had used it earlier and was successful).
A quick rundown of what was suggested:
a. fix the source file
b. save the source data in native Excel format
c. save the source data as pipe-delimited data
I tried all the options; they added multiple steps to my process, but were doable.
I then stumbled upon the Invoke-Sqlcmd and Import-Csv cmdlets from PowerShell. It turns out I can import the data using PowerShell directly. It is a bit slow at this time, but I can live with that for now.
$DATA=IMPORT-CSV dim_assessment.CSV -Delimiter "`t"
FOREACH ($LINE in $DATA)
{
$QueryType="`'"+$Line.QueryType+"`'"
$QueryDate="`'"+$Line.QueryDate+"`'"
$APUID="`'"+$Line.APUID+"`'"
$AssessmentID="`'"+$Line.AssessmentID+"`'"
$ICDCode="`'"+$Line.ICDCode+"`'"
$ICDName=$Line.ICDName
$ICDName = $ICDName.replace("'","''")
$ICDName="`'"+$ICDName+"`'"
$LoadDate="`'"+$Line.LoadDate+"`'"
$SQLHEADER="INSERT INTO [dim_assessment] ([QueryType],[QueryDate],[APUID],[AssessmentID],[ICDCode],[ICDName],[LoadDate])"
$SQLVALUES="VALUES ($QueryType,$QueryDate,$APUID,$AssessmentID,$ICDCode,$ICDName,$LoadDate)"
$SQLQUERY=$SQLHEADER+$SQLVALUES
Invoke-Sqlcmd -Query $SQLQuery -ServerInstance HA -U sa -P Pwd
}
Thanks for all your help!

Groovy Sql to execute statements in a batch

I am using Groovy Sql to fetch results. This is the output from my Linux box. There are actually two statements involved, sp_configure 'number of open partitions' and go; see below:
%isql -U abc -P abc -S support
1> sp_configure 'number of open partitions'
2> go
Parameter Name                 Default     Memory Used  Config Value  Run Value  Unit    Type
------------------------------ ----------- -----------  ------------  ---------  ------  -------
number of open partitions      500         5201         5000          5000       number  dynamic
(1 row affected)
(return status = 0)
1>
I am using this Groovy code:
def sql = Sql.newInstance("jdbc:abc:sybase://harley:6011;DatabaseName=support;",dbuname,dbpassword,Driver)
sql.eachRow("sp_configure 'number of open partitions'"){ row ->
/*println row.run_value*/
}
Is there a way to execute statements in batch?
I am using Sybase
Not sure if it will work, but you might be able to do:
sql.call("sp_configure 'number of open partitions'")
sql.eachRow("go"){ row ->
...
}
Have not actually tried this [yet] but:
sql.call("sp_configure 'number of open partitions'")
int[] updateCounts = sql.withBatch({
sql.eachRow("go"){ row ->
...
}
})
// check your updateCounts here for errors
Just try this:
sql.eachRow("sp_configure 'number of open partitions'") { row ->
    println row.'Parameter Name'.trim()
}
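For what it's worth, go is an isql batch terminator rather than a SQL statement, so it should not be sent through JDBC at all; the stored procedure call on its own returns the result set. A minimal sketch along those lines, reusing the connection details from the question and assuming the driver exposes the column labels shown in the isql output above:

import groovy.sql.Sql

def sql = Sql.newInstance("jdbc:abc:sybase://harley:6011;DatabaseName=support;", dbuname, dbpassword, Driver)

// sp_configure returns one row per parameter; read the columns by their labels
sql.eachRow("sp_configure 'number of open partitions'") { row ->
    println "${row.'Parameter Name'.trim()} -> run value: ${row.'Run Value'}"
}
sql.close()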