Error testing and control from DOS - error-handling

I'm running DOS 6.0.6002 on a Windows Server Enterprise system, SP2.
SQL Server 2008 R2 (10.50.4000)
I have a main control program in DOS.
I'm invoking an SQL script through sqlcmd.
A simplified version looks like this:
set sqlsvr=myServer
set logfile=logfile.txt
sqlcmd -S %sqlsvr% -d myDB -i import_some_stuff.sql > "%logfile%" 2>&1
echo error level = %ERRORLEVEL%
I need this program to be pretty robust. It has to run every day against a lot of files and tables. If it fails, I need to catch it and notify sysadmin. For now, just catch it.
So to test this, I've tried the following tests:
1) Renaming the file to one that does not exist.
Result: it returns an ERRORLEVEL of 1 (that is, it caught the error). Bravo!
2) Typing some syntactical rubbish at the front of the SQL program.
Result: it prints the error message in the log file, BUT it DOES NOT return an error (so the value in %ERRORLEVEL% is zero). This seems incredible to me. What am I missing?

Try the -b option to sqlcmd:
-b
Specifies that sqlcmd exits and returns a DOS ERRORLEVEL value when an error occurs. The value that is returned to the DOS ERRORLEVEL variable is 1 when the SQL Server error message has a severity level greater than 10; otherwise, the value returned is 0. If the -V option has been set in addition to -b, sqlcmd will not report an error if the severity level is lower than the values set using -V. Command prompt batch files can test the value of ERRORLEVEL and handle the error appropriately. sqlcmd does not report errors for severity level 10 (informational messages).
If the sqlcmd script contains an incorrect comment, syntax error, or is missing a scripting variable, ERRORLEVEL returned is 1.
Here is the documentation
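Applied to the batch file from the question, a minimal sketch looks like this (server, database, and file names are the ones used above):

set sqlsvr=myServer
set logfile=logfile.txt
rem -b makes sqlcmd return ERRORLEVEL 1 for errors with severity greater than 10
sqlcmd -b -S %sqlsvr% -d myDB -i import_some_stuff.sql > "%logfile%" 2>&1
if errorlevel 1 (
    echo import failed, see "%logfile%" - notify sysadmin here
) else (
    echo import succeeded
)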


Declare bash variables inside sql EOF

How do I declare a variable in a bash command here? See the "Doesn't work, why?" lines below.
I thought we could run almost any bash statement by putting ! or host in front of a line:
#!/bin/bash
sqlplus scott/tiger@orcl << EOF
! export v10="Hi" Doesn't work, why?
! echo $v10 Doesn't work, why?
! echo "Done" Works perfectly and also other bash commands
select * from dept; Works perfectly
exit
EOF
Thank you
What @jordanm says "probably" is exactly what is happening. When you specify a host command from within sqlplus, a separate shell process is spawned, the command is executed by that process, then that process is terminated and control returns to sqlplus. Any environment variables that are set in that child shell process are good only within it, so when it terminates, they are gone.
As for your specific lines that "work" and "don't work": export v10="Hi" does work, but there is no stdout output from the export command, and, as explained, the variable v10 ceases to exist once the child process completes and control returns to sqlplus. The echo $v10 also works, but since that runs in a new shell process, it has no value for $v10, so there is nothing to echo.
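The same behavior can be demonstrated with plain bash, without sqlplus at all; a minimal sketch (the variable name v10 is just the one from the question):

$ bash -c 'export v10="Hi"'    # a child shell sets and exports v10, then exits
$ echo "v10 is: $v10"          # the parent shell never saw it
v10 is: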
What are you trying to accomplish by setting environment variables from within sqlplus?
Found it; all I had to do was:
<< EOF
whenever sqlerror exit failure rollback
whenever oserror exit failure rollback
@scriptname.sql
EXIT
EOF
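Putting it together, a rough sketch of the full call (the scott/tiger@orcl connect string is just the one from the question, and -s only suppresses the sqlplus banner):

sqlplus -s scott/tiger@orcl << EOF
whenever sqlerror exit failure rollback
whenever oserror exit failure rollback
@scriptname.sql
EXIT
EOF
echo "sqlplus exit status: $?"

With the whenever directives in place, a SQL or OS error makes sqlplus exit with a failure status, which the calling shell can then test via $?.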

Check the Errorlevel value and process the if command

I have a specific requirement to write a single command to check the registry value and process accordingly.
The command which I have used is:
reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL" /v MSSQLSERVER > nul & if %ERRORLEVEL% NEQ 0 (echo "SQL Not Installed") else (echo "SQL Installed")
The first time, since the ERRORLEVEL value is 0, it shows
SQL Installed
even if SQL is not installed, and from the next run onwards it shows
SQL Not Installed
What is the issue with the command?
The "problem" with your code is that when the parser reach that line, the %errorlevel% variable is replaced with its value, and then the line is executed. So, any change to the value of the variable is not seen. The "usual" way of handling this cases is enabling delayed expansion and change the %errorlevel% sintax with !errorlevel! to indicate the parser that the value will change and need to be retrieved at execution time.
But, in this case, as you have the requirement of a "one liner", change the if test to
if errorlevel 1 (echo "SQL Not Installed") else (echo "SQL Installed")
A standard construct to check for errorlevel without reading the variable value.
You also have the possibility to code it as
reg query .... && echo Installed || echo Not Installed
This executes the reg command (with all your parameters, of course). On success, the command after && is executed. On failure the command after || is executed.
if ERRORLEVEL 1 should work:
reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL" /v MSSQLSERVER > nul & if ERRORLEVEL 1 (echo "SQL Not Installed") else (echo "SQL Installed")

How to route SQL print output to log file?

Does anyone know how to route PRINT ' ' output in an SQL script to a logfile when using Invoke-Sqlcmd?
I tried using sqlcmd -o someoutfile.txt, but it overwrites rather than appending to the existing file. And if an SQL error occurs, only the error message is sent to the file, not the PRINT ' ' output.
When using Invoke-Sqlcmd | Out-File someoutfile.txt -Append, it appends only Write-Output and any SQL errors, but not the PRINT ' ' statements in the SQL script executed.
Has anyone found a solution for this?
Invoke-Sqlcmd surfaces T-SQL PRINT statements and RAISERROR output through the verbose stream. To capture verbose output, first you'll need to include the parameter in your call to Invoke-Sqlcmd, i.e. invoke-sqlcmd -verbose, and then you can do one of two things:
If you're using PowerShell V3 or higher you can redirect verbose output:
invoke-sqlcmd -verbose 4>&1 | Out-File someoutfile.txt
If you're using PowerShell V2 you can't redirect verbose output to a file; however, you can use Start-Transcript to send all screen output to a file. One gotcha with this approach: it will not work in a SQL Agent PowerShell job step. It will, however, work with a CmdExec job step which calls powershell.exe.
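A rough end-to-end sketch for PowerShell V3 or higher (the server, database, script, and log file names below are placeholders, not taken from any real environment):

Invoke-Sqlcmd -ServerInstance "MyServer" -Database "MyDB" `
    -InputFile "C:\scripts\MyScript.sql" -Verbose 4>&1 |
    Out-File "C:\logs\someoutfile.txt" -Append

The 4>&1 redirection merges the verbose stream (where the PRINT text appears) into the success stream, so Out-File -Append captures both.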
And one more thing...
The command "Invoke-Sqlcmd" has a parameter -SeverityLevel.
SeverityLevel specifies the lower limit for the error message severity level that Invoke-Sqlcmd returns to the ERRORLEVEL PowerShell variable.
Invoke-Sqlcmd does not report severities for informational messages that have a severity of 10!
Severity Level 10: Status Information
This is an informational message that indicates a problem caused by mistakes in the information the user has entered. Severity level 0 is not visible in SQL Server.

How to get SQLCMD to output errors and warnings only

How can you get SQLCMD, when executing a SQL script file, to just output any errors or warnings it encounters?
I essentially don't want informational messages to be output.
It's not possible to prevent SQLCMD returning non-error output messages by passing it a parameter.
However, what you can do is redirect error messages to STDERR, then direct all other messages to NUL.
This is done by passing the -r parameter. From books online:
-r[ 0 | 1] msgs to stderr
Redirects the error message output to the screen (stderr). If you do not specify a parameter or if you specify 0, only error messages that have a severity level of 11 or higher are redirected. If you specify 1, all error message output including PRINT is redirected. Has no effect if you use -o. By default, messages are sent to stdout.
Set -r depending on exactly which error messages you want to display, but to display all error message output, an example command would be:
sqlcmd -Q "select 1 as a; select 1/0 as b" -E -r1 1> NUL
Just as an addition to this, if you are sending errors out to file, I found this https://www.simple-talk.com/sql/sql-tools/the-sqlcmd-workbench/
which I have used. If you omit setting the OUT, then you only get an error log created.
So you have a command like this :
sqlcmd -x -E -S MyServer -i C:\MySQLBatchToRun.sql
Then in MySQLBatchToRun.sql, something like this:
USE MyDatabase
:Error C:\MyErrorLog.txt
:r C:\MySQLScript.sql
GO
In MySQLScript.sql you have the actual SQL to run. It's a bit convoluted, but it works. The only issue I have is that it seems to create an empty error log file even if there is no error.
It looks like print statements are sent to stderr with -r1, so you can use them to log separately from your output, like so:
sqlcmd -Q "print 'hello logfile';select 'Ted' as bro" -r1 1> C:\output.txt 2> C:\logfile.txt
This also works with -i inputfile like:
sqlcmd -i helloTed.sql -r1 1> C:\output.txt 2> C:\logfile.txt
helloTed.sql:
print 'hello logfile';
select 'Ted' as bro
You could probably use -Q and exec a stored proc that contains PRINT statements.

127 Return code from $?

What is the meaning of return value 127 from $? in UNIX?
Value 127 is returned by /bin/sh when the given command is not found within your PATH system variable and it is not a built-in shell command. In other words, the system doesn't understand your command, because it doesn't know where to find the binary you're trying to call.
Generally it means:
127 - command not found
but it can also mean that the command is found, but a library that is required by the command is NOT found.
Example: run $ caat. The error message will be:
bash: caat: command not found
Now check the value using echo $?.
A shell convention is that a successful executable should exit with the value 0. Anything else can be interpreted as a failure of some sort, on the part of bash or the executable that you just ran. See also $PIPESTATUS and the EXIT STATUS section of the bash man page:
For the shell's purposes, a command which exits with a zero exit status has succeeded. An exit status of zero indicates success. A non-zero exit status indicates failure. When a command terminates on a fatal signal N, bash uses the value of 128+N as the exit status.
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126.
If a command fails because of an error during expansion or redirection, the exit status is greater than zero.
Shell builtin commands return a status of 0 (true) if successful, and non-zero (false) if an error occurs while they execute. All builtins return an exit status of 2 to indicate incorrect usage.
Bash itself returns the exit status of the last command executed, unless a syntax error occurs, in which case it exits with a non-zero value. See also the exit builtin command below.
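Both cases are easy to reproduce; a minimal sketch (assuming nosuchcommand really is not installed on the machine):

$ nosuchcommand
bash: nosuchcommand: command not found
$ echo $?
127
$ printf '#!/bin/bash\necho hi\n' > demo.sh    # the file exists but is not executable
$ ./demo.sh
bash: ./demo.sh: Permission denied
$ echo $?
126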
It has no special meaning, other than that the last process to exit did so with an exit status of 127.
However, it is also used by bash (assuming you're using bash as a shell) to tell you that the command you tried to execute couldn't be executed (i.e. it couldn't be found). Unfortunately, it's not immediately deducible whether the process exited with status 127, or whether the command simply couldn't be found.
EDIT:
Not immediately deducible, except from the output on the console; but this is Stack Overflow, so I assume you're doing this in a script.
If you're trying to run a program using a scripting language, you may need to include the full path of the scripting language and the file to execute. For example:
exec('/usr/local/bin/node /usr/local/lib/node_modules/uglifycss/uglifycss in.css > out.css');
This error is also at times deceptive. It says the file is not found even though the file is indeed present. It could be because of invalid, unreadable special characters present in the file, which could be caused by the editor you are using. This link might help you in such cases.
-bash: ./my_script: /bin/bash^M: bad interpreter: No such file or directory
The best way to find out if it is this issue is to simply place an echo statement in the file (as its entire content) and verify whether the same error is thrown.
If the IBM mainframe JCL has some extra characters or numbers at the end of the name of the UNIX script being called, it can throw such an error.
In addition to the given answers, note that running a script file with incorrect end-of-line characters could also result in a 127 exit code if you use /bin/sh as your shell.
As an example, if you run a shell script with CRLF end-of-line characters on a UNIX-based system under the /bin/sh shell, it is possible to encounter errors like the following, which I got after running my script named my_test.sh:
$ ./my_test.sh
sh: 2: ./my_test.sh: not found
$ echo $?
127
As a note, using /bin/bash I got a 126 exit code, which is in accordance with the gnu.org documentation about bash:
If a command is not found, the child process created to execute it returns a status of 127. If a command is found but is not executable, the return status is 126.
Finally, here is the result of running my script in /bin/bash:
arman@Debian-1100:~$ ./my_test.sh
-bash: ./my_test.sh: /bin/bash^M: bad interpreter: No such file or directory
arman@Debian-1100:~$ echo $?
126
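If the script does turn out to have CRLF endings, stripping the carriage returns fixes it; a quick sketch (dos2unix may not be installed, and the sed form assumes GNU sed):

$ dos2unix my_test.sh
$ sed -i 's/\r$//' my_test.sh
$ ./my_test.sh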
Go to C:\Program Files\Git\etc, open gitconfig with Notepad, and change
[core]
autocrlf = true
to
[core]
autocrlf = false