Capture Data State on Error at runtime - SQL Server

I have a question regarding error handling practices within SQL Server.
What I would like to accomplish is easy error re-creation. I have a very active SQL Server installation with constantly changing data in the tables I am interested in. It is modeling an active warehouse environment.
I've already built a generic error handler for all the stored procedures on this installation in order to track errors and log specifics about the cause of the error (see the sketch after this list), such as:
calling line (this gives the EXEC statement of the stored procedure as well as input variables)
error_message
error_state
error_number
error_line
etc.
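For context, here is a minimal sketch of that kind of handler; the log table dbo.ErrorLog and the procedure call are hypothetical stand-ins, not the actual schema:

BEGIN TRY
    EXEC dbo.usp_MoveInventory @BinId = 42;  -- hypothetical failing call
END TRY
BEGIN CATCH
    -- Record the calling line plus the ERROR_* details listed above.
    INSERT INTO dbo.ErrorLog
        (CallingLine, ErrorMessage, ErrorState, ErrorNumber, ErrorLine, LoggedAt)
    VALUES
        ('EXEC dbo.usp_MoveInventory @BinId = 42',
         ERROR_MESSAGE(), ERROR_STATE(), ERROR_NUMBER(), ERROR_LINE(),
         SYSDATETIME());
    THROW;  -- rethrow so the caller still sees the original error
END CATCH;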
What I am missing is reproducibility. Even if I were to run the same statement just a few minutes after being notified that an error occurred, I cannot be sure that my results would be the same due to the underlying data changing.
I would like to capture the state of the data on the database when the error occurred.
This could be something like a database image that I could then import into a clean SQL Server installation and execute the erring line in order to perfectly capture what was happening on the database the moment the error occurred.
Due to the nature of needing to capture this at runtime, I would prefer a lightweight solution, perhaps capturing only the tables relevant to the failing statement.
Does anyone know if this is possible or has been done before? It is really only critical to try and suss out logical errors. It wouldn't be necessary for something like a deadlock.
I would ultimately turn these data subsets into XML or JSON and include them in the error log when appropriate.
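As a hedged illustration of that last step (FOR JSON requires SQL Server 2016 or later; the table, predicate, and log columns here are invented for the example), the CATCH block could snapshot the relevant rows like this:

DECLARE @BinId int = 42;  -- example key from the failing call
DECLARE @snapshot nvarchar(max) =
    (SELECT TOP (100) *
     FROM dbo.WarehouseBin             -- hypothetical table the statement read
     WHERE BinId = @BinId              -- same predicate the failing statement used
     FOR JSON AUTO, INCLUDE_NULL_VALUES);
-- Store the snapshot alongside the other error details (run inside the CATCH
-- block, so ERROR_NUMBER()/ERROR_MESSAGE() return the current error):
INSERT INTO dbo.ErrorLog (ErrorNumber, ErrorMessage, DataSnapshot)
VALUES (ERROR_NUMBER(), ERROR_MESSAGE(), @snapshot);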

Related

sqlplus hangs after being called from a batch file, without throwing up error message

Essentially, where I work we run a variety of reporting processes that follow the same basic structure...
A batch file calls a SQL script which executes a stored procedure. Another script extracts the data from Oracle and writes it to a CSV. Finally, an Excel macro runs to create the final output.
We have been encountering an issue recently where, if the procedure takes longer than about an hour to run, it hangs indefinitely without moving on to the next line of the batch file. No error message is thrown.
The most frustrating part is that certain procedures sometimes have the issue, and then the next day they do not.
Has anyone else ever encountered this issue? Or have any idea what could be causing this problem? I feel like it could be connection/firewall related, but it really is not my area of expertise!
You should instrument the batch file and use extended SQL tracing to reveal where ALL of your time is going. Nothing can escape proper instrumentation. You will find the source of the problem. What you do about it varies depending upon the particular problem (i.e., anti-pattern).
I see issues like this all the time. What I do is connect to the DB and see what is running by checking gv$session. The key is to identify what SQL the script is running, then see whether there is any reason for it to be "hung" (there are MANY possible reasons): for example, missing indexes; missing or out-of-date stats; workload on the instance; blocking locks; and so on.
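As a sketch of that first step (the username filter is an assumption; adjust it to whatever account the scripts connect as):

-- What is each active session for the reporting account running,
-- and is anything blocking it?
SELECT s.inst_id, s.sid, s.serial#, s.status, s.event,
       s.blocking_session, q.sql_text
FROM   gv$session s
       LEFT JOIN gv$sql q
              ON q.sql_id = s.sql_id
             AND q.inst_id = s.inst_id
WHERE  s.username = 'REPORT_USER'  -- assumption: the account the scripts use
AND    s.status = 'ACTIVE';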
If you have the SQL Tuning Advisor, you can run the SQL through there to get some ideas on solutions. An ADDM report may also suggest some additional solutions.

SQLCODE=-514 SQLSTATE=26501 occurred after I finished the rebind operation

I want to make sure the new procedure is valid. To keep DB2 from always querying via the cache pool, I have to rebind the database (db2rbind command). Then I deploy the application on WebSphere. BUT, when I log in to the application, this error occurs:
The cursor "SQL_CURSN200C4" is not in a prepared state..SQLCODE=-514 SQLSTATE=26501,DRIVER=3.65.97
Furthermore, the weirdest thing is that the error occurred only once. It has never occurred again since, and the application runs very well. I'm curious about how it occurs and why it occurred only once.
PS: my DB2 version is 10.1 Enterprise Server Edition, and the SQL that the error stack points to is very simple, something like:
select * from table where 1=1 and field_name="123" with ur
Unless you configure otherwise (statementCacheSize=0) or manually use setPoolable(false) in your application, WebSphere Application Server data sources cache and reuse PreparedStatements. A rebind can cause statements in the cache to become invalid. Fortunately, WebSphere Application Server has built-in knowledge of the -514 error code and will purge the bad statement from the cache in response to an occurrence of this error, so that the invalidated prepared statement does not continue to be reused and cause additional errors to the application. You might be running into this situation, which could explain how the error occurs just once after the rebind.

Error logging in SQL Server 2012

I guess by default only errors with severity 20-25 get logged to the SQL Server error log file. How can I change the configuration to also log less severe errors? Or where can I find less severe errors being logged in SQL Server? I am interested in errors such as
could not find stored procedure spName
etc.
I extracted the following block from this article, which is a good starting point for exploring SQL Server error log configuration changes.
Logging Errors
SQL Server Agent takes these errors from the error log, so it follows that the errors must be logged in the first place. There is no way of attaching alerts to errors that aren't logged. All error messages with a severity level from 19 through 25 are written to the error log automatically.
So, what if you want to log information messages, or messages of low severity? If you wish to have an alert on any errors that are of a severity less than 19, then you have to modify their entry in the sysmessages table to set them to be always logged.
You do this using sp_altermessage with the WITH_LOG option to set the dLevel column to 128. If a message has been altered to be WITH_LOG, it is always subsequently written to the application log, however the error happens.
Even if RAISERROR is executed without the WITH LOG option, the error that you have altered is written to the application log, and is therefore spotted by the alert. There are good reasons for wanting to do this, as it will then log, and optionally alert you to, syntax errors that are normally seen only by the end-user.
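For example, a hedged sketch targeting the exact message the question asked about (error 2812 is "Could not find stored procedure '%.*ls'."; altering a system message this way requires sysadmin):

-- Make error 2812 always write to the error log and application log:
EXEC sp_altermessage @message_id = 2812,
                     @parameter = 'WITH_LOG',
                     @parameter_value = 'true';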
You can force any error that is triggered programmatically to be written to the error log by using the WITH LOG parameter with the RAISERROR command. So, with a user-defined error severity (9) you can log an incident, cause an alert to be fired, which in turn emails someone, or runs a job, simply by using RAISERROR.
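A minimal example of that (the message text and substitution value are invented):

-- Severity 9 would not normally be logged; WITH LOG forces it into the
-- error log and application log, where an alert can pick it up. The %d
-- placeholder uses RAISERROR's printf-style formatting.
RAISERROR ('Nightly load fell behind by %d rows.', 9, 1, 42) WITH LOG;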
Naturally, because the job that responds to the alert can be run by the Agent under a different User, you do not need to assign unsafe permissions to the ordinary user. You can use xp_LogEvent if, as is likely, you do not want the user to see the error. (Only the Raiserror call can utilize the 'PrintF' formatting placeholders in the error messages, so logging the error with a formatted message using xp_logevent results in a literal recording of the message, string format identifiers and all.)
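A hedged example of the xp_logevent alternative (the message number and text are invented; user-defined message numbers must be 50000 or higher):

-- Writes to the error log and application log without raising an error to
-- the connected user. Note the text is recorded literally, so the %d below
-- stays as "%d" in the log, illustrating the point about format placeholders.
EXEC master..xp_logevent 60000, 'Nightly load fell behind by %d rows.', 'warning';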
I would recommend referring to the original article for more detailed information on this.
Please follow this link to find out more on severities.

Problems running vb.net program on superuser-type user

I have written a program in vb.net for in house use that connects to a Progress OpenEdge database. Now I'm having a really weird runtime problem.
I have a .exe file that runs fine from my local C: drive, from the C: drive of the servers, and from a certain network location (but not other places on the network) for at least two regular users. The problem is that when I submit it to my IT manager for review, she gives it back and says it won't even run. Looking at the error, it seems to fail on the very first select query (which happens before the form finishes loading). Specifically, it ultimately boils down to the error below:
System.Data.Odbc.OdbcException: ERROR [HY000] [DataDirect][ODBC Progress OpenEdge Wire Protocol driver]Number contains an invalid character: ?
Now, certainly, I'm using data sources in Visual Studio and parameterized queries. So, yes, if it's trying to run the query as straight SQL and not filling in the parameters like it's supposed to, then there is a question mark in a number field. My question is: why does the same .exe, in the same place, run by a user with HIGHER privileges, throw errors?
Are you initializing the integer variable with zero (0)? The question mark in Progress means an unknown value.
If you are still running into the problem or haven't verified a solution yet, maybe check out this KnowledgeBase article on SQL tracing to make sure that the interpretation/execution of these statements is correct.
Provided that everything is the same with the SQL statements, the problem is most likely in the way the .exe is being run. A parameter may be getting filled with an alpha character rather than numeric input, depending on how the .exe is being run.

Is it possible to force an error in an Integration Services data flow to demonstrate its rollback?

I have been tasked with demoing how Integration Services handles an error during a data flow to show that no data makes it into the destination. This is an existing package and I want to limit the code changes to the package as much as possible (since this is most likely a one time deal).
The scenario we are trying to understand is a "systemic" failure - the source file disappears midstream, or the file server loses power, etc.
I know I can make this happen by having the Error Output of the source set to Failure and introducing bad data but I would like to do something lighter than that.
I suppose I could add a Script Transform task and look for a certain value and throw an error but I was hoping someone has come up with something easier / more elegant.
Thanks,
Matt
Mess up the file that you are trying to import by pasting in some bad data, or save it in another format like UTF-8, or something like that.
We always have a task at the end that closes the data flow in our metadata tables. To test errors, I simply remove the ? that is the variable for the stored proc it runs. It is easy to do, easy to put back the way it was, and it doesn't mess up anything data-wise, as our error trapping then closes the data flow with an error. You could do something similar by adding a task that calls a stored proc with an input variable but assigns no parameters to it, so it will fail (a sketch follows). Then once the test is done, simply disable that task.
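A hedged sketch of that trick (the proc name and metadata table are invented; the ? placeholder is how an Execute SQL Task passes a parameter over an OLE DB connection):

-- Hypothetical close-out proc with a required parameter:
CREATE PROCEDURE dbo.usp_CloseDataFlow @RunId int
AS
    UPDATE dbo.MetaDataRuns
    SET    ClosedAt = SYSDATETIME()
    WHERE  RunId = @RunId;
GO
-- Execute SQL Task statement. With the ? mapped to a variable, it succeeds;
-- remove the ? (or its parameter mapping) and the call fails because
-- @RunId is not supplied, firing the error path without touching the data:
EXEC dbo.usp_CloseDataFlow ?;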
Data will make it to the destination if it is not running as a transaction. If you want to prevent populating partial data, you have to use transactions. There is an option to set the end result of a control flow item to "failed" irrespective of the actual result, but this is not available for data flow items. You will have to either produce an actual error in the data or code in a situation that will create an error. There is no other way.
Could we try the transaction level (TransactionOption) property of the package? On failure of the data flow it will revert all the data changes to the target. Only on a successful data flow will it commit the data to the target; otherwise it will roll the data back.