SSIS ADO Net Destination Errors - error-handling

I'm having a great deal of trouble converting errors into warnings in my package. The package reads from a CSV file and inserts the data into a table. If there are any issues with the data I want an email sent to the owners. I don't want an error message for myself, but I do want the package to finish execution.
I've tried hanging a script component off the ADO NET destination's error output. When the error ("Can't insert DBNull into table") occurs, the script component is never called. I want to catch certain errors, convert them to warnings with meaningful messages, and send those to the users. I have been using the OnWarning and OnError events to do this.
One issue I have come across is that I have a number of steps in the data flow, and if a warning or error does occur there is no way to determine which task caused it.
In addition, the error output setting on the ADO NET destination ("Set this value to selected cells") always reverts back to "Fail Component" no matter what I change it to.
Why doesn't the error output run? How do I prevent the error event?


Capture Data State on Error at runtime - SQL Server

I have a question regarding error handling practices within SQL Server.
What I would like to accomplish is easy error re-creation. I have a very active SQL Server installation with constantly changing data in the tables I am interested in. It is modeling an active warehouse environment.
I've already built a generic error handler for all the stored procedures on this installation in order to track errors and log specifics about the cause of the error such as:
calling line (this gives the EXEC statement of the stored procedure as well as input variables)
error_message
error_state
error_number
error_line
etc.
What I am missing is reproducibility. Even if I were to run the same statement just a few minutes after being notified that an error occurred, I cannot be sure that my results would be the same due to the underlying data changing.
I would like to capture the state of the data on the database when the error occurred.
This could be something like a database image that I could then import into a clean SQL Server installation and execute the erring line in order to perfectly capture what was happening on the database the moment the error occurred.
Due to the nature of needing to capture this at runtime, I would prefer a light-weight solution. Perhaps only capturing the tables relevant to the failing statement.
Does anyone know if this is possible or has been done before? It is really only critical to try and suss out logical errors. It wouldn't be necessary for something like a deadlock.
I would ultimately turn these data subsets into XML or JSON and include them in the error log when appropriate.
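One way the generic handler described above could be extended is to snapshot the relevant rows as XML inside the CATCH block and store them alongside the error details. This is only a sketch under stated assumptions: the `dbo.ErrorLog`, `dbo.Inventory` and `dbo.usp_PickStock` names (and the idea that one `OrderId` scopes the relevant rows) are invented for illustration, not taken from the actual installation.

```sql
-- Illustrative sketch only: table, proc and column names are invented.
DECLARE @OrderId INT = 42;

BEGIN TRY
    EXEC dbo.usp_PickStock @OrderId = @OrderId;
END TRY
BEGIN CATCH
    -- Snapshot the rows the failing statement touched, as XML, so the
    -- error can be re-created later on a clean installation.
    DECLARE @DataState XML =
        (SELECT *
         FROM dbo.Inventory
         WHERE OrderId = @OrderId
         FOR XML PATH('row'), ROOT('Inventory'), TYPE);

    INSERT INTO dbo.ErrorLog
        (ErrorNumber, ErrorMessage, ErrorLine, DataState)
    VALUES
        (ERROR_NUMBER(), ERROR_MESSAGE(), ERROR_LINE(), @DataState);
END CATCH;
```

The hard part is scoping "relevant" rows cheaply; anything wider than the keys the failing statement touched quickly stops being a light-weight runtime capture.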

SSIS ERROR: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020

I have a problem with an SSIS process (actually the same problem occurs for two different processes). We are doing some ETL work using SSIS. We have a Business Intelligence project that executes without errors from Visual Studio. However, when it's deployed on the server and scheduled as a job, it fails with errors like:
```
INTRASTAT_Towar:Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Union All 3" (876) failed with error code 0xC0047020 while processing input "Union All Input 2" (916). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.

INTRASTAT_Towar:Error: SSIS Error Code DTS_E_PRIMEOUTPUTFAILED. The PrimeOutput method on istat_dekl_wynik_poz SELECT returned error code 0xC02020C4. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. There may be error messages posted before this with more information about the failure.

INTRASTAT_Towar:Error: The attempt to add a row to the Data Flow task buffer failed with error code 0xC0047020.
```
The other errors returned are similar; sometimes instead of 0xC0047020 there is an unspecified error. The errors occur only on this one table, which has a lot of different stuff inside the data flow task (unions, multicasts, a conditional split). Most other data flows have just a source, destination and transformation, and they are not causing any problems. It was suggested that I try manipulating the DefaultBufferMaxRows and DefaultBufferSize properties of the data flow task, but after doing some research I don't believe that will solve the issue, as they are currently at their default values. Any suggestions?
Well, I managed to work out the issue with my packages. I was using the 2012 SSIS version, but I executed packages in a 32-bit environment in BIDS. The server actually executed in 64-bit, and for some projects that was the problem. One checkbox in the job step properties to make it execute in a 32-bit environment, and I solved the problem we had been fighting for weeks.
I was also facing the same issue; I just did the step below.
Open the Data Flow tab >> click anywhere except any task, then right click >> Properties >> change **ForceExecutionValueType** to **Int64**.
Watch out for indexes on the destination tables, especially unique ones, because they will throw an error that doesn't pinpoint the problem.
For people who stumbled here with the same error: if you are trying to copy data from one Azure SQL database to another using the SQL Server Import and Export Wizard, use the 64-bit version.
From the Windows search, launch SQL Server 2019 Import and Export Data (64-bit).

Error logging in SQL Server 2012

I guess by default only errors with severity 20-25 get logged into the SQL Server error log file. How can I change the config to also log less severe errors? Or where can I find less severe errors being logged in SQL Server? I am interested in errors such as:
could not find stored procedure spName
etc.
I extracted the following block from this article, which would be a good starting point to explore the SQL Server error log config changes.
Logging Errors
SQL Server Agent takes these errors from the error log, so it follows that the errors must be logged in the first place. There is no way of attaching alerts to errors that aren't logged. All error messages with a severity level from 19 through 25 are written to the error log automatically.
So, what if you want to log information messages, or messages of low severity? If you wish to have an alert on any error that is of a severity less than 19, then you have to modify its entry in the sysmessages table to set it to be always logged.
You do this using sp_altermessage with the WITH_LOG option to set the dLevel column to 128. If a message has been altered to be WITH_LOG, it is always subsequently written to the application log, however the error happens.
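As a concrete illustration, tied to the question's "could not find stored procedure" example: message id 2812 is the system message for that error, and (assuming a reasonably recent SQL Server build, where sp_altermessage is allowed on system messages, and sysadmin rights) it can be marked as always logged like this:

```sql
-- Mark "Could not find stored procedure '%.*ls'." (message id 2812)
-- as always written to the error log, even without WITH LOG.
EXEC sp_altermessage
    @message_id      = 2812,
    @parameter       = 'WITH_LOG',
    @parameter_value = 'true';
```

After this, any occurrence of the error appears in the SQL Server error log and can drive an Agent alert.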
Even if RAISERROR is executed without the WITH LOG option, the error that you have altered is written to the application log, and is therefore spotted by the alert. There are good reasons for wanting to do this, as it will then log, and optionally alert you to, syntax errors that are normally seen only by the end-user.
You can force any error that is triggered programmatically to be written to the error log by using the WITH LOG parameter with the RAISERROR command. So, with a user-defined error severity (9) you can log an incident, cause an alert to be fired, which in turn emails someone, or runs a job, simply by using RAISERROR.
Naturally, because the job that responds to the alert can be run by the Agent under a different User, you do not need to assign unsafe permissions to the ordinary user. You can use xp_LogEvent if, as is likely, you do not want the user to see the error. (Only the Raiserror call can utilize the 'PrintF' formatting placeholders in the error messages, so logging the error with a formatted message using xp_logevent results in a literal recording of the message, string format identifiers and all.)
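A minimal sketch of the two approaches described above (the message text, message number 60000 and the item name are made up for illustration; xp_logevent requires a message number of 50000 or higher):

```sql
-- Logged and visible to the user; eligible for an Agent alert.
RAISERROR ('Stock level for item %s fell below zero.', 9, 1, 'WIDGET-42') WITH LOG;

-- Logged without showing the user anything. Note xp_logevent does not
-- expand printf-style placeholders, so format the string first.
DECLARE @msg NVARCHAR(2048) = N'Stock level for item WIDGET-42 fell below zero.';
EXEC master..xp_logevent 60000, @msg, informational;
```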
I would recommend referring to the original article for more detailed information on this.
Please follow this link to find out more on severities.

Word VBA unhandled error not displaying a message

I have a Word Template with VBA.
Wherever I need to, I handle my errors with either an "On Error Resume Next / On Error Goto 0" pair around a call to a collection that might fail, say, or a full error handler (On Error Goto label: / Resume exit_label:).
Tools > Options is set to Break on Unhandled Errors.
I have found that for this template (only it seems) that unhandled errors are not being reported. The VBA code just stops running without telling the user anything.
I have tried exporting all VBA modules / forms, saving the template as a .dotx, closing and reopening then saving as a .dotm again and adding the code back but same problem persists.
Other templates for this client work fine. A deliberately added delete of a non-existent bookmark causes an error to be shown. The same delete call is ignored by the iffy template, BUT the code stops running, so it isn't doing a Resume Next.
If I add a full error handler to the procedure concerned it does error and reports via my msgbox code.
In some respects this wouldn't matter; anywhere I am expecting errors to be a possibility, I handle them. But I have a certain amount of code that deals with images in headers, and they are prone to errors. At this point I want the code to error where the problems happen so I can analyse how to get around them; just telling the user isn't going to get the template working. What I have at the moment is the code "dying", and it is very hard to track where without putting in loads of debug statements and seeing where they stopped.
Has anyone seen something like this before? Could I have set something for this template specifically to tell it to not error on unhandled errors?
Other templates on the same machine error quite happily.
The template is very complex so I don't have the option to make it from scratch. Turfing out the VBA and putting it back didn't fix it. If I have to I can add handlers for all procedures but that shouldn't be necessary as errors won't happen once I have tightened up the code.
Any help really appreciated. For a few days I thought it was just VBA dying when working with images; now I realise it is something more general than that.
Ok. I have found the source of the problem.
I had used a couple of standard procedures we have in various projects for conveying busy/not busy to the user - we call them InterfaceOn and InterfaceOff. They change the cursor, switch off screen updating, etc.
One line for the Off proc is "Application.EnableCancelKey = wdCancelDisabled".
I hadn't realised how severe that line is, it appears to stop the errors being reported and the code just dies. The help for it warns it is risky but doesn't explicitly state that it'll kill VBA on an unhandled error.
The InterfaceOn procedure has the line that reverses the EnableCancelKey restriction.
These procedures are used a lot so I am surprised this issue hasn't shown up before. Have removed the EnableCancelKey lines for now. No particular need to be that dramatic with this project.
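For reference, a sketch of what the pair might look like, with the offending lines shown commented out (the exact procedures weren't posted, so the bodies here are an assumption based on the description):

```vba
' Sketch of the InterfaceOff/InterfaceOn pair described above.
' The EnableCancelKey lines are the ones removed; with them active,
' unhandled errors stop the code silently instead of raising a dialog.
Sub InterfaceOff()
    System.Cursor = wdCursorWait
    Application.ScreenUpdating = False
    ' Application.EnableCancelKey = wdCancelDisabled  ' removed: suppresses unhandled-error dialogs
End Sub

Sub InterfaceOn()
    System.Cursor = wdCursorNormal
    Application.ScreenUpdating = True
    ' Application.EnableCancelKey = wdCancelInterrupt ' removed: restores the default
End Sub
```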

Is it possible to force an error in an Integration Services data flow to demonstrate its rollback?

I have been tasked with demoing how Integration Services handles an error during a data flow to show that no data makes it into the destination. This is an existing package and I want to limit the code changes to the package as much as possible (since this is most likely a one time deal).
The scenario that is trying to be understood is a "systemic" failure - the source file disappears midstream, or the file server loses power, etc.
I know I can make this happen by having the Error Output of the source set to Failure and introducing bad data but I would like to do something lighter than that.
I suppose I could add a Script Transform task and look for a certain value and throw an error but I was hoping someone has come up with something easier / more elegant.
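If the Script Transform route is acceptable after all, it only takes a few lines. A minimal sketch of such a component (the input column name `TriggerError` and the sentinel value are assumptions for the demo):

```csharp
// Hypothetical SSIS Script Component (transformation): fail the data
// flow when a sentinel value appears in the input, to demonstrate the
// rollback behaviour downstream.
public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    if (!Row.TriggerError_IsNull && Row.TriggerError == "FAIL")
    {
        bool cancel;
        // Surface a descriptive error in the execution log...
        ComponentMetaData.FireError(0, "RollbackDemo",
            "Forced failure to demonstrate rollback.", string.Empty, 0, out cancel);
        // ...and abort the pipeline.
        throw new System.Exception("Forced failure to demonstrate rollback.");
    }
}
```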
Thanks,
Matt
Mess up the file that you are trying to import by pasting in some bad data, or save it in another format like UTF-8 or something like that.
We always have a task at the end that closes the data flow in our metadata tables. To test errors, I simply remove the ? that is the variable for the stored proc it runs. It's easy to do and easy to change back the way it was, and it doesn't mess up anything data-wise, as our error trapping then closes the data flow with an error. You could do something similar by adding a task to call a stored proc with an input variable but assign no parameters to it, so it will fail. Then once the test is done, simply disable that task.
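A minimal sketch of that deliberately broken Execute SQL Task statement (the proc and parameter names are invented for illustration):

```sql
-- The task normally runs this, with the ? mapped to a package variable:
--   EXEC dbo.usp_CloseDataFlowLog ?;
-- Removing the ? while the proc still requires its parameter makes the
-- task fail at run time; restoring the ? undoes the test.
EXEC dbo.usp_CloseDataFlowLog;   -- fails: @BatchId expected but not supplied
```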
Data will make it to the destination if it is not running as a transaction. If you want to prevent populating partial data, you have to use transactions. There is an option to set the end result of a control flow item to "failed" irrespective of the actual result, but this is not available in data flow items. You will have to either produce an actual error in the data or code a situation that will create an error. There is no other way...
Could we try the transaction-level (TransactionOption) property of the package?
On failure of the data flow it will revert all the data changes to the target.
Only on a successful data flow will it commit the data to the target; otherwise it rolls the changes back.