How to make the SSIS package status Failure when Propagate was set to False for a Sequence Container - error-handling

I have an SSIS package with a Foreach Loop > Sequence Container. The Sequence Container reads the file supplied by the Foreach Loop and processes its data. The requirement was not to fail the entire package when an exception occurred while processing a file, but to continue with the next file until all files from the Foreach Loop were processed. For this, I set the Propagate variable for the Sequence Container to False. I also added an email step on the OnError event of the Sequence Container. The package runs as expected and processes all files even when an exception occurs with one of them. But I would like the final status of my SSIS package to be Failure, since one of the files failed. How can I achieve that?

Did you try these options?
(The SSIS version in the screenshot is Russian, but it is the Sequence Container's property window.)
View -> Properties Window -> then click on your Sequence Container and it will show you the properties of the Sequence Container.
If I were you, I would first try the property FailPackageOnFailure - it should cover your question, if I understand it correctly.
P.S. You can also see the properties of the whole package when you click on an empty area of the design surface.
UPDATED (after comments and a clearer understanding of the task):
The idea is to set MaximumErrorCount on the Sequence Container as high as you need. That way the package will not stop because one of the files failed inside the Sequence Container, and the next file will still be processed; but the package should fail after the Sequence Container finishes its work, because you have not changed MaximumErrorCount for the package itself.
Important - a value of zero sets the error count threshold to infinity, and the package or task never gets a Failure status.
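A minimal sketch of a complementary pattern (not part of the answer above): set a Boolean flag inside the Sequence Container's OnError handler and fail the package from a final Script Task. The variable name User::FileFailed and the task placement are assumptions.
// C# Script Task placed after the Foreach Loop, with the hypothetical
// variable User::FileFailed (Boolean, set to True inside the Sequence
// Container's OnError handler) listed in ReadOnlyVariables.
public void Main()
{
    bool fileFailed = (bool)Dts.Variables["User::FileFailed"].Value;

    // Fail the whole package if any file failed, even though the
    // individual errors were suppressed by Propagate = False.
    Dts.TaskResult = fileFailed
        ? (int)ScriptResults.Failure
        : (int)ScriptResults.Success;
}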

Related

How to Clear SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED?

I got this error message after running my SSIS package.
Source : flat text file
Target : sql database
Warning: 0x80019002 at Package: SSIS Warning Code DTS_W_MAXIMUMERRORCOUNTREACHED. The Execution method succeeded, but the number of errors raised (3) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "........\Package.dtsx" finished: Failure.
The program '[7312] DtsDebugHost.exe: DTS' has exited with code 0 (0x0).
Below is my simple source file
CAS, SubName, ListCode, Type, CountryCode, ListName
^1000413-72-8^,^fasiglifam^,^447^,^Chemical Inventory^,^EU^,^ECICS Custom Tariff Codes^
^1000413-72-8^,^fasiglifam^,^0^,^^,^NN^,^SPHERA Global Substance List^
The SSIS data flow task below is failing at the source. The target is a SQL database.
Please help me with this.
An error means there is some underlying issue that you need to fix. But if you still want to raise the threshold, you can set MaximumErrorCount to a higher value at the package level.

Loop exit not working

So I have a workflow which is supposed to throw an error after a certain condition is satisfied (a false condition). As you can see in the log directly below, it works: I do a loop exit first for the group 'coms' and an error is thrown. However, Flowgear seems to read only the last executed node and then determine the workflow's status from that. Since the loop finishes last and is successful, if you look in the second log you can see that the workflow has been evaluated as 'successful', although an error was thrown inside it.
Any ideas how to make the loop break? Also, why does Flowgear only consider the last node? There should be an option in the error node to stop all execution.
Iterator nodes (Splitter and Loop) will consume the errors. The only way at this stage to get the workflow to return an error is to cause an error in the AnyError or UnhandledError part of the workflow. I've created a workflow to demonstrate this here: http://flowgear.me/s/UdpGBbd
Hope this helps.

Pentaho Data Integration: Error Handling

I'm building out an ETL process with Pentaho Data Integration (CE) and I'm trying to operationalize my Transformations and Jobs so that they'll be able to be monitored. Specifically, I want to be able to catch any errors and then send them to an error reporting service like Honeybadger or New Relic. I understand how to do row-level error reporting but I don't see a way to do job or transaction failure reporting.
Here is an example job.
The down path is where the transformation succeeds but has row errors. There we can just filter the results and log them.
The path to the right is the case where the transformation fails all-together (e.g. DB credentials are wrong). This is where I'm having trouble: I can't figure out how to get the error info to be sent.
How do I capture transformation failures to be logged?
You cannot capture job-level error details inside the job itself.
However, there are other options for monitoring.
The first option is database logging for transformations or jobs (see the "Log" tab in the job/transformation settings dialog). This way you always have up-to-date information about the execution status, so you can, say, write a job that periodically scans the logging database and sends error reports wherever you need.
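As a rough sketch of such a scan (assuming the default transformation log field names and a hypothetical table called trans_log - check your own "Log" tab for the actual names):
-- Find transformations that finished with errors since the last scan.
-- (Field names are the PDI defaults; adjust to your Log tab settings.)
SELECT TRANSNAME, STATUS, ERRORS, LOGDATE
FROM trans_log
WHERE ERRORS > 0
  AND LOGDATE > ?;  -- timestamp of the previous scan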
However, this database-logging option is fairly heavyweight to develop and support, and not very flexible for later modifications. So in our company we ended up monitoring at the job-execution level - i.e. when you run a job with kitchen.bat and it fails for any reason, kitchen itself exits with an "error" status, which you can easily examine and act on with whatever tools you like - .bat commands, PowerShell, or (in our case) Jenkins CI.
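As an illustration of that approach (a sketch only - the job path and log location are placeholders):
rem Run the job with Kitchen and react to a non-zero exit code.
call kitchen.bat /file:C:\etl\main_job.kjb /level:Basic
if %ERRORLEVEL% neq 0 (
    echo %date% %time% ETL job failed with exit code %ERRORLEVEL% >> C:\etl\etl_errors.log
    rem ...notify your error-reporting service (Honeybadger, New Relic, ...) here...
)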
You could use the writeToLog("e", "Message") function in the Modified Java Script step.
Documentation:
// Writes a string to the defined Kettle Log.
//
// Usage:
// writeToLog(var);
// 1: String - The Message which should be written to
// the Kettle Debug Log
//
// writeToLog(var,var);
// 1: String - The Type of the Log
// d - Debug
// l - Detailed
// e - Error
// m - Minimal
// r - RowLevel
//
// 2: String - The Message which should be written to
// the Kettle Log
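For example, inside the Modified Java Script step you could write something like this (err_desc is a hypothetical field holding the error text for the current row):
// Write an error-level entry for the current row to the Kettle log.
writeToLog("e", "Row-level failure: " + err_desc);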

SSIS execute package task - retry on failure on child package

I have an SSIS package which calls a number of Execute Package Tasks that perform ETL operations.
Is there a way to configure the Execute Package Tasks so that they retry a defined number of times? Currently, when one of the tasks in the child package fails, the Execute Package Task fails. When this happens, I would like the task to be retried before giving up and failing the parent package.
One solution I know of is to set a flag for each package in the database, set it to a defined value on success, and call each package in a For Loop Container until the flag indicates success or the count exceeds a predefined retry count.
Is there a cleaner or more generic way to do this?
Yes, put the Execute Package Task in a For Loop Container. Define one variable for the counter, one as a successful-run indicator, and a MAX_COUNT constant. In the properties of the Execute Package Task (Expressions), set
FailPackageOnFailure - False
After the Execute Package Task, put a Script Task with SuccessfulRun in its ReadWriteVariables and the script:
Dts.Variables["User::SuccessfulRun"].Value = 1;
In the properties of the For Loop Container:
InitExpression - @Val_Counter = 0
EvalExpression - @Val_Counter < @MAX_COUNT && @SuccessfulRun == 0
AssignExpression - @Val_Counter = @Val_Counter + 1
Connect the Execute Package Task to the Script Task with a Success precedence constraint.
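For reference, a minimal sketch of the Script Task body (C#, with User::SuccessfulRun - an Int32 variable - in ReadWriteVariables):
// This task runs only via the Success constraint, so reaching it means
// the child package completed without errors on this iteration.
public void Main()
{
    // Flag the retry loop to stop: the For Loop's EvalExpression
    // (@Val_Counter < @MAX_COUNT && @SuccessfulRun == 0) now evaluates to false.
    Dts.Variables["User::SuccessfulRun"].Value = 1;

    Dts.TaskResult = (int)ScriptResults.Success;
}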
OR
In the For Loop Container, define the expression
MaximumErrorCount - Const_MAX_COUNT
But I haven't tested this one yet...

CTest build ID not set

I have a CDash server configured to accept posts for automatic builds and tests. However, when any system attempts to post results to CDash, the following error is produced. The result is that each result gets posted four times (presumably the original posting attempt plus the three retries).
Can anyone give me a hint as to what sets this mysterious build ID? I found some code that seems to produce a similar error, but still no lead on what might be happening.
Build::GetNumberOfErrors(): BuildId not set
Build::GetNumberOfWarnings(): BuildId not set
Submit failed, waiting 5 seconds...
Retry submission: Attempt 1 of 3
Server Response:
The buildid for CDash is computed based on the site name, the build name and the build stamp of the submission. You should have a Build.xml file in a Testing/20110311-* directory in your build tree. Open that up and see if any of those fields (near the top) is empty. If so, you need to set BUILDNAME and SITE with -D args when configuring with CMake. Or, set CTEST_BUILD_NAME and CTEST_SITE in your ctest -S script.
If that's not it, then this is a mystery. I've not seen this error occur before...
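For reference, the two ways mentioned above look roughly like this (the site and build names are placeholders):
# On the CMake configure command line:
#   cmake -DSITE=mybuildhost -DBUILDNAME=Linux-gcc-Debug <source-dir>
# Or in a ctest -S driver script:
set(CTEST_SITE "mybuildhost")
set(CTEST_BUILD_NAME "Linux-gcc-Debug")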
I'm having the same issue, though Site and BuildName are available in Test.xml and are visible on CDash (four times). I can see the jobs increment by refreshing between retries, so it seems that the submission succeeds but is reported as a timeout.
Update: this seems to have started when I added the -j <nprocs> switch to the ctest command. Changing CtestSubmitRetryDelay to 20 (it was 5) allowed a server response through which indicates the CDash version may not be able to handle the multi-proc option; I'll have to look into that for my issue. Perhaps setting CtestSubmitRetryDelay to a larger number will get you a server response back, as it did for me. Good luck!
Out of range value for column 'processorclockfrequency'