How do I stop an SSIS Script Component from processing data - error-handling

I am processing a ragged semicolon-delimited file using a Script Component as a transformation.
The component is able to process the data and load it to an OLE DB destination, but when an error is found it should stop processing further. Since I am using a try/catch block, the component doesn't fail and continues to process until the end.
Is there any way I could stop further processing without failing the component/package?
Let me know if any other information/details are required.
sample code:
string[] columns = str.Split(';');
if (columns[0] == "H")
{
    col1 = columns[3];
}
if (columns[0] != "T")
{
    try
    {
        Row.col1 = columns[0];
        Row.col2 = columns[1];
        // .....
    }
    catch
    {
        // update the variable to flag that we have an error in the file
    }
}
Thank you for your time.

The general idea is that you want to use try/catch blocks to ensure the data processing itself doesn't abort. Once you know your script isn't reporting a failure back to the engine, it's a simple matter of not calling AddRow().
Pseudocode
foreach(line in fileReader)
{
    try
    {
        // perform dangerous operations here
        // Only add row if you have been able to parse current line
        Output0Buffer.AddRow();
        Output0Buffer.Col1 = parsedContent;
    }
    catch
    {
        // Signal that we should break out of the loop
        // do not propagate the error
        // You might want to do something though so you know you
        // have an incomplete load
        break;
    }
}
If you are looking to just skip the current bad line, you can substitute continue for the break above.
C# loop - break vs. continue

I didn't get help from anywhere, but as a workaround I placed a return statement in the code. It checks the error variable; if it's true, I return without processing further. The catch is that it still reads through the whole file :(. But it works!!!
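For reference, a minimal sketch of that workaround in a Script Component's per-row method. The flag errorFound and the column names Line, col1 and col2 are assumptions for illustration, not the actual names from the package:
private bool errorFound = false;

public override void Input0_ProcessInputRow(Input0Buffer Row)
{
    // Once a parse error has been seen, skip every remaining row.
    // The file is still read to the end, but no further rows are processed.
    if (errorFound)
        return;

    try
    {
        string[] columns = Row.Line.Split(';');
        Row.col1 = columns[0];
        Row.col2 = columns[1];
    }
    catch
    {
        errorFound = true; // checked at the top on every subsequent row
    }
}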

Related

Tcl: catch errors from all commands

Sorry, I couldn't come up with a better title. My problem is the following: If I execute a Tcl proc, I can wrap the execution in catch to catch and process errors. I do this in my code to have the same error output everywhere. However, my program provides numerous procs which the user can use in scripts (mostly at the outermost level), and it would be cumbersome if the user had to wrap every one of them in a catch. I could of course use an additional level of indirection in each of those commands, but I wanted to ask whether there is a way to catch errors from all commands executed, without explicitly using catch on each invocation?
Thanks for your help!
Read this paragraph first: ↓
The single most important principle of error handling is don't throw away errors unless you know for sure that that's the correct way to handle them. Doing so just because they're unsightly is very bad! (Logging them is far better.)
The closest you can get is to run the whole of your existing code inside a catch or try. You can put a source inside that, so a little driver script to retrofit on existing code is just something like:
set argc [llength [set argv [lassign $argv argv0]]]
catch {source $argv0}
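If that driver is saved as, say, safe-run.tcl (a name made up for this example), existing scripts can be run through it unchanged:
# hypothetical invocation; the lassign above shifts the real script's
# name into argv0 before it is sourced
tclsh safe-run.tcl myscript.tcl arg1 arg2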
This assumes you're applying it during the call to the overall script. You might also need to set up interp bgerror:
interp bgerror {} {apply {{msg opt} {
    if {[lindex [dict get $opt -errorcode] 0] eq "EXPECTED"} {
        # Ignore this
    } else {
        # Unexpected error; better tell the user
        puts "ERROR: $msg"
        puts [dict get $opt -errorinfo]
    }
}}}
It's not really a good idea to do this, though. If you hide all the errors, how will you find and fix any errors? Using try is better, since that lets you hide only expected errors:
try {
    source $argv0
} trap EXPECTED {} {
    # Ignore this
}
and I'd probably wrap things up so that I have local variables:
apply {{} {
    global argv0 argv argc
    set argc [llength [set argv [lassign $argv argv0]]]
    try {
        uplevel #0 [list source $argv0]
    } trap EXPECTED {msg} {
        # Log this; the logging engine is out of scope for the question
        log DEBUG $msg
    }
}}
You'll need to experiment to see what to trap.
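For example, errors raised with an EXPECTED error code (a name chosen purely for illustration; substitute whatever codes your procs actually set) are exactly what the trap clauses above would catch:
# raising an error whose -errorcode starts with EXPECTED (Tcl 8.6+)
proc riskyOperation {} {
    throw {EXPECTED BADINPUT} "the input was not valid"
}

try {
    riskyOperation
} trap EXPECTED {msg} {
    puts "ignoring expected error: $msg"
}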

To nest conditionals inside your Main, or to not nest conditionals

Issue
Attempting to identify the best practice for executing sequential methods: either nesting conditionals one after another, or nesting conditionals one inside another, within a main function. In addition, if you could explain why one method would be better than the other besides which is most accepted, I'd sincerely appreciate it. Here are my examples:
Nesting one after another
int main()
{
    // conditional 1
    if (!method_one())
    {
        ... do something
    }
    else
    {
        ... prompt error for method 1!
    }
    // conditional 2
    if (!method_two())
    {
        ... do something
    }
    else
    {
        ... prompt error for method 2!
    }
    // conditional 3
    if (!method_three())
    {
        ... do something
    }
    else
    {
        ... prompt error for method 3!
    }
    return 0;
}
Nesting one inside another
int main()
{
    // conditional 1
    if (!method_one())
    {
        // conditional 2
        if (!method_two())
        {
            // conditional 3
            if (!method_three())
            {
                ... next steps in sequence
            }
            else
            {
                ... prompt error for method 3!
            }
        }
        else
        {
            ... prompt error for method 2!
        }
    }
    else
    {
        ... prompt error for method 1!
    }
    return 0;
}
Observations
I've seen both used; however, I'm not sure which is better practice and/or more commonly accepted.
The two options aren't actually entirely logically identical - in "Nesting one after another", for example, method_two() will run even if method_one() fails; if method_two() has any side effects this may be undesirable. Furthermore, if both method_one() and method_two() are destined to fail, "Nesting one after another" will print two error prompts, whereas "Nesting one inside another" will only prompt an error for method_one().
You could close the difference by appending a goto End at the end of each else in "Nesting one after another", so it skips over the remaining checks, but the use of goto would probably get you slapped. Alternatively, you could return at the end of each else, perhaps with an error code, and let whoever is calling your main function deal with understanding what went wrong.
With that in mind, "Nesting one after another" is probably easier to read and understand, since there's less indentation/the code is kept flat, and what happens on failure is immediately next to the check. (That 2nd point can be addressed by reordering the error prompt for method_one() to before the check for method_two() for "Nesting one inside another")
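As a quick sketch of that early-return variant (keeping the examples' convention that the methods return nonzero on failure):
int main()
{
    if (method_one())
    {
        // ... prompt error for method 1!
        return 1; // error code for the caller
    }
    // ... do something
    if (method_two())
    {
        // ... prompt error for method 2!
        return 2;
    }
    // ... do something
    if (method_three())
    {
        // ... prompt error for method 3!
        return 3;
    }
    // ... next steps in sequence
    return 0;
}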

Bypassing functions that do not exist

How would it be possible to bypass functions that do not exist in DM, such that the main code would still run? Try/catch does not seem to work, e.g.:
image doSomething(number a, number b)
{
    try
    {
        whateverfunction(a, b)
    }
    catch
    {
        continue
    }
}
number a,b
doSomething(a,b)
Conditionals also won't work, e.g.:
image doSomething(number a, number b)
{
    if (doesfunctionexist("whateverfunction"))
    {
        whateverfunction(a, b)
    }
}
number a,b
doSomething(a,b)
Thanks in advance!
As "unknown" commands are caught by the script-interpreter, there is no easy way to do this. However, you can construct a workaround by using ExecuteScriptCommand().
There is an example tutorial to be found in this e-book, but in short, you want to do something like the following:
String scriptCallStr = "beep();\n"
scriptCallStr += "MyUnsaveFunctionCall();\n"
number exitVal
Try { exitVal = ExecuteScriptString( scriptCallStr ); }
Catch { exitVal = -1; break; }
if ( -1 == exitVal )
{
    OKDialog( "Sorry, couldn't do:\n" + scriptCallStr )
}
else
{
    OKDialog( "All worked. Exit value: " + exitVal )
}
This works nicely and easily for simple commands, and if your task is only to "verify" that a script could run.
It becomes clumsy when you need to pass parameters around, but even then there are ways to do so. (The 'outer' script could create an object and pass the object ID as a string. Similarly, the 'inner' script can do the same and return the script-object ID as its exit value.)
Note: You can of course also put doesfunctionexist inside the test script if you only want a "safe test" but don't actually want to execute the command.
Depending on what you need, there might also be another workaround: wrapper functions in a library. This can be useful if you want to run the same script on different PCs, some of which have the functionality - most likely some microscope - while others don't.
You can make your main script use wrapper methods and then install different versions of the wrapper-method scripts as libraries.
void My_SpecialFunction( )
{
    SpecialFunction() // use this line on PCs which have the SpecialFunction()
    DoNothing()       // use alternative line on PCs which don't have the SpecialFunction()
}
My_SpecialFunction( )
I have used this in the past where the same functionality (stage movement) required different commands on different machines.

What is "await do" in Perl 6?

I see the following code in Perl 6:
await do for @files -> $file {
    start {
        # do something ...
    }
}
which runs in async mode.
Why does the above code need do? What is the purpose of do in Perl 6? Could someone please explain the above code in detail?
Also, is there an option to write something like this:
for @files -> $file {
    start {
        # do something ...
    }
}
and then await the promises after that code, for them to be fulfilled?
The purpose of do
The for keyword can be used in two different ways:
1) As a stand-alone block statement:
for 1..5 { say $_ }
2) As a statement modifier appended to the end of a statement:
say $_ for 1..5;
When the bare for keyword is encountered in the middle of a larger statement, it is interpreted as that second form.
If you want to use the block form inside a larger statement (e.g. as the argument to the await function), you have to prefix it with do to tell the parser that you're starting a block statement here, and want its return value.
More generally, do makes sure that what follows it is parsed using the same rules it would be parsed as if it were its own statement, and causes it to provide a return value. It thus allows us to use any statement as an expression inside a larger statement. do if, do while, etc. all work the same way.
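For example, do turns a for loop into an expression whose value is the sequence of the block's results:
# 'do for' used as an expression: each block's value is collected
my @squares = do for 1..5 { $_ ** 2 };
say @squares; # [1 4 9 16 25]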
Explanation of your code
The code you showed...
await do for @files -> $file {
    start {
        # do something ...
    }
}
...does the following:
It loops over the array @files.
For each iteration, it uses the start keyword to schedule an asynchronous task, which presumably does something with the current element $file. (The $*SCHEDULER variable decides how the task is actually started; by default it uses a simple thread pool scheduler.)
Each invocation of start immediately returns a Promise that will be updated when the asynchronous task has completed.
The do for collects a sequence of all the return values of the loop body (i.e. the promises).
The await function accepts this sequence as its argument, and waits until all the promises have completed.
How to "await after the code"
Not entirely sure what you mean here.
If you want to remember the promises but not await them just yet, simply store them in an array:
my @promises = do for @files -> $file {
    start {
        # do something ...
    }
}
#other code ...
await #promises;
There is no convenience functionality for awaiting all scheduled/running tasks. You always have to keep track of the promises.

Primefaces p:fileUpload - do something on beginning and end of upload (multiple files)

PrimeFaces 4.0 has a nice component to upload files, including multiple files at once. By some dark miracle, it actually works.
Code:
<p:fileUpload fileUploadListener="#{someBean.handleSingleFileUpload}"
mode="advanced" multiple="true" auto="true" dragDropSupport="true"
update=":form_info" sizeLimit="100000" allowTypes="/(\.|\/)(xml)$/" />
The problem is that the listener someBean.handleSingleFileUpload is called once for each file. I dealt with that nicely, but I cannot see any way to execute some code at the beginning and at the end of the entire upload process. IMO a rather large oversight.
For example:
at beginning clear info textarea
now multiple files are uploaded simultaneously...
at end reload something based on data that was just uploaded
Of course, the things at the beginning and end should be executed only once, regardless of the number of files. Is there any way to do that? In the PrimeFaces docs for p:fileUpload there is no attribute other than fileUploadListener that calls a bean method.
Well, I ended up simply using a counter. Obvious in hindsight, eh. Example:
private int eventCounter = 0;

public void handleSingleFileUpload(FileUploadEvent event)
{
    beginImport();
    try
    {
        // code to import file, for example parse xml
    }
    catch (Exception ex)
    {
        ex.printStackTrace(); // or whatever
    } // block catch Exception
    finally
    {
        finishImport();
    } // block finally
}

private synchronized void beginImport()
{
    if (eventCounter == 0)
    {
        // insert code to execute before first file is started
    } // block if just started
    eventCounter++;
}

private synchronized void finishImport()
{
    eventCounter--;
    if (eventCounter < 0) eventCounter = 0; // Just in case...
    if (eventCounter > 0) return; // not really finished yet
    // insert code to execute when last file is done
}
It is a hackish solution, and I fear that in some cases it could call the begin-and-finish pair twice (it should not run the begin or finish code twice in a row), which could theoretically interfere with the ongoing import of the n-th file.
It works for me, at least for now. We'll see how it holds up when the xml has thousands of entries instead of dozens. I would prefer something like onStartOfEverything and onEndOfEverything as attributes on the p:fileUpload tag, but ah well.
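For what it's worth, the same counter idea can also be expressed with an AtomicInteger instead of synchronized methods; this is just a sketch of the identical approach, not a fix for the pairing race described above:
import java.util.concurrent.atomic.AtomicInteger;

private final AtomicInteger eventCounter = new AtomicInteger();

private void beginImport()
{
    // getAndIncrement returns the previous value: 0 means this is the first file
    if (eventCounter.getAndIncrement() == 0)
    {
        // code to execute before the first file is started
    }
}

private void finishImport()
{
    // decrementAndGet returns the new value: 0 means the last file is done
    if (eventCounter.decrementAndGet() == 0)
    {
        // code to execute when the last file is done
    }
}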