Why is ⎕SIGNAL not caught by :: error guards?

      {11::¯1 ⋄ 2÷0}⍬
¯1
      {11::¯1 ⋄ ⎕SIGNAL 11}⍬
DOMAIN ERROR
Why is the error in the first expression caught by the guard, while the signalled error in the second is not?

As per the documentation for ⎕SIGNAL (my emphasis):
The state indicator is cut back to exit from the function or operator containing the line that invoked ⎕SIGNAL, or is cut back to exit the Execute (⍎) expression that invoked ⎕SIGNAL. If executed within a nested dfn, the state indicator is cut back to exit from the capsule containing the line that invoked ⎕SIGNAL. An error is then generated.
In other words, by the time ⎕SIGNAL is done doing its thing, we're already outside the dfn and thus the dfn's error guard (::) is not in effect any more.
To work around this, you have to use ⎕SIGNAL in a separate capsule. For example, you can define a cover function outside the function where you want to use it:
      Signal←{⎕SIGNAL ⍵}
      {11::¯1 ⋄ Signal 11}⍬
¯1
Alternatively, you can put ⎕SIGNAL in its own execution capsule:
      {11::¯1 ⋄ ⍎'⎕SIGNAL 11'}⍬
¯1

Related

Ability to use leave inside a filter block

I am writing an interpreter of CIL code, and now it is time to interpret leave in a filter block.
How do I handle it?
My first idea was that this is an invalid instruction inside a filter block, so such code can't be executed. But then I looked at the standard and found something strange.
The standard ECMA-335 says that leave can be used (!) to exit a filter block:
The leave instruction is similar to the br instruction, but the former can be used to exit a try,
filter, or catch block whereas the ordinary branch instructions can only be used in such a
block to transfer control within it.
At the same time:
Control cannot be transferred out of a filter block except through the use of a throw instruction or
executing the final endfilter instruction. In particular, it is not valid to execute a ret or leave
instruction within a filter block.
Seems to be a contradiction.

CBMC Toy Example

I'm new to CBMC and experimenting with it. In this link here, there is a toy example for checking the function binsearch with CBMC. I decided to run the following command that they provided, just changing up the number of times the loop was unwound:
cbmc binsearch.c --function binsearch --unwind 4 --bounds-check --unwinding-assertions
It returned the following:
** Results:
[binsearch.unwind.0] unwinding assertion loop 0: FAILURE
prog.c function binsearch
[binsearch.array_bounds.1] line 7 array `a' lower bound in a[(signed long int)middle]: SUCCESS
[binsearch.array_bounds.2] line 7 array `a' upper bound in a[(signed long int)middle]: SUCCESS
[binsearch.array_bounds.3] line 9 array `a' lower bound in a[(signed long int)middle]: SUCCESS
[binsearch.array_bounds.4] line 9 array `a' upper bound in a[(signed long int)middle]: SUCCESS
Is the fact that the unwinding assertion failed because there weren't enough iterations a bad thing? From my point of view, it seems like the example is bug-free because the code didn't access portions of memory that it's not supposed to, but I'm not sure, given that one unwinding-assertion failure. Does anyone have any ideas about the safety? Does that failure matter?
Based on the --unwinding-assertions option, which checks the following:
Checks whether --unwind is large enough to cover all program paths. If the argument is too small, CBMC will detect that not enough unwinding is done and report that an unwinding assertion has failed.
I'd say that it alerts you to the possibility that there aren't enough loop iterations to be sure the function won't access the array out of bounds. In other words, while the function didn't violate any properties with 4 unwindings, we need to cover all paths before we can say for certain that it is safe.

In LabVIEW, how to run a block only after exiting a while loop?

In LabVIEW, I want to take some readings into a Measurement File in a while loop, and run a Read From Measurement File block only after exiting the while loop, as below:
How can I achieve this event-driven execution?
P.S. Other blocks are removed for convenience.
Enforce execution order with an error wire as shown.
Wire error out from your Write to Measurement File function to error in of the Read from Measurement File.
LabVIEW dataflow works like this: data does not appear at the output terminals of a function, VI, or structure (such as the While loop) until it finishes executing, and a function, VI, or structure does not execute until data is available at every input terminal that is wired. So the Read will not execute until the error data is output from the completed While loop.
Using the error wire to enforce the order of execution like this is common practice in LabVIEW, and has another advantage: most VIs are written to not perform their function if an error is present at error in, but instead 'fall through' and return the same error at their output. So you can wire up a chain of operations linked by the error wire and catch and handle any errors at the end of the chain.
If you want more precise control over how LabVIEW handles errors, look at the help: this page describes how to specifically ignore an error if you don't want it to stop your program, and this site has a good overview of techniques for error handling.

How can I prevent QuickCheck from catching all exceptions?

The QuickCheck library seems to catch all exceptions that are thrown when testing a property. In particular, this behavior prevents me from putting a time limit on the entire QuickCheck computation. For example:
module QuickCheckTimeout where

import System.Timeout (timeout)
import Control.Concurrent (threadDelay)
import Test.QuickCheck (quickCheck, within, Property)
import Test.QuickCheck.Monadic (monadicIO, run, assert)

-- use threadDelay to simulate a slow computation
prop_slow_plus_zero_right_identity :: Int -> Property
prop_slow_plus_zero_right_identity i = monadicIO $ do
  run (threadDelay (100000 * i))
  assert (i + 0 == i)

runTests :: IO ()
runTests = do
  result <- timeout 3000000 (quickCheck prop_slow_plus_zero_right_identity)
  case result of
    Nothing -> putStrLn "timed out!"
    Just _  -> putStrLn "completed!"
Because QuickCheck catches all the exceptions, timeout breaks: it doesn't actually abort the computation! Instead, QuickCheck treats the property as having failed, and tries to shrink the input that caused the failure. This shrinking process is then not run with a time bound, causing the total time used by the computation to exceed the prescribed time limit.
One might think I could use QuickCheck's within combinator to bound the computation time. (within treats a property as having failed if it doesn't finish within the given time limit.) However, within doesn't quite do what I want, since QuickCheck still tries to shrink the input that caused the failure, a process that can take far too long. (What could alternatively work for me is a version of within that prevents QuickCheck from trying to shrink the inputs to a property that failed because it didn't finish within the given time limit.)
How can I prevent QuickCheck from catching all exceptions?
Since QuickCheck does the right thing when the user manually interrupts the test by pressing Ctrl+C, you might be able to work around this issue by writing something similar to timeout, but one that throws an asynchronous UserInterrupt exception instead of a custom exception type.
This is pretty much a straight copy-and-paste job from the source of System.Timeout:
import Control.Concurrent
import Control.Exception

timeout' :: Int -> IO a -> IO a
timeout' n f = do
  pid <- myThreadId
  bracket (forkIO (threadDelay n >> throwTo pid UserInterrupt))
          killThread
          (const f)
With this approach, you'll have to use quickCheckResult and check the failure reason to detect whether the test timed out or not. It seems to work decently enough:
> runTests
*** Failed! Exception: 'user interrupt' (after 13 tests):
16
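For completeness, here is a rough sketch of that check (this is my own illustration, not part of the original answer: runTests' is a made-up name, and matching on the reason string is a heuristic rather than a stable API). It reuses timeout' and the property defined above:
import Data.List (isInfixOf)
import Test.QuickCheck (quickCheckResult, Result(..))

runTests' :: IO ()
runTests' = do
  result <- timeout' 3000000 (quickCheckResult prop_slow_plus_zero_right_identity)
  case result of
    -- If the asynchronous UserInterrupt fired, QuickCheck reports a Failure
    -- whose reason mentions the interrupt (see the output above).
    Failure{reason = r}
      | "user interrupt" `isInfixOf` r -> putStrLn "timed out!"
    _ -> putStrLn "completed (or failed for another reason)!"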
Maybe the chasingbottoms package would be useful? http://hackage.haskell.org/packages/archive/ChasingBottoms/1.3.0.3/doc/html/Test-ChasingBottoms-TimeOut.html
This doesn't answer the main question, but it addresses your suggested alternative:
What could alternatively work for me is a version of within that prevents QuickCheck from trying to shrink the inputs to a property that failed because it didn't finish within the given time limit
There is noShrinking which should work for that:
https://hackage.haskell.org/package/QuickCheck/docs/Test-QuickCheck.html#v:noShrinking
As a downside, this will also disable shrinking if the test fails for reasons other than the timeout.
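As an illustration (not from the original answers; the name prop_slow_with_limit and the one-second limit are my own choices), the two combinators could be combined for the property above roughly like this:
import Test.QuickCheck (Property, noShrinking, within)
import Test.QuickCheck.Monadic (monadicIO, run, assert)
import Control.Concurrent (threadDelay)

-- Fail any single test case that runs longer than one second (within takes
-- microseconds), and skip the shrinking phase entirely when a test case fails.
prop_slow_with_limit :: Int -> Property
prop_slow_with_limit i = noShrinking $ within 1000000 $ monadicIO $ do
  run (threadDelay (100000 * i))
  assert (i + 0 == i)
The outer noShrinking means a timeout reported by within fails the run immediately instead of triggering a long shrinking phase.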

How to really trap all errors with $etrap in Intersystems Caché?

I've been banging my head against this for a while. Given the way that $etrap (the error-handling special variable) was conceived, you must be careful to really trap all errors. I've been partially successful at this, but I'm still missing something, because when the code runs in user mode (application mode) there are internal Caché library errors that still halt the application.
What I did was:
ProcessX(var)
    set sc=$$ProcessXProtected(var)
    w !,"after routine call"
    quit sc
ProcessXProtected(var)
    new $etrap
    ; This stops Cache from processing the error before this context. Code
    ; will resume at the line [w !,"after routine call"] above
    set $etrap="set $ECODE = """" quit:$quit 0 quit"
    set sc=1
    set sc=$$ProcessHelper(var)
    quit sc
ProcessHelper(var)
    new $etrap
    ; This code tells Cache to keep unwinding the error-handling context up
    ; to the previous error handler.
    set $etrap="quit:$quit 0 quit"
    do AnyStuff^Anyplace(var)
    quit 1
AnyStuffFoo(var)
    ; Call anything, which might in turn call many subroutines.
    ; The important point is that we don't know how many contexts
    ; will be created from now on. So we must trap all errors, in any
    ; case.
    ; Call internal Cache library
    quit
After all this, I can see that when I call the program from a prompt it works! But when I call it from a Caché Terminal script (application mode, I was told) it fails and aborts the program: the error-trapping mechanism doesn't work as expected.
Is it possible that an old-style error trap ($ZTRAP) is being set only in user mode?
The documentation on this is pretty good, so I won't repeat it all here, but a key point is that $ZTRAP isn't New-ed in the same way as $ETRAP. In a way, it is "implicitly new-ed", in that its value only applies to the current stack level and subsequent calls. It reverts to any previous value once you Quit up past the level it was set in.
Also, I'm not sure if there's a defined order of precedence between $ETRAP and $ZTRAP handlers, but if $ZTRAP is of higher precedence, that would override your $ETRAPs.
You could try setting $ZTRAP yourself right before you call the library function. Set it to something different than $ETRAP so you can be sure which one was triggered.
Even that might not help though. If $ZTRAP is being set within the library function, the new value will be in effect, so this won't make a difference. This would only help you if the value of $ZTRAP came from somewhere further up the stack.
You didn't mention what library function caused this. My company has source code for some library functions, so if you can tell me the function name I'll see what I can find. Please give me the value of $ZVersion too so I can be sure we're talking about the same version of Cache.