Is there a way to "catch" errors in GAP?

Suppose I'm using a package function f that takes a single argument x and performs some error checking on that argument. Like, is x of the right type? Etc. And if there is an error then f(x) throws an error with Error().
Is there an easy way to write a wrapper function fCatch around f to catch that error, and say return false if f(x) throws an error but true otherwise? The naïve way to accomplish this same effect would be to copy the code for f and change all the Error(...); lines into a return false; and set a return true; at the end, but it's bad form to duplicate all the error-checking code.

As per Alexander Konovalov's comment, the following works, except the Error(...) message that f(x) triggers still prints.
fCatch := function(x)
  local result;
  BreakOnError := false;    # do not enter the break loop when an error occurs
  result := CALL_WITH_CATCH(f, [ x ])[1];    # first entry is true iff the call succeeded
  BreakOnError := true;
  return result;
end;
A warning: CALL_WITH_CATCH, like any other function with an ALL CAPS name, is undocumented and intended only for internal use in GAP.

Related

range() as method argument for loop iterator

This code generates a nice loop:
for i in range(self.ix):
__pyx_t_3 = __pyx_v_self->ix;
__pyx_t_4 = __pyx_t_3;
for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) {
__pyx_v_i = __pyx_t_5;
But I want to be able to pass this iterator as an argument (so I can have different iterators/generators).
If I try this:
cdef ixrange(self): return range(self.ix)
the generated code is already too unwieldy, and the loop is no longer a simple loop:
static PyObject *__pyx_f_3lib_10sparse_ary_9SparseAry_ixrange(struct __pyx_obj_3lib_10sparse_ary_SparseAry *__pyx_v_self) {
PyObject *__pyx_r = NULL;
__Pyx_RefNannyDeclarations
__Pyx_RefNannySetupContext("ixrange", 0);
__Pyx_XDECREF(__pyx_r);
__pyx_t_1 = __Pyx_PyInt_From_unsigned_int(__pyx_v_self->ix); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 139, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_1);
__pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_range, __pyx_t_1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 139, __pyx_L1_error)
__Pyx_GOTREF(__pyx_t_2);
__Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
__pyx_r = __pyx_t_2;
__pyx_t_2 = 0;
goto __pyx_L0;
/* function exit code */
__pyx_L1_error:;
__Pyx_XDECREF(__pyx_t_1);
__Pyx_XDECREF(__pyx_t_2);
__Pyx_AddTraceback("lib.sparse_ary.SparseAry.ixrange", __pyx_clineno, __pyx_lineno, __pyx_filename);
__pyx_r = 0;
__pyx_L0:;
__Pyx_XGIVEREF(__pyx_r);
__Pyx_RefNannyFinishContext();
return __pyx_r;
}
Is there any way I can make it a simple loop again?
I don't think it is possible.
The effect of for i in range(self.ix): is to generate a range object, and then call next on that range object until a StopIteration exception is raised. That's also the behaviour that Cython takes for general for loops.
There's actually a fairly long list of conditions that have to be met for Cython to transform that into an optimized loop:
range must be used directly - i.e. it = range(...); for i in it won't work, because you might need the range object after the loop.
The start and stop values must be integers.
The loop variable must be an integer (Cython can infer this, but if you assign something of a different type to the loop variable, it won't work).
Cython must know that you haven't reassigned range.
The sign of the stop variable must be known.
In your case I think you're hoping for Cython to look inside the cdef function and pick out the range. This is beyond what the current optimizer can do. But also, you can override cdef functions in a derived class (in C++ terms, they're "virtual"), so Cython could almost never safely make the optimization.
Cython is usually able to optimize best when you really restrict the types it's working with. The request that it should generate a fast C for loop but also be able to handle arbitrary iterators does not seem realistically achievable.
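As a concrete illustration of those conditions, here is a minimal sketch of a shape Cython does turn into a plain C for loop (the class name Demo and the method name sum_indices are invented for this example; the ix attribute mirrors the one in the question):

cdef class Demo:
    cdef unsigned int ix

    def __init__(self, unsigned int ix):
        self.ix = ix

    cpdef unsigned long sum_indices(self):
        cdef unsigned long total = 0
        cdef unsigned int i
        # range() is used directly, the bound and the loop variable are C
        # integers, and range has not been reassigned, so Cython emits a
        # plain C for loop here instead of building a range object.
        for i in range(self.ix):
            total += i
        return total

When the iterator is handed in from outside (as with the cdef ixrange above), none of those checks can be made at the call site, which is why the generated code falls back to generic iteration.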

Can you retry a Zig function call when it returns an error?

Zig's documentation shows different methods of error handling including bubbling the error value up the call stack, catching the error and using a default value, panicking, etc.
I'm trying to figure out how to retry functions which provide error values.
For example, in the below snippet from ziglearn, is there a way to retry the nextLine function in the event that a user enters greater than 100 characters?
fn nextLine(reader: anytype, buffer: []u8) !?[]const u8 {
    var line = (try reader.readUntilDelimiterOrEof(
        buffer,
        '\n',
    )) orelse return null;
    // trim annoying windows-only carriage return character
    if (@import("builtin").os.tag == .windows) {
        return std.mem.trimRight(u8, line, "\r");
    } else {
        return line;
    }
}

test "read until next line" {
    const stdout = std.io.getStdOut();
    const stdin = std.io.getStdIn();
    try stdout.writeAll(
        \\ Enter your name:
    );
    var buffer: [100]u8 = undefined;
    const input = (try nextLine(stdin.reader(), &buffer)).?;
    try stdout.writer().print(
        "Your name is: \"{s}\"\n",
        .{input},
    );
}
This should do what you want.
const input = while (true) {
    const x = nextLine(stdin.reader(), &buffer) catch continue;
    break x;
} else unreachable; // the fallback value could be an empty string, maybe?
To break it down:
Instead of try, you can use catch to do something in the case of an error; here we restart the loop in that case.
While loops can also be used as expressions, and you can break from them with a value. They also need an else branch, in case the loop ends without breaking out of it. In our case this is impossible, since we're going to loop forever until nextLine succeeds, but if we had another exit condition (like a limit on the number of retries), then we would need to provide a "fallback" value instead of unreachable.
You can also make it a one-liner:
const input = while (true) break nextLine(stdin.reader(), &buffer) catch continue else unreachable;
Hopefully the new self-hosted compiler will be able to pick up on the fact that the else branch is not necessary, since we're going to either break with a value or loop forever.

return@forEach does not seem to exit forEach{}

The count is 4 at the end of the code below. I expected 0. Why is it 4? How can I get 0?
var count = 0;
"hello".forEach {
    if (it == 'h')
    {
        println("Exiting the forEach loop. Count is $count");
        return@forEach;
    }
    count++;
}
println("count is $count");
Output:
Exiting the forEach loop. Count is 0
count is 4
return@forEach does not exit forEach() itself, but the lambda passed to it (the "body" of forEach()). Note that this lambda is executed several times - once per item. By returning from it you actually skip only a single item, so this is similar to continue, not to break.
To workaround this you can create a label in the outer scope and return to it:
var count = 0;
run loop@{
    "hello".forEach {
        if (it == 'h')
        {
            println("Exiting the forEach loop. Count is $count");
            return@loop;
        }
        count++;
    }
}
This is described here: https://kotlinlang.org/docs/returns.html#return-at-labels
Note that the use of local returns in previous three examples is similar to the use of continue in regular loops. There is no direct equivalent for break, but it can be simulated by adding another nesting lambda and non-locally returning from it
It is 4 because forEach calls the lambda passed to it once for each character in the string, so the return@forEach in your code only returns for the first element. You can use a for loop with break to obtain 0.
return@forEach returns from the lambda function. But the forEach function is a higher-order function that calls the lambda repeatedly for each item in the iterable. So when you return from the lambda, you are only returning for that single item. It is analogous to using continue in a traditional for loop.
If you want to exit iteration in a higher-order function completely, you have to use labels. And as I type this, I see another answer already shows how to do that, but I'll leave this in case the different explanation helps.
If your objective is to count the number of characters before 'h', you could do something like this:
val numCharsBeforeH = "hello".takeWhile { it != 'h' }.length
From your comment to Tenfour04's answer:
This is not very convenient. Why didn't the makers of Kotlin create a "break" equivalent?
Here is a quote of the "Loops" section of the Coding conventions:
Prefer using higher-order functions (filter, map etc.) to loops. Exception: forEach (prefer using a regular for loop instead, unless the receiver of forEach is nullable or forEach is used as part of a longer call chain).
When making a choice between a complex expression using multiple higher-order functions and a loop, understand the cost of the operations being performed in each case and keep performance considerations in mind.
Indeed, using a regular for loop with break does what you expect:
var count = 0
for (char in "hello") {
    if (char == 'h') {
        println("Breaking the loop. Count is $count")
        break
    }
    count++
}
println("count is $count")
Output:
Breaking the loop. Count is 0
count is 0
Except for very simple operations, there are probably better ways to do what you need than using forEach.

julia: display the `catch` error in control flow

Consider the following try/catch flow
function test(x)
try x^3
if x < 0; error("i only accept x >= 0"); end
return x^3
catch
return abs(x)^3
end
end
How can I display the error message (and stack trace) in the case test(-2) # == 8? In this case I know the error, but if it's a more complicated function with asserts etc, I'd like to know what specifically failed.
I tried rethrow(), but it needs to be called inside the catch block, and I still want a return value.
You can save the exception into a variable by writing a variable name of your choice right after catch. error creates an ErrorException. You can see the fields of this exception with fieldnames(ErrorException); the msg field gives you the message you passed to error. Alternatively, you may use the showerror method.
function test(x)
try x^3
if x < 0; error("i only accept x >= 0"); end
return x^3
catch e
showerror(stdout, e)
# or
println(e.msg)
end
end
For the stack trace, you may use stacktrace(catch_backtrace()). We pass catch_backtrace to stacktrace, because what we usually want is to obtain the stack trace of the context of the most recent exception and not the current context.

How do I write a generic memoize function?

I'm writing a function to find triangle numbers and the natural way to write it is recursively:
function triangle (x)
if x == 0 then return 0 end
return x+triangle(x-1)
end
But attempting to calculate the first 100,000 triangle numbers fails with a stack overflow after a while. This is an ideal function to memoize, but I want a solution that will memoize any function I pass to it.
Mathematica has a particularly slick way to do memoization, relying on the fact that hashes and function calls use the same syntax:
triangle[0] = 0;
triangle[x_] := triangle[x] = x + triangle[x-1]
That's it. It works because the rules for pattern-matching function calls are such that it always uses a more specific definition before a more general definition.
Of course, as has been pointed out, this example has a closed-form solution: triangle[x_] := x*(x+1)/2. Fibonacci numbers are the classic example of how adding memoization gives a drastic speedup:
fib[0] = 1;
fib[1] = 1;
fib[n_] := fib[n] = fib[n-1] + fib[n-2]
Although that too has a closed-form equivalent, albeit messier: http://mathworld.wolfram.com/FibonacciNumber.html
I disagree with the person who suggested this was inappropriate for memoization because you could "just use a loop". The point of memoization is that any repeat function calls are O(1) time. That's a lot better than O(n). In fact, you could even concoct a scenario where the memoized implementation has better performance than the closed-form implementation!
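To make that concrete, here is a minimal generic memoizer sketched in Python (purely illustrative; it is not taken from any answer here and only handles hashable positional arguments):

def memoize(f):
    cache = {}
    def wrapper(*args):
        # a repeat call with the same arguments becomes a dictionary lookup
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(80))  # fast: each fib(k) is computed once, then looked up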
You're also asking the wrong question for your original problem ;)
This is a better way for that case:
triangle(n) = n * (n + 1) / 2
Furthermore, supposing the formula didn't have such a neat solution, memoisation would still be a poor approach here. You'd be better off just writing a simple loop in this case. See this answer for a fuller discussion.
I bet something like this should work with variable argument lists in Lua:
local function varg_tostring(...)
local s = select(1, ...)
for n = 2, select('#', ...) do
s = s..","..select(n,...)
end
return s
end
local function memoize(f)
local cache = {}
return function (...)
local al = varg_tostring(...)
if cache[al] then
return cache[al]
else
local y = f(...)
cache[al] = y
return y
end
end
end
You could probably also do something clever with metatables and __tostring so that the argument list could just be converted with tostring(). Oh, the possibilities.
In C# 3.0, for recursive functions, you can do something like this:
public static class Helpers
{
public static Func<A, R> Memoize<A, R>(this Func<A, Func<A,R>, R> f)
{
var map = new Dictionary<A, R>();
Func<A, R> self = null;
self = (a) =>
{
R value;
if (map.TryGetValue(a, out value))
return value;
value = f(a, self);
map.Add(a, value);
return value;
};
return self;
}
}
Then you can create a memoized Fibonacci function like this:
var memoized_fib = Helpers.Memoize<int, int>((n,fib) => n > 1 ? fib(n - 1) + fib(n - 2) : n);
Console.WriteLine(memoized_fib(40));
In Scala (untested):
def memoize[A, B](f: (A)=>B) = {
var cache = Map[A, B]()
{ x: A =>
if (cache contains x) cache(x) else {
val back = f(x)
cache += (x -> back)
back
}
}
}
Note that this only works for functions of arity 1, but with currying you could make it work. The more subtle problem is that memoize(f) != memoize(f) for any function f. One very sneaky way to fix this would be something like the following:
val correctMem = memoize(memoize _)
I don't think that this will compile, but it does illustrate the idea.
Update: Commenters have pointed out that memoization is a good way to optimize recursion. Admittedly, I hadn't considered this before, since I generally work in a language (C#) where generalized memoization isn't so trivial to build. Take the post below with that grain of salt in mind.
I think Luke likely has the most appropriate solution to this problem, but memoization is not generally the solution to any issue of stack overflow.
Stack overflow usually is caused by recursion going deeper than the platform can handle. Languages sometimes support "tail recursion", which re-uses the context of the current call, rather than creating a new context for the recursive call. But a lot of mainstream languages/platforms don't support this. C# has no inherent support for tail-recursion, for example. The 64-bit version of the .NET JITter can apply it as an optimization at the IL level, which is all but useless if you need to support 32-bit platforms.
If your language doesn't support tail recursion, your best option for avoiding stack overflows is either to convert to an explicit loop (much less elegant, but sometimes necessary), or find a non-iterative algorithm such as Luke provided for this problem.
function memoize (f)
local cache = {}
return function (x)
if cache[x] then
return cache[x]
else
local y = f(x)
cache[x] = y
return y
end
end
end
triangle = memoize(triangle);
Note that to avoid a stack overflow, triangle would still need to be seeded.
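To spell out what seeding means: call the memoized function bottom-up, so that no single call ever has to recurse very deep before it hits the cache. A quick sketch of the idea, in Python purely for illustration, with a hand-rolled cache:

cache = {0: 0}

def triangle(x):
    if x not in cache:
        cache[x] = x + triangle(x - 1)   # only ever recurses one level once seeded
    return cache[x]

for i in range(1, 100001):   # seed the cache in increasing order
    triangle(i)

print(triangle(100000))      # answered straight from the cache, no deep recursion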
Here's something that works without converting the arguments to strings.
The only caveat is that it can't handle a nil argument. But the accepted solution can't distinguish the value nil from the string "nil", so that's probably OK.
local function m(f)
local t = { }
local function mf(x, ...) -- memoized f
assert(x ~= nil, 'nil passed to memoized function')
if select('#', ...) > 0 then
t[x] = t[x] or m(function(...) return f(x, ...) end)
return t[x](...)
else
t[x] = t[x] or f(x)
assert(t[x] ~= nil, 'memoized function returns nil')
return t[x]
end
end
return mf
end
I've been inspired by this question to implement (yet another) flexible memoize function in Lua.
https://github.com/kikito/memoize.lua
Main advantages:
Accepts a variable number of arguments
Doesn't use tostring; instead, it organizes the cache in a tree structure, using the parameters to traverse it.
Works just fine with functions that return multiple values.
Pasting the code here as reference:
local globalCache = {}
local function getFromCache(cache, args)
local node = cache
for i=1, #args do
if not node.children then return {} end
node = node.children[args[i]]
if not node then return {} end
end
return node.results
end
local function insertInCache(cache, args, results)
local arg
local node = cache
for i=1, #args do
arg = args[i]
node.children = node.children or {}
node.children[arg] = node.children[arg] or {}
node = node.children[arg]
end
node.results = results
end
-- public function
local function memoize(f)
globalCache[f] = { results = {} }
return function (...)
local results = getFromCache( globalCache[f], {...} )
if #results == 0 then
results = { f(...) }
insertInCache(globalCache[f], {...}, results)
end
return unpack(results)
end
end
return memoize
Here is a generic C# 3.0 implementation, in case it helps:
public static class Memoization
{
public static Func<T, TResult> Memoize<T, TResult>(this Func<T, TResult> function)
{
var cache = new Dictionary<T, TResult>();
var nullCache = default(TResult);
var isNullCacheSet = false;
return parameter =>
{
TResult value;
if (parameter == null && isNullCacheSet)
{
return nullCache;
}
if (parameter == null)
{
nullCache = function(parameter);
isNullCacheSet = true;
return nullCache;
}
if (cache.TryGetValue(parameter, out value))
{
return value;
}
value = function(parameter);
cache.Add(parameter, value);
return value;
};
}
}
(Quoted from a French blog article)
In the vein of posting memoization in different languages, I'd like to respond to @onebyone.livejournal.com with a non-language-changing C++ example.
First, a memoizer for single arg functions:
#include <map>
#include <utility>

template <class Result, class Arg, class ResultStore = std::map<Arg, Result> >
class memoizer1{
public:
    template <class F>
    const Result& operator()(F f, const Arg& a){
        typename ResultStore::const_iterator it = memo_.find(a);
        if(it == memo_.end()) {
            it = memo_.insert(std::make_pair(a, f(a))).first;
        }
        return it->second;
    }
private:
    ResultStore memo_;
};
Just create an instance of the memoizer and feed it your function and argument. Make sure not to share the same memo between two different functions (but you can share it between different implementations of the same function).
Next, a driver function and an implementation. Only the driver function needs to be public:
int fib(int); // driver
int fib_(int); // implementation
Implemented:
int total_ops = 0;    // call counter, used to verify how many times fib_ actually runs

int fib_(int n){
    ++total_ops;
    if(n == 0 || n == 1)
        return 1;
    else
        return fib(n-1) + fib(n-2);
}
And the driver, to memoize
int fib(int n) {
    static memoizer1<int,int> memo;
    return memo(fib_, n);
}
Permalink showing output on codepad.org. Number of calls is measured to verify correctness. (insert unit test here...)
This only memoizes functions of one input. Generalizing to multiple or variadic arguments is left as an exercise for the reader.
In Perl, generic memoization is easy to get. The Memoize module is part of the Perl core and is highly reliable, flexible, and easy to use.
The example from its manpage:
# This is the documentation for Memoize 1.01
use Memoize;
memoize('slow_function');
slow_function(arguments); # Is faster than it was before
You can add, remove, and customize memoization of functions at run time! You can provide callbacks for custom memento computation.
Memoize.pm even has facilities for making the memento cache persistent, so it does not need to be re-filled on each invocation of your program!
Here's the documentation: http://perldoc.perl.org/5.8.8/Memoize.html
Extending the idea, it's also possible to memoize functions with two input parameters:
function memoize2 (f)
local cache = {}
return function (x, y)
if cache[x..','..y] then
return cache[x..','..y]
else
local z = f(x,y)
cache[x..','..y] = z
return z
end
end
end
Notice that parameter order matters to the caching algorithm, so if parameter order doesn't matter to the function being memoized, the odds of a cache hit can be increased by sorting the parameters before checking the cache.
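A quick sketch of that idea, in Python purely for illustration, for a function that is symmetric in its two (orderable) arguments:

def memoize_symmetric(f):
    cache = {}
    def wrapper(a, b):
        key = (a, b) if a <= b else (b, a)   # canonical order: f(3, 5) and f(5, 3) share a cache entry
        if key not in cache:
            cache[key] = f(a, b)
        return cache[key]
    return wrapper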
But it's important to note that some functions can't be profitably memoized. I wrote memoize2 to see if the recursive Euclidean algorithm for finding the greatest common divisor could be sped up.
function gcd (a, b)
if b == 0 then return a end
return gcd(b, a%b)
end
As it turns out, gcd doesn't respond well to memoization. The calculation it does is far less expensive than the caching algorithm. Even for large numbers, it terminates fairly quickly. After a while, the cache grows very large. This algorithm is probably as fast as it can be.
Recursion isn't necessary. The nth triangle number is n(n+1)/2, so...
public int triangle(final int n){
    return n * (n + 1) / 2;
}
Please don't recurse this. Either use the x*(x+1)/2 formula or simply iterate the values and memoize as you go.
int[] memo = new int[n+1];
int sum = 0;
for(int i = 0; i <= n; ++i)
{
sum+=i;
memo[i] = sum;
}
return memo[n];