In CoffeeScript, how should I format a function call that takes an anonymous function and other arguments? - formatting

Here's a contrived example I've come up with:
fn = (f, a, b, c)-> alert("#{f() + a + b + c}")
fn((-> "hi"), 1, 2, 3)
I'm wondering what's the suggested way to format that last line? This example is easy to understand, but imagine if the anonymous function (the (-> "hi")) was multi-line and took multiple arguments. This code would become very ugly and start to look lisp-like.
fn2 = (f, a, b, c)-> alert("#{f(1, 2) + a + b + c}")
fn2(((a, b) ->
  c = a + b
  c), 1, 2, 3)
This can get pretty nasty. Is there some way I should format this code to make it more readable or is the best advice to name the anonymous function?
I notice a few similar questions asking how to do this. The difference here is I'm asking how to format it.

I've seen this style used a couple of times:
fn2 (a, b) ->
  a + b
, 1, 2, 3
For example, in setTimeout calls:
setTimeout ->
  alert '1 second has passed'
, 1000
But I think in general it's better to separate the parameter function into a variable:
add = (a, b) ->
  a + b
fn2 add, 1, 2, 3
Or, if it's possible to change the function definition, make the function parameter the last one:
fn2 1, 2, 3, (a, b) ->
  a + b

In the CoffeeScript documentation there's an example with the function parameter last:
task 'build:parser', 'rebuild the Jison parser', (options) ->
  require 'jison'
  code = require('./lib/grammar').parser.generate()
  dir = options.output or 'lib'
  fs.writeFile "#{dir}/parser.js", code
The CoffeeScript test files have lots of examples with the function last:
test "multiple semicolon-separated statements in parentheticals", ->
nonce = {}
eq nonce, (1; 2; nonce)
eq nonce, (-> return (1; 2; nonce))()
It's when the function isn't last that you need messier indentation and commas, or extra parentheses to define the function's boundaries.
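If the function genuinely can't be made the last argument, the comma-dedent style from above also extends to a multi-line body. This is my own sketch (reusing fn2 from the question), and I believe it compiles the same way the setTimeout example does:
# multi-line anonymous function first,
# then a dedented comma continues the argument list
fn2 (a, b) ->
  c = a + b
  c
, 1, 2, 3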

Getting the name of a function in a macro

I wrote a macro to make code cleaner and clearer by using error message templates stored in a Dict. In this example it inserts the name of the function that throws the error (see the Julia Discourse discussion of @__FUNCTION__).
exception = Dict(:foo => ("bar ~", ArgumentError))

macro ⛔(id)
    msg, type = exception[id]
    quote
        msg = replace($msg, "~" => StackTraces.stacktrace()[1].func)
        throw($type(msg))
    end
end
This works fine in f, which uses positional arguments:
f(x, p) = p < 5 ? x : @⛔ foo
f(1, 10)
#> ERROR: ArgumentError: bar f
but displays a mangled #...# name when a keyword argument is used:
g(x; p = 3) = p < 5 ? x : @⛔ foo
g(1, p = 10)
#> ERROR: ArgumentError: bar #g#N
Here N is the nth expression evaluated in the session. Redefining g with the same syntax increments this number.
I am stuck: I could not spot a difference between the code produced by @⛔ in the two cases.
@macroexpand function f(x, p) p < 5 ? x : @⛔ foo end
@macroexpand function g(x; p = 3) p < 5 ? x : @⛔ foo end
Question
What is happening in g that is different from f?
That is the name of the method the expanded code is called in. Definitions with keyword arguments are lowered to dispatch helper methods called "keyword sorters" (https://docs.julialang.org/en/v1/devdocs/functions/#Keyword-arguments). These make keyword arguments use dispatch (internally) and compilation just as other functions do, instead of a plain dictionary lookup as in, e.g., Python.
I don't think you can do what you want easily in that case, as the conversion process always happens. Taking a previous stack frame would work, but then you have to know whether you are within a keyword method beforehand.
Maybe the following approach works: pattern match the method name (#<f>#<N>), and then check whether it is an actual kwsorter method of any method of f. If so, proceed with f.
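A minimal sketch of that last idea (mine, not from the answer): strip the kwsorter mangling from the frame's function name before interpolating it into the message. The regex is an assumption based on the #f#N naming described above, and the check against f's actual methods is omitted.
# Sketch: turn a kwsorter name like Symbol("#g#7") back into "g";
# ordinary method names pass through unchanged.
function demangle(fname::Symbol)
    s = String(fname)
    m = match(r"^#(.+)#\d+$", s)
    return m === nothing ? s : m.captures[1]
end

demangle(Symbol("#g#7"))  # "g"
demangle(:f)              # "f"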

Raku pop() order of execution

Isn't order of execution generally from left to right in Raku?
my @a = my @b = [9 , 3];
say (@a[1] - @a[0]) == (@b[1] R- @b[0]); # False {as expected}
say (@a.pop() - @a.pop()) == (@b.pop() R- @b.pop()); # True {Huh?!?}
This is what I get in Rakudo(tm) v2020.12 and 2021.07.
The first two lines make sense, but the third I cannot fathom.
It is.
But you should realize that the minus infix operator is just a subroutine under the hood, taking 2 parameters that are evaluated left to right. So when you're saying:
$a - $b
you are in fact calling the infix:<-> sub:
infix:<->($a,$b);
The R meta-operator basically creates a wrap around the infix:<-> sub that reverses the arguments:
my &infix:<R->($a,$b) = &infix:<->.wrap: -> $a, $b { nextwith $b, $a }
So, if you do a:
$a R- $b
you are in fact doing a:
infix:<R->($a,$b)
which is then basically a:
infix:<->($b,$a)
Note that in the call to infix:<R-> in your example, $a becomes 3 and $b becomes 9, because the arguments are processed left to right. This then calls infix:<->(3,9), producing the -6 value that you would also get without the R.
It may be a little counter-intuitive, but I consider this behaviour as correct. Although the documentation could probably use some additional explanation on this behaviour.
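A quick sanity check of the plain reversal, with no pops involved (this snippet is mine, not part of the answer):
say 3 - 9;    # -6
say 3 R- 9;   # 6, i.e. 9 - 3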
Let me emulate what I assumed was happening in line 3 of my code, prefaced with: @a is the same as @b, which is 9, 3 (big number then little number).
(@a.pop() - @a.pop()) == (@b.pop() R- @b.pop())
(3 - 9) == (3 R- 9)
( -6 ) == ( 6 )
False
...That was my expectation. But what Raku seems to be doing is
(@a.pop() - @a.pop()) == (@b.pop() R- @b.pop())
# R meta-op swaps 1st `@b.pop()` with 2nd `@b.pop()`
(@a.pop() - @a.pop()) == (@b.pop() - @b.pop())
(3 - 9) == (3 - 9)
( -6 ) == ( -6 )
True
The R in R- swaps the operands first, then calls them for their values. Since both operands are the same function call, the R in R- has no practical effect.
Side note: in functional programming, a 'pure' function returns the same value every time you call it with the same parameters. But pop is not 'pure': every call can produce a different result. It needs to be used with care.
The R meta op not only reverses the operator, it will also reverse the order in which the operands will be evaluated.
sub term:<a> { say 'a'; '3' }
sub term:<b> { say 'b'; '9' }
say a ~ b;
a
b
39
Note that a happened first.
If we use R, then b happens first instead.
say a R~ b;
b
a
93
The problem is that in your code all of the pop calls are getting their data from the same source.
my #data = < a b a b >;
sub term:<l> { my $v = #data.shift; say "l=$v"; return $v }
sub term:<r> { my $v = #data.shift; say "r=$v"; return $v }
say l ~ r;
l=a
r=b
ab
say l R~ r;
r=a
l=b
ab
A way to get around that is to use the reduce meta operator with a list
[-](@a.pop, @a.pop) == [R-](@b.pop, @b.pop)
Or in some other way make sure the pop operations happen in the order you expect.
You could also just use the values directly from the array without using pop.
[-]( @a[0,1] ) == [R-]( @b[0,1] )
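As a concrete version of "make sure the pop operations happen in the order you expect", here is my own sketch (not from the answer): pop into variables first, then compare.
my @a = my @b = [9, 3];
my ($a1, $a0) = @a.pop, @a.pop;   # 3, then 9
my ($b1, $b0) = @b.pop, @b.pop;   # 3, then 9
say ($a1 - $a0) == ($b1 R- $b0);  # False, as originally expected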
Let me emulate what happens by writing the logic one way for @a, then manually reversing the operands for @b instead of using R:
my @a = my @b = [9 , 3];
sub apop { @a.pop }
sub bpop { @b.pop }
say apop - apop; # -6
say bpop - bpop; # -6 (operands *manually* reversed)
This not only appeals to my sense of intuition about what's going on; I'm thus far confused why you were confused, why Liz has said "It may be a little counter-intuitive", and why you've said it is plain unintuitive!

Destructuring a list with equations in maxima

Say that I have the following list of equations:
list: [x=1, y=2, z=3];
I use this pattern often to have multiple return values from a function, kind of like how you would use an object in, for example, JavaScript. However, in JavaScript I can do things like this. Say that myFunction() returns the object {x:1, y:2, z:3}; then I can destructure it with this syntax:
let {x,y,z} = myFunction();
And now x,y,z are assigned the values 1,2,3 in the current scope.
Is there anything like this in Maxima? Right now I use this:
x: subst(list, x);
y: subst(list, y);
z: subst(list, z);
How about this. Let l be a list of equations of the form somesymbol = somevalue. I think all you need is:
map (lhs, l) :: map (rhs, l);
Here map(lhs, l) yields the list of symbols, and map(rhs, l) yields the list of values. The operator :: means evaluate the left-hand side and assign the right-hand side to it. When the left-hand side is a list, then Maxima assigns each value on the right-hand side to the corresponding element on the left.
E.g.:
(%i1) l : [a = 12, b = 34, d = 56] $
(%i2) map (lhs, l) :: map (rhs, l);
(%o2) [12, 34, 56]
(%i3) values;
(%o3) [l, a, b, d]
(%i4) a;
(%o4) 12
(%i5) b;
(%o5) 34
(%i6) d;
(%o6) 56
You can probably achieve it by writing a function that could be called as f(['x, 'y, 'z], list); but you will have to be able to make some assignments between symbols and values. This could be done by writing a tiny ad hoc Lisp function such as:
(defun $assign (symb val) (set symb val))
You can see how it works (as a first test) by first typing (form within Maxima):
:lisp (defun $assign (symb val) (set symb val))
Then, use it as: assign('x, 42) which should assign the value 42 to the Maxima variable x.
If you want to go with that idea, you should write a tiny Lisp file in your ~/.maxima directory (this is a directory where you can put your most used functions); call it for instance myfuncs.lisp and put the function above (without the :lisp prefix); then edit (in the very same directory) your maxima-init.mac file, which is read at startup and add the two following things:
add a line containing load("myfuncs.lisp"); before the following part;
define your own Maxima function (in plain Maxima syntax with no need to care about Lisp). Your function should contain some kind of loop for performing all assignments; now you could use the assign(symbol, value) function for each variable.
Your function could be something like:
f(vars, l) := for i:1 thru length(l) do assign(vars[i], l[i]) $
which merely assign each value from the second argument to the corresponding symbol in the first argument.
Thus, f(['x, 'y], [1, 2]) will perform the expected assignments; of course you can start from that for doing more precisely what you need.
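Combining both answers into a single helper (a sketch of mine; destructure is a made-up name): the ::-based assignment from the first answer can be wrapped in an ordinary Maxima function, so no Lisp is needed.
destructure(eqs) := block(map(lhs, eqs) :: map(rhs, eqs))$

/* usage */
destructure([x = 1, y = 2, z = 3])$
[x, y, z];   /* => [1, 2, 3] */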

Matlab: how do I run the optimization (fmincon) repeatedly?

I am trying to follow the tutorial of using the optimization tool box in MATLAB. Specifically, I have a function
f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1)+b
subject to the constraints:
(x(1))^2+x(2)-1=0,
-x(1)*x(2)-10<=0.
and I want to minimize this function for a range of b=[0,20]. (That is, I want to minimize this function for b=0, b=1,b=2 ... and so on).
Below are the steps taken from the MATLAB tutorial webpage (http://www.mathworks.com/help/optim/ug/nonlinear-equality-and-inequality-constraints.html). How should I change the code so that the optimization will run 20 times and save the optimal values for each b?
Step 1: Write a file objfun.m.
function f = objfun(x)
    f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1)+b;
Step 2: Write a file confuneq.m for the nonlinear constraints.
function [c, ceq] = confuneq(x)
    % Nonlinear inequality constraints
    c = -x(1)*x(2) - 10;
    % Nonlinear equality constraints
    ceq = x(1)^2 + x(2) - 1;
Step 3: Invoke constrained optimization routine.
x0 = [-1,1]; % Make a starting guess at the solution
options = optimoptions(@fmincon,'Algorithm','sqp');
[x,fval] = fmincon(@objfun,x0,[],[],[],[],[],[],...
    @confuneq,options);
After 21 function evaluations, the solution produced is
x, fval
x =
-0.7529 0.4332
fval =
1.5093
Update:
I tried your answer, but I am encountering a problem with your step 2. Basically, I just filled my step 2 into your step 2 (below the comment "optimization just like before").
%initialize list of targets
b = 0:1:20;
%preallocate/initialize result vectors using zeros (increases speed)
opt_x = zeros(length(b))
opt_fval = zeros(length(b))
>> for idx = 1, length(b)
    objfun = @(x)objfun_builder(x,b)
    %optimization just like before
    x0 = [-1,1]; % Make a starting guess at the solution
    options = optimoptions(@fmincon,'Algorithm','sqp');
    [x,fval] = fmincon(@objfun,x0,[],[],[],[],[],[],...
        @confuneq,options);
    %end the stuff I fill in
    opt_x(idx) = x
    opt_fval(idx) = fval
end
However, it gave me this output:
Error: "objfun" was previously used as a variable, conflicting
with its use here as the name of a function or command.
See "How MATLAB Recognizes Command Syntax" in the MATLAB
documentation for details.
There are two things you need to change about your code:
Creation of the objective function.
Multiple optimizations using a loop.
1st Step
For more flexibility with regard to b, you need to set up another function that returns a handle to the desired objective function, e.g.
function h = objfun_builder(x, b)
    h = @(x)(objfun(x));
    function f = objfun(x)
        f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1) + b;
    end
end
A more elegant and shorter approach is an anonymous function, e.g.
objfun_builder = @(x,b)(exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1) + b);
After all, this works out to be the same as above. It might be less intuitive for a Matlab-beginner, though.
2nd Step
Instead of placing an .m-file objfun.m in your path, you will need to call
objfun = @(x)(objfun_builder(x,myB));
to create an objective function in your workspace. In order to loop over the interval b=[0,20], use the following loop
%initialize list of targets
b = 0:1:20;
%preallocate/initialize result vectors using zeros (increases speed)
opt_x = zeros(length(b))
opt_fval = zeros(length(b))
%start optimization of list of targets (`b`s)
for idx = 1:length(b)
    objfun = @(x)objfun_builder(x, b(idx))
    %optimization just like before
    opt_x(idx) = x
    opt_fval(idx) = fval
end
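For reference, here is a sketch of the complete loop assembled from the steps above (mine, not part of the original answer; it assumes confuneq.m from the question is on the path and uses the name obj_h to avoid the objfun clash reported in the update):
% anonymous builder from the 1st step
objfun_builder = @(x,b)(exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1) + b);

b = 0:1:20;
opt_x    = zeros(length(b), 2);   % one row [x1 x2] per value of b
opt_fval = zeros(length(b), 1);

x0 = [-1, 1];                     % starting guess
options = optimoptions(@fmincon, 'Algorithm', 'sqp');

for idx = 1:length(b)
    obj_h = @(x) objfun_builder(x, b(idx));   % fix b for this iteration
    % pass the handle variable itself (not @obj_h) to fmincon
    [x, fval] = fmincon(obj_h, x0, [], [], [], [], [], [], @confuneq, options);
    opt_x(idx, :) = x;
    opt_fval(idx) = fval;
end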

How can I use a functor name like a variable in Prolog?

We have this assignment for our Prolog course. After two months of one hour per week of Prolog, it is still an enigma to me; my thinking seems unable to adapt from procedural languages yet.
There is a knowledge base containing predicates/functors with the same name and arities 1, 2 and 3.
The call form should be
search(functor_name, argument, S).
The answers should find all occurrences with this functor name and argument, regardless of arity.
The answers should be of the form:
S = functor_name(argument);
S = functor_name(argument,_);
S = functor_name(_,argument);
S = functor_name(argument,_,_);
S = functor_name(_,argument,_);
S = functor_name(_,_,argument);
false.
I have found out that I could use call to test if the entry in the knowledge base exists.
But call does not seem to work with a variable for the functor name. I am totally baffled; I have no idea how to use a variable for a functor name.
UPDATE:
My question has been partly answered.
My new code gives me true and false for arities 1, 2 and 3 (see below).
search(Person, Predicate) :-
      ID = Person, Key = Predicate, current_functor(Key, 1),
      call(Key, ID)
    ; ID = Person, Key = Predicate, current_functor(Key, 2),
      ( call(Key, ID, _) ; call(Key, _, ID) )
    ; ID = Person, Key = Predicate, current_functor(Key, 3),
      ( call(Key, ID, _, _) ; call(Key, _, ID, _) ; call(Key, _, _, ID) ).
UPDATE2:
Another partial answer has come in. That one gives me S as a list of terms, but the "other" arguments are placeholders:
search2(Predicate, Arg, S) :-
    ( Arity = 2 ; Arity = 3 ; Arity = 4 ),
    functor(S, Predicate, Arity),
    S =.. [_, Predicate | Args],
    member(Arg, Args).
The result is quite nice. Still missing: the Predicate should not be inside the brackets and the other arguments should be taken literally from the knowledge base, not written as placeholders. The current result looks like this:
?- search2(parent,lars,S).
S = parent(parent, lars) ;
S = parent(parent, lars, _G1575) ;
S = parent(parent, _G1574, lars) ;
S = parent(parent, lars, _G1575, _G1576) ;
S = parent(parent, _G1574, lars, _G1576) ;
S = parent(parent, _G1574, _G1575, lars).
I am giving up on this question, because it was posed in the wrong way from the beginning. I should have asked more specifically, which I could not, because I am still no good at Prolog.
@false helped me most. I am accepting his answer.
There are two approaches here, one "traditional" (1970s) that implements literally what you want:
search(F, Arg, S) :-
    ( N = 1 ; N = 2 ; N = 3 ),  % more compactly: between(1,3, N)
    functor(S, F, N),
    S =.. [_|Args],             % more compactly: between(1,N, I), arg(I,S,Arg)
    member(Arg, Args).
The other reconsiders the explicit construction of the goal. Actually, if you have a functor F, and arguments A1, A2, A3 you can immediately write the goal call(F, A1, A2, A3) without any use of functor/3 or (=..)/2.
There are many advantages of using call(F, A1, A2, A3) in place of Goal =.. [F, A1, A2, A3], call(Goal): in many situations it is cleaner, faster, and much easier to typecheck. Further, when using a module system, the handling of potential module qualifications for F will work seamlessly, whereas (=..)/2 has to handle all the ugly details explicitly; that is more code and more errors.
search(F,A,call(F,A)).
search(F,A,call(F,A,_)).
search(F,A,call(F,_,A)).
search(F,A,call(F,A,_,_)).
search(F,A,call(F,_,A,_)).
search(F,A,call(F,_,_,A)).
If you want to shorten this, then rather construct call/N dynamically:
search(F, Arg, S) :-
    ( N = 2 ; N = 3 ; N = 4 ),
    functor(S, call, N),
    S =.. [_, F | Args],
    member(Arg, Args).
Note that call needs an extra argument for the functor F!
You can use the "univ" operator, =.., to construct a goal dynamically:
?- F=member, X=1, L=[1,2,3], Goal =.. [F, X, L], call(Goal).
F = member,
X = 1,
L = [1, 2, 3],
Goal = member(1, [1, 2, 3]) .
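As a final note of my own (not part of the quoted answers): call/N also accepts the functor name in a variable directly, so the same goal runs without (=..)/2:
?- F = member, X = 1, L = [1,2,3], call(F, X, L).
F = member,
X = 1,
L = [1, 2, 3] .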