What's the difference between `a: [b 1]` and `a: [b: 1]` in Red?

What is the difference between
a: [b 1]
; and
a: [b: 1]
both give the same results for
> a/b
1
they differ for a/1 though.
When do you use what? And the 2nd is a set, what is the 1st?

the 2nd is a set, what is the 1st?
You can get answers by looking at the type:
>> type? first [b 1]
== word!
>> type? first [b: 1]
== set-word!
What is the difference
When you use the expression a/b you are writing something that acts like a SELECT statement, looking up "any word type" matching b in the block indicated by a, then returning the item after it in the block.
Red follows Rebol's heritage here, defaulting path selections to the "non-strict" form of SELECT, which uses a "non-strict" form of equality:
>> (first [a:]) = (first [a]) ;-- default comparison
true
>> select [b 1] (quote b)
== 1
>> select [b: 1] (quote b)
== 1
To get strict behavior that tells the two apart, you need to use the /CASE refinement (in the sense of "case-sensitive"):
>> (first [a:]) == (first [a]) ;-- strict comparison
true
>> select/case [b: 1] (quote b)
== none
>> select/case [b: 1] (quote b:)
== 1
Red seems to be at least a little more consistent about this than R3-Alpha, for instance honoring the equality of 1% and 0.01:
>> 1% = 0.01
== true ;-- both R3-Alpha and Red
>> select [0.01 "test"] 1%
== "test" ;-- in Red
>> select [0.01 "test"] 1%
== none ;-- in R3-Alpha
But it shows that there's a somewhat dodgy history behind equality semantics.
When do you use what?
Good question. :-/ Notation-wise in your source, you should use that which you feel most naturally fits what you want to express. If you think a SET-WORD! is appropriate then use it, otherwise use a WORD!. Implementation-wise, there are some nuances that are beyond the scope of a simple answer (locals gathering in FUNCTION, for instance). If you know something will ultimately need to be transformed into an assignment, it may be helpful to use SET-WORDs.
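As a minimal sketch of that locals-gathering nuance (Red console; the name f is purely illustrative): FUNCTION collects the SET-WORD!s it finds in the body and makes them local, which a plain WORD! would not trigger:
>> f: function [] [x: 10 x]
>> f
== 10
>> x
;-- error: x has no value, because the set-word made x local to f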
Path evaluation is sketchy, in my opinion. It arose as a syntactic convenience, but then produced a cross product of behaviors for every type being selected from every other type. And that's to say nothing of the variance in how functions work (what would x: :append/dup/only/10/a mean?)
Small example: PATH! behavior in Rebol uses a heuristic where, if you are evaluating a path, it will act as a PICK if the path component is an integer:
>> numbers: [3 2 1]
>> pick numbers 3
== 1 ;-- because the 3rd element is a 1
>> select numbers 3
== 2 ;-- because 2 comes after finding a 3
>> numbers/3
== 1 ;-- acts like PICK because (...)/3 uses an INTEGER!
...but as above, it will act like a SELECT (non-strict) if the thing being chosen is a WORD!:
>> words: [a b c]
>> select words 'a
== b ;-- because b is the thing after a in the block
>> pick words 'a
;-- In Rebol this is an error, Red gives NONE at the moment
>> words/a
== b ;-- acts like SELECT because (...)/a uses a WORD!
So the difference between SELECT and PICK accounts for that difference you're seeing.
It gets weirder for other types. Paths are definitely quirky, and could use a grand unifying theory of some sort.

And the 2nd is a set, what is the 1st?
It seems you are looking at both [b 1] and [b: 1] as code, but they are actually just data. More precisely, they are lists of two elements: a word! or set-word! value followed by an integer! value.
a/b is syntactic sugar for select a 'b, which retrieves the value following the b word (using a find call internally). For convenience, the search for 'b also matches other word types:
red>> find [:b] 'b
== [:b]
red>> find [/b] 'b
== [/b]
red>> find ['b] 'b
== ['b]
red>> find [b] 'b
== [b]
As a side note, remember that a lit-word will evaluate to a word, which is sometimes referred to as the "word-decaying" rule:
red>> 'b
== b
The /case refinement for find and select applies stricter matching, ensuring that the types are also the same. You obviously cannot use it with path notation, though; you would need to replace the path with a select/case call instead.
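For example, sticking with the second definition from the question (and the quote trick shown earlier), a sketch of that replacement:
red>> a: [b: 1]
== [b: 1]
red>> select/case a 'b
== none
red>> select/case a quote b:
== 1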
So, both give the same result for a/b, because both return the value following the b word (regardless of its "word sub-type"):
red>> [b 1] = [b: 1] ;-- loose comparison, used by `find` and `select`.
== true
red>> [b 1] == [b: 1] ;-- strict comparison, used by `find/case` and `select/case`.
== false
they differ for a/1 though.
Integer values have specific semantics in paths. They act as sugar for pick, so a/1 is equivalent to pick a 1. You can also get that behavior with words referring to integers by making them get-word! values in the path:
red>> c: 1
== 1
red>> a: [b 123]
== [b 123]
red>> a/:c
== b
red>> a: [b: 123]
== [b: 123]
red>> a/:c
== b:
red>> c: 2
== 2
red>> a/:c
== 123
Read more about paths in the Rebol Core Manual: http://www.rebol.com/docs/core23/rebolcore-16.html#section-2.10
When do you use what?
For a/b vs a/1 usage, it depends on whether you want a select or a pick operation.
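As a quick recap with the first block from the question:
red>> a: [b 1]
== [b 1]
red>> a/b    ;-- select-like: value following the word b
== 1
red>> a/1    ;-- pick-like: first element
== b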
For [b 1] vs [b: 1], it depends on the later use of the block. For example, if you are constructing a block to serve as an object or map specification, then the set-word form is a better fit:
red>> a: [b:]
== [b:]
red>> append a 123
== [b: 123]
red>> c: object a
== make object! [
    b: 123
]
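A map! spec works the same way (a small sketch; the name m is purely illustrative):
red>> m: make map! [b: 123]
red>> m/b
== 123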
Also, you should use the set-word form whenever you imply a "key/value" relationship; it makes your intent clearer to yourself and to other readers as well.

Related

Raku pop() order of execution

Isn't order of execution generally from left to right in Raku?
my @a = my @b = [9 , 3];
say (@a[1] - @a[0]) == (@b[1] R- @b[0]); # False {as expected}
say (@a.pop() - @a.pop()) == (@b.pop() R- @b.pop()); # True {Huh?!?}
This is what I get in Rakudo(tm) v2020.12 and 2021.07.
The first 2 lines make sense, but the third I can not fathom.
It is.
But you should realize that the minus infix operator is just a subroutine under the hood, taking 2 parameters that are evaluated left to right. So when you're saying:
$a - $b
you are in fact calling the infix:<-> sub:
infix:<->($a,$b);
The R meta-operator basically creates a wrapper around the infix:<-> sub that reverses the arguments, conceptually something like:
my &infix:<R-> = -> $a, $b { infix:<->($b, $a) };
So, if you do a:
$a R- $b
you are in fact doing a:
infix:<R->($a,$b)
which is then basically a:
infix:<->($b,$a)
Note that in the call to infix:<R-> in your example, $a becomes 3 and $b becomes 9, because the order of the arguments is processed left to right. This then calls infix:<->(3,9), producing the -6 value that you would also get without the R.
It may be a little counter-intuitive, but I consider this behaviour correct, although the documentation could probably use some additional explanation of it.
Let me emulate what I assumed was happening in line 3 of my code, prefaced with: @a is the same as @b, namely 9, 3 (big number then little number).
(@a.pop() - @a.pop()) == (@b.pop() R- @b.pop())
(3 - 9) == (3 R- 9)
( -6 ) == ( 6 )
False
...That was my expectation. But what Raku seems to be doing is
(@a.pop() - @a.pop()) == (@b.pop() R- @b.pop())
# R meta-op swaps 1st `@b.pop()` with 2nd `@b.pop()`
(@a.pop() - @a.pop()) == (@b.pop() - @b.pop())
(3 - 9) == (3 - 9)
( -6 ) == ( -6 )
True
The R in R- swaps the functions first, then calls them for values. Since they are the same function, the R in R- has no practical effect.
Side Note: In functional programming, a 'pure' function will return the same value every time you call it with the same parameters. But pop is not 'pure': every call can produce a different result. It needs to be used with care.
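A minimal illustration of that impurity (the array @xs is purely illustrative):
my @xs = 9, 3;
say @xs.pop;   # 3
say @xs.pop;   # 9 -- same call, different result, because pop mutates @xs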
The R meta op not only reverses the operator, it will also reverse the order in which the operands will be evaluated.
sub term:<a> { say 'a'; 'a' }
sub term:<b> { say 'b'; 'b' }
say a ~ b;
a
b
ab
Note that a happened first.
If we use R, then b happens first instead.
say a R~ b;
b
a
ba
The problem is that in your code all of the pop calls are getting their data from the same source.
my @data = < a b a b >;
sub term:<l> { my $v = @data.shift; say "l=$v"; return $v }
sub term:<r> { my $v = @data.shift; say "r=$v"; return $v }
say l ~ r;
l=a
r=b
ab
say l R~ r;
r=a
l=b
ab
A way to get around that is to use the reduce meta operator with a list:
[-](@a.pop, @a.pop) == [R-](@b.pop, @b.pop)
Or in some other way make sure the pop operations happen in the order you expect.
You could also just use the values directly from the arrays without using pop.
[-]( @a[0,1] ) == [R-]( @b[0,1] )
Let me emulate what happens by writing the logic one way for @a, then manually reversing the operands for @b instead of using R:
my @a = my @b = [9 , 3];
sub apop { @a.pop }
sub bpop { @b.pop }
say apop - apop; # -6
say bpop - bpop; # -6 (operands *manually* reversed)
This not only appeals to my sense of intuition about what's going on; I'm thus far confused about why you were confused, why Liz said "It may be a little counter-intuitive", and why you've said it is plainly unintuitive!

How to make a union of two SetHashes in Perl 6?

Given two SetHashes, one or both of them can be empty. I want to add all the elements of the second SetHash to the first one.
Since the output of the union operator is a Set, the only way I managed to do it (and presumably not the best one) is the following.
my SetHash $s1 = <a b c>.SetHash;
my SetHash $s2 = <c d e>.SetHash;
$s1 = ($s1 (|) $s2).SetHash; # SetHash(a b c d e)
UPD: This is probably simpler, but having to go through .keys makes me uncomfortable.
$s1{ $s2.keys } X= True; # SetHash(a b c d e)
I'm hoping Elizabeth Mattijsen will read over your question and our answers and either comment or provide her own answer. In the meantime here's my best shot:
my %s1 is SetHash = <a b c> ;
my %s2 is SetHash = <c d e> ;
%s1 = |%s1, |%s2 ; # SetHash(a b c d e)
Elizabeth implemented the is Set (and cousins) capability for variables declared with the % sigil (i.e. variables that declare their primary nature to be Associative) in the Rakudo 2017.11 compiler release.
Prefix |, used within an expression that's an argument list, "flattens" its single argument if said single argument is composite (the relevant doc claims it must be a Capture, Pair, List, Map, or Hash):
say [ [1,2], [3,4]]; # [[1 2] [3 4]]
say [|[1,2], [3,4]]; # [1 2 [3 4]]
say [|1, 2, [3,4]]; # [1 2 [3 4]]
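To address the "one or both of them can be empty" part of the question, the same slip-into-assignment idiom copes fine with an empty side. A quick sketch, reusing the is SetHash declaration style from above (names are purely illustrative; key order in the output may vary):
my %s1 is SetHash;                # empty
my %s2 is SetHash = <c d e>;
%s1 = |%s1, |%s2;
say %s1;                          # SetHash(c d e)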

Destructuring a list with equations in maxima

Say that I have the following list of equations:
list: [x=1, y=2, z=3];
I use this pattern often to have multiple return values from a function, kind of like how you would use an object in, for example, JavaScript. However, in JavaScript I can do things like this. Say that myFunction() returns the object {x:1, y:2, z:3}; then I can destructure it with this syntax:
let {x,y,z} = myFunction();
And now x,y,z are assigned the values 1,2,3 in the current scope.
Is there anything like this in maxima? Now I use this:
x: subst(list, x);
y: subst(list, y);
z: subst(list, z);
How about this. Let l be a list of equations of the form somesymbol = somevalue. I think all you need is:
map (lhs, l) :: map (rhs, l);
Here map(lhs, l) yields the list of symbols, and map(rhs, l) yields the list of values. The operator :: means evaluate the left-hand side and assign the right-hand side to it. When the left-hand side is a list, then Maxima assigns each value on the right-hand side to the corresponding element on the left.
E.g.:
(%i1) l : [a = 12, b = 34, d = 56] $
(%i2) map (lhs, l) :: map (rhs, l);
(%o2) [12, 34, 56]
(%i3) values;
(%o3) [l, a, b, d]
(%i4) a;
(%o4) 12
(%i5) b;
(%o5) 34
(%i6) d;
(%o6) 56
You can probably achieve this by writing a function that could be called as f(['x, 'y, 'z], list), but you will need a way to perform assignments between symbols and values. This can be done by writing a tiny ad hoc Lisp function:
(defun $assign (symb val) (set symb val))
You can see how it works (as a first test) by first typing (from within Maxima):
:lisp (defun $assign (symb val) (set symb val))
Then, use it as: assign('x, 42) which should assign the value 42 to the Maxima variable x.
If you want to go with that idea, you should write a tiny Lisp file in your ~/.maxima directory (this is a directory where you can put your most used functions); call it for instance myfuncs.lisp and put the function above in it (without the :lisp prefix); then edit (in the very same directory) your maxima-init.mac file, which is read at startup, and add the following two things:
add a line containing load("myfuncs.lisp"); (before the next part);
define your own Maxima function (in plain Maxima syntax, with no need to care about Lisp). Your function should contain some kind of loop that performs all the assignments; you can now use the assign(symbol, value) function for each variable.
Your function could be something like:
f(vars, l) := for i:1 thru length(l) do assign(vars[i], l[i]) $
which merely assigns each value from the second argument to the corresponding symbol in the first argument.
Thus, f(['x, 'y], [1, 2]) will perform the expected assignments; of course you can start from that and refine it to do precisely what you need.
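Building on the assign helper above, here is a sketch that takes a list of equations like the one in the question directly (the name destructure is purely illustrative, and it assumes the symbols are not already bound when the equation list is built):
destructure(eqs) := map(lambda([e], assign(lhs(e), rhs(e))), eqs) $
destructure([x = 1, y = 2, z = 3]);   /* now x is 1, y is 2 and z is 3 */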

Clojure - sequence of objects

I have a Clojure function:
(def obseq
  (fn []
    (let [a 0
          b 1
          c 2]
      (println (seq '(a b c))))))
It outputs :
(a b c)
I want it to output a sequence containing the values of a, b, and c, not their names.
Desired output:
(0 1 2)
How do I implement this?
Short answer:
clj.core=> (defn obseq []
             (let [a 0
                   b 1
                   c 2]
               (println [a b c])))
#'clj.core/obseq
clj.core=> (obseq)
[0 1 2]
nil
Long answer:
Quoting a form like '(a b c) recursively prevents any evaluation inside the quoted form. So, the values for your 3 symbols a, b, and c aren't substituted. It is much easier to use a vector (square brackets), which never needs to be quoted (unlike a list). Since the vector is unquoted, the 3 symbols are evaluated and replaced by their values.
If you wanted it to stay a list for some reason, the easiest way is to type:
clj.core=> (defn obseq [] (let [a 0 b 1 c 2] (println (list a b c))))
#'clj.core/obseq
clj.core=> (obseq)
(0 1 2)
nil
This version also has no quoting, so the 3 symbols are replaced with their values. Then the function (list ...) puts them into a list data structure.
Note that I also converted your (def obseq (fn [] ...)) into the preferred form (defn obseq [] ...), which has the identical result but is shorter and (usually) clearer.

In a series! what is the best way of removing the last element

What is the most succinct way of removing the last element in a Rebol series?
Options I have found so far are
s: "abc"
head remove back tail s
and
s: "abc"
take/last s
Define "best". Highest performance? Most clarity? What is it you want the expression to return at the end, or do you not care? The two you provide return different results...one the head of the series after the removal and the other the item removed.
If you want the head of the series after removal, you need take/last s followed by s to get that result. Comparison:
>> delta-time [
       loop 10000 [
           s: copy "abc"
           take/last s
           s
       ]
   ]
== 0:00:00.012412
>> delta-time [
       loop 10000 [
           s: copy "abc"
           head remove back tail s
       ]
   ]
== 0:00:00.019222
If you want the expression to evaluate to the item removed, you'd need to compare take/last s against something convoluted like also (last s) (remove back tail s)... because also will run the first expression and then the second...returning the result of the first:
>> delta-time [
       loop 10000 [
           s: copy "abc"
           take/last s
       ]
   ]
== 0:00:00.010838
>> delta-time [
       loop 10000 [
           s: copy "abc"
           also last s remove back tail s
       ]
   ]
== 0:00:00.024859
etc.
If you don't care about the result, I'm going with take/last. If you do care about the result and want the head of the series, I'm going with take/last s followed by s. To me that reads better than head remove back tail s, although it's an aesthetic choice. It's still faster, at least on my netbook.
If you want the tail of the series at the end of the expression, remove back tail s is surprisingly similar in performance to take/last s followed by tail s. I'd say the latter is more explicit, and probably preferable, in case the reader forgets the return convention of REMOVE.
And also last s remove back tail s looks terrible, but it is a reminder about also, which is pretty useful and easy to forget is there. FWIW, it performs about the same as using an intermediate variable.
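That is, something along these lines (a sketch; the name item is purely illustrative):
>> s: copy "abc"
== "abc"
>> item: last s
== #"c"
>> remove back tail s
== ""
>> item
== #"c"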
Here I wrote a REMOVE-LAST function,
remove-last: func [
    "Removes value(s) from tail of a series."
    series [series! port! bitset! none!]
    /part range [number!] "Removes to a given length."
] [
    either part [
        clear skip tail series negate range
    ] [
        remove back tail series
    ]
]
Example use:
b: [a b c d e f g]
remove-last b ;== [], 'g removed, tail of the series returned.
head remove-last/part b 2 ;== [a b c d], 'e and 'f removed
It returns the tail of the series so that it can be used in situations like the following:
b: [a b c d e f g]
head insert remove-last b 'x ;== [a b c d e f x]