As the title says, I don't understand why f^:proposition^:_ y is a while loop. I have actually used it a couple of times, but I don't understand how it works. I get that ^: repeats functions, but I'm confused by its double use in that expression.
I also can't understand why f^:proposition^:a: y works. This is the same as the previous one, but it returns the values from all the iterations instead of only the last one.
a: is an empty box, and I get that it has a special meaning when used with ^:, but even after looking into the dictionary I couldn't understand it.
Thanks.
Excerpted and adapted from a longer writeup I posted to the J forums in 2009:
while =: ^:break_clause^:_
Here's an adverb you can apply to any code (which would be the equivalent of the
loop body) to create a while loop. In case you haven't seen it before, ^: is the power conjunction. More specifically, the phrase f^:n y applies the function f to the argument y exactly n times. The count n may be an integer or a function which, applied to y, produces an integer¹.
In the adverb above, we see the power conjunction twice, once in ^:break_clause and again in ^:_ . Let's first discuss the latter. That _ is J's notation for infinity. So, read literally, ^:_ is "apply the function an infinite number of times" or "keep reapplying forever". This is related to a while-loop's function, but it's not very useful if applied literally.
So, instead, ^:_ and its kin were defined to mean "apply a function to its limit", that is, "keep applying the function until its output matches its input". In that case, applying the function again would have no effect, because the next iteration would have the same input as the previous (remember that J is a functional language). So there's
no point in applying the function even once more: it has reached its limit.
For example:
cos=: 2&o. NB. Cosine function
pi =: 1p1 NB. J's notation for 1*pi^1 analogous to scientific notation 1e1
cos pi
_1
cos cos cos pi
0.857553
cos^:3 pi
0.857553
cos^:10 pi
0.731404
cos^:_ pi NB. Fixed point of cosine
0.739085
Here, we keep applying cosine until the answer stops changing: cosine has reached its fixed point, and more applications are superfluous. We can visualize this by showing the
intermediate steps:
cos^:a: pi
3.1415926535897 _1 0.54030230586813 ...73 more... 0.73908513321512 0.73908513321
So ^:_ applies a function to its limit. OK, what about ^:break_condition? Again, it's the same concept: apply the function on the left the number of times specified by the function on the right. In the case of _ (or its function-equivalent, _: ), the output is "infinity"; in the case of break_condition, the output will be 0 or 1 depending on the input (a break condition is boolean).
So if the input is "right" (i.e. processing is done), then the break_condition will be 0, whence loop_body^:break_condition^:_ will become loop_body^:0^:_ . Obviously, loop_body^:0 applies the loop_body zero times, which has no effect.
To "have no effect" is to leave the input untouched; put another way, it copies the input to the output ... but if the input matches the output, then the function has reached its limit! Obviously ^:_: detects this fact and terminates. Voila, a while loop!
¹ Yes, including zero and negative integers, and "an integer" should be more properly read as "an arbitrary array of integers" (so the function can be applied at more than one power simultaneously).
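If it helps to see that control flow in a more conventional language, here is a rough Python sketch of the ^:_ behaviour (the name power_limit is made up for illustration, and the iteration cap is just a safety net):
import math
def power_limit(f, y, max_iter=10000):
    # Rough model of f^:_ y: keep applying f until the output
    # matches the input (a fixed point).
    for _ in range(max_iter):
        new_y = f(y)
        if new_y == y:
            return y
        y = new_y
    return y
print(power_limit(math.cos, math.pi))   # ~0.739085, the fixed point of cosine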
f^:proposition^:_ is not a while loop. It's (almost) a while loop when proposition returns 1 or 0. It's some strange kind of while loop when proposition returns other results.
Let's take a simple monadic case.
f =: +: NB. Double
v =: 20 > ] NB. y less than 20
(f^:v^:_) 0 NB. steady case
0
(f^:v^:_) 1 NB. (f^:1) y, until (v y) = 0
32
(f^:v^:_) 2
32
(f^:v^:_) 5
20
(f^:v^:_) 21 NB. (f^:0) y
21
This is what's happening: every time that v y is 1, (f^:1) y is executed. The result of (f^:1) y becomes the new y, and so on.
If y stays the same twice in a row → output y and stop.
If v y is 0 → output y and stop.
So f^:v^:_ here works like "double while less than 20" (or until the result doesn't change).
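In rough Python terms (the helper name is made up; this only models the boolean case above):
def double_while_small(y):
    # Mimics (f^:v^:_) y with f =: +: and v =: 20 > ]
    while y < 20:            # v y is 1: apply f once more
        new_y = 2 * y        # (f^:1) y
        if new_y == y:       # result unchanged: stop (the steady case, y = 0)
            return y
        y = new_y
    return y                 # v y is 0: (f^:0) y, output y and stop
for start in (0, 1, 2, 5, 21):
    print(start, '->', double_while_small(start))
# 0 -> 0, 1 -> 32, 2 -> 32, 5 -> 20, 21 -> 21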
Let's see what happens when v returns 2/0 instead of 1/0.
v =: 2 * 20 > ]
(f^:v^:_) 0 NB. steady state
0
(f^:v^:_) 1 NB. (f^:2) 1 = 4 -> (f^:2) 4 = 16 -> (f^:2) 16 = 64 [ -> (f^:0) 64 ]
64
(f^:v^:_) 2 NB. (f^:2) 2 = 8 -> (f^:2) 8 = 32 [ -> (f^:0) 32 ]
32
(f^:v^:_) 5 NB. (f^:2) 5 = 20 [ -> (f^:0) 20 ]
20
(f^:v^:_) 21 NB. [ (f^:0) 21 ]
21
You can have many kinds of "strange" loops by playing with v. (It can even return negative integers, to use the inverse of f).
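If it helps, here is a rough Python model of the general rule for non-negative counts (the name power_loop is made up; negative counts, i.e. applying the inverse of f, are not handled):
def power_loop(f, v, y):
    # Rough model of (f^:v^:_) y when v returns a non-negative integer.
    while True:
        n = v(y)
        if n == 0:                 # f^:0 : output y and stop
            return y
        new_y = y
        for _ in range(n):         # f^:n
            new_y = f(new_y)
        if new_y == y:             # unchanged: the limit is reached
            return y
        y = new_y
f = lambda y: 2 * y                # double
v = lambda y: 2 * (y < 20)         # 2 while y < 20, else 0
print([power_loop(f, v, y) for y in (0, 1, 2, 5, 21)])
# [0, 64, 32, 20, 21]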
I am working with a legacy code base written in VB and have run into a conditional operator that I don't understand and cannot figure out what to search for to resolve it.
What I am dealing with is the following code and the variables that result as true. The specific parts that I do not understand are (1) the relationship between the first X and the opening parenthesis before -2, and (2) the role of X < 2.
If X is a value below -2 it evaluates as false.
If X is a value above 2 it evaluates as true.
If Y is below 5 it evaluates to true as expected.
X = 3
Y = 10
If X > (-2 And X < 2) Or Y < 5 Then
    'True
Else
    'False
End If
I'm gonna leave off the Or Y < 5 part of the expression for this post as uninteresting, and limit myself to the X > (-2 And X < 2) side of the expression.
I haven't done much VB over the last several years, so I started out with some digging into operator precedence rules in VB, to be sure I have things right. I found definitive info on VBA and VB.Net, and some MSDN stuff that might have been VB6 but could also have been the 2013 version of VB.Net. All of them, though, gave the < and > comparison operators higher precedence than the And operator, regardless of whether you see And as logical or bitwise.
With that info, and also knowing that we must look inside parentheses first, I'm now confident the very first part of the expression to be evaluated is X < 2 (rather than -2 And X). Further, we know this will produce a Boolean result, and this Boolean result must then be converted to an Integer to do a bitwise (not logical) And with -2. That result (I'll call it n), which is still an Integer, can at last be compared to see if X > n, which will yield the final result of the expression as a Boolean.
I did some more digging and found this Stack Overflow answer about converting VB Booleans to Integers. While not definitive documentation, I was once privileged to meet the author (hi @JaredPar) and know he worked on the VB compiler team at Microsoft, so he should know what he's talking about. It indicates that VB Boolean True has the surprising value of -1 as an integer! False becomes the more-normal 0.
At this point we need to talk about the binary representation of negative numbers. Using this reference as a guide (I do vaguely remember learning about this in college, but it's not the kind of thing I need every day), I'm going to provide a conversion table for integers from -3 to +3 in an imaginary integer size of only 4 bits (short version: invert the bit pattern and add one to get the negative representation):
-3 1101
-2 1110
-1 1111 --this helps explain **why** -1 was used for True
0 0000
1 0001
2 0010
3 0011
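If you want to double-check that table, here is a tiny Python sketch (purely illustrative) that prints the 4-bit two's-complement pattern for each value:
for n in range(-3, 4):
    print(f"{n:>2}  {n & 0b1111:04b}")   # n & 0b1111 keeps the low 4 bits, i.e. n mod 16
# -3  1101
# -2  1110
# -1  1111
#  0  0000
#  1  0001
#  2  0010
#  3  0011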
Stepping back, let's now consider the original -2 And X < 2 parenthetical and look at the results from the True (-1) and False (0) possible outcomes for X < 2 after a bitwise And with -2:
-2 (1110) And True (1111) = 1110 = -2
-2 (1110) And False (0000) = 0000 = 0
Really the whole point of using the -1 bit pattern is that anything And True produces the same thing you started with, whereas anything And False produces all zeros.
So if X < 2 you get True, which results in -2; otherwise you end up with 0. It's interesting to note here that if our language used positive one for True, you'd end up with the same 0000 value doing a bitwise And with -2 that you get from False (1110 And 0001).
Now we know enough to look at some values for X and determine the result of the entire original expression. Any positive integer is greater than both -2 and 0, so the expression should result in True. Zero and -1 are similar: they are less than 2, so the parenthetical evaluates to -2, and both are greater than -2, so the result is again True. Negative two, though, and anything below, should be False.
Unfortunately, this means you could simplify the entire expression down to X > -2. Either I'm wrong about operator precedence, my reference for negative integer bit patterns is wrong, you're using a version of VB that converts True to something other than -1, or this code is just way over-complicated from the get-go.
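For what it's worth, a quick brute-force check agrees with that simplification. This is only a Python sketch that mimics VB's True = -1 / False = 0 conversion and a bitwise And (vb_bool is a made-up helper):
def vb_bool(b):
    # VB converts True to -1 and False to 0 when treated as an integer
    return -1 if b else 0
for x in range(-10, 11):
    original = x > (-2 & vb_bool(x < 2))   # X > (-2 And X < 2)
    assert original == (x > -2)            # same answer as the simplified test
print("X > (-2 And X < 2) behaves like X > -2 for X in -10..10")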
I believe the sequence is as follows:
r = -2 And X 'outputs X with the least significant bit cleared to 0 (that is, X rounded down to an even number)
r = (r < 2) 'outputs -1 or 0
r = (X > r) 'outputs -1 or 0
r = r Or (Y < 5)
Use AndAlso and OrElse.
I want to solve the following problem in Haskell:
Let n be a natural number and let A = [d_1 , ..., d_r] be a set of positive numbers.
I want to find all the positive solutions of the following equation:
n = d_1^2 x_1 + ... + d_r^2 x_r.
For example, if n = 12 and A = [1,2,3], I would like to solve the following equation over the natural numbers:
x + 4y + 9z = 12.
It's enough to use the following code:
[(x,y,z) | x<-[0..12], y<-[0..12], z<-[0..12], x+4*y+9*z==12]
My problem is when n is not fixed and the set A is not fixed either. I don't know how to "produce" a certain number of variables indexed by the set A.
Instead of a list-comprehension you can use a recursive call with do-notation for the list-monad.
It's a bit more tricky as you have to handle the edge-cases correctly and I allowed myself to optimize a bit:
solve :: Integer -> [Integer] -> [[Integer]]
solve 0 ds = [replicate (length ds) 0]
solve _ [] = []
solve n (d:ds) = do
  let maxN = floor $ fromIntegral n / fromIntegral (d^2)
  x <- [0..maxN]
  xs <- solve (n - x * d^2) ds
  return (x:xs)
it works like this:
It's keeping track of the remaining sum in the first argument
when this sum is 0 we are obviously done and only have to return 0s (first case) - it will return a list of 0s with the same length as ds
if the remaining sum is not 0 but there are no d's left we are in trouble as there are no solutions (second case) - note that no solutions is just the empty list
in every other case we have a non-zero n (remaining sum) and some ds left (third case):
now look for the maximum number you can pick for x (maxN): remember that x * d^2 should be <= n, so the upper limit is n / d^2, but we are only interested in integers (hence the floor)
try every x from 0 to maxN
look for all solutions of the remaining sum when using this x with the remaining ds and pick one of those xs
combine x with xs to give a solution to the current subproblem
The list-monad's bind will handle the rest for you ;)
examples
λ> solve 12 [1,2,3]
[[0,3,0],[3,0,1],[4,2,0],[8,1,0],[12,0,0]]
λ> solve 37 [2,3,4,6]
[[3,1,1,0],[7,1,0,0]]
remark
this will fail when dealing with negative numbers - if you need those you're going to have to introduce some more cases - I'm sure you'll figure them out (it's really more math than Haskell at this point)
Some hints:
Step 1. Ultimately you want to write a function with this signature:
solutions :: Int -> [Int] -> [ [Int] ]
Examples:
solutions 4 [1,2] == [ [4,0], [0,1] ]
-- two solutions: 4 = 4*1^2 + 0*2^2, 4 = 0*1^2 + 1*2^2
solutions 22 [2,3] == [ [1,2] ]
-- just one solution: 22 = 1*2^2 + 2*3^2
solutions 10 [2,3] == [ ]
-- no solutions
Step 2. Define solutions recursively based on the structure of the list:
solutions x [a] = ...
-- This will either be [] or a single element list
solutions x (a:as) = ...
-- Hint: you will use `solutions ... as` here
I was following "A Tour of Go" on http://tour.golang.org.
Table 15 has some code that I cannot understand. It defines two constants with the following syntax:
const (
    Big = 1<<100
    Small = Big>>99
)
And it's not clear at all to me what it means. I tried to modify the code and run it with different values, to record the change, but I was not able to understand what is going on there.
Then, it uses that operator again on table 24. It defines a variable with the following syntax:
MaxInt uint64 = 1<<64 - 1
And when it prints the variable, it prints:
uint64(18446744073709551615)
Where uint64 is the type. But I can't understand where 18446744073709551615 comes from.
They are Go's bitwise shift operators.
Here's a good explanation of how they work for C (they work in the same way in several languages).
Basically, 1<<64 - 1 corresponds to 2^64 - 1 = 18446744073709551615.
Think of it this way. In decimal, if you start from 001 (which is 10^0) and then shift the 1 to the left, you end up with 010, which is 10^1. If you shift it again you end up with 100, which is 10^2. So shifting to the left is equivalent to multiplying by 10 as many times as you shift.
In binary it's the same thing, but in base 2, so 1<<64 means multiplying by 2 a total of 64 times (i.e. 2^64).
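You can try the same arithmetic at a Python prompt if that makes it more concrete (the parentheses are added because Python's << binds more loosely than -, whereas Go's binds more tightly):
print(1 << 3)                      # 8, i.e. 2**3
print((1 << 64) - 1)               # 18446744073709551615, i.e. 2**64 - 1
print((1 << 64) - 1 == 2**64 - 1)  # True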
That's the same as in all languages of the C family: a bit shift.
See http://en.wikipedia.org/wiki/Bitwise_operation#Bit_shifts
This operation is commonly used to multiply or divide an unsigned integer by powers of 2:
b := a >> 1 // divides by 2
1<<100 is simply 2^100 (that's Big).
1<<64-1 is 2⁶⁴-1, and that's the biggest unsigned integer you can represent in 64 bits (by the way, you can't represent 1<<64 as a 64-bit int, and the point of table 15 is to demonstrate that you can have it in numerical constants anyway in Go).
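If you want to play with those exact values outside of Go's constant arithmetic, here is the same thing in Python, whose integers are arbitrary precision much like Go's untyped constants (the variable names are mine):
big = 1 << 100     # 2**100, far too large for any 64-bit variable
small = big >> 99  # shifting back down leaves 2**1
print(big)         # 1267650600228229401496703205376
print(small)       # 2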
The >> and << are logical shift operations. You can see more about those here:
http://en.wikipedia.org/wiki/Logical_shift
Also, you can check all the Go operators on the Go website.
It's a logical shift:
every bit in the operand is simply moved a given number of bit
positions, and the vacant bit-positions are filled in, usually with
zeros
Go Operators:
<<    left shift     integer << unsigned integer
>>    right shift    integer >> unsigned integer
I have been reading about bit operators in Objective-C in Kochan's book, "Programming in Objective-C".
I am VERY confused about this part, although I have really understood most everything else presented to me thus far.
Here is a quote from the book:
The Bitwise AND Operator
Bitwise ANDing is frequently used for masking operations. That is, this operator can be used easily to set specific bits of a data item to 0. For example, the statement
w3 = w1 & 3;
assigns to w3 the value of w1 bitwise ANDed with the constant 3. This has the same effect as setting all the bits in w3, other than the rightmost two bits, to 0 and preserving the rightmost two bits from w1.
As with all binary arithmetic operators in C, the binary bit operators can also be used as assignment operators by adding an equal sign. The statement
word &= 15;
therefore performs the same function as the following:
word = word & 15;
Additionally, it has the effect of setting all but the rightmost four bits of word to 0. When using constants in performing bitwise operations, it is usually more convenient to express the constants in either octal or hexadecimal notation.
OK, so that is what I'm trying to understand. Now, I'm extremely confused with pretty much this entire concept and I am just looking for a little clarification if anyone is willing to help me out on that.
When the book references "setting all the bits", what exactly is a bit? Isn't that just a 0 or 1 in base 2, in other words, binary?
If so, why, in the first example, are all of the bits except the "rightmost 2" set to 0? Is it 2 because it's 3 - 1, taking 3 from our constant?
Thanks!
Numbers can be expressed in binary like this:
3 = 000011
5 = 000101
10 = 001010
...etc. I'm going to assume you're familiar with binary.
Bitwise AND means to take two numbers, line them up on top of each other, and create a new number that has a 1 where both numbers have a 1 (everything else is 0).
For example:
  3 => 00011
& 5 => 00101
------ -------
  1    00001
Bitwise OR means to take two numbers, line them up on top of each other, and create a new number that has a 1 where either number has a 1 (everything else is 0).
For example:
  3 => 00011
| 5 => 00101
------ -------
  7    00111
Bitwise XOR (exclusive OR) means to take two numbers, line them up on top of each other, and create a new number that has a 1 where either number has a 1 AND the other number has a 0 (everything else is 0).
For example:
  3 => 00011
^ 5 => 00101
------ -------
  6    00110
Bitwise NOR (Not OR) means to take the Bitwise OR of two numbers, and then reverse everything (where there was a 0, there's now a 1, where there was a 1, there's now a 0).
Bitwise NAND (Not AND) means to take the Bitwise AND of two numbers, and then reverse everything (where there was a 0, there's now a 1, where there was a 1, there's now a 0).
Continuing: why does word &= 15 set all but the 4 rightmost bits to 0? You should be able to figure it out now...
   n => abcdefghijkl
& 15 => 000000001111
------- --------------
   ?    00000000ijkl
(0 AND a = 0, 0 AND b = 0, ... i AND 1 = i, j AND 1 = j, ...)
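You can check all of the examples above at a Python prompt if that helps; bin() shows the bit pattern (the value of word below is just an arbitrary example):
print(3 & 5, bin(3 & 5))   # 1 0b1
print(3 | 5, bin(3 | 5))   # 7 0b111
print(3 ^ 5, bin(3 ^ 5))   # 6 0b110
word = 0b101101101         # 365, arbitrary
print(bin(word & 15))      # 0b1101 -- only the rightmost four bits survive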
How is this useful? In many languages, we use things called "bitmasks". A bitmask is essentially a number that represents a whole bunch of smaller numbers combined together. We can combine numbers together using OR, and pull them apart using AND. For example:
int MagicMap = 1;
int MagicWand = 2;
int MagicHat = 4;
If I only have the map and the hat, I can express that as myInventoryBitmask = (MagicMap | MagicHat) and the result is my bitmask. If I don't have anything, then my bitmask is 0. If I want to see if I have my wand, then I can do:
int hasWand = (myInventoryBitmask & MagicWand);
if (hasWand > 0) {
printf("I have a wand\n");
} else {
printf("I don't have a wand\n");
}
Get it?
EDIT: more stuff
You'll also come across the "bitshift" operator: << and >>. This just means "shift everything left n bits" or "shift everything right n bits".
In other words:
1 << 3 = 0001 << 3 = 0001000 = 8
And:
8 >> 2 = 01000 >> 2 = 010 = 2
"Bit" is short for "binary digit". And yes, it's a 0 or 1. There are almost always 8 in a byte, and they're written kinda like decimal numbers are -- with the most significant digit on the left, and the least significant on the right.
In your example, w1 & 3 masks everything but the two least significant (rightmost) digits because 3, in binary, is 00000011. (2 + 1) The AND operation returns 0 if either bit being ANDed is 0, so everything but the last two bits are automatically 0.
w1 = ????...??ab
3 = 0000...0011
--------------------
& = 0000...00ab
0 & any bit N = 0
1 & any bit N = N
So, anything bitwise ANDed with 3 has all of its bits except the last two set to 0. The last two bits, a and b in this case, are preserved.
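A few concrete values, if it helps (Python here, but the arithmetic is identical in Objective-C):
for w1 in (5, 6, 7, 8):
    print(w1, bin(w1), '->', w1 & 3, bin(w1 & 3))
# 5 0b101 -> 1 0b1
# 6 0b110 -> 2 0b10
# 7 0b111 -> 3 0b11
# 8 0b1000 -> 0 0b0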
@cHao & all: No! Bits are not numbers. They're not zero or one!
Well, 0 and 1 are possible and valid interpretations. Zero and one is the typical interpretation.
But a bit is only a thing, representing a simple alternative. It says "it is" or "it is not". It doesn't say anything about the thing, the "it", itself. It doesn't tell what thing it is.
In most cases this won't bother you. You can take them for numbers (or parts, digits, of numbers) as you (or the combination of programming languages, CPU and other hardware you know as being "typical") usually do - and maybe you'll never have trouble with them.
But there is no problem in principle if you switch the meaning of "0" and "1". OK, if you do this while programming assembler, you'll find it a bit problematic, as some mnemonics will do other logic than their names tell you, numbers will be negated, and other such things.
Have a look at http://webdocs.cs.ualberta.ca/~amaral/courses/329/webslides/Topic2-DeMorganLaws/sld017.htm if you want.
Greetings
I have a question about the SQL standard which I'm hoping a SQL language lawyer can help with.
Certain expressions just don't work. 62 / 0, for example. The SQL standard specifies quite a few ways in which expressions can go wrong in similar ways. Lots of languages deal with these expressions using special exceptional flow control, or bottom pseudo-values.
I have a table, t, with (only) two columns, x and y each of type int. I suspect it isn't relevant, but for definiteness let's say that (x,y) is the primary key of t. This table contains (only) the following values:
x   y
7   2
3   0
4   1
26  5
31  0
9   3
What behavior is required by the SQL standard for SELECT expressions operating on this table which may involve division(s) by zero? Alternatively, if no one behavior is required, what behaviors are permitted?
For example, what behavior is required for the following select statements?
The easy one:
SELECT x, y, x / y AS quot
FROM t
A harder one:
SELECT x, y, x / y AS quot
FROM t
WHERE y != 0
An even harder one:
SELECT x, y, x / y AS quot
FROM t
WHERE x % 2 = 0
Would an implementation (say, one that failed to realize, on a more complex version of this query, that the restriction could be moved inside the extension) be permitted to produce a division by zero error in response to this query because, say, it attempted to divide 3 by 0 as part of the extension before performing the restriction and realizing that 3 % 2 = 1? This could become important if, for example, the extension was over a small table but the result--when joined with a large table and restricted on the basis of data in the large table--ended up restricting away all of the rows which would have required division by zero.
If t had millions of rows, and this last query were performed by a table scan, would an implementation be permitted to return the first several million results before discovering a division by zero near the end when encountering one even value of x with a zero value of y? Would it be required to buffer?
There are even worse cases, ponder this one, which depending on the semantics can ruin boolean short-circuiting or require four-valued boolean logic in restrictions:
SELECT x, y
FROM t
WHERE ((x / y) >= 2) AND ((x % 2) = 0)
If the table is large, this short-circuiting problem can get really crazy. Imagine the table had a million rows, one of which had a 0 divisor. What would the standard say is the semantics of:
SELECT CASE
WHEN EXISTS
(
SELECT x, y, x / y AS quot
FROM t
)
THEN 1
ELSE 0
END AS what_is_my_value
It seems like this value should probably be an error, since it depends on the emptiness or non-emptiness of a result which is an error, but adopting those semantics would seem to prohibit the optimizer from short-circuiting the table scan here. Does this existence query require proving the existence of one non-bottoming row, or also the non-existence of a bottoming row?
I'd appreciate guidance here, because I can't seem to find the relevant part(s) of the specification.
All implementations of SQL that I've worked with treat a division by 0 as an immediate NaN or #INF. The division is supposed to be handled by the front end, not by the implementation itself. The query should not bottom out, but the result set needs to return NaN in this case. Therefore, it's returned at the same time as the result set, and no special warning or message is brought up to the user.
At any rate, to properly deal with this, use the following query:
select
    x, y,
    case y
        when 0 then null
        else x / y
    end as quot
from
    t
To answer your last question, this statement:
SELECT x, y, x / y AS quot
FROM t
Would return this:
x   y   quot
7   2   3.5
3   0   NaN
4   1   4
26  5   5.2
31  0   NaN
9   3   3
So, your exists would find all the rows in t, regardless of what their quotient was.
Additionally, I was reading over your question again and realized I hadn't discussed where clauses (for shame!). The where clause, or predicate, should always be applied before the columns are calculated.
Think about this query:
select x, y, x/y as quot from t where x%2 = 0
If we had a record (3,0), it applies the where condition, and checks if 3 % 2 = 0. It does not, so it doesn't include that record in the column calculations, and leaves it right where it is.