In most languages, if you want to swap two variables, it's something like:
var c = b
b = a
a = c
Yes, you can do fancy hacks with XOR if you like, but it's generally 3 lines of code for a single operation. Are there any languages that have swapping variables as a primitive in the language?
Lua, Python, Ruby and more support this notation:
a, b = b, a
And JavaScript sure needs no temporary variable either ;)
a = -(b = (a += b) - b) + a;
For more examples on how to swap variables (in 86 languages), see: http://rosettacode.org/wiki/Generic_swap
In most dynamic languages you can do something like this to swap:
a, b = b, a
Now a has the value of b, and b has the value of a. I am not sure if this is what you meant or not.
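A quick way to see why this works: the right-hand side is evaluated first, as a tuple, before any assignment happens. A minimal Python sketch:

a, b = 1, 2
# The right-hand side builds the tuple (2, 1) first,
# then unpacks it into a and b; no temporary variable is needed.
a, b = b, a
print(a, b)  # prints: 2 1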
I have 4 non-negative real variables: A, B, C and X. Based on the problem at hand, I notice that the variable X must belong to the interval [B, C], and the relation is a bunch of if-else conditions like this:

If A < B:
    x = B
elseif A > C:
    x = C
elseif B <= A <= C:
    x = A

As you can see, it is quite difficult to reformulate this as a Mixed Integer Programming problem with corresponding decision variables (d1, d2 and d3). I have tried reading some instructions on if-then formulations using the big-M method at this site: https://www.math.cuhk.edu.hk/course_builder/1415/math3220/L2%20(without%20solution).pdf but it seems that this problem is more challenging than their tutorial.

Could you kindly provide me with a formulation for this situation?

Thank you very much!
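Note that the three cases together say that x is just A clamped to the interval [B, C]. For reference, here is one standard big-M linearization, sketched under the assumptions that B <= C and that M is any constant >= e (all variables lie in [0, e]), with binaries d1, d2, d3 selecting the active case:

d1 + d2 + d3 = 1
A <= B + M*(1 - d1)                      (d1 = 1 forces A <= B)
A >= C - M*(1 - d2)                      (d2 = 1 forces A >= C)
B - M*(1 - d3) <= A <= C + M*(1 - d3)    (d3 = 1 forces B <= A <= C)
B - M*(1 - d1) <= x <= B + M*(1 - d1)    (d1 = 1 forces x = B)
C - M*(1 - d2) <= x <= C + M*(1 - d2)    (d2 = 1 forces x = C)
A - M*(1 - d3) <= x <= A + M*(1 - d3)    (d3 = 1 forces x = A)

The strict inequalities in the original cases can safely be relaxed to weak ones: at the boundaries A = B and A = C, the competing branches assign the same value to x, so it does not matter which case the solver selects.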
I was teaching a friend about programming and I had a hard time convincing them that a = b and b = a are two very different things.
I eventually found the correct words to describe it (right-associative), which got me thinking.
Are there any programming languages which are left-associative? I have never seen a language where:
a = b results in b being set to the value of a.
You misunderstood associativity. An operator op is associative if (a op b) op c is equivalent to a op (b op c). For operators that are not associative it thus becomes relevant whether a op b op c stands for the former or the latter. Thus we distinguish between left-associative operators, where it's (a op b) op c, and right-associative operators, where it's a op (b op c).
Most operators in most languages are left-associative. Take for example -: a - b - c is equivalent to (a - b) - c in most languages, not a - (b - c).
The assignment operator is an exception to that, as (a = b) = c is generally not legal (you can't assign to the result of an assignment). Thus in most languages a = b = c is equivalent to a = (b = c). A notable exception is Python, where assignment is a statement rather than an operator: a = b = c is special chained-assignment syntax that evaluates c once and assigns its value to both a and b, and (a = b) = c is simply illegal.
None of this has anything to do with the difference between a = b and b = a. Since this involves only a single use of the = operator, associativity does not factor into this at all. Rather the relevant property is commutativity: An operator op is commutative if a op b is equivalent to b op a. I'm not aware of any language where assignment is commutative, nor do I have an idea of how it could be.
Concepts like left-commutativity or right-commutativity do not exist. There is, to the best of my knowledge, no term for the question "Does a = b assign b to a or vice-versa?" - that's just part of the semantics of the assignment operator.
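A minimal Python illustration of the two notions discussed above:

# Left-associative operator: 10 - 4 - 3 is parsed as (10 - 4) - 3.
print(10 - 4 - 3)    # 3
print(10 - (4 - 3))  # 9

# Chained assignment: 5 is evaluated once and assigned to both a and b.
a = b = 5
print(a, b)          # 5 5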
I don't believe so, though you can always overload the assignment operator and cause complete confusion inside your C++.
R has both <- and -> assignment operators defined.
> b <- 42
> b -> a
> a
[1] 42
If we talk about operator associativity, we should actually consider two further kinds: side-associative operators (e.g. =) and non-associative operators (e.g. ==), where it makes no difference to the machine whether it reads the line from left to right or from right to left.
No programming language I know of is left-associative in its assignment by design, but some allow it via operator overloading, and some (e.g. R) allow both directions: -> and <-.
I believe no one in Europe would like it, but it might be found lovely in the Middle East, where text is read right to left. I can imagine IDEs that swap the right side with the left side, but in the end compilers are written the European way.
I can make it work like this:
import xlrd

book = xlrd.open_workbook(Path + 'infile')
sheet = book.sheet_by_index(0)
A, B, C, D = ([] for i in range(4))
A = sheet.col_values(0)
B = sheet.col_values(1)
C = sheet.col_values(2)
D = sheet.col_values(3)
but what I want is to make it work like this:
dyn_var_list = [A, B, C, D]
assert len(sheet.row_values(0)) == len(dyn_var_list)
for index, col in enumerate(sheet.row_values(0)):
    dyn_var_list[index].append(col)
However, so far I can only get one value into each of my lists using the code above, which I guess is due to the "(0)" after row_values, but I don't know how to resolve this yet.
Try
for c in range(sheet.ncols):
    for r in range(sheet.nrows):
        dyn_var_list[c].append(sheet.cell(r, c).value)
Here sheet.nrows gives you the number of rows in the sheet.
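Put together, a minimal self-contained version might look like this (the file name is hypothetical, and the final unpacking assumes the sheet has exactly four columns):

import xlrd

book = xlrd.open_workbook('infile.xls')  # hypothetical file name
sheet = book.sheet_by_index(0)

# One empty list per column, filled in a single pass over the sheet.
dyn_var_list = [[] for _ in range(sheet.ncols)]
for c in range(sheet.ncols):
    for r in range(sheet.nrows):
        dyn_var_list[c].append(sheet.cell(r, c).value)

A, B, C, D = dyn_var_list  # assumes exactly four columns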
I have some variables, let's say a, b, c, d. All belong to a fixed interval [0, e].
Now I have some relations between them, like:
a > b
a > c
b > d
Something like this; I want to make a function which prints all the possible cases for this.
Example:
a b c d
a b d c
a c b d
In essence, what you have is a directed acyclic graph.
A relatively simple approach is to store, for each variable, a set of the variables that must precede them. (In your example, this storage would map b to {a}, c to {a}, and d to {b}.) You can then write a recursive function that generates all valid tails consisting of a subset of these variables (in your case, for example, the subset {c,d} produces two valid tails: [c,d] and [d,c]). This recursive function examines each variable in the subset and determines whether its prerequisites are already met. (For example, since b maps to {a}, any subset including both a and b cannot produce a tail that begins with b.) If so, then it can recursively call itself on the subset excluding that variable.
There are some optimizations you can then perform, if desired. For example, you can use dynamic programming to avoid repeatedly re-computing the set of valid tails for the same subset.
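A minimal Python sketch of this recursion (the names are mine; it builds each ordering front to back, placing a variable only once all of its predecessors have been placed):

def all_orderings(preds):
    # preds maps each variable to the set of variables that must precede it,
    # e.g. {'a': set(), 'b': {'a'}, 'c': {'a'}, 'd': {'b'}}.
    def tails(remaining):
        if not remaining:
            yield []
            return
        for v in sorted(remaining):
            # v may come next only if none of its predecessors
            # are still waiting to be placed.
            if not (preds[v] & remaining):
                for rest in tails(remaining - {v}):
                    yield [v] + rest
    yield from tails(set(preds))

for order in all_orderings({'a': set(), 'b': {'a'}, 'c': {'a'}, 'd': {'b'}}):
    print(' '.join(order))  # a b c d / a b d c / a c b d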
I'm curious how to optimize this code:

fun n = (sum l, f $ f0 l, g $ g0 l)
  where l = map h [1..n]
Assuming that f, f0, g, g0, and h are all costly, but the creation and storage of l is extremely expensive.
As written, l is stored until the returned tuple is fully evaluated or garbage collected. Instead, sum l, f0 l, and g0 l should all be executed whenever any one of them is executed, but f and g should be delayed.
It appears this behavior could be fixed by writing:

fun n = a `seq` b `seq` c `seq` (a, f b, g c)
  where
    l = map h [1..n]
    a = sum l
    b = inline f0 $ l
    c = inline g0 $ l
Or the very similar:

fun n = (a, b, c) `deepseq` (a, f b, g c)
  where ...
We could perhaps specify a bunch of internal types to achieve the same effects as well, which looks painful. Are there any other options?
Also, I'm obviously hoping with my inlines that the compiler fuses sum, f0, and g0 into a single loop that constructs and consumes l term by term. I could make this explicit through manual inlining, but that'd suck. Are there ways to explicitly prevent the list l from ever being created and/or compel inlining? Pragmas that produce warnings or errors if inlining or fusion fail during compilation perhaps?
As an aside, I'm curious why seq, inline, lazy, etc. are all defined to be let x = x in x in the Prelude. Is this simply to give them a definition for the compiler to override?
If you want to be sure, the only way is to do it yourself. For any given compiler version, you can try out several source formulations and check whether the generated core/assembly/LLVM bitcode/whatever does what you want. But that could break with each new compiler version.
If you write
fun n = a `seq` b `seq` c `seq` (a, f b, g c)
  where
    l = map h [1..n]
    a = sum l
    b = inline f0 $ l
    c = inline g0 $ l
or the deepseq version thereof, the compiler might be able to merge the computations of a, b and c to be performed in parallel (not in the concurrency sense) during a single traversal of l, but for the time being, I'm rather convinced that GHC doesn't, and I'd be surprised if JHC or UHC did. And for that the structure of computing b and c needs to be simple enough.
The only way to obtain the desired result portably across compilers and compiler versions is to do it yourself. For the next few years, at least.
Depending on f0 and g0, it might be as simple as doing a strict left fold with an appropriate accumulator type and combining function, like the famous average:

import Data.List (foldl')

-- strict pair holding the running count and the running sum
data P = P {-# UNPACK #-} !Int {-# UNPACK #-} !Double

average :: [Double] -> Double
average = ratio . foldl' count (P 0 0)
  where
    ratio (P n s) = s / fromIntegral n
    count (P n s) x = P (n+1) (s+x)
but if the structure of f0 and/or g0 doesn't fit, say one's a left fold and the other a right fold, it may be impossible to do the computation in one traversal. In such cases, the choice is between recreating l and storing l. Storing l is easy to achieve with explicit sharing (where l = map h [1..n]), but recreating it may be difficult to achieve if the compiler does some common subexpression elimination (unfortunately, GHC does have a tendency to share lists of that form, even though it does little CSE). For GHC, the flags -fno-cse and -fno-full-laziness can help avoid unwanted sharing.