In point 1 of the Nix manual it is written:
This file defines a set of attributes, all of which are concrete
derivations (i.e., not functions). In fact, we define a mutually
recursive set of attributes. That is, the attributes can refer to each
other. This is precisely what we want since we want to “plug” the
various packages into each other.
This seems a little bit difficult to understand.
For example, if derivation A depends on derivation B and derivation B depends on derivation A, how is such a mutually recursive pair of derivations built in Nix/NixOS?
Could you please give a simple example of how and why such mutually recursive derivations do not lead to problems?
If A depends on B and vice versa, it's a cyclic dependency and Nix cannot handle that.
But a mutually recursive set is a different thing. It just means A can depend on B within the same set:
rec {
a = 1;
b = 2;
c = a+b;
}
As stated by jhegedus, it's equivalent to (because of laziness):
let s = with s; {
a = 1;
b = 2;
c = a+b;
};
in s
But this is a cycle, and doesn't work:
rec {
a = b;
b = a;
}
I'll post this anyway because it is more than nothing and it might help someone:
Here in point 1: http://nixos.org/nix/manual/#ex-hello-composition, it is written: "we define a mutually recursive set of attributes". This is a little bit confusing. Does this not lead to a chicken-and-egg problem?
joco42_
Say, package 1 depends on package 2 but package 2 depends on package 1, isn't that a problem?
joco42_
Can such cyclic dependencies really exist in Nix?
kmicu
No it's not a problem with Nix http://augustss.blogspot.hu/2011/05/more-points-for-lazy-evaluation-in.html
joco42_
kmicu: many thanks
kmicu
http://nixos.org/nix/manual/#sec-constructs
joco42_
kmicu: many thanks, I've just asked this on SO before I saw your comment: Cyclic dependencies in Nix/NixOS explained on a simple example
joco42_
kmicu: so basically Nix expressions are written in a lazy language?
kmicu
Yes, “The Nix expression language is a pure, lazy, functional language.”
(there is also an example at http://lethalman.blogspot.com/2014/11/nix-pill-17-nixpkgs-overriding-packages.html )
Basically, the Nix language can handle recursion because it is lazy:
nix-repl> fix = f: let r = f r; in r
nix-repl> p = s: { a = 3; b = 4; c = s.a + s.b; }
nix-repl> fix p
{ a = 3; b = 4; c = 7; }
I'd like to optimize my C code, which looks heavy.
Does anyone have an idea how to optimize the code below?
// r, g, b are variables
for (int x = 0; x < 256; x++)
{
    for (int y = 0; y < 256; y++) {
        // TODO: optimize here.
        arr[x][y] = (r > g) ? (r > b ? (g > b ? r - b : r - g) : b - g)
                            : (g > b ? (r > b ? g - b : g - r) : b - r);
    }
}
This level of optimisation is often best left to the compiler itself, which generally knows more about the target architecture than most users of the language.
The first thing it would do (or that you might do if it doesn't) is move the calculation outside of the loop.
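For illustration, a minimal sketch of that hoisting, assuming r, g and b really are loop-invariant (the 256x256 bounds come from the question):
int spread = (r > g) ? (r > b ? (g > b ? r - b : r - g) : b - g)
                     : (g > b ? (r > b ? g - b : g - r) : b - r);

for (int x = 0; x < 256; x++)
    for (int y = 0; y < 256; y++)
        arr[x][y] = spread;   /* computed once, stored 256*256 times */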
This is, of course, assuming that the code is representative and r/g/b are constant throughout the loop. However, I suspect your code is a simplification and they actually depend on the loop variables.
But, you should probably optimise it first for readability, since micro-optimisations of this sort rarely deliver the performance benefits you want. It's usually far better to do macro-optimisation tasks such as more targeted data structures (trading space for time) or better algorithm selection.
Since your code appears to be getting the maximum spread of the three values (the difference between the highest and lowest), a readability optimisation could be as simple as:
// No complex expressions or side effects allowed, you've been warned!
#define ordered(a,b,c) ((a >= b) && (b >= c))
if (ordered(r,g,b)) arr[x][y] = b - r;
else if (ordered(r,b,g)) arr[x][y] = g - r;
else if (ordered(b,r,g)) arr[x][y] = g - b;
else if (ordered(b,g,r)) arr[x][y] = r - b;
else if (ordered(g,r,b)) arr[x][y] = b - g;
else /* g,b,r */ arr[x][y] = r - g;
#undef ordered
That's far more readable than that ternary monstrosity you have :-)
And, if it turns out you do need more grunt, you can revert to said code but, for the love of whatever gods you believe in (if any), comment it thoroughly to explain what it's meant to do, so the next person maintaining your code thinks kindly of you [1].
And only revert if you can establish that the improvement is worth it. The prime directive of optimisation is "measure, don't guess".
[1] You should always assume the coder who follows you is a psychopath who knows where you live. In my case, you'd be half right; I have no idea where you live :-)
Also, I would advise testing whether this code is really heavy or troublesome. Optimizing on assumptions alone often leads to bad results, since the compiler may already optimize exactly this kind of expression.
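As a sketch of how one might measure first (the arr type, the r/g/b values and the repeat count below are all made up for the example):
#include <stdio.h>
#include <time.h>

static unsigned char arr[256][256];   /* assumed storage for the results */

int main(void)
{
    int r = 200, g = 50, b = 10;      /* placeholder values */
    clock_t start = clock();
    for (int iter = 0; iter < 1000; iter++)   /* repeat so the time is measurable */
        for (int x = 0; x < 256; x++)
            for (int y = 0; y < 256; y++)
                arr[x][y] = (r > g) ? (r > b ? (g > b ? r - b : r - g) : b - g)
                                    : (g > b ? (r > b ? g - b : g - r) : b - r);
    printf("1000 iterations: %.3f s\n",
           (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}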
Is there a hack to support a range case in a C (C99?) or Objective-C switch statement?
I know it is not supported to write something like this:
switch(x)
case 1:
case 2..10:
case 11:
But I was thinking there should be a way to generate the code with a #define macro. Of course I can define a macro with the list of cases, but I was hoping for a more elegant way, like CASERANGE(x, x+10), which would generate:
case x:
case x+1:
case x+2:
Is it even possible?
GCC has an extension to the C language that allows something similar to your first example, but other than that, if there was a portable/ANSI way of doing it, it would have been done by now. I don't believe there is one.
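For reference, the GCC case-range extension looks like this (non-standard, also accepted by Clang; note the spaces around the ellipsis, which the GCC documentation asks for):
switch (x) {
case 1:
    /* ... */
    break;
case 2 ... 10:   /* GCC extension: matches any value from 2 to 10 inclusive */
    /* ... */
    break;
case 11:
    /* ... */
    break;
}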
Doing this with macros is next to impossible. Compiler extensions exist, but they are compiler-specific and not cross-platform/standard. There is no standard way to do this; use if/else chains instead.
In modern C (C99, with variadic macros), doing this with macros is possible. But you probably wouldn't want to code this completely yourself. P99 provides a toolbox for this. In particular there is a meta-macro P99_FOR that allows you to do unrolling of finite length argument lists.
#define P00_CASE_FL(NAME, X, I) case I: NAME(X); break
#define CASES_FL(NAME, ...) P99_FOR(NAME, P99_NARG(__VA_ARGS__), P00_SEQ, P00_CASE_FL, __VA_ARGS__)
would expand CASES_FL(myFunc, oi, ui, ei) to something like
case 0: myFunc(oi); break; case 1: myFunc(ui); break; case 2: myFunc(ei); break
Edit: to reply to the concrete question
#define P00_CASESEP(NAME, I, X, Y) X:; Y
#define P00_CASERANGE(NAME, X, I) case ((NAME)+I)
#define P99_CASERANGE(START, LEN) P99_FOR(START, LEN, P00_CASESEP, P00_CASERANGE, P99_REP(LEN,))
where P00_CASESEP just ensures that there are the :; between the cases, and P99_REP generates a list with LEN empty arguments.
You'd use that e.g. as
switch(i) {
P99_CASERANGE('0',10): return i;
}
Observe the : after the macro to keep it as close as possible to the usual case syntax, and also that the LEN parameter has to expand to a plain decimal number, not an expression or so.
Possible Duplicate:
Why do most programming languages only have binary equality comparison operators?
I have had a simple question for a fairly long time--since I started learning programming languages.
I'd like to write something like "if x is either 1 or 2 => TRUE (otherwise FALSE)."
But when I write it in a programming language, say in C,
( x == 1 || x == 2 )
it really works but looks awkward and hard to read. I guess it should be possible to simplify such an OR operation, so if you have any idea, please tell me. Thanks, Nathan
Python allows test for membership in a sequence:
if x in (1, 2):
An extension version in C#
step 1: create an extension method
using System.Linq;   // needed for the Any extension method

public static class ObjectExtensions
{
    public static bool Either(this object value, params object[] array)
    {
        return array.Any(p => Equals(value, p));
    }
}
step 2: use the extension method
if (x.Either(1,2,3,4,5,6))
{
}
else
{
}
While there are a number of quite interesting answers in this thread, I would like to point out that they may have performance implications if you're doing this kind of logic inside a loop, depending on the language. As far as the computer is concerned, if (x == 1 || x == 2) is by far the easiest form to understand and optimize when it's compiled into machine code.
When I started programming it seemed weird to me as well that instead of something like:
(1 < x < 10)
I had to write:
(1 < x && x < 10)
But this is how most programming languages work, and after a while you will get used to it.
So I believe it is perfectly fine to write
( x == 1 || x == 2 )
Writing it this way also has the advantage that other programmers will understand easily what you wrote. Using a function to encapsulate it might just make things more complicated because the other programmers would need to find that function and see what it does.
Only more recent programming languages like Python, Ruby etc. allow you to write it in a simpler, nicer way. That is mostly because these programming languages are designed to increase the programmers productivity, while the older programming languages' main goal was application performance and not so much programmer productivity.
It's Natural, but Language-Dependent
Your approach would indeed seem more natural but that really depends on the language you use for the implementation.
Rationale for the Mess
C being a systems programming language, and fairly close to the hardware (funny, though, as it used to be considered a "high-level" language, as opposed to writing machine code), it's not exactly expressive.
Modern higher-level languages (again arguable; Lisp is not that modern, historically speaking, but would allow you to do this nicely) let you do such things with built-in constructs or library support (for instance, using ranges, tuples or their equivalents in languages like Python, Ruby, Groovy, ML-family languages, Haskell...).
Possible Solutions
Option 1
One option for you would be to implement a function or subroutine taking an array of values and checking them.
Here's a basic prototype, and I leave the implementation as an exercise to you:
/* returns non-zero value if check is in values */
int is_in(int check, int *values, int size);
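(A plain linear scan is one way to fill in that prototype, for example:)
int is_in(int check, int *values, int size)
{
    for (int i = 0; i < size; i++)
        if (values[i] == check)
            return 1;   /* found: non-zero, as the comment above promises */
    return 0;           /* not found */
}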
However, as you will quickly see, this is very basic and not very flexible:
it works only on integers,
it works only to compare identical values.
Option 2
One step higher on the complexity ladder (in terms of languages), an alternative would be to use pre-processor macros in C (or C++) to achieve a similar behavior, but beware of side effects.
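A minimal sketch of such a macro (the name IS_EITHER is made up here); the side-effect warning applies because x is expanded, and therefore evaluated, twice:
#define IS_EITHER(x, a, b)  ((x) == (a) || (x) == (b))

if (IS_EITHER(x, 1, 2)) {
    /* ... */
}
/* beware: IS_EITHER(f(), 1, 2) would call f() twice */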
Other Options
A next step could be to pass a function pointer as an extra parameter to define the behavior at call-point, define several variants and aliases for this, and build yourself a small library of comparators.
The next step then would be to implement a similar thing in C++ using templates to do this on different types with a single implementation.
And then keep going from there to higher-level languages.
Pick the Right Language (or learn to let go!)
Typically, languages favoring functional programming will have built-in support for this sort of thing, for obvious reasons.
Or just learn to accept that some languages can do things that others cannot, and that depending on the job and environment, that's just the way it is. It mostly is syntactic sugar, and there's not much you can do. Also, some languages will address their shortcomings over time by updating their specifications, while others will just stall.
Maybe a library already implements such a thing and I am just not aware of it.
That was a lot of interesting alternatives. I am surprised nobody mentioned switch...case, so here goes:
switch(x) {
case 1:
case 2:
// do your work
break;
default:
// the else part
}
It is more readable than having a bunch of x == 1 || x == 2 || ..., and more efficient than using an array/set/list for a membership check.
I doubt I'd ever do this, but to answer your question, here's one way to achieve it in C# involving a little generic type inference and some abuse of operator overloading. You could write code like this:
if (x == Any.Of(1, 2)) {
Console.WriteLine("In the set.");
}
Where the Any class is defined as:
public static class Any {
public static Any2<T> Of<T>(T item1, T item2) {
return new Any2<T>(item1, item2);
}
public struct Any2<T> {
T item1;
T item2;
public Any2(T item1, T item2) {
this.item1 = item1;
this.item2 = item2;
}
public static bool operator ==(T item, Any2<T> set) {
return item.Equals(set.item1) || item.Equals(set.item2);
}
// Defining the operator== requires these three methods to be defined as well:
public static bool operator !=(T item, Any2<T> set) {
return !(item == set);
}
public override bool Equals(object obj) { throw new NotImplementedException(); }
public override int GetHashCode() { throw new NotImplementedException(); }
}
}
You could conceivably have a number of overloads of the Any.Of method to work with 3, 4, or even more arguments. Other operators could be provided as well, and a companion All class could do something very similar but with && in place of ||.
Looking at the disassembly, a fair bit of boxing happens because of the need to call Equals, so this ends up being slower than the obvious (x == 1) || (x == 2) construct. However, if you change all the <T>'s to int and replace the Equals with ==, you get something which appears to inline nicely to be about the same speed as (x == 1) || (x == 2).
Err, what's wrong with it? Oh well, if you really use it a lot and hate the looks, do something like this in C#:
#region minimizethisandneveropen
public static bool Either(int value, int x, int y)
{
    return value == x || value == y;
}
#endregion
and in places where you use it:
if (Either(value, 1, 2))
    // yaddayadda
Or something like that in another language :).
In php you can use
$ret = in_array($x, array(1, 2));
As far as I know, there is no built-in way of doing this in C. You could add your own inline function for scanning an array of ints for values equal to x....
Like so:
inline int contains(const int set[], int n, int x)
{
    int i;
    for (i = 0; i < n; i++)
        if (set[i] == x)
            return 1;
    return 0;
}
// To implement the check, you declare the set
int mySet[2] = {1, 2};
// And evaluate like this:
contains(mySet, 2, x) // returns non-zero if 'x' is contained in 'mySet'
In T-SQL
where x in (1,2)
In COBOL (it's been a long time since I've even glanced briefly at COBOL, so I may have a detail or two wrong here):
IF X EQUALS 1 OR 2
...
So the syntax is definitely possible. The question then boils down to "why is it not used more often?"
Well, the thing is, parsing expressions like that is a bit of a bitch. Not when standing alone like that, mind, but more when in compound expressions. The syntax starts to become opaque (from the compiler implementer's perspective) and the semantics downright hairy. IIRC, a lot of COBOL compilers will even warn you if you use syntax like that because of the potential problems.
In .Net you can use Linq:
int[] wanted = new int[] { 1, 2 };
// you can use Any to return true for the first item in the list that passes
bool result = wanted.Any( i => i == x );
// or use Contains
result = wanted.Contains( x );
Although personally I think the basic || is simple enough:
bool result = ( x == 1 || x == 2 );
Thanks Ignacio! I translated it into Ruby:
[ 1, 2 ].include?( x )
and it also works, but I'm not sure whether it looks clear and idiomatic. If you know Ruby, please advise. Also, if anybody knows how to write this in C, please tell me. Thanks. -Nathan
Perl 5 with Perl6::Junction:
use Perl6::Junction 'any';
say 'yes' if 2 == any(qw/1 2 3/);
Perl 6:
say 'yes' if 2 == 1|2|3;
This version is so readable and concise I’d use it instead of the || operator.
Pascal has a (limited) notion of sets, so you could do:
if x in [1, 2] then
(haven't touched a Pascal compiler in decades so the syntax may be off)
An attempt with only one non-bitwise operator (not advised, not tested):
if( ((x&3) ^ x ^ ((x>>1)&1) ^ (x&1) ^ 1) == 0 )
The (x&3) ^ x part should be equal to 0, this ensures that x is between 0 and 3. Other operands will only have the last bit set.
The ((x>>1)&1) ^ (x&1) ^ 1 part ensures last and second to last bits are different. This will apply to 1 and 2, but not 0 and 3.
You say the notation (x==1 || x==2) is "awkward and hard to read". I beg to differ. It's different than natural language, but is very clear and easy to understand. You just need to think like a computer.
Also, the notations mentioned in this thread, like x in (1,2), are semantically different from what you are really asking: they ask whether x is a member of the set (1,2), which is not what you are asking. What you are asking is whether x equals 1 or 2, which is logically (and semantically) equivalent to "x equals 1 or x equals 2", which translates to (x==1 || x==2).
In Java:
List<Integer> list = Arrays.asList(1, 2);
Set<Integer> set = new HashSet<>(list);
set.contains(1)
I have a macro that I use a lot that's somewhat close to what you want.
#define ISBETWEEN(Var, Low, High) ((Var) >= (Low) && (Var) <= (High))
ISBETWEEN(x, 1, 2) will return true if x is 1 or 2.
Neither C, C++, VB.net, C#.net, nor any other such language I know of has an efficient way to test for something being one of several choices. Although (x==1 || x==2) is often the most natural way to code such a construct, that approach sometimes requires the creation of an extra temporary variable:
tempvar = somefunction(); // tempvar only needed for 'if' test:
if (tempvar == 1 || tempvar == 2)
...
Certainly an optimizer should be able to effectively get rid of the temporary variable (shove it in a register for the brief time it's used) but I still think that code is ugly. Further, on some embedded processors, the most compact and possibly fastest way to write (x == const1 || x==const2 || x==const3) is:
movf _x,w ; Load variable X into accumulator
xorlw const1 ; XOR with const1
btfss STATUS,ZERO ; Skip next instruction if zero
xorlw const1 ^ const2 ; XOR with (const1 ^ const2)
btfss STATUS,ZERO ; Skip next instruction if zero
xorlw const2 ^ const3 ; XOR with (const2 ^ const3)
btfss STATUS,ZERO ; Skip next instruction if zero
goto NOPE
That approach requires two more instructions for each constant; all instructions will execute. Early-exit tests would save time if the branch is taken, and waste time otherwise. Coding a literal interpretation of the separate comparisons would require four instructions for each constant.
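For readers who don't speak that assembly dialect, the same XOR-chaining idea rendered in C might look roughly like this (const1..const3 stand in for the constants):
int w = x ^ const1;                 /* zero if x == const1 */
if (w != 0) w ^= const1 ^ const2;   /* otherwise: zero if x == const2 */
if (w != 0) w ^= const2 ^ const3;   /* otherwise: zero if x == const3 */
if (w == 0) {
    /* x was const1, const2 or const3 */
} else {
    /* NOPE */
}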
If a language had an "if variable is one of several constants" construct, I would expect a compiler to use the above code pattern. Too bad no such construct exists in common languages.
(note: Pascal does have such a construct, but run-time implementations are often very wasteful of both time and code space).
In JavaScript: return x === 1 || x === 2;
I want to check whether a value is equal to 1. Is there any difference between the following lines of code,
evaluatedValue == 1
1 == evaluatedValue
in terms of compiler execution?
In most languages it's the same thing.
People often write 1 == evaluatedValue because 1 is not an lvalue, meaning that you can't accidentally turn the comparison into an assignment.
Example:
if(x = 6)//bug, but no compiling error
{
}
This way you get a compile error instead of a bug:
if(6 = x)//compiling error
{
}
Now if x is not of type int, and you're using something like C++, then the user could have created an operator==(int) overload, which gives this question a new meaning. The 6 == x wouldn't compile in that case, but the x == 6 would.
It depends on the programming language.
In Ruby, Smalltalk, Self, Newspeak, Ioke and many other single-dispatch object-oriented programming languages, a == b is actually a message send. In Ruby, for example, it is equivalent to a.==(b). What this means, is that when you write a == b, then the method == in the class of a is executed, but when you write b == a, then the method in the class of b is executed. So, it's obviously not the same thing:
class A; def ==(other) false end; end
class B; def ==(other) true end; end
a, b = A.new, B.new
p a == b # => false
p b == a # => true
No, but the latter syntax will give you a compiler error if you accidentally type
if (1 = evaluatedValue)
Note that today any decent compiler will warn you if you write
if (evaluatedValue = 1)
so it is mostly relevant for historical reasons.
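A small C sketch of both cases (the second test is deliberately left as a comment, because it does not compile):
void check(int evaluatedValue)
{
    if (evaluatedValue = 1) {   /* compiles; typical compilers only warn, e.g. with gcc -Wall */
        /* always taken: the condition is the assigned value, 1 */
    }

    /* if (1 = evaluatedValue) ...  -- rejected outright: 1 is not an lvalue */
}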
Depends on the language.
In Prolog or Erlang, == is written = and is a unification rather than an assignment (you're asserting that the values are equal, rather than testing that they are equal or forcing them to be equal), so you can use it as an assertion if the left-hand side is a constant, as explained here.
So X = 3 would unify the variable X with the value 3, whereas 3 = X would attempt to unify the constant 3 with the current value of X, and would be the equivalent of assert(x==3) in imperative languages.
It's the same thing
In general, it hardly matters whether you use
evaluatedValue == 1 or 1 == evaluatedValue.
Use whichever appears more readable to you. I prefer if (evaluatedValue == 1) because it looks more readable to me.
And again, I'd like to mention a well-known scenario of string comparison in Java.
Consider a String str which you have to compare with, say, another string "SomeString".
str = getValueFromSomeRoutine();
Now at runtime you are not sure whether str is null. So to avoid an exception you'll write
if (str != null)
{
    if (str.equals("SomeString"))
    {
        //do stuff
    }
}
To avoid the outer null check you could just write
if ("SomeString".equals(str))
{
//do stuff
}
Though whether this is less readable again depends on the context, it saves you an extra if.
For this and similar questions, can I suggest you find out for yourself by writing a little code, running it through your compiler and viewing the emitted assembler output.
For example, for the GNU compilers, you do this with the -S flag. For the VS compilers, the most convenient route is to run your test program in the debugger and then use the disassembly view.
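For example, a tiny test case you could feed to gcc -O2 -S (or the equivalent option of another compiler) to compare the two spellings:
/* compare.c -- both functions should produce identical code */
int first(int x)  { return x == 1; }
int second(int x) { return 1 == x; }
With optimization enabled, the assembly emitted for the two functions is normally identical.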
Sometimes in C++ they do different things, if the evaluated value is a user type and operator== is defined. Badly.
But that's very rarely the reason anyone would choose one way around over the other: if operator== is not commutative/symmetric, including if the type of the value has a conversion from int, then you have A Problem that probably wants fixing rather than working around. Brian R. Bondy's answer, and others, are probably on the mark for why anyone worries about it in practice.
But the fact remains that even if operator== is commutative, the compiler might not do exactly the same thing in each case. It will (by definition) return the same result, but it might do things in a slightly different order, or whatever.
if value == 1
if 1 == value
are exactly the same, but if you accidentally do
if value = 1
if 1 = value
The first one will compile while the second one will produce an error.
They are the same. Some people prefer putting the 1 first, to avoid accidentally falling into the trap of typing
evaluated value = 1
which could be painful if the value on the left hand side is assignable. This is a common "defensive" pattern in C, for instance.
In C languages it's common to put the constant or magic number first so that if you forget one of the "=" of the equality check (==) then the compiler won't interpret this as an assignment.
In Java, you cannot do an assignment within a boolean expression, so it is irrelevant which order the equality operands are written in; the compiler should flag an error anyway.