Math using OpenSSL BIGNUMs: How to calculate multiplicative inverse? - cryptography

I'm having some issues with OpenSSL BIGNUMs. I'm trying to implement a basic ElGamal protocol. The problem I'm having is that some of these calculations are not returning the values I'm expecting! This is most obvious when I try to calculate a multiplicative inverse.
For example:
static void roughExample() {
    // q (a large prime BIGNUM) and ctx (a BN_CTX) are set up elsewhere
    BIGNUM *u1 = BN_new();
    BIGNUM *u2 = BN_new();
    BIGNUM *one = BN_new();
    BN_one(one);
    BIGNUM *negOne = BN_new();
    BN_zero(negOne);
    BN_sub(negOne, negOne, one); // So negOne should = -1, and it does
    BN_print_fp(stdout, negOne); // shows -1 here
    BN_rand_range(u1, q);        // q = large prime, so u1 = random large num
    BN_print_fp(stdout, u1);
    BN_exp(u2, u1, negOne, ctx);
    BN_print_fp(stdout, u2);     // in the output, u1 == u2
}
The output above comes out as: -1, [large prime], [exact same large prime].
So clearly BN_exp isn't doing what I'm expecting it to. In this case I suppose I could just use BN_div to compute 1/x, but I'm wondering if there is a larger picture that I'm missing. I'm doing lots of calculations like this, and I won't always be able to visually verify that a value changed the way I expected.
Can anyone help? Thanks in advance!
Edit: I tried using 1/x, and it produces a value of 0... :/

You need the BN_mod_inverse function; the other BN functions don't implement that algorithm, and BN_exp does not give a negative exponent the meaning you expect. Note that the inverse you want is a modular inverse, i.e. the value x^(-1) with x * x^(-1) ≡ 1 (mod q); plain integer division 1/x truncates to 0 for any x > 1, which is why your edit printed 0.
You can also perform exponentiation to the power (q-2) modulo q if q is prime: by Fermat's little theorem, the result is the inverse field element.
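A minimal sketch of both approaches, assuming (as in your code) that q is a prime BIGNUM and ctx is a BN_CTX created elsewhere with BN_CTX_new(); error checking omitted for brevity:

#include <openssl/bn.h>
#include <stdio.h>

static void inverseExample(const BIGNUM *u1, const BIGNUM *q, BN_CTX *ctx) {
    BIGNUM *inv1 = BN_new();
    BIGNUM *inv2 = BN_new();
    BIGNUM *qm2  = BN_new();

    /* Approach 1: modular inverse via the extended Euclidean algorithm. */
    BN_mod_inverse(inv1, u1, q, ctx);

    /* Approach 2: Fermat's little theorem, u1^(q-2) mod q (q must be prime). */
    BN_copy(qm2, q);
    BN_sub_word(qm2, 2);               /* qm2 = q - 2 */
    BN_mod_exp(inv2, u1, qm2, q, ctx);

    BN_print_fp(stdout, inv1);         /* the two results agree: */
    BN_print_fp(stdout, inv2);         /* BN_cmp(inv1, inv2) == 0 */

    BN_free(inv1);
    BN_free(inv2);
    BN_free(qm2);
}

BN_mod_inverse also works for a composite modulus, as long as gcd(u1, q) == 1.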

Related

Iterative solution to compute powers

I am working on developing an efficient iterative way to compute m^n. After some thinking and googling I found this code:
public static int power(int n, int m)
// Efficiently calculates m to the power of n iteratively
{
    int pow = m, acc = 1, count = n;
    while (count != 0)
    {
        if (count % 2 == 1)
            acc = acc * pow;
        pow = pow * pow;
        count = count / 2;
    }
    return acc;
}
This logic makes sense to me except for one thing: why are we squaring the value of pow at the end of each iteration? I am familiar with the similar recursive approach, but this squaring is not looking very intuitive to me. Can I kindly get some help here? An example with an explanation would be really helpful.
The base (pow) is being squared each iteration because count (which is the remaining power) is being halved each iteration.
If the count is odd, the accumulator is multiplied by the current base first. This algorithm relies on integer arithmetic, which discards the fractional part of a division, so halving an odd count effectively also decrements it by 1.
This is a tricky solution to understand. I was solving this problem on LeetCode and found this iterative solution; I spent a whole day understanding it. The main difficulty is that this iterative solution does not work the same way as its recursive counterpart.
Let's pick an example to demonstrate. But first I have to rewrite your code, renaming some variables, as the names in the given code are confusing.
// Find m^n
public static int power(int n, int m)
{
    int pow = n, result = 1, base = m;
    while (pow > 0)
    {
        if (pow % 2 == 1) result = result * base;
        base = base * base;
        pow = pow / 2;
    }
    return result;
}
Let's understand the beauty step by step. Say base = 2 and power = 10.

2^10 = (2*2)^5 = 4^5 (power even): We change the base to 4 and the power to 5, so it is now enough to find 4^5. The base is multiplied by itself and the power is halved.

4^5 = 4 * 4^4 (power odd): We separate out a single 4, the base of the current iteration, and store it in the result variable. We will now find the value of 4^4 and then multiply it by the result variable.

4^4 = (4*4)^2 = 16^2 (power even): We change the base to 16 and the power to 2. It is now enough to find 16^2.

16^2 = (16*16)^1 = 256^1 (power even): We change the base to 256 and the power to 1. It is now enough to find 256^1.

256^1 = 256 * 256^0 (power odd): We separate out a single 256, the base of the current iteration (by value it equals 4^4), so we multiply it with our previous result variable, and continue evaluating the remaining 256^0.

256^0 (power zero): Stop the iteration.
So, after translating the process into pseudocode, it looks similar to this:

If power is even:
    base = base * base
    power /= 2
If power is odd:
    result = result * base
    power -= 1

Now, one more observation: floor(5 / 2) and (5 - 1) / 2 are the same. So, for an odd power, we can directly do power /= 2 instead of power -= 1 followed by a halving, and handle both cases in one pass. The pseudocode then becomes:

If power is odd:
    result = result * base
Whether power was odd or even:
    base = base * base
    power /= 2

(Note that the result is multiplied by the current base before the squaring, exactly as in the code above.) I hope you can now see what is going on behind the scenes.
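To see those steps concretely, here is a small trace program (a sketch in C, just for illustration; long long is used so the squared base cannot overflow in this example) that prints (base, power, result) at each iteration for 2^10:

#include <stdio.h>

int main(void) {
    long long base = 2, result = 1;
    int power = 10;
    while (power > 0) {
        printf("base=%lld power=%d result=%lld\n", base, power, result);
        if (power % 2 == 1)
            result *= base;   /* odd: peel off one factor of the current base */
        base *= base;         /* square the base... */
        power /= 2;           /* ...and halve the power */
    }
    printf("2^10 = %lld\n", result); /* prints 1024 */
    return 0;
}

Each printed line corresponds to one row of the table above.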

Kindly explain me this code of increment decrement operator [duplicate]

#include <stdio.h>

int main(void)
{
    int i = 0;
    i = i++ + ++i;
    printf("%d\n", i); // 3

    i = 1;
    i = (i++);
    printf("%d\n", i); // 2 Should be 1, no?

    volatile int u = 0;
    u = u++ + ++u;
    printf("%d\n", u); // 1

    u = 1;
    u = (u++);
    printf("%d\n", u); // 2 Should also be 1, no?

    register int v = 0;
    v = v++ + ++v;
    printf("%d\n", v); // 3 (Should be the same as u?)

    int w = 0;
    printf("%d %d\n", ++w, w); // shouldn't this print 1 1?

    int x[2] = { 5, 8 }, y = 0;
    x[y] = y++;
    printf("%d %d\n", x[0], x[1]); // shouldn't this print 0 8? or 5 0?
}
C has the concept of undefined behavior, i.e. some language constructs are syntactically valid but you can't predict the behavior when the code is run.
As far as I know, the standard doesn't explicitly say why the concept of undefined behavior exists. In my mind, it's simply because the language designers wanted some leeway in the semantics: instead of, say, requiring that all implementations handle integer overflow in exactly the same way, which would very likely impose serious performance costs, they just left the behavior undefined, so that if you write code that causes integer overflow, anything can happen.
So, with that in mind, why are these "issues"? The language clearly says that certain things lead to undefined behavior. There is no problem, there is no "should" involved. If the undefined behavior changes when one of the involved variables is declared volatile, that doesn't prove or change anything. It is undefined; you cannot reason about the behavior.
Your most interesting-looking example, the one with
u = (u++);
is a text-book example of undefined behavior (see Wikipedia's entry on sequence points).
Most of the answers here quote the C standard to emphasize that the behavior of these constructs is undefined. To understand why the behavior of these constructs is undefined, let's first understand these terms in the light of the C11 standard:
Sequenced: (5.1.2.3)
Given any two evaluations A and B, if A is sequenced before B, then the execution of A shall precede the execution of B.
Unsequenced:
If A is not sequenced before or after B, then A and B are unsequenced.
Evaluations can be one of two things:
value computations, which work out the result of an expression; and
side effects, which are modifications of objects.
Sequence Point:
The presence of a sequence point between the evaluation of expressions A and B implies that every value computation and side effect associated with A is sequenced before every value computation and side effect associated with B.
Now coming to the question, for the expressions like
int i = 1;
i = i++;
standard says that:
6.5 Expressions:
If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined. [...]
Therefore, the above expression invokes UB because two side effects on the same object i are unsequenced relative to each other. That means it is not specified whether the side effect of the assignment to i will be done before or after the side effect of ++.
Depending on whether the assignment occurs before or after the increment, different results will be produced, and that is one of the cases of undefined behavior.
Let's rename the i at the left of the assignment il and the one at the right of the assignment (in the expression i++) ir; then the expression is like
il = ir++  // Note that suffixes l and r are used for the sake of clarity.
           // Both il and ir denote the same object i.
An important point regarding the postfix ++ operator is that just because the ++ comes after the variable does not mean that the increment happens late. The increment can happen as early as the compiler likes, as long as the compiler ensures that the original value is used.
It means the expression il = ir++ could be evaluated either as
temp = ir; // i = 1
ir = ir + 1; // i = 2 side effect by ++ before assignment
il = temp; // i = 1 result is 1
or
temp = ir; // i = 1
il = temp; // i = 1 side effect by assignment before ++
ir = ir + 1; // i = 2 result is 2
resulting in two different results, 1 and 2, depending on the order of the side effects of the assignment and of ++, and hence invoking UB.
I think the relevant parts of the C99 standard are 6.5 Expressions, §2
Between the previous and next sequence point an object shall have its stored value
modified at most once by the evaluation of an expression. Furthermore, the prior value
shall be read only to determine the value to be stored.
and 6.5.16 Assignment operators, §4:
The order of evaluation of the operands is unspecified. If an attempt is made to modify
the result of an assignment operator or to access it after the next sequence point, the
behavior is undefined.
Just compile and disassemble your line of code, if you are inclined to find out how exactly you get what you are getting.
This is what I get on my machine, together with what I think is going on:
$ cat evil.c
void evil(){
int i = 0;
i+= i++ + ++i;
}
$ gcc evil.c -c -o evil.bin
$ gdb evil.bin
(gdb) disassemble evil
Dump of assembler code for function evil:
0x00000000 <+0>: push %ebp
0x00000001 <+1>: mov %esp,%ebp
0x00000003 <+3>: sub $0x10,%esp
0x00000006 <+6>: movl $0x0,-0x4(%ebp) // i = 0 i = 0
0x0000000d <+13>: addl $0x1,-0x4(%ebp) // i++ i = 1
0x00000011 <+17>: mov -0x4(%ebp),%eax // j = i i = 1 j = 1
0x00000014 <+20>: add %eax,%eax // j += j i = 1 j = 2
0x00000016 <+22>: add %eax,-0x4(%ebp) // i += j i = 3
0x00000019 <+25>: addl $0x1,-0x4(%ebp) // i++ i = 4
0x0000001d <+29>: leave
0x0000001e <+30>: ret
End of assembler dump.
(I... suppose that the 0x00000014 instruction was some kind of compiler optimization?)
The behavior can't really be explained, because it invokes both unspecified behavior and undefined behavior, so we cannot make any general predictions about this code. Although, if you read Olve Maudal's work, such as Deep C and Unspecified and Undefined, you can sometimes make good guesses in very specific cases with a specific compiler and environment, please don't do that anywhere near production.
So, moving on to unspecified behavior: the draft C99 standard, section 6.5 paragraph 3, says (emphasis mine):
The grouping of operators and operands is indicated by the syntax.74) Except as specified
later (for the function-call (), &&, ||, ?:, and comma operators), the order of evaluation of subexpressions and the order in which side effects take place are both unspecified.
So when we have a line like this:
i = i++ + ++i;
we do not know whether i++ or ++i will be evaluated first. This is mainly to give the compiler better options for optimization.
We also have undefined behavior here, since the program is modifying variables (i, u, etc.) more than once between sequence points. From the draft standard, section 6.5 paragraph 2 (emphasis mine):
Between the previous and next sequence point an object shall have its stored value
modified at most once by the evaluation of an expression. Furthermore, the prior value
shall be read only to determine the value to be stored.
it cites the following code examples as being undefined:
i = ++i + 1;
a[i++] = i;
In all these examples the code is attempting to modify an object more than once between sequence points; the next sequence point is the ; at the end of each statement:
i = i++ + ++i;
^ ^ ^
i = (i++);
^ ^
u = u++ + ++u;
^ ^ ^
u = (u++);
^ ^
v = v++ + ++v;
^ ^ ^
Unspecified behavior is defined in the draft C99 standard, section 3.4.4, as:
use of an unspecified value, or other behavior where this International Standard provides
two or more possibilities and imposes no further requirements on which is chosen in any
instance
and undefined behavior is defined in section 3.4.3 as:
behavior, upon use of a nonportable or erroneous program construct or of erroneous data,
for which this International Standard imposes no requirements
and notes that:
Possible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).
Another way of answering this, rather than getting bogged down in arcane details of sequence points and undefined behavior, is simply to ask, what are they supposed to mean? What was the programmer trying to do?
The first fragment asked about, i = i++ + ++i, is pretty clearly insane in my book. No one would ever write it in a real program, it's not obvious what it does, there's no conceivable algorithm someone could have been trying to code that would have resulted in this particular contrived sequence of operations. And since it's not obvious to you and me what it's supposed to do, it's fine in my book if the compiler can't figure out what it's supposed to do, either.
The second fragment, i = i++, is a little easier to understand. Someone is clearly trying to increment i, and assign the result back to i. But there are a couple ways of doing this in C. The most basic way to add 1 to i, and assign the result back to i, is the same in almost any programming language:
i = i + 1
C, of course, has a handy shortcut:
i++
This means, "add 1 to i, and assign the result back to i". So if we construct a hodgepodge of the two, by writing
i = i++
what we're really saying is "add 1 to i, and assign the result back to i, and assign the result back to i". We're confused, so it doesn't bother me too much if the compiler gets confused, too.
Realistically, the only time these crazy expressions get written is when people are using them as artificial examples of how ++ is supposed to work. And of course it is important to understand how ++ works. But one practical rule for using ++ is, "If it's not obvious what an expression using ++ means, don't write it."
We used to spend countless hours on comp.lang.c discussing expressions like these and why they're undefined. Two of my longer answers, that try to really explain why, are archived on the web:
Why doesn't the Standard define what these do?
Doesn't operator precedence determine the order of evaluation?
See also question 3.8 and the rest of the questions in section 3 of the C FAQ list.
Often this question is linked as a duplicate of questions related to code like
printf("%d %d\n", i, i++);
or
printf("%d %d\n", ++i, i++);
or similar variants.
While this is also undefined behaviour, as stated already, there are subtle differences when printf() is involved, compared to a statement such as:
x = i++ + i++;
In the following statement:
printf("%d %d\n", ++i, i++);
the order of evaluation of arguments in printf() is unspecified. That means the expressions i++ and ++i could be evaluated in any order. The C11 standard has some relevant descriptions of this:
Annex J, unspecified behaviours
The order in which the function designator, arguments, and
subexpressions within the arguments are evaluated in a function call
(6.5.2.2).
3.4.4, unspecified behavior
Use of an unspecified value, or other behavior where this
International Standard provides two or more possibilities and imposes
no further requirements on which is chosen in any instance.
EXAMPLE An example of unspecified behavior is the order in which the
arguments to a function are evaluated.
The unspecified behaviour itself is NOT an issue. Consider this example:
printf("%d %d\n", ++x, y++);
This too has unspecified behaviour, because the order of evaluation of ++x and y++ is unspecified. But it's a perfectly legal and valid statement. There's no undefined behaviour in this statement, because the modifications (++x and y++) are done to distinct objects.
What renders the following statement
printf("%d %d\n", ++i, i++);
as undefined behaviour is the fact that these two expressions modify the same object i without an intervening sequence point.
Another detail is that the comma involved in the printf() call is a separator, not the comma operator.
This is an important distinction, because the comma operator does introduce a sequence point between the evaluation of its operands, which makes the following legal:
int i = 5;
int j;
j = (++i, i++); // No undefined behaviour here because the comma operator
// introduces a sequence point between '++i' and 'i++'
printf("i=%d j=%d\n",i, j); // prints: i=7 j=6
The comma operator evaluates its operands left to right and yields only the value of the last operand. So in j = (++i, i++);, ++i increments i to 6 and i++ yields the old value of i (6), which is assigned to j. Then i becomes 7 due to the post-increment.
So if the comma in the function call were a comma operator, then
printf("%d %d\n", ++i, i++);
would not be a problem. But it invokes undefined behaviour because the comma here is a separator.
Those who are new to undefined behaviour would benefit from reading What Every C Programmer Should Know About Undefined Behavior to understand the concept and the many other variants of undefined behaviour in C.
This post: Undefined, unspecified and implementation-defined behavior is also relevant.
While it is unlikely that any compilers and processors would actually do so, it would be legal, under the C standard, for the compiler to implement "i++" with the sequence:
In a single operation, read `i` and lock it to prevent access until further notice
Compute (1+read_value)
In a single operation, unlock `i` and store the computed value
While I don't think any processors support the hardware to allow such a thing to be done efficiently, one can easily imagine situations where such behavior would make multi-threaded code easier (e.g. it would guarantee that if two threads try to perform the above sequence simultaneously, i would get incremented by two) and it's not totally inconceivable that some future processor might provide a feature something like that.
If the compiler were to write i++ as indicated above (legal under the standard) and were to intersperse the above instructions throughout the evaluation of the overall expression (also legal), and if it didn't happen to notice that one of the other instructions happened to access i, it would be possible (and legal) for the compiler to generate a sequence of instructions that would deadlock. To be sure, a compiler would almost certainly detect the problem in the case where the same variable i is used in both places, but if a routine accepts references to two pointers p and q, and uses (*p) and (*q) in the above expression (rather than using i twice) the compiler would not be required to recognize or avoid the deadlock that would occur if the same object's address were passed for both p and q.
While the syntax of expressions like a = a++ or a++ + a++ is legal, the behaviour of these constructs is undefined because a shall requirement of the C standard is not obeyed. C99 6.5p2:
Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression. [72] Furthermore, the prior value shall be read only to determine the value to be stored [73]
With footnote 73 further clarifying that
This paragraph renders undefined statement expressions such as
i = ++i + 1;
a[i++] = i;
while allowing
i = i + 1;
a[i] = i;
The various sequence points are listed in Annex C of C11 (and C99):
The following are the sequence points described in 5.1.2.3:
Between the evaluations of the function designator and actual arguments in a function call and the actual call. (6.5.2.2).
Between the evaluations of the first and second operands of the following operators: logical AND && (6.5.13); logical OR || (6.5.14); comma , (6.5.17).
Between the evaluations of the first operand of the conditional ? : operator and whichever of the second and third operands is evaluated (6.5.15).
The end of a full declarator: declarators (6.7.6);
Between the evaluation of a full expression and the next full expression to be evaluated. The following are full expressions: an initializer that is not part of a compound literal (6.7.9); the expression in an expression statement (6.8.3); the controlling expression of a selection statement (if or switch) (6.8.4); the controlling expression of a while or do statement (6.8.5); each of the (optional) expressions of a for statement (6.8.5.3); the (optional) expression in a return statement (6.8.6.4).
Immediately before a library function returns (7.1.4).
After the actions associated with each formatted input/output function conversion specifier (7.21.6, 7.29.2).
Immediately before and immediately after each call to a comparison function, and also between any call to a comparison function and any movement of the objects passed as arguments to that call (7.22.5).
The wording of the same paragraph in C11 is:
If a side effect on a scalar object is unsequenced relative to either a different side effect on the same scalar object or a value computation using the value of the same scalar object, the behavior is undefined. If there are multiple allowable orderings of the subexpressions of an expression, the behavior is undefined if such an unsequenced side effect occurs in any of the orderings.84)
You can detect such errors in a program by, for example, using a recent version of GCC with -Wall and -Werror; GCC will then outright refuse to compile your program. The following is the output of gcc (Ubuntu 6.2.0-5ubuntu12) 6.2.0 20161005:
% gcc plusplus.c -Wall -Werror -pedantic
plusplus.c: In function ‘main’:
plusplus.c:6:6: error: operation on ‘i’ may be undefined [-Werror=sequence-point]
i = i++ + ++i;
~~^~~~~~~~~~~
plusplus.c:6:6: error: operation on ‘i’ may be undefined [-Werror=sequence-point]
plusplus.c:10:6: error: operation on ‘i’ may be undefined [-Werror=sequence-point]
i = (i++);
~~^~~~~~~
plusplus.c:14:6: error: operation on ‘u’ may be undefined [-Werror=sequence-point]
u = u++ + ++u;
~~^~~~~~~~~~~
plusplus.c:14:6: error: operation on ‘u’ may be undefined [-Werror=sequence-point]
plusplus.c:18:6: error: operation on ‘u’ may be undefined [-Werror=sequence-point]
u = (u++);
~~^~~~~~~
plusplus.c:22:6: error: operation on ‘v’ may be undefined [-Werror=sequence-point]
v = v++ + ++v;
~~^~~~~~~~~~~
plusplus.c:22:6: error: operation on ‘v’ may be undefined [-Werror=sequence-point]
cc1: all warnings being treated as errors
The important part is knowing what a sequence point is, and what is one and what isn't. For example, the comma operator is a sequence point, so
j = (i++, ++i);
is well-defined: it increments i by one, yielding the old value, and discards that value; then, at the comma operator, it settles the side effects; and then it increments i by one again, and the resulting value becomes the value of the expression. That is, this is just a contrived way to write j = (i += 2), which is yet again a "clever" way to write
i += 2;
j = i;
However, the , in function argument lists is not a comma operator, and there is no sequence point between evaluations of distinct arguments; instead, their evaluations are unsequenced with regard to each other; so the function call
int i = 0;
printf("%d %d %d\n", i++, ++i, i);
has undefined behaviour, because there is no sequence point between the evaluations of i++ and ++i in the function arguments, and the value of i is therefore modified twice, by both i++ and ++i, between the previous and the next sequence point.
The C standard says that a variable may only be modified at most once between two sequence points. A semicolon, for instance, marks a sequence point.
So every statement of the form:
i = i++;
i = i++ + ++i;
and so on violate that rule. The standard also says that the behavior is undefined, not merely unspecified. Some compilers do detect these constructs and produce some result, but that is not guaranteed by the standard.
However, two different objects can be incremented between two sequence points:
while (*dst++ = *src++);
The above is a common coding practice when copying/analysing strings.
In https://stackoverflow.com/questions/29505280/incrementing-array-index-in-c someone asked about a statement like:
int k[] = {0,1,2,3,4,5,6,7,8,9,10};
int i = 0;
int num;
num = k[++i+k[++i]] + k[++i];
printf("%d", num);
which prints 7... the OP expected it to print 6.
The ++i increments aren't guaranteed to have all completed before the rest of the calculation. In fact, different compilers will produce different results here. In the example you provided, the first two ++i executed, then the values of k[] were read, then the last ++i, then the last k[].
A well-defined way to write what was probably intended is:
num = k[i+1] + k[i+2] + k[i+3];
i += 3;
Modern compilers will optimize this very well; in fact, possibly better than the code you originally wrote (assuming it had worked the way you had hoped).
Your question was probably not, "Why are these constructs undefined behavior in C?". Your question was probably, "Why did this code (using ++) not give me the value I expected?", and someone marked your question as a duplicate, and sent you here.
This answer tries to answer that question: why did your code not give you the answer you expected, and how can you learn to recognize (and avoid) expressions that will not work as expected.
I assume you've heard the basic definition of C's ++ and -- operators by now, and how the prefix form ++x differs from the postfix form x++. But these operators are hard to think about, so to make sure you understood, perhaps you wrote a tiny little test program involving something like
int x = 5;
printf("%d %d %d\n", x, ++x, x++);
But, to your surprise, this program did not help you understand — it printed some strange, inexplicable output, suggesting that maybe ++ does something completely different, not at all what you thought it did.
Or, perhaps you're looking at a hard-to-understand expression like
int x = 5;
x = x++ + ++x;
printf("%d\n", x);
Perhaps someone gave you that code as a puzzle. This code also makes no sense, especially if you run it — and if you compile and run it under two different compilers, you're likely to get two different answers! What's up with that? Which answer is correct? (And the answer is that both of them are, or neither of them are.)
As you've heard by now, these expressions are undefined, which means that the C language makes no guarantee about what they'll do. This is a strange and unsettling result, because you probably thought that any program you could write, as long as it compiled and ran, would generate a unique, well-defined output. But in the case of undefined behavior, that's not so.
What makes an expression undefined? Are expressions involving ++ and -- always undefined? Of course not: these are useful operators, and if you use them properly, they're perfectly well-defined.
For the expressions we're talking about, what makes them undefined is when there's too much going on at once, when we can't tell what order things will happen in, but when the order matters to the result we'll get.
Let's go back to the two examples I've used in this answer. When I wrote
printf("%d %d %d\n", x, ++x, x++);
the question is, before actually calling printf, does the compiler compute the value of x first, or x++, or maybe ++x? But it turns out we don't know. There's no rule in C which says that the arguments to a function get evaluated left-to-right, or right-to-left, or in some other order. So we can't say whether the compiler will do x first, then ++x, then x++, or x++ then ++x then x, or some other order. But the order clearly matters, because depending on which order the compiler uses, we'll clearly get a different series of numbers printed out.
What about this crazy expression?
x = x++ + ++x;
The problem with this expression is that it contains three different attempts to modify the value of x: (1) the x++ part tries to take x's value, add 1, store the new value in x, and return the old value; (2) the ++x part tries to take x's value, add 1, store the new value in x, and return the new value; and (3) the x = part tries to assign the sum of the other two back to x. Which of those three attempted assignments will "win"? Which of the three values will actually determine the final value of x? Again, and perhaps surprisingly, there's no rule in C to tell us.
You might imagine that precedence or associativity or left-to-right evaluation tells you what order things happen in, but they do not. You may not believe me, but please take my word for it, and I'll say it again: precedence and associativity do not determine every aspect of the evaluation order of an expression in C. In particular, if within one expression there are multiple different spots where we try to assign a new value to something like x, precedence and associativity do not tell us which of those attempts happens first, or last, or anything.
So with all that background and introduction out of the way, if you want to make sure that all your programs are well-defined, which expressions can you write, and which ones can you not write?
These expressions are all fine:
y = x++;
z = x++ + y++;
x = x + 1;
x = a[i++];
x = a[i++] + b[j++];
x[i++] = a[j++] + b[k++];
x = *p++;
x = *p++ + *q++;
These expressions are all undefined:
x = x++;
x = x++ + ++x;
y = x + x++;
a[i] = i++;
a[i++] = i;
printf("%d %d %d\n", x, ++x, x++);
And the last question is, how can you tell which expressions are well-defined, and which expressions are undefined?
As I said earlier, the undefined expressions are the ones where there's too much going on at once, where you can't be sure what order things happen in, and where the order matters:
If there's one variable that's getting modified (assigned to) in two or more different places, how do you know which modification happens first?
If there's a variable that's getting modified in one place, and having its value used in another place, how do you know whether it uses the old value or the new value?
As an example of #1, in the expression
x = x++ + ++x;
there are three attempts to modify x.
As an example of #2, in the expression
y = x + x++;
we both use the value of x, and modify it.
So that's the answer: make sure that in any expression you write, each variable is modified at most once, and if a variable is modified, you don't also attempt to use the value of that variable somewhere else.
One more thing. You might be wondering how to "fix" the undefined expressions I started this answer by presenting.
In the case of printf("%d %d %d\n", x, ++x, x++);, it's easy — just write it as three separate printf calls:
printf("%d ", x);
printf("%d ", ++x);
printf("%d\n", x++);
Now the behavior is perfectly well defined, and you'll get sensible results.
In the case of x = x++ + ++x, on the other hand, there's no way to fix it. There's no way to write it so that it has guaranteed behavior matching your expectations — but that's okay, because you would never write an expression like x = x++ + ++x in a real program anyway.
A good explanation of what happens in this kind of computation is provided in document n1188 from the ISO WG14 site.
I explain the ideas.
The main rule from the standard ISO 9899 that applies in this situation is 6.5p2.
Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression. Furthermore, the prior value shall be read only to determine the value to be stored.
The sequence points in an expression like i=i++ are before i= and after i++.
In the paper quoted above, it is explained that you can think of the program as being formed by small boxes, each box containing the instructions between two consecutive sequence points. The sequence points are defined in annex C of the standard; in the case of i=i++, there are two sequence points that delimit a full expression. Such an expression is syntactically equivalent to an entry of expression-statement in the Backus-Naur form of the grammar (a grammar is provided in annex A of the Standard).
So the instructions inside a box have no clear order.
i = i++
can be interpreted as
tmp = i;
i = i + 1;
i = tmp;
or as
tmp = i;
i = tmp;
i = i + 1;
Because both of these interpretations of i = i++ are valid, and because they generate different answers, the behavior is undefined.
So a sequence point marks the beginning and the end of each box that composes the program (the boxes are atomic units in C), and inside a box the order of instructions is not defined in all cases. Changing that order can sometimes change the result.
EDIT:
Another good source for explanations of such ambiguities is the c-faq site (also published as a book), in particular the questions in section 3 on expressions.
The reason is that the program exhibits undefined behavior. The problem lies in the evaluation order, because no sequence points are required according to the C++98 standard (no operation is sequenced before or after another, in C++11 terminology).
However, if you stick to one compiler, you will find the behavior consistent, as long as you don't add function calls or pointers, which would make the behavior messier.
Using Nuwen MinGW 15 (GCC 7.1) you will get:
#include <stdio.h>

int main(int argc, char **argv)
{
    int i = 0;
    i = i++ + ++i;
    printf("%d\n", i); // 2

    i = 1;
    i = (i++);
    printf("%d\n", i); // 1

    volatile int u = 0;
    u = u++ + ++u;
    printf("%d\n", u); // 2

    u = 1;
    u = (u++);
    printf("%d\n", u); // 1

    register int v = 0;
    v = v++ + ++v;
    printf("%d\n", v); // 2
}
How does GCC behave here? It evaluates subexpressions in left-to-right order on the right-hand side (RHS), then assigns the value to the left-hand side (LHS). This is exactly how Java and C# behave and define their standards (yes, the equivalent code in Java and C# has defined behavior). GCC evaluates each subexpression of the RHS statement one by one, in left-to-right order; for each subexpression, the pre-increment ++c is evaluated first, then the value of c is used for the operation, then the post-increment c++ is applied.
According to GCC C++: Operators:
In GCC C++, the precedence of the operators controls the order in
which the individual operators are evaluated
The equivalent code with defined behavior, as GCC understands the original C++:
#include <stdio.h>

int main(int argc, char **argv)
{
    int i = 0;
    //i = i++ + ++i;
    int r;
    r = i;
    i++;
    ++i;
    r += i;
    i = r;
    printf("%d\n", i); // 2

    i = 1;
    //i = (i++);
    r = i;
    i++;
    i = r;
    printf("%d\n", i); // 1

    volatile int u = 0;
    //u = u++ + ++u;
    r = u;
    u++;
    ++u;
    r += u;
    u = r;
    printf("%d\n", u); // 2

    u = 1;
    //u = (u++);
    r = u;
    u++;
    u = r;
    printf("%d\n", u); // 1

    register int v = 0;
    //v = v++ + ++v;
    r = v;
    v++;
    ++v;
    r += v;
    v = r;
    printf("%d\n", v); // 2
}
Then we move on to Visual Studio. With Visual Studio 2015, you get:
#include <stdio.h>

int main(int argc, char **argv)
{
    int i = 0;
    i = i++ + ++i;
    printf("%d\n", i); // 3

    i = 1;
    i = (i++);
    printf("%d\n", i); // 2

    volatile int u = 0;
    u = u++ + ++u;
    printf("%d\n", u); // 3

    u = 1;
    u = (u++);
    printf("%d\n", u); // 2

    register int v = 0;
    v = v++ + ++v;
    printf("%d\n", v); // 3
}
How does Visual Studio behave? It takes another approach: it evaluates all pre-increment expressions in a first pass, uses the variables' values in the operations in a second pass, assigns from RHS to LHS in a third pass, and then in a last pass evaluates all the post-increment expressions.
So the equivalent code with defined behavior, as Visual C++ understands the original:
#include <stdio.h>

int main(int argc, char **argv)
{
    int r;
    int i = 0;
    //i = i++ + ++i;
    ++i;
    r = i + i;
    i = r;
    i++;
    printf("%d\n", i); // 3

    i = 1;
    //i = (i++);
    r = i;
    i = r;
    i++;
    printf("%d\n", i); // 2

    volatile int u = 0;
    //u = u++ + ++u;
    ++u;
    r = u + u;
    u = r;
    u++;
    printf("%d\n", u); // 3

    u = 1;
    //u = (u++);
    r = u;
    u = r;
    u++;
    printf("%d\n", u); // 2

    register int v = 0;
    //v = v++ + ++v;
    ++v;
    r = v + v;
    v = r;
    v++;
    printf("%d\n", v); // 3
}
As the Visual Studio documentation states in Precedence and Order of Evaluation:
Where several operators appear together, they have equal precedence and are evaluated according to their associativity. The operators in the table are described in the sections beginning with Postfix Operators.

D std.random different behavior between integer and decimal uniform random number generation

I'm generating some seeded random numbers in D using the default random number generation engine, and I'm getting some "slightly off" values depending on whether I request integer or floating point/double random number generation. For example this code:
import std.random; // From docs: alias Random = MersenneTwisterEngine!(uint, 32, 624, 397, 31, 2567483615u, 11, 7, 2636928640u, 15, 4022730752u, 18).MersenneTwisterEngine;
import std.stdio;  // for writefln

int seed = 123;
Random genint = Random(seed);
Random gendouble = Random(seed);
foreach (i; 0..4) {
    writefln("rand[%d]: %d %f", i, uniform(0, 1000000, genint), uniform(0.0, 1000000.0, gendouble));
}
generates these results:
rand[0]: 696626 696469.187433
rand[1]: 713115 712955.321584
rand[2]: 286203 286139.338810
rand[3]: 428567 428470.925062
For some reason they're close but off by a noticeable amount. Is this expected behavior? I'm not completely familiar with the method being used to generate the numbers so I'm wondering what might be causing this to occur. Specifically, the reason I'm curious about this is I'm trying to port some D code to Objective-C, using the MTRandom library:
int seed = 123;
MTRandom *genint = [[MTRandom alloc] initWithSeed:seed];
MTRandom *gendouble = [[MTRandom alloc] initWithSeed:seed];
for (int i = 0; i < 4; i++) {
    NSLog(@"rand[%d]: %d %f", i, [genint randomUInt32From:0 to:1000000], [gendouble randomDoubleFrom:0.0 to:1000000.0]);
}
and I'm getting these results:
rand[0]: 696469 696469.187433
rand[1]: 712956 712955.321584
rand[2]: 286139 286139.338810
rand[3]: 428471 428470.925062
Identical for retrieving random doubles, but here the integer results are much closer.. though sometimes off by one! Is 712955.32 supposed to be rounded up to 712956 using this method of random generation? I just don't know...
What's the best way I can ensure that my seeded random integer generation is identical between D and Objective-C?
The following is based on a very quick review of the docs page:
I don't see where it's indicated that the results of uniform!int and uniform!double should even be remotely close, nor any specification of how the bits from the random number generator get converted to the requested type.
In short, I think the ints and doubles being close is an implementation detail that you cannot depend on. Furthermore, I'd not depend on the Objective-C and D implementations returning the same sequence of doubles, even from the same seed. (I expect you can count on the MT construct producing the same bit stream, however.)
If you need the same sequence, then you will need to crack open the Objective-C version and port it to D as well, or derive your integers from the raw 32-bit MT output yourself, the same way on both sides; a sketch follows.
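For illustration, here is a self-contained C sketch of the MT19937 core (the same constants as the D alias quoted in the question) with a deliberately explicit integer mapping. The modulo mapping is a hypothetical choice and has modulo bias, which may be acceptable when the goal is only cross-language reproducibility; implement the identical mapping on both sides and the integer sequences will match:

#include <stdint.h>
#include <stdio.h>

static uint32_t mt[624];
static int mti = 625; /* forces a refill until seeded */

static void mt_seed(uint32_t s) {
    mt[0] = s;
    for (mti = 1; mti < 624; mti++)
        mt[mti] = 1812433253u * (mt[mti - 1] ^ (mt[mti - 1] >> 30)) + (uint32_t)mti;
}

static uint32_t mt_next(void) {
    uint32_t y;
    if (mti >= 624) { /* regenerate the whole state block */
        for (int i = 0; i < 624; i++) {
            y = (mt[i] & 0x80000000u) | (mt[(i + 1) % 624] & 0x7fffffffu);
            mt[i] = mt[(i + 397) % 624] ^ (y >> 1) ^ ((y & 1u) ? 2567483615u : 0u);
        }
        mti = 0;
    }
    y = mt[mti++];
    y ^= y >> 11;                    /* tempering, constants as in the alias */
    y ^= (y << 7) & 2636928640u;
    y ^= (y << 15) & 4022730752u;
    y ^= y >> 18;
    return y;
}

int main(void) {
    mt_seed(123);
    for (int i = 0; i < 4; i++)
        printf("rand[%d]: %u\n", i, mt_next() % 1000000u); /* explicit mapping */
    return 0;
}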

How to 'checksum' an array of noisy floating point numbers?

What is a quick and easy way to 'checksum' an array of floating point numbers, while allowing for a specified small amount of inaccuracy?
e.g. I have two algorithms which should (in theory, with infinite precision) output the same array. But they work differently, and so floating point errors will accumulate differently, though the array lengths should be exactly the same. I'd like a quick and easy way to test if the arrays seem to be the same. I could of course compare the numbers pairwise, and report the maximum error; but one algorithm is in C++ and the other is in Mathematica and I don't want the bother of writing out the numbers to a file or pasting them from one system to another. That's why I want a simple checksum.
I could simply add up all the numbers in the array. If the array length is N, and I can tolerate an error of 0.0001 in each number, then I would check if abs(sum1-sum2)<0.0001*N. But this simplistic 'checksum' is not robust, e.g. to an error of +10 in one entry and -10 in another. (And anyway, probability theory says that the error probably grows like sqrt(N), not like N.) Of course, any checksum is a low-dimensional summary of a chunk of data so it will miss some errors, if not most... but simple checksums are nonetheless useful for finding non-malicious bug-type errors.
Or I could create a two-dimensional checksum, [sum(x[n]), sum(abs(x[n]))]. But is that the best I can do, i.e. is there a different function I might use that would be "more orthogonal" to sum(x[n])? And if I used some arbitrary functions, e.g. [sum(f1(x[n])), sum(f2(x[n]))], then how should my 'raw error tolerance' translate into a 'checksum error tolerance'?
I'm programming in C++, but I'm happy to see answers in any language.
I have a feeling that what you want may be possible via something like Gray codes. If you could translate your values into Gray codes and use some kind of checksum that was able to correct n bits, you could detect whether or not the two arrays were the same except for n-1 bits of error, right? (Each bit of error means a number is "off by one", where the mapping is such that this is a variation in the least significant digit.)
But the exact details are beyond me, particularly for floating point values.
I don't know if it helps, but what Gray codes solve is the problem of pathological rounding. Rounding sounds like it will solve the problem; a naive solution might round and then checksum. But simple rounding always has pathological cases: for example, if we use floor, then 0.9999999 and 1 are distinct. A Gray code approach seems to address that, since neighbouring values are always a single bit away, so a bit-based checksum will accurately reflect "distance".
[update:] More exactly, what you want is a checksum that gives an estimate of the Hamming distance between your Gray-encoded sequences (and the Gray-encoding part is easy if you just care about 0.0001, since you can multiply everything by 10000 and use integers).
And it seems like such checksums do exist: "Any error-correcting code can be used for error detection. A code with minimum Hamming distance, d, can detect up to d − 1 errors in a code word. Using minimum-distance-based error-correcting codes for error detection can be suitable if a strict limit on the minimum number of errors to be detected is desired."
So, just in case it's not clear (a sketch of the first two steps follows below):
multiply by 1/tolerance (e.g. 10000) to get integers
convert to the Gray code equivalent
use an error-detecting code with a minimum Hamming distance larger than the error you can tolerate
But I am still not sure that's right. You still get the pathological rounding in the conversion from float to integer, so it seems like you need a minimum Hamming distance of 1 + len(data) (worst case, with a rounding error on each value). Is that feasible? Probably not for large arrays.
Maybe ask again with better tags/description now that a general direction is possible? Or just add tags now? We need someone who does this for a living. [I added a couple of tags]
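The quantize-and-encode step as a hedged C sketch (assuming non-negative values that fit in 32 bits after scaling; g = q ^ (q >> 1) is the standard binary-reflected Gray code):

#include <stdint.h>
#include <math.h>

/* Quantize v to units of tol, then Gray-encode. The rounding caveat
   above still applies for values sitting near a bucket boundary. */
static uint32_t gray_encode(double v, double tol) {
    uint32_t q = (uint32_t)llround(v / tol); /* e.g. tol = 0.0001 -> x10000 */
    return q ^ (q >> 1);                     /* neighbours differ by one bit */
}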
I've spent a while looking for a deterministic answer, and been unable to find one. If there is a good answer, it's likely to require heavy-duty mathematical skills (functional analysis).
I'm pretty sure there is no solution based on "discretize in some cunning way, then apply a discrete checksum", e.g. "discretize into strings of 0/1/?, where ? means wildcard". Any discretization will have the property that two floating-point numbers very close to each other can end up with different discrete codes, and then the discrete checksum won't tell us what we want to know.
However, a very simple randomized scheme should work fine. Generate a pseudorandom string S from the alphabet {+1,-1}, and compute csx=sum(X_i*S_i) and csy=sum(Y_i*S_i), where X and Y are my original arrays of floating point numbers. If we model the errors as independent Normal random variables with mean 0, then it's easy to compute the distribution of csx-csy. We could do this for several strings S, and then do a hypothesis test that the mean error is 0. The number of strings S needed for the test is fixed; it doesn't grow linearly in the size of the arrays, so it satisfies my need for a "low-dimensional summary". This method also gives an estimate of the standard deviation of the error, which may be handy.
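A sketch of that scheme in C; the LCG is just one reproducible sign-string generator (any PRNG both systems can reproduce will do), and the acceptance threshold below is illustrative:

#include <stdint.h>
#include <stdio.h>
#include <math.h>

static uint32_t lcg_next(uint32_t *s) {
    *s = *s * 1664525u + 1013904223u;     /* Numerical Recipes constants */
    return *s;
}

/* Signed sum cs = sum(x[i] * S_i) with S_i in {+1, -1} drawn from the LCG. */
static double signed_sum(const double *x, size_t n, uint32_t seed) {
    uint32_t s = seed;
    double cs = 0.0;
    for (size_t i = 0; i < n; i++)
        cs += ((lcg_next(&s) >> 31) ? 1.0 : -1.0) * x[i];
    return cs;
}

int main(void) {
    double x[] = {1.00000, 2.00000, 3.00000};
    double y[] = {1.00002, 1.99999, 3.00001}; /* same up to small noise */
    size_t n = sizeof x / sizeof x[0];
    double d = signed_sum(x, n, 42u) - signed_sum(y, n, 42u);
    /* Under the independent-error model, |d| of order tol*sqrt(n) is expected;
       repeat with several seeds for the hypothesis test described above. */
    printf("difference = %g (tolerance scale ~%g)\n", d, 1e-4 * sqrt((double)n));
    return 0;
}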
Try this:
#include <complex>
#include <cmath>
#include <cstddef>
#include <iostream>

// PARAMETERS
const size_t no_freqs = 3;
const double freqs[no_freqs] = {0.05, 0.16, 0.39}; // (for example)

int main() {
    std::complex<double> spectral_amplitude[no_freqs];
    for (size_t i = 0; i < no_freqs; ++i) spectral_amplitude[i] = 0.0;
    size_t n_data = 0;
    {
        std::complex<double> datum;
        while (std::cin >> datum) {
            for (size_t i = 0; i < no_freqs; ++i) {
                spectral_amplitude[i] += datum * std::exp(
                    std::complex<double>(0.0, 1.0) * freqs[i] * double(n_data)
                );
            }
            ++n_data;
        }
    }
    std::cout << "Fuzzy checksum:\n";
    for (size_t i = 0; i < no_freqs; ++i) {
        std::cout << real(spectral_amplitude[i]) << "\n";
        std::cout << imag(spectral_amplitude[i]) << "\n";
    }
    std::cout << "\n";
    return 0;
}
It returns just a few, arbitrary points of a Fourier transform of the entire data set. These make a fuzzy checksum, so to speak.
How about computing a standard integer checksum on the data obtained by zeroing the least significant digits of the data, the ones that you don't care about?
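A minimal sketch of that idea, with the caveat from another answer here that values sitting near a quantization boundary can still round differently on the two systems:

#include <stdint.h>
#include <stddef.h>
#include <math.h>

/* Quantize each value to the tolerance you care about, then feed the
   integers to any ordinary integer checksum (a plain sum shown here). */
static int64_t fuzzy_checksum(const double *x, size_t n, double tol) {
    int64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += (int64_t)llround(x[i] / tol); /* zeroes the digits below tol */
    return sum;
}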

What are the practical uses of modulus (%) in programming? [duplicate]

Possible Duplicate:
Recognizing when to use the mod operator
What are the practical uses of modulus? I know what modulo division is. The first scenarios that come to my mind are finding odd and even numbers, and clock arithmetic. But where else could I use it?
The most common use I've found is for "wrapping round" your array indices.
For example, if you just want to cycle through an array repeatedly, you could use:
int a[10];
for (int i = 0; true; i = (i + 1) % 10)
{
    // ... use a[i] ...
}
The modulo ensures that i stays in the [0, 10) range.
I usually use it in tight loops, when I have to do something every X iterations as opposed to on every iteration.
Example:
int i;
for (i = 1; i <= 1000000; i++)
{
    do_something(i);
    if (i % 1000 == 0)
        printf("%d processed\n", i);
}
One use for the modulus operation is when making a hash table. It's used to convert the value out of the hash function into an index into the array. (If the hash table size is a power of two, the modulus could be done with a bit-mask, but it's still a modulus operation.)
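As a sketch, with a hypothetical table size and the classic djb2 string hash standing in for whatever hash function is in use:

#include <stdio.h>

#define TABLE_SIZE 1024 /* hypothetical size for illustration */

/* djb2: a simple, well-known string hash. */
static unsigned long djb2(const char *s) {
    unsigned long h = 5381;
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h;
}

int main(void) {
    const char *key = "example";
    unsigned long index = djb2(key) % TABLE_SIZE; /* hash value -> bucket index */
    printf("bucket for \"%s\": %lu\n", key, index);
    return 0;
}

Since TABLE_SIZE is a power of two here, the reduction could equally be written as djb2(key) & (TABLE_SIZE - 1).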
To print a number as a string, you need the modulus to find the value of each digit.
string number_to_string(uint number) {
    string result = "";
    while (number != 0) {
        result = cast(char)((number % 10) + '0') ~ result;
        //                   ^^^^^^^^^^^
        number /= 10;
    }
    return result;
}
For the check digits of International Bank Account Numbers (IBANs), there is the mod-97 technique (a sketch follows below).
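A hedged C sketch of that check, assuming the IBAN is already uppercase with no spaces: the first four characters are moved to the end, letters expand to two digits (A=10 .. Z=35), and the resulting number must leave remainder 1 mod 97. Reducing digit by digit avoids needing a big-integer type:

#include <ctype.h>
#include <string.h>

static int iban_ok(const char *iban) {
    size_t len = strlen(iban);
    unsigned rem = 0;
    for (size_t k = 0; k < len; k++) {
        char c = iban[(k + 4) % len];  /* first 4 chars rotated to the end */
        if (isdigit((unsigned char)c))
            rem = (rem * 10 + (unsigned)(c - '0')) % 97;
        else                            /* letter -> two-digit value */
            rem = (rem * 100 + (unsigned)(c - 'A' + 10)) % 97;
    }
    return rem == 1;
}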
Modulus is also useful in large batches, to do something after every n iterations. Here is an example for NHibernate:
ISession session = sessionFactory.openSession();
ITransaction tx = session.BeginTransaction();
for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer(.....);
    session.Save(customer);
    if (i % 20 == 0) { // 20, same as the ADO batch size
        // Flush a batch of inserts and release memory:
        session.Flush();
        session.Clear();
    }
}
tx.Commit();
session.Close();
The usual implementation of buffered communications uses circular buffers, and you manage them with modulus arithmetic.
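For example, here is a minimal sketch of a circular buffer in which head and tail are monotonically increasing counters and % maps them into the storage array (the power-of-two capacity is a hypothetical choice that also keeps the unsigned counter wraparound harmless):

#include <stdio.h>

#define BUF_SIZE 8 /* illustrative capacity; a power of two */

struct ring {
    int data[BUF_SIZE];
    unsigned head, tail; /* ever-increasing write/read counters */
};

static int ring_put(struct ring *r, int v) {
    if (r->head - r->tail == BUF_SIZE) return 0; /* full */
    r->data[r->head++ % BUF_SIZE] = v;           /* modulus wraps the index */
    return 1;
}

static int ring_get(struct ring *r, int *v) {
    if (r->head == r->tail) return 0;            /* empty */
    *v = r->data[r->tail++ % BUF_SIZE];
    return 1;
}

int main(void) {
    struct ring r = {{0}, 0, 0};
    for (int i = 0; i < 5; i++) ring_put(&r, i);
    int v;
    while (ring_get(&r, &v)) printf("%d ", v);   /* prints 0 1 2 3 4 */
    printf("\n");
    return 0;
}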
For languages that don't have bitwise operators, modulus can be used to get the lowest n bits of a number. For example, to get the lowest 8 bits of x:
x % 256
which is equivalent to:
x & 255
Cryptography. That alone would account for an obscene percentage of modulus (I exaggerate, but you get the point).
Try the Wikipedia page too:
Modular arithmetic is referenced in number theory, group theory, ring theory, knot theory, abstract algebra, cryptography, computer science, chemistry and the visual and musical arts.
In my experience, any sufficiently advanced algorithm is probably going to touch on one or more of the above topics.
Well, there are many perspectives from which you can look at it. If you see it as a mathematical operation, it's just modulo division. Strictly speaking, we don't even need it, since whatever % does we can achieve using repeated subtraction, but every programming language implements it in a very optimized way.
And modulo division is not limited to finding odd and even numbers or to clock arithmetic. There are hundreds of algorithms which need the modulo operation, for example cryptographic algorithms. So it's a general mathematical operation like +, -, *, /, etc.
Apart from the mathematical perspective, different languages use this symbol for other purposes, such as defining built-in data structures: in Perl, %hash is used to show that the programmer declared a hash. So it all varies with the design of the programming language.
So there are still many other perspectives one can add to the list of uses of %.