I found multiple optimal CRC-32 polynomials on Philip Koopman's CRC Polynomial Zoo site. Now I want to generate a CRC lookup table for one of the polynomials, using the pycrc software.
To generate a CRC lookup table you have to provide the following information for the chosen polynomial:
Reflected in (boolean)
Reflected out (boolean)
XOR in (hex value)
XOR out (hex value)
For some polynomials I found the above parameters in a specification (for instance, an AUTOSAR specification for the polynomial "F4ACFB13"), but what parameters should I choose if there is no specification for a certain polynomial? The Koopman site doesn't seem to provide the recommended parameters to use.
I have already tried to find an explanation of how to choose these parameters, but I could only find explanations of how to implement them, not how to choose them. Most websites recommend searching for specifications that describe "common CRC polynomials", because those provide the optimal parameters.
Generally you are trying to match the CRC used in some existing protocol. In that case you need to do the same thing you did for the AUTOSAR CRC: find the specification for the CRC. Or you need to get several examples of messages and correct CRCs and try to reverse-engineer the CRC parameters.
You can find over a hundred CRC definitions here.
If you are creating your own protocol from scratch, then you can select any polynomial, reflection, initial value, and final exclusive-or you like, as well as any byte order of the CRC in the message. I would recommend that the polynomial be chosen with good properties for your message length from Phil's data, and that the initial value of the CRC register, init, not be zero. (If it is zero, then the CRC of any string of zeros will be the same value, that final exclusive-or, regardless of the length.) Also there is no detriment, and it is more aesthetic to pick the initial value and the final exclusive-or to be equal, so that the CRC of an empty sequence is zero.
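To make those parameters concrete, here is a minimal Python sketch (an illustration of the Rocksoft-style parameter model that pycrc uses, not pycrc's actual source) showing what each knob does. The values in the example call are the well-known standard CRC-32 set, used purely for demonstration, not as a recommendation for any particular Koopman polynomial:
def reflect(value, width):
    # Reverse the bit order of value over the given number of bits.
    result = 0
    for _ in range(width):
        result = (result << 1) | (value & 1)
        value >>= 1
    return result

def crc(data, width, poly, xor_in, xor_out, reflect_in, reflect_out):
    # Bit-by-bit CRC in the width/poly/reflect/xor parameter model.
    topbit = 1 << (width - 1)
    mask = (1 << width) - 1
    reg = xor_in                      # "XOR in" seeds the shift register
    for byte in data:
        if reflect_in:                # "Reflected in": feed each byte LSB-first
            byte = reflect(byte, 8)
        reg ^= byte << (width - 8)
        for _ in range(8):
            if reg & topbit:
                reg = ((reg << 1) ^ poly) & mask
            else:
                reg = (reg << 1) & mask
    if reflect_out:                   # "Reflected out": reverse the final register
        reg = reflect(reg, width)
    return reg ^ xor_out              # "XOR out" is applied last

# Standard CRC-32 parameters; the check value for b"123456789" is 0xCBF43926.
print(hex(crc(b"123456789", 32, 0x04C11DB7, 0xFFFFFFFF, 0xFFFFFFFF, True, True)))
The 256-entry lookup table pycrc generates is just the inner eight-step loop precomputed for every possible input byte, so the table itself depends only on the polynomial and the reflect-in choice; xor-in and xor-out merely affect how the register is seeded and finalised.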
I've been reading this article on elliptic-curve crypto and how it works:
http://arstechnica.com/security/2013/10/a-relatively-easy-to-understand-primer-on-elliptic-curve-cryptography/
In the article, they state:
It turns out that if you have two points [on an elliptic curve], an initial point "dotted" with itself n times to arrive at a final point [on the curve], finding out n when you only know the final point and the first point is hard.
It goes on to state that the only way to find n (if you only have the first and final points, and you know the curve equation) is to repeatedly dot the initial point until you finally arrive at the matching final point.
I think I understand all this, but what confuses me is: if n is the private key, and the final point corresponds to the public key (which I think is the case), then doesn't it take exactly the same amount of work to compute the public key from the private key as the private from the public (both just have to repeatedly dot a point on the curve)? Am I misunderstanding something about what the article is saying?
The one-way attribute of ECC and RSA is due to the Chinese Remainder Theorem (CRT): a series of arithmetic divisions where only the remainder is kept (a.k.a. the modulo operation, %), which results in information loss in the output. As a result, the person with the keys takes one direct path to generate the output, while any would-be attacker has to exhaust a massive number of possible paths in order to determine what key was used to create the output. If simple division were used instead of modulo, then key data would be present in the output and it couldn't be used for cryptography.
If you lived in a world where you had a computer powerful enough to exhaust all possibilities, then the CRT wouldn't be useful as a cryptographic primitive. The computers we have now are fairly powerful, so we balance the power of our modern machines against a key size that introduces a large enough range of possibilities that it cannot be exhausted in a timeframe that matters.
The CRT is a subset of the P vs NP problem set, so perhaps proving P=NP may lead to a way of undermining the one-way aspect of asymmetric cryptography. We know that there is a way to factor with a quantum computer running Shor's algorithm. Shor's algorithm has shown that we can defeat the so-called "trapdoor", or one-way, attribute of CRT; it is still, however, an expensive attack to conduct.
The following lecture is my favorite description of the CRT. It shows that there are many possible solutions in one direction, forcing an attacker to exhaust them all, but only one solution in the other:
https://www.youtube.com/watch?v=ru7mWZJlRQg
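To see the information loss concretely, here is a tiny Python illustration (just the modulo operation itself, not any real cryptosystem): many inputs collapse onto one remainder, so the remainder alone cannot tell you which input produced it.
# Every one of these inputs produces the same output under mod 7,
# so seeing the remainder 3 alone does not identify which x was used.
candidates = [x for x in range(100) if x % 7 == 3]
print(len(candidates), candidates[:5])   # 14 preimages: [3, 10, 17, 24, 31]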
EDIT: I previously stated that n is not the private key. In your example, n is either the server's or the client's private key.
How it works is that there is a starting point known to everybody.
You select a random integer k and do the "dotting operation" k times, then send the resulting point to the server. (k is your private key.)
The server does the same with the starting point, but q times, and sends its result to you. (q is the server's private key.)
You take the point you got from the server and "dot" it k times. The final point is the starting point "dotted" k*q times.
The server does the same with the point it got from you. Again, its final point is the starting point "dotted" q*k times.
That means the final point (the starting point "dotted" k*q times) is the shared secret, since all an attacker would know is the starting point, the starting point dotted k times, and the starting point dotted q times. Given only those data, it is practically impossible to find the final point unless k or q is known.
EDIT: No, it doesn't take the same time to compute k from G = kP given the known values of G (the sent point) and P (the starting point). More in the comment section, and:
For raising to a power, see exponentiation by squaring.
For ECC point multiplication, see point multiplication.
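To make the asymmetry concrete, here is a toy Python sketch of double-and-add scalar multiplication on a tiny short-Weierstrass curve. The curve y^2 = x^3 + 2x + 3 over GF(97), the point (3, 6) and the private keys below are made up for illustration and are in no way secure. Computing kP this way takes on the order of log2(k) curve operations, while recovering k from P and kP (the elliptic-curve discrete logarithm) has no known shortcut of comparable cost:
# Toy curve y^2 = x^3 + a*x + b over GF(p); parameters are illustrative only.
p, a, b = 97, 2, 3
O = None                                 # the point at infinity (identity)

def add(P, Q):
    # The group law: the article's "dot" operation.
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:  # P + (-P) = O
        return O
    if P == Q:                           # tangent slope for doubling
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:                                # chord slope for addition
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    # Double-and-add: computes kP in ~log2(k) steps, not k steps.
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

G = (3, 6)                               # a point on the toy curve
k, q = 13, 19                            # toy private keys
assert mul(k, mul(q, G)) == mul(q, mul(k, G)) == mul(k * q, G)
The attacker's best generic options (baby-step/giant-step, Pollard's rho) still cost on the order of the square root of the group order, which is why the forward direction stays cheap while the reverse direction grows out of reach as the key size increases.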
bc, a Linux command-line calculator, is proficient enough to calculate
3^2
9
Even a negative exponent doesn't confuse it:
3^-2
0.11111
Yet it fails when it encounters
9^0.5
Runtime warning (func=(main), adr=8): non-zero scale in exponent
How could it be that bc can't handle this?
And what does the error message mean?
Yes, I've read this and the solution given there:
e(0.5*l(9))
2.99999999999999999998
And yes, it is no good because of precision loss, and:
A calculator is supposed to solve expressions. You are not supposed to
make life easier for the calculator, it is supposed to be the other
way around...
This feature was designed to encourage users to write their own functions, making it a unique calculator that requires a user-defined function to calculate a square root.
It doesn't really bother me to write a function for tangent or cotangent, as that looks pretty straightforward given s(x) and c(x). But in my opinion, calculating a square root through a user-defined function is a bit too much.
Why does anyone use bc if there's Python out there? Speed?
In bc, b must be an integer in a ^ b. However, you can add your own functions to bc like this:
Create a file ~/.bcrc and add the following function to it:
define pow(a, b) {
    if (scale(b) == 0) {
        /* integer exponent: bc's built-in ^ operator handles it exactly */
        return a ^ b;
    }
    /* fractional exponent: a^b = e(b*ln(a)); needs -l, valid for a > 0 */
    return e(b*l(a));
}
then start bc as follows:
bc -l ~/.bcrc
so you can use the pow function to do such calculations, e.g. pow(9, 0.5).
See more here; you can add some more functions to bc.
bc is very basic, and more complex functions not provided by the "math extension" must be implemented in the language itself; it has all you need to do that, and "power" in particular is a common example, even on Wikipedia.
But you may be also interested in reading for example this answer here on SO.
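On the Python aside in the question: for what it's worth, here is how the same computation looks there. The standard-library decimal module avoids the 2.999… artifact because you choose the working precision yourself:
# 9^0.5 in Python: binary float, then arbitrary-precision decimal.
from decimal import Decimal, getcontext

print(9 ** 0.5)              # 3.0 (float; exact here, but floats can round)
getcontext().prec = 50       # work with 50 significant digits
print(Decimal(9).sqrt())     # 3, with no precision-loss artifact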
I want to use SYNCSORT to force all Packed Decimal fields to a negative sign value. The critical requirement is that the 2nd nibble must be hex 'D'. I have a method that works, but it seems much too complex. In keeping with the KISS principle, I'm hoping someone has a better method. Perhaps using a bit mask on the last 4 bits? Here is the code I have come up with. Is there a better way?
*
* This sort logic is intended to force all Packed Decimal amounts to
* have a negative sign with a B'....1101' value (Hex 'xD').
*
SORT FIELDS=COPY
OUTFIL FILES=1,
INCLUDE=(8,1,BI,NE,B'....1..1',OR, * POSITIVE PACKED DECIMAL
8,1,BI,EQ,B'....1111'), * UNSIGNED PACKED DECIMAL
OUTREC=(1:1,7, * INCLUDING +0
8:(-1,MUL,8,1,PD),PD,LENGTH=1,
9:9,72)
OUTFIL FILES=2,
INCLUDE=(8,1,BI,EQ,B'....1..1',AND, * NEGATIVE PACKED DECIMAL
8,1,BI,NE,B'....1111'), * NOT UNSIGNED PACKED DECIMAL
OUTREC=(1:1,7, * INCLUDING -0
8:(+1,MUL,8,1,PD),PD,LENGTH=1,
9:9,72)
In the code that processes the VSAM file, can you change the read logic to GET with KEY GTEQ and check for < 0 on the result instead of doing a specific keyed read?
If you did that, you could accept both negative packed sign values, X'B' and X'D'.
Have you considered writing an E15 user exit? The E15 user exit lets you manipulate records as they are input to the sort process. In this case you would have a REXX, COBOL or other LE-compatible language subroutine patch the packed-decimal sign field as it is input to the sort process. There would be no need to split into multiple files to be merged later on.
Here is a link to example JCL for invoking an E15 exit from DFSORT (the same JCL works for SYNCSORT). Chapter 4 of this reference describes how to develop user exit routines; again, this is a DFSORT manual, but I believe SyncSort is fully compatible in this respect. Writing a user exit is no different from writing any other subroutine: get the linkage right and the rest is easy.
This is a very general outline, but I hope it helps.
Okay, it took some digging, but NealB's suggestion to seek help on MVSFORUMS.COM paid off. Here is the final result. The OUTREC logic used with SORT/MERGE replaces OUTFIL and takes advantage of new capabilities (IFTHEN, WHEN and OVERLAY) in SyncSort 1.3 that I didn't realize existed. It pays to have current documentation available!
*
* This MERGE logic is intended to assert that the Packed Decimal
* field has a negative sign with a B'....1101' value (Hex X'.D').
*
*
MERGE FIELDS=(27,5.4,BI,A),EQUALS
SUM FIELDS=NONE
OUTREC IFTHEN=(WHEN=(32,1,BI,NE,B'....1..1',OR,
32,1,BI,EQ,B'....1111'),
OVERLAY=(32:(-1,MUL,32,1,PD),PD,LENGTH=1)),
IFTHEN=(WHEN=(32,1,BI,EQ,B'....1..1',AND,
32,1,BI,NE,B'....1111'),
OVERLAY=(32:(+1,MUL,32,1,PD),PD,LENGTH=1))
Looking at the last byte of a packed field is possible. You want to turn positive/unsigned into negative, so if the field is greater than -1, subtract it from zero.
From a short-lived answer by MikeC, it is now known that the data contains non-preferred signs (that is, the low-order half-byte can contain A through F, whereas a preferred sign would be C (positive) or D (negative); F is unsigned, treated as positive).
This is tested with DFSORT. It should work with SyncSORT. It turns out that DFSORT can understand a negative packed-decimal zero, but it will not create one (it will allow a zoned-decimal negative zero to be created from a packed-decimal negative zero).
The idea is that a non-preferred sign is valid and will be accurately interpreted as input to a decimal machine instruction, but the result will always carry a preferred sign and will be correct. So by adding zero first, the field gets turned into one with a preferred sign, and then the test against -1 works as expected. For data in the sign nybble of packed-decimal fields, SORT has some specific and documented behaviours, which just don't happen to help here.
Since, after the sign normalisation already done, there is only one value left that should become the negative zero, X'0C', a simple test replaces it with the constant X'0D' for the negative zero. Since the negative zero cannot be produced arithmetically, the second test is changed from the original minus one to zero.
With non-preferred signs in the data:
SORT FIELDS=COPY
INREC IFTHEN=(WHEN=INIT,
OVERLAY=(32:+0,ADD,32,1,PD,TO=PD,LENGTH=1)),
IFTHEN=(WHEN=(32,1,CH,EQ,X'0C'),
OVERLAY=(32:X'0D')),
IFTHEN=(WHEN=(32,1,PD,GT,0),
OVERLAY=(32:+0,SUB,32,1,PD,TO=PD,LENGTH=1))
With preferred signs in the data:
SORT FIELDS=COPY
INREC IFTHEN=(WHEN=(32,1,CH,EQ,X'0C'),
OVERLAY=(32:X'0D')),
IFTHEN=(WHEN=(32,1,PD,GT,0),
OVERLAY=(32:+0,SUB,32,1,PD,TO=PD,LENGTH=1))
Note: If non-preferred signs are stuffed through a COBOL program not using compiler option NUMPROC(NOPFD) then results will be "interesting".
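For anyone not steeped in packed decimal, here is a small Python model (illustrative only, not SORT syntax) of what the sign-nibble manipulation above does at the byte level. The sign lives in the low-order nibble of the last byte: C and D are the preferred positive and negative signs, B is the other negative sign, and A, E and F are the remaining positive ones (F meaning unsigned):
# Model of packed-decimal sign handling: digits are BCD nibbles and the
# final nibble is the sign. B and D are negative; A, C, E and F positive.
NEGATIVE = {0xB, 0xD}

def force_negative(field: bytes) -> bytes:
    # Force the sign nibble to the preferred negative sign X'D'.
    return field[:-1] + bytes([(field[-1] & 0xF0) | 0x0D])

def is_negative(field: bytes) -> bool:
    return (field[-1] & 0x0F) in NEGATIVE

packed_plus_123 = bytes([0x12, 0x3C])                 # +123 packed as X'123C'
print(force_negative(packed_plus_123).hex())          # '123d', i.e. -123
print(is_negative(force_negative(packed_plus_123)))   # True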
I need to get the bit length from an NSUInteger or an NSString.
How can I get the bit length?
Thanks
If I'm understanding the question correctly (it is kind of odd, but... hey... so am I):
sizeof(NSUInteger) * 8
[aString maximumLengthOfBytesUsingEncoding: ...] * 8
For NSNumber, a subclass of NSValue, things get a little bit trickier. You'll need to call -objCType, then determine the bit length from that.
OP: I really think you need to organize your thoughts and ask a single, coherent question that, at a minimum, gives an overview of what you're trying to accomplish. So far you have asked at least four questions that are all minor variations of each other.
To other people answering this question: from the context of his other questions, he's trying to do some bignum crypto (à la RSA), or some other bignum number-theory stuff (he needs to do powermod()). Again, based on the context of his other questions, what he's asking here is how to compute floor(log2(X)) + 1, where X is an arbitrary data type (hence the NSString).
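As a language-neutral illustration of that floor(log2(X)) + 1 computation, here is a sketch in Python (chosen only because it has big integers built in; the same shift-and-count loop works in any language):
# bit length of a positive integer: floor(log2(x)) + 1
def bit_length(x):
    n = 0
    while x:
        x >>= 1
        n += 1
    return n

print(bit_length(1000))                         # 10, since 2^9 <= 1000 < 2^10
print(bit_length(1000) == (1000).bit_length())  # True: Python has it built in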
I have an RSA exponent key value which is supposed to be a big integer, but I have it in an NSString/NSData with the full value in it (UTF-8 encoded).
As part of RSA encryption, I need to do the following in the iPhone environment:
1. I need to find the bit length of the above exponent value.
2. I need to do arithmetic operations on the exponent and modulus values, including PowMod.
3. Which data type (uint64_t, NSNumber or NSUInteger) can I use for the arithmetic operations as well as for holding the bigint result value?
4. Do I need to go for a specific bigint implementation, or can I manage with the above existing iPhone data types for bigints?
5. Do those external bigint implementations require porting the OpenSSL or GMP libraries to the iPhone?
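On point 2: PowMod itself is short. Here is a Python sketch of square-and-multiply modular exponentiation (Python's built-in three-argument pow does the same thing); the key idea is reducing after every step so intermediate values never exceed mod squared, and whichever bigint library is chosen will supply the same primitive:
def powmod(base, exp, mod):
    # Square-and-multiply modular exponentiation, as RSA needs.
    result = 1
    base %= mod
    while exp:
        if exp & 1:                  # this exponent bit is set
            result = result * base % mod
        base = base * base % mod     # square for the next bit
        exp >>= 1
    return result

# Toy check against Python's built-in (3233 = 61 * 53, a classic toy modulus):
assert powmod(42, 65537, 3233) == pow(42, 65537, 3233)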