GMP Library: How to check the first m bits of a big integer

I'm using GMP (the big integer library).
I have a value b, and I need to check whether its first m bits are all zero.
I know that if I convert b to a string I can do the check, but that's not efficient.
Question 1: I need to know whether the library has a function that returns the first m bits of a big integer.
To encode a big integer c, I convert it to a string and then concatenate m zeros onto it; e.g., if c in binary is 11 and m is 8, then I encode it as 1100000000.
Question 2: I need to know whether I can do this encoding faster using the library's functions.

The GMP library provides plenty of bit-manipulation primitives. Have a look at the Logical and Bit Manipulation Functions chapter of the documentation.
For your first question, I think you want mpz_tstbit. For your second question, it sounds like you're trying to perform a bit shift, which can be done via mpz_mul_2exp and the mpz_*_*_2exp functions.
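As a minimal sketch (assuming "first m bits" means the least significant m bits; the helper name low_bits_are_zero is just illustrative):

#include <gmp.h>
#include <stdio.h>

/* Return 1 if the lowest m bits of b are all zero, testing each bit with mpz_tstbit. */
int low_bits_are_zero(const mpz_t b, mp_bitcnt_t m)
{
    for (mp_bitcnt_t i = 0; i < m; i++)
        if (mpz_tstbit(b, i))
            return 0;
    return 1;
}

int main(void)
{
    mpz_t b, c, encoded;
    mpz_init_set_str(b, "1100000000", 2);  /* the question's example value, in binary */
    mpz_init_set_str(c, "11", 2);
    mpz_init(encoded);

    printf("low 8 bits zero? %d\n", low_bits_are_zero(b, 8));  /* prints 1 */

    /* "Append m zeros in binary" is just a left shift by m: encoded = c * 2^m. */
    mpz_mul_2exp(encoded, c, 8);
    gmp_printf("encoded = %Zd\n", encoded);  /* 3 * 2^8 = 768 */

    mpz_clears(b, c, encoded, NULL);
    return 0;
}

The shift via mpz_mul_2exp avoids the string round-trip, which is the expensive part of the original approach.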

Related

Kotlin: Convert Hex String to signed integer via signed 2's complement?

Long story short, I am trying to convert strings of hex values to signed 2's complement integers. I was able to do this in a single line of code in Swift, but for some reason I can't find anything analogous in Kotlin. String.toInt or String.toUInt just give the straight base-16-to-base-10 conversion. That works for some positive values, but not for any negative numbers.
How do I know I want the signed 2's complement? I've used this online converter and according to its output, what I want is the decimal from signed 2's complement, not the straight base 16 to base 10 conversion that's easy to do by hand.
So, "FFD6" should go to -42 (correct, confirmed in Swift and C#), and "002A" should convert to 42.
I would appreciate any help or even any leads on where to look. Because yes, I've searched; I've googled the problem a bunch, and no, I haven't found a good answer.
I actually tried writing my own code to do the signed 2's complement, but so far it's not giving me the right answers and I'm pretty much at a loss. I'd really hope for a built-in command that does it instead; I feel like if other languages have that capability, Kotlin should too.
For 2's complement, you need to know how big the type is.
Your examples of "FFD6" and "002A" both have 4 hex digits (i.e. 2 bytes).  That's the same size as a Kotlin Short.  So a simple solution in this case is to parse the hex to an Int and then convert that to a Short.  (You can't convert it directly to a Short, as that would give an out-of-range error for the negative numbers.)
"FFD6".toInt(16).toShort() // gives -42
"002A".toInt(16).toShort() // gives 42
(You can then convert back to an Int if needed.)
You could similarly handle 8-digit (4-byte) values as Ints, and 2-digit (1-byte) values as Bytes.
For other sizes, you'd need to do some bit operations.  Based on this answer for Java, if you have e.g. a 3-digit hex number, you can do:
("FD6".toInt(16) xor 0x800) - 0x800 // gives -42
(Here 0x800 is the three-digit number with the top bit (i.e. sign bit) set.  You'd use 0x80000 for a five-digit number, and so on.  Also, for 9–16 digits, you'd need to start with a Long instead of an Int.  And if you need >16 digits, it won't fit into a Long either, so you'd need an arbitrary-precision library that handled hex…)
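For reference, the same xor-and-subtract sign-extension trick works in any language with integer bit operations; here is a small C sketch of it for a 3-hex-digit (12-bit) value (the variable names are just illustrative):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Parse a 3-hex-digit (12-bit) value, then sign-extend it:
       XOR with the sign bit (0x800), then subtract the sign bit. */
    long raw = strtol("FD6", NULL, 16);   /* 0xFD6 = 4054 */
    long value = (raw ^ 0x800) - 0x800;   /* gives -42 */
    printf("%ld\n", value);
    return 0;
}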

Cleanest way to convert a `Double` or `Single` to `Integer`, without rounding

Converting a floating-point number to an integer using either CInt or CType will cause the value of that number to be rounded. The Int function and Math.Floor may be used to convert a floating-point number to a whole number, rounding toward negative infinity, but both functions return floating-point values which cannot be implicitly used as Integer values without a cast.
Is there a concise and idiomatic alternative to IntVar = CInt(Int(FloatingPointVar));? Pascal included Round and Trunc functions which returned Integer; is there some equivalent in either the VB.NET language or in the .NET framework?
A similar question, CInt does not round Double value consistently - how can I remove the fractional part? was asked in 2011, but it simply asked if there was a way to convert a floating-point number to an integer; the answers suggested a two-step process, but it didn't go into any depth about what does or does not exist in the framework. I would find it hard to believe that the Framework wouldn't have something analogous to the Pascal Trunc function, given that such a thing will frequently be needed when performing graphical operations using floating-point operands [such operations need to be rendered as discrete pixels, and should be rounded in such a way that round(x)-1 = round(x-1) for all x that fit within the range of +/- (2^31-1); even if such operations are rounded, they should use Floor(x+0.5), rather than round-to-nearest-even, so as to ensure the above property]
Incidentally, in C# a typecast from Double to Int using (type)expr notation uses round-to-zero semantics; the fact that this differs from the VB.NET behavior suggests that one or both languages is using its own conversion routines rather than an explicit conversion operator included in the Framework. It would seem likely that the Framework should define a conversion operator. Does such an operator exist within the framework? What does it do? Is there a way to invoke it from C# and/or VB.NET?
After some searching, it seems that VB has no clean way of accomplishing that, short of writing an extension method.
The C# (int) cast translates directly into conv.i4 in IL. VB has no such operators, and no framework function seems to provide an alternative.
Usenet had an interesting discussion about this back in 2005 – of course a lot has changed since then but I think this still holds.
You can use the Math.Truncate method.
Calculates the integral part of a specified double-precision floating-point number.
For example:
Dim a As Double = 1.6666666
Dim b As Integer = CInt(Math.Truncate(a)) ' b = 1
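For comparison only (not a VB answer in itself), the same truncate-versus-floor distinction the question raises can be seen in C with trunc and floor from <math.h>:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = -1.6666666;
    printf("trunc: %f\n", trunc(a));  /* -1.0: rounds toward zero */
    printf("floor: %f\n", floor(a));  /* -2.0: rounds toward negative infinity */
    printf("cast:  %d\n", (int)a);    /* -1: the C cast truncates, like conv.i4 */
    return 0;
}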
I know this is an old question, but I saw no one suggest the Math.Round() function.
Yes, Math.Round takes a Double and returns a Double. However, the number it returns has been rounded to a whole number, so it can be easily and concisely converted to an Integer using CInt. Would that suffice?
CInt(Math.Round(10000.54564)) ' = 10001
CInt(Math.Round(10000.49564)) ' = 10000
You may need to extract the integer part of a float number:
float num = 12.234f;
string toint = "" + num;
string[] auxil = toint.Split('.');
int newnum = int.Parse(auxil[0]);

Use of byte arrays and hex values in Cryptography

When we use cryptography, we always see byte arrays being used instead of String values. But when we look at the techniques used by most cryptographic algorithms, they use hex values in their operations. E.g. in AES, MixColumns and SubBytes (I suppose) both use hex values in their operations.
Can you explain how these byte arrays are used as hex values in these operations?
I have an assignment to develop an encryption algorithm, so any related sample code would be much appreciated.
Every four binary digits make one hexadecimal digit, so you can convert back and forth quite easily (see: http://en.wikipedia.org/wiki/Hexadecimal#Binary_conversion).
I don't think I fully understand what you're asking, though.
The most important thing to understand about hexadecimal is that it is a system for representing numeric values, just like binary or decimal. It is nothing more than notation. As you may know, many computer languages allow you to specify numeric literals in a few different ways:
int a = 42;
int a = 0x2A;
These store the same value into the variable 'a', and a compiler should generate identical code for them. The difference between these two lines will be lost very early in the compilation process, because the compiler cares about the value you specified, and not so much about the representation you used to encode it in your source file.
Main takeaway: there is no such thing as "hex values" - there are just hex representations of values.
That all said, you also talk about string values. Obviously 42 != "42" != "2A" != 0x2A. If you have a string, you'll need to parse it to a numeric value before you do any computation with it.
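For instance, a minimal C sketch of that parsing step (using strtol; other languages have their own equivalents):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *hex_string = "2A";
    /* Parse the hex representation into an actual numeric value. */
    long value = strtol(hex_string, NULL, 16);
    printf("%ld\n", value);    /* prints 42 */
    printf("%#lx\n", value);   /* prints 0x2a: the same value, shown in hex notation */
    return 0;
}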
Bytes, byte arrays and/or memory areas are normally displayed within an IDE (integrated development environment) and debugger as hexadecimal. This is because it is the most efficient and clear representation of a byte. It is pretty easy for an experienced programmer to convert hex digits into bits in their head. You can clearly see how XOR and shifts work as well, for instance. Those (and addition) are the most common operations when doing symmetric encryption/hashing.
So it's unlikely that the program itself performs this kind of conversion; it's probably the environment you are in. That, and source code (which is converted to bytes at compile time) probably uses a lot of literals in hexadecimal notation as well.
Cryptography in general (hash functions excepted) is a method of converting data from one form to another, mostly referred to as ciphertext, using a secret key. The secret key can be applied to the ciphertext to get back the original data, also referred to as plaintext. In this process data is handled at the byte level, though it can be at the bit level as well. The point here is that the text or strings we are referring to fall within the limited range of a byte; for example, ASCII is defined within the byte value range 0-255. In practice, when a crypto operation is performed, each character is converted to its equivalent byte and the process is carried out using the key. The resulting byte or bytes will most probably fall outside the range of human-readable text such as ASCII. For this reason, any data to which a crypto function is applied is converted to a byte array first. For example, suppose the text to be enciphered is "Hello how are you doing?". The following steps would be followed:
1. byte[] data = "Hello how are you doing?".getBytes()
2. Encipher data using the key, which is also a byte[]
3. The output blob is referred to as cipherTextBytes[]
4. Encryption is complete
5. Using the key, a decryption process is performed over cipherTextBytes[] which returns the data bytes
6. A simple new String(data) will return the string value "Hello how are you doing?"
This is a simple overview which might help you understand reference code and manuals better. I am in no way trying to explain the core of cryptography here.
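Purely as an illustration of byte-level processing (a toy XOR, not a real cipher, and the single-byte key is an assumption made just for this sketch), a small C example:

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *plaintext = "Hello how are you doing?";
    unsigned char key = 0x5A;                 /* toy single-byte key, illustration only */
    size_t len = strlen(plaintext);
    unsigned char buf[64];

    /* "Encrypt": XOR every byte with the key, then show the result in hex. */
    for (size_t i = 0; i < len; i++)
        buf[i] = (unsigned char)plaintext[i] ^ key;
    for (size_t i = 0; i < len; i++)
        printf("%02X ", buf[i]);
    printf("\n");

    /* "Decrypt": XOR again with the same key to recover the plaintext bytes. */
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;
    printf("%.*s\n", (int)len, (char *)buf);
    return 0;
}

Note that the hex output is just how the resulting bytes are displayed; the operation itself works on byte values.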

Handling expressions in GMP

I introduced myself to the GMP library for high-precision arithmetic recently. It seems easy enough to use, but in my first program I am running into practical problems. How are expressions to be evaluated? For instance, if I have "1+8*z^2" and z is an mpz_t "large integer" variable, how am I to quickly evaluate this? (I have larger expressions in the program that I am writing.) Currently, I am doing every single operation manually and storing the results in temporary variables, like this for the "1+8*z^2" expression:
1) First do mpz_mul(z,z,z) to square z.
2) Then define an mpz_t variable called "eight" with the value 8.
3) Multiply the result from step one by this 8 and store it in a temp variable.
4) Define an mpz_t variable called "one" with value 1.
5) Add this to the result of step 3 to find the final answer.
Is this what I am supposed to be doing? Or is there a better way? It would really help if there was a user's manual for GMP to get people started but there's only the reference manual.
GMP comes with a C++ class interface which provides a more straightforward way of expressing arithmetic expressions. This interface uses C++ operator overloading to allow you to write:
mpz_class z;
mpz_class result = 1 + 8 * z * z;
This is, of course, assuming you're using C++. If you are using C only, you may need to use the C interface to GMP which does not provide operator overloading.
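For the plain-C case, here is a minimal sketch of evaluating 1 + 8*z^2 with the _ui convenience functions, so the constants 8 and 1 do not need their own mpz_t variables (the helper name eval_expr is just for illustration):

#include <gmp.h>

/* result = 1 + 8*z^2, built up in place with the C interface. */
void eval_expr(mpz_t result, const mpz_t z)
{
    mpz_mul(result, z, z);          /* result = z^2       */
    mpz_mul_ui(result, result, 8);  /* result = 8*z^2     */
    mpz_add_ui(result, result, 1);  /* result = 1 + 8*z^2 */
}

int main(void)
{
    mpz_t z, result;
    mpz_init_set_ui(z, 5);
    mpz_init(result);

    eval_expr(result, z);
    gmp_printf("1 + 8*z^2 = %Zd for z = %Zd\n", result, z);  /* 201 for z = 5 */

    mpz_clears(z, result, NULL);
    return 0;
}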
It turns out that there's an unsupported expression parser distributed with GMP in an "expr" subdirectory. It's not part of GMP proper and is subject to change, but it is discussed in a README file in that directory. It isn't guaranteed to do the calculation in the fastest way possible, so buyer beware.
So the user must manually evaluate all expressions when using GMP, unless they wish to use this expr parser or write their own.

can a variable have multiple values

In algebra if I make the statement x + y = 3, the variables I used will hold the values either 2 and 1 or 1 and 2. I know that assignment in programming is not the same thing, but I got to wondering. If I wanted to represent the value of, say, a quantumly weird particle, I would want my variable to have two values at the same time and to have it resolve into one or the other later. Or maybe I'm just dreaming?
Is it possible to say something like i = 3 or 2;?
This is one of the features planned for Perl 6 (junctions), with syntax that should look like my $a = 1|2|3;
If ever implemented, it would work intuitively, like $a==1 being true at the same time as $a==2. Also, for example, $a+1 would give you a value of 2|3|4.
This feature is actually available in Perl 5 as well through the Perl6::Junction and Quantum::Superpositions modules, but without the syntactic sugar (through the 'functions' all and any).
At least for comparison (b < any(1,2,3)) it was also available in Microsoft's experimental Cω language; however, it was not documented anywhere (I just tried it when I was looking at Cω and it just worked).
You can't do this with native types, but there's nothing stopping you from creating a variable object (presuming you are using an OO language) which has a range of values or even a probability density function rather than an actual value.
You will also need to define all the mathematical operators between your variables and your variables and native scalars. Same goes for the equality and assignment operators.
numpy arrays do something similar for vectors and matrices.
That's also the kind of thing you can do in Prolog. You define rules that constrain your variables and then let Prolog resolve them ...
It takes some time to get used to it, but it is wonderful for certain problems once you know how to use it ...
Damian Conway's Quantum::Superpositions might do what you want,
https://metacpan.org/pod/Quantum::Superpositions
You might need your crack-pipe however.
What you're asking seems to be how to implement a Fuzzy Logic system. These have been around for some time and you can undoubtedly pick up a library for the common programming languages quite easily.
You could use a struct and handle the operations manually. Otherwise, no: a variable only has one value at a time.
A variable is nothing more than an address into memory. That means a variable describes exactly one place in memory (its length depending on the type). So as long as we have no "quantum memory" (and we don't have it, and it doesn't look like we will have it in the near future), the answer is no.
If you want to program and model this behaviour, one way would be to use an array (with length equal to the maximum number of multiple values). This comes with increased runtime, since the computations must be done on each of the values (e.g. for x+y with two values each, you must compute x1+y1, x1+y2, x2+y1 and x2+y2).
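A hypothetical C sketch of that array-based approach, computing every pairwise sum of two such "multi-valued" variables:

#include <stdio.h>

int main(void)
{
    /* Each "variable" holds all of its possible values at once. */
    int x[] = {1, 2};
    int y[] = {2, 1};
    int nx = 2, ny = 2;

    /* x + y must be evaluated for every combination of values. */
    for (int i = 0; i < nx; i++)
        for (int j = 0; j < ny; j++)
            printf("x%d + y%d = %d\n", i + 1, j + 1, x[i] + y[j]);

    return 0;
}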
In Perl, you can.
If you use Scalar::Util (its dualvar function), you can have a variable take two values: one when it's used in string context, and another when it's used in numeric context.