Handling 512 bit numbers - vb.net

I need to convert SHA-512 hex strings to integers and perform arithmetic on them without my program crashing. So far I have only found ways to handle numbers up to 64 bits. How can I handle larger numbers?
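The usual approach is an arbitrary-precision integer type. In .NET (and therefore VB.NET) that is System.Numerics.BigInteger, which can parse a hex string via BigInteger.Parse with NumberStyles.HexNumber (prepend a "0" to the string so the value is not read as negative two's complement). As a language-neutral sketch of the same idea, Python's built-in int is already arbitrary precision:

import hashlib

# Hash some bytes, then treat the 128-character hex digest as one
# 512 bit unsigned integer.
digest_hex = hashlib.sha512(b"example input").hexdigest()

n = int(digest_hex, 16)        # Python ints are arbitrary precision
m = (n * 3 + 1) % (1 << 512)   # arithmetic stays exact, no overflow

print(n.bit_length())          # at most 512
print(format(m, "0128x"))      # back to a 128-character hex string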

Related

Standard text representation for floating-point numbers

Is there a standard text representation for floating-point numbers that is supported by the most popular languages?
What is the standard for representing infinities and NaNs?
There isn't a general consensus, unfortunately.
However, there seems to be some convergence on hexadecimal notation for floats. See pg. 57/58 of http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
The advantage of this notation is that you can precisely represent the value of the float as represented by the machine without worrying about any loss of precision. See this page for examples: https://www.exploringbinary.com/hexadecimal-floating-point-constants/
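As a quick illustration (using Python, which exposes this notation through float.hex and float.fromhex):

x = 0.1
print(x.hex())                      # 0x1.999999999999ap-4
print(float.fromhex(x.hex()) == x)  # True: the round trip is exact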
Note that NaN and Infinity values are not supported by hexadecimal-floats. There seems to be no general consensus on how to write these. Most languages actually don't even allow writing these as constants, so you resort to expressions such as 0/0 or 1/0 etc. instead.
Since you tagged this question with serialization, I'd recommend simply serializing the bit pattern you have for the float value. This will cost you 8 characters for single precision and 16 characters for double precision (64 and 128 bits respectively, assuming 8 bits per character). Perhaps not the most efficient, but it ensures you can encode all possible values and transmit them precisely.
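A minimal sketch of that bit-pattern approach, assuming Python and an IEEE 754 double:

import struct

value = 0.085

# Pack the double into its 8 raw bytes (big-endian), then hex-encode:
# 16 hex characters that capture the exact bit pattern.
encoded = struct.pack(">d", value).hex()
decoded = struct.unpack(">d", bytes.fromhex(encoded))[0]

print(encoded)           # 3fb5c28f5c28f5c3
print(decoded == value)  # True: nothing was lost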

What is the difference between “SHA-2” and “SHA-256”

I'm a bit confused about the difference between SHA-2 and SHA-256 and often hear them used interchangeably. I think SHA-2 is a "family" of hash algorithms and SHA-256 is a specific algorithm in that family. Can anyone clear up the confusion?
The SHA-2 family consists of multiple closely related hash functions. It is essentially a single algorithm in which a few minor parameters are different among the variants.
The initial spec only covered 224, 256, 384 and 512 bit variants.
The most significant difference between the variants is that some are 32 bit variants and some are 64 bit variants. In terms of performance this is the only difference that matters.
On a 32 bit CPU SHA-224 and SHA-256 will be a lot faster than the other variants, because they are the only 32 bit variants in the SHA-2 family. Executing the 64 bit variants on a 32 bit CPU is slow due to the added overhead of emulating 64 bit operations.
On a 64 bit CPU SHA-224 and SHA-256 will be a little slower than the other variants. Because they process only 32 bits at a time, they have to perform more operations to get through the same number of bytes. You do not get quite a doubling in speed from switching to a 64 bit variant, because the 64 bit variants have a larger number of rounds than the 32 bit variants.
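A rough way to see this on a 64 bit machine is to time the variants yourself. The sketch below (Python's hashlib, typically backed by OpenSSL) only shows the shape of such a test; absolute numbers will vary with CPU and build:

import hashlib
import timeit

data = b"\x00" * 1_000_000  # 1 MB of input

for name in ("sha256", "sha512"):
    func = getattr(hashlib, name)
    seconds = timeit.timeit(lambda: func(data).digest(), number=100)
    print(f"{name}: {seconds:.3f} s for 100 MB")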
The internal state is 256 bits in size for the two 32 bit variants and 512 bits in size for all four 64 bit variants. So the number of possible sizes for the internal state is less than the number of possible sizes for the final output. Going from a large internal state to a smaller output can be good or bad depending on your point of view.
If you keep the output size fixed, increasing the size of the internal state can in general be expected to improve security. If you keep the size of the internal state fixed and decrease the size of the output, collisions become more likely, but length extension attacks become harder, because the output no longer reveals the full internal state. Making the output size larger than the internal state would be pointless.
Because the 64 bit variants are both faster (on 64 bit CPUs) and likely to be more secure (due to the larger internal state), two new variants were introduced using 64 bit words but shorter outputs. Those are the ones known as SHA-512/224 and SHA-512/256.
The reason for wanting variants with output that much shorter than the internal state is usually either that for some usages it is impractical to use such a long output, or that the output needs to be used as a key for some algorithm that takes an input of a certain size.
Simply truncating the final output to your desired length is also possible. For example, the HMAC construction specifies truncating the final hash output to the desired MAC length. Because HMAC feeds the output of one invocation of the hash as input to another invocation, using a hash with shorter output results in an HMAC with less internal state. For this reason it is likely to be slightly more secure to use HMAC-SHA-512 and truncate the output to 384 bits than to use HMAC-SHA-384.
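For illustration, both constructions below produce a 384 bit MAC, but the first keeps SHA-512's larger internal state (a Python sketch with a made-up key and message):

import hashlib
import hmac

key = b"secret key"
msg = b"message to authenticate"

mac_truncated = hmac.new(key, msg, hashlib.sha512).digest()[:48]  # HMAC-SHA-512 cut to 384 bits
mac_sha384 = hmac.new(key, msg, hashlib.sha384).digest()          # plain HMAC-SHA-384

print(len(mac_truncated), len(mac_sha384))  # 48 48
print(mac_truncated == mac_sha384)          # False: they are different MACs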
The final output of SHA-2 is simply the internal state (after processing the padded input, which includes the message length) truncated to the desired number of output bits. The reason SHA-384 and SHA-512 on the same input look so different is that a different IV is specified for each of the variants.
Wikipedia:
The SHA-2 family consists of six hash functions with digests (hash values) that are 224, 256, 384 or 512 bits: SHA-224, SHA-256, SHA-384, SHA-512, SHA-512/224, SHA-512/256.
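The effect of the different IVs mentioned above is easy to check: if SHA-384 were just SHA-512 truncated to 384 bits, the two values below would match (a quick Python check):

import hashlib

data = b"abc"

sha384 = hashlib.sha384(data).hexdigest()
sha512_truncated = hashlib.sha512(data).hexdigest()[:96]  # 384 bits = 96 hex characters

print(sha384)
print(sha512_truncated)
print(sha384 == sha512_truncated)  # False: different IVs give different results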

How to write integer value "60" in 16bit binary, 32bit binary & 64bit binary

How to write integer value "60" in other binary formats?
8bit binary code of 60 = 111100
16bit binary code of 60 = ?
32bit binary code of 60 = ?
64bit binary code of 60 = ?
is it 111100000000 for 16 bit binary?
why does the 8bit binary code contain 6 bits instead of 8?
I googled for the answers but wasn't able to find them. Please help, as I'm still a beginner in this area.
Imagine you're writing the decimal value 60. You can write it using 2 digits, 4 digits or 8 digits:
1. 60
2. 0060
3. 00000060
In our decimal notation, the most significant digits are to the left, so increasing the number of digits for representation, without changing the value, means just adding zeros to the left.
Now, in most binary representations, this would be the same. The decimal 60 needs only 6 bits to represent it, so an 8bit or 16bit representation would be the same, except for the left-padding of zeros:
1. 00111100
2. 00000000 00111100
Note: Some OSs, software, hardware or storage devices might have different endianness - which means they might store 16bit values with the least significant byte first, then the most significant byte. Binary notation is still MSB-on-the-left, as above, but reading the memory of such little-endian devices will show that any 16bit chunk is internally byte-reversed:
1. 00111100 - 8bit - still the same.
2. 00111100 00000000 - 16bit, bytes are flipped.
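If you want to check the padding (and the byte order) programmatically, here is a small sketch; Python is used only because it makes the formatting easy:

n = 60

# Zero-padded binary at different widths
print(format(n, "08b"))   # 00111100
print(format(n, "016b"))  # 0000000000111100
print(format(n, "032b"))  # 00000000000000000000000000111100

# Endianness only matters once the value is stored as bytes
print(n.to_bytes(2, "big").hex())     # 003c (most significant byte first)
print(n.to_bytes(2, "little").hex())  # 3c00 (least significant byte first)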
Every number has exactly one binary representation.
On a 16/32/64 bit system, 111100 (60) would look just the same, with extra 0s added in front of the number (normally not shown).
So on 16 bit it would be 0000000000111100
32 bit - 00000000000000000000000000111100
and so on
For storage, endianness matters ... otherwise zeros are simply prefixed up to the bit width, so 60 would be...
8bit: 00111100
16bit: 0000000000111100

Decimals, Integers, and Doubles in Visual Basic

I'm a high school student learning coding in my spare time and I got stuck while learning Visual Basic. I'm having trouble figuring out the difference between Decimals, Doubles and Integers. I have searched the internet but found very little or confusing help. What I know so far is that Integers store whole numbers, Decimals hold decimal values and Doubles can hold both. But why would I choose Doubles over Decimals? Could someone please explain the difference between the three?
Doubles are double-precision (64-bit) floating point numbers. They are represented using a 52 bit mantissa, an 11 bit exponent, and a 1 bit sign. Floating point numbers are not exact representations of decimal numbers; rather, they are binary approximations. They are therefore suitable for scientific work where precision is more important than accuracy, but are not suitable for financial calculations, where accuracy is paramount.
Decimals are the same decimal numbers we use in school, and work exactly the same way. They have a range of -79,228,162,514,264,337,593,543,950,335 to +79,228,162,514,264,337,593,543,950,335. They are as close to an exact representation of decimal numbers as possible, and are designed for financial calculations, where accuracy and minimal rounding errors are very important.
Integers are whole numbers: positive, negative, and zero. Math using integers is exact, with no round-off errors. The high-order bit represents the number's sign. The range depends on the number of bytes used to represent the integer; for example, a 16-bit signed integer can represent numbers from -32768 to 32767.
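The practical difference is easy to see with a couple of sums. The sketch below uses Python's decimal module only as a stand-in: VB.NET's Decimal (a base-10 type) shows the same behaviour, while Double behaves like the plain floats here:

from decimal import Decimal

# Binary floating point (like Double): 0.1 and 0.2 are only approximations
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Base-10 arithmetic (like Decimal): these values are represented exactly
print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Integers: exact, but whole numbers only
print(7 // 2, 7 % 2)     # 3 1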

genfromtxt() artifact when displaying floats

In numpy, I'm reading an ASCII file (see below) using np.genfromtxt()
0.085 102175 0.00025
0.094 103325 0.00030
from numpy import genfromtxt
raw = genfromtxt(fn)  # fn is the path to the ASCII file above
When checking raw I get the following:
>>> raw[0,0]
0.085000000000000006
How do I prevent the artifact 6 at the end and where does it come from?
This is normal behaviour, and is due to the fundamental imprecision of floating point arithmetic. In other words, 0.085 cannot be represented exactly in floating point bits. For this reason, it's generally a good idea to assume a bit of noise in any numerical calculations.
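If the goal is just a cleaner display, control the formatting rather than the stored value (a sketch assuming NumPy):

import numpy as np

x = np.float64(0.085)  # the nearest representable double, not exactly 0.085

print(f"{x:.20f}")  # 0.08500000000000000611 - the stored approximation
print(f"{x:.3f}")   # 0.085 - rounded only for display

# For whole arrays, adjust the printed precision instead of the data
np.set_printoptions(precision=4)
print(np.array([0.085, 0.094]))  # [0.085 0.094]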