How can the decimal value of the sha256 hash be lower than one? - cryptography

In Bitcoin mining, the sha256(sha256(blockheader)) hash has to be lower than 1/difficulty, right? But how can a hex value be lower than 1? For example, the hash of the genesis block is 000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f, which in decimal is 10628944869218562084050143519444549580389464591454674019345556079, and that is certainly not lower than 1/1, which was the difficulty of the genesis block.
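The comparison in mining is really hash < max_target / difficulty, with the hash read as a 256-bit integer; it is only "lower than 1" if you divide both sides by 2^256. A sketch that reproduces the genesis hash and the target check, assuming the standard serialized 80-byte genesis header:

```python
import hashlib

# The standard serialized genesis block header (well-known constants,
# each field stored little-endian).
header = bytes.fromhex(
    "01000000"                                                            # version
    + "00" * 32                                                           # prev block hash
    + "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"  # merkle root
    + "29ab5f49"                                                          # timestamp
    + "ffff001d"                                                          # bits
    + "1dac2b7c")                                                         # nonce

# Double SHA-256; Bitcoin displays the result byte-reversed.
h = hashlib.sha256(hashlib.sha256(header).digest()).digest()[::-1]
hash_int = int.from_bytes(h, "big")

# Difficulty 1 corresponds to the maximum target, not to the number 1.
max_target = 0xFFFF * 2**208
difficulty = 1
print(h.hex())                              # 000000000019d668...
print(hash_int < max_target // difficulty)  # True
```

Dividing both hash_int and the target by 2^256 gives the fractional form the question refers to, where the threshold is indeed below 1.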


What is the biased exponent?

My question concerns the IEEE 754 Standard
So I'm going through the steps necessary to convert denary numbers into a floating point number (IEEE 754 standard), but I don't understand the purpose of determining the biased exponent. I can't get my head around that step: what exactly is it, and why is it done?
Could anyone explain what this is? Please keep in mind that I have just started a computer science conversion master's, so I won't completely understand certain choices of terminology!
If you think it's very long to explain, please point me in the right direction!
The exponent in an IEEE 32-bit floating point number is 8 bits.
In most cases where we want to store a signed quantity in an 8-bit field, we use a signed representation. 0x80 is -128, 0xFF is -1, 0x00 is 0, up to 0x7F is 127.
But that's not how the exponent is represented. The exponent field is treated as an unsigned 8-bit number that is 127 too large: look at the unsigned value in the exponent field and subtract 127 to get the actual value. So 0x00 represents -127, and 0x7F represents 0.
For 64-bit floats, the field is 11 bits, with a bias of 1023, but it works the same.
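As a quick illustration (a sketch using Python's struct module to get at the raw bits; the helper name exponent_of is made up), you can read the exponent field of a 32-bit float and undo the bias:

```python
import struct

def exponent_of(x: float) -> tuple[int, int]:
    # Reinterpret the IEEE 754 single-precision bits as a 32-bit integer.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    e_field = (bits >> 23) & 0xFF     # the stored 8-bit exponent field
    return e_field, e_field - 127     # (biased field, actual exponent)

print(exponent_of(1.0))    # (127, 0): field 0x7F stores exponent 0
print(exponent_of(2.0))    # (128, 1)
print(exponent_of(0.375))  # (125, -2): 0.375 = 1.5 * 2**-2
```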
A floating point number could be represented (but is not) as a sign bit s, an exponent field e, and a mantissa field m, where e is a signed integer and m is an unsigned fraction. The value of that number would then be computed as (-1)^s · 2^e · m. But this would not allow representing important special cases.
Note that one could increase the exponent by ±n and shift the mantissa right by ±n without changing the value of the number. This allows nearly all numbers to adjust the exponent such that the mantissa starts with a 1 (one exception is of course 0, a special FP number). If one does so, one has normalized FP numbers, and since the mantissa now always starts with 1, one does not have to store the leading 1 in memory; the saved bit is used to increase the precision of the FP number. Thus, no mantissa m is stored, but a mantissa field mf.
But how is 0 represented now? And what about FP numbers that already have the max or min exponent field but, due to their normalization, cannot be made larger or smaller? And what about "not a number" values that are, e.g., the result of 0/0?
Here comes the idea of a biased exponent: if half of the max exponent value is added to the exponent, one gets the biased exponent be. To compute the value of the FP number, this bias of course has to be subtracted again. But all normalized FP numbers now have 0 < be < (all ones). Therefore the special biased exponents 0 and (all ones) can be reserved for special purposes:
be = 0, mf = 0: Exact 0.
be = 0, mf ≠ 0: A denormalized number, i.e. mf is the real mantissa that does not have a leading 1.
be = (all 1), mf = 0: Infinity
be = (all 1), mf ≠ 0: Not a number (NaN)
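These four cases can be checked directly by unpacking the bit fields of a float32 (a sketch; the helper name fields is made up for illustration):

```python
import struct

def fields(x: float) -> tuple[int, int, int]:
    # Split a float32 into (sign s, biased exponent be, mantissa field mf).
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

print(fields(0.0))           # (0, 0, 0): exact zero
print(fields(float("inf")))  # (0, 255, 0): infinity, be all ones
print(fields(float("nan")))  # be all ones, mf != 0: NaN
print(fields(1e-45))         # be == 0, mf != 0: a denormalized number
```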

What's the proper way to get a fixed-length bytes representation of an ECDSA Signature?

I'm using python and cryptography.io to sign and verify messages. I can get a DER-encoded bytes representation of a signature with:
cryptography_priv_key.sign(message, hash_function)
...per this document: https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ec/
A DER-encoded ECDSA Signature from a 256-bit curve is, at most, 72 bytes; see: ECDSA signature length
However, depending on the values of r and s, it can also be 70 or 71 bytes. Indeed, if I examine length of the output of this function, it varies from 70-72. Do I have that right so far?
I can decode the signature to ints r and s. These are both apparently 32 bytes, but it's not clear to me whether that will always be so.
Is it safe to cast these two ints to bytes and send them over the wire, with the intention of encoding them again on the other side?
The simple answer is: yes, they will always be 32 bytes.
The more complete answer is that it depends on the curve. A 256-bit curve has an order of 256 bits, and r and s are reduced modulo that order; similarly, a 128-bit curve only has an order of 128 bits.
You can divide this bit length by eight to find the size of r and s in bytes.
It gets more complicated when the order's bit length isn't divisible by eight, as with secp521r1, where the order is a 521-bit number.
In this case, we round up: 521 / 8 is 65.125, so we need 66 bytes to fit this number.
It is safe to send them over the wire and encode them again, as long as you keep track of which is r and which is s.
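A sketch of the round trip, assuming cryptography.io's decode_dss_signature/encode_dss_signature helpers (the key, curve, and message here are invented for illustration):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.utils import (
    decode_dss_signature, encode_dss_signature)

private_key = ec.generate_private_key(ec.SECP256R1())
message = b"hello"
der_sig = private_key.sign(message, ec.ECDSA(hashes.SHA256()))  # 70-72 bytes

# Fixed length: round the curve order's bit length up to whole bytes.
n = (private_key.curve.key_size + 7) // 8   # 32 for a 256-bit curve
r, s = decode_dss_signature(der_sig)
wire = r.to_bytes(n, "big") + s.to_bytes(n, "big")   # always exactly 2*n bytes

# On the receiving side, rebuild the DER signature and verify.
r2 = int.from_bytes(wire[:n], "big")
s2 = int.from_bytes(wire[n:], "big")
private_key.public_key().verify(
    encode_dss_signature(r2, s2), message, ec.ECDSA(hashes.SHA256()))
```

Note that to_bytes left-pads with zeros, so small r or s values still serialize to n bytes, which is exactly what keeps the wire format fixed-length.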

Is there a CRC or cryptographic function for generating smaller unique results from unique inputs?

I have a manufacturer-unique 128-bit ID that I cannot change, and its size is just too long for our purpose (2^128 possible values). This is on an embedded microcontroller.
One idea is to compute a (run-time) CRC32 or hash to narrow the result, but I am not sure about uniqueness; can a CRC32, for example, be unique over 2^32 values?
Or what kind of cryptographic function can I use to guarantee a unique 32-bit output from a unique input?
Thanks for any clarification.
If you know all these ID values in advance, then you can check them using a hash table. You can save space by storing only as many bits of each hash value as are necessary to tell them apart if they happen to land in the same bucket.
If not, then you're going to have a hard time, I'm afraid.
Let's assume these 128-bit IDs are produced as the output of a cryptographic hash function (e.g., MD5), so each ID resembles 128 bits chosen uniformly at random.
If you reduce these to 32-bit values, then the best you can hope to achieve is a set of 32-bit numbers where each bit is 0 or 1 with uniform probability. You could do this by calculating the CRC32 checksum, or by simply discarding 96 bits — it makes no difference.
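For example (the ID value below is invented for illustration, modeled as an MD5 output as assumed above), CRC32 and plain truncation both yield a 32-bit value whose bits are uniform when the input bits are uniform:

```python
import hashlib
import zlib

# A hypothetical 128-bit manufacturer ID, modeled as a hash output.
uid = hashlib.md5(b"device-0001").digest()

via_crc = zlib.crc32(uid)                    # reduce to 32 bits via CRC32
via_trunc = int.from_bytes(uid[:4], "big")   # or simply keep the first 32 bits

print(f"{via_crc:08x} {via_trunc:08x}")      # two equally good 32-bit reductions
```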
32 bits is not nearly enough to avoid collisions. The collision probability exceeds 1 in a million after just 93 inputs, and 1 in a thousand after 2,900 inputs. After 77,000 inputs, the collision probability reaches 50%.
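Those figures follow from the standard birthday-bound approximation; a quick way to reproduce them:

```python
import math

def collision_probability(k: int, bits: int = 32) -> float:
    # Birthday bound: P(collision) ≈ 1 - exp(-k*(k-1) / (2 * 2**bits))
    return 1.0 - math.exp(-k * (k - 1) / (2.0 * 2**bits))

print(collision_probability(93))      # ~1e-6
print(collision_probability(2900))    # ~1e-3
print(collision_probability(77000))   # ~0.50
```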
So instead, your only real options are to somehow reverse-engineer the ID values into something smaller, or implement some external means of replacing these IDs with sequential integers (e.g., using a hash table).

Is cracking SHA1 easier if I know part of the input?

Assume that I know 80% of a SHA1 input. Is cracking the remaining 20% from the SHA1 hash value easier than cracking the whole input? If so, by what percentage?
E.g., I know the x's in the input: SHA1(xxxxxxxxyy) = hash value
Assume there are 10 bytes in the input. To crack the whole input, we'd have to try up to 2^(10*8) inputs. With 80% given, we only have to try 2^(2*8) inputs. That's a factor of 2^64 fewer, roughly 18 quintillion times. If the input size goes up, the ratio gets even larger.
SHA1 is irreversible today with about 100 unknown bits (roughly 12 bytes) in the input. With only 20% of the input unknown, that means the input size would need to be about 500 bits to be secure, or about 62 bytes.
It actually matters whether the unknown part is at the beginning or the end. Each 32 bits of known data at the beginning reduces the number of needed operations by a bit more than you might expect because some of the calculations can be re-used.
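A brute-force sketch of that search at toy sizes (the helper name crack_suffix is made up): with only 2 unknown bytes there are just 2^16 candidates, which is why a small unknown portion is so much cheaper to crack than the whole input.

```python
import hashlib
from itertools import product

def crack_suffix(known_prefix: bytes, target: bytes, unknown_len: int):
    # Try every possible value of the unknown bytes: 256**unknown_len candidates.
    for candidate in product(range(256), repeat=unknown_len):
        suffix = bytes(candidate)
        if hashlib.sha1(known_prefix + suffix).digest() == target:
            return suffix
    return None

# Toy demo matching the question's SHA1(xxxxxxxxyy) example:
# 8 known bytes, 2 unknown bytes -> at most 65,536 SHA-1 calls.
target = hashlib.sha1(b"xxxxxxxx" + b"yy").digest()
print(crack_suffix(b"xxxxxxxx", target, 2))   # b'yy'
```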

Binary string with random shift - cryptography

Hello,
I have a binary string of length n. My goal is to make every bit in the string equal to "1".
I can flip any bits of the string that I want, but after I flip them, the string undergoes a random circular shift (shift length uniformly distributed over 0...n-1).
I have no way to know the state of the bits, neither initially nor in the middle of the process; I only know when they are all "1".
As I understand it, there should be some strategy that guarantees that I eventually try every possible state of this string.
Thank you
Flip bit 1 until all are set to 1. I don't see there being anything faster without testing the bits.
Georg has the best answer: if the string is shifted randomly (I assume by 0..n-1 positions, evenly distributed), his strategy of always flipping the first bit will sooner or later succeed.
Unfortunately, that strategy may take a very long time, depending on the length of the string.
On average, n/2 of the bits will already be set to 1, so initially the probability that a bit flip is helpful is about 0.5; for each additional bit already set, that probability decreases by 1/n.
The process can be viewed as a Markov chain in which the probability of being in state 0xff...ff, where all bits are set, is calculated, and thus the average number of trials required to reach that state can be computed.
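Rather than solving the Markov chain analytically, you can estimate the average number of flips by simulation (a sketch: the flip-the-first-bit strategy and the uniform circular shift are as described above; the random starting state is an assumption):

```python
import random

def trials_until_all_ones(n: int, rng: random.Random) -> int:
    # Start from a random hidden state; repeatedly flip position 0, then
    # apply a uniform random circular shift, until every bit is 1.
    bits = [rng.randint(0, 1) for _ in range(n)]
    steps = 0
    while not all(bits):
        bits[0] ^= 1                  # flip the first bit
        k = rng.randrange(n)          # random shift in 0..n-1
        bits = bits[k:] + bits[:k]
        steps += 1
    return steps

rng = random.Random(0)
avg = sum(trials_until_all_ones(8, rng) for _ in range(1000)) / 1000
print(avg)   # estimated average number of flips for n = 8
```

Running this for growing n shows how quickly the expected number of trials blows up with the string length.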