Why we use 1's complement instead of 2's complement when calculating checksums - udp

When calculating UDP checksums, I know that we complement the result and use it to check for errors. But I don't understand why we use 1's complement instead of 2's complement (as shown here). If there are no errors, the 1's complement check results in -1 (0xFFFF) and the 2's complement check results in 0 (0x0000).
To check for correct transmission, the receiver's CPU must first negate the result and then look at the ALU's zero flag, which costs one additional cycle for the negation. If 2's complement were used, the error check could be done simply by looking at the zero flag.

That is because using 2's complement may give you a wrong result if the sender
and receiver machines have different endianness.
If we use the example:
0000 1111 1110 0000
1111 0000 0001 0000
the checksum with 2's complement calculated on a little-endian machine would be:
0000 0000 0001 0000
if we added our original data to this checksum on a big-endian machine we would get:
0000 0000 1111 1111
which would suggest that our checksum was wrong even though it was not. However, 1's complement results are independent of the machine's endianness, so if we were to do the same thing with a 1's complement sum, our checksum would be:
0000 0000 0000 1111
which when added together with the data would get us:
1111 1111 1111 1111
which allows the short UDP checksum to work without requiring both the sender and receiver machines to have the same endianness.
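As a quick illustration (a sketch of mine, not part of the original answer), here is the 16-bit end-around-carry sum in Python, using the two example words above; byte-swapping every word before summing simulates reading the same data on a machine of the opposite endianness, and the check still passes:

def ones_complement_sum(words):
    # Add 16-bit words, folding any carry out of bit 15 back in
    # (end-around carry); this is the one's-complement sum.
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)
    return total

data = [0x0FE0, 0xF010]                       # the two example words above
checksum = ~ones_complement_sum(data) & 0xFFFF
print(hex(checksum))                          # 0xf

# Receiver check: data plus checksum must sum to 0xFFFF.
print(hex(ones_complement_sum(data + [checksum])))                      # 0xffff

# Byte-swap every word (the other endianness); the check still passes,
# because end-around-carry addition commutes with a uniform byte swap.
swap = lambda w: ((w & 0xFF) << 8) | (w >> 8)
print(hex(ones_complement_sum([swap(w) for w in data + [checksum]])))   # 0xffff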


How is this CRC calculated correctly?

I'm looking for help. The chip I'm using via SPI (MAX22190) specifies:
CRC polynomial: x^5 + x^4 + x^2 + x^0
CRC is calculated using the first 19 data bits padded with the 5-bit initial word 00111.
The 5-bit CRC result is then appended to the original data bits to create the 24-bit SPI data frame.
The CRC result I calculated with multiple tools is: 0x18
However, the chip flags a CRC error on this. It expects: 0x0F
Can anybody tell me where my calculations are going wrong?
My input data (19 data bits) is:
19-bit data:
0x04 0x00 0x00
0000 0100 0000 0000 000
24-bit, padded with init value:
0x38 0x20 0x00
0011 1000 0010 0000 0000 0000
=> Data sent by me: 0x38 0x20 0x18
=> Data expected by chip: 0x38 0x20 0x0F
The CRC algorithm is explained here.
I think your error comes from the 00111 padding: it must be appended on the right side of the data instead of on the left.
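For illustration (my own sketch, not taken from the MAX22190 datasheet), a plain mod-2 division in Python over the stated polynomial x^5 + x^4 + x^2 + x^0 (divisor bits 110101) reproduces both numbers from the question, assuming the 19 data bits shown above:

def crc5(bits, poly=0b110101):
    # Bit-by-bit mod-2 division by the degree-5 polynomial x^5 + x^4 + x^2 + 1.
    reg = 0
    for b in bits:
        reg = (reg << 1) | int(b)
        if reg & 0b100000:        # top bit set: subtract (XOR) the divisor
            reg ^= poly
    return reg                    # 5-bit remainder

data19 = "0000010000000000000"                # the 19 data bits from the question

# Init word appended on the RIGHT of the data, as suggested above:
print(hex(crc5(data19 + "00111")))            # 0xf  -> what the chip expects

# Register initialised with 00111 plus the usual 5-bit zero augmentation,
# which is presumably what the other tools computed:
print(hex(crc5("00111" + data19 + "00000")))  # 0x18 -> the value in the question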

What's the proper way to get a fixed-length bytes representation of an ECDSA Signature?

I'm using Python and cryptography.io to sign and verify messages. I can get a DER-encoded bytes representation of a signature with:
cryptography_priv_key.sign(message, hash_function)
...per this document: https://cryptography.io/en/latest/hazmat/primitives/asymmetric/ec/
A DER-encoded ECDSA Signature from a 256-bit curve is, at most, 72 bytes; see: ECDSA signature length
However, depending on the values of r and s, it can also be 70 or 71 bytes. Indeed, if I examine the length of the output of this function, it varies from 70 to 72. Do I have that right so far?
I can decode the signature to ints r and s. These are both apparently 32 bytes, but it's not clear to me whether that will always be so.
Is it safe to cast these two ints to bytes and send them over the wire, with the intention of encoding them again on the other side?
The simple answer is, yes, they will always be 32 bytes.
The more complete answer is that it depends on the curve. For example, a 256-bit curve has an order of 256 bits. Similarly, a 128-bit curve has an order of only 128 bits.
You can divide that bit count by eight to find the size of r and s in bytes.
It gets more complicated when the order's bit length isn't divisible by eight, as with secp521r1, where the order is a 521-bit number.
In this case, we round up: 521 / 8 is 65.125, so 66 bytes are needed to hold each value.
It is safe to send them over the wire and encode them again as long as you keep track of which value is r and which is s.
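For illustration, here is a minimal Python sketch using the cryptography package's decode_dss_signature/encode_dss_signature helpers (the key, curve, and message are placeholders): pad r and s to the byte length of the curve order, concatenate them for the wire, then rebuild the DER form on the other side.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, utils

private_key = ec.generate_private_key(ec.SECP256R1())   # placeholder key
message = b"example message"                             # placeholder message

der_sig = private_key.sign(message, ec.ECDSA(hashes.SHA256()))  # 70-72 bytes of DER

# Fixed-length form: pad r and s to the byte length of the curve order.
byte_len = (private_key.curve.key_size + 7) // 8   # 32 for secp256r1, 66 for secp521r1
r, s = utils.decode_dss_signature(der_sig)
wire = r.to_bytes(byte_len, "big") + s.to_bytes(byte_len, "big")
assert len(wire) == 2 * byte_len                   # always 64 bytes on this curve

# On the receiving side: split, rebuild the DER encoding, and verify.
r2 = int.from_bytes(wire[:byte_len], "big")
s2 = int.from_bytes(wire[byte_len:], "big")
rebuilt = utils.encode_dss_signature(r2, s2)
private_key.public_key().verify(rebuilt, message, ec.ECDSA(hashes.SHA256()))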

How to write integer value "60" in 16bit binary, 32bit binary & 64bit binary

How to write integer value "60" in other binary formats?
8bit binary code of 60 = 111100
16bit binary code of 60 = ?
32bit binary code of 60 = ?
64bit binary code of 60 = ?
is it 111100000000 for 16 bit binary?
why does the 8-bit binary code contain 6 bits instead of 8?
I googled for the answers but wasn't able to find them. Please explain, as I'm still a beginner in this area.
Imagine you're writing the decimal value 60. You can write it using 2 digits, 4 digits or 8 digits:
1. 60
2. 0060
3. 00000060
In our decimal notation, the most significant digits are to the left, so increasing the number of digits for representation, without changing the value, means just adding zeros to the left.
Now, in most binary representations, this would be the same. The decimal 60 needs only 6 bits to represent it, so an 8bit or 16bit representation would be the same, except for the left-padding of zeros:
1. 00111100
2. 00000000 00111100
Note: Some OSs, software, hardware or storage devices might have a different endianness, which means they might store 16-bit values with the least significant byte first, then the most significant byte. Binary notation is still MSB-on-the-left, as above, but reading the memory of such little-endian devices will show that any 16-bit chunk is internally byte-reversed:
1. 00111100 - 8bit - still the same.
2. 00111100 00000000 - 16bit, bytes are flipped.
Every number has exactly one binary representation.
On a 16/32/64-bit system, 111100 (60) would just look the same, with more 0s added in front of the number (usually not shown),
so on 16 bit it would be 0000000000111100
32 bit - 00000000000000000000000000111100
and so on
For storage, endianness matters ... otherwise zeros are always prefixed up to the bit width, so 60 would be:
8bit: 00111100
16bit: 0000000000111100
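A quick Python check of the zero-padding and of the little-endian byte order mentioned above (the width in the format specifier only adds leading zeros; it never changes the value):

import struct

n = 60
print(format(n, "08b"))     # 00111100
print(format(n, "016b"))    # 0000000000111100
print(format(n, "032b"))    # 00000000000000000000000000111100
print(format(n, "064b"))    # sixty-four digits, value unchanged

# Memory layout of the same 16-bit value:
print(struct.pack("<H", n).hex(" "))   # 3c 00  little-endian: low byte first
print(struct.pack(">H", n).hex(" "))   # 00 3c  big-endian: high byte first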

Multiplication of bits in twos complement form

Please help me with the following two's complement multiplication logic.
             Operands                 Actual          Cropped
Unsigned     5 [101] × 3 [011]        15 [001111]      7 [111]
Two's comp. −3 [101] × 3 [011]        −9 [110111]     −1 [111]
I can't understand how the actual multiplication differs between unsigned and two's complement when the bits of both operands are the same.
Multiplication for signed and unsigned integers is performed by different rules (unlike addition and subtraction, for example).
The same bits can represent different values; the actual interpretation depends on the type.
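A small Python illustration of this, assuming 3-bit operands as in the table: the full-width products differ, but cropping both to the operand width leaves the same bit pattern, 111.

def as_signed(value, bits):
    # Reinterpret a raw bit pattern as a two's-complement number.
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

a, b = 0b101, 0b011                    # the same two 3-bit patterns

u = a * b                              # unsigned reading:          5 * 3 = 15
s = as_signed(a, 3) * as_signed(b, 3)  # two's-complement reading: -3 * 3 = -9

print(format(u & 0b111111, "06b"))     # 001111  (15)
print(format(s & 0b111111, "06b"))     # 110111  (-9 in 6-bit two's complement)
print(format(u & 0b111, "03b"), format(s & 0b111, "03b"))   # 111 111 - same cropped bits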

Unexpected value printed when using %.1f

I'm trying to display floats to just one decimal point. I'm getting unexpected results as follows:
Code:
float a = 1.25;
float b = 1.35;
NSLog(@"1.25 -> %.1f\n1.35 -> %.1f", a, b);
Output:
1.25 -> 1.2
1.35 -> 1.4
Expected output, either:
1.25 -> 1.3
1.35 -> 1.4
or:
1.25 -> 1.2
1.35 -> 1.3
Is this simply due to the internal conversion between binary and decimal? If so, how do I get the expected behaviour?
I'm using Xcode 4.6.
Edit: Okay, thanks to TonyK and H2CO3, it's due to the binary representation of decimal fractions.
float a = 1.25;
float b = 1.35;
NSLog(@"1.25 -> %.30f\n1.35 -> %.30f", a, b);
1.25 -> 1.250000000000000000000000000000
1.35 -> 1.350000000000000088817841970013
Lots of good info, but as far as I can see no one has approached the second question: How do I get the expected behaviour?
Rounding numbers in Objective-C is quite a different question.
1.35 is 27/20, which in binary is
1.01 0110 0110 0110 0110 0110 0110....
A float has a 23-bit mantissa on most systems (not counting the implied leading 1.), so this gets rounded up to
1.01 0110 0110 0110 0110 0110 1
(because the discarded bits, 1100 1100..., are more than half a unit in the last place). So it's strictly greater than 1.35 by the time printf sees it. Hence 1.4.
As for 1.25, this is exactly representable in binary as
1.01
So printf sees its exact value. But how should it round 1.25? We were taught in school to round a trailing 5 upwards. But most modern systems use a default rounding mode called "round to even" at the hardware level, because it lessens the effect of cumulative rounding errors. This means that when a number is exactly halfway between the two nearest candidates, it gets rounded to the even candidate.
So it seems that printf is using "round to even" for decimal output as well! I checked this hypothesis at this ideone link, and indeed 1.75 gets rounded up, to 1.8. This is a surprise to me, but not a huge one.
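The same two effects are easy to reproduce, here in Python, whose float formatting also rounds exact halfway cases to even (printf behaviour may vary with the C library):

from decimal import Decimal

# The closest double to 1.35 is slightly ABOVE 1.35, so it rounds up to 1.4;
# 1.25 is exactly representable.
print(Decimal(1.35))    # 1.35000000000000008881784197001...  (slightly above 1.35)
print(Decimal(1.25))    # 1.25  (exact)

# Exact halfway cases go to the even neighbour:
print("%.1f" % 1.25)    # 1.2
print("%.1f" % 1.75)    # 1.8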
That's because floating-point numbers aren't exact. %.1f prints the number rounded to one decimal place; however, 1.35 can't be exactly represented as 1.3500000, so what is actually stored is a slightly larger number that can be.
Read about this behavior here: What Every Computer Scientist Should Know About Floating-Point Arithmetic.