How is this CRC calculated correctly? - embedded

I'm looking for help. The chip I'm using via SPI (MAX22190) specifies:
CRC polynomial: x^5 + x^4 + x^2 + x^0
CRC is calculated using the first 19 data bits padded with the 5-bit initial word 00111.
The 5-bit CRC result is then appended to the original data bits to create the 24-bit SPI data frame.
The CRC result I calculated with multiple tools is: 0x18
However, the chip reports a CRC error on this. It expects: 0x0F
Can anybody tell me where my calculations are going wrong?
My input data (19 data bits) is:
19-bit data:
0x04 0x00 0x00
0000 0100 0000 0000 000
24-bit, padded with init value:
0x38 0x20 0x00
0011 1000 0010 0000 0000 0000
=> Data sent by me: 0x38 0x20 0x18
=> Data expected by chip: 0x38 0x20 0x0F

The CRC algorithm is explained here.
I think your error comes from the 00111 padding: it must be appended on the right side of the data bits instead of on the left.
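To illustrate, here is a minimal C sketch (my own, not the vendor's reference code) that appends the 00111 word on the right of the 19 data bits and reduces the 24-bit frame by x^5 + x^4 + x^2 + 1, MSB first. With the question's data it yields 0x0F, which is what the chip expects.

#include <stdint.h>
#include <stdio.h>

/* Bit-by-bit CRC-5 over the 24-bit frame: 19 data bits followed by the
 * 5-bit initial word 00111 appended on the right (per the answer above),
 * reduced MSB-first by x^5 + x^4 + x^2 + 1. */
static uint8_t crc5_max22190(uint32_t data19)
{
    uint32_t frame = (data19 << 5) | 0x07;   /* append 00111 on the right */
    uint8_t crc = 0;

    for (int i = 23; i >= 0; i--) {
        uint8_t msb = (crc >> 4) & 1;
        crc = (uint8_t)(((crc << 1) | ((frame >> i) & 1)) & 0x1F);
        if (msb)
            crc ^= 0x15;                     /* x^4 + x^2 + 1; the x^5 term is implicit */
    }
    return crc;                              /* 5-bit remainder */
}

int main(void)
{
    uint32_t data19 = 0x2000;  /* the question's 19 bits: 0000 0100 0000 0000 000 */
    printf("CRC = 0x%02X\n", crc5_max22190(data19));  /* prints CRC = 0x0F */
    return 0;
}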

Related

When BHE' signal in 8086 is activated or deactivated?

I'm studying the hardware specification for the 8086 and I'm wondering what the BHE' signal does. When is it activated, and when is it deactivated?
The 8086 can address bytes (8 bits) and words (16 bits) in memory.
To access a byte at an even address, the A0 signal will be logically 0 and the BHE signal will be 1.
To access a byte at an odd address, the A0 signal will be logically 1 and the BHE signal will be 0.
To access a word at an even address, the A0 signal will be logically 0 and the BHE signal will also be 0.
instruction        A0   BHE   cycles
mov al, [1234h]     0    1      10
mov al, [1235h]     1    0      10
mov ax, [1234h]     0    0      10
To access a word at an odd address, the processor will need to address the bytes separately. This will incur a penalty of 4 cycles!
The instruction mov ax, [1235h] will take 14 cycles.
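As an illustration only, here is a small C model (hypothetical, not real hardware code) of the rules above: it prints A0, /BHE and the number of bus cycles for the byte and word accesses from the table.

#include <stdio.h>

/* Hypothetical model of how the 8086 drives A0 and /BHE; it just encodes
 * the rules from the answer above and is not real hardware code. */
static void bus_cycles(unsigned addr, int is_word)
{
    if (!is_word)
        /* One cycle: A0 selects the bank, /BHE enables the upper (odd) bank. */
        printf("byte @%04Xh: A0=%u /BHE=%u  (1 bus cycle)\n",
               addr, addr & 1u, (unsigned)!(addr & 1u));
    else if ((addr & 1u) == 0)
        /* Aligned word: both banks are enabled in a single cycle. */
        printf("word @%04Xh: A0=0 /BHE=0  (1 bus cycle)\n", addr);
    else
        /* Misaligned word: odd byte first, then the even byte at addr+1. */
        printf("word @%04Xh: A0=1 /BHE=0, then A0=0 /BHE=1  (2 bus cycles)\n", addr);
}

int main(void)
{
    bus_cycles(0x1234, 0);  /* mov al, [1234h] */
    bus_cycles(0x1235, 0);  /* mov al, [1235h] */
    bus_cycles(0x1234, 1);  /* mov ax, [1234h] */
    bus_cycles(0x1235, 1);  /* mov ax, [1235h] -> the 4-cycle penalty */
    return 0;
}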

Why we use 1's complement instead of 2's complement when calculating checksums

When calculating UDP checksums I know that we complement the result and use it to check for errors. But I don't understand why we use 1's complement instead of 2's complement (as shown here). If there are no errors, the 1's complement result is -1 (0xFFFF) and the 2's complement result is 0 (0x0000).
To check for correct transmission, the receiver's CPU must first negate the result and then look at the ALU's zero flag, which costs 1 additional cycle for the negation. If 2's complement were used, the error checking could be done simply by looking at the zero flag.
That is because using 2's complement may give you a wrong result if the sender
and receiver machines have different endianness.
If we use the example:
0000 1111 1110 0000
1111 0000 0001 0000
the checksum with 2's complement calculated on a little-endian machine would be:
0000 0000 0001 0000
if we added our original data to this checksum on a big-endian machine we would get:
0000 0000 1111 1111
which would suggest that our checksum was wrong even though it was not. However, 1's complement results are independent of the endianness of the machine, so if we were to do the same thing with a 1's complement number our checksum would be:
0000 0000 0000 1111
which when added together with the data would get us:
1111 1111 1111 1111
which allows the short UDP checksum to work without requiring both the sender and receiver machines to have the same endianness.
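Here is a short C sketch (my own, not taken from any networking stack) that reproduces the example: it computes the one's-complement sum with end-around carry, complements it to get the checksum, and then verifies the data-plus-checksum sum in both byte orders.

#include <stdint.h>
#include <stdio.h>

/* 16-bit one's-complement sum with end-around carry, as used by the
 * Internet checksum; this just reproduces the example in the answer. */
static uint16_t ones_sum(const uint16_t *w, int n)
{
    uint32_t sum = 0;
    for (int i = 0; i < n; i++) {
        sum += w[i];
        sum = (sum & 0xFFFFu) + (sum >> 16);  /* fold the carry back in */
    }
    return (uint16_t)sum;
}

static uint16_t swap16(uint16_t v) { return (uint16_t)((v << 8) | (v >> 8)); }

int main(void)
{
    uint16_t data[2] = { 0x0FE0, 0xF010 };
    uint16_t csum    = (uint16_t)~ones_sum(data, 2);      /* 0x000F */

    uint16_t native[3]  = { data[0], data[1], csum };
    uint16_t swapped[3] = { swap16(data[0]), swap16(data[1]), swap16(csum) };

    printf("native : %04X\n", ones_sum(native, 3));   /* FFFF */
    printf("swapped: %04X\n", ones_sum(swapped, 3));  /* FFFF: still "all ones" */
    return 0;
}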

Understanding Organization of the CRAM bits in bitstream .bin file

For an iCE40 1k device, the following is a snippet from the output of the command "iceunpack -vv example.bin".
I could not understand why there are 332 x 144 bits.
My understanding from [1] is that CRAM BLOCK[0] starts at the logic tile (1,1), and it should contain:
48 logic tiles, each 54x16,
14 IO tiles, each 18x16
How the "332 x 144" is calculated?
Where does the IO tile and logic tiles bits are mapped in CRAM BLOCK[0] bits?
e.g., which bits of CRAM BLOCK[0] indicates the bits for logic tile (1,1) and bits for IO tile (0,1)?
Set bank to 0.
Next command at offset 26: 0x01 0x01
CRAM Data [0]: 332 x 144 bits = 47808 bits = 5976 bytes
Next command at offset 6006: 0x11 0x01
[1]. http://www.clifford.at/icestorm/format.html
Thanks.
Height = 9 x 16 = 144 (1 I/O tile and 8 logic tiles).
Width = 18 + 42 + 5 x 54 = 330 (1 I/O tile, 1 RAM tile and 5 logic tiles), plus "two zero bytes" = 332.
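For what it's worth, a trivial C check of that arithmetic against the iceunpack output (tile widths and heights taken from the question and [1]):

#include <stdio.h>

/* The answer's arithmetic, checked against the iceunpack output.
 * Tile widths/heights come from the question and the icestorm docs. */
int main(void)
{
    int height = 9 * 16;                /* 1 I/O tile row + 8 logic tile rows */
    int width  = 18 + 42 + 5 * 54 + 2;  /* I/O + RAM + 5 logic tiles + the two extra zero columns */
    printf("%d x %d bits = %d bits = %d bytes\n",
           width, height, width * height, width * height / 8);
    /* prints: 332 x 144 bits = 47808 bits = 5976 bytes */
    return 0;
}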

Is it always true that the CRC of a buffer that has the CRC appended to the end is always 0?

Example of the hypothesis...
Is it always true that the CRC of a buffer that has the CRC appended to the end is always 0?
#include <stdint.h>
#include <stdio.h>

extern uint16_t CRC16(uint8_t* buffer, uint16_t size); // From your favorite library

int main(void) {
    uint16_t crc;
    uint8_t buffer[10] = {1,2,3,4,5,6,7,8};

    crc = CRC16(buffer, 8);
    buffer[8] = crc >> 8;   // This may be endian-dependent code
    buffer[9] = crc & 0xff; // Ibid.

    if (CRC16(buffer, 10) != 0)
        printf("Should this ever happen???\n");
    else
        printf("It never happens!\n");
}
If the CRC is modified after it is calculated, as with CRCs that post-complement the value after it is generated, then generating a new CRC over the data plus the appended CRC results in a non-zero but constant value. If the CRC is not post-modified, the result is zero, regardless of whether the CRC register is initialized to zero or to a non-zero value before generation.
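As a self-contained illustration of that point, here is a sketch using a plain MSB-first CRC-16 (polynomial 0x1021, no final XOR; this is just one of many CRC-16 variants, not necessarily the one in your library). The CRC over the data plus the appended CRC comes out zero for any initial value, as long as the CRC bytes are appended in the same bit order the CRC is computed in.

#include <stdint.h>
#include <stdio.h>

/* Plain MSB-first CRC-16, polynomial 0x1021, no final XOR.  The init value
 * is a parameter to show that the "residue is zero" property holds for any
 * init as long as the CRC is not post-modified.  Sketch only. */
static uint16_t crc16(const uint8_t *buf, uint16_t size, uint16_t init)
{
    uint16_t crc = init;
    while (size--) {
        crc ^= (uint16_t)(*buf++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    uint8_t buffer[10] = {1, 2, 3, 4, 5, 6, 7, 8};
    uint16_t inits[] = {0x0000, 0xFFFF, 0x1D0F};

    for (int i = 0; i < 3; i++) {
        uint16_t crc = crc16(buffer, 8, inits[i]);
        buffer[8] = (uint8_t)(crc >> 8);   /* MSB first matches the bit order */
        buffer[9] = (uint8_t)(crc & 0xFF); /* used by this MSB-first CRC      */
        printf("init %04X: crc %04X, crc over data+crc = %04X\n",
               inits[i], crc, crc16(buffer, 10, inits[i]));  /* always 0000 */
    }
    return 0;
}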
Is it always true that the CRC of a buffer that has the CRC appended to the end is always 0 ?
Depends on the CRC, and on how it is appended. For the 16-bit and 32-bit "CCITT" CRCs used in networking (Ethernet, V.42, ...), no: the final CRC (appended in the specified order) is a constant, but not zero: 47 0F for the 16-bit CRC, 1C DF 44 21 for the 32-bit CRC. 16-bit example:
-------- message -------- CRC-CCITT-16
01 02 03 04 05 06 07 08 D4 6D
01 02 03 04 05 06 07 08 D4 6D 47 0F
DE AD BE EF CB E5
DE AD BE EF CB E5 47 0F
That comes in handy in telecom, where the layer handling reception often learns that the frame has ended only after receiving the CRC, which has by then already been fed into the hardware checking the CRC.
The underlying reason is that the 16-bit CRC of the 8-byte message m0 m1 … m6 m7 is defined as the remainder of the division of the sequence /m0 /m1 m2 m3 … m6 m7 FF FF by the generating polynomial (/m0 denoting the complement of m0).
When we compute the CRC of the message followed by the original CRC r0 r1, the new CRC is thus the remainder of the division of the sequence /m0 /m1 m2 m3 … m6 m7 r0 r1 FF FF by the generating polynomial, and therefore the remainder of the division of the sequence FF FF FF FF by the generating polynomial: a constant, but one that has no reason to be zero.
Try it online in Python! It includes 16-bit and 32-bit versions, "by hand" and using external libraries, including one where the constant is zero.
For CRCs that do not append the right number of bits, or that output the CRC with the wrong endianness (these variants are legion), the result depends on the message. This is a sure sign that something is wrong, and that correspondingly:
the receiver can no longer enter the message's CRC into the CRC checker and compare the outcome to a constant to check the integrity of the message;
the desirable property of a CRC that any error burst no longer than the CRC is caught is lost (if we do not get the CRC straight, an error that overlaps the end of the message and the CRC can sometimes remain undetected).
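If you want to reproduce the 16-bit constant in C rather than Python, here is a sketch of the reflected CRC-16 used by HDLC/X.25 (polynomial 0x1021 reflected as 0x8408, initial value 0xFFFF, final complement, CRC appended low byte first). For any message, running the CRC again over message + CRC should give the same constant, 0x0F47, i.e. the bytes 47 0F quoted above.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the reflected CRC-16 used by HDLC/X.25 (poly 0x1021 reflected
 * as 0x8408, init 0xFFFF, final complement).  Appending the CRC low byte
 * first and running the CRC again gives a message-independent constant. */
static uint16_t crc16_x25(const uint8_t *buf, size_t n)
{
    uint16_t crc = 0xFFFF;
    while (n--) {
        crc ^= *buf++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1) ? (uint16_t)((crc >> 1) ^ 0x8408) : (uint16_t)(crc >> 1);
    }
    return (uint16_t)~crc;                 /* the transmitted CRC is the complement */
}

static void demo(const uint8_t *msg, size_t n)
{
    uint8_t frame[16];
    uint16_t crc = crc16_x25(msg, n);

    memcpy(frame, msg, n);
    frame[n]     = (uint8_t)(crc & 0xFF);  /* low byte first, matching the bit order */
    frame[n + 1] = (uint8_t)(crc >> 8);
    printf("crc=%04X  crc(msg+crc)=%04X\n", crc, crc16_x25(frame, n + 2));
}

int main(void)
{
    const uint8_t m1[] = {1, 2, 3, 4, 5, 6, 7, 8};
    const uint8_t m2[] = {0xDE, 0xAD, 0xBE, 0xEF};

    demo(m1, sizeof m1);   /* second value is the constant 0F47 ...          */
    demo(m2, sizeof m2);   /* ... i.e. the bytes 47 0F, for both messages    */
    return 0;
}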

The xv6-rev7 (JOS) GDT

It's very difficult for me to understand the GDT (Global Descriptor Table) in JOS (xv6-rev7).
For example
.word (((lim) >> 12) & 0xffff), ((base) & 0xffff);
Why shift right 12? Why AND 0xffff?
What do these numbers mean?
What does the formula mean?
Can anyone give me some resources or tutorials or hints?
Here are the two snippets of code related to my problem.
1st Part
0654 #define SEG_NULLASM \
0655 .word 0, 0; \
0656 .byte 0, 0, 0, 0
0657
0658 // The 0xC0 means the limit is in 4096-byte units
0659 // and (for executable segments) 32-bit mode.
0660 #define SEG_ASM(type,base,lim) \
0661 .word (((lim) >> 12) & 0xffff), ((base) & 0xffff); \
0662 .byte (((base) >> 16) & 0xff), (0x90 | (type)), \
0663 (0xC0 | (((lim) >> 28) & 0xf)), (((base) >> 24) & 0xff)
0664
0665 #define STA_X 0x8 // Executable segment
0666 #define STA_E 0x4 // Expand down (non-executable segments)
0667 #define STA_C 0x4 // Conforming code segment (executable only)
0668 #define STA_W 0x2 // Writeable (non-executable segments)
0669 #define STA_R 0x2 // Readable (executable segments)
0670 #define STA_A 0x1 // Accessed
2nd Part
8480 # Bootstrap GDT
8481 .p2align 2 # force 4 byte alignment
8482 gdt:
8483 SEG_NULLASM # null seg
8484 SEG_ASM(STA_X|STA_R, 0x0, 0xffffffff) # code seg
8485 SEG_ASM(STA_W, 0x0, 0xffffffff) # data seg
8486
8487 gdtdesc:
8488 .word (gdtdesc - gdt - 1) # sizeof(gdt) - 1
8489 .long gdt # address gdt
The complete part: http://pdos.csail.mit.edu/6.828/2012/xv6/xv6-rev7.pdf
Well, it isn't a real formula at all. The limit is shifted twelve bits to the right, which is equivalent to dividing by 2^12 = 4096, and that is the granularity of the GDT entry's limit when the G bit is set (in your code the G bit is encoded in the constants used in the macro). Whenever an address is accessed through the corresponding selector, only its upper 20 bits are compared with the limit, and if they are greater, a #GP is thrown. Also note that standard pages are 4 KB in size, so any address that exceeds the limit by less than 4 kilobytes still falls within the page covered by the selector's limit. The ANDing is there partly to suppress compiler warnings about overflow, since 0xFFFF is the maximal value for a single word (16 bits).
The same applies to the other shifts and ANDs; in the other expressions the numbers are shifted further to extract the other parts of the descriptor.
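To see concretely what the macro emits, here is a small C restatement of SEG_ASM used only for printing the descriptor fields (the constants mirror the listing above; this is an illustration, not xv6 code). For SEG_ASM(STA_X|STA_R, 0x0, 0xffffffff) it prints the flat code descriptor ffff, 0000, 00, 9a, cf, 00.

#include <stdint.h>
#include <stdio.h>

#define STA_X 0x8   /* Executable segment */
#define STA_R 0x2   /* Readable (executable segments) */

/* Prints the six fields that the SEG_ASM macro assembles, in the same order:
 * .word limit[15:0], base[15:0]; .byte base[23:16], access, flags|limit[19:16], base[31:24] */
static void seg_asm(uint8_t type, uint32_t base, uint32_t lim)
{
    uint16_t limit_lo = (uint16_t)((lim >> 12) & 0xffff);      /* limit bits 0-15, in 4 KB units */
    uint16_t base_lo  = (uint16_t)(base & 0xffff);             /* base bits 0-15   */
    uint8_t  base_mid = (uint8_t)((base >> 16) & 0xff);        /* base bits 16-23  */
    uint8_t  access   = (uint8_t)(0x90 | type);                /* present + type   */
    uint8_t  gran     = (uint8_t)(0xC0 | ((lim >> 28) & 0xf)); /* G=1, 32-bit, limit bits 16-19 */
    uint8_t  base_hi  = (uint8_t)((base >> 24) & 0xff);        /* base bits 24-31  */

    printf(".word 0x%04x, 0x%04x; .byte 0x%02x, 0x%02x, 0x%02x, 0x%02x\n",
           limit_lo, base_lo, base_mid, access, gran, base_hi);
}

int main(void)
{
    seg_asm(STA_X | STA_R, 0x0, 0xffffffff);  /* prints: .word 0xffff, 0x0000; .byte 0x00, 0x9a, 0xcf, 0x00 */
    return 0;
}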
The structure of the GDT descriptor is shown above.
(((lim) >> 12) & 0xffff) corresponds to Segment Limit (bits 0-15). The right shift means the minimal unit is 2^12 bytes (the granularity of the GDT entry's limit); & 0xffff keeps the lower 16 bits of (lim) >> 12, which go into the lowest 16 bits of the GDT descriptor.
The rest of the 'formula' is the same.
Here is some good material for learning about the GDT descriptor.