I am working on a sample test application with Spring Cloud Starter Sleuth 2.0.1.RELEASE.
I have the following property configured: spring.sleuth.traceId128=true.
My expectation is that this ensures that 128-bit trace ids are always generated, and that if a valid hex value shorter than 128 bits comes in, it is replaced by a new 128-bit value.
Here are some use cases I executed:
No x-b3-trace-id header: a new 128-bit hex id is generated.
A valid hex value shorter than 64 bits is padded to 64 bits, so '9bd0082a7ecfa' becomes '0009bd0082a7ecfa'.
A valid 64-bit hex value is never replaced, so 'bc50f6e7eb5fa554' remains 'bc50f6e7eb5fa554'.
It seems that despite setting this property, anything <= 64 bits is only ever adjusted up to 64 bits.
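Just to illustrate the padding I am seeing (plain Python here rather than anything Sleuth-specific):

trace_id = '9bd0082a7ecfa'
print(trace_id.zfill(16))   # 0009bd0082a7ecfa -- left-padded to 16 hex chars (64 bits), never to 32 (128 bits)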
Is my understanding of this property incorrect? How do I ensure that only 128-bit values are allowed?
Any help would be appreciated.
Thanks.
I was doing some research over the weekend on some blockchain dev in the Solana blockchain and came across a construct called Compact-u16. The definition of this in the documentation says the following: "A compact-u16 is a multi-byte encoding of 16 bits. The first byte contains the lower 7 bits of the value in its lower 7 bits. If the value is above 0x7f, the high bit is set and the next 7 bits of the value are placed into the lower 7 bits of a second byte. If the value is above 0x3fff, the high bit is set and the remaining 2 bits of the value are placed into the lower 2 bits of a third byte.".
I have been coding for 30+ years. Maybe I'm just old school on this, but why is there a construct to store 16 bits of data in 3 bytes? This is just vastly inefficient from my standpoint. Is there a reason for this? On further research, I found a doc related to assembly instruction pointers, which referenced 7 instruction pointers that are useful for caching values when context switching in and out of the processor stack. But this construct is used for a web app platform. Like, literally, there is no reason that I have been able to find that justifies using 3 bytes to store 16 bits of data. If the developers wanted to use an elegant bit-mapping solution to compress space, why not just use a semaphore? Why create a brand new construct that increases the storage requirements for the data by 33%?
What am I missing?
I had some similar confusion when reading the compact-u16 description. Based on the code for parsing them in the solana Python module, I believe they're doing something conceptually similar to UTF-8 and storing the number in 1-3 bytes depending on its size.
Basically instead of each byte having 8 bits of a number, it has 7 bits of the number and a flag (the most significant bit) that indicates whether the number continues in the next byte. For the largest numbers they need an extra byte, but for numbers less than 128 they need only one byte. Since Solana seems to use these for storing the length of arrays, if it's common that the length of the arrays is less than 128 then they will end up with fewer total bytes to transfer across all transactions.
Some examples I worked out for myself:
hex | compact-u16
--------+------------
0x0000 | [0x00]
0x0001 | [0x01]
0x007f | [0x7f]
0x0080 | [0x80 0x01]
0x3fff | [0xff 0x7f]
0x4000 | [0x80 0x80 0x01]
0xc000 | [0x80 0x80 0x03]
0xffff | [0xff 0xff 0x03]
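In case it helps, here is a short Python sketch of the same varint-style scheme, based purely on the description above (the function names are mine, not taken from the Solana code):

def encode_compact_u16(value):
    # Encode a value in [0, 0xFFFF] as 1-3 bytes, 7 bits per byte,
    # with the high bit of each byte flagging "more bytes follow".
    if not 0 <= value <= 0xFFFF:
        raise ValueError("compact-u16 only covers 0..65535")
    out = bytearray()
    while True:
        low = value & 0x7F
        value >>= 7
        if value:
            out.append(low | 0x80)   # continuation bit set
        else:
            out.append(low)          # last byte: continuation bit clear
            return bytes(out)

def decode_compact_u16(data):
    # Return (value, number_of_bytes_consumed).
    value = 0
    for i, byte in enumerate(data):
        value |= (byte & 0x7F) << (7 * i)
        if not byte & 0x80:
            return value, i + 1
    raise ValueError("truncated compact-u16")

# Reproduces the table above:
assert encode_compact_u16(0x0080) == bytes([0x80, 0x01])
assert encode_compact_u16(0xffff) == bytes([0xff, 0xff, 0x03])
assert decode_compact_u16(bytes([0x80, 0x80, 0x03])) == (0xc000, 3)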
I am looking at using the BSD checksum described in the Wikipedia article on the BSD checksum. Does anyone know if it can be used for basic error correction?
Consider an 8-bit or 16-bit left-rotating checksum where all the message bytes are supposed to be zero, but one of them has a single-bit error. The checksum will detect the error, but you'd get the same checksum for message[0] = 0x01, or message[1] = 0x02, ..., or message[7] = 0x80. The checksum can't determine which of these 8 (or more) possible error cases occurred, so it can't be used for error correction.
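A quick Python sketch of that ambiguity, using the rotate-then-add variant of an 8-bit rotating checksum described above (the helper name is mine):

def rotating_checksum_8(message):
    # 8-bit left-rotating checksum: rotate the running sum, then add the next byte.
    s = 0
    for b in message:
        s = ((s << 1) | (s >> 7)) & 0xFF   # rotate left by one bit
        s = (s + b) & 0xFF
    return s

good = bytes(8)                            # all-zero message, checksum 0x00
for i in range(8):
    bad = bytearray(good)
    bad[i] = 1 << i                        # eight different single-bit errors...
    assert rotating_checksum_8(bytes(bad)) == 0x80   # ...all produce the same checksum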
You'd need at least something like a Hamming code, BCH code, or RS code to be able to correct one or more bit errors. Since you have CRC as a tag: a single-bit-correcting binary BCH code is essentially the same as a CRC using a "primitive" polynomial that is the basis for a finite field, provided the message length (including the CRC) is shorter than the number of possible values in the finite field. For example, a 15-bit message would have 11 data bits and 4 "parity" bits, based on the finite field GF(2^4) (GF(16)).
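To show the difference, here is a minimal Hamming(15,11) sketch in Python (plain Hamming form rather than the BCH/CRC formulation mentioned above, and the function names are mine), where the syndrome directly gives the position of the flipped bit:

def hamming15_syndrome(codeword):
    # XOR of the (1-based) positions of all set bits in a 15-bit codeword.
    # Zero means "no single-bit error"; otherwise it is the position of the flipped bit.
    s = 0
    for pos in range(1, 16):
        if codeword & (1 << (pos - 1)):
            s ^= pos
    return s

def hamming15_encode(data):
    # Put the 11 data bits in the non-power-of-two positions, then set the parity
    # bits (positions 1, 2, 4, 8) so that the syndrome of the result is zero.
    codeword = 0
    data_positions = [p for p in range(1, 16) if p & (p - 1)]
    for i, pos in enumerate(data_positions):
        if data & (1 << i):
            codeword |= 1 << (pos - 1)
    syndrome = hamming15_syndrome(codeword)
    for p in (1, 2, 4, 8):
        if syndrome & p:
            codeword |= 1 << (p - 1)
    return codeword

def hamming15_correct(codeword):
    s = hamming15_syndrome(codeword)
    return codeword ^ (1 << (s - 1)) if s else codeword

cw = hamming15_encode(0b10110011101)
flipped = cw ^ (1 << 6)                  # flip the bit at position 7
assert hamming15_correct(flipped) == cw  # the single-bit error is located and fixed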
I am a novice in Xilinx HLS. I am following the tutorial ug871-vivado-high-level-synthesis-tutorial.pdf (page 77).
The code is
#define N 32

// din_t and dout_t are typedefs that come from the tutorial's header
void array_io (dout_t d_o[N], din_t d_i[N])
{
    // ...do something
}
After synthesis, I got a report like this:
I am confused about how the width of the address port has been automatically sized to match the number of addresses that must be accessed (5 bits for 32 addresses).
Please help.
From UG871, it seems that the size of the array goes from 0 to 16 samples, hence you need 32 addresses to access all values (see Figure 69). I guess that the number N is limited somewhere to be less than 32 (or exactly 16). This means that Vivado knows this limitation and generates only as many address bits as are needed. Most synthesis tools check the constraints on size and optimize unnecessary code away.
When you synthesize a function, you also create some registers to store the variables. This means that the address you put on the input port is the address of the data you are currently writing to d_o or reading from d_i.
In your case, where N=32, you have 32 different variables (in both the input and the output arrays). To address 32 different variables you need 32 different bit combinations, so you can point to a specific one without ambiguity. With 5 bits you have 2^5 = 32 different combinations: the minimum number of bits needed to address all your data.
For instance, if you have 32 values, 5 address bits are still enough whatever the element type is: the number of address bits is INDEPENDENT of the size of the data (i.e. the elements can be int, float, char, short, double, arbitrary precision, and so on).
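As a sanity check, the relation is just ceil(log2(N)); a tiny Python sketch of the arithmetic (not HLS code, and the helper name is mine):

import math

def address_bits(depth):
    # Minimum address width for a memory with `depth` locations.
    return max(1, math.ceil(math.log2(depth)))

print(address_bits(32))   # 5 -> matches the 5-bit address port for N = 32
print(address_bits(16))   # 4
print(address_bits(33))   # 6 -> one more location already needs another address bit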
I am trying to decipher a trace of USB I/O traffic produced by usbmon and am having some issues getting my head around the endianness. For the sake of example, here are two lines from the trace I am working with:
ffff8800650e7000 433121059 S Ci:2:000:0 s 80 06 0100 0000 0040 64 <
ffff8800650e7000 433121661 C Ci:2:000:0 0 18 = 12010002 00000040 da0b8781 00010102 0301
I initially had no suspicion whatsoever of anything other than big-endianness in the trace, but then I saw da0b8781 in the second line, which corresponds to the identity of the USB device I am tracing: it has a vendor ID of 0x0bda and a product ID of 0x8187 (note the reversal of byte order in the trace).
So at this point I thought that maybe within a given field of a usbmon trace, the bytes were always in reverse byte order and should be interpreted as such. But to the contrary, let's examine a small part near the end of the first trace line, ... 0040 64
0040 is a hex field representing the maximum accepted response size. 64 is a decimal field that should represent exactly the same thing. 0x0040 = 64 decimal, without switching the byte order to 0x4000, which would then != 64 decimal. So it's at this point I started to get a bit uncertain about the byte-order of the different parts of the usbmon trace.
Next I thought, maybe it's just the data portions of the usbmon trace that are in reverse byte order. So I thought perhaps I should really be reading
...12010002 00000040 da0b8781 00010102 0301
as
1030 20101000 1878b0ad 04000000 20001021...
Nope, that doesn't seem to be right either. The USB Specification states that the vendor Id (0x0bda in my case) should be at byte offset 8 for this particular string. If we leave the above string in its original order, then the vendor Id does start at byte offset 8 (12010002 00000040 consumes the first 8 bytes), but if we reverse it as I have above, then it starts at byte offset 6 (1030 20101000 only consumes the first 6 bytes).
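For what it's worth, here is a quick Python check of that offset, assuming the data bytes are simply listed in the order they travel on the bus and that the 16-bit descriptor fields are little-endian as the USB spec defines them:

import struct

# the 18 data bytes from the second trace line, in the order usbmon printed them
data = bytes.fromhex("12010002" "00000040" "da0b8781" "00010102" "0301")

id_vendor, id_product = struct.unpack_from("<HH", data, 8)   # 16-bit fields at byte offset 8
print(hex(id_vendor), hex(id_product))                       # 0xbda 0x8187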
So my best guess now is that usbmon displays everything big-endian, except that it switches to reverse byte order within each 4-byte word, but for the data only. Can anyone offer some clarification on whether this is correct, or whether there may be something else I'm missing?
This may be a bit late for you, but I've tried usbmon (and found it OK).
You may want to take a look at evtest:
http://www.freedesktop.org/wiki/Evtest
I have a packet that I need to send to a client with an ID of 255. I've had no problems sending packets with IDs of 0, 1, and 2, but this ID has to be 255. For some reason, after the translation has happened, both my server and the client get "63" for any ID greater than 127.
This is the code I am using:
Console.WriteLine(Asc(System.Text.ASCIIEncoding.ASCII.GetString(System.Text.ASCIIEncoding.ASCII.GetBytes(Chr(255)))))
Now, this is an overly complicated version of what the server does. You may consider it a bit unnecessary, but the inverse functions performed are for demonstration purposes only.
The 255 in the code is the packet ID I need sent in the format above. As I said, anything larger than 127 comes back as "63". Very annoying.
Any help is appreciated.
Taken from the ASCIIEncoding documentation:
ASCIIEncoding corresponds to the Windows code page 20127. Because ASCII is a 7-bit encoding, ASCII characters are limited to the lowest 128 Unicode characters, from U+0000 to U+007F.
So you can't use that technique, because 255 is not a valid ASCII character.
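The same clamping is easy to see outside .NET as well; a small Python illustration of why the value comes back as 63 (the character '?'):

# Code point 255 is outside 7-bit ASCII, so an ASCII encoder has to substitute something.
# .NET's ASCIIEncoding falls back to '?', and so does Python with errors="replace":
encoded = chr(255).encode("ascii", errors="replace")
print(encoded)      # b'?'
print(encoded[0])   # 63 -- the value the server and client are seeing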