usbmon, the USB spec and endianness/byte-order

I am trying to decipher a trace of USB I/O traffic produced by usbmon and am having some issues getting my head around the endianness. For the sake of example, here are two lines from the trace I am working with:
ffff8800650e7000 433121059 S Ci:2:000:0 s 80 06 0100 0000 0040 64 <
ffff8800650e7000 433121661 C Ci:2:000:0 0 18 = 12010002 00000040 da0b8781 00010102 0301
I initially had no suspicion whatsoever of anything other than big-endianness in the trace, but then I saw da0b8781 in the second line. That corresponds to the identity of the USB device I am tracing, which has a vendor ID of 0x0bda and a product ID of 0x8187 (note the reversal of byte order in the trace).
So at this point I thought that maybe, within a given field of a usbmon trace, the bytes were always in reverse byte order and should be interpreted as such. But to the contrary, let's examine a small part near the end of the first trace line: ... 0040 64
0040 is a hex field representing the maximum accepted response size. 64 is a decimal field that should represent exactly the same thing. 0x0040 = 64 decimal without any byte swapping; swapped to 0x4000 it would no longer equal 64 decimal. So it's at this point that I started to get a bit uncertain about the byte order of the different parts of the usbmon trace.
Next I thought that maybe it's just the data portions of the usbmon trace that are in reverse byte order, so perhaps I should really be reading
...12010002 00000040 da0b8781 00010102 0301
as
1030 20101000 1878b0ad 04000000 20001021...
Nope, that doesn't seem to be right either. The USB Specification states that the vendor Id (0x0bda in my case) should be at byte offset 8 for this particular string. If we leave the above string in its original order, then the vendor Id does start at byte offset 8 (12010002 00000040 consumes the first 8 bytes), but if we reverse it as I have above, then it starts at byte offset 6 (1030 20101000 only consumes the first 6 bytes).
So my best guess now is that usbmon displays everything big-endian, except that it switches to reverse byte order within each 4-byte word, but for data only. Can anyone offer some clarification on whether this is correct, or whether there may be something else I'm missing?
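For concreteness, here is the quick Python sketch I've been using to poke at the data portion (my own scaffolding, nothing from usbmon itself): it strips the spaces, treats the dump as a byte stream in the order printed, and decodes the two descriptor fields at their spec-defined offsets both ways so I can compare.

import struct

# Data portion of the second trace line, exactly as printed by usbmon.
data_hex = "12010002 00000040 da0b8781 00010102 0301"
data = bytes.fromhex(data_hex.replace(" ", ""))

# Device-descriptor layout from the USB spec: idVendor at offset 8, idProduct at offset 10.
for name, offset in (("idVendor", 8), ("idProduct", 10)):
    le = struct.unpack_from("<H", data, offset)[0]  # read as little-endian
    be = struct.unpack_from(">H", data, offset)[0]  # read as big-endian
    print(f"{name}: little-endian 0x{le:04x}, big-endian 0x{be:04x}")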

This may be a bit late for you, but I've tried usbmon (and found it OK).
You may want to take a look at evtest:
http://www.freedesktop.org/wiki/Evtest

Related

Compact-u16 - what is the purpose of this?

I was doing some research over the weekend on some blockchain development on Solana and came across a construct called Compact-u16. The definition of this in the documentation says the following: "A compact-u16 is a multi-byte encoding of 16 bits. The first byte contains the lower 7 bits of the value in its lower 7 bits. If the value is above 0x7f, the high bit is set and the next 7 bits of the value are placed into the lower 7 bits of a second byte. If the value is above 0x3fff, the high bit is set and the remaining 2 bits of the value are placed into the lower 2 bits of a third byte.".
I have been coding for 30+ years. Maybe I'm just old school on this, but why is there a construct to store 16 bits of data in 3 bytes? This is just vastly inefficient from my standpoint. Is there a reason for this? On further research, I found a doc related to assembly instruction pointers, which referenced 7 instruction pointers that are useful for caching values when context switching in and out of the processor stack. But this construct is used for a web app platform. Like, literally, there is no reason that I have been able to find that justifies using 3 bytes to store 16 bits of data. If the developers wanted to use an elegant bit mapping solution to compress space, why not just use a semaphore? Why create a brand new construct that increases the storage requirements for the data by 33%?
What am I missing?
I had some similar confusion when reading the compact-u16 description. Based on the code for parsing them in the solana Python module, I believe they're doing something conceptually similar to UTF-8 and storing the number in 1-3 bytes depending on its size.
Basically, instead of each byte holding 8 bits of the number, it holds 7 bits of the number plus a flag (the most significant bit) that indicates whether the number continues in the next byte. The largest numbers need an extra byte, but numbers less than 128 need only one byte. Since Solana seems to use these for storing the lengths of arrays, if array lengths are commonly less than 128 then they end up with fewer total bytes to transfer across all transactions.
Some examples I worked out for myself:
hex | compact-u16
--------+------------
0x0000 | [0x00]
0x0001 | [0x01]
0x007f | [0x7f]
0x0080 | [0x80 0x01]
0x3fff | [0xff 0x7f]
0x4000 | [0x80 0x80 0x01]
0xc000 | [0x80 0x80 0x03]
0xffff | [0xff 0xff 0x03]
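For what it's worth, here is a minimal Python sketch of the scheme as I understand it (my own reconstruction from the description above, not the actual Solana code); it reproduces the table:

def encode_compact_u16(value: int) -> bytes:
    """Encode a value in [0, 0xffff] into 1-3 bytes, 7 bits per byte."""
    if not 0 <= value <= 0xFFFF:
        raise ValueError("compact-u16 only covers 0..0xffff")
    out = bytearray()
    while True:
        byte = value & 0x7F          # low 7 bits of what is left
        value >>= 7
        if value:
            out.append(byte | 0x80)  # set the continuation flag: more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def decode_compact_u16(data: bytes):
    """Return (value, number of bytes consumed) from the start of data."""
    value = 0
    for i, byte in enumerate(data):
        value |= (byte & 0x7F) << (7 * i)
        if not byte & 0x80:          # continuation flag clear: this was the last byte
            return value, i + 1
    raise ValueError("truncated compact-u16")

for n in (0x0000, 0x0001, 0x007F, 0x0080, 0x3FFF, 0x4000, 0xC000, 0xFFFF):
    print(f"0x{n:04x} | {encode_compact_u16(n).hex(' ')}")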

Trying to understand nbits value from stratum protocol

I'm looking at the stratum protocol and I'm having a problem with the nbits value of the mining.notify method. I have trouble calculating it; I assume it represents the currency's difficulty.
I pulled a notify from a Dogecoin pool and it returned 1b3cc366; at the time the difficulty was 1078.52975077.
I'm assuming that 1b3cc366 should give me 1078.52975077 when converted, but I can't seem to do the conversion right.
I've looked here, here and also tried the .NET function BitConverter.Int64BitsToDouble.
Can someone help me understand what the nbits value signifies?
You are right: nbits encodes the current network difficulty.
The difficulty encoding is thoroughly described here.
A hexadecimal representation like 0x1b3cc366 consists of two parts:
0x1b -- number of bytes in a target
0x3cc366 -- target prefix
This means that a valid hash must be less than 0x3cc366000000000000000000000000000000000000000000000000 (the target is exactly 0x1b = 27 bytes long).
The floating-point representation of difficulty shows how much harder the current target is than the one used in the genesis block.
Satoshi decided to use 0x1d00ffff as the difficulty for the genesis block, so the target was
0x00ffff0000000000000000000000000000000000000000000000000000.
And 1078.52975077 is how many times greater the initial target is than the current one:
$ echo 'ibase=16;FFFF0000000000000000000000000000000000000000000000000000 / 3CC366000000000000000000000000000000000000000000000000' | bc -l
1078.52975077482646448605
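The same computation in Python, for reference. This is a minimal sketch of the compact "bits" format as described above (high byte = length of the target in bytes, low three bytes = target prefix); the real Bitcoin/Dogecoin code has a few corner cases this ignores.

def bits_to_target(nbits: int) -> int:
    """Expand the compact nbits form into the full target value."""
    exponent = nbits >> 24        # number of bytes in the target
    mantissa = nbits & 0xFFFFFF   # target prefix
    return mantissa * 256 ** (exponent - 3)

def difficulty(nbits: int, genesis_bits: int = 0x1D00FFFF) -> float:
    """Difficulty = genesis-block target / current target."""
    return bits_to_target(genesis_bits) / bits_to_target(nbits)

print(hex(bits_to_target(0x1B3CC366)))  # 0x3cc366 followed by 24 zero bytes
print(difficulty(0x1B3CC366))           # ~1078.5297507748...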

What does TDO on 4th bit in ICSP SendCommand header mean? (PIC32MX, ICSP 2-wire 4-phase)

Right now I'm trying to implement the flash programming specification for PIC32MX. I'm working with a PIC32MX512L and a PIC32MX512H. The PIC32MX512L must eventually transfer a program to the PIC32MX512H over its two wires, PGEC2 and PGED2.
Right now I'm trying to execute the check device operation. As specified, I'm entering programming mode by MCLR-juggling and executing SetMode (six bits, 0b011111) on TMS while TDI stays low. The TAP controller replies with zeroes (every TDO is low).
After that I must execute SendCommand( MTAP_SW_MTAP ) to select the MTAP controller. The sequence to be shifted is
(header) 01 01 00 00_ | (data) 00 00 10 00 00 | (most sign. bit) 01 | (footer) 01 00
The first bit of each pair is TDI and the second is TMS. I write TDI on the first clock, TMS on the second clock, and read TDO during the third and fourth clocks. This sequence is fed from left to right. Shifted bits hold their value through each falling clock edge.
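For reference, this is roughly how I clock one TDI/TMS pair in that 4-phase scheme (Python-flavoured pseudocode; set_pged, release_pged, read_pged and pulse_pgec are hypothetical stand-ins for my actual bit-banging routines):

# Hypothetical GPIO helpers; the real ones toggle PGEC2/PGED2 on the programmer.
def set_pged(level: int): ...
def release_pged(): ...
def read_pged() -> int: ...
def pulse_pgec(): ...

def shift_pair(tdi: int, tms: int) -> int:
    """One 2-wire 4-phase unit: drive TDI, drive TMS, then sample TDO."""
    set_pged(tdi)       # phase 1: present TDI on PGED and clock it
    pulse_pgec()
    set_pged(tms)       # phase 2: present TMS on PGED and clock it
    pulse_pgec()
    release_pged()      # phases 3 and 4: the target drives PGED
    pulse_pgec()
    tdo = read_pged()   # TDO is read here, as described above
    pulse_pgec()
    return tdo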
The issue
While shifting the first 4 pairs, the TDO line goes high on the fourth pair (on the third clock) and low again at the end of that 4-phase part (on the fourth clock). I've marked this spot with an underscore in the sequence above. After that the controller ignores any further commands. On the next SendCommand( MTAP_COMMAND ), TDO stays low, and later on for XferData( MCHP_STATUS ) TDO still stays low, no matter how often I send the command.
I've taken a small screenshot from my oscilloscope: the blue line is the clock, the green one is the data. The hop on the right is what I mean.
The question
Does anyone know what the TAP controller is trying to tell me with that TDO high on the fourth phase?
Thank you in advance!
Well, I've fixed it. In general, the last TDO of the prologue is the first (least significant) bit of the output. For SendCommand it has no meaning, but for XferData and XferFastData it is important.
For XferFastData it is the PrAcc bit according to the spec. If the bit is zero, you should repeat the whole operation. But beware: the MCU implementation doesn't follow the spec. If you actually restart the whole FastData operation whenever PrAcc is zero, it won't work. Instead, just ignore the bit and keep writing. I found this out eventually by trial and error and by comparing my XferFastData implementation against pic32prog.

Reading Packet Id from Byte

I have a packet that I need to send to a client with an ID of 255. I've had no problems sending packets with IDs of 0, 1, and 2, but this ID has to be 255. For some reason, after the translation has happened, both my server and the client get "63" for any ID greater than 127.
This is the code I am using:
Console.WriteLine(Asc(System.Text.ASCIIEncoding.ASCII.GetString(System.Text.ASCIIEncoding.ASCII.GetBytes(Chr("255")))))
Now, this is an overly complicated version of what the server does. You may consider it a bit unnecessary, but the inverse functions are performed for illustration only.
Where it says "255" is the packet ID I need sent in the format above. As I said, anything larger than 127 comes back as "63". Very annoying.
Any help is appreciated.
Taken from here:
ASCIIEncoding corresponds to the Windows code page 20127. Because ASCII is a 7-bit encoding, ASCII characters are limited to the lowest 128 Unicode characters, from U+0000 to U+007F.
So you can't use that technique, because 255 is not a valid ASCII character.
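To illustrate the failure mode (sketched in Python here rather than VB.NET, but System.Text.ASCIIEncoding behaves the same way): any byte above 0x7F is replaced by the substitution character '?' (0x3F = 63 decimal) when it is pushed through a 7-bit ASCII codec, so the packet ID needs to be written as a raw byte instead of being round-tripped through text.

packet_id = 255

# Round-tripping through a 7-bit ASCII codec mangles anything above 0x7f:
mangled = chr(packet_id).encode("ascii", errors="replace")
print(mangled, mangled[0])   # b'?' 63  -- the mysterious "63"

# Writing the ID as a raw byte involves no text encoding at all:
packet = bytes([packet_id])
print(packet, packet[0])     # b'\xff' 255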

What happens when you send an 8-bit number to an output which is 4-bit? (C language)

I'm studying in high school, and we have an electronics project.
We have a 4-bit output from our computer at output address 37Ah,
and my teacher did this:
outportb(0x37A,0x80);
so what will appear in the output? 0h or 8h?
Unless this is a 4-bit CPU from the 70s, your output port will be 8 bits, but the connected hardware might only use 4 of them. In that case it is common (but not required) to use the lower 4 bits, so you would have 0x0 as the value. But that makes using 0x80 a smokescreen; it would be the same as 0x00 or 0xF0. So from that alone I would guess that the upper 4 bits are used here, and the value sent is 0x8.
But a twisted hardware engineer could have used the middle 4 bits.
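To make that concrete, here is a tiny Python check of both wiring assumptions (which four data lines the external hardware actually samples is the real question):

value = 0x80  # the byte written by outportb(0x37A, 0x80)

low_nibble = value & 0x0F          # if the hardware is wired to the lower 4 bits
high_nibble = (value >> 4) & 0x0F  # if the hardware is wired to the upper 4 bits

print(f"lower 4 bits: 0x{low_nibble:X}")   # 0x0
print(f"upper 4 bits: 0x{high_nibble:X}")  # 0x8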
You need to explain your problem a little better. What microprocessor do you use, etc.? Is it a 4-bit output port you have?
But 0x80 is equal to:
0b10000000, and if you use the lower 4 bits (the xxxx in 0b1000xxxx), they will be zero (not turned on). This will happen if 0x37A is an 8-bit port.
Otherwise, explain your problem better :)
Can't you just try it and see what happens? Or is it only theoretical for now?
EDIT:
I see it is a printer port. Check http://www.tinet.cat/~sag/gifs/ParallelPort.gif -- if you use pins 2, 3, 4 and 5 then the upper 4 bits really don't matter :) as said in my comment.