Why is the length of the Ethernet Type field 2 bytes?

There are only a few Ethernet types, yet the frame structure reserves 2 bytes for the Ethernet Type field. Isn't that a waste of space through under-utilization? Wouldn't 1 byte be enough to fit all the possible Ethernet Types, or is there some other reason?

One possible reason is that those 2 bytes double as the payload length when LLC encapsulation is used; if the field were only 1 byte, the largest expressible length would be 255, which is too small.
Also, from the perspective of byte alignment, all fields (except the payload) are at least 16-bit aligned.

Look at http://en.wikipedia.org/wiki/Ethertype. It lists the protocol values defined for that field.
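To make the dual use concrete, here is a tiny C sketch (the function name is mine) of the standard disambiguation rule: values up to 1500 (0x05DC) are an 802.3 payload length, while values of 1536 (0x0600) and above are an EtherType.

#include <stdint.h>

/* Classify the 2-byte field that follows the source MAC address. */
const char *classify_type_field(uint16_t v)
{
    if (v <= 0x05DC)            /* 0-1500: 802.3 payload length */
        return "802.3 length";
    if (v >= 0x0600)            /* 1536 and up: EtherType */
        return "EtherType";
    return "undefined";         /* 1501-1535: not assigned */
}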

Compact-u16 - what is the purpose of this?

I was doing some research over the weekend on some blockchain dev in the Solana blockchain and came across a construct called Compact-u16. The definition of this in the documentation says the following: "A compact-u16 is a multi-byte encoding of 16 bits. The first byte contains the lower 7 bits of the value in its lower 7 bits. If the value is above 0x7f, the high bit is set and the next 7 bits of the value are placed into the lower 7 bits of a second byte. If the value is above 0x3fff, the high bit is set and the remaining 2 bits of the value are placed into the lower 2 bits of a third byte.".
I have been coding for 30+ years. Maybe I'm just old school on this, but why is there a construct to store 16 bits of data in 3 bytes? That is just vastly inefficient from my standpoint. Is there a reason for this? On further research, I found a doc related to assembly instruction pointers, which referenced 7 instruction pointers that are useful for caching values when context switching in and out of the processor stack. But this construct is used for a web app platform. Literally, I have found no reason that justifies using 3 bytes to store 16 bits of data. If the developers wanted an elegant bit-mapping solution to compress space, why not just use a semaphore? Why create a brand-new construct that increases the storage requirements for the data by 33%?
What am I missing?
I had some similar confusion when reading the compact-u16 description. Based on the code for parsing them in the solana python module, I believe they're doing something conceptually similar to UTF-8 and storing the number in 1-3 bytes depending on its size.
Basically, instead of each byte holding 8 bits of the number, it holds 7 bits of the number plus a flag (the most significant bit) that indicates whether the number continues in the next byte. The largest numbers need an extra byte, but numbers less than 128 need only one byte. Since Solana seems to use these for storing the lengths of arrays, if array lengths are commonly less than 128, they will end up with fewer total bytes to transfer across all transactions. (A sketch of the encoder follows the examples below.)
Some examples I worked out for myself:
hex | compact-u16
--------+------------
0x0000 | [0x00]
0x0001 | [0x01]
0x007f | [0x7f]
0x0080 | [0x80 0x01]
0x3fff | [0xff 0x7f]
0x4000 | [0x80 0x80 0x01]
0xc000 | [0x80 0x80 0x03]
0xffff | [0xff 0xff 0x03]
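Here is a minimal C sketch of the encoder described above (the function and variable names are mine); it reproduces the byte sequences in the table:

#include <stdint.h>
#include <stdio.h>

/* Encode per the description above: 7 data bits per byte, with the
   most significant bit set when another byte follows. Writes 1-3
   bytes into out and returns how many were written. */
static int encode_compact_u16(uint16_t value, uint8_t out[3])
{
    int n = 0;
    for (;;) {
        uint8_t byte = value & 0x7f;   /* low 7 bits of what remains */
        value >>= 7;
        if (value != 0)
            byte |= 0x80;              /* continuation flag */
        out[n++] = byte;
        if (value == 0)
            return n;
    }
}

int main(void)
{
    const uint16_t samples[] = { 0x0000, 0x0080, 0x3fff, 0x4000, 0xffff };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        uint8_t buf[3];
        int n = encode_compact_u16(samples[i], buf);
        printf("0x%04x ->", samples[i]);
        for (int j = 0; j < n; j++)
            printf(" 0x%02x", buf[j]);
        printf("\n");
    }
    return 0;
}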

How to drive a DotStar strip from C on a Raspberry Pi

I am trying to figure out how to drive a DotStar strip by calling write(handle, datap, len) on an SPI handle, from C, on a Raspberry Pi. I'm not quite clear on how to lay out the data.
Looking at https://cdn-shop.adafruit.com/datasheets/APA102.pdf#page=3 makes me think you start with 4 bytes of 0, then a string of coded LED values (4 bytes per LED), and then 4 bytes of 1's. But that cannot be right; the final 4 bytes of 1's would be indistinguishable from a request to set an LED to full-brightness white. So how could that terminate the data?
Insight welcome. Yes, I know there's a Python library out there for this, but I'm coding in C++ or C.
After much digging, I found the answer here:
https://cpldcpu.wordpress.com/2014/11/30/understanding-the-apa102-superled/
The end frame is more complex than the spec suggests: the spec is only correct if your string has 32 LEDs, and you must always specify values for all the LEDs in your string.
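Based on that write-up, here is a hedged C sketch of laying out the buffer (dotstar_write and its parameters are my own naming, and fd is assumed to be an already-configured SPI handle, e.g. an opened /dev/spidev0.0): a 4-byte zero start frame, one 4-byte frame per LED, and enough trailing 1-bits to supply the extra numleds/2 clock edges the article calls for.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Sketch: build an APA102/DotStar buffer and push it out via write().
   Each LED frame is 0xE0 | 5-bit global brightness, then blue, green,
   red. The end frame is ceil(numleds/16) bytes of 0xFF, which yields
   at least numleds/2 extra clock edges. */
ssize_t dotstar_write(int fd, const uint8_t (*bgr)[3], size_t numleds,
                      uint8_t brightness /* 0-31 */)
{
    size_t endbytes = (numleds + 15) / 16;
    size_t len = 4 + 4 * numleds + endbytes;
    uint8_t *buf = malloc(len);
    if (buf == NULL)
        return -1;

    memset(buf, 0x00, 4);                          /* start frame */
    for (size_t i = 0; i < numleds; i++) {
        uint8_t *p = buf + 4 + 4 * i;
        p[0] = 0xE0 | (brightness & 0x1F);         /* 111 + brightness */
        p[1] = bgr[i][0];                          /* blue */
        p[2] = bgr[i][1];                          /* green */
        p[3] = bgr[i][2];                          /* red */
    }
    memset(buf + 4 + 4 * numleds, 0xFF, endbytes); /* end frame clocks */

    ssize_t n = write(fd, buf, len);
    free(buf);
    return n;
}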

A few questions about the startcode of NALU

I am a beginner studying MPEG-4, and there are some definitions that confuse me.
It is said that if a NALU slice is the first slice of a frame, its start code is the 4 bytes "\x00\x00\x00\x01"; otherwise it is the 3 bytes "\x00\x00\x01". Is that mandatory? Android's MPEG4Writer seems to always use 4 bytes.
Is it possible for a NALU slice to end with "\x00"? If so, how can we determine whether that "\x00" belongs to the preceding NALU or to the following one?
No, 3-byte start codes are not required, but they can be used to save a little space.
No. Every NALU ends with a stop bit, so its last byte is guaranteed to never be 0.
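For illustration, a minimal C sketch (the function name is mine) of scanning an Annex B stream that accepts both start-code forms:

#include <stddef.h>
#include <stdint.h>

/* Return the offset of the first payload byte of the next NALU at or
   after 'from', accepting both 00 00 01 and 00 00 00 01 start codes,
   or -1 if no start code is found. */
long find_next_nalu(const uint8_t *buf, size_t len, size_t from)
{
    for (size_t i = from; i + 3 <= len; i++) {
        if (buf[i] == 0x00 && buf[i + 1] == 0x00) {
            if (buf[i + 2] == 0x01)
                return (long)(i + 3);           /* 3-byte start code */
            if (i + 4 <= len && buf[i + 2] == 0x00 && buf[i + 3] == 0x01)
                return (long)(i + 4);           /* 4-byte start code */
        }
    }
    return -1;
}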

Confused about the header size for an Ethernet frame

I was researching VLANs and came across the VLAN tag and also the headers.
If a standard 802.3 Ethernet frame has a maximum size of 1518 bytes, what is included in the 802.3 header?
Also, how do we calculate the header length for that?
What is the difference between 802.3 and 802.1Q? I know that the VLAN tag requires extra bytes, but how do we calculate how many bytes the 802.1Q VLAN tag needs?
Thanks in advance
A regular 802.3/Eth-II Ethernet frame doesn't carry VLAN info.
802.1Q can carry VLAN (and QoS) info over to the receiving end.
If the EtherType is 0x8100, you've got yourself an 802.1Q tag, which adds another 4 bytes to the 14-byte header (dmac+smac+type).
See wikipedia for reference. http://en.wikipedia.org/wiki/Ethernet_frame
EDIT:
Regular Eth-II/802.3 has a total length of:
dmac(6)+smac(6)+etype(2)+payload(1500)+crc(4) = 1518 bytes
For the case of Eth-II/802.3 with 802.1Q tagging:
dmac(6)+smac(6)+tpid 0x8100(2)+vlan/QoS tci(2)+etype(2)+payload(1500)+crc(4) = 1522 bytes
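As a small illustration, a C sketch (the function name is mine) that reads the two bytes after the MAC addresses to decide how long the header is:

#include <stddef.h>
#include <stdint.h>

/* If the 2 bytes after dmac+smac are 0x8100, a 4-byte 802.1Q tag sits
   in front of the real EtherType and the header is 18 bytes; otherwise
   those bytes are the EtherType itself and the header is 14 bytes. */
size_t ethernet_header_len(const uint8_t *frame)
{
    uint16_t tpid = (uint16_t)((frame[12] << 8) | frame[13]); /* big-endian on the wire */
    return (tpid == 0x8100) ? 18 : 14;
}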

What happens when you send an 8-bit number to an output which is 4-bit? C Language

I'm studying in high school, and we have an electronics project.
We have an output from our computer which is 4 bits wide; the output address is 37Ah,
and my teacher did this:
outportb(0x37A,0x80);
So what will appear at the output, 0h or 8h?
Unless this is a 4-bit CPU from the 70s, your output port will be 8 bits wide, but the connected hardware might only use 4 of them. In that case it is common (but not required) to use the lower 4 bits, so you would get 0x0 as the value. But that makes using 0x80 a smokescreen: it would behave the same as 0x00 or 0xF0. So from that alone I would guess that the upper 4 bits are used here, and the value sent is 0x8.
But a twisted hardware engineer could have used the middle 4 bits.
You need to explain your problem a little better. What microprocessor do you use, etc.? Is it a 4-bit output port you have?
But 0x80 is equal to 0b10000000, so if the device uses the lower 4 bits (the xxxx in 0b1000xxxx), those bits will all be zero (not turned on). This is what happens if 0x37A is an 8-bit port.
Otherwise, explain your problem better :)
Can't you try it and see what happens? Or is it only theoretical until now?
EDIT:
I see it is a printer port. Check http://www.tinet.cat/~sag/gifs/ParallelPort.gif. If you use pins 2, 3, 4, and 5, then the upper 4 bits really don't matter :) as said in my comment.
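To see what each wiring delivers, a tiny C sketch that extracts the nibble a 4-bit device would observe from the byte written to the port:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t value = 0x80;                  /* the byte sent with outportb */

    uint8_t low  = value & 0x0F;           /* device wired to D0-D3 sees 0x0 */
    uint8_t high = (uint8_t)(value >> 4);  /* device wired to D4-D7 sees 0x8 */

    printf("port write: 0x%02X\n", value);
    printf("lower 4 bits: 0x%X\n", low);
    printf("upper 4 bits: 0x%X\n", high);
    return 0;
}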