I have a PCAPNG file in which one UDP packet has a frame length of 187 bytes (1496 bits) and a data length of 472 bytes. For all other packets the frame length is greater than the data length.
Please correct me if I'm wrong: my basic understanding is that the frame length should be greater than the data length, because the frame length includes the data.
1) Was this packet captured correctly?
2) In which case could this happen?
I found it's related to fragmented packets: this case occurs when the packet is reassembled, so the reported data length is that of the whole reassembled datagram, not of the single captured frame.
Related
I have a traffic dump in a CSV file containing the packet arrival time, the frame length, and boolean values for multiple flags.
Can someone please explain how to calculate the packet size from the traffic dump?
I then want to generate a distribution of the packet sizes in Python.
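A minimal sketch in pandas, assuming the frame-length column is named frame_len (the column name is a guess; use whatever your CSV actually calls it) and that the packet size you want is simply the captured frame length:

import pandas as pd
import matplotlib.pyplot as plt

# "frame_len" is an assumed column name -- substitute the one in your dump.
df = pd.read_csv("traffic_dump.csv")
sizes = df["frame_len"]

print(sizes.describe())           # min / max / mean / quartiles

sizes.plot(kind="hist", bins=50)  # packet-size distribution
plt.xlabel("packet size (bytes)")
plt.ylabel("count")
plt.show()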
ppData points to a pointer in which is returned a host-accessible pointer to the beginning of the mapped range. This pointer minus offset must be aligned to at least VkPhysicalDeviceLimits::minMemoryMapAlignment.
I want to allocate a Vec3 of floats in a uniform buffer. A Vec3 of floats is 12 bytes.
VkMemoryRequirements { size: 16, alignment: 16, memory_type_bits: 15 }
Vulkan reports that it has to be aligned to 16 bytes, which means that the size of the allocation is now 16 bytes instead of 12. So Vulkan already handled this for me.
minMemoryMapAlignment on my GPU is 64 bytes. What exactly does this mean for my allocation? Does it mean that I cannot use the size from VkMemoryRequirements for my allocation, and that instead of allocating 16 bytes here I would have to allocate 64 bytes?
Update:
For a 12-byte allocation with a 16-byte alignment and a 64-byte minMemoryMapAlignment, I would still allocate only 16 bytes and then call:
vkMapMemory(device, memory, 0, 16, 0, &mapped);
But the pointer returned from vkMapMemory is then aligned to 64 bytes rather than 16? And all the relevant data is in the first 12 bytes, with the rest just "padded" memory? So in practice this basically means that I don't need to worry about minMemoryMapAlignment at all?
There is nothing in the spec that restricts the size of the allocation like that. The paragraph you quoted means that the mapping will be aligned to minMemoryMapAlignment, so you can tell the compiler to use aligned memory accesses when reading it. What will happen is that, when the memory is mapped, the later 48 bytes are wasted space in the host's address space. That is unlikely to matter, though.
This is why people keep saying to allocate larger blocks and subdivide them as needed. That way you can put four of those buffers into a single 64-byte allocation (which you will need anyway if you want to pipeline the rendering).
It's highly unlikely that this single vec3 is the only thing you need memory for, so take a look at your other allocations and see which ones you can combine.
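To make the suballocation idea concrete, here is a small sketch of the offset arithmetic only (align_up is a hypothetical helper, not part of the Vulkan API; the 16- and 64-byte figures are the ones from this thread):

# Pack four 16-byte uniform blocks into one 64-byte allocation.
def align_up(offset, alignment):
    return (offset + alignment - 1) & ~(alignment - 1)

req_size, req_align = 16, 16      # from the VkMemoryRequirements above
block_size = 64                   # one allocation >= minMemoryMapAlignment

offsets = []
cursor = 0
for _ in range(4):
    cursor = align_up(cursor, req_align)
    offsets.append(cursor)
    cursor += req_size

print(offsets)                    # [0, 16, 32, 48] -- all four fit in 64 bytes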
I'm studying FLAC decoding, but I can't figure out how to get the FLAC frame length. Please help.
https://xiph.org/flac/format.html
I decoded METADATA_BLOCK_STREAMINFO and got the data below:
mMinBlock: 4096
mMaxBlock: 4096
mMinFrame: 1201
mMaxFrame: 12804
mSampleRate: 44100
mBitPerSample: 16
mTotalSample: 14170212
Then I started to analyse the first frame; below is the info from the first frame header:
isFixBlock = true
blockSize = 12
sampleRate = 9
channel = 10
sampleSize = 4
number = 0
Blocking strategy: fixed-blocksize;
Block size code: 1100 (12), which means 256 * 2^(12-8) = 4096 samples;
Sample rate code: 1001: 44.1 kHz;
Channel assignment code: 1010: mid/side stereo, i.e. 2 channels;
Sample size code: 100: 16 bits per sample;
So from the above information we know this frame has 4096 samples at 16 bits per sample. That means the frame length is at least (ignoring the subframe headers, frame footer, etc.) 4096 * 16 / 8 = 8192 bytes. But if I check the FLAC file manually, the offset gap between the first and second frames is only 2976 bytes, which means the frame length of the first frame is only 2976 bytes. Is there anything wrong with my calculation?
My goal is to get the frame offset and frame length of every frame; is there a good way to do this? I know there is the sync code 0xFFF8, but scanning for it is very inefficient.
Thanks in advance!
From http://lists.xiph.org/pipermail/flac-dev/2016-February/005845.html
The frame length you calculated (8192 bytes) is that of the decoded frame, not of the FLAC frame. As it is compressed, it should indeed be smaller than 8192 bytes.
There is no direct way to find the frame length except finding where the next frame starts.
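To make the calculation above concrete, here is a sketch following the tables at https://xiph.org/flac/format.html (the function name is mine, not from any FLAC library) that turns the block-size code from the question into a sample count and the decoded size:

# Decode the 4-bit block-size code from a FLAC frame header.
def block_size_samples(code):
    if code == 1:
        return 192
    if 2 <= code <= 5:
        return 576 * 2 ** (code - 2)    # 576, 1152, 2304, 4608
    if 8 <= code <= 15:
        return 256 * 2 ** (code - 8)    # 256 .. 32768
    raise ValueError("codes 6/7 take the size from the end of the header")

samples = block_size_samples(12)        # 1100b -> 4096, as in the question
decoded_per_channel = samples * 16 // 8 # 8192 bytes of PCM per channel
print(samples, decoded_per_channel)
# The FLAC frame on disk (2976 bytes here) is the compressed form, so it
# is smaller; its length is only known once the next frame is located.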
I'm using an Omnivision OV5620.
http://electronics123.net/amazon/datasheet/OV5620_CLCC_DS%20(1.3).pdf
This is the datasheet.
There you can see the output format: 10-bit digital RGB raw data.
First, I know RGB raw data is a Bayer array.
So does 10-bit RGB mean each channel has 1024 levels, i.e. a range of 0~1023?
Or is it 8-bit RGB per channel, with the pixels' leftover LSBs forming a fifth byte of data?
Please refer to the image.
Which is correct?
They pack every four adjacent 10-bit pixels (0..1023) of a line into 5 sequential bytes, where each of the first 4 bytes contains the 8 MSBs of one pixel, and the 5th byte contains the 2 LSBs of all four pixels packed together into one byte.
This is a convenient format because if you want to convert it to RGB8 you just ignore that fifth byte.
Also, each displayed line begins with a packet header (PH) byte and terminates with a packet footer (PF) byte, and the whole frame begins with a frame start (FS) byte and terminates with a frame end (FE) byte.
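As an illustration of that packing, here is a small sketch that unpacks one 5-byte group into four 10-bit pixels (the ordering of the 2-bit LSB fields within the fifth byte is an assumption; check the datasheet):

# Four pixels' 8 MSBs in the first four bytes, their 2-bit LSBs in the fifth.
def unpack_group(five_bytes):
    msbs, lsb_byte = five_bytes[:4], five_bytes[4]
    pixels = []
    for i, msb in enumerate(msbs):
        lsb = (lsb_byte >> (2 * i)) & 0b11   # assumed LSB ordering
        pixels.append((msb << 2) | lsb)      # full 10-bit value, 0..1023
    return pixels

print(unpack_group(bytes([0xFF, 0x80, 0x00, 0x01, 0b11100100])))
# Converting to 8-bit data really is just dropping every fifth byte.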
After the ARP payload in a frame there are many zero bytes. Does anyone know the reason for the existence of these zero bytes?
Check the Ethernet II section in the packet details; all the zero bytes are labelled as padding.
Ethernet requires that all packets be at least 60 bytes long (64 bytes if you include the Frame Check Sequence at the end), so if a packet is less than 60 bytes long (including the 14-byte Ethernet header), additional padding bytes have to be added to the end of the packet.
(Those padding bytes will not show up on packets sent by the machine running Wireshark; the padding is added by the Ethernet hardware, and packets being sent by the machine capturing the traffic are given to the program before being handed to the hardware, so they haven't been padded.)
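As a worked example with the figures above: a standard ARP-over-IPv4 payload is 28 bytes, so the padding on an ARP frame comes out to 18 zero bytes:

ETH_HEADER  = 14    # destination MAC + source MAC + EtherType
ARP_PAYLOAD = 28    # ARP over Ethernet/IPv4
MIN_FRAME   = 60    # minimum Ethernet frame length, excluding the FCS

padding = max(0, MIN_FRAME - (ETH_HEADER + ARP_PAYLOAD))
print(padding)      # 18 zero bytes appended by the NIC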