I constellation-modulate my data and it becomes a waveform like the one in the picture. After going through the block diagram my headers become two, and after demodulating the block my headers become four. What should I do to remove the headers?
I mostly get RFC 1951; however, I'm not too clear on how to handle the case where (when using dynamic Huffman tables) no distance codes are needed or present. For example, let's take the input:
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890987654321ZYXWVUTSR
where no backreference is possible since there are no repetitions of length >= 3.
According to RFC 1951, at least one distance code must be present regardless, otherwise it wouldn't be possible to encode HDIST - 1. I understand, according to the reference, that such a code should be zero bits long to signal "no distance codes":
One distance code of zero bits means that there are no distance codes
used at all (the data is all literals).
In infgen symbols, I'd expect to see a dist 0 0.
Analyzing what gzip does with infgen, however, I see that TWO distance codes are emitted (each 1 bit long) for the above input (even though neither is actually used):
! infgen 2.4 output
!
gzip
!
last
dynamic
litlen 48 6
litlen 49 6
litlen 50 6
...cut...
litlen 121 6
litlen 122 6
litlen 256 6
dist 0 1
dist 1 1
literal 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890987654321Z
literal 'YXWVUTSR
end
!
crc
length
So what's the correct behavior in these cases?
If there are no matches in the deflate block, there will be no lengths from the length/literal code, and so the decoder will never look for a distance code. In that case, what would make the most sense is to provide no information at all about a distance code.
However, the format does not permit that, since the 5-bit HDIST value in the header is interpreted as 1 to 32 distance codes, for which lengths must be provided in the header. You must provide at least one distance code length in the header, even though it will never be used.
There are several valid things you can do in that case. RFC 1951 notes you can provide a single distance code (HDIST == 0, meaning one length), with length zero, which would be just one zero in the list of lengths.
It is also permitted to provide a single code of length one, or you could do as zlib is doing, which is to provide two codes of length one. You can actually put any valid distance code description you like there, and it will still be accepted.
As to why zlib's deflate is choosing to define two codes there, I can only guess that Jean-loup was being conservative, writing something he knew that even an over-simplified inflator would have to accept. Both gzip and zopfli do the same thing, and all three behave the same way when only one distance code is actually used: they could emit just the single one-bit distance code, per the RFC, but instead they emit two single-bit distance codes, one of which is never used.
Really the right thing to do would be to write a single zero length as noted in the RFC, which would take the fewest bits in the header. I will consider updating zlib to do that, to eke out a few more bits of compression.
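To make the decoder-side consequence concrete, here is a sketch (not zlib's actual code) of the check an inflator effectively performs after reading the distance code lengths from a dynamic block header:

#include <stddef.h>

/* Sketch (not zlib's actual code): after reading the HDIST + 1 distance
 * code lengths from a dynamic block header, decide whether a distance
 * table needs to be built at all.  An all-zero list is the RFC 1951
 * "no distance codes" case and must be accepted, since the block can
 * still consist entirely of literals. */
static int dist_code_present(const unsigned char *lens, size_t ndist)
{
    for (size_t i = 0; i < ndist; i++)
        if (lens[i] != 0)
            return 1;   /* at least one real distance code */
    return 0;           /* all literals: no distance table needed */
}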
I am trying to implement the DES circuit, and according to a lot of papers the S-boxes are usually implemented using an SRL or a LUT. I'm not familiar with SRLs, so I thought I would use 8 LUTs, each with 6 address lines and 4 data lines (the first 2 address lines represent the first and last bits of the block, which select the row, and the other 4 address lines represent the remaining bits, which select the column).
For example, if we take S-box 1 (shown in this figure), here is the table that comes with it.
That's just for one box; it seems wrong to me that, to do all of the boxes, we have to write 512 lines. My first question is: is a LUT a hardware component? If so, am I using it correctly? And is there a more appropriate implementation or representation?
My second question is: what does "hardware wiring" mean? I found out that all the permutation functions could be implemented using wire crossings, but I didn't understand that. Should I make a wire for every bit?
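To make the addressing concrete, this is how I would model one such LUT in software (the values are S-box 1 from the DES standard; the row comes from bits 1 and 6, the column from bits 2 to 5):

#include <stdint.h>

/* DES S-box 1 flattened into a 64-entry table, indexed exactly the way a
 * 6-address-line / 4-data-line LUT would be: address = row * 16 + column.
 * Values are from the DES standard (FIPS 46-3). */
static const uint8_t SBOX1[64] = {
    /* row 0 */ 14,  4, 13,  1,  2, 15, 11,  8,  3, 10,  6, 12,  5,  9,  0,  7,
    /* row 1 */  0, 15,  7,  4, 14,  2, 13,  1, 10,  6, 12, 11,  9,  5,  3,  8,
    /* row 2 */  4,  1, 14,  8, 13,  6,  2, 11, 15, 12,  9,  7,  3, 10,  5,  0,
    /* row 3 */ 15, 12,  8,  2,  4,  9,  1,  7,  5, 11,  3, 14, 10,  0,  6, 13
};

/* in6 holds the 6 input bits b5..b0, where b5 is the first bit of the
 * group.  Row = (b5, b0), column = (b4, b3, b2, b1), per the standard. */
static uint8_t sbox1_lookup(uint8_t in6)
{
    uint8_t row = ((in6 >> 4) & 0x2) | (in6 & 0x1);
    uint8_t col = (in6 >> 1) & 0xF;
    return SBOX1[row * 16 + col];
}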
I've built an AVI M-JPEG encoder which basically builds an AVI RIFF header with all the info.
I'm adding a frame index at the end of the video stream as specified in the specs.
The index is built as follows:
idx1[Size], then 00dc[0x10,0x00,0x00,0x00][Offset of frame X][Size of frame X] until the end. I compared it against other AVI files, and everything is the same, so I can't understand why software doesn't find - or even look for - the index in my AVI file. I also verified several times that each tag has the correct byte length indicated after it. By the way, each offset has the proper padding, and the length is the size of the JPEG only.
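To be explicit, each 16-byte index entry I write follows Microsoft's AVIINDEXENTRY layout:

#include <stdint.h>

/* One 16-byte idx1 entry, matching Microsoft's AVIINDEXENTRY; the fields
 * are written to the file in this order, all values little-endian.
 * 0x10 is AVIIF_KEYFRAME, which every M-JPEG frame gets. */
typedef struct {
    char     ckid[4];       /* "00dc": stream 0, compressed video */
    uint32_t dwFlags;       /* 0x00000010 = AVIIF_KEYFRAME */
    uint32_t dwChunkOffset; /* offset of the frame's chunk */
    uint32_t dwChunkLength; /* size of the JPEG data only */
} AviIndexEntry;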
I attached the current rendered file: movie.avi
I spent the whole day trying to figure out what the problem with my index is. The AVI spec is really simple, so I'm smashing my head on the desk.
[Edit]
As soon as my video is longer than 1 second, it fails. That currently makes no sense to me, as the algorithm is the same no matter how many frames are written.
Your AVI file violates the alignment rule: every chunk must start at an even byte.
Add a zero byte after every odd-length frame, and update the index accordingly. The chunk size in the header should still be odd to tell the true size of the data, but all offsets should be even.
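A minimal sketch of the rule, with hypothetical helper names (note that in many files the idx1 offsets are relative to the 'movi' list rather than absolute; match whichever convention the rest of your index uses):

#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper: write a 32-bit value little-endian. */
static void write_u32(FILE *f, uint32_t v)
{
    uint8_t b[4] = { (uint8_t)v, (uint8_t)(v >> 8),
                     (uint8_t)(v >> 16), (uint8_t)(v >> 24) };
    fwrite(b, 1, 4, f);
}

/* Write one frame chunk.  The header records the true (possibly odd)
 * payload size; the pad byte is written after the data but is NOT
 * counted in that size, so the next chunk starts on an even byte. */
static long write_frame_chunk(FILE *f, const uint8_t *jpeg, uint32_t size)
{
    long offset = ftell(f);      /* even, by induction */
    fwrite("00dc", 1, 4, f);
    write_u32(f, size);          /* true size, may be odd */
    fwrite(jpeg, 1, size, f);
    if (size & 1)
        fputc(0, f);             /* pad to an even boundary */
    return offset;               /* goes into the idx1 entry */
}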
I have used pgpdump on an encrypted file (via BouncyCastle) to get more information about it and found several lines about partial start, partial continue and partial end.
So I was wondering what exactly this was describing. Is it some sort of fragmentation of plain text?
Furthermore, what does the bit count after the RSA algorithm stand for? In this case it's 1022 bits, but I've seen files with 1023 and 1024 bits.
Partial body lengths are pretty well explained by this Tumblr post. OpenPGP messages are composed of packets of a given length. Sometimes, for large outputs (or, in the case of packets from GnuPG, short messages), there will be partial body lengths specifying that another header will show up to tell the reader to continue reading. From the post:
A partial body length tells the parser: “I know there are at least N more bytes in this packet. After N more bytes, there will be another header to tell you how many more bytes to read.” The idea being, I guess, that you can encrypt a stream of data as it comes in without having to know when it ends. Maybe you are PGP encrypting a speech, or some off-the-air TV. I don’t know. It can be infinite length — you can just keep throwing more partial body length headers in there, each one can handle up to a gigabyte in length. Every gigabyte it informs the parser: “yeah, there’s more coming!”
So in the case of your screenshot, pgpdump reads 8192 bytes, then encounters another header that says to read another 2048 bytes. After those 2048 bytes, it hits another header for 1037 bytes, and so on and so forth until the last continue header; 489 bytes after that is the end of the message.
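For reference, here is a minimal sketch of the new-format length octets from RFC 4880, section 4.2.2, which is the encoding pgpdump is walking (the helper name is mine):

#include <stdint.h>
#include <stddef.h>

/* Decode one new-format length header.  Sets *partial when the octet
 * announces a partial body length, meaning another length header
 * follows the returned number of bytes.  `p` points at the first
 * length octet. */
static uint32_t body_length(const uint8_t *p, size_t *consumed, int *partial)
{
    *partial = 0;
    if (p[0] < 192) {                     /* one-octet length: 0..191 */
        *consumed = 1;
        return p[0];
    }
    if (p[0] < 224) {                     /* two-octet length: 192..8383 */
        *consumed = 2;
        return ((uint32_t)(p[0] - 192) << 8) + p[1] + 192;
    }
    if (p[0] == 255) {                    /* five-octet length */
        *consumed = 5;
        return ((uint32_t)p[1] << 24) | ((uint32_t)p[2] << 16)
             | ((uint32_t)p[3] << 8) | p[4];
    }
    *partial = 1;                         /* partial length: 224..254 */
    *consumed = 1;
    return (uint32_t)1 << (p[0] & 0x1F);  /* a power of two, up to 1 GB */
}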
The 1022 bits is the length of the public modulus. It is always going to be close to 1024 (if you have a 1024-bit key), but it can end up being slightly shorter than that given the initial selection of the RSA parameters. They are still called "1024-bit keys", though, even when they are slightly shorter than that.
I am using AVSampleBufferDisplayLayer to display video that is being streamed over the network. On the sending side an AVCaptureSession is used to capture CMSampleBuffers, which are serialized into NAL units and streamed to the receiver, which then turns them back into CMSampleBuffers and feeds them to AVSampleBufferDisplayLayer (as is described, for instance, here). It works quite well - I can see the video and it streams more or less smoothly.
If I set the capture session's sessionPreset to AVCaptureSessionPresetHigh the video shown on the receiving side is cut in half - the top half displays the video from the sender while the bottom half is a solid dark green. If I use any other preset (e.g. AVCaptureSessionPresetMedium or AVCaptureSessionPreset1280x720) the video displays in its entirety.
Has anyone encountered such an issue, or has any idea what might cause it?
I tried examining the data at the source as well as the data at the destination to see if I could determine where the image is being chopped off, but I have not been successful. It occurred to me that perhaps the high-quality frame is being split into more than one NAL unit and I am not putting it back together correctly - is that possible? What does such splitting look like at the elementary-stream level (if it is possible at all)?
Thank you
Amos
The problem turned out to be that at the preset AVCaptureSessionPresetHigh one frame would get split into more than one type 5 (or type 1) NALU. On the receiving side I was combining the SPS, PPS and type 1 (or type 5) NAL units into a CMSampleBuffer, but ignoring the second part of the frame when it was split, which caused the problem.
In order to recognize whether two successive NAL units belong to the same frame, it is necessary to parse the slice header of the picture NAL units. This requires delving into the specification, but it goes more or less like this: the first field of the slice header is first_mb_in_slice, which is Exp-Golomb coded. Next come slice_type and pic_parameter_set_id, also Exp-Golomb coded, and finally frame_num, as an unsigned integer of (log2_max_frame_num_minus4 + 4) bits (to get the value of log2_max_frame_num_minus4 it is necessary to parse the SPS referenced, via the PPS, by this frame). If two consecutive NAL units have the same frame_num, they are part of the same frame and should be put into the same CMSampleBuffer.
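Here is a minimal sketch of that parse, assuming the emulation-prevention bytes (00 00 03) have already been stripped from the RBSP, separate_colour_plane_flag is 0 (true outside the 4:4:4 profiles), and log2_max_frame_num has already been taken from the SPS; all helper names are mine:

#include <stdint.h>
#include <stddef.h>

/* Tiny MSB-first bit reader over the RBSP, starting just past the
 * 1-byte NAL header. */
typedef struct { const uint8_t *p; size_t bit; } BitReader;

static unsigned read_bit(BitReader *br)
{
    unsigned b = (br->p[br->bit >> 3] >> (7 - (br->bit & 7))) & 1;
    br->bit++;
    return b;
}

static unsigned read_bits(BitReader *br, int n)
{
    unsigned v = 0;
    while (n--) v = (v << 1) | read_bit(br);
    return v;
}

/* ue(v): unsigned Exp-Golomb code, as used throughout the slice header. */
static unsigned read_ue(BitReader *br)
{
    int zeros = 0;
    while (read_bit(br) == 0) zeros++;
    return ((1u << zeros) - 1) + read_bits(br, zeros);
}

/* log2_max_frame_num = log2_max_frame_num_minus4 + 4, from the SPS. */
static unsigned slice_frame_num(const uint8_t *rbsp, int log2_max_frame_num)
{
    BitReader br = { rbsp, 0 };
    read_ue(&br);                              /* first_mb_in_slice */
    read_ue(&br);                              /* slice_type */
    read_ue(&br);                              /* pic_parameter_set_id */
    return read_bits(&br, log2_max_frame_num); /* frame_num */
}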