What do packed_to_unpacked blocks do in GNU RADIO? [closed] - gnuradio

Could somebody give me an example (e.g. input -> output) of what this block does? An explanation is also appreciated.

From the official documentation (which, if your GNU Radio build is intact, you can also access from the documentation tab of your block properties in GRC):
Convert a stream of packed bytes or shorts to stream of unpacked bytes or shorts.
input: stream of unsigned char; output: stream of unsigned char
This is the inverse of gr::blocks::unpacked_to_packed_XX.
The bits in the bytes or shorts input stream are grouped into chunks of bits_per_chunk bits and each resulting chunk is written right-justified to the output stream of bytes or shorts. All 8 or 16 bits of each input byte or short are processed. The right thing is done if bits_per_chunk is not a power of two.
The combination of gr::blocks::packed_to_unpacked_XX followed by gr_chunks_to_symbols_Xf or gr_chunks_to_symbols_Xc handles the general case of mapping from a stream of bytes or shorts into arbitrary float or complex symbols.
So you get a byte in, consisting of 8 bits, and you produce several bytes out, each one carrying bits_per_chunk bits taken from the input. Example (let bits_per_chunk=1, MSB first):
in 0b11110000
out 0b00000001 0b00000001 0b00000001 0b00000001 0b00000000 0b00000000 0b00000000 0b00000000
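The same mapping can be sketched in a few lines of Python. This is a simplified model, assuming bits_per_chunk divides the word size evenly; the real block treats the input as one continuous bit stream, so chunks may span word boundaries when it does not:

```python
def packed_to_unpacked(data, bits_per_chunk=1, word_bits=8):
    """Sketch of packed_to_unpacked (MSB first): split each input word
    into bits_per_chunk-bit chunks, emitting each chunk right-justified
    in its own output word."""
    out = []
    for word in data:
        # extract all word_bits bits, MSB first
        bits = [(word >> (word_bits - 1 - i)) & 1 for i in range(word_bits)]
        for i in range(0, word_bits, bits_per_chunk):
            value = 0
            for b in bits[i:i + bits_per_chunk]:
                value = (value << 1) | b  # right-justify the chunk
            out.append(value)
    return out

print(packed_to_unpacked([0b11110000], bits_per_chunk=1))
# [1, 1, 1, 1, 0, 0, 0, 0]
print(packed_to_unpacked([0b11011000], bits_per_chunk=2))
# [3, 1, 2, 0]
```

With bits_per_chunk=2 each output byte holds two input bits, which is the shape a downstream chunks_to_symbols block expects for, say, QPSK.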

Related

reading csv that has punctuation on column names using pandas [closed]

I have a CSV file as below, with only one column (CUST_CODE); the header and each row are wrapped in quotation marks:
“CUST_CODE”
“CST001001”
“CST000235”
“CST010231”
“CST010235”
“CST010231”
“CST010235”
“CST010231”
“CST040015”
I tried to read this file in pandas and I'm getting the error:
'utf-8' codec can't decode byte 0x93 in position 0: invalid start byte
I also tried passing the encoding as ascii and utf-8, but nothing worked.
Try passing encoding='cp1252' instead. Make sure to swap out 'Documents\Book1.csv' below with your file's actual path:
df = pd.read_csv('Documents\Book1.csv', encoding='cp1252')
df
“CUST_CODE”
0 “CST001001”
1 “CST000235”
2 “CST010231”
3 “CST010235”
4 “CST010231”
5 “CST010235”
6 “CST010231”
7 “CST040015”
Here is a Wikipedia article with more info about that encoding: https://en.wikipedia.org/wiki/Windows-1252 . A quote from the article:
"...common result was that all the quotes and apostrophes (produced by "smart quotes" in word-processing software) were replaced with question marks or boxes on non-Windows operating systems, making text difficult to read."
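The error message itself points at the culprit: 0x93 and 0x94 are the Windows-1252 curly-quote bytes, which are not valid UTF-8. A small stdlib-only sketch (the byte string below is a hypothetical two-line version of the file):

```python
# 0x93/0x94 encode the curly quotes in Windows-1252, but are invalid UTF-8 start bytes
raw = b'\x93CUST_CODE\x94\r\n\x93CST001001\x94\r\n'

try:
    raw.decode('utf-8')
except UnicodeDecodeError as err:
    print(err)  # 'utf-8' codec can't decode byte 0x93 in position 0: invalid start byte

text = raw.decode('cp1252')            # decodes cleanly
values = [line.strip('\u201c\u201d')   # strip the curly quotes afterwards if unwanted
          for line in text.splitlines()]
print(values)  # ['CUST_CODE', 'CST001001']
```

The same strip step can be applied to a pandas column after read_csv if you want plain values rather than quoted ones.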

Need suggestions with reading text files by every n-th line in Raku [closed]

I am looking for some suggestions on how I can read text files every n-th line (n lines at a time) in Raku/Perl 6.
In bioinformatics research, we sometimes need to parse text files in a somewhat less than straightforward manner, such as FASTQ files, which store data in groups of 4 lines at a time. Moreover, these FASTQ files come in pairs. So if we need to parse such files, we may need to read 4 lines from the first FASTQ file, then 4 lines from the second FASTQ file, then the next 4 lines from the first, then the next 4 lines from the second, and so on.
May I have some suggestions on the best way to approach this problem? Raku's IO.lines approach seems to handle one line at a time, but I'm not sure how to handle every n-th line.
An example fastq file pair: https://github.com/wtwt5237/perl6-for-bioinformatics/tree/master/Come%20on%2C%20sister/fastq
What we tried before with "IO.lines": https://github.com/wtwt5237/perl6-for-bioinformatics/blob/master/Come%20on%2C%20sister/script/benchmark2.p6
Reading 4 lines at a time from 2 files and processing them into a single thing can easily be done with zip and batch:
my @filenames = <file1 file2>;
for zip @filenames.map: *.IO.lines.batch(4) {
    # expect ((a,b,c,d),(e,f,g,h))
}
This will keep producing until at least one of the files is fully handled. An alternate for batch is rotor: this will keep going while both files fill up 4 lines completely. Other ways of finishing the loop are with also specifying the :partial flag with rotor, and using roundrobin instead of zip. YMMV.
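For comparison, the same zip-of-batches idea can be sketched in plain Python. The sample records here are made up; real code would iterate over two open file handles lazily instead of lists:

```python
def batched(lines, n=4):
    """Yield successive n-line groups, like Raku's .batch(n):
    the final group may be shorter (rotor without :partial would drop it)."""
    group = []
    for line in lines:
        group.append(line)
        if len(group) == n:
            yield group
            group = []
    if group:
        yield group

# two made-up FASTQ records per "file"
reads1 = ['@r1', 'ACGT', '+', '!!!!', '@r2', 'TTTT', '+', '####']
reads2 = ['@r1', 'TGCA', '+', '!!!!', '@r2', 'AAAA', '+', '####']
for rec1, rec2 in zip(batched(reads1), batched(reads2)):
    print(rec1, rec2)  # one 4-line record from each file per iteration
```

Like Raku's zip, Python's zip stops as soon as either input is exhausted.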
You can use the lines method. Raku Sequences are lazy. This means that iterating over an expression like "somefile".IO.lines will only ever read one line into memory, never the whole file. In order to do the latter you would need to assign the Sequence to an Array.
The pairs method helps you get the index of each line. In combination with the divisibility operator %% we can write
"somefile".IO.lines.pairs.grep({ .key && .key %% 4 }).map({ .value })
in order to get a sequence of every 4th line in a file.
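A hypothetical Python rendering of that pairs/grep/map pipeline, for comparison (note that the .key && … part skips line 0 as well, so "every 4th line" starts at index 4):

```python
def every_nth_line(lines, n=4):
    """Keep lines whose 0-based index is a nonzero multiple of n,
    mirroring lines.pairs.grep({ .key && .key %% n }).map({ .value })."""
    return [line for i, line in enumerate(lines) if i and i % n == 0]

lines = [f'line{i}' for i in range(10)]
print(every_nth_line(lines))  # ['line4', 'line8']
```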

Input Mask start with letter C in VB.net [closed]

I would like to create an input mask which looks like this: C-HG__.
But because C represents an optional character or space in VB.NET masking, it wouldn't let me.
Please assist.
Try using the escape element: \
MSDN has a fairly nice write-up. Here's an excerpt:
\
Escape. Escapes a mask character, turning it into a literal. "\\" is the escape sequence for a backslash.

Why does fgets not work? [closed]

I installed a new version of OS X (10.7 initially, then updated to 10.7.5) and I lost man fgets in Terminal; it no longer exists (not only fgets, some others too). I'm using Xcode 4.6.3 and have updated all kinds of documentation. In the documentation I only get FGETS(3), not fgets!
When I write this code:
#import <Foundation/Foundation.h>
int main(int argc, const char * argv[])
{
    FILE *wordFile = fopen("/tmp/words.txt", "r");
    char word[100];
    while (fgets(word, 100, wordFile))
    {
        word[strlen(word) - 1] = '\0'; // strip off the trailing \n
        NSLog(@"%s is %lu characters long", word, strlen(word));
    }
    fclose(wordFile);
    return 0;
}
I get this output:
Joe-Bob "Handyman" Brown
Jacksonville "Sly" Murphy
Shinara Bain
George "Guitar" Book is 84 characters long
Why?
My money is on @Martin R.
fgets() did not find a '\n' in the file "/tmp/words.txt" even though it was opened in text mode. The editor used to create/edit the file ended its lines with \r.
See @Michael Haren in Do line endings differ between Windows and Linux?
BTW: word[strlen(word) - 1] = '\0'; is potentially a problem (though not in this case), as strlen(word) may be 0. (@wildplasser)
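The effect is easy to reproduce: if a file uses bare \r line endings (classic Mac style), scanning for '\n' the way fgets does sees the whole file as one long line. A Python sketch, with data reconstructed from the question's output (lengths illustrative):

```python
# classic Mac OS text: lines end with '\r' only, no '\n'
data = ('Joe-Bob "Handyman" Brown\r'
        'Jacksonville "Sly" Murphy\r'
        'Shinara Bain\r'
        'George "Guitar" Book\r')

print(data.split('\n'))        # one element: a '\n'-scanner sees a single long "line"
print(data.split('\r')[:-1])   # the four lines the author expected
```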

Bzip2 block header: 1AY&SY

This is a question about the bzip2 archive format. Any bzip2 archive consists of a file header, one or more blocks, and a tail structure. All blocks start with "1AY&SY", 6 bytes holding the BCD-encoded digits of pi: 0x314159265359. According to the source of bzip2:
/*--
A 6-byte block header, the value chosen arbitrarily
as 0x314159265359 :-). A 32 bit value does not really
give a strong enough guarantee that the value will not
appear by chance in the compressed datastream. Worst-case
probability of this event, for a 900k block, is about
2.0e-3 for 32 bits, 1.0e-5 for 40 bits and 4.0e-8 for 48 bits.
For a compressed file of size 100Gb -- about 100000 blocks --
only a 48-bit marker will do. NB: normal compression/
decompression do *not* rely on these statistical properties.
They are only important when trying to recover blocks from
damaged files.
--*/
The question is: is it true that all bzip2 archives have blocks whose start is aligned to a byte boundary? I mean all archives created by the reference implementation of bzip2, the bzip2 1.0.5+ utility.
I think bzip2 may parse the stream not as a byte stream but as a bit stream (the block payload is Huffman-coded, which is not byte-aligned by design).
So, in other words: is the count from grep -c 1AY&SY greater than (Huffman coding may produce 1AY&SY inside a block) or merely equal to the number of bzip2 blocks in the file?
BZIP2 looks at a bit stream.
From http://blastedbio.blogspot.com/2011/11/random-access-to-bzip2.html:
Anyway, the important bits are that a BZIP2 file contains one or more
"streams", which are byte aligned, each containing one (zero?) or more
"blocks", which are not byte aligned, followed by an end of stream
marker (the six bytes 0x177245385090 which is the square root of pi as
a binary coded decimal (BCD), a four byte checksum, and empty bits for
byte alignment).
The bzip2 Wikipedia article also alludes to bit-level block alignment (see the File Format section), which is in line with what I remember from school (I had to implement the algorithm...).
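Because blocks are not byte aligned, a byte-oriented grep -c can undercount: a marker starting mid-byte is invisible to it. To find every block header you have to test all eight bit offsets. A pure-Python sketch (illustrative only, not a bzip2 parser):

```python
BLOCK_MAGIC = 0x314159265359   # "1AY&SY": pi in BCD, the 48-bit block header

def find_block_magic(data: bytes):
    """Return the bit offsets (MSB first) at which the 48-bit magic occurs."""
    stream = int.from_bytes(data, 'big')   # whole input as one big integer
    total_bits = len(data) * 8
    hits = []
    for bit in range(total_bits - 48 + 1):
        # take the 48-bit window starting at this bit offset
        window = (stream >> (total_bits - 48 - bit)) & ((1 << 48) - 1)
        if window == BLOCK_MAGIC:
            hits.append(bit)
    return hits

# synthetic stream: 3 junk bits, the magic, then zero padding to a byte boundary
value = (0b101 << 53) | (BLOCK_MAGIC << 5)
data = value.to_bytes(7, 'big')
print(find_block_magic(data))  # [3]: the marker sits 3 bits past the byte boundary
```

A byte-level search for the string "1AY&SY" in that synthetic stream finds nothing, while the bit-level scan does; real block-recovery tools (e.g. bzip2recover) do essentially this kind of bit-level scan.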