I am trying to build a basic GNU Radio example. I have a text file containing some text, and I want to prepend a preamble to it so that the beginning of the data can be detected at the receiver. Next I want to modulate it with GMSK and transmit it with a USRP. At the receiver, another USRP will take samples of the received signal and pass them to the GMSK Demod block; a correlator will then search for the preamble, remove it, and pass the text data to a File Sink, where it will be stored in a text file.
[Source File] -> [Packet Encoder] -> [GMSK Mod] -> [USRP]

[USRP] -> [Sync] -> [GMSK Demod] -> [Packet Decoder] -> [File Sink]
Assuming phase and frequency shifts are corrected, my understanding is that GNU Radio will take a chunk of the data from the text file and pass it to the next block, where a preamble is added; GMSK then modulates this sequence and sends it to the USRP, and this repeats until all the text in the file has been read. My concern is at the receiver: how will the receive blocks know the size (length) of the data the scheduler passes on every 'loop' (i.e., the number of bytes from the text file plus the length of the preamble)? If the scheduler at the receiver passes fewer symbols to a block than the length of the text chunk plus preamble, I might detect the preamble but recover only part of the text that was sent with it, and the remaining bytes would arrive in the next 'loop' and be discarded (lost) by the preamble search.
Unfortunately I could not find any documentation on this, so I might be missing something. My main question is: how can I make the scheduler take a sufficient number of symbols to find the preamble and extract the accompanying text bytes, while still allowing the blocks to repeat the same steps for the next preamble + text sequence?
So I have a function I'm using to read data from a file. It works fine if the file is plain text, but when I try to read a binary file, such as a PNG, it returns different content (diff confirms that). I opened a hex editor to see what was wrong and found out it is inserting some 0xC2 bytes along with the file's data (I don't know if their position is random or if there are other extra bytes besides this 0xC2 one).
This is my function. I just want it to read the file and save its contents to a variable.
proc read_file {path} {
    set channel [open $path r]
    fconfigure $channel -translation binary
    set return_string [read $channel]
    close $channel
    return $return_string
}
To actually print, I'm doing this:
puts -nonewline [read_file file.png]
When you open a file, it defaults to being in text mode. In text mode (which is really a combination of options) the IO layer translates characters from whatever encoding they are in into Tcl's internal encoding, and does the reverse operation on output. The default encoding scheme is platform-specific, but in your case it sounds like it is UTF-8. (Tcl uses a complex internal system of encodings; it doesn't expose those to the outside world.)
By contrast, when you put the channel into binary mode, the bytes on the outside are directly mapped to characters in the range 0-255 (and vice versa on output). You get a perfect copy, provided you put both input and output channels in binary mode. (There are other optimisations for binary mode, but they don't matter here.)
When you only put one of the channels in binary mode, you get what looks like corruption. It isn't random though. In particular, when the input is binary but the output is UTF-8, input bytes in the range 128-255 get converted into multiple output bytes, where the first of those bytes is in the sort of range you observed. There are other combinations that mess things up; the whole range of problems is collectively known as mojibake.
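For example, here is a minimal sketch of the binary-safe counterpart; the write_file helper and copy.png are hypothetical names, not from the original question. The key point is that both channels are configured with -translation binary:

# Hypothetical helper: write raw bytes out through a binary-mode channel.
proc write_file {path data} {
    set channel [open $path w]
    fconfigure $channel -translation binary
    puts -nonewline $channel $data
    close $channel
}

# Copy a binary file without corruption: both channels are binary.
write_file copy.png [read_file file.png]

# If you really want to dump the bytes to stdout, switch stdout to
# binary mode first so bytes 128-255 are not re-encoded as UTF-8.
fconfigure stdout -translation binary
puts -nonewline [read_file file.png]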
tl;dr Don't mix up binary and text data unless you're very careful. The results of getting it wrong are "surprising".
RFC 1952 (GZIP File Format Specification) section 2.3.1.1 reads:
2.3.1.1. Extra field
If the FLG.FEXTRA bit is set, an "extra field" is present in
the header, with total length XLEN bytes. It consists of a
series of subfields, each of the form:
+---+---+---+---+==================================+
|SI1|SI2| LEN |... LEN bytes of subfield data ...|
+---+---+---+---+==================================+
SI1 and SI2 provide a subfield ID, typically two ASCII letters
with some mnemonic value. Jean-Loup Gailly
<email#hidden> is maintaining a registry of subfield
IDs; please send him any subfield ID you wish to use. Subfield
IDs with SI2 = 0 are reserved for future use. The following
IDs are currently defined:
SI1 SI2 Data
---------- ---------- ----
0x41 ('A') 0x70 ('P') Apollo file type information
LEN gives the length of the subfield data, excluding the 4
initial bytes.
Do any subfield types exist beyond the AP type given in the RFC? A web search doesn't turn up a list; neither is there any mention on gzip's Wikipedia page, the GNU homepage, in the gzip source code, or on Stack Overflow.
As far as I know, there is no such registry being maintained. Jean-loup no longer works on gzip.
Here is one more subfield in use:
The BGZF format (which is gzip-conformant), developed for use in bioinformatics, uses the subfield type "BC" to indicate the size of the current block. This is used to make parallel decompression easy.
From the specification at http://samtools.github.io/hts-specs/SAMv1.pdf:
Each BGZF block contains a standard gzip file header with the following standard-compliant extensions:
- The F.EXTRA bit in the header is set to indicate that extra fields are present.
- The extra field used by BGZF uses the two subfield ID values 66 and 67 (ASCII ‘BC’).
- The length of the BGZF extra field payload (field LEN in the gzip specification) is 2 (two bytes of payload).
- The payload of the BGZF extra field is a 16-bit unsigned integer in little endian format. This integer gives the size of the containing BGZF block minus one.
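As an illustration, here is a rough sketch (Python, not from either specification) of splitting a gzip FEXTRA payload into subfields and decoding the BGZF 'BC' subfield; the example byte string is made up:

import struct

def parse_extra_subfields(extra):
    # Split a gzip FEXTRA payload (XLEN bytes) into (SI1, SI2, data) subfields.
    # Per RFC 1952, LEN is a 2-byte little-endian count of the data that follows.
    subfields = []
    pos = 0
    while pos + 4 <= len(extra):
        si1, si2, length = struct.unpack_from("<BBH", extra, pos)
        subfields.append((chr(si1), chr(si2), extra[pos + 4:pos + 4 + length]))
        pos += 4 + length
    return subfields

# Hypothetical FEXTRA payload holding a single BGZF 'BC' subfield.
extra = bytes([0x42, 0x43, 0x02, 0x00, 0xFF, 0x01])
for si1, si2, data in parse_extra_subfields(extra):
    if (si1, si2) == ("B", "C"):
        # The payload is a 16-bit little-endian integer: block size minus one.
        print("BGZF block size:", struct.unpack("<H", data)[0] + 1)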
I have a formatted data file which is typically billions of lines long, with several lines of headers of variable length. The data file takes the form:
# header 1
# header 2
# headers are of variable length.
# data begins from next line.
1.23 4.56 7.89 0.12
2.34 5.67 8.90 1.23
:
:
# billions of lines of data, each row the same length, same format.
-- end of file --
I would like to extract a portion of data from this file, and my current code looks like:
do j=1,jmax   ! Suppose I want to extract jmax lines of data from the file.
   [algorithm to determine number of lines to skip, "N(j)"]
   ! This determines the number of lines to skip from the previous file
   ! position, where the data was read on the (j-1)th iteration.

   ! Skip N-1 lines to go to the next data line to read off:
   do i=1,N-1
      read(unit=unit,fmt='(A)')
   end do

   ! Now read off the line of data I want:
   read(unit=unit,fmt='(data_format)') data1, data2, etc.
   ! Data is stored in some arrays.
end do
The problem is, N(j) can be anywhere between 1 and several billion, so skipping that many lines one read at a time takes a long time.
My question is, is there a more efficient way of skipping millions of lines of data? The only way I can think of, while sticking to Fortran, is to open the file with direct access and jump to the desired line upon opening the file.
As you suggest, direct access seems like the best option. But that requires the records to all have the same length, which your headers violate. Also, why use formatted output? With a file of this length, it's hard to imagine a person reading it. If you use unformatted IO, the file will be smaller and the IO will be faster. Perhaps create two files: one with the headers (metadata) in human-readable form, and the other with the data in native form. A native/binary representation means a loss of portability, which is something to consider if you want to move the files to different computer architectures or have them be usable for decades; otherwise it's probably worth the convenience. Another option would be a more sophisticated file format that combines metadata and data, such as HDF5 or FITS, but for communication between two programs of one person that's probably excessive.
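A minimal sketch of the two-file idea (the file name data.bin and the four-reals-per-row layout are assumptions, not from the original post): the data is written as fixed-length unformatted records, so record k can be read directly without skipping the k-1 records before it.

program direct_access_sketch
  implicit none
  integer, parameter :: ncol = 4   ! assumed: four reals per row
  real :: row(ncol)
  integer :: unit, rl, k

  ! Record length for one row, in the processor's RECL units.
  inquire (iolength=rl) row

  open (newunit=unit, file='data.bin', form='unformatted', &
        access='direct', recl=rl, status='old', action='read')

  k = 1000000                      ! jump straight to the millionth row
  read (unit, rec=k) row
  print *, row

  close (unit)
end program direct_access_sketch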
How do I fix the Fortran runtime error: Bad integer for item 0 in list input?
Below is the Fortran program which generates a runtime error.
CHARACTER CNFILE*(*)
REAL BOX
INTEGER CNUNIT
PARAMETER ( CNUNIT = 10 )
INTEGER NN
OPEN ( UNIT = CNUNIT, FILE = CNFILE, STATUS = 'OLD' )
READ ( CNUNIT,* ) NN, BOX
The error message received from gdb is:
At line 688 of file MCNPT.f (unit = 10, file = 'LATTICE-256.txt')
Fortran runtime error: Bad integer for item 0 in list input
[Inferior 1 (process 3052) exited with code 02]
(gdb)
I am not sure what options must be specified for READ() to read two numbers from the text file. Does it matter whether the two numbers on the same line are written as integers or reals in the text file?
Below is the gdb execution of the program, using a breakpoint at the OPEN call:
Breakpoint 1, readcn (
cnfile=<error reading variable: Cannot access memory at address 0x7fffffffdff0>,
box=-3.37898272e+33, _cnfile=30) at MCNPT.f:686
Since you did not specify form="unformatted" on the open statement, the unit / file is opened for formatted IO. This is appropriate for a human-readable text file. ("unformatted" would be used for a non-human-readable file in computer-native format, sometimes called "binary".) Therefore you should provide a format on the read, or use a list-directed read, i.e., read(unit, *). To advise on a particular format we would have to know the layout of the numbers in the file. A possible read with a format is: read (CNUNIT, '(I4, 2X, F6.2)') NN, BOX
P.S. I'm answering the question in your question and not the title, which seems unrelated.
EDIT: now that you have shown the text data file, a list-directed read looks easier, because the data doesn't line up in columns. It seems that the file has two integers on the first line, then three real numbers on each of the following lines. Most likely you need a different read for the first line. Is the code sample that you are showing us trying to read the first line, or one of the later lines? If the first line, it would seem plausible to read into two integer variables; if a later line, into two or three real variables (two if you wish to skip the third data item on the line).
EDIT 2: the question has been substantially altered several times, which is very confusing. The first line of the text file that was shown in one version of the question contained integers, with later lines holding reals. Since the list-directed read shown reads into an integer and a real variable, it will have problems if you attempt to use it on the later lines, which hold real values.
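For illustration, a minimal sketch of reading such a file with list-directed input, assuming (as described above) that the first line holds two integers and the later lines hold three reals each; the second variable name NSTEP is made up, and the file name is taken from the error message:

program list_directed_sketch
  implicit none
  integer :: unit, ios, nn, nstep
  real :: x, y, z

  open (newunit=unit, file='LATTICE-256.txt', status='old', action='read')

  read (unit, *) nn, nstep              ! first line: two integers
  do
     read (unit, *, iostat=ios) x, y, z ! later lines: three reals each
     if (ios /= 0) exit
     ! ... use x, y, z ...
  end do

  close (unit)
end program list_directed_sketch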
As per Lua documentation, file:read("*l") reads next line skipping end of line.
Note:- "*l": reads the next line skipping the end of line, returning nil on end of file. This is the default format
Is this documentation right? Because file:read("*l") seems to read the current line instead of the next line, or is my understanding wrong? Pretty confusing...
Lua manages files using the same model as the underlying C implementation (this model is also used by other programming languages and is fairly common). If you are not familiar with this way of looking at files, the terminology can indeed be unclear.
In this model a file is represented as a stream of bytes having a so-called current position. The current position is a conceptual pointer to the first byte in the file that will be read or written by the next I/O operation. When you open a file for reading, a new stream is set up so that its current position is the beginning of the file, i.e. the current position "points" to the first byte in the file.
In Lua you manage streams through so-called file handles, which are a sort of intermediary for the underlying streams. Any operation you perform using the handle is carried over to the corresponding stream.
Lua io.open opens a file, associates a C stream with it and returns a file handle that represents that stream:
local file_handle = io.open( "myfile.txt" ) -- file opened for reading
Therefore, if you perform any operation that reads some bytes (usually interpreted as characters, if you work with text files) those are read from the stream and for each byte read the current position of the stream advances by one, pointing each time to the next byte to be read.
Lua documentation implies this model. Thus when it says next line, it means that the input operation will read all characters in the stream starting from the current position until an end-of-line character is found.
Note that if you look at text files as a sequence of lines you could be misled, since you could think of a "current line" and a "next line". That would be a higher-level model compared to the C model. There is no "current line" in C. In C, text files are nothing more than a sequence of bytes where some special characters (end-of-line characters) undergo some special treatment (which is mostly implementation-dependent) and are used by some C standard functions as line terminators, i.e. as marks that tell them when to stop reading characters.
Another source of confusion for newbies or people coming from higher-level languages is that in C, by historical accident, bytes are handled as characters (the basic data type for handling single bytes is char, which is the smallest numeric type in C!). Therefore, for people with a C background it is natural to think of bytes as characters and vice versa.
Although Lua is a much higher level language than C, its close relationship with C (it was designed to be easily interfaced with C code) makes it inherit part of this C "bytes-as-characters" approach. In fact, for example, Lua strings can hold arbitrary bytes and can be used to process raw binary data.
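To see the model in action, here is a small sketch (myfile.txt and its contents are made up) showing how each read consumes the line at the current position and advances it:

-- Assume myfile.txt contains "first line" and "second line", one per line.
local file_handle = assert(io.open("myfile.txt", "r"))

print(file_handle:read("*l"))   -- "first line": reads from the start up to the first end-of-line
print(file_handle:read("*l"))   -- "second line": the current position had advanced past line 1

print(file_handle:seek())       -- current position, in bytes from the beginning of the file
file_handle:seek("set", 0)      -- move the current position back to the start
print(file_handle:read("*l"))   -- "first line" again

file_handle:close()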
Like Lorenso said above, read starts at the current file position and reads some portion of the file from that position. How much of the file it reads depends on the read format. For reference, in Lua 5.3:
"*all" : reads to the end of the file
"*line" : reads from the current position to the end of the line.
The end of the line is marked by a special character usually denoted
LfCr (Line feed, carriage return )
"*number" : reads a number, that is, it will read up to the end of what
it recognizes in the text as a number, stopping at, for example, a
comma ",".
num : reads a string with up to num characters
Here's an example that reads a file with a list of numbers into an array (a table), then returns the array. (Just change the "*number" to "*line" and it would read a file line by line):
function read_array(file)
    local arr = {}
    local handle = assert( io.open(file, "r") )
    local value = handle:read("*number")
    while value do
        table.insert( arr, value )
        value = handle:read("*number")
    end
    handle:close()
    return arr
end