GNU Radio - BFSK

I have a question about BFSK/M-FSK modulation in GNU Radio, but it may be a common problem :)
Suppose a symbol (1 or 0) is 64 samples long (2 periods of sin/cos)
and I want to modulate it with 2 different frequencies, for example:
f1 = 175 kHz (period 5.7 µs)
f2 = 225 kHz (period 4.4 µs)
The problem is that the binary 0 and 1 symbols have the same width (64 samples), but the sin/cos periods have different widths, so the longer period gets cut off by the fixed symbol length.
Do I need to generate two different symbol widths for 0 and 1? And what about larger M-FSK?

No, you typically leave your symbol length alone. You're right that these symbols will then contain different numbers of periods, but that's typically not a problem, because it doesn't matter for your demodulation.
There are some fine mathematical differences in the energy these symbols might have, but in a system where that matters, you simply can't choose arbitrary frequencies -- that's where OFDM comes into play.
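To illustrate the point, here is a minimal NumPy sketch (plain Python, not GNU Radio blocks; the 5.6 MHz sample rate is an assumption chosen so that exactly 2 periods of the 175 kHz tone fit into one 64-sample symbol):

import numpy as np

# Minimal sketch: fixed-length symbols, two tone frequencies.
samp_rate = 5.6e6            # Hz (assumed)
sps = 64                     # samples per symbol, the same for both bits
f0, f1 = 175e3, 225e3        # the two FSK tones

bits = np.array([0, 1, 1, 0, 1])

# One tone frequency per output sample.
freqs = np.repeat(np.where(bits == 1, f1, f0), sps)

# Integrate the frequency to get a continuous phase, then take the cosine.
# Every symbol is exactly sps samples long; the 175 kHz symbols simply
# contain fewer full periods than the 225 kHz ones.
phase = 2 * np.pi * np.cumsum(freqs) / samp_rate
waveform = np.cos(phase)
print(waveform.shape)        # (320,) -> 5 symbols of 64 samples each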
You're right, however, that this is only barely a GNU Radio question. Maybe you're better off on the ham radio Stack Exchange or the DSP Stack Exchange site.

Related

How to select a lens for reading very small font characters

I am trying to implement an OCR / OCV algorithm for inspecting printed text in black ink on a white background. The text size ranges from 3 pt to 6 pt. I first tried to capture images with a 5 MP monochrome camera using 8 mm, 12 mm and 16 mm lenses, but I could not get the characters with good clarity. I repeated the same test with a 10 MP camera, considering that the higher pixel count would give more information, but I got the same results.
I'm not sure how I can get a clearer image, whether 5 MP / 10 MP is enough, and whether there is any way to determine the lens to be used in such an application.
The FOV for inspection is about 300 x 250 mm and the working distance I considered is approx. 400 mm to 650 mm.
Due to copyright concerns I cannot post the image of the object under inspection.
Any help or direction is greatly appreciated. Thanks.
It's simple geometry. Roughly:
3 pt =~ 1 mm.
Assuming you want 10 pixels to cover each character, your IFOV needs to be:
IFOV =~ (font_width / 10) / distance = 0.1 / 650 =~ 0.15 milliradians / pixel.
For the work area width you mention, the horizontal field of view is:
FOV = 2 * atan((300 / 2) / 650) =~ 453 milliradians =~ 26 deg
So the minimal (horizontal) sensor resolution you'd need is:
Width = 453 / 0.15 = 3020 pixels.
Thus a 10 MP sensor should be quite sufficient, and a 5 MP one may be adequate.
To choose the lens, from the above spec for the FOV, and the format (width, height) of your choice of sensor, you can work out by the same simple trigonometry the needed focal length. Finally, among all lenses matching that focal length that are available for your camera mount, you need to choose one that (a) can be focused at the distance of interest and (b) has an adequate Optical Transfer Function such that one line can be resolved at the above IFOV.
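For reference, here is a small Python sketch of the same arithmetic (with unrounded intermediate values the required width comes out near 2950 pixels rather than 3020, same conclusion; the 5.7 mm sensor width used in the focal-length step is an assumption for a typical 1/2.5" 5 MP sensor):

import math

font_width_mm = 1.0          # ~3 pt characters
pixels_per_char = 10
distance_mm = 650.0
fov_width_mm = 300.0

# Angular size one pixel must cover (instantaneous field of view).
ifov = (font_width_mm / pixels_per_char) / distance_mm       # rad / pixel
# Full horizontal field of view.
fov = 2 * math.atan((fov_width_mm / 2) / distance_mm)        # rad

print(f"IFOV = {ifov * 1e3:.3f} mrad/pixel")                 # ~0.154
print(f"FOV  = {math.degrees(fov):.1f} deg")                 # ~26
print(f"Need >= {fov / ifov:.0f} pixels across")             # ~2948

# Focal length for an assumed sensor width (thin-lens / pinhole model).
sensor_width_mm = 5.7
focal_length_mm = (sensor_width_mm / 2) / math.tan(fov / 2)
print(f"Focal length ~ {focal_length_mm:.1f} mm")            # roughly 12 mm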
In practice, after running the math and looking at catalogs, you'll end up with several candidate lenses. My advice would then be to get samples and try them out on your own setup, specifically with the particular lighting rig you'll be using, before making a final decision. Depending on your particular project, factors influencing the choice, in addition (obviously) to the cost of the lens + sensor combination, may be size/weight, sensitivity to environmental conditions (temperature, humidity, vibration), availability and lead time for sourcing, etc.

Encoding - Efficiently send sparse boolean array

I have a 256 x 256 boolean array. This array is constantly changing and the set bits are practically randomly distributed.
I need to send the current list of set bits to many clients as they request it.
The following numbers are approximations.
If I send the coordinates for each set bit:
set bits    data transfer (bytes)
       0       0
     100     200
     300     600
     500    1000
    1000    2000
If I send the distance (scanning from left to right) to the next set bit:
set bits    data transfer (bytes)
       0       0
     100     256
     300     300
     500     500
    1000    1000
The typical number of bits that are set in this sparse array is around 300-500, so the second solution is better.
Is there a way I can do better than this without much added processing overhead?
Since you say "practically randomly distributed", let's assume that each location is a Bernoulli trial with probability p. p is chosen to get the fill rate you expect. You can think of the length of a "run" (your option 2) as the number of Bernoulli trials necessary to get a success. It turns out this number of trials follows the Geometric distribution (with probability p). http://en.wikipedia.org/wiki/Geometric_distribution
What you've done so far in option #2 is to recognize the maximum length of a run for each value of p, and reserve that many bits to send all of them. Note that this maximum length is still only probabilistic, and the scheme will fail if you get REALLY REALLY unlucky and all your bits are clustered at the beginning and end.
As @Mike Dunlavey recommends in the comment, Huffman coding, or some other form of entropy coding, can redistribute the bits spent according to the frequency of the lengths. That is, short runs are much more common, so use fewer bits to send those lengths. The theoretical limit for this encoding efficiency is the "entropy" of the distribution, which you can look up on that Wikipedia page and evaluate for different probabilities. In your case, this entropy ranges from 7.5 bits per run (for 1000 entries) to 10.8 bits per run (for 100).
Actually, this means you can't do much better than you're currently doing for the 1000-entry case: 8 bits = 1 byte per value. For the case of 100 entries, you're currently spending 20.5 bits per run instead of the theoretically possible 10.8, so that end has the highest chance for improvement. And in the case of 300: I think you haven't reserved enough bits to represent these sequences. The entropy comes out to 9.23 bits per run, and you're currently sending 8. You will find many cases where the gap to the next set bit exceeds 255, which will overflow your one-byte representation.
All of this, of course, assumes that things really are random. If they're not, you need a different entropy calculation. You can always compute the entropy right out of your data with a histogram, and decide if it's worth pursuing a more complicated option.
Finally, also note that real-life entropy coders only approximate the entropy. Huffman coding, for example, has to assign an integer number of bits to each run length. Arithmetic coding can assign fractional bits.
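As a rough illustration (a sketch, not code from the answer), the entropy figures above can be reproduced like this; the 256 x 256 array size is taken from the question:

import math

# Bits per run for the geometric run-length distribution,
# assuming each of the 256*256 cells is set independently.
def bits_per_run(set_bits, cells=256 * 256):
    p = set_bits / cells
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p)) / p

for k in (100, 300, 500, 1000):
    h = bits_per_run(k)
    print(f"{k:5d} set bits: ~{h:4.1f} bits/run, ~{k * h / 8:5.0f} bytes total")

Comparing the last column with the tables in the question shows where the fixed-width schemes leave room for improvement.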

Simple voice recognition when whispering

I'm trying to do simple voice-to-text mapping using pocketsphinx. The grammar is very simple, such as:
public <grammar> = (Matt, Anna, Tom, Christine)+ (One | Two | Three | Four | Five | Six | Seven | Eight | Nine | Zero)+ ;
e.g:
Tom Anna Three Three
yields
Tom Anna 33
I adapted the acoustic model (to take into account my foreign accent) and after that I got decent performance (~94% accuracy). I used a training dataset of ~3 minutes.
Right now I'm trying to do the same, but by whispering into the microphone. The accuracy dropped significantly, to ~50% without adaptation. With adaptation for accent
I got ~60%. I tried other things, including denoising and boosting the volume. I read the whole documentation, but I was wondering if anyone could answer some questions so I can
better understand in which direction I should go to improve performance.
1) In the tutorial you adapt the hub4wsj_sc_8k acoustic model. I guess "8k" is a sampling parameter. When using sphinx_fe you use "-samprate 16000". Was it done deliberately to adapt an 8k model using data with a 16k sampling rate? Why wasn't data with 8k sampling used? Does it influence performance?
2) In Sphinx 4.1 (in comparison to pocketsphinx) there are different acoustic models, e.g. WSJ_8gau_13dCep_16k_40mel_130Hz_6800Hz.jar. Can those models be used with pocketsphinx? Will an acoustic model with 16k sampling typically perform better on data with a 16k sampling rate?
3) When using data for adaptation, should I use recordings in normal speaking mode (to adapt only for my accent) or in whispering mode (to adapt to whisper and my accent)? I think I tried both scenarios and didn't notice any difference that would let me draw a conclusion, but I don't know the pocketsphinx internals, so I might be doing something wrong.
4) I used the following script to record the adaptation training and testing data from the tutorial:
for i in `seq 1 20`; do
fn=`printf arctic_%04d $i`;
read sent; echo $sent;
rec -r 16000 -e signed-integer -b 16 -c 1 $fn.wav 2>/dev/null;
done < arctic20.txt
I noticed that each time I hit Control-C the keypress is audible in the recorded audio, which led to errors. Trimming the audio sometimes helped, and sometimes led to
other errors instead. Is there any requirement that each recording have a few seconds of quiet before and after speaking?
5) When accumulating observation counts, are there any settings I can tinker with to improve performance?
6) What's the difference between semi-continuous and continuous model? Can pocketsphinx use continuous model?
7) I noticed that the 'mixture_weights' file from Sphinx4 is much smaller compared to the one in pocketsphinx-extra. Does it make any difference?
8) I tried different combinations of removing white noise (using the 'sox' toolkit, e.g. sox noisy.wav filtered.wav noisered profile.nfo 0.1). Depending on the last parameter,
it sometimes improved things a little bit (~3%) and sometimes made them worse. Is it good to remove noise, or is that something pocketsphinx does as well? My environment is quiet;
there is only white noise, which I guess can have more impact when audio is recorded whispering.
9) I noticed that boosting the volume (gain) alone usually made the performance a little bit worse, even though for humans it was easier to distinguish the words. Should I avoid it?
10) Overall I tried different combinations, and the best result I got was ~65% when only removing noise, so only a slight (5%) improvement. Below are some stats:
//ORIGINAL UNPROCESSED TESTING FILES
TOTAL Words: 111 Correct: 72 Errors: 43
TOTAL Percent correct = 64.86% Error = 38.74% Accuracy = 61.26%
TOTAL Insertions: 4 Deletions: 13 Substitutions: 26
//DENOISED + VOLUME UP
TOTAL Words: 111 Correct: 76 Errors: 42
TOTAL Percent correct = 68.47% Error = 37.84% Accuracy = 62.16%
TOTAL Insertions: 7 Deletions: 4 Substitutions: 31
//VOLUME UP
TOTAL Words: 111 Correct: 69 Errors: 47
TOTAL Percent correct = 62.16% Error = 42.34% Accuracy = 57.66%
TOTAL Insertions: 5 Deletions: 12 Substitutions: 30
//DENOISE, threshold 0.1
TOTAL Words: 111 Correct: 77 Errors: 41
TOTAL Percent correct = 69.37% Error = 36.94% Accuracy = 63.06%
TOTAL Insertions: 7 Deletions: 3 Substitutions: 31
//DENOISE, threshold 0.21
TOTAL Words: 111 Correct: 80 Errors: 38
TOTAL Percent correct = 72.07% Error = 34.23% Accuracy = 65.77%
TOTAL Insertions: 7 Deletions: 3 Substitutions: 28
I applied this processing only to the testing data. Should the training data be processed in the same way? I think I tried that, but there was barely any difference.
11) In all these tests I used an ARPA language model. When using JSGF, results were usually much worse (I have the latest pocketsphinx branch). Why is that?
12) Because in each sentence the maximum number would be '999' and there are no more than 3 names, I modified the JSGF and replaced the repetition sign '+' by manually repeating the content in the parentheses. This time the results were much closer to ARPA. Is there any way in the grammar to specify a maximum number of repetitions, as in regular expressions?
13) When using the ARPA model, I generated it from all possible combinations (since the dictionary is fixed and really small: ~15 words), but during testing I still sometimes received illegal results, e.g. Tom Anna (without any required number). Is there any way to enforce some structure using the ARPA model?
14) Should the dictionary be limited to only those ~15 words, or will a full dictionary affect only speed and not accuracy?
15) Is modifying the dictionary (phonemes) the way to go to improve recognition when whispering? (I'm not an expert, but when we whisper I guess some words might sound different?)
16) Any other tips on how to improve accuracy would be really helpful!
Regarding whispering: when you whisper, the sound waves lack a meaningful periodic component (the vibrations produced by your vocal cords resonating during normal speech are absent when whispering). You can try this by putting your finger on your throat while loudly saying 'aaaaaa', and then just whispering it.
AFAIR acoustic modelling relies a lot on taking the frequency spectrum of the sound to detect peaks (formants) and relate them to phones (like vowels).
Educated guess: when whispering, the spectrum is mostly white noise, slightly shaped by the position of the mouth (tongue, openness of the mouth, etc.), which is enough for humans but not nearly enough to make the peaks distinguishable by a computer.
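To see the difference numerically, here is a rough NumPy sketch (an illustration, not pocketsphinx code) of a simple autocorrelation-based voicing check; the 16 kHz rate and the synthetic test signals are assumptions:

import numpy as np

# A strong autocorrelation peak at a pitch-period lag indicates voiced
# (periodic) speech; whispered, noise-like frames show no such peak.
def voicing_strength(frame, samp_rate=16000, fmin=75, fmax=400):
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac /= ac[0] + 1e-12                        # normalize so lag 0 == 1
    lo, hi = samp_rate // fmax, samp_rate // fmin
    return float(ac[lo:hi].max())              # high = voiced, low = whisper-like

rng = np.random.default_rng(0)
t = np.arange(2048) / 16000
voiced = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size)   # fake voiced frame
whisper = rng.standard_normal(t.size)                                      # fake whispered frame
print(voicing_strength(voiced), voicing_strength(whisper))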

Weird arguments in UIColor colorWithRed:green:blue:alpha:

When checking this method, I was expecting red, green and blue to be in the 0-255 range. Instead, they're in the 0-1 range.
Am I the only one who thinks this is weird?
Is there any reason not to use the more common 0-255 values for RGB, or even hex numbers (as in HTML)?
In my opinion this is not weird. Both 0-255 and 0.0-1.0 levels are widely used on different platforms. You can always convert with something like this:
#define FLOAT_COLOR_VALUE(n) ((n) / 255.0)
The reason RGB values are sometimes represented as float values rather than 0 to 255 is that 0 to 255 assumes you are using 8 bits to represent each colour component, and hence 24 bits for each colour in your frame buffers. This may not be the case if you are using displays that only support 256 colours in total, or more than 16 million, etc.
In theory there can be an infinite number of shades of red, green or blue. The number of bits you use to represent them depends on how accurately you need to represent colour and how much memory you have on graphics cards to represent images, etc.
For many cases 0 to 255 is fine. But there is another world out there where it isn't, and for those devices / accurate rendering requirements, floating-point numbers provide a much-needed alternative.

How would you most efficiently store latitude and longitude data?

This question comes from a homework assignment I was given. You can base your storage system off of one of the three following formats:
DD MM SS.S
DD MM.MMM
DD.DDDDD
You want to maximize the amount of data you can store by using as few bytes as possible.
My solution is based off the first format. I used 3 bytes for latitude: 8 bits for the DD (-90 to 90), 6 bits for the MM (0-59), and 10 bits for the SS.S (0-59.9). I then used 25 bits for the longitude: 9 bits for the DDD (-180 to 180), 6 bits for the MM, and 10 for the SS.S. This solution doesn't fit nicely on a byte border, but I figured the next reading can be stored immediately following the previous one, and 8 readings would use only 49 bytes.
I'm curious what methods others can come up with. Is there a more efficient way to store this data? As a note, I considered offset-based storage, but the problem gave no indication of how much the values may change between readings, so I'm assuming any change is possible.
Your suggested method is not optimal. You are using 10 bits (1024 possible values) to store a value in the range (0..599). This is a waste of space.
If you use 3 bytes for latitude, you should map the range [0, 2^24-1] to the range [-90, 90]. Each of the 2^24 values then represents 180/2^24 degrees, which is about 0.039 arc-seconds.
If you want only 0.1 arc-second accuracy, you need 23 bits for latitudes and 24 bits for longitudes (giving 0.077 arc-second accuracy). That's 47 bits total instead of your 49 bits, with better accuracy.
Can we do even better?
The exact number of bits needed for 0.1 second accuracy is log2(180*60*60*10 * 360*60*60*10) < 46.256. Which means that you can use 46256 bits (5782 bytes) to store 1000 (lat,lon) pairs, but the mathematics involved will require dealing with very large integers.
Can we do even better?
It depends. If your data set has concentrations, you can store only some points and the relative distances from those points, using fewer bits. Clustering algorithms should be used.
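For illustration, a minimal Python sketch of the 23 + 24-bit fixed-point mapping described above (the encode/decode helpers are hypothetical names, not part of the assignment):

LAT_BITS, LON_BITS = 23, 24

def encode(lat_deg, lon_deg):
    # Map [-90, 90] and [-180, 180] onto unsigned integer ranges.
    lat_q = round((lat_deg + 90.0) / 180.0 * ((1 << LAT_BITS) - 1))
    lon_q = round((lon_deg + 180.0) / 360.0 * ((1 << LON_BITS) - 1))
    return (lat_q << LON_BITS) | lon_q            # 47 bits packed together

def decode(packed):
    lat_q = packed >> LON_BITS
    lon_q = packed & ((1 << LON_BITS) - 1)
    lat = lat_q / ((1 << LAT_BITS) - 1) * 180.0 - 90.0
    lon = lon_q / ((1 << LON_BITS) - 1) * 360.0 - 180.0
    return lat, lon

packed = encode(52.379189, 4.899431)
print(decode(packed))      # within ~0.04 arc-seconds of the input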
Sticking to existing technology:
If you used half-precision floating-point numbers to store only the DD.DDDDD data, you could be a lot more space-efficient, but you'd have to accept the limited precision of the 10-bit significand, which means the coordinates stored might not be exact, only approximations of the original values.
This is due to the way floating-point numbers are stored, essentially: a normalized significand is multiplied by a power of two (the exponent) to produce a number, instead of just storing a single scaled value (as with the integers you used to calculate the numbers for your solution).
The next larger commonly used floating-point format uses 32 bits (the type "float" in many programming languages) -- still efficient, but larger than your custom format.
If, however, you designed your own custom floating-point type as well and gradually added more bits, your results would become more exact, and it would STILL be more efficient than the solution you first found. Just play around with the number of bits used for the significand and exponent, and find out how close your floating-point approximations come to the desired result in degrees!
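As a quick check of that precision trade-off, a small NumPy snippet (assuming numpy.float16 as the half-precision format):

import numpy as np

lat = 52.37919                      # a DD.DDDDD coordinate
lat16 = np.float16(lat)             # 1 sign + 5 exponent + 10 significand bits
print(lat16, float(lat16) - lat)    # error of ~0.004 degrees (~15 arc-seconds),
                                    # far coarser than the fixed-point scheme above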
Well, if this is for a large number of readings, then you may try a differential approach: start with an absolute location, and then save incremental changes, which should ideally require fewer bits, depending on the nature of the changes. This is effectively compressing the stream. But somehow I don't think that's what this homework is about.
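For completeness, a tiny sketch of that differential idea (hypothetical helper, assuming readings quantized to a fixed 1e-5 degree resolution):

def delta_encode(points, scale=10**5):
    # Quantize to integers, keep the first fix absolute, then store deltas.
    q = [(round(lat * scale), round(lon * scale)) for lat, lon in points]
    deltas = [(a[0] - b[0], a[1] - b[1]) for a, b in zip(q[1:], q)]
    return q[0], deltas

track = [(52.37919, 4.89943), (52.37925, 4.89950), (52.37931, 4.89958)]
first, deltas = delta_encode(track)
print(first, deltas)    # deltas like (6, 7) and (6, 8) fit in a byte or two each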