Arduino Mega + GPS module

I'm using an Arduino Mega with a GPS module (PMB-648), and I can see everything the GPS sends me:
$GPGSA,A,1,,,,,,,,,,,,,,,*1E
$GPRMC,144547.705,V,5458.6542,N,00136.4148,W,,,240512,,,N*65
$GPGGA,144549.705,5458.6542,N,00136.4148,W,0,00,,20.6,M,47.8,M,,0000*51
This is OK, but now I need to isolate the sentence that begins with "$GPRMC" and put it into another variable. The values change when the GPS changes position; only the "$GPRMC" prefix stays the same.
This is my code:
String GPSstring = "";
boolean stringComplete = false;

void setup() {
  Serial.begin(9600);   // USB serial, for printing to the PC
  Serial2.begin(4800);  // GPS module connected to Serial2
}

void loop() {
  if (stringComplete) {
    Serial.println(GPSstring);
    GPSstring = "";
    stringComplete = false;
  }
}

// Called automatically after each pass of loop() when data is waiting on Serial2.
void serialEvent2() {
  while (Serial2.available()) {
    char inchar = (char)Serial2.read();
    GPSstring += inchar;
    if (inchar == '\n') {
      stringComplete = true;  // a complete NMEA line has arrived
    }
  }
}

The easiest way would be to keep each completed line in a String object and use its startsWith() method to check for "$GPRMC".
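For example, building on the sketch above (a minimal sketch; GPRMCstring is just an illustrative name for the variable that holds the copy):

String GPSstring = "";
String GPRMCstring = "";       // holds the most recent $GPRMC sentence
boolean stringComplete = false;

void loop() {
  if (stringComplete) {
    if (GPSstring.startsWith("$GPRMC")) {
      GPRMCstring = GPSstring;   // keep only the $GPRMC sentence
      Serial.println(GPRMCstring);
    }
    GPSstring = "";
    stringComplete = false;
  }
}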

It pays to be lazy. Take a look at the TinyGPS library for Arduino to parse your NMEA strings easily.
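Roughly along these lines (a sketch based on the library's own examples rather than tested code; check the TinyGPS documentation for the exact API):

#include <TinyGPS.h>

TinyGPS gps;

void setup() {
  Serial.begin(9600);
  Serial2.begin(4800);
}

void loop() {
  while (Serial2.available()) {
    char c = Serial2.read();
    if (gps.encode(c)) {                     // true when a complete, valid sentence arrives
      float lat, lon;
      unsigned long age;
      gps.f_get_position(&lat, &lon, &age);  // values parsed from $GPRMC/$GPGGA
      Serial.print("Lat: "); Serial.print(lat, 6);
      Serial.print(" Lon: "); Serial.println(lon, 6);
    }
  }
}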

Arduino serial input: prompt should print only once

The purpose of the following Arduino code is to get a char from the user.
The current problem is that it continuously prints the prompt.
I want it to print the prompt only once, then wait for the input.
Thanks.
char kbChar;
void setup() {Serial.begin(9600);}
void loop() {inChar();}
void inChar(){
Serial.println("Type something!");
if(Serial.available()){
kbChar = Serial.read();
Serial.println(kbChar);
}
}
Note the following attributes about the setup() and loop() functions in an Arduino sketch:
setup() is called once, at the very beginning
loop() is called repeatedly, forever, after setup() finishes.
Therefore, you keep printing the prompt because the prompt call Serial.println("Type something!"); is inside inChar(), which is called from the loop() body. To print the prompt only once, put it in the setup() body:
char kbChar;
void setup() {
Serial.begin(9600);
while (!Serial) ; // Wait for Serial port to open.
Serial.println("Type something!");
}
void loop() {
inChar();
}
void inChar(){
if(Serial.available()){
kbChar = Serial.read();
Serial.println(kbChar);
}
}

What am I doing wrong with my algorithm for comparing samples?

I want to compare and recognize two sound streams. I created my own algorithm, but it does not work the way I would like. For example, when I compare the letters "A, B, C" with "D, E, F", or the word "facebook" with "music", the algorithm reports a match, even though these are not the same words. Is my algorithm simply too imprecise, or is this caused by the quality of sound recorded with a laptop microphone?
The idea behind my comparison algorithm:
I take, for example, 100 samples from one stream (it can be from the middle of the track) and check every window of the second stream in a loop: samples 0-99, then 1-100, then 2-101, and so on.
My program has a few tracks to compare against one input track, so the algorithm should pick the best match (the most similar window) from every track. Unfortunately, it gives wrong results.
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.IO;
using System.Runtime.CompilerServices;
using System.Windows;
using Controller.Annotations;
using NAudio.Wave;
namespace Controller.Models
{
class DecompositionOfSound
{
private int _numberOfSimilarSamples;
private String _stream;
public string Stream
{
get { return _stream; }
set { _stream = value; }
}
public int IloscPodobnychProbek
{
get { return _numberOfSimilarSamples; }
set { _numberOfSimilarSamples = value; }
}
public DecompositionOfSound(string stream)
{
_stream = stream;
SaveSamples(stream);
}
private void SaveSamples(string stream)
{
var wave = new WaveChannel32(new WaveFileReader(stream));
Samples = new byte[wave.Length];
wave.Read(Samples, 0, (int) wave.Length);
}
private byte[] _samples;
public byte[] Samples
{
get { return _samples; }
set { _samples = value; }
}
}
class Sample: INotifyPropertyChanged
{
#region Fields
private IList<DecompositionOfSound> _listoOfSoundSamples = new ObservableCollection<DecompositionOfSound>();
private string[] _filePaths;
#endregion
#region Property
public string[] FilePaths
{
get { return _filePaths; }
set { _filePaths = value; }
}
public IList<DecompositionOfSound> ListaSciezekDzwiekowych
{
get { return _listoOfSoundSamples; }
set { _listoOfSoundSamples = value; }
}
#endregion
#region Methods
public Sample()
{
LoadSamples(); // must be refreshed whenever a new recording is added !!!
}
public void DisplayMatchingOfSamples()
{
foreach (var decompositionOfSound in ListaSciezekDzwiekowych)
{
MessageBox.Show(decompositionOfSound.IloscPodobnychProbek.ToString());
}
}
public DecompositionOfSound BestMatchingOfSamples()
{
int max=0;
DecompositionOfSound referenceToObject = null;
foreach (var numberOfMatching in _listoOfSoundSamples)
{
if (numberOfMatching.IloscPodobnychProbek > max)
{
max = numberOfMatching.IloscPodobnychProbek;
referenceToObject = numberOfMatching;
}
}
return referenceToObject;
}
public void LoadSamples()
{
int i = 0;
_filePaths = Directory.GetFiles(@"Samples", "*.wav");
while (i < _filePaths.Length)
{
ListaSciezekDzwiekowych.Add(new DecompositionOfSound(_filePaths[i]));
i++;
}
}
public void CheckMatchingOfWord(byte[] inputSound,double eps)
{
foreach (var probka in _listoOfSoundSamples)
{
CompareBufforsOfSamples(inputSound, probka, eps);
}
}
public void CheckMatchingOfWord(String inputSound,int iloscProbek, double eps)
{
var wave = new WaveChannel32(new WaveFileReader(inputSound));
var samples = new byte[wave.Length];
wave.Read(samples, 0, (int)wave.Length);
var licznik = 0;
var samplesTmp = new byte[iloscProbek];
while (licznik < iloscProbek)
{
samplesTmp[licznik] = samples[licznik + (wave.Length >> 1)];
licznik++;
}
foreach (var probka in _listoOfSoundSamples)
{
CompareBufforsOfSamples(samplesTmp, probka, eps);
}
}
private void CompareBufforsOfSamples(byte[] inputSound, DecompositionOfSound samples, double eps)
{
int max = 0;
for (int i = 0; i < (samples.Samples.Length - inputSound.Length); i++)
{
int counter = 0;
for (int j = 0; j < inputSound.Length; j++)
{
if (inputSound[j] * eps <= samples.Samples[i + j] &&
(inputSound[j] + inputSound[j] *(1 - eps)) >= samples.Samples[i + j])
{
counter++;
}
}
if (counter > max) max = counter;
}
samples.IloscPodobnychProbek = max;
}
#endregion
#region INotifyPropertyChange
public event PropertyChangedEventHandler PropertyChanged;
[NotifyPropertyChangedInvocator]
protected virtual void OnPropertyChanged([CallerMemberName] string propertyName = null)
{
PropertyChangedEventHandler handler = PropertyChanged;
if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
}
#endregion
}
}
When comparing all of the sound samples, the algorithm finds the soundtrack with the highest number of matching samples, but it is not the correct recording. Does my comparison of the two recordings make sense, and how can I fix it to get the expected result? Would you help me find a solution to this problem? Sorry for my English.
Kind regards
You simply cannot do sample-level comparisons on recordings to determine your match. Even given two recordings on the same computer of the same word spoken by the same person - ie: every detail exactly the same - the recorded samples will differ. Digital audio is like that. It can sound the same, but the actual recorded samples will not match.
Speech To Text is not simple, and neither is voice recognition (ie: checking the identity of a person from their voice).
Instead of samples you need to examine the frequency profiles of the recordings. Various sounds in natural speech have distinctive frequency distributions. Sibilants - the s sound - have a wide distribution of higher frequencies for instance, so are easy to spot - which is why they used to use sibilant detection for the old yes/no response detection on phone systems.
You can get the frequency profile of a waveform by using a Fast Fourier Transform on a block of samples. Run through the audio stream and do a series of FFTs to get a 2D map of the frequencies of the waveform, then look for interesting things like sibilants (lots of high frequency, very little low frequency).
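As a rough illustration of that FFT step (a minimal sketch: it assumes the FastFourierTransform helper that ships in NAudio's NAudio.Dsp namespace, a power-of-two block of samples already converted to floats, and an illustrative block size; turning the resulting spectra into a robust comparison is a separate problem):

using System;
using NAudio.Dsp;   // FastFourierTransform, Complex

class SpectrumSketch
{
    // Converts one power-of-two block of samples (e.g. 1024 floats in -1..1)
    // into a magnitude spectrum: one value per frequency bin.
    public static float[] MagnitudeSpectrum(float[] block)
    {
        int m = (int)Math.Log(block.Length, 2.0);
        var fft = new Complex[block.Length];
        for (int i = 0; i < block.Length; i++)
        {
            // A Hamming window reduces spectral leakage at the block edges.
            fft[i].X = (float)(block[i] * FastFourierTransform.HammingWindow(i, block.Length));
            fft[i].Y = 0f;
        }
        FastFourierTransform.FFT(true, m, fft);

        // For real input the upper half mirrors the lower half, so keep half.
        var magnitudes = new float[block.Length / 2];
        for (int i = 0; i < magnitudes.Length; i++)
            magnitudes[i] = (float)Math.Sqrt(fft[i].X * fft[i].X + fft[i].Y * fft[i].Y);
        return magnitudes;
    }
}

Sliding this over the stream block by block gives the 2D time/frequency map described above; comparing those maps, rather than raw bytes, is where the matching logic would go.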
Of course you could just use one of the web-based Speech to Text APIs.

Java - pressing a direction key and having it move smoothly

When I press a direction key to move the object in that direction, it moves once, pauses momentarily, then moves again. It is like holding the "a" key down to type "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": after the first "a" there is a pause, and then the rest of the "a"s are typed. How do I remove that pause in KeyListener? Thank you.
This is the key-repeat feature that the OS provides, so there is no way around the pauses at that level.
The way most games get around this is to keep an array holding the current state of all required keys, check it periodically (for example in the game loop), and act on that (e.g. move):
import java.awt.event.KeyEvent;
import java.awt.event.KeyListener;
import javax.swing.JFrame;

public class KTest extends JFrame implements KeyListener {
    private final boolean[] keyState = new boolean[256];

    public static void main(String[] args) {
        KTest test = new KTest();
        int xVelocity;
        int x = 0;
        while (true) {                           // game loop
            xVelocity = 0;
            if (test.keyState[KeyEvent.VK_LEFT]) {
                xVelocity = -5;
            }
            x += xVelocity;                      // act on the current key state
        }
    }

    KTest() {
        this.addKeyListener(this);
    }

    public void keyPressed(KeyEvent e) {
        keyState[e.getKeyCode()] = true;
    }

    public void keyReleased(KeyEvent e) {
        keyState[e.getKeyCode()] = false;
    }

    public void keyTyped(KeyEvent e) {
        // not needed for key-state tracking
    }
}
Base class taken from: http://content.gpwiki.org/index.php/Java:Tutorials:Key_States

MP3 Playing with NAudio - Problems with Stop()

I've just started using NAudio (1.4) solely for MP3 playback. I've been working off the documentation and the source code for the samples. Currently I have this in a class:
IWavePlayer waveOutDevice;
WaveStream mainOutputStream;
WaveChannel32 volumeStream;
public AudioPlaybackService() : base() {
waveOutDevice = new WasapiOut(AudioClientShareMode.Shared, 100);
}
public bool LoadTrack(string trackPath, float volume)
{
if (!File.Exists(trackPath))
return false;
try
{
mainOutputStream = new Mp3FileReader(trackPath);
volumeStream = new WaveChannel32(mainOutputStream);
volumeStream.Volume = volume;
waveOutDevice.Init(mainOutputStream);
}
catch (Exception e)
{
Logger.Error("Failed to load track for playback {0} :: {1}", trackPath, e.ToString());
return false;
}
return true;
}
public bool PlayTrack()
{
if (waveOutDevice == null || waveOutDevice.PlaybackState == PlaybackState.Playing)
return false;
waveOutDevice.Play();
return true;
}
public bool StopTrack()
{
if (waveOutDevice == null || waveOutDevice.PlaybackState == PlaybackState.Stopped)
return false;
waveOutDevice.Stop();
mainOutputStream.CurrentTime = TimeSpan.Zero;
return true;
}
This loads and plays my test track fine; my issue is with the Stop() function. Firstly, should I need to reset the CurrentTime property after calling Stop()? Currently it acts more like a pause button, i.e. it resumes the track at the same place it was "stopped". If I do reset the CurrentTime, I then have a problem where, if I click stop, the track stops, but if I click play again afterwards I get a little leftover noise before the track starts again.
Looking at the source code of one of the samples all it does is call Stop().
In our use of NAudio, we never stop the audio. Any stop-like functionality instead causes a silent waveform (zeroes) to be fed to the wave out. This was mainly due to instability in NAudio when stopping and starting too frequently, but it also prevents the "leftover buffer" problem.
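One way to get that behaviour (a minimal sketch, not from the NAudio samples; MutableWaveProvider and its Muted flag are illustrative names, not part of NAudio) is to keep the device playing and feed it zeroes through a wrapper instead of calling Stop():

using System;
using NAudio.Wave;

// Wraps another IWaveProvider; while Muted is set it hands the sound card
// zeroed buffers, so the device keeps running but plays silence.
class MutableWaveProvider : IWaveProvider
{
    private readonly IWaveProvider source;
    public volatile bool Muted;

    public MutableWaveProvider(IWaveProvider source)
    {
        this.source = source;
    }

    public WaveFormat WaveFormat
    {
        get { return source.WaveFormat; }
    }

    public int Read(byte[] buffer, int offset, int count)
    {
        if (!Muted)
            return source.Read(buffer, offset, count);

        // Feed silence instead of stopping the device.
        Array.Clear(buffer, offset, count);
        return count;
    }
}

The playback code would then initialise the output device with the wrapper once and toggle Muted (and reset CurrentTime) where it currently calls Stop().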

Getting GPS coordinates on Windows phone 7

How can I get the current GPS coordinates on Windows Phone 7?
Here's a simple example:
GeoCoordinateWatcher watcher;
watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.Default)
{
MovementThreshold = 20
};
watcher.PositionChanged += this.watcher_PositionChanged;
watcher.StatusChanged += this.watcher_StatusChanged;
watcher.Start();
private void watcher_StatusChanged(object sender, GeoPositionStatusChangedEventArgs e)
{
switch (e.Status)
{
case GeoPositionStatus.Disabled:
// location is unsupported on this device
break;
case GeoPositionStatus.NoData:
// data unavailable
break;
}
}
private void watcher_PositionChanged(object sender, GeoPositionChangedEventArgs<GeoCoordinate> e)
{
var epl = e.Position.Location;
// Access the position information thusly:
epl.Latitude.ToString("0.000");
epl.Longitude.ToString("0.000");
epl.Altitude.ToString();
epl.HorizontalAccuracy.ToString();
epl.VerticalAccuracy.ToString();
epl.Course.ToString();
epl.Speed.ToString();
e.Position.Timestamp.LocalDateTime.ToString();
}
GeoCoordinateWatcher is the class that provides this functionality. There is a How To, a sample and some other resources on MSDN.