How to set up a "trigger method" for when a float variable reaches a certain value - objective-c

I've been working with a BLE-042 Cypress module that sends voltage data (0V-3.3V) in Float32 form, which I capture via delegate methods in my iOS application and print to the screen (in a rectangle showing the value, and on a dynamic line graph using Core Plot 2.2). For this particular project, if the voltage is greater than 2.0V, I want the rectangle showing the value to blink as a warning.
The Core Plot graph has to run on the main thread, and blinking cannot interrupt the flow of capturing data from the BLE delegate methods. I want the rectangle to blink without making the data fall out of sync with reality.
Here's the delegate method for capturing my data:
- (void)getTransducerData:(CBCharacteristic *)characteristic error:(NSError *)error {
    NSData *data = [characteristic value];
    const uint8_t *reportData = [data bytes];
    uint32_t v0 = (uint32_t)reportData[0];
    uint32_t v1 = (uint32_t)reportData[1] << 8;
    uint32_t v2 = (uint32_t)reportData[2] << 16;
    uint32_t v3 = (uint32_t)reportData[3] << 24;
    uint32_t voltage = v0 | v1 | v2 | v3;
    if ((characteristic.value) || !error) {
        self.transducerValue = voltage;
        _real = voltage / 1000000.0;
        self.transducerPosition.text = [NSString stringWithFormat:@"%0.4lf V", _real];
    }
}
I was thinking of applying an if condition right after _real receives the new value:
_real = voltage / 1000000.0;
if (_real > 2.0) {
    // start blinking rectangle
}
The problem is I don't want that if statement in there at all. I just want the // start blinking rectangle part to run based on an event condition, not because an if statement is telling it to blink.
Any suggestions as to how I can get this done? If you think that the blinking animation would not be "a heavy load" to apply within an if statement condition, then let me know. Also, as a side note, I have an ADC sampling at 70,000 samples/sec, and getTransducerData:error: is really pumping out values.
I'm using Objective-C with Xcode 7.3.1
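One event-driven option in Objective-C is key-value observing (KVO); the following is only a sketch of the idea, not code from the thread, and startBlinking is a hypothetical method wrapping your blink animation. Note that KVO fires only when the value is set through the property setter (self.real = ...), so the delegate method would need to assign via the setter rather than writing the _real ivar directly. Assuming real is declared as a property, e.g. @property (nonatomic) double real;:

static void *RealVoltageContext = &RealVoltageContext;

- (void)viewDidLoad {
    [super viewDidLoad];
    [self addObserver:self
           forKeyPath:@"real"
              options:NSKeyValueObservingOptionNew
              context:RealVoltageContext];
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if (context != RealVoltageContext) {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
        return;
    }
    double value = [change[NSKeyValueChangeNewKey] doubleValue];
    if (value > 2.0) {
        // UI work goes to the main queue; the BLE delegate thread is never blocked.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self startBlinking]; // hypothetical blink animation
        });
    }
}

(Remember to call removeObserver:forKeyPath: in dealloc.) A comparison still happens somewhere, of course; KVO just moves it out of the data-capture path and into an observer.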

Related

Write UART on PIC18

I need help with the UART communication I am trying to implement in my Proteus simulation. I use a PIC18F4520 and I want to display on the virtual terminal the values that have been calculated by the microcontroller.
Here is a snapshot of my design in Proteus
Right now, this is what my UART code looks like:
#define _XTAL_FREQ 20000000
#define _BAUDRATE 9600
void Configuration_ISR(void) {
    IPR1bits.TMR1IP = 1;  // TMR1 Overflow Interrupt Priority - High
    PIE1bits.TMR1IE = 1;  // TMR1 Overflow Interrupt Enable
    PIR1bits.TMR1IF = 0;  // TMR1 Overflow Interrupt Flag
                          // 0 = TMR1 register did not overflow
                          // 1 = TMR1 register overflowed (must be cleared in software)
    RCONbits.IPEN = 1;    // Interrupt Priority High level
    INTCONbits.PEIE = 1;  // Enables all low-priority peripheral interrupts
    //INTCONbits.GIE = 1; // Enables all high-priority interrupts
}
void Configuration_UART(void) {
    TRISCbits.TRISC6 = 0;
    TRISCbits.TRISC7 = 1;
    SPBRG = ((_XTAL_FREQ/16)/_BAUDRATE)-1;
    //RCSTA REG
    RCSTAbits.SPEN = 1;  // enable serial port pins
    RCSTAbits.RX9 = 0;
    //TXSTA REG
    TXSTAbits.BRGH = 1;  // fast baud rate
    TXSTAbits.SYNC = 0;  // asynchronous
    TXSTAbits.TX9 = 0;   // 8-bit transmission
    TXSTAbits.TXEN = 1;  // enable transmitter
}
void WriteByte_UART(unsigned char ch) {
    while(!PIR1bits.TXIF); // Wait for the TXIF flag to be set, which indicates
                           // the TXREG register is empty
    TXREG = ch;            // Transmit data to UART
}
void WriteString_UART(char *data) {
    while(*data) {
        WriteByte_UART(*data++);
    }
}
unsigned char ReceiveByte_UART(void) {
    if(RCSTAbits.OERR) {
        RCSTAbits.CREN = 0;
        RCSTAbits.CREN = 1;
    }
    while(!PIR1bits.RCIF); //Wait for a byte
    return RCREG;
}
And in the main loop :
while(1) {
    WriteByte_UART('a');               // This works. I can see the As in the terminal
    WriteString_UART("Hello World !"); //Nothing displayed :(
}//end while(1)
I have tried different solutions for WriteString_UART but none has worked so far.
I don't want to use printf because it impacts the other operations I'm doing with the PIC by adding delay.
So I really want to make it work with WriteString_UART.
In the end I would like to have something like "Error rate is : [a value]%" on the terminal.
Thanks for your help, and please tell me if something isn't clear.
In your WriteByte_UART() function, try polling the TRMT bit. In particular, change:
while(!PIR1bits.TXIF);
to
while(!TXSTA1bits.TRMT);
I don't know if this is your particular issue, but there exists a race-condition due to the fact that TXIF is not immediately cleared upon loading TXREG. Another option would be to try:
...
Nop();
while(!PIR1bits.TXIF);
...
EDIT BASED ON COMMENTS
The issue is due to the fact that the PIC18 utilizes two different pointer types based on data memory and program memory. Try changing your declaration to void WriteString_UART(const rom char * data) and see what happens. You will need to change your WriteByte_UART() declaration as well, to void WriteByte_UART(const unsigned char ch).
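For reference, a sketch of what those changed declarations could look like under C18 (the rom qualifier is C18-specific; the TRMT polling from the first suggestion is folded in):

void WriteByte_UART(const unsigned char ch) {
    while(!TXSTA1bits.TRMT); // wait until the transmit shift register is empty
    TXREG = ch;
}
void WriteString_UART(const rom char *data) {
    while(*data) {
        WriteByte_UART(*data++);
    }
}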
Add a delay of a few milliseconds after the line
TXREG = ch;
and verify that the pointer data in WriteString_UART(char *data) actually points to the
string "Hello World !".
It seems you found a solution, but the reason why it wasn't working in the first place is still not clear. What compiler are you using?
I learned the hard way that C18 and XC8 handle memory spaces differently. With both compilers, a string declared literally, like char string[] = "Hello!";, will be stored in ROM (program memory). They differ in the way functions use strings.
C18 string functions will have variants to access strings either in RAM or ROM (for example strcpypgm2ram, strcpyram2pgm, etc.). XC8 on the other hand, does the job for you and you will not need to use specific functions to choose which memory you want to access.
If you are using C18, I would highly recommend you switch to XC8, which is more recent and easier to work with. If you still want to use C18 or another compiler which requires you to deal with program/data memory spaces, then here below are two solutions you may want to try. The C18 datasheet says that putsUSART prints a string from data memory to USART. The function putrsUSART will print a string from program memory. So you can simply use putrsUSART to print your string.
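A minimal usage sketch, assuming C18 with usart.h included and the USART already initialized (under C18, string literals live in program memory, so this compiles directly):

putrsUSART("Hello World !"); // prints a string stored in program memory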
You may also want to try the following, which consists of copying your string from program memory to data memory (though it may be a waste of memory if your application is tight on memory):
char pgmstring[] = "Hello";
char datstring[16];
strcpypgm2ram(datstring, pgmstring);
putsUSART(datstring);
In this example, the pointers pgmstring and datstring are stored in data memory, but the string "Hello" is stored in program memory. So even though the pointer pgmstring itself is in data memory, it points to an address in program memory (the address of "Hello"). The only way to point to this same string in data memory is to create a copy of it there. This is because a function that accepts a string stored in data memory (such as putsUSART) can NOT be used directly with a string stored in program memory.
I hope this helps you understand a bit better how to work with Harvard-architecture microcontrollers, where program and data memories are separated.

Microsoft Kinect and background/environmental noise

I am currently programming with the Microsoft Kinect for Windows SDK 2 on Windows 8.1. Things are going well, and in a home dev environment obviously there is not much noise in the background compared to the 'real world'.
I would like to seek some advice from those with experience in 'real world' applications of the Kinect. How does the Kinect (especially v2) fare in a live environment with passers-by, onlookers and unexpected objects in the background? I expect that the space between the Kinect sensor and the user will usually be free of interference; what I am very mindful of right now is the background noise as such.
While I am aware that the Kinect does not track well under direct sunlight (either on the sensor or the user) - are there certain lighting conditions or other external factors I need to factor into the code?
The answer I am looking for is:
What kind of issues can arise in a live environment?
How did you code or work your way around it?
Outlaw Lemur has described in detail most of the issues you may encounter in real-world scenarios.
Using Kinect for Windows version 2, you do not need to adjust the motor, since there is no motor and the sensor has a larger field of view. This will make your life much easier.
I would like to add the following tips and advice:
1) Avoid direct light (natural or artificial)
Kinect has an infrared sensor that can be confused by direct light. This sensor should not be directly exposed to any light sources. You can emulate such an environment at your home/office by playing with an ordinary laser pointer and torches.
2) If you are tracking only one person, select the closest tracked user
If your app only needs one player, that player needs to be a) fully tracked and b) closer to the sensor than the others. It's an easy way to make participants understand who is tracked without making your UI more complex.
public static Body Default(this IEnumerable<Body> bodies)
{
    Body result = null;
    double closestBodyDistance = double.MaxValue;
    foreach (var body in bodies)
    {
        if (body.IsTracked)
        {
            var position = body.Joints[JointType.SpineBase].Position;
            var distance = position.Length();
            if (result == null || distance < closestBodyDistance)
            {
                result = body;
                closestBodyDistance = distance;
            }
        }
    }
    return result;
}
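Usage is then a one-liner after refreshing the body data (an illustrative example, assuming _bodies is the array you pass to GetAndRefreshBodyData):

Body player = _bodies.Default();
if (player != null)
{
    // Work only with the closest fully-tracked user.
}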
3) Use the tracking IDs to distinguish different players
Each player has a TrackingID property. Use that property when players interfere or move at random positions. Do not use that property as an alternative to face recognition though.
ulong _trackingID1 = 0;
ulong _trackingID2 = 0;
void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            frame.GetAndRefreshBodyData(_bodies);
            var bodies = _bodies.Where(b => b.IsTracked).ToList();
            if (bodies != null && bodies.Count >= 2 && _trackingID1 == 0 && _trackingID2 == 0)
            {
                _trackingID1 = bodies[0].TrackingId;
                _trackingID2 = bodies[1].TrackingId;
                // Alternatively, specify body1 and body2 according to their distance from the sensor.
            }
            Body first = bodies.Where(b => b.TrackingId == _trackingID1).FirstOrDefault();
            Body second = bodies.Where(b => b.TrackingId == _trackingID2).FirstOrDefault();
            if (first != null)
            {
                // Do something...
            }
            if (second != null)
            {
                // Do something...
            }
        }
    }
}
4) Display warnings when a player is too far or too close to the sensor.
To achieve higher accuracy, players need to stand at a specific distance: not too far or too close to the sensor. Here's how to check this:
const double MIN_DISTANCE = 1.0; // in meters
const double MAX_DISTANCE = 4.0; // in meters
double distance = body.Joints[JointType.SpineBase].Position.Z; // in meters, too
if (distance > MAX_DISTANCE)
{
    // Prompt the player to move closer.
}
else if (distance < MIN_DISTANCE)
{
    // Prompt the player to move farther.
}
else
{
    // Player is at the right distance.
}
5) Always know when a player entered or left the scene.
Vitruvius provides an easy way to understand when someone entered or left the scene.
Here is the source code and here is how to use it in your app:
UsersController userReporter = new UsersController();
userReporter.BodyEntered += UserReporter_BodyEntered;
userReporter.BodyLeft += UserReporter_BodyLeft;
userReporter.Start();
void UserReporter_BodyEntered(object sender, UsersControllerEventArgs e)
{
    // A new user has entered the scene. Get the ID from e param.
}
void UserReporter_BodyLeft(object sender, UsersControllerEventArgs e)
{
    // A user has left the scene. Get the ID from e param.
}
6) Have a visual clue of which player is tracked
If there are a lot of people surrounding the player, you may need to show on-screen who is tracked. You can highlight the depth frame bitmap or use Microsoft's Kinect Interactions.
This is an example of removing the background and keeping the player pixels only.
7) Avoid glossy floors
Some floors (bright, glossy) may mirror people and Kinect may confuse some of their joints (for example, Kinect may extend your legs to the reflected body). If you can't avoid glossy floors, use the FloorClipPlane property of your BodyFrame. However, the best solution would be to have a simple carpet where you expect people to stand. A carpet would also act as an indication of the proper distance, so you would provide a better user experience.
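As an illustrative sketch (not from the original answer): FloorClipPlane is a Vector4 whose X/Y/Z components are the floor's unit normal and whose W is the plane's distance term, so you can estimate a joint's height above the floor and discard "joints" that land below it, which are likely reflections:

// Hypothetical helper: signed height of a camera-space point above the floor.
static double HeightAboveFloor(Vector4 floor, CameraSpacePoint p)
{
    return floor.X * p.X + floor.Y * p.Y + floor.Z * p.Z + floor.W;
}

// Usage (floorPlane taken from bodyFrame.FloorClipPlane):
// if (HeightAboveFloor(floorPlane, body.Joints[JointType.FootLeft].Position) < -0.1)
//     treat this foot joint as a probable reflection artifact.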
I created an application for home use like you have before, and then presented that same application in a public setting. The result was embarrassing for me, because there were many errors that I would never have anticipated in a controlled environment. However, that did help me, because it led me to add some interesting adjustments to my code, which is centered around human detection only.
Have conditions for checking the validity of a "human".
When I showed my application in the middle of a presentation floor with many other objects and props, I found that even chairs could be mistaken for people for brief moments, which led to my application switching between the user and an inanimate object, causing it to lose track of the user and their progress. To counter this and other false-positive human detections, I added my own additional checks for a human. My most successful method was comparing the proportions of a human's body, measured in head units. (head units picture) Below is the code for how I did this (SDK version 1.8, C#)
bool PersonDetected = false;
double[] humanRatios = { 1.0, 4.0, 2.33, 3.0 };
/* Array indexes
 * 0 - Head (shoulder to head)
 * 1 - Leg length (foot to knee to hip)
 * 2 - Width (shoulder to shoulder center to shoulder)
 * 3 - Torso (hips to shoulder)
 */
....
double[] currentRatios = new double[4];
double headSize = Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.Head]);
currentRatios[0] = 1.0;
currentRatios[1] = (Distance(skeletons[0].Joints[JointType.FootLeft], skeletons[0].Joints[JointType.KneeLeft]) + Distance(skeletons[0].Joints[JointType.KneeLeft], skeletons[0].Joints[JointType.HipLeft])) / headSize;
currentRatios[2] = (Distance(skeletons[0].Joints[JointType.ShoulderLeft], skeletons[0].Joints[JointType.ShoulderCenter]) + Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.ShoulderRight])) / headSize;
currentRatios[3] = Distance(skeletons[0].Joints[JointType.HipCenter], skeletons[0].Joints[JointType.ShoulderCenter]) / headSize;
int correctProportions = 0;
for (int i = 1; i < currentRatios.Length; i++)
{
    double diff = currentRatios[i] - humanRatios[i];
    if (Math.Abs(diff) <= MaximumDiff) // I used .2 for my MaximumDiff
        correctProportions++;
}
if (correctProportions >= 2)
    PersonDetected = true;
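The Distance() helper isn't shown above; a straightforward version for SDK 1.8 (skeleton-space coordinates, in meters) might look like this:

static double Distance(Joint a, Joint b)
{
    double dx = a.Position.X - b.Position.X;
    double dy = a.Position.Y - b.Position.Y;
    double dz = a.Position.Z - b.Position.Z;
    return Math.Sqrt(dx * dx + dy * dy + dz * dz);
}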
Another method I had success with was summing the squared distances between consecutive joints. I found that non-human detections had more variable summed distances, whereas humans are more consistent. I learned the threshold using a single-dimensional support vector machine (I found a user's summed distances were generally less than 9).
//in AllFramesReady or SkeletalFrameReady
Skeleton data;
...
float lastPosX = 0; // trying to detect false-positives
float lastPosY = 0;
float lastPosZ = 0;
float diff = 0;
foreach (Joint joint in data.Joints)
{
    //add the distance squared
    diff += (joint.Position.X - lastPosX) * (joint.Position.X - lastPosX);
    diff += (joint.Position.Y - lastPosY) * (joint.Position.Y - lastPosY);
    diff += (joint.Position.Z - lastPosZ) * (joint.Position.Z - lastPosZ);
    lastPosX = joint.Position.X;
    lastPosY = joint.Position.Y;
    lastPosZ = joint.Position.Z;
}
if (diff < 9) //this is what my svm learned
    PersonDetected = true;
Use player IDs and indexes to remember who is who
This ties in with the previous issue: if Kinect switched the two users it was tracking with others, my application would crash because of the sudden changes in data. To counter this, I kept track of each player's skeletal index and player ID. To learn more about how I did this, see Kinect user Detection.
Add adjustable parameters to adapt to varying situations
Where I was presenting, the same tilt angle and other basic Kinect parameters (like near-mode) did not work in the new environment. Let the user adjust some of these parameters so they can get the best setup for the job.
Expect people to do stupid things
The next time I presented, I had adjustable tilt, and you can guess what happened: someone burned out the Kinect's motor. Anything that can be broken on a Kinect, someone will break. Leaving a warning in your documentation will not be sufficient. You should add cautionary checks on the Kinect's hardware to make sure people don't get upset when they break something inadvertently. Here is some code checking whether the user has adjusted the motor more than 20 times in two minutes.
int motorAdjustments = 0;
DateTime firstAdjustment;
...
//in motor adjustment code
if (motorAdjustments == 0)
    firstAdjustment = DateTime.Now;
++motorAdjustments;
if (motorAdjustments < 20)
{
    //adjust the tilt
}
else
{
    DateTime timeCheck = firstAdjustment;
    if (DateTime.Now > timeCheck.AddMinutes(2))
    {
        //reset all variables
        motorAdjustments = 1;
        firstAdjustment = DateTime.Now;
        //adjust the tilt
    }
}
I would note that all of these were issues for me with the first version of the Kinect, and I don't know how many of them have been solved in the second version, as I sadly haven't gotten my hands on one yet. However, I would still implement some of these techniques, if not as back-up techniques, because there will be exceptions, especially in computer vision.

How does one add vertices to a mesh object in OpenGL?

I am new to OpenGL and I have been using The Red Book and the Super Bible. In the Super Bible, I have gotten to the section about using objects loaded from files. So far, I don't think I have a problem understanding what is going on and how to do it, but it got me thinking about making my own mesh within my own app; in essence, a modeling app. I have done a lot of searching through both of my references as well as the internet, and I have yet to find a nice tutorial about implementing such functionality in one's own app. I found an API that provides this functionality, but I am trying to understand the implementation, not just the interface.
Thus far, I have created an "app" (I use this term lightly) that gives you a view you can click in to add vertices. The vertices don't connect; they are just displayed where you click. My concern is that the method I stumbled upon while experimenting is not the way I should be implementing this process.
I am working on a Mac and using Objective-C and C in Xcode.
MyOpenGLView.m
#import "MyOpenGLView.h"
#interface MyOpenGLView () {
NSTimer *_renderTimer
Gluint VAO, VBO;
GLuint totalVertices;
GLsizei bufferSize;
}
#end
#implementation MyOpenGLView
/* Set up OpenGL view with a context and pixelFormat with doubleBuffering */
/* NSTimer implementation */
- (void)drawS3DView {
    currentTime = CACurrentMediaTime();
    NSOpenGLContext *currentContext = self.openGLContext;
    [currentContext makeCurrentContext];
    CGLLockContext([currentContext CGLContextObj]);
    const GLfloat color[] = {
        sinf(currentTime * 0.2),
        sinf(currentTime * 0.3),
        cosf(currentTime * 0.4),
        1.0
    };
    glClearBufferfv(GL_COLOR, 0, color);
    glUseProgram(shaderProgram);
    glBindVertexArray(VAO);
    glPointSize(10);
    glDrawArrays(GL_POINTS, 0, totalVertices);
    CGLFlushDrawable([currentContext CGLContextObj]);
    CGLUnlockContext([currentContext CGLContextObj]);
}
#pragma mark - User Interaction
- (void)mouseUp:(NSEvent *)theEvent {
    NSPoint mouseLocation = [theEvent locationInWindow];
    // Convert from window coordinates (pass nil to convert a window-based point).
    NSPoint mouseLocationInView = [self convertPoint:mouseLocation fromView:nil];
    GLfloat x = -1 + mouseLocationInView.x * 2 / (GLfloat)self.bounds.size.width;
    GLfloat y = -1 + mouseLocationInView.y * 2 / (GLfloat)self.bounds.size.height;
    NSOpenGLContext *currentContext = self.openGLContext;
    [currentContext makeCurrentContext];
    CGLLockContext([currentContext CGLContextObj]);
    [_renderer addVertexWithLocationX:x locationY:y];
    CGLUnlockContext([currentContext CGLContextObj]);
}
- (void)addVertexWithLocationX:(GLfloat)x locationY:(GLfloat)y {
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    GLfloat vertices[(totalVertices * 2) + 2];
    // The size argument is in bytes, so the existing data occupies
    // totalVertices * 2 * sizeof(GLfloat) bytes.
    glGetBufferSubData(GL_ARRAY_BUFFER, 0, totalVertices * 2 * sizeof(GLfloat), vertices);
    for (int i = 0; i < ((totalVertices * 2) + 2); i++) {
        if (i == (totalVertices * 2)) {
            vertices[i] = x;
        } else if (i == (totalVertices * 2) + 1) {
            vertices[i] = y;
        }
    }
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    totalVertices++;
}
@end
The app is supposed to take the location of the mouse click and use it as a vertex location. With each added vertex, I first bind the VBO to make sure it is active. Next, I create a new array to hold my current vertex locations (totalVertices), plus space for one more vertex (+ 2 for x and y). Then I use glGetBufferSubData to bring the data back from the VBO and put it into this array. Using a for loop, I add the X and Y values to the end of the array. Finally, I send this data back to the GPU in a VBO and call totalVertices++ so I know how many vertices I have in the array the next time I want to add a vertex.
This brings me to my question: Am I doing this right? Put another way, should I be keeping a copy of the buffer data on the CPU side so that I don't have to call out to the GPU and have the data sent back for editing? That way, I wouldn't call glGetBufferSubData; I would just create a bigger array, add the new vertex to the end, and then call glBufferData to realloc the VBO with the updated vertex data.
** I tried to include my thinking process so that someone like myself who is very inexperienced in programming can hopefully understand what I am trying to do. I don't want anyone to be offended by my explanations of what I did. **
I would certainly avoid reading the data back. Not only because of the extra data copy, but also to avoid synchronization between CPU and GPU.
When you make an OpenGL call, you can picture the driver building a GPU command, queuing it up for later submission to the GPU, and then returning. These commands will then be submitted to the GPU at a later point. The idea is that the GPU can run as independently as possible from whatever runs on the CPU, which includes your application. CPU and GPU operating in parallel with minimal dependencies is very desirable for performance.
For most glGet*() calls, this asynchronous execution model breaks down. They will often have to wait until the GPU completed all (or at least some) pending commands before they can return the data. So the CPU might block while only the GPU is running, which is undesirable.
For that reason, you should definitely keep your CPU copy of the data so that you don't ever have to read it back.
Beyond that, there are a few options. It will all depend on your usage pattern, the performance characteristics of the specific platform, etc. To really get the maximum out of it, there's no way around implementing multiple variations, and benchmarking them.
For what you're describing, I would probably start with something that works similar to a std::vector in C++. You allocate a certain amount of memory (typically named capacity) that is larger than what you need at the moment. Then you can add data without reallocating, until you fill the allocated capacity. At that point, you can for example double the capacity.
Applying this to OpenGL, you can reserve a certain amount of memory by calling glBufferData() with NULL as the data pointer. Keep track of the capacity you allocated, and populate the buffer with calls to glBufferSubData(). When adding a single point in your example code, you would call glBufferSubData() with just the new point. Only when you run out of capacity, you call glBufferData() with a new capacity, and then fill it with all the data you already have.
In pseudo-code, the initialization would look something like this:
int capacity = 10;
glBufferData(GL_ARRAY_BUFFER,
             capacity * sizeof(Point), NULL, GL_DYNAMIC_DRAW);
std::vector<Point> data;
Then each time you add a point:
data.push_back(newPoint);
if (data.size() <= capacity) {
    glBufferSubData(GL_ARRAY_BUFFER,
                    (data.size() - 1) * sizeof(Point), sizeof(Point), &newPoint);
} else {
    capacity *= 2;
    glBufferData(GL_ARRAY_BUFFER,
                 capacity * sizeof(Point), NULL, GL_DYNAMIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER,
                    0, data.size() * sizeof(Point), &data[0]);
}
As an alternative to glBufferSubData(), glMapBufferRange() is another option to consider for updating buffer data. Going farther, you can look into using multiple buffers, and cycle through them, instead of updating just a single buffer. This is where benchmarking comes into play, because there isn't a single approach that will be best for every possible platform and use case.
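Here is a rough sketch of the glMapBufferRange() variant (the Point type and index bookkeeping carry over from the pseudo-code above; this assumes the VBO is already bound to GL_ARRAY_BUFFER and index < capacity):

// Map only the range for the new point and write it in place.
// GL_MAP_INVALIDATE_RANGE_BIT tells the driver the old contents of this
// range are not needed, which helps it avoid synchronization.
void appendPoint(const Point& newPoint, size_t index) {
    void* ptr = glMapBufferRange(GL_ARRAY_BUFFER,
                                 index * sizeof(Point), sizeof(Point),
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
    if (ptr) {
        memcpy(ptr, &newPoint, sizeof(Point));
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
}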

GPS altitude print to LCD

I am trying to get my Arduino Uno to display the altitude from a GPS module, without any other data. I am still learning the code, but I've run into a problem: I can't seem to find which command is used to pull the altitude from the GPS string. I know it is pulling the data successfully, as I ran the example code from http://learn.parallax.com/kickstart/28500 and it read the first bit of the string, though I moved on to trying to get the altitude before getting it to scroll the whole string.
I am using a basic 16x2 LCD display, and I have the display working fine.
The end goal of this project is a GPS/gyroscope altimeter that can record to an SD card and record temperature, and deploy a parachute at apogee (15,000ft) and a larger parachute at 1,000ft.
Here is the code I am using for the altitude, I've marked the section I can't figure out. (probably just missing a term, or I might have really messed something up)
Any help would be appreciated, have a great day.
#include <SoftwareSerial.h>
#include "./TinyGPS.h" // Special version for 1.0
#include <LiquidCrystal.h>
TinyGPS gps;
SoftwareSerial nss(0, 255); // Yellow wire to pin 6
LiquidCrystal lcd(7, 8, 9, 10, 11, 12);
void gpsdump(TinyGPS &gps);
bool feedgps();
void setup() {
    // set up the LCD's number of columns and rows:
    lcd.begin(16, 2);
    // initialize the serial communications:
    Serial.begin(9600);
    Serial.begin(115200);
    nss.begin(4800);
    lcd.print("Reading GPS");
    lcd.write(254); // move cursor to beginning of first line
    lcd.write(128);
    lcd.write(" "); // clear display
    lcd.write(" ");
}
void loop() {
    bool newdata = false;
    unsigned long start = millis();
    while (millis() - start < 5000) { // Update every 5 seconds
        if (feedgps())
            newdata = true;
    }
    gpsdump(gps);
}
// Get and process GPS data
void gpsdump(TinyGPS &gps) {
// problem area
float falt, flat, flon;
unsigned long age;
gps.f_get_position(&flat, &flon);
inline long altitude (return _altitude);
long _altitude
;lcd.print(_altitude, 4);
}//end problem area
// Feed data as it becomes available
bool feedgps() {
    while (nss.available()) {
        if (gps.encode(nss.read()))
            return true;
    }
    return false;
}
lcd.print(x,4) prints base-4. Did you want that, or do you want ordinary base-10 (decimal)?
Secondly, where do you expect _altitude to come from? It's uninitialized. There's also an uninitialized falt, and a weird inline long altitude line which doesn't mean anything.
You might be better off learning C++ first, in a desktop environment. Debugging an embedded device is a lot harder, and you're still producing quite a few bugs.
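For what it's worth, here is a sketch of what the marked problem area might look like using TinyGPS's own accessors (f_altitude() returns the altitude in meters as a float; this is an assumption about the intended fix, not part of the original answer):

// Get and process GPS data
void gpsdump(TinyGPS &gps) {
    float flat, flon, falt;
    unsigned long age;
    gps.f_get_position(&flat, &flon, &age);
    falt = gps.f_altitude();  // altitude in meters, as a float
    lcd.print(falt, 2);       // float overload: prints 2 decimal places
}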

Objective-C - Passing Streamed Data to Audio Queue

I am currently developing an app on iOS that reads IMA-ADPCM audio data in through a TCP socket, converts it to PCM, and then plays the stream. At this stage, I have completed the class that pulls in (or should I say reacts to pushes of) the data from the stream and decodes it to PCM. I have also set up the Audio Queue class and have it playing a test tone. Where I need assistance is the best way to pass the data into the Audio Queue.
The audio data comes out of the ADPCM decoder as 8 kHz 16-bit LPCM at 640 bytes a chunk. (It originates as 160 bytes of ADPCM data but decompresses to 640.) It comes into the function as a uint8_t array and passes out as an NSData object. The stream is a 'push' stream, so every time audio is sent it will create/flush the object.
-(NSData*)convertADPCM:(uint8_t[]) adpcmdata {
The Audio Queue callback of course is a pull function that goes looking for data on each pass of the run loop, on each pass it runs:
-(OSStatus) fillBuffer: (AudioQueueBufferRef) buffer {
I've been working on this for a few days and the PCM conversion was quite taxing; I am having a little bit of trouble assembling in my head the best way to bridge the data between the two. It's not as if I am creating the data (then I could simply incorporate data creation into the fillBuffer routine); rather, the data is being pushed.
I did set up a circular buffer of 0.5 seconds in a uint16_t[], but I think I have worn my brain out and couldn't work out a neat way to push and pull from the buffer, so I ended up with snap, crackle, pop.
I have completed the project mostly on Android, but found AudioTrack a very different beast from Core Audio Queues.
At this stage I will also say I picked up a copy of Learning Core Audio by Adamson and Avila, and found it an excellent resource for anyone looking to demystify Core Audio.
UPDATE:
Here is the buffer management code:
-(OSStatus) fillBuffer: (AudioQueueBufferRef) buffer {
    int frame = 0;
    double frameCount = bufferSize / self.streamFormat.mBytesPerFrame;
    // bufferSize = 8000, so frameCount = 8000 / 2 = 4000
    //
    // incoming buffer uint16_t[] convAudio holds 64400 bytes (big I know - 100 x 644 bytes)
    // playHead is set by the function to say where in the buffer the
    // next starting point should be
    if (playHead > 99) {
        playHead = 0;
    }
    // playStep factors playHead to get the starting position
    int playStep = playHead * 644;
    // filling the buffer
    for (frame = 0; frame < frameCount; ++frame)
    // frameCount = 4000
    {
        // pointer to buffer
        SInt16 *data = (SInt16 *)buffer->mAudioData;
        // load data from the uint16_t[] convAudio array into the frame
        data[frame] = convAudio[frame + playStep];
    }
    // set buffer size
    buffer->mAudioDataByteSize = bufferSize;
    // return noErr; an OSStatus error code would be returned otherwise if there is one. (I think)
    return noErr;
}
As I said, my brain was fuzzy when I wrote this, and there's probably something glaringly obvious I am missing.
Above code is called by the callback:
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
    soundHandler *sHandler = (__bridge soundHandler *)inUserData;
    CheckError([sHandler fillBuffer:inCompleteAQBuffer],
               "can't refill buffer",
               "buffer refilled");
    CheckError(AudioQueueEnqueueBuffer(inAQ,
                                       inCompleteAQBuffer,
                                       0,
                                       NULL),
               "Couldn't enqueue buffer (refill)",
               "buffer enqueued (refill)");
}
On the convAudio array side of things, I have dumped it to the log and it is getting filled and refilled in a circular fashion, so I know at least that bit is working.
The hard part is managing rates, and what to do if they don't match. At first, try using a huge circular buffer (many, many seconds) and mostly fill it before starting the Audio Queue pulling from it. Then monitor the buffer level to see how big a rate-matching problem you have.
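To make the push/pull bridge concrete, here is a minimal single-producer/single-consumer ring buffer sketch in plain C (callable from the Objective-C classes; the capacity and chunk size are illustrative, not from the answer). In production you would want real synchronization, e.g. atomics or a lock-free FIFO such as TPCircularBuffer, rather than volatile counters:

#include <stdint.h>
#include <stddef.h>

#define RING_CAPACITY (8000 * 10)  // ten seconds of 8 kHz mono samples

typedef struct {
    int16_t samples[RING_CAPACITY];
    volatile size_t writePos;  // advanced only by the network/decoder side
    volatile size_t readPos;   // advanced only by the audio callback
} RingBuffer;

// Called from the stream side each time a decoded 320-sample chunk arrives.
static void ring_write(RingBuffer *rb, const int16_t *src, size_t count) {
    for (size_t i = 0; i < count; i++)
        rb->samples[(rb->writePos + i) % RING_CAPACITY] = src[i];
    rb->writePos += count;
}

// Called from fillBuffer:. Returns how many samples were actually copied;
// pad the rest of the Audio Queue buffer with silence on underrun.
static size_t ring_read(RingBuffer *rb, int16_t *dst, size_t count) {
    size_t available = rb->writePos - rb->readPos;
    if (count > available) count = available;
    for (size_t i = 0; i < count; i++)
        dst[i] = rb->samples[(rb->readPos + i) % RING_CAPACITY];
    rb->readPos += count;
    return count;
}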