TI CC2650STK - how to control onboard LED through iOS app - objective-c

I am using the code from this repository: https://git.ti.com/sensortag-ios-source-code-example/sensortag-ios-source-code-example
I am trying to turn on the red onboard LED of the CC2650STK when the object temperature sensor exceeds 30°C and turn it off when the temperature drops below 30°C again.
I'm not even sure whether my current approach is correct, but I'm stuck here. Does somebody know what I'm doing wrong?
Thanks in advance!
I did not change the hardware's firmware.
I have already added the following to the "calcValue" method in the 'sensorTagAmbientTemperatureService.m' file:
if (tObj >= 30.0) {
    uint8_t valueRedLedOn = 0x01;
    NSData *data = [NSData dataWithBytes:&valueRedLedOn length:sizeof(valueRedLedOn)];
    [self.btHandle writeValue:data toCharacteristic:TI_SENSORTAG_IO_CONFIG];
    redLedOn = true;
}
else {
    if (redLedOn == true) {
        uint8_t valueRedLedOff = 0x00;
        // note: sizeof(valueRedLedOff) here - valueRedLedOn is out of scope in this branch
        NSData *data = [NSData dataWithBytes:&valueRedLedOff length:sizeof(valueRedLedOff)];
        [self.btHandle writeValue:data toCharacteristic:TI_SENSORTAG_IO_CONFIG];
        redLedOn = false;
    }
}
but when the app is running and the temperature reaches 30°C, I get a SIGABRT error (also see the log output):
(screenshot of the error with code and log output)
(repository with my changes)

Thanks for your answers.
I managed to fix it.
The problem was that I had forgotten to initialize the service and characteristic.
I added 'sensorTagIoService.h' and '.m' and initialized it like the other services.
(The values for LED on and off in my question seem to be wrong, though.)

Related

communication between board and the GPS module

I'm currently having trouble with communication between the dev board (STM32L476RG) and the GPS module (GP-207U). What my code does now is print the very first packet received from the GPS to PuTTY, and then it keeps printing that same packet; even if I unplug the Tx wire from the dev board, PuTTY still keeps printing. I suspect that either the buffer that stores the received value is not getting updated (flushed), or the HAL_UART_Receive() function only runs once. (The receive function is inside while(1) in main, so I'm confused.)
(I unplugged the GPS and PuTTY still prints, so the receive function isn't doing anything after it received the very first packet from the GPS.)
/* Retrieve data from GPS */
char UARTRxBuffer[1024] = "";
char RxBuffer[1024] = "";
void GetGPS(void) {
    HAL_UART_Receive(&huart3, (uint8_t *)UARTRxBuffer, 1024, 1000);
    HAL_Delay(100);
    sprintf(RxBuffer, "%s\r\n\r\n", UARTRxBuffer);
    HAL_UART_Transmit(&huart2, (uint8_t *)RxBuffer, strlen(RxBuffer), 5000);
    HAL_Delay(100);
}
GetGPS() is put into while(1) in main().
I tried everything based on my guesses, but nothing worked.
Thanks ahead for any sort of help!
I suspect the call to HAL_UART_Receive is timing out (1000 ms in your code) during the second and subsequent attempts to read the GPS. If so, the buffer contents wouldn't get cleared or overwritten, resulting in the same data being printed over and over. It might help to read the GPS datasheet/manual to find out the maximum polling speed (here it appears to be ~200 ms, considering the two 100 ms delays) and adjust the delay if the GPS device cannot keep up.
Try this to confirm:
HAL_StatusTypeDef status = HAL_UART_Receive(&huart3, (uint8_t *)UARTRxBuffer, 1024, 1000); /* same arguments as above */
if (status == HAL_OK) {
    // got valid data
    sprintf(RxBuffer, "%s\r\n\r\n", UARTRxBuffer);
    HAL_UART_Transmit(&huart2, (uint8_t *)RxBuffer, strlen(RxBuffer), 5000);
}
else {
    // the receive failed or timed out - say so instead of echoing stale data
    sprintf(RxBuffer, "read timeout.\r\n\r\n");
    HAL_UART_Transmit(&huart2, (uint8_t *)RxBuffer, strlen(RxBuffer), 5000);
}
API reference docs here page 1037/2232
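One more thing worth trying (my own suggestion, beyond the above): because HAL_UART_Receive in blocking mode returns HAL_TIMEOUT whenever fewer than the requested bytes arrive in time, stale or partial packets can linger in the buffer. Wiping the buffer before each read guarantees a timed-out call can never re-print the previous packet:
#include <string.h> /* for memset */
void GetGPS(void) {
    /* Clear out the previous packet first; if nothing new arrives,
       PuTTY prints an empty line instead of the stale data. */
    memset(UARTRxBuffer, 0, sizeof(UARTRxBuffer));
    HAL_UART_Receive(&huart3, (uint8_t *)UARTRxBuffer, sizeof(UARTRxBuffer) - 1, 1000);
    sprintf(RxBuffer, "%s\r\n\r\n", UARTRxBuffer);
    HAL_UART_Transmit(&huart2, (uint8_t *)RxBuffer, strlen(RxBuffer), 5000);
}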

Cocoa - detect event when camera started recording

In my OSX application I'm using the code below to show a preview from the camera.
[[self session] beginConfiguration];
NSError *error = nil;
AVCaptureDeviceInput *newVideoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:captureDevice error:&error];
if (captureDevice != nil) {
    [[self session] removeInput:[self videoDeviceInput]];
    if ([[self session] canAddInput:newVideoDeviceInput]) {
        [[self session] addInput:newVideoDeviceInput];
        [self setVideoDeviceInput:newVideoDeviceInput];
    } else {
        DLog(@"WTF?");
    }
}
[[self session] commitConfiguration];
Yet, I need to detect the exact time when the preview from the camera becomes available.
In other words I'm trying to detect the same moment like in Facetime under OSX, where animation starts once the camera provides the preview.
What is the best way to achieve this?
I know this question is really old, but I stumbled upon it too while looking into this same question, and I have found answers, so here goes.
For starters, AVFoundation is too high level; you'll need to drop down to a lower level, CoreMediaIO. There's not a lot of documentation on this, but basically you need to perform a couple of queries.
To do this, we'll use a combination of calls. First, CMIOObjectGetPropertyDataSize lets us get the size of the data we'll query for next, which we can then use when we call CMIOObjectGetPropertyData. To set up the get-property-data-size call, we need to start at the top, using this property address:
var opa = CMIOObjectPropertyAddress(
    mSelector: CMIOObjectPropertySelector(kCMIOHardwarePropertyDevices),
    mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeGlobal),
    mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementMaster)
)
Next, we'll set up some variables to keep the data we'll need:
var (dataSize, dataUsed) = (UInt32(0), UInt32(0))
var result = CMIOObjectGetPropertyDataSize(CMIOObjectID(kCMIOObjectSystemObject), &opa, 0, nil, &dataSize)
var devices: UnsafeMutableRawPointer? = nil
From this point on, we'll need to wait until we get some data out, so let's busy loop:
repeat {
    if devices != nil {
        free(devices)
        devices = nil
    }
    devices = malloc(Int(dataSize))
    result = CMIOObjectGetPropertyData(CMIOObjectID(kCMIOObjectSystemObject), &opa, 0, nil, dataSize, &dataUsed, devices)
} while result == OSStatus(kCMIOHardwareBadPropertySizeError)
Once we get past this point in our execution, devices will point to potentially many devices. We need to loop through them, somewhat like this:
if let devices = devices {
    for offset in stride(from: 0, to: dataSize, by: MemoryLayout<CMIOObjectID>.size) {
        let current = devices.advanced(by: Int(offset)).assumingMemoryBound(to: CMIOObjectID.self)
        // current.pointee is your object ID; you will want to keep track of it somehow
    }
}
Finally, clean up devices
free(devices)
Now at this point, you'll want to use that object ID you saved above to make another query. We need a new property address:
var opa = CMIOObjectPropertyAddress(
    mSelector: CMIOObjectPropertySelector(kCMIODevicePropertyDeviceIsRunningSomewhere),
    mScope: CMIOObjectPropertyScope(kCMIOObjectPropertyScopeWildcard),
    mElement: CMIOObjectPropertyElement(kCMIOObjectPropertyElementWildcard)
)
This tells CoreMediaIO that we want to know if the device is currently running somewhere (read: in any app), wildcarding the rest of the fields. Next we get to the meat of the query; camera below corresponds to the ID you saved before:
var (dataSize, dataUsed) = (UInt32(0), UInt32(0))
var result = CMIOObjectGetPropertyDataSize(camera, &opa, 0, nil, &dataSize)
if result == OSStatus(kCMIOHardwareNoError) {
    if let data = malloc(Int(dataSize)) {
        result = CMIOObjectGetPropertyData(camera, &opa, 0, nil, dataSize, &dataUsed, data)
        let on = data.assumingMemoryBound(to: UInt8.self)
        // on.pointee != 0 means it's in use somewhere; 0 means not in use anywhere
        free(data)  // clean up, as we did for devices
    }
}
With the above code samples you should have enough to test whether or not the camera is in use. You only need to get the device once (the first part of the answer); the check for whether it's in use, however, you'll have to run each time you want this information. As an extra exercise, consider playing with CMIOObjectAddPropertyListenerBlock to be notified of changes to the in-use property address we used above.
While this answer is nearly 3 years too late for the OP, I hope it helps someone in the future. Examples here are given with Swift 3.0.
The previous answer from the user jer is definitely the correct answer, but I just wanted to add one additional important piece of information.
If a listener block is registered with CMIOObjectAddPropertyListenerBlock, the current run loop must be run, otherwise no event will be received and the listener block will never fire.
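For example, registration might look like this (a sketch in C, since CoreMediaIO itself is a C API; camera stands for the CMIOObjectID obtained in the answer above):
#include <CoreMediaIO/CMIOHardware.h>
#include <CoreFoundation/CoreFoundation.h>
#include <dispatch/dispatch.h>
#include <stdio.h>
// Fires whenever the "is running somewhere" state of the device changes.
static void watchCamera(CMIOObjectID camera) {
    CMIOObjectPropertyAddress opa = {
        .mSelector = kCMIODevicePropertyDeviceIsRunningSomewhere,
        .mScope    = kCMIOObjectPropertyScopeWildcard,
        .mElement  = kCMIOObjectPropertyElementWildcard
    };
    CMIOObjectAddPropertyListenerBlock(camera, &opa, dispatch_get_main_queue(),
        ^(UInt32 numberAddresses, const CMIOObjectPropertyAddress addresses[]) {
            // Re-run the "is it in use" query from the answer above here.
            printf("camera in-use state changed\n");
        });
    CFRunLoopRun();  // without a running run loop the block never fires
}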

STM32439I-EVAL2 board problems with camera

I am working with the STM32439I-EVAL2 board and my OV2640 camera is not functioning properly. I have been testing it with the MB1063 Demonstration example from the STM32CubeF4 software and when I try to use the camera it shows "Error while Initializing Camera Interface. Please, chech if the camera module is mounted" on the LCD daughter board, even though the camera module is mounted and connected to the board. Has anyone else had this problem and solved it? Any help would be much appreciated.
I found the answer to my question. If anyone else has this problem, put this code in STM32Cube_FW_F4_V1.1.0\Drivers\BSP\STM324x9I_EVAL\stm324x9i_eval_io.c
uint8_t BSP_IO_Init(void)
{
    uint8_t ret = IO_ERROR;

    /* Read ID and verify the IO expander is ready */
    if (stmpe1600_io_drv.ReadID(IO_I2C_ADDRESS) == STMPE1600_ID)
    {
        /* Initialize the IO driver structure */
        io_driver = &stmpe1600_io_drv;
        ret = IO_OK;
    }

    if (ret == IO_OK)
    {
        io_driver->Init(IO_I2C_ADDRESS);
        io_driver->Start(IO_I2C_ADDRESS, IO_PIN_ALL);
        io_driver->Config(IO_I2C_ADDRESS, IO_PIN_0, IO_MODE_OUTPUT);
        io_driver->Config(IO_I2C_ADDRESS, IO_PIN_2, IO_MODE_OUTPUT);
        io_driver->WritePin(IO_I2C_ADDRESS, IO_PIN_0, 1);
        io_driver->WritePin(IO_I2C_ADDRESS, IO_PIN_2, 0);
    }

    return ret;
}
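If it helps to verify the fix from application code, a quick sanity check might look like this (a sketch; Error_Handler() is the usual Cube-generated stub and just an assumption on my part):
/* Confirm the STMPE1600 IO expander answers before bringing up the camera. */
if (BSP_IO_Init() != IO_OK)
{
    Error_Handler();  /* IO expander did not respond on I2C */
}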

File copy with progress indicator [duplicate]

FSCopyObjectAsync is deprecated in OS X v10.8. Now how do I display a progress indicator for a file copy operation?
My answer assumes you're talking about showing the progress of a single file being copied.
Yes, "FSCopyObjectAsync" has been deprecated, but it's not gone yet.
And as you have discovered, Apple has not yet provided a useful replacement for the functionality that will eventually be removed. I suspect (but do not know for certain) that when the new functionality comes in, perhaps in 10.9, it will be delivered in the "NSFileManagerDelegate" protocol for delegates to make use of.
To make certain of that, Apple needs to be aware that lots of developers need this. File a bug report at http://bugreporter.apple.com -- it'll likely be closed as a duplicate, but every vote counts.
copyfile(3) is an alternative to FSCopyObjectAsync. Here is an example of copyfile(3) with a progress callback.
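For reference, a minimal sketch of that approach (my own code, not the linked example; the callback constants are documented in the copyfile(3) man page):
#include <copyfile.h>
#include <stdio.h>
// Called repeatedly while the file data is being copied.
static int progress_cb(int what, int stage, copyfile_state_t state,
                       const char *src, const char *dst, void *ctx) {
    if (what == COPYFILE_COPY_DATA && stage == COPYFILE_PROGRESS) {
        off_t copied = 0;
        copyfile_state_get(state, COPYFILE_STATE_COPIED, &copied);
        printf("copied %lld bytes\n", (long long)copied);  // drive your progress indicator here
    }
    return COPYFILE_CONTINUE;  // returning COPYFILE_QUIT would cancel the copy
}
int copy_with_progress(const char *src, const char *dst) {
    copyfile_state_t state = copyfile_state_alloc();
    copyfile_state_set(state, COPYFILE_STATE_STATUS_CB, progress_cb);
    int rc = copyfile(src, dst, state, COPYFILE_ALL);
    copyfile_state_free(state);
    return rc;
}
To report progress as a fraction, stat(2) the source file first and divide the copied byte count by its size.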
I created an open source project addressing this issue: I wrapped copyfile(3) in an NSOperation and created a GUI for it. Please check it out and maybe contribute to make it better.
https://github.com/larod/FileCopyDemo
Copying files with a progress indicator in C
#include <fcntl.h>
#include <unistd.h>
#define BUFSIZE (64*1024)
void *thread_proc(void *arg)
{
    // outPath & inPath are NSURLs (getResourceValue:forKey:error: is an NSURL method)
    char buffer[BUFSIZE];
    const char *outputFile = [[outPath path] UTF8String];
    const char *inputFile = [[inPath path] UTF8String];
    int in = open(inputFile, O_RDONLY);
    int out = open(outputFile, O_WRONLY | O_CREAT | O_TRUNC, 0644); // a mode is required with O_CREAT
    volatile off_t progress = 0;
    ssize_t bytes_read;
    double fileSize = 0;
    NSNumber *theSize = nil;
    if ([inPath getResourceValue:&theSize forKey:NSURLFileSizeKey error:nil])
        fileSize = [theSize doubleValue];
    [progressIndicator setMaxValue:fileSize];
    while ((bytes_read = read(in, buffer, BUFSIZE)) > 0)
    {
        write(out, buffer, bytes_read); // write only what was read, not the whole buffer
        progress += bytes_read;
        [progressIndicator setDoubleValue:progress];
    }
    // copy is done, or an error occurred
    close(in);
    close(out);
    return NULL;
}
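A note on usage (my own addition): the function is shaped like a pthread entry point, so launching the copy might look like the snippet below. Also bear in mind that AppKit controls such as progressIndicator should be updated on the main thread, not directly from this worker.
#include <pthread.h>
// Kick off the copy on a background thread so the UI stays responsive.
pthread_t copyThread;
pthread_create(&copyThread, NULL, thread_proc, NULL);
pthread_detach(copyThread);  // no join needed; the thread ends when the copy finishes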

How do you set the input level (gain) on the built-in input (OSX Core Audio / Audio Unit)?

I've got an OSX app that records audio data using an Audio Unit. The Audio Unit's input can be set to any available source with inputs, including the built-in input. The problem is, the audio that I get from the built-in input is often clipped, whereas in a program such as Audacity (or even Quicktime) I can turn down the input level and I don't get clipping.
Multiplying the sample frames by a fraction, of course, doesn't work, because I get a lower volume, but the samples themselves are still clipped at time of input.
How do I set the input level or gain for that built-in input to avoid the clipping problem?
This works for me to set the input volume on my MacBook Pro (2011 model). It is a bit funky: I had to try setting the master channel volume, then each independent stereo channel volume, until I found one that worked. Look through the comments in my code; I suspect the best way to tell if your code is working is to find a get/set-property combination that works, then do something like get/set (something else)/get to verify that your setter is working.
Oh, and I'll point out of course that I wouldn't rely on the values in address staying the same across getProperty calls as I'm doing here. It seems to work but it's definitely bad practice to rely on struct values being the same when you pass one by reference to a function. This is of course sample code so please forgive my laziness. ;)
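As a concrete illustration of that get/set/get idea (a sketch only; deviceID and address are set up exactly as in the full listing below):
// Verify a setter by reading back the value it claims to have set.
Float32 before = 0.0f, target = 0.25f, after = 0.0f;
UInt32 size = sizeof(Float32);
AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &before );
AudioObjectSetPropertyData( deviceID, &address, 0, NULL, sizeof(target), &target );
size = sizeof(Float32);
AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &after );
// if `after` comes back as `target` (and differs from `before`), the setter works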
//
// main.c
// testInputVolumeSetter
//
#include <CoreFoundation/CoreFoundation.h>
#include <CoreAudio/CoreAudio.h>
OSStatus setDefaultInputDeviceVolume( Float32 toVolume );
int main(int argc, const char * argv[]) {
    OSStatus err;
    // Load the Sound system preference, select a default
    // input device, set its volume to max. Now set
    // breakpoints at each of these lines. As you step over
    // them you'll see the input volume change in the Sound
    // preference panel.
    //
    // On my MacBook Pro setting the channel[ 1 ] volume
    // on the default microphone input device seems to do
    // the trick. channel[ 0 ] reports that it works but
    // seems to have no effect and the master channel is
    // unsettable.
    //
    // I do not know how to tell which one will work so
    // probably the best thing to do is write your code
    // to call getProperty after you call setProperty to
    // determine which channel(s) work.
    err = setDefaultInputDeviceVolume( 0.0 );
    err = setDefaultInputDeviceVolume( 0.5 );
    err = setDefaultInputDeviceVolume( 1.0 );
    return 0;
}
// 0.0 == no volume, 1.0 == max volume
OSStatus setDefaultInputDeviceVolume( Float32 toVolume ) {
    AudioObjectPropertyAddress address;
    AudioDeviceID deviceID;
    OSStatus err;
    UInt32 size;
    UInt32 channels[ 2 ];
    Float32 volume;

    // get the default input device id
    address.mSelector = kAudioHardwarePropertyDefaultInputDevice;
    address.mScope = kAudioObjectPropertyScopeGlobal;
    address.mElement = kAudioObjectPropertyElementMaster;
    size = sizeof(deviceID);
    err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &address, 0, NULL, &size, &deviceID );

    // get the input device stereo channels
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyPreferredChannelsForStereo;
        address.mScope = kAudioDevicePropertyScopeInput;
        address.mElement = kAudioObjectPropertyElementWildcard;
        size = sizeof(channels);
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &channels );
    }

    // run some tests to see what channels might respond to volume changes
    if ( ! err ) {
        Boolean hasProperty;
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;
        // On my MacBook Pro using the default microphone input:
        address.mElement = kAudioObjectPropertyElementMaster;
        // returns false, no VolumeScalar property for the master channel
        hasProperty = AudioObjectHasProperty( deviceID, &address );
        address.mElement = channels[ 0 ];
        // returns true, channel 0 has a VolumeScalar property
        hasProperty = AudioObjectHasProperty( deviceID, &address );
        address.mElement = channels[ 1 ];
        // returns true, channel 1 has a VolumeScalar property
        hasProperty = AudioObjectHasProperty( deviceID, &address );
    }

    // try to get the input volume
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;
        size = sizeof(volume);
        address.mElement = kAudioObjectPropertyElementMaster;
        // returns an error which we expect since it reported not having the property
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &volume );
        size = sizeof(volume);
        address.mElement = channels[ 0 ];
        // returns noErr, but says the volume is always zero (weird)
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &volume );
        size = sizeof(volume);
        address.mElement = channels[ 1 ];
        // returns noErr, but returns the correct volume!
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &volume );
    }

    // try to set the input volume
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;
        size = sizeof(volume);
        if ( toVolume < 0.0 ) volume = 0.0;
        else if ( toVolume > 1.0 ) volume = 1.0;
        else volume = toVolume;
        address.mElement = kAudioObjectPropertyElementMaster;
        // returns an error which we expect since it reported not having the property
        err = AudioObjectSetPropertyData( deviceID, &address, 0, NULL, size, &volume );
        address.mElement = channels[ 0 ];
        // returns noErr, but doesn't affect my input gain
        err = AudioObjectSetPropertyData( deviceID, &address, 0, NULL, size, &volume );
        address.mElement = channels[ 1 ];
        // success! correctly sets the input device volume.
        err = AudioObjectSetPropertyData( deviceID, &address, 0, NULL, size, &volume );
    }

    return err;
}
EDIT in response to your question, "How'd [I] figure this out?"
I've spent a lot of time using Apple's audio code over the last five or so years and I've developed some intuition/process when it comes to where and how to look for solutions. My business partner and I co-wrote the original iHeartRadio apps for the first-generation iPhone and a few other devices and one of my responsibilities on that project was the audio portion, specifically writing an AAC Shoutcast stream decoder/player for iOS. There weren't any docs or open-source examples at the time so it involved a lot of trial-and-error and I learned a ton.
At any rate, when I read your question and saw the bounty I figured this was just low-hanging fruit (i.e. you hadn't RTFM ;-). I wrote a few lines of code to set the volume property and when that didn't work I genuinely got interested.
Process-wise maybe you'll find this useful:
Once I knew it wasn't a straightforward answer I started thinking about how to solve the problem. I knew the Sound System Preference lets you set the input gain so I started by disassembling it with otool to see whether Apple was making use of old or new Audio Toolbox routines (new as it happens):
Try using:
otool -tV /System/Library/PreferencePanes/Sound.prefPane/Contents/MacOS/Sound | bbedit
then search for Audio to see what methods are called (if you don't have bbedit, which every Mac developer should have, IMO, dump the output to a file and open it in some other text editor).
I'm most familiar with the older, deprecated Audio Toolbox routines (three years to obsolescence in this industry) so I looked at some Technotes from Apple. They have one that shows how to get the default input device and set its volume using the newest CoreAudio methods, but as you undoubtedly saw, their code doesn't work properly (at least on my MBP).
Once I got to that point I fell back on the tried-and-true: Start googling for keywords that were likely to be involved (e.g. AudioObjectSetPropertyData, kAudioDevicePropertyVolumeScalar, etc.) looking for example usage.
One interesting thing I've found about CoreAudio and using the Apple Toolbox in general is that there is a lot of open-source code out there where people try various things (tons of pastebins and GoogleCode projects etc.). If you're willing to dig through a bunch of this code you'll typically either find the answer outright or get some very good ideas.
In my search the most relevant things I found were the Apple technote showing how to get the default input device and set the master input gain using the new Toolbox routines (even though it didn't work on my hardware), and I found some code that showed setting the gain by channel on an output device. Since input devices can be multichannel I figured this was the next logical thing to try.
Your question is really good because at least right now there is no correct documentation from Apple that shows how to do what you asked. It's goofy too because both channels report that they set the volume but clearly only one of them does (the input mic is a mono source so this isn't surprising, but I consider having a no-op channel and no documentation about it a bit of a bug on Apple's part).
This happens pretty consistently when you start dealing with Apple's cutting-edge technologies. You can do amazing things with their toolbox and it blows every other OS I've worked on out of the water but it doesn't take too long to get ahead of their documentation, particularly if you're trying to do anything moderately sophisticated.
If you ever decide to write a kernel driver for example you'll find the documentation on IOKit to be woefully inadequate. Ultimately you've got to get online and dig through source code, either other people's projects or the OS X source or both, and pretty soon you'll conclude as I have that the source is really the best place for answers (even though StackOverflow is pretty awesome).
Thanks for the points and good luck with your project :)