I have a CFArrayRef which mostly has CFDictionaryRef, but sometimes it'll contain other things. I'd like to access a value from the dictionary in the array if I can, and not crash if I can't. Here's the code:
bool result = false;
CFArrayRef devices = CFArrayCreateCopy(kCFAllocatorDefault, SDMMobileDevice->deviceList);
if (devices) {
    for (uint32_t i = 0; i < CFArrayGetCount(devices); i++) {
        CFDictionaryRef device = CFArrayGetValueAtIndex(devices, i);
        if (device) { // *** I need to verify this is actually a dictionary or actually responds to the getObjectForKey selector! ***
            CFNumberRef idNumber = CFDictionaryGetValue(device, CFSTR("DeviceID"));
            if (idNumber) {
                uint32_t fetched_id = 0;
                CFNumberGetValue(idNumber, 0x3, &fetched_id); // 0x3 == kCFNumberSInt32Type
                if (fetched_id == device_id) {
                    result = true;
                    break;
                }
            }
        }
    }
    CFRelease(devices);
}
return result;
Any suggestions for how I can ensure that I only treat device like a CFDictionary if it's right to do so?
(I'm dealing with some open source code that isn't particularly well documented, and it doesn't seem to be particularly reliable either. I'm not sure if it's a bug that the array contains non-dictionary objects or a bug that it doesn't detect when it contains non-dictionary objects, but it seems to me that adding a check here is less likely to break other code than forcing it to only contain dictionaries elsewhere. I don't often work with CoreFoundation, so I'm not sure if I'm using the proper terms.)
In this case, since it looks like you are traversing the I/O Registry, you can use CFGetTypeID():
CFTypeRef device = CFArrayGetValueAtIndex(devices, i);  // <-- use CFTypeRef
if (CFGetTypeID(device) == CFDictionaryGetTypeID()) {   // <-- ensure it's a dictionary
    ...
}
If you really need to send messages to NSObject's interface from your C code, you can (see #include <objc/objc.h> and friends, or call a C helper function in a .m file), but these strategies are not as straightforward as CFGetTypeID(), and much more error-prone.
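For completeness, here is a minimal sketch of how the type check could slot into the loop from the question. It reuses the question's names (devices, device_id, result) and substitutes kCFNumberSInt32Type for the literal 0x3; treat it as an illustration, not a drop-in fix:

for (CFIndex i = 0; i < CFArrayGetCount(devices); i++) {
    CFTypeRef entry = CFArrayGetValueAtIndex(devices, i);
    // Skip anything that isn't a CFDictionary
    if (CFGetTypeID(entry) != CFDictionaryGetTypeID()) continue;

    CFDictionaryRef device = (CFDictionaryRef)entry;
    CFTypeRef idValue = CFDictionaryGetValue(device, CFSTR("DeviceID"));
    // The value could also be something other than a CFNumber, so check that too
    if (idValue && CFGetTypeID(idValue) == CFNumberGetTypeID()) {
        uint32_t fetched_id = 0;
        CFNumberGetValue((CFNumberRef)idValue, kCFNumberSInt32Type, &fetched_id);
        if (fetched_id == device_id) {
            result = true;
            break;
        }
    }
}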
I am a newbie at C++/CLI.
I already know that pin_ptr's purpose is to keep the GC from moving the specified object.
Now let me show you MSDN's example:
https://msdn.microsoft.com/en-us//library/1dz8byfh.aspx
// pin_ptr_1.cpp
// compile with: /clr
using namespace System;
#define SIZE 10

#pragma unmanaged
// native function that initializes an array
void native_function(int* p) {
    for (int i = 0; i < 10; i++)
        p[i] = i;
}

#pragma managed
public ref class A {
private:
    array<int>^ arr;   // CLR integer array
public:
    A() {
        arr = gcnew array<int>(SIZE);
    }
    void load() {
        pin_ptr<int> p = &arr[0];   // pin pointer to first element in arr
        int* np = p;                // pointer to the first element in arr
        native_function(np);        // pass pointer to native function
    }
    int sum() {
        int total = 0;
        for (int i = 0; i < SIZE; i++)
            total += arr[i];
        return total;
    }
};

int main() {
    A^ a = gcnew A;
    a->load();   // initialize managed array using the native function
    Console::WriteLine(a->sum());
}
Here is the question.
Isn't it okay if the passed object (arr) is not pinned, because the unmanaged code (native_function) is a synchronous operation and finishes before the C++/CLI code (load) returns?
Is there any chance the GC destroys arr, even though the main logic is still running?
(I think A is main's stack variable and arr is A's member variable, so while main is running, it should be visible.)
If so, how can we guarantee that A is still there before invoking load?
(Or is it only guaranteed while not running in native code?)
int main() {
    A^ a = gcnew A;
    // I think A or arr could be destroyed here, if it could be destroyed inside native_function.
    a->load();
    ...
}
Thanks in advance.
The problem that is solved by pinning a pointer is not a normal concurrency issue. It might be true that no other thread will preempt the execution of your native function. However, you have to account for the garbage collector, which might kick in whenever the .NET runtime sees fit. For instance, the system might run low on memory, so the runtime decides to collect unreachable objects. This might happen while your native function executes, and the garbage collector might relocate the array it is using, so the pointer you passed in isn't valid anymore.
The golden rule of thumb is to pin ALL array pointers and ALL string pointers before passing them to a native function. ALWAYS. Don't think about it, just do it as a rule. Your code might work fine for a long time without pinning, but one day bad luck will strike you just when it's most annoying.
I'm writing a serial communication wrapper class in Objective-C. To list all available serial modems and set up the connection I'm using pretty much the same code as in this example project by Apple.
I could read and write the way Apple does it. But I want to implement a loop on a second thread that writes to the stream if an NSString *writeString is longer than 0 characters, and reads after writing if bytes are available.
Getting writing to work was quite straightforward: I just used the write() function declared in unistd.h.
Reading will not work. Whenever I call read(), the function hangs and my loop does not proceed.
Here is the code used in my loop:
- (void)runInCOMLoop {
    do {
        // write
    } while (bytesWritten < strlen([_writeString UTF8String]));

    NSMutableString *readString = [NSMutableString string];
    ssize_t bytesRead = 0;
    ssize_t readB = 0;
    char buffer[256];
    do {
        readB = read(_fileDescriptor, buffer, sizeof(buffer));
        // ^^^^ this function hangs
        bytesRead += readB;
        if (readB == -1) {
            // error
        }
        else if (readB > 0) {
            if (buffer[bytesRead - 1] == '\r' || buffer[bytesRead - 1] == '\n') {
                break;
            }
            [readString appendString:[NSString stringWithUTF8String:buffer]];
        }
    } while (readB > 0);
}
What am I doing wrong here?
read() will block if there is nothing to read. Apple probably has their own way of doing things, but you can use select() to see if there is anything to read on _fileDescriptor. Google around for examples of how to use select().
Here's one link on StackOverflow:
Can someone give me an example of how select() is alerted to an fd becoming "ready"
This excerpt from the select man page is pertinent:
To effect a poll, the timeout argument should be
non-nil, pointing to a zero-valued timeval structure. Timeout is not
changed by select(), and may be reused on subsequent calls, however it is
good style to re-initialize it before each invocation of select().
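For illustration, here's a minimal sketch of the poll that excerpt describes, using a zero-valued timeval so select() returns immediately instead of blocking. It assumes _fileDescriptor from the question is the open port's descriptor:

#include <sys/select.h>
#include <unistd.h>

fd_set readSet;
FD_ZERO(&readSet);
FD_SET(_fileDescriptor, &readSet);

struct timeval timeout = {0, 0}; // zero-valued timeval: poll, don't wait
int result = select(_fileDescriptor + 1, &readSet, NULL, NULL, &timeout);
if (result > 0 && FD_ISSET(_fileDescriptor, &readSet)) {
    // Data is available; read() will not block now
    char buffer[256];
    ssize_t readB = read(_fileDescriptor, buffer, sizeof(buffer));
    // handle readB...
}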
You can set the non-blocking flag (O_NONBLOCK) on the file descriptor using fcntl() to keep read() from waiting for data, but if you do that, you have to continuously poll looking for data, which is obviously bad from a CPU usage standpoint. As Charlie Burns' answer explains, the best solution is to use select() which will allow your program to efficiently wait until there is some data to be read on the port's file descriptor. Here's some example code taken from my own Objective-C serial port class, ORSSerialPort (slightly modified):
// Inside the port's background read loop:
int localPortFD = self.fileDescriptor;
struct timeval timeout;
int result;

fd_set localReadFDSet;
FD_ZERO(&localReadFDSet);
FD_SET(localPortFD, &localReadFDSet);

timeout.tv_sec = 0;
timeout.tv_usec = 100000; // Check to see if port closed every 100ms

result = select(localPortFD+1, &localReadFDSet, NULL, NULL, &timeout);
if (!self.isOpen) break; // Port closed while select call was waiting
if (result < 0) {
    // Handle error
}
if (result == 0 || !FD_ISSET(localPortFD, &localReadFDSet)) continue;

// Data is available
char buf[1024];
long lengthRead = read(localPortFD, buf, sizeof(buf));
NSData *readData = nil;
if (lengthRead > 0) readData = [NSData dataWithBytes:buf length:lengthRead];
Note that select() indicates that data is available by returning. So, your program will sit suspended at the select() call while no data is available. The program is not hung, that's how it's supposed to work. If you need to do other things while select() is waiting, you should put the select() call on a different queue/thread from the other work you need to do. ORSSerialPort does this.
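If you do need the rest of your code to keep running, one option (a rough sketch, not ORSSerialPort's actual code) is to run the select()/read() loop on its own GCD queue:

// Hypothetical serial queue dedicated to the read loop
dispatch_queue_t readQueue = dispatch_queue_create("com.example.serial-read", DISPATCH_QUEUE_SERIAL);
dispatch_async(readQueue, ^{
    while (self.isOpen) {
        // The select()/read() loop from above goes here; it only blocks this queue,
        // so the main queue stays responsive.
    }
});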
I have to provide a C-style callback for a specific C library in an iOS app. The callback has no void *userData or similar parameter, so I am not able to pass in a context. I'd like to avoid introducing a global context to solve this. An ideal solution would be an Objective-C block.
My question: Is there a way to 'cast' a block into a function pointer or to wrap/cloak it somehow?
Technically, you could get access to a function pointer for the block. But it's totally unsafe to do so, so I certainly don't recommend it. To see how, consider the following example:
#import <Foundation/Foundation.h>

struct Block_layout {
    void *isa;
    int flags;
    int reserved;
    void (*invoke)(void *, ...);
    struct Block_descriptor *descriptor;
};

int main(int argc, char *argv[]) {
    @autoreleasepool {
        // Block that doesn't take or return anything
        void(^block)() = ^{
            NSLog(@"Howdy %i", argc);
        };

        // Cast to a struct with the same memory layout
        struct Block_layout *blockStr = (struct Block_layout *)(__bridge void *)block;

        // Now do same as `block()':
        blockStr->invoke(blockStr);

        // Block that takes an int and returns an int
        int(^returnBlock)(int) = ^int(int a){
            return a;
        };

        // Cast to a struct with the same memory layout
        struct Block_layout *blockStr2 = (struct Block_layout *)(__bridge void *)returnBlock;

        // Now do same as `returnBlock(argc)':
        int ret = ((int(*)(void*, int a, ...))(blockStr2->invoke))(blockStr2, argc);
        NSLog(@"ret = %i", ret);
    }
}
Running that yields:
Howdy 1
ret = 1
Which is what we'd expect from purely executing those blocks directly with block(). So, you could use invoke as your function pointer.
But as I say, this is totally unsafe. Don't actually use this!
If you want to see a write-up of a way to do what you're asking, then check this out:
http://www.mikeash.com/pyblog/friday-qa-2010-02-12-trampolining-blocks-with-mutable-code.html
It's just a great write-up of what you would need to do to get this to work. Sadly, it's never going to work on iOS though (since you need to mark a page as executable which you're not allowed to do within your app's sandbox). But nevertheless, a great article.
If your block needs context information, and the callback does not offer any context, I'm afraid the answer is a clear no. Blocks have to store context information somewhere, so you will never be able to cast such a block into a no-arguments function pointer.
A carefully designed global variable approach is probably the best solution in this case.
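To illustrate that global approach, here is a minimal sketch with hypothetical names (my_lib_callback_t and my_lib_set_callback are stand-ins for whatever the real library declares): the block lives in a file-scope variable and a plain C function with the required signature forwards to it.

// Hypothetical callback type expected by the C library (no userData parameter)
typedef void (*my_lib_callback_t)(int event);

// File-scope storage for the block; this is the "carefully designed global"
static void (^gEventBlock)(int event);

// Plain C trampoline with the required signature; forwards to the stored block
static void trampoline(int event) {
    if (gEventBlock) gEventBlock(event);
}

// Somewhere in setup code:
// gEventBlock = ^(int event) { NSLog(@"got event %d", event); };
// my_lib_set_callback(trampoline); // hypothetical registration call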
MABlockClosure can do exactly this. But it may be overkill for whatever you need.
I know this has been solved but, for interested parties, I have another solution.
Remap the entire function to a new address space. The new resulting address can be used as a key to the required data.
#import <mach/mach_init.h>
#import <mach/vm_map.h>

void *remap_address(void* address, int page_count)
{
    vm_address_t source_address = (vm_address_t) address;
    vm_address_t source_page = source_address & ~PAGE_MASK;
    vm_address_t destination_page = 0;
    vm_prot_t cur_prot;
    vm_prot_t max_prot;
    kern_return_t status = vm_remap(mach_task_self(),
                                    &destination_page,
                                    PAGE_SIZE*(page_count ? page_count : 4),
                                    0,
                                    VM_FLAGS_ANYWHERE,
                                    mach_task_self(),
                                    source_page,
                                    FALSE,
                                    &cur_prot,
                                    &max_prot,
                                    VM_INHERIT_NONE);
    if (status != KERN_SUCCESS)
    {
        return NULL;
    }

    vm_address_t destination_address = destination_page | (source_address & PAGE_MASK);
    return (void*) destination_address;
}
Remember to handle pages that aren't required anymore and note that it takes a lot more memory per invocation than MABlockClosure.
(Tested on iOS)
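As a follow-up to the note about handling pages that aren't required anymore, a minimal sketch of cleanup (my assumption, not part of the original answer) is to hand the remapped page range back to vm_deallocate() once you're done with it:

#import <mach/mach_init.h>
#import <mach/vm_map.h>

void unmap_address(void *remapped, int page_count)
{
    // Round back down to the page that vm_remap() returned, then release the same range
    vm_address_t destination_page = ((vm_address_t) remapped) & ~PAGE_MASK;
    vm_deallocate(mach_task_self(),
                  destination_page,
                  PAGE_SIZE * (page_count ? page_count : 4));
}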
I am trying to write a basic app that uses CoreMIDI to receive MIDI events from a specific source. I understand that all MIDI events that come into a port call the proc that I connected via MIDIInputPortCreate(). I also understand that when using MIDIPortConnectSource() you can send an identifier (connRefCon) to help know what the source is. But I'm not sure how to use it.
I figure that within my MIDIReadProc I can use the srcConnRefCon and an if statement to listen to a specific source, but I still don't know what void * I should pass to separate each source. Ideally my read proc will look something like this:
void SourceReadProc (const MIDIPacketList *pktlist,
                     void *readProcRefCon,
                     void *srcConnRefCon)
{
    if (srcConnRefCon == mySourceChoice) {
        // pass the pktlist to do something
    }
}
Any help will be greatly appreciated.
GW
After a break I've come back to this project with a fresh perspective. When I call MIDIPortConnectSource and pass a unique connRefCon, it apparently isn't passed through for each endpoint. Here's my code:
ItemCount count = MIDIGetNumberOfSources();
for (ItemCount i = 0; i < count; i++) {
    MIDIEndpointRef endpoint = MIDIGetSource(i);
    MIDIObjectGetStringProperty(endpoint, kMIDIPropertyName, &midiEndpointSourceName);
    NSLog(@"Source %lu: %@", i, midiEndpointSourceName);
    MIDIPortConnectSource(midiSourcePort, endpoint, (void*)&i);
}
Then my read proc:
void SourceReadProc (const MIDIPacketList *pktlist,
                     void *readProcRefCon,
                     void *srcConnRefCon)
{
    ItemCount *source = (ItemCount*) srcConnRefCon;
    NSLog(@"source: %lu", *source);
}
I've hooked up two different MIDI sources and I can find them both just fine. My first piece of code reports that there are two sources and tells me their names. But my read proc says the source is always the first source. I've tried three different data types when passing the connRefCon with no luck. I feel that my issue must be with the MIDIPortConnectSource call.
Any help or even troubleshooting ideas would be great. I wish CoreMIDI had functions to query what's connected to a port so I could check that, but alas, there aren't any.
The srcConnRefCon is useful if you've made multiple MIDIPortConnectSource() calls. Most commonly, it's a pointer to an object representing the source, but it could be anything. If you just want to disambiguate multiple sources, you could, say, use a string.
MIDIPortConnectSource(port, endpoint1, (void *)"endpoint1");
MIDIPortConnectSource(port, endpoint2, (void *)"endpoint2");
Then, in your SourceReadProc, you'd do something like this:
char *source = (char *)srcConnRefCon;
if (!strcmp(source, "endpoint1")) {
    // Process packets from source 1
}
Make sure the allocation lifetime of whatever you pass in extends as long as the port is connected - otherwise you'll get a dangling pointer, which can be hell to debug.
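If you'd rather key off an index than a string, a rough sketch (not from the answer above, and reusing the question's midiSourcePort) is to give each connection its own heap-allocated value, so each endpoint gets a pointer that stays valid and distinct rather than the address of a loop variable:

#include <stdlib.h>

// One heap-allocated index per endpoint; each connection gets its own pointer
for (ItemCount i = 0; i < MIDIGetNumberOfSources(); i++) {
    MIDIEndpointRef endpoint = MIDIGetSource(i);
    ItemCount *indexRefCon = malloc(sizeof(ItemCount)); // free when disconnecting the source
    *indexRefCon = i;
    MIDIPortConnectSource(midiSourcePort, endpoint, indexRefCon);
}

// In the read proc:
// ItemCount sourceIndex = *(ItemCount *)srcConnRefCon;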
The UIKeyboardAnimationCurveUserInfoKey has a UIViewAnimationCurve value. How do I convert it to the corresponding UIViewAnimationOptions value for use with the options argument of +[UIView animateWithDuration:delay:options:animations:completion:]?
// UIView.h
typedef enum {
UIViewAnimationCurveEaseInOut, // slow at beginning and end
UIViewAnimationCurveEaseIn, // slow at beginning
UIViewAnimationCurveEaseOut, // slow at end
UIViewAnimationCurveLinear
} UIViewAnimationCurve;
// ...
enum {
// ...
UIViewAnimationOptionCurveEaseInOut = 0 << 16, // default
UIViewAnimationOptionCurveEaseIn = 1 << 16,
UIViewAnimationOptionCurveEaseOut = 2 << 16,
UIViewAnimationOptionCurveLinear = 3 << 16,
// ...
};
typedef NSUInteger UIViewAnimationOptions;
Obviously, I could create a simple category method with a switch statement, like so:
// UIView+AnimationOptionsWithCurve.h
@interface UIView (AnimationOptionsWithCurve)
@end

// UIView+AnimationOptionsWithCurve.m
@implementation UIView (AnimationOptionsWithCurve)
+ (UIViewAnimationOptions)animationOptionsWithCurve:(UIViewAnimationCurve)curve {
    switch (curve) {
        case UIViewAnimationCurveEaseInOut:
            return UIViewAnimationOptionCurveEaseInOut;
        case UIViewAnimationCurveEaseIn:
            return UIViewAnimationOptionCurveEaseIn;
        case UIViewAnimationCurveEaseOut:
            return UIViewAnimationOptionCurveEaseOut;
        case UIViewAnimationCurveLinear:
            return UIViewAnimationOptionCurveLinear;
    }
}
@end
But, is there an even easier/better way?
The category method you suggest is the “right” way to do it—you don’t necessarily have a guarantee of those constants keeping their value. From looking at how they’re defined, though, it seems you could just do
animationOption = animationCurve << 16;
...possibly with a cast to NSUInteger and then to UIViewAnimationOptions, if the compiler feels like complaining about that.
Arguably you can take your first solution and make it an inline function to save yourself the stack push. It's such a tight conditional (constant-bound, etc) that it should compile into a pretty tiny piece of assembly.
Edit:
Per @matt, here you go (Objective-C):
static inline UIViewAnimationOptions animationOptionsWithCurve(UIViewAnimationCurve curve)
{
    switch (curve) {
        case UIViewAnimationCurveEaseInOut:
            return UIViewAnimationOptionCurveEaseInOut;
        case UIViewAnimationCurveEaseIn:
            return UIViewAnimationOptionCurveEaseIn;
        case UIViewAnimationCurveEaseOut:
            return UIViewAnimationOptionCurveEaseOut;
        case UIViewAnimationCurveLinear:
            return UIViewAnimationOptionCurveLinear;
    }
}
Swift 3:
extension UIViewAnimationOptions {
    init(curve: UIViewAnimationCurve) {
        switch curve {
        case .easeIn:
            self = .curveEaseIn
        case .easeOut:
            self = .curveEaseOut
        case .easeInOut:
            self = .curveEaseInOut
        case .linear:
            self = .curveLinear
        }
    }
}
In Swift you can do
extension UIViewAnimationCurve {
    func toOptions() -> UIViewAnimationOptions {
        return UIViewAnimationOptions(rawValue: UInt(rawValue << 16))
    }
}
An issue with the switch-based solution is that it assumes no combination of options will ever be passed in. Practice shows, though, that there may be situations where that assumption doesn't hold. One instance I found (at least on iOS 7) is when you obtain the keyboard animation info to animate your content along with the appearance/disappearance of the keyboard.
If you listen to the keyboardWillShow: or keyboardWillHide: notifications, and then get the curve the keyboard announces it will use, e.g:
UIViewAnimationCurve curve = [userInfo[UIKeyboardAnimationCurveUserInfoKey] integerValue];
you're likely to obtain the value 7. If you pass that into the switch function/method, you won't get a correct translation of that value, resulting in incorrect animation behaviour.
Noah Witherspoon's answer will return the correct value. Combining the two solutions, you might write something like:
static inline UIViewAnimationOptions animationOptionsWithCurve(UIViewAnimationCurve curve)
{
    UIViewAnimationOptions opt = (UIViewAnimationOptions)curve;
    return opt << 16;
}
The caveat here, as noted by Noah also, is that if Apple ever changes the enumerations where the two types no longer correspond, then this function will break. The reason to use it anyway, is that the switch based option doesn't work in all situations you may encounter today, while this does.
iOS 10+
Swift 5
A Swift alternative to converting UIView.AnimationCurve to UIView.AnimationOptions, which may not even be possible, is to use UIViewPropertyAnimator (iOS 10+), which accepts UIView.AnimationCurve and is a more modern animator than UIView.animate.
Most likely you'll be working with UIResponder.keyboardAnimationCurveUserInfoKey, which returns an NSNumber. The documentation for this key is (Apple's own notation, not mine):
public class let keyboardAnimationCurveUserInfoKey: String // NSNumber of NSUInteger (UIViewAnimationCurve)
Using this approach, we can eliminate any guesswork:
if let kbTiming = notification.userInfo?[UIResponder.keyboardAnimationCurveUserInfoKey] as? NSNumber, // doc says to unwrap as NSNumber
   let timing = UIView.AnimationCurve.RawValue(exactly: kbTiming), // takes an NSNumber
   let curve = UIView.AnimationCurve(rawValue: timing) { // takes a raw value
    let animator = UIViewPropertyAnimator(duration: duration, curve: curve) {
        // add animations
    }
    animator.startAnimation()
}