C Callback in Objective-C (IOKit)

I am trying to write some code that interacts with a USB device in Objective-C, and I got stuck on setting the callback function for incoming reports. In my case it's an IOKit function, but I think the problem is more general, as I (apparently) don't know how to correctly set a C callback function in Objective-C. I've got a class "USBController" that handles the I/O functions.
USBController.m:
#include <CoreFoundation/CoreFoundation.h>
#include <Carbon/Carbon.h>
#include <IOKit/hid/IOHIDLib.h>
#import "USBController.h"
static void Handle_IOHIDDeviceIOHIDReportCallback(
    void *          inContext,       // context from IOHIDDeviceRegisterInputReportCallback
    IOReturn        inResult,        // completion result for the input report operation
    void *          inSender,        // IOHIDDeviceRef of the device this report is from
    IOHIDReportType inType,          // the report type
    uint32_t        inReportID,      // the report ID
    uint8_t *       inReport,        // pointer to the report data
    CFIndex         inReportLength)  // the actual size of the input report
{
    printf("hello"); // just to see if the function is called
}
@implementation USBController
- (void)ConnectToDevice {
    ...
    IOHIDDeviceRegisterInputReportCallback(tIOHIDDeviceRefs[0], report, reportSize,
                                           Handle_IOHIDDeviceIOHIDReportCallback, (void *)self);
    ...
}
...
@end
All the functions are also declared in the header file.
I think I did pretty much the same as what I've found here, but it doesn't work. The project compiles fine and everything works up until the moment there is input and the callback function is about to be called. Then I get an "EXC_BAD_ACCESS" error. The first three arguments of the function are correct; I'm not so sure about the context.
What did I do wrong?

I am not at all sure that your EXC_BAD_ACCESS comes from your callback. Indeed, if it is being called (I assume you see the log output) and it only prints a message, there should be no problem there.
EXC_BAD_ACCESS is typically caused by an attempt to access an already deallocated object. You can get more information in two ways:
execute the program in debug mode, so that when it crashes you can inspect the stack contents;
activate NSZombies, or run the program under the Zombies instrument; this will tell you exactly which object was accessed after its deallocation.

I know how to fix this. When calling this:
IOHIDDeviceRegisterInputReportCallback(tIOHIDDeviceRefs[0], report, reportSize,
                                       Handle_IOHIDDeviceIOHIDReportCallback, (void *)self);
You don't show how the value called report is created or what type it has. However, the callback name "Handle_IOHIDDeviceIOHIDReportCallback" comes from an Apple technical note that contains an error in the creation of the report buffer: https://developer.apple.com/library/archive/technotes/tn2187/_index.html
CFIndex reportSize = 64;
uint8_t report = malloc( reportSize ); // <---- WRONG
IOHIDDeviceRegisterInputReportCallback( deviceRef,
                                        report,
                                        reportSize,
                                        Handle_IOHIDDeviceIOHIDReportCallback,
                                        context );
Instead do this:
uint8_t *report = (uint8_t *)malloc(reportSize);
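For completeness, a minimal corrected sketch of the registration (assuming deviceRef is a valid IOHIDDeviceRef obtained elsewhere, the callback defined above, and an illustrative buffer size of 64 bytes):
CFIndex reportSize = 64;                          // illustrative maximum input report size
uint8_t *report = (uint8_t *)malloc(reportSize);  // note the pointer type
IOHIDDeviceRegisterInputReportCallback(deviceRef,
                                       report,     // buffer IOKit fills with each incoming report
                                       reportSize,
                                       Handle_IOHIDDeviceIOHIDReportCallback,
                                       context);   // e.g. (void *)self; handed back as inContext
The buffer must stay allocated for as long as the callback is registered, because IOKit writes each incoming report into it before invoking the callback.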

How to get access to WriteableBitmap.PixelBuffer pixels with C++?

There are a lot of samples for C#, but only some code snippets for C++ on MSDN. I have put it together and I think it will work, but I am not sure if I am releasing all the COM references I have to.
Your code is correct--the reference count on the IBufferByteAccess interface of *buffer is incremented by the call to QueryInterface, and you must call Release once to release that reference.
However, if you use ComPtr<T>, this becomes much simpler: ComPtr<T> prevents you from calling any of the three members of IUnknown (AddRef, Release, and QueryInterface) directly. Instead, it encapsulates calls to these member functions in a way that makes it difficult to screw things up. Here's an example of how this would look:
// Get the buffer from the WriteableBitmap:
IBuffer^ buffer = bitmap->PixelBuffer;
// Convert from C++/CX to the ABI IInspectable*:
ComPtr<IInspectable> bufferInspectable(AsInspectable(buffer));
// Get the IBufferByteAccess interface:
ComPtr<IBufferByteAccess> bufferBytes;
ThrowIfFailed(bufferInspectable.As(&bufferBytes));
// Use it:
byte* pixels(nullptr);
ThrowIfFailed(bufferBytes->Buffer(&pixels));
The call to bufferInspectable.As(&bufferBytes) performs a safe QueryInterface: it computes the IID from the type of bufferBytes, performs the QueryInterface, and attaches the resulting pointer to bufferBytes. When bufferBytes goes out of scope, it will automatically call Release. The code has the same effect as yours, but without the error-prone explicit resource management.
The example uses the following two utilities, which help to keep the code clean:
auto AsInspectable(Object^ const object) -> Microsoft::WRL::ComPtr<IInspectable>
{
    return reinterpret_cast<IInspectable*>(object);
}

auto ThrowIfFailed(HRESULT const hr) -> void
{
    if (FAILED(hr))
        throw Platform::Exception::CreateException(hr);
}
Observant readers will notice that because this code uses a ComPtr for the IInspectable* we get from buffer, this code actually performs an additional AddRef/Release compared to the original code. I would argue that the chance of this impacting performance is minimal, and it's best to start from code that is easy to verify as correct, then optimize for performance once the hot spots are understood.
This is what I tried so far:
// Get the buffer from the WriteableBitmap
IBuffer^ buffer = bitmap->PixelBuffer;
// Get access to the base COM interface of the buffer (IUnknown)
IUnknown* pUnk = reinterpret_cast<IUnknown*>(buffer);
// Use IUnknown to get the IBufferByteAccess interface of the buffer to get access to the bytes
// This requires #include <Robuffer.h>
IBufferByteAccess* pBufferByteAccess = nullptr;
HRESULT hr = pUnk->QueryInterface(IID_PPV_ARGS(&pBufferByteAccess));
if (FAILED(hr))
{
    throw Platform::Exception::CreateException(hr);
}
// Get the pointer to the bytes of the buffer
byte *pixels = nullptr;
pBufferByteAccess->Buffer(&pixels);
// *** Do the work on the bytes here ***
// Release reference to IBufferByteAccess created by QueryInterface.
// Perhaps this might be done before doing more work with the pixels buffer,
// but it's possible that without it - the buffer might get released or moved
// by the time you are done using it.
pBufferByteAccess->Release();
When using C++/WinRT (instead of C++/CX) there's a more convenient (and more dangerous) alternative. The language projection generates a data() helper function on the IBuffer interface that returns a uint8_t* into the memory buffer.
Assuming that bitmap is of type WriteableBitmap the code can be trimmed down to this:
uint8_t* pixels{ bitmap.PixelBuffer().data() };
// *** Do the work on the bytes here ***
// No cleanup required; it has already been dealt with inside data()'s implementation
In the code pixels is a raw pointer into data controlled by the bitmap instance. As such it is only valid as long as bitmap is alive, but there is nothing in the code that helps the compiler (or a reader) track that dependency.
For reference, there's an example in the WriteableBitmap::PixelBuffer documentation illustrating the use of the (otherwise undocumented) helper function data().

MidiReadProc - using srcConnRefCon to listen to only one source

I am trying to write a basic app that uses CoreMIDI to receive MIDI events from a specific source. I understand that all MIDI events that come into a port call the proc that I connected via MIDIInputPortCreate(). I also understand that when using MIDIPortConnectSource() you can pass an identifier (connRefCon) to help identify the source. But I'm not sure how to use it.
I figure that within my MIDIReadProc I can use the srcConnRefCon and an if statement to listen to a specific source, but I still don't know what void * I should pass to separate each source. Ideally my read proc will look something like this:
void SourceReadProc (const MIDIPacketList *pktlist,
                     void *readProcRefCon,
                     void *srcConnRefCon)
{
    if (srcConnRefCon == mySourceChoice) {
        // pass the pktlist to do something
    }
}
Any help will be greatly appreciated.
GW
After a break I've come back to this project with a fresh perspective. When I call MIDIPortConnectSource and pass a unique connRefCon, it apparently isn't being passed through for each endpoint. Here's my code:
ItemCount count = MIDIGetNumberOfSources();
for (ItemCount i = 0; i < count; i++) {
    MIDIEndpointRef endpoint = MIDIGetSource(i);
    MIDIObjectGetStringProperty(endpoint, kMIDIPropertyName, &midiEndpointSourceName);
    NSLog(@"Source %lu: %@", i, midiEndpointSourceName);
    MIDIPortConnectSource(midiSourcePort, endpoint, (void*)&i);
}
Then my read proc:
void SourceReadProc (const MIDIPacketList *pktlist,
                     void *readProcRefCon,
                     void *srcConnRefCon)
{
    ItemCount *source = (ItemCount *)srcConnRefCon;
    NSLog(@"source: %lu", *source);
}
I've hooked up two different MIDI sources and I can find them both just fine. My first piece of code reports that there are two sources and tells me their names. But my read proc says that the source is always the first source. I've tried three different data types when passing the connRefCon, with no luck. I feel that my issue must be with MIDIPortConnectSource.
Any help or even troubleshooting ideas would be great. I wish that CoreMIDI had functions to query what's connected to a port so I could check that, but alas, there aren't any.
The srcConnRefCon is useful if you've made multiple MIDIPortConnectSource() calls. Most commonly, it's a pointer to an object representing the source, but it could be anything. If you just want to disambiguate multiple sources, you could, say, use a string.
MIDIPortConnectSource(port, endpoint1, (void *)"endpoint1");
MIDIPortConnectSource(port, endpoint2, (void *)"endpoint2");
Then, in your SourceReadProc, you'd do something like this:
char *source = (char *)srcConnRefCon;
if (!strcmp(source, "endpoint1")) {
    // Process packets from source 1
}
Make sure the allocation lifetime of whatever you pass in extends as long as the port is connected - otherwise you'll get a dangling pointer, which can be hell to debug.
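Applying that lifetime advice to the loop in the question (a sketch; it assumes midiSourcePort was created earlier with MIDIInputPortCreate): passing (void*)&i gives every connection the address of the same stack variable, which is also gone once the loop finishes. Heap-allocating one refCon per connection keeps each identifier alive and distinct:
ItemCount count = MIDIGetNumberOfSources();
for (ItemCount i = 0; i < count; i++) {
    MIDIEndpointRef endpoint = MIDIGetSource(i);
    // One heap-allocated identifier per source, instead of &i.
    ItemCount *refCon = (ItemCount *)malloc(sizeof(ItemCount));
    *refCon = i;
    MIDIPortConnectSource(midiSourcePort, endpoint, refCon);
    // free() each refCon only after the matching MIDIPortDisconnectSource().
}
The read proc from the follow-up then works unchanged, since each srcConnRefCon now points at its own stable ItemCount.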

User triggered event in libevent

I am currently writing a multi-threaded application using libevent.
Some events are triggered by I/O, but I need a couple of events that are triggered across threads by the code itself, using event_active().
I have tried to write a simple program that shows where my problem is:
The event is created using event_new(), and the fd set to -1.
When calling event_add(), if a timeout struct is used, the event is later properly handled by event_base_dispatch.
If event_add(ev, NULL) is used instead, it returns 0 (apparently successful), but event_base_dispatch() returns 1 (which means no events were pending or active, i.e. the event was not properly registered).
This behavior can be tested using the following code and swapping the event_add lines:
#include <event2/event.h>
#include <stdio.h>
#include <unistd.h>

void cb_func (evutil_socket_t fd, short flags, void * _param) {
    puts("Callback function called!");
}

void run_base_with_ticks(struct event_base *base)
{
    struct timeval one_sec;
    one_sec.tv_sec = 1;
    one_sec.tv_usec = 0;
    struct event *ev1;
    ev1 = event_new(base, -1, EV_PERSIST, cb_func, NULL);
    //int result = event_add(ev1, NULL);
    int result = event_add(ev1, &one_sec);
    printf("event_add result: %d\n", result);
    while (1) {
        result = event_base_dispatch(base);
        if (result == 1) {
            printf("Failed: event considered as not pending despite successful event_add\n");
            sleep(1);
        } else {
            puts("Tick");
        }
    }
}

int main () {
    struct event_base *base = event_base_new();
    run_base_with_ticks(base);
    return 0;
}
Compilation: g++ sample.cc -levent
The thing is, I do not need the timeout, and I do not want to use an n-year timeout as a workaround. So if this is not the right way to use user-triggered events, I would like to know how it is done.
Your approach is sound. In Libevent 2.0, you can use event_active() to activate an event from another thread. Just make sure that you use evthread_use_windows_threads() or evthread_use_pthreads() as appropriate beforehand, to tell Libevent to use the right threading library.
As for needing an extra event: in Libevent 2.0 and earlier, an event loop will exit immediately when there are no pending events added. Your best bet there is probably the timeout trick you discovered.
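A minimal sketch of that combination, i.e. event_active() called from a second thread plus the timeout trick to keep the loop alive (assuming pthreads; link with -levent and -levent_pthreads):
#include <event2/event.h>
#include <event2/thread.h>
#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

static void cb_func(evutil_socket_t fd, short flags, void *arg) {
    puts("Callback function called!");
}

static void *poke_thread(void *arg) {
    struct event *ev = (struct event *)arg;
    sleep(1);
    event_active(ev, 0, 0);   /* runs cb_func in the thread dispatching the base */
    return NULL;
}

int main(void) {
    evthread_use_pthreads();                /* must come before event_base_new() */
    struct event_base *base = event_base_new();
    struct event *ev = event_new(base, -1, EV_PERSIST, cb_func, NULL);
    struct timeval long_tv = { 3600, 0 };   /* long timeout, only to keep an event pending */
    event_add(ev, &long_tv);
    pthread_t t;
    pthread_create(&t, NULL, poke_thread, ev);
    event_base_dispatch(base);
    return 0;
}
The long timeout exists only so the base always has a pending event and event_base_dispatch() does not return immediately; the actual triggering is done by event_active().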
If you don't like that, you can use the internal "event_base_add_virtual" function to tell the event_base that it has a virtual event. This function isn't exported, though, so you'll have to say something like:
void event_base_add_virtual(struct event_base *);
// ...
base = event_base_new();
event_base_add_virtual(base); // keep it from exiting
That's a bit of a hack, though, and it uses an undocumented function, so you'd need to watch out in case it doesn't work with a later version of Libevent.
Finally, this method won't help you now, but there's a patch pending for future versions of Libevent (2.1 and later) to add a new flag to event_base_loop() to keep it from exiting when the loop is out of events. The patch is over on Github; it is mainly waiting for code review, and for a better name for the option.
I just got burned by this with libevent-2.0.21-stable. It is quite clearly a bug. I hope they fix it in a future release. In the meantime, updating the docs to warn us about it would be helpful.
The best workaround seems to be the fake timeout as described in the question.
@nickm, you didn't read the question. His example code uses event_new() like you described; there is a bug in libevent that causes it to fail when using a NULL timeout (but return 0 when you call event_add()).

Registering for Display Reconfiguration Callbacks

I'm putting together a Mac OS X application and I'm trying to register to receive display reconfiguration notices, but I'm very lost right now. I've been reading Apple's documentation and some forum posts, etc., but everything seems to assume a better knowledge of things than I apparently possess. I understand that I have to request the callback inside a run loop for it to work properly. I don't know how to set up a basic run loop for it, though. I also feel like the example Apple has in their documentation is missing stuff they expect me to already know. To display my ignorance, here is what I feel like things should look like.
NSRunLoop *rLoop = [NSRunLoop currentRunLoop];
codeToStartRunLoop
void MyDisplayReconfigurationCallBack (
CGDirectDisplayID display,
CGDisplayChangeSummaryFlags flags,
void *userInfo);
{
if (flags & kCGDisplayAddFlag) {
NSLog (@"Display Added");
}
else if (kCGDisplayRemoveFlag) {
NSLog (@"Display Removed");
}
}
CGDisplayRegisterReconfigurationCallback(MyDisplayReconfigurationCallBack, NULL);
The actual code I got was from Apple's Example, but it tells me that flags is an undeclared identifier at this point and won't compile. Not that it would work right since I don't have it in a run loop. I was hoping to find a tutorial somewhere that explains registering for system callback in a run loop but have not been successful. If anyone could point me in the right direction I'd super appreciate it.
(I'm sure that you'll be able to tell from my question that I'm very green. I taught myself Objective-C out of a book as my first programming language. I skipped C, so every once in a while I hit a snag somewhere that I can't figure out.)
If you're writing a Mac OS X application, the AppKit has already set up a run loop for you, so you don't need to worry about that part. You really only need to create your own run loop in Cocoa when you are also creating your own thread.
For the "undeclared identifier" part, it looks like it's due to a typo/syntax mistake:
void MyDisplayReconfigurationCallBack (CGDirectDisplayID display,
CGDisplayChangeSummaryFlags flags,
void *userInfo);
// Semicolon makes this an invalid function definition^^
{
    // This is an anonymous block,* and flags wasn't declared in it
    if (flags & kCGDisplayAddFlag) {
        // etc.
    }
Also, unlike some other languages, you can't declare or define functions inside of other functions, methods, or blocks* -- they have to be at the top level of the file. You can't put this in the same place where you call CGDisplayRegisterReconfigurationCallback.
Just as a sample (I have no idea what the rest of your code really looks like):
// MyClassThatIsInterestedInDisplayConfiguration.m
#import "MyClassThatIsInterestedInDisplayConfiguration.h"
// Define callback function at top level of file
void MyDisplayReconfigurationCallBack (CGDirectDisplayID display,
                                       CGDisplayChangeSummaryFlags flags,
                                       void *userInfo)
{
    if (flags & kCGDisplayAddFlag) {
        NSLog (@"Display Added");
    }
    else if (flags & kCGDisplayRemoveFlag) {
        NSLog (@"Display Removed");
    }
}
@implementation MyClassThatIsInterestedInDisplayConfiguration
- (void) comeOnBabyAndDoTheRegistrationWithMe {
    // Register callback function inside a method
    CGDisplayRegisterReconfigurationCallback(MyDisplayReconfigurationCallBack,
                                             NULL);
}
@end
*The basic C curly-brace-delimited thing, not the new cool Obj-C ad hoc function thing.
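As a side note (a sketch, not something you need for the compile error): if the object that registered the callback can go away before the process exits, balance the registration by removing it with the same callback/userInfo pair, for example from a teardown method:
// Pass the same function pointer and userInfo that were used when registering.
CGDisplayRemoveReconfigurationCallback(MyDisplayReconfigurationCallBack, NULL);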

Symbian: kern-exec 3 panic on RLibrary::Load

I am having trouble with dynamic loading of libraries - my code panics with KERN-EXEC 3. The code is as follows:
TFileName dllName = _L("mydll.dll");
TFileName dllPath = _L("c:\\sys\\bin\\");
RLibrary dll;
TInt res = dll.Load(dllName, dllPath); // Kern-Exec 3!
TLibraryFunction f = dll.Lookup(1);
if (f)
    f();
I receive the panic on TInt res = dll.Load(dllName, dllPath). What can I do to get rid of it? mydll.dll really is my DLL, which has only one exported function (for test purposes). Maybe something is wrong with the DLL? Here's what it is:
def file:
EXPORTS
_ZN4Init4InitEv # 1 NONAME
pkg file:
#{"mydll DLL"},(0xED3F400D),1,0,0
;Localised Vendor name
%{"Vendor-EN"}
;Unique Vendor name
:"Vendor"
"$(EPOCROOT)Epoc32\release\$(PLATFORM)\$(TARGET)\mydll.dll"-"!:\sys\bin\mydll.dll"
mmp file:
TARGET mydll.dll
TARGETTYPE dll
UID 0x1000008d 0xED3F400D
USERINCLUDE ..\inc
SYSTEMINCLUDE \epoc32\include
SOURCEPATH ..\src
SOURCE mydllDllMain.cpp
LIBRARY euser.lib
#ifdef ENABLE_ABIV2_MODE
DEBUGGABLE_UDEBONLY
#endif
EPOCALLOWDLLDATA
CAPABILITY CommDD LocalServices Location MultimediaDD NetworkControl NetworkServices PowerMgmt ProtServ ReadDeviceData ReadUserData SurroundingsDD SwEvent TrustedUI UserEnvironment WriteDeviceData WriteUserData
source code:
// Exported Functions
namespace Init
{
    EXPORT_C TInt Init()
    {
        // no implementation required
        return 0;
    }
}
header file:
#ifndef __MYDLL_H__
#define __MYDLL_H__
// Include Files
namespace Init
{
    IMPORT_C TInt Init();
}
#endif // __MYDLL_H__
I have no idea what is causing this... Any help is greatly appreciated.
P.S. I'm trying to use RLibrary::Load because I had trouble with static linkage. When I link statically, my main program doesn't start at all. I decided to check what happens and discovered this issue with RLibrary::Load.
A KERN-EXEC 3 panic is caused by an unhandled exception (CPU fault) generated by trying to invalidly access a region of memory. The invalid access can be to either code (for example, a bad PC caused by stack corruption) or data (for example, accessing freed memory). As such, these panics are typically observed when dereferencing a NULL pointer (it is equivalent to a segfault).
The call to RLibrary::Load should certainly never raise a KERN-EXEC 3 due to a programming error, so it is likely to be an environmental issue. As such I have to speculate on what is happening.
I believe the issue you are observing is due to stack overflow. Your MMP file does not specify the stack or heap size the initial thread should use, so the default of 4Kb (if I remember correctly) will be used. Equally, you are using TFileName; putting these on the stack is generally not recommended, precisely to avoid stack overflow.
You would be better off using the _LIT() macro instead - this will allow you to provide the RLibrary::Load function with a descriptor directly referencing the constant strings as located in the constant data section of the binary.
As a side note, you should check the error value to determine the success of the function call.
_LIT(KMyDllName, "mydll.dll");
_LIT(KMyDllPath, "c:\\sys\\bin\\");
RLibrary dll;
TInt res = dll.Load(KMyDllName, KMyDllPath); // Hopefully no Kern-Exec 3!
if (res == KErrNone)
{
    TLibraryFunction f = dll.Lookup(1);
    if (f)
        f();
}
// else handle error
The fact that you can't use static linkage should be a strong warning to you. It shows that there is something wrong with your DLL, and using dynamic linking won't change anything.
Usually in these cases the problem is mismatched capabilities. The DLL must have at least the same set of capabilities as your main program, and all of those capabilities must be covered by your developer certificate.