Symbian: kern-exec 3 panic on RLibrary::Load - dll

I am having trouble dynamically loading a library - my code panics with KERN-EXEC 3. The code is as follows:
TFileName dllName = _L("mydll.dll");
TFileName dllPath = _L("c:\\sys\\bin\\");
RLibrary dll;
TInt res = dll.Load(dllName, dllPath); // Kern-Exec 3!
TLibraryFunction f = dll.Lookup(1);
if (f)
    f();
I receive the panic on TInt res = dll.Load(dllName, dllPath);. What can I do to get rid of this panic? mydll.dll really is my DLL, which has only one exported function (for test purposes). Maybe something is wrong with the DLL? Here's what it looks like:
def file:
EXPORTS
_ZN4Init4InitEv # 1 NONAME
pkg file:
#{"mydll DLL"},(0xED3F400D),1,0,0
;Localised Vendor name
%{"Vendor-EN"}
;Unique Vendor name
:"Vendor"
"$(EPOCROOT)Epoc32\release\$(PLATFORM)\$(TARGET)\mydll.dll"-"!:\sys\bin\mydll.dll"
mmp file:
TARGET mydll.dll
TARGETTYPE dll
UID 0x1000008d 0xED3F400D
USERINCLUDE ..\inc
SYSTEMINCLUDE \epoc32\include
SOURCEPATH ..\src
SOURCE mydllDllMain.cpp
LIBRARY euser.lib
#ifdef ENABLE_ABIV2_MODE
DEBUGGABLE_UDEBONLY
#endif
EPOCALLOWDLLDATA
CAPABILITY CommDD LocalServices Location MultimediaDD NetworkControl NetworkServices PowerMgmt ProtServ ReadDeviceData ReadUserData SurroundingsDD SwEvent TrustedUI UserEnvironment WriteDeviceData WriteUserData
source code:
// Exported Functions
namespace Init
{
    EXPORT_C TInt Init()
    {
        // no implementation required
        return 0;
    }
}
header file:
#ifndef __MYDLL_H__
#define __MYDLL_H__
// Include Files
namespace Init
{
    IMPORT_C TInt Init();
}
#endif // __MYDLL_H__
I have no idea what is wrong here... Any help is greatly appreciated.
P.S. I'm trying RLibrary::Load because I have trouble with static linkage. When I link statically, my main program doesn't start at all. I decided to check what happens and discovered this issue with RLibrary::Load.

A KERN-EXEC 3 panic is caused by an unhandled exception (CPU fault) generated by trying to invalidly access a region of memory. The invalid access can be in code (for example, a bad PC due to stack corruption) or data (for example, accessing freed memory). As such these panics are typically observed when dereferencing a NULL pointer (it is equivalent to a segfault).
The call to RLibrary::Load itself should never raise a KERN-EXEC 3 due to a programmatic error; it is more likely an environmental issue, so I have to speculate on what is happening.
I believe the issue you are observing is due to stack overflow. Your MMP file does not specify the stack or heap size the initial thread should use, so the default of 4Kb (if I remember correctly) will be used. Equally, you are using TFileName objects - placing these on the stack is generally discouraged because they are large (over 512 bytes each), precisely to avoid... stack overflow.
You would be better off using the _LIT() macro instead - this allows you to pass RLibrary::Load a descriptor that directly references the constant strings located in the constant data section of the binary.
As a side note, you should check the error value to determine the success of the function call.
_LIT(KMyDllName, "mydll.dll");
_LIT(KMyDllPath, "c:\\sys\\bin\\");

RLibrary dll;
TInt res = dll.Load(KMyDllName, KMyDllPath); // Hopefully no Kern-Exec 3!
if (res == KErrNone)
    {
    TLibraryFunction f = dll.Lookup(1);
    if (f)
        f();
    }
// else handle the error in res

The fact that you can't use static linkage should be a strong warning: it shows that there is something wrong with your DLL, and switching to dynamic loading won't change anything.
Usually in these cases the problem is mismatched capabilities. The DLL must have at least the same set of capabilities as your main program, and all of those capabilities must be covered by your developer certificate.
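For illustration, here is a rough sketch (not from the original post) of how these failures typically show up in the value returned by RLibrary::Load; the error constants are the standard ones from e32err.h and the literals are only examples:
_LIT(KDllName, "mydll.dll");
_LIT(KDllPath, "c:\\sys\\bin\\");

RLibrary dll;
TInt res = dll.Load(KDllName, KDllPath);
switch (res)
    {
    case KErrNone:
        // Loaded fine - remember to dll.Close() when finished with it.
        break;
    case KErrPermissionDenied:
        // Typically a capability mismatch: the DLL grants fewer
        // capabilities than the loading process requires.
        break;
    case KErrNotFound:
    case KErrPathNotFound:
        // Wrong name or path, or the DLL is not in \sys\bin on the device.
        break;
    default:
        // Some other system-wide error code; handle or log it.
        break;
    }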

CoreText CopyFontsForRequest received mig IPC error

I've been working on a BIG project (there's no point in showing any actual code anyway) and I've noticed that the following message appears in the logs:
CoreText CopyFontsForRequest received mig IPC error (FFFFFFFFFFFFFECC) from font server
The error pops up as soon as a WebView has finished loading. And I kinda believe it's the culprit behind a tiny lag.
Why is that happening? What can I do to fix this?
P.S. Tried the suggested solution here to check whether it was something system-specific, but it didn't work.
More details:
The error appears when using the AMEditorAppearance.car NSAppearance file, from the Appearance Maker project. Disabling it (= not loading it at all) makes the error go away.
I don't really care about the error message, other than that it creates some weird issues with fonts. E.g. NSAlert panels with input fields show a noticeable flicker, and the font/text seems rather messed up, in a way I'm not sure I can accurately describe. (I could post a video of that if it'd help)
This is probably related to system font conflicts and can easily be fixed:
Open Font Book
Select all fonts
Go to the File menu and select "Validate Fonts"
Resolve all font conflicts (by removing duplicates).
Source: Andreas Wacker
The answer by @Abrax5 is excellent. I just wanted to add my experience with this problem, which could not fit into a comment:
As far as I can tell, this error is raised only on the first failed attempt to initialise an NSFont with a font name that is not available. NSFont initialisers are failable and will return nil in such a case, at which point you have an opportunity to do something about it.
You can check whether a font by a given name is available using:
NSFontDescriptor(fontAttributes: [NSFontNameAttribute: "<font name>"]).matchingFontDescriptorWithMandatoryKeys([NSFontNameAttribute]) != nil
Unfortunately, this also raises the error! The following method does not, but is deprecated:
let fontDescr = NSFontDescriptor(fontAttributes: [NSFontNameAttribute: "<font name>"])
let isAvailable = NSFontManager.sharedFontManager().availableFontNamesMatchingFontDescriptor(fontDescr)?.count ?? 0 > 0
So the only way I found of checking the availability of a font of a given name without raising that error is as follows:
public extension NSFont {
    private static let availableFonts = (NSFontManager.sharedFontManager().availableFonts as? [String]).map { Set($0) }

    public class func available(fontName: String) -> Bool {
        return NSFont.availableFonts?.contains(fontName) ?? false
    }
}
For example:
NSFont.available("Georgia") //--> true
NSFont.available("WTF?") //--> false
(I'm probably overly cautious with that optional constant there and if you are so inclined you can convert the returned [AnyObject] using as! [String]...)
Note that for the sake of efficiency this will not update until the app is started again, i.e. any fonts installed during the app's run will not be matched. If this is an important issue for your particular app, just turn the constant into a computed property:
public extension NSFont {
    private static var allAvailable: Set<String>? {
        return (NSFontManager.sharedFontManager().availableFonts as? [String]).map { Set($0) }
    }

    private static let allAvailableAtStart = allAvailable

    public class func available(fontName: String) -> Bool {
        return NSFont.allAvailable?.contains(fontName) ?? false
    }

    public class func availableAtStart(fontName: String) -> Bool {
        return NSFont.allAvailableAtStart?.contains(fontName) ?? false
    }
}
On my machine available(_:) takes 0.006s. Of course, availableAtStart(_:) takes virtually no time on all but the first call...
This is caused by calling NSFont fontWithFamily: from within Chromium's renderer process with a family name argument that is not available on the system. When Chromium's sandbox is active, this call triggers the CoreText error that you're observing.
It happens during matching CSS font family names against locally installed system fonts.
You were probably working on a Chromium-derived project. More info can be found in Chromium bug 452849.

OpenNI 1.5::Could not run code from documentation

I am trying to run a sample code from the OpenNI 1.5 documentation. I have included the required wrapper XnCppWrapper.h so that I can use C++. The code has only one error, on a particular variable, bShouldRun. I know that it should be declared as something, but since I am new at this and the documentation does not contain anything above main, I don't know what to declare it as. Please help!
And thanks in advance.
#include <XnOpenNI.h>
#include <XnCppWrapper.h>
#include <stdio.h>

int main()
{
    XnStatus nRetVal = XN_STATUS_OK;
    xn::Context context;

    // Initialize context object
    nRetVal = context.Init();
    // TODO: check error code

    // Create a DepthGenerator node
    xn::DepthGenerator depth;
    nRetVal = depth.Create(context);
    // TODO: check error code

    // Make it start generating data
    nRetVal = context.StartGeneratingAll();
    // TODO: check error code

    // Main loop
    while (bShouldRun) // <----------------------------- **ERROR: bShouldRun undefined**
    {
        // Wait for new data to be available
        nRetVal = context.WaitOneUpdateAll(depth);
        if (nRetVal != XN_STATUS_OK)
        {
            printf("Failed updating data: %s\n", xnGetStatusString(nRetVal));
            continue;
        }
        // Take current depth map
        const XnDepthPixel* pDepthMap = depth.GetDepthMap();
        // TODO: process depth map
    }

    // Clean-up
    context.Shutdown();
}
Here's what I did to run a sample from Visual Studio 2010 Express on Windows (8):
Opened the NiSimpleViewer.vcxproj VS2010 project from C:\Program Files (x86)\OpenNI\Samples\NiSimpleViewer
Edited OpenNI.rc to comment out #include "afxres.h" on line 10 (might be missing this because I'm using the Express version, not sure; your machine might compile this fine and not complain about the missing header file)
Enabled Tools > Options > Debugging > Symbols > Microsoft Symbol Servers (to get past missing pdb files issue)
Optionally edit SAMPLE_XML_PATH to "SamplesConfig.xml" rather than the default "../../../Data/SamplesConfig.xml"; otherwise you need to run the sample executable as ..\Bin\Debug\NiSimpleViewer.exe by navigating there rather than using Ctrl+F5. Also copy the SamplesConfig.xml file into your sample folder, as you can see below
Here are a few images to illustrate some of the above steps:
You can also compile the NiHandTracker sample, which sounds closer to what you need.
So this covers the setup for OpenNI 1.5, which is what your question is about.
I've noticed your OpenNI 2 lib issue in the comments. It should be a matter of linking against SimpleHandTracker.lib, which you can do via Project Properties (right-click the project -> select Properties) > Linker > Input > Additional Dependencies > Edit.
I don't have OpenNI 2 set up on this machine, but I assume SimpleHandTracker.lib would be in OpenNI_INSTALL_FOLDER\Lib. Try a file search in case I'm wrong.
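If you prefer to keep that in code rather than in the project settings, an MSVC pragma in a source file requests the same library at link time (a sketch under the same assumption about the library name; its folder still has to be on the linker's library search path):
// MSVC-specific alternative to editing Additional Dependencies by hand.
#pragma comment(lib, "SimpleHandTracker.lib")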

How to get access to WriteableBitmap.PixelBuffer pixels with C++?

There are a lot of samples for C#, but only a few code snippets for C++ on MSDN. I have put something together and I think it will work, but I am not sure whether I am releasing all the COM references I have to.
Your code is correct--the reference count on the IBufferByteAccess interface of *buffer is incremented by the call to QueryInterface, and you must call Release once to release that reference.
However, if you use ComPtr<T>, this becomes much simpler: ComPtr<T> does not let you call the three members of IUnknown (AddRef, Release, and QueryInterface) directly; instead, it encapsulates calls to these member functions in a way that makes it difficult to screw things up. Here's an example of how this would look:
// Get the buffer from the WriteableBitmap:
IBuffer^ buffer = bitmap->PixelBuffer;
// Convert from C++/CX to the ABI IInspectable*:
ComPtr<IInspectable> bufferInspectable(AsInspectable(buffer));
// Get the IBufferByteAccess interface:
ComPtr<IBufferByteAccess> bufferBytes;
ThrowIfFailed(bufferInspectable.As(&bufferBytes));
// Use it:
byte* pixels(nullptr);
ThrowIfFailed(bufferBytes->Buffer(&pixels));
The call to bufferInspectable.As(&bufferBytes) performs a safe QueryInterface: it computes the IID from the type of bufferBytes, performs the QueryInterface, and attaches the resulting pointer to bufferBytes. When bufferBytes goes out of scope, it will automatically call Release. The code has the same effect as yours, but without the error-prone explicit resource management.
The example uses the following two utilities, which help to keep the code clean:
auto AsInspectable(Object^ const object) -> Microsoft::WRL::ComPtr<IInspectable>
{
    return reinterpret_cast<IInspectable*>(object);
}

auto ThrowIfFailed(HRESULT const hr) -> void
{
    if (FAILED(hr))
        throw Platform::Exception::CreateException(hr);
}
Observant readers will notice that because this code uses a ComPtr for the IInspectable* we get from buffer, this code actually performs an additional AddRef/Release compared to the original code. I would argue that the chance of this impacting performance is minimal, and it's best to start from code that is easy to verify as correct, then optimize for performance once the hot spots are understood.
This is what I tried so far:
// Get the buffer from the WriteableBitmap
IBuffer^ buffer = bitmap->PixelBuffer;
// Get access to the base COM interface of the buffer (IUnknown)
IUnknown* pUnk = reinterpret_cast<IUnknown*>(buffer);
// Use IUnknown to get the IBufferByteAccess interface of the buffer to get access to the bytes
// This requires #include <Robuffer.h>
IBufferByteAccess* pBufferByteAccess = nullptr;
HRESULT hr = pUnk->QueryInterface(IID_PPV_ARGS(&pBufferByteAccess));
if (FAILED(hr))
{
    throw Platform::Exception::CreateException(hr);
}
// Get the pointer to the bytes of the buffer
byte *pixels = nullptr;
pBufferByteAccess->Buffer(&pixels);
// *** Do the work on the bytes here ***
// Release the reference to IBufferByteAccess created by QueryInterface.
// Perhaps this could be done before doing more work with the pixel buffer,
// but without the reference the buffer might get released or moved
// before you are done using it.
pBufferByteAccess->Release();
When using C++/WinRT (instead of C++/CX) there's a more convenient (and more dangerous) alternative. The language projection generates a data() helper function on the IBuffer interface that returns a uint8_t* into the memory buffer.
Assuming that bitmap is of type WriteableBitmap the code can be trimmed down to this:
uint8_t* pixels{ bitmap.PixelBuffer().data() };
// *** Do the work on the bytes here ***
// No cleanup required; it has already been dealt with inside data()'s implementation
In this code, pixels is a raw pointer into data controlled by the bitmap instance. As such it is only valid as long as bitmap is alive, but nothing in the code helps the compiler (or a reader) track that dependency.
For reference, there's an example in the WriteableBitmap::PixelBuffer documentation illustrating the use of the (otherwise undocumented) helper function data().
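As a minimal sketch of that lifetime rule (assuming C++/WinRT, a BGRA8 WriteableBitmap, and a purely illustrative fill of the alpha channel; FillOpaque is a made-up helper name):
#include <winrt/Windows.Storage.Streams.h>
#include <winrt/Windows.UI.Xaml.Media.Imaging.h>

using namespace winrt::Windows::UI::Xaml::Media::Imaging;

void FillOpaque(WriteableBitmap const& bitmap)
{
    // 'pixels' points into memory owned by 'bitmap'; it must not outlive it.
    uint8_t* pixels{ bitmap.PixelBuffer().data() };
    uint32_t length{ bitmap.PixelBuffer().Length() };

    for (uint32_t i = 0; i + 3 < length; i += 4)
        pixels[i + 3] = 0xFF;  // force each alpha byte to opaque (BGRA8 layout assumed)

    bitmap.Invalidate();       // request a redraw after touching the buffer
}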

Using XNA Math in a DLL Class

I'm having a problem using XNA Math in a DLL I'm creating. I have a class that lives in the DLL and is going to be exported. It has a member variable of type XMVECTOR. In the class constructor, I try to initialize the XMVECTOR, and I get an access violation reading location 0x0000000000.
The code runs something like this:
class DLLClass
{
public:
    DLLClass(void);
    ~DLLClass(void);

protected:
    XMVECTOR vect;
    XMMATRIX matr;
};

DLLClass::DLLClass(void)
{
    vect = XMLoadFloat3(&XMFLOAT3(0.0f, 0.0f, 0.0f)); // this is the line causing the access violation
}
Note that this class is in a DLL that is going to be exported. I do not know if this makes a difference, but it's some further info.
Also while I'm at it, I have another question:
I also get the warning: struct '_XMMATRIX' needs to have dll-interface to be used by clients of class 'DLLClass'
Is this fatal? If not, what does it mean and how can I get rid of it? Note that DLLClass is going to be exported, and the "clients" of DLLClass will probably use the variable matr.
Any help would be appreciated.
EDIT: just some further info: I've debugged the code line by line, and it seems that the error occurs when the return value of XMLoadFloat3 is assigned to vect.
This code is only legal if you are building as native x64, or if you use _aligned_malloc to ensure the memory for every instance of DLLClass is 16-byte aligned. x86 (32-bit) malloc and new only provide 8-byte alignment by default. You can 'get lucky', but it's not stable.
class DLLClass
{
public:
    DLLClass(void);
    ~DLLClass(void);

protected:
    XMVECTOR vect;
    XMMATRIX matr;
};
See DirectXMath Programming Guide, Getting Started
You have three choices:
Ensure DLLClass is always 16-byte aligned
Use XMFLOAT4 and XMFLOAT4X4 instead and do explicit loads/stores (see the sketch after this list)
Use the SimpleMath wrapper types in the DirectX Tool Kit instead, which handle the loads/stores for you.
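For instance, a rough sketch of the second option using the DirectXMath names (the class mirrors the one above; where exactly you convert is up to you):
#include <DirectXMath.h>
using namespace DirectX;

class DLLClass
{
public:
    DLLClass()
    {
        // Store plain (unaligned) float data in the members...
        XMStoreFloat3(&vect, XMVectorZero());
        XMStoreFloat4x4(&matr, XMMatrixIdentity());
    }

    // ...and convert to the SIMD types only where the math happens.
    XMVECTOR LoadVect() const { return XMLoadFloat3(&vect); }
    XMMATRIX LoadMatr() const { return XMLoadFloat4x4(&matr); }

protected:
    XMFLOAT3   vect;   // no 16-byte alignment requirement
    XMFLOAT4X4 matr;
};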
You shouldn't take the address of an anonymous variable:
vect = XMLoadFloat3(&XMFLOAT3(0.0f, 0.0f, 0.0f));
You need
XMFLOAT3 foo(0.0f, 0.0f, 0.0f);
vect = XMLoadFloat3(&foo);

C Callback in Objective-C (IOKIT)

I am trying to write some code that interacts with a USB device in Objective-C, and I got stuck on setting the callback function for incoming reports. In my case it's an IOKit function, but I think the problem is more general, as I (apparently) don't know how to correctly set a C callback function in Objective-C. I've got a class "USBController" that handles the I/O functions.
USBController.m:
#include <CoreFoundation/CoreFoundation.h>
#include <Carbon/Carbon.h>
#include <IOKit/hid/IOHIDLib.h>
#import "USBController.h"
static void Handle_IOHIDDeviceIOHIDReportCallback(
    void *          inContext,       // context from IOHIDDeviceRegisterInputReportCallback
    IOReturn        inResult,        // completion result for the input report operation
    void *          inSender,        // IOHIDDeviceRef of the device this report is from
    IOHIDReportType inType,          // the report type
    uint32_t        inReportID,      // the report ID
    uint8_t *       inReport,        // pointer to the report data
    CFIndex         inReportLength)  // the actual size of the input report
{
    printf("hello"); // just to see if the function is called
}

@implementation USBController

- (void)ConnectToDevice {
    ...
    IOHIDDeviceRegisterInputReportCallback(tIOHIDDeviceRefs[0], report, reportSize,
                                           Handle_IOHIDDeviceIOHIDReportCallback, (void*)self);
    ...
}

...

@end
All the functions are also declared in the header file.
I think I did pretty much the same as what I found here, but it doesn't work. The project compiles nicely, and everything works up until the moment there is input and the callback function is about to be called. Then I get an "EXC_BAD_ACCESS" error. The first three arguments of the function are correct. I'm not so sure about the context.
What did I do wrong?
I am not at all sure that your EXC_BAD_ACCESS comes from your callback. Indeed, if it really is called (I suppose you see the log output), and since it only prints a message, there should be no problem with the callback itself.
EXC_BAD_ACCESS is caused by an attempt to access an already deallocated object. You can get more information in two ways:
execute the program in debug mode, so when it crashes you will be able to see the stack content;
activate NSZombies or run the program using the performance tool Zombies; this will tell you exactly which object was accessed after its deallocation.
I know how to fix this. When calling this:
IOHIDDeviceRegisterInputReportCallback(tIOHIDDeviceRefs[0], report, reportSize,
Handle_IOHIDDeviceIOHIDReportCallback,(void*)self);
You don't include the code that creates the value called report, or its type. However, the callback name "Handle_IOHIDDeviceIOHIDReportCallback" comes from an Apple technote where there is an error in the creation of the report value: https://developer.apple.com/library/archive/technotes/tn2187/_index.html
CFIndex reportSize = 64;
uint8_t report = malloc( reportSize );   // <---- WRONG
IOHIDDeviceRegisterInputReportCallback( deviceRef,
                                        report,
                                        reportSize,
                                        Handle_IOHIDDeviceIOHIDReportCallback,
                                        context );
Instead do this:
uint8_t *report = (uint8_t *)malloc(reportSize);
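Put together, the corrected registration might look roughly like this (deviceRef, context and the report size stand in for your own values; the buffer must stay alive for as long as reports can arrive):
CFIndex reportSize = 64;
uint8_t *report = (uint8_t *)malloc(reportSize);   // heap buffer, not a single byte

IOHIDDeviceRegisterInputReportCallback(deviceRef,
                                       report,     // IOKit writes each incoming report here
                                       reportSize,
                                       Handle_IOHIDDeviceIOHIDReportCallback,
                                       context);   // e.g. (void*)self, handed back as inContext

// ... later, after unregistering the callback / closing the device:
free(report);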