I'm having a problem using XNA Math in a DLL I'm creating. I have a class that lives in a DLL and is going to be exported. It has a member variable of type XMVECTOR. In the class constructor I try to initialize the XMVECTOR, and I get an access violation reading location 0x0000000000.
The code runs something like this:
class DLLClass
{
public:
    DLLClass(void);
    ~DLLClass(void);
protected:
    XMVECTOR vect;
    XMMATRIX matr;
};
DLLClass::DLLClass(void)
{
    vect = XMLoadFloat3(&XMFLOAT3(0.0f, 0.0f, 0.0f)); // this is the line causing the access violation
}
Note that this class is in a DLL that is going to be exported. I do not know whether this makes a difference, but it's just some further info.
Also while I'm at it, I have another question:
I also get the warning: struct '_XMMATRIX' needs to have dll-interface to be used by clients of class 'DLLClass'
Is this fatal? If not, what does it mean and how can I get rid of it? Note that DLLClass is going to be exported, and the "clients" of DLLClass are probably going to use the variable 'matr'.
Any help would be appreciated.
EDIT: just some further info: I've debugged the code line by line, and the error occurs when the return value of XMLoadFloat3 is assigned to vect.
This code is only legal if you are building for x64 native, or if you use _aligned_malloc to ensure the memory for every instance of DLLClass is 16-byte aligned. The x86 (32-bit) malloc and new only provide 8-byte alignment by default. You can 'get lucky', but it's not stable.
class DLLClass
{
public:
    DLLClass(void);
    ~DLLClass(void);
protected:
    XMVECTOR vect;
    XMMATRIX matr;
};
See DirectXMath Programming Guide, Getting Started
You have three choices:
Ensure DLLClass is always 16-byte aligned
Use XMFLOAT4 and XMFLOAT4X4 instead and do explicit load/stores (see the sketch after this list)
Use the SimpleMath wrapper types in DirectX Tool Kit instead which handle the loads/stores for you.
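As an illustration of the second option, here's a minimal sketch (my own, not from the original answer): the class stores the unaligned XMFLOAT4/XMFLOAT4X4 types and only converts to the aligned SIMD types in locals, so DLLClass itself has no 16-byte alignment requirement.

#include <DirectXMath.h>
using namespace DirectX;

class DLLClass
{
public:
    DLLClass()
    {
        // Store plain structs; no alignment guarantee is needed for 'this'.
        XMStoreFloat4(&vect, XMVectorZero());
        XMStoreFloat4x4(&matr, XMMatrixIdentity());
    }

    void Translate(float x, float y, float z)
    {
        // Load into SIMD locals, do the math, store the result back.
        XMVECTOR v = XMLoadFloat4(&vect);
        v = XMVectorAdd(v, XMVectorSet(x, y, z, 0.0f));
        XMStoreFloat4(&vect, v);
    }

protected:
    XMFLOAT4   vect;   // unaligned storage type
    XMFLOAT4X4 matr;   // unaligned storage type
};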
You shouldn't take the address of an anonymous (temporary) object:
vect = XMLoadFloat3(&XMFLOAT3(0.0f, 0.0f, 0.0f));
You need
XMFLOAT3 foo(0.0f, 0.0f, 0.0f);
vect = XMLoadFloat3(&foo);
It is possible to cast a managed array<Byte>^ to some non-managed struct only using pin_ptr, AFAIK, like:
void Example(array<Byte>^ bfr) {
    pin_ptr<Byte> ptr = &bfr[0];
    auto data = reinterpret_cast<NonManagedStruct*>(ptr);
    data->Header = 7;
    data->Length = sizeof(data);
    data->CRC = CalculateCRC(data);
}
However, is this possible with interior_ptr in any way?
I'd rather work on managed data the low-level-way (using unions, struct-bit-fields, and so on), without pinning data - I could be holding this data for quite a long time and don't want to harass the GC.
Clarification:
I do not want to copy managed-data to native and back (so the Marshaling way is not an option here...)
You likely won't harass the GC with pin_ptr - it's pretty lightweight unlike GCHandle.
GCHandle::Alloc(someObject, GCHandleType::Pinned) will actually register the object as being pinned in the GC. This lets you pin an object for extended periods of time and across function calls, but the GC has to track that object.
On the other hand, pin_ptr gets translated to a pinned local in IL code. The GC isn't notified about it, but it will get to see that the object is pinned only during a collection. That is, it will notice its pinned status when looking for object references on the stack.
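To make the difference concrete, here's a minimal sketch of both styles (my own; NonManagedStruct stands in for the hypothetical native struct from the question):

using namespace System;
using namespace System::Runtime::InteropServices;

// Hypothetical native layout, as in the question.
struct NonManagedStruct
{
    int Header;
    int Length;
};

void ScopedPin(array<Byte>^ bfr)
{
    // pin_ptr: a pinned local; the pin ends when 'ptr' goes out of scope.
    pin_ptr<Byte> ptr = &bfr[0];
    auto data = reinterpret_cast<NonManagedStruct*>(ptr);
    data->Header = 7;
}

void LongLivedPin(array<Byte>^ bfr)
{
    // GCHandle: the object is registered with the GC as pinned and stays
    // pinned across calls until the handle is explicitly freed.
    GCHandle handle = GCHandle::Alloc(bfr, GCHandleType::Pinned);
    try
    {
        auto data = reinterpret_cast<NonManagedStruct*>(handle.AddrOfPinnedObject().ToPointer());
        data->Length = sizeof(NonManagedStruct);
    }
    finally
    {
        handle.Free();
    }
}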
If you really want to, you can access stack memory in the following way:
[StructLayout(LayoutKind::Explicit, Size = 256)]   // requires: using namespace System::Runtime::InteropServices;
public value struct ManagedStruct
{
};

struct NativeStruct
{
    char data[256];
};

static void DoSomething()
{
    ManagedStruct managed;
    auto nativePtr = reinterpret_cast<NativeStruct*>(&managed);
    nativePtr->data[42] = 42;
}
There's no pinning at all here, but this is only due to the fact that the managed struct is stored on the stack, and therefore is not relocatable in the first place.
It's a convoluted example, because you could just write:
static void DoSomething()
{
    NativeStruct native;
    native.data[42] = 42;
}
...and the compiler would perform a similar trick under the covers for you.
I've been working on a BIG project (there's no point in showing any actual code anyway) and I've noticed that the following message appears in the logs:
CoreText CopyFontsForRequest received mig IPC error (FFFFFFFFFFFFFECC) from font server
The error pops up as soon as a WebView has finished loading. And I kinda believe it's the culprit behind a tiny lag.
Why is that happening? What can I do to fix this?
P.S. Tried the suggested solution here to check whether it was something system-specific, but it didn't work.
More details:
The error appears when using the AMEditorAppearance.car NSAppearance file, from the Appearance Maker project. Disabling it (= not loading it all) makes the error go away.
I don't really care about the error message itself, other than that it creates some weird issues with fonts. E.g. NSAlert panels with input fields show a noticeable flicker, and the font/text seems rather messed up in a way I'm not sure I can accurately describe. (I could post a video of that if it would help.)
This is probably related to system font conflicts and can easily be fixed:
Open Font Book
Select all fonts
Go to the File menu and select "Validate Fonts"
Resolve all font conflicts (by removing duplicates).
Source: Andreas Wacker
The answer by @Abrax5 is excellent. I just wanted to add my experience with this problem and could not fit it into a comment:
As far as I can tell, this error is raised only on the first failed attempt to initialise an NSFont with a font name that is not available. NSFont initialisers are failable and will return nil in such a case at which time you have an opportunity to do something about it.
You can check whether a font by a given name is available using:
NSFontDescriptor(fontAttributes: [NSFontNameAttribute: "<font name>"]).matchingFontDescriptorWithMandatoryKeys([NSFontNameAttribute]) != nil
Unfortunately, this also raises the error! The following method does not, but is deprecated:
let fontDescr = NSFontDescriptor(fontAttributes: [NSFontNameAttribute: "<font name>"])
let isAvailable = NSFontManager.sharedFontManager().availableFontNamesMatchingFontDescriptor(fontDescr)?.count ?? 0 > 0
So the only way I found of checking the availability of a font of a given name without raising that error is as follows:
public extension NSFont {
    private static let availableFonts = (NSFontManager.sharedFontManager().availableFonts as? [String]).map { Set($0) }

    public class func available(fontName: String) -> Bool {
        return NSFont.availableFonts?.contains(fontName) ?? false
    }
}
For example:
NSFont.available("Georgia") //--> true
NSFont.available("WTF?") //--> false
(I'm probably overly cautious with that optional constant there and if you are so inclined you can convert the returned [AnyObject] using as! [String]...)
Note that for the sake of efficiency this will not update until the app is started again, i.e. any fonts installed during the app's run will not be matched. If this is an important issue for your particular app, just turn the constant into a computed property:
public extension NSFont {
    private static var allAvailable: Set<String>? {
        return (NSFontManager.sharedFontManager().availableFonts as? [String]).map { Set($0) }
    }

    private static let allAvailableAtStart = allAvailable

    public class func available(fontName: String) -> Bool {
        return NSFont.allAvailable?.contains(fontName) ?? false
    }

    public class func availableAtStart(fontName: String) -> Bool {
        return NSFont.allAvailableAtStart?.contains(fontName) ?? false
    }
}
On my machine available(:) takes 0.006s. Of course, availableAtStart(:) takes virtually no time on all but the first call...
This is caused by calling NSFont fontWithFamily: with a family name argument which is not available on the system from within Chromium's renderer process. When Chromium's sandbox is active this call triggers the CoreText error that you're observing.
It happens during matching CSS font family names against locally installed system fonts.
Probably you were working on a Chromium-derived project. More info can be found in Chromium Bug 452849.
There are a lot of samples for C#, but only a few code snippets for C++ on MSDN. I have put something together and I think it will work, but I am not sure whether I am releasing all the COM references I need to.
Your code is correct--the reference count on the IBufferByteAccess interface of *buffer is incremented by the call to QueryInterface, and you must call Release once to release that reference.
However, if you use ComPtr<T>, this becomes much simpler--with ComPtr<T>, you cannot call any of the three members of IUnknown (AddRef, Release, and QueryInterface); it prevents you from calling them. Instead, it encapsulates calls to these member functions in a way that makes it difficult to screw things up. Here's an example of how this would look:
// Get the buffer from the WriteableBitmap:
IBuffer^ buffer = bitmap->PixelBuffer;
// Convert from C++/CX to the ABI IInspectable*:
ComPtr<IInspectable> bufferInspectable(AsInspectable(buffer));
// Get the IBufferByteAccess interface:
ComPtr<IBufferByteAccess> bufferBytes;
ThrowIfFailed(bufferInspectable.As(&bufferBytes));
// Use it:
byte* pixels(nullptr);
ThrowIfFailed(bufferBytes->Buffer(&pixels));
The call to bufferInspectable.As(&bufferBytes) performs a safe QueryInterface: it computes the IID from the type of bufferBytes, performs the QueryInterface, and attaches the resulting pointer to bufferBytes. When bufferBytes goes out of scope, it will automatically call Release. The code has the same effect as yours, but without the error-prone explicit resource management.
The example uses the following two utilities, which help to keep the code clean:
auto AsInspectable(Object^ const object) -> Microsoft::WRL::ComPtr<IInspectable>
{
    return reinterpret_cast<IInspectable*>(object);
}

auto ThrowIfFailed(HRESULT const hr) -> void
{
    if (FAILED(hr))
        throw Platform::Exception::CreateException(hr);
}
Observant readers will notice that because this code uses a ComPtr for the IInspectable* we get from buffer, this code actually performs an additional AddRef/Release compared to the original code. I would argue that the chance of this impacting performance is minimal, and it's best to start from code that is easy to verify as correct, then optimize for performance once the hot spots are understood.
This is what I tried so far:
// Get the buffer from the WriteableBitmap
IBuffer^ buffer = bitmap->PixelBuffer;
// Get access to the base COM interface of the buffer (IUnknown)
IUnknown* pUnk = reinterpret_cast<IUnknown*>(buffer);
// Use IUnknown to get the IBufferByteAccess interface of the buffer to get access to the bytes
// This requires #include <Robuffer.h>
IBufferByteAccess* pBufferByteAccess = nullptr;
HRESULT hr = pUnk->QueryInterface(IID_PPV_ARGS(&pBufferByteAccess));
if (FAILED(hr))
{
    throw Platform::Exception::CreateException(hr);
}
// Get the pointer to the bytes of the buffer
byte *pixels = nullptr;
pBufferByteAccess->Buffer(&pixels);
// *** Do the work on the bytes here ***
// Release reference to IBufferByteAccess created by QueryInterface.
// Perhaps this might be done before doing more work with the pixels buffer,
// but it's possible that without it - the buffer might get released or moved
// by the time you are done using it.
pBufferByteAccess->Release();
When using C++/WinRT (instead of C++/CX) there's a more convenient (and more dangerous) alternative. The language projection generates a data() helper function on the IBuffer interface that returns a uint8_t* into the memory buffer.
Assuming that bitmap is of type WriteableBitmap the code can be trimmed down to this:
uint8_t* pixels{ bitmap.PixelBuffer().data() };
// *** Do the work on the bytes here ***
// No cleanup required; it has already been dealt with inside data()'s implementation
In the code pixels is a raw pointer into data controlled by the bitmap instance. As such it is only valid as long as bitmap is alive, but there is nothing in the code that helps the compiler (or a reader) track that dependency.
For reference, there's an example in the WriteableBitmap::PixelBuffer documentation illustrating the use of the (otherwise undocumented) helper function data().
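To put that in context, here's a minimal hedged C++/WinRT sketch (the includes, namespaces, and helper function are the usual ones, not taken from the answer above):

#include <winrt/Windows.Storage.Streams.h>
#include <winrt/Windows.UI.Xaml.Media.Imaging.h>

using namespace winrt;
using namespace winrt::Windows::UI::Xaml::Media::Imaging;

void FillWithGray(WriteableBitmap const& bitmap)
{
    // data() exposes a raw uint8_t* into the pixel buffer; it is only valid
    // while 'bitmap' (and the IBuffer it owns) stays alive.
    uint8_t* pixels{ bitmap.PixelBuffer().data() };
    uint32_t length{ bitmap.PixelBuffer().Length() };

    for (uint32_t i = 0; i < length; ++i)
    {
        pixels[i] = 0x80;   // BGRA bytes, all set to mid-gray
    }

    bitmap.Invalidate();    // ask the bitmap to redraw from its buffer
}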
I'm having trouble with dynamic loading of libraries: my code panics with KERN-EXEC 3. The code is as follows:
TFileName dllName = _L("mydll.dll");
TFileName dllPath = _L("c:\\sys\\bin\\");
RLibrary dll;
TInt res = dll.Load(dllName, dllPath); // Kern-Exec 3!
TLibraryFunction f = dll.Lookup(1);
if (f)
    f();
I receive the panic on TInt res = dll.Load(dllName, dllPath); What can I do to get rid of it? mydll.dll really is my DLL; it has only one exported function (for test purposes). Maybe something is wrong with the DLL itself? Here's what it looks like:
def file:
EXPORTS
_ZN4Init4InitEv # 1 NONAME
pkg file:
#{"mydll DLL"},(0xED3F400D),1,0,0
;Localised Vendor name
%{"Vendor-EN"}
;Unique Vendor name
:"Vendor"
"$(EPOCROOT)Epoc32\release\$(PLATFORM)\$(TARGET)\mydll.dll"-"!:\sys\bin\mydll.dll"
mmp file:
TARGET mydll.dll
TARGETTYPE dll
UID 0x1000008d 0xED3F400D
USERINCLUDE ..\inc
SYSTEMINCLUDE \epoc32\include
SOURCEPATH ..\src
SOURCE mydllDllMain.cpp
LIBRARY euser.lib
#ifdef ENABLE_ABIV2_MODE
DEBUGGABLE_UDEBONLY
#endif
EPOCALLOWDLLDATA
CAPABILITY CommDD LocalServices Location MultimediaDD NetworkControl NetworkServices PowerMgmt ProtServ ReadDeviceData ReadUserData SurroundingsDD SwEvent TrustedUI UserEnvironment WriteDeviceData WriteUserData
source code:
// Exported Functions
namespace Init
{
    EXPORT_C TInt Init()
    {
        // no implementation required
        return 0;
    }
}
header file:
#ifndef __MYDLL_H__
#define __MYDLL_H__
// Include Files
namespace Init
{
IMPORT_C TInt Init();
}
#endif // __MYDLL_H__
I have no ideas about this... Any help is greatly appreciated.
P.S. I'm trying to do RLibrary::Load because I have troubles with static linkage. When I do static linkage, my main program doesn't start at all. I decided to check what happens and discovered this issue with RLibrary::Load.
A KERN-EXEC 3 panic is caused by an unhandled exception (CPU fault) generated by trying to invalidly access a region of memory. This invalid memory access can be for both code (for example, bad PC by stack corruption) or data (for example, accessing freed memory). As such these are typically observed when dereferencing a NULL pointer (it is equivalent to a segfault).
Certainly the call to RLibrary::Load should never raise a KERN-EXEC 3 due to a programmatic error, so it is likely to be an environmental issue. As such I have to speculate on what is happening.
I believe the issue you are observing is a stack overflow. Your MMP file does not specify the stack or heap size the initial thread should use, so the default of 4 KB (if I remember correctly) will be used. Equally, you are using TFileName objects on the stack; keeping these large descriptors on the stack is generally not recommended, precisely to avoid... stack overflow.
You would be better off using the _LIT() macro instead - this will allow you to provide the RLibrary::Load function with a descriptor directly referencing the constant strings as located in the constant data section of the binary.
As a side note, you should check the error value to determine the success of the function call.
_LIT(KMyDllName, "mydll.dll");
_LIT(KMyDllPath, "c:\\sys\\bin\\");

RLibrary dll;
TInt res = dll.Load(KMyDllName, KMyDllPath); // Hopefully no KERN-EXEC 3!
if (res == KErrNone)
{
    TLibraryFunction f = dll.Lookup(1);
    if (f)
        f();
}
// else handle error
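If the default stack size does turn out to be the limit, it can also be raised in the application's (EXE) MMP file; here is a sketch with illustrative values (the keywords are standard MMP keywords, the numbers are just examples):

// In the EXE's .mmp file (not the DLL's):
EPOCSTACKSIZE 0x5000            // 20 KB stack for the main thread
EPOCHEAPSIZE  0x1000 0x100000   // 4 KB minimum, 1 MB maximum heap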
The fact that you can't use static linkage should be a strong warning to you. It shows that there is something wrong with your DLL, and using dynamic linking won't change anything.
Usually in these cases the problem is mismatched capabilities. The DLL must have at least the same set of capabilities as your main program, and all of those capabilities must be covered by your developer certificate.
I would like to keep a static counter in a garbage collected class and increment it using Interlocked::Increment. What's the C++/CLI syntax to do this?
I've been trying variations on the following, but no luck so far:
ref class Foo
{
    static __int64 _counter;

    __int64 Next()
    {
        return System::Threading::Interlocked::Increment( &_counter );
    }
};
You need to use a tracking reference to your __int64 value, using the % tracking reference notation:
ref class Bar
{
    static __int64 _counter;

    __int64 Next()
    {
        __int64 %trackRefCounter = _counter;
        return System::Threading::Interlocked::Increment(trackRefCounter);
    }
};
Just remove the address-of operator:
return System::Threading::Interlocked::Increment( _counter );
In C++/CLI, like C++, there is no special notation for passing by reference.
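Putting that together, the corrected method would look like this (a sketch based on the question's Foo class):

ref class Foo
{
    static __int64 _counter;

    __int64 Next()
    {
        // Interlocked::Increment takes the managed field by tracking
        // reference, so no address-of operator is needed.
        return System::Threading::Interlocked::Increment( _counter );
    }
};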
Or you could use the native function InterlockedIncrement64 (#include <windows.h>):
return ::InterlockedIncrement64(&_counter);
The suggestion to use the native functions/macros (e.g. InterlockedExchangePointer, plus a lot of cool ones I didn't know about such as InterlockedXor64) is severely hampered by the fact that doing so can cause an intrinsic (at least with the default compiler settings) to be dropped into your managed C++/CLI function. When this happens, your whole function will be compiled as native.
And the managed versions of Interlocked::* are also nice because you don't have to pin_ptr if the target is in a GC object. However, as noted on this page, it can be a real pain to find the right incantations for getting it to work, especially when you want to swap, (i.e) native pointers themselves. Here's how:
int i1 = 1, i2 = 2;
int *pi1 = &i1, *pi2 = &i2, *pi3;
pi3 = (int*)Interlocked::Exchange((IntPtr%)pi1, IntPtr(pi2)).ToPointer();
I verified that this does work properly, despite the suspiciously unnerving lack of address-taking (&) on the pi1 pointer. It makes sense when you think about it: if the target is moving around in the GC heap, you wouldn't want to grab the native (&) address of the pointer itself, as you would with the usual ** idiom.