Can't access memory that was malloc()'d in a DLL

Dear all,
Recently I ran into a general issue. I am writing a DLL, which will be used/invoked by another program. It is structured like this:
DLLEXT long AMI_Init(void **AMI_dll_memory)
{
    mem = (struct model_memory*)malloc(sizeof(struct model_memory));
    mem->submem = (struct submem*)malloc(sizeof(struct submem));
    ......
    *AMI_dll_memory = (void*)mem;
}
DLLEXT long AMI_Get(void *AMI_dll_memory)
{
    ....
    mem = (struct model_memory*)AMI_dll_memory;
    mem->submem->init();
}
// Defined in the submem module
struct list {
    int data;
    struct list* next;
};
void init()
{
    struct list* n;
    n = (struct list*)malloc(sizeof(struct list));
    // accessing n->data causes a memory access violation
}
Another program will invoke AMI_Init() first, then call AMI_Get(), passing AMI_dll_memory in between. But I get an access violation once I try to access "n->data" inside "mem->submem->init()". Why is that? I confirmed the allocation succeeds, since malloc() returns successfully, but I just can't access n->data. Does anyone know why? Does n->data no longer belong to the current process after the memory is passed between functions? Thanks so much.

Guys,
I figured out what is going on here. It is related to the implicit declaration of the malloc() function: in the second place where I malloc memory, I didn't include stdlib.h in that file, which caused the issue.
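For the record, a minimal sketch of why the missing header causes the crash (assuming the second file is the submem module shown above): under C89 rules, calling malloc() without a declaration implicitly declares it as returning int, so on a 64-bit Windows build the returned pointer is truncated to 32 bits before the cast, and dereferencing it faults.
/* submem.c - the file that was missing the include. Without it, C89
 * implicitly declares malloc() as returning int; on a 64-bit build the
 * pointer is truncated to 32 bits and dereferencing it causes the
 * access violation described above. */
#include <stdlib.h>

struct list {
    int data;
    struct list* next;
};

void init()
{
    struct list* n = (struct list*)malloc(sizeof(struct list));
    if (n != NULL)
        n->data = 0;   /* safe now: the full 64-bit pointer is preserved */
}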

Related

fatfs f_write returns FR_DISK_ERR when passing a pointer to data in a mail queue

I'm trying to use FreeRTOS to write ADC data to an SD card on the STM32F7, and I'm using V1 of the CMSIS-RTOS API. I'm using mail queues, and I have a struct that holds an array.
typedef struct
{
    uint16_t data[2048];
} ADC_DATA;
On the ADC half/full-complete interrupts, I add the data to the queue, and I have a consumer task that writes this data to the SD card. My issue is that in my consumer task I have to memcpy the data to another array and then write the contents of that array to the SD card.
void vConsumer(void const * argument)
{
    ADC_DATA *rx_data;
    for(;;)
    {
        writeEvent = osMailGet(adcDataMailId, osWaitForever);
        if(writeEvent.status == osEventMail)
        {
            // write data to SD
            rx_data = writeEvent.value.p;
            memcpy(sd_buff, rx_data->data, sizeof(sd_buff));
            if(wav_write_result == FR_OK)
            {
                if( f_write(&wavFile, (uint8_t *)sd_buff, SD_WRITE_BUF_SIZE, (void*)&bytes_written) == FR_OK)
                {
                    file_size += bytes_written;
                }
            }
            osMailFree(adcDataMailId, rx_data);
        }
    }
}
This works as intended, but if I try to change this line to
f_write(&wavFile, (uint8_t *)rx_data->data, SD_WRITE_BUF_SIZE, (void*)&bytes_written) == FR_OK)
so as to get rid of the memcpy, f_write returns FR_DISK_ERR. Can anyone help shine a light on why this happens? I feel like the extra memcpy is useless; you should just be able to pass the pointer from the queue straight to f_write.
So just a few thoughts here:
memcpy
Usually I copy only the necessary amount of data. If I have the size of the actual data, I'll add a bounds check and pass it to memcpy.
Your problem
I am just guessing here, but if you check the struct definition, the data field has the type uint16_t and you cast it to a byte pointer. Also, the FatFs documentation expects a void* for the type of buf.
EDIT: Could you post more details of sd_buff?
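For reference, a hedged sketch of the direct-write variant the question is aiming for (assuming bytes_written is a UINT as FatFs expects, and that the byte count is indeed the issue): f_write() takes a const void* buffer and a byte count, so the cast is unnecessary, and the count must cover the whole uint16_t array in bytes.
// Hypothetical consumer body writing the queue buffer directly.
// sizeof(rx_data->data) is 2048 * 2 = 4096 bytes - the byte count,
// not the element count, is what f_write() needs.
rx_data = writeEvent.value.p;
if (wav_write_result == FR_OK)
{
    if (f_write(&wavFile, rx_data->data, sizeof(rx_data->data), &bytes_written) == FR_OK)
    {
        file_size += bytes_written;
    }
}
osMailFree(adcDataMailId, rx_data);   // free the block only after the write has completed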

Invalid pointer operation when freeing TStreamAdapter

Can anyone clarify why I get an "Invalid pointer operation" error when I attempt to delete a TStreamAdapter? Or, how do I properly free the memory of a TStreamAdapter? It works if I remove the delete, but that causes a memory leak. Even if I use boost::scoped_ptr, it fails with the same error.
Note: I also tried initializing TStreamAdapter with the soOwned value; same error.
The code:
HRESULT LoadFromStr(TWebBrowser* WB, const UnicodeString& HTML)
{
    if (!WB->Document)
    {
        WB->Navigate("about:blank");
        while (!WB->Document) { Application->ProcessMessages(); }
    }
    DelphiInterface<IHTMLDocument2> diDoc = WB->Document;
    if (diDoc)
    {
        boost::scoped_ptr<TMemoryStream> ms(new TMemoryStream);
        {
            boost::scoped_ptr<TStringList> sl(new TStringList);
            sl->Text = HTML;
            sl->SaveToStream(ms.get(), TEncoding::Unicode);
            ms->Position = 0;
        }
        DelphiInterface<IPersistStreamInit> diPSI;
        if (SUCCEEDED(diDoc->QueryInterface(IID_IPersistStreamInit, (void**)&diPSI)) && diPSI)
        {
            TStreamAdapter* sa = new TStreamAdapter(ms.get(), soReference);
            diPSI->Load(*sa);
            delete sa; // <-- invalid pointer operation here???
            // UPDATED (solution) - instead of the above!!!
            // DelphiInterface<IStream> sa(*(new TStreamAdapter(ms.get(), soReference)));
            // diPSI->Load(sa);
            // DelphiInterface is automatically freed at the end of the function
            return S_OK;
        }
    }
    return E_FAIL;
}
Update: I found the solution here - http://www.cyberforum.ru/cpp-builder/thread743255.html
The solution is to use
_di_IStream sa(*(new TStreamAdapter(ms.get(), soReference)));
or...
DelphiInterface<IStream> sa(*(new TStreamAdapter(ms.get(), soReference)));
as it will automatically free the IStream once it goes out of scope. At least it should - is there a possible memory leak here? (CodeGuard did not detect any memory leaks.)
TStreamAdapter is a TInterfacedObject descendant, which implements reference-counting semantics. You are not supposed to delete it at all; you need to let reference counting free the object when it is no longer being referenced by anyone.
Using _di_IStream (which is just an alias for DelphiInterface<IStream>) is the correct way to automate that with a smart pointer. TComInterface<IStream> and CComPtr<IStream> would also work.
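Applied to the function above, the fix amounts to the two lines from the update (a sketch of just the changed portion; everything around it stays the same):
// Let reference counting manage the adapter: _di_IStream takes a
// reference on construction and releases it when it goes out of scope;
// the adapter then frees itself once the last reference (including any
// taken internally by diPSI->Load()) is gone.
_di_IStream sa(*(new TStreamAdapter(ms.get(), soReference)));
diPSI->Load(sa);
return S_OK;   // no delete: the object manages its own lifetime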

Proper usage of C++/CLI pin_ptr

I am a newbie to C++/CLI.
I already know that pin_ptr's purpose is to keep the GC from relocating the specified object.
Now let me show you MSDN's example:
https://msdn.microsoft.com/en-us//library/1dz8byfh.aspx
// pin_ptr_1.cpp
// compile with: /clr
using namespace System;
#define SIZE 10

#pragma unmanaged
// native function that initializes an array
void native_function(int* p) {
    for(int i = 0 ; i < 10 ; i++)
        p[i] = i;
}

#pragma managed
public ref class A {
private:
    array<int>^ arr;   // CLR integer array

public:
    A() {
        arr = gcnew array<int>(SIZE);
    }

    void load() {
        pin_ptr<int> p = &arr[0];   // pin pointer to first element in arr
        int* np = p;                // pointer to the first element in arr
        native_function(np);        // pass pointer to native function
    }

    int sum() {
        int total = 0;
        for (int i = 0 ; i < SIZE ; i++)
            total += arr[i];
        return total;
    }
};

int main() {
    A^ a = gcnew A;
    a->load();   // initialize managed array using the native function
    Console::WriteLine(a->sum());
}
Here is the question:
Isn't it okay if the passed object (arr) is not pinned, since the unmanaged code (native_function) is a synchronous operation and finishes before the C++/CLI code (load) returns?
Is there any chance the GC destroys arr even while the main logic is running? (I think A is main's stack variable and arr is A's member variable, so while main is running it should be visible.)
If so, how can we guarantee that A is still there before invoking load? (Only while not running in native code?)
int main() {
    A^ a = gcnew A;
    // I think A or arr could be destroyed here, if it can be destroyed in native_function.
    a->load();
    ...
}
Thanks in advance.
The problem that is solved by pinning a pointer is not a normal concurrency issue. It might be true that no other thread will preempt the execution of your native function. However, you have to count in the garbage collector, which might kick in whenever the .NET runtime sees fit. For instance, the system might run low on memory, so the runtime decides to collect unreachable objects and compact the heap. This might happen while your native function executes, and the garbage collector might relocate the array it is using, so the pointer you passed in is no longer valid.
The golden rule of thumb is to pin ALL array pointers and ALL string pointers before passing them to a native function. ALWAYS. Don't think about it, just do it as a rule. Your code might work fine for a long time without pinning, but one day bad luck will strike you just when it's most annoying.
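To illustrate the rule for strings as well (a sketch using the standard PtrToStringChars helper from <vcclr.h>; native_print is a hypothetical native function):
#include <vcclr.h>   // PtrToStringChars
using namespace System;

void native_print(const wchar_t* s);   // hypothetical native function

void Show(String^ msg)
{
    // Pin the string's character buffer so the GC cannot relocate it
    // while the native call executes.
    pin_ptr<const wchar_t> p = PtrToStringChars(msg);
    native_print(p);
}   // the pin is released when p goes out of scope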

Is it possible to cast a managed byte array to a native struct without pin_ptr, so as not to bug the GC too much?

It is possible to cast a managed array<Byte>^ to some non-managed struct only by using pin_ptr, AFAIK, like:
void Example(array<Byte>^ bfr) {
    pin_ptr<Byte> ptr = &bfr[0];
    auto data = reinterpret_cast<NonManagedStruct*>(ptr);
    data->Header = 7;
    data->Length = sizeof(*data);   // sizeof(data) would give the pointer size, not the struct size
    data->CRC = CalculateCRC(data);
}
However, is it possible with interior_ptr in any way?
I'd rather work on managed data the low-level way (using unions, struct bit-fields, and so on) without pinning the data - I could be holding this data for quite a long time and don't want to harass the GC.
Clarification:
I do not want to copy managed data to native and back (so the marshaling way is not an option here...)
You likely won't harass the GC with pin_ptr - it's pretty lightweight unlike GCHandle.
GCHandle::Alloc(someObject, GCHandleType::Pinned) will actually register the object as being pinned in the GC. This lets you pin an object for extended periods of time and across function calls, but the GC has to track that object.
On the other hand, pin_ptr gets translated to a pinned local in IL code. The GC isn't notified about it, but it will get to see that the object is pinned only during a collection. That is, it will notice its pinned status when looking for object references on the stack.
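For comparison, a minimal sketch of the GCHandle variant described above (hypothetical function name; this is the heavier option, meant for pinning across calls or for extended periods):
using namespace System;
using namespace System::Runtime::InteropServices;

void PinForAWhile(array<Byte>^ bfr)
{
    // The GC registers this handle and keeps bfr fixed until Free().
    GCHandle handle = GCHandle::Alloc(bfr, GCHandleType::Pinned);
    try
    {
        Byte* p = (Byte*)handle.AddrOfPinnedObject().ToPointer();
        p[0] = 7;   // safe: bfr cannot be relocated while pinned
    }
    finally
    {
        handle.Free();   // unregister the pin so the GC may move bfr again
    }
}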
If you really want to, you can access stack memory in the following way:
[StructLayout(LayoutKind::Explicit, Size = 256)]
public value struct ManagedStruct
{
};

struct NativeStruct
{
    char data[256];
};

static void DoSomething()
{
    ManagedStruct managed;
    auto nativePtr = reinterpret_cast<NativeStruct*>(&managed);
    nativePtr->data[42] = 42;
}
There's no pinning at all here, but this is only due to the fact that the managed struct is stored on the stack, and therefore is not relocatable in the first place.
It's a convoluted example, because you could just write:
static void DoSomething()
{
    NativeStruct native;
    native.data[42] = 42;
}
...and the compiler would perform a similar trick under the covers for you.

When does JNI decide that it can release memory?

When I return a direct ByteBuffer to JNI, how long until it can get reclaimed by the JVM/GC?
Suppose I have a function like this:
void* func()
{
    [ ... ]
    jobject result = env->CallStaticObjectMethod(testClass, doSomethingMethod);
    void* pointerToMemory = env->GetDirectBufferAddress(result);
    return pointerToMemory;
}
The JVM can't possibly know how long I'm going to use that pointerToMemory, right? What if I want to hold on to that address and the corresponding memory for a while?
Suppose I want to circumvent this issue and return a byte[] from Java to JNI like this:
ByteBuffer buf;
byte[] b = new byte[1000];
buf = ByteBuffer.wrap(b);
buf.order(ByteOrder.BIG_ENDIAN);
return buf.array();
AND THEN do the same as above: I store a pointer to that byte[] and want to hold on to it for a while. How / when / why is the JVM going to come after that backing byte[] from Java?
void* function()
{
    jbyteArray byteArr = (jbyteArray)env->CallStaticObjectMethod(testClass, doSomethingMethod);
    jbyte *b = env->GetByteArrayElements(byteArr, 0);
    return b;
}
The short answer is: If the function implements a native method, the pointer will be invalid as soon as you return.
To avoid this, you should get a global reference for all objects that you intend to keep valid after returning. See the documentation on local and global references for more information.
To understand better how JNI manages references from native code, see the documentation on PushLocalFrame/PopLocalFrame.
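A sketch of the global-reference approach described in the answer (hypothetical caching scheme, error checks omitted): promote the array to a global reference so the GC keeps it alive after the native frame returns, then release everything explicitly when done.
// Keep the returned array alive past the native frame by taking a
// global reference; release the elements and the reference later.
static jobject g_arrRef = nullptr;
static jbyte*  g_bytes  = nullptr;

void* function(JNIEnv* env, jclass testClass, jmethodID doSomethingMethod)
{
    jbyteArray byteArr = (jbyteArray)env->CallStaticObjectMethod(testClass, doSomethingMethod);
    g_arrRef = env->NewGlobalRef(byteArr);   // survives after this function returns
    g_bytes  = env->GetByteArrayElements((jbyteArray)g_arrRef, NULL);
    return g_bytes;
}

void release(JNIEnv* env)
{
    env->ReleaseByteArrayElements((jbyteArray)g_arrRef, g_bytes, 0);
    env->DeleteGlobalRef(g_arrRef);          // now the GC may reclaim the array
}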