Casting System::IO::FileStream^ to FILE* - c++-cli

I am working on refactoring a large amount of code from an unmanaged C++ assembly into a C# assembly. There is currently a mixed-mode assembly going between the two with, of course, a mix of managed and unmanaged code. There is a function I am trying to call in the unmanaged C++ which relies on FILE*s (as defined in stdio.h). This function ties into a much larger process which cannot be refactored into the C# code yet, but which now needs to be called from the managed code.
I have searched but cannot find a definitive answer to what kind of underlying system pointer the System::IO::FileStream class uses. Is this just applied on top of a FILE*? Or is there some other way to convert a FileStream^ to a FILE*? I found FileStream::SafeFileHandle, on which I can call DangerousGetHandle().ToPointer() to get a native void*, but I'm just trying to be certain that if I cast this to FILE* that I'm doing the right thing...?
void Write(FILE *out)
{
    Data->Write(out); // huge bulk of code, writing the data
}

virtual void __clrcall Write(System::IO::FileStream ^out)
{
    // is this right??
    FILE *pout = (FILE*)out->SafeFileHandle->DangerousGetHandle().ToPointer();
    Write(pout);
}

You'll need _open_osfhandle followed by _fdopen.
Casting is not magic. Just because the input and output types are right for your situation doesn't mean the values are.
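For illustration, here is a sketch of what that conversion could look like. The helper name GetFilePointer is made up, and the flag and mode choices depend on how the stream was opened; error handling is minimal:

#include <cstdio>
#include <io.h>      // _open_osfhandle
#include <fcntl.h>   // _O_APPEND etc.

// Sketch only: wrap a FileStream's OS handle in a CRT FILE*.
// The FileStream must stay alive for as long as the FILE* is used.
FILE* GetFilePointer(System::IO::FileStream^ fs)
{
    intptr_t osHandle = (intptr_t)fs->SafeFileHandle->DangerousGetHandle().ToPointer();
    int fd = _open_osfhandle(osHandle, _O_APPEND);   // CRT descriptor over the OS handle
    if (fd == -1)
        return nullptr;
    return _fdopen(fd, "w");                         // FILE* over the CRT descriptor
}

Keep in mind that calling fclose() on the result closes the underlying handle as well, so only one side (the FILE* or the FileStream) should be treated as the owner.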

a pointer-to-member is not valid for a managed class

I have a Windows Forms project in Visual Studio C++ (CLR).
In the header file, I declare void createThread():
private:
    void createThread() {
        char buffer[1024];
        ZeroMemory(buffer, sizeof(buffer));
        while (true) {
            recv(connection, buffer, sizeof(buffer), 0);
            main.displayMessage(gcnew System::String(buffer));
        }
        ExitThread(0);
    }
Now I want to call the createThread function:
CreateThread(NULL, NULL, (LPTHREAD_START_ROUTINE)createThread, NULL, NULL, NULL)
After that I get this error:
a pointer-to-member is not valid for a managed class
I tried using a thread library, but it isn't supported. How can I fix this?
It appears that this function is defined in a managed class. You need to use the managed thread object, not unmanaged CreateThread.
This error exists for two reasons: First, it's an instance method, not a static method, so it would need to be called on an instance of this type, and there's no way to pass one to CreateThread. Second, it's a method of a managed class, and managed methods do not trivially convert to C-style raw function pointers.
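For example, a minimal sketch using System::Threading::Thread (assuming the method lives in a ref class named Form1; that name and the helper name are made up):

// Start createThread() on a managed thread instead of using Win32 CreateThread.
void StartReceiveThread()
{
    System::Threading::Thread^ worker = gcnew System::Threading::Thread(
        gcnew System::Threading::ThreadStart(this, &Form1::createThread));
    worker->IsBackground = true;   // don't keep the process alive after the form closes
    worker->Start();
}

Also note that touching the form from this worker thread (as displayMessage presumably does) needs Control::Invoke or BeginInvoke, since Windows Forms controls are not thread-safe.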
Finally, a note about the language: C++/CLI is meant to act as a way to interface managed code (e.g., C#) with unmanaged C++. It's not intended as a primary development language. If you don't need to link managed and unmanaged code, you may want to consider switching to either C# or C++ for your application.

Code sharing between multiple independently compiled binaries/hex files

I'm looking for documentation/information on how to share information/code between multiple binaries compiled for Cortex-M0/M4/M7 architectures. The two binaries will be on the same chip and the same architecture. They are flashed at different locations; one binary sets the main stack pointer and resets the program counter so that it "jumps" to the other binary. I want to share code between these two binaries.
I've done a simple copy of an array of function pointers into a RAM section defined in the linker script. The other binary then reads that RAM, casts it to an array, and uses an index to call functions in the first binary. This works as a proof of concept, but what I'm looking for is a bit more complex, as I want some way of describing compatibility between the two binaries. I want something like the functionality of shared libraries, but I'm unsure whether I need position-independent code.
As an example, the current copy process is basically:
Source binary:
void copy_func()
{
    memcpy(address_custom_ram_section, array_of_function_pointers, fixed_size);
}
Binary which is jumped to from the source binary:
array_fp_type get_funcs()
{
    memcpy(array_of_fp, address_custom_ram_section, fixed_size);
    return array_of_fp;
}
Then I can use the array_of_fp to call into functions residing in the source binary from the jump binary.
So what I'm looking for is resources or input from someone who has implemented a similar system. For example, I would like not to need a custom RAM section to copy the function pointers into.
I would be fine with the compilation step of the source binary outputting something which can be included in the compilation step of the jump binary. However, it needs to be reproducible, and recompiling the source binary shouldn't break compatibility with the jump binary (even if the jump binary included a different file from what is now output), as long as you don't change the interface.
To clarify, the source binary shouldn't require any specific knowledge about the jump binary. The code should not reside in both binaries, as that would defeat the purpose of this mechanism. The overall goal of this mechanism is to save space when creating multi-binary applications on Cortex-M processors.
Any ideas or links to resources are welcome. If you have any more questions feel free to comment on the question and I'll try to answer it.
It's very hard for me to picture what you want to do, but if you're interested in having an application link against your bootloader/ROM, then see Loading symbol file while linking for a hint at what you could do.
Build your "source"(?) image, scrape its mapfile and make a symbol file, then use that when you link your "jump"(?) image.
This does mean you need to link your "jump" image against a specific version of your "source" image.
If you need them to be semi-version-independent (i.e. you define a set of functions that get exported, but you can rebuild either side), then you need to export function pointers at known locations in your "source" image and link against those function pointers in your "jump" image. You can simplify the bookkeeping by making a structure of function pointers and accessing the functions through that on either side.
For example:
shared_functions.h:
struct FunctionPointerTable
{
    void(*function1)(int);
    void(*function2)(char);
};
extern struct FunctionPointerTable sharedFunctions;
Source file in "source" image:
#include <stdio.h>
#include "shared_functions.h"

void function2Implementation(char b);

void function1Implementation(int a)
{
    printf("You sent me an integer: %d\r\n", a);
    function2Implementation((char)(a%256));
    sharedFunctions.function2((char)(a%256));
}

void function2Implementation(char b)
{
    printf("You sent me a char: %c\r\n", b);
}

struct FunctionPointerTable sharedFunctions =
{
    function1Implementation,
    function2Implementation,
};
Source file in "jump" image:
#include "shared_functions.h"
sharedFunctions.function1(1024);
sharedFunctions.function2(100);
When you compile/link the "source" image, take its mapfile, extract the location of sharedFunctions, and create a symbol file that is linked with the "jump" image.
Note: the printfs (or anything directly called by the shared functions) would come from the "source" image (and not the "jump" image).
If you need them to come from the "jump" image (or be overridable), then you need to access them through the same function pointer table, and the "jump" image needs to fix the function pointer table up with its version of the relevant function. I updated function1() to show this. The direct call to function2 will always be the "source" version. The shared-function call goes through the jump table and calls the "source" version unless the "jump" image updates the function table to point to its own implementation.
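For instance, a sketch of that fix-up in the "jump" image, assuming the table lives in RAM (or has been copied there) so it is writable; the names follow the example above and jumpFunction2Implementation is made up:

#include "shared_functions.h"

// The "jump" image's own implementation of function2.
void jumpFunction2Implementation(char b)
{
    // jump-image version of the behaviour
}

// Call once at startup, before anything goes through the table.
void fixupSharedFunctions(void)
{
    sharedFunctions.function2 = jumpFunction2Implementation;
}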
You CAN get away from the structure, but then you need to export the function pointers one by one (not a big problem), but you want to keep them in order and at a fixed location, which means explicitly putting them in the linker descriptor file, etc. etc. I showed the structure method to distill it down to the easiest example.
As you can see, things get pretty hairy, and there is some penalty (calling through the function pointer is slower, because you need to load the address to jump to).
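To address the compatibility concern from the question, one possible extension (not part of the answer above) is to make the exported table self-describing, so the "jump" image can refuse to call through it if the layout has changed. A minimal sketch; the magic value, version number, and section name are all made up:

#include <stdint.h>

void function1Implementation(int a);
void function2Implementation(char b);

struct SharedApi
{
    uint32_t magic;      // sanity check, e.g. 0x53484152
    uint32_t version;    // bump whenever the members below change
    void (*function1)(int);
    void (*function2)(char);
};

// In the "source" image, pinned to a known address via the linker script.
__attribute__((section(".shared_api")))
const struct SharedApi sharedApi =
{
    0x53484152u,
    1u,
    function1Implementation,
    function2Implementation,
};

The "jump" image then checks sharedApi.magic and sharedApi.version before using any of the pointers.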
As explained in the comments, we could imagine an application and a bootloader relying on the same library: both depend on the library, and the application can be changed without any impact on the library or the bootloader.
I did not find an easy way to build a shared library with arm-none-eabi-gcc. However, this document gives some alternatives to shared libraries. In your case, I would recommend the jump-table solution.
Write a library with the functions that need to be used in the bootloader and in the application.
"library" code
#include <stdint.h>

typedef void (*genericFunctionPointer)(void);

void lib_f1(void);
uint8_t lib_f2(uint8_t param);

// Use the linker script to place MySection at a known address.
// This could be a structure, as in Russ Schultz's solution, but a struct may or may not be
// laid out identically in the library and the bootloader. A struct would be much easier,
// though, and would avoid many function pointer casts.
const genericFunctionPointer FpointerArray[] __attribute__ ((section ("MySection"))) =
{
    (genericFunctionPointer)lib_f1,
    (genericFunctionPointer)lib_f2,
};

void lib_f1(void)
{
    // some code
}

uint8_t lib_f2(uint8_t param)
{
    // some code
    return param;
}
Application and/or bootloader code
#include <stdint.h>

typedef void (*genericFunctionPointer)(void);
typedef void (*correctCastF1)(void);
typedef uint8_t (*correctCastF2)(uint8_t);

enum
{
    lib_f1,
    lib_f2,
    NB_F,
};

// Use the linker script to place MySection at the same address the library was compiled with.
// In the linker script, also mark this section as NOLOAD, because it is initialized by the
// library and not by our code.
// volatile is needed here because you read from flash memory and the compiler might otherwise
// assume this never-written array is full of NULL pointers.
volatile const genericFunctionPointer FpointerArray[NB_F] __attribute__ ((section ("MySection")));

int main(void)
{
    ((correctCastF1)FpointerArray[lib_f1])();
    uint8_t a = ((correctCastF2)FpointerArray[lib_f2])(10);
    (void)a;
    return 0;
}
You can look into using linker sections. If you have your bootloader source code in folder bootloader, you can use
SECTIONS
{
    .bootloader :
    {
        build_output/bootloader/*.o(.text)
    } > flash_region1

    .binary1 :
    {
        build_output/binary1/*.o(.text)
    } > flash_region2

    .binary2 :
    {
        build_output/binary2/*.o(.text)
    } > flash_region3
}

Managed array in VC++

I want to pass a managed array from VB.NET to a function in a VC++ project. How would I declare my C++ function and how would I use the array when I'm inside it? Specifically, I want to make VB compatible functions like the one below, which is written in plain old C.
void Vcopy(double *A, double *B)
{
    int n;
    for(n=0;n<3;n++)
    {
        B[n]=A[n];
    }
}
Maybe some kind soul could convert this to something that would play nicer with VB. Thanks!
Can the C++ method be managed, e.g., C++/CLI?
If so, then:
void Vcopy(array<double> ^A, array<double> ^B)
By the way, the rest of the method should be identical, provided that the size is 3 - otherwise use A->Length and B->Length.
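Putting that together, a sketch of the full C++/CLI version (using the array lengths instead of a hard-coded 3, as suggested above):

void Vcopy(array<double> ^A, array<double> ^B)
{
    // Copy as many elements as both arrays can hold.
    int count = System::Math::Min(A->Length, B->Length);
    for (int n = 0; n < count; n++)
    {
        B[n] = A[n];
    }
}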

Calling a function by name input by user

Is it possible to call a function by name in Objective C? For instance, if I know the name of a function ("foo"), is there any way I can get the pointer to the function using that name and call it? I stumbled across a similar question for python here and it seems it is possible there. I want to take the name of a function as input from the user and call the function. This function does not have to take any arguments.
For Objective-C methods, you can use performSelector… or NSInvocation, e.g.
NSString *methodName = @"doSomething";
[someObj performSelector:NSSelectorFromString(methodName)];
For C functions in dynamic libraries, you can use dlsym(), e.g.
#include <dlfcn.h>

void *dlhandle = dlopen("libsomething.dylib", RTLD_LOCAL);
void (*function)(void) = dlsym(dlhandle, "doSomething");
if (function) {
    function();
}
For C functions that were statically linked, not in general. If the corresponding symbol hasn’t been stripped from the binary, you can use dlsym(), e.g.
void (*function)(void) = dlsym(RTLD_SELF, "doSomething");
if (function) {
    function();
}
Update: ThomasW wrote a comment pointing to a related question, with an answer by dreamlax which, in turn, contains a link to the POSIX page about dlsym. In that answer, dreamlax notes the following with regard to converting a value returned by dlsym() to a function pointer variable:
The C standard does not actually define behaviour for converting to and from function pointers. Explanations vary as to why; the most common being that not all architectures implement function pointers as simple pointers to data. On some architectures, functions may reside in an entirely different segment of memory that is unaddressable using a pointer to void.
With this in mind, the calls above to dlsym() and the desired function can be made more portable as follows:
void (*function)(void);
*(void **)(&function) = dlsym(dlhandle, "doSomething");
if (function) {
    (*function)();
}

std::unique_ptr and pointer-to-pointer

I've been using std::unique_ptr to store some COM resources, and provided a custom deleter function. However, many of the COM functions want a pointer-to-pointer. Right now, I'm using the implementation detail _Myptr of my compiler. Is accessing this data member directly going to break unique_ptr, or should I store a gajillion temporary pointers to construct unique_ptr rvalues from?
COM objects are reference-counted by nature, so you shouldn't use anything except reference-counting smart pointers like ATL::CComPtr or _com_ptr_t, even if it seems inappropriate for your use case (I fully understand your concerns, I just think you assign too much weight to them). Both classes are designed to be used in all valid scenarios that arise when COM objects are used, including obtaining the pointer-to-pointer. Yes, that's a bit too much functionality, but if you don't expect any specific negative consequences you can't tolerate, you should just use those classes: they are designed exactly for this purpose.
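For illustration, a sketch of how the pointer-to-pointer case looks with CComPtr. The IFileDialog/IShellItem combination and the helper name GetPickedItem are just examples, and COM is assumed to be initialized already:

#include <atlbase.h>     // CComPtr
#include <shobjidl.h>    // IFileDialog, IShellItem

HRESULT GetPickedItem(CComPtr<IShellItem>& item)
{
    CComPtr<IFileDialog> dialog;
    HRESULT hr = dialog.CoCreateInstance(__uuidof(FileOpenDialog));
    if (FAILED(hr))
        return hr;
    hr = dialog->Show(nullptr);        // let the user pick something
    if (FAILED(hr))
        return hr;
    return dialog->GetResult(&item);   // operator& yields the IShellItem** the out-parameter wants
}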
I've had to tackle the same problem not too long ago, and I came up with two different solutions:
The first was a simple wrapper that encapsulated a 'writeable' pointer and could be std::moved into my smart pointer. This is just a little more convenient than using the temporary pointers you mention, since you cannot define the type directly at the call site.
Therefore, I didn't stick with that. What I did instead was a retrieve helper function that takes the COM function and returns my smart pointer (doing all the temporary-pointer work internally). This trivially works with free functions that only have a single T** parameter. If you want to use it on something more complex, you can pass in the call via std::bind and leave only the pointer-to-be-returned free.
I know that this is not directly what you're asking, but I think it's a neat solution to the problem you're having.
As a side note, I'd prefer boost's intrusive_ptr instead of std::unique_ptr, but that's a matter of taste, as always.
Edit: Here's some sample code that's adapted from my version using boost::intrusive_ptr (so it might not work out of the box with unique_ptr):
template <class T, class PtrType, class PtrDel>
HRESULT retrieve(T func, std::unique_ptr<PtrType, PtrDel>& ptr)
{
    PtrType* raw_ptr = nullptr;
    HRESULT result = func(&raw_ptr);
    ptr.reset(raw_ptr);
    return result;
}
For example, it can be used like this:
std::unique_ptr<IFileDialog, ComDeleter> FileDialog;
/*...*/
using std::bind;
using namespace std::placeholders;
std::unique_ptr<IShellItem, ComDeleter> ShellItem;
HRESULT status = retrieve(bind(&IFileDialog::GetResult, FileDialog.get(), _1), ShellItem);
For bonus points, you can even let retrieve return the unique_ptr instead of taking it by reference. The functor that bind generates should have signature typedefs to derive the pointer type. You can then throw an exception if you get a bad HRESULT.
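A sketch of that variant. ComDeleter is the deleter type used in the snippets above; retrieve_or_throw and the exception choice are my own, and for simplicity the interface type is passed explicitly rather than derived from the functor's signature typedefs:

#include <windows.h>     // HRESULT, FAILED
#include <memory>
#include <stdexcept>

// Returns the smart pointer by value and throws on a failed HRESULT.
template <class Interface, class Func>
std::unique_ptr<Interface, ComDeleter> retrieve_or_throw(Func func)
{
    Interface* raw = nullptr;
    HRESULT hr = func(&raw);
    if (FAILED(hr))
        throw std::runtime_error("COM call failed");
    return std::unique_ptr<Interface, ComDeleter>(raw);
}

// Usage, following the example above:
// auto ShellItem = retrieve_or_throw<IShellItem>(bind(&IFileDialog::GetResult, FileDialog.get(), _1));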
C++0x smart pointers have a portable way to get at the raw pointer, .get(), and to release it entirely, .release(). You could also always use &(*ptr), but that is less idiomatic.
If you want to use smart pointers to manage the lifetime of an object, but still need raw pointers to use a library which doesn't support smart pointers (including the standard C library), you can use those functions to most conveniently get at the raw pointers.
Remember, you still need to keep the smart pointer around for the duration you want the object to live (so be aware of its lifetime).
Something like:
auto raw = my_uniq_ptr.get();        // get() returns by value, so name it before taking its address
call_com_function( &raw );           // will work fine (reset() the unique_ptr if the callee replaces the pointer)
return my_localscope_uniq_ptr.get(); // will not: the object dies when the local unique_ptr goes out of scope
return my_member_uniq_ptr.get();     // might, if *this will be around for the duration, etc..
Note: this is just a general answer to your question. How to best use COM is a separate issue and sharptooth may very well be correct.
Use a helper function like this.
// Note: this reaches into MSVC's implementation details (_Mybase, _Myptr), so it is not portable.
template< class T >
T*& getPointerRef ( std::unique_ptr<T> & ptr )
{
    struct Twin : public std::unique_ptr<T>::_Mybase {};
    Twin * twin = (Twin*)( &ptr );
    return twin->_Myptr;
}
You can check the implementation like this:
int wmain ( int argc, wchar_t* argv[] )
{
    std::unique_ptr<char> charPtr ( new char[25] );
    delete[] getPointerRef(charPtr);   // matches the new[] above; the unique_ptr never deletes it
    getPointerRef(charPtr) = 0;
    return charPtr.get() != 0;
}