I have a DLL from a vendor that I need to invoke from C#. I know that C# data types are not directly compatible with C++ data types.
So, given that I have a function that receives data and returns a "string".
(like this)
string answer = CreateCode2(int, string1, uint32, string2, uint16);
What must I do to make the input parameters compatible, and then make the result string compatible?
Please, I have never done this, so don't give answers like "Use P/Invoke" or "Use Marshal"; I need a tutorial with examples.
All the P/Invoke examples I have seen are from .NET Framework 1.1, and Marshal (without a tutorial) is totally confusing me.
Also, I have seen some examples that tell me, when I create my extern function, to replace all the data types with void*. This makes my IDE demand that I use "unsafe".
This isn't quite a tutorial, but it's got a lot of good information on using P/Invoke:
Calling Win32 DLLs in C# with P/Invoke
It'll give you an idea of the terminology, the basic concepts, how to use DllImport and should be enough to get you going.
There's a tutorial on MSDN: Platform Invoke Tutorial.
But it's pretty short; to be honest, the one I've mentioned above is a much better source of information, and there's a lot more of it there.
Also useful is the PInvoke Signature Toolkit, described here.
And downloadable here.
It lets you paste in an unmanaged method signature, or struct definition and it'll give you the .NET P/Invoke equivalent. It's not 100% perfect but it gets you going much quicker than trying to figure everything out yourself.
With regards to Marshalling specifically, I would say start simple.
If you've got something that's some sort of pointer, rather than trying to convert it directly to some .NET type in the method signature, it can sometimes be easier to just treat it as an IntPtr and then use Marshal.Copy, the Marshal.PtrToString variants, Marshal.PtrToStructure and the other similar methods to get the data into a .NET type.
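For example, here is a minimal sketch of the IntPtr approach applied to the CreateCode2 call from the question. The DLL name, the calling convention, and the assumption that the function returns a narrow (ANSI) C string are all guesses; the vendor's header file is the authority:

using System;
using System.Runtime.InteropServices;

static class Vendor
{
    // Hypothetical native signature, guessed from the question:
    // const char* CreateCode2(int a, const char* s1, unsigned int b,
    //                         const char* s2, unsigned short c);
    [DllImport("vendor.dll", CallingConvention = CallingConvention.Cdecl,
               CharSet = CharSet.Ansi)]
    private static extern IntPtr CreateCode2(int a, string s1, uint b,
                                             string s2, ushort c);

    public static string GetCode(int a, string s1, uint b, string s2, ushort c)
    {
        IntPtr p = CreateCode2(a, s1, b, s2, c);
        // Copy the characters out of native memory into a managed string.
        // Who frees the native buffer (if anyone) is part of the vendor's contract.
        return Marshal.PtrToStringAnsi(p);
    }
}

Returning IntPtr rather than string also avoids the marshaller trying to free a buffer it doesn't own.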
Then, when you've got to grips with the whole thing, you can move on to direct conversions using the MarshalAs attribute.
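A direct conversion might look like this sketch, where the export name and the caller-allocated-buffer protocol are invented for illustration:

using System.Runtime.InteropServices;
using System.Text;

static class VendorDirect
{
    // Hypothetical export: int GetName(char* buffer, int bufferLen);
    // the caller supplies the buffer and the DLL fills it in.
    [DllImport("vendor.dll", CharSet = CharSet.Ansi)]
    private static extern int GetName(
        [MarshalAs(UnmanagedType.LPStr)] StringBuilder buffer, int bufferLen);

    public static string GetNameManaged()
    {
        var sb = new StringBuilder(256);   // pre-allocated, caller-owned buffer
        GetName(sb, sb.Capacity);
        return sb.ToString();
    }
}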
There's a good 3 part set of articles on marshalling here, here and here.
I'm aware this might be a broad question (there's no specific code for you to look at), but I'm hoping I'd get some insights as to what to do, or how to approach the problem.
To keep things simple, suppose the compiler that I'm writing performs these three steps:
parse (and bind all variables)
typecheck
codegen
Also, the language that I'm building the compiler for wants to support late analysis/late binding (i.e., it has a function that takes a String, which is to be compiled and executed as a piece of source code at runtime).
Now, during the parse phase, I have a piece of context that I need to keep around until run-time for the sole benefit of the aforementioned function (because it needs to parse and typecheck its argument in that context).
So the question, how should I do this? What do other compilers do?
Should I just serialise the context object to disk (codegen for it) and resurrect it during run-time or something?
Thanks
Yes, you'll need to emit the type information (or other context, you weren't very specific) in your object/executable files, so that your eval can read it at runtime. You might look at Java's .class file format for inspiration; Java doesn't have eval as such, but you can dynamically spin new bytecode at runtime that must be linked in a type-safe manner. David Conrad's comment is spot-on: this information can also be used to implement reflection, if your language has such a feature.
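As a concrete illustration, here is a minimal sketch (in C#, with an invented SymbolInfo shape; your real context will be richer) of emitting the binder/typechecker context as a sidecar file at codegen time and reloading it when the runtime eval needs to parse and typecheck its argument:

using System.Collections.Generic;
using System.IO;
using System.Text.Json;

// Invented shape: whatever subset of the parse/typecheck
// context the runtime eval actually needs.
public record SymbolInfo(string Name, string Type);

public static class ContextStore
{
    // Codegen phase: persist the context next to the emitted executable.
    public static void Emit(string path, List<SymbolInfo> symbols) =>
        File.WriteAllText(path, JsonSerializer.Serialize(symbols));

    // Run time: eval("...") reloads the context before parsing its argument.
    public static List<SymbolInfo> Load(string path) =>
        JsonSerializer.Deserialize<List<SymbolInfo>>(File.ReadAllText(path))!;
}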
That's as much as I can help you without more specifics.
I have an application that involves a lot of communication between managed (C#) and unmanaged (C++) code. We are using Visual Studio 2005 (!), and we use the interop assembly generated automatically by tlbimp.
We have fairly good luck passing simple structs back and forth as function arguments. And because our objects are fairly simple, we can pack them into SAFEARRAYs using the IRecordInfo interface. Passing these arrays as arguments to COM methods seems to work properly.
We would like to be able to embed variable-length arrays in our UDTs, but this fails badly. I don't think I have been able to find a single piece of documentation showing how someone has accomplished this. Nor have I found documentation that says it can't be done.
1) Naive approach: Simply declare a SAFEARRAY pointer in the unmanaged UDT:
struct MyUdt {
    int member1;
    BSTR member2;
    SAFEARRAY *m3;
};
The C++ compiler is happy with this, but the generated IDL confounds tlbimp.exe. It reports that it is unable to convert the signature for member m3, and the signature for member tagSAFEARRAY.rgsabound. These are only warnings, but they are meaningful: the resulting assembly is not usable.
Using LPSAFEARRAY, oddly enough, fails in different ways, but for the same reason: tlbimp just can't deal with it.
2) Trickier: Pack it into a variant:
struct MyUdt {
    int member1;
    BSTR member2;
    VARIANT m3;
};
We have code that builds safearrays of UDTs, and it never gives us any trouble. It's basically copied from MSDN. Using that code to create a safearray, then:
pVal->m3.vt = VT_SAFEARRAY | VT_RECORD;
pVal->m3.parray = p;
Fails in odd ways. It always breaks; some variations produce an OutOfMemoryException, others fail differently. (I'm not sure if a pRecInfo pointer is required here or not, but it fails the same way whether it's present or not.)
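For reference, a hand-declared managed mirror of this UDT might look like the following sketch. The layout and MarshalAs choices are assumptions, but UnmanagedType.Struct does mean "marshal this field as a VARIANT":

using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct MyUdtManaged
{
    public int member1;
    [MarshalAs(UnmanagedType.BStr)] public string member2;
    // UnmanagedType.Struct marshals the field as a VARIANT;
    // an array stored in it surfaces as object[] on the managed side.
    [MarshalAs(UnmanagedType.Struct)] public object m3;
}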
The Google search space for this is badly polluted with answers to questions that I am not asking:
How do you pass UDT/structs from unmanaged code.
How do you pass a SAFEARRAY of structs? (We're doing this fine.)
How do you use P/Invoke or custom marshalling to pass UDTs.
And many answers describing how to define things from the managed side, not the unmanaged side.
And then there are a couple of Microsoft KBs describing problems with VT_RECORD in early versions of .NET. I don't think these are germane: VT_RECORD types work with VARIANT and with SAFEARRAY. (But maybe not with the UDT marshalling...)
If this won't ever work, it would be nice to at least know why.
Mark
I have de-compiled some old code from a legacy VB.NET app using ILSpy, and this line has appeared:
Operators.ConditionalCompareObjectEqual(safeDataReader["isLoader"], -1, false)
I'm aware this is compiler generated, but it's not advised that this code be used in application source code. My question is: why is this the case, and what should it be in 'normal' code?
The documentation for the method says it right there:
Represents the overloaded Visual Basic equals (=) operator.
Why? I don't "know", but it's easy to make an educated guess.
The semantics of the "=" operator in VB.NET are just a bit different from those of C# and the standard Object.Equals(). The semantics are inherited from VB6 and cannot be changed for backward compatibility reasons. Obviously this method implements the VB6 semantics for the compiler.
It would make an interesting read to come up with a systematic analysis of the differences.
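A small example of the divergence, runnable from C# with a reference to the Microsoft.VisualBasic assembly (expected output noted in comments):

using System;
using Microsoft.VisualBasic.CompilerServices;

class Demo
{
    static void Main()
    {
        object a = null;   // Nothing in VB
        object b = "";

        // VB's "=": Nothing compares equal to an empty string.
        // The third argument (false) means case-sensitive comparison.
        Console.WriteLine(Operators.ConditionalCompareObjectEqual(a, b, false)); // True

        // The standard CLR semantics disagree.
        Console.WriteLine(Equals(a, b)); // False
    }
}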
Further thoughts:
The reason it's "not recommended" is that there is no reason to call the method from VB.NET: just use =. In C#, there is no particular reason to invoke the VB6 semantics, so the method doesn't make a lot of sense there either.
Obviously, if you are compiling C# code generated from VB.NET, then those are special circumstances: calling the method is the right thing to do, unless you're willing to take the time to analyze the code and prove to yourself that the standard == can be substituted safely.
I need to work on converting a very large C++ project to /clr:safe. The current C++ project has a lot of C++ features: templates, generics, pointers, storage/streams, OLE APIs, zlib compression APIs, inlines, etc. Where can I find detailed documentation for this type of conversion? Can you suggest a good book to refer to? If any of you have done such a conversion, can I get some analysis from you?
I'll just cough up the MSDN Library article titled "How to: Migrate to /clr:safe".
Visual C++ can generate verifiable components using /clr:safe, which causes the compiler to generate an error for each non-verifiable code construct.
The following issues generate verifiability errors:
Native types. Even if it isn't used, the declaration of native classes, structures, pointers, or arrays will prevent compilation.
Global variables
Function calls into any unmanaged library, including common language runtime function calls
A verifiable function cannot contain a static_cast operator for down-casting. The static_cast operator can be used for casting between primitive types, but for down-casting, safe_cast or a C-style cast (which is implemented as a safe_cast) must be used.
A verifiable function cannot contain a reinterpret_cast operator (or any C-style cast equivalent).
A verifiable function cannot perform arithmetic on an interior_ptr. It may only assign to it and dereference it.
A verifiable function can only throw or catch pointers to reference types, so value types must be boxed before throwing.
A verifiable function can only call verifiable functions (so calls to the C runtime are not allowed, including AtEntry/AtExit, and thus global constructors are disallowed).
A verifiable class cannot use explicit layout (LayoutKind::Explicit).
If building an EXE, a main function cannot declare any parameters, so GetCommandLineArgs must be used to retrieve command-line arguments.
Making a non-virtual call to a virtual function.
Also, the following keywords cannot be used in verifiable code:
unmanaged and pack pragmas
naked and align __declspec modifiers
__asm
__based
__try and __except
I reckon that will keep you busy for a while. There is no magic wand to wave to turn native C++ into verifiable code. Are you sure this is worth the investment?
The vast majority of native C++ is entirely valid C++/CLI, including templates, inlines, etc., although the STL/CLR is rather slow compared to the BCL. Also, native C++ doesn't have generics, only templates.
The reality of compiling as C++/CLI is to flip the switch, hit compile, and fix the errors as they appear.
Rewriting native C++ into safe C++/CLI will result in code that is syntactically different from, but semantically the same as, C#. If that is the case, why not rewrite directly in C#?
If you want to avoid what is essentially a complete rewrite, consider the following alternatives:
P/Invoke (see the sketch after this list). Unfortunately, I'm not sure whether this would isolate safe from unsafe code. Even if it can perform the isolation, you'll need to wrap your existing C++ code in a procedural, C-like API so it can be consumed by P/Invoke. On the plus side, unless your API is excessively chatty, you get to keep (most of) your native performance.
Wrapping your C++ into an out-of-process COM server and using COM interop to consume it from the managed code. This way, your managed code is completely protected from any corruption that might happen at the C++ end and can remain "safe". The downside is the performance hit you'll take from out-of-process marshalling and the implementation effort you'll need to expend to implement COM correctly.
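To make the first option concrete, the flattened API could be consumed like this sketch; all of the names and signatures here are invented, and the opaque handle pattern is just one common way to expose a C++ object model through a C API:

using System;
using System.Runtime.InteropServices;

static class NativeEngine
{
    // The handle is an opaque pointer owned by the native side.
    [DllImport("engine.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr Engine_Create();

    [DllImport("engine.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern int Engine_Process(IntPtr engine, byte[] input, int length);

    [DllImport("engine.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern void Engine_Destroy(IntPtr engine);
}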
To implement "method-missing"-semantics and such in C# 4.0, you have to implement IDynamicObject:
public interface IDynamicObject
{
    MetaObject GetMetaObject(Expression parameter);
}
As far as I can figure out, IDynamicObject is actually part of the DLR, so it is not new. But I have not been able to find much documentation on it.
There are some very simple example implementations out there (e.g. here and here), but could anyone point me to more complete implementations or some real documentation?
In particular, how exactly are you supposed to handle the "parameter" parameter?
The short answer is that the MetaObject is what's responsible for actually generating the code that will be run at the call site. The mechanism that it uses for this is LINQ expression trees, which have been enhanced in the DLR. So instead of starting with an object, it starts with an expression that represents the object, and ultimately it's going to need to return an expression tree that describes the action to be taken.
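This isn't the DLR binder API itself, but the underlying mechanism (starting from an expression that represents a value and returning an expression tree for the action, which is then compiled and run) can be seen in miniature with plain LINQ expression trees:

using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // An expression that represents the object coming into the call site...
        ParameterExpression target = Expression.Parameter(typeof(string), "target");

        // ...and an expression tree describing the action to be taken on it.
        Expression body = Expression.Call(
            target, typeof(string).GetMethod("ToUpper", Type.EmptyTypes));

        Func<string, string> site =
            Expression.Lambda<Func<string, string>>(body, target).Compile();

        Console.WriteLine(site("quack"));   // QUACK
    }
}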
When playing with this, please remember that the version of System.Core in the CTP was taken from a snapshot at the end of August. It doesn't correspond very cleanly to any particular beta of IronPython. A number of changes have been made to the DLR since then.
Also, for compatibility with the CLR v2 System.Core, releases of IronPython starting with either beta 4 or beta 5 now rename everything that's in the System namespace to be in the Microsoft namespace instead.
If you want an end-to-end sample, including source code, resulting in a dynamic object that stores values for arbitrary properties in a Dictionary, then my post "A first look at Duck Typing in C# 4.0" could be right for you. I wrote that post to show how a dynamic object can be cast to statically typed interfaces. It has a complete working implementation of a Duck that is an IDynamicObject and can act like an IQuack.
If you need more information, contact me on my blog and I will help you along as best I can.
I just blogged about how to do this here:
http://mikehadlow.blogspot.com/2008/10/dynamic-dispatch-in-c-40.html
Here is what I have figured out so far:
The Dynamic Language Runtime is currently maintained as part of the IronPython project. So that is the best place to go for information.
The easiest way to implement a class supporting IDynamicObject seems to be to derive from Microsoft.Scripting.Actions.Dynamic and override the relevant methods, for instance the Call method to implement function-call semantics. It looks like Microsoft.Scripting.Actions.Dynamic hasn't been included in the CTP, but the one from IronPython 2.0 looks like it will work.
I am still unclear on the exact meaning of the "parameter" parameter, but it seems to provide context for the binding of the dynamic object.
This presentation also provides a lot of information about the DLR:
Deep Dive: Dynamic Languages in Microsoft .NET by Jim Hugunin.