C++/CLI: How do I use the CallerMemberNameAttribute in C++/CLI?

In C# and VB.Net I can use the CallerMemberNameAttribute to get the name of the invoker as a string:
public void Caller([CallerMemberName] string memberName = "")
{
    Debug.Print(memberName);
}
I would like to do the same in C++/CLI, but I cannot get it to work. I have tried several constructions and am starting to wonder whether the C++/CLI compiler supports this attribute at all.
Here is a (simplified) implementation:
using namespace System;
using namespace System::Diagnostics;
using namespace System::Runtime::CompilerServices;
using namespace System::Runtime::InteropServices;

public ref class InvokeExample
{
public:
    void Invoke([CallerMemberName][Optional] String^ name)
    {
        Debug::Print(name);
    }
};
When this method is invoked from a C# application, the value of name is null. I also tried the DefaultParameterValue attribute, but that didn't help either. I am now running out of ideas.
The obvious answer would be: why not implement it in C#?
Well, in this specific case I am limited to C++/CLI.

I used Reflector to compare the C++/CLI and the C#/VB.Net versions, and they looked exactly the same.
Then I used ILDASM, and now I think I know why it doesn't work (after reading this post).
Here's the il code:
C++/CLI
.method public hidebysig instance string
Caller([opt] string methodName) cil managed
{
.param [1]
.custom instance void [System]System.Runtime.InteropServices.DefaultParameterValueAttribute::.ctor(object) = ( 01 00 0E 00 00 00 )
.custom instance void [mscorlib]System.Runtime.CompilerServices.CallerMemberNameAttribute::.ctor() = ( 01 00 00 00 )
// Code size 2 (0x2)
.maxstack 1
IL_0000: ldarg.1
IL_0001: ret
} // end of method ClassCPP::Caller
C#
.method public hidebysig instance string
Caller([opt] string methodName) cil managed
{
.param [1] = ""
.custom instance void [mscorlib]System.Runtime.CompilerServices.CallerMemberNameAttribute::.ctor() = ( 01 00 00 00 )
// Code size 2 (0x2)
.maxstack 1
.locals init ([0] string CS$1$0000)
IL_0000: ldarg.1
IL_0001: ret
} // end of method ClassCS::Caller
IL code from VB.Net differs from C# as follows:
.param [1] = nullref
I suspect that because the C++/CLI compiler emits a DefaultParameterValueAttribute instead of initializing .param [1] with a literal default value, the C# compiler will not substitute the caller member name.
It would be handy if the MSDN pages described such limitations for C++/CLI projects; it would save us a lot of time.
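A possible workaround, sketched here as an assumption rather than something from the original post: if adding a small C# helper assembly is acceptable, the caller-name substitution can happen in C#, because the C# compiler emits the literal .param [1] = "" default that caller-info attributes rely on, and the helper then forwards the already-substituted string to the C++/CLI method. The shim class and its names below are made up for illustration.

// Hypothetical C# shim: C# callers of Invoke(...) get the CallerMemberName
// substitution here, and the resulting string is forwarded explicitly to the
// C++/CLI InvokeExample::Invoke shown above.
using System.Runtime.CompilerServices;

public static class InvokeExampleShim
{
    public static void Invoke(InvokeExample target, [CallerMemberName] string name = "")
    {
        // 'name' has already been filled in by the calling compiler.
        target.Invoke(name);
    }
}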

Related

When tampering with an assembly, why can't I remove the original instructions?

In order to test legacy code that relies on SharePoint, I need to mock some of the SharePoint objects. I do this by tampering with the SharePoint assemblies, replacing their methods with mine on the fly.
This works in some cases, but not in others. Here is a strange situation I encountered.
I want to replace the getter of SPContext.Current with my own implementation; for the sake of simplicity, my implementation just throws an exception:
.property class Microsoft.SharePoint.SPContext Current()
{
.get class Microsoft.SharePoint.SPContext Proxy.SPContextProxy::get_Current()
}
.method public hidebysig specialname static
class Microsoft.SharePoint.SPContext get_Current () cil managed
{
// Method begins at RVA 0x877e68
// Code size 12 (0xc)
.maxstack 8
IL_0000: nop
IL_0001: ldstr "Proxy don't have an effective implementation of this property."
IL_0006: newobj instance void [mscorlib]System.NotImplementedException::.ctor(string)
IL_000b: throw
} // end of method SPContextProxy::get_Current
When tampering with the original assembly, if I replace the IL code corresponding to the SPContext.Current getter, the property can no longer be used. I can't even view its contents in ILSpy; this is what is shown instead:
System.NullReferenceException: Object reference not set to an instance of an object.
at Mono.Cecil.Cil.CodeReader.ReadExceptionHandlers(Int32 count, Func`1 read_entry, Func`1 read_length)
at Mono.Cecil.Cil.CodeReader.ReadSection()
at Mono.Cecil.Cil.CodeReader.ReadFatMethod()
at Mono.Cecil.Cil.CodeReader.ReadMethodBody()
at Mono.Cecil.Cil.CodeReader.ReadMethodBody(MethodDefinition method)
at Mono.Cecil.MethodDefinition.<get_Body>b__2(MethodDefinition method, MetadataReader reader)
at Mono.Cecil.ModuleDefinition.Read[TItem,TRet](TRet& variable, TItem item, Func`3 read)
at Mono.Cecil.MethodDefinition.get_Body()
at ICSharpCode.Decompiler.Disassembler.ReflectionDisassembler.DisassembleMethodInternal(MethodDefinition method)
at ICSharpCode.ILSpy.ILLanguage.DecompileProperty(PropertyDefinition property, ITextOutput output, DecompilationOptions options)
at ICSharpCode.ILSpy.TextView.DecompilerTextView.DecompileNodes(DecompilationContext context, ITextOutput textOutput)
at ICSharpCode.ILSpy.TextView.DecompilerTextView.<>c__DisplayClass16.<DecompileAsync>b__15()
On the other hand, when I insert my instructions before the original instructions, I can call the getter successfully, as well as see its contents in ILSpy:
.property class Microsoft.SharePoint.SPContext Current()
{
.custom instance void [Microsoft.SharePoint.Client.ServerRuntime]Microsoft.SharePoint.Client.ClientCallableAttribute::.ctor() = (
01 00 00 00
)
.get class Microsoft.SharePoint.SPContext Microsoft.SharePoint.SPContext::get_Current()
}
.method public hidebysig specialname static
class Microsoft.SharePoint.SPContext get_Current () cil managed
{
// Method begins at RVA 0x33e2d8
// Code size 61 (0x3d)
.maxstack 1
.locals init (
[0] class Microsoft.SharePoint.SPContext,
[1] class [System.Web]System.Web.HttpContext,
[2] class Microsoft.SharePoint.SPContext
)
... followed by the instructions that I inserted:
IL_0000: nop
IL_0001: ldstr "Proxy doesn't implement this property yet."
IL_0006: newobj instance void [mscorlib]System.NotImplementedException::.ctor(string)
IL_000b: throw
... followed by the original instructions:
IL_000c: ldnull
IL_000d: stloc.0
IL_000e: call class [System.Web]System.Web.HttpContext [System.Web]System.Web.HttpContext::get_Current()
IL_0013: stloc.1
IL_0014: ldloc.1
IL_0015: brfalse.s IL_0039
.try
{
IL_0017: ldloc.1
IL_0018: call class Microsoft.SharePoint.SPWeb Microsoft.SharePoint.WebControls.SPControl::GetContextWeb(class [System.Web]System.Web.HttpContext)
IL_001d: brtrue.s IL_0023
IL_001f: ldnull
IL_0020: stloc.2
IL_0021: leave.s IL_003b
IL_0023: leave.s IL_002a
} // end .try
catch [mscorlib]System.InvalidOperationException
{
IL_0025: pop
IL_0026: ldnull
IL_0027: stloc.2
IL_0028: leave.s IL_003b
} // end handler
.try
{
IL_002a: ldloc.1
IL_002b: call class Microsoft.SharePoint.SPContext Microsoft.SharePoint.SPContext::GetContext(class [System.Web]System.Web.HttpContext)
IL_0030: stloc.0
IL_0031: leave.s IL_0039
} // end .try
catch [mscorlib]System.IO.FileNotFoundException
{
IL_0033: pop
IL_0034: leave.s IL_0039
} // end handler
catch [mscorlib]System.InvalidOperationException
{
IL_0036: pop
IL_0037: leave.s IL_0039
} // end handler
IL_0039: ldloc.0
IL_003a: ret
IL_003b: ldloc.2
IL_003c: ret
} // end of method SPContext::get_Current
What prevents the code from being loaded by ILSpy when original instructions are removed before new ones are inserted?
Notes:
Tampering is done with Mono.Cecil, using the MethodDefinition.Body.Instructions collection (and the corresponding Insert and Remove methods).
A few other methods and properties of the Microsoft.SharePoint assembly are tampered with successfully: ILSpy displays the resulting IL code.
I thought the .maxstack directive could be a problem (1 in the original property, 8 in the proxied one, 1 in the result). After a few tests on a separate project, it appears that it has no effect.
I also suspected that exceptions could be the cause (the original code throws different exceptions than the new one). After a few tests on a separate project, it appears that this has no effect either.
When IL is shown in textual form, exception handling blocks (.try, catch, etc.) appear as actual blocks of IL instructions, just like they do in C#.
But in the binary form, exception handling blocks are stored separately (see §II.25.4.6 Exception handling clauses of ECMA-335) and reference the IL instructions using offsets. In Cecil, exception handlers are represented using the MethodBody.ExceptionHandlers property.
So, if you replaced the old MethodBody.Instructions with your own instructions, it's very likely that the offsets of the old exception handlers are now invalid, which is causing the issues. (The fact that Cecil throws a NullReferenceException sounds like a bug to me; consider reporting it.)
The other example that you linked to, which doesn't exhibit this problem, is different because there the original method doesn't contain exception handlers; it just throws an exception. And throw is an ordinary IL instruction; it doesn't have a special representation the way .try/catch does.
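In practical terms, when the whole body is replaced, MethodBody.ExceptionHandlers (and usually the old locals) have to be cleared or rewritten along with the instructions, so that nothing refers to instructions that no longer exist. A minimal Mono.Cecil sketch, assuming the target has already been resolved into method and that notImplementedCtor is a MethodReference previously imported for NotImplementedException(string) (both parameter names are placeholders, not from the original post):

using Mono.Cecil;
using Mono.Cecil.Cil;

static void ReplaceBodyWithThrow(MethodDefinition method, MethodReference notImplementedCtor)
{
    MethodBody body = method.Body;

    // Remove everything that can still reference the old instructions.
    body.Instructions.Clear();
    body.ExceptionHandlers.Clear();   // old handlers point at offsets that no longer exist
    body.Variables.Clear();           // the old locals are not used by the new body

    // Emit the replacement body: throw new NotImplementedException("...").
    ILProcessor il = body.GetILProcessor();
    il.Append(il.Create(OpCodes.Ldstr, "Proxy doesn't implement this property yet."));
    il.Append(il.Create(OpCodes.Newobj, notImplementedCtor));
    il.Append(il.Create(OpCodes.Throw));
}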

COM Object that returns SAFEARRAY(Long) causing SafeArrayTypeMismatchException

I write plugins for a program we use at work, using an API type library they provide. It is a COM object named SCAPI. The COM object was written for VB6, so when I add a reference to it in .NET, an interop version of it is created.
When I use the following code, which according to the documentation is supposed to return a SAFEARRAY(Long), I get a SafeArrayTypeMismatchException.
Dim oBag As SCAPI.PropertyBag = tempModel.PropertyBag
Dim x = oBag.Value("Created")
The error is thrown from the SCAPI.PropertyBagClass.get_Value(Object Property) function, which is part of the COM object, not something I've written. Despite all the research I've done, I can't figure out what I need to do to get this working. I've used tlbimp.exe to inspect the method information, but it doesn't seem to contain any [out] tags, even though the same function works for oBag.Value("Name"), which returns a string value and doesn't throw an error:
.method public hidebysig newslot specialname abstract virtual
instance object
marshal( struct)
get_Value([in] object marshal( struct) Property) runtime managed internalcall
{
.custom instance void [mscorlib]System.Runtime.InteropServices.DispIdAttribute::.ctor(int32) = ( 01 00 01 00 02 60 00 00 ) // .....`..
} // end of method ISCPropertyBag::get_Value

Bind generic Interface in Ninject

So, I have dug around for quite some time to find an answer to this, with no luck.
What am I doing wrong?
Ninject throws an exception with this message:
Error activating IModelRepository{User}
No matching bindings are available, and the type is not self-bindable.
Here's my code:
I have a generic Interface:
public interface IModelRepository<T> where T: IModel
{
//interface stuff here
}
The concrete class is:
public class UserRepository : IModelRepository<User>
{
public UserRepository(IDocumentStore documentStore, string databaseName)
{
//constructor code here
}
}
Ninject module Load():
public override void Load()
{
string databaseName = Properties.Settings.Default.DefaultDatabaseName;
Bind<IModelRepository<User>>()
.To<UserRepository>()
.WithConstructorArgument("documentStore", Kernel.Get<IDocumentStore>())
.WithConstructorArgument("databaseName", databaseName);
}
Ninject instantiation (this is where the exception occurs):
Kernel = new Ninject.StandardKernel(new DIModules.ModelRepositoryModule()
,new DIModules.DocumentStoreModule());
Here's the full stack trace:
at Ninject.KernelBase.Resolve(IRequest request) in c:\Projects\Ninject\ninject\src\Ninject\KernelBase.cs:line 359
at Ninject.ResolutionExtensions.GetResolutionIterator(IResolutionRoot root, Type service, Func`2 constraint, IEnumerable`1 parameters, Boolean isOptional, Boolean isUnique) in c:\Projects\Ninject\ninject\src\Ninject\Syntax\ResolutionExtensions.cs:line 263
at Ninject.ResolutionExtensions.Get[T](IResolutionRoot root, IParameter[] parameters) in c:\Projects\Ninject\ninject\src\Ninject\Syntax\ResolutionExtensions.cs:line 37
at xl.view.DIModules.DataStoreModule.Load() in c:\Users\Michael\Google Drive\Projects\Windows\xl\xl.view\DIModules\DataStoreModule.cs:line 18
at Ninject.Modules.NinjectModule.OnLoad(IKernel kernel) in c:\Projects\Ninject\ninject\src\Ninject\Modules\NinjectModule.cs:line 85
at Ninject.KernelBase.Load(IEnumerable`1 m) in c:\Projects\Ninject\ninject\src\Ninject\KernelBase.cs:line 217
at Ninject.KernelBase..ctor(IComponentContainer components, INinjectSettings settings, INinjectModule[] modules) in c:\Projects\Ninject\ninject\src\Ninject\KernelBase.cs:line 100
at Ninject.KernelBase..ctor(INinjectModule[] modules) in c:\Projects\Ninject\ninject\src\Ninject\KernelBase.cs:line 57
at Ninject.StandardKernel..ctor(INinjectModule[] modules) in c:\Projects\Ninject\ninject\src\Ninject\StandardKernel.cs:line 46
at xl.view.Program.InitializeApplication() in c:\Projects\Windows\xl\xl.view\Program.cs:line 53
at xl.view.Program.Main() in c:\Windows\xl\xl.view\Program.cs:line 28
.WithConstructorArgument("documentStore", Kernel.Get<IDocumentStore>())
You might want to change that to ctx => Kernel.Get<IDocumentStore>(). The way you're calling it, you're creating objects during the module Load() - this should not be the case - module Load() methods should only Bind() stuff.
Also, I don't have a dev environment to hand, but I'm pretty sure there should be a way to let default provisioning take care of binding that constructor parameter to whatever DI would resolve; a sketch of both ideas follows below.
(If none of the above makes sense, you'll definitely need to give a more complete stack trace than you have.)
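To make the first suggestion concrete, here is a hedged sketch of what the module's Load() could look like, assuming a Ninject version whose WithConstructorArgument accepts a Func<IContext, object> callback; the documentStore argument is then resolved lazily at activation time instead of during Load(). If IDocumentStore is bound in another module, the documentStore line can even be dropped and Ninject will resolve that constructor parameter by itself.

public override void Load()
{
    string databaseName = Properties.Settings.Default.DefaultDatabaseName;

    Bind<IModelRepository<User>>()
        .To<UserRepository>()
        // Defer resolution to activation time instead of calling Kernel.Get() inside Load().
        .WithConstructorArgument("documentStore", ctx => ctx.Kernel.Get<IDocumentStore>())
        .WithConstructorArgument("databaseName", databaseName);
}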
Try changing the order of the modules; the order seems to be important, because IModelRepository<User> does not know about IModel and User before you bind them:
Kernel = new Ninject.StandardKernel(
new DIModules.DocumentStoreModule(),
new DIModules.ModelRepositoryModule());
This works well for me, and here is full sample: http://pastebin.com/2TjBqAwc

MSIL Property Setter - Access to the value field

I have the following setter method, but the object I put in value isn't passed through to the called method:
.method public hidebysig specialname instance void set_SeatingCapacity(int32 'value') cil managed
{
.custom instance void [mscorlib]System.Runtime.CompilerServices.CompilerGeneratedAttribute::.ctor()
.maxstack 3
L_0000: ldc.i4 0x6c
L_0005: ldarg.0
L_0006: ldfld int32 Young3.FMSearch.Core.Entities.InGame.BaseObject::MemoryAddress
L_000b: ldarg.1 // pushes 'value' as a raw int32, but the third parameter of Set is object
L_000c: call void Young3.FMSearch.Core.Managers.PropertyInvoker::Set(int32, int32, object)
L_0011: ret
}
I want the call at L_000c to behave like Set(0x6c, MemoryAddress, value). The first two arguments arrive at the function correctly. Any clue? It looks fine when I do something similar and inspect the definition in Reflector.
I had to either insert a box int32 before the call (the third parameter is an object, so the raw int32 must be boxed), or make Set generic (Set<T>).

Run-Time Check Failure #0 vb.net callback from C dll

I'm writing an add-in application A in VB.Net and a DLL B in C.
Application A passes a callback method to DLL B.
When a certain event occurs, the DLL invokes the callback from A.
The whole thing works fine on my PC, but when I move it to a notebook I get an error:
Run-Time Check Failure #0 - The value of ESP was not properly saved across a function call. This is usually a result of calling a function declared with one calling convention with a function pointer declared with a different calling convention.
This is part of C code:
typedef void (__cdecl * OFFICE_PTR)();
void TAPIClient::tapiCallBack(
DWORD hDevice,
DWORD dwMessage,
DWORD dwInstance,
DWORD dwParam1,
DWORD dwParam2,
DWORD dwParam3){
switch (dwMessage)
{
case LINE_CALLSTATE:
switch (dwParam1)
{
case LINECALLSTATE_OFFERING:
if(dwInstance!=NULL)
{
try
{
OFFICE_PTR vbFunc =(OFFICE_PTR)dwInstance;
vbFunc( );//Critical moment
}
catch(...)
{
MessageBox (NULL, L"( (OFFICE_PTR)dwInstance )(&sCallNr)",L"ERROR",MB_OK);
}
}
break;
};
break;
}
}
where dwInstance is the address of application A's callback method.
This is part of VB.Net code:
Public Class TapiPlugin
Public Delegate Sub P_Fun()
Private Declare Function startSpy _
Lib "TAPIClient.dll" _
(ByVal pFun As P_Fun) As IntPtr
Public Shared Sub simpleTest()
MsgBox("Plugin sub simpleTest")
End Sub
Public Sub onStart()
Dim pBSTR As IntPtr
pBSTR = startSpy(AddressOf simpleTest)
MsgBox(Marshal.PtrToStringAuto(pBSTR))
Marshal.FreeBSTR(pBSTR)
End Sub
End Class
The error occurs when I try to call vbFunc(). I would be grateful for any help. :D
If the calling convention is cdecl, then you need to declare your delegate like this:
<UnmanagedFunctionPointer(CallingConvention.Cdecl)>
Public Delegate Sub P_Fun()
You can only do this in .NET 2.0 and after, as the attribute was not introduced before then (and the interop layer was not changed to acknowledge it before that).
If the calling convention is indeed stdcall then the delegate can remain as is. You said it is stdcall, but I have doubts, since the exception is explicitly telling you that there might be a mismatch in calling conventions.
Do the two computers have different pointer sizes perhaps? Maybe one is a 64 bit machine and the other only 32?
typedef void (__cdecl * OFFICE_PTR)();
void TAPIClient::tapiCallBack(
DWORD hDevice,
DWORD dwMessage,
DWORD dwInstance,
...){
...
OFFICE_PTR vbFunc =(OFFICE_PTR)dwInstance;
vbFunc( );//Critical moment
The DWORD type is not really valid for passing pointer types. You should be using INT_PTR I guess.
I think that is not the reason. To check it, I passed the callback as a global pointer of type OFFICE_PTR and I get the same result. On the PC it works fine; on the notebook it crashes :(
I have to apologize for a mistake. I wrote that the typedef looks like:
typedef void (__cdecl * OFFICE_PTR)();
but in reality it looks like:
typedef void (__stdcall * OFFICE_PTR)();