Is it possible to indirectly load a value type on the stack - cil

In Microsoft IL, calling a method on a value type requires an indirect reference (the address of the value). Let's say we have an ILGenerator named "il" and that a Nullable<int> is currently on top of the stack. If we want to check whether it has a value, we could emit the following:
var local = il.DeclareLocal(typeof(Nullable<int>));
il.Emit(OpCodes.Stloc, local);
il.Emit(OpCodes.Ldloca, local);
var method = typeof(Nullable<int>).GetMethod("get_HasValue");
il.EmitCall(OpCodes.Call, method, null);
However, it would be nice to skip saving it to a local variable and simply call the method on the address of the value already on the stack, something like:
il.Emit(/* not sure */);
var method = typeof(Nullable<int>).GetMethod("get_HasValue");
il.EmitCall(OpCodes.Call, method, null);
The ldind family of instructions looks promising (particularly ldind.ref), but I can't find enough documentation to know whether this would cause boxing of the value, which I suspect it might.
I've had a look at the C# compiler's output, but it uses local variables to achieve this, which makes me believe the first way may be the only way. Does anyone have any better ideas?
**** Edit: Additional Notes ****
Attempting to call the method directly, as in the following program with the lines commented out, doesn't work (the error will be "Operation could destabilize the runtime"). Uncomment the lines and you'll see that it does work as expected, returning "True".
var m = new DynamicMethod("M", typeof(bool), Type.EmptyTypes);
var il = m.GetILGenerator();
var ctor = typeof(Nullable<int>).GetConstructor(new[] { typeof(int) });
il.Emit(OpCodes.Ldc_I4_6);
il.Emit(OpCodes.Newobj, ctor);
//var local = il.DeclareLocal(typeof(Nullable<int>));
//il.Emit(OpCodes.Stloc, local);
//il.Emit(OpCodes.Ldloca, local);
var getValue = typeof(Nullable<int>).GetMethod("get_HasValue");
il.Emit(OpCodes.Call, getValue);
il.Emit(OpCodes.Ret);
Console.WriteLine(m.Invoke(null, null));
So you can't simply call the method with the value on the stack, because it's a value type (though you could if it were a reference type).
What I'd like to achieve (or to know whether it is possible) is to replace the three lines that are shown commented out, but keep the program working, without using a temporary local.

I figured it out! Luckily I was reading about the unbox opcode and noticed that it pushes the address of the value (whereas unbox.any pushes the value itself). So, in order to call a method on a value type without having to store it in a local variable and then load its address, you can simply emit a box followed by an unbox. Using your last example:
var m = new DynamicMethod("M", typeof(bool), Type.EmptyTypes);
var il = m.GetILGenerator();
var ctor = typeof(Nullable<int>).GetConstructor(new[] { typeof(int) });
il.Emit(OpCodes.Ldc_I4_6);
il.Emit(OpCodes.Newobj, ctor);
il.Emit(OpCodes.Box, typeof(Nullable<int>)); // box followed by unbox
il.Emit(OpCodes.Unbox, typeof(Nullable<int>));
var getValue = typeof(Nullable<int>).GetMethod("get_HasValue");
il.Emit(OpCodes.Call, getValue);
il.Emit(OpCodes.Ret);
Console.WriteLine(m.Invoke(null, null));
The downside to this is that boxing causes memory allocation for the boxed object, so it is a bit slower than using local variables (which would already be allocated). But, it saves you from having to determine, declare, and reference all of the local variables you need.

If the variable is already on the stack, you can go ahead and just emit the method call.
It seems that the constructor doesn't push the value onto the stack in a typed form. After digging into the IL a bit, it appears there are two ways of using the value after constructing it.
You can load the address of the local that will receive the value onto the evaluation stack before calling the constructor, and then load that address again after the constructor call, like so:
DynamicMethod method = new DynamicMethod("M", typeof(bool), Type.EmptyTypes);
ILGenerator il = method.GetILGenerator();
Type nullable = typeof(Nullable<int>);
ConstructorInfo ctor = nullable.GetConstructor(new Type[] { typeof(int) });
MethodInfo getValue = nullable.GetProperty("HasValue").GetGetMethod();
LocalBuilder value = il.DeclareLocal(nullable);
// load the variable to assign the value from the ctor to
il.Emit(OpCodes.Ldloca_S, value);
// load constructor args
il.Emit(OpCodes.Ldc_I4_6);
il.Emit(OpCodes.Call, ctor);
il.Emit(OpCodes.Ldloca_S, value);
il.Emit(OpCodes.Call, getValue);
il.Emit(OpCodes.Ret);
Console.WriteLine(method.Invoke(null, null));
The other option is doing it the way you have shown. The only reason for this that I can see is that the ctor methods return void, so they don't put their value on the stack like other methods. It does seem strange that you can call Stloc if the new object isn't on the stack.

After looking at the options some more and further consideration, I think you're right in assuming it can't be done. If you examine the stack behaviour of MSIL instructions, you can see that no instruction leaves its operand(s) on the stack. Since this would be a requirement for a 'get address of stack entry' op, I'm fairly confident one doesn't exist.
That leaves you with either dup+box or stloc+ldloca. As you've pointed out, the latter is likely more efficient.
#greg: Many instructions leave their result on the stack, but no instructions leave any of their operands on the stack, which would be required for a 'get stack element address' instruction.

Just wrote a class that does what the OP is asking... here's the IL code that the C# compiler produces:
IL_0008: ldarg.0
IL_0009: ldarg.1
IL_000a: newobj instance void valuetype [mscorlib]System.Nullable`1<int32>::.ctor(!0)
IL_000f: stfld valuetype [mscorlib]System.Nullable`1<int32> ConsoleApplication3.Temptress::_X
IL_0014: nop
IL_0015: ret

Related

Passing custom parameters in render function

I have below code to create column:
DTColumnBuilder.newColumn(null).withTitle('Validation').renderWith(validationRenderer)
and render function:
function validationRenderer(data, type, full, meta) {
.......
}
Now, I want to pass custom parameters to validationRenderer so that I can access it inside the function, like below:
DTColumnBuilder.newColumn(null).withTitle('Validation').renderWith(validationRenderer('abc'))
function validationRenderer(data, type, full, meta, additionalParam) {
// do something with additionalParam
}
I could not find it in the documentation, but there must be a way to pass additional parameters in meta, as per the reference from here.
Yes and no: you technically can't pass extra parameters directly, but you can use a clever workaround to handle your issue.
I had this issue today, and found a pretty sad (but working) solution.
Basically, the big problem is that the render function is a parameter passed to the datatable handler, which is (of course) isolated.
In my case, to give a practical example, I had to add several dynamic buttons, each with a different action, to a dynamic datatable.
Apparently there was no solution, until I thought of the following: the problem seems to be that the renderer function's scope is somewhat isolated and inaccessible. However, since the function's return value is used only when the datatable actually renders the field, you can wrap the render function in a custom self-invoking anonymous function and provide arguments there, to be used once the cell is rendered.
Here is what I did with my practical example, considering the following points:
The goal was to pass the ID field of each row to several different custom functions, so the problem was passing the ID of the button to call when the button is effectively clicked (since you can't get any external reference of it when it is rendered).
I'm using a custom class, which is the following:
hxDatatableDynamicButton = function(label, onClick, classNames) {
this.label = label;
this.onClick = onClick;
this.classNames = classNames || 'col5p text-center';
}
Basically, it just creates an instance that I'm later using.
In this case, consider having an array of 2 different instances of these, one having a "test" label, and the other one having a "test2" label.
I'm injecting these instances through a for loop, hence I need to pass the "i" to my datatable to know which of the buttons is being pressed.
Since the code is actually quite big (the codebase is huge), here is the relevant snippet that you need to accomplish the trick:
scope.datatableAdditionalActionButtons.reverse();
scope._abstractDynamicClick = function(id, localReferenceID) {
scope.datatableAdditionalActionButtons[localReferenceID].onClick.call(null, id);
};
for (var i = 0; i < scope.datatableAdditionalActionButtons.length; i++) {
var _localReference = scope.datatableAdditionalActionButtons[i];
var hax = (function(i){
var _tmp = function (data, type, full, meta) {
var _label = scope.datatableAdditionalActionButtons[i].label;
return '<button class="btn btn-default" ng-click="_abstractDynamicClick('+full.id+', '+i+')">'+_label+'</button>';
}
return _tmp;
})(i);
dtColumns.unshift(DTColumnBuilder.newColumn(null).notSortable().renderWith(hax).withClass(_localReference.classNames));
}
So, where is the trick? The trick is entirely in the hax function, and here is why it works: instead of passing the regular renderWith function directly, we use a "custom" render that has the same arguments (hence the same parameters) as the default one. However, it is wrapped in a self-invoking anonymous function, which lets us arbitrarily inject a parameter into it and therefore distinguish, at render time, which "i" it actually is, since the function's isolated scope is never lost.
Inspecting the rendered output shows that the elements are indeed rendered differently, so each "i" ends up being rendered properly, which would not have happened had the function not been wrapped in a self-invoking anonymous function.
So, basically, in your case, you would do something like this:
var _myValidator = (function(myAbcParam){
var _validate = function (data, type, full, meta) {
console.log("additional param is: ", myAbcParam); // logs "abc"
return '<button id="'+myAbcParam+'">Hello!</button>'; // <-- renders id ="abc"
}
return _validate ;
})('abc');
DTColumnBuilder.newColumn(null).withTitle('Validation').renderWith(_myValidator);
// <-- note that _myValidator is passed rather than "_myValidator()", since the self-invoking function has already executed and returned the render function.
I know this is not exactly the answer someone may be expecting, but if you need to accomplish something this complex in datatables, wrapping the renderer in a self-invoking anonymous function really looks like the only possible way to do it.
Hope this helps someone who is still having issues with this.

Is it possible to cast a managed byte array to a native struct without pin_ptr, so as not to bother the GC too much?

AFAIK, casting a managed array<Byte>^ to some non-managed struct is only possible using pin_ptr, like:
void Example(array<Byte>^ bfr) {
pin_ptr<Byte> ptr = &bfr[0];
auto data = reinterpret_cast<NonManagedStruct*>(ptr);
data->Header = 7;
data->Length = sizeof(NonManagedStruct);
data->CRC = CalculateCRC(data);
}
However, is it possible in any way with interior_ptr?
I'd rather work on the managed data the low-level way (using unions, struct bit-fields, and so on) without pinning it - I could be holding this data for quite a long time and don't want to harass the GC.
Clarification:
I do not want to copy managed-data to native and back (so the Marshaling way is not an option here...)
You likely won't harass the GC with pin_ptr - it's pretty lightweight unlike GCHandle.
GCHandle::Alloc(someObject, GCHandleType::Pinned) will actually register the object as being pinned in the GC. This lets you pin an object for extended periods of time and across function calls, but the GC has to track that object.
On the other hand, pin_ptr gets translated to a pinned local in IL code. The GC isn't notified about it, but it will get to see that the object is pinned only during a collection. That is, it will notice its pinned status when looking for object references on the stack.
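To make the difference concrete, here is a rough C++/CLI sketch of the two approaches (it assumes the NonManagedStruct from the question; the function names are only illustrative):
using namespace System::Runtime::InteropServices;
void WithGCHandle(array<Byte>^ bfr)
{
    // Explicitly registered with the GC; the array stays pinned until Free()
    // is called, even across function calls.
    GCHandle h = GCHandle::Alloc(bfr, GCHandleType::Pinned);
    try
    {
        auto data = reinterpret_cast<NonManagedStruct*>(h.AddrOfPinnedObject().ToPointer());
        data->Header = 7;
    }
    finally
    {
        h.Free(); // forgetting this keeps the array pinned indefinitely
    }
}
void WithPinPtr(array<Byte>^ bfr)
{
    // Just a pinned local; the GC only notices it if a collection happens
    // while this scope is alive.
    pin_ptr<Byte> ptr = &bfr[0];
    auto data = reinterpret_cast<NonManagedStruct*>(ptr);
    data->Header = 7;
} // pinning ends when ptr goes out of scope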
If you really want to, you can access stack memory in the following way:
[StructLayout(LayoutKind::Explicit, Size = 256)]
public value struct ManagedStruct
{
};
struct NativeStruct
{
char data[256];
};
static void DoSomething()
{
ManagedStruct managed;
auto nativePtr = reinterpret_cast<NativeStruct*>(&managed);
nativePtr->data[42] = 42;
}
There's no pinning at all here, but this is only due to the fact that the managed struct is stored on the stack, and therefore is not relocatable in the first place.
It's a convoluted example, because you could just write:
static void DoSomething()
{
NativeStruct native;
native.data[42] = 42;
}
...and the compiler would perform a similar trick under the covers for you.

Proper cleanup of CComSafeArray<VARIANT>

Given:
{
CComSafeArray<VARIANT> sa;
CComVariant ccv(L"test");
sa.Add(ccv, TRUE);
}
I was hoping the dtor of CComSafeArray would call ::VariantClear on each contained member, and the documentation seems to indicate that it does:
In certain cases, it may be preferable to clear a variant in code without calling VariantClear.
For example, you can change the type of a VT_I4 variant to another type without calling this
function. Safearrays of BSTR will have SysFreeString called on each element not VariantClear.
However, you must call VariantClear if a VT_type is received but cannot be handled. Safearrays
of variant will also have VariantClear called on each member.
(source: http://msdn.microsoft.com/en-us/library/windows/desktop/ms221165(v=vs.85).aspx)
But I see no such thing happening in the code in atlsafe.h.
Am I just looking in the wrong place, or is this supposed to happen as a side effect of ::SafeArrayDestroy(), which is the only thing the CComSafeArray dtor does?
Ultimately VariantClear will get called on the contents of the CComSafeArray object, albeit after progressing through multiple layers. CComSafeArray::~CComSafeArray() calls CComSafeArray::Destroy(), which is ultimately a wrapper around SafeArrayDestroy():
HRESULT Destroy()
{
HRESULT hRes = S_OK;
if (m_psa != NULL)
{
hRes = Unlock();
if (SUCCEEDED(hRes))
{
hRes = SafeArrayDestroy(m_psa);
if (SUCCEEDED(hRes))
m_psa = NULL;
}
}
return hRes;
}
SafeArrayDestroy() is documented as calling VariantClear on its contents if it contains VARIANTs:
Safe arrays of variant will have the VariantClear function called on
each member and safe arrays of BSTR will have the SysFreeString
function called on each element.
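For illustration, here is a minimal plain-COM sketch (not taken from atlsafe.h) that relies on exactly this documented behaviour: the only cleanup performed on the array is SafeArrayDestroy, which is expected to clear the contained VARIANT (and free its BSTR) itself. Error checking is omitted for brevity.
#include <windows.h>
#include <oleauto.h>
int main()
{
    // One-element safe array of VARIANT.
    SAFEARRAY* psa = SafeArrayCreateVector(VT_VARIANT, 0, 1);
    VARIANT v;
    VariantInit(&v);
    v.vt = VT_BSTR;
    v.bstrVal = SysAllocString(L"test");
    LONG idx = 0;
    SafeArrayPutElement(psa, &idx, &v); // copies the VARIANT into the array
    VariantClear(&v);                   // clear only our local copy
    SafeArrayDestroy(psa);              // clears each contained VARIANT for us
    return 0;
}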

qt5 proxy model updating too soon before main model update is done

I have a setup with a main model (QStandardItemModel), a proxy model which changes the output of the DisplayRole, and a separate table view displaying each model. Inside the main model's data there is a user role that stores a pointer to another QObject, which the proxy model uses to get the desired display value.
I'm running into problems when the object pointed to by that variable is deleted. I am handling deletion in the main model via the destroyed(QObject*) signal. Inside the slot, I search through the model looking for any items that are pointing to the object and delete the reference.
That part works fine on its own, but I have also connected to the dataChanged(...) signal of the proxy model, in a slot where I call resizeColumnsToContents() on the view displaying the proxy model. This then calls the proxy's data() function. Here I check whether the item has a pointer and, if it does, get some information from the object for display.
The result of all this becomes:
Object about to be deleted triggers destroyed(...) signal
Main model looks for any items using the deleted object and calls setData to remove the reference
Tableview catches onDataChanged signal for the proxy model and resizes columns
Proxy model's data(...) is called. It checks if the item in the main model has the object pointer and, if so, displays a value from the object. If not, it displays something else.
The problem is, at step 4 the item in the main model apparently still hasn't had its reference cleared; the pointer address is still stored. The object the pointer was referencing, though, has been deleted by this point, resulting in a segfault.
How can I fix my setup to make sure the main model is finished deleting pointer references before the proxy model tries to update?
Also, here is pseudo-code for the relevant sections:
// elsewhere
Object *someObject = new QObject();
QModelIndex index = mainModel->index(0,0);
mainModel->setData(index, someObject, ObjectPtrRole);
// do stuff
delete someObject; // Qt is actually doing this, I'm not doing it explicitly
// MainModel
void MainModel::onObjectDestroyed(QObject *obj)
{
// iterating over all model items
// if item has pointer to obj
item->setData(QVariant::fromValue(static_cast<Object*>(nullptr)), ObjectPtrRole);
}
// receives onDataChanged signal
void onProxyModelDataChanged(...)
{
ui->tblProxyView->resizeColumnsToContents();
}
QVariant ProxyModel::data(const QModelIndex &index, int role) const
{
QModelIndex srcIndex = mapToSource(index);
if(role == Qt::DisplayRole)
{
QVariant v = sourceModel()->data(srcIndex, ObjectPtrRole);
Object *ptr = qvariant_cast<Object*>(v);
if(ptr != NULL)
return ptr->getDisplayData();
else
return sourceModel()->data(srcIndex, role);
}
}
The problem is that, at the time ProxyModel::data(...) is called, ptr is not NULL but the object it references has already been deleted, so I end up with a segfault.
To avoid dangling pointer dereferences with instances of QObject, you can do one of two things:
Use object->deleteLater() - the object will be deleted once control returns to the event loop. Such functionality is also known as autorelease pools.
Use a QPointer. It will set itself to null upon deletion of the object, so you can check it before use.
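A minimal sketch of the QPointer route, assuming Object derives from QObject and that the pointer is stored in the model through QVariant as in the question (the Q_DECLARE_METATYPE registration is an extra assumption needed to put a QPointer into a QVariant):
#include <QPointer>
// Needed once, at global scope, so QPointer<Object> can travel inside a QVariant.
Q_DECLARE_METATYPE(QPointer<Object>)
// When storing the reference:
QPointer<Object> guarded(someObject);
mainModel->setData(index, QVariant::fromValue(guarded), ObjectPtrRole);
// In ProxyModel::data(...):
QVariant v = sourceModel()->data(srcIndex, ObjectPtrRole);
QPointer<Object> ptr = v.value<QPointer<Object>>();
if (!ptr.isNull())                  // becomes null automatically when the object is destroyed
    return ptr->getDisplayData();
return sourceModel()->data(srcIndex, role);
With this in place, the cleanup in onObjectDestroyed can still run whenever it gets its turn; in the meantime the proxy simply sees a null QPointer instead of a dangling raw pointer.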

How to get access to WriteableBitmap.PixelBuffer pixels with C++?

There are a lot of samples for C#, but only some code snippets for C++ on MSDN. I have put it together and I think it will work, but I am not sure if I am releasing all the COM references I have to.
Your code is correct--the reference count on the IBufferByteAccess interface of *buffer is incremented by the call to QueryInterface, and you must call Release once to release that reference.
However, if you use ComPtr<T>, this becomes much simpler--with ComPtr<T>, you cannot call any of the three members of IUnknown (AddRef, Release, and QueryInterface); it prevents you from calling them. Instead, it encapsulates calls to these member functions in a way that makes it difficult to screw things up. Here's an example of how this would look:
// Get the buffer from the WriteableBitmap:
IBuffer^ buffer = bitmap->PixelBuffer;
// Convert from C++/CX to the ABI IInspectable*:
ComPtr<IInspectable> bufferInspectable(AsInspectable(buffer));
// Get the IBufferByteAccess interface:
ComPtr<IBufferByteAccess> bufferBytes;
ThrowIfFailed(bufferInspectable.As(&bufferBytes));
// Use it:
byte* pixels(nullptr);
ThrowIfFailed(bufferBytes->Buffer(&pixels));
The call to bufferInspectable.As(&bufferBytes) performs a safe QueryInterface: it computes the IID from the type of bufferBytes, performs the QueryInterface, and attaches the resulting pointer to bufferBytes. When bufferBytes goes out of scope, it will automatically call Release. The code has the same effect as yours, but without the error-prone explicit resource management.
The example uses the following two utilities, which help to keep the code clean:
auto AsInspectable(Object^ const object) -> Microsoft::WRL::ComPtr<IInspectable>
{
return reinterpret_cast<IInspectable*>(object);
}
auto ThrowIfFailed(HRESULT const hr) -> void
{
if (FAILED(hr))
throw Platform::Exception::CreateException(hr);
}
Observant readers will notice that because this code uses a ComPtr for the IInspectable* we get from buffer, this code actually performs an additional AddRef/Release compared to the original code. I would argue that the chance of this impacting performance is minimal, and it's best to start from code that is easy to verify as correct, then optimize for performance once the hot spots are understood.
This is what I tried so far:
// Get the buffer from the WriteableBitmap
IBuffer^ buffer = bitmap->PixelBuffer;
// Get access to the base COM interface of the buffer (IUnknown)
IUnknown* pUnk = reinterpret_cast<IUnknown*>(buffer);
// Use IUnknown to get the IBufferByteAccess interface of the buffer to get access to the bytes
// This requires #include <Robuffer.h>
IBufferByteAccess* pBufferByteAccess = nullptr;
HRESULT hr = pUnk->QueryInterface(IID_PPV_ARGS(&pBufferByteAccess));
if (FAILED(hr))
{
throw Platform::Exception::CreateException(hr);
}
// Get the pointer to the bytes of the buffer
byte *pixels = nullptr;
pBufferByteAccess->Buffer(&pixels);
// *** Do the work on the bytes here ***
// Release reference to IBufferByteAccess created by QueryInterface.
// Perhaps this might be done before doing more work with the pixels buffer,
// but it's possible that without it - the buffer might get released or moved
// by the time you are done using it.
pBufferByteAccess->Release();
When using C++/WinRT (instead of C++/CX) there's a more convenient (and more dangerous) alternative. The language projection generates a data() helper function on the IBuffer interface that returns a uint8_t* into the memory buffer.
Assuming that bitmap is of type WriteableBitmap the code can be trimmed down to this:
uint8_t* pixels{ bitmap.PixelBuffer().data() };
// *** Do the work on the bytes here ***
// No cleanup required; it has already been dealt with inside data()'s implementation
In the code pixels is a raw pointer into data controlled by the bitmap instance. As such it is only valid as long as bitmap is alive, but there is nothing in the code that helps the compiler (or a reader) track that dependency.
For reference, there's an example in the WriteableBitmap::PixelBuffer documentation illustrating the use of the (otherwise undocumented) helper function data().