winrt::fire_and_forget, what does this do?

I'm porting a C++/CX application to a C++/WinRT Core Application, and I found a useful sample (Simple3DGameDX) at this link:
https://github.com/microsoft/Windows-universal-samples/tree/master/Samples/Simple3DGameDX/cppwinrt
Its suspending handler's return type is winrt::fire_and_forget, but in another example in C++/CX the suspending handler's return type is void.
Why is this return type not void in C++/WinRT, and what does it do?
C++/WinRT
winrt::fire_and_forget OnSuspending(IInspectable const& /* sender */, SuspendingEventArgs const& args)
C++/CX
void OnSuspending(Object^ Sender, SuspendingEventArgs^ Args)

Why is this return type not void in C++/WinRT, and what does it do?
Apart from the documentation for fire and forget, note the following: the function uses the co_await operator in its body.
That requires the function to be a coroutine, one that can be compiled into "stackless" form for asynchronous execution. A plain void return type does not work for that, but the fire_and_forget struct does, because C++/WinRT defines the coroutine handling for it, as explained in the documentation.
Think of this as a void which can be asynchronous and never needs to be waited on.
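For instance, a suspend handler written as a fire_and_forget coroutine typically has this shape (a minimal sketch; SaveStateAsync is a hypothetical placeholder for your own asynchronous work, not part of the sample):
winrt::fire_and_forget App::OnSuspending(IInspectable const& /* sender */, SuspendingEventArgs const& args)
{
    // Keep this object alive for the duration of the coroutine.
    auto lifetime = get_strong();
    // Take a deferral so the system waits for the asynchronous work below.
    auto deferral = args.SuspendingOperation().GetDeferral();
    // co_await makes this function a coroutine; a plain void return type would
    // not compile here, but winrt::fire_and_forget provides the coroutine plumbing.
    co_await SaveStateAsync(); // hypothetical async save routine
    deferral.Complete();
}
The caller gets control back at the first co_await, the remaining work completes in the background, and nothing ever needs to await the handler itself.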

winrt::fire_and_forget, what does this do?
Please refer to the documentation here:
Sometimes, you have a task that can be done concurrently with other work, and you don't need to wait for that task to complete (no other work depends on it), nor do you need it to return a value. In that case, you can fire off the task and forget it. You can do that by writing a coroutine whose return type is winrt::fire_and_forget (instead of one of the Windows Runtime asynchronous operation types, or concurrency::task).

Related

Blockhound is not detecting straightforward blocking code

Using Spring Boot WebFlux, I'm trying BlockHound on a very simple blocking call, but it doesn't seem to detect it.
<dependency>
<groupId>io.projectreactor.tools</groupId>
<artifactId>blockhound</artifactId>
<version>1.0.6.RELEASE</version>
</dependency>
In the main method:
public static void main(String[] args) {
    BlockHound.install();
    SpringApplication.run(MyApplication.class, args);
}
My blocking endpoint:
@GetMapping("/block")
public Mono<String> block() {
    String a = Mono.just("block").block();
    return Mono.just(a);
}
Any idea?
EDIT:
When I use UUID.randomUUID() in my endpoint, I get the error related to a blocking FileInputStream#readBytes used by randomUUID().
So I suppose my install is good.
Nothing is wrong here, you've just hit a corner case.
Mono.just() is a rather special kind of Mono in more ways than one (which is why I despair at its use in so many simple "getting started" style examples, but I digress). Since you're literally just wrapping a value inside a dummy publisher, it never needs to block in order to return its value, even if you call the block method. The method name might imply you're blocking, but you can trivially verify from the source code that it just returns the value. There's therefore no blocking operation occurring, and so nothing for BlockHound to complain about.
If you were to add another operator in the mix, even if it has no real-world effect:
String a = Mono.just("block").cache().block();
...then you'll see BlockHound start complaining, as you're no longer directly using the special case of MonoJust.
BlockHound is doing exactly what it should here; the issue is that you're (very understandably) expecting something to block which doesn't.

Project Reactor. Mono.map() vs Mono.flatMap()

What is the principal difference between these in terms of Mono?
From the documentation I read that flatMap acts asynchronously and map synchronously. But that doesn't really make sense to me, because Mono is all about parallelism, and that point isn't understandable. Can someone rephrase it in a more understandable way?
Then the documentation for flatMap states (https://projectreactor.io/docs/core/release/api/reactor/core/publisher/Mono.html#flatMap-java.util.function.Function-):
Transform the item emitted by this Mono asynchronously, returning the value emitted by another Mono (possibly changing the value type).
Which "another Mono" is meant there?
Mono#flatMap takes a Function that transforms a value into another Mono. That Mono could represent some asynchronous processing, like an HTTP request.
On the other hand, Mono#map takes a Function that transforms a value of type T into another value, of type R. That transformation is thus done imperatively and synchronously (e.g. transforming a String into a URL instance).
The other subtlety with flatMap is that the operator subscribes to the generated Mono, unlike what would happen if you passed the same Function to map.
To put it simply:
map(a -> b) effectively returns Mono.just(b).
map wraps the returned value in another Mono, whereas flatMap already expects the function to return a Mono, so no further wrapping is required.
A practical use case example:
public Mono<String> getResponseFromServer(Mono<String> request) {
    // some logic here
    return request.flatMap(r -> callServer(r));
}
Where callServer looks like:
public Mono<String> callServer(String body) {
    // invoke the HTTP call and return the Mono it produces
    return httpCall(body); // placeholder for the actual non-blocking HTTP client call
}
The above use case is not possible with map.

Generic List in C# DLL cannot be accessed from CLI

I want to preface this by pointing out that I am new to C++/CLI.
We have one solution with an unmanaged C++ application project (we'll call it "Application"), a C# .NET Remoting project which builds to a DLL (we'll call it "Remoting"), and a C++/CLI project for interfacing between the two (we'll call it "Bridge").
Everything seems to be working: we have an IMyEventsHandler interface in Bridge which successfully receives events from Remoting and can call methods in Application.
#ifndef EVENTS_HANDLER_INTERFACE_H_INCLUDED
#include "EventsHandlerInterface.h"
#endif
#define DLLEXPORT __declspec(dllexport)
#ifdef __cplusplus
extern "C"
{
#endif
    DLLEXPORT bool RegisterAppWithBridge(IMyEventsHandler * aHandler);
    DLLEXPORT void PostEventToServer(AppToServerEvent eventType);
    DLLEXPORT void PollEventsFromServer();
#ifdef __cplusplus
}
#endif
In the Bridge implementation we have a method for handling an event; depending on the event type, it calls a different method for handling that exact type:
void Bridge::OnReceiveServerEvent(IMyEvent^ aEvent)
{
    // Determine event type
    ...
    Handle_SpecificEventType();
}
This is all working fine so far. Once we call the handler for a known type of event, we can cast to it directly from the generic interface type. And this is where we start to see the issue. All these event types are defined in another DLL generated from C#. Simple events that have just ints or strings work just fine, but we have this SpecificEventType which contains a list of another type (we'll call it "AnotherType"), all defined in another DLL. All required DLLs have been added as references, and I am able to gcnew an AnotherType without it complaining.
However, once I try to get an AnotherType element out of the list, we see the build error: "C2526 'System::Collections::Generic::List::GetEnumerator' C linkage function cannot return C++ class"
void Bridge::Handle_SpecificEventType(IMyEvent ^evt)
{
    SpecificEventType ^e = (SpecificEventType ^)evt;
    // We can pull the list itself, but accessing elements gives the error
    System::Collections::Generic::List<AnotherType ^> ^lst = e->ThatList;
    // These all cause the error
    array<AnotherType ^> ^arr = lst->ToArray();
    AnotherType ^singleElement = lst[0];
    for each(AnotherType ^loopElement in lst){}
}
To clarify why we're doing this: we are taking managed events defined in a C# DLL and sent through .NET Remoting from a newer C# server, and "translating" them for an older unmanaged C++ application. So the end goal is to take the C# type "SpecificEventType", translate it into an unmanaged "SpecificEventType_Unmanaged", and just make a call into the application with that data:
// Declared in Bridge.h and assigned from the DLLEXPORT RegisterAppWithBridge method.
IMyEventsHandler *iApplicationEventHandler;
// Bridge.cpp
void Bridge::Handle_SpecificEventType(IMyEvent ^evt)
{
    // ... convert SpecificEventType to SpecificEventType_Unmanaged
    iApplicationEventHandler->Handle_SpecificEvent(eventUnmanaged);
}
This messaging all seems to be working and set up correctly, but it really doesn't want to give us the elements from the generic list, preventing us from pulling the data and building an unmanaged version of the event to send down to the application.
I hope I have explained this well; again, I am new to C++/CLI and haven't had to touch C++ for some years now, so let me know if any additional details are needed.
Thanks in advance for any assistance.
It turns out the issue was that all the methods in the Bridge implementation were still inside an extern "C" block. So much time lost over such a simple issue.
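In other words, keep only the flat C-callable exports inside extern "C", and define the Bridge member functions outside of it, since anything that enumerates a managed generic type needs C++ linkage. A rough sketch of the corrected layout (the bodies are placeholders, not the actual project code):
// Bridge.cpp
// (include the header from the question that declares DLLEXPORT and the exported functions)
extern "C"
{
    // Only the plain C entry points get C linkage.
    DLLEXPORT bool RegisterAppWithBridge(IMyEventsHandler * aHandler) { /* store the handler */ return true; }
    DLLEXPORT void PostEventToServer(AppToServerEvent eventType) { /* forward to the server */ }
    DLLEXPORT void PollEventsFromServer() { /* pump pending events */ }
}
// Member functions that enumerate managed generic types are defined outside
// any extern "C" block, so they keep C++ linkage.
void Bridge::Handle_SpecificEventType(IMyEvent ^evt)
{
    SpecificEventType ^e = (SpecificEventType ^)evt;
    for each (AnotherType ^element in e->ThatList)
    {
        // convert each element into its unmanaged counterpart here
    }
}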

OOP, enforcing method call order

Question:
This is a question about OOP practice. I've run into a situation while working with an API where there is a series of methods that need to be called in a specific order.
Case:
Controlling the operation of a smart sensor.
A simplified version of the interaction goes like this: first the API must be configured to interface with the sensor over TCP, the next command starts the scanning process, followed by receiving input for multiple items until the command to stop is given. At that time a similar series of disconnect commands must be given. If these are executed out of order an exception is thrown.
I see a conflict between the concepts of modularization and encapsulation here. Each of the steps is a discrete operation and thus should be encapsulated in separate methods, but they are also dependent on proper order of execution.
I'm thinking from the perspective of a later developer working on this code. It seems like someone would have to have a high level of understanding of this system before they could work on this code and that makes it feel fragile. I can add warning comments about this call order, but I'm hoping there's some principle or design pattern that might fit my situation.
Here's an example:
class RemoteTool
{
    public void Config();
    public void StartProcess();
    public void BeginListen();
    public void StopProcess();
    public void StopListening();
}
class Program
{
    static void Main(string[] args)
    {
        RemoteTool MyRemoteTool = new RemoteTool();
        MyRemoteTool.Config();
        MyRemoteTool.StartProcess();
        MyRemoteTool.BeginListen();
        // Do some stuff
        MyRemoteTool.StopListening();
        MyRemoteTool.StopProcess();
    }
}
The closest thing I can think of is to use boolean flags and check them in each function to ensure that the prerequisite functions have already been called, but I'm hoping for a better way.
Here's a method I found while looking for an answer. It's pretty simple and it helps, but it doesn't solve my issue.
Essentially the class is created exactly as in the question, but the order-dependent functions are made protected and public members are created to call them in order, like so:
class RemoteTool
{
    public bool Running = false;

    public void Run()
    {
        Config();
        StartProcess();
        BeginListen();
        Running = true;
    }

    public void Stop()
    {
        StopListening();
        StopProcess();
        Running = false;
    }

    protected void Config();
    protected void StartProcess();
    protected void BeginListen();
    protected void StopProcess();
    protected void StopListening();
}
The trouble is that you still have to call Stop() and Run() in the right order, but they're easier to manage and the modularization is higher.
I think the problem is that the RemoteTool class has a contract that requires certain pre-conditions, e.g. method b() has to execute after method a().
If your language does not provide a mechanism to define these kinds of pre-conditions, you need to implement one yourself.
I agree with you that implementing this extra functionality (or these specific class contract features) inside the RemoteTool class could degrade your current design. A simple solution could be to use another class whose responsibility is to enforce the needed pre-conditions before calling the specific method of RemoteTool (RemoteToolProxy could be a suitable name).
This way you decouple the concrete functionality from the contract that says how to use it.
There are other alternatives provided by a software design approach called Design by Contract that can give you other ways of improving your class contract.
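As an illustrative sketch of that proxy idea (written here in C++ since the pattern is language-agnostic; all names are hypothetical), the wrapper tracks the current state and rejects calls made out of order:
#include <stdexcept>

// Minimal stand-in for the API from the question (illustrative only).
class RemoteTool
{
public:
    void Config() {}
    void StartProcess() {}
    void BeginListen() {}
    void StopListening() {}
    void StopProcess() {}
};

// Hypothetical guard that enforces the required call order.
class RemoteToolProxy
{
public:
    void Run()
    {
        if (running)
            throw std::logic_error("Run() called while already running");
        tool.Config();        // the pre-conditions are satisfied in one place
        tool.StartProcess();
        tool.BeginListen();
        running = true;
    }

    void Stop()
    {
        if (!running)
            throw std::logic_error("Stop() called before Run()");
        tool.StopListening();
        tool.StopProcess();
        running = false;
    }

private:
    RemoteTool tool;   // only the proxy is allowed to drive the real API
    bool running = false;
};
A caller can then only misuse Run() and Stop(), and both fail fast with a clear error instead of an out-of-order call reaching the sensor.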

How to pass an Object^ to a native function in C++/CLI

I'm new to C++/CLI and I still don't get the new pointers and handles.
I have a native function which opens a window. It requires a handle to a parent window:
void open(void* parentHwnd);
How am I supposed to pass a parent window from managed code to this function? I was trying to do something like this:
void managedOpen(Object^ parent)
{
    interior_ptr<void> ptr = &*parent;
    open(ptr);
}
but the & operator "cannot be used to take the address of an object with a ref class type".
Also should I use pin_ptr instead of interior_ptr?
Picking proper types in an interop scenario is 99% of the battle. You didn't get any help from the existing code: void* is not an appropriate type to use for a window handle, it should be HWND. That ship probably sailed a long time ago.
But at the top of the list of types never to use is System::Object. That only ever interops correctly by sheer accident, unless you interop with COM code that uses variants. The appropriate type to store an operating system handle in managed code is IntPtr or SafeHandle. Heavily biased to IntPtr for window handles, since there isn't anything safe about them: they'll die beyond your control when the user closes a window.
So this needs to look like this:
void managedOpen(IntPtr parent)
{
    open(parent.ToPointer());
}
With the burden on the client code to produce a valid IntPtr. Could be Control.Handle in Winforms or WindowInteropHelper.Handle in WPF, etcetera.
Stuff like System::Object is only passed from managed to unmanaged code with the intention of passing it back to managed code, such as a managed function calling EnumWindows. But in this case:
In C++/CLI, you can simply pass a pointer to an unmanaged object containing a gcroot<> to the managed object you want to access.
In C#, you use the GCHandle class to obtain an IntPtr and back.
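A minimal sketch of that gcroot<> round trip (the ManagedHolder type and the wrap/unwrap helpers are hypothetical, purely for illustration):
#include <vcclr.h>  // gcroot
// Unmanaged holder that keeps the managed object reachable while native code owns the pointer.
struct ManagedHolder
{
    gcroot<System::Object^> target;
};
// Managed -> native: wrap the managed reference and hand out an ordinary pointer.
void* WrapForNative(System::Object^ obj)
{
    ManagedHolder* holder = new ManagedHolder();
    holder->target = obj;
    return holder;
}
// Native -> managed: recover the managed reference from the pointer.
System::Object^ UnwrapFromNative(void* p)
{
    ManagedHolder* holder = static_cast<ManagedHolder*>(p);
    System::Object^ obj = holder->target;
    delete holder;  // release the unmanaged holder once it is no longer needed
    return obj;
}
On the C# side the equivalent round trip goes through GCHandle.Alloc, GCHandle.ToIntPtr, GCHandle.FromIntPtr and GCHandle.Target.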