MVVM - calling UI logic in code-behind from the ViewModel - XAML

I'm working on a .NET XAML application using the MVVM pattern.
According to MVVM, I keep my app logic in the VM and do UI-related actions in the code-behind.
But I need to execute some UI-related code in the code-behind in response to some logic in the VM.
Example:
I need to show an error message (a custom toast notification in my case) when a login operation fails. The login operation resides in the VM, but I can't use any UI-specific classes in my VM, so I made an event in the VM and hook up to it in the code-behind, where I do the UI stuff.
Is this a violation of the MVVM pattern? If so, how should I solve my case?
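For reference, here is a minimal sketch of what I mean (class and member names are made up for illustration; ShowToast stands for my custom toast helper):

using System;
using System.Windows;

// ViewModel: no UI types, only an event the view can observe.
public class LoginViewModel
{
    public event Action<string> LoginFailed;

    public void Login(string userName, string password)
    {
        bool success = false; // the real authentication logic would go here
        if (!success)
            LoginFailed?.Invoke("Login failed, please try again.");
    }
}

// Code-behind: purely UI-related reaction to the ViewModel's event.
public partial class LoginView : Window
{
    public LoginView()
    {
        InitializeComponent();
        var vm = new LoginViewModel();
        DataContext = vm;
        vm.LoginFailed += message => ShowToast(message); // ShowToast = custom toast notification
    }
}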

Ideally, communication between the View and the ViewModel in the MVVM pattern is done through a mediator, to avoid hard-referencing the View from the VM. With a mediator:
The View can subscribe to a certain type of message.
The VM sends the message to the mediator.
The mediator broadcasts the message, so every party that subscribed will get it.
Upon receiving it, the View can respond by executing certain UI logic according to the message.
The CodeProject link above shows how to implement a mediator class, but I would suggest using a popular MVVM framework, since you'll find a Mediator implementation and many other MVVM tools available out of the box.
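To make the idea concrete, a very small mediator could look like the sketch below (illustrative only; frameworks such as Prism's EventAggregator or MVVM Light's Messenger give you a more robust version of the same thing):

using System;
using System.Collections.Generic;

// Minimal string-keyed mediator: the View registers a callback, the VM sends a message.
public static class Mediator
{
    private static readonly Dictionary<string, List<Action<object>>> _subscribers =
        new Dictionary<string, List<Action<object>>>();

    public static void Subscribe(string message, Action<object> callback)
    {
        if (!_subscribers.ContainsKey(message))
            _subscribers[message] = new List<Action<object>>();
        _subscribers[message].Add(callback);
    }

    public static void Send(string message, object payload = null)
    {
        if (_subscribers.TryGetValue(message, out var callbacks))
            foreach (var callback in callbacks)
                callback(payload);
    }
}

// View code-behind:  Mediator.Subscribe("LoginFailed", msg => ShowToast((string)msg));
// ViewModel:         Mediator.Send("LoginFailed", "Invalid user name or password.");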


How to subscribe to Topic in Dapr for .Net Core outside Controllers

I just started with Dapr a few days back and although I am able to publish and subscribe to events in Dapr, the way I am doing so is using the Topic method attribute on an action method within a controller such as this...
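(For illustration, a controller-based subscription of that kind looks roughly like the following; the pubsub component, topic, event and handler names here are placeholders, not my actual code.)

using Dapr;
using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;

[ApiController]
public class IntegrationEventsController : ControllerBase
{
    // Dapr delivers messages published to the topic to this action via HTTP POST.
    [Topic("pubsub", "myapp.accounts.v1.account-created")]
    [HttpPost("account-created")]
    public async Task<IActionResult> OnAccountCreated(AccountCreated evt,
        [FromServices] IAccountCreatedHandler handler)
    {
        await handler.HandleAsync(evt);
        return Ok();
    }
}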
And while this works, I prefer not to mix integration events with the service API. This is the Swagger...
I get that the topic name is long, but it's so I can ensure unique topics.
What I'd prefer is to position the Handler outside of any Controller. Something like this..
Is this even possible?
I derived a solution out of the .NET Dapr client routing sample.
For each event I add a MapPost endpoint similar to the following
The RequestDelegator receives the event for a given Topic; it resolves the handler class from the interface and topic attributes and then invokes its Handle method, passing in the event data from Dapr.
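A rough sketch of that wiring (the dispatcher's API is not shown above, so RequestDelegator.HandleAsync and the event/topic names below are assumed):

// In Startup.Configure: one MapPost per integration event, bound to its topic.
// (GetRequiredService needs Microsoft.Extensions.DependencyInjection.)
app.UseCloudEvents();                 // unwrap the CloudEvent envelope Dapr sends
app.UseEndpoints(endpoints =>
{
    endpoints.MapSubscribeHandler();  // lets Dapr discover the programmatic subscriptions

    endpoints.MapPost("/events/myapp.accounts.v1.account-created", async context =>
    {
        // Hypothetical dispatcher: resolves the handler registered for this event
        // type from the container and invokes its Handle method with the payload.
        var delegator = context.RequestServices.GetRequiredService<RequestDelegator>();
        await delegator.HandleAsync<AccountCreated>(context);
    })
    .WithTopic("pubsub", "myapp.accounts.v1.account-created");
});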
My services follow the CQRS and Event Sourcing patterns, so integration events will rarely be the same shape as the command inputs. In my case, events are typically much lighter than commands, consisting mostly of related Ids.

Can a BackgroundService run indefinitely in ASP.NET Core 3.1?

I am constructing a web service that receives data and updates it periodically. When a user pings the service, it will send specific data back to the user. In order to receive this data, I have a persistent connection that is created on startup and regularly receives updates, but not at periodic intervals. I have already implemented it, but I would like to add DI and turn it into a service. Can this type of problem be solved with a BackgroundService, or is this not recommended? Is there anything better I should use? I originally wanted to just register my connection object as a singleton, but since singletons are not initialized on startup, that does not work so well for me.
I thought I would add an answer so as to expand on my comment. From what you have described, creating a BackgroundService is likely the best solution for what you want to do.
ASP.NET Core provides an IHostedService interface that can be used to implement a background task or service in your web app. It also provides a BackgroundService class that implements IHostedService and serves as a base class for long-running background services. These background services are registered with the dependency injection container, typically via AddHostedService in Startup.ConfigureServices (or on the host builder in Program.cs).
You can consume services from the dependency injection container, but you will have to properly manage their scopes when using them. You can decide how to manage your BackgroundService classes to fit your needs. It does take an understanding of how to work with Task objects (executing, queueing, monitoring them, etc.), so I'd recommend giving the docs a thorough read so you don't end up impacting performance or resource usage.
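Here's a minimal sketch of a long-running BackgroundService that consumes a scoped service correctly (IDataUpdater and the delay are placeholders for your own connection/update logic):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class DataFeedWorker : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public DataFeedWorker(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Started automatically with the host; runs until shutdown is requested.
        while (!stoppingToken.IsCancellationRequested)
        {
            // Scoped services must be resolved from a scope created here, not
            // injected directly into this (effectively singleton) service.
            using (var scope = _scopeFactory.CreateScope())
            {
                var updater = scope.ServiceProvider.GetRequiredService<IDataUpdater>();
                await updater.ProcessPendingUpdatesAsync(stoppingToken);
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

// Registration, e.g. in Startup.ConfigureServices:
// services.AddHostedService<DataFeedWorker>();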
I also tend to use Autofac as my DI container rather than the built in Microsoft container, since Autofac provides more features for resolving services and managing scopes. So it's worth considering if you find yourself hitting a wall because of the built in container.
Here's the link to the docs section covering this in much more depth. I believe you can also create standalone service workers now, so that might be worth a look depending on use case.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.1&tabs=visual-studio
Edit: Here's another link to a guide with an example implementation of a microservice background service. It goes a little more in depth on some of the specifics.
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice#implementing-ihostedservice-with-a-custom-hosted-service-class-deriving-from-the-backgroundservice-base-class

@RepositoryEventHandler only invoked via HTTP - why?

When I use a @RepositoryEventHandler, its methods are only invoked when the call into the repository comes in via HTTP.
Any reason why? OK, it is called Spring Data REST, but wouldn't it be VERY useful to invoke the handler too when I call my repo directly, not via HTTP?
Is there any way to invoke the handler when calling directly (some magic AOP stuff)?
Thank you
The reason for that is that the persistence mechanisms covered by the different Spring Data modules already ship with their own event mechanisms, so depending on the one you use, you get a different mechanism to work with.
Unfortunately this can't be unified: with JPA, for example, not all persistence operations need to go through the repository in the first place, since JPA automatically flushes all changes made to an attached instance on EntityManager flush. In that case even AOP on the repository instance doesn't help.
So you're basically left with two choices:
The events exposed by Spring Data REST for all repositories (as we basically don't make use of the automatic change tracking in JPA).
The store-specific event mechanisms that make sure the persistence mechanism exposes events as documented.
I don't know if the solution I put below, taken from other Stack Overflow questions, would be seen as acceptable by @Olivier-drotbohm, but from:
SpringDataRest @RepositoryEventHandler not running when Controller is added
and
@RepositoryEventHandler events stop with @RepositoryRestController
you could inject/autowire the "ApplicationEventPublisher" and fire the BeforeCreateEvent/AfterCreateEvent manually to trigger the RepositoryEventHandler.
This is not a perfect solution, but I hope it is good enough for you (and we tested it: it works).

Where to put calls to 3rd party APIs in Apigility/ZF2?

I have just completed my first API in Apigility. Right now it is basically a gateway to a database, storing and retrieving multi-page documents uploaded through an app.
Now I want to run some processing on the documents, e.g. process them through a 3rd party API or modify the image quality etc., and return them to the app users.
Where (in which class) do I generally put such logic? My first reflex would be to implement it in the Resource classes. However, I feel that they will become quite messy, obscuring a clear view of the API's interface in the code and creating a dependency on a foreign API. Also, I feel limited because each method corresponds to an API call.
What if there is a certain processing/computing time, meaning I cannot directly respond with the result to a GET request? I thought about running an asynchronous process and sending a push notification to the app once the processing is complete. But again, where in the code would I ideally implement such processing logic?
I would be very happy to receive some architectural advice from someone who is more seasoned in developing APIs. Thank you.
You are able to use the zf-rest resource events to connect listeners with your additional custom logic without polluting your resources.
These events are fired in the RestController class (for example a post.create event here on line 382).
When you use the Apigility-Doctrine module you can also use the events triggered in the DoctrineResource class (for example the DoctrineResourceEvent::EVENT_CREATE_POST event here on line 361) to connect your listeners.
You can use a queueing service like ZendQueue, or a third-party module built on top of ZendQueue, to manage that. You can find different ZF2 queueing systems/modules using Google.
By injecting the queueing service into your listener you can simply push your jobs directly into your queue.

PRISM and WCF - Do they play nice?

OK, this is a more general "ugly critters in the corner" question. I am planning to start a project on WCF and PRISM. I have been playing around with PRISM for some time now and, I must say, I like it: a solid foundation for applications with nice possibilities to grow.
Now I want to incorporate WCF and build a distributed application, with one part on a server and two on the clients. They could even be on the same machine, or not, depending on the scenario.
My idea is to take the event concept from PRISM and extend it "over the wire" using WCF and callbacks, as described here: WCF AlarmClock Callback Example.
I created a small picture to illustrate the idea (mainly for me), perhaps this makes things a little more clear:
The grey arrows stand for "using lib". "WCF-Event-Base" means normal PRISM events whose Publish method is called "over the wire".
There are a few questions which come to mind:
Are there any existing known examples for such things?
What will be the best way to "raise" events over the wire?
Any possible problems with this concept (the ugly critters mentioned earlier)?
Regarding the second question, I am currently thinking about raising the events using a string (the type of the concrete event I want to raise) and the payload as the argument, something like public void RaiseEvent(string eventType, object eventPayload) {}. The payload needs to be serializable; perhaps I'll even include a hash check (meaning if I raise, e.g., an event with a picture as the argument 10 times, I only transfer the picture once, and afterwards the hash lets the server use its buffer when publishing)...
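Sketched as a duplex WCF contract, that idea might look roughly like this (contract and operation names are just placeholders; the payload is shown as a byte[] because the serialization question is still open):

using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IEventBrokerCallback))]
public interface IEventBroker
{
    // Client -> server: publish an event by type name plus serialized payload (and hash).
    [OperationContract(IsOneWay = true)]
    void RaiseEvent(string eventType, byte[] serializedPayload);
}

public interface IEventBrokerCallback
{
    // Server -> client: broadcast the event back to all subscribed clients.
    [OperationContract(IsOneWay = true)]
    void PublishEvent(string eventType, byte[] serializedPayload);
}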
OK, I think you get the idea. This "thing" should behave like one giant application, using a kind of WCF_EventAggregator instead of the normal PRISM IEventAggregator. (Wow, while writing I just got the idea to "simply" extend the IEventAggregator; I'll have to think about this)...
Why do I write this? Well, mainly for feedback, and to sort my thoughts. So comments are welcome; is there perhaps anything I should be "careful" about?
Chris
[EDITS]
Client distribution
There should be an undefined number of clients, and the server should not be aware of them. The server itself can be a client to itself, raising strongly typed PRISM events in other parts of the source code.
The main difference between a "client" and a "server" is the actual implementation of the WCF_PRISM connector; see the next section...
Client Event raising (PRISM feature)
In PRISM, to raise simple events you do NOT even need a reference to a service interface. The IEventAggregator can be obtained via dependency injection and provides an instance of the desired event (e.g. WeatherChangedEvent). This event can be raised by simply calling eventInstance.Publish(23), because the event is implemented as public class WeatherChangedEvent : CompositePresentationEvent<int>
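A quick sketch of that (Prism 4-style API; WeatherChangedEvent comes from the example above, the publisher class is just for illustration):

using Microsoft.Practices.Prism.Events;

// Defined once, e.g. in the infrastructure module.
public class WeatherChangedEvent : CompositePresentationEvent<int> { }

public class WeatherPublisher
{
    private readonly IEventAggregator _aggregator;

    public WeatherPublisher(IEventAggregator aggregator)
    {
        _aggregator = aggregator;
    }

    public void PublishTemperature(int temperature)
    {
        // Any module can publish...
        _aggregator.GetEvent<WeatherChangedEvent>().Publish(temperature);
    }
}

// ...and any module can subscribe to the same event type:
// _aggregator.GetEvent<WeatherChangedEvent>().Subscribe(temp => UpdateDisplay(temp));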
WCF - PRISM Connector
Subscribing to events is as simple as raising them. Every module can subscribe to events using the same technique: obtaining a reference and using Subscribe to attach to the event.
Here is where the "magic" should happen. The clients will include a PRISM module responsible for connecting PRISM events to WCF message sends. It will basically subscribe to all available events in the solution (they are all defined in the infrastructure module anyway) and send out a WCF message whenever an event is raised.
The difference between a SERVER and a CLIENT is the implementation of this module. There needs to be a slight difference because of two things.
The WCF setup settings
The flow of events to prevent an infinite loop
The event flow will be (example)
1. The client obtains a reference to WeatherChangedEvent
2. wChanged.Publish(27) --> normal PRISM event raising
3. The WCF_PRISM module is subscribed to the event and sends it to the server
4. The server internally gets its own instance of WeatherChangedEvent and publishes it
5. The server calls back to all clients, raising their WeatherChangedEvent
Open Points
The obvious point is preventing a loop. If the server raised the event in ALL clients, the clients would call back to the server, raising the event again, and so on... So there needs to be a distinction between a locally caused event (which means I have to send it to the server) and a "server-caused event" (which means I do not have to send it to the server).
Also, if a client initiated the event itself, it does not need to be called back by the server, because the event has already been raised (in the client itself, point 2).
All this special behaviour will be encapsulated in the WCF event raiser module, invisible to the rest of the app. I have to think about "how to know if an event has already been published"; perhaps a GUID or something like that would be a good idea.
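One possible shape for that, purely as a sketch (the envelope type and the seen-ID cache are my own assumptions, not a finished design):

using System;
using System.Collections.Generic;

// Envelope carried over the wire so each side can recognize events it has already handled.
public class EventEnvelope
{
    public Guid EventId { get; set; }
    public string EventType { get; set; }
    public byte[] Payload { get; set; }
}

// Inside the WCF_PRISM connector: remember seen IDs and skip duplicates.
public class EventLoopGuard
{
    private readonly HashSet<Guid> _seenEvents = new HashSet<Guid>();

    // Returns false if this event was already handled locally (breaks the loop).
    public bool ShouldProcess(EventEnvelope envelope)
    {
        return _seenEvents.Add(envelope.EventId);
    }
}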
And now the second big question, what I was aiming at when talking about "strings" earlier: I do not want to write a new service interface definition every time I add an event. Most events in PRISM are defined in one line, and especially during development I do not want to update the WCF_Event_Raising_Module each time I add an event.
I thought about sending the events directly when calling WCF, e.g. using a function with a signature like:
public void RaiseEvent(EventBase e, object[] args)
The problem is, I do not really know if I can serialize PRISM events that easily. They all derive from EventBase, but I have to check this... For that reason, I had the idea to use the type (as a string), because I know the server shares the infrastructure module and can obtain its own instance of the event (no need to send it over the wire, only the args).
So far, I will keep the question open for more feedback. The main new "insight" I just got: I have to think about the recursion / infinite loop problem.
Btw., if anybody is completely confused by all this event talk, give PRISM a try. You will love it, even if you only use DI and events (the RegionManager, e.g., is not my favorite).
Chris
[END EDIT 1]
This is a very interesting approach. I would say only two things here:
You are really asking for trouble if you use strings and object parameters. Strongly typed EventAggregator events (inheriting from CompositePresentationEvent) are the way to go here. Maintainability will go way up if you do this.
Your model for your WCF -> EventAggregator bridge should consider everything to and from the EventAggregator as an "event" and everything to/from the WCF services as a "message". What you should really consider is that you are essentially translating an EventAggregator event to a message, rather than asking the question "how do I raise WCF events".
I think what you are doing is feasible. Looking at your implementation I really like how you are thinking about it.
Slight Alternative (w/ strong typing)
I wanted to throw a little something out there and see what you thought about it... maybe it will influence your design slightly. Specifically this is meant to address my first point above and go even further with the strong-typing.
Have you considered having EventAggregator-backed implementations of your service interface? Let's say in your example you have an IWeatherService WCF service that you are working with. Currently, as I understand it, your usage will look something like this:
Client uses the WCF Event Client library and calls RaiseEvent("ChangeWeather", Weather.Sunny);
The WCF Event Client library translates this into the appropriate call to the WCF service waiting to receive this message, using the IWeatherService channel interface to do so. Probably with a big nasty switch statement based on the name of the method call.
Why not modify this slightly? Make IWeatherService a shared contract among all of the servers and clients. The servers will have the actual implementation, obviously, but the clients will have EventAggregator-backed implementations that go to a central broker that queues and sends messages to the servers.
Write an EventAggregator-backed implementation of the IWeatherService that raises events to be received by a central message broker and throw that implementation in your container for clients to use.
public class ClientWeatherService : IWeatherService
{
    private readonly IEventAggregator _aggregator;

    public ClientWeatherService(IEventAggregator aggregator)
    {
        _aggregator = aggregator;
    }

    public void ChangeWeather(Weather weather)
    {
        // Instead of calling the real WCF service, publish a strongly typed event;
        // a broker/handler elsewhere forwards it to the server.
        ChangeWeatherEvent cwEvent = _aggregator.GetEvent<ChangeWeatherEvent>();
        cwEvent.Publish(weather);
    }
}
From there, instead of using your "WCF Event Client Library" directly, clients use the IWeatherService directly, not knowing that it doesn't call the actual service.
public class MyWeatherViewModel : ViewModel
{
    private readonly IWeatherService _weatherService;

    // The container injects the EventAggregator-backed ClientWeatherService here,
    // so the view model never knows it is not talking to the real WCF service.
    public MyWeatherViewModel(IWeatherService weatherService)
    {
        _weatherService = weatherService;
    }
}
Then you'd have some event handler set up to make the WCF calls to the real service, but now you have the benefit of strong typing from the clients.
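A sketch of what that handler side might look like (names carried over from the example above; the real WCF proxy would come from something like a ChannelFactory, which I've left out):

// Subscribes to the locally published event and forwards it to the real WCF service.
public class ChangeWeatherForwarder
{
    private readonly IWeatherService _realServiceProxy; // the actual WCF channel/proxy

    public ChangeWeatherForwarder(IEventAggregator aggregator, IWeatherService realServiceProxy)
    {
        _realServiceProxy = realServiceProxy;
        aggregator.GetEvent<ChangeWeatherEvent>().Subscribe(OnChangeWeather);
    }

    private void OnChangeWeather(Weather weather)
    {
        // Strongly typed all the way: no switch on method names needed.
        _realServiceProxy.ChangeWeather(weather);
    }
}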
Just a thought.
I really like this type of question. I wish more people would ask this kind of thing on Stackoverflow. Gets the brain moving in the morning :)
It seems like a complicated approach to the problem.
Are you raising the events from the client application, or raising them from the service using the callback contract? Or both?
I would approach this with a simple service class in the client. It can implement the callback contract, and for each callback method it can just raise a PRISM event locally to any subscribers in the client. If you need to raise events that are handled by the service, then the service class can subscribe to those events and call the WCF service.
All you really need is a class that abstracts the details of the WCF service away from the client and exposes its interface through PRISM events.
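For example, a rough sketch of such a class (contract and event names assumed for illustration):

using System.ServiceModel;
using Microsoft.Practices.Prism.Events;

public interface IWeatherCallback
{
    [OperationContract(IsOneWay = true)]
    void OnWeatherChanged(int temperature);
}

// Client-side class implementing the callback contract: every callback from the
// server is simply translated into a local PRISM event.
public class WeatherCallbackHandler : IWeatherCallback
{
    private readonly IEventAggregator _aggregator;

    public WeatherCallbackHandler(IEventAggregator aggregator)
    {
        _aggregator = aggregator;
    }

    public void OnWeatherChanged(int temperature)
    {
        _aggregator.GetEvent<WeatherChangedEvent>().Publish(temperature);
    }
}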
I personally wouldn't want to modify or extend the infrastructure component and create a dependency on the concrete WCF service.