How to use Game Loops to Trigger SignalR Group Messages? - asp.net-core

Background
I have built a game on top of SignalR in .NET Framework and am now porting it to ASP.NET Core. As part of this I have several "Game Rooms" with up to 6 players in each. I am using SignalR's Group functionality to support the concept of a GameRoom - no problems there.
The Question
My question: how do I enable each GameRoom to broadcast messages automatically every 5 seconds to all members of the group?
I have one GameHub.cs which has a List<GameRoom>. Each GameRoom then maintains all the data (scores, players, SignalR users, SignalR group name, etc.) for its corresponding game. Inside the GameRoom I have a game loop which I want to leverage to push data to the players. The GameRooms are static for the duration of the game.
I am unable to ascertain whether my dependency modelling is correct in the new .NET Core world.
For example: GameHub --> GameRoom, which in turn contains a simple Timer loop. Inside this timer callback I don't know how to reach the GameHub to broadcast messages.
In .NET Framework I could leverage GlobalHost to get the hub context and broadcast a group message, something like the code below. How does one do something similar in an ASP.NET Core implementation?
class GameRoom
{
    List<Player> _players;
    Scoreboard _scores;
    string _gameRoomName;
    ...

    // Full-framework SignalR: GlobalHost gave direct access to the hub context.
    void GameLoop_Timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<GameHub>();
        context.Clients.Group(_gameRoomName).SendMessage(_gameState);
    }
}

Inside ASP.NET Core, I suggest you use a background service to achieve your requirement. Inside the background service you can write the logic that calls the hub to send the group message every 5 seconds.
For how to use the IHubContext, you could refer to this article and this sample.
For how to use a background service, you could refer to this article.
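For illustration, here is a rough sketch of such a background service, assuming a hub named GameHub, a singleton GameRoomManager that exposes the rooms, and a client method called "SendMessage" (these names are placeholders for your own types):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.Hosting;

// Sketch only: broadcasts each room's state to its SignalR group every 5 seconds.
public class GameLoopService : BackgroundService
{
    private readonly IHubContext<GameHub> _hubContext;
    private readonly GameRoomManager _rooms; // hypothetical singleton holding the GameRooms

    public GameLoopService(IHubContext<GameHub> hubContext, GameRoomManager rooms)
    {
        _hubContext = hubContext;
        _rooms = rooms;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            foreach (var room in _rooms.GetRooms())
            {
                // "SendMessage" is the client-side method name, matching the original code.
                await _hubContext.Clients.Group(room.GroupName)
                    .SendAsync("SendMessage", room.GetState(), stoppingToken);
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

Register it with services.AddSingleton<GameRoomManager>() and services.AddHostedService<GameLoopService>(); the game rooms themselves then no longer need a reference to the hub at all.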

Related

How to subscribe to Topic in Dapr for .Net Core outside Controllers

I just started with Dapr a few days back and although I am able to publish and subscribe to events in Dapr, the way I am doing so is using the Topic method attribute on an action method within a controller such as this...
And while this works I prefer not to mix integration events with service API. This is the Swagger...
I get that the topic name is long but it's so I can ensure unique topics.
What I'd prefer is to position the Handler outside of any Controller. Something like this..
Is this even possible?
I derived a solution out of the .Net Dapr client routing sample.
For each event I add a MapPost endpoint similar to the following
The RequestDelegator receives the event for a given Topic; it resolves the handler class from the interface and topic attributes and then invokes its handle method, passing in the event data from Dapr.
My services follow the CQRS and EventSourcing patterns, so integration events will rarely be the same shape as the command inputs. In my case, events are typically much lighter than commands, consisting mostly of related Ids.
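As a rough sketch of what that endpoint-based subscription can look like with the Dapr ASP.NET Core SDK, assuming minimal hosting (app is the WebApplication) and treating the pub/sub component name "pubsub", the topic/route names, and RequestDelegator.HandleAsync as placeholders for the poster's own setup:

using Microsoft.Extensions.DependencyInjection;

// In the request pipeline / endpoint setup:
app.UseCloudEvents();        // unwrap the CloudEvents envelope Dapr uses for pub/sub
app.MapSubscribeHandler();   // lets the Dapr sidecar discover the topic subscriptions below

app.MapPost("/events/order-created", async context =>
{
    // Resolve the poster's handler-dispatching class and hand it the event payload.
    var delegator = context.RequestServices.GetRequiredService<RequestDelegator>();
    await delegator.HandleAsync(context);
}).WithTopic("pubsub", "order-created");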

Where to put calls to 3rd party APIs in Apigility/ZF2?

I have just completed my first API in Apigility. Right now it is basically a gateway to a database, storing and retrieving multi-page documents uploaded through an app.
Now I want to run some processing on the documents, e.g. process them through a 3rd party API or modify the image quality etc., and return them to the app users.
Where (in which class) do I generally put such logic? My first reflex would be to implement such logic in the Resource-Classes. However I feel that they will become quite messy, obstructing a clear view on the API's interface in the code and creating a dependency on a foreign API. Also, I feel limited because each method corresponds to an API call.
What if there is a certain processing/computing time? Meaning I cannot directly return the result in response to a GET request. I thought about running an asynchronous process and sending a push notification to the app once the processing is complete. But again, where in the code would I ideally implement such processing logic?
I would be very happy to receive some architectural advice from someone who is more seasoned in developing APIs. Thank you.
You are able to use the zf-rest resource events to connect listeners with your additional custom logic without polluting your resources.
These events are fired in the RestController class (for example a post.create event here on line 382).
When you use Apigility-Doctrine module you can also use the events triggered in the DoctrineResource class (for example the DoctrineResourceEvent::EVENT_CREATE_POST event here on line 361) to connect your listeners.
You can use a queueing service like ZendQueue or something (a third party module) built on top of ZendQueue for managing that. You can find different ZF2 queueing systems/modules using Google.
By injecting the queueing service into your listener you can simply push your jobs directly into your queue.

Event Aggregator for ASP.NET MVC

I need something like EventAggregator in WPF PRISM but for ASP.NET MVC4.
I have the following problem. When some action is done, for example a comment is added, I need to send 2 mails and an SMS message. I would like to take this out of the comment-processing action and do it somewhere in the background.
One way I could do it is to subscribe to events in Global.asax, and then execute the list of subscribers for a specific event when needed (by creating an IEventAggregator, using dependency injection, like in PRISM). But then it will run on the same thread as the comment-processing action.
Is there any other way, and are background tasks and background workers even possible in ASP.NET MVC4?
NOTE: Currently I can't create separate services (WCF for example), which would do that work, because site is running on shared hosting.
Hm, have you heard about SignalR (background tasks are possible)? http://www.asp.net/signalr/overview/getting-started/introduction-to-signalr
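For what it's worth, a minimal sketch of the in-process aggregator idea from the question, with subscribers dispatched on the thread pool so the comment-processing action is not blocked (all names are illustrative, not from any framework; note that work queued this way can be lost if the app pool recycles):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public interface IEventAggregator
{
    void Subscribe<TEvent>(Action<TEvent> handler);
    void Publish<TEvent>(TEvent message);
}

public class SimpleEventAggregator : IEventAggregator
{
    private readonly ConcurrentDictionary<Type, List<Delegate>> _handlers =
        new ConcurrentDictionary<Type, List<Delegate>>();

    public void Subscribe<TEvent>(Action<TEvent> handler)
    {
        var list = _handlers.GetOrAdd(typeof(TEvent), _ => new List<Delegate>());
        lock (list) { list.Add(handler); }
    }

    public void Publish<TEvent>(TEvent message)
    {
        if (!_handlers.TryGetValue(typeof(TEvent), out var list)) return;

        Delegate[] snapshot;
        lock (list) { snapshot = list.ToArray(); }

        foreach (Action<TEvent> handler in snapshot)
        {
            // Run each subscriber on the thread pool so the MVC action returns immediately.
            Task.Run(() => handler(message));
        }
    }
}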

PRISM and WCF - Do they play nice?

Ok,
this is a more general "ugly critters in the corner" question. I am planning to start a project on WCF and PRISM. I have been playing around with PRISM for some time now and must say, I like it. A solid foundation for applications with nice possibilities to grow.
Now I want to incorporate WCF and build a distributed application, with one part on a server and two on the clients. It could be even the same machine, or not, depending on the scenario.
My idea is now to take the event concept from PRISM and extend it "over the wire" using WCF and callbacks, like described here WCF AlarmClock Callback Example.
I created a small picture to illustrate the idea (mainly for me), perhaps this makes things a little more clear:
The grey arrows stand for "using lib". "WCF-Event-Base" means normal PRISM events whose Publish method is called "over the wire".
There are a few questions which come to mind:
Are there any existing known examples for such things?
What will be the best way to "raise" events over the wire?
Any possible problems with this concept (the ugly critters mentioned earlier)?
Regarding the second question, I currently think about raising the events using a string (the type of the concrete event I want to raise) and the payload as argument. Something like public void RaiseEvent(string eventType, object eventPayload){} The payload needs to be serializable, perhaps I even include a hash check. (Meaning if I raise e.g. an event with a picture as argument 10 times, I only transfer the picture once, afterwards using the hash to let the server use the buffer when publishing)...
Ok, I think you get the idea. This "thing" should behave like a giant single application, using a kind of WCF_EventAggregator instead of the normal PRISM IEventAggregator. (wow, while writing I just got the idea to "simply" extend the IEventAggregator, have to think about this)...
Why do I write this? Well, for feedback mainly, and to sort my thoughts. So comments welcome, perhaps anything I should be "careful" about?
Chris
[EDITS]
Client distribution
There should be an undefined number of clients, and the server should not be aware of them. The server itself can also be a client to itself, raising strongly typed PRISM events in other parts of the source code.
The main difference between a "client" and a "server" is the actual implementation of the WCF_PRISM connector, see next chapter...
Client Event raising (PRISM feature)
In PRISM, to raise simple events you do NOT even need a reference to a service interface. The IEventAggregator can be obtained via dependency injection, providing an instance of the desired event (e.g. WeatherChangedEvent). This event can be raised by simply calling eventInstance.Publish(23) because the event is implemented as public class WeatherChangedEvent : CompositePresentationEvent<int>
WCF - PRISM Connector
Subscribing to events is as simple as raising them. Every module can subscribe to events using the same technique, obtaining a reference and using Subscribe to attach to the event.
Here is where the "magic" should happen. The clients will include a PRISM module responsible for connecting PRISM events to "WCF message sends". It will basically subscribe to all available events in the solution (they are all defined in the infrastructure module anyway) and send out a WCF message whenever an event is raised.
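To make that concrete, a rough sketch of such a connector module, assuming a hypothetical WCF contract IEventRelayService for the outgoing messages (the Prism namespace varies by version):

using Microsoft.Practices.Prism.Events; // Prism 4; older versions use a different namespace

// Defined once in the Infrastructure module, as described above.
public class WeatherChangedEvent : CompositePresentationEvent<int> { }

// Sketch of the client-side connector: subscribe to a Prism event and forward it over WCF.
// IEventRelayService is a hypothetical service contract, not an existing API.
public class WcfPrismConnector
{
    private readonly IEventRelayService _relay;

    public WcfPrismConnector(IEventAggregator aggregator, IEventRelayService relay)
    {
        _relay = relay;

        // One subscription like this per event type defined in the Infrastructure module.
        aggregator.GetEvent<WeatherChangedEvent>()
                  .Subscribe(payload => _relay.RaiseEvent("WeatherChangedEvent", payload),
                             ThreadOption.BackgroundThread, keepSubscriberReferenceAlive: true);
    }
}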
The difference between a SERVER and a CLIENT is the implementation of this module. There needs to be a slight difference because of two things.
The WCF setup settings
The flow of events to prevent an infinite loop
The event flow will be (example):
1. Client obtains a reference to WeatherChangedEvent
2. wChanged.Publish(27) --> normal PRISM event raising
3. WCF_PRISM module is subscribed to the event and sends it to the server
4. Server internally gets its instance of WeatherChangedEvent and publishes it
5. Server calls back to all clients, raising their WeatherChangedEvent
Open Points
The obvious point is preventing a loop. If the server raised the event in ALL clients, the clients would call back to the server, raising the event again, and so on... So there needs to be a difference between an event caused locally (which means I have to send it to the server) and a "server caused event", which means I do not have to send it to the server.
Also, if a client initiated the event itself, it does not need to be called back by the server, because the event has already been raised (in the client itself, point 2).
All this special behaviour will be encapsulated in the WCF event raiser module, invisible from the rest of the app. I have to think about "how to know if an event has already been published"; perhaps a GUID or something like that would be a good idea.
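One way to sketch that "already published" check, using a GUID carried with every forwarded event (the envelope and tracker types are purely illustrative):

using System;
using System.Collections.Generic;

// Hypothetical envelope the connector sends over the wire instead of the raw payload.
public class EventEnvelope
{
    public Guid EventId { get; set; }
    public string EventType { get; set; }
    public object Payload { get; set; }
}

// Remembers which event ids have already been published locally, so an event
// echoed back by the server is not raised (and forwarded) a second time.
public class PublishedEventTracker
{
    private readonly object _sync = new object();
    private readonly HashSet<Guid> _seen = new HashSet<Guid>();

    // Returns true the first time an id is seen, false on every later call.
    public bool TryMarkSeen(Guid eventId)
    {
        lock (_sync)
        {
            return _seen.Add(eventId);
        }
    }
}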
And now the second big question, which is what I was aiming at when talking about "strings" earlier. I do not want to write a new service interface definition every time I add an event. Most events in PRISM are defined by one line, and especially during development I do not want to update the WCF_Event_Raising_Module each time I add an event.
I thought about sending the events directly when calling WCF, e.g. using a function with a signature like:
public void RaiseEvent(EventBase e, object[] args)
The problem is, I do not really know if I can serialize PRISM events that easily. They all derive from EventBase, but I have to check this... For that reason, I had the idea to use the type (as a string), because I know the server shares the infrastructure module and can obtain its own instance of the event (no need to send it over the wire, only the arg).
So far till here; I will keep the question open for more feedback. The main new "insight" I just got: I have to think about the recursion / infinite loop problem.
Btw. if anybody is completely confused by all this event talk, give PRISM a try. You will love it, even if you only use DI and Events (RegionManager e.g. is not my favorite)
Chris
[END EDIT 1]
This is a very interesting approach. I would say only two things here:
You are really asking for trouble if you use strings and object parameters. Strongly typed EventAggregator events (inheriting from CompositeEvent) are the way to go here. The maintainability will go way up if you do this.
Your model for your WCF -> EventAggregator should consider everything to and from the EventAggregator as an "event" and everything to/from the WCF services as "messages". What you should really consider is that you are essentially translating a EventAggregator event to a message, rather than asking the question "how do I raise WCF events".
I think what you are doing is feasible. Looking at your implementation I really like how you are thinking about it.
Slight Alternative (w/ strong typing)
I wanted to throw a little something out there and see what you thought about it... maybe it will influence your design slightly. Specifically this is meant to address my first point above and go even further with the strong-typing.
Have you considered having EventAggregator-backed implementations of your service interface? Let's say in your example you have an IWeatherService WCF service that you are working with. Currently, as I understand it, your usage will look something like this:
Client uses the WCF Event Client library and calls RaiseEvent("ChangeWeather", Weather.Sunny);
The WCF Event Client library translates this into the appropriate call to the WCF service waiting to receive this message, using the IWeatherService channel interface to do so. Probably with a big nasty switch statement based on the name of the method call.
Why not modify this slightly. Make IWeatherService a shared contract among all of the servers and clients. The servers will have the actual implementation, obviously, but the clients will have EventAggregator-backed implementations that go to a central broker that queues and sends messages to servers.
Write an EventAggregator-backed implementation of the IWeatherService that raises events to be received by a central message broker and throw that implementation in your container for clients to use.
public class ClientWeatherService : IWeatherService
{
    private readonly IEventAggregator _aggregator;

    public ClientWeatherService(IEventAggregator aggregator)
    {
        _aggregator = aggregator;
    }

    public void ChangeWeather(Weather weather)
    {
        ChangeWeatherEvent cwEvent = _aggregator.GetEvent<ChangeWeatherEvent>();
        cwEvent.Publish(weather);
    }
}
From there, instead of using your "WCF Event Client Library" directly, they use the IWeatherService directly, not knowing that it doesn't call the actual service.
public class MyWeatherViewModel : ViewModel
{
    private readonly IWeatherService _weatherService;

    public MyWeatherViewModel(IWeatherService weatherService)
    {
        _weatherService = weatherService;
    }
}
Then, you'd have some event handler setup to make the WCF calls to the real service, but now you have the benefit of strong-typing from the clients.
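That handler could look roughly like this, reusing the ChangeWeatherEvent from above and assuming the real WCF proxy is registered separately from the EventAggregator-backed fake (a sketch, not a full implementation):

// Client-side bridge: listens for ChangeWeatherEvent and forwards it to the real WCF service.
public class WeatherServiceDispatcher
{
    private readonly IWeatherService _realService; // the actual WCF channel/proxy

    public WeatherServiceDispatcher(IEventAggregator aggregator, IWeatherService realService)
    {
        _realService = realService;

        aggregator.GetEvent<ChangeWeatherEvent>()
                  .Subscribe(weather => _realService.ChangeWeather(weather),
                             ThreadOption.BackgroundThread, keepSubscriberReferenceAlive: true);
    }
}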
Just a thought.
I really like this type of question. I wish more people would ask this kind of thing on Stackoverflow. Gets the brain moving in the morning :)
It seems like a complicated approach to the problem.
Are you raising the events from the client application, or raising them from the service using the callback contract? Or both?
I would approach this with a simple service class in the client. It can implement the Callback contract, and for each callback method it can just raise a Prism event locally to any subscribers in the client. If you need to raise events that are handled by the service, then the service class can subscribe to those events and call the wcf service.
All you really need is a class that abstracts the details of the WCF service away from the client and exposes its interface through Prism events.
I personally wouldn't want to modify / extend the infrastructure component and create a dependency on the concrete wcf service.
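As a rough illustration of that service class, assuming a duplex contract whose callback interface is named IWeatherCallback (a placeholder) and the WeatherChangedEvent from the question:

// Client-side callback implementation: each callback from the service
// is translated into a local Prism event for any interested subscribers.
public class WeatherCallbackHandler : IWeatherCallback
{
    private readonly IEventAggregator _aggregator;

    public WeatherCallbackHandler(IEventAggregator aggregator)
    {
        _aggregator = aggregator;
    }

    // Called by the service over the duplex channel.
    public void OnWeatherChanged(int temperature)
    {
        _aggregator.GetEvent<WeatherChangedEvent>().Publish(temperature);
    }
}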

Use ISAPI filter to trace and time a WCF call?

I'm building a web application using WCF that will be consumed by other applications as a service. Our app will be installed on a farm of web servers and load balanced for scalability purposes. Occasionally we run into problems specific to one web server, and we'd like to be able to determine from the response which web server processed the request, and possibly timing information as well. For example: this request was processed by WebServer01 and took 200ms to finish.
The first solution that came to mind was to build an ISAPI filter to add an HTTP header that stores this information in the response. This strikes me as the kind of thing somebody must have done before. Is there a better way to do this or an off-the-shelf ISAPI filter that I can use for this?
Thanks in advance
WCF offers much nicer extension points than ISAPI filters. You could e.g. create a client side message inspector that gets called just before the message goes out to the server, and then also gets called when the response comes back, and thus you could fairly easily measure the time needed for a service call from a client perspective.
Check out the IClientMessageInspector interface - that might be what you're looking for. Also see this excellent blog entry on how to use this interface.
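For reference, a bare-bones sketch of such an inspector (it would be attached to the client endpoint via a custom IEndpointBehavior that adds it to ClientRuntime.MessageInspectors):

using System;
using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Times each outgoing call from the client side.
public class TimingMessageInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        // Whatever is returned here comes back as correlationState in AfterReceiveReply.
        return Stopwatch.StartNew();
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        var stopwatch = (Stopwatch)correlationState;
        stopwatch.Stop();
        Trace.WriteLine(string.Format("Service call took {0} ms", stopwatch.ElapsedMilliseconds));
    }
}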
Marc
I don't have a ready-made solution for you but I can point you towards IHttpModule.
See code in Instrument and Monitor Your ASP.NET Apps Using WMI and MOM 2005 for example.
private DateTime startTime;

private void context_BeginRequest(object sender, EventArgs e)
{
    startTime = DateTime.Now;
}

private void context_EndRequest(object sender, EventArgs e)
{
    // Increment corresponding counter
    string ipAddress = HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"];
    if (HttpContext.Current.Request.IsAuthenticated)
        authenticatedUsers.Add(ipAddress);
    else
        anonymousUsers.Add(ipAddress);

    // Fire excessively long request event if necessary
    int duration = (int)DateTime.Now.Subtract(startTime).TotalMilliseconds;
    if (duration > excessivelyLongRequestThreshold)
    {
        string requestPath = HttpContext.Current.Request.Path;
        new AspNetExcessivelyLongRequestEvent(applicationName, duration, requestPath).Fire();
    }
}