Setting up SignalR with a pub/sub Redis backplane is incredibly easy in ASP.NET - see here.
Can we do this in ASP.NET Core today? Are the ASP.NET Core packages ready for this? It doesn't need to be quite as easy to configure as that.
It's there; my colleagues recently implemented it. But since we have a reverse proxy in front of the app, this is giving us problems (make sure your proxy also accepts the HTTP Upgrade handshake used by WebSockets).
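For example, with nginx in front of the app, the WebSocket handshake headers have to be forwarded explicitly. A sketch (the location path and upstream name are illustrative):

```nginx
location /signalr {
    proxy_pass http://app_backend;
    # WebSockets require HTTP/1.1 and the Upgrade/Connection headers
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```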
Since the new IoC model, everything seems to be very easily configurable.
Use Microsoft.AspNetCore.SignalR.Server. I'm not sure it's production ready though (pretty sure it's not): https://dotnet.myget.org/feed/aspnetcore-dev/package/nuget/Microsoft.AspNetCore.SignalR.Server
The best way would be to check out the GitHub code. Maybe you can make your own implementation that suits you best (for a non-easy way).
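For what it's worth, once the ASP.NET Core SignalR packages stabilized, the Redis backplane wiring ended up looking roughly like this. A sketch assuming the Microsoft.AspNetCore.SignalR.StackExchangeRedis package; ChatHub and the connection string are illustrative:

```csharp
// Startup sketch: ASP.NET Core SignalR with a Redis backplane.
// Assumes the Microsoft.AspNetCore.SignalR.StackExchangeRedis package;
// ChatHub and the Redis connection string are illustrative.
public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSignalR()
                .AddStackExchangeRedis("localhost:6379");
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseSignalR(routes => routes.MapHub<ChatHub>("/chat"));
    }
}
```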
I'm building an app following DDD patterns, with each aggregate root (AR) having its events outbox saved to a permanent store.
That store gets polled by other parts interested in events.
The whole application is user oriented, so the basic infrastructure is ASP.NET Web API.
Now, I'd like to avoid having my domain artifacts spread across different processes/infrastructure options. For example, an Azure Function that listens to the event store and runs logic upon the events received.
It seems convenient to have the Web API and the events consumer together in the same container. The reason is that the domain artifacts are then deployed together with both the API and the events consumer infrastructure.
I read that IHostedService might be one option for this, as it can run as a long-running background process.
The question is: is IHostedService meant for this particular scenario of reacting to event outboxes, or are there important drawbacks I'm missing and better infrastructural choices?
In general, I don't think there is anything wrong with using hosted service for running background workers.
I'm more sceptical about the "polling from other bounded contexts" part. I'd at least be concerned whether that breaks the encapsulation of my contexts (similar to having "foreign" components read from a context's persistence). But this might not be the case in your situation.
Anyway, in case you just want to guarantee that your events are reliably transmitted, I would rather make sure that each component realizing a specific bounded context (e.g. a microservice or a component of a monolith) pushes these events somewhere interested consumers are able to pick them up.
So if it is about the reliable transmission of outgoing events I would suggest the transactional outbox pattern, maybe in combination with a publish-subscribe approach.
As you're on the .net stack the outbox feature of NServiceBus might be interesting for you.
Here's a link about implementing an IHostedService.
Indeed, IHostedService is a good fit for a long-running process in your scenario; the interface drives it through two methods, StartAsync and StopAsync.
It is important to note that the way you deploy your ASP.NET Core WebHost or .NET Host might impact the final solution. For instance, if you deploy your WebHost on IIS or a regular Azure App Service, your host can be shut down because of app pool recycles. But if you are deploying your host as a container into an orchestrator like Kubernetes, you can control the assured number of live instances of your host. In addition, you could consider other approaches in the cloud especially made for these scenarios, like Azure Functions. Finally, if you need the service to be running all the time and are deploying on a Windows Server you could use a Windows Service.
But even for a WebHost deployed into an app pool, there are scenarios like repopulating or flushing the application's in-memory cache that would still be applicable.
The IHostedService interface provides a convenient way to start background tasks in an ASP.NET Core web application (in .NET Core 2.0 and later versions) or in any process/host (starting in .NET Core 2.1 with IHost). Its main benefit is the graceful-cancellation hook it gives you to clean up your background tasks' code when the host itself is shutting down.
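For the outbox scenario above, a minimal sketch of such a hosted service; IOutboxStore and its methods are hypothetical stand-ins for whatever persistence the aggregate roots write their events to:

```csharp
// Minimal outbox-polling hosted service, built on BackgroundService
// (the IHostedService base class from Microsoft.Extensions.Hosting).
// IOutboxStore, ReadPendingAsync and MarkProcessedAsync are hypothetical.
public class OutboxPollingService : BackgroundService
{
    private readonly IOutboxStore _store;

    public OutboxPollingService(IOutboxStore store) => _store = store;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            foreach (var evt in await _store.ReadPendingAsync(stoppingToken))
            {
                // dispatch evt to interested consumers, then mark it processed
                await _store.MarkProcessedAsync(evt, stoppingToken);
            }

            // poll interval: trade latency against load on the store
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}

// registered in ConfigureServices:
// services.AddHostedService<OutboxPollingService>();
```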
I am working on a project which has following requirements:
Perform sticky load balancing (based on SOAP session ID) onto multiple backend servers.
Possibility to plug in my own custom load balancer.
Easy to write and deploy.
A central configuration file (possibly XML) to take care of all the backend servers.
Easy extraction of a node from this configuration file (possibly with XPath).
I tried working with Camel for a while but wasn't able to perform certain tasks with it.
So I thought of giving Akka a try.
Will Akka possibly be able to satisfy the above requirements?
If so, is there a load balancing example in Akka, or a proxy example?
Would really appreciate some feedback.
You can do everything you've described with Akka.
You don't mention what language you're working with, Scala or Java. I've included links to the Scala documentation.
Before you do anything with Akka you HAVE TO read the documentation and understand how Akka works.
http://doc.akka.io/docs/akka/2.0.3/
Doing so, you'll find Akka is perfect for the project you've described with some minor caveats.
Once you read the documentation the following answers should make a lot of sense.
Perform sticky load balancing (based on SOAP session ID) onto multiple backend servers.
Load balancing is already part of the framework (it's called Routing in Akka http://doc.akka.io/docs/akka/2.0.3/scala/routing.html) and Remoting (http://doc.akka.io/docs/akka/2.0.3/scala/remoting.html) will take care of the backend servers. You can easily combine the two.
To my knowledge the idea of sticky load balancing is not part of Akka itself, but I can envision accomplishing it with a Map using the session ID as the key and the actor name (or path) as the value. A quick actorFor will take care of the rest. Not fully thought out, but it should give you a good idea of where to start.
Possibility to plug in my own custom load balancer.
Refer to the Routing documentation.
Easy to write and deploy.
This depends on your aptitude and effort, but after you read certain parts of the documentation you should be able to build a proof of concept in a couple of hours.
Deployment can be a bit frustrating mostly because the documentation isn't really great with respect to deploying Akka networks with remote components. However, there are enough examples on the web that you can figure out how to get it done...eventually. Once you do it once it's no big deal.
A central configuration file (possibly XML) to take care of all the backend servers.
Akka uses Typesafe Config (https://github.com/typesafehub/config) which is a lot easier to work with than XML (but I hate XML so take that with a grain of salt). As far as a central configuration, I'm not sure what you're trying to accomplish but it sounds like something that can be solved using remote actor creation. Again, see the Remoting documentation.
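As a sketch of what that configuration looks like (Akka 2.0-era settings; hostnames are illustrative, and the backend list at the bottom is an application-defined key your own code would read, not a built-in Akka setting):

```
# application.conf sketch (Typesafe Config / HOCON)
akka {
  actor.provider = "akka.remote.RemoteActorRefProvider"
  remote.netty {
    hostname = "127.0.0.1"
    port = 2552
  }
}

# application-defined list of backend nodes, read by your own routing code
backend.nodes = [
  "akka://backend@host1:2552/user/service",
  "akka://backend@host2:2552/user/service"
]
```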
Easy extraction of a node from this configuration file (possibly with XPath).
Akka provides a lookup method .actorFor. There's no need to go to the configuration file once the system is up and running.
If so, is there a load balancing example in Akka, or a proxy example?
Google is your friend.
We have an application that works well with any load balancing scheduling, be it random choice, random DNS or round robin. Now we're considering using SignalR in our project, and we're wondering how well SignalR handles these kinds of load balancing schemes.
Having not tested anything yet, we're thinking SignalR probably works well in this scenario if the transport uses EventSource or WebSockets, but what if it falls back on long polling?
I'm having a hard time googling more detailed information regarding this topic.
There are a couple of options today: Redis or Azure. I am currently using the Redis bus implementation with good results.
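In case it helps, the Redis backplane wiring in (non-Core) SignalR 2.x is a one-liner at startup. A sketch assuming the Microsoft.AspNet.SignalR.Redis package; host, port, password and the event-key name are placeholders:

```csharp
// OWIN Startup sketch: Redis backplane for ASP.NET SignalR 2.x.
// Requires the Microsoft.AspNet.SignalR.Redis package; all values are placeholders.
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // every server in the farm publishes/subscribes through this Redis instance
        GlobalHost.DependencyResolver.UseRedis("redis-host", 6379, "password", "MyApp");
        app.MapSignalR();
    }
}
```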
We are moving a WPF app across to use WCF and POCO. So far there are 10 or so services, but this will grow with time. It seems to me that we are having to repeat a lot of rubbish (the same text; I know it's not really rubbish) in the app.config to wire up the services for WCF.
It seems to me to be a nice place to work with convention over configuration. If I put what the defaults would be in a config section and listed / integrated what the services are it could wire them up when the application fires up.
It feels like this would also make deployments easier as there would only be one or two lines to change (addresses etc).
Is there any framework out there (did a bit of Googling and didn't find anything) that does this sort of thing? If not, is it because this is a daft idea or not practical to implement?
I am reasonably new to WCF so there could be issues that I am not aware of at this point.
Thanks for any and all advice around this subject.
.NET 4 cleans up the config quite nicely, and WCF 4 in .NET 4 also has lots of sensible "defaults" - you might not even need any additional config files, with the default endpoints (one for each service contract implemented by your service class, and for each defined base address in your config / service host) and so forth in WCF 4.
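As an illustration of how small the config can get with WCF 4's default endpoints, a sketch (the protocol mapping shown matches the built-in default, so even that line is optional):

```xml
<!-- sketch: with WCF 4 default endpoints this can be the entire serviceModel section -->
<system.serviceModel>
  <behaviors>
    <serviceBehaviors>
      <behavior> <!-- unnamed behavior applies to all services -->
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <protocolMapping>
    <add scheme="http" binding="basicHttpBinding" />
  </protocolMapping>
</system.serviceModel>
```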
Read all about the new features in WCF 4 here:
A Developer's Introduction to Windows Communication Foundation 4
Read, digest, enjoy! :-)
I am wondering if anybody has tried this technique to get events to the client from the server side. I have an environment that uses Unix-based servers and so can't use WCF duplex / callbacks etc.
The idea is that my clients (Windows boxes running a thick .NET app) would spin up a WCF self-host and register their self-host URL on the server for that session. They would have a very simple contract, and when the server has an update it would call out to the client's host, telling it that an update is waiting on the server; the client would then go and fetch it.
I'm still trying to get my head around WCF, so I'm not sure if this is a good way to go. Are there any security implications I should worry about? Are there ways to get the duplex calls to work across platforms?
I have done something similar before using sockets; or maybe a cross-platform message queue would be a better way to go here anyhow.
Thanks
76mel
At the very least, that sounds like it ought to work, though I'd guess you could host in IIS as well, since the *nix servers could then just make a web call, right? I'm not sure what self-hosting would gain you; it should work fine, but it might be a bit more of a pain in the neck to configure, etc.
Please update here whenever you've made a decision because it sounds like an interesting challenge and some of us would like to see how you make out.
We use self-hosted WCF for a similar scenario. We also wanted to avoid making our client application dependent on IIS, to prevent both licensing and deployment hassles.
It works fairly well for us, though WCF might be overkill for what you need. Since you're using HTTP, you could create a simple web service built directly on Http.sys.
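As a sketch of that Http.sys route (HttpListener sits directly on Http.sys, so no IIS or WCF is needed; the URL prefix is a placeholder):

```csharp
// Minimal self-hosted HTTP endpoint built directly on Http.sys via HttpListener.
// No IIS or WCF dependency; the URL prefix is a placeholder.
using System;
using System.Net;
using System.Text;

class NotificationListener
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://+:8080/notify/");
        listener.Start();

        while (true)
        {
            // blocks until the server calls in to say "an update is waiting"
            HttpListenerContext ctx = listener.GetContext();

            byte[] body = Encoding.UTF8.GetBytes("OK");
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();

            // ...now trigger the client to fetch the update from the server...
        }
    }
}
```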
Another way of getting similar results could be to have the client poll. This depends strongly on the requirements: if you need near-real-time updates this obviously doesn't work, as you would need way too many polls, but if it's OK for updates to take a minute or more to reach the client, polling might just be the answer.
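A minimal sketch of that polling loop; apiClient.CheckForUpdatesAsync and HandleUpdate are hypothetical stand-ins for your existing HTTP API surface:

```csharp
// Client-side polling sketch; apiClient.CheckForUpdatesAsync and HandleUpdate
// are hypothetical stand-ins for your existing Web API surface.
while (!cancellationToken.IsCancellationRequested)
{
    var updates = await apiClient.CheckForUpdatesAsync(cancellationToken);
    foreach (var update in updates)
        HandleUpdate(update);

    // poll interval: trade update freshness against server load
    await Task.Delay(TimeSpan.FromSeconds(60), cancellationToken);
}
```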