Will BackgroundService play nicely on a Kubernetes cluster - asp.net-core

I have a Kubernetes cluster into which I intend to deploy a service in a pod. The service will accept a gRPC request, start a long-running process, and return to the caller indicating that the process has started. Investigation suggests that IHostedService (BackgroundService) is the way to go for this.
My question is: will use of BackgroundService behave nicely with various neat features of ASP.NET and Kubernetes?
Will horizontal scaling understand that a service is getting overloaded and spin up a new instance, even though the service will appear to have no pending gRPC requests because all the work is in the background? (I appreciate there are probably hooks that can be implemented; I'm wondering what the default behaviour is.)
Will the notion of awaiting (allowing the current work to be swapped out so other work can run) work okay with background services? (I've only experienced it where one received message hits an await, allowing another message to be processed, but background services are not a messaging context.)
I think ASP.NET will normally manage throttling too many requests, backing off if the server is too busy, but will any of that still work if the 'busy' is background processes?
What's the best method to mitigate against overloading the service (if horizontal scaling is not an option)? I can have the gRPC call return 'too busy', but I would need to detect it (not quite sure if that's CPU bound, memory bound, or just the number of background services).
Should I be considering something other than BackgroundService for this task?
I'm hoping the answer is that "it all just works" but feel it's better to have that confirmed than to just hope...

Investigation suggests that IHostedService (BackgroundService) is the way to go for this.
I strongly recommend using a durable queue with a separate background service. It's not that difficult to split into two images: one handling ASP.NET gRPC requests, and the other processing the durable queue (this can be a console app - see the Worker Service template in VS). Note that solutions using non-durable queues are not reliable (i.e., work may be lost whenever a pod restarts or is scaled down). This includes in-memory queues, which are commonly suggested as a "solution".
If you do make your own background service in a console app, I recommend applying a few tweaks (noted on my blog; a combined sketch follows this list):
Wrap ExecuteAsync in Task.Run.
Always have a top-level try/catch in ExecuteAsync.
Call IHostApplicationLifetime.StopApplication when the background service stops for any reason.
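Roughly, those three tweaks combined look like the sketch below (a minimal outline only; ProcessNextItemAsync is a hypothetical placeholder for whatever your queue processing actually does):

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public sealed class QueueWorker : BackgroundService
{
    private readonly IHostApplicationLifetime _lifetime;
    private readonly ILogger<QueueWorker> _logger;

    public QueueWorker(IHostApplicationLifetime lifetime, ILogger<QueueWorker> logger)
    {
        _lifetime = lifetime;
        _logger = logger;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken) =>
        // Tweak 1: wrap the body in Task.Run so a slow or synchronous start can't block host startup.
        Task.Run(async () =>
        {
            try
            {
                // Tweak 2: top-level try/catch so an unexpected exception is at least logged.
                while (!stoppingToken.IsCancellationRequested)
                    await ProcessNextItemAsync(stoppingToken);
            }
            catch (OperationCanceledException)
            {
                // Normal shutdown.
            }
            catch (Exception ex)
            {
                _logger.LogError(ex, "Queue worker failed.");
            }
            finally
            {
                // Tweak 3: stop the whole host when the background service stops for any reason.
                _lifetime.StopApplication();
            }
        }, stoppingToken);

    // Placeholder for pulling the next message off your durable queue and processing it.
    private Task ProcessNextItemAsync(CancellationToken token) => Task.CompletedTask;
}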
Will horizontal scaling understand that a service is getting overloaded and spin up a new instance, even though the service will appear to have no pending gRPC requests because all the work is in the background? (I appreciate there are probably hooks that can be implemented; I'm wondering what the default behaviour is.)
One reason I prefer using two different images is that they can scale on different triggers: gRPC requests for the API and queued messages for the worker. Depending on your queue, using "queued messages" as the trigger may require a custom metric provider. I do prefer using "queued messages" because it's a natural scaling mechanism for the worker image; out-of-the-box signals like CPU usage don't always work well, particularly for asynchronous processors, which you mention you are using.
Will the notion of awaiting (allowing the current work to be swapped out so other work can run) work okay with background services? (I've only experienced it where one received message hits an await, allowing another message to be processed, but background services are not a messaging context.)
Background services can be asynchronous without any problems. In fact, it's not uncommon to grab messages in batches and process them all concurrently.
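For example, the body of the worker loop might look something like this (a sketch only; GetBatchAsync and HandleAsync are hypothetical stand-ins for your queue client and message handler):

// Pull a batch from the queue and process the whole batch concurrently.
var batch = await GetBatchAsync(16, stoppingToken);   // e.g. up to 16 messages at a time
await Task.WhenAll(batch.Select(message => HandleAsync(message, stoppingToken)));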
I think ASP.NET will normally manage throttling too many requests, backing off if the server is too busy, but will any of that still work if the 'busy' is background processes?
No. ASP.NET only throttles requests. Background services do register with ASP.NET, but that is only to provide a best-effort at graceful shutdown. ASP.NET has no idea how busy the background services are, in terms of pending queue items, CPU usage, or outgoing requests.
What's the best method to mitigate against overloading the service (if horizontal scaling is not an option)? I can have the gRPC call return 'too busy', but I would need to detect it (not quite sure if that's CPU bound, memory bound, or just the number of background services).
Not a problem if you use the durable queue + independent worker image solution. gRPC calls can pretty much always stick another message in the queue (very simple and fast), and K8s can autoscale based on your (possibly custom) metric of "outstanding queue messages".
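The gRPC side then becomes trivial; a minimal sketch (StartJob, StartJobRequest, StartJobReply, and JobService.JobServiceBase are assumed to be generated from your proto, and IDurableQueue / JobMessage are hypothetical abstractions over whatever queue you choose):

public class JobGrpcService : JobService.JobServiceBase
{
    private readonly IDurableQueue _queue;

    public JobGrpcService(IDurableQueue queue) => _queue = queue;

    public override async Task<StartJobReply> StartJob(StartJobRequest request, ServerCallContext context)
    {
        // Enqueueing is quick, so the call returns almost immediately; the worker image
        // picks the message up later and does the long-running work.
        await _queue.EnqueueAsync(new JobMessage(request.JobId), context.CancellationToken);
        return new StartJobReply { Accepted = true };
    }
}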

Generally, "it all works".
For automatic horizontal scaling you need an autoscaler; read this: Horizontal Pod Autoscaler
But you can just scale it yourself (kubectl scale deployment yourDeployment --replicas=10).
Let's assume you have a deployment of your backend, which starts with one pod. Your autoscaler will watch the pod (e.g. CPU usage) and start a new pod for you when the load is high.
A second pod will be started, and each new request will be sent to a different pod (round-robin).
There is no need for your backend to throttle calls; it should just handle as many calls as possible.
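If CPU is good enough as a signal, the built-in autoscaler can be set up with a single command (the deployment name and thresholds here are just illustrative):

kubectl autoscale deployment yourDeployment --cpu-percent=80 --min=1 --max=10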

Related

Async WCF and Protocol Behaviors

FYI: This will be my first real foray into Async/Await; for too long I've been settling for the familiar territory of BackgroundWorker. It's time to move on.
I wish to build a WCF service, self-hosted in a Windows service running on a remote machine in the same LAN, that does this:
Accepts a request for a single .ZIP archive
Creates the archive and packages several files
Returns the archive as its response to the request
I have to support archives as large as 10GB. Needless to say, this scenario isn't covered by basic WCF designs; we must take additional steps to meet the requirement. We must eliminate timeouts while the archive is building and memory errors while it's being sent. Both of these occur under basic WCF designs, depending on the size of the file returned.
My plan is to proceed using task-based asynchronous WCF calls and streaming mode.
I have two concerns:
Is this the proper approach to the problem?
Microsoft has done a nice job at abstracting all of this, but what of the underlying protocols? What goes on 'under the hood?' Does the server keep the connection alive while the archive is building (could be several minutes) or instead does it close the connection and initiate a new one once the operation is complete, thereby requiring me to properly route the request through the client machine firewall?
For #2, clearly I'm hoping for the former (keep-alive). But after some searching I'm not easily finding an answer. Perhaps you know.
You need streaming for big payloads. That is the right approach. This has nothing at all to do with asynchronous IO. The two are independent. The client cannot even tell that the server is async internally.
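For reference, the streamed, task-based shape of this looks roughly like the following (a sketch only; the contract name, binding choice, and timeout values are illustrative, not prescriptive):

using System;
using System.IO;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IArchiveService
{
    // Streamed operations should take simple parameters and return a Stream.
    [OperationContract]
    Task<Stream> CreateArchiveAsync(string archiveName);
}

public static class ArchiveBinding
{
    public static NetTcpBinding Create() => new NetTcpBinding
    {
        TransferMode = TransferMode.Streamed,
        MaxReceivedMessageSize = long.MaxValue,    // allow multi-gigabyte responses
        SendTimeout = TimeSpan.FromMinutes(30),    // cover the minutes-long archive build
        ReceiveTimeout = TimeSpan.FromMinutes(30)
    };
}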
I'll add my standard answers for whether to use async IO or not:
https://stackoverflow.com/a/25087273/122718 Why does the EF 6 tutorial use asychronous calls?
https://stackoverflow.com/a/12796711/122718 Should we switch to use async I/O by default?
Each request runs over a single connection that is kept alive. This goes for both streaming big amounts of data as well as big initial delays. Not sure why you are concerned about routing. Does your router kill such connections? That's a problem.
Regarding keep alive, there is nothing going over the wire to do that. TCP sessions can stay open indefinitely without any kind of wire traffic.

Approaches for reporting progress for competing consumer scenario

I am getting my head around messaging. Currently we are spiking a few scenarios using Rebus. We are also considering NServiceBus.
The scenario we are trying to build is a proof of concept for a background task processing system. Today we have a handful of backend services hosted in different ways (web, Windows services, console apps). I am looking to hook them up to Rebus and start consuming messages using competing consumers; some messages will have one listener and some will share the load of messages. Elegant :)
I got a pretty good start from this other question How should I set rebus up for one producer and many consumers and it is working nicely in the proof of concept.
Now I want to start reporting progress. My initial approach is to set up pub/sub as well and spin up a service that listens to progress events from all the services. And if a service is interested in a specific progress event in the future, it is easy to subscribe to those messages and start listening.
But how should I approach setting up both competing consumers and pub/sub? Is it simply two separate things? (In the Rebus case, one adapter using UseSqlServerInOneWayClientMode / UseSqlServer and another adapter that is set up for the pub/sub using whatever protocol we want?)
Or is there a better solution than having two "buses" here?
I've built something like that myself a couple of times, and I've had pretty good results with using SignalR to report progress from this kind of backend worker processes.
Our setup had a bunch of WPF clients, one single SignalR hub, and a bunch of backend worker processes. All WPF clients and all backend workers would then establish a connection to the hub, allowing workers to send progress reports while doing their work.
SignalR has some nice properties that makes it very suitable for this exact kind of problem:
The published messages "escape" the Rebus unit of work, allowing progress report messages to be sent several times from within one single message handler even though it could take a long time to complete
It was easy to get the messages all the way to the clients because they subscribe directly
We could use the hub groups functionality to group users so we could target progress/status messages from the backend at either all users or a single user (could also be used for departments, etc.)
The most important point, I guess, is that this progress reporting thing (at least in our case) was not as important as our Rebus messages, i.e. it didn't require the same reliability etc, which we could use to our advantage and then pick a technology with some other nice properties that turned out to be cool.
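To give a feel for it, the worker-side reporting can be as small as this (classic ASP.NET SignalR client; the hub URL, hub name, method name, and the jobId/percentComplete variables are all illustrative):

// In the backend worker: connect once, then push progress updates as the Rebus handler runs.
var connection = new HubConnection("http://progress-host");
var hub = connection.CreateHubProxy("ProgressHub");
await connection.Start();
await hub.Invoke("ReportProgress", jobId, percentComplete);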

On Heroku, does utilising Node.js prevent the need for queues + worker dynos for third-party API calls?

The Heroku Dev Center, on the page about using worker dynos and background jobs, states that you need to use workers + queues to handle API calls such as fetching an RSS feed, as the operation may take some time if the remote server is slow, and doing this on a web dyno would result in it being blocked from receiving additional requests.
However, from what I've read, it seems to me that one of the major points of Node.js is that it doesn't suffer from blocking under these conditions due to its asynchronous event-based runtime model.
I'm confused because wouldn't this imply that it would be ok to do API calls (asynchronously) in the web dynos? Perhaps the docs were written more for the Ruby/Python/etc use cases where a synchronous model was more prevalent?
Node.js is an implementation of the reactor pattern: a single-threaded event loop backed by a small worker thread pool (libuv's pool, four threads by default) that handles operations such as file I/O and DNS lookups. Once those pool threads are all occupied, further work of that kind has to wait.
A common misconception about Node.js is that it is a system that allows you to do many things at once. That is not necessarily the case; it allows you to do other things while waiting on I/O-bound tasks.
Any CPU-bound task executes on the main event loop, meaning it will block everything else.
This means that if your "job" is I/O bound, like putting things in a database, then you can probably get away with not using worker dynos. This of course depends on how many things you plan on having going on at once. Remember, any task you put in your main app will take resources away from other incoming requests.
Generally this is not recommended: if you have a job that does some processing, it belongs in a queue and should be executed in its own process or thread.

Concurrent WCF calls via shared channel

I have a web tier that forwards calls onto an application tier. The web tier uses a shared, cached channel to do so. The application tier services in question are stateless and have concurrency enabled.
But they are not being called concurrently.
If I alter the web tier to create a new channel on every call, then I do get concurrent calls onto the application tier. But I want to avoid that cost since it is functionally unnecessary for my scenario. I have no session state, nor do I need to re-authenticate the caller each time. I understand that the creation of the channel factory is far more expensive than the creation of the channels, but it is still a cost I'd like to avoid if possible.
I found this article on MSDN that states:
While channels and clients created by the channels are thread-safe, they might not support writing more than one message to the wire concurrently. If you are sending large messages, particularly if streaming, the send operation might block waiting for another send to complete.
Firstly, I'm not sending large messages (just a lot of small ones since I'm doing load testing) but am still seeing the blocking behavior. Secondly, this is rather open-ended and unhelpful documentation. It says they "might not" support writing more than one message but doesn't explain the scenarios under which they would support concurrent messages.
Can anyone shed some light on this?
Addendum: I am also considering creating a pool of channels that the web server uses to fulfill requests. But again, I see no reason why my existing approach should block and I'd rather avoid the complexity if possible.
After much ado, this all came down to the fact that I wasn't calling Open explicitly on the channel before using it. Apparently an implicit Open can preclude concurrency in some scenarios.
You can cache the WCF proxy, but still create a channel for each service call - this will ensure concurrency, is not very expensive in comparison to creating a channel from scratch, and re-authentication for each call will not be necessary. This is explained on Wenlong Dong's blog - "Performance Improvement for WCF Client Proxy Creation in .NET 3.5 and Best Practices" (a much better source of WCF information and guidance than MSDN).
Just for completeness: Here is a blog entry explaining the observed behavior of request serialization when not opening the channel explicitly:
http://blogs.msdn.com/b/wenlong/archive/2007/10/26/best-practice-always-open-wcf-client-proxy-explicitly-when-it-is-shared.aspx
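Putting the two suggestions together, the calling pattern looks roughly like this (a sketch only; IAppService and the "AppServiceEndpoint" configuration name are illustrative):

using System;
using System.ServiceModel;

public static class AppServiceClient
{
    // Cache the factory (expensive to create), but create a fresh channel per call (cheap).
    private static readonly ChannelFactory<IAppService> Factory =
        new ChannelFactory<IAppService>("AppServiceEndpoint");

    public static TResult Call<TResult>(Func<IAppService, TResult> operation)
    {
        var channel = Factory.CreateChannel();
        var client = (IClientChannel)channel;
        try
        {
            client.Open();   // explicit Open avoids the implicit-Open call serialization
            var result = operation(channel);
            client.Close();
            return result;
        }
        catch
        {
            client.Abort();  // never Close a faulted channel
            throw;
        }
    }
}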

WCF polling, background processing, and resource starvation

I have a web service, implemented with WCF and hosted in IIS7, with a submit-poll communication pattern. An initial request is made, which returns quickly and kicks off a background process. The client polls for the status of the background process. This interface is set and can't be changed (it's a simulation of an external service we depend on).
I implemented the background processing by adding another service contract to the existing service with a one-way message contract that starts the long-running process. The "background" service keeps a database updated with the status in order to communicate with the main service. This avoids creating any new web services or items to deploy.
The problem is that the background process is very CPU intensive, and it seems to be starving the other service calls out. It will take up an entire processor, and while a single instance of the background process is running, status polling calls to the main service can take over a minute. I don't care how long the background process takes.
Is there any way to throttle the resource usage of the background method? Or an obvious way to do long running async processes in WCF without changing my submit/poll service contract? Would separating them into different web services help if the two services were still running on the same server?
The first thing I would try would be to lower the priority.
If you're actually spinning off a separate process for the background work, then you can do it like this:
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.BelowNormal;
If it's really just a background thread, use this instead (from within the thread):
Thread.CurrentThread.Priority = ThreadPriority.BelowNormal;
(Actually, it's better to start the thread suspended and change the priority at the caller before running it, but it's generally OK to lower your own priority.)
At the very least it should help determine whether or not it's really a CPU issue. If you still have problems after lowering the priority then it might be something else that's getting starved, like file or network I/O.
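For the set-the-priority-at-the-caller variant mentioned above, it looks something like this (DoBackgroundWork stands in for whatever your long-running method is):

// Create the thread, lower its priority before it runs, then start it.
var worker = new Thread(DoBackgroundWork)
{
    Priority = ThreadPriority.BelowNormal,
    IsBackground = true
};
worker.Start();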