Infinispan write-only Hibernate client in invalidation mode

I'm currently working on a high-traffic legacy Hibernate (5.0) web application that relies heavily on L2 caching with Ehcache. Let's say this web application also contains a thread pool that writes data. The tasks in this thread pool don't benefit from L2 caching but cause plenty of invalidations. Now we'd like to take this thread pool out of the legacy app and put it in its own Java process, ideally on a different server.
Would it be possible to configure Hibernate/Infinispan so that the new task process has no L2 cache (or one with no actual capacity) but still sends invalidations to the web app, while the web app doesn't send any invalidations to the task process?

Related

ASP.NET Core Application Process Isolation for IIS-Hosted Kestrel Services

I'm migrating a service-based integration platform from .NET Framework to .NET Core. The original versions of the integration platform have proven very successful, and compared to replacing it with an 'off-the-shelf' integration solution, it has a far better ROI.
So after redeveloping the code, all tests have been passing, and I have achieved higher levels of performance with a single IIS server than I could with two IIS servers running the original version.
Except... If I go over ~3 messages/sec with multiple clients, I start seeing duplicate GUID key errors when trying to save instrumentation data to my DB. All these errors are generated by the on-ramp service. The on-ramp places the message on a queue. The messages are then consumed by an off-ramp service and sent to the destination (for this load test the destination is a file folder).
Even though the off-ramp is also running on the same server as the on-ramp, we do not see any duplication errors generated by the off-ramp. I suspect this is because the queue creates a linear process, so only one instance of the off-ramp is running at any time, versus the on-ramp, which has up to four clients firing concurrent messages at its API.
Initially I thought the issue was caused by a static global variable class I had implemented crossing process boundaries. But then I would expect the issue to be seen on the off-ramp as well, as the service architectures of both are virtually identical.
Summary of thoughts on the issue:
If it were a pure coding issue, errors would also happen at low messaging rates.
The error would also be seen on the off-ramp if the GUID duplication were down to chance.
The on-ramp and off-ramp are both running on the same server, but duplication is only seen on the on-ramp, i.e. the on-ramp is not impacting the off-ramp and vice versa.
The duplication therefore has to be due to shared memory between concurrently running on-ramp instances, triggered by the multiple-client scenario.
To try and resolve the issue I removed the static global variable class but I'm still seeing the duplication errors.
This issue was never observed in the original IIS implementation (after millions of messages processed). I suspect the issue is with process isolation in the IIS-hosted Kestrel .NET Core service host. From what I have read, there is good isolation between different apps (based on IIS path) but not within the same app, so basically within the same IIS app pool. This could explain why .NET Core does not support multiple apps running in the same IIS app pool.
If anyone has a good idea how I can achieve process isolation between instances of the same app running in the same IIS app pool, I would appreciate your thoughts/suggestions.
After running more tests I was able to resolve the issue. The problem was with the scope of the instrumentation variable. At low rates there was never a problem, but at high throughput, the same instrumentation object was being accessed by a second instance of the process.
The issue was difficult to track down due to the short lived nature of the integration services.
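For anyone hitting the same symptom, here is a minimal sketch of the failure mode and the kind of fix the scoping change amounts to; all type and member names are illustrative, not from the original code:

using System;
using Microsoft.Extensions.DependencyInjection;

// Problematic pattern: one static object shared by every concurrent request
// in the app pool, so two in-flight messages can overwrite each other's
// instrumentation state and produce duplicate-key rows.
public static class SharedInstrumentation
{
    public static Guid CurrentMessageId; // clobbered under concurrent load
}

// Fix: give each request its own instrumentation object.
public class InstrumentationContext
{
    public Guid CurrentMessageId { get; } = Guid.NewGuid();
}

public static class ServiceRegistration
{
    public static void Register(IServiceCollection services)
    {
        // Scoped lifetime = one instance per HTTP request, so no cross-request sharing.
        services.AddScoped<InstrumentationContext>();
    }
}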
Thanks to anyone who reviewed the question.
Martin

Hangfire and main app in different applications

We have a web application developed in .NET Core that is hosted on Azure. We want to use Hangfire for report scheduling.
Our application is multi-tenant, so it will have a load of its own, and I want to run these background processes on a different server. Hangfire has the option of placing processing into another process, either as a console application or as a Windows service.
I have gone through the Hangfire docs, but there is no clear explanation of how the main application (which is .NET Core) connects to this console application:
https://docs.hangfire.io/en/latest/background-processing/placing-processing-into-another-process.html
I came across this question, but it is still not clear to me:
How to run Hangfire as a separate process from the main web application?
Your ASP.NET Core application (the Hangfire client) will not communicate directly with the console application (the Hangfire server).
Communication is done through the storage (the database): the client declares new tasks in the storage, and the server polls the storage (or is notified by it, depending on the storage technology, like Redis) to execute the tasks.
You need to ensure that both the client and the server can communicate with the storage, but not that they can communicate with each other.
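To make that concrete, here is a minimal sketch of the split, assuming SQL Server storage via the Hangfire.SqlServer package and a connection string both processes can reach; the class names and the job body are illustrative:

using System;
using Hangfire;

// Runs inside the ASP.NET Core app (the Hangfire client): it only writes jobs to storage.
public static class WebAppSide
{
    public static void EnqueueReport()
    {
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("<shared connection string>");

        // The call is serialized into the database; nothing talks to the server directly.
        BackgroundJob.Enqueue(() => Console.WriteLine("Building report..."));
    }
}

// Runs as a separate console application (the Hangfire server): it polls the same storage.
public static class ConsoleAppSide
{
    public static void Main()
    {
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("<shared connection string>");

        using (var server = new BackgroundJobServer())
        {
            Console.WriteLine("Hangfire server started. Press ENTER to exit.");
            Console.ReadLine(); // keep the process (and its workers) alive
        }
    }
}

The only coupling is the shared storage: either side can be scaled or redeployed independently as long as both point at the same database.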

asp.net core - long running process polling other bounded contexts events outbox

I'm building an app following DDD patterns, with each aggregate root (AR) having its events outbox saved to a permanent store.
That store gets polled by other parts of the system interested in the events.
The whole application is user-oriented, so the basic infrastructure is an ASP.NET Web API.
Now, I'd like to avoid having my domain artifacts spread across different processes/infrastructure options, for example an Azure Function that listens to the event store and runs logic upon received events.
It seems convenient to have the Web API and the events consumer together in the same container, since the domain artifacts are then deployed together with both the API and the events-consumer infrastructure.
I read that IHostedService might be one option for this, as it can run as a long-running background process.
The question is: is IHostedService meant for this particular scenario of reacting to event outboxes, or are there important drawbacks I'm missing and better infrastructure choices?
In general, I don't think there is anything wrong with using a hosted service for running background workers.
I'm more sceptical about the "polling from other bounded contexts" part. I'd be at least concerned if that breaks the encapsulation of my contexts (similar to having "foreign" components read from a context's persistence). But this might not be the case in your situation.
Anyway, in case you just want to guarantee that your events are reliably transmitted, I would rather make sure that each component realizing a specific bounded context (e.g. a microservice or a component of a monolith) pushes these events somewhere interested consumers are able to pick them up.
So if it is about the reliable transmission of outgoing events, I would suggest the transactional outbox pattern, maybe in combination with a publish-subscribe approach.
As you're on the .NET stack, the outbox feature of NServiceBus might be interesting for you.
Indeed, IHostedService is a good fit for a long-running process in your scenario; the interface itself only requires two methods (StartAsync and StopAsync).
It is important to note that the way you deploy your ASP.NET Core WebHost or .NET Host might impact the final solution. For instance, if you deploy your WebHost on IIS or a regular Azure App Service, your host can be shut down because of app pool recycles. But if you are deploying your host as a container into an orchestrator like Kubernetes, you can control the assured number of live instances of your host. In addition, you could consider other approaches in the cloud especially made for these scenarios, like Azure Functions. Finally, if you need the service to be running all the time and are deploying on a Windows Server you could use a Windows Service.
But even for a WebHost deployed into an app pool, there are scenarios, such as repopulating or flushing the application's in-memory cache, that would still be applicable.
The IHostedService interface provides a convenient way to start background tasks in an ASP.NET Core web application (in .NET Core 2.0 and later versions) or in any process/host (starting in .NET Core 2.1 with IHost). Its main benefit is the graceful cancellation you get for cleaning up the work of your background tasks when the host itself is shutting down.
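A minimal sketch of such an outbox poller built on BackgroundService, the convenience base class for IHostedService available since .NET Core 2.1; the outbox and dispatcher abstractions here are hypothetical placeholders, not an existing API:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class OutboxEvent
{
    public Guid Id { get; set; }
    public string Type { get; set; }
    public string Payload { get; set; }
}

// Hypothetical abstractions over the per-aggregate outbox store and the event handlers.
public interface IOutboxReader
{
    Task<IReadOnlyList<OutboxEvent>> ReadPendingAsync(CancellationToken ct);
    Task MarkProcessedAsync(OutboxEvent evt, CancellationToken ct);
}

public interface IEventDispatcher
{
    Task DispatchAsync(OutboxEvent evt, CancellationToken ct);
}

public class OutboxPollerService : BackgroundService
{
    private readonly IOutboxReader _outbox;
    private readonly IEventDispatcher _dispatcher;

    public OutboxPollerService(IOutboxReader outbox, IEventDispatcher dispatcher)
    {
        _outbox = outbox;
        _dispatcher = dispatcher;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Loop until the host shuts down; the token provides the graceful cancellation.
        while (!stoppingToken.IsCancellationRequested)
        {
            foreach (var evt in await _outbox.ReadPendingAsync(stoppingToken))
            {
                await _dispatcher.DispatchAsync(evt, stoppingToken);
                await _outbox.MarkProcessedAsync(evt, stoppingToken);
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken); // polling interval
        }
    }
}

// Registered in the same container as the Web API, so the domain artifacts
// stay in one deployable unit:
// services.AddHostedService<OutboxPollerService>();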

Add Quartz to a Web API project. Performance question

I need to use Quartz for a time-consuming task that updates data in my project. I'm afraid that adding the workers to the web API will limit the performance of the web API while the tasks are running in the background. I'm hosting my web API on Amazon, so I can either beef it up or deploy this project to another server to handle the background jobs in a separate service.
Hosting the workers and the web API on the same server will probably be cheaper, but I know that deploying them separately will make fixes much easier to deploy.
If your background tasks do CPU-intensive or I/O-intensive work, hosting the workers and the web API application(s) on the same server might result in resource contention and lead to low performance.
On the other hand, isolating your app (or workers) onto a separate server (or service) in Amazon incurs additional cost. You can monitor CPU, memory, and other usage metrics first, then determine whether the current hosting approach is adequate.
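If you do decide to split them, a plain .NET worker process can host the Quartz scheduler on its own. Below is a minimal sketch assuming Quartz 3.x with the Quartz.Extensions.Hosting package (depending on your exact Quartz version you may also need to enable its Microsoft DI job factory); the job class and schedule are illustrative:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Quartz;

public class UpdateDataJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // The time-consuming update work runs here, off the web API's server.
        Console.WriteLine($"Running data update at {DateTimeOffset.Now}");
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static Task Main(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureServices(services =>
            {
                services.AddQuartz(q =>
                {
                    var jobKey = new JobKey("update-data");
                    q.AddJob<UpdateDataJob>(opts => opts.WithIdentity(jobKey));
                    q.AddTrigger(t => t
                        .ForJob(jobKey)
                        .WithSimpleSchedule(s => s.WithIntervalInMinutes(30).RepeatForever()));
                });
                // Runs the scheduler as a hosted background service in this worker process.
                services.AddQuartzHostedService(o => o.WaitForJobsToComplete = true);
            })
            .RunConsoleAsync();
}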

WCF Web Service with a Singleton COBOL VM

I have a WCF web service with no concurrency configuration in web.config, so I believe it is running with the default, which is per-session instancing. The service uses a COBOL virtual machine to execute code that pulls data from COBOL Vision files. Per the developer of the COBOL VM, it is a singleton.
When more than one person accesses the service at a time, I get periodic crashes of the web service. What I believe is happening is that while one request is executing, another separate request comes in at about the same time. The first request ends and closes the VM down through normal closing procedures. The second request is still executing and attempting to read/write data, but the VM was shut down, and it crashes. In the constructor for the web service, an instance of the VM is created, and when a series of methods completes, the service is cleaned up and the VM closed out.
I have been reading up on singleton concurrency in WCF web services and am thinking I might need to switch to that instead. This way I can open the COBOL VM, keep it alive forever, and eliminate the code in my methods that shuts the VM down. The only data I need to share between requests is the status of the COBOL VM.
The alternative I'm considering is creating a server process that manages opening the VM and keeping it alive, and letting the web service make read/write requests through that process instead.
Does this sound like the right path? I'm basically looking for a way to keep the virtual machine alive in a WCF web service situation and just keep executing code against it. The COBOL VM system sends me back locking information on the reads/writes, which I can use to handle retries or waits.
Thanks,
Martin
The web service is now marked as:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Single)]
From what I understand, this allows only a single thread to run through a service instance at a time; other requests are queued until the first completes. This was a quick fix that works in my situation because my web service doesn't require high concurrency. There are never more than a handful of requests coming in at a time.
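For completeness, below is a sketch of the attribute combination usually used to funnel every request through one shared instance. Note that ConcurrencyMode.Single by itself serializes threads per service instance, so pairing it with InstanceContextMode.Single is what guarantees a single queue across all clients; the contract and the VM wrapper here are hypothetical:

using System.ServiceModel;

[ServiceContract]
public interface ICobolDataService
{
    [OperationContract]
    string Read(string key);
}

// One instance for the whole host, one thread at a time inside it, so the
// singleton COBOL VM is opened once and never entered concurrently.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class CobolDataService : ICobolDataService
{
    private readonly CobolVm _vm = new CobolVm(); // hypothetical wrapper; kept alive for the host's lifetime

    public string Read(string key)
    {
        // WCF queues concurrent calls, so no request can ever see a shut-down VM.
        return _vm.Read(key);
    }
}

// Hypothetical stand-in for the vendor's COBOL VM API.
public class CobolVm
{
    public string Read(string key) => $"<record for {key}>";
}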