I am trying to use Hangfire to schedule a notification with SignalR.
For some odd reason, the notifications through SignalR work completely fine if Hangfire is not involved. When I involve Hangfire (I tried both Enqueue and Schedule), the method definitely gets called (I check a separate log file), and yet the notifications are not sent out to the clients…
This is how I am using Hangfire Enqueue/Schedule:
BackgroundJob.Enqueue(() => SendNotifications(notification));
The method:
public void SendNotifications(Notification notification)
{
    GlobalHost.ConnectionManager.GetHubContext().Clients.All.Invoke("notification", notification);
}
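For reference, here is a minimal sketch of the same method with the hub context resolved explicitly; NotificationHub is an assumed hub class (the question doesn't show one), since SignalR 2's GetHubContext needs the hub type or its name:

public void SendNotifications(Notification notification)
{
    // NotificationHub is an assumed hub class; GetHubContext has no parameterless
    // overload in SignalR 2, so the hub type or name is presumably elided above.
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
    hubContext.Clients.All.Invoke("notification", notification);
}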
The Hangfire dashboard doesn’t register any problems.
Like I said, I tried calling the method directly, without Hangfire, and it works as expected.
It would be great if anyone could help me find the problem, and solution. Thank you! :)
I have a .NET 5 Web API project that uses EF Core as its ORM. I would like to send a notification to specific/all clients whenever a successful insert/update/delete is performed on specific tables in the database. This Web API project is the only way to access the database.
To send the push notifications, I am using SignalR.
Due to performance concerns, I do not intend to use triggers, the change tracker, or other mechanisms that are directly tied to the database.
Any action method in my Web API controller would have the following flow:
public class EmployeeService : BaseService {
    ...
}

public class EmployeeController : BaseController {
    private EmployeeService empService;
    ...
    [HttpPost]
    public Task<EmployeeDto> EditEmployee(Guid id, EmployeeUpdateDto model) {
        ...
        // Here I call the corresponding method in EmployeeService, which in turn calls the
        // appropriate repository method
        ...
    }
}
So in the controller method, after the appropriate service call, I would be able to tell whether the table has actually been updated or not, and accordingly I can call SignalR to send the push notification.
My question is whether this is the best approach to handle the scenario.
And if it is the best approach, should I use a ResultFilter or a separate service (injected into every controller) to send the SignalR notification?
In case this approach does not seem to be an efficient one, please suggest an alternative way to achieve the same thing.
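For the injected-service variant, a minimal sketch of what such a notifier could look like; the hub class EntityChangesHub, the client method name "TableChanged", and the IChangeNotifier abstraction are all assumed names, not taken from the question:

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class EntityChangesHub : Hub { }

public interface IChangeNotifier
{
    Task NotifyAsync(string table, object payload);
}

public class SignalRChangeNotifier : IChangeNotifier
{
    private readonly IHubContext<EntityChangesHub> hub;

    public SignalRChangeNotifier(IHubContext<EntityChangesHub> hub) => this.hub = hub;

    // Pushes a "TableChanged" message to all connected clients after a successful write.
    public Task NotifyAsync(string table, object payload) =>
        hub.Clients.All.SendAsync("TableChanged", table, payload);
}

Registered with services.AddSignalR() and services.AddScoped<IChangeNotifier, SignalRChangeNotifier>(), a controller or a ResultFilter can then call it only when the service call reports a successful change.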
I want to update data from an in-memory database to a relational database every 5 minutes.
Everyone suggests that I use IHostedService from .NET Core itself or some third-party package such as Hangfire.
However, I think that is troublesome because it requires writing a lot of code.
I have a (perhaps strange) idea of achieving it with a looping task, for example:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

namespace WebApplication3.Controllers
{
    [Route("api/")]
    public class TestController : Controller
    {
        public TestController()
        {
            Task.Run(() => {
                while (true)
                {
                    Console.WriteLine("123"); // code something to transfer data to the database
                    Thread.Sleep(5000);
                    //Task.Delay(5000);
                }
            });
        }

        [Route("ABC")]
        public void ABC()
        {
        }
    }
}
What's more, it is strange that it doesn't delay at all if I use Task.Delay, while it works well if I use Thread.Sleep.
I wonder why nobody achieves this with a Task from System.Threading.Tasks?
Maybe it is a stupid question, but I want to find the reason. Thank you.
Generally it is because of how the process works. I will assume that you don't place the Task.Run() in the controller but in the startup code. If you use it in the controller, it will simply start a new task for each and every request.
The way ASP.NET Core works is that it starts a process that listens for incoming requests and serves each request on a thread from the thread pool. Remember that creating a new thread with Task.Run is not the same as something running in the background; for it to truly run in the background you would need a new process, not a thread from the thread pool. In your case this will be a thread that always runs and is never freed to serve other requests.
What's more, it is strange that it doesn't delay at all if I use Task.Delay, while it works well if I use Thread.Sleep.
Use Task.Delay. It requires the async keyword and it doesn't block the thread. Your delay did nothing because you were probably not using the await keyword.
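As a minimal sketch, the same loop with the delay actually awaited (otherwise the Task returned by Task.Delay is simply discarded):

Task.Run(async () =>
{
    while (true)
    {
        Console.WriteLine("123"); // transfer data to the database here
        await Task.Delay(5000);   // non-blocking; without await, nothing waits for it
    }
});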
I wonder why nobody achieves this with a Task from System.Threading.Tasks?
Generally you could definitely implement it that way, but the important thing is the control you have over it. Every 5 minutes your CPU and I/O usage will spike. But if you split the application into two containers on the same host, you can control how much CPU each container is allocated, so the transfer does not cause performance spikes in the API.
EDIT:
About the hosted service: as you can see from the documentation, ASP.NET Core starts a web server and then starts the IHostedService in a different process. That's why it's preferred. It's a background task, not a thread from your API's thread pool.
EDIT:
About the IHostedService: I was wrong, it doesn't start a new process. But you should still use it, because it's more manageable, and it lets you swap implementations easily and maintain them in a much more structured way.
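For comparison, a minimal sketch of the hosted-service version; the class name SyncToDatabaseService is made up and the actual transfer logic is left as a comment:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

public class SyncToDatabaseService : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // transfer the in-memory data to the relational database here
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}

It is registered with services.AddHostedService<SyncToDatabaseService>(), and the host starts and stops it together with the application, which is the manageability mentioned above.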
I have a service that collects data and has to survive the app's life-cycle changes while the app is in the background. This service resides in the same process as my app, i.e. it is registered in the manifest as well.
The service posts LiveData to the app, and the main app retrieves this LiveData by binding to the service and doing something like:
private void onServiceConnected(TicketValidatorService service) {
    ...
    service.getStatus().observe(this, new Observer<SomeStatus>() {
        @Override
        public void onChanged(SomeStatus status) {
            handleStatusChanged(status);
        }
    });
    ...
}
Is this considered bad practice? Or should I rather communicate via Messenger/Handler or LocalBroadcastManager stuff over the service/app boundary? It would be difficult to put the service in another process, but I don't think I have to do that for the sake of my task.
Communicating with a local service directly is not considered bad practice; in fact, it is the official recommendation. There is no reason to complicate your code to support cross-process communication when you are not going to use it. Moreover, that kind of communication involves marshalling/unmarshalling, which restricts the data types you can pass through and has some performance cost.
Also please note that, starting from Android 8, there are limitations on background services. So if you are not running your service as a foreground service, it is not going to stay alive for long after your app goes to the background.
After certain actions (say a PUT or a DELETE) in my services, I would like to send a notification to a user or to a group of users; this is done before sending the response of the action.
My way of implementing notifications is quite simple; I have an interface:
public interface INotification {
    void send(string mail, string content);
    void send(Group group, string content);
}
which represents every type of notification. I inject the notification types that are used into a given service, but I don't see this as an optimal solution. Is there a better way to accomplish this? Are there any frameworks that integrate easily with ServiceStack and would help me achieve this?
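One way to keep a service down to a single injected dependency is a composite that fans out to all registered notifiers; a minimal sketch (the class name is made up, the interface is the one above):

using System.Collections.Generic;

public class CompositeNotification : INotification
{
    // The container resolves every registered INotification implementation;
    // services only depend on this one type.
    private readonly IEnumerable<INotification> notifiers;

    public CompositeNotification(IEnumerable<INotification> notifiers)
    {
        this.notifiers = notifiers;
    }

    public void send(string mail, string content)
    {
        foreach (var n in notifiers) n.send(mail, content);
    }

    public void send(Group group, string content)
    {
        foreach (var n in notifiers) n.send(group, content);
    }
}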
Another problem, from my point of view, is loading a template: this is done every time I send a notification. I don't like this approach since I assume it is not optimal (but this is a different problem).
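For the template problem, a minimal sketch of loading it once and reusing it; the file path and class name are hypothetical:

using System;
using System.IO;

public static class NotificationTemplates
{
    // Loaded once on first use instead of on every notification.
    private static readonly Lazy<string> mailTemplate =
        new Lazy<string>(() => File.ReadAllText("templates/notification.html"));

    public static string MailTemplate => mailTemplate.Value;
}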
Thanks for all the help you can provide me.
I am considering adding EventStore to my app to handle a similar scenario, with the added requirement of an auditable history of object changes:
https://github.com/joliver/EventStore
I've not tried it out yet.
This is a weird thing.
I created a simple SOAP-based web service with WCF. When the 'SubmitTransaction' method is called, the transaction is passed on to an application service. But if the application service is not available, it is written to MSMQ instead.
Like this:
public void SubmitTransaction(someTransaction)
{
    try
    {
        // pass transaction data to application
    }
    catch (SomeError)
    {
        // write to MSMQ
    }
}
So when an error occurs, the transaction is written to the queue. Now, when using the MSMQ API directly in my WCF service, everything is fine. Each call takes a few milliseconds.
E.g.:
...
catch (SomeError)
{
    // write to MSMQ
    var messageQueue = new MessageQueue(queuePath);
    try
    {
        messageQueue.Send(accountingTransaction, MessageQueueTransactionType.Single);
    }
    finally
    {
        messageQueue.Close();
    }
}
But since I want to use the message queue functionality at some other points of the system as well, I created a new assembly that takes care of the message queue writing.
Like:
...
catch (SomeError)
{
    // write to MSMQ
    var messageQueueService = new MessageQueueService();
    messageQueueService.WriteToQueue(accountingTransaction);
}
Now, with this setup, the web service is suddenly very slow. From the above-mentioned milliseconds, each call now takes up to 4 seconds, just because the message queue code is encapsulated in a new assembly. The logic is exactly the same. Does anyone know what the problem could be...?
Thanks!
OK, now I know. It has something to do with my logging setup (log4net). I'll have to check that first. Sorry for stealing your time.
You have two new lines of code here:
var messageQueueService = new MessageQueueService();
messageQueueService.WriteToQueue(accountingTransaction);
Do you know which of the two is causing the problem? Perhaps add some logging, or profiling, or step through in a debugger to see which one seems slow.
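A minimal sketch of timing the two calls separately; log here is a hypothetical log4net ILog, and Console.WriteLine would work just as well:

var sw = System.Diagnostics.Stopwatch.StartNew();
var messageQueueService = new MessageQueueService();
log.DebugFormat("MessageQueueService ctor: {0} ms", sw.ElapsedMilliseconds);

sw.Restart();
messageQueueService.WriteToQueue(accountingTransaction);
log.DebugFormat("WriteToQueue: {0} ms", sw.ElapsedMilliseconds);

Given the later update that the log4net setup was involved, whichever of the two triggers logger initialization is a plausible place for the delay to show up.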