Hangfire.Console logging not displayed in Dashboard

I've been using Hangfire for ages, but only recently discovered you can use Hangfire.Console to output logging, which can be seen in the Hangfire Dashboard.
I've seen it work on someone else's project, but when adding logging to my own project, I cannot make it work.
This is the gist of the task.
I checked with the debugger that all lines are executed and the job finishes successfully.
public async Task MethodName(PerformContext context)
{
    var result = await apiClient.MethodName();

    if (result != null)
    {
        context.SetTextColor(ConsoleTextColor.Green);
        context.WriteLine($"Checked: {result.Checked}");
        context.WriteLine($"Cleared: {result.Cleared}");
    }
    else
    {
        context.SetTextColor(ConsoleTextColor.Red);
        context.WriteLine("No result ...");
    }
}
I expected to find the output in my Hangfire dashboard, but it shows no logging whatsoever. Is there a crucial step I am missing?
Using Hangfire 1.7.18 and Hangfire.Console 1.4.2
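For reference, Hangfire.Console only renders job output if the console extension is registered on the global configuration, so a commonly missed step is calling UseConsole() at startup. A minimal sketch, assuming SQL Server storage and the classic GlobalConfiguration bootstrap (adjust to your own setup; MyJobClass and the connection string are placeholders):

using Hangfire;
using Hangfire.Console;

// Register storage and the console extension before the server starts;
// UseConsole() is what makes context.WriteLine(...) appear in the Dashboard.
GlobalConfiguration.Configuration
    .UseSqlServerStorage("your-connection-string")   // assumed storage provider
    .UseConsole();

// When enqueueing, pass null for the PerformContext; Hangfire injects the real one at run time.
BackgroundJob.Enqueue<MyJobClass>(x => x.MethodName(null));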

Related

Jobrunr: How to add custom filters for job?

I am trying to integrate JobRunr 5.1.6 into my Micronaut application. I am unable to get my filter implementing ApplyStateFilter to trigger when a job is processed. Please see below:
class JobCompletionFilter : ApplyStateFilter {
    override fun onStateApplied(job: Job?, oldState: JobState?, newState: JobState?) {
        if (newState != null) {
            if (isFailed(newState) && maxAmountOfRetriesReached(job)) {
                // Failed after retry. Add logs and handle strategy logic
            } else if (newState is SucceededState) {
                // Job succeeded. Add job completion logic
            }
        }
    }
}
How do I inject this filter for all of my jobs? Below is how I am enqueuing the jobs:
@Inject
lateinit var jobScheduler: JobScheduler

jobScheduler.enqueue { jobProcessor.execute(job) }
In the free version, you will have to configure JobRunr manually via the Fluent Java API instead of using the Spring integration. There you can pass your filters. An example:
JobRunr
    .configure()
    .withJobFilter(...) // pass your job filters here
For more info, see the fluent API.
In the Pro version, any Job Filter which is a Spring Bean (e.g. a @Component or @Service) will automatically be executed without any additional configuration. For more info, see the docs.

Tracking hangfire background jobs with app insights

I have set up Application Insights in an ASP.NET Core application. All my Web API requests are tracked in App Insights, and if any of them fail I can simply find them in the Failures section.
However, I also have Hangfire background jobs running, and when they fail I can't find them in App Insights. I also have the alert rule "Whenever the total http server errors is greater than or equal to 1 count", and I am not sure whether Hangfire failures count as HTTP 5xx errors under this condition.
So is there any way to track Hangfire job failures and get notified about them?
Hangfire handles most exceptions under the hood, so App Insights is not going to pick them up by default. There is also a fair amount of configuration you have to do on the App Insights side.
I wrote a JobFilter for Hangfire which connects it to App Insights; this should be enough to get you going:
https://github.com/maitlandmarshall/MIFCore/blob/master/MIFCore.Hangfire/Analytics/AppInsightsEventsFilter.cs
And for the App Insights configuration:
https://github.com/maitlandmarshall/MIFCore/blob/master/MIFCore.Hangfire/Analytics/TelemetryConfigurationFactory.cs
To put everything together from the above links:
var appInsights = this.rootScope.ResolveOptional<AppInsightsConfig>();
var childScope = ServiceScope = this.rootScope.BeginLifetimeScope("HangfireServiceScope");
var activator = new AutofacLifecycleJobActivator(childScope);
var options = new BackgroundJobServerOptions()
{
    Activator = activator,
    Queues = new[] { JobQueue.Alpha, JobQueue.Beta, JobQueue.Default, JobQueue.Low }
};

this.globalConfig
    .UseFilter(new BackgroundJobContext());

if (!string.IsNullOrEmpty(appInsights?.InstrumentationKey))
{
    var telemetryClient = new TelemetryClient(TelemetryConfigurationFactory.Create(appInsights));
    this.globalConfig.UseFilter(new AppInsightsEventsFilter(telemetryClient));
}

using (var server = new BackgroundJobServer(options))
{
    await server.WaitForShutdownAsync(stoppingToken);
}
There is a nice NuGet package for this, Hangfire.Extensions.ApplicationInsights.
So, install the package:
Install-Package Hangfire.Extensions.ApplicationInsights
and add this line to the ConfigureServices method:
services.AddHangfireApplicationInsights();
If your solution requires some custom details, you can adjust the code from the GitHub repository.
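For context, a minimal ConfigureServices wiring could look like the sketch below; the SQL Server storage and the "Hangfire" connection string name are assumptions, while AddHangfireApplicationInsights comes from the package above:

public void ConfigureServices(IServiceCollection services)
{
    // Standard App Insights telemetry for the web requests themselves.
    services.AddApplicationInsightsTelemetry();

    // Hangfire with an assumed SQL Server storage; swap in your own provider.
    services.AddHangfire(config => config
        .UseSqlServerStorage(Configuration.GetConnectionString("Hangfire")));

    // Tracks Hangfire jobs (including failures) in App Insights.
    services.AddHangfireApplicationInsights();

    services.AddHangfireServer();
}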

Task Module call from Ms Teams in Bot Framework

I am looking to open a task module (a pop-up iframe with audio/video) in my bot, which is connected to the Teams channel. I am having issues following the sample code provided on the GitHub page.
I have tried to follow the sample and incorporate it into my code, but did not succeed.
In my bot.cs file I am creating a card action of invoke type:
card.Buttons.Add(new CardAction("invoke", TaskModuleUIConstants.YouTube.ButtonTitle, null, null, null,
    new Teams.Samples.TaskModule.Web.Models.BotFrameworkCardValue<string>()
    {
        Data = TaskModuleUIConstants.YouTube.Id
    }));
In my BotController.cs, which inherits from Controller:
[HttpPost]
public async Task PostAsync()
{
    // Delegate the processing of the HTTP POST to the adapter.
    // The adapter will invoke the bot.
    await _adapter.ProcessAsync(Request, Response, _bot);
}

public async Task<HttpResponseMessage> Post([FromBody] Activity activity)
{
    if (activity.Type == ActivityTypes.Invoke)
    {
        return HandleInvokeMessages(activity);
    }

    return new HttpResponseMessage(HttpStatusCode.Accepted);
}

private HttpResponseMessage HandleInvokeMessages(Activity activity)
{
    var activityValue = activity.Value.ToString();

    if (activity.Name == "task/fetch")
    {
        var action = Newtonsoft.Json.JsonConvert.DeserializeObject<Teams.Samples.TaskModule.Web.Models.BotFrameworkCardValue<string>>(activityValue);

        Teams.Samples.TaskModule.Web.Models.TaskInfo taskInfo = GetTaskInfo(action.Data);
        Teams.Samples.TaskModule.Web.Models.TaskEnvelope taskEnvelope = new Teams.Samples.TaskModule.Web.Models.TaskEnvelope
        {
            Task = new Teams.Samples.TaskModule.Web.Models.Task()
            {
                Type = Teams.Samples.TaskModule.Web.Models.TaskType.Continue,
                TaskInfo = taskInfo
            }
        };

        // The original snippet returned an undefined "msg"; the envelope has to be
        // serialized into the HTTP response so Teams can render the task module.
        return new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(
                Newtonsoft.Json.JsonConvert.SerializeObject(taskEnvelope),
                System.Text.Encoding.UTF8,
                "application/json")
        };
    }

    return new HttpResponseMessage(HttpStatusCode.Accepted);
}
There is more code as per the GitHub sample, but I won't paste it all here. Can someone point me in the right direction?
I have got to the stage where it displays a pop-up window, but the content and title come from the manifest file instead of an actual iframe being created, and no video is rendering. My goal is to render video within Teams using an iframe container.
The important part from the sample:
This sample is deployed on Microsoft Azure and you can try it yourself by uploading Task Module CSharp.zip to one of your teams and/or as a personal app. (Sideloading must be enabled for your tenant; see step 6 here.) The app is running on the free Azure tier, so it may take a while to load if you haven't used it recently and it goes back to sleep quickly if it's not being used, but once it's loaded it's pretty snappy.
So,
Your Teams Admin MUST enable sideloading
Your bot MUST be sideloaded into Teams
The easiest way to do this would be to download the sample manifest, open it in App Studio, then edit in your own bot information. You then need to make sure Domains and permissions > Valid Domains is set for your bot. Also ensure you change the Tabs URLs to your own.
You also need to make sure that in your Tasks, the URLs they call ALL use https and not http. If anywhere in the chain is using http (like if you're using ngrok and http://localhost), it won't work.
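To illustrate the https requirement, the "continue" response built in HandleInvokeMessages above ultimately points the task module at a web page, so that page's URL must be https and its domain must appear under Valid Domains. A rough sketch of a GetTaskInfo along those lines; the property names are assumed to match the sample's TaskInfo model and the URLs are placeholders:

private Teams.Samples.TaskModule.Web.Models.TaskInfo GetTaskInfo(string id)
{
    // Assumed shape of the sample's TaskInfo model; the page at Url hosts the
    // actual iframe/video and must be served over https from a valid domain.
    return new Teams.Samples.TaskModule.Web.Models.TaskInfo
    {
        Title = "Video",
        Url = "https://yourdomain.example/video",        // placeholder https page
        FallbackUrl = "https://yourdomain.example/video" // placeholder fallback
    };
}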

easynetQ delayed respond/request resulting in timeout

I've run into a problem using the request/respond pattern of EasyNetQ on our server (Windows Server 2008). I'm not able to reproduce it locally at the moment.
The setup is that we have two Windows services (running as console applications for testing) which are connected through the request/respond pattern in EasyNetQ. This was working as expected until recently; now, on the server, the request side does not "consume" the responses until after the request times out.
I have included 2 links to pastebin which contain the console logging of EasyNetQ which will hopefully make my problem a bit more clear.
RequestSide
RespondSide
Besides that, my request code looks like this:
var request = new foobar();
var response = _bus.Request<foobar, foobar2>(request);
and on the respond side:
var response = new response();

_bus.Respond<foobar, foobar2>(request =>
{
    try
    {
        ....
        return response;
    }
    catch (Exception e)
    {
        ....
        return response;
    }
});
As I've said, the request side sends the request as expected and the respond side consumes/catches it. This works as it should, but when the respond side is done processing and responds (which it does, the messages can be seen in the RabbitMQ management UI), the request side doesn't consume/catch the response until after the request has timed out (the default timeout is 10s; I tried setting it to 60s as well, which makes no difference). This is also evident in the logs linked above, as you'll see on the RequestSide, with the 5 or so messages received from the response queue which previously timed out.
I've tried using RespondAsync in case the processing was taking too long and messing something up; that didn't help. I tried using both RespondAsync and RequestAsync, which just messed everything up even more (I was probably doing something wrong with the request :)).
I might be missing something, but I'm not sure what to try from here.
EDIT: I noticed I had messed something up, and have added more context below:
The IBus used for the request/response is created and injected with Ninject:
class FooModule : NinjectModule
{
    public override void Load()
    {
        Bind<IBus>().ToMethod(ctx => RabbitHutch.CreateBus("host=localhost", x => x.Register<IEasyNetQLogger>(_ => logger))).InSingletonScope();
    }
}
And it's all tied together by the service being constructed using Topshelf with Ninject like so:
static void Main(string[] args)
{
    HostFactory.Run(x =>
    {
        x.UseNinject(new FooModule());

        x.Service<FooService>(s =>
        {
            s.ConstructUsingNinject();
            s.WhenStarted((service, control) => service.Start(control));
            s.WhenStopped((service, control) => service.Stop(control));
        });

        x.RunAsLocalSystem();
    });
}
The Topshelf setup has all been tested pretty thoroughly and it works as intended, and should not really be relevant for the request/respond problem, but I thought I would provide a bit more context.
I had this same issue. My problem was that I set the timeout only on the respond side but not on the request side; after I set the timeout on both sides it worked fine.
My connection string, for example:
host=hostname;timeout=120;virtualHost=myhost;username=myusername;password=mypassword
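Applied to the Ninject binding above, that simply means both the request-side and the respond-side service create their bus with the same explicit timeout; a minimal sketch (host and credentials are placeholders):

// Use an explicit timeout (in seconds) in the connection string on BOTH services,
// not just the responder.
Bind<IBus>()
    .ToMethod(ctx => RabbitHutch.CreateBus(
        "host=hostname;virtualHost=myhost;username=myusername;password=mypassword;timeout=120"))
    .InSingletonScope();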

Web API 2 return OK response but continue processing in the background

I have created an MVC Web API 2 webhook for Shopify:
public class ShopifyController : ApiController
{
    // PUT: api/Afilliate/SaveOrder
    [ResponseType(typeof(string))]
    public IHttpActionResult WebHook(ShopifyOrder order)
    {
        // need to return 202 response otherwise webhook is deleted
        return Ok(ProcessOrder(order));
    }
}
Where ProcessOrder loops through the order and saves the details to our internal database.
However, if the process takes too long, Shopify calls the API again because it thinks the webhook has failed. Is there any way to return the OK response first and then do the processing after?
Kind of like when you return a redirect in an MVC controller and have the option of continuing to process the rest of the action after the redirect.
Please note that I will always need to return the OK response, as Shopify in all its wisdom has decided to delete the webhook if it fails 19 times (and processing for too long is counted as a failure).
I have managed to solve my problem by running the processing asynchronously using Task.Run:
// PUT: api/Afilliate/SaveOrder
public IHttpActionResult WebHook(ShopifyOrder order)
{
    // this should process the order asynchronously
    var tasks = new[]
    {
        Task.Run(() => ProcessOrder(order))
    };

    // without the await here, this should be hit before the order processing is complete
    return Ok("ok");
}
There are a few options to accomplish this:
Let a task runner like Hangfire or Quartz run the actual processing, where your web request just kicks off the task (see the sketch after this list).
Use queues, like RabbitMQ, to run the actual process, and have the web request just add a message to the queue... be careful, this one is probably the best but can require some significant know-how to set up.
Though maybe not exactly applicable to your specific situation, as you have another process waiting for the request to return... but if you did not, you could use JavaScript AJAX to kick off the process in the background, and maybe you can turn retries off on that request... still, that keeps the request going in the background, so maybe not exactly your cup of tea.
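As a rough sketch of the Hangfire option, the webhook action can enqueue a background job and return immediately; ProcessOrder is the existing method from the question, and the Hangfire server/storage setup is assumed to be configured elsewhere:

public IHttpActionResult WebHook(ShopifyOrder order)
{
    // Persist the work to Hangfire's storage and return right away;
    // a Hangfire server picks it up, runs it, and retries it on failure.
    BackgroundJob.Enqueue(() => ProcessOrder(order));

    return Ok("ok");
}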
I used Response.CompleteAsync(); like below. I also added a neat middleware and attribute to indicate no post-request processing.
[SkipMiddlewareAfterwards]
[HttpPost]
[Route("/test")]
public async Task Test()
{
    /*
        let them know you've 202 (Accepted) the request
        instead of 200 (Ok), because you don't know that yet.
    */
    HttpContext.Response.StatusCode = 202;
    await HttpContext.Response.CompleteAsync();

    await SomeExpensiveMethod();

    // Don't return, because default middleware will kick in. (e.g. error page middleware)
}

public class SkipMiddlewareAfterwards : ActionFilterAttribute
{
    //ILB
}

public class SomeMiddleware
{
    private readonly RequestDelegate next;

    public SomeMiddleware(RequestDelegate next)
    {
        this.next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        await next(context);

        if (context.Features.Get<IEndpointFeature>().Endpoint.Metadata
            .Any(m => m is SkipMiddlewareAfterwards)) return;

        // post-request actions here
    }
}
Task.Run(() => ImportantThing()) is not an appropriate solution, as it exposes you to a number of potential problems, some of which have already been explained above. Imo, the most nefarious of these are unhandled exceptions on the worker task, which can straight up kill your worker process with no trace of the error outside of event logs or whatever is captured at the OS level, if that's even available. Not good.
There are many more appropriate ways to handle this scenario, like handing off to a service bus or implementing a HostedService.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-6.0&tabs=visual-studio
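As a rough illustration of the HostedService route from that link, the webhook can push the order onto an in-memory queue and a background service can drain it; the channel-backed OrderQueue and OrderProcessor names below are assumptions for the sketch, not part of the original answer:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Simple channel-backed queue shared between the controller and the worker.
public class OrderQueue
{
    private readonly Channel<ShopifyOrder> channel = Channel.CreateUnbounded<ShopifyOrder>();

    public ValueTask EnqueueAsync(ShopifyOrder order) => channel.Writer.WriteAsync(order);

    public IAsyncEnumerable<ShopifyOrder> ReadAllAsync(CancellationToken ct) => channel.Reader.ReadAllAsync(ct);
}

// Background worker: drains the queue outside the request pipeline,
// so the webhook action can return OK immediately after enqueuing.
public class OrderProcessor : BackgroundService
{
    private readonly OrderQueue queue;

    public OrderProcessor(OrderQueue queue) => this.queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await foreach (var order in queue.ReadAllAsync(stoppingToken))
        {
            // Wrap each order so one failure doesn't kill the worker.
            try { await ProcessOrderAsync(order); }
            catch (Exception) { /* log the failure */ }
        }
    }

    // Placeholder for the actual order-saving logic.
    private Task ProcessOrderAsync(ShopifyOrder order) => Task.CompletedTask;
}

Register these with services.AddSingleton<OrderQueue>() and services.AddHostedService<OrderProcessor>(), and have the webhook action await queue.EnqueueAsync(order) before returning Ok.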