Hangfire - new server appears every time code is changed

I'm new to Hangfire. I have it working on a dev machine, but every time I change the code and run the app (ASP.NET Core 2 MVC), a new server appears in the list on the dashboard.
I can't find anything about this in the documentation or in the sample files. I've read about cancellation tokens, but those seem to be for intentional shutdown requests, not code updates.
Is this expected behaviour? Am I expected to manually restart the application in IIS every time the code is updated (more important on the server than on the dev machine, obviously)?
Thanks.

Found a workaround at this link which worked for me. Credit to ihockett.
TL;DR
I know this is a pretty old topic at this point, but I've been running into a similar issue, and I wanted to throw in my contribution for working around jobs which have been aborted due to server shutdown. If automatic retries are disabled (Attempts = 0), or the job fails due to server shutdown and is beyond the maximum number of attempts, you can run into this issue. Unfortunately for us, this was causing new jobs to not start processing until the aborted jobs were either manually deleted or re-queued.
Basically, I took the following approach to automatically handle aborted jobs: during startup and after initializing the BackgroundJobServer, I use the MonitoringApi to get all of the currently processing jobs. If there are any, I loop through each and call BackgroundJob.Requeue(jobId). Here’s the code, for reference:
var monitor = Hangfire.JobStorage.Current.GetMonitoringApi();
var processingCount = monitor.ProcessingCount();
if (processingCount > 0)
{
    // Requeue every job left in the Processing state by the previous
    // (shut-down) server instance, so new jobs aren't blocked behind them.
    foreach (var job in monitor.ProcessingJobs(0, (int)processingCount))
    {
        BackgroundJob.Requeue(job.Key);
    }
}
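One caveat worth adding (my observation, not part of the original workaround): the monitoring API reports processing jobs across every server attached to the same storage, so if you run multiple Hangfire servers against one database, this startup sweep could requeue jobs that another, still-running server is actively processing.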

Sonar API: after a scan is finished on a new pull request it's not possible to get /api/measures/component?metricKeys=coverage

SonarQube: Enterprise Edition Version 9.2.4 (build 50792)
Sonar client: 4.7.0.2747
A scan is launched for a merge request in GitLab, and I am requesting coverage for the pull request.
Immediately after the scan (using the scanner client) is finished, I try to get coverage with the following call:
http:///api/measures/component?metricKeys=coverage&component=&pullRequest=
I am getting:
404: "{"errors":[{"msg":"Component \u0027\u0027 of pull request \u0027\u0027 not found"}]}"
Interestingly, if I put a short sleep (1 second) after the scan is finished and before I make the call to get coverage, everything is fine.
It seems to have something to do with the fact that it's a new pull request: even though the scan is finished and generates a link with results, it still takes some time before the API call I mentioned is able to return coverage. Also, if I retry the operation (scan and get results) on an already existing pull request, there are no issues like this.
Could you please elaborate on this issue: is such behavior expected, or are there other ways I can get coverage right away after the scan is finished, without adding any sleeps?
As a side observation, under the same circumstances, if I scan a new pull request and call another API (/issues/search?) to get the list of detected issues, it works without any additional sleeps.
Thank you.
After the call from the scanner client completes, SonarQube executes a "background task" in the project that finalizes the computations of measures. When the background task is complete, your measures will be available. This is why adding a "sleep" appears to work for you. In reality, it's just luck that you're sleeping long enough. The proper way to do this is to either manually check the status of the background task, or use tools that check for the background task completion under the covers.
If you're using Jenkins pipelines, and you have the "webhook" properly configured in SonarQube to notify completion of the background task, then the "waitForQualityGate" pipeline step does this, first checking to see if the task is already complete, and if not, going into a polling loop waiting for it to complete.
The machinery uses the "report-task.txt" file that should be written by the scanner. This is in the form of a Java properties file, but there's only one property in the file that you care about, which is the "ceTaskId" property. That is the id of the background task. You can then make an api call to "/api/ce/task?id=", which returns a block that tells you whether the background task is complete or not.
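To make that concrete, here is a minimal, hedged sketch of that polling approach (C# for illustration, not an official client; the host URL is a placeholder, the ceTaskId is the property read from report-task.txt as described above, and the exact status strings are my assumption about the response body):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CeTaskPoller
{
    static async Task Main(string[] args)
    {
        var http = new HttpClient(); // add authentication as your instance requires
        var ceTaskId = args[0];      // the ceTaskId property read from report-task.txt

        while (true)
        {
            // /api/ce/task?id=... reports the state of the background task.
            string json = await http.GetStringAsync(
                "https://sonar.example.com/api/ce/task?id=" + ceTaskId);

            // Crude string checks to keep the sketch short; use a JSON parser in real code.
            if (json.Contains("\"status\":\"SUCCESS\"")) break;
            if (json.Contains("\"status\":\"FAILED\"") || json.Contains("\"status\":\"CANCELED\""))
                throw new Exception("Background task did not succeed: " + json);

            await Task.Delay(TimeSpan.FromSeconds(1)); // still pending or in progress
        }

        // Only now is it safe to call /api/measures/component?metricKeys=coverage&...
    }
}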

Remote debugging from IDEA doesn't work for OpenResty

I am using MobDebug. If I run a Lua script from the command line, everything works.
But when I run it from OpenResty, IDEA doesn't stop. It only writes "Connected / Disconnected".
Configs:
location / {
    access_by_lua_block {
        local client = require("client")
    }
}
client.lua:
local mobdebug = require("mobdebug");
mobdebug.start()
local lfs = require("lfs")
print("Folder: "..lfs.currentdir())
mobdebug's debug_hook is not invoked for the lines that need it, and set_breakpoint is never invoked.
IDEA shows debug logs, but nothing occurs:
IDEA catches the debugger when client.lua is run from the terminal, but misses it when running under nginx.
THIS IS NOT AN ANSWER. It's just that I am experiencing basically the same problem, and the comment space is too small to fit all the relevant observations I would like to share:
I was actually able to stop immediately after mobdebug.start() in code running in nginx, and to step-debug - but only in code called directly from init_by_lua_block. This code, of course, executes once during server startup or config reload.
I was never able to stop in worker code (e.g. rewrite_by_lua_*). mobdebug.coro() didn't help, and mobdebug.on() threw "attempt to yield across C-call boundary".
I was only ever able to stop one time, on the next statement after mobdebug.start(); once I hit |> (Resume program), it won't stop on any further breakpoints.
Using mobdebug.loop() is not a correct way to do this, as it's used for live coding, which is not going to work as expected with this setup. mobdebug.start() should be used instead.
Please see an example of how this debugging can be set up with ZeroBrane Studio here: http://notebook.kulchenko.com/zerobrane/debugging-openresty-nginx-lua-scripts-with-zerobrane-studio. All the details on how paths to mobdebug and required modules are configured should still be applicable to your environment.

How to continue a test in Selenium when the page still hasn't completely loaded

I am creating automated tests for an e-commerce website. The website has lazy loading (or something like it), and I am testing on a UAT server, so pages load slowly because of the server's specification: it takes 60 seconds or more to load all the resources of a page. When I create Selenium automation, every step waits more than 60 seconds for the page to fully load before continuing. Please, can someone give me tips on how to continue to the next test step after waiting only 10 seconds for the page to load, without throwing an exception?
Not possible.
If you find an element and try to execute some action on it while the page is still loading, you will get a stale element error; on top of that, the loading issue will give you a lot of failed tests, and it will take a lot more time to debug.
Automation is meant to execute fast and give reliable results.
It seems that this environment is not built for automation; you should request more resources.
As an alternative, maybe you can use a headless driver, or see if you can put the same build on a VM.
Why this is an issue: Selenium needs to wait for each request to complete. For example, when you request a page, if the page has not been received entirely and the server is still sending data, then the request is not done; it is logical that you need a complete request in order to continue.
You should address this to your Project Manager/QA Lead and ask for advice/options on how to handle this.
Please note that these costs should be included/added in the automation price. You need to address this in a simple way:
good server -> automation runs smoothly and fast, and the testing is done faster
bad server -> unable to run automation since it is not reliable and each test has a high rate of failure => alternative: X day(s) of manual testing for each build
If this were a coding issue, like some delayed AJAX request, then you would have some solutions and the devs could help; but if it is an infrastructure/resources issue, then it does not depend on you and you cannot solve it.
You could try any type of wait, implicit or explicit (an explicit wait will throw an exception on timeout), but this is not a solution for poor resources.
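For what it's worth, here is a minimal sketch of that explicit-wait idea using the C# bindings (the URL and locator are placeholders, and the page-load strategy option is an extra knob beyond what the answer above discusses): navigation normally blocks until the page finishes loading, so the strategy is relaxed and the test then waits up to 10 seconds for the one element the next step actually needs.

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Support.UI;

class ContinueAfterTenSeconds
{
    static void Main()
    {
        // Don't block on the full page load; lazy-loaded resources keep streaming in.
        var options = new ChromeOptions { PageLoadStrategy = PageLoadStrategy.None };
        IWebDriver driver = new ChromeDriver(options);
        try
        {
            driver.Navigate().GoToUrl("https://uat.example.com/products"); // placeholder URL

            // Wait at most 10 seconds for the element the next step needs.
            var wait = new WebDriverWait(driver, TimeSpan.FromSeconds(10));
            IWebElement addToCart = wait.Until(d => d.FindElement(By.Id("add-to-cart"))); // placeholder locator
            addToCart.Click();
        }
        finally
        {
            driver.Quit();
        }
    }
}

As the answer says, this papers over the slow environment rather than fixing it: if the element genuinely takes 60+ seconds to appear, the wait still times out and the test fails.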

Best way to run scheduled tasks in ASP.NET Core [duplicate]

Today we have built a console application for running the scheduled tasks for our ASP.NET website, but I think this approach is a bit error-prone and difficult to maintain. How do you execute your scheduled tasks (in a Windows/IIS/ASP.NET environment)?
Update:
Examples of tasks:
Sending email from an email-queue in the database
Removing outdated objects from the database
Retrieving stats from Google AdWords and fill a table in the database.
This technique by Jeff Atwood for Stack Overflow is the simplest method I've come across. It relies on the "cache item removed" callback mechanism built into ASP.NET's cache system.
Update: Stack Overflow has outgrown this method. It only works while the website is running, but it's a very simple technique that is useful for many people.
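For reference, here is a minimal sketch of that cache-callback trick (the class, key, and method names are mine, not Jeff Atwood's code): an item is inserted into the ASP.NET cache with an absolute expiration, and the removal callback both does the work and re-inserts the item so it fires again.

using System;
using System.Web;
using System.Web.Caching;

public static class CacheScheduler
{
    const string CacheKey = "ScheduledTaskTrigger"; // arbitrary name

    // Call CacheScheduler.Register(60) once from Application_Start.
    public static void Register(int seconds)
    {
        HttpRuntime.Cache.Insert(CacheKey, seconds, null,
            DateTime.Now.AddSeconds(seconds), Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable, OnCacheItemRemoved);
    }

    static void OnCacheItemRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        // Runs roughly every N seconds, but only while the app domain stays alive.
        DoScheduledWork();
        Register((int)value); // re-insert so the callback fires again
    }

    static void DoScheduledWork() { /* send queued email, purge old rows, etc. */ }
}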
Also check out Quartz.NET
All of my tasks (which need to be scheduled) for a website are kept within the website and called from a special page. I then wrote a simple Windows service which calls this page every so often. Once the page runs it returns a value. If I know there is more work to be done, I run the page again, right away, otherwise I run it in a little while. This has worked really well for me and keeps all my task logic with the web code. Before writing the simple Windows service, I used Windows scheduler to call the page every x minutes.
Another convenient way to run this is to use a monitoring service like Pingdom. Point their http check to the page which runs your service code. Have the page return results which then can be used to trigger Pingdom to send alert messages when something isn't right.
Create a custom Windows Service.
I had some mission-critical tasks set up as scheduled console apps and found them difficult to maintain. I created a Windows Service with a 'heartbeat' that would check a schedule in my DB every couple of minutes. It's worked out really well.
Having said that, I still use scheduled console apps for most of my non-critical maintenance tasks. If it ain't broke, don't fix it.
I've found this to be easy for all involved:
Create a webservice method such as DoSuchAndSuchProcess
Create a console app that calls this webmethod.
Schedule the console app in the task scheduler.
Using this methodology, all of the business logic is contained in your web app, but you have the reliability of the Windows Task Scheduler, or any other commercial task scheduler, to kick it off and record any return information such as an execution report. Using a web service instead of posting to a page has a bit of an advantage, because it's easier to get return data from a web service.
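As a rough illustration of those three steps (not the original poster's code; the URL is a placeholder and it assumes the web method is exposed over HTTP GET), the console side can be as small as this:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ScheduledTaskCaller
{
    static async Task<int> Main()
    {
        using (var http = new HttpClient())
        {
            // Kick off the server-side work and capture its execution report.
            string report = await http.GetStringAsync(
                "https://www.example.com/TaskService.asmx/DoSuchAndSuchProcess");
            Console.WriteLine(report); // Task Scheduler can redirect this to a log
            return 0;                  // a non-zero exit code would signal failure to the scheduler
        }
    }
}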
Why reinvent the wheel? Use the Thread and Timer classes.
// In Global.asax: a background thread that owns a recurring timer.
private static System.Timers.Timer workTimer; // keep a reference so the timer isn't garbage collected

protected void Application_Start()
{
    Thread thread = new Thread(new ThreadStart(ThreadFunc));
    thread.IsBackground = true;
    thread.Name = "ThreadFunc";
    thread.Start();
}

protected void ThreadFunc()
{
    workTimer = new System.Timers.Timer();
    workTimer.Elapsed += new System.Timers.ElapsedEventHandler(TimerWorker);
    workTimer.Interval = 10000; // milliseconds: fire every 10 seconds
    workTimer.AutoReset = true; // keep firing, not just once
    workTimer.Start();          // Start() also sets Enabled = true
}

protected void TimerWorker(object sender, System.Timers.ElapsedEventArgs e)
{
    // do the scheduled work here
}
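Note that, like the cache trick above, anything hosted inside the ASP.NET process only runs while the application stays loaded; an app pool recycle or idle shutdown silently stops the timer.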
Use Windows Scheduler to run a web page.
To prevent malicious users or search engine spiders from running it, when you set up the scheduled task, simply call the web page with a query string, i.e.: mypage.aspx?from=scheduledtask
Then in the page load, simply use a condition:
if (Request.QueryString["from"] == "scheduledtask")
{
    // execute the task
}
This keeps search engine spiders and casual visitors from triggering your scheduled task; note that it is obscurity rather than real security, since anyone who learns the query string can still call the page.
This library works like a charm
http://www.codeproject.com/KB/cs/tsnewlib.aspx
It allows you to manage Windows scheduled tasks directly through your .NET code.
Additionally, if your application uses SQL Server, you can use SQL Server Agent to schedule your tasks. This is where we commonly put recurring code that is data-driven (email reminders, scheduled maintenance, purges, etc.). A great feature built into SQL Server Agent is its failure notification options, which can alert you if a critical task fails.
I'm not sure what kind of scheduled tasks you mean. If you mean stuff like "every hour, refresh foo.xml" type tasks, then use the Windows Scheduled Tasks system. (The "at" command, or via the controller.) Have it either run a console app or request a special page that kicks off the process.
Edit: I should add, this is an OK way to get your IIS app running at scheduled points too. So suppose you want to check your DB every 30 minutes and email reminders to users about some data, you can use scheduled tasks to request this page and hence get IIS processing things.
If your needs are more complex, you might consider creating a Windows Service and having it run a loop to do whatever processing you need. This also has the benefit of separating out the code for scaling or management purposes. On the downside, you need to deal with Windows services.
If you own the server, you should use the Windows Task Scheduler. Use AT /? from the command line to see the options.
Otherwise, from a web based environment, you might have to do something nasty like set up a different machine to make requests to a certain page on a timed interval.
I've used Abidar successfully in an ASP.NET project (here's some background information).
The only problem with this method is that the tasks won't run if the ASP.NET web application is unloaded from memory (i.e. due to low usage). One thing I tried is creating a task to hit the web application every 5 minutes to keep it alive, but this didn't seem to work reliably, so now I'm using the Windows scheduler and a basic console application to do this instead.
The ideal solution is creating a Windows service, though this might not be possible (i.e. if you're using a shared hosting environment). It also makes things a little easier from a maintenance perspective to keep things within the web application.
Here's another way:
1) Create a "heartbeat" web script that is responsible for launching the tasks if they are due or overdue to be launched.
2) Create a scheduled process somewhere (preferably on the same web server) that hits the web script and forces it to run at a regular interval (e.g. a Windows scheduled task that quietly launches the heartbeat script using IE or what have you).
The fact that the task code is contained within a web script is purely for the sake of keeping the code within the web application code-base (the assumption is that both are dependent on each other), which would be easier for web developers to manage.
The alternative approach is to create an executable server script/program that does all the scheduling work itself, and to run that executable as a scheduled task. This allows for fundamental decoupling between the web application and the scheduled task. Hence, if you need your scheduled tasks to run even in the event that the web app/database might be down or inaccessible, you should go with this approach.
You can easily create a Windows Service that runs code on an interval using the ThreadPool.RegisterWaitForSingleObject method. It is really slick and quite easy to set up. This method is a more streamlined approach than using any of the timers in the Framework.
Have a look at the link below for more information:
Running a Periodic Process in .NET using a Windows Service:
http://allen-conway-dotnet.blogspot.com/2009/12/running-periodic-process-in-net-using.html
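Here is a minimal console sketch of that pattern (my code, not the article's): a wait handle that is never signalled during normal operation, with the timeout acting as the interval.

using System;
using System.Threading;

class PeriodicService
{
    static readonly ManualResetEvent StopSignal = new ManualResetEvent(false);

    static void Main()
    {
        // Fire DoWork every 60 seconds until StopSignal is set.
        RegisteredWaitHandle handle = ThreadPool.RegisterWaitForSingleObject(
            StopSignal, DoWork, null, TimeSpan.FromSeconds(60), false /* repeat */);

        Console.ReadLine(); // in a real Windows Service this would be OnStop instead
        StopSignal.Set();
        handle.Unregister(null);
    }

    static void DoWork(object state, bool timedOut)
    {
        if (timedOut)
        {
            // The interval elapsed: run the scheduled job here.
        }
        // If timedOut is false, the wait handle was signalled (we're shutting down).
    }
}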
We use console applications also. If you use logging tools like log4net, you can properly monitor their execution. Also, I'm not sure how they are more difficult to maintain than a web page, given that you may be sharing some of the same code libraries between the two if it is designed properly.
If you are against having those tasks run on a timed basis, you could have a web page in the administrative section of your website that acts as a queue. The user puts in a request to run the task; it in turn inserts a blank datestamp record into a MyProcessQueue table, and your scheduled task checks every X minutes for a new record in MyProcessQueue. That way, it only runs when the customer wants it to run.
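A sketch of the scheduled task's polling side of that idea (table and column names are illustrative guesses; the answer only specifies a MyProcessQueue table with a blank datestamp record):

using System;
using System.Data.SqlClient;

class QueuePoller
{
    // Run from Task Scheduler every X minutes; the connection string is a placeholder.
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Database=App;Integrated Security=true"))
        {
            conn.Open();
            // Claim one unprocessed request, if any, by stamping its ProcessedAt column.
            var cmd = new SqlCommand(
                @"UPDATE TOP (1) MyProcessQueue
                  SET ProcessedAt = GETUTCDATE()
                  OUTPUT inserted.Id
                  WHERE ProcessedAt IS NULL", conn);
            object id = cmd.ExecuteScalar();
            if (id != null)
            {
                // Run the task the user queued up.
            }
        }
    }
}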
Hope those suggestions help.
One option would be to set up a Windows service and get that to call your scheduled task.
In WinForms I've used timers, but I don't think that approach would work well in ASP.NET.
A New Task Scheduler Class Library for .NET
Note: Since this library was created, Microsoft has introduced a new task scheduler (Task Scheduler 2.0) for Windows Vista. This library is a wrapper for the Task Scheduler 1.0 interface, which is still available in Vista and is compatible with Windows XP, Windows Server 2003 and Windows 2000.
http://www.codeproject.com/KB/cs/tsnewlib.aspx

HTML5 Server-Sent Events prototyping - ambiguous error and repeated polling?

I'm trying to get to grips with Server-Sent Events as they fit my requirements perfectly and seem like they should be simple to implement, but I can't get past a vague error and what looks like the connection repeatedly being closed and re-opened. Everything I have tried is based on this and other tutorials.
The PHP is a single script:
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

$serverTime = time();
sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
?>
and the JavaScript looks like this (run on body load):
function init() {
    var source;
    if (!!window.EventSource) {
        source = new EventSource('events.php');
        source.addEventListener('message', function(e) {
            document.getElementById('output').innerHTML += e.data + '<br />';
        }, false);
        source.addEventListener('open', function(e) {
            document.getElementById('output').innerHTML += 'connection opened<br />';
        }, false);
        source.addEventListener('error', function(e) {
            document.getElementById('output').innerHTML += 'error<br />';
        }, false);
    }
    else {
        alert("Browser doesn't support Server-Sent Events");
    }
}
I have searched around a bit but can't find information on:
1) If Apache needs any special configuration to support server-sent events, and
2) How I can initiate a push from the server with this kind of setup (e.g. can I simply execute the PHP script from the CLI to give a push to the already-connected browser?)
If I run this JS in Chrome (16.0.912.77) it opens the connection, receives the time, then errors (with no useful information in the error object), then reconnects in 3 seconds and goes through the same process. In Firefox (10.0) I get the same behaviour.
EDIT 1: I thought the issue could be related to the server I was using, so I tested on a vanilla XAMPP install and the same error comes up. Should a basic server configuration be able to handle this without modification / extra configuration?
EDIT 2: The following is an example of output from the browser:
connection opened
server time: 01:47:20
error
connection opened
server time: 01:47:23
error
connection opened
server time: 01:47:26
error
Can anyone tell me where this is going wrong? The tutorials I have seen make it look like SSE is very straightforward. Also any answers to my two numbered questions above would be really helpful.
Thanks.
The problem is your PHP.
With the way your PHP script is written, only one message is sent per execution. That's how it works if you access the PHP file directly, and that's how it works if you access the file with an EventSource. So in order to make your PHP script send multiple messages, you need a loop.
<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

function sendMsg($id, $msg) {
    echo "id: $id" . PHP_EOL;
    echo "data: $msg" . PHP_EOL;
    echo PHP_EOL;
    ob_flush();
    flush();
}

while (true) {
    $serverTime = time();
    sendMsg($serverTime, 'server time: ' . date("h:i:s", time()));
    sleep(1);
}
?>
I have altered your code to include an infinite loop that waits 1 second after every message sent (following an example found here: Using server-sent events).
This type of loop is what I'm currently using and it eliminated the constant connection drop and reconnect every 3 seconds. However (and I've only tested this in chrome), the connections are now only kept alive for 30 seconds. I will be continuing to figure out why this is the case and I'll post a solution when I find one, but until then this should at least get you closer to your goal.
Hope that helps,
Edit:
In order to keep the connection open for ridiculously long times with PHP, you need to raise the max_execution_time (thanks to tomfumb for this). This can be accomplished in at least three ways:
If you can alter your php.ini, change the value of "max_execution_time". Note that this will allow all of your scripts to run for the time you specify, though.
In the script you wish to run for a long time, call ini_set('max_execution_time', $seconds), where $seconds is the time in seconds you wish your script to run for.
In the script you wish to run for a long time, call set_time_limit($n), where $n is the number of seconds that you wish your script to run.
Server-Sent Events are easy only when it comes to the JavaScript part. First of all, a lot of tutorials on SSE on the internet close their connections in the server part, be it PHP or Java examples. This is really astonishing, because what you get then is just a different way of implementing an "AJAX polling" system with a strictly defined payload structure (and some minor features like client retry values set by the server side). You can easily implement that with a few lines of jQuery; no need for SSE then.
According to the spec of SSE, I would say that the retry shouldn't be the normal way of implementing a client-side loop. For me, SSE is a one-way streaming method which relies on a server backend that does not close the connection after pushing the first data to the client.
In Java it's useful to use the Servlet 3 async spec in order to free the request thread immediately and do the processing/streaming in a different thread. This works so far, but I still don't like the 30-second connection lifetime for the EventSource request. Even when I push data every 5 seconds, the connection will be terminated after 30 seconds (Chrome, Firefox). Of course SSE will reconnect by default after 3 seconds, but I still don't think this is the way it should be.
One problem is that some Java MVC frameworks don't have the ability to keep the connection open after sending data, so you end up coding against the bare Servlet API. After 24 hours of coding prototypes in Java, I am more or less disappointed, because the gain over a traditional jQuery-AJAX loop is not THAT big. And the problem of polyfilling the SSE feature also exists.
The problem is not a server-side issue; this all happens on the client and is part of the spec (I know it sounds weird).
http://dev.w3.org/html5/eventsource/
"When a user agent is to reestablish the connection, the user agent must run the following steps. These steps are run asynchronously, not as part of a task. (The tasks that it queues, of course, are run like normal tasks and not asynchronously.)"
Queue a task to run the following steps:
If the readyState attribute is set to CLOSED, abort the task.
Set the readyState attribute to CONNECTING.
Fire a simple event named error at the EventSource object.
I can't see any need to have an error here, so I have modified your init function to filter out the error event fired whilst connecting.
function init() {
    var CONNECTING = 0;
    var source;
    if (!!window.EventSource) {
        source = new EventSource('events.php');
        source.addEventListener('message', function (e) {
            document.getElementById('output').innerHTML += e.data + '<br />';
        }, false);
        source.addEventListener('open', function (e) {
            document.getElementById('output').innerHTML += 'connection opened<br />';
        }, false);
        source.addEventListener('error', function (e) {
            if (source.readyState != CONNECTING) {
                document.getElementById('output').innerHTML += 'error<br />';
            }
        }, false);
    }
    else {
        alert("Browser doesn't support Server-Sent Events");
    }
}
There is no actual issue with the code that I can see. The answer selected as correct is, then, incorrect.
This sums up the behavior mentioned in the question (http://www.w3.org/TR/2009/WD-html5-20090212/comms.html):
"If such a resource (with the correct MIME type) completes loading (i.e. the entire HTTP response body is received or the connection itself closes), the user agent should request the event source resource again after a delay equal to the reconnection time of the event source. This doesn't apply for the error cases that are listed below."
The problem lies with the stream. I've successfully kept a single EventStream open before in Perl: just send the appropriate HTTP headers and start sending stream data, and never shut down the stream server-side. The issue is that most HTTP libraries seem to attempt to close the stream after it has been opened. This will cause the client to attempt to reconnect to the server, which is fully standard-compliant.
This means that it will appear that the problem is solved by running a while loop, for a couple of reasons:
A) The code will continue to send data, as if it were pushing out a large file
B) The code (php server) will never have the chance to attempt to close the connection
However, the problem here is obvious: to keep the stream alive, a constant stream of data must be sent. This results in wasteful utilization of resources, and negates any benefits the SSE stream is supposed to provide.
I'm not enough of a PHP guru to know, but I'd imagine that something in the PHP server, or later in the code, is prematurely closing the stream; I had to manipulate the stream at the socket level in Perl to keep it open, since HTTP::Response was closing the connection and causing the client browser to attempt to re-open it. In Mojolicious (another Perl web framework), this can be done by opening a Stream object and setting the timeout to zero, so that the stream never times out.
So the proper solution here is not to use a while loop; it is to call the appropriate PHP functions for opening, and keeping open, a PHP stream.
I was able to do it by implementing a custom event loop. It seems that this HTML5 feature is not ready at all and has compatibility issues even with the latest version of Google Chrome. Here it is, working on Firefox (I can't get the message sent correctly on Chrome):
var source;

function Body_Load(event) {
    loopEvent();
}

function loopEvent() {
    if (source == undefined) {
        source = new EventSource("event/message.php");
    }
    source.onmessage = function(event) {
        _e("out").value = event.data;
        loopEvent();
    }
}
P.S.: _e is a function that calls document.getElementById(id).
According to the spec, the 3-second reconnection is by design when the connection is closed. PHP with a loop should theoretically stop this, but then the PHP script will be running indefinitely and wasting resources. You should try to avoid using Apache and PHP for SSE because of this issue.
A standard HTTP response should close the connection once the response is sent. You can change this with the header "Connection: keep-alive", which should tell the browser that the connection is meant to stay open, although this can cause problems if you're using proxies.
node.js or something similar is a better engine to use for SSE than Apache/PHP, and since it's basically JavaScript, it's pretty easy to get to grips with.
As the name suggests, with Server-Sent Events the data should travel from server to client. If the client has to reconnect every three seconds to retrieve data from the server, then it is no different from other polling mechanisms. The purpose of SSE is to alert the client as soon as there is new data the client is unaware of. Since the server closes the connection even if the header is keep-alive, there is no way around running the PHP script in an infinite loop, but with a considerable thread sleep to limit the burden on the server. So far I don't see any other way out, and it's better than spamming the server every 3 seconds for new data.
I'm trying the same thing, with varying degrees of success.
I had the same problem with Firefox, running the same JS code as mentioned.
Using the Nginx server and some PHP that exited (i.e. no continual loop), I could get messages back to a request from Firefox only once the PHP had exited.
Running the PHP as a script in php.exe, all is good on the console: strings are printed when flushed. However, Nginx doesn't send the data until the PHP has completed. Adding extra \r\n\r\n and calling flush() or ob_flush() did not help.
There is no pushing of data, as the Wireshark logs show; just a delayed response packet to the GET.
I read that I need a "push" module for Nginx, which requires a rebuild from source.
So this is definitely an Nginx problem.
Using a socket in C, I was able to push data to Firefox as expected; the socket was kept open and no messages were missed. However, this has the disadvantage that I need to serve the page.html and the events/stream from the same socket, or Firefox will not connect due to cross-site URL problems. There are some ways around this in certain situations, but not for an iframe in a menu system. This approach did prove the point that SSE does work with Firefox and that there are pushed packets in the Wireshark log, where option 1 only had request/reply packets.
All this said, I still don't have a solution. I've tried removing the buffering on the PHP and Nginx sides, but still nothing arrives until the PHP finishes. Different header options, e.g. chunked, didn't help either.
I don't feel like writing a full-blown HTTP server in C, but this seems to be the only option that is working for me at the moment.
I'm about to try Apache, but most write-ups suggest that it is worse than Nginx at this job.