Cron not working properly in Continuum Environment

I configured a cron expression in the Continuum environment. It worked fine, but it has suddenly stopped working.
It is supposed to run at 9 AM EDT every day: 0 0 9 ? * *

Typically, any changes to the schedule are logged in continuum.log. If the scheduler has stopped working, Continuum may need to be restarted. If it still does not work after a restart, the configuration may not be stored correctly and may need to be reset.
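For reference, assuming this is a Quartz-style expression (which the ? in the day-of-month field suggests), the six fields of 0 0 9 ? * * break down as:

seconds  minutes  hours  day-of-month  month  day-of-week
0        0        9      ?             *      *

That is, it should fire at 09:00:00 every day in the scheduler's own time zone, so if the behaviour changed without the expression changing, the scheduler or the stored configuration is the more likely culprit.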

Related

Can I run an infinite loop in the background in ASP.NET Core on IIS?

At the end of Startup.cs I have a loop that runs every second, indefinitely; it just updates the DB every second. The server is deployed on Windows IIS (AWS EC2).
I want to know: after the first start of the server, will the loop keep running every second even without any traffic to the server? Or will it stop after some time if the server has no traffic, and start again (run Startup.cs again) when someone hits the server? For example, if we have no traffic for a month, will it keep running in the background for that month, or stop?
Update
Turns out the loop is removed after 5-10 minutes. Is there any way I can achieve this (run the loop for a month without any traffic)?
Update 2
The timeout is a setting in AWS EC2 (https://aws.amazon.com/blogs/aws/elb-idle-timeout-control), which I did not have access to. I would appreciate any suggestions.
Update 3
I was able to resolve this by separating the timer logic into a console app and running it inside EC2.
You should look at using "Hosted Services" to achieve what you want:
"In ASP.NET Core, background tasks can be implemented as hosted services."

Splunk 7.2.9.1 Universal Forwarder on SUSE Linux 12.4 not communicating and forwarding logs to indexer after a certain period of time

I have noticed that the Splunk 7.2.9.1 Universal Forwarder on SUSE Linux 12.4 stops communicating with the deployment server and stops forwarding logs to the indexer after a certain period of time. The "splunkd" process appears to be running while this issue persists.
I have to restart the Universal Forwarder for it to resume communicating with the deployment server and forwarding logs, but communication stops again after a certain period of time.
I cannot see any specific logs in splunkd.log while this issue occurs.
However, I noticed the message below in watchdog.log:
06-16-2020 11:51:09.055 +0200 ERROR Watchdog - No response received from IMonitoredThread=0x7f24365fdcd0 within 8000 ms. Looks like thread name='Shutdown' is busy !? Starting to trace with 8000 ms interval.
Can somebody help me understand what is causing this issue?
This appears to be a Known Issue. From the 7.2.9.1 release notes:
Universal Forwarders stop sending data repeatedly throughout the day
Workaround: In limits.conf, try changing file_tracking_db_threshold_mb
in the [inputproc] stanza to a lower value.
I did not find a version where this is not listed as a known problem.
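For reference, the setting lives in the [inputproc] stanza of limits.conf on the forwarder. A sketch of the workaround (the 100 MB value is only an illustration; pick something lower than the value currently in effect):

[inputproc]
file_tracking_db_threshold_mb = 100

The forwarder typically needs to be restarted for a limits.conf change to take effect.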

Redis server not starting - Forked Process did not respond in a timely manner

The Redis server, which was working fine, suddenly stopped with this error:
BeginForkOperation: system error caught. error code=0x00000000, message=Forked
Process did not respond in a timely manner.
I am not able to figure out why this is happening. Also, if I restart my machine and then start redis-server, it works fine again.
Please help me in this regard.
You should try updating your Redis version; the MSOpenTech team fixed a lot of bugs in recent months, and this one looks related. At the very least, the error message is identical: https://github.com/MSOpenTech/redis/issues/144

How to tell supervisor to restart processes when app code changed?

I am new to Tornado and supervisor. I have deployed a Tornado app on a Debian server, and it is now running fine under supervisor/nginx. After that, I made a small change to the app's template file, but it does not take effect, apparently because the Tornado processes need to be restarted. But I don't know how to do so. I tried different things like
service supervisor restart
and also, from the supervisorctl command line, restart, reload, update, etc.
But the old processes are still running and the change in code is still not applied. So I am wondering how to instruct supervisor to restart the app processes, and ideally how to make supervisor sensitive to code changes by adding some commands to supervisor.conf.
OK, I figured it out. Here is the answer:
supervisor> restart all
and check whether it really restarted:
supervisor> status
tornadoes:tornado-8000 RUNNING pid 17697, uptime 0:00:20
tornadoes:tornado-8001 RUNNING pid 17698, uptime 0:00:20
tornadoes:tornado-8002 RUNNING pid 17707, uptime 0:00:19
tornadoes:tornado-8003 RUNNING pid 17712, uptime 0:00:18
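For the supervisor.conf side, a typical layout for that process group looks roughly like the following (paths and the entry point are assumptions based on the status output above). Note that supervisor does not watch source files itself, so a restart is still needed after each code change:

; repeat a [program:...] section for tornado-8001, tornado-8002 and tornado-8003 with their own ports
[program:tornado-8000]
command=python /path/to/app.py --port=8000
directory=/path/to/app
autostart=true
autorestart=true

[group:tornadoes]
programs=tornado-8000,tornado-8001,tornado-8002,tornado-8003

After editing supervisor.conf, run supervisorctl reread followed by supervisorctl update to pick up the new configuration, then supervisorctl restart tornadoes:* to restart just this group.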

RUN@cloud consistently throws me out during a heavy operation

I'm using a large app instance to run a basic Java web application (GWT + Spring). There's an expensive operation within my application (a report) which takes a long time to execute.
I've tried running it with the CloudBees SDK on my local machine, with settings similar to those it would have in the cloud, and it seems to work just fine. It runs in about 3-4 minutes.
On the cloud it seems to take longer. The problem isn't the fact that it takes long; what happens is that CloudBees terminates the session after 5 minutes and gives me an error in my browser saying 'Unable to connect to server. Please contact your administrator'. A report which doesn't take as long runs just fine. My application has a session timeout of 30 minutes, so that isn't the problem either.
What could possibly be going wrong? Is it something to do with CloudBees?
This may be due to proxy buffering of your request through the routing layer (revproxy), so it most likely isn't a session timeout but the HTTP connection getting cut.
You can set proxyBuffering=false via the bees CLI command (e.g. when you deploy the app); this will allow longer-running connections to work.
Ideally, however, you could change the app slightly to return a token to the browser, which the browser can then poll for completion status. Even with a connection that lasts that long, going over the internet may provide a bad experience compared with running locally.