Disabling EJB Timer (GlassFish 3.1, Java EE 6) - glassfish

We have a VIP (BIG-IP) that distributes web service requests to two nodes, each running its own GlassFish 3.1 server with our services deployed. So it is not a true GlassFish cluster.
My problem is that I have a lot of Scheduler services like the one listed below:
@Schedule(minute = "55", hour = "23", dayOfWeek = "Wed")
public void runScheduledMedicaidPaymentProcess() {
    // ...
}
Is there a way for me to disable the EJB Timer Service on one node so that the above method is not run on both nodes when it is 11:55 pm on Wednesday?
I did see the use of the _Default pool for clusters mentioned in the GlassFish Server documentation, but as I explained, ours is not a true cluster. Please let me know if there is any way to stop the timer on one node so that the job is not run twice.

If you're not using a cluster then you really just have two independent instances. You're going to have to create some sort of semaphore that each method checks (a db column might be a good solution). The method would return whether or not it's okay to run the timer. Each of your instances would call the method but only one instance would end up running the timer.
Or...
Set up a cluster.
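For the first option (the db-backed check), here is a minimal sketch of the idea, assuming a hypothetical single-row SCHEDULER_CONFIG table whose TIMER_HOST column names the node allowed to run timers; the table, column, and class names are illustrative only.
import java.net.InetAddress;
import javax.ejb.Schedule;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class MedicaidPaymentScheduler {

    @PersistenceContext
    private EntityManager em;

    @Schedule(minute = "55", hour = "23", dayOfWeek = "Wed")
    public void runScheduledMedicaidPaymentProcess() {
        if (!isTimerOwner()) {
            return; // the other node owns the schedule, skip this run
        }
        // ... actual payment processing ...
    }

    // Compares this machine's host name with the one stored in the
    // (hypothetical) SCHEDULER_CONFIG table.
    private boolean isTimerOwner() {
        try {
            String owner = (String) em.createNativeQuery(
                    "SELECT TIMER_HOST FROM SCHEDULER_CONFIG").getSingleResult();
            return owner.equalsIgnoreCase(InetAddress.getLocalHost().getHostName());
        } catch (Exception e) {
            return false; // when in doubt, do not run on this node
        }
    }
}
Both instances fire the timer at 11:55 pm, but only the node named in the table actually does the work.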


How does the distributed executor service in Redisson work with regard to scoping / closures?

If I push a Runnable to a Redisson distributed executor service, what rules am I required to abide by?
Surely I cannot have free rein; I do not see how that would be possible. Yet it is not mentioned in the docs at all, nor are any rules apparently enforced by the API, such as a bound like R extends Serializable or similar.
If I pass this runnable:
Runnable task = () -> {
    // What can I access here, and have it be recreated in whatever server instance picks it up later for execution?
    // newlyCreatedInstanceCreatedJustBeforeThisRunnableWasCreated.isAccessible(); // ?
    // newlyComplexInstanceSuchAsADatabaseDriverThatIsAccessedHere.isAccessible(); // ?
    // transactionalHibernateEntityContainingStaticReferencesToComplexObjects...
    // I think you get the point.
    // Does Redisson serialize everything within this scope?
    // When it is recreated later, surely I cannot have access to those exact objects, unless they run on the same server, right?
    // If the server goes down and comes back up, or another server executes this runnable, then what happens?
    // What rules do we have to abide by here?
};
Also, what rules do we have to abide by when pushing something to an RQueue, an RBlockingDeque, or Redisson live objects?
It is not clear from the docs.
Also, it would be great if a link to a single-page documentation site could be provided. The one here requires a lot of clicking and navigation:
https://github.com/redisson/redisson/wiki/Table-of-Content
https://github.com/redisson/redisson/wiki/9.-distributed-services#933-distributed-executor-service-tasks
You can get access to the RedissonClient instance and the task id from inside a task. The full state of the task object will be serialized.
A task retry setting is applied to each task: if a task isn't executed within 5 minutes of its start, it will be requeued.
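For illustration, here is a minimal sketch of a submitted task along the lines of the distributed-services wiki page linked above; the class, field, and key names are my own, and it assumes the @RInject injection of RedissonClient that the documentation describes.
import java.io.Serializable;
import java.util.concurrent.Callable;

import org.redisson.api.RedissonClient;
import org.redisson.api.annotation.RInject;

// Plain instance fields (here: 'counterName') are serialized with the task and
// recreated on whichever node eventually executes it. Heavy, non-serializable
// resources (DB connections, Hibernate sessions, ...) should be obtained inside
// call(), on the executing node, rather than captured from the submitting scope.
public class CountTask implements Callable<Long>, Serializable {

    @RInject
    private RedissonClient redisson; // injected by Redisson on the executing node

    private final String counterName;

    public CountTask(String counterName) {
        this.counterName = counterName;
    }

    @Override
    public Long call() {
        return redisson.getAtomicLong(counterName).incrementAndGet();
    }
}
Submitting it would then look something like redisson.getExecutorService("myExecutor").submit(new CountTask("counter")), where "myExecutor" is whatever executor name your worker nodes register under.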
I agree that the documentation is lacking some "under the hood" explanations.
I was able to execute database reads and inserts through the Callable/Runnable that was submitted to the remote ExecutorService.
I configured a single Redis instance on a remote VM, with the database and the app running locally on my laptop.
The tasks were executed without any errors.

PCF - Pivotal Apps Manager - Routing the same URL to different versions of the same application

We have two versions of an application deployed to PCF.
Can we have the same route/URL for both versions of the application and define the percentage of traffic each has to handle?
example.com/myapp -> Application instance 1 -> **Handle 90% of requests**
example.com/myapp -> Application instance 2 -> **Handle 10% of requests**
We need this for a pilot kind of scenario, to avoid one big-bang deployment and any potential downtime.
I have checked how routing works in PCF here, but could not find a solution for what we want.
https://docs.cloudfoundry.org/devguide/deploy-apps/routes-domains.html#map-route
The simplest way to do this (and avoid implementing your own load balancing) is as follows:
1) Start 9 instances of Application 1 for every instance of Application 2
2) Map the same route to both applications (you can do this with cf map-route or use the Apps Manager Web UI)
Now 10% of requests will be serviced by Application 2. As you observe system behavior, you can adjust instance counts until you have completed the transition to Application 2, or rolled back to Application 1.
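Assuming the two deployments are pushed as, say, myapp-v1 and myapp-v2 (names are illustrative) and example.com is a shared domain in your org, the setup could look roughly like this with the cf CLI:
cf scale myapp-v1 -i 9
cf scale myapp-v2 -i 1
cf map-route myapp-v1 example.com --path myapp
cf map-route myapp-v2 example.com --path myapp
The router round-robins requests across all instances mapped to the route, so with a 9:1 instance split roughly one request in ten reaches the pilot version.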

Best way to run scheduled tasks in ASP.NET CORE [duplicate]

Today we have built a console application for running the scheduled tasks for our ASP.NET website, but I think this approach is a bit error prone and difficult to maintain. How do you execute your scheduled tasks (in a Windows/IIS/ASP.NET environment)?
Update:
Examples of tasks:
Sending email from an email-queue in the database
Removing outdated objects from the database
Retrieving stats from Google AdWords and fill a table in the database.
This technique by Jeff Atwood for Stack Overflow is the simplest method I've come across. It relies on the "cache item removed" callback mechanism built into ASP.NET's cache system.
Update: Stack Overflow has outgrown this method. It only works while the website is running, but it's a very simple technique that is useful for many people.
Also check out Quartz.NET
All of my tasks (which need to be scheduled) for a website are kept within the website and called from a special page. I then wrote a simple Windows service which calls this page every so often. Once the page runs it returns a value. If I know there is more work to be done, I run the page again, right away, otherwise I run it in a little while. This has worked really well for me and keeps all my task logic with the web code. Before writing the simple Windows service, I used Windows scheduler to call the page every x minutes.
Another convenient way to run this is to use a monitoring service like Pingdom. Point their http check to the page which runs your service code. Have the page return results which then can be used to trigger Pingdom to send alert messages when something isn't right.
Create a custom Windows Service.
I had some mission-critical tasks set up as scheduled console apps and found them difficult to maintain. I created a Windows Service with a 'heartbeat' that would check a schedule in my DB every couple of minutes. It's worked out really well.
Having said that, I still use scheduled console apps for most of my non-critical maintenance tasks. If it ain't broke, don't fix it.
I've found this to be easy for all involved:
Create a webservice method such as DoSuchAndSuchProcess
Create a console app that calls this webmethod.
Schedule the console app in the task scheduler.
Using this methodology all of the business logic is contained in your web app, but you have the reliability of the Windows Task Scheduler, or any other commercial task scheduler, to kick it off and record any return information such as an execution report. Using a web service instead of posting to a page has a bit of an advantage because it's easier to get return data from a web service.
Why reinvent the wheel? Use the Thread and Timer classes.
// Keep the timer in a static field so it is not garbage collected
// once ThreadFunc returns.
private static System.Timers.Timer t;

protected void Application_Start()
{
    Thread thread = new Thread(new ThreadStart(ThreadFunc));
    thread.IsBackground = true;
    thread.Name = "ThreadFunc";
    thread.Start();
}

protected static void ThreadFunc()
{
    t = new System.Timers.Timer();
    t.Elapsed += new System.Timers.ElapsedEventHandler(TimerWorker);
    t.Interval = 10000; // milliseconds
    t.Enabled = true;
    t.AutoReset = true;
    t.Start();
}

protected static void TimerWorker(object sender, System.Timers.ElapsedEventArgs e)
{
    // do the scheduled work here
}
Use Windows Scheduler to run a web page.
To prevent malicious users or search engine spiders from running it, when you set up the scheduled task, simply call the web page with a query string, i.e.: mypage.aspx?from=scheduledtask
Then in the page load, simply use a condition:
if (Request.QueryString["from"] == "scheduledtask")
{
    // execute the task
}
This way no search engine spider or malicious user will be able to execute your scheduled task.
This library works like a charm
http://www.codeproject.com/KB/cs/tsnewlib.aspx
It allows you to manage Windows scheduled tasks directly through your .NET code.
Additionally, if your application uses SQL Server you can use the SQL Agent to schedule your tasks. This is where we commonly put recurring code that is data driven (email reminders, scheduled maintenance, purges, etc.). A great feature that is built in with the SQL Agent is failure notification, which can alert you if a critical task fails.
I'm not sure what kind of scheduled tasks you mean. If you mean stuff like "every hour, refresh foo.xml" type tasks, then use the Windows Scheduled Tasks system. (The "at" command, or via the controller.) Have it either run a console app or request a special page that kicks off the process.
Edit: I should add, this is an OK way to get your IIS app running at scheduled points too. So suppose you want to check your DB every 30 minutes and email reminders to users about some data, you can use scheduled tasks to request this page and hence get IIS processing things.
If your needs are more complex, you might consider creating a Windows Service and having it run a loop to do whatever processing you need. This also has the benefit of separating out the code for scaling or management purposes. On the downside, you need to deal with Windows services.
If you own the server you should use the windows task scheduler. Use AT /? from the command line to see the options.
Otherwise, from a web based environment, you might have to do something nasty like set up a different machine to make requests to a certain page on a timed interval.
I've used Abidar successfully in an ASP.NET project (here's some background information).
The only problem with this method is that the tasks won't run if the ASP.NET web application is unloaded from memory (i.e. due to low usage). One thing I tried is creating a task to hit the web application every 5 minutes, keeping it alive, but this didn't seem to work reliably, so now I'm using the Windows scheduler and a basic console application to do this instead.
The ideal solution is creating a Windows service, though this might not be possible (i.e. if you're using a shared hosting environment). It also makes things a little easier from a maintenance perspective to keep things within the web application.
Here's another way:
1) Create a "heartbeat" web script that is responsible for launching the tasks if they are DUE or overdue to be launched.
2) Create a scheduled process somewhere (preferably on the same web server) that hits the web script and forces it to run at a regular interval (e.g. a Windows scheduled task that quietly launches the heartbeat script using IE or what have you).
The fact that the task code is contained within a web script is purely for the sake of keeping the code within the web application code-base (the assumption is that both are dependent on each other), which would be easier for web developers to manage.
The alternative approach is to create an executable server script / program that does all the schedule work itself and run the executable itself as a scheduled task. This allows for fundamental decoupling between the web application and the scheduled task. Hence, if you need your scheduled tasks to run even in the event that the web app / database might be down or inaccessible, you should go with this approach.
You can easily create a Windows Service that runs code on an interval using the 'ThreadPool.RegisterWaitForSingleObject' method. It is really slick and quite easy to get set up. This method is a more streamlined approach than using any of the Timers in the Framework.
Have a look at the link below for more information:
Running a Periodic Process in .NET using a Windows Service:
http://allen-conway-dotnet.blogspot.com/2009/12/running-periodic-process-in-net-using.html
We use console applications also. If you use logging tools like Log4net you can properly monitor their execution. Also, I'm not sure how they are more difficult to maintain than a web page, given you may be sharing some of the same code libraries between the two if it is designed properly.
If you are against having those tasks run on a timed basis, you could have a web page in the administrative section of your website that acts as a queue. The user puts in a request to run the task, which in turn inserts a blank datestamp record into the MyProcessQueue table, and your scheduled task checks every X minutes for a new record in MyProcessQueue. That way, it only runs when the customer wants it to run.
Hope those suggestions help.
One option would be to set up a Windows service and get that to call your scheduled task.
In WinForms I've used Timers, but I don't think this would work well in ASP.NET.
A New Task Scheduler Class Library for .NET
Note: Since this library was created, Microsoft has introduced a new task scheduler (Task Scheduler 2.0) for Windows Vista. This library is a wrapper for the Task Scheduler 1.0 interface, which is still available in Vista and is compatible with Windows XP, Windows Server 2003 and Windows 2000.
http://www.codeproject.com/KB/cs/tsnewlib.aspx

30 sec periodic task to poll external web service and cache data

I'm after some advice on polling an external web service every 30 secs from a Domino server side action.
A quick bit of background...
We track the location of cars through the TomTom API. We now have a requirement to show this in our web app, overlaid onto a map (Google, Bing, etc.) and mashed up with other lat/long data from our application. Think of it as dispatching calls to taxis, where we want to assign those calls to the taxis (it's not actually taxis and calls, but it is a similar process). We refresh the dispatch controllers' screens quite aggressively so they can see the status of all the objects and assign to the nearest car. If we trigger the data pull from the refresh of the users' screens, we get into some tricky control logic server side, or else we will hit the maximum allowable requests per minute to the TomTom API.
Originally I was going to schedule an agent to poll the web service and write to a cached object in our app, with the refreshing dispatch controllers' screens pulling the data from our cache. Great, except the user requirement is that our cache must be updated every 30 seconds. I can create a program document that runs every 1 minute, but that is still not aggressive enough.
So we are currently left with two options: our .NET guy creates a service that polls TomTom every 30 seconds and we retrieve from his service, or I figure out a way to do it in Domino. It would be nice to do it in the Domino database, and not in a standalone Java app or .NET service, to keep as much of the logic as possible in one system (Domino).
We use backing beans heavily in our system. I hope to be testing this later today, but does this seem like a sensible route to go down?
Spawning threads in a JSF managed bean for scheduled tasks using a timer
...or are there limitations I am not aware of? Has anyone tackled this before in Domino, or does anyone have any comments?
Thanks in advance,
Nick
Check out DOTS (Domino OSGi Tasklet Service): http://www.openntf.org/internal/home.nsf/project.xsp?action=openDocument&name=OSGI%20Tasklet%20Service%20for%20IBM%20Lotus%20Domino
It allows you to define background Java tasks on a Domino server that have all the advantages of agents (can be scheduled or triggered) with none of the performance or maintenance issues.
If you cache the data in a bean (application or session scoped), keep a date field that holds the last-refreshed time. When the data is requested, check the last cached date against the current time; if the difference is 30 seconds or more, refresh the data.
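A rough sketch of that idea in an application-scoped bean; the class and method names are mine, and callTomTomService() is a stand-in for whatever web service call you end up making.
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class TomTomCache implements Serializable {

    private static final long REFRESH_INTERVAL_MS = 30 * 1000;

    private long lastRefreshed = 0;
    private Map<String, Object> positions = new HashMap<String, Object>();

    // Called from the dispatch screen; only hits TomTom when the cached
    // data is at least 30 seconds old.
    public synchronized Map<String, Object> getPositions() {
        if (System.currentTimeMillis() - lastRefreshed >= REFRESH_INTERVAL_MS) {
            positions = callTomTomService();
            lastRefreshed = System.currentTimeMillis();
        }
        return positions;
    }

    private Map<String, Object> callTomTomService() {
        // ... invoke the TomTom API and map the results to your own objects ...
        return new HashMap<String, Object>();
    }
}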
A way of doing it would be to write a managed bean which is created in the application scope (aka there can only be one). In this managed bean you take care of the 30-second polling of the web service with a good old Java web service implementation and a Java thread which you start at the creation of your managed bean, something like:
import java.util.HashMap;
import java.util.Map;

public class ServicePoller {

    private static ServicePollThread myThread = null;

    public ServicePoller() {
        if (myThread == null) {
            myThread = new ServicePollThread();
            new Thread(myThread).start();
        }
    }
}

class ServicePollThread implements Runnable {

    private final Map<String, Object> yourCache = new HashMap<String, Object>();
    private volatile boolean running = true;

    public void run() {
        while (running) {
            doPoll(); // call the web service and update yourCache
            try {
                Thread.sleep(30000); // 30 seconds
            } catch (InterruptedException e) {
                running = false;
            }
        }
    }

    // ...
}
This managed bean will then poll the web service every 30 seconds and save its findings in a hash map or some other managed-bean fields. This way you don't need to run an agent or anything like that, and the dispatch screen simply retrieves its data from the cache.
Another option would be to write a servlet (that would be possible with the Extension Library, but I can't find the reference right now) which does the threading and the reading of the service for you. Then in your database you should be able to read the servlet's cache and use it wherever you need.
As Tim said, DOTS; or, as jjtbsomhorst said, a thread or an Eclipse job.
I've created a video describing DOTS: http://www.youtube.com/watch?v=CRuGeKkddVI&list=UUtMIOCuOQtR4w5xoTT4-uDw&index=4&feature=plcp
Next Monday I'll publish a sample how to do threads and Eclipse jobs. Here is a preview video: http://www.youtube.com/watch?v=uYgCfp1Bw8Q&list=UUtMIOCuOQtR4w5xoTT4-uDw&index=1&feature=plcp

How can I reject a Windows "Service Stop" request in ATL 7?

I have a Windows service built upon ATL 7's CAtlServiceModuleT class. This service serves up COM objects that are used by various applications on the system, and these other applications naturally start getting errors if the service is stopped while they are still running.
I know that ATL DLLs solve this problem by returning S_OK in DllCanUnloadNow() if CComModule's GetLockCount() returns 0. That is, it checks to make sure no one is currently using any COM objects served up by the DLL. I want equivalent functionality in the service.
Here is what I've done in my override of CAtlServiceModuleT::OnStop():
void CMyServiceModule::OnStop()
{
    if (GetLockCount() != 0) {
        return;
    }
    BaseClass::OnStop();
}
Now, when the user attempts to Stop the service from the Services panel, they are presented with an error message:
Windows could not stop the XYZ service on Local Computer.
The service did not return an error. This could be an internal Windows error or an internal service error.
If the problem persists, contact your system administrator.
The Stop request is indeed refused, but it appears to put the service in a bad state. A second Stop request results in this error message:
Windows could not stop the XYZ service on Local Computer.
Error 1061: The service cannot accept control messages at this time.
Interestingly, the service does actually stop this time (although I'd rather it not, since there are still outstanding COM references).
I have two questions:
Is it considered bad practice for a service to refuse to stop when asked?
Is there a polite way to signify that the Stop request is being refused; one that doesn't put the Service into a bad state?
You can't do this. Once the SCM sends a SERVICE_CONTROL_STOP to your service, you have to stop.
If your other apps are also services, you can make them dependencies within the SCM. Of course, if the apps using this service are just regular applications, that approach can't be used.
When ATL's lock count increments to 1, call SetServiceStatus() with the SERVICE_ACCEPT_STOP flag omitted in the SERVICE_STATUS::dwControlsAccepted field. Then you will not receive any SERVICE_CONTROL_STOP requests at all. Any attempt to stop the service will fail immediately. When ATL's lock count falls back to 0, call SetServiceStatus() again with the SERVICE_ACCEPT_STOP flag specified.
I just had to do this in 2 (older) ATL-based services, and it works well. Granted, I was unable to figure out the best way to override Lock() and Unlock() directly, so I just put a small loop inside my service that checks GetLockCount() at frequent intervals and calls SetServiceStatus() when needed.
In your constructor, update m_status.dwControlsAccepted to remove SERVICE_ACCEPT_STOP. For instance:
CMyServiceModule::CMyServiceModule()
    : ATL::CAtlServiceModuleT<CMyServiceModule, IDS_SERVICENAME>()
{
    // Tell the SCM up front that this service does not accept stop requests.
    m_status.dwControlsAccepted &= ~SERVICE_ACCEPT_STOP;
}