Is there an automatic batch job triggering facility in Linux like the JES queue for IBM mainframes?

Is there a way to automatically trigger a job (program) on Linux by placing a file or script in a certain place like the JES queue for IBM mainframes?
I've seen daemons that poll a directory and cron-based setups, but I have never seen a system that runs scripts as soon as you place them in a certain place on the file system.
I'm looking for something that would act like this:
>cat script.ksh
#!/bin/ksh
echo 'hello'
>cp script.ksh /job
hello
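For what it's worth, one way to approximate this hot-directory behaviour on Linux is with inotify. The sketch below is hypothetical, using inotifywait from the inotify-tools package; the /job directory and the runner loop are assumptions rather than any standard facility:
#!/bin/bash
# Hypothetical watcher: execute any script dropped into /job.
inotifywait -m -e close_write -e moved_to --format '%w%f' /job |
while read -r job; do
    chmod +x "$job"   # make the dropped script executable
    "$job" &          # run it in the background, like a submitted job
done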

Related

How to attach multiple workers to a queue in RabbitMQ

I am using an exchange-based pattern in RabbitMQ.
Producer --> Exchange --> Queues --> Consumer1
How do I run multiple consumers (C1, C2, C3, and so on) for load balancing and scalability of the consumers?
Is it OK to run ./worker.js two or three times, depending on usage?
Yes, it should be OK to run your worker multiple times: that runs multiple instances of your worker listening to the queue, which achieves what you want. Please refer to this tutorial from RabbitMQ for more information; specifically, see the section on round-robin dispatching.
To quote a few details:
One of the advantages of using a Task Queue is the ability to easily parallelise work. If we are building up a backlog of work, we can just add more workers and that way, scale easily.
You need three consoles open. Two will run the worker.js script. These consoles will be our two consumers - C1 and C2.
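A minimal shell-level sketch of that setup, assuming worker.js is the script from the question:
# Start two instances of the same consumer script; RabbitMQ will
# round-robin deliveries between them (these become C1 and C2).
node worker.js &   # C1
node worker.js &   # C2
wait               # keep the shell session attached to both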
Just to add to @AJS's answer: you may want to make use of a process monitor/manager like Supervisord to manage your long-running program C1 and, most importantly, to run multiple instances of it (C1, C2, C3, and so on). Install supervisor in your environment (local, VPS, Docker, etc.), then add a configuration file like the example below to make it run, monitor, and restart multiple worker.js processes as needed.
So, create a supervisor configuration file for your program, e.g. my_awesome_worker.conf, and place it in the /etc/supervisor/conf.d directory:
[program:wise_worker]
process_name=%(program_name)s_%(process_num)02d
command=node /my_app_location/worker.js
autostart=true
autorestart=true
numprocs=4
stderr_logfile=/var/log/myapp.err.log
stdout_logfile=/var/log/myapp.out.log
user=myuser
To apply the changes, run:
$ sudo supervisorctl reread
$ sudo supervisorctl update
Note that the process_name and numprocs settings are responsible for running four worker.js processes (keep numprocs less than or equal to your number of CPUs). numprocs, in combination with the process_name expression %(program_name)s_%(process_num)02d, creates four processes named wise_worker_00, wise_worker_01, wise_worker_02, and wise_worker_03.
Verify that supervisor itself is running using
$ sudo systemctl status supervisor
or
$ sudo service supervisor status
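To check the four worker processes themselves rather than the supervisor daemon, you can use supervisorctl status; the output below is only illustrative (PIDs and uptimes will differ):
$ sudo supervisorctl status
wise_worker:wise_worker_00   RUNNING   pid 1201, uptime 0:05:00
wise_worker:wise_worker_01   RUNNING   pid 1202, uptime 0:05:00
wise_worker:wise_worker_02   RUNNING   pid 1203, uptime 0:05:00
wise_worker:wise_worker_03   RUNNING   pid 1204, uptime 0:05:00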

How can I tell if there are hung threads in WebSphere Application Server

I'm using IBM Workload Scheduler (TWS), and when the product does not behave as expected or does not reply in a timely fashion, I am under the impression that a thread could be hanging or blocked somewhere.
Is there a way to tell if there is a blocked thread?
The first step is to check whether the SystemOut.log file of WebSphere Application Server (located in WAS_profile_path/logs/server1/SystemOut.log, or WAS_profile_path\logs\server1\SystemOut.log on Windows, in the master domain manager) contains any evidence that one or more threads are hanging. To do this, you can run the following command in a UNIX shell:
cat WAS_profile_path/logs/server1/SystemOut*.log | grep hung
If this command returns something like:
root@MASTER:/opt/IBM/TWA/WAS/TWSProfile/logs/server1# cat SystemOut*.log | grep hung
[6/20/17 5:45:33:988 CEST] 000000b9 ThreadMonitor W WSVR0605W: Thread "WorkManager.ResourceAdvisorWorkManager : 0" (0000009e) has been active for 697451 milliseconds and may be hung. There is/are 1 thread(s) in total in the server that may be hung.
this might mean that a WebSphere thread is hung.
This may or may not be true: sometimes a thread legitimately performs a lot of work and exceeds the configured time limit (the default is 10 minutes).
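A quick follow-up check from the same log, sketched below: WebSphere also logs a WSVR0606W message when a thread previously reported as hung completes, so comparing the two counts hints at whether the threads were merely slow rather than truly stuck (treat the exact commands as a sketch):
cat WAS_profile_path/logs/server1/SystemOut*.log | grep -c WSVR0605W
cat WAS_profile_path/logs/server1/SystemOut*.log | grep -c WSVR0606W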
If you suspect that you are experiencing a real hung thread, consider taking a look at the following articles, which provide detailed instructions for collecting the data necessary to diagnose and resolve the issue:
WebSphere MustGather procedure on Linux
WebSphere MustGather procedure on Windows
A similar document also exists for the AIX platform.
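As a rough sketch of the first data-collection step those MustGather documents describe on Linux, you can ask the IBM JVM for a thread dump; the PID is a placeholder:
# Send SIGQUIT to the application server JVM: with the IBM JDK this
# writes a javacore*.txt thread dump into the profile directory
# instead of terminating the process.
kill -3 <server_pid>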

Running a service in the background without getting stuck

Currently I am having an issue when trying to run a process/script in the background (the master starts it on the minion).
The script is something like this:
#!/bin/bash
nohup ping 8.8.8.8 >/dev/null &
And I call it from the master with:
Process-Name:
service.running:
- name: Script-Name
- enable: True
For some reason it gets stuck on the master. I've read a little about this issue (it has apparently happened before) and tried the suggested solutions, but nothing involving the service state seems to work.
Is there any way to work around this?
In short, you should first configure your script as a system daemon (a SysV init.d script, a systemd unit, or similar, depending on the OS).
Details
The service.running function requires a properly configured system service (a daemon).
For example, on RHEL-based Linux, if you don't see your script name in the output of one of these commands, you should first configure it as a proper service (which is a separate topic):
# systemd
systemctl list-units | grep your_service_name
# SysV init.d
chkconfig --list | grep your_service_name
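If you go the systemd route, a minimal hypothetical unit might look like the following; the unit name script-name.service and the ExecStart line are assumptions for illustration, and systemd keeps the process in the background itself, so the nohup/& wrapper from the original script is no longer needed:
# /etc/systemd/system/script-name.service (hypothetical name and path)
[Unit]
Description=Example background ping job

[Service]
ExecStart=/usr/bin/ping 8.8.8.8
Restart=always

[Install]
WantedBy=multi-user.target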
And because you want to start it in the background, the cmd.run function is not the right tool either:
It will only report a successful start of the script, without waiting for its completion or results.
It will also start a new instance of your script every time.
However, if all you want is to "fire and forget", use cmd.run; a sketch follows.
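A hypothetical fire-and-forget state built on cmd.run, reusing the IDs from the question (the bg option detaches the command; whether it is available depends on your Salt version):
# Hypothetical state: Script-Name stands for the script path.
Process-Name:
  cmd.run:
    - name: Script-Name
    - bg: True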

How to tell supervisor to restart processes when app code has changed?

I am new to Tornado and supervisor. I have deployed a Tornado app on a Debian server and it is now running fine under supervisor/nginx. After that, I made a small change to the app's template file, but it does not take effect, apparently because the Tornado processes need to be restarted. But I don't know how to do so. I tried different things like
service supervisor restart
and also, on the supervisorctl command line, I tried restart, reload, update, etc.
But the old processes are still running and the code change is still not applied. So I'm wondering how to instruct supervisor to restart the app processes, and ideally how to make supervisor sensitive to code changes by adding some commands to supervisor.conf.
OK, I figured it out. Here is the answer:
supervisor> restart all
and check whether really restarted:
supervisor> status
tornadoes:tornado-8000 RUNNING pid 17697, uptime 0:00:20
tornadoes:tornado-8001 RUNNING pid 17698, uptime 0:00:20
tornadoes:tornado-8002 RUNNING pid 17707, uptime 0:00:19
tornadoes:tornado-8003 RUNNING pid 17712, uptime 0:00:18
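As for making supervisor sensitive to code changes: supervisor has no built-in file watcher, so something external has to invoke supervisorctl when files change. A hypothetical sketch using inotifywait, with the app path as a placeholder:
# Restart all supervised processes whenever a file under the app
# directory is modified. Requires the inotify-tools package.
inotifywait -m -r -e modify /path/to/app |
while read -r _; do
    sudo supervisorctl restart all
done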

Find out the return code of the privileged helper run through SMJobSubmit

Is there a way to know the return code or process ID of the process that gets executed when a privileged helper tool is installed as a launch daemon and launched via SMJobSubmit()?
I have an application which, in order to execute some tasks in a privileged manner, uses the SMJobSubmit API as mentioned here.
Now in order to know whether the tasks succeeded or not, I will have to do one of the following.
The best option is to get the return code of the executable that ran.
Another option would be if I could create a pipe between my application and the launchd.
If the above two are not possible, I will have to resort to some hack like writing a file to the /tmp location and reading it from my app.
I guess SMJobSubmit internally submits the executable, with a launch daemon dictionary, to launchd, which is then responsible for its execution. So is there a way I could query launchd to find out the return code of the executable run with the label "mylabel"?
There is no way to do this directly.
SMJobSubmit is a simple wrapper around a complicated task. It also returns synchronously despite launching a task asynchronously. So, while it can give you an error if it fails to submit the job, if it successfully submits a job that fails to run, there is no way to find that out.
So, you will have to explicitly write some code to communicate from your helper to your app, to report that it's up and running.
If you've already built some communication mechanism (signals, files, Unix or TCP sockets, JSON-RPC over HTTP, whatever), just use that.
If you're designing something from scratch, XPC may be the best answer. You can't use XPC to launch your helper (since it's privileged), but you can manually create a connection by registering a Mach service and calling xpc_connection_create_mach_service.
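As an aside, for manual debugging from a shell (not something your app should rely on programmatically), launchctl can show what launchd recorded about a submitted job, including its last exit status:
# Prints the recorded state for the job with the given label,
# including its PID while it runs and LastExitStatus afterwards.
launchctl list mylabel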