Which process runs Webmin Scheduled Functions?

Exactly what wakes up and runs the entries in Webmin's Scheduled Functions? It doesn't seem to be crond. Is it miniserv.pl?

Looking at the miniserv.pl file tells me that, since miniserv is always running, it effectively manages its own cron jobs:
# Initially read webmin cron functions and last execution times
&read_webmin_crons();
Search through the file for the webmin_crons string and you will find all the relevant code.
As far as I'm aware, we don't use the system crond to run internal scheduled functions.
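For example, you can confirm this on your own install with a quick grep. A minimal sketch, assuming a typical install path (commonly /usr/libexec/webmin or /usr/share/webmin; adjust to wherever your miniserv.pl lives):
# Path below is an assumption -- point it at your own miniserv.pl
grep -n 'webmin_cron' /usr/libexec/webmin/miniserv.pl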

Related

Starting something at a certain time and then stopping it

I run a Garry's Mod server and I want to stop my server at 7 AM and then start it again. To safely close the server I have to type quit in the server console. How could I make a script that runs quit in the console and then starts my start.sh file after that is done? I was looking at crontabs but they are confusing to me.
I'm not familiar with Garry's mod servers, but in general you can do the following in a cron file:
0 7 * * * quit && /path/to/start.sh
The first five fields ensure that the command runs at 7 AM machine time every day. You can use crontab.guru as a simple UI that shows what the numbers and * mean in cron scheduling.
I've assumed that quit closes the process running on the machine, and doesn't restart or shut the machine down.
Also, make sure to use absolute paths when running scripts from cron files.
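If quit is not a standalone executable but something typed into the running server console, a common workaround is to run the server inside screen and send the keystrokes to it. A minimal sketch, assuming the server runs in a screen session named gmod (the session name and paths are illustrative assumptions, not something from the question):
#!/bin/bash
# restart-gmod.sh -- sketch only; session name and paths are assumptions
screen -S gmod -X stuff $'quit\n'   # type "quit" into the server console
sleep 30                            # give the server time to exit cleanly
/path/to/start.sh                   # relaunch the server
The crontab entry then becomes 0 7 * * * /path/to/restart-gmod.sh.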

Execute Spoon job with software

I have a job built in Spoon which runs without problems from the command line, but I would like to know if there is any software in which I can execute these jobs and watch the execution visually. The idea is to make running these tasks more pleasant for the operations team.
You have two solutions:
Carte:
Use the Carte server which is shipped with PDI. Install PDI on any server, launch Carte (specifying the port), and you can then execute/view/stop/restart jobs/transformations from any browser. Documentation is here.
Of course you can also launch a job/transformation from your own PDI. Just define a new Slave server (left panel, View tab; the default username/password is cluster/cluster). Then each time you run a job/transformation, choose the Carte server instead of Pentaho/local in the Run configuration.
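As a rough sketch, starting Carte and checking it from a shell could look like this; the install path and port are assumptions, and cluster/cluster is the default credential pair mentioned above:
cd /opt/pentaho/data-integration    # assumed PDI install path
./carte.sh localhost 8081 &         # start Carte on an arbitrary port
# poll the same status page the browser UI shows
curl -u cluster:cluster http://localhost:8081/kettle/status/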
Logging
If you just want to follow a job/transformation, you can use database logging: right-click anywhere, then Parameters, Logging, Job/Transformation, and define a database, a table, and a logging interval of 2 seconds.
Every two seconds, columns such as LINES_READ, LINES_WRITTEN, ERRORS, and LOG_FIELD are then written to that table. The table can be read by an external process and displayed on screen or in a browser.
This method is used in the github/ETL-pilot project, which uses Tomcat (because you probably already have a Tomcat running with a Pentaho server), but it can easily be adapted to Node.js or any other server. (If you do this and open-source it, please add a link to your work on our GitHub.)
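To give an idea of how simple the external reader can be, here is a sketch that polls the log table every two seconds; the database, credentials, table name, and exact column names are assumptions to be matched against whatever you configured in the Logging dialog:
# all names below (database, user, table, columns) are illustrative
watch -n 2 "mysql -u pdi -psecret etl_logs -e \
  'SELECT TRANSNAME, STATUS, LINES_READ, LINES_WRITTEN, ERRORS \
   FROM trans_log ORDER BY ID_BATCH DESC LIMIT 5'"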

Finalize wsadmin script task "createAuthDataEntry"

I have a follow-up question to this issue: is it possible to finalize the AdminTask.createAuthDataEntry task in one wsadmin script?
I need to invoke this task so that WAS can establish a connection to a datasource that I have defined in the same script.
Defining an auth entry from the web console does not require a restart. Typically I would not expect that a restart would be required for authentication changes.
I have tried calling AdminControl.invoke(AdminControl.queryNames('WebSphere:*,type=Server,node=%s,process=%s' % ('node', 'server')), 'restart') inside the script, but this stops the instance without starting it again. Also, because of these limitations I cannot verify the datasource connection within the same script.
Creating or modifying authentication data entries from wsadmin requires a server restart. We have an RFE, which you can vote for, to allow wsadmin to make dynamic updates to them without a restart. To stop and start your server around the change, it's probably easiest to have the OS-level (bat or sh) script that invokes wsadmin call two separate scripts.
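A minimal sketch of that wrapper, assuming a default profile layout, server1 as the server name, and two illustrative Jython scripts (createAuthEntry.py running AdminTask.createAuthDataEntry plus AdminConfig.save(), and testDataSource.py doing the verification, e.g. via AdminControl.testConnection):
#!/bin/sh
# sketch only: profile path, server name, and script names are assumptions
BIN=/opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
$BIN/wsadmin.sh -lang jython -f createAuthEntry.py   # create the auth data entry
$BIN/stopServer.sh server1                           # restart so the entry takes effect
$BIN/startServer.sh server1
$BIN/wsadmin.sh -lang jython -f testDataSource.py    # verify the datasource afterwards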

Glassfish start-cluster command fails

I'm attempting to restart a GlassFish server (GlassFish 3?) using a script which runs daily and executes the following commands:
asadmin stop-cluster myapp-cluster
asadmin start-cluster myapp-cluster
However, either command just gives the output:
autoscaling.us-east-1.amazonaws.com
The server lives on Amazon, obviously. Any idea why it's failing, or better yet, how to make it work?
After some more googling, I logged into asadmin and, at the prompt, typed "start-domain". That started the domain. After that, I typed "start-cluster myapp-cluster", and now the app is up.
So, apparently the domain was down for some reason. It would probably be a good idea to modify the script to stop the cluster, stop the domain, then start the domain and start the cluster, as sketched below. Ideally, GlassFish would stop going down every few days...
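A sketch of that modified script; the cluster name is from the question, domain1 is an assumed domain name, and the || true keeps the script going if the domain is already down:
#!/bin/sh
# bounce the domain as well as the cluster so a dead DAS doesn't
# break the nightly restart; domain1 is an assumption
asadmin stop-cluster myapp-cluster
asadmin stop-domain domain1 || true    # ignore failure if already stopped
asadmin start-domain domain1
asadmin start-cluster myapp-cluster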

SQL 2012 - SSIS Package not populating Text file when scheduled

I'm working on SQL Server 2012 Enterprise and I have a set of SSIS package exports which push data out to text files on a shared network folder. The packages aren't complex and under most circumstances they work perfectly. The problem I'm facing is that they do not work when scheduled, despite reporting that they have succeeded.
Let me explain the scenarios:
1) When run manually from within BIDS, they work correctly: txt files are created and populated with data.
2) When deployed to the SSISDB and run from the Agent job, they also work as expected: files are created and populated with data.
3) When the Agent job is scheduled to run in the evening, the job runs and reports success. The files are created but the data is not populated.
I've checked the reports in the Integration Services Catalogs and compared the messages line by line from the OnInformation events. Both runs report that the Flat File Destination wrote xxxx rows.
The data is there, the Agent account has the correct access. I cannot fathom why the job works when started manually, but behaves differently when scheduled.
Has anyone seen anything similar? It feels like a very strange bug....
Kind Regards,
James
Make sure that the account you have set up as the proxy for the SSIS task has read/write access to the file.
In my experience, when you run a SQL Agent job manually, it appears to use the context of the user who initiates it in some way; I always assumed it was a side effect of impersonation. It's only when the job actually runs on its schedule that everything uses the assigned security rights.
Additionally, I think when the user starts the job, the user is impersonating the proxy, but when the job is run via the schedule, the agent's account is impersonating the proxy. Make sure the service account has the right to impersonate the proxy. Take a look at sp_grant_login_to_proxy and sp_enum_login_for_proxy.
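For reference, a sketch of checking and then granting proxy access from a shell; the server, login, and proxy names are all made up, and both procedures live in msdb:
# names below are illustrative; run against the instance hosting the Agent job
sqlcmd -S MYSERVER -E -Q "EXEC msdb.dbo.sp_enum_login_for_proxy @proxy_name = N'SsisFileProxy';"
sqlcmd -S MYSERVER -E -Q "EXEC msdb.dbo.sp_grant_login_to_proxy @login_name = N'DOMAIN\AgentUser', @proxy_name = N'SsisFileProxy';"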
Here's a link that roughly goes through the process:
http://www.mssqltips.com/sqlservertip/2163/running-a-ssis-package-from-sql-server-agent-using-a-proxy-account/
I also recall this video being useful:
http://msdn.microsoft.com/en-us/library/dd440761(v=SQL.100).aspx
I had the same problem with Excel files. It was permission rights.
What worked for me was adding the SERVICE account to the folder's security tab. Then the SQL Agent can access the files.