Maximo automation script logs on script crash - Jython

I'm working with a Jython automation script in Maximo. I'm seeing the output of my print statements in the MXServer logs when the script exits correctly, but none of my print statements make it into the logs if the script crashes - even the print statements that execute before the script crashes. Is this just how Maximo works, or is there some way of seeing those print statements when the script crashes? It's very difficult to debug why the script is crashing if I can't see my print statements.

That is the way the scripting engine works in Maximo. Print output is collected as the script runs, and if the script can't exit cleanly, the engine never dumps the output it has collected so far (nor can it grab the output stored in the executing script). This is a primary reason to use logging statements instead of print statements (there are others too, such as being able to change the logging output level; this parallels the standard Java advice to never use System.out.println and to use a logger instead). Logging statements are written out as the script runs, so they still appear even if the script does not exit cleanly (or has not exited at all yet).
As of Maximo 7.6.0.something, there are also logging helper functions on the special service object that scripts receive. You can call service.log_debug, service.log_info, etc. to log at the respective level to a premade logger.
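For illustration, here is a minimal Jython sketch (the messages are made up; service.log_info and service.log_debug are the 7.6.0.x helpers mentioned above, and MXLoggerFactory is a fallback on older versions):
from psdi.util.logging import MXLoggerFactory
# Newer versions: helpers on the implicit "service" object that every
# automation script receives. These are written out as they execute,
# so they survive a crash.
service.log_info("starting the import")
service.log_debug("fetched the mbo set")
# Older versions: grab a Maximo logger directly.
# The logger name here is an assumption; use whatever logger you have configured.
logger = MXLoggerFactory.getLogger("maximo.script")
logger.info("starting the import")
Either way, remember that a message only shows up if that logger's level is set low enough in the Logging application (e.g. DEBUG for service.log_debug).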

Related

Execute Unidata process from the shell command line?

Is it possible to execute a Unidata process from the Unix command line? If so, can anyone please let me know how? I just want to add some Unidata processes to a shell script and run it from a Unix cron job.
Yes! There are several approaches, depending on how your application is set up.
Just pipe the input to the udt process and let 'er rip:
$ cd /path/to/account
$ echo "COUNT VOC" | udt
This will run synchronously, and you may also have to respond to any prompts your application puts up, unless it is checking whether the session is connected to a tty. Check the LOGIN paragraph in VOC to see what runs at startup.
Same, but run async as a phantom:
$ cd /path/to/account
$ udt PHANTOM COUNT VOC
This will return immediately and the commands will run in the background. You'll have to check the COMO/PH file for the output from the command. It's common for applications to skip or run a cut-down startup process when run as a phantom (check for #USERTYPE).
If none of the above works because of the way your application is written, use something like expect to force the issue:
spawn udt
expect "ogin:"
send "rubbleb\r"
etc.
See https://en.wikipedia.org/wiki/Expect for more info on expect.
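Since the stated goal is a cron job, the synchronous variant drops straight into a one-line crontab entry; for example, to run at the top of every hour (the log file path is illustrative, and because cron runs with a minimal PATH you may need to spell out the full path to udt):
0 * * * * cd /path/to/account && echo "COUNT VOC" | udt >> /tmp/udt.log 2>&1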

Start a Spring-Shell based application non-interactively

Is it possible to start a specific command of a Spring-Shell app and then return/exit the shell after the command is finished? Further, is it possible to expose the exit code (System.exit) of the app to the operating system shell?
For my purposes I will take advantage of the plugin mechanism and the CLI annotations of Spring-Shell. In general there is no human interaction with the app; instead, a job scheduler (UC4) will start the app and check the exit code to generate an email in case of an exit code not equal to 0. On the other hand, for manual tests by our customer, there is also the need for tab completion, usage help, etc.
This behavior is already built in (although we considered removing it, or at least making it optional). I see now that it is useful :)
Simply invoke the shell with your (sole) command and the shell will spin up, execute the command, and quit. Also, the return code of the shell already indicates whether there was an error or not (tried with a nonexistent command, for example). Of course, if your custom commands do not properly indicate an error (i.e. print an error message but perform a normal return) this will not work. You should throw an exception instead.
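For example (the jar and command names are illustrative), a scheduler such as UC4 could run a single command and branch on the exit code:
$ java -jar my-app.jar my-command arg1
$ echo $?
0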
The behavior is back.
Run spring-shell with @my-script, like so:
java -jar my-app.jar @my-script
Where my-script is a file with your commands:
my-command1 arg1 arg2
my-command2 arg1 arg2

Execute Multiple PowerShell Files using a SSIS Package

I have multiple PowerShell script files that I need to execute in a sequential flow (one after the other). Can someone please help me schedule multiple PowerShell files to be executed using an SSIS package? I also need to build a fault-tolerant model where I re-execute a PowerShell script in case of failure.
Running PowerShell
There isn't a built-in Execute PowerShell task (pity), so you'll need to use an Execute Process Task with the path to powershell.exe.
Something you will need to take into consideration is that the default execution policy for PowerShell is Restricted, which cannot run a script. Further complicating matters, the account that runs the SSIS package will also need its execution policy modified to be able to fire off those scripts. It's a simple matter of Set-ExecutionPolicy RemoteSigned, or whatever level you feel is appropriate, but you'll need to do this from within that account.
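As a sketch, the task settings could look like this (paths and script name are illustrative; passing -ExecutionPolicy Bypass per invocation is an alternative to changing the account's policy):
Executable: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Arguments: -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Step1.ps1"
One Execute Process Task per script, chained with Success precedence constraints, gives you the sequential flow.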
Fault Tolerance
The simple approach is to ignore the return code in the Execute Process Task. Alternatively, if the desire is to keep running the PS1 until it succeeds, you'd wrap a For Loop Container around the Execute Process Task and only set the terminal condition once the task returns a success value. Things might still go sideways depending on what the failure is.
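A rough shape for that retry loop (the variable name is illustrative, and it assumes each PS1 actually reports failure by exiting nonzero, e.g. exit 1 in a catch block):
For Loop Container
    InitExpression: @[User::Succeeded] = 0
    EvalExpression: @[User::Succeeded] == 0
    Execute Process Task (FailPackageOnFailure = False)
        on Success -> set @[User::Succeeded] = 1 (e.g. in an Expression Task)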

ms-access: doing repetitive processes with vba/sql

I have an Access database backend that contains three tables, and I have distributed the front end to several users. This is a very simple database with minimal functionality. I need to import certain rows from a file every hour into one of the tables in the database. I would like to know the best way to automate this process so that it runs hourly, sort of as a service in the background. Can you tell me how you would do this?
You could have, for example:
an ms-access file with all the necessary code to run the import procedure
a BAT file containing the command line(s) that will run this ms-access file with all requested parameters. Check the ms-access command line parameters to see the available options (there is a sketch just after this list).
task scheduler software to launch the BAT file: depending on the task scheduler and the command line to be sent, you could even skip the BAT file step
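A sketch of what that command line and schedule could look like (all paths, file names, and the macro name are illustrative):
"C:\Program Files\Microsoft Office\root\Office16\MSACCESS.EXE" "C:\data\import.accdb" /x ImportHourly
schtasks /create /sc hourly /tn "HourlyImport" /tr "C:\jobs\import.bat"
The /x switch runs the named macro at startup; that macro can call your VBA import routine and then quit Access.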
If all you want to do is run some queries, I would not do this by automating all of Access, but instead by writing a VBScript that uses DAO to execute the SQL directly. That's a much more efficient way to do it, and it will run without a console logon (which may or may not be required for full Access to be run by the task scheduler).

Making WSGI debug framework?

I'm trying to learn about using mod_wsgi, and I thought the best way would be for me to write my own simple 'debug' framework. I am NOT looking to use someone else's debug framework at this time.
The problem is, I'm not sure how to get started.
Specifically, I have a script working now where there is a WSGIScriptAlias to my Python script:
/testscript -> /home/bill/testscript.py [this works ok]
There are several annoying problems here, namely that if there is any syntax error of any kind, Apache returns a 500 server error and I have to check the server logs, which is annoying.
What I would like to do is have some kind of framework that encapsulates my script, so that when an error occurs (like a syntax error in testscript.py or any other type of exception), I can catch the exception and return a nicely formatted HTML page with debugging information.
My question is, how do I 'pass' the script I want to run as an argument to my debug script?
From the command line, it would be easy, I would do something like this:
$ python debug.py myscript.py
How can I do this using WSGI though? Any ideas?
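For what it's worth, the wrapper being described could look roughly like this minimal Python sketch (the module and function names are assumptions, and a real framework would also need to cope with responses that have already started streaming):
import sys
import traceback

def debug_wrapper(app):
    # Wrap a WSGI application and turn any exception into an HTML traceback page.
    def wrapped(environ, start_response):
        try:
            return app(environ, start_response)
        except Exception:
            # Passing exc_info lets WSGI replace the headers
            # if output has not been sent yet.
            start_response('500 Internal Server Error',
                           [('Content-Type', 'text/html')],
                           sys.exc_info())
            return [('<pre>%s</pre>' % traceback.format_exc()).encode('utf-8')]
    return wrapped

def application(environ, start_response):
    # Import the target script inside the handler so that even a syntax
    # error in it surfaces as a debug page rather than a bare Apache 500.
    def load_and_run(env, sr):
        import testscript              # the script being "passed in"
        return testscript.application(env, sr)
    return debug_wrapper(load_and_run)(environ, start_response)
Instead of hard-coding the module name, it could be read from the WSGI environ (for example, a value set with Apache's SetEnv directive), which is one way of "passing the script as an argument" to the debug wrapper.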