I have some lua scripts that are used many times. I don't want to use luaL_load every time I change between scripts. For example:
load script1
run script1
load script2
run script2
load script1
run script1
I want to keep a reference or something to script1 so I can run it without loading it again. Is this possible? I'm new to Lua and maybe this question is stupid... but to me it seems like a good optimization to avoid reloading a script that is used often. I want the above code to be turned into something like this:
load script1
load script2
set current script script1
run script1
set current script script2
run script2
set current script script1
run script1
Well, all you need to do is save the compiled chunk that luaL_loadfile pushes on the stack. To do this, you can use lua_pushvalue(L, -1) to make a copy of the compiled chunk at the top of the stack (because luaL_ref will pop it), and luaL_ref(L, LUA_REGISTRYINDEX) to get a reference to it in the registry. Whenever you need the chunk you can use lua_rawgeti(L, LUA_REGISTRYINDEX, refToChunk), which will push the chunk back on the stack, ready to be run with lua_call or lua_pcall.
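For example, a minimal sketch of that flow (the file name is just a placeholder, and error checking on luaL_loadfile and lua_pcall is omitted for brevity):

/* Load the chunk once and stash a reference to it in the registry. */
luaL_loadfile(L, "script1.lua");              /* pushes the compiled chunk */
int script1 = luaL_ref(L, LUA_REGISTRYINDEX); /* pops it, returns a reference */

/* Run it as many times as you like, without reloading the file. */
lua_rawgeti(L, LUA_REGISTRYINDEX, script1);   /* push the chunk back */
lua_pcall(L, 0, 0, 0);                        /* run it */

/* When the script is no longer needed, release the reference. */
luaL_unref(L, LUA_REGISTRYINDEX, script1);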
Is it possible to execute a Unidata process from the Unix command line?
If it is possible, can anyone please let me know how?
I just want to add some Unidata processes to a shell script and run it from a Unix cron job.
Yes! There are several approaches, depending on how your application is set up.
Just pipe the input to the udt process and let 'er rip
$cd /path/to/account
$echo "COUNT VOC" | udt
This will run synchronously, and you may also have to respond to any prompts your application puts up, unless it checks whether the session is connected to a tty. Check the LOGIN paragraph in VOC to see what runs at startup.
Same, but run async as a phantom
$cd /path/to/account
$udt PHANTOM COUNT VOC
This will return immediately and the commands will run in the background. You'll have to check the COMO/PH file for the output from the command. It's common for applications to skip or use a cut-down startup process when run as a phantom (check for #USERTYPE).
If none of the above work because of the way your application is written, use something like expect to force the issue.
spawn udt
expect "ogin:"
send "rubbleb\r"
etc.
See https://en.wikipedia.org/wiki/Expect for more info on expect.
I already have CI/CD in Jenkins automated for my team. A push to the master branch will test & deploy my team's node app to npm. However, the steps to prepare a release are many and complicated, and right now they just reside in a text file. I copy those steps from the text document and paste them into a Unix command line to run them. I want to write something to automate that release prep.
I need to run steps of commands, and pause to confirm.
I need to be able to quit at any step and resume at any step.
I need to alternate between performing steps for the computer and informational steps for displaying to people.
Nice to have:
It would be nice to have steps be relatively human readable in the code.
I would prefer to use someone else's tool rather than rolling my own.
I already know JavaScript, Bash, Make, and YAML.
How can I best automate my pre-release steps?
You can just put all the commands into a shell script, like so:
$ vi release.sh
#!/bin/bash
# Release commands here
I need to run steps of commands, and pause to confirm.
You can add the following piece of code before any command for which you would like confirmation before proceeding:
echo "Do you want to continue?(yes/no)"
read input
if [ "$input" == "yes" ]
then
echo "continue"
fi
I need to be able to quit at any step and resume at any step.
I'm guessing you mean PAUSE and resume.
When your shell script is running and you feel the urge to PAUSE, you can use Ctrl+Z to suspend the script and do whatever you want to do, like run other scripts/processes or go for a cup of coffee :)
To resume, type
$ jobs      # lists all jobs
[1]+ Stopped release
then run fg (foreground) or bg (background).
Note: you have to be in the same active shell for it to work.
I need to alternate between performing steps for the computer and
informational steps for displaying to people
Add echo statements:
echo "Going to copy the file from actual location to target location"
cp ACTUAL_LOC/file.txt TARGET_LOC/file.txt
It would be nice to have steps be relatively human readable in the
code.
This totally depends on how well you write the script file :)
I would prefer to use someone else's to not roll my own.
Do you mean rollback in SQL or Unix commands when a failure happens?
I use the command
valgrind --tool=massif --threshold=1 <bin>
The command only generates a massif.out.<pid> file after I close the test program. Is there a way to let massif dump the file incrementally while the test program runs?
The file produced at the end contains the status of the memory at different moments of the program run. The output file can then be visualised in various ways, e.g. using ms_print or massif-visualizer.
These will show the evolution of the memory, and so should correspond to your request of seeing 'incremental' dumps.
You can also, if you want, trigger a massif dump yourself during execution, typically using vgdb from a shell window. See http://www.valgrind.org/docs/manual/ms-manual.html#ms-manual.monitor-commands for more information.
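For example (a sketch, assuming the default --vgdb=yes setting; <pid> stands for the PID of the running valgrind process and the output file name is arbitrary), from a second shell you can request a snapshot on demand with the massif monitor command described in the page above:
$ vgdb --pid=<pid> snapshot massif.snapshot.out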
If I have two or more console instances of the same Python application running at the same time, executed several times by hand or in any other way:
Is there any method, from the Python code itself, to stop all the extra processes, close their console windows, and keep only one running?
The solution I would use is to have a lockfile created in the tmp directory.
The first instance starts, checks for the existence of the file, creates the file since it is not there, and then runs; any following instance starts, checks for the existence of the file, and quits since it's there. The original instance removes the lockfile as its last instruction. NOTE: if the app runs into an error and never executes the instruction that removes the lockfile, you will need to remove it manually, or else the app will always see the file.
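A minimal sketch of that idea in Python (the lockfile name is arbitrary, and this simple version still has the stale-lockfile caveat noted above):

import os
import sys
import tempfile

# Lockfile in the system temp directory; the name is arbitrary.
lockfile = os.path.join(tempfile.gettempdir(), "myapp.lock")

# A later instance sees the file and quits straight away.
if os.path.exists(lockfile):
    sys.exit("Another instance is already running.")

# First instance: create the lockfile, do the work, remove it at the end.
open(lockfile, "w").close()

print("doing the application's work...")  # the real application code goes here

os.remove(lockfile)  # last instruction, as described above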
I've seen on other threads that some suggest using the ps command and looking for your app's name, which would work; however, if your app will ever run on Windows, you would need to use tasklist.
Is there any way to call mpirun inside a FORTRAN program? I'm working on a public Linux cluster via ssh, and the main idea is to automatically enqueue the program again after its execution is over.
I tried to write something like this at the end of the program:
CALL system('mpirun -np 16 -maxtime 100 TestNP')
But I received this error:
sh: mpirun: command not found
Any ideas?
The problem is the missing path prefix, so specifying an absolute path for mpirun should help. However, there are several problems with your approach:
If every MPI process executes it, you would have too many instances running, so only one of the nodes (e.g. the master node) should execute it.
The original program won't finish until the one called via system() has finished. So, if your queue is wall-clock limited, you don't gain anything at all.
Typically, tasks like this are done via shell scripts. E.g. in Bash you would write something like:
while true; do
mpirun your_program
done
This will re-invoke mpirun continuously until it is killed by you or the queuing system. (So be careful with it!)