I have a shell script which asks for user input and, depending on the input, opens a database connection using sqlplus and runs some SQL queries such as DROP TABLE / CREATE TABLE / SELECT / UPDATE. Is it possible to run the SQL part as a background job, so that even if I lose VPN connectivity to the network, all the SQL queries still get executed?
Also, when the SQL part completes and the user is prompted for another input, can the shell script come back to the foreground, and after getting the input go to the background again?
I have found some questions explaining how to run a whole script in the background, but I want to run ONLY some parts of the same script in the background if possible (and come back to the foreground for user input). Though I could handle it with multiple scripts (splitting out the parts that need to run in the background and calling them from another script), I would rather do it in a single script if possible.
You can break your main script up into functions / smaller scripts to achieve the desired behavior of a mix of background processes and foreground processes.
For example, in your main script:
#!/bin/sh
echo "Starting script..."
# do some more stuff here, maybe ask the user for input
./run_background_process_1 &
# ask the user for some more input
./run_background_process_2 &
...
Use the & symbol at the end of script calls to denote that they should be run in the background.
(Updated) If you'd like to keep everything in 1 script, use functions to break up / encapsulate the parts of logic that you would like to run in the background. Call these functions by suffixing the call with &, same as above.
You can try the following example to see that it works:
#!/bin/sh
hello() {
    condition="yes"
    while [ "$condition" = "yes" ]
    do
        echo "."
        sleep 1
    done
}

# Script main starts here
echo "Start"
hello &
echo "Finish"
Remove the & after hello and you'll see that it behaves differently.
There are tools which allow you to keep scripts running despite loss of connection. For example, check out GNU Screen (http://www.gnu.org/software/screen/) - one of its features is: "Programs continue to run when their window is currently not visible and even when the whole screen session is detached from the user's terminal."
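A typical screen workflow might look like this (just a sketch; the session name and script name are placeholders):
screen -S sqljob              # start a named screen session
./main_script.ksh             # run the script inside the session
# detach with Ctrl-a d; the script keeps running even if the VPN drops
screen -ls                    # after reconnecting, list sessions
screen -r sqljob              # reattach to the session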
After searching on the internet, I found that I could use three methods to run the script in the background:
1) using bg: How do I put an already-running process under nohup? Unfortunately, this didn't work for me in the ksh shell.
2) using coprocesses
3) using nohup
I decided to go with nohup as it was easier to implement. I moved the sqlplus part which needed to run in the background into a separate script and called it from the main script using nohup:
nohup script-name.ksh ${parameter1} ${parameter2} &
This worked for me.
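For reference, a minimal sketch of how the main script can mix foreground prompts with the background sqlplus part (the prompts, log file and variable names here are only placeholders):
#!/bin/ksh
read table_name?"Which table should be processed? "

# run the SQL part detached from the terminal, so it survives a VPN drop
nohup ./script-name.ksh "${table_name}" > sql_part.log 2>&1 &
sql_pid=$!

# come back to the foreground and wait before asking for the next input
wait ${sql_pid}
read answer?"SQL part finished. Continue with the next step? (y/n) "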
Related
I have multiple scripts running. All the scripts have the same name but the commands they execute are doing different things.
I am trying to figure out whether a certain instance of the script is running, and if so, I don't want it to run again. This is difficult because all of the scripts have the same name, so using pgrep could give a false positive.
My idea was that if there were some attribute or description I could attach when it is first run, I could then grep for that unique description and tell which instance is running. Is there any way to do this?
Does anyone have any other ideas?
Thanks
You can implement the logic below to check whether the script is already running or not.
if [ -f Script1.lck ]; then
    echo "ALREADY INSTANCE RUNNING `date`"
    echo "EXITING"
    exit 1
else
    echo "NO INSTANCE RUNNING"
    touch Script1.lck
    # Your script code here.............
    rm -f Script1.lck
fi
* The concept is that every time the script runs, it checks for the Script1.lck file. If the file does not exist, it means no instance is running, so the script creates the file and starts executing your code.
* If you run the script again in the meantime, it checks for the .lck file, finds that it already exists because of the previous instance, and exits.
* At the end, the .lck file is removed.
* By using different .lck file names for different scripts, you can tell which script is running.
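One caveat: if the script is killed before it reaches the rm, the stale .lck file blocks every later run. A common refinement (a sketch, using the same lock file name as above) is to remove the lock from a trap:
#!/bin/sh
lockfile=Script1.lck

if [ -f "$lockfile" ]; then
    echo "ALREADY INSTANCE RUNNING `date`"
    exit 1
fi

touch "$lockfile"
# remove the lock even if the script exits early or is interrupted
trap 'rm -f "$lockfile"' EXIT INT TERM

# Your script code here.............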
I have a script which interacts with the user (prints some questions to stderr and reads input from stdin) and then prints some data to stdout. I want to put the output of the script into a variable in Vimscript. It should probably look like this:
let a = system("./script")
The expected behavior is that the script runs, interacts with the user, and afterwards a is assigned its stdout output. But instead, a is assigned both the stdout and stderr output, so the user sees no prompts.
Could you help me fix it?
Interactive commands are best avoided from within Vim; especially with GVIM (on Windows), a new console window pops up; you may not have a fully functional terminal, ...
Better to query any needed arguments in Vimscript itself (with input(), or pass them on from a custom Vim :command), and just use the external script non-interactively, feeding it everything it needs.
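A minimal sketch of that approach (assuming your script can accept the answer as a command-line argument instead of prompting for it):
:let answer = input('Admin user? ')   " the prompt text is just an example
:let a = system('./script ' . shellescape(answer))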
What gets captured by system() (as well as :!) is controlled by the 'shellredir' option. Its usual value, >%s 2>&1 captures stdout as well as stderr. Your script needs to choose one (e.g. stdout) for its output, and the other for user interaction, and the Vimscript wrapper that invokes it must (temporarily) change the option.
:let save_shellredir = &shellredir
:set shellredir=>
:let a = system('./script') " The script should interact via stderr.
:let &shellredir = save_shellredir
Call the script from within the other one as:
. ./script.sh
I think this is what you meant.
I've just switched to zsh and am now adapting my aliases, some of which print some text (in color) along with running a command.
I have been trying to use the $fg array variable, but there is a side effect: the whole command is printed before being executed.
The same occurs if I just test an echo with a color code in the terminal:
echo $fg_bold[blue] "test"
]2;echo "test" test #the test is in the right color
Why does the command print itself before doing what it's supposed to do? (To be precise, this doesn't happen when printing without any variable in the command.)
Do I have to set a specific zsh option, or use echo with a special parameter, to get rid of that?
Execute the command first (keep its output somewhere), and then issue echo. The easiest way I can think of doing that would be:
echo $fg[red] `ls`
Edit: Ok, so your trouble is some trash before the actual output of echo. You have some funny configuration that is causing you trouble.
What to do (other than inspecting your configuration):
start a shell with zsh -f (it will skip any configuration), and then re-try the echo command: autoload colors; colors; echo $fg_bold[red] foo (this should show you that the problem is in your configuration).
Most likely your configuration defines a precmd function that gets executed before every command (which is failing in some way). Try which precmd. If that is not defined, try echo $precmd_functions (precmd_functions is an array of functions that get executed before every command). Knowing which is the code being executed would help you search for it in your configuration (which I assume you just took from someone else).
If I had to guess, I'd say you are using oh-my-zsh without knowing exactly what you turned on (which is an endless source of troubles like this).
I don't replicate your issue, which I think indicates that it's either an option (that I've set), or it's a zsh version issue:
$ echo $fg_bold[red] test
test
Because I can't replicate it, I'm sure there's an option to stop it happening for you. I do not know what that option is (I'm using heavily modified oh-my-zsh, and still haven't finished learning what all the zsh options do or are).
My suggestions:
You could try using print:
$ print $fg_bold[red] test
test
The print builtin has many more options than echo (see man zshbuiltins).
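For example, print -P enables prompt-expansion sequences, which is another way to get colored output (a sketch, not specific to your setup):
$ print -P '%F{red}test%f'
test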
You should also:
Check what version of zsh you're using.
Check what options (setopt) are enabled.
Check your ~/.zshrc (and other loaded files) to see what, if any, options and functions are being run.
This question may suggest checking what TERM you're using, but reading your question it sounds like you're only seeing this behaviour (echoing of the command after entry) when you're using aliases...?
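For reference, those checks might look something like this (a sketch; adjust the config file path to wherever your setup lives):
echo $ZSH_VERSION                           # which zsh you're running
setopt                                      # options currently enabled
echo $precmd_functions $preexec_functions   # hooks run around every command
grep -n "precmd\|preexec" ~/.zshrc          # look for hooks defined in your config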
I have to source a tcsh script to modify environment variables.
Some tests are to be done, and if any of them fails, the sourcing shall stop without exiting the shell. I do not want to run the script as a subprocess because I need to modify environment variables in the parent process, which a subprocess cannot do. This is similar to, but different from, this question, where the author actually can run the script as a subprocess.
The usual workaround is to create an alias which runs a script (csh/bash/perl/python/...) that writes a tempfile with all the env var settings and at the end sources and deletes that tempfile. Here's more info for those interested (demoing a solution for bash). For the very simple and short stuff I'm doing, that additional alias is not wanted.
So my workaround is to provoke an error which stops any further source execution. Here's an example:
test $ADMIN_USER = `filetest -U: $SOME_FILE` || "Error: Admin user must own admin file"
The short-circuit || causes the error text to be ignored if the test succeeds. On a test failure, the error text is interpreted as a command, which is not found, so the source stops and produces a reasonable error message:
Error: Admin user must own admin file: Command not found.
Is there any nicer way of doing this? Some csh/tcsh built-in that I've overlooked?
Thanks to a discussion with the user shellter I just verified my assumption that
test $ADMIN_USER = `filetest -U: $SOME_FILE` || \
echo "Error: Admin user must own admin file" && \
exit
would actually quit the enclosing interactive shell. But it does not.
So the answer to my above question actually is:
Just use a normal exit and the source will stop sourcing the script while keeping the calling interactive shell running.
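So a sourced setup script can simply look like this (a sketch; the file name, variables and the setenv line are placeholders):
# set_env.csh -- meant to be used as:  source set_env.csh
set file_owner = `filetest -U: $SOME_FILE`
if ( "$ADMIN_USER" != "$file_owner" ) then
    echo "Error: Admin user must own admin file"
    # exit stops the source here but leaves the interactive shell running
    exit 1
endif
setenv ADMIN_HOME /path/to/admin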
I am currently looking at using Scala scripts to control the life-cycle of a MySQL database instead of using MS-DOS scripts (I am on Windows XP).
I want to have a configuration script which only holds configuration information, and 1 or more management scripts which use the configuration information to perform various operations such as start, stop, show status, etc .....
Is it possible to write a Scala script which includes/imports/references another Scala script?
I had a look at the -i option of the scala interpreter, but this launches an interactive session which is not what I want.
According to the Scala man page, script pre-loading only works in interactive mode.
As a workaround, you can exit the interactive mode after running the script. Here's the code of child.bat (a script that includes another, generic one):
::#!
@echo off
call scala -i generic.bat %0
goto :eof
::!#
def childFunc="child"
println(genericFunc)
println(childFunc)
exit;
genericFunc is defined in generic.bat.
The output of child.bat:
>child.bat
Loading generic.bat...
...
genericFunc: java.lang.String
Loading child.bat...
...
childFunc: java.lang.String
generic
child
I'd use Process and call the other Scala script just like any other command.
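A rough sketch of that idea with scala.sys.process (the script name and argument are placeholders; on Windows the scala launcher is a batch file, so it is run via cmd /c):
import scala.sys.process._

// run the other Scala script as an external command and capture its exit code
val exitCode = Seq("cmd", "/c", "scala", "config.scala", "start").!
println("config script finished with exit code " + exitCode)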
One option would be to have a script which concatenates two files together and then launches it, something like:
@echo off
type config.scala > temp.scala
type code.scala >> temp.scala
scala temp.scala
del temp.scala
or similar. Then you keep the two separate as you wished.