I have multiple scripts running. All the scripts have the same name, but the commands they execute do different things.
I am trying to figure out whether a certain instance of the script is running and, if so, prevent it from running again. This is difficult because all of the scripts have the same name, so pgrep can give false positives.
My idea was to attach some attribute or description when an instance is first run, so I can grep for that unique description and tell which instance is running. Is there any way to do this?
Does anyone have any other ideas?
Thanks
You can implement the logic below to check whether the script is already running.
if [ -f Script1.lck ]; then
    echo "ALREADY INSTANCE RUNNING `date`"
    echo "EXITING"
    exit 1
else
    echo "NO INSTANCE RUNNING"
    touch Script1.lck
    # Your script code here.............
    rm -f Script1.lck
fi
* I used the concept that every time the script runs it checks for the Script1.lck file; if the file does not exist, it means no instance is running, so it creates the file and starts executing your code.
Suppose in between you execute the script again: it checks for the .lck file and finds that it already exists due to the previous instance, so it exits.
* At the end I removed the .lck file.
* By using different .lck file names for different scripts you can check which script is running.
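If the lock-file approach suits you, flock(1) (where available) avoids the stale-lock problem of a plain touch/rm pair. Below is a minimal sketch, assuming bash; INSTANCE_NAME is a hypothetical identifier you pass in to distinguish instances that share the same script name:
#!/bin/bash
# Sketch only: per-instance locking with flock(1), assuming it is installed.
# INSTANCE_NAME is a hypothetical identifier supplied by the caller.
INSTANCE_NAME="${1:?usage: $0 instance-name}"
LOCKFILE="/tmp/myscript.${INSTANCE_NAME}.lock"
exec 9>"$LOCKFILE"            # open a dedicated file descriptor on the lock file
if ! flock -n 9; then         # non-blocking: fail immediately if the lock is already held
    echo "Instance '$INSTANCE_NAME' is already running"
    exit 1
fi
# ... instance-specific work here ...
# The lock is released automatically when the script (and fd 9) exits,
# so a crash does not leave a stale lock behind.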
I want to check a file in Unix via Automic. If the file doesn't exist, the job should switch hosts and check whether the file is there.
The problem is that I don't know how to implement error handling.
Every time the script object runs and can't find the file, the script aborts. I need a new starting point in the script, but "ON_ERROR" or ":RESTART" doesn't work.
How can I implement logic like this: if the script aborted due to the error message 'No such file or directory', start the script from here instead?
Thank you very much for your help!
Best regards
I have solved it. Use the function PREP_PROCESS_FILENAME to check whether the file exists in the folder!
You have to start the task twice in the same workflow. The task's job checks whether the script exists; otherwise there is nothing to do.
if [ -f "/path/to/script" ]
then
    bash /path/to/script
else
    echo "Script not found"
fi
In the Post-Script you can modify the state of the empty task with :MODIFY_STATE, depending on the report or return code.
By default, IntelliJ IDEA will insert (something like) the following as the header of a new source file:
/**
* Created by JohnDoe on 2016-04-27.
*/
The corresponding template is:
/**
* Created by ${USER} on ${DATE}.
*/
Is it possible to update this template so that it inserts the last date of modification when the file is changed? For example:
/**
* Created by JohnDoe on 2016-03-27.
* Last modified by JaneDoe on 2016-04-27
*/
It is not supported out of the box. I suggest you do not include information about the author and last edit/creation time in the file at all.
The reason is that your version control system (Git, SVN) records the same information automatically. Manual labelling just duplicates already existing info, but is more error prone and needs to be updated by hand.
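For example, assuming the project is tracked in Git, the same information is already available on demand; the file path below is just an illustration:
git log -1 --format='%an %ad' -- src/java/test/Test.java    # last author and date for one file
git blame src/java/test/Test.java                           # per-line author and date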
Here's a working solution similar to what I'm using. Tested on macOS.
Create a bash script which replaces the first occurrence of Last modified by JaneDoe on $DATE, but only if the exact value is not already contained in the file:
#!/bin/bash
FILE=src/java/test/Test.java
DATE=`date '+%Y-%m-%d'`
PREFIX="Last modified by JaneDoe on "
STRING="$PREFIX.*$"
SUBSTITUTE="$PREFIX$DATE"

if ! grep -q "$SUBSTITUTE" "$FILE"; then
    sed -i '' "1,/$(echo "$STRING")/ s/$(echo "$STRING")/$(echo "$SUBSTITUTE")/" $FILE
fi
Install File Watchers plugin.
Create a file watcher with an appropriate scope (it may be this single file or any other scope, so that any change in the project's source code will update the modified date, version, etc.) and put the path to your bash script into the Program field.
Now every time the file changes, the date will update. If you want to update the date for each file separately, pass $FilePath$ as an argument to the script.
This might have been just a comment to @oleg-mikhailov's excellent idea, but the code snippet won't fit. Basically, I just tweaked his solution.
I needed a slightly different syntax, but that's not the issue. The issue was that when the script ran automatically on file save via the File Watchers plugin, and it ran on a file which doesn't include PREFIX, it would run over and over forever.
I presume the issue is with the plugin itself, as it didn't happen when run from the shell, but I'm not sure why.
Anyway, I ended up running the following script (as I said, only a slight change with respect to the original). The new script also raises an error if the prefix doesn't exist. For me this is a feature, as PyCharm prompts me with the error and I can fix the file.
Tested with PyCharm 2021.2.3 on macOS 11.6.
#!/bin/bash
FILE=$1
DATE=`date '+%Y-%m-%d'`
PREFIX="last_modified_date: "
STRING="$PREFIX.*$"
SUBSTITUTE="$PREFIX$DATE"

if ! grep -q "$SUBSTITUTE" "$FILE"; then
    if grep -q "$PREFIX" "$FILE"; then
        sed -i '' "s/$(echo "$STRING")/$(echo "$SUBSTITUTE")/" $FILE
    else
        echo "Error!"
        echo "'$PREFIX' doesn't appear in $FILE"
        exit 1
    fi
fi
PhpStorm does not have a hook for launching a task after it detects a change in a file (it does for uploading to a server). Code templates are applied when a file is created, not when it is changed.
The behaviour you want (automatically changing a file after a manual change to that file) can be useful for lots of things, but it's a circular headache for an editor: if changing a file triggers a change to the file, does that change trigger yet another change, and so on?
However, you can perhaps enable Live Templates when you launch a reformat of the code, which could rewrite your header template and thereby update the modification date.
Another solution is to use a tool such as Grunt, but I don't know whether it can handle PHP files.
I have a shell script which asks for user input and, depending on the input, opens a db connection using sqlplus and runs some SQL queries like drop table/create table/select/update. Is it possible for the SQL part to run as a background job, so that even if I lose VPN connectivity to the network, all the SQL queries still get executed?
Also, when the SQL part completes and the user is prompted for another input, can the shell script come to the foreground, and after getting the input go to the background again?
I have found some questions which explain how to run a whole script in the background, but I want to run ONLY some parts of the same script in the background if possible (and come to the foreground for user input). Though I could handle it with multiple scripts (dividing the script into parts which need to be called in the background and calling them from another script), I would rather do it in a single script if possible.
You can break your main script up into functions / smaller scripts to achieve the desired behavior of a mix of background processes and foreground processes.
For example, in your main script:
#!/bin/sh
echo "Starting script..."
# do some more stuff here, maybe ask user for input
./run_background_process_1 &
# ask the user for some more input
./run_background_process_2 &
...
Use the & symbol at the end of script calls to denote that they should be run in the background.
(Updated) If you'd like to keep everything in 1 script, use functions to break up / encapsulate the parts of logic that you would like to run in the background. Call these functions by suffixing the call with &, same as above.
You can try the following example to see that it works:
#!/bin/bash
# bash is needed here because the test below uses [[ ]]
hello() {
    condition="yes"
    while [[ $condition == "yes" ]]
    do
        echo "."
        sleep 1
    done
}
# Script main starts here
echo "Start"
hello &
echo "Finish"
Remove the & after hello and you'll see that it behaves differently.
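If the script must return to the foreground for the next prompt only after a background part has finished, you can remember the background job's PID and wait on it. A minimal sketch under that assumption, in bash; the sqlplus call, credentials, and table name are placeholders:
#!/bin/bash
run_sql_part() {
    # placeholder for the real sqlplus invocation
    sqlplus -s user/password@db <<'EOF'
SELECT COUNT(*) FROM some_table;
EOF
}

run_sql_part &                 # run the SQL part in the background
sql_pid=$!                     # remember its PID
# ... other foreground work could happen here ...
wait "$sql_pid"                # block until the background SQL part is done
read -r -p "Next input: " answer
echo "You entered: $answer"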
There are tools which allow you to keep scripts running despite loss of connection. For example, check out http://www.gnu.org/software/screen/ - one of its features is that "Programs continue to run when their window is currently not visible and even when the whole screen session is detached from the user's terminal."
After searching on the internet, I found three methods to put the script in the background:
1) using bg: How do I put an already-running process under nohup? Unfortunately, this didn't work for me in the ksh shell.
2) using coprocesses
3) using nohup
I decided to go with nohup as it was easier to implement. I made a separate script out of the sqlplus part which needed to run in the background and called it from the main script using nohup:
nohup script-name.ksh ${parameter1} ${parameter2} &
This worked for me.
I have to source a tcsh script to modify environment variables.
Some tests are to be done, and if any fails, the sourcing shall stop without exiting the shell. I do not want to run the script as a subprocess because I need to modify env variables in the parent process, which a subprocess cannot do. This is similar to, but different from, this question, where the author actually can run the script as a subprocess.
The usual workaround is to create an alias which runs a script (csh/bash/perl/python/...) that writes a tempfile with all the env var settings and at the end sources and deletes that tempfile. Here's more info for those interested (demoing a solution for bash). For the very simple and short stuff I'm doing, that additional alias is not wanted.
So my workaround is to provoke a syntax error which stops any source execution. Here's an example:
test $ADMIN_USER = `filetest -U: $SOME_FILE` || "Error: Admin user must own admin file"
The short-circuit || causes the error text to be ignored when the test succeeds. On a test failure the error text is interpreted as a command, which is not found, so the source stops and produces a reasonable error message:
Error: Admin user must own admin file: Command not found.
Is there any nicer way in doing this? Some csh/tcsh built-in that I've overlooked?
Thanks to a discussion with the user shellter I just verified my assumption that
test $ADMIN_USER = `filetest -U: $SOME_FILE` || \
echo "Error: Admin user must own admin file" && \
exit
would actually quit the enclosing interactive shell. But it does not.
So the answer to my above question actually is:
Just use a normal exit and the source will stop sourcing the script while keeping the calling interactive shell running.
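A minimal sketch of that, reusing $SOME_FILE from the example above (the file name setup.csh and the variable MY_VAR are hypothetical):
# sourced with:  source setup.csh
if ( ! -e "$SOME_FILE" ) then
    echo "Error: $SOME_FILE is missing"
    exit 1    # stops the sourcing; the interactive shell keeps running
endif
setenv MY_VAR "value"    # only reached when the test above passes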
I'm using xcodebuild inside a bash script on a continuous integration server.
I would like to know in the script when a build has failed, so I can exit prematurely from it and mark the build as failed.
xcodebuild displays a BUILD FAILED message on the console, but I haven't succeeded in getting a return value.
How can I achieve this?
Thanks in advance
I solved my problem using this command: xcodebuild -... || exit 1
You can use the "$?" variable to get the return code of the previous command.
xcodebuild -...
if [[ $? == 0 ]]; then
    echo "Success"
else
    echo "Failed"
fi
xcodebuild always returns 0, regardless of the actual test result. You should check for either ** BUILD FAILED ** or ** BUILD SUCCEEDED ** in the output to know whether tests pass or not.
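A minimal sketch of that output check, assuming bash; the project and scheme names are hypothetical placeholders:
# Capture the output and look for the failure marker with a fixed-string match.
output=$(xcodebuild -project MyApp.xcodeproj -scheme MyApp test 2>&1)
echo "$output"
if echo "$output" | grep -qF '** BUILD FAILED **'; then
    echo "Build failed"
    exit 1
fi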
xcodebuild can return any of the error codes listed below; it is not restricted to EX_OK (or int 0).
However, I learnt from the solution provided by Dmitry and modified it as follows. It works for me and I hope it is helpful.
xcodebuild -project ......
if test $? -eq 0
then
    echo "Success"
else
    echo "Failed"
fi
Error codes
The following are the error codes defined by sysexits that can be returned:
EX_OK (0): Successful exit.
EX_USAGE (64): The command was used incorrectly, e.g., with the wrong number of arguments, a bad flag, a bad syntax in a parameter, or whatever.
EX_DATAERR (65): The input data was incorrect in some way. This should only be used for user's data and not system files.
EX_NOINPUT (66): An input file (not a system file) did not exist or was not readable. This could also include errors like "No message" to a mailer (if it cared to catch it).
EX_NOUSER (67): The user specified did not exist. This might be used for mail addresses or remote logins.
EX_NOHOST (68): The host specified did not exist. This is used in mail addresses or network requests.
EX_UNAVAILABLE (69): A service is unavailable. This can occur if a support program or file does not exist. This can also be used as a catchall message when something you wanted to do doesn't work, but you don't know why.
EX_SOFTWARE (70): An internal software error has been detected. This should be limited to non-operating-system-related errors as far as possible.
EX_OSERR (71): An operating system error has been detected. This is intended to be used for such things as "cannot fork", "cannot create pipe", or the like. It includes things like getuid returning a user that does not exist in the passwd file.
EX_OSFILE (72): Some system file (e.g., /etc/passwd, /var/run/utmp, etc.) does not exist, cannot be opened, or has some sort of error (e.g., syntax error).
EX_CANTCREAT (73): A (user specified) output file cannot be created.
EX_IOERR (74): An error occurred while doing I/O on some file.
EX_TEMPFAIL (75): Temporary failure, indicating something that is not really an error. In sendmail, this means that a mailer (e.g.) could not create a connection, and the request should be reattempted later.
EX_PROTOCOL (76): The remote system returned something that was "not possible" during a protocol exchange.
EX_NOPERM (77): You did not have sufficient permission to perform the operation. This is not intended for file system problems, which should use EX_NOINPUT or EX_CANTCREAT, but rather for higher level permissions.
EX_CONFIG (78): Something was found in an unconfigured or misconfigured state.
For more info click here.
Maybe it's not that xcodebuild fails to return a non-zero status when the build fails. Your shell script continuing to run past the failing line might be because you didn't run the script with the -e option.
Try putting #!/bin/bash -e at the top of the script file.
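Equivalently, set -e near the top of an existing script has the same effect; a small sketch (the xcodebuild arguments are placeholders):
#!/bin/bash
set -e                                       # abort the script as soon as any command fails
xcodebuild -project MyApp.xcodeproj build    # hypothetical invocation
echo "This line only runs if the build succeeded"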
Generally, you can always check the exit status of the last run command in a Unix bash shell with:
$ echo $?
where $? is a built-in placeholder for the exit status of the last executed command. For more details about other bash built-in variables, see here.
So, first run the command whose return code you want to investigate, then run echo as above.
Another thing to check is whether the compiled product (.a or .ipa file) exists.
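A small sketch of that check, assuming bash; the product path is a hypothetical example and depends on your scheme and configuration:
PRODUCT="build/Release-iphoneos/MyApp.ipa"
if [ -f "$PRODUCT" ]; then
    echo "Build product found: $PRODUCT"
else
    echo "Build product missing: $PRODUCT" >&2
    exit 1
fi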