I have a pipeline that needs to copy some files from a folder to a new one, but only if the files exist in the source folder.
This is my script line:
script:
- cp source_folder/file.txt dest_folder/ 2>/dev/null
I have also tried this:
script:
- test -f source_folder/file.txt && cp source_folder/file.txt dest_folder/ 2>/dev/null
but it still fails if the file does not exist.
Cleaning up project directory and file based variables.
ERROR: Job failed: exit code 1
How can I check the file and copy it only if exists?
EDIT:
this command is executed on a server; the pipeline uses ssh to log in.
Check for the existence of the file (-f) and, if it is there, copy it.
script:
- |
files=(conf.yaml log.txt)
for file in "${files[@]}"; do
  if [[ -f "source_folder/$file" ]]; then
    cp "source_folder/$file" dest_folder
  fi
done
Take a look at other answers for one-shot less-flexible statements.
Note: I haven't tested the script above, but I'm quite accustomed to GitLab pipelines and bash.
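If you only need the single file from the question and want the job to keep going when it is missing, a one-line variant along these lines should also work (a sketch, untested):
script:
  - if [ -f source_folder/file.txt ]; then cp source_folder/file.txt dest_folder/; fi
Unlike test -f ... && cp ..., the if form exits 0 even when the file is absent, which is why the original attempt failed: when the test fails, the && chain returns a non-zero status and GitLab marks the job as failed.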
Related
I have a CI stage with the following command, which has to be executed remotely: it checks whether the mentioned file exists and, if so, creates a backup of it.
script: |
ssh ${USER}@${HOST} '([ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt)'
The issue is, this job always fails whether the file exists or not with the following output:
ssh user@hostname '([ -f /etc/file/path/test_1.txt ] && cp -v /etc/file/path/test_1.txt /etc/file/path/test_1_$CI_COMMIT_TIMESTAMP.txt)'
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
Running the same command manually just works fine. So,
How can I make sure that this job succeeds as long as the command logic is executed successfully, and only fails in case there are some genuine failures?
The job only sees the exit status of the ssh instruction, and ssh exits with the status of the remote command; when the file is missing, the [ -f ... ] && cp ... chain returns 1, so the job fails. You can force any instruction to always succeed by appending || true to it.
However, if you want to see and save the output of your remote instruction, you can do something like this:
ssh user@host command 2>&1 | tee ssh-session.log
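Applied to the command from the question (keeping its variables as-is), that could look roughly like this; a sketch, untested:
script: |
  ssh ${USER}@${HOST} '[ -f "${PATH}/test_1.txt" ] && cp -v "${PATH}/test_1.txt" ${PATH}/test_1_$CI_COMMIT_TIMESTAMP.txt' || true
Bear in mind that || true makes the line exit 0 even when ssh itself fails; if you only want a missing file to be tolerated, moving the existence check into an if ... fi on the remote side is the stricter option.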
I am running the following command to remove files from a GCS bucket prior to loading new files there.
gsutil -m rm gs://mybucket/subbucket/*
If there are no files in the bucket, it throws the "CommandException: One or more URLs matched no objects".
I would like it to delete the files if they exist, without throwing the error.
The same error occurs with gsutil ls gs://mybucket/subbucket/*
How can I rewrite this without having to handle the exception explicitly? Or, how best to handle these exceptions in a batch script?
Try this:
gsutil -m rm gs://mybucket/foo/* 2> /dev/null || true
Or:
gsutil -m ls gs://mybucket/foo/* 2> /dev/null || true
This has the effect of suppressing stderr (it's directed to /dev/null) and returning a success exit code even on failure.
You might not want to ignore all errors, as an error might indicate something other than "file not found". With the following script you'll ignore only the 'One or more URLs matched no objects' message but will be informed of any different error. And if there is no error it will just delete the files:
gsutil -m rm gs://mybucket/subbucket/* 2> temp
if [ $? -eq 1 ]; then
  grep -q 'One or more URLs matched no objects' temp
  if [ $? -eq 0 ]; then
    echo "no such file"
  else
    cat temp
  fi
fi
rm temp
This redirects stderr to a temp file and checks the message to decide whether to ignore it or show it.
And it also works for single file deletions. I hope it helps.
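If you prefer to avoid the temp file, the same idea can be written by capturing stderr in a variable instead (a sketch along the same lines, untested):
err=$(gsutil -m rm gs://mybucket/subbucket/* 2>&1 >/dev/null) || {
  if echo "$err" | grep -q 'One or more URLs matched no objects'; then
    echo "no such file"
  else
    echo "$err" >&2
  fi
}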
Refs:
How to grep standard error stream
Bash Reference Manual - Redirections
You may like gsutil rsync to sync files and folders to a bucket. I used it for clearing a folder in a bucket and replacing it with new files from my build script.
gsutil rsync -d newdata gs://mybucket/data - replaces data folder with newdata
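Since -d deletes objects in the bucket that are not present in the source folder, it can be worth doing a dry run first with the -n flag before trusting it in a build script:
gsutil rsync -d -n newdata gs://mybucket/data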
I'm on a Windows machine using Git 2.7.2.windows.1 with MinGW 64.
I have a script in C:/path/to/scripts/myScript.sh.
How do I execute this script from my Git Bash instance?
It was possible to add it to the .bashrc file and then just execute the entire .bashrc file.
But I want to add the script to a separate file and execute it from there.
Let's say you have a script script.sh. To run it (using Git Bash), you do the following: [a] Add a "sh-bang" line on the first line (e.g. #!/bin/bash) and then [b]:
# Use ./ (or any valid dir spec):
./script.sh
Note: chmod +x does nothing to a script's executability on Git Bash. It won't hurt to run it, but it won't accomplish anything either.
#!/usr/bin/env sh
This is how Git Bash knows a file is executable. chmod a+x does nothing in Git Bash. (Note: any shebang will work, e.g. #!/bin/bash, etc.)
If you wish to execute a script file from the git bash prompt on Windows, just precede the script file with sh
sh my_awesome_script.sh
If you are on Linux or Ubuntu, write ./file_name.sh
If you are on Windows, just write sh before the file name, like this: sh file_name.sh
For Linux -> ./filename.sh
For Windows -> sh file_name.sh
If you're running an export command in your bash script, the solutions given above may not export anything even though they run the script. As an alternative, you can source your script using
. script.sh
Now if you try to echo your variable, it will be shown. Check the result in my Git Bash:
(coffeeapp) user (master *) capstone
$ . setup.sh
done
(coffeeapp) user (master *) capstone
$ echo $ALGORITHMS
[RS256]
(coffeeapp) user (master *) capstone
$
Check more details in this question.
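For context on why sourcing matters here: ./setup.sh runs in a child shell, so anything it exports disappears when the script finishes, while . setup.sh runs in the current shell. A minimal illustration (hypothetical contents, not the actual setup.sh from the session above):
# setup.sh
export ALGORITHMS='[RS256]'
echo done
After . setup.sh the variable is visible with echo $ALGORITHMS; after ./setup.sh it is not.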
I had a similar problem, but I was getting an error message
cannot execute binary file
I discovered that the filename contained non-ASCII characters. When those were fixed, the script ran fine with ./script.sh.
Once you're in the directory, just run it as ./myScript.sh
If by any chance you've changed the default program for opening .sh files to a text editor, like I had, you can just run bash .\yourscript.sh, provided you have Git Bash installed and on your PATH.
I had two .sh scripts to start and stop DigitalOcean servers that I wanted to run from Windows 10. What I did is:
downloaded "Git for Windows" (from https://git-scm.com/download/win).
installed Git
to execute the .sh script, I just double-clicked the script file and it started executing.
Now, to run the script each time, I just double-click the script file.
#!/bin/bash at the top of the file is what makes the .sh file executable from Git Bash.
I agree that chmod does not do anything, but the line above solves the problem.
You can either give the entire path in Git Bash to execute it, or add its directory to the PATH variable:
export PATH=$PATH:/path/to/the/script
Then you can run it from anywhere.
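Note that an export typed at the prompt only lasts for the current session; to make it permanent you can append the same line to your ~/.bashrc (adjust the path to your own script directory):
echo 'export PATH=$PATH:/path/to/the/script' >> ~/.bashrc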
I have a shell script which in turn calls a SQL file. It's a bash shell script running on UNIX. The following are the main steps taken in the script:
1) Generate the Term file.
2) Remove the previous day's Term and Rpt files from the Utility directory.
3) Copy the Term file from the Run directory to the Utility directory.
4) Run the SQL file.
5) Copy the output RPT file from the Utility directory to the Run directory.
Here is the code snippet:
> RUN_DIR/nj.terms
if [[ -s RUN_DIR/nj.terms ]]; then
  rm -f /utl/nj.terms
  rm -f /utl/nj.rpt
  cp RUN_DIR/nj.terms utl
  /bin/sqlplus USER PSWD @sql
  cp utl/nj.RPT RUN_DIR
fi
I get the following error in the SQL output:
ORA-29283 - Invalid file operation.
Mostly this error is due to the absence of the Term file at the moment the SQL runs. Because of this error the RPT file is not generated, which causes a failure in the following copy command (cp utl/nj.RPT RUN_DIR). After the failure, when we checked the Term file, it was present in the Utl directory.
This error occurs randomly. Is there any chance the system takes more time to copy the Term file to the Utility directory, so that the SQL runs before the copy completes? It would be great if someone could help me with this situation.
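One way to rule out the timing theory is to let the sqlplus call run only after the copy has returned successfully and the file is confirmed non-empty on the utility side, for example (a sketch only, using the same placeholders as the snippet above):
cp RUN_DIR/nj.terms utl \
  && [[ -s utl/nj.terms ]] \
  && /bin/sqlplus USER PSWD @sql
cp does not return until the destination file has been written, so if the file is still missing when sqlplus starts, a path or permission problem is more likely than a slow copy.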
I own a QNAP-219P and I want to set this up manually using s3cmd.
I did quite a bit of research on this, and here are the references I got:
http://web.archive.org/web/20091120211330/http://codemonkeybrown.com/qnaps3.html
http://wiki.qnap.com/wiki/Running_Your_Own_Application_at_Startup
http://wiki.qnap.com/wiki/Add_items_to_crontab
http://blog.wingateuk.com/2013/03/cloud-backup-on-qnap-nas.html?showComment=1413660445187#c8935766892046800936
I'm trying to get the s3cmd to work on my TS-219P.
I got everything to work on the command line, even running the script file (s3-backup.sh):
#!/bin/bash <-- I also tried #!/bin/sh
/share/maintenance/s3cmd-1.5.0-rc1/s3cmd --rr sync -rv /share/all-shared-folders/emilie/ s3://kingjim-backup/kingjim-nas/emilie/ >> /share/maintenance/log/s3cmd/backup_`date "+%Y%m%d-%H-%M"`.log <-- I also tried running s3cmd via python by adding /usr/bin/python on the front.
If I run it from an SSH command prompt, it works perfectly.
The problem, though, is the cron job. I can confirm the cron job triggered and ran, because my log file (the one above) was generated, but the log is always empty, even though I'm sure some new files were created or modified.
This is my cron job entry:
14 3 * * * /share/maintenance/s3-backup.sh 2>&1 | logger
I've tried a number of different variations on the above, but couldn't figure out what was missing.
I feel like some dependency is missing when cron runs the script, compared to when I run it at the command prompt. But I don't know how to debug crontab.
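A common way to debug this kind of difference is to dump the environment from both contexts and compare them, since cron runs with a much smaller environment than an interactive shell (a sketch; the file names are arbitrary):
# temporary crontab entry to capture cron's environment
* * * * * env > /tmp/cron-env.txt
# from an interactive SSH session
env > /tmp/shell-env.txt
diff /tmp/cron-env.txt /tmp/shell-env.txt
Differences in HOME or PATH are the usual culprits, which fits the resolution below: s3cmd looks for its configuration in $HOME/.s3cfg by default.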
Found out that the problem was that the s3cmd configuration file was not found when s3cmd ran from cron.
So the fix was simply to copy this .s3config file to a safe shared folder, and then call s3cmd with the --config parameter followed by the path to that file.
Like this:
/share/maintenance/s3-backup/s3cmd/s3cmd --config /share/maintenance/s3-backup/s3cmd.config \
  --rr sync -rv /share/MD0_DATA/ s3://xxx-backup/xxx-nas/ \
  >> /share/maintenance/s3-backup/logs/backup_`date "+%Y%m%d-%H-%M"`.log 2>&1