I have a plan in Bamboo with n repositories. Via an SSH task I would like to iterate through each of them. The total number of repositories is dynamic.
Bamboo variables are explained here:
https://confluence.atlassian.com/bamboo/bamboo-variables-289277087.html
My approaches look like this:
touch test.txt
#get count
echo "count ${bamboo.planRepository[@]}\n" >> test.txt
for repo in "${bamboo.planRepository[@]}"
do
echo "${repo.name}\n" > test.txt
done
START=1
END=5
i=$START
while [[ $i -le $END ]]
do
printf "${bamboo.planRepository.${i}.name}\n" > test.txt
((i = i + 1))
done
I'm not familiar with shell script syntax in SSH tasks, and none of these worked.
Any suggestions on how to do this?
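For reference, here is a sketch of one common workaround (untested against a live Bamboo instance): Bamboo exports its variables to script tasks as environment variables with dots replaced by underscores, so bamboo.planRepository.1.name becomes bamboo_planRepository_1_name, and the numbered variables can be probed until one is missing. The export lines below only simulate what a real build would provide; the values are made up:

```shell
# Simulate what Bamboo would export to a script task
# (a real build sets these automatically; the values are illustrative):
export bamboo_planRepository_1_name="repo-one"
export bamboo_planRepository_2_name="repo-two"

i=1
while :; do
    eval "name=\${bamboo_planRepository_${i}_name:-}"   # indirect lookup, POSIX sh
    [ -n "$name" ] || break                             # stop at the first undefined index
    echo "repo $i: $name"
    i=$((i + 1))
done
```

Each iteration prints one repository name; redirecting the echo to a file reproduces the test.txt idea from the question.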
Related
I have some files whose time and date are wrong, but the filename contains the correct time and date, and I am trying to write a script to fix this with the touch command.
Example of filename:
071212_090537.jpg
I would like this to be converted to the following format:
1712120905.37
Note: the year is listed as 07 in the filename even though it is 17, so I would like the first 0 to be changed to 1.
How can I do this using awk or sed?
I'm quite new to awk and sed, and to programming in general. I have tried to search for a solution and instructions, but haven't managed to figure out how to solve this.
Can anyone help me?
Thanks. :)
Take your example:
awk -F'[_.]' '{$0=$1$2;sub(/^./,"1");sub(/..$/,".&")}1'<<<"071212_090537.jpg"
will output:
1712120905.37
If you want the files to actually be renamed, you can let awk generate the mv original new commands and pipe the output to sh, like: (comments inline)
listYourFiles |                                   # list your files as input to awk
awk -F'[_.]' '{o=$0;$0=$1$2;sub(/^./,"1");sub(/..$/,".&");
    printf "mv %s %s\n",o,$0}' |                  # this will print "mv original new"
sh                                                # this will execute the mv commands
It's completely unnecessary to call awk or sed for this, you can do it in your shell. e.g. with bash:
$ f='071212_090537.jpg'
$ [[ $f =~ ^.(.*)_(.*)(..)\.[^.]+$ ]]
$ echo "1${BASH_REMATCH[1]}${BASH_REMATCH[2]}.${BASH_REMATCH[3]}"
1712120905.37
This is probably what you're trying to do:
for old in *.jpg; do
[[ $old =~ ^.(.*)_(.*)(..)\.[^.]+$ ]] || { printf 'Warning, unexpected old file name format "%s"\n' "$old" >&2; continue; }
new="1${BASH_REMATCH[1]}${BASH_REMATCH[2]}.${BASH_REMATCH[3]}"
[[ -f "$new" ]] && { printf 'Warning, new file name "%s" generated from "%s" already exists, skipping.\n' "$new" "$old" >&2; continue; }
mv -- "$old" "$new"
done
You need that test for new already existing, since an old of 071212_090537.jpg or 171212_090537.jpg (or various other values) would create the same new of 1712120905.37.
I think sed really is the easiest solution:
You could do this:
▶ for f in *.jpg ; do
new_f=$(sed -E 's/([0-9]{6})_([0-9]{4})([0-9]{2})\.jpg/\1\2.\3.jpg/' <<< "$f")
mv -- "$f" "$new_f"
done
For more info:
You probably need to read an introductory tutorial on regular expressions.
Note that the -E option to sed allows use of extended regular expressions, allowing a more readable and convenient expression here.
Use of <<< is a Bashism known as a "here-string". If you are using a shell that doesn't support that, A <<< $b can be rewritten as echo $b | A.
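For example, these two commands produce the same output:

```shell
tr 'a-z' 'A-Z' <<< "hello"        # here-string (bash, ksh, zsh); prints HELLO
echo "hello" | tr 'a-z' 'A-Z'     # portable pipe equivalent; also prints HELLO
```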
Testing:
▶ touch 071212_090538.jpg 071212_090539.jpg
▶ ls -1 *.jpg
071212_090538.jpg
071212_090539.jpg
▶ for f in *.jpg ; do
new_f=$(sed -E 's/([0-9]{6})_([0-9]{4})([0-9]{2})\.jpg/\1\2.\3.jpg/' <<< "$f")
mv -- "$f" "$new_f"
done
▶ ls -1
0712120905.38.jpg
0712120905.39.jpg
I want a script to exit if disk space usage is beyond a threshold (e.g. 75%). I am trying the following, but with no luck.
df -kh | awk '{if( 0+$5 >= 75 ) exit;}'
The above command is not working. Can anyone help me with this?
This is because your df output does not necessarily come on a single line per filesystem (long device names wrap onto a second line); to handle this you need to add the -P option. Try the following:
df -hP | awk '{if( 0+$5 >= 75 ){print "exiting now..";exit 1}}'
OR
df -hP | awk '$5+0>=75{print "exiting now..";exit 1}'
Or, naming the mount that is the culprit in breaching the threshold:
df -hP | awk '$5+0>=75{print "Mount " $1 " has crossed threshold so exiting now..";exit 1}'
In case you don't have the -P option on your box, try the following:
df -k | awk '/^ +/ && $4+0>=75{print "Mount " prev" has crossed threshold so exiting now..";exit 1} !/^ +/{prev=$0}'
I am using a print statement to make sure the exit is working; also, the -P option was tested on bash systems.
Since the OP said he needs to exit from the complete script itself, I am requesting the OP to add the following code outside of the for loop of his code (I haven't tested it, but this should work):
if [[ $? -eq 1 ]]
then
echo "Exiting the complete script now..."
exit
else
echo "Looks good so going further in script now.."
fi
If you are using this in a script to exit the script (as opposed to exiting a long awk script) then you need to call exit from the outer script:
if df -kh | awk '{if ($5+0 > 75) exit 1 }'; then echo OK; else echo NOT; fi
Don't forget that df returns one line per mount point; you can do:
if df -kh /home ....
to check for a particular mount point.
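Putting the pieces together, here is a minimal sketch of the pattern with simulated df -P output, so the behavior is reproducible (the device names and numbers are made up; in a real script you would pipe df -P straight into awk and uncomment the exit):

```shell
# Fake "df -P" output for demonstration; real use: df -P | awk ...
df_output='Filesystem 1024-blocks Used Available Capacity Mounted-on
/dev/sda1 1000000 800000 200000 80% /'

# awk exits 1 as soon as any filesystem is at or above 75% use
if printf '%s\n' "$df_output" | awk 'NR > 1 && $5+0 >= 75 { exit 1 }'; then
    echo "Disk usage OK, continuing..."
else
    echo "Threshold breached, exiting here."
    # exit 1   # uncomment in a real script
fi
```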
I've searched for a command or solution to repeat a script every n minutes, but I can't find one.
This is my rusty script:
#!/bin/csh -f
rm -rf result120
rm -rf result127
rm -rf result126
rm -rf result125
rm -rf result128
rm -rf result129
rm -rf result122
rm -rf output
rm -rf aaa
### Get job id from user name
foreach file ( `cat name` )
echo `bjobs -u $file | awk '$1 ~ /^[0-9]+/ {print $1}' >> aaa`
echo "loading"
end
### Read in job id
foreach file ( `cat aaa` )
echo `bjobs -l $file >> result120`
echo "loading"
end
### Get pattern in < >
awk '{\
gsub(/ /,"",$0)}\
BEGIN {\
RS =""\
FS=","\
}\
{\
s=1\
e=150\
if ($1 ~/Job/){\
for(i=s;i<=e;i++){\
printf("%s", $(i))}\
}\
}' result120 > result126
grep -oE '<[^>]+>' result126 > result125
### Get Current Work Location
awk '$1 ~ /<lsf_login..>/ {getline; print $1}' result125 >result122 #result127
### Get another information and paste it with CWD
foreach file1 ( `cat aaa` )
echo `bjobs $file1 >> result128`
echo "getting data"
end
awk '$1 ~ /JOBID/ {getline; printf "%-15s %-15s %-15s %-15s %-20s\n", $1, $2, $3, $4, $5}' result128 >> result129
paste result129 result122 >> output
### Summary
awk '{count1[$2]++}{count2[$4]++}{count3[$3]++}\
END{\
print "\n"\
print "##########################################################################"\
print "There are: ", NR " Jobs"\
for(name in count1){ print name, count1[name]}\
print "\n"\
for(queqe in count2){ print queqe, count2[queqe]}\
print "\n"\
for(stt in count3){ print stt, count3[stt]}\
}' output >> output
And I would like to run it every 15 minutes to get a report. Someone told me to use wait, but I've searched man wait and can't find any useful example. That's why I need your help to solve this problem.
Thanks a lot.
Run the script every 15 minutes:
while true; do ./script.sh; sleep 900; done
or set up a cron job, or use watch.
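For the cron route, an entry like the following runs the script every 15 minutes (the paths are illustrative placeholders; install the entry with crontab -e):

```
*/15 * * * * /path/to/script.sh >> /path/to/report.log 2>&1
```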
For the C shell you have to write:
while (1)
./script.sh
sleep 900
end
but why use csh when you have bash? Double-check the syntax, since I don't remember it well anymore...
Following karakfa's answer, you basically have two options.
1) Your first option, even though it uses a sleep, implements a kind of busy-waiting strategy (https://en.wikipedia.org/wiki/Busy_waiting). This strategy uses more CPU/memory than the second option (the cron approach), because the process footprint stays in memory even while it is doing nothing.
2) With the cron approach, on the other hand, the process only exists while it is doing useful work.
Just imagine implementing this kind of approach for many programs running on your machine: a lot of memory would be consumed by processes in waiting states, and it would also have an impact (memory/CPU usage) on your OS's scheduler, since it would have more processes queued to manage.
Therefore, I would absolutely recommend the cron/scheduling approach.
In any case, your cron daemon will be running in the background whether you add the entry to the crontab or not, so why not add it?
Last but not least, imagine your busy-waiting process is killed for any reason: with the first option you will need to restart it manually, and you might lose a couple of monitoring entries in the meantime.
Hope it helps.
I'm writing a Unix script which does an awk and pipes it to a while loop. For some reason, though, the while loop iterates only once. Can someone point out what I am missing?
awk '{ print $1, $2}' file |
while IFS=" " read A B
do
echo $B
if [ "$B" -eq "16" ];
then
grep -A 1 $A $1 | python unreverse.py
else
grep -A 1 $A
fi
done
"file" looks something like
cheese 2
elephant 5
tiger 16
Solution
The solution is to replace:
grep -A 1 $A
With:
grep -A 1 "$A" filename
Where filename is whatever file you intended grep to read from. Just guessing, maybe you intended:
grep -A 1 "$A" "$1"
I added double-quotes to prevent any possible word-splitting.
Explanation
The problem is that, without the filename, the grep command reads from and consumes all of standard input. It does this on the first run through the loop. Consequently, there is no input left for the second run; read A B fails and the loop terminates.
A Simpler Example
We can see the same issue happening with many fewer statements. Here is a while loop that is given two lines of input but only loops once:
$ { echo 1; echo 2; } | while read n; do grep "$n"; echo "n=$n"; done
n=1
Here, simply by adding a filename to the grep statement, we see that the while loop executes twice, as it should:
$ { echo 1; echo 2; } | while read n; do grep "$n" /dev/null; echo "n=$n"; done
n=1
n=2
I'm trying to do something like this (logic wise) but it's not working:
if (ls | wc -l ) >100; echo "too many files!"
else ls;
The point is to add this to my .bashrc.
Any ideas?
Just an edit, because I think I was slightly misunderstood: what I want is that when I type ls (or an alias that runs a modified ls) anywhere, the files are only listed when there aren't a lot of them (something I can add to my .bashrc). Being a bit of a moron, I sometimes type ls in directories where I have thousands of files, so I'd like a way to circumvent that.
Rather than parsing ls, which is not best practice, you can do this with a bash array:
files=(*)
if ((${#files[@]} > 100)); then echo 'Too many files!'; else ls; fi
Probably in the actual problem you want a specific directory, and not the CWD; in that case, you might want something like this:
files=(/path/to/directory/*)
if ((${#files[@]} > 100)); then
echo 'Too many files!'
else (
cd /path/to/directory
ls
)
fi
Note that I've wrapped the cd into a parenthesized compound command, which means that the cd will be local. That assumes that you don't actually want the full path to appear in the output of ls.
You can do using find:
numFiles=$(find . -maxdepth 1 ! -name . -print0 | xargs -0 -I % echo . | wc -l)
(( numFiles > 100 )) && echo "too many files!" || ls
You can make this as function and put it in .bashrc
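A sketch of such a function for .bashrc (the name chkls and the threshold 100 are made up for illustration; this is bash-specific):

```shell
chkls() {
    local files=(*)                         # array of entries in the current directory
    if (( ${#files[@]} > 100 )); then
        echo "Too many files (${#files[@]}) in $PWD" >&2
    else
        command ls "$@"                     # fall through to the real ls
    fi
}
```

Note that with default globbing an empty directory leaves one literal * entry in the array, which is harmless here since 1 is below the threshold.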
As others have pointed out, this is not an accurate way to count the number of files you have. It will miscount files that contain newlines for example and can have other issues.
It is, however, a perfectly good way to count the number of lines that ls will print and not show them if they're too many which is what you're presumably trying to do.
So, to answer your general question, to make one command depend on the result of another, you can use one of
command1 && command2
That will run command2 only if command1 was successful. If you want the second to be executed only if the first's results pass some test you can use:
[ command1 ] && command2
For your example, that would be:
[ $(ls | wc -l) -gt 100 ] && echo too many
To also execute ls again if the test is passed, use either
[ $(ls | wc -l) -gt 100 ] && echo 'too many' || ls
or
if [ $(ls | wc -l) -gt 100 ]; then echo 'too many files!'; else ls; fi
However, all of these are inelegant since they need to run the command twice. A better way might be to run the command once, save its output to a variable and then test the variable:
x=$(ls); [ $(wc -l <<<"$x") -gt 100 ] && echo 'too many!' || printf "%s\n" "$x"
Here, the output of ls is saved in the variable x; that variable is then given as input to wc, and if it has more than 100 lines a message is printed. Otherwise, the contents of the variable are printed.
For the sake of completeness, here's another safe approach that will count files correctly:
[ $(find -maxdepth 1 | grep -cF './') -gt 100 ] && echo 'too many!' || ls
A quick one liner:
test `find . -maxdepth 1 -type f|wc -l` -gt 100 && echo "Too Many Files"
A short one
[ $(ls | wc -l ) -gt 100 ] && echo too many
Combining some of the responses above - a simple Alias:
alias chkls='MAX=100 ; F=(*) ; if [[ ${#F[*]} -gt ${MAX} ]] ; then echo "## Folder: $(pwd) ## Too many files: ${#F[*]} ##" ; else ls ; fi '