I am writing a simple script which displays the regular files in a directory.
#!/bin/bash
for FILE in "$#"
do
if [ -f "$FILE" ]
then
ls -l "$FILE"
fi
done
Even though my directory has 2 files, this script is not showing anything.
Can someone please tell me what is wrong with my script?
Why don't you go for a simple command like:
ls -p | grep -v /
Coming to your issue:
#!/bin/bash
for FILE in "$#"
do
if [ -f "$FILE" ]
then
ls -l "$FILE"
fi
done
Try this:
for FILE in $@/*
instead of
for FILE in "$@"
I am currently trying to understand a kickstart file. The first %post section has the following lines in it:
%post --nochroot
mkdir /mnt/sysimage/tmp/ks-tree-copy
if [ -d /oldtmp/ks-tree-shadow ]; then
cp -fa /oldtmp/ks-tree-shadow/* /mnt/sysimage/tmp/ks-tree-copy
elif [ -d /tmp/ks-tree-shadow ]; then
cp -fa /tmp/ks-tree-shadow/* /mnt/sysimage/tmp/ks-tree-copy
fi
cp /etc/resolv.conf /mnt/sysimage/etc/resolv.conf
cp -f /tmp/ks-pre.log* /mnt/sysimage/root/ || :
cp `awk '{ if ($1 ~ /%include/) {print $2}}' /tmp/ks.cfg` /tmp/ks.cfg /mnt/sysimage/root
%end
I'm not really sure why this section exists. I think it may save some log files created while installing, so that one can look at them afterwards, but that's just a guess.
Thanks in advance for answers :)
I've been having problems with multiple hidden infected PHP files on my server, which are encrypted (so ClamAV can't see them).
I would like to know how to run an SSH command that can find all the infected files and edit them.
Up until now I have located them by their file contents, like this:
find /home/***/public_html/ -exec grep -l '$tnawdjmoxr' {} \;
Note: $tnawdjmoxr is a piece of the code
How do you locate and remove this code inside all PHP files in the directory /public_html/?
You can add xargs and sed:
find /home/***/public_html/ -exec grep -l '$tnawdjmoxr' {} \; | xargs -d '\n' -n 100 sed -i 's|\$tnawdjmoxr||g' --
You may also use sed directly instead of filtering with grep first, but then sed rewrites every file: it alters each file's modification time and may make some unexpected modifications, such as changing line endings.
-d '\n' ensures that every argument is read line by line. This is helpful if filenames have spaces in them.
-n 100 limits the number of files that sed would process in one instance.
-- makes sed recognize filenames starting with a dash. It's also advisable to add it to grep: grep -l -e '$tnawdjmoxr' -- {} \;
File searching may be faster with grep -F, which matches fixed strings rather than regular expressions.
sed -i enables in-place editing.
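Putting the recommendations above together, the command might look like this (a sketch; -type f is added here so that only regular files are grepped):
find /home/***/public_html/ -type f -exec grep -lF -e '$tnawdjmoxr' -- {} \; | xargs -d '\n' -n 100 sed -i 's|\$tnawdjmoxr||g' --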
Besides using xargs, it is also possible to use Bash alone:
find /home/***/public_html/ -exec grep -l '$tnawdjmoxr' {} \; | while IFS= read -r FILE; do sed -i 's|\$tnawdjmoxr||g' -- "$FILE"; done
while IFS= read -r FILE; do sed -i 's|\$tnawdjmoxr||g' -- "$FILE"; done < <(exec find /home/***/public_html/ -exec grep -l '$tnawdjmoxr' {} \;)
readarray -t FILES < <(exec find /home/***/public_html/ -exec grep -l '$tnawdjmoxr' {} \;)
sed -i 's|\$tnawdjmoxr||g' -- "${FILES[@]}"
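Since the question asks about PHP files under /public_html/ specifically, the search can also be narrowed with find's -name test (a sketch, assuming every infected file carries a .php extension):
find /home/***/public_html/ -type f -name '*.php' -exec grep -lF -e '$tnawdjmoxr' -- {} \; | while IFS= read -r FILE; do sed -i 's|\$tnawdjmoxr||g' -- "$FILE"; done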
I am new to awk and shell-based programming. I have a bunch of files named file_0001.dat, file_0002.dat, ..., file_1000.dat. I want to rename the files so that the number after file_ becomes four times the number in the original name. So I want to change
file_0001.dat to file_0004.dat
file_0002.dat to file_0008.dat
and so on.
Can anyone suggest a simple script to do it? I have tried the following, but without any success.
#!/bin/bash
a=$(echo $1 sed -e 's:file_::g' -e 's:.dat::g')
b=$(echo "${a}*4" | bc)
shuf file_${a}.dat > file_${b}.dat
This script will do the trick for you:
#!/bin/bash
for i in `ls -r *.dat`; do
a=`echo $i | sed 's/file_//g' | sed 's/\.dat//g'`
almost_b=`bc -l <<< "$a*4"`
b=`printf "%04d" $almost_b`
rename "s/$a/$b/g" $i
done
Files before:
file_0001.dat file_0002.dat
Files after first execution:
file_0004.dat file_0008.dat
Files after second execution:
file_0016.dat file_0032.dat
Here's a pure bash way of doing it (without bc, rename or sed).
#!/bin/bash
for i in $(ls -r *.dat); do
prefix="${i%%_*}_"
oldnum="${i//[^0-9]/}"
newnum="$(printf "%04d" $(( 10#$oldnum * 4 )))"
mv "$i" "${prefix}${newnum}.dat"
done
To test it, you can do:
mkdir tmp && cd $_
touch file_{0001..1000}.dat
(paste code into convert.sh)
chmod +x convert.sh
./convert.sh
Using bash/sed/find:
files=$(find -name 'file_*.dat' | sort -r)
for file in $files; do
n=$(sed 's/[^_]*_0*\([^.]*\).*/\1/' <<< "$file")
let n*=4
nfile=$(printf "file_%04d.dat" "$n")
mv "$file" "$nfile"
done
ls -r1 | awk -F '[_.]' '{printf "%s %s_%04d.%s\n", $0, $1, 4*$2, $3}' | xargs -n2 mv
ls -r1 lists the files in reverse order, so that, for example, file_0002.dat is renamed to file_0008.dat before file_0001.dat is renamed to file_0004.dat; this avoids overwriting a file that has not been processed yet
the awk part generates the new filename, printing the old and new names as a pair: for example, the line file_0002.dat becomes file_0002.dat file_0008.dat
xargs -n2 passes two arguments at a time to mv
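For instance, with only file_0001.dat and file_0002.dat present, the pipeline ends up running the equivalent of (hypothetical trace):
mv file_0002.dat file_0008.dat
mv file_0001.dat file_0004.dat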
This might work for you:
paste <(seq -f'mv file_%04g.dat' 1000) <(seq -f'file_%04g.dat' 4 4 4000) |
sort -r |
sh
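Here paste joins each old name with its new name into an mv command, sort -r reverses the list so that higher-numbered files are moved first, and sh executes the result. The first generated lines look like this (illustration):
mv file_1000.dat file_4000.dat
mv file_0999.dat file_3996.dat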
This can help:
#!/bin/bash
for i in `cat /path/to/requestedfiles | grep -o '[0-9]*'`; do
count=`bc -l <<< "$i*4"`
echo $count
done
I am trying to use the s3cmd tool to invalidate my files.
It seems s3cmd automatically chooses a distribution for me,
but I have multiple distributions for the same bucket. How can I choose which distribution to invalidate?
I have tried this:
s3cmd sync --cf-invalidate myfile cf://XXXXXXXXXX/mypath
but it does not work. I get this:
Invalid source/destination
Any idea?
Thanks!
I believe you'd be looking to force an invalidation via the origin (i.e. your S3 bucket or similar) like so:
s3cmd --cf-invalidate _site/ s3://example-origin.com/
Here is my final conclusion:
After many, many tries, it turns out the solution is very strict.
Follow this format:
s3cmd sync --cf-invalidate --acl-public --preserve --recursive ./[local_folder] s3://[my_bucket]/[remote_folder]/
When I run this command, the local folder must be inside the directory the command is run from,
the local folder must be prefixed with ./,
and the remote folder must end with /.
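For example, with hypothetical names, syncing a folder ./site in the current directory to a bucket my-bucket would be:
s3cmd sync --cf-invalidate --acl-public --preserve --recursive ./site s3://my-bucket/site/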
I could not make s3cmd's invalidation work, so I used s3cmd to update the file and cloudfront-invalidator to do the invalidation. The script reads the AWS credentials used by s3cmd and passes them to cloudfront-invalidator.
#!/bin/bash
if [ -z "$(which s3cmd)" ]; then
echo "s3cmd is not installed or is not on the PATH"
exit -1
fi
if [ -z "$(which cloudfront-invalidator)" ]; then
echo "cloudfront-invalidator is not installed or is not on the PATH"
echo "See https://github.com/reidiculous/cloudfront-invalidator"
echo "TL;DR: sudo gem install cloudfront-invalidator"
exit -1
fi
function awsKeyId {
awk -F '=' '{if (! ($0 ~ /^;/) && $0 ~ /aws_access_key_id/) print $2}' ~/.aws/config | tr -d ' '
}
function awsSecret {
awk -F '=' '{if (! ($0 ~ /^;/) && $0 ~ /aws_secret_access_key/) print $2}' ~/.aws/config | tr -d ' '
}
export file="stylesheets/main.css"
export distributionId=blahblah
export bucket=www.blahblah
s3cmd -P -m 'text/css' put public/$file s3://$bucket/$file
cloudfront-invalidator invalidate `awsKeyId` `awsSecret` $distributionId $file
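For reference, the awk helpers above assume that ~/.aws/config holds the credentials in INI form, along these lines (hypothetical values):
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY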
I have a .txt file and on each line is a different file location e.g.
file1.zip
file2.zip
file3.zip
How can I open that file, loop through each line and rm -f filename on each one?
Also, will deleting it throw an error if the file doesn't exist (has already been deleted) and if so how can I avoid this?
EDIT: The file names may have spaces in them, so this needs to be catered for as well.
You can use a for loop with cat to iterate through the lines:
IFS=$'\n'
for file in `cat list.txt`; do
    if [ -f "$file" ]; then
        rm -f "$file"
    fi
done
The if [ -f "$file" ] test checks that the file exists and is a regular file (not a directory); if the check fails, the file is skipped.
The IFS=$'\n' at the top sets the word delimiter to newlines only, which allows filenames containing spaces to be processed.
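Alternatively, a while read loop (a sketch, assuming list.txt holds one filename per line) avoids word splitting and glob expansion entirely; and since rm -f already suppresses errors for nonexistent files, no existence check is needed:
while IFS= read -r file; do
    rm -f -- "$file"
done < list.txt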
xargs -n1 echo < test.txt
Replace echo with rm -f or any other command. You can also pipe the list in: cat test.txt | xargs -n1 rm -f
See man xargs for more info.