s3cmd invalidation: choosing a distribution - amazon-s3

I am trying to use the s3cmd tool to invalidate my files. It seems s3cmd automatically chooses a distribution for me, but I have more than one distribution for the same bucket. How can I choose which distribution to invalidate?
I have tried this:
s3cmd sync --cf-invalidate myfile cf://XXXXXXXXXX/mypath
but it does not work. I get this:
Invalid source/destination
Any idea?
Thanks!

I believe you'd be looking to force an invalidation via the origin (i.e. your S3 bucket or similar) like so:
s3cmd sync --cf-invalidate _site/ s3://example-origin.com/
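If you first need to see which CloudFront distributions exist for your buckets, s3cmd can list them (assuming your s3cmd build has CloudFront support configured):
s3cmd cflist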

Here is my final conclusion:
After many, many tries, the solution turned out to be very strict. Follow this format:
s3cmd sync --cf-invalidate --acl-public --preserve --recursive ./[local_folder] s3://[my_bucket]/[remote_folder]/
When running this command, the local folder must be inside the directory the command is run from, the local path must start with ./, and the remote folder must end with /.
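For example, a concrete invocation following that format (the bucket and folder names here are hypothetical):
s3cmd sync --cf-invalidate --acl-public --preserve --recursive ./assets s3://my-bucket/assets/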

I could not make s3cmd's invalidation work, so I used s3cmd to update the file and cloudfront-invalidator to do the invalidation. The script below reads the AWS credentials that s3cmd uses and passes them to cloudfront-invalidator.
#!/bin/bash
# Upload a file with s3cmd, then invalidate it in CloudFront with
# cloudfront-invalidator, reusing the AWS credentials from ~/.aws/config.
if [ -z "$(which s3cmd)" ]; then
    echo "s3cmd is not installed or is not on the PATH"
    exit 1
fi
if [ -z "$(which cloudfront-invalidator)" ]; then
    echo "cloudfront-invalidator is not installed or is not on the PATH"
    echo "See https://github.com/reidiculous/cloudfront-invalidator"
    echo "TL;DR: sudo gem install cloudfront-invalidator"
    exit 1
fi

# Extract the access key id from ~/.aws/config, skipping commented (;) lines.
function awsKeyId {
    awk -F '=' '{if (! ($0 ~ /^;/) && $0 ~ /aws_access_key_id/) print $2}' ~/.aws/config | tr -d ' '
}

# Extract the secret access key the same way.
function awsSecret {
    awk -F '=' '{if (! ($0 ~ /^;/) && $0 ~ /aws_secret_access_key/) print $2}' ~/.aws/config | tr -d ' '
}

export file="stylesheets/main.css"
export distributionId=blahblah
export bucket=www.blahblah

# Upload the file publicly with the right MIME type, then invalidate it.
s3cmd -P -m 'text/css' put "public/$file" "s3://$bucket/$file"
cloudfront-invalidator invalidate `awsKeyId` `awsSecret` $distributionId $file

Related

RedHat Kickstart file: What does this post-section do?

I am currently trying to understand a Kickstart file. The first %post section has the following lines in it:
%post --nochroot
mkdir /mnt/sysimage/tmp/ks-tree-copy
if [ -d /oldtmp/ks-tree-shadow ]; then
cp -fa /oldtmp/ks-tree-shadow/* /mnt/sysimage/tmp/ks-tree-copy
elif [ -d /tmp/ks-tree-shadow ]; then
cp -fa /tmp/ks-tree-shadow/* /mnt/sysimage/tmp/ks-tree-copy
fi
cp /etc/resolv.conf /mnt/sysimage/etc/resolv.conf
cp -f /tmp/ks-pre.log* /mnt/sysimage/root/ || :
cp `awk '{ if ($1 ~ /%include/) {print $2}}' /tmp/ks.cfg` /tmp/ks.cfg /mnt/sysimage/root
%end
I'm not really sure why this section exists. I think it may save some log files created while installing, so that one can look at them afterwards, but that's just a guess.
Thanks in advance for answers :)

Rename files in Amazon S3

I would like to rename all files in my Amazon S3 bucket with the extension .PDF to .pdf (lowercase).
Did someone already have to do this? There are a lot of files (around 1500). Is s3cmd the best way to do this? How would you do it?
s3cmd --recursive ls s3://bucketname |
awk '{ print $4 }' | grep '\.pdf$' | while read -r line ; do # $4 is the full s3:// URL
    s3cmd mv "$line" "${line%.*}.PDF"
done
A local Linux/Unix example for renaming all files with a .pdf extension to .PDF:
mkdir pdf-test
cd pdf-test
touch a{1..10}.pdf
Before
ls
a1.pdf a2.pdf a4.pdf a6.pdf a8.pdf grep.sh
a10.pdf a3.pdf a5.pdf a7.pdf a9.pdf
The script file grep.sh:
#!/bin/bash
ls | grep '\.pdf$' | while read -r line ; do # with S3, use the s3cmd ls output here
    echo "Processing $line"
    # your s3cmd command goes here
    mv "$line" "${line%.*}.PDF"
done
Add permissions and try
chmod u+x grep.sh
./grep.sh
After
ls
a1.PDF a2.PDF a4.PDF a6.PDF a8.PDF grep.sh
a10.PDF a3.PDF a5.PDF a7.PDF a9.PDF
You can apply the same logic; instead of mv, use s3cmd mv, as sketched below.
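For the question's actual goal (renaming .PDF to .pdf), the combined version might look like this (a sketch, untested; the bucket name is a placeholder):
s3cmd --recursive ls s3://bucketname |
awk '{ print $4 }' | grep '\.PDF$' | while read -r line ; do
    s3cmd mv "$line" "${line%.PDF}.pdf"
done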

NcFTP -S with -bb

I am trying to upload all changed files to my FTP server. However, I cannot use -S .tmp and -v when I use the -bb flag - and I can't use those options with ncftpbatch at all. Here is my code:
#!/bin/bash
set -eo pipefail
IN=$(git diff-tree --no-commit-id --name-only -r HEAD)
OUT=$(echo "$IN" | tr ";" "\n")
for file in "${OUT[@]}"; do
    ncftpput -bb -S .tmp -v -u "zeussite@kolechia.heliohost.org" -p "*****" ftp.kolechia.heliohost.org "/" "$file"
done
ncftpbatch
As you can see, I need the -S .tmp to avoid breaking the site during uploads. -v provides output to prevent my CI service from timing out.
How can I upload only the changed files - without temporarily breaking the site? I'm thinking of just logging in separately for each file, but that is bad practice.
Why not launch a function in the background which just prints dummy values like "uploading, please wait", sleeps for a few seconds, and repeats? Outside the loop you can kill that background job, as sketched below.
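A minimal sketch of that idea (the function name and sleep interval are made up; tune the interval to your CI's timeout):
#!/bin/bash
keepalive() {
    while true; do
        echo "uploading, please wait..."
        sleep 30
    done
}
keepalive &            # start the dummy output in the background
KEEPALIVE_PID=$!
ncftpbatch             # the long-running, quiet batch upload
kill "$KEEPALIVE_PID"  # stop the keep-alive once the upload finishes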
If you don't want any output
printf "\0"
or
printf "a\b"

SSH - Loop through lines from txt file and delete files

I have a .txt file and on each line is a different file location e.g.
file1.zip
file2.zip
file3.zip
How can I open that file, loop through each line and rm -f filename on each one?
Also, will deleting it throw an error if the file doesn't exist (has already been deleted) and if so how can I avoid this?
EDIT: The file names may have spaces in them, so this needs to be catered for as well.
You can use a for loop with cat to iterate through the lines:
IFS=$'\n'
for file in `cat list.txt`; do
    if [ -f "$file" ]; then
        rm -f "$file"
    fi
done
The if [ -f "$file" ] test checks that the file exists and is a regular file (not a directory); if the check fails, the file is skipped.
The IFS=$'\n' at the top sets the field separator to newlines only, which lets you process file names containing spaces.
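A more defensive alternative (a sketch, not part of the original answer) reads the list line by line, which also copes with spaces and avoids any globbing of the file names:
while IFS= read -r file; do
    [ -f "$file" ] && rm -f "$file"
done < list.txt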
xargs -n1 echo < test.txt
Replace echo with rm -f or any other command. You can also use cat test.txt | xargs -n1 rm -f.
See man xargs for more info.
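Note that plain xargs splits on any whitespace, so file names with spaces (which the question mentions) will break. One workaround is to switch to NUL delimiters (a sketch):
tr '\n' '\0' < test.txt | xargs -0 rm -f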

shell scripting - print files in a directory

I am writing a simple script which displays the regular files in a directory.
#!/bin/bash
for FILE in "$@"
do
    if [ -f "$FILE" ]
    then
        ls -l "$FILE"
    fi
done
Even though my directory has 2 files, this script is not showing anything.
Can someone please tell me what is wrong with my script?
Why don't you go for a simple command like:
ls -p | grep -v /
Coming to your issue: your loop iterates over the directory name you pass in, not over the files inside it.
Try
for FILE in "$@"/*
instead of
for FILE in "$@"