In my Travis script, which uploads contents to an S3 bucket as follows:
deploy:
  provider: script
  skip_cleanup: true
  script: "~/.local/bin/aws s3 sync dist s3://mybucket --region=eu-west-1 --delete"
before_deploy:
  - npm run build
  - pip install --user awscli
is there a way to also set a no-cache header on a particular file in that bucket (i.e. sw.js)? Is that currently possible in the SDK?
I'm afraid this is not possible with a single s3 sync command, but you can try executing two commands using the exclude and include options: one to sync everything except sw.js, and another one just for sw.js.
script: ~/.local/bin/aws s3 sync dist s3://mybucket --include "*" --exclude "sw.js" --region eu-west-1 --delete ; ~/.local/bin/aws s3 sync dist s3://mybucket --exclude "*" --include "sw.js" --region eu-west-1 --delete --cache-control "no-cache" --metadata-directive REPLACE
Note: the --metadata-directive REPLACE option is necessary for non-multipart copies.
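To confirm the header was actually applied after the deploy, you can inspect the object's metadata with s3api (bucket and key names taken from the question); it should print "no-cache" once the second sync has run:
~/.local/bin/aws s3api head-object --bucket mybucket --key sw.js --query CacheControl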
How can I include the _ (underscore) in the include pattern?
I have an S3 bucket with files in the following format:
20220630_084021_abc.json
20220630_084031_def.json
20220630_084051_ghi.json
20220630_084107_abc.json
20220630_084118_def.json
So I would like to get all the files that start with 20220630_0840*.
I've tried to fetch them using multiple variations of the include pattern; so far I have used the following:
aws s3 cp s3://BUCKET/ LocalFolder --include "20220630_0840*" --recursive
aws s3 cp s3://BUCKET/ LocalFolder --include "[20220630_0840]*" --recursive
aws s3 cp s3://BUCKET/ LocalFolder --include "20220630\_0840*" --recursive
None of them really works; I'm still getting all the files whose names start with 20220630.
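For what it's worth, the CLI applies the filters in order and --include only re-includes files that a preceding --exclude filtered out, so on its own it has no effect. Excluding everything first and then re-including the prefix should narrow the copy (same bucket and folder placeholders as above):
aws s3 cp s3://BUCKET/ LocalFolder --recursive --exclude "*" --include "20220630_0840*"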
When S3 integration is enabled on Ant Media Server, recorded VoDs are uploaded to the S3 bucket, but when we upload a VoD file to the server ourselves it is not uploaded to the S3 bucket. Is there any way to do that on the server side?
You can automatically transfer VoD files to the S3 bucket with the script below by following these steps.
1. Install FFmpeg
apt-get update && apt-get install ffmpeg -y
2. Save the script under /usr/local/antmedia/ and make it executable with chmod (don't forget to add your AWS access/secret keys).
You can download the script from the following link or you can find it at the bottom of the page.
https://github.com/muratugureminoglu/Scripts/blob/master/vod-upload-s3.sh
chmod +x /usr/local/antmedia/vod-upload-s3.sh
3. Modify the red5-web.properties file in your webapps as follows.
vim [AMS-DIR]/webapps/your_application/WEB-INF/red5-web.properties
Add or change the following line.
settings.vodUploadFinishScript=/usr/local/antmedia/vod-upload-s3.sh
4. Restart Ant Media Server service.
systemctl restart antmedia
5. Follow the link below to play the uploaded VoD files.
VOD not playing after s3 recording enabled in Ant Media server
That's it.
Script
#!/bin/bash
#
# Installation Instructions
#
# apt-get update && apt-get install ffmpeg -y
# vim [AMS-DIR]/webapps/applications(LiveApp or etc.)/WEB-INF/red5-web.properties
# settings.vodUploadFinishScript=/Script-DIR/vod-upload-s3.sh
# sudo service antmedia restart
#
# Check if AWS CLI is installed
if [ -z "$(which aws)" ]; then
rm -r aws* > /dev/null 2>&1
echo "Please wait. AWS Client is installing..."
curl "https://d1vvhvl2y92vvt.cloudfront.net/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" > /dev/null 2>&1
unzip awscliv2.zip > /dev/null 2>&1
sudo ./aws/install &
wait $!
echo "AWS Client installed."
rm -r aws*
fi
# Delete the uploaded VoD file from local disk
DELETE_LOCAL_FILE="Y"
AWS_ACCESS_KEY=""
AWS_SECRET_KEY=""
AWS_REGION=""
AWS_BUCKET_NAME=""
#AWS Configuration
aws configure set aws_access_key_id $AWS_ACCESS_KEY
aws configure set aws_secret_access_key $AWS_SECRET_KEY
aws configure set region $AWS_REGION
aws configure set output json
tmpfile="$1"
# Remux the VoD so the moov atom is at the start of the file (faststart) for quicker playback start
mv "$tmpfile" "${tmpfile%.*}.mp4_tmp"
ffmpeg -i "${tmpfile%.*}.mp4_tmp" -c copy -map 0 -movflags +faststart "$tmpfile"
rm "${tmpfile%.*}.mp4_tmp"
# Upload the remuxed VoD to the S3 bucket
aws s3 cp "$tmpfile" "s3://$AWS_BUCKET_NAME/streams/" --acl public-read
if [ $? -ne 0 ]; then
    logger "$tmpfile failed to copy to S3."
else
    # Delete the local file only if requested and the object exists in the bucket
    if [ "$DELETE_LOCAL_FILE" == "Y" ]; then
        aws s3api head-object --bucket "$AWS_BUCKET_NAME" --key "streams/$(basename "$tmpfile")"
        if [ $? -eq 0 ]; then
            rm -f "$tmpfile"
            logger "$tmpfile is deleted."
        fi
    fi
fi
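The script expects the full path of the recorded VoD file as its only argument, so you can test it manually before wiring it into AMS (the path below is just an example):
/usr/local/antmedia/vod-upload-s3.sh /usr/local/antmedia/webapps/LiveApp/streams/test-stream.mp4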
I want to upload a directory (a folder consisting of other folders and .txt files) to a folder (partition) in a specific S3 bucket, along with a given KMS key ID, via the CLI.
The command I found for uploading a jar file to an S3 bucket:
aws s3 sync /?? s3://???-??-dev-us-east-2-813426848798/build/tmp/snapshot --sse aws:kms --sse-kms-key-id alias/nbs/dev/data --delete --region us-east-2 --exclude "*" --include "*.?????"
Suppose:
Location (Bucket Name with folder name) - "s3://abc-app-us-east-2-12345678/tmp"
KMS-id - https://us-east-2.console.aws.amazon.com/kms/home?region=us-east-2#/kms/keys/aa11-123aa-45/
Directory to be uploaded - myDirectory
And I want to know:
Can the same command be used to upload a directory with a bunch of files and folders in it?
If so, how should this command be changed?
The cp command works this way:
aws s3 cp ./localFolder s3://awsexamplebucket/abc --recursive --sse aws:kms --sse-kms-key-id a1b2c3d4-e5f6-7890-g1h2-123456789abc
I haven't tried the sync command with KMS, but the way you use sync is:
aws s3 sync ./localFolder s3://awsexamplebucket/remotefolder
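Putting the two together for the directory in the question (the bucket, folder, and key ID below are the placeholders from the question, so adjust them to your own), something along these lines should upload the whole tree with the KMS key:
aws s3 sync ./myDirectory s3://abc-app-us-east-2-12345678/tmp --sse aws:kms --sse-kms-key-id aa11-123aa-45 --region us-east-2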
I'm trying to remove the objects (empty bucket) and then copy new ones into an AWS S3 bucket:
aws s3 rm s3://BUCKET_NAME --region us-east-2 --recursive
aws s3 cp ./ s3://BUCKET_NAME/ --region us-east-2 --recursive
The first command fails with the following error:
An error occurred (InvalidRequest) when calling the ListObjects
operation: You are attempting to operate on a bucket in a region that
requires Signature Version 4. You can fix this issue by explicitly
providing the correct region location using the --region argument, the
AWS_DEFAULT_REGION environment variable, or the region variable in the
AWS CLI configuration file. You can get the bucket's location by
running "aws s3api get-bucket-location --bucket BUCKET". Completed 1
part(s) with ... file(s) remaining
Well, the error prompt is self-explanatory, but the problem is that I've already applied the solution (I've added the --region argument) and I'm completely sure that it is the correct region (I got the region the same way the error message suggests).
Now, to make things even more interesting, the error happens in a GitLab CI environment (let's just say some server). But just before this error occurs, the exact same command is executed against other buckets and works. It's worth mentioning that those other buckets are in different regions.
Now, to top it all off, I can execute the command on my personal computer with the same credentials as on the CI server! So to summarize:
server$ aws s3 rm s3://OTHER_BUCKET --region us-west-2 --recursive <== works
server$ aws s3 rm s3://BUCKET_NAME --region us-east-2 --recursive <== fails
my_pc$ aws s3 rm s3://BUCKET_NAME --region us-east-2 --recursive <== works
Does anyone have any pointers what might the problem be?
For anyone else who might be facing the same problem: make sure your AWS CLI is up to date!
server$ aws --version
aws-cli/1.10.52 Python/2.7.14 Linux/4.13.9-coreos botocore/1.4.42
my_pc$ aws --version
aws-cli/1.14.58 Python/3.6.5 Linux/4.13.0-38-generic botocore/1.9.11
Once I updated the server's aws cli tool, everything worked. Now my server is:
server$ aws --version
aws-cli/1.14.49 Python/2.7.14 Linux/4.13.5-coreos-r2 botocore/1.9.2
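If the CLI was installed with pip (as in the Travis example earlier on this page), upgrading it is usually just:
pip install --user --upgrade awscli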
I'm copying files between S3 buckets from specific dates that are not sequential.
In the example I'm copying from the 23rd, but I'd like to copy the 15th, 19th, and 23rd.
aws s3 --region eu-central-1 --profile LOCALPROFILE cp s3://SRC s3://DEST --recursive --exclude "*" --include "2016-01-23"
This source mentions using sequences for include: http://docs.aws.amazon.com/cli/latest/reference/s3/
It appears that you are asking how to copy multiple files/paths in one command.
The AWS Command-Line Interface (CLI) allows multiple --include specifications, e.g.:
aws s3 cp s3://SRC s3://DEST --recursive --exclude "*" --include "2016-01-15/*" --include "2016-01-19/*" --include "2016-01-23/*"
The first --exclude says to exclude all files, then the subsequent --include parameters add paths to be included in the copy.
See: Use of Exclude and Include Filters in the documentation.
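If you want to double-check which objects the filters will match before actually copying anything, the CLI's --dryrun flag lists the operations without performing them:
aws s3 cp s3://SRC s3://DEST --recursive --exclude "*" --include "2016-01-15/*" --include "2016-01-19/*" --include "2016-01-23/*" --dryrun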