Upload only files with no extension in filename - amazon-s3

I would like to run aws s3 sync . s3://<some bucket>, use the --exclude flag to exclude all files that have an extension in the filename, and change the content-type.
I tried the following, but it does not work; it still matches files with extensions.
/usr/bin/aws s3 sync /home/www s3://<bucket name> --dryrun --exclude "*.*" --include "*" --content-type text/html

You just need --exclude to exclude files with extensions. Filters are applied in the order they appear, so the trailing --include "*" in your command re-includes everything the --exclude just filtered out; drop it:
aws s3 sync --exclude "*.*" --content-type="text/html" . s3://hernan-test-bucket/
Example execution:
:~# ls
file1 file2 file3.txt file4.txt
:~# aws s3 sync --exclude "*.*" --content-type="text/html" . s3://bucket/
upload: ./file1 to s3://bucket/file1
upload: ./file2 to s3://bucket/file2
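Because filters are evaluated in order, --dryrun makes the difference easy to see; a quick sketch against the same directory (bucket name hypothetical):
:~# aws s3 sync --exclude "*.*" --include "*" --dryrun . s3://bucket/
:~# aws s3 sync --exclude "*.*" --dryrun . s3://bucket/
The first command would list all four files as uploads, because --include "*" re-includes the .txt files; the second would list only file1 and file2.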

Related

How to copy files from S3 using include pattern with underscore on the file name

How can I include the _ (underscore) in the include pattern?
I have an S3 bucket with files in the following format:
20220630_084021_abc.json
20220630_084031_def.json
20220630_084051_ghi.json
20220630_084107_abc.json
20220630_084118_def.json
I would like to get all the files that start with 20220630_0840. I've tried to fetch them using multiple variations of the include pattern; so far I have used the following:
aws s3 cp s3://BUCKET/ LocalFolder --include "20220630_0840*" --recursive
aws s3 cp s3://BUCKET/ LocalFolder --include "[20220630_0840]*" --recursive
aws s3 cp s3://BUCKET/ LocalFolder --include "20220630\_0840*" --recursive
None of them really works; I'm getting all the files whose names start with 20220630.
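A likely culprit, judging by the AWS CLI filter semantics rather than the underscore: every file is included by default, and --include on its own only re-includes files that an earlier --exclude filtered out, so none of the commands above actually narrow the match. A sketch of the fix, reusing the question's bucket and folder names:
aws s3 cp s3://BUCKET/ LocalFolder --recursive --exclude "*" --include "20220630_0840*"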

How to upload a directory to an AWS S3 bucket along with a KMS ID through the CLI?

I want to upload a directory (a folder consisting of other folders and .txt files) to a folder (partition) in a specific S3 bucket, along with a given KMS id, via the CLI. I found the following command, which uploads a jar file to an S3 bucket:
aws s3 sync /?? s3://???-??-dev-us-east-2-813426848798/build/tmp/snapshot --sse aws:kms --sse-kms-key-id alias/nbs/dev/data --delete --region us-east-2 --exclude "*" --include "*.?????"
Suppose:
Location (Bucket Name with folder name) - "s3://abc-app-us-east-2-12345678/tmp"
KMS-id - https://us-east-2.console.aws.amazon.com/kms/home?region=us-east-2#/kms/keys/aa11-123aa-45/
Directory to be uploaded - myDirectory
I want to know:
Can the same command be used to upload a directory with a bunch of files and folders in it?
If so, how should this command be changed?
The cp command works this way:
aws s3 cp ./localFolder s3://awsexamplebucket/abc --recursive --sse aws:kms --sse-kms-key-id a1b2c3d4-e5f6-7890-g1h2-123456789abc
I haven't tried the sync command with KMS, but the way you use sync is:
aws s3 sync ./localFolder s3://awsexamplebucket/remotefolder
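Combining the two, a plausible sync for the question's directory and bucket; note that only the key id (or an alias) is passed to --sse-kms-key-id, not the console URL, so aa11-123aa-45 here is taken from the end of that URL:
aws s3 sync myDirectory s3://abc-app-us-east-2-12345678/tmp --sse aws:kms --sse-kms-key-id aa11-123aa-45 --region us-east-2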

S3: Download Multiple Files in a local git-bash console

I have multiple files in an S3 bucket, like:
file1.txt
file2.txt
file3.txt
another-file1.txt
another-file1.txt
another-file1.txt
Now I want to download the first 3 files, whose names start with "file". How can I download them from AWS S3 in a local git-bash console?
You can simply download them with the command below:
aws s3 cp --recursive s3://bucket-name/ /local-destination-folder/ --exclude "*" --include "file*"
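Wildcards are not expanded inside the S3 URI itself (aws s3 cp s3://bucket-name/file* . would look for a literal key named file*), which is why the --exclude "*" --include "file*" pair is needed. To preview what will match before downloading, append --dryrun:
aws s3 cp --recursive s3://bucket-name/ /local-destination-folder/ --exclude "*" --include "file*" --dryrun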

Travis AWS S3 SDK set cache header for particular file

My Travis script uploads contents to an S3 bucket as follows:
deploy:
  provider: script
  skip_cleanup: true
  script: "~/.local/bin/aws s3 sync dist s3://mybucket --region=eu-west-1
  --delete"
before_deploy:
  - npm run build
  - pip install --user awscli
I also want to set a no-cache header on a particular file in that bucket (i.e. sw.js). Is that currently possible in the SDK?
I am afraid this is not possible with a single s3 sync command, but you can run two commands using the exclude and include options: one to sync everything except sw.js, and one just for sw.js.
script: ~/.local/bin/aws s3 sync dist s3://mybucket --include "*" --exclude "sw.js" --region eu-west-1 --delete ; ~/.local/bin/aws s3 sync dist s3://mybucket --exclude "*" --include "sw.js" --region eu-west-1 --delete --cache-control "no-cache" --metadata-directive REPLACE
Note: the --metadata-directive REPLACE option is necessary for non-multipart copies.
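Wired into .travis.yml, the two-step deploy might look like this sketch (same bucket and flags as the commands above; && stops the deploy if the first sync fails):
deploy:
  provider: script
  skip_cleanup: true
  script: >-
    ~/.local/bin/aws s3 sync dist s3://mybucket --include "*" --exclude "sw.js" --region eu-west-1 --delete &&
    ~/.local/bin/aws s3 sync dist s3://mybucket --exclude "*" --include "sw.js" --region eu-west-1 --delete --cache-control "no-cache" --metadata-directive REPLACE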

How to AND OR aws s3 copy statements with include

I'm copying files between S3 buckets from specific dates that are not sequential. In the example I'm copying from the 23rd; I'd like to copy the 15th, 19th, and 23rd.
aws s3 --region eu-central-1 --profile LOCALPROFILE cp s3://SRC s3://DEST --recursive --exclude "*" --include "2016-01-23"
This source mentions using sequences for include: http://docs.aws.amazon.com/cli/latest/reference/s3/
It appears that you are asking how to copy multiple files/paths in one command.
The AWS Command-Line Interface (CLI) allows multiple --include specifications, e.g.:
aws s3 cp s3://SRC s3://DEST --recursive --exclude "*" --include "2016-01-15/*" --include "2016-01-19/*" --include "2016-01-23/*"
The first --exclude says to exclude all files, then the subsequent --include parameters add paths to be included in the copy.
See: Use of Exclude and Include Filters in the documentation.
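If repeating --include per date gets unwieldy, the filter syntax also supports [sequence] character classes, so the 15th and 19th can share one pattern; a sketch:
aws s3 cp s3://SRC s3://DEST --recursive --exclude "*" --include "2016-01-1[59]/*" --include "2016-01-23/*"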