How to use rclone to download data from S3

I have a quick question on rclone.
I am trying to download data from tradestatistics.io, which gives this sample command for downloading:
rclone sync spaces:tradestatistics/hs-rev1992-visualization hs-rev1992-visualization
My question is: how can I access the list of files in that source, and can it be done directly from the terminal?

Assuming you've already installed rclone (https://rclone.org/downloads/).
To configure rclone to see storage on S3, see https://rclone.org/s3/
Assuming spaces: is your correctly configured rclone source remote, you can list all files from the terminal with the rclone lsl command:
rclone lsl spaces:tradestatistics/hs-rev1992-visualization
where tradestatistics is the bucket and hs-rev1992-visualization is the root folder.
A more human-readable list can be produced with lsf. It's not recursive by default, so add -R:
rclone lsf -R spaces:
More details, along with the other listing commands, are at https://rclone.org/commands/rclone_lsl/
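As a rough sketch, a configured S3-compatible remote named spaces would look something like this in ~/.config/rclone/rclone.conf. The provider, endpoint, and keys below are placeholders; tradestatistics.io appears to sit on an S3-compatible object store, so use whatever values the site documents, or run rclone config interactively:
# ~/.config/rclone/rclone.conf (sketch; all values below are placeholders)
[spaces]
type = s3
provider = DigitalOcean
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = nyc3.digitaloceanspaces.com
Once the remote lists correctly with lsl or lsf, the sync command from the question will download everything into the local hs-rev1992-visualization folder.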

Related

Microsoft File Share Storage Sync

Is there a way to sync files that are saved either locally or in GitLab to Microsoft File Shares storage? I want it to be unidirectional: local -> File Shares storage.
I have not found anything yet. I know there is a way to do it when using Blob storage, via the azcopy sync command.
One way would be to delete all files first and then use the az storage file upload-batch command, but that is way too cumbersome and ugly.
I even checked the rclone CLI, but this is also not possible for Microsoft File Shares storage.
Am I missing something? This is ridiculous...
I finally found the answer:
Syntax
azcopy sync '<local-directory-path>' 'https://<storage-account-name>.file.core.windows.net/<file-share-name><SAS-token>' --recursive --delete-destination 'true'
Example from Documentation:
azcopy sync 'C:\myDirectory' 'https://mystorageaccount.file.core.windows.net/myfileShare?sv=2018-03-28&ss=bfqt&srt=sco&sp=rwdlacup&se=2019-07-04T05:30:08Z&st=2019-07-03T21:30:08Z&spr=https&sig=CAfhgnc9gdGktvB=ska7bAiqIddM845yiyFwdMH481QA8%3D' --recursive --delete-destination 'true'

gsutil cp / download file to windows server

I'm very new at this and need some help; I'm sure I'm not doing something right. I have a Synology NAS that has a cool option to sync files to Google Cloud Storage. This is a great way to get my backups off site.
I have my backups syncing to a Coldline storage bucket. Now that my files are syncing, I'm looking to document the process in case I need to retrieve them.
I want to download a whole folder and all of the files inside it to a Windows server. I installed gsutil and am trying to run this command:
gsutil -m cp -R dir gs://bhp_backup_sync/backup/foldername
but after I run this I get the following exception.
CommandException: No URLs matched: dir
CommandException: 1 file/object could not be transferred.
NOOB here, what am I missing?
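One likely cause, assuming the goal is a download: the source and destination are reversed, so gsutil is looking for a local directory called dir to upload and finds nothing. For a download, the gs:// URL comes first and the local destination second. A sketch, with an illustrative local path:
REM download the whole backup folder from the bucket to the local server (destination path is illustrative)
mkdir C:\Restore
gsutil -m cp -r gs://bhp_backup_sync/backup/foldername C:\Restore
This copies foldername and everything under it into C:\Restore; -m parallelizes the transfer and -r recurses. gsutil also has an rsync command if you want repeat runs to fetch only what changed.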

AWS S3 download buckets

How can I download files from AWS S3 to my local computer? I use mover for scheduling but I don't find any option to download to the hard disk.
Thanks a lot
Ragam
You can use the Minio client, aka mc, for this; it's a single binary file. It's open source and supports the Linux, Mac, Windows, and FreeBSD operating systems.
Installing
GNU/Linux: download mc for:
64-bit Intel from https://dl.minio.io/client/mc/release/linux-amd64/mc
32-bit Intel from https://dl.minio.io/client/mc/release/linux-386/mc
32-bit ARM from https://dl.minio.io/client/mc/release/linux-arm/mc
$ chmod +x mc
$ ./mc --help
Microsoft Windows: download mc for:
64-bit from https://dl.minio.io/client/mc/release/windows-amd64/mc.exe
32-bit from https://dl.minio.io/client/mc/release/windows-386/mc.exe
C:\Users\Username\Downloads> mc.exe --help
Setting up
$ ./mc config host add <ALIAS> <YOUR-S3-ENDPOINT> <YOUR-ACCESS-KEY> <YOUR-SECRET-KEY> S3v4
Example
$ ./mc config host add mys3 https://s3.amazonaws.com BKIKJAA5BMMU2RHO6IBB V7f1CwQqAcwo80UEIJEjc5gVQUSSx5ohQ9GSrr12
Copying from S3 to local machine [Windows]
Copy a bucket recursively from aliased Amazon S3 cloud storage to local filesystem on Windows.
C:\Users\Username\Downloads> mc.exe cp --recursive mys3/documents/ C:\Backups
Copying from S3 to local machine [Linux]
Copy a bucket recursively from aliased Amazon S3 cloud storage to local filesystem on Linux.
$ ./mc cp --recursive mys3/documents/ /home/minio/Backups
Note: here the contents of the bucket "documents" are copied locally to the Backups directory.
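mc also provides a mirror command for keeping a local copy of a bucket; on repeat runs it copies only objects that are missing locally. Same placeholder alias and paths as above:
$ ./mc mirror mys3/documents /home/minio/Backups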
Hope it helps.
Disclaimer: I work for Minio.
There are many software packages available, most free, that will allow you to do that. I use Cloudberry's products. You can also link to each file directly. Looking at the properties of the files in the AWS console will give you the required information.
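On the direct-link option: a publicly readable object is normally reachable at a URL of the following form, where the bucket, region, and key are placeholders and the bucket policy or object ACL must allow public reads:
https://<bucket-name>.s3.<region>.amazonaws.com/<path/to/file>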

how to copy file from amazon server to s3 bucket

I am working with an S3 bucket. I need to copy an image from my Amazon server to the S3 bucket. Any idea how I can do it? I saw some sample code but I don't know how to use it.
if (S3::copyObject($sourceBucket, $sourceFile, $destinationBucket, $destinationFile, S3::ACL_PRIVATE)) {
    echo "Copied file";
} else {
    echo "Failed to copy file";
}
It seems that this code only copies from one bucket to another, not from the server?
Thanks for the help.
Copy between S3 Buckets
AWS released a command line interface for copying between buckets.
http://aws.amazon.com/cli/
$ aws s3 sync s3://mybucket-src s3://mybucket-target --exclude "*.tmp"
This will copy from the source bucket to the target bucket.
I have not tested this, but I believe the copy is performed server-side by S3 rather than by downloading the files to your machine and re-uploading them.
See the documentation here: S3 CLI Documentation
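The same CLI also covers the original question, copying from the server's local disk into a bucket, via aws s3 cp. The paths below are placeholders, and the server needs credentials (an IAM role or aws configure):
$ aws s3 cp /var/www/images/photo.jpg s3://mybucket-target/images/photo.jpg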
I've used s3cmd for several years, and it's been very reliable. If you're using Ubuntu, it's available with:
apt-get install s3cmd
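After a one-time s3cmd --configure to store your access keys, uploading from the server is a one-liner; the local path and bucket below are placeholders:
s3cmd put /var/www/images/photo.jpg s3://mybucket/images/photo.jpg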
You can also use one of the SDKs to develop your own tool.

Can I move an object into a 'folder' inside an S3 bucket using the s3cmd mv command?

I have the s3cmd command line tool for linux installed. It works fine to put files in a bucket. However, I want to move a file into a 'folder'. I know that folders aren't natively supported by S3, but my Cyberduck GUI tool converts them nicely for me to view my backups.
For instance, I have a file in the root of the bucket, called 'test.mov' that I want to move to the 'idea' folder. I am trying this:
s3cmd mv s3://mybucket/test.mov s3://mybucket/idea/test.mov
but I get strange errors like:
WARNING: Retrying failed request: /idea/test.mov (timed out)
WARNING: Waiting 3 sec...
I also tried quotes, but that didn't help either:
s3cmd mv 's3://mybucket/test.mov' 's3://mybucket/idea/test.mov'
Neither did using just the folder name:
s3cmd mv 's3://mybucket/test.mov' 's3://mybucket/idea/'
Is there a way without having to delete and re-put this 3GB file?
Update: Just FYI, I can put new files directly into a folder like this:
s3cmd put test2.mov s3://mybucket/idea/test2.mov
But I still don't know how to move them around...
To move/copy from one bucket to another, or within the same bucket, I use the s3cmd tool and it works fine. For instance:
s3cmd cp --recursive s3://bucket1/directory1 s3://bucket2/directory1
s3cmd mv --recursive s3://bucket1/directory1 s3://bucket2/directory1
Your file is probably quite big; try increasing the socket_timeout s3cmd configuration setting:
http://sumanrs.wordpress.com/2013/03/19/s3cmd-timeout-problems-moving-large-files-on-s3-250mb/
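That setting lives in ~/.s3cfg (the file s3cmd --configure writes). Raising it to something like the value below, in seconds and chosen arbitrarily here, is usually enough for multi-GB objects:
# in ~/.s3cfg
socket_timeout = 600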
Remove the ' signs. Your code should be:
s3cmd mv s3://mybucket/test.mov s3://mybucket/idea/test.mov
Also check the permissions of your bucket: your username should have all the permissions.
Also try connecting CloudFront to your bucket. I know it doesn't make sense, but I had a similar problem with a bucket that did not have a CloudFront instance connected to it.