Synchronizing S3 Folders/Buckets [closed]

I have an S3 bucket that holds static content for all my clients in production. I also have a staging environment which I use for testing before deploying to production, and I want the staging environment to point to S3 as well, to test uploads and other functions. The problem is that I don't want the staging server to reference the same production S3 bucket/folder, because there is a risk of overwriting production files.
My solution is to use a different folder within the same bucket, or to create a separate bucket altogether that I can refresh periodically with the contents of the production bucket. Is there a way to easily sync two folders or buckets on Amazon S3?
Any other suggestions for managing this type of scenario would also be greatly appreciated.

s3cmd is a nice CLI utility you can use in a cronjob. It even has a sync feature similar to *nix rsync.
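
For example, here's a minimal sketch of a local-to-S3 sync plus a cron entry (the directory and bucket names are hypothetical):

    # Sync a local directory up to a staging bucket.
    # --delete-removed makes the destination mirror the source, like rsync --delete.
    s3cmd sync --delete-removed /var/www/static/ s3://my-staging-bucket/static/

    # Example crontab entry: run the sync every night at 2:30 AM.
    # 30 2 * * * s3cmd sync --delete-removed /var/www/static/ s3://my-staging-bucket/static/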

There's also DragonDisk - like CloudBerry Explorer and other Amazon S3 clients - except it's free and multi-platform (Qt).
It does sync jobs, both local<->S3 and S3<->S3, and it also has a command-line interface that can do syncing: http://www.dragondisk.com/faq.html

Here's an example of using s3cmd to sync two buckets, for your reference: https://www.admon.org/system-tuning/sync-two-amazon-s3-buckets/
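
As a rough sketch of that approach (bucket names are hypothetical; older s3cmd releases may not support remote-to-remote sync, in which case you can stage through a local directory):

    # Newer s3cmd versions can sync bucket-to-bucket directly:
    s3cmd sync s3://my-production-bucket/ s3://my-staging-bucket/

    # Fallback for older versions: mirror down locally, then back up.
    s3cmd sync s3://my-production-bucket/ /tmp/s3-mirror/
    s3cmd sync /tmp/s3-mirror/ s3://my-staging-bucket/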

Check out CloudBerry Explorer and its ability to sync data between a local computer and Amazon S3. It might not be exactly what you want, but it will help you get started. More info here.

CloudBerry Explorer comes with a PowerShell command-line interface, and you can learn here how to use it to do syncing.


S3: Service that replace access to the local file system with S3

I have an application that heavily uses the local file system. We need to port the application to use S3. What services are out there that will automate access to S3 without having to change the application's source code?
These services would somehow mask the S3 FS as a local FS.
Thanks.
See FuseOverAmazon (or s3fs), but keep in mind that S3 is an eventually consistent data store, and your app should be architected to take that into account. It's also important to note that mounting an S3 bucket as a file system yields very poor performance.
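
A minimal sketch of mounting a bucket with s3fs (the bucket name and mount point are hypothetical; s3fs reads credentials from ~/.passwd-s3fs by default):

    # Store credentials as ACCESS_KEY:SECRET_KEY and lock the file down.
    echo 'AKIAEXAMPLEKEY:exampleSecretKey' > ~/.passwd-s3fs
    chmod 600 ~/.passwd-s3fs

    # Mount the bucket at /mnt/s3.
    mkdir -p /mnt/s3
    s3fs my-app-bucket /mnt/s3

    # The application can now read and write /mnt/s3 as if it were local,
    # subject to the consistency and performance caveats above.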
Take a look at RioFS. Our project is an alternative to the s3fs project; its main advantages compared to s3fs are simplicity, speed of operations, and bug-free code. The project is currently in beta, but it's been running on several high-load file servers for quite some time.
We are looking for more people to join our project and help with the testing. From our side, we offer quick bug fixes and will listen to your requests for new features.
Hope it helps!

Amazon S3 file upload via torrent [closed]

I know that Amazon S3 supports BitTorrent and that we can download files using a torrent client. My question is:
Is it possible to upload files from my PC to S3 via torrent, either directly or using EC2?
Note:
I have a website where users upload large video files which are stored in S3. It would be helpful if they could just upload a torrent file, so that they could seed whenever they want and there could be multiple seeders for the same file, which decreases their upload time...
You'll have to install a BitTorrent client on EC2 (or on your own system), download the file via the torrent, and upload it to S3. S3 does not natively support fetching BitTorrent content from other sources and storing it in a bucket.
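
A rough sketch of that workflow on an EC2 instance (aria2 is just one possible torrent client; the file and bucket names are hypothetical):

    # Download the torrent's content; --seed-time=0 exits once the download completes.
    aria2c --seed-time=0 big-video.torrent

    # Upload the resulting file to S3.
    s3cmd put big-video.mp4 s3://my-video-bucket/uploads/big-video.mp4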
You can make a PUBLIC S3 file downloadable by torrent if you add ?torrent at the end of the URL.
For example, I just now uploaded the following file:
http://mybucket.s3.amazonaws.com/Install/hexagon4.png
If you change the link to
http://mybucket.s3.amazonaws.com/Install/hexagon4.png?torrent
it will be downloaded as a torrent.
I don't know whether this is possible for private files.

How to transload a file from URL to Amazon S3 [closed]

I have a 7 GB file I need to download and store in an S3 bucket.
Can I transload it directly to S3 without having to download it to my computer?
AFAIK this is not possible. Here's another question confirming this: Is it possible to upload to S3 by just providing a URL?
You can make it appear that your files are served from or uploaded to a different server by playing with the CNAME record, though: Using amazon s3, upload files using their servers but URL should appear to be from mine
But I don't think this is what you want.
You could, however, download the file to one of Amazon's EC2 servers and upload it from there to S3.
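
A minimal sketch of that EC2 approach (the URL and bucket name are hypothetical). Note that a 7 GB file exceeds S3's 5 GB single-PUT limit, so you'll need an s3cmd version with multipart upload support, or you'll have to split the file:

    # On the EC2 instance: pull the file from the source URL.
    # EC2's bandwidth makes this much faster than a home connection.
    wget http://example.com/big-file.iso

    # Push it into the bucket.
    s3cmd put big-file.iso s3://my-backup-bucket/big-file.iso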

Amazon ec2 and s3 [duplicate]

Possible Duplicate: How we can mount amazon s3 on amazon ec2
Hi,
I have an Amazon EC2 account and an Amazon S3 account. I want to store some files in S3 and then retrieve those files for some computation on EC2. My question is: how can we upload files into S3 buckets, how can we access those files from EC2, and how do we make a connection between the two? How do we locate S3?
Everything is done through standard HTTP methods: GET, PUT, etc.
Amazon has produced some very clear documentation explaining how to work with S3: http://docs.amazonwebservices.com/AmazonS3/latest/dev/
There are also open-source libraries published for today's mainstream languages (PHP, .NET, Java, Ruby, Python, etc.). These can greatly reduce your development time; however, it helps to read through the AWS docs to know what's happening behind the scenes (especially when something breaks).
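
As a concrete sketch using the s3cmd tool mentioned in an earlier answer, run from an EC2 instance (bucket and file names are hypothetical):

    # One-time setup: enter your AWS access key and secret key when prompted.
    s3cmd --configure

    # Create a bucket, upload an input file, then pull it back down on EC2.
    s3cmd mb s3://my-compute-bucket
    s3cmd put input-data.csv s3://my-compute-bucket/input-data.csv
    s3cmd get s3://my-compute-bucket/input-data.csv /tmp/input-data.csv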

Website backups on Amazon s3 [closed]

EDIT: I have completely changed the question.
I want to use Amazon S3 for my backups, and I am looking for Debian Lenny software (or a PHP script) that allows me to achieve what I need. It is a Flash games website:
Upload all files and subdirectories from the specified directories, uploading only the files that were added or changed (overwriting old files on S3).
Perform a database dump and upload it to S3, keeping only the 7 previous dumps.
Lightweight and easy to use
It has to be possible to run it as a cron job
Should work on Debian Lenny
Anything that matches all these specifications?
Automating backups with Amazon S3 on Linux.
You should be able to tweak the procedure it describes to meet your specific needs.
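
A minimal sketch of such a cron-driven backup script using s3cmd (all paths, names, and credentials are hypothetical):

    #!/bin/sh
    # Sync site files: upload only new/changed files, overwriting old copies on S3.
    s3cmd sync /var/www/flashgames/ s3://my-backup-bucket/site/

    # Dump the database and upload a dated copy.
    DUMP="db-$(date +%Y%m%d).sql.gz"
    mysqldump -u backup -pSECRET games_db | gzip > "/tmp/$DUMP"
    s3cmd put "/tmp/$DUMP" "s3://my-backup-bucket/dumps/$DUMP"
    rm "/tmp/$DUMP"

    # Keep only the 7 most recent dumps: list them (s3cmd ls prints the
    # object URL in the 4th column), sort by dated name, drop all but the last 7.
    s3cmd ls s3://my-backup-bucket/dumps/ | awk '{print $4}' | sort | head -n -7 |
        while read -r old; do s3cmd del "$old"; done

    # Example crontab entry (nightly at 3:15 AM):
    # 15 3 * * * /usr/local/bin/s3-backup.sh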
The PHP script Website 2 Backup does exactly what you want:
Backup to Amazon S3
Incremental backup of files: only new and modified files are backed up, and deleted files are listed.
Backup of the database, with automatic cleaning of old archives
Clean and easy-to-set-up Ajax administration
It runs from a cron job
And it does more: data encrypted on the server, data recovery in a few clicks...
Jungle Disk offers a server edition. It also gives you the choice of using Rackspace Cloud Files for backups instead of Amazon S3, which may be less expensive if you're going to update the backup often, since all data transfer to and from Cloud Files is free.