I know that Amazon S3 supports BitTorrent and that we can download files using a torrent client. My question is:
Is it possible to upload files from my PC to S3 via torrent, either directly or using EC2?
Note:
I have a website where users upload large video files that are stored in S3. It would be helpful if they could just upload a torrent file, so that they can seed whenever they want, and multiple seeders for the same file would reduce the upload time.
You'll have to install a BitTorrent client on EC2 (or on your own system), download the torrent's content, and then upload that content to S3. S3 does not natively support fetching files over BitTorrent from other sources and storing them in a bucket.
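For the final upload step, a minimal sketch using boto3 might look like the following; it assumes the torrent's content has already been downloaded by a BitTorrent client on the EC2 instance, and the bucket name, key, and local path are placeholders:

import boto3

# Assumes AWS credentials are available (e.g., via an IAM role on the EC2 instance).
s3 = boto3.client("s3")

# Hypothetical paths/names: the local file was fetched by a BitTorrent client
# (e.g., transmission-cli) before this script runs.
local_path = "/home/ec2-user/downloads/video.mp4"
bucket = "my-video-bucket"
key = "uploads/video.mp4"

# upload_file handles multipart uploads automatically for large files.
s3.upload_file(local_path, bucket, key)
print(f"Uploaded {local_path} to s3://{bucket}/{key}")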
You can make a public S3 object downloadable via BitTorrent by adding ?torrent to the end of its URL.
For example, I just uploaded the following file:
http://mybucket.s3.amazonaws.com/Install/hexagon4.png
If you change the link to
http://mybucket.s3.amazonaws.com/Install/hexagon4.png?torrent
it will be downloaded as a torrent.
I don't know whether this is possible for private files.
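If you would rather hand users a .torrent file directly instead of a link with ?torrent appended, a small sketch along these lines (using the requests library, with the same example URL) could fetch and save the torrent metadata; as noted above, this only works for public objects:

import requests

# Example public object; appending ?torrent asks S3 for the torrent metadata.
url = "http://mybucket.s3.amazonaws.com/Install/hexagon4.png?torrent"

resp = requests.get(url, timeout=30)
resp.raise_for_status()

# Save the returned torrent metadata so it can be opened in any BitTorrent client.
with open("hexagon4.png.torrent", "wb") as f:
    f.write(resp.content)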
I have a 7 GB file I need to download and store in an S3 bucket.
Can I transload it directly to S3 without having to download it to my computer?
AFAIK this is not possible. Here's another question to confirm it: Is it possible to upload to S3 by just providing a URL?
You can make it appear that your files are served from, or uploaded to, a different server by playing with CNAME records, though: Using amazon s3, upload files using their servers but URL should appear to be from mine
But I don't think this is what you want.
You could, however, download the file to one of Amazon's EC2 servers and upload it from there to S3.
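As a rough sketch of that EC2 route, you could stream the remote file straight into S3 without storing the full 7 GB locally; this assumes boto3 and requests are installed, and the source URL and bucket/key are placeholders:

import boto3
import requests

# Placeholder source URL and destination bucket/key.
source_url = "http://example.com/big-file-7gb.bin"
bucket = "my-backup-bucket"
key = "downloads/big-file-7gb.bin"

s3 = boto3.client("s3")

# Stream the HTTP response body directly into a multipart S3 upload,
# so the 7 GB file never has to fit on the local disk.
with requests.get(source_url, stream=True) as resp:
    resp.raise_for_status()
    s3.upload_fileobj(resp.raw, bucket, key)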
I have some video files stored on S3, and I want to get information about them using ffmpeg. However, when I run a command such as:
$ ffmpeg -i 'http://test.s2.amazonaws.com/video.mov'
I get an HTTP 403 Forbidden response. How can I run this command? I also want to make sure that not just anyone can access these files. Thank you.
Update: I was able to do this after setting the video's ACL to public-read; I didn't need to use s3fs after all.
You have two options:
You can download the file and run ffmpeg on the local copy,
or you can use s3fs to mount your S3 bucket as a filesystem.
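A third option that avoids making the object public is to generate a pre-signed URL and hand it to ffmpeg. A hedged sketch with boto3 (the bucket, key, and expiry are placeholders):

import subprocess

import boto3

s3 = boto3.client("s3")

# Placeholder bucket/key for the private video object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "test-bucket", "Key": "video.mov"},
    ExpiresIn=3600,  # URL is valid for one hour
)

# ffmpeg can read directly from the temporary signed URL,
# so the object can stay private.
subprocess.run(["ffmpeg", "-i", url], check=False)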
EDIT: I have completely changed the question.
I want to use Amazon S3 for my backups, and I am looking for Debian Lenny software (or a PHP script) that would allow me to achieve what I need. It is a Flash games website:
Upload all files and subdirectories from the specified directories, but only the files that were added or changed (overwriting old files on S3).
Perform a database dump and upload it to S3, keeping only 7 previous dumps.
Lightweight and easy to use
It has to be possible to run it as a cron job
Should work on Debian Lenny
Anything that matches all these specifications?
See Automating backups with Amazon S3 on Linux.
You should be able to tweak the procedure it describes to meet your specific needs.
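If you end up scripting part of it yourself, a minimal cron-able sketch of the database-dump requirement (dump, upload, keep only the last 7) using boto3 might look like this; the bucket name, database credentials, and mysqldump invocation are placeholders:

import datetime
import subprocess

import boto3

# Placeholder names; adjust to your site and bucket.
BUCKET = "my-backup-bucket"
DUMP_PREFIX = "db-dumps/"
KEEP_DUMPS = 7

s3 = boto3.client("s3")

# 1. Dump the database to a timestamped, compressed file (mysqldump shown as an example).
stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
dump_file = f"/tmp/site-{stamp}.sql.gz"
subprocess.run(
    f"mysqldump --user=backup --password=secret mydb | gzip > {dump_file}",
    shell=True,
    check=True,
)

# 2. Upload the dump to S3.
s3.upload_file(dump_file, BUCKET, f"{DUMP_PREFIX}site-{stamp}.sql.gz")

# 3. Keep only the most recent KEEP_DUMPS dumps, delete the rest.
objects = s3.list_objects_v2(Bucket=BUCKET, Prefix=DUMP_PREFIX).get("Contents", [])
objects.sort(key=lambda o: o["LastModified"], reverse=True)
for old in objects[KEEP_DUMPS:]:
    s3.delete_object(Bucket=BUCKET, Key=old["Key"])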
The PHP script Website 2 Backup does exactly what you want:
Backup to Amazon S3
Incremental backup of files: only new and modified files are backed up, and deleted files are listed
Backup of the database, with automatic cleanup of old archives
Clean and easy-to-set-up AJAX administration
Runs as a cron job
And it does more: data encrypted on the server, data recovery in a few clicks, and so on.
Jungle Disk offers a server edition. It also gives you the choice to use Rackspace Cloud Files for backups instead of Amazon S3, which may be less expensive if you're going to be updating the backup often, as all data transfer to and from Cloud Files is free.
Hi, I have recently set up an Amazon S3 account for a personal project.
I have successfully uploaded some image (JPG) files and set the bucket's ACL to public; however, when trying to view a file in the browser, the following XML is returned instead of the JPG:
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied</Message>
  <RequestId>77293A7937279B15</RequestId>
  <HostId>cQ3FXKg7ZU4z80QqUGMBheG0FRrFJP4HQx1pCy6UTFDk4pbjR8oYuCa1BmS6jnpe</HostId>
</Error>
Am I missing something here? Do I need to set up a distribution, or should I be able to access the files regardless?
Any hints would be much appreciated.
You need to set the ACL of the resource (the object itself) to public read ("public-read") as well; the bucket's ACL alone does not make the objects inside it readable.
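With boto3, for example, a one-call sketch that applies a public-read canned ACL to the object could look like this (bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

# Apply a public-read canned ACL to the object itself;
# the bucket ACL alone does not make existing objects readable.
s3.put_object_acl(Bucket="my-bucket", Key="images/photo.jpg", ACL="public-read")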
Note that this is not a programming question. If you need S3 support, check out the Amazon S3 forum.
I have an S3 bucket that holds static content for all my clients in production. I also have a staging environment, which I use for testing before I deploy to production, and I want the staging environment to point to S3 as well so I can test uploads and other functions. The problem is that I don't want the staging server to reference the same production S3 bucket/folder, because there is a risk of overwriting production files.
My solution is to use a different folder within the same bucket, or to create a different bucket altogether that I can refresh periodically with the contents of the production bucket. Is there a way to easily sync two folders or buckets on Amazon S3?
Any other suggestions for managing this type of scenario would also be greatly appreciated.
s3cmd is a nice CLI utility you can use in a cron job. It even has a sync feature similar to *nix rsync.
There's also DragonDisk, which is like CloudBerry Explorer and other Amazon S3 clients, except it's free and multi-platform (Qt).
It does sync jobs, both local<->S3 and S3<->S3. It also has a command-line interface that can do syncing: http://www.dragondisk.com/faq.html
Here's an s3cmd example for reference: https://www.admon.org/system-tuning/sync-two-amazon-s3-buckets/
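If you would rather drive the copy programmatically, for example to refresh the staging bucket from a cron job, a rough boto3 sketch with placeholder bucket names and prefix might look like this:

import boto3

s3 = boto3.resource("s3")

# Placeholder bucket names for the production source and staging target.
source_bucket = s3.Bucket("prod-static-content")
target_bucket_name = "staging-static-content"

# Server-side copy of every object under a prefix; the object data does not
# pass through the machine running the script.
for obj in source_bucket.objects.filter(Prefix="assets/"):
    target = s3.Object(target_bucket_name, obj.key)
    target.copy({"Bucket": source_bucket.name, "Key": obj.key})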
Check out CloudBerry Explorer and its ability to sync data between a local computer and Amazon S3. It might not be exactly what you want, but it will help you get started. More info here.
CloudBerry Explorer comes with a PowerShell command-line interface, and you can learn here how to use it to do the sync.