Is there a way to check that my files have already reached the edge servers my users load from? Or does Amazon S3 take time to spread your files around the world? How long does it take, and can I be notified when it's done?
After uploading a file, I immediately tested the load speed by asking users in faraway places (like Japan). They said it was actually slower than my current hosting in the US. That seems odd, because Amazon has an edge location in Tokyo, so shouldn't S3 be faster?
When I created my bucket, I set the region to US Standard. Is that why? If so, is there a way to make your files available worldwide?
Thank you for your time.
As you already said, your S3 bucket sits in a specific region, for example us-east, eu-west, or us-west. That is where your files are physically stored; they are not distributed geographically. Users in other parts of the world will experience latency when requesting data from these buckets.
What you are looking for is Amazon's CloudFront CDN. You specify an origin (your S3 bucket, in your case), and your files are then cached at CloudFront edge locations worldwide. Check out their FAQ and the list of edge locations.
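For reference, here is a minimal sketch of creating such a distribution with boto3. The bucket name is a placeholder and the configuration is kept to the bare minimum; this is an illustration, not a production setup.

```python
# Minimal sketch: point a CloudFront distribution at an existing S3 bucket.
# "my-media-bucket" is a placeholder name.
import time
import boto3

cloudfront = boto3.client("cloudfront")

bucket = "my-media-bucket"
origin_domain = f"{bucket}.s3.amazonaws.com"

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),   # must be unique per request
        "Comment": "Serve S3 content from edge locations",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-origin",
                "DomainName": origin_domain,
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "MinTTL": 0,
        },
    }
)

# The distribution's domain name is what you'd point users (or a CNAME) at.
print(response["Distribution"]["DomainName"])
```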
I have been using rclone to back up Google Drive data to AWS S3. I have multiple Google Drive accounts that are backed up to S3, each with a different number of documents.
I want to compress those documents into a single zip file and then copy it to S3.
Is there any way to achieve this?
I referred to the link below, but it doesn't have complete steps to accomplish the task.
https://rclone.org/compress/
Any suggestion would be appreciated.
rclone can't compress the files itself, but you can use a simple script to zip or rar the files first and then use rclone to back them up to AWS.
If this is OK, I can explain the details here.
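As one possible illustration of that approach, here is a minimal sketch assuming rclone is installed and an S3 remote named "s3backup" is already configured; the directory, bucket, and archive names are placeholders.

```python
# Minimal sketch: zip a local directory, then copy the archive to S3 via rclone.
import shutil
import subprocess

source_dir = "/data/gdrive-export"   # placeholder: a local copy of one Drive's files

# Create /tmp/gdrive-backup.zip from the source directory
archive = shutil.make_archive("/tmp/gdrive-backup", "zip", source_dir)

# Copy the single zip file to the S3 remote with rclone
subprocess.run(
    ["rclone", "copy", archive, "s3backup:my-backup-bucket/zips/"],
    check=True,
)
```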
I have implemented an antivirus system using ClamAV for one of my apps, which uses Google Cloud Storage for file uploads.
Currently I listen for bucket uploads, download each file to one of my servers, scan it with ClamAV, and delete it if it is infected.
I am a newbie to this. Is it possible for the whole cloud bucket to get infected by a virus simply from an upload?
i.e., can a virus execute itself on the bucket (any cloud bucket) itself?
If so, please suggest another solution, as my current approach would be ineffective in that case.
Object storage systems do not provide an execution framework, so an infected file cannot infect other files in the bucket.
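For reference, a minimal sketch of the scan-on-upload flow described in the question, assuming a worker host with clamscan installed and the google-cloud-storage client library; the bucket and object names are whatever your upload notification hands you.

```python
# Minimal sketch: download an uploaded object, scan it with clamscan,
# and delete it from the bucket if it is infected.
import os
import subprocess
import tempfile
from google.cloud import storage

def scan_uploaded_object(bucket_name: str, blob_name: str) -> None:
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_name)

    # Download the object to a temporary file on the scanning host
    fd, path = tempfile.mkstemp()
    os.close(fd)
    blob.download_to_filename(path)

    try:
        # clamscan exits with status 1 when the file is infected
        result = subprocess.run(["clamscan", "--no-summary", path])
        if result.returncode == 1:
            blob.delete()   # remove the infected object from the bucket
    finally:
        os.remove(path)
```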
We're uploading and serving/streaming media (pictures, videos) using Amazon S3 for storage combined with CloudFront for delivery. The site gets only light use, but the Amazon bill comes to $3,000 per month, and according to the report 90% of the costs originate from the S3 service.
I've heard that the cloud can be expensive if you don't code the right way, so my questions are:
What is the right way? Where should I pay more attention: the way I upload files or the way I serve them?
Has anyone else had to deal with unexpectedly high costs? If so, what was the cause?
We have a fairly similar model. We stream (RTMP) from S3 and CloudFront. We have thousands of files and decent load, but our monthly S3 bill is around $50 (negligible compared to your figure). First, you should raise your charges with AWS technical support; they always respond well and also suggest better ways to utilize resources. Second, if you use live streaming, where you divide the file into chunks and stream them one by one instead of streaming or downloading the whole file, it can be more effective in terms of I/O, since many users watch only part of a video rather than the whole thing. Also, you can try to utilize caching at the application level.
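One commonly suggested tweak related to the caching point: setting a long Cache-Control header when uploading to S3 lets CloudFront serve repeat requests from its edge cache instead of going back to S3, which reduces S3 request and transfer charges. A minimal sketch with boto3, where the bucket and file names are placeholders:

```python
# Minimal sketch: upload a video with a Cache-Control header so CloudFront
# can cache it at the edge instead of fetching it from S3 on every request.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    "intro.mp4",                 # placeholder local file
    "media-bucket",              # placeholder bucket name
    "videos/intro.mp4",          # placeholder object key
    ExtraArgs={
        "ContentType": "video/mp4",
        "CacheControl": "public, max-age=86400",  # cache at the edge for a day
    },
)
```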
Another way to get a better picture of what's going on in your buckets: Qloudstat
I would like to serve user-uploaded content (pictures, videos, and other files) from a CDN. Using Amazon S3 with CloudFront seems like a reasonable way to go. My only question is about the speed of the file system. My plan was to host user media under URIs like cdn.mycompany.com/u/u/i/d/uuid.jpg.
I don't have any prior experience with S3 or CDNs, and I was wondering whether this strategy would scale well to handle a large amount of user-uploaded content, or whether there is a more conventional way to accomplish this.
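For illustration, a small sketch of how a key in the u/u/i/d/uuid.jpg layout described above could be derived from a freshly generated UUID; the helper name is hypothetical.

```python
# Minimal sketch: build an S3 key like "3/f/2/c/3f2c....jpg" from a UUID,
# using the first four characters of the UUID as the "directory" prefix.
import uuid

def media_key(extension: str = "jpg") -> str:
    uid = uuid.uuid4().hex                 # e.g. "3f2c9a..."
    prefix = "/".join(uid[:4])             # first four chars -> "3/f/2/c"
    return f"{prefix}/{uid}.{extension}"

print(media_key())   # e.g. "3/f/2/c/3f2c9a....jpg"
```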
You will never have problems dealing with scale on CloudFront. It's an enterprise-grade beast.
Disclaimer: Not if you're Google.
It is an excellent choice. Especially for streaming video and audio, CloudFront is priceless.
My customers use my plugin to display private streaming video and audio; one of them even has 8,000 videos in a single bucket without problems.
My question stemmed from a misunderstanding of S3 buckets as a conventional file system. I was concerned that having too many files in the same directory would create overhead in finding a file. However, it turns out that S3 buckets are implemented more like a hash map, so this overhead doesn't actually exist. See here for details: Max files per directory in S3
I'm looking for examples of large websites that are hosted on Amazon EC2/S3/CloudFront/etc.
I worked at a company where we ran a site doing 500k unique visitors per month on 10 EC2 instances, but that's still relatively small potatoes compared to some larger sites. I know SmugMug and Foursquare are also hosted on EC2. What other large websites run on EC2?
Reddit uses EC2
How large is 'large'? I know that Heroku runs on EC2, and they handle a fairly large amount of traffic.
Netflix, Dropbox, and Zynga (maker of FarmVille) are pretty big users of EC2.
Instagram uses Amazon... at least their images are hosted on S3. Check out this link:
http://distillery.s3.amazonaws.com/media/2011/09/17/42923def0eb141fc8bb4ab0c963d0dbc_6.jpg
Also, I'm pretty sure Reddit moved away from Amazon Cloud after the most recent outage.
Here's a link to Amazon case studies that details some of their larger clients:
http://aws.amazon.com/solutions/case-studies/
In the financial world, Nasdaq OMX is a good example, but virtually every bank uses EC2 for computational needs (not as a main system, but heavily used by both banks and hedge funds to run simulations).