I've been looking all over and I can't find a yes or no answer. Can I restrict an S3 bucket to a specific size?
If so, could you please point me in the right direction? Thanks.
You can do this from within the application you are building: use the AWS API to get the current size of the bucket before you upload. However, there seems to be no way to accomplish this from the AWS dashboard, nor can it be done with an S3 bucket policy.
Bummer, because I think this would be a great feature as well.
My advice is to be careful about which applications upload content to your S3 bucket, or to have your application check the bucket size via the AWS API before inserting content. This is not ideal, however.
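For illustration, here is a minimal boto3 sketch of that pre-upload check; the bucket name and the size cap are placeholder assumptions:

```python
import boto3

MAX_BUCKET_BYTES = 5 * 1024**3  # assumed 5 GB soft cap

s3 = boto3.client("s3")

def bucket_size_bytes(bucket):
    """Sum the sizes of every object in the bucket (one LIST per 1000 keys)."""
    total = 0
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            total += obj["Size"]
    return total

def upload_if_room(bucket, key, path):
    """Refuse the upload once the bucket has grown past the soft cap."""
    if bucket_size_bytes(bucket) >= MAX_BUCKET_BYTES:
        raise RuntimeError(f"{bucket} is over its size limit")
    s3.upload_file(path, bucket, key)
```

Keep in mind this is only a soft limit: two concurrent uploaders can both pass the check, and listing every key gets slow on large buckets.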
I'm implementing a requirement to store images in an AWS S3 bucket instead of NetSuite. Since the bucket is private, I have to upload the images and generate the URLs in the backend/Suitelet.
I tried to include the AWS SDK in the Suitelet via define, but that doesn't work.
I want to know whether we can use/include SDKs inside a Suitelet.
How can I implement a solution for this without using any third-party solutions?
How are permissions for the links managed? Can you make them publicly viewable? Remember, unless the links you generate are time-limited, anyone with the link can get to the image.
In terms of uploading the images check out https://github.com/DeepChannel/netsuite-savedsearch-s3
If you need each image to have a magic link, you could use a Heroku app or an AWS Lambda. The app would check a hash based on the link parameters and proxy the image if the hash is valid. If your images are supposed to be private to a customer, this would be the way to go.
If you are just using the images generally on a website, make the bucket publicly readable and use the API to upload.
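On the time-limited links: S3's built-in mechanism for those is presigned URLs, which any of the AWS SDKs can generate. A minimal sketch with Python's boto3 for illustration (the bucket and key are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Generate a GET URL for a private object that expires after one hour;
# anyone holding the URL can fetch the object until it expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-images", "Key": "images/logo.png"},
    ExpiresIn=3600,  # seconds
)
print(url)
```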
I'm trying to figure out the best way to find the nearest bucket in Amazon S3 without using GPS.
Is there any method in Obj-C to detect the current continent?
Is there anything in the AWS framework?
My idea was to use the current country (from the iPhone's preferences), make a list of continents with their countries, and then choose the correct bucket.
Another idea was to get the current time zone and check which bucket's time zone is closest.
Any ideas on this?
If you are just looking to serve up static files from S3, this is all done automatically if you use CloudFront to serve content from a bucket.
CloudFront will look after serving the content from the nearest edge location to the client.
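That said, if CloudFront is off the table and you fall back to the country-mapping idea from the question, the lookup itself is trivial. A hedged sketch in Python (the table and bucket names are illustrative; the client would read the ISO country code from the device's locale):

```python
# Illustrative mapping from ISO country codes to the nearest bucket.
# Extend the table with your own regions; unknown countries get a default.
COUNTRY_TO_BUCKET = {
    "US": "my-assets-us-east-1",
    "CA": "my-assets-us-east-1",
    "DE": "my-assets-eu-west-1",
    "FR": "my-assets-eu-west-1",
    "JP": "my-assets-ap-northeast-1",
}

def nearest_bucket(country_code, default="my-assets-us-east-1"):
    return COUNTRY_TO_BUCKET.get(country_code.upper(), default)
```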
I read the directions for posting, so I will be as specific as possible.
I have an S3 bucket with numerous FLV files that I will be allowing customers to stream on THEIR domains.
What I am trying to accomplish is:
1 - Setting a bucket policy that GRANTS access to specific domains (a list) to stream my bucket files from their domains.
2 - A bucket policy that restricts a user to one stream per domain. In other words, for each domain listed in the above policy, they can only stream one file at a time on their site.
The premise is a video site where customers will be streaming videos specific to their niche. I host and deliver the videos, but need some control over their delivery.
All files are in ONE bucket. There aren't any weird things going on with the files. It's very straight forward.
I just need the bucket policy control that would Grant and also Restrict the ability of my customers to stream my content from their domains.
I PRAY I have been clear enough, but please don't hesitate to ask if I have confused you...
Thanks VERY much
A
I don't think you can achieve what you want by simply setting access permissions to the bucket.
I checked in AccessControlList and CannedAccessControlList.
Your best bet will be to write a webservice wrapper to access the bucket data.
You will have better control over the data you serve, and you could also explore serving a cached copy of the data for better performance.
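One caveat to the above: the domain half of the request can at least be approximated with a bucket policy conditioned on the Referer header, though referers are trivially spoofed (so treat it as deterrence, not security), and the one-stream-per-domain limit still cannot be expressed in a policy at all. A hedged boto3 sketch, with placeholder bucket and domain names:

```python
import json
import boto3

s3 = boto3.client("s3")

# Allow GETs only when the Referer header matches one of the customer
# domains. Referer headers are easily forged, so this is a speed bump,
# not real access control.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowStreamingFromCustomerSites",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-video-bucket/*",
        "Condition": {
            "StringLike": {
                "aws:Referer": [
                    "http://customer-one.example/*",
                    "http://customer-two.example/*",
                ]
            }
        },
    }],
}

s3.put_bucket_policy(Bucket="my-video-bucket", Policy=json.dumps(policy))
```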
I'm trying to figure out how to store a database consisting of metadata in Amazon SimpleDB, with the actual content the metadata refers to (videos) in S3. As I understand it, I should place a pointer in SimpleDB that refers to the videos in S3. What is this pointer, exactly? Is it the URL of the video located in S3?
Also, are there any code samples that would pertain to this?
Thanks!
You're right: just store the URL in SimpleDB and you're done.
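For example, a minimal boto3 sketch of writing the metadata with the S3 URL as the pointer (the domain, item, bucket, and attribute names are placeholders):

```python
import boto3

sdb = boto3.client("sdb")
BUCKET = "my-video-bucket"
key = "videos/intro.mp4"

# Store the video's metadata in SimpleDB; the "url" attribute is the
# pointer back to the actual content in S3.
sdb.put_attributes(
    DomainName="videos",
    ItemName="intro-video",
    Attributes=[
        {"Name": "title", "Value": "Intro", "Replace": True},
        {"Name": "duration_s", "Value": "120", "Replace": True},
        {"Name": "url",
         "Value": f"https://{BUCKET}.s3.amazonaws.com/{key}",
         "Replace": True},
    ],
)
```

Storing just the bucket and key instead of a full URL works too, and leaves you free to generate signed URLs later if the videos ever become private.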
What you're trying to do is described as a use case here: http://aws.amazon.com/en/simpledb/usecases_metadata_indexing/
If you take a look at the code library, you can filter by S3 or SimpleDB, and you'll find examples like the SimpleDB PHP Sample Program Set and the Travel Log - Sample Java Web Application.
Regards.
I'm not sure how to word the question but here is what I am looking to do.
I have a site that uses custom map tile overlays on a google map.
The javascript calls a php file on my server that checks to see if an existing map tile exists for the given x, y, and zoom level.
If it exists, it displays that image using file_get_contents.
If it doesn't exist, it creates the new tile then displays it.
I would like to utilize Amazon S3 to store and serve the images, since there could end up being a lot of them and my server is slow. If I have my script check whether the image exists on Amazon and then display it, I am guessing I am not getting the benefits of the speed and Amazon's CDN. Is there a way to do this?
Or is there a way to try to pull the file from Amazon first, then set up something on Amazon to redirect to my script if the file's not there?
Maybe host the script on another of Amazon's services? The tile generation is also quite slow in some cases.
Thanks
Ideas:
1 - Use CloudFront, but point it to a cluster of tile generation machines. This way, you can generate the tiles on demand, and any future requests are served right from CloudFront (a rough sketch of this check-then-generate flow follows this list).
2 - Use CloudFront, but back it with an S3 store of generated tiles. Turn on logging for the S3 bucket so you can detect failed requests. Consume those logs on a schedule and generate the missing tiles. This is a cheaper way of generating tiles, but means that when a tile fails, the user gets nothing.
3 - Just pre-generate all the tiles. Throw tasks in an SQS queue, then spin up a collection of EC2 instances to generate the tiles. This will cost the most up front, but all users get a fast experience.
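As a rough illustration of the check-then-generate flow behind option 1 (and of the question's "pull from Amazon first, fall back to my script" idea), here is a hedged boto3 sketch. The bucket name is a placeholder and render_tile is a stand-in for your actual (slow) tile generator:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET = "my-tile-bucket"  # placeholder bucket name

def render_tile(x, y, zoom):
    """Stand-in for the real tile generator; should return PNG bytes."""
    raise NotImplementedError

def tile_url(x, y, zoom):
    key = f"tiles/{zoom}/{x}/{y}.png"
    try:
        # Cheap existence check: HEAD the object instead of downloading it.
        s3.head_object(Bucket=BUCKET, Key=key)
    except ClientError as err:
        if err.response["Error"]["Code"] != "404":
            raise
        # Tile is missing: generate it once, upload it, and serve it
        # from S3 (or CloudFront) from now on.
        s3.put_object(Bucket=BUCKET, Key=key,
                      Body=render_tile(x, y, zoom),
                      ContentType="image/png")
    return f"https://{BUCKET}.s3.amazonaws.com/{key}"
```

In practice you would put a CloudFront distribution in front of the bucket so repeat requests never hit this code at all.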
I've written a blog post with a strategy for dealing with this. It's designed to make intelligent and thrifty use of CloudFront, maximize caching and deal with new versions of existing images. You may find the technique described there helpful. The example code shows how to handle different dimensions (i.e. thumbnails) of images. You could modify it to handle different zoom levels.
I need to update that post to support CloudFront custom origins, and I think that for your application you might be better off skipping S3 and using a custom origin. The advantage of a custom origin is simply that it's probably going to be easier to manage all of your images on your local filesystem compared to managing them on S3.