Is this legacy CloudFront configuration the same as my non-legacy configuration?

At 11:50 in this video the presenter explains how to configure a CloudFront behaviour to whitelist the CloudFront-Viewer-Country header. I have followed his instructions as closely as I can, but this is the only part of the video where I cannot be sure I have matched them, because he is using AWS's old console and I am using the new design, which does not use the same terminology. His example clearly works and mine does not, which leads me to suspect that this is the breaking point.
Old console with the CloudFront-Viewer-Country header whitelisted:
My console with what I hope is the same configuration:
How can I tell whether these two consoles show the same behaviour configuration?
When I say that mine does not work: I have four different S3 buckets, each containing a different image stored under the same key. These are served via CloudFront with a Lambda@Edge function, derived from AWS's example JS function, that directs requests to the appropriate bucket. What I expect is that when viewing the image from a European region (via a VPN change) I should see one image, and when viewing from a US region I should see a different one. I do not.
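For reference, a minimal sketch of the kind of origin-request Lambda@Edge handler described in the video (the bucket domains and country list here are hypothetical). Note that CloudFront-Viewer-Country only reaches the function if the behaviour whitelists/forwards that header, which is exactly the setting in question:

```js
'use strict';

// Sketch: route each request to a per-region S3 bucket based on the
// CloudFront-Viewer-Country header. Bucket names are placeholders.
exports.handler = async (event) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    // Default to the US bucket.
    let bucket = 'my-images-us.s3.amazonaws.com';

    // The header is only present if the behaviour forwards it.
    if (headers['cloudfront-viewer-country']) {
        const country = headers['cloudfront-viewer-country'][0].value;
        const european = ['AT', 'BE', 'DE', 'ES', 'FR', 'GB', 'IT', 'NL'];
        if (european.includes(country)) {
            bucket = 'my-images-eu.s3.amazonaws.com';
        }
    }

    // Re-point the request at the chosen S3 origin.
    request.origin.s3.domainName = bucket;
    headers['host'] = [{ key: 'Host', value: bucket }];

    return request;
};
```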

Related

Fetching content binary from database or fetching content by its link from storage service

For an app (web + phone) there are two options:
Image binaries in the database: the server replies to the app's HTTP requests with the images encoded as base64.
Images in a storage service such as Amazon S3, Azure Blob Storage, or a self-hosted one, with only the image links in the database: the server handles the app's HTTP requests by sending back the links, and the app fetches the images from storage itself.
Which option above is the standard practice? Which one has less trouble down the road?
To some extent, the answer to this question is always opinion-based, and it partly depends on the specific use case.
I would think the second approach is used more often. One reason is that storage within a database is usually somewhat more expensive than file storage. Also, what is the real use case? Assuming you serve HTML pages that reference images via the img element or via CSS as a background image, a base64 return value would not be very useful. On the other hand, the more complicated flow shown at the bottom of your picture becomes a bit simpler from the client's point of view: the server resolves the link when generating the HTML and uses it as the src of the img, and the browser then applies standard HTML logic and requests the image data from the storage service via HTTP.
However, if you want to optimize load times (and your images are more or less unique per page, so browser caching of images across pages would not help much), you could embed data URLs in the HTML, and then the first approach could be useful. In this case, all the logic, including generating the data URL within the HTML, is handled on the server, and the browser issues a single HTTP request.
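As a sketch of that first approach (the file name and MIME type are hypothetical), the server-side embedding could look like this in Node.js:

```js
const fs = require('fs');

// Read the image (from the database or disk) and build a data URL,
// so the generated page needs no separate image request.
const buffer = fs.readFileSync('thumbnail.png'); // hypothetical file
const dataUrl = `data:image/png;base64,${buffer.toString('base64')}`;

// The server injects the data URL straight into the HTML it generates.
const html = `<img src="${dataUrl}" alt="thumbnail">`;
```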

Can we use SDKs directly in Suitelet?

I am implementing a requirement to store images in an AWS S3 bucket instead of in NetSuite. Since the bucket is private, I have to upload the files and generate the URLs in the backend (a Suitelet).
I tried to include the AWS SDK in the Suitelet via its define statement, but that doesn't work.
I want to know whether we can use/include SDKs inside a Suitelet.
How can I implement this without using any third-party solutions?
How are permissions for the links managed? Can you make them publicly viewable? Remember that unless the links you generate are timestamped, anyone with the link can get to the image.
In terms of uploading the images check out https://github.com/DeepChannel/netsuite-savedsearch-s3
If you need each image to have a magic link, you could use a Heroku app or an AWS Lambda. The app would check a hash based on the link parameters and proxy the image if the hash is valid. If your images are supposed to be private to a customer, this would be the way to go.
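A minimal sketch of that hash check (the secret, route, and parameter names are hypothetical assumptions) in Node.js:

```js
const crypto = require('crypto');

const SECRET = process.env.LINK_SECRET; // shared between link issuer and proxy

// Sign the object key together with an expiry timestamp.
function sign(key, expires) {
    return crypto.createHmac('sha256', SECRET)
        .update(`${key}:${expires}`)
        .digest('hex');
}

// Issued by the backend when it hands out a link, e.g.
// makeLink('photos/42.jpg', 3600) -> "/image?key=photos%2F42.jpg&expires=...&sig=..."
function makeLink(key, ttlSeconds) {
    const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
    return `/image?key=${encodeURIComponent(key)}&expires=${expires}&sig=${sign(key, expires)}`;
}

// Checked by the proxy (Heroku app or Lambda) before streaming the S3 object.
function isValid(key, expires, sig) {
    return Number(expires) > Date.now() / 1000 && sign(key, Number(expires)) === sig;
}
```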
If you are using the images generally on a website, then just make the bucket publicly readable and use the API to upload.

Hiding the video URL with video.js

I want to hide the URLs of my videos hosted on Amazon S3 to prevent people from downloading them.
I saw another strategy (Amazon Bucket Policies), but I think it's too complex for this case.
Is it possible to hide the URLs?
What do you suggest for this problem?
Instead of putting the video tag in the markup from the beginning, you could have a data attribute with just an id. You could then read in this id, check it in the JavaScript, and inject the appropriate video URL. As previously stated, there is no true way to protect it. With my method, people can still use Fiddler to see where the video is being requested from. They can also use the browser's dev tools.
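A minimal sketch of that approach (the lookup endpoint and attribute names are hypothetical assumptions) with video.js:

```js
// Markup carries only an id, e.g.
// <video id="player" class="video-js" data-video-id="abc123"></video>
const el = document.getElementById('player');
const videoId = el.dataset.videoId;

// Look up the real (possibly signed) S3 URL and inject it into the player.
fetch(`/api/video-url?id=${encodeURIComponent(videoId)}`)
    .then((res) => res.json())
    .then(({ url }) => {
        const player = videojs('player');
        player.src({ src: url, type: 'video/mp4' });
    });
```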

Does Google Images allow hotlinking?

I wrote a script that uses the Google Images JSON API to automatically fetch thumbnails for posts. I'm currently linking directly to the thumbnail (e.g. http://t3.gstatic.com/images?q=tbn:ANd9GcTok4m3DWNRv8gxMDTE0bwj8m-jYl2UGdlbc7ig158m0XosD-lcQEIFcg). Does Google allow that?
If not, I should be allowed to download the thumbnails to my server, right?
It's all about traffic. If your app generates tons of traffic, they can ban your server.
In any case, the best approach is to ask them about this directly.
Also, this might help you: Google Terms of Service Highlights
I see problems with downloading the image thumbnails to your server and rendering them from there. Images shown in search results might be copyrighted or inappropriate; they are crawled images, so the owner can ask Google to remove them at any time. If you cache them locally and render them anyway, that workflow is broken: you might be rendering an image that should have been revoked.
Coming back to hotlinking: can you explain a bit more about the actual usage context? Which API are you using, what are you searching for, and do you own the website/posts that you are filtering?
Also, the Image Search API is deprecated; under the terms, it remains active for only three years after the deprecation notice.

Possible to get image from Amazon S3 but create it if it doesn't exist

I'm not sure how to word the question but here is what I am looking to do.
I have a site that uses custom map tile overlays on a google map.
The javascript calls a php file on my server that checks to see if an existing map tile exists for the given x, y, and zoom level.
If it exists, it displays that image using file_get_contents.
If it doesn't exist, it creates the new tile then displays it.
I would like to use Amazon S3 to store and serve the images, since there could end up being a lot of them and my server is slow. But if my script first checks whether the image exists on Amazon and then displays it, I am guessing I lose the speed benefits of Amazon's CDN. Is there a way to do this?
Or is there a way to try to pull the file from Amazon first, and set up something on Amazon to redirect to my script if the file isn't there?
Maybe host the script on another of Amazon's services? The tile generation is also quite slow in some cases.
Thanks
Ideas:
1 - Use CloudFront, but point it at a cluster of tile-generation machines (see the sketch after this list). This way you can generate the tiles on demand, and any future requests are served right from CloudFront.
2 - Use CloudFront, but back it with an S3 store of generated tiles. Turn on logging for the S3 bucket so you can detect failed requests, consume those logs on a schedule, and generate the missing tiles. This is a cheaper way of generating tiles, but it means that when a tile is missing, the user gets nothing.
3 - Just pre-generate all the tiles. Throw tasks in an SQS queue, then spin up a collection of EC2 instances to generate the tiles. This will cost the most up front, but all users get a fast experience.
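A minimal sketch of idea 1 (the paths, port, and renderer stub are hypothetical assumptions), as a small Node.js origin sitting behind CloudFront:

```js
const http = require('http');
const fs = require('fs');
const path = require('path');

// Stand-in for the real (slow) tile renderer; returns PNG bytes.
function renderTile(x, y, z) {
    return Buffer.alloc(0); // replace with actual tile generation
}

http.createServer((req, res) => {
    const m = req.url.match(/^\/tiles\/(\d+)\/(\d+)\/(\d+)\.png$/); // /tiles/z/x/y.png
    if (!m) {
        res.writeHead(404);
        return res.end();
    }

    const [, z, x, y] = m;
    const file = path.join('/var/tiles', z, x, `${y}.png`);

    // Generate the tile on first request; later requests hit the cached file,
    // and CloudFront caches the response at the edge.
    if (!fs.existsSync(file)) {
        fs.mkdirSync(path.dirname(file), { recursive: true });
        fs.writeFileSync(file, renderTile(Number(x), Number(y), Number(z)));
    }

    res.writeHead(200, {
        'Content-Type': 'image/png',
        'Cache-Control': 'public, max-age=86400',
    });
    fs.createReadStream(file).pipe(res);
}).listen(8080);
```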
I've written a blog post with a strategy for dealing with this. It's designed to make intelligent and thrifty use of CloudFront, maximize caching and deal with new versions of existing images. You may find the technique described there helpful. The example code shows how to handle different dimensions (i.e. thumbnails) of images. You could modify it to handle different zoom levels.
I need to update that post to support CloudFront custom origins, and I think that for your application you might be better off skipping S3 and using a custom origin. The advantage of a custom origin is simply that it's probably going to be easier to manage all of your images on your local filesystem compared to managing them on S3.