Some basic questions about Amazon S3

I know this isn't a direct technical problem, but this seems like an ideal place to ask since I know other developers have experience using this service. I was about to ask this on the Amazon AWS forums but realized you need to be an AWS account holder to do that. I don't want to sign up with them before getting the following answered:
Is Amazon S3 a CDN, or is it just an online storage service meant for personal use? Even if it isn't a CDN, are you at least allowed to serve website assets from it to a high-traffic site?
I have an adult dating site I would like to store assets for in S3. Is this type of site allowed under their terms of service? What the ToS had to say on the matter was far too broad. Basically, the site has nude images of members, but they are all of age and uploaded by the users themselves. The site is targeted only at U.S. users and is legal under U.S. law.

Amazon's S3 service can be used as a CDN if you want. Depending on the size of your site, you might want to look at CloudFront, which will let you serve your content from multiple edge locations. For what you're describing, S3 will be fine for your needs, but as for Amazon's rules about content, I'm not too sure.

S3 stands for Simple Storage Service.
You can use S3 to store files for private or public use.
If you want CDN features, you have to use CloudFront.
CloudFront accepts an S3 bucket as its origin and distributes the content to its CDN edge servers.
As for the policies, I'm uncertain, but you can use it to store any type of data as long as you have the rights to it.
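For example, here is a minimal sketch with boto3 of storing a publicly readable asset in S3 (the bucket and key names are hypothetical, and newer buckets may block public ACLs by default):

```python
import boto3

# Hypothetical names for illustration only.
BUCKET = "my-site-assets"
KEY = "img/logo.png"

s3 = boto3.client("s3")

# Upload a local file and make it publicly readable so it can be
# served directly from S3 or used as a CloudFront origin. Note that
# buckets created with "Block Public Access" enabled will reject
# the public-read ACL.
s3.upload_file(
    "logo.png",
    BUCKET,
    KEY,
    ExtraArgs={"ACL": "public-read", "ContentType": "image/png"},
)

print(f"https://{BUCKET}.s3.amazonaws.com/{KEY}")
```

If you later put CloudFront in front of the bucket, the same object simply becomes part of the distribution's origin.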

Related

Using object storage with authorization for a web app

I'm contributing to developing a web (front+back) application, which uses OpenID Connect (with Auth0) for authentication & authorization.
The web app needs authentication to access some public and some restricted information (restrictions are per-user or depend on certain group-related rules).
We want to provide upload/download features for documents such as .pdf, and we have implemented MinIO (pretty similar to AWS S3) for public documents.
However, we can't wrap our heads around restricted-access files:
should we implement OIDC on MinIO so users access the buckets directly but with temporary access tokens, allowing a fine-grained authorization policy,
or should the back office be the only one to hold keys to MinIO and act as the intermediary between the object storage and the users?
Looking for good practices here; thanks in advance for your help.
Interesting question. PDF docs are static web content unless they contain sensitive data, so I would aim to separate secured (API) and non-secured (web) concerns on this one.
UNSECURED RESOURCES
If there is no security involved, connecting to a bucket from the front end makes sense. The bucket contents can also be distributed to a content delivery network, for best global performance. The PDF can be considered a web resource.
SECURED RESOURCES
Requests for these need to be treated as an API request, if a PDF doc contains sensitive data. APIs should receive an access token and enforce access to documents via scopes and claims.
You might use a Documents API for this. The implementation might still connect to a bucket, but this might be a different bucket that the browser does not have access to.
SUMMARY
This type of solution is often clearer if you think in terms of URL design. E.g. the front end might have 2 document URLs:
publicDocs
secureDocs
By default I would treat docs that users upload as secure, unless they select an upload option such as make public.
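If you go the back-office route, a common pattern is to validate the user's access token and then hand out a short-lived presigned URL instead of proxying the bytes yourself. A rough sketch using boto3 pointed at a MinIO endpoint (the endpoint, credentials, bucket name, and the `user_may_read` check are all placeholders):

```python
import boto3

# Placeholder endpoint and credentials; only the backend holds these keys.
s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.internal:9000",
    aws_access_key_id="BACKEND_KEY",
    aws_secret_access_key="BACKEND_SECRET",
)

def user_may_read(user: dict, doc_key: str) -> bool:
    # Stub: replace with your per-user / group rules derived from
    # the claims in the OIDC access token.
    return doc_key in user.get("allowed_docs", [])

def get_download_url(user: dict, doc_key: str) -> str:
    if not user_may_read(user, doc_key):
        raise PermissionError("not allowed")
    # Short-lived URL: the browser downloads straight from MinIO,
    # but only after the backend has authorized this request.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "secure-docs", "Key": doc_key},
        ExpiresIn=300,  # 5 minutes
    )
```

This keeps the MinIO keys out of the browser while avoiding streaming large files through the backend.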

Bucket SSL / High Bill for Bucket? - Google Cloud

I am hosting a simple static website via a Google Cloud Storage bucket right now:
Does the bill look familiar to you? I am surprised by the high usage numbers.
Does a hit counter exist for Google Cloud Storage bucket websites?
How can I secure my bucket website with SSL?
I tried to follow the load balancing manual, but somehow it doesn't work.
As stated in the documentation:
"While you can serve your content through HTTPS using direct URIs such as https://storage.googleapis.com/my-bucket/my-object, when hosting a static website using a CNAME redirect, Cloud Storage only supports HTTP."
As you correctly stated, using the load balancer is the recommended method to serve your content through HTTPS. If you need help with this, I would recommend asking another question with the details of the steps you followed and the error preventing you from continuing.
Using a load balancer will also let you use Stackdriver to monitor access to your website. With Stackdriver you can define custom metrics and get the number of users entering your website.
Also, discussing your Google Cloud Platform billing invoice on Stack Overflow is not recommended, as it is not related to programming. If you need help with your billing, you should contact the Google Cloud Platform billing support team.
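As a concrete illustration of the direct-URI option from the quoted documentation, here is a minimal sketch with the google-cloud-storage Python client (the bucket and object names are placeholders):

```python
from google.cloud import storage

# Hypothetical bucket/object names for illustration.
client = storage.Client()
bucket = client.bucket("my-bucket")
blob = bucket.blob("index.html")

blob.upload_from_filename("index.html", content_type="text/html")

# Direct URIs are served over HTTPS even without a load balancer;
# only CNAME-style website hosting is limited to HTTP.
print(f"https://storage.googleapis.com/{bucket.name}/{blob.name}")
```

The trade-off is that direct URIs don't give you your own domain name; for that, the load balancer setup is still the way to go.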

What is the equivalent for AWS Cloudfront service in Google Cloud?

Today I am using an AWS S3 bucket, and on top of it I am using AWS CloudFront.
I would like the same setup on Google Cloud. I found Cloud Storage, where I can create a bucket and put my static files/images, which is equivalent to the S3 bucket. But what about CloudFront? Where do I set up the CloudFront equivalent in Google Cloud?
Thanks in advance.
Google Cloud features built-in edge caching in its points of presence for services like Cloud Storage and App Engine, so in many cases you may not need a separate CDN product. I would suggest measuring your use case with and without a CDN from a few countries before adding in the extra expense. Keep in mind that objects need to be publicly readable with cache control settings that allow caching (which is the default for public objects) in order for Google's edge caches to cache them.
Google Cloud does have a CDN service, though, called Google Cloud CDN. It ties in with Cloud Load Balancing. It offers direct support for GCS buckets, although that's still in alpha. The upside is that serving GCS resources via Cloud CDN adds some nice perks, such as the ability to use custom domains with HTTPS or mapping GCS bucket names to differently-named domains.
In addition, if you're happy with CloudFront, I believe that you can use GCS (or pretty much anything else) as an origin server for it.
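Regarding the cache-control requirement mentioned above, here is a rough sketch with the google-cloud-storage Python client of publishing an object so Google's edge caches can serve it (all names are placeholders):

```python
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-static-assets").blob("img/banner.png")

blob.upload_from_filename("banner.png", content_type="image/png")

# Allow shared caches (including Google's edge caches) to keep the
# object for an hour; tune max-age to how often the asset changes.
blob.cache_control = "public, max-age=3600"
blob.patch()

# Public readability is required for edge caching of GCS objects.
blob.make_public()
```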

Amazon S3 API OAuth-style access to 3-rd party buckets

I'm a newbie to AWS infrastructure, and I can't figure out how to build the auth process I want.
I want something similar to what other cloud storage services, like Box, Dropbox, and OneDrive, have:
the developer registers an OAuth app with a set of permissions
the client can, with one click, give consent for this app to have the listed permissions on their own account and its content, indefinitely, until the consent is deliberately withdrawn
Now, as far as I understand, the client has to go to the console, create a user, create a role for it, and then send this user's ID and key to my app, which is not that convenient. I'm looking for the easiest and simplest way to do this.
I've tested "Login with Amazon" + Amazon Cognito, but it turned out to be the completely opposite mechanism: the client has to set up Login with Amazon and link it to Cognito in order to give me one-click access.
So, is this even possible? What is the best way to implement such an auth process?
There isn't a way to do what you're trying to do, and I would suggest that there's a conceptual problem with comparing Amazon S3 to Dropbox, Box, or OneDrive -- it's not the same kind of service.
S3 is a service that you could use to build a service like those others (among other purposes, of course).
Amazon Simple Storage Service (Amazon S3) provides developers and IT teams with secure, durable, highly-scalable cloud storage.
https://aws.amazon.com/s3/
Note the target audience -- "developers and IT teams" -- not end-users.
Contrast that with Amazon Cloud Drive, another service from Amazon -- but not part of AWS.
The Amazon Cloud Drive API and SDKs for Android and iOS enable your users to access the photos, videos, and documents that they have saved in the Amazon Cloud Drive, and provides you the ability to interact with millions of Amazon customers. Access to the free Amazon Cloud Drive API and SDKs for Android and iOS enable you to place your own creative spin on how users upload, view, edit, download, and organize their digital content using your app.
https://developer.amazon.com/public/apis/experience/cloud-drive/
The only way for your app to access a user's bucket would be for the user to configure and provide your app with a key and secret, or to configure their bucket policy to allow the operations by your app's credentials, or to create an IAM role and allow your app to assume it on their behalf, or something similar within the authentication and authorization mechanisms in AWS... none of which sounds like a good idea.
There's no OAuth mechanism for allowing access to resources in an AWS account.
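Of those workarounds, cross-account role assumption is the closest thing to a consent flow. A rough sketch with boto3, assuming the customer has already created a role in their account that trusts your app's AWS account (the role ARN, external ID, and bucket name below are placeholders):

```python
import boto3

sts = boto3.client("sts")

# The customer creates this role in *their* account, trusting your
# account ID, and tells you its ARN (placeholder values below).
resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ThirdPartyAppAccess",
    RoleSessionName="my-app-session",
    ExternalId="customer-supplied-external-id",  # guards against the confused-deputy problem
)

# Temporary credentials scoped to whatever the customer's role allows.
creds = resp["Credentials"]
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="customers-bucket")["KeyCount"])
```

The setup on the customer's side is still manual, which is exactly why it doesn't feel like OAuth.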

When using S3 in AWS, how do you manage access to specific images?

I am developing an image server backed by S3 in AWS (Amazon Web Services), but I need to solve an access-management issue.
What I mean is that each end user should only be able to access specific images in S3.
For that, I am thinking about IAM (Identity and Access Management) to allow some users to access specific images.
What I want to know is whether there are other solutions.
Actually, I have found Cognito, but unfortunately Cognito is supported in only 2 regions...
If you have a good idea, please give me an explanation. Thank you.
Unfortunately there is nothing in the suite of AWS services that fits your use case 100%.
While Amazon Cognito is only available in 2 regions, this does not restrict you to accessing S3 from only those 2 regions with credentials vended by the service. You could use Amazon Cognito and IAM roles to define a policy that allows limited permissions to a set of files based on the key prefix. However, at the current time, role policies would only allow you to restrict access to 2 classes of files (see the sketch after the list):
"Public files" - files accessible via all identities in your pool.
"Private files" - files accessible only to a specific identity in your pool.
If you wanted to support restricting access to specific files to specific users in your application, you would need to handle this through a backend application that proxies access to the files in S3.