I want to have my own private hosted object storage with S3 compatibility.
I found MinIO as a possible solution. My question is: if an application A is able to connect to Amazon S3 storage, does that imply it could also connect to MinIO?
More specifically, if MinIO created a presigned URL, is application A (which speaks the Amazon S3 API) also able to use that presigned URL?
Yes, MinIO is compatible with AWS S3. An application that currently connects to AWS S3 can be pointed at MinIO instead.
Presigned URLs can be used by any application as long as they have not expired, since MinIO signs them with the same AWS Signature Version 4 scheme that AWS uses.
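This interchangeability exists because presigning is a pure client-side computation. A stdlib-only sketch of SigV4 query-parameter presigning (the host, bucket, key, and credentials below are made-up placeholders; real applications would use an SDK such as boto3 with `endpoint_url` pointed at MinIO):

```python
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def presign_get(host, bucket, key, access_key, secret_key,
                region="us-east-1", expires=3600):
    """Build a SigV4 presigned GET URL (path-style: /bucket/key)."""
    now = datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"

    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    canonical_request = "\n".join([
        "GET",
        f"/{bucket}/{key}",
        canonical_query,
        f"host:{host}\n",   # canonical headers, each newline-terminated
        "host",             # signed headers
        "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    # Derive the signing key: an HMAC chain over date, region, service.
    k = hmac.new(("AWS4" + secret_key).encode(), datestamp.encode(),
                 hashlib.sha256).digest()
    for part in (region, "s3", "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{bucket}/{key}?{canonical_query}&X-Amz-Signature={signature}"

url = presign_get("minio.example.com:9000", "photos", "cat.jpg",
                  "MINIO_ACCESS_KEY", "MINIO_SECRET_KEY")
```

Nothing in the resulting URL is MinIO-specific; swap the host for an AWS one and the same code presigns for AWS S3, which is exactly why an S3-capable client can consume either.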
If you are just starting out on minio, please join our slack channel at https://slack.min.io
When a client app is on-prem and an AWS VPC is set up with Direct Connect to the corporate on-prem network, how exactly can the client app gain access to the S3 objects?
For example, suppose a client app simply wants to obtain jpg images which live in an S3 bucket.
What type of configuration do I need to make to the S3 bucket permissions?
What configuration do I need to do at the VPC level?
I'd imagine that since Direct Connect is set up, this would greatly simplify an on-prem app gaining access to an S3 bucket. Correct?
Would VPC endpoints come into play here?
Also, one constraint here: the client app is not within my control. It simply needs a URL it can reach for the image; it cannot easily be changed to send credentials with the request, unfortunately. This may be a very important constraint worth mentioning.
Any insight is appreciated. Thank you so much.
You might want to consider these:
https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
https://aws.amazon.com/premiumsupport/knowledge-center/s3-private-connection-no-authentication/
And for troubleshooting, try this
https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/
If you need to access S3 over Direct Connect, see:
S3-DirectConnect
//BR
P.S. Let me know if that works for you. :)
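Given the constraint that the client cannot send credentials, the second knowledge-center link above boils down to a bucket policy along these lines (bucket name and endpoint ID are placeholders): anonymous `GetObject` is allowed, but only for requests that arrive through your VPC endpoint, so the bucket stays unreachable from the public internet.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAnonymousGetViaVpcEndpoint",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-image-bucket/*",
      "Condition": {
        "StringEquals": { "aws:SourceVpce": "vpce-0123456789abcdef0" }
      }
    }
  ]
}
```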
I had a very similar issue to solve, also searching, like you, for how to force a client to use Direct Connect to download content from S3.
In my case, the client is an internet-facing on-prem load balancer that needed to serve content hosted on S3 (CloudFront was not possible).
The two articles already mentioned are important to take into account, but not sufficient:
Direct connect for virtual private interface
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-direct-connect/
=> needed to set up all the VPC endpoint and routing between on-prem and AWS.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#accessing-bucket-and-aps-from-interface-endpoints
=> partially explains how to access a bucket using VPC endpoints.
The information missing from the latter AWS page is the URL structure you need to use to connect to your S3 endpoint; here is the structure I discovered to work:
https://bucket.[vpc-endpoint-id].s3.[region].vpce.amazonaws.com/[bucket-name]/[key]
With that scheme, you can address any object in an S3 bucket through the S3 VPC endpoint using a normal web request.
We use that concept to securely serve files hosted in an S3 bucket via our on-prem load balancer and a specific domain name, over our Direct Connect capacity.
The LB just rewrites the URL and gets the files directly from the S3 bucket. The real client never knows that the file is actually served from S3 on the backend.
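The URL rewrite the LB performs can be captured in a tiny helper (the endpoint ID, region, bucket, and key below are made-up placeholders):

```python
def vpce_object_url(vpce_id: str, region: str, bucket: str, key: str) -> str:
    """Build the interface-endpoint URL for an S3 object, per the scheme above."""
    return f"https://bucket.{vpce_id}.s3.{region}.vpce.amazonaws.com/{bucket}/{key}"

url = vpce_object_url("vpce-0123456789abcdef0", "eu-west-1", "media", "img/logo.png")
# https://bucket.vpce-0123456789abcdef0.s3.eu-west-1.vpce.amazonaws.com/media/img/logo.png
```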
Is it possible to restrict an Amazon S3 website endpoint to CloudFront only? I see this is possible for S3 rest endpoints but was wondering if there were any new workarounds to do this for S3 website endpoints.
For a website endpoint you can use a bucket policy that allows only CloudFront IP addresses. It is not as restrictive as OAI, but it is still a way.
http://d7uri8nf7uskq.cloudfront.net/tools/list-cloudfront-ips
For S3 as an origin, the CLOUDFRONT_REGIONAL_EDGE_IP_LIST IP addresses are not used unless you're using Lambda@Edge or AWS has enabled it intentionally, so you can allow only the CLOUDFRONT_GLOBAL_IP_LIST addresses.
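A minimal sketch of such a bucket policy (the bucket name is a placeholder and the CIDR blocks are illustrative only; you would populate them from the published CLOUDFRONT_GLOBAL_IP_LIST and keep them in sync as it changes):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontGlobalIpsOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-website-bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": ["203.0.113.0/24", "198.51.100.0/24"]
        }
      }
    }
  ]
}
```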
I've got small static JSON files sitting in an AWS S3 bucket that my (hybrid PhoneGap/Cordova) mobile app needs to read in (not write). I want to use Cloudflare between them. I know there are plenty of articles about static website hosting with this combination but I'm wondering if that's overkill for this? i.e. can I just connect Cloudflare to my S3 bucket without configuring all the static hosting stuff on S3, and if so how?
The JSON files are public and that's fine, I don't need to restrict access to just the app.
Thanks
You will need to configure static website hosting in S3 to achieve this, since Cloudflare requires an endpoint to forward traffic to.
For more details about the configuration, refer to the article on Static Site Hosting with S3 and CloudFlare.
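Assuming you have the AWS CLI configured and a bucket named after your domain (commonly required when a CNAME fronts an S3 website endpoint; the names below are placeholders), enabling static hosting is a one-liner:

```
# Enable static website hosting on the bucket. An index document is
# required by S3 even if you only serve JSON files.
aws s3 website s3://files.example.com/ --index-document index.html

# Cloudflare then CNAMEs your domain to the website endpoint, which has
# the form: files.example.com.s3-website-us-east-1.amazonaws.com
```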
We have a number of Google Cloud Storage transfer jobs that sync from AWS S3 buckets to Google buckets. I am assuming that they use HTTPS to transfer the data, but where can I get confirmation that they do? And where can I find information about the minimum TLS version used in these transfer jobs?
Regarding Cloud Storage TLS: in this document you can find the TLS information for gsutil commands, whose requests are made via the JSON API. These requests are HTTPS-only, and the same API is used within the Cloud Console too.
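If you want a hard guarantee for your own client-side access (this does not reveal or control what the managed transfer service negotiates internally; it only constrains connections you open yourself), you can pin a TLS floor with the Python standard library:

```python
import ssl

# Build a client context that refuses anything below TLS 1.2.
# Handshakes through this context, e.g. via
# http.client.HTTPSConnection("storage.googleapis.com", context=ctx),
# will fail rather than fall back to TLS 1.0/1.1.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```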
I have hosted my static website in an S3 bucket using Angular 5 and mapped it to a custom domain using Route 53. I want SSL/TLS (HTTPS) for my site, so I used ACM to generate the certificate and mapped it to my site using CloudFront. The ACM status is "Issued" and it says it's in use, but my website is not HTTPS-enabled.
Everything is hosted in us-east-1, I am accessing my site from East-Asia. Is this an issue?
Am I missing something?
The ACM certificate for CloudFront should have been generated in the N.Virginia region. Then you should be able to assign it to your CloudFront distribution.
In your CloudFront distribution Origin, you should set the "Origin Protocol Policy" parameter to "HTTPS Only" if you want to use HTTPS between CloudFront and your S3 bucket.
In your CloudFront distribution Cache Behavior, you should set the "Viewer Protocol Policy" parameter to "Redirect HTTP to HTTPS" so that every HTTP communication between the clients and your CloudFront distribution is redirected to use HTTPS.
Then you would have to change your DNS record to point to the CloudFront distribution CNAME.
Additionally, you could configure your CloudFront distribution and your S3 bucket to restrict direct client access to the S3 bucket, so that every request goes through your CloudFront distribution.
Using an Origin Access Identity to Restrict Access to Your Amazon S3 Content
Typically, if you're using an Amazon S3 bucket as the origin for a CloudFront distribution, you grant everyone permission to read the objects in your bucket. This allows anyone to access your objects either through CloudFront or using the Amazon S3 URL. CloudFront doesn't expose Amazon S3 URLs, but your users might have those URLs if your application serves any objects directly from Amazon S3 or if anyone gives out direct links to specific objects in Amazon S3.
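With an OAI in place, the bucket policy grants read access only to that identity, so direct S3 URLs stop working and every request must flow through CloudFront. A sketch (the OAI ID and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontOaiReadOnly",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE12345"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-site-bucket/*"
    }
  ]
}
```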