Set permission for static website on Azure Blob Storage - authentication

I have some static websites hosted on Azure Blob Storage and I want to grant access to those websites only for authenticated users from an ASP.NET MVC application.
I can't have the Blob Storage public.
I think I cannot use Shared Access Signatures, taking into consideration that the website uses lots of JavaScript and CSS files that are downloaded automatically by the main .htm page.
What's the best solution in this case?

If the permissions must be checked by your application, then you can build a controller in your application that will act as a proxy between the client and blob storage. No Shared Access Signatures, only your regular blob storage keys and your regular authentication for users.
The action takes the URL relative to your blob container as an argument. You can add a custom route so that it nicely handles the links you make inside your static website.
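For example, here is a minimal sketch of such a proxy action (assumptions: ASP.NET Core MVC conventions, the Azure.Storage.Blobs SDK, a private container named "site", and authentication already configured; the same idea works in classic ASP.NET MVC with a catch-all route and the older storage SDK):

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.StaticFiles;

[Authorize] // only authenticated users can reach the proxy
public class StaticSiteController : Controller
{
    private readonly BlobContainerClient _container;
    private static readonly FileExtensionContentTypeProvider ContentTypes = new();

    public StaticSiteController(BlobServiceClient blobService)
    {
        // "site" is a hypothetical private container holding the static website
        _container = blobService.GetBlobContainerClient("site");
    }

    // Catch-all route so relative links inside the static site keep working:
    // /site/css/app.css -> blob "css/app.css"
    [HttpGet("site/{*path}")]
    public async Task<IActionResult> Get(string path)
    {
        var blob = _container.GetBlobClient(path);
        var exists = await blob.ExistsAsync();
        if (!exists.Value)
            return NotFound();

        if (!ContentTypes.TryGetContentType(path, out var contentType))
            contentType = "application/octet-stream";

        // Stream the blob through the application; the storage account stays private
        var stream = await blob.OpenReadAsync();
        return File(stream, contentType);
    }
}
```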

Related

Using object storage with authorization for a web app

I'm contributing to developing a web (front+back) application, which uses OpenID Connect (with auth0) for authentication & authorization.
The web app needs authentication to access some public & some restricted information (restrictions are per-user or depend on certain group-related rules).
We want to provide upload/download features for documents such as .pdf, and we have implemented minIO (pretty similar to AWS S3) for public documents.
However, we can't wrap our heads around restricted-access files:
should we implement OIDC on minIO so that users access the buckets directly but with temporary access tokens, allowing for fine-grained authorization policies,
or should the back office be the only one to have keys to minIO and act as the intermediary between the object storage and the users?
Looking for good practices here, thanks in advance for your help.
Interesting question, since PDF docs are web static content unless they contain sensitive data. I would aim to separate secured (API) and non-secured (web) concerns on this one.
UNSECURED RESOURCES
If there is no security involved, connecting to a bucket from the front end makes sense. The bucket contents can also be distributed to a content delivery network, for best global performance. The PDF can be considered a web resource.
SECURED RESOURCES
Requests for these need to be treated as API requests if a PDF doc contains sensitive data. APIs should receive an access token and enforce access to documents via scopes and claims.
You might use a Documents API for this. The implementation might still connect to a bucket, but this might be a different bucket that the browser does not have access to.
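As a rough illustration (not a definitive implementation), a Documents API endpoint could look like the sketch below. Assumptions: ASP.NET Core, JWT bearer authentication already configured by the OIDC setup, the AWS S3 SDK pointed at the MinIO endpoint, a private bucket named "secure-docs", and a "read:documents" scope; all of these names are placeholders.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Amazon.S3;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("secureDocs")]
public class SecureDocsController : ControllerBase
{
    private readonly IAmazonS3 _s3;

    // IAmazonS3 is registered at startup with ServiceURL pointing at the MinIO endpoint
    public SecureDocsController(IAmazonS3 s3) => _s3 = s3;

    [HttpGet("{*key}")]
    [Authorize] // a valid access token is required
    public async Task<IActionResult> Download(string key)
    {
        // Enforce authorization using scopes/claims carried by the access token
        var scopes = User.FindFirst("scope")?.Value?.Split(' ') ?? Array.Empty<string>();
        if (!scopes.Contains("read:documents"))
            return Forbid();

        // The browser never talks to this bucket directly; only the API holds keys
        var obj = await _s3.GetObjectAsync("secure-docs", key);
        return File(obj.ResponseStream, obj.Headers.ContentType ?? "application/pdf");
    }
}
```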
SUMMARY
This type of solution is often clearer if you think in terms of URL design, e.g. the front end might have two document URLs:
publicDocs
secureDocs
By default I would treat docs that users upload as secure, unless they select an upload option such as "make public".

ImageFlow.NET server accessing private Azure Blob Storage containers

I want to make sure I understand how ImageFlow.NET server works with images stored on a private Azure Blob Storage container.
Currently, we access images directly from Azure Blob Storage and we need to create a SAS token for images to be available in our frontend apps -- including mobile apps.
Our primary interest in ImageFlow.NET server is resizing images on demand. Would we still need to generate a SAS token for each image if we use ImageFlow.NET server to handle images for us?
For example, if we were to request a downsized version of image myimage.jpg, which is stored on Azure Blob Storage, do we still need to generate a SAS token or will ImageFlow server simply pull the image and send it to the requesting app without a SAS token?
In the default Azure plugin setup, Imageflow authenticates with Azure using the configured credentials to access protected blobs, but clients themselves do not need a SAS token. Imageflow's own access can be restricted via Azure and by configuring the allowed buckets list.
Imageflow.NET Server has an easy API if you need to change this or hook up a different blob storage provider or design.
Often, you need to have authorization for client/browser access as well as for Imageflow getting to blob storage. You can use any of the existing ASP.NET systems and libraries for this as if you're protecting static files or pages, or you can use Imageflow's built-in signing system that is actually quite similar to SAS tokens.
You can configure Imageflow to require a signature be appended to URLs. There's a utility method for generating those.
Then it's on you to only give those URLs to users who are allowed to access them.
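To illustrate the general idea, here is a generic HMAC URL-signing sketch; this is not Imageflow's actual signing API (use the library's own helper in practice), just the "signature appended to the URL" concept that makes it similar to SAS tokens:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class UrlSigner
{
    // Append an HMAC signature to a path+query, e.g. "/images/photo.jpg?width=400"
    public static string Sign(string pathAndQuery, string secret)
    {
        using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret));
        var sig = Convert.ToHexString(hmac.ComputeHash(Encoding.UTF8.GetBytes(pathAndQuery)));
        var separator = pathAndQuery.Contains('?') ? "&" : "?";
        return $"{pathAndQuery}{separator}signature={sig}";
    }

    // The server strips the signature parameter, recomputes the HMAC and compares
    public static bool Verify(string pathAndQueryWithoutSignature, string signature, string secret)
    {
        using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret));
        var expected = Convert.ToHexString(
            hmac.ComputeHash(Encoding.UTF8.GetBytes(pathAndQueryWithoutSignature)));
        return CryptographicOperations.FixedTimeEquals(
            Encoding.UTF8.GetBytes(expected), Encoding.UTF8.GetBytes(signature));
    }
}
```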
Essentially, Imageflow supports any client authentication/authorization system you want to add to the app.
If you need something custom between Imageflow and Azure, that's also easy to do (in fact, there's a single-file adapter in the example project that implements a different approach for cases where you don't want to limit which buckets Imageflow accesses).

How to restrict public user access to s3 buckets or minIO?

I have a question about minio or S3 policies. I am using a stand-alone minio server for my project. Here is the situation:
There is only one admin account that receives files and uploads them to the minio server.
My users need to access just their own uploaded objects. I mean, one user is not supposed to see other people's objects publicly (e.g. by visiting a direct URL).
Admin users are allowed to see all objects under any circumstances.
1. How can I implement such policies for my project, considering I have my own database for user authentication, and how can I combine them to authenticate the user?
2. If not, what other options do I have here to ease the process?
Communicate with your storage through the application. Do policy checks, authentication, and authorization in the app, store/fetch files to/from storage, and return the proper response. I guess this is the only way you can restrict uploading/downloading of files with Minio.
If you're using a framework like Laravel, the built-in S3 driver works perfectly with Minio; otherwise it's just a matter of an HTTP call, since Minio provides HTTP APIs.
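A rough sketch of that proxy approach in ASP.NET Core (the same idea applies in any framework). Assumptions: the AWS S3 SDK pointed at the MinIO endpoint via ServiceURL, objects stored under a per-user prefix like "uploads/{userId}/...", a bucket named "my-bucket", and an "admin" role; these names are placeholders:

```csharp
using System.Security.Claims;
using System.Threading.Tasks;
using Amazon.S3;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("files")]
public class FilesController : ControllerBase
{
    private readonly IAmazonS3 _minio;

    // IAmazonS3 is registered at startup with
    // new AmazonS3Config { ServiceURL = "http://localhost:9000", ForcePathStyle = true }
    public FilesController(IAmazonS3 minio) => _minio = minio;

    [HttpGet("{*objectName}")]
    [Authorize] // authenticated against your own user database
    public async Task<IActionResult> Download(string objectName)
    {
        var userId = User.FindFirst(ClaimTypes.NameIdentifier)?.Value;
        var isAdmin = User.IsInRole("admin");

        // Users may only fetch objects under their own prefix; admins see everything
        if (!isAdmin && !objectName.StartsWith($"uploads/{userId}/"))
            return Forbid();

        var obj = await _minio.GetObjectAsync("my-bucket", objectName);
        return File(obj.ResponseStream, obj.Headers.ContentType ?? "application/octet-stream");
    }
}
```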

GCP external application to app-engine endpoint authentication

We are building a small web-UI using React that will be served up by GCP App-Engine (standard). The UI will display a carousel of images along with some image metadata to our client's employees when they click on a link inside of their internal GIS system. We are looking to authenticate these calls since the App-Engine endpoint will be exposed publicly, and are hoping to use a GCP Service Account private key that will be used by the client to create a time-limited JSON web-token that will give temporary access to the GIS user when they open the web-UI. We are following this GCP documentation. In summary:
We create a new service account with the necessary IAM permissions in GCP, along with a key
We share the private key with the client, which they then use to sign a JSON Web Token that is passed in the call to our endpoint when a user accesses our web-UI from their GIS system
The call is authenticated by the GCP backend (ESP/OpenAPI)
Question: is this a recommended approach for external system accessing GCP resources or is there a better pattern more applicable to this type of situation (external system accessing GCP resource)?
I believe this is the recommended approach for your use case; it matches the service-to-service authentication flow described in the GCP documentation you are following.
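For reference, a rough sketch of the short-lived JWT the client's GIS system would mint with the shared service-account private key (assumptions: C# with System.IdentityModel.Tokens.Jwt and .NET 5+ for RSA.ImportFromPem; the issuer/subject must be the service account email, and the audience must match what your OpenAPI/ESP configuration expects):

```csharp
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Security.Cryptography;
using Microsoft.IdentityModel.Tokens;

public static class GcpJwtFactory
{
    public static string Create(string serviceAccountEmail, string privateKeyPem, string audience)
    {
        var rsa = RSA.Create();
        rsa.ImportFromPem(privateKeyPem);            // "private_key" field of the SA key JSON

        var credentials = new SigningCredentials(
            new RsaSecurityKey(rsa), SecurityAlgorithms.RsaSha256);

        var now = DateTime.UtcNow;
        var token = new JwtSecurityToken(
            issuer: serviceAccountEmail,             // iss = sub = service account email
            audience: audience,                      // e.g. the Endpoints service name
            claims: new[] { new Claim(JwtRegisteredClaimNames.Sub, serviceAccountEmail) },
            notBefore: now,
            expires: now.AddMinutes(10),             // time-limited access
            signingCredentials: credentials);

        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}
```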

How to give access to s3 files to authenticated web site users

I have a .NET Core web app; when someone uploads a file to a post in the app, I store it in an S3 bucket. I don't want the S3 bucket to be publicly accessible; I only want logged-in users to be able to download files from it.
Is the recommended solution for this creating temporary links directly to the s3 files when they are requested through the site by authenticated users? I don’t want these links to be accessible later by non-authenticated users.
Or should I download the file to the web server then stream it to the user, in effect doubling my bandwidth usage?
You should generate the links from the .NET backend; the client won't easily be able to copy and share them, and they will expire after a given time.
Try this from Amazon documentation:
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURLDotNetSDK.html
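A minimal sketch of that approach with the AWS SDK for .NET (AWSSDK.S3); call this only from an action that has already verified the user is logged in, and keep the expiry short:

```csharp
using System;
using Amazon.S3;
using Amazon.S3.Model;

public class DownloadLinkService
{
    private readonly IAmazonS3 _s3;
    public DownloadLinkService(IAmazonS3 s3) => _s3 = s3;

    public string GetTemporaryLink(string bucket, string key)
    {
        var request = new GetPreSignedUrlRequest
        {
            BucketName = bucket,
            Key = key,
            Verb = HttpVerb.GET,
            Expires = DateTime.UtcNow.AddMinutes(15)   // link stops working after this
        };
        // The signed URL embeds the expiry; no AWS credentials are exposed to the client
        return _s3.GetPreSignedURL(request);
    }
}
```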