Reverse proxy Google Cloud Storage to authenticate requests

I'd like to provide an API for some clients to upload files to Google Cloud Storage. I don't want the clients to know the implementation details (i.e., that they are uploading the files to Google Cloud Storage). The idea is that the API would authenticate the incoming requests and proxy them as authenticated requests to the Google Cloud Storage API.
I'm not a backend/systems developer, so I don't know the best way to approach this. The two options I considered were:
Implementing a reverse proxy. My main concern with this approach is that those endpoints would have a lot of traffic and the API reverse proxy wouldn't be able to handle it.
Using the signed URL authentication mechanism from Google Cloud Storage. Users would ask the API for a URL to upload a resource to, and the API would send back a URL with the authentication details embedded (see the sketch below).
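For reference, I believe option 2 boils down to something like the following on the server. This is only a sketch using the google-cloud-storage Java library; the bucket and object names are placeholders, and the service account needs permission to sign URLs:

import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.HttpMethod;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import java.net.URL;
import java.util.concurrent.TimeUnit;

public class SignedUploadUrl {
    public static void main(String[] args) {
        Storage storage = StorageOptions.getDefaultInstance().getService();

        // Describe the object the client will be allowed to upload.
        BlobInfo blobInfo = BlobInfo.newBuilder("my-bucket", "uploads/file.bin").build();

        // Sign a URL that permits a PUT for the next 15 minutes.
        URL signedUrl = storage.signUrl(
                blobInfo,
                15, TimeUnit.MINUTES,
                Storage.SignUrlOption.httpMethod(HttpMethod.PUT),
                Storage.SignUrlOption.withV4Signature());

        // The API would return this URL to the client, which PUTs the file to it.
        System.out.println(signedUrl);
    }
}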
Does anyone have experience with this kind of setup? How would you recommend I do it?

I would suggest using App Engine Standard to set up the transfers. Unfortunately, not all of the supported languages have equivalent libraries.
In PHP, here's an example of setting up direct transfers from users to the cloud:
https://cloud.google.com/appengine/docs/standard/php/googlestorage/user_upload
This requires the PHP CloudStorageTools, and although CloudStorageTools exists for other languages, it isn't clear to me that those versions support the same operation.
In particular it isn't apparent that this exists in Python, but fortunately you can use the Blobstore API to do basically the same thing: https://cloud.google.com/appengine/docs/standard/python/blobstore/
The advantage of this approach is that the transfer doesn't go through App Engine, which both saves money and eliminates problems with App Engine timing out.
Java and Go don't seem to have equivalent methods available, but I don't use those languages, so maybe I'm just missing them. Of course, if the files you are dealing with aren't too big, you can upload them to App Engine and have App Engine store them before the request times out.
If you are setting up direct transfers you can't completely hide the URL of the destination, so your users could probably tell you were using Google Storage. I don't know if that is a problem, but if so you could use a bucket CNAME redirect to hide that too.
https://cloud.google.com/storage/docs/request-endpoints
Hope this is helpful.

There are examples of how to upload files using Cloud Storage. This is the example for Java, but other languages are covered as well (Python, Node, Go, Ruby, etc.).
The app uses a simple form to let the user choose a local file:
<!-- Rest of the code -->
<form method="POST" action="/upload" enctype="multipart/form-data">
<input type="file" name="file"> <input type="submit">
</form>
Then the form sends this to the /upload servlet:
@Override
public void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException, ServletException {
    final Part filePart = req.getPart("file");
    final String fileName = filePart.getSubmittedFileName();
    // Modify the access list to allow all users with the link to read the file
    List<Acl> acls = new ArrayList<>();
    acls.add(Acl.of(Acl.User.ofAllUsers(), Acl.Role.READER));
    // The input stream is closed by default, so we don't need to close it here
    Blob blob = storage.create(BlobInfo.newBuilder(BUCKET_NAME, fileName).setAcl(acls).build(),
            filePart.getInputStream());
    // Return the public download link
    resp.getWriter().print(blob.getMediaLink());
}
All of this happens in the back end, hence the user is not aware of which service you're using to upload the file, which is what you want.
In this particular case, make sure to remove the line:
resp.getWriter().print(blob.getMediaLink()); as it writes out the address of the upload service:
https://www.googleapis.com/download/storage/v1/b/my-bucket-name/o/file-name.zip?generation=1524137198335299&alt=media
And also remove the line in the front end that tells the user:
<p>Select a file to upload to your Google Cloud Storage bucket.</p>
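If you also want downloads to go through your back end so the googleapis.com address is never exposed, one option is to stream the blob back from your own endpoint. This is a rough sketch reusing the same storage client and BUCKET_NAME; the request parameter name is illustrative:

@Override
public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    String fileName = req.getParameter("file");
    Blob blob = storage.get(BUCKET_NAME, fileName);
    if (blob == null) {
        resp.sendError(HttpServletResponse.SC_NOT_FOUND);
        return;
    }
    resp.setContentType(blob.getContentType());
    // getContent() buffers the whole object; fine for small files,
    // use blob.reader() to stream larger ones.
    resp.getOutputStream().write(blob.getContent());
}

This way the client only ever sees your servlet's URL.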

Related

Is it possible to use Azure Blob Storage on a website that has no authentication?

I need to create a way for anyone who visits my website to upload an image to an Azure Blob Container. The website will have input validations on the file.
I've considered using an Azure Function to write the validated file to the Blob Container, but I can't seem to find a way to do this without exposing the Function URL to the world (similar to this question).
I would use a System-Assigned Managed Identity (SAMI) to authenticate the Function to the Storage account, but since the Function URL would still be public, anyone could take it, bypass the validations, and upload.
How is this done in the real world?
If I understand correctly, the user uploads a file via an HTTP POST call to your server, which validates it. You would like to use an Azure Function to then upload the validated file to the Blob Storage.
In this case, you can restrict access to the Azure Function so that it can only be called from your server's IP. This way, users cannot reach the Function directly. This can be done via the networking settings and is available on all Azure Function plans.
You could also consider implementing the validation logic within the Azure Function.
Finally (perhaps I should have started with this), if you are only considering writing an Azure Function to upload data to a Storage Account, you should perhaps first consider using the Blob Service REST API, specifically the Put Blob endpoint. There are also official Storage Account SDKs for different languages/ecosystems that you could use to do this; see the sketch below.
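For example, an upload with the official Java SDK (com.azure:azure-storage-blob) might look like the following sketch; the container name, blob name, and connection string are placeholders:

import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class UploadValidatedFile {
    public static void main(String[] args) {
        // Authenticate with a connection string kept server-side;
        // a managed identity credential would also work here.
        BlobServiceClient service = new BlobServiceClientBuilder()
                .connectionString(System.getenv("AZURE_STORAGE_CONNECTION_STRING"))
                .buildClient();

        // Write the already-validated file into the container.
        BlobClient blob = service.getBlobContainerClient("uploads")
                .getBlobClient("image.png");
        blob.uploadFromFile("image.png", true); // true = overwrite if it exists
    }
}

Since this runs on your server (or inside the Function), the storage credentials are never exposed to the browser.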
• Since you are using the default generic Azure Function URL on your website for uploading blobs with no authentication, I would suggest creating an 'A' host record for your function app. As you have a website, you presumably have a custom domain, and that custom domain's DNS records must be hosted on a public DNS server. On that same public DNS server, create an 'A' host record for the function app and assign it the public IP address that is shown and assigned in Azure. This ensures that your public DNS server can resolve the function app globally. Then create a 'CNAME' record for the default generic Azure Function URL, with that URL as the alias and the 'A' host record as the assigned value.
This way, whenever an anonymous visitor to your website tries to upload an image, they will see the function app URL as 'abc.xyz.com' and not the generic Azure Function URL, successfully achieving your objective.
• Once the above has been done, publish the new 'CNAME' record created on the public DNS server as your function app URL. This will not expose the generic Azure Function URL and will mask it, and it remains secure since you will be uploading an SSL/TLS certificate in the function app workspace so that the website is HTTPS-protected.
For more information, refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/dns/dns-custom-domain

Is there a full Nextcloud API accessible from outside?

I use Nextcloud as a normal user to store and share files.
I decided to use it as a backend for a web application I am developing, so that I can store the files in Nextcloud while I build the frontend myself.
I spent some hours on the API docs
https://docs.nextcloud.com/server/latest/developer_manual/client_apis/WebDAV/index.html
and, with some disappointment, unless I have made a mistake, I realized that the only API that can be used from outside Nextcloud is the WebDAV API.
This is a minimalistic API that allows doing basic things, such as downloading a file by passing the full path, as with this GET request (authenticated via Basic Auth, passing username and password in the headers):
GET https://nextcloud.example.com/remote.php/dav/files/username/FolderOne/SubFolderTwo/HelloWorld.txt
This will download the file located in /FolderOne/SubFolderTwo/HelloWorld.txt
With a PUT request, it is possible to upload or overwrite the file by passing the file content in the raw request body.
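For example, an upload via PUT looks roughly like this sketch with Java's built-in HttpClient; the host, credentials, and paths are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;
import java.util.Base64;

public class WebDavPut {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder()
                .encodeToString("username:password".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://nextcloud.example.com/remote.php/dav/files/username/FolderOne/SubFolderTwo/HelloWorld.txt"))
                .header("Authorization", "Basic " + auth)
                .PUT(HttpRequest.BodyPublishers.ofFile(Path.of("HelloWorld.txt")))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 201 Created (or 204 on overwrite)
    }
}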
This is very effective but minimalistic.
I was expecting to have a full REST API to access more properties and perform complex operations.
Could you please tell me if I missed some important information?
There is the OCS API but it works only from inside Nextcloud.
Thanks.
A full REST API is available - https://docs.nextcloud.com/server/22/developer_manual/client_apis/OCS/ocs-api-overview.html
Create a Share - https://docs.nextcloud.com/server/latest/developer_manual/client_apis/OCS/ocs-share-api.html
The OwnCloud documentation also offers more examples
https://doc.owncloud.com/server/10.8/developer_manual/core/apis/ocs-share-api.html
You can register an app ID and use that to log in, or pass a username and password in the authentication header.
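For example, creating a public link share via the OCS Share API might look like the following sketch with Java's HttpClient; the host and credentials are placeholders, and note that the OCS-APIRequest header is required:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CreatePublicShare {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder()
                .encodeToString("username:password".getBytes());
        // shareType=3 requests a public link share.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://nextcloud.example.com/ocs/v2.php/apps/files_sharing/api/v1/shares"))
                .header("Authorization", "Basic " + auth)
                .header("OCS-APIRequest", "true")
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "path=/FolderOne/SubFolderTwo/HelloWorld.txt&shareType=3"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // OCS XML response (add format=json for JSON)
    }
}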

Logic App: 403 when trying to connect to storage behind a firewall

I deployed a Logic App Standard to use VNet integration. In our scenario we want to take an attachment from an email and store it in a storage account of the Data Lake type. We are using the following connectors:
Office 365 and
Azure Blob Storage
The problem is that our storage account is behind a firewall and a private endpoint. If the storage account allows all networks the flow works, but it does not work when it is behind the firewall, and we get a 403 (although the Logic App has VNet integration, traffic goes over the internet, as I can see in Log Analytics).
I also followed the Microsoft docs, including this link, without success:
https://techcommunity.microsoft.com/t5/integrations-on-azure/deploying-standard-logic-app-to-storage-account-behind-firewall/ba-p/2626286
I also tried this, and it works:
https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-azureblobstorage?WT.mc_id=Portal-Microsoft_Azure_Support&tabs=single-tenant#access-blob-storage-with-managed-identities
but the file arrives corrupted, e.g. the body from the other connector when it is a CSV, or the attachment when it is an Excel file; the flow goes via the HTTP connector.
Is there a way to use VNet integration with a storage private endpoint, or a way to take the attachment and save it as-is via the HTTP connector, independently of the file extension?
A Standard Logic App with a private endpoint cannot access a storage account that also has a private endpoint if both resources are in the same region. The storage account can be used as the Logic App's own backing storage, but accessing the storage account and its files from the Logic App workflow is not possible in that case.
To work around this, create the Logic App and the storage account in two different regions and whitelist the IPs. Refer to the link below:
https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/access-azure-blob-using-logic-app/ba-p/1451573

How to use the eBay Browse API just to search for products via one server

I am trying to migrate from the eBay Finding API to the Browse API. My technical setup is quite simple:
A server searches the Browse API to find products by a keyword. That's it.
Does anybody know if I need to implement OAuth, a redirect page for eBay users to log in, etc.? I don't need all those features.
Thanks!
You can use the Browse API with the client credentials flow, which mints an Application access token.
Application tokens are general-use tokens that give access to interfaces that return application data. For example, many GET requests require only an Application token for authorization.
See Documentation
The client credentials flow does not require a user to log in via eBay, the redirect, etc. However, you can only use the "GET" methods, such as getItem, getItemByLegacyId, or search.
If you are using Node.js or the browser, you can check out the "Get Item" example here. (The library will get the Application access token automatically and return the result.)
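If you are not using a library, the two HTTP calls look roughly like this sketch (Java's built-in HttpClient; the endpoints follow eBay's OAuth documentation, and the credentials are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class EbayKeywordSearch {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // 1. Mint an Application access token via the client credentials flow.
        String creds = Base64.getEncoder()
                .encodeToString("CLIENT_ID:CLIENT_SECRET".getBytes());
        HttpRequest tokenReq = HttpRequest.newBuilder()
                .uri(URI.create("https://api.ebay.com/identity/v1/oauth2/token"))
                .header("Authorization", "Basic " + creds)
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "grant_type=client_credentials&scope=https://api.ebay.com/oauth/api_scope"))
                .build();
        String tokenJson = http.send(tokenReq, HttpResponse.BodyHandlers.ofString()).body();
        // Crude extraction for the sketch; use a real JSON library in production.
        String accessToken = tokenJson.replaceAll(".*\"access_token\":\"([^\"]+)\".*", "$1");

        // 2. Search the Browse API by keyword with the Application token.
        HttpRequest searchReq = HttpRequest.newBuilder()
                .uri(URI.create("https://api.ebay.com/buy/browse/v1/item_summary/search?q=drone&limit=5"))
                .header("Authorization", "Bearer " + accessToken)
                .build();
        System.out.println(http.send(searchReq, HttpResponse.BodyHandlers.ofString()).body());
    }
}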

Google Sign-in and Spring Security

I am ashamed to admit that I burned four full days trying to get Spring Security 3.1 to play nicely with Google Sign-in in a standard JSF web application. Both are awesome frameworks in their own right but they seemed incompatible. I finally got it to work in some fashion but strongly suspect that I have missed some fundamental concept and am not doing it the best way.
I am writing an app that our helpdesk uses to track system testing during maintenance activities when our systems are down and cannot host the app, so it is hosted externally. Our Active Directory and IdP are down during this activity so I cannot use our normal authentication systems. Google Sign-in is a perfect solution for this.
Google Sign-in works great in the browser using Google Javascript libraries and some simple code. The browser communicates with Google to determine if the user is already signed in, and if not, opens a separate window where the user can submit credentials and authenticate. Then a small bit of Javascript can send a concise, ephemeral id_token returned from Google to the server which the server can use to verify the authentication independently with Google. That part was easy. The beauty is that if the user is already signed into Gmail or some other Google app, authentication has already happened and Google does not challenge the user again.
Spring Security works great on the server side to protect specified resources and authenticate a user with a username and password. However, in this case, we never see the username or password - the credentials are protected by secure communication between the browser and Google. All we know is whether or not the user is authenticated. We can get the Google username, but Spring Security expects credentials that it can use to authenticate, to a database, in-memory user base, or any other system. It is not, to my knowledge, compatible with another system that simply provides yea-or-nay authentication in the browser.
I found many good examples online that use Spring Boot with EnableOAuth2Sso (e.g. here) but surprisingly few that use Spring Security in a standard app server which does not support EnableOAuth2Sso, and those few did not show any solution I could discern.
Here is how I've done it. I followed Google's simple directions here to provide authentication in the browser. I added this code to the onSignIn() method to send the id_token to the server.
var id_token = googleUser.getAuthResponse().id_token;
var xhr = new XMLHttpRequest(); // Trigger an authentication for Spring Security
xhr.open("POST", "/<my app context>/j_spring_security_check", true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
var params = "profileID=" + profile.getId() + "&fullname=" + encodeURIComponent(profile.getName())
    + "&email=" + encodeURIComponent(profile.getEmail()) + "&id_token=" + id_token
    + "&j_username=" + encodeURIComponent(profile.getEmail()) + "&j_password=" + id_token;
xhr.send(params);
window.location.replace("/<my app context>/index.xhtml");
Unfortunately, the Spring Authentication object, when passed to the AuthenticationProvider that I provided, did not contain anything but the j_username and j_password parameters (as Authentication.getPrincipal() and Authentication.getCredentials()), but this is all I really needed. This is a bit of an abuse of those parameters, since I have set them to the email and id_token rather than a username and password.
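A provider along these lines (a sketch, not my exact code) can then verify the id_token with Google using GoogleIdTokenVerifier from the google-api-client library; CLIENT_ID is a placeholder for your Google OAuth client ID:

import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Collections;
import com.google.api.client.googleapis.auth.oauth2.GoogleIdToken;
import com.google.api.client.googleapis.auth.oauth2.GoogleIdTokenVerifier;
import com.google.api.client.http.javanet.NetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import org.springframework.security.authentication.AuthenticationProvider;
import org.springframework.security.authentication.BadCredentialsException;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.core.authority.AuthorityUtils;

public class GoogleIdTokenAuthenticationProvider implements AuthenticationProvider {

    private static final String CLIENT_ID = "<your Google OAuth client id>";

    private final GoogleIdTokenVerifier verifier = new GoogleIdTokenVerifier
            .Builder(new NetHttpTransport(), JacksonFactory.getDefaultInstance())
            .setAudience(Collections.singletonList(CLIENT_ID))
            .build();

    @Override
    public Authentication authenticate(Authentication auth) throws AuthenticationException {
        // j_password carries the Google id_token in this setup.
        String idTokenString = (String) auth.getCredentials();
        try {
            GoogleIdToken idToken = verifier.verify(idTokenString);
            if (idToken == null) {
                throw new BadCredentialsException("Invalid Google id_token");
            }
            String email = idToken.getPayload().getEmail();
            return new UsernamePasswordAuthenticationToken(
                    email, idTokenString,
                    AuthorityUtils.createAuthorityList("ROLE_USER"));
        } catch (GeneralSecurityException | IOException e) {
            throw new BadCredentialsException("id_token verification failed", e);
        }
    }

    @Override
    public boolean supports(Class<?> authentication) {
        return UsernamePasswordAuthenticationToken.class.isAssignableFrom(authentication);
    }
}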
I wanted to pass the user's full name and email, which Google provides in Javascript as googleUser.getName() and googleUser.getEmail(), to the backend as well. Since Spring Security does not accommodate anything but the username/password, and I was using Primefaces/JSF, I used Primefaces RemoteCommand to call a method on the backing bean with this information. This also feels a little clumsy.
In addition, I had to use window.location.replace() (in code above) because Spring Security did not redirect to my index.xhtml page as expected when I set this in the context with:
<security:form-login login-page='/login.xhtml' authentication-failure-url="/login.xhtml?error=true" default-target-url="/index.html" always-use-default-target="true" />
I have no idea why this does not work.
However, the app does now behave as I want: it authenticates the user, and the authenticated user can access the resources specified in Spring Security. I wanted to share this in case anyone is doing a similar thing. Can anyone suggest a cleaner/better way? Thanks in advance.