Logic App: 403 when trying to connect to a storage account behind a firewall - azure-storage

I deployed a Logic App Standard with VNet integration. In our scenario we want to get an attachment from an email and store it in a storage account of type Data Lake. We are using the following connectors:
Office 365 and
Azure Blob Storage
The problem is that our storage account is behind a firewall and a private endpoint. If the storage account allows all networks, the flow works, but it does not work when the firewall is on and we get a 403 (although the Logic App has VNet integration, the traffic goes over the internet, as I can see in Log Analytics).
I also followed the Microsoft docs and this link, without success:
https://techcommunity.microsoft.com/t5/integrations-on-azure/deploying-standard-logic-app-to-storage-account-behind-firewall/ba-p/2626286
I also tried this, and it works:
https://learn.microsoft.com/en-us/azure/connectors/connectors-create-api-azureblobstorage?WT.mc_id=Portal-Microsoft_Azure_Support&tabs=single-tenant#access-blob-storage-with-managed-identities
but the file ends up corrupted, e.g. the body from the other connector if it is a CSV, or the attachment if it is an Excel file. Here is the flow via HTTPS:
Is there a way to use VNet integration with a storage private endpoint, or a way to take the attachment and save it as-is via the HTTP connector (regardless of file extension)?

A Standard Logic App with a private endpoint cannot access a storage account with a private endpoint. The storage account can be used as the storage for the Logic App, but accessing the storage account and its files from the Logic App is not possible if both resources are in the same region.
To achieve this, we need to create the Logic App and the storage account in two different regions and whitelist the IP. Refer to the link below:
https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/access-azure-blob-using-logic-app/ba-p/1451573
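On the second part of the question (saving the attachment as-is over HTTPS): behind the scenes this is just a Put Blob REST request with the attachment bytes as the body. Here is a minimal sketch in Java, assuming a SAS token for authorization and hypothetical account/container/blob names; note the base64 decode, since the Office 365 connector typically returns the attachment as a base64 string (contentBytes), and writing that string directly is a common cause of "corrupted" files:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class PutBlobSketch {

    /**
     * Writes an email attachment to blob storage exactly as received.
     * blobSasUrl   - hypothetical: https://<account>.blob.core.windows.net/<container>/<name>?<sas>
     * contentBytes - the base64 string the Office 365 connector returns for the attachment.
     */
    static int putAttachment(String blobSasUrl, String contentBytes) throws Exception {
        // Decode first: storing the base64 string itself is the usual cause of corrupted files.
        byte[] raw = Base64.getDecoder().decode(contentBytes);

        HttpRequest request = HttpRequest.newBuilder(URI.create(blobSasUrl))
                .header("x-ms-blob-type", "BlockBlob")   // required by the Put Blob REST operation
                .PUT(HttpRequest.BodyPublishers.ofByteArray(raw))
                .build();

        HttpResponse<Void> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.discarding());
        return response.statusCode();                    // 201 Created on success
    }
}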


Is it possible to use Azure Blob Storage on a website that has no authentication?

I need to create a way for anyone who visits my website to upload an image to an Azure Blob Container. The website will have input validations on the file.
I've considered using an Azure Function to write the validated file to the Blob Container, but I can't seem to find a way to do this without exposing the Function URL to the world (similar to this question).
I would use a System-Assigned Managed Identity (SAMI) to authenticate the Function to the Storage account, but in that case anyone who has the Function URL could bypass the validations and upload.
How is this done in the real world?
If I understand correctly, the user uploads a file via an HTTP POST call to your server, which validates it. You would like to use an Azure Function to then upload the validated file to the Blob Storage.
In this case, you can restrict the access to the Azure Function; so that it can only be called from your server's IP. This way the users cannot reach that Function. This can be done via the networking settings, and is available on all Azure Function plans.
You could also consider implementing the validation logic within the Azure Function.
Finally (perhaps I should have started with this), if you are only considering writing an Azure Function to upload data to a Storage Account, you should perhaps first consider using the Blob Service REST API, specifically the PUT Blob endpoint. There are also official Storage Account SDKs for different languages/ecosystems that you could use to do this.
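To illustrate the SDK route, here is a minimal sketch using the Azure Storage Blob and Azure Identity libraries for Java; the endpoint and container name are placeholders, and authentication is assumed to go through the Function's system-assigned managed identity via DefaultAzureCredential:

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

import java.io.InputStream;

public class BlobUploadSketch {

    // Hypothetical names: replace with your storage account endpoint and container.
    private static final String ENDPOINT = "https://mystorageaccount.blob.core.windows.net";
    private static final String CONTAINER = "user-images";

    /** Uploads an already-validated file stream under the given blob name. */
    static void uploadValidatedImage(String blobName, InputStream data, long length) {
        BlobServiceClient service = new BlobServiceClientBuilder()
                .endpoint(ENDPOINT)
                // Uses the Function's managed identity; that identity needs a data-plane
                // role such as Storage Blob Data Contributor on the storage account.
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        BlobClient blob = service.getBlobContainerClient(CONTAINER).getBlobClient(blobName);
        blob.upload(data, length, true); // overwrite = true
    }
}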
• Since you are using the Azure Function's default generic URL on your website for uploading blobs with no authentication, I would suggest creating an 'A' host record for your function app. As you have a website, you likely have a custom domain, and that domain's DNS records must be hosted on a public DNS server. On that same public DNS server, create an 'A' host record for the function app and assign it the public IP address shown in Azure. This ensures your public DNS server can resolve the function app globally. Then create a 'CNAME' record for the default generic Azure function app URL, with that URL as the alias and the 'A' host record as the assigned value.
This way, whenever an anonymous visitor to your website tries to upload an image, they will see the function app URL as 'abc.xyz.com' rather than the generic Azure function app URL, which achieves your objective.
• Once the above has been done, publish the new 'CNAME' record created in the public DNS server as your function app URL. This does not expose the generic Azure function app URL; it masks it, and it remains secured because you upload an SSL/TLS certificate for the website (HTTPS) in the function app itself.
For more information, refer to the documentation link below:
https://learn.microsoft.com/en-us/azure/dns/dns-custom-domain

How to make an upload method for a large file to Yandex Cloud Serverless function to be called on it?

So I want to have no personal server infrastructure. I want to have an HTTP API route that a user can upload a file (2GB+) into, so that:
The file would be stored in object storage for 3 days
A serverless function would be called on it
So how do I make an upload method for a large file so that a Yandex.Cloud Serverless Function can be called on it?
So I need something similar to this AWS sample, but for YC.
There is a limit on the request size that a Yandex Cloud Serverless Function can handle: 3.5 MB. So you won't be able to upload 2 GB directly.
There is a workaround: upload the file directly to Object Storage using a pre-signed link. To generate the link, you'll need AWS-like credentials from Yandex Cloud.
Passing them to the client side is not safe, so it would be better to generate the link on the server (or Serverless Function) and return it to the client.
Here is the tutorial covering the topic.
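A minimal sketch of the server-side generation step, assuming the AWS SDK for Java v1 pointed at the Yandex Object Storage endpoint; the bucket name, object key, and expiry are placeholders:

import com.amazonaws.HttpMethod;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

import java.net.URL;
import java.util.Date;

public class PresignSketch {

    /** Returns a URL the client can PUT the large file to directly, bypassing the function. */
    static URL presignUpload(String accessKey, String secretKey, String bucket, String objectKey) {
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://storage.yandexcloud.net", "ru-central1")) // S3-compatible Yandex endpoint
                .withCredentials(new AWSStaticCredentialsProvider(
                        new BasicAWSCredentials(accessKey, secretKey)))
                .build();

        Date expiry = new Date(System.currentTimeMillis() + 60 * 60 * 1000); // valid for one hour
        GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(bucket, objectKey)
                .withMethod(HttpMethod.PUT)
                .withExpiration(expiry);

        return s3.generatePresignedUrl(request); // hand this URL back to the client
    }
}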

Protecting the client secrets and client id of production apis in POSTMAN

My company has a strict compliance policy with respect to protecting the client secrets and passwords of Azure Active Directory client apps (the client secret for an AAD app) and service accounts (passwords).
However, during bug fixing in production, replicating issues in production code, or active debugging, we need to debug the production code by passing these credentials from Postman or Fiddler.
Is it safe to save these keys in Postman and share it by generating a public url? Is there any way of sharing it from postman to a specific set of users? Or is there any better way of sharing the API requests with set of users.
You can invite someone to a Postman workspace using their email ID. Sharing a public collection URL is not safe, as anyone with the URL can access it.
Another way is to download the collection and environment as JSON and send that JSON file instead.
There is no way to mask secrets; even if you store them in a variable, the secret will be exposed in the Postman console.
https://learning.postman.com/docs/collaborating-in-postman/sharing/
To invite someone to a workspace:
Create a workspace.
Invite someone to the workspace: select the team, type the email ID of the user you want to invite, click Add, then click Create Workspace. An email will be sent to the user, through which they can join the workspace.
Now share the collection or environment to that workspace.

Reverse proxy Google Cloud storage to authenticate requests

I'd like to provide an API to some clients to upload files to Google Cloud Storage. I don't want the clients to know the implementation details (whether they are uploading the files to Google Cloud Storage). The idea is that the API would authenticate the requests that they send and proxy them to another authenticated request to the Google Cloud API.
I'm not a backend/systems developer so I don't know what's the best way to approach this. The two options that I considered were:
Implementing a reverse proxy. My main concern with this approach is that those endpoints would have a lot of traffic and the API reverse proxy wouldn't be able to handle it.
Using the signed URL authentication mechanism from Google Cloud Storage. Users would ask the API for a URL to upload the resources to and the API would send that URL back to the user (that includes the authentication details embedded).
Has anyone experience with this kind of setup? How would you recommend me to do it?
I would suggest using App Engine Standard to set up the transfers. Unfortunately, not all of the supported languages have equivalent libraries.
In php, here's an example of setting up direct transfers from users to the cloud:
https://cloud.google.com/appengine/docs/standard/php/googlestorage/user_upload
This requires the PHP CloudStorageTools, and although CloudStorageTools exist for other languages, it isn't clear to me that they support the same operation.
In particular it isn't apparent that this exists in Python, but fortunately you can use the Blobstore API to do basically the same thing: https://cloud.google.com/appengine/docs/standard/python/blobstore/
The advantage of this approach is that the transfer doesn't go through App Engine, which both saves money and eliminates problems with App Engine timing out.
Java and Go don't seem to have equivalent methods available, but I don't use those languages so maybe I'm just missing them. Of course, if the files you are dealing with aren't too big, you can upload them to App Engine and have App Engine store them before you time out.
If you are setting up direct transfers you can't completely hide the URL of the destination, so your users could probably tell you were using Google Storage. I don't know if that is a problem, but if so you could use a bucket CNAME redirect to hide that too.
https://cloud.google.com/storage/docs/request-endpoints
Hope this is helpful.
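For what it's worth, the same pattern might be sketched in Java with the App Engine Blobstore API; this is a rough, unverified sketch (the bucket name and callback path are placeholders) of generating a one-time upload URL so the file bytes never pass through your app:

import com.google.appengine.api.blobstore.BlobstoreService;
import com.google.appengine.api.blobstore.BlobstoreServiceFactory;
import com.google.appengine.api.blobstore.UploadOptions;

public class UploadUrlSketch {

    /** Returns a URL the browser form can POST the file to; Blobstore writes it to the GCS bucket. */
    static String createUploadUrl() {
        BlobstoreService blobstore = BlobstoreServiceFactory.getBlobstoreService();
        // "my-upload-bucket" and "/upload-complete" are placeholders for your bucket and callback path.
        UploadOptions options = UploadOptions.Builder.withGsBucketName("my-upload-bucket");
        return blobstore.createUploadUrl("/upload-complete", options);
    }
}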
There are examples on how to upload files using Cloud Storage. This is the example for Java, but there are other languages as well (Python, Node, Go, Ruby, etc).
The app uses a simple form to let the user choose a local file:
<!-- Rest of the code -->
<form method="POST" action="/upload" enctype="multipart/form-data">
<input type="file" name="file"> <input type="submit">
</form>
Then the form sends this to the /upload servlet:
// 'storage' is an initialized com.google.cloud.storage.Storage client; BUCKET_NAME is the target bucket.
@Override
public void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException, ServletException {
final Part filePart = req.getPart("file");
final String fileName = filePart.getSubmittedFileName();
// Modify access list to allow all users with link to read file
List<Acl> acls = new ArrayList<>();
acls.add(Acl.of(Acl.User.ofAllUsers(), Acl.Role.READER));
// the inputstream is closed by default, so we don't need to close it here
Blob blob = storage.create(BlobInfo.newBuilder(BUCKET_NAME, fileName).setAcl(acls).build(),
filePart.getInputStream());
// return the public download link
resp.getWriter().print(blob.getMediaLink());
}
All of this happens in the back-end, so the user is not aware of which service you're using to upload the file, which is what you want.
In this particular case, make sure to avoid the line:
resp.getWriter().print(blob.getMediaLink()); as it writes out the address of the upload service:
https://www.googleapis.com/download/storage/v1/b/my-bucket-name/o/file-name.zip?generation=1524137198335299&alt=media
And also, in the front-end, avoid the line that tells the user:
<p>Select a file to upload to your Google Cloud Storage bucket.</p>
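For the signed-URL option mentioned in the question (option 2), here is a minimal sketch using the google-cloud-storage Java library; the bucket name, object name, and expiry are placeholders, and it assumes the backend runs with a service account key that can sign (note that the client will still see the storage.googleapis.com host in the resulting URL):

import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.HttpMethod;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.net.URL;
import java.util.concurrent.TimeUnit;

public class SignedUrlSketch {

    /** Returns a V4 signed URL the client can PUT the file to; your API only hands out the URL. */
    static URL signUploadUrl(String bucket, String objectName) {
        Storage storage = StorageOptions.getDefaultInstance().getService();
        BlobInfo blobInfo = BlobInfo.newBuilder(bucket, objectName).build();

        return storage.signUrl(blobInfo, 15, TimeUnit.MINUTES,
                Storage.SignUrlOption.httpMethod(HttpMethod.PUT),  // allow uploads, not just downloads
                Storage.SignUrlOption.withV4Signature());
    }
}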

How do I access Google Drive Application Data from a remote server?

For my application I want the user to be able to store files on Google Drive and for my service to have access to these same files created with the application.
I created a Client ID for web application and was able to upload/list/download files from JavaScript (client side) with drive.appfolder scope. This is good, this is half of what I want to do.
Now I want to access the same files from Node.js (server side). I am lost as to how to do this. Do I create a new Client ID for the server? (If so, how will the user authenticate?) Do I pass the AuthToken my user got client-side and try to use that on the server? I don't think this will work, as the AuthToken is time-sensitive (and probably not intended to be used from multiple IPs).
Any direction or example server-side code will be helpful. Again, all I want is to access these same files the user created with my application, not any other files in the user's Google Drive.
CLARIFICATION: I think my question boils down to: "Is it possible to access the same Application Data on Google Drive both client-side and server-side?"
Do I create a new Client ID for the server?
Up to you. You don't need to, but you can. See below.
if so, how will the user authenticate?
Up to you. OAuth is about authorisation, not authentication.
Just in case you meant authorisation, the user authorises the Project, which may contain multiple client IDs.
Do I pass the AuthToken my user got client-side and try to use that on the server?
You can do, but not a good idea for the reason you state. The preferred approach is to have a separate server Client ID, and use that to request offline access, which returns (eventually) a Refresh Token, which you store in your server. You then use that Refresh Token to request Access Tokens whenever you need them.
AuthToken is ... (and probably not intended to be used from multiple IPs).
It is not bound to a specific IP address.
Is it possible to access the same Application Data on Google Drive both client-side and server-side?"
Yes
Most of what you need is at https://developers.google.com/accounts/docs/OAuth2WebServer
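As an illustration of the refresh-token flow described above, here is a minimal sketch of the token exchange the server performs, assuming the refresh token and client credentials are already stored; it posts to Google's standard OAuth token endpoint, and the response JSON contains a short-lived access_token:

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class RefreshTokenSketch {

    /** Exchanges a stored refresh token for a fresh access token; returns the raw JSON response. */
    static String refreshAccessToken(String clientId, String clientSecret, String refreshToken)
            throws Exception {
        String form = "client_id=" + URLEncoder.encode(clientId, StandardCharsets.UTF_8)
                + "&client_secret=" + URLEncoder.encode(clientSecret, StandardCharsets.UTF_8)
                + "&refresh_token=" + URLEncoder.encode(refreshToken, StandardCharsets.UTF_8)
                + "&grant_type=refresh_token";

        HttpRequest request = HttpRequest.newBuilder(URI.create("https://oauth2.googleapis.com/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body(); // JSON containing access_token and expires_in
    }
}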