Assume we have a web application where the user is given an option to upload a folder/file.
Assume we have created an IAM user and configured an FTP server to access S3 via FTP.
Now the user can upload a folder/file via FTP or via the web application.
We need to trigger an event notification or call a REST API whenever a new folder or file is uploaded.
How do we register such an event and call the REST API whenever a new folder or file is uploaded?
How do we tell from S3 whether the folder or file was uploaded from the web application or from the FTP server?
We need to call one API when the folder was created via the web application and a different API when it was created via the FTP server.
Thanks.
1) You can subscribe a Lambda function to the bucket's ObjectCreated (All) event via the bucket properties, and have the function call your API.
2) You can add metadata to your file while uploading it, recording the source of the upload. For example, in your HTTP request add the header x-amz-meta-filesource. Every header starting with x-amz-meta- is saved as object metadata.
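As a minimal sketch with the AWS SDK for .NET (the bucket name, key, and header value here are hypothetical), the web application could tag its uploads and the event-triggered code could read the tag back:

using Amazon.S3;
using Amazon.S3.Model;

var s3 = new AmazonS3Client();

// Upload from the web application, tagging the object with its source.
var put = new PutObjectRequest
{
    BucketName = "my-upload-bucket",    // hypothetical bucket
    Key = "folder1/report.pdf",         // hypothetical key
    FilePath = @"C:\tmp\report.pdf"
};
put.Metadata.Add("filesource", "webapp"); // stored as x-amz-meta-filesource
await s3.PutObjectAsync(put);

// Inside the handler for the ObjectCreated event, read the tag back
// and pick which API to call. FTP uploads carry no such header, so a
// missing value would imply the FTP path.
var head = await s3.GetObjectMetadataAsync("my-upload-bucket", "folder1/report.pdf");
string source = head.Metadata["x-amz-meta-filesource"]; // "webapp", or null for FTP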
So I want to have no personal server infrastructure. I want to have an HTTP API route that a user can upload a file into (2GB+) so that:
The file would be stored in object storage for 3 days
A serverless function would be called on it
So how do I make an upload method for a large file to Yandex.Cloud, with a Serverless Function to be called on it?
I need something similar to this AWS sample, but for YC.
There is a limit on the request size that a Yandex Cloud Serverless Function can handle: 3.5 MB. So you won't be able to upload 2 GB through it.
There is a workaround — upload the file directly to Object Storage using a pre-signed link. To generate the link, you'll need AWS-like credentials from Yandex Cloud.
Passing them to the client side is not safe, so it would be better to generate the link on the server (or Serverless Function) and return it to the client.
Here is the tutorial covering the topic.
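For example, since Yandex Object Storage speaks the S3 API, the link can be generated with the AWS SDK for .NET; a sketch, where the endpoint is the standard Yandex one and the bucket, key, and static keys are placeholders:

using Amazon.S3;
using Amazon.S3.Model;

// Yandex Object Storage is S3-compatible, so the AWS SDK can sign URLs for it.
var config = new AmazonS3Config { ServiceURL = "https://storage.yandexcloud.net" };
var s3 = new AmazonS3Client("YC_ACCESS_KEY", "YC_SECRET_KEY", config); // static keys from Yandex Cloud

var request = new GetPreSignedUrlRequest
{
    BucketName = "uploads",            // hypothetical bucket
    Key = "incoming/big-file.bin",     // hypothetical object key
    Verb = HttpVerb.PUT,               // the client will PUT the file body to this URL
    Expires = DateTime.UtcNow.AddMinutes(30)
};

string url = s3.GetPreSignedURL(request);
// Return `url` to the browser; it uploads the 2GB+ file directly to Object Storage,
// bypassing the 3.5 MB function request limit.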
I am trying to upload a file to OneDrive using the Graph SDK for .NET Core from a worker service.
Basically, some files are created at random times, and those files need to be uploaded to a specified path on OneDrive from the worker service at midnight every day.
I have the following information stored in the appconfig.json file in the application:
ClientID
ClientSecret
TenantID
I have checked samples on various sites but could not find how to upload files using the above ID and Secret. I believe there must be some kind of authProvider that I could initialize using the above ID and Secret.
I also checked Microsoft's documentation but could not find any example of uploading a file using the SDK with the ID and Secret.
https://learn.microsoft.com/en-us/onedrive/developer/rest-api/api/driveitem_put_content?view=odsp-graph-online
Additional points
Upload new file
Overwrite if the file already exists (hope that Graph already supports this)
File size is < 4 MB
Path format: /folder1/folder2/filename.pdf
Any help would be appreciated.
Remember to assign Application permission (Client credentials flow) to the app registration. See Application permission to Microsoft Graph.
You can use the Client credentials provider.
// Build an MSAL confidential client from the app registration values.
IConfidentialClientApplication confidentialClientApplication = ConfidentialClientApplicationBuilder
    .Create(clientId)
    .WithTenantId(tenantID)
    .WithClientSecret(clientSecret)
    .Build();

// Wrap it in the Graph SDK auth provider (Microsoft.Graph.Auth package);
// it acquires tokens for the https://graph.microsoft.com/.default scope.
ClientCredentialProvider authProvider = new ClientCredentialProvider(confidentialClientApplication);
Upload small file (< 4 MB):
GraphServiceClient graphClient = new GraphServiceClient(authProvider);

// Content to upload; for a real file, open a FileStream instead.
using var stream = new System.IO.MemoryStream(System.Text.Encoding.UTF8.GetBytes("The contents of the file goes here."));

// PUT creates the item, or overwrites it if it already exists.
await graphClient.Users["{upn or userID}"].Drive.Items["{item-id}"].Content
    .Request()
    .PutAsync<DriveItem>(stream);
If you want to upload large file, see Upload large files with an upload session.
UPDATE:
Use Install-Package Microsoft.Graph.Auth -IncludePrerelease in Tools -> NuGet Package Manager -> Package Manager Console.
It supports both uploading a new file and replacing an existing item.
For how to get the item id, please refer to Get a file or folder.
For example, to get the item id of /folder1/folder2 (you can test this quickly in Microsoft Graph Explorer):
GET https://graph.microsoft.com/v1.0/users/{userID}/drive/root:/folder1:/children
It will list all the children of folder1, including folder2. Then you can find the item id of folder2.
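Alternatively, if you'd rather address the file by path (matching the /folder1/folder2/filename.pdf format above) and skip the item-id lookup, the SDK also accepts a path; a sketch under the same setup:

// Addressing the drive item by path instead of by item id;
// PUT still creates or overwrites the file.
await graphClient.Users["{upn or userID}"].Drive.Root
    .ItemWithPath("/folder1/folder2/filename.pdf").Content
    .Request()
    .PutAsync<DriveItem>(stream);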
UPDATE 2:
The Client credentials provider uses an application identity, while all the providers below use a user identity.
Since you want to access a personal OneDrive (which involves a user identity), you could choose the Authorization code provider or the Interactive provider.
Authorization code provider: you need to implement interactive sign-in for your web app and use this provider.
Interactive provider: you can easily use this provider to implement interactive sign-in in a console app.
You can have a quick test with the second provider.
Please note that you should add the Delegated (personal Microsoft account) permissions into the app registration. See Delegated permission to Microsoft Graph.
And in this case, you should change every graphClient.Users["{upn or userID}"] to graphClient.Me in your code.
I'm afraid you have to implement an interactive sign-in auth flow to access a personal OneDrive.
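A sketch of the Interactive provider from the same Microsoft.Graph.Auth preview package (the client id is a placeholder, and "common" is assumed as the tenant for personal accounts):

IPublicClientApplication publicClientApplication = PublicClientApplicationBuilder
    .Create("{client-id}")
    .WithTenantId("common")              // assumed: "common" covers personal accounts
    .WithRedirectUri("http://localhost") // opens the browser for sign-in
    .Build();

InteractiveAuthenticationProvider authProvider =
    new InteractiveAuthenticationProvider(publicClientApplication, new[] { "Files.ReadWrite" });

GraphServiceClient graphClient = new GraphServiceClient(authProvider);

// Note graphClient.Me instead of graphClient.Users[...]:
await graphClient.Me.Drive.Root.ItemWithPath("/folder1/folder2/filename.pdf").Content
    .Request()
    .PutAsync<DriveItem>(stream);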
I'm using Auth0 to manage authentication in a web app.
Since it took me a while to get it working, I'd like to export the application and API settings, for example:
the application name
the client id
the supported auth methods
the allowed callback URLs
basically everything else relevant to reproduce the application configuration
I found a lot of documentation about exporting user data but nothing about exporting application or API settings.
Generally, the steps described here are just a quick-and-dirty workaround to read data from the Auth0 Management API without leaving your browser. (I wonder why there is no "export" button for this directly in the UI.) In both cases (application and API settings), open the network monitoring tab of your browser (usually F12). Then...
Exporting Application Settings
Load the "settings" page of the application you'd like to export (e.g. https://manage.auth0.com/dashboard/eu/<your_auth0_tenant_name_here>/applications/<your_application_client_id_here>/settings)
You'll see a GET request to manage.auth0.com/api/clients/<your_application_client_id_here>.
The response to this GET request is a JSON document containing everything you need to reproduce the application settings, including sensitive data such as the client secret and signing keys.
Exporting API Settings
Load the "settings" page of the API you'd like to export (e.g. https://manage.auth0.com/dashboard/eu/<your_auth0_tenant_name_here>/apis/<your_api_id_here>/settings)
You'll see a GET request to manage.auth0.com/api/resource-servers/<your_api_id_here>.
The response to this GET request is a JSON containing everything you need to reproduce the API settings.
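If you'd rather script this than scrape the network tab, the same data is exposed by the public Management API. A sketch in C# (the tenant domain and client id are placeholders, and mgmtApiToken is assumed to be a Management API token with the read:clients scope):

using System.Net.Http;
using System.Net.Http.Headers;

var http = new HttpClient();
// mgmtApiToken: a Management API access token issued for your tenant.
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", mgmtApiToken);

// Returns the same settings payload the dashboard's network tab shows.
string json = await http.GetStringAsync(
    "https://YOUR_TENANT.eu.auth0.com/api/v2/clients/YOUR_CLIENT_ID");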
I can successfully send the InitiateMultipartUploadRequest and get an InitiateMultipartUploadResponse back, but then get an Access Denied error when sending the first UploadPartRequest.
Note that all of the below cases upload the document successfully:
Exactly the same code (i.e. using multipart upload), but to a different bucket that uses SSE-S3 encryption.
Using low-level API and uploading the document in one go, i.e. creating PutObjectRequest and then calling amazonS3Client.PutObjectAsync(putObjectRequest).
Using high-level API TransferUtility class.
Maybe the encryption key was not forwarded in the call properly. With SSE-C (customer-provided keys), the key must be supplied on every UploadPartRequest, not only on the InitiateMultipartUploadRequest. With SSE-KMS, multipart uploads also require the kms:GenerateDataKey and kms:Decrypt permissions on the key, whereas a single PutObject does not need kms:Decrypt, which would explain why the one-shot upload succeeds.
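For the SSE-C case, a minimal sketch with the AWS SDK for .NET (the bucket, key, and variables are hypothetical):

using Amazon.S3;
using Amazon.S3.Model;

var s3 = new AmazonS3Client();

// base64Key: the base64-encoded 256-bit customer key used for SSE-C.
var init = new InitiateMultipartUploadRequest
{
    BucketName = "my-bucket",
    Key = "big-document.pdf",
    ServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,
    ServerSideEncryptionCustomerProvidedKey = base64Key
};
var initResponse = await s3.InitiateMultipartUploadAsync(init);

var part = new UploadPartRequest
{
    BucketName = "my-bucket",
    Key = "big-document.pdf",
    UploadId = initResponse.UploadId,
    PartNumber = 1,
    FilePath = localFilePath,
    // The customer key has to be repeated on every part request;
    // omitting it here is a common cause of the error on UploadPart.
    ServerSideEncryptionCustomerMethod = ServerSideEncryptionCustomerMethod.AES256,
    ServerSideEncryptionCustomerProvidedKey = base64Key
};
await s3.UploadPartAsync(part);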
What is the proper way of delegating file-access authentication from S3 to our authentication service?
For example: a web site user (who has our session id in their headers) sends a request to S3 to get a file by URL. S3 sends a request to our authentication service asking whether a user with such headers can access that file, and if our auth service allows it, the file is downloaded.
There is a lot of information about presigned requests but absolutely nothing about querying S3 with "hidden" authentication.
If a file has been made public on S3, then of course anyone can download it, using a direct link to the file.
If the file is not public, then there needs to be some type of authentication. There are really only two ways a non-public file can be obtained from S3: via a pre-signed URL, or as an AWS user who has access to S3. This is how it works when you yourself want to access an object on S3: you must provide your access key and a signature in the header of the GET request. You can grant other users access to S3 via Amazon IAM, which is closer to the 'hidden' authentication you mentioned. Via the IAM route, there are different ways of providing access, including Federated Users. Visit this link to learn more:
http://docs.aws.amazon.com/AmazonS3/latest/dev/MakingAuthenticatedRequests.html
If you are simply trying to give an authenticated user access to a file, the best and easiest way is to create a pre-signed URL with an expiration time. The expiration can be short, like 10 minutes or even 1 minute, to discourage the user from passing the link to others.
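A sketch with the AWS SDK for .NET (the bucket and key are hypothetical): your service validates the session headers itself, then hands back a short-lived link.

using Amazon.S3;
using Amazon.S3.Model;

var s3 = new AmazonS3Client();

// Short expiry so the link is of little use if shared.
var request = new GetPreSignedUrlRequest
{
    BucketName = "private-files",          // hypothetical bucket
    Key = "reports/2023/summary.pdf",      // hypothetical key
    Verb = HttpVerb.GET,
    Expires = DateTime.UtcNow.AddMinutes(10)
};

string url = s3.GetPreSignedURL(request);
// After your auth service approves the user's session headers,
// redirect the user to `url` to download the file directly from S3.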