In my React Native app, I want to pass along a user's AWS Cognito credentials to a WebView inside the app so that they can be used to access files stored in a private S3 bucket.
So basically I have the following working:
- Logging into Cognito (via aws-amplify's Auth class)
- Security on the S3 bucket allowing only logged-in users to access its content
I have tried sending the headers to the WebView:
<WebView
  source={{
    uri: source,
    headers: {
      Authorization: "AWS4-HMAC-SHA256 …"
    }
  }}
/>
But that does not seem to work. Does anyone know how to do this?
OK, after many emails to AWS enterprise support team members and many hours of hair pulling, I have found out that S3 does not currently support passing along credentials from Cognito.
What we can do instead is one of the following:
- Place CloudFront in front of S3 and use an Origin Access Identity (OAI) to protect the data. This works well to secure the access; HOWEVER, it does not allow passing the credentials along to S3, because the communication between CloudFront and S3 now goes through the OAI, which means a single identity for all users.
- Sign each of the S3 URLs that you need to access.
I used the latter as I need to restrict user access. The code to sign the URLs that I used was:
In react-native:
import { Storage } from "aws-amplify";

// Storage.get resolves to a presigned URL for the object
Storage.get("image.jpg")
  .then(result => {
    console.log(result);
  })
  .catch(err => {
    console.log(err);
  });
In node.js:
import AWS from "aws-sdk";

AWS.config.update({ accessKeyId, secretAccessKey, region });
const s3 = new AWS.S3();

// getSignedUrl returns the signed URL synchronously;
// note that Expires is given in seconds, not milliseconds
const url = s3.getSignedUrl("getObject", {
  Bucket: "s3-bucket-name",
  Key: "my/path/image.jpg",
  Expires: 60 * 5, // 5 minutes
});
I hope this helps others.
I have a React app built using Serverless NextJS and served behind AWS CloudFront. I am also using AWS Cognito to authenticate our users.
After a user successfully authenticates through AWS Cognito, they are redirected to my React app with a query string containing OAuth tokens (id_token, access_token, refresh_token, raw[id_token], raw[access_token], raw[refresh_token], raw[expires_in], raw[token_type]).
It seems that the query string is simply larger than AWS CloudFront's limits, and CloudFront throws the following error:
413 ERROR
The request could not be satisfied.
Bad request. We can't connect to the server for this app...
Generated by cloudfront (CloudFront)
Request ID: FlfDp8raw80pAFCvu3g7VEb_IRYbhHoHBkOEQxYyOTWMsNlRjTA7FQ==
This error has been encountered before by many other users (see example). Keen to know:
- Are there any workarounds? Perhaps there is a way to configure AWS Cognito to reduce the number of tokens it passes in the query string by default?
- Is it possible to configure AWS CloudFront to not enforce its default limits on certain pages (and not cache them)?
- What's the suggestion going forward? The only thing I can imagine is not to use AWS CloudFront.
After analysing the query fields that AWS Cognito sends to a callback URL, I was able to determine that not all fields are required for my use case, particularly the raw OAuth token fields.
With that information, I solved the problem by writing a "middleware" to intercept my backend system redirecting to my frontend (that is sitting behind CloudFront) and trimming away query string fields that I do not need to complete authentication.
In case this could inspire someone else stuck with a similar problem, here is what my middleware looks like for my backend system (Strapi):
const url = require("url");
const qs = require("qs");
const _ = require("lodash");

module.exports = (strapi) => {
  return {
    initialize() {
      strapi.app.use(async (ctx, next) => {
        await next();
        if (ctx.request.url.startsWith("/connect/cognito/callback?code=")) {
          // Parse URL (with OAuth query string) Strapi is redirecting to
          const location = ctx.response.header.location;
          const { protocol, host, pathname, query } = url.parse(location);
          // Parse OAuth query string and remove redundant (and bloated) `raw` fields
          const queryObject = qs.parse(query);
          const trimmedQueryObject = _.omit(queryObject, "raw");
          // Reconstruct original redirect URL with shortened query string params
          const newLocation = `${protocol}//${host}${pathname}?${qs.stringify(
            trimmedQueryObject
          )}`;
          ctx.redirect(newLocation);
        }
      });
    },
  };
};
I just started using AWS Cognito in my app. I followed the instructions, installed Amplify, created a User Pool and an Identity Pool, and set everything up.
I created a signup form, signed up with no problems using Auth.signUp(), and confirmed my email.
But when I tried logging in with my credentials, I got NotAuthorizedException: Incorrect username or password.
I am logging in like this:
Auth.signIn(user.Username, user.Password)
  .then((res) => {
    AsyncStorage.setItem('token', JSON.stringify(user))
      .then(res => {
        console.log('saved')
      })
      .catch(err => {
        console.log(err)
      })
  })
  .catch(err => {
    console.log(err)
  })
No matter what I enter in the input fields, I get this error. I just started using AWS and can't figure out what the problem is.
So after a few days of trying to solve this, I found a solution.
In the AWS config, the authentication flow type is set to USER_SRP_AUTH by default.
What you need to do is put the following in your AWS config:
authenticationFlowType: 'USER_PASSWORD_AUTH',
Then go to the Amazon Cognito console -> User Pool -> App clients -> Show Details -> Enable username-password (non-SRP) flow for app-based authentication (USER_PASSWORD_AUTH).
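For reference, here is a minimal sketch of where that setting goes in Amplify.configure; the region, pool ID, and client ID below are placeholders for your own values:

import Amplify from "aws-amplify";

Amplify.configure({
  Auth: {
    region: "us-east-1",                    // placeholder region
    userPoolId: "[user-pool-id]",           // placeholder
    userPoolWebClientId: "[app-client-id]", // placeholder
    // switch from the default USER_SRP_AUTH flow
    authenticationFlowType: "USER_PASSWORD_AUTH",
  },
});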
My code uses the AWS Javascript SDK to upload to S3 directly from a browser. Before the upload happens, my server sends it a value to use for 'Authorization'.
But I see no way in the AWS.S3.upload() method where I can add this header.
I know that underneath the .upload() method, AWS.S3.ManagedUpload is used but that likewise doesn't seem to return a Request object anywhere for me to add the header.
It works successfully in my dev environment when I hardcode my credentials in the S3() object, but I can't do that in production.
How can I get the Authorization header into the upload() call?
Client Side
This post explains how to POST from an HTML form with a pre-generated signature:
How do you upload files directly to S3 over SSL?
Server Side
When you initialise the S3 client, you can pass the access key and secret:
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  accessKeyId: '[value]',
  secretAccessKey: '[value]'
});

// upload requires at least Bucket, Key and Body
const params = {
  Bucket: '[bucket-name]',
  Key: '[object-key]',
  Body: '[file contents or stream]'
};

s3.upload(params, function (err, data) {
  console.log(err, data);
});
Reference: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html
Alternatively, if you are running this code inside AWS services such as EC2, Lambda, ECS, etc., you can assign an IAM role to the service you are using and grant the required permissions to that role.
I suggest that you use presigned URLs.
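A minimal sketch of that approach (the bucket name, object key, and `file` below are placeholders): the server, which holds the credentials, generates a short-lived presigned PUT URL, and the browser uploads directly to it, so no Authorization header is needed client side.

// Server side (Node.js): generate a short-lived presigned PUT URL
const AWS = require("aws-sdk");
const s3 = new AWS.S3({ apiVersion: "2006-03-01" });

const uploadUrl = s3.getSignedUrl("putObject", {
  Bucket: "[bucket-name]",
  Key: "[object-key]",
  Expires: 60 * 5, // seconds
});

// Client side (browser): PUT the file straight to S3 with the presigned URL;
// `file` would come from e.g. an <input type="file"> element
fetch(uploadUrl, { method: "PUT", body: file });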
I am using the minio client to access S3. The S3 storage I am using has two endpoints - one (say EP1) which is accessible from a private network, and the other (say EP2) from the internet. My application creates a presigned URL for downloading an S3 object using EP1, since it cannot access EP2. This URL is used by another application which is not on this private network and hence has access only to EP2. The URL is (obviously) not working when used by the application outside the network, since it has EP1 in it.
I have gone through the minio documentation but did not find anything which helps me specify alternate endpoints.
So my question is:
- Is there anything which I have missed from minio that can help me?
- Is there any S3 feature which allows generating a presigned URL for an object with EP2 in it?
- Or is this not solvable without changing the current network layout?
You can use minio-js to manage this. Here is an example that you can use:
var Minio = require('minio')

var s3Client = new Minio.Client({
  endPoint: "EP2",
  port: 9000,
  useSSL: false,
  accessKey: "minio",
  secretKey: "minio123",
  region: "us-east-1"
})

// the URL is delivered to the callback; the expiry (1000) is in seconds
s3Client.presignedPutObject('my-bucketname', 'my-objectname', 1000, function(e, presignedUrl) {
  if (e) return console.log(e)
  console.log(presignedUrl)
})
This will not contact the server at all. The only thing here is that you need to know the region that the bucket belongs to. If you have not set any location in minio, then you can use us-east-1 as the default.
I am dealing with some legacy applications and want to use Amazon AWS API Gateway to mitigate some of the drawbacks.
Application A is able to call URLs with parameters, but does not support HTTP basic auth. Like this:
https://example.com/api?param1=xxx&param2=yyy
Application B is able to handle these calls and respond, BUT Application B needs HTTP basic authentication.
The question is now, can I use Amazon AWS API Gateway to mitigate this?
The idea is to create an API of this style:
http://amazon-aws-api.example.com/api?authcode=aaaa&param1=xxx&param2=yyy
Amazon should then check whether the authcode is correct and, if so, forward the call with all remaining parameters to Application B using a stored username+password. The result should simply be passed back to Application A.
I could also give the username + password as a parameter, but I guess using a long authcode and storing the rather short password at Amazon is more secure. One could also use a changing authcode like the ones used in 2-factor authentication.
Path to a solution:
I created the following AWS Lambda function based on the HTTPS template:
'use strict';
const https = require('https');

exports.handler = (event, context, callback) => {
  // the event object is used directly as the request options
  // (hostname, path, auth, ...)
  const req = https.get(event, (res) => {
    let body = '';
    res.setEncoding('utf8');
    res.on('data', (chunk) => body += chunk);
    res.on('end', () => callback(null, body));
  });
  req.on('error', callback);
  req.end();
};
If I use the Test function and provide it with this event it works as expected:
{
  "hostname": "example.com",
  "path": "/api?param1=xxx&param2=yyy",
  "auth": "user:password"
}
I suppose the best way from here is to use API Gateway to provide an interface like:
https://amazon-aws-api.example.com/api?user=user&pass=pass&param1=xxx&param2=yyy
Since the parameters of an HTTPS request are encrypted and are not stored in Lambda, this method should be reasonably secure.
The question is now, how to connect the API gateway to the Lambda.
You can achieve the scenario mentioned with AWS API Gateway. However, it won't be just a proxy integration; rather, you need a Lambda function that forwards the request after doing the transformation.
If the credentials are fixed credentials used to invoke the API, you can store them in Lambda environment variables, encrypted with AWS KMS keys.
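For that fixed-credentials case, here is a minimal sketch of the forwarding Lambda, assuming a proxy integration and that the hostname and the BASIC_AUTH_USER / BASIC_AUTH_PASS environment variable names are placeholders:

'use strict';
const https = require('https');

exports.handler = (event, context, callback) => {
  // query params forwarded from API Gateway (proxy integration)
  const query = new URLSearchParams(event.queryStringParameters || {}).toString();

  const req = https.get({
    hostname: 'example.com', // placeholder: Application B's host
    path: `/api?${query}`,
    // credentials stored as KMS-encrypted Lambda environment variables
    auth: `${process.env.BASIC_AUTH_USER}:${process.env.BASIC_AUTH_PASS}`,
  }, (res) => {
    let body = '';
    res.setEncoding('utf8');
    res.on('data', (chunk) => body += chunk);
    res.on('end', () => callback(null, { statusCode: res.statusCode, body }));
  });
  req.on('error', callback);
};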
However, if the credentials are sent per user (e.g. a user logged into the application from a web browser), the drawback of this approach is that you need to store the username and password and retrieve them later. Storing passwords, even encrypted, is not encouraged. In that case, it is better to pass the credentials through (while still doing the transformations) rather than storing and retrieving them in between.