node S3 Object Storage Linode - amazon-s3

I'm trying to use the aws-sdk to access my Linode S3-compatible bucket, but everything I try doesn't work. I'm not sure what the correct endpoint should be. For testing purposes, my bucket is set to public read/write.
const s3 = new S3({
  endpoint: "https://linodeobjects.com",
  region: "eu-central-1",
  accesKeyId: <accesKey>,
  secretAccessKey: <secretKey>,
});

const params = {
  Bucket: bucketName,
  Key: "someKey",
  Expires: 60,
};
const uploadURL = await s3.getSignedUrlPromise("putObject", params);
The error I'm getting:

{
  code: 'CredentialsError',
  time: 2021-07-15T08:29:50.000Z,
  retryable: true,
  originalError: {
    message: 'Could not load credentials from any providers',
    code: 'CredentialsError',
    time: 2021-07-15T08:29:50.000Z,
    retryable: true,
    originalError: {
      message: 'EC2 Metadata roleName request returned error',
      code: 'TimeoutError',
      time: 2021-07-15T08:29:49.999Z,
      retryable: true,
      originalError: [Object]
    }
  }
}

It seems like a problem with the credentials of the environment that this code is executed in and not with the bucket permissions themselves.
The pre-signing of the URL is an operation that is done entirely locally. It uses local credentials (i.e., access key ID and secret access key) to create a sigv4 signature for the URL. This also means that whether or not the credentials used for signing the URL are valid is only checked at the moment the URL is used, and not at the moment of signing the URL itself.
The error simply indicates that none of the ways the SDK tries to find credentials yielded credentials it can use to sign the URL.
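To make the point concrete: a hypothetical snippet like the one below produces a URL without complaint even with made-up keys, and the failure only surfaces when the URL is used.

const AWS = require('aws-sdk');

// Hypothetical illustration: signing succeeds locally even with bogus keys.
const s3 = new AWS.S3({
  accessKeyId: 'AKID-NOT-REAL',
  secretAccessKey: 'not-a-real-secret',
  region: 'eu-central-1',
});

// No error here; a GET on this URL would fail with 403 at request time.
const url = s3.getSignedUrl('getObject', { Bucket: 'some-bucket', Key: 'some-key' });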
This might be unrelated, but according to the documentation, the endpoint should be the following: "The endpoint URI to send requests to. The default endpoint is built from the configured region. The endpoint should be a string like 'https://{service}.{region}.amazonaws.com' or an Endpoint object." Which, in the code example above, is not the case.

You should set the endpoint to eu-central-1.linodeobjects.com. With Linode Object Storage, each cluster has its own endpoint, so it is the endpoint you use (not the region option on its own) that determines where requests go.
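Putting both fixes together, a minimal sketch (aws-sdk v2, inside an async function). Note the region-specific endpoint and the correctly spelled accessKeyId option; the accesKeyId in the question's code is silently ignored by the SDK, which then falls back to its credential provider chain:

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  endpoint: 'https://eu-central-1.linodeobjects.com', // region-specific Linode cluster
  region: 'eu-central-1',
  accessKeyId: process.env.LINODE_ACCESS_KEY,     // env var names are illustrative
  secretAccessKey: process.env.LINODE_SECRET_KEY,
});

const params = {
  Bucket: bucketName,
  Key: 'someKey',
  Expires: 60,
};

const uploadURL = await s3.getSignedUrlPromise('putObject', params);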

Related

SignatureDoesNotMatch on GET request to pre-signed URL (hellosign)

I am attempting to grab a PDF file for e-signature using pre-signed URLs. I use Amplify Storage to generate the URL, then I provide the URL to the HelloSign function to bring the file into the embedding window. I am able to open the document using window.open, but this other method brings up the SignatureDoesNotMatch error:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
key: string = 'bucketname/path/to/file.pdf'
expires: number = 60

const url = await Storage.get(key, {
  level: 'public',
  cacheControl: 'no-cache',
  signatureVersion: 'v4',
  ContentType: 'application/pdf',
  expires: 60,
  customPrefix: {
    public: '',
    protected: '',
    private: '',
  },
})

hellosign.open(url, {
  clientId,
  skipDomainVerification: true,
})
One fix I found on GitHub was to include the signatureVersion when initializing the S3 class instance using the SDK. I am not using the SDK directly, however, so I tried providing it when configuring Amplify like so:
AWSS3: {
  bucket,
  region,
  signatureVersion: 'v4',
},
Needless to say, it did not fix the issue. I could not find any references to this option in the docs, since the Amplify.configure function is typed to take any in their docs.
I tried clicking the pre-signed URL and was able to download the doc without issue. I inspected the request and saw the correct headers, matching the outgoing request originating from hellosign.open. Any ideas what I can try?
hellosign.open() is only expected to load embedded URLs. These would be URLs returned from requests to the following endpoints:
/embedded/sign_url/ (sign URL - embedded signing)
/template/create_embedded_draft (edit URL - embedded template creation)
/embedded/edit_url/ (edit URL - embedded template editing)
/unclaimed_draft/create_embedded (claim URL embedded requesting)
/unclaimed_draft/create_embedded_with_template (claim URL embedded requesting)
If you'd like to collect signatures on a document in an iframe on your site, embedded signing will likely be what you're looking for. In that case, you'd want to create your signature request with a request to:
/signature_request/create_embedded
and then use the URL you're creating via Amplify Storage as the value of the file_url[] request parameter.
Once you've created your request, locate the signature ID for your signer in the response object, and use that in a request to /embedded/sign_url/
This will return a sign URL that you can load using hellosign.open().
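For reference, a rough server-side sketch of that flow using the hellosign-sdk Node package (signer details and the env var name are placeholders, not values from the question):

const hellosign = require('hellosign-sdk')({ key: process.env.HELLOSIGN_API_KEY });

// 1. Create an embedded signature request, passing the Amplify Storage URL
//    as the file_url parameter described above.
const createRes = await hellosign.signatureRequest.createEmbedded({
  clientId,
  signers: [{ email_address: 'signer@example.com', name: 'Example Signer' }],
  file_url: [url], // the pre-signed URL from Storage.get
  test_mode: 1,
});

// 2. Locate the signature ID for your signer and exchange it for a sign URL.
const signatureId = createRes.signature_request.signatures[0].signature_id;
const signUrlRes = await hellosign.embedded.getSignUrl(signatureId);

// 3. This sign URL is what hellosign.open() (hellosign-embedded, client side) expects:
// hellosign.open(signUrlRes.embedded.sign_url, { clientId, skipDomainVerification: true });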

"Access key does not exist" when generating pre-signed S3 URL from Lambda function

I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object.
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
  <Code>InvalidAccessKeyId</Code>
  <Message>The AWS Access Key Id you provided does not exist in our records.</Message>
  <AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
  <RequestId>DKQ55DK3XJBYGKQ6</RequestId>
  <HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: ExpressJSApp
    Description: Main website request handler
    CodeUri: ../lambda.zip
    Handler: lambda.handler
    [SNIP]
    Policies:
      - S3CrudPolicy:
          BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');

router.get('/', (req, res) => {
  const s3 = new AWS.S3({
    region: 'eu-west-1',
    signatureVersion: 'v4'
  });
  const params = {
    'Bucket': 'my-bucket-name',
    'Key': 'my-file-name'
  };
  s3.getSignedUrl('getObject', params, (error, url) => {
    res.send(`<p>${url}</p>`);
  });
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execution role supply those? Am I barking up the wrong tree?
tl;dr: Make sure you have the correct order of signature v4 headers/form data in your request.
I had the same exact issue.
I am not sure if this is the solution for everyone who is encountering the problem, but I learned the following:
The error message, and other misleading error messages, can occur if you don't use the correct order of security fields. In my case I was using the endpoint to create a presigned URL for POSTing a file to upload it. In that case, you need to make sure that the security-relevant data in your form data is in the correct order. For signatureVersion 's3v4' it is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL to upload a file, it's important to place your file AFTER the security data.
After that, the request works as expected.
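As an illustrative sketch of that ordering (presigned stands for a createPresignedPost response with its url and fields, and fileBlob for the file being uploaded; both names are placeholders):

// Append the security fields first, in the order they arrive in the
// presigned response, and the file strictly last.
const form = new FormData();
Object.entries(presigned.fields).forEach(([name, value]) => {
  form.append(name, value); // key, x-amz-algorithm, x-amz-credential, ...
});
form.append('file', fileBlob); // the file must come AFTER the security data
await fetch(presigned.url, { method: 'POST', body: form });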
I can't say for certain but I'm guessing this may have something to do with you using the old SDK. Here it is w/ v3 of the SDK. You may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
// ...
const client = new S3Client({ region: 'eu-west-1' });
const params = {
  'Bucket': 'my-bucket-name',
  'Key': 'my-file-name'
};
const command = new GetObjectCommand(params);
// In SDK v3, getSignedUrl returns a promise rather than taking a callback.
const url = await getSignedUrl(client, command, { expiresIn: 3600 });
res.send(`<p>${url}</p>`);

Cognito Custom Auth trigger not getting Session from Cognito

I tried to call the InitiateAuth API from the AWS CLI. I have set up the Define Auth, Create Auth and Verify Auth lambda triggers correctly. The problem is that when I run the command below, it shows an error:
aws cognito-idp initiate-auth --client-id <my_client_id> --auth-flow CUSTOM_AUTH --auth-parameters USERNAME=uname,ChallengeName="SRP_A",SRP_A="<srp_value>"
Error: An error occurred (UserLambdaValidationException) when calling the InitiateAuth operation: DefineAuthChallenge failed with error Cannot read property 'challengeName' of undefined.
I checked the Define Auth lambda code, and also the CloudWatch logs of the Lambda execution. The error occurs because the input from Cognito contains an empty session key in the event JSON (which Cognito sends to the Lambda), and the challengeName property the code reads resides inside the session entries (as shown in the official documentation).
Here is the JSON event sent to the Lambda from Cognito when I ran that command (taken from the CloudWatch Lambda logs, where I printed the incoming event):
{
  version: '1',
  region: 'us-east-1',
  userPoolId: 'us-east-1_******',
  userName: 'uname',
  callerContext: {
    awsSdkVersion: 'aws-sdk-unknown-unknown',
    clientId: '<my_client_id>'
  },
  triggerSource: 'DefineAuthChallenge_Authentication',
  request: {
    userAttributes: {
      sub: 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx',
      'cognito:email_alias': '<email>',
      'cognito:user_status': 'CONFIRMED',
      email_verified: 'true',
      name: 'Custom Test',
      email: '<email>'
    },
    session: [], // <-- empty!
    userNotFound: false
  },
  response: { challengeName: null, issueTokens: null, failAuthentication: null }
}
What is the reason? Is it because I am sending the request from the CLI, so Cognito is not able to create a session or something? I'm not sure. Any help will be appreciated.
Session holds previous auth challenge results (either from built-in challenges or your custom challenges). It will be empty for the first invocation of the Define Auth Challenge lambda. As the name suggests, you have to define the auth challenge in the handler's response.
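In other words, an empty session is expected on the first invocation, and the handler has to issue the challenge itself. A minimal sketch of a Define Auth Challenge handler along those lines (assuming a single CUSTOM_CHALLENGE round; illustrative, not your exact trigger code):

exports.handler = async (event) => {
  const session = event.request.session;
  if (session.length === 0) {
    // First invocation: no previous results yet, so issue the custom challenge.
    event.response.challengeName = 'CUSTOM_CHALLENGE';
    event.response.issueTokens = false;
    event.response.failAuthentication = false;
  } else if (session[session.length - 1].challengeResult === true) {
    // The challenge was answered correctly: issue tokens.
    event.response.issueTokens = true;
    event.response.failAuthentication = false;
  } else {
    event.response.issueTokens = false;
    event.response.failAuthentication = true;
  }
  return event;
};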

In Fargate container why can I CRUD S3 but can't create a presigned post

I'm using Node in a Docker container, and locally I use my IAM keys for creating, reading and deleting files in an S3 bucket, as well as for creating pre-signed POSTs. Up on a Fargate container, I create a taskRole and attach a policy which gives it full access to S3.
taskRole.attachInlinePolicy(
  new iam.Policy(this, `${clientPrefix}-task-policy`, {
    statements: [
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['S3:*'],
        resources: ['*'],
      }),
    ],
  })
);
With that role, I can create, read and delete files with no issues from the API. When the API tries to create a pre-signed POST, however, I get the error:
Error: Unable to create a POST object policy without a bucket, region, and credentials
It seems super strange to me that I can run the other operations but that it fails with the presigned POST, especially since my S3 actions are all allowed.
const post: aws.S3.PresignedPost = await s3.createPresignedPost({
  Bucket: bucket,
  Fields: { key },
  Expires: 60,
  Conditions: [['content-length-range', 0, 5242880]],
});
Here is the code I use. I am logging the bucket and key, so I'm positive they are valid values. One thought I had: when running locally I run aws.configure to set my keys, but in Fargate I purposefully omit that, and I assumed it was getting the right keys since the other S3 operations work without fail. Am I approaching this right?
When using IAM role credentials with the AWS SDK, you must either use the asynchronous (callback) version of createPresignedPost or guarantee that your credentials have been resolved before calling the await version of this method.
Something like this will work with IAM based credentials:
const s3 = new AWS.S3()

const _presign = params => {
  return new Promise((res, rej) => {
    s3.createPresignedPost(params, (err, data) => {
      if (err) return rej(err)
      return res(data)
    })
  })
}

// await _presign(...) <- works
// await s3.createPresignedPost(...) <- won't work
Refer: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
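If you'd rather keep the synchronous call, the other option the docs mention is to resolve the credentials up front. A hedged sketch using AWS.config.getCredentials:

// Force credential resolution (e.g. the Fargate task role) up front;
// once resolved, the synchronous createPresignedPost variant is safe.
AWS.config.getCredentials((err) => {
  if (err) throw err
  const post = s3.createPresignedPost(params) // no callback: synchronous
  console.log(post.url, post.fields)
})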

How to resolve 403 Forbidden error when uploading to S3

I'm setting up a Vue.js / DropzoneJS app loosely based on kfei's vue-s3-dropzone app. It's designed to upload files (using a PUT method) to AWS S3 serverlessly, via an AWS Lambda function and an AWS S3 bucket. When I try to upload an image to the S3 bucket, I basically get a 403 error code and:
XMLHttpRequest at 'https://xxxxxxxxxxxxxxxxxxx' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status
Is there anything I can do to fix this?
This is what I did:
Created an S3 bucket
Set up a bucket policy and a CORS configuration in the S3 bucket settings
Created a Lambda function that is supposed to sign a URL allowing a PUT upload of each file to S3, with the role executing the Lambda having PutObject and PutObjectAcl permissions on the S3 bucket
Set up an API Gateway API with an OPTIONS method (to pass the preflight check) and a PUT method, with these CORS settings:
b. The OPTIONS method has a Mock backend integration, with the Integration Response returning the following:
Access-Control-Allow-Headers 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,x-requested-with'
Access-Control-Allow-Methods 'PUT,OPTIONS'
Access-Control-Allow-Origin '*'
c. The PUT method has:
"Access-Control-Allow-Origin": "*"
In API Gateway: set up an API key and a usage plan
The lambda code:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var bucketName = process.env.AWS_BUCKET_NAME;

exports.handler = (event, context) => {
  if (!event.hasOwnProperty('contentType')) {
    context.fail({ err: 'Missing contentType' });
  }
  if (!event.hasOwnProperty('filePath')) {
    context.fail({ err: 'Missing filePath' });
  }
  var params = {
    Bucket: bucketName,
    Key: event.filePath,
    Expires: 3600,
    ContentType: event.contentType
  };
  s3.getSignedUrl('putObject', params, (err, url) => {
    if (err) {
      context.fail({ err });
    } else {
      context.succeed({ url });
    }
  });
};
Expected: Successful upload of files
Actual: Possible CORS issues.
getSignedUrl would work fine if you were uploading the file from an API client like Postman or a Node.js server, but as you state you are seeing a preflight check fail, I'm assuming you are using some kind of HTML form and frontend JS.
From the AWS JavaScript SDK docs regarding getSignedUrl:
"Note: Not all operation parameters are supported when using pre-signed URLs. Certain parameters, such as SSECustomerKey, ACL, Expires, ContentLength, or Tagging must be provided as headers when sending a request. If you are using pre-signed URLs to upload from a browser and need to use these fields, see createPresignedPost()."
As you are setting the 'Expires' param when calling getSignedUrl and are sending from the browser, you need to use createPresignedPost instead of getSignedUrl in your Lambda code.
You will then need to POST instead of PUT from the browser to S3.
NB: Remember to update your CORS rules for S3 with POST
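A rough sketch of that change in the Lambda above (reusing the question's handler variables; illustrative rather than a drop-in replacement):

var params = {
  Bucket: bucketName,
  Expires: 3600,
  // With createPresignedPost, the key and content type become form fields.
  Fields: { key: event.filePath, 'Content-Type': event.contentType }
};
s3.createPresignedPost(params, (err, data) => {
  if (err) {
    context.fail({ err });
  } else {
    // The client must POST data.fields (with the file appended last) to data.url.
    context.succeed({ url: data.url, fields: data.fields });
  }
});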