SignatureDoesNotMatch on GET request to pre-signed URL (hellosign) - amazon-s3

I am attempting to grab a PDF file for e-signature using pre-signed URLs. I use Amplify Storage to generate the URL, then provide the URL to the HelloSign function to bring the file into the embedding window. I am able to open the document using window.open, but this other method produces a SignatureDoesNotMatch error:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
key: string = 'bucketname/path/to/file.pdf'
expires: number = 60

const url = await Storage.get(key, {
  level: 'public',
  cacheControl: 'no-cache',
  signatureVersion: 'v4',
  ContentType: 'application/pdf',
  expires: 60,
  customPrefix: {
    public: '',
    protected: '',
    private: '',
  },
})

hellosign.open(url, {
  clientId,
  skipDomainVerification: true,
})
One fix I found on GitHub was to include the signatureVersion when initializing the S3 class instance using the SDK. I am not using the SDK, however, so I tried providing it when configuring Amplify like so:
AWSS3: {
  bucket,
  region,
  signatureVersion: 'v4',
},
Needless to say, it did not fix the issue. I could not find any references to this in the docs, since the Amplify.configure function is typed to take any in their docs.
I tried clicking the pre-signed URL and was able to download the doc without issue. I inspected the request and saw the correct headers, matching the outgoing request originating from hellosign.open. Any ideas what I can try?

hellosign.open() is only expected to load embedded URLs. These would be URLs returned from requests to the following endpoints:
/embedded/sign_url/ (sign URL - embedded signing)
/template/create_embedded_draft (edit URL - embedded template creation)
/embedded/edit_url/ (edit URL - embedded template editing)
/unclaimed_draft/create_embedded (claim URL embedded requesting)
/unclaimed_draft/create_embedded_with_template (claim URL embedded requesting)
If you'd like to collect signatures on a document in an iframe on your site, embedded signing will likely be what you're looking for. In that case, you'd want to create your signature request with a request to:
/signature_request/create_embedded
and then use the URL you're creating via Amplify Storage as the value of the file_url[] request parameter.
Once you've created your request, locate the signature ID for your signer in the response object, and use that in a request to /embedded/sign_url/
This will return a sign URL that you can load using hellosign.open().
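For reference, here is a rough sketch of that flow in Node.js, calling the v3 endpoints listed above directly. The form parameter spellings and response field names (signature_request.signatures[0].signature_id, embedded.sign_url) are assumptions based on the standard HelloSign v3 API and should be checked against the current docs; HELLOSIGN_API_KEY, clientId, and url (the pre-signed URL returned by Storage.get) are placeholders.

// HelloSign uses HTTP Basic auth with the API key as the username (Node 18+ global fetch).
const auth = 'Basic ' + Buffer.from(`${HELLOSIGN_API_KEY}:`).toString('base64');

// 1. Create an embedded signature request, pointing file_url[] at the pre-signed S3 URL.
const createRes = await fetch('https://api.hellosign.com/v3/signature_request/create_embedded', {
  method: 'POST',
  headers: { Authorization: auth },
  body: new URLSearchParams({
    client_id: clientId,
    'signers[0][email_address]': 'signer@example.com',
    'signers[0][name]': 'Jane Signer',
    'file_url[0]': url, // the URL returned by Storage.get
    test_mode: '1',
  }),
});
const { signature_request } = await createRes.json();
const signatureId = signature_request.signatures[0].signature_id;

// 2. Exchange the signature ID for an embedded sign URL.
const signRes = await fetch(`https://api.hellosign.com/v3/embedded/sign_url/${signatureId}`, {
  headers: { Authorization: auth },
});
const { embedded } = await signRes.json();

// 3. Load the sign URL (not the S3 URL) in the embedding window.
hellosign.open(embedded.sign_url, { clientId, skipDomainVerification: true });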

Related

"Access key does not exist" when generating pre-signed S3 URL from Lambda function

I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object.
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
<Code>InvalidAccessKeyId</Code>
<Message>The AWS Access Key Id you provided does not exist in our records.</Message>
<AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
<RequestId>DKQ55DK3XJBYGKQ6</RequestId>
<HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: ExpressJSApp
    Description: Main website request handler
    CodeUri: ../lambda.zip
    Handler: lambda.handler
    [SNIP]
    Policies:
      - S3CrudPolicy:
          BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');

router.get('/', (req, res) => {
  const s3 = new AWS.S3({
    region: 'eu-west-1',
    signatureVersion: 'v4'
  });
  const params = {
    'Bucket': 'my-bucket-name',
    'Key': 'my-file-name'
  };
  s3.getSignedUrl('getObject', params, (error, url) => {
    res.send(`<p>${url}</p>`)
  });
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execute role supply those? Am I barking up the wrong tree?
tl;dr: Make sure the SigV4 headers/form-data fields are in the correct order in your request.
I had the exact same issue.
I am not sure if this is the solution for everyone encountering the problem, but I learned the following:
This error message, and other misleading error messages, can occur if you don't use the correct order of security fields. In my case I was using the endpoint to create a presigned URL for POSTing a file to upload it. In that case, you need to make sure that the security-relevant fields appear in the correct order in your form data. For signatureVersion 's3v4' the order is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL to upload a file, it is important that the file field comes AFTER the security fields.
With that order, the request works as expected.
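As an illustration, here is a minimal sketch of that ordering in JavaScript, assuming presignedPost is the { url, fields } object returned by S3's createPresignedPost and fileBlob is the file being uploaded (both placeholders):

const form = new FormData();

// Append every presigned field first (key, x-amz-algorithm, x-amz-credential,
// x-amz-date, policy, x-amz-security-token, x-amz-signature, ...).
for (const [name, value] of Object.entries(presignedPost.fields)) {
  form.append(name, value);
}

// The file must come AFTER the security fields.
form.append('file', fileBlob);

await fetch(presignedPost.url, { method: 'POST', body: form });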
I can't say for certain but I'm guessing this may have something to do with you using the old SDK. Here it is w/ v3 of the SDK. You may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

// ...

const client = new S3Client({ region: 'eu-west-1' });
const params = {
  'Bucket': 'my-bucket-name',
  'Key': 'my-file-name'
};
const command = new GetObjectCommand(params);
// getSignedUrl is promise-based in SDK v3
const url = await getSignedUrl(client, command, { expiresIn: 3600 });
res.send(`<p>${url}</p>`);

node S3 Object Storage Linode

I'm trying to use the aws-sdk to access my Linode S3-compatible bucket, but everything I try doesn't work, and I'm not sure what the correct endpoint should be. For testing purposes my bucket is set to public read/write.
const s3 = new S3({
  endpoint: "https://linodeobjects.com",
  region: "eu-central-1",
  accesKeyId: <accesKey>,
  secretAccessKey: <secretKey>,
});

const params = {
  Bucket: bucketName,
  Key: "someKey",
  Expires: 60,
};

const uploadURL = await s3.getSignedUrlPromise("putObject", params);
The error I'm getting:
{
  code: 'CredentialsError',
  time: 2021-07-15T08:29:50.000Z,
  retryable: true,
  originalError: {
    message: 'Could not load credentials from any providers',
    code: 'CredentialsError',
    time: 2021-07-15T08:29:50.000Z,
    retryable: true,
    originalError: {
      message: 'EC2 Metadata roleName request returned error',
      code: 'TimeoutError',
      time: 2021-07-15T08:29:49.999Z,
      retryable: true,
      originalError: [Object]
    }
  }
}
It seems like a problem with the credentials of the environment that this code is executed in and not with the bucket permissions themselves.
The pre-signing of the URL is an operation that is done entirely locally. It uses local credentials (i.e., access key ID and secret access key) to create a sigv4 signature for the URL. This also means that whether or not the credentials used for signing the URL are valid is only checked at the moment the URL is used, and not at the moment of signing the URL itself.
The error simply indicates that from all the ways the SDK is trying to find credentials (more info here) it cannot find credentials it can use to sign the URL.
This might be unrelated, but according to the documentation, the endpoint should be the following: "The endpoint URI to send requests to. The default endpoint is built from the configured region. The endpoint should be a string like 'https://{service}.{region}.amazonaws.com' or an Endpoint object." This is not the case in the code example above.
You should set the endpoint to be eu-central-1.linodeobjects.com. When using Linode object storage the region is not determined by the endpoint that you use.
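Putting the two points together, here is a sketch of what the corrected client could look like (aws-sdk v2 as in the question; the environment variable names are placeholders, and credentials are passed explicitly so the SDK does not fall back to the EC2 metadata provider):

const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  // Region-specific Linode Object Storage endpoint.
  endpoint: 'https://eu-central-1.linodeobjects.com',
  region: 'eu-central-1',
  // Note the SDK option is spelled accessKeyId.
  accessKeyId: process.env.LINODE_ACCESS_KEY,
  secretAccessKey: process.env.LINODE_SECRET_KEY,
});

const uploadURL = await s3.getSignedUrlPromise('putObject', {
  Bucket: bucketName,
  Key: 'someKey',
  Expires: 60,
});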

How to resolve 403 Forbidden error when uploading to S3

I'm setting up a Vue.js / DropzoneJS app loosely based on kfei's vue-s3-dropzone app. It's designed to upload files (using a PUT method) to AWS S3 serverlessly, via an AWS Lambda function and an AWS S3 bucket. When I try to upload an image to the S3 bucket, I basically get "XMLHttpRequest at 'https://xxxxxxxxxxxxxxxxxxx' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status" and a 403 error code. Is there anything I can do to fix this?
This is what I did:
created an S3 bucket
Set up a bucket policy and a CORS configuration in the S3 bucket settings.
Created a Lambda function that is supposed to sign a URL allowing a PUT upload of each file to S3, with the role executing the Lambda having PutObject and PutObjectAcl permissions on the S3 bucket.
Set up an API Gateway API with an OPTIONS method (to pass the preflight check) and a PUT method, with these CORS settings:
The OPTIONS method has a Mock backend integration, with the Integration Response returning the following:
Access-Control-Allow-Headers 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,x-requested-with'
Access-Control-Allow-Methods 'PUT,OPTIONS'
Access-Control-Allow-Origin '*'
The PUT method has:
"Access-Control-Allow-Origin": "*"
In AWS API Gateway: set up an API key and a usage plan.
The lambda code:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var bucketName = process.env.AWS_BUCKET_NAME;

exports.handler = (event, context) => {
  if (!event.hasOwnProperty('contentType')) {
    context.fail({ err: 'Missing contentType' });
  }
  if (!event.hasOwnProperty('filePath')) {
    context.fail({ err: 'Missing filePath' });
  }

  var params = {
    Bucket: bucketName,
    Key: event.filePath,
    Expires: 3600,
    ContentType: event.contentType
  };

  s3.getSignedUrl('putObject', params, (err, url) => {
    if (err) {
      context.fail({ err });
    } else {
      context.succeed({ url });
    }
  });
};
Expected: Successful upload of files
Actual: Possible CORS issues.
getSignedUrl would work fine if you were uploading the file from an API client like Postman or a Node.js server, but as you state you are seeing a preflight check fail, I'm assuming you are using some kind of HTML form and frontend JS.
From the AWS JavaScript SDK docs regarding getSignedUrl:
"Note: Not all operation parameters are supported when using pre-signed URLs. Certain parameters, such as SSECustomerKey, ACL, Expires, ContentLength, or Tagging must be provided as headers when sending a request. If you are using pre-signed URLs to upload from a browser and need to use these fields, see createPresignedPost()."
As you are setting the 'Expires' param when calling getSignedUrl and are sending from the browser, you need to use createPresignedPost instead of getSignedUrl in your Lambda code.
You will then need to POST instead of PUT from the browser to S3.
NB: Remember to update your CORS rules for S3 with POST
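To make that concrete, here is a sketch of what the swap could look like in the Lambda above, using aws-sdk v2's createPresignedPost (the params shape follows the v2 docs; treat it as a starting point, not a drop-in):

var params = {
  Bucket: bucketName,
  Fields: {
    key: event.filePath,
    'Content-Type': event.contentType
  },
  // Expiry is part of the presigned post itself, not a header the browser must send.
  Expires: 3600
};

s3.createPresignedPost(params, (err, data) => {
  if (err) {
    context.fail({ err });
  } else {
    // data.url is the POST target; data.fields must be included in the
    // multipart form (before the file field) when the browser uploads.
    context.succeed(data);
  }
});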

Correct code to upload local file to S3 proxy of API Gateway

I created an API function to work with S3. I imported the Swagger template. After deployment, I tested it with a Node.js project using the npm module aws-api-gateway-client.
It works well for getting bucket lists, getting bucket info, getting one item, putting a bucket, and putting a plain-text object; however, I am blocked on putting a binary file.
Firstly, I ensured the ACL allows all permissions on S3. Secondly, binary support was also added for:
image/gif
application/octet-stream
The code snippet is below. The behaviors are:
1) After invokeApi, the callback function is never hit; after some time, the Node.js project stops responding, with no error message at all. The file (such as an image) is very small.
2) On just two occasions the upload seemed to work, but the resulting file was bigger (around 2 MB bigger) than the original, so the file was corrupt.
Could you help me out? Thank you!
var filepathname = './items/';
var filename = 'image1.png';

fs.stat(filepathname + filename, function (err, stats) {
  var fileSize = stats.size;
  fs.readFile(filepathname + filename, 'binary', function (err, data) {
    var len = data.length;
    console.log('file len' + len);
    var pathTemplate = '/my-test-bucket/' + filename;
    var method = 'PUT';
    var params = {
      folder: '',
      item: ''
    };
    var additionalParams = {
      headers: {
        'Content-Type': 'application/octet-stream',
        //'Content-Type': 'image/gif',
        'Content-Length': len
      }
    };
    var result1 = apigClient.invokeApi(params, pathTemplate, method, additionalParams, data)
      .then(function (result) {
        // never hit :(
        console.log(result);
      }).catch(function (result) {
        // never hit :(
        console.log(result);
      });
  });
});
We encountered the same problem. API Gateway is meant for limited payload sizes (10 MB as of now); the limits are shown here:
http://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Pre-signed URL to S3:
Create an S3 pre-signed URL from the Lambda or the endpoint where you are trying to post (see: How do I put object to amazon s3 using presigned url?).
Then upload the image directly to S3 with that URL.
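As an illustration, here is a minimal sketch of that direct-to-S3 upload from the Node.js test project, assuming presignedUrl is a placeholder for a pre-signed putObject URL returned by your Lambda/endpoint (uploaded with PUT, per the linked question):

const fs = require('fs');
const https = require('https');

// Read the file as a Buffer and PUT it straight to the pre-signed URL
// instead of routing it through API Gateway.
const body = fs.readFileSync('./items/image1.png');

const req = https.request(presignedUrl, {
  method: 'PUT',
  headers: {
    'Content-Type': 'application/octet-stream',
    'Content-Length': body.length,
  },
}, (res) => {
  console.log('upload status:', res.statusCode);
});
req.end(body);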
Presigned POST:
Apart from posting the image, if you want to post additional properties you can send them in multipart form format as well:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
If you want to process the file after it is delivered to S3, you can create an S3 trigger on object creation and process it with your Lambda or any endpoint that needs to process it.
Hope it helps.

How to get thumbnails for shared OneDrive files to unauthorized users? This useful OneDrive feature provided in the old API is broken now

My app uses a OneDrive API feature to let unauthorized users get thumbnails for shared OneDrive files, using the old API request:
https://apis.live.net/v5.0/skydrive/get_item_preview?type=normal&url=[shared_link_to_OneDrive_file]
This feature is broken now (any such links return XMLHttpRequest connection error 0x2eff), and my Windows Store app can no longer provide this feature.
Anyone can check it. This link to a shared OneDrive file:
https://onedrive.live.com/redir?resid=AABF0E8064900F8D!27202&authkey=!AJTeSCuaHMc45eY&v=3&ithint=photo%2cjpg
turned into a request for a preview image of the shared file (per the old OneDrive API article "Displaying a preview of a OneDrive item" - https://msdn.microsoft.com/en-us/library/jj680723.aspx):
https://apis.live.net/v5.0/skydrive/get_item_preview?type=normal&url=https%3A%2F%2Fonedrive.live.com%2Fredir%3Fresid%3DAABF0E8064900F8D!27202%26authkey%3D!AJTeSCuaHMc45eY%26v%3D3%26ithint%3Dphoto%252cjpg
generates the error: SCRIPT7002: XMLHttpRequest: Network error 0x2eff
The current OneDrive API thumbnail feature:
GET /drive/items/{item-id}/thumbnails/{thumb-id}/{size}
is only for authorized users and cannot provide access to thumbnails for shared OneDrive files to unauthorized users.
How can a Windows Store app let unauthorized users get thumbnails for shared OneDrive files (videos etc.) using the current OneDrive API?
Any ideas?
You need to make a call to the following API:
GET /drive/items/{item-id}/thumbnails/{thumb-id}/{size}/content
This call needs to use authorization and returns a redirect to a cache-safe thumbnail location. You can then use this new url to serve thumbnails to unauthenticated users.
e.g.
Request:
GET https://api.onedrive.com/v1.0/drive/items/D094522DE0B10F6D!152/thumbnails/0/small/content
Authorization: bearer <access token>
Response:
HTTP/1.1 302 Found
Location: https://qg3u2w.bn1302.livefilestore.com/y3m1LKnRaQvGEEhv_GU3mVsewg_-aizIXDaVczGwGFIqtNcVSCihLo7s2mNdUrKicuBnB2sGlSwMQTzQw7v34cHLkchKHL_5YC3IMx1SMcpndtdb9bmQ6y2iG4id0HHgCUlgctvYsDrE24XALwXv2KWRUwCCvDJC4hlkqYgnwGBUSQ
You can now use the link in the Location header to access the thumbnail without signing in. This url will change only if the contents of the file change.
You can read more in the documentation here.
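For illustration, here is a small sketch of capturing that redirect in JavaScript (Node 18+ fetch with manual redirect handling; the item ID and accessToken are placeholders):

// Request the thumbnail content but don't follow the 302, so the cache-safe
// Location header can be read and handed to unauthenticated users.
const res = await fetch(
  'https://api.onedrive.com/v1.0/drive/items/D094522DE0B10F6D!152/thumbnails/0/small/content',
  {
    headers: { Authorization: `bearer ${accessToken}` },
    redirect: 'manual',
  }
);

// Valid until the file's contents change.
const shareableThumbnailUrl = res.headers.get('location');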
I just figured it out. It is based on the information in this article from Microsoft...
https://learn.microsoft.com/en-ca/onedrive/developer/rest-api/api/driveitem_list_thumbnails?view=odsp-graph-online
... look at the section "Getting thumbnails while listing DriveItems." It shows you the relevant JSON return structure from a call such as:
GET /me/drive/items/{item-id}/children?$expand=thumbnails
Basically, the JSON return structure gives you string URLs for each of the thumbnail formats. You then create URLSessions to download from these URLs (once you've converted them from String to URL).
Here is an excerpt of code using Swift (Apple):
////////////////////////////////////////////////////////////////////////////////
//
// Download a thumbnail with a URL and label the URLSession with an ID.
//
func downloadThumbnail(url: URL, id: String) {
    // Create a URLSession. This is an object that controls the operation or flow
    // control with respect to asynchronous operations. It sets the callback delegate
    // when the operation is complete.
    let urlSession: URLSession = {
        //let config = URLSessionConfiguration.default
        let config = URLSessionConfiguration.background(withIdentifier: id)
        config.isDiscretionary = true
        config.sessionSendsLaunchEvents = true
        //config.identifier = "file download"
        return URLSession(configuration: config, delegate: self as URLSessionDelegate, delegateQueue: OperationQueue.main)
    }()

    // Create the URLRequest. This is needed so that "Authorization" can be made, as well
    // as the actual HTTP command. The url, on initialization, is the command... along
    // with the "GET" setting of the httpMethod property.
    var request = URLRequest(url: url)

    // Set the Authorization header for the request. We use Bearer tokens, so we specify Bearer + the token we got from the result.
    request.setValue("Bearer \(self.accessToken)", forHTTPHeaderField: "Authorization")
    request.httpMethod = "GET"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    // This initiates the asynchronous data task
    let backgroundTask = urlSession.downloadTask(with: request)
    //backgroundTask.earliestBeginDate = Date().addingTimeInterval(60 * 60)
    backgroundTask.countOfBytesClientExpectsToSend = 60
    backgroundTask.countOfBytesClientExpectsToReceive = 15 * 1024
    backgroundTask.resume()
}
... of course you need to have the correct "accessToken," (shown above) but you also have to have written the generic callback function for URLSession, which is:
func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                didFinishDownloadingTo location: URL) {
    Swift.print("DEBUG: urlSession callback reached")

    // This was the identifier that you setup URLSession with
    let id = session.configuration.identifier

    // "location" is the temporary URL that the thumbnail was downloaded to
    let temp = location

    // You can convert this URL into any kind of image object. Just Google it!
}
Cheers, Andreas