I am experimenting with libcurl for a multipart upload to s3. My initiate multipart request looks like this
POST /my_new_file.mbi?uploads HTTP/1.1
Accept: */*
Host: test_bucket.s3.amazonaws.com
Date: Thu, 01 May 2014 13:35:17 GMT
Authorization: AWS4-HMAC-SHA256 Credential=XXXXXXX/20140501/us-east-1/s3/aws4_request, SignedHeaders=host, Signature=1a3fd6195040494dd95507455a3b1eefef40346485e3fdafbe6cc136192365a2
I get the following response
The provided 'x-amz-content-sha256' header must be a valid SHA256.
The S3 documentation says we do not need any other headers for the Initiate Multipart Upload call (POST). I have tried various combinations of signing empty content, but no luck.
What am I missing here? Any suggestions here will be very helpful.
Thanks
I haven't used version 4 auth for multipart uploads yet (my code uses v2), but I did find this:
x-amz-content-sha256
When using signature version 4 to authenticate the request, this header provides a hash of the request payload. For more information, see Authenticating Requests by Using the Authorization Header (Compute Checksum of the Entire Payload Prior to Transmission) - Signature Version 4. When uploading the object in chunks, you set the value to STREAMING-AWS4-HMAC-SHA256-PAYLOAD to indicate that the signature covers only headers and that there is no payload. For more information, see Authenticating Requests Using HTTP Authorization Header (Chunked Upload).
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html
Obviously, chunked and multipart are not the same thing, but perhaps this value is also appropriate for a multipart upload request, or will generate a new and more helpful error message. The documentation seems unfortunately sparse in this case.
For Googlers who got this error:
Missing required header for this request: x-amz-content-sha256
While using awscli, what worked for me was setting the region correctly to us-east-1 in the file ~/.aws/config (I'm using Ubuntu). Neither "US" alone nor "US Standard" works, and the returned error doesn't really indicate that.
STREAMING-AWS4-HMAC-SHA256-PAYLOAD appears to no longer work. I was able to make it work by passing the SHA256 hash of the empty string, e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
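If you want to sanity-check that value, here is a minimal C# sketch (assuming .NET 5+ for Convert.ToHexString) that computes the hex SHA-256 of an empty payload, i.e. what x-amz-content-sha256 should carry when the request has no body:

using System;
using System.Security.Cryptography;

class EmptyPayloadHash
{
    static void Main()
    {
        // SHA-256 over zero bytes.
        using var sha256 = SHA256.Create();
        byte[] hash = sha256.ComputeHash(Array.Empty<byte>());
        // Prints e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
        Console.WriteLine(Convert.ToHexString(hash).ToLowerInvariant());
    }
}

Note that once you send x-amz-content-sha256, it should also be listed in the SignedHeaders part of the Authorization header.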
When I try to test the API with localhost:[port], it gives the invalid character in header ["Host"] console error. I am using a .NET Core Web API. I cross-checked the CORS configuration on the API end and it is fine; the issue is on the Postman side.
Postman version: v8.7.0
I had the same error being reported for any forked or created Postman requests:
Error: Invalid character in header content ["Host"]
The request URL was using a global parameter:
{{BaseUri}}/some/sort/of/resource
In the console logs the following was reported (URLs redacted):
Request Headers
User-Agent: PostmanRuntime/7.29.0
Accept: */*
Cache-Control: no-cache
Postman-Token: 9d14e81d-1e21-44a2-93ed-2758f0ad24fa
Host: my.url.co.uk↵
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Note the ↵ character at the end of the Host header.
The global BaseUri parameter did not appear to have a line break at the end of it. However, completely deleting said parameter and recreating it seems to have fixed the issue.
I had the same incident, and I was able to find the error by exporting the "My Workspace" content and opening it in Notepad++, then changing the encoding to ANSI (Encoding => ANSI). The offending special characters then become visible.
This can happen when you copy the URL, paste it into Postman, and then try to edit it.
If you are getting this URL from someone, ask them to provide the exported JSON file from Postman, then import it into your workspace.
I thought the issue was in a variable I was using, because the error was telling me there's an invalid character in my host https://localhost:4431, which is exactly the value of my variable.
I figured out the invalid character was actually not in my variable but in the rest of the URL in my request.
It turns out that when copying endpoint names from the Swagger page of my API, I was also copying an invisible zero-width space (URL-encoded as %E2%80%8B). I saw it when checking the API's console: RequestPath:/%E2%80%8BmyEndpoint
Removing this invisible character solved the issue.
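If you suspect an invisible character but can't see it, a quick C# sketch like this (my own hypothetical helper, not part of Postman) will flag zero-width spaces, control characters, and stray line breaks in a pasted URL:

using System;

class InvisibleCharCheck
{
    static void Main()
    {
        // Example: a URL pasted from Swagger with a hidden zero-width space (U+200B).
        string url = "https://localhost:4431/\u200BmyEndpoint";
        for (int i = 0; i < url.Length; i++)
        {
            char c = url[i];
            // Flag control chars (including \r and \n), zero-width space, and no-break space.
            if (char.IsControl(c) || c == '\u200B' || c == '\u00A0')
                Console.WriteLine($"Suspicious character U+{(int)c:X4} at index {i}");
        }
    }
}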
Taken from the question comments, by Fidel Garcia:
I created the request again via the Add Request menu and it works. I'm not sure if it is a problem with the update and old requests; the old one is still failing.
===================================================
This also worked for me: I created a new request with the same parameters and it worked.
I had created a request with the parameters in the headers. After that wrong request, I changed it to the correct one (a POST request with the parameters in the body) and got the error. After creating a new request with the correct configuration (POST request, parameters in the body), it worked correctly.
In my case, I removed the authentication from the header, then re-entered the authentication credentials.
In my case, a newline entered after the param and path generated the error. The exact reason can be found in the Postman console.
In my case this happened because I added an extra blank space at the end of an environment variable definition. That extra space was being included in the route when making a request.
Be careful with those extra blank spaces.
I am using FineUploader to upload to S3. I have everything working including deletes. However, when I upload larger files that get broken into multi-part uploads, I get the following error in the console (debugging turned on):
Specific problem detected initiating multipart upload request for 0: 'The request signature we calculated does not match the signature you provided. Check your key and signing method.'.
Can someone point me in the right direction as to what settings I should check, or tell me what additional info you might need?
Since you haven't included anything really specific to your setup, code, or the failing request, my best guess is that your server isn't returning a proper signature response for uploads made via the S3 REST API (which is used for larger files). You'll need to review the procedure for generating a response to this type of signature request.
Here's the relevant section from Fine Uploader's S3 documentation:
Fine Uploader S3 uses Amazon S3's REST API to initiate, upload, complete, and abort multipart uploads. The REST API handles authentication by signing canonically formatted headers. This signing is something you need to implement server-side. All your server needs to do to authenticate and support chunked uploads direct to Amazon S3 is sign a string representing the headers of the request that Fine Uploader sends to S3. This string is found in the payload of the signature request:
{ "headers": /* string to sign */ }
The presence of this property indicates to your server that this is, in fact, a request to sign a REST/multipart request and not a policy document.
This signature for the headers string differs slightly from the policy document signature. You should NOT base64 encode the headers string before signing it. All you must do, server-side, is generate an HMAC SHA1 signature of the string using your AWS secret key and then base64 encode the result. Your server should respond with the following in the body of an 'application/json' response:
{ "signature": /* signed headers string */ }
I'm using Azure Blob Storage to store media files and providing access to these files using Shared Access Signatures; everything is working well in this regard.
However, I have a client application that needs to "resume" access to these files and does so using an HTTP RANGE header. When it makes a request like this, it is unhappy with the result it gets back from Azure.
I'm not sure how to view the details on the Azure side to see if the request failed, or if it just returned something the client didn't expect, and I have no debugging visibility into the client.
Here's what the incoming range header looks like:
RANGE: bytes=4258672-
From the Azure documentation I've read, it appears to support RANGE headers; however, I'm wondering if there is a conflict when using RANGE and Shared Access Signatures together.
Update:
It appears that Azure may be returning an incorrect status code for RANGE requests, which is causing my client apps to reject the response. The documentation states that Azure will respond with an HTTP status code of 206 when responding to a RANGE request, however when I issue a RANGE request like this:
curl -I -H "User-Agent: Bonos" -r 500- "https://murfie.blob.core.windows.net/168464/1.mp3?st=2013-07-03T16%3A34%3A32.4832235Z&se=2013-07-03T17%3A34%3A32.4613735Z&sr=b&sp=r&sig=mJgQGW%2Fr3v8HN2%2BVV3Uady7J68nFqeHyzQb37HAhfuE%3D"
Azure returns the following:
HTTP/1.1 200 OK
Content-Length: 19988911
Content-Type: application/octet-stream Charset=UTF-8
Last-Modified: Fri, 07 Jun 2013 16:44:50 GMT
ETag: 0x8D031B57670B986
Server: Blob Service Version 1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 77312761-65a9-42ef-90cd-ff718a80b231
Date: Wed, 03 Jul 2013 16:41:01 GMT
We got this straightened out.
As @BrentDaCodeMonkey mentioned, Azure returns the expected 206 response if you're using API version 2011-08-18 or later, but in our case we don't originate the request, so we can't specify this via the request header.
However, some Microsoft friends tipped us of to the fact that you can set the API version globally for a storage account, but you need to use the REST API to do so (it's not something you can do in the management UI). This post explains how:
http://msdn.microsoft.com/en-us/library/windowsazure/hh452235.aspx
After setting the DefaultServiceVersion to 2011-08-18, we're now getting back the expected 206 status for RANGE requests.
For those who are struggling with the Azure Service API and the tricky authorization, I recommend this very simple C# snippet, which does the same thing in a much simpler way (at least for me).
// Authenticate with the storage account name and key, then get a blob client.
var credentials = new Microsoft.WindowsAzure.Storage.Auth.StorageCredentials("storagename", "storagekey");
var account = new Microsoft.WindowsAzure.Storage.CloudStorageAccount(credentials, true);
var client = account.CreateCloudBlobClient();

// Read the current service properties, set the default version, and write them back.
var properties = client.GetServiceProperties();
properties.DefaultServiceVersion = "2013-08-15";
client.SetServiceProperties(properties);
You'll need to add the NuGet package WindowsAzure.Storage v9.3.3 (deprecated, but it still works).
I reached out to some members of the product team and was given the following...
The 200 vs. 206 is due to the presence of the "-I" flag in the curl command. This results in a HEAD request instead of a GET, which is essentially a "get blob properties" call instead of a "get blob", and that causes the Range header to be ignored. Also be sure to specify the version header as "x-ms-version: 2011-08-18" or later, since the "startByte-" range format was only supported in that version or later.
For more information on range headers, see: http://msdn.microsoft.com/en-us/library/windowsazure/ee691967.aspx
Yes, it works. I've used SAS to stream video to mobile phones, which use Range headers.
It's easy to verify with a bit of code too.
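For example, a small C# sketch along these lines (the SAS URL is a placeholder) issues a GET with a Range header and checks for 206 Partial Content:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class RangeCheck
{
    static async Task Main()
    {
        // Placeholder SAS URL; substitute one generated for your own blob.
        var sasUrl = "https://myaccount.blob.core.windows.net/container/1.mp3?sv=...";
        using var http = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get, sasUrl);
        request.Headers.Range = new RangeHeaderValue(500, null); // Range: bytes=500-
        using var response = await http.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
        Console.WriteLine((int)response.StatusCode); // expect 206, not 200
    }
}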
I am attempting to build a C# module to connect to the Twitter streaming API using OAuth (now the only option). I have got to the point where my module will successfully access API URLs using GET, but everything I do to try to make a POST request fails with a 401.
I have checked that my signature is correct by using the OAuth Tool tab on the page for my Twitter app and fixing the values for nonce and timestamp in my code. I have curl for Windows set up and can verify that it works with the sample curl script generated by the OAuth tool. (By the way, this needs some correction of the quotes to work with curl in Windows Cmd: get rid of single quotes on values that don't need them, use double quotes on anything that needs quoting, and in the Authorization header use double quotes and escape the inner double quotes with a backslash.)
I have even gone to the length of running curl in trace mode and dumping the bytes I send in the POST body from my C# code, and I can verify that they are the same.
I am trying to access 'https://stream.twitter.com/1.1/statuses/filter.json' using 'track=twitter' as the post body. The headers are:
Accept: */*
User-Agent: curl/7.21.7 (amd64-pc-win32) libcurl/7.21.7 OpenSSL/0.9.8r zlib/1.2.5
Content-Type: application/x-www-form-urlencoded
Host: stream.twitter.com
Content-Length: 13
Connection: Keep-Alive
Authorization: OAuth <the oauth stuff>
I can't inspect the packets being sent to check on the wire that the requests are identical, as they are of course SSL encrypted.
Any ideas?
I eventually got this to work. Here are the things I discovered that might help if you have this kind of problem:
1. I had a problem initially because I created a new nonce every time that bit of code was executed. This meant the nonce used in generating the signature was different from the one in the header. Obvious fail.
2. I then ran into the problem above. It turned out I was adding the OAuth header to my request AFTER writing the request body; for a POST, the request seems to be sent as soon as you write to the request stream. (See the sketch after this answer.)
3. Very useful in finding point 2 was working out how to use Fiddler to trace web requests from code. Essentially all you need to do is add this to your web.config:
<system.net>
<defaultProxy>
<proxy proxyaddress="http://127.0.0.1:8888" />
</defaultProxy>
</system.net>
As soon as I tried to read the HTTPS request, Fiddler prompted me to install bits so it could decrypt the request, which I did, and then I could see the exact request going down the wire. I could compare this with what cURL was doing using the -x 127.0.0.1:8888 option.
However, I then ran into a problem with my request timing out, which bizarrely enough was caused by the fact that Fiddler was proxying the response. Once I took the above out of my web.config again, it all worked. Hallelujah!
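To make point 2 concrete, here is a minimal C# sketch of the ordering that worked (placeholder Authorization value, no error handling):

using System;
using System.IO;
using System.Net;
using System.Text;

class OAuthPostOrder
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("https://stream.twitter.com/1.1/statuses/filter.json");
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

        // 1. Set the Authorization header BEFORE touching the request stream
        //    (placeholder value; build the real OAuth header first).
        request.Headers["Authorization"] = "OAuth <the oauth stuff>";

        // 2. Only now write the body; writing to the request stream starts the send.
        byte[] body = Encoding.UTF8.GetBytes("track=twitter");
        request.ContentLength = body.Length;
        using (Stream s = request.GetRequestStream())
            s.Write(body, 0, body.Length);

        // The streaming API keeps the response open; read it incrementally in real code.
        using var response = (HttpWebResponse)request.GetResponse();
        Console.WriteLine((int)response.StatusCode);
    }
}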
I am trying to verify the integrity of a file that was uploaded to a bucket, but I can't find any information on this.
In the file's headers there is an "ETag", but I think it's not an MD5 checksum.
So, how can I check that the file I uploaded to Amazon S3 is the same as the one I have on my computer?
Thanks. :)
If you are using the REST API to upload an object (up to 5GB) in a single operation, then you can add the Content-MD5 header to your PUT request. According to the S3 documentation for PUT, the Content-MD5 header is:
The base64 encoded 128-bit MD5 digest of the message (without the headers) according to RFC 1864. This header can be used as a message integrity check to verify that the data is the same data that was originally sent. Although it is optional, we recommend using the Content-MD5 mechanism as an end-to-end integrity check.
Check this answer on how to compute a base64 encoded 128-bit MD5 digest. If you are using s3curl, you can include the computed digest in your request headers using the --contentMd5 option.
If the md5 digest computed by Amazon upon upload completion does not match the md5 digest you provided in the Content-MD5 header, Amazon will respond with a BadDigest error code.
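For reference, a minimal C# sketch (my own example, independent of s3curl) that computes the base64-encoded digest for a local file:

using System;
using System.IO;
using System.Security.Cryptography;

class ContentMd5
{
    static void Main(string[] args)
    {
        // Base64-encoded 128-bit MD5 digest of the file, as required by Content-MD5.
        using var md5 = MD5.Create();
        using var stream = File.OpenRead(args[0]);
        Console.WriteLine(Convert.ToBase64String(md5.ComputeHash(stream)));
    }
}

Pass the output as the Content-MD5 header on your PUT (or via s3curl's --contentMd5 option), and Amazon will reject the upload with BadDigest if the received bytes don't match.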
If you are using multipart upload, the Content-MD5 header serves as an integrity check for each part individually. Once the multipart upload is finalized, Amazon does not currently provide a way to verify the integrity of the assembled file.