I am running into issues while trying to delete a file that resides in Amazon S3 using the WSO2 ESB connector. Here is what I have done:
1) Created a proxy service in WSO2 ESB using the following configuration, taken from the WSO2 documentation:
2) Made sure that the proxy service was deployed to WSO2.
3) Submitted the following payload using Postman:
<deleteObject>
    <accessKeyId>MYACCESSKEY</accessKeyId>
    <secretAccessKey>MYSECRETKEY</secretAccessKey>
    <methodType>DELETE</methodType>
    <contentType>application/xml</contentType>
    <expect>100-continue</expect>
    <region>us-east-1</region>
    <host>s3.amazonaws.com</host>
    <bucketUrl>http://s3.amazonaws.com/MYBUCKET</bucketUrl>
    <bucketName>MYBUCKET</bucketName>
    <isXAmzDate>true</isXAmzDate>
    <objectName>FILETODELETE.txt</objectName>
    <versionId></versionId>
</deleteObject>
I get the following error message (beginning of message):
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<AWSAccessKeyId>MYACCESSKEY</AWSAccessKeyId>
<StringToSign>AWS4-HMAC-SHA256
Sat, 07 Jul 2018 15:25:18 GMT
20180707/us-east-1/s3/aws4_request
618b0c822492e3dd2a8f4d9e1ea</StringToSign>
<SignatureProvided>06b2b268cb90b69a1c5dadbb689ed4ccf7b459ff1b5</SignatureProvided>
<StringToSignBytes>BUNCH OF NUMBERS</StringToSignBytes>
<CanonicalRequest>DELETE
/MYBUCKET/xxxxx.txt/
content-type:application/xml
host:s3.amazonaws.com
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:Sat, 07 Jul 2018 15:25:18 GMT
content-type;host;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD</CanonicalRequest>
<CanonicalRequestBytes>BUNCH OF NUMBERS</CanonicalRequestBytes>
<RequestId>SOODEDIBD</RequestId>
<HostId>vLllBSaWMHkV+gqX6yh7+43WK4PsAO4VVXLdGePBvGWZtxxExbBqI=</HostId>
I recreated my S3 credentials, but I am still running into the same error. Any help will be greatly appreciated.
Frank
After going through DeleteObject for S3, you need to check whether the header values are being sent properly, as authentication is required before accessing the web service. The headers should look something like this:
DELETE /my-image.jpg?versionId=3HL4kqCxf3vjVBH40Nrjfkd HTTP/1.1
Host: bucketName.s3.amazonaws.com
x-amz-mfa: 20899872 301749
Date: Wed, 28 Oct 2009 22:32:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:0RQf4/cRonhpaBX5sCYVf1bNRuU=
You can put WSO2 ESB in debug mode and confirm whether the proper headers are being passed, or take a tcpdump capture to be certain.
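Since the error response includes AWS's own StringToSign and CanonicalRequest, you can also recompute the SigV4 signature yourself and diff it against what the connector sent. A minimal Python sketch with placeholder keys follows; note that SigV4 expects x-amz-date in ISO 8601 basic format (e.g. 20180707T152518Z), whereas your canonical request shows an RFC 1123 date, which is a red flag.

import datetime
import hashlib
import hmac

def hmac_sha256(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

secret_key = "MYSECRETKEY"   # placeholder
region, service, host = "us-east-1", "s3", "s3.amazonaws.com"

now = datetime.datetime.utcnow()
amz_date = now.strftime("%Y%m%dT%H%M%SZ")   # ISO 8601 basic format
date_stamp = now.strftime("%Y%m%d")

# Canonical request: method, URI, query string, headers, signed headers, payload hash
canonical_request = "\n".join([
    "DELETE",
    "/MYBUCKET/FILETODELETE.txt",
    "",                                      # no query string
    "host:" + host,
    "x-amz-content-sha256:UNSIGNED-PAYLOAD",
    "x-amz-date:" + amz_date,
    "",                                      # blank line ends the headers
    "host;x-amz-content-sha256;x-amz-date",
    "UNSIGNED-PAYLOAD",
])

scope = "/".join([date_stamp, region, service, "aws4_request"])
string_to_sign = "\n".join([
    "AWS4-HMAC-SHA256",
    amz_date,
    scope,
    hashlib.sha256(canonical_request.encode("utf-8")).hexdigest(),
])

# Derive the signing key and sign
k = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
k = hmac_sha256(k, region)
k = hmac_sha256(k, service)
k = hmac_sha256(k, "aws4_request")
signature = hmac.new(k, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()
print(string_to_sign)
print(signature)

Compare the StringToSign printed here against the one in the error body; if they differ on the second line, the date header format is your problem.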
This question is with reference to https://learn.microsoft.com/en-us/azure/iot-dps/quick-create-simulated-device-x509-python
The section https://learn.microsoft.com/en-us/azure/iot-dps/quick-create-simulated-device-x509-python#simulate-the-device talks about modifying certain parameters. I get the following error when running the Python code.
$ python provisioning_device_client_sample.py -i 0ne0007F9D9 -s X509 -p http
Python 2.7.12 (default, Nov 12 2018, 14:36:49)
[GCC 5.4.0 20160609]
Provisioning Device Client for Python
Starting the Provisioning Client Python sample...
Scope ID=0ne0007F9D9
Security Device Type X509
Protocol HTTP
Provisioning API Version: 1.2.12
Press Enter to interrupt...
Register status callback:
reg_status = CONNECTED
user_context = None
PUT /0ne0007F9D9/registrations/riot-device-cert/register?api-version=2018-09-01-preview HTTP/1.1
UserAgent: prov_device_client/1.0
Accept: application/json
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Host: global.azure-devices-provisioning.net:443
content-length: 39
len: 39
{ "registrationId":"riot-device-cert" }
HTTP Status: 401
date: Thu, 26 Sep 2019 18:48:49 GMT
content-type: application/json; charset=utf-8
transfer-encoding: chunked
x-ms-request-id: 883b82ee-f696-4e68-9aec-61abc1e4a55b
strict-transport-security: max-age=31536000; includeSubDomains
{"errorCode":401002,"trackingId":"883b82ee-f696-4e68-9aec-61abc1e4a55b","message":"CA certificate not found.","timestampUtc":"2019-09-26T18:48:50.364959Z"}
Error: Time:Thu Sep 26 14:48:50 2019 File:/usr/sdk/src/c/provisioning_client/src/prov_device_ll_client.c Func:prov_transport_process_json_reply Line:323 failure retrieving json auth key value
Error: Time:Thu Sep 26 14:48:50 2019 File:/usr/sdk/src/c/provisioning_client/src/prov_transport_http_client.c Func:prov_transport_http_dowork Line:941 Unable to process registration reply.
Error: Time:Thu Sep 26 14:48:50 2019 File:/usr/sdk/src/c/provisioning_client/src/prov_device_ll_client.c Func:on_transport_registration_data Line:572 Failure retrieving data from the provisioning service
Register device callback:
register_result = PARSING
iothub_uri = None
user_context = None
Device registration failed!
I could not find where I should copy the device certificates. Maybe my understanding is wrong; please help me correct it.
Thanks,
Sreeju
Did you use Visual Studio to build the project like they mentioned in the previous step? If so, VS is supposed to wire that in for you so that you don't have to copy the cert anywhere on your end; just use the cert to set up the device on the AzIotHub side.
To troubleshoot why that isn't happening, can you include or link your built provisioning_device_client_sample.py file? It probably shows or points to where the X509SecurityClient class is being instantiated, which will lead to the X509 object, which has an attribute (self._cert_file) that will show the file path. It'd also help if you can run this in a python IDE so we could pull stuff up on a console.
If that is inconvenient, I could build the SDK/sample and run through the thing myself, but I haven't opened up Visual Studio on my VM in ages and will probably need to go through some licensing fandango. (I mostly use the IOTHub Device & Service SDKs, the newer versions of which don't need to be built, or the REST api for areas where the SDKs break.) It'll be a little bit before I have some spare time for that.
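For reference, the newer azure-iot-device Python SDK (one of the newer packages I mentioned that don't need to be built) takes the certificate paths directly, so there is nothing to copy into the SDK tree. A minimal sketch, with placeholder file paths and the scope/registration IDs from your output:

import os
from azure.iot.device import ProvisioningDeviceClient, X509

# Placeholder paths -- point these at your device cert and key files
x509 = X509(
    cert_file="./certs/riot-device-cert.pem",
    key_file="./certs/riot-device-key.pem",
    pass_phrase=os.getenv("CERT_PASSPHRASE"),  # None is fine if the key is unencrypted
)
client = ProvisioningDeviceClient.create_from_x509_certificate(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id="riot-device-cert",   # from your output
    id_scope="0ne0007F9D9",               # from your output
    x509=x509,
)
result = client.register()
print(result.status)  # "assigned" on success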
I am trying to create an Azure Cloud Service using the REST API in a C# application. The XML used to describe the service is:
<CreateHostedService xmlns="http://schemas.microsoft.com/windowsazure">
    <ServiceName>AppHostingCloudService</ServiceName>
    <Label>base64-encoded-label-of-cloud-service</Label>
    <Description>This cloud service will host VMs</Description>
    <Location>Western-Europe</Location>
</CreateHostedService>
I did set up all the headers correctly, using the right certificate and all, but I get an HTTP 400 Bad Request error. Here are the details:
+ response {StatusCode: 400, ReasonPhrase: 'Bad Request', Version: 1.1, Content: System.Net.Http.StreamContent, Headers:
{
Date: Thu, 31 Jul 2014 15:49:45 GMT
Server: Microsoft-HTTPAPI/2.0
Content-Length: 220
Content-Type: application/xml; charset=utf-8
}} System.Net.Http.HttpResponseMessage
Any ideas?
Thanks
A 400 error usually means something is wrong with the XML you're sending. Most likely it is the name of the datacenter location. You may want to get the list of locations by performing the List Locations operation and find the right location name for the Western Europe datacenter.
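If it helps, here is a minimal sketch of calling List Locations from Python with the requests library, assuming your management certificate is exported as PEM cert/key files (SUBSCRIPTION_ID and the file names are placeholders); the same GET can be issued from C# once you see the valid names:

import requests

SUBSCRIPTION_ID = "your-subscription-id"   # placeholder
url = "https://management.core.windows.net/%s/locations" % SUBSCRIPTION_ID
resp = requests.get(
    url,
    headers={"x-ms-version": "2014-06-01"},             # required version header
    cert=("management-cert.pem", "management-key.pem"),  # client certificate auth
)
print(resp.text)  # XML listing of <Location><Name>...</Name></Location> entries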
While there is a thread about this problem in Google's FAQ, it seems like there are only two answers that have satisfied other users. I'm certain there is no proxy on my network, and I'm pretty sure I've configured boto correctly, since I can see credentials in the request.
Here's the capture from gsutil:
/// Output sanitized
Creating gs://64/...
DEBUG:boto:path=/64/
DEBUG:boto:auth_path=/64/
DEBUG:boto:Method: PUT
DEBUG:boto:Path: /64/
DEBUG:boto:Data: <CreateBucketConfiguration><LocationConstraint>US</LocationConstraint></CreateBucketConfiguration>
DEBUG:boto:Headers: {'x-goog-api-version': '2'}
DEBUG:boto:Host: storage.googleapis.com
DEBUG:boto:Params: {}
DEBUG:boto:establishing HTTPS connection: host=storage.googleapis.com, kwargs={'timeout': 70}
DEBUG:boto:Token: None
DEBUG:oauth2_client:GetAccessToken: checking cache for key dc3
DEBUG:oauth2_client:FileSystemTokenCache.GetToken: key=dc3 present (cache_file=/tmp/oauth2_client-tokencache.1000.dc3)
DEBUG:oauth2_client:GetAccessToken: token from cache: AccessToken(token=ya29, expiry=2013-07-19 21:05:51.136103Z)
DEBUG:boto:wrapping ssl socket; CA certificate file=.../gsutil/third_party/boto/boto/cacerts/cacerts.txt
DEBUG:boto:validating server certificate: hostname=storage.googleapis.com, certificate hosts=['*.googleusercontent.com', '*.blogspot.com', '*.bp.blogspot.com', '*.commondatastorage.googleapis.com', '*.doubleclickusercontent.com', '*.ggpht.com', '*.googledrive.com', '*.googlesyndication.com', '*.storage.googleapis.com', 'blogspot.com', 'bp.blogspot.com', 'commondatastorage.googleapis.com', 'doubleclickusercontent.com', 'ggpht.com', 'googledrive.com', 'googleusercontent.com', 'static.panoramio.com.storage.googleapis.com', 'storage.googleapis.com']
GSResponseError: status=400, code=MissingSecurityHeader, reason=Bad Request, detail=A nonempty x-goog-project-id header is required for this request.
send: 'PUT /64/ HTTP/1.1\r\nHost: storage.googleapis.com\r\nAccept-Encoding: identity\r\nContent-Length: 98\r\nx-goog-api-version: 2\r\nAuthorization: Bearer ya29\r\nUser-Agent: Boto/2.9.7 (linux2)\r\n\r\n<CreateBucketConfiguration><LocationConstraint>US</LocationConstraint></CreateBucketConfiguration>'
reply: 'HTTP/1.1 400 Bad Request\r\n'
header: Content-Type: application/xml; charset=UTF-8
header: Content-Length: 232
header: Date: Fri, 19 Jul 2013 20:44:24 GMT
header: Server: HTTP Upload Server Built on Jul 12 2013 17:12:36 (1373674356)
It looks like you might not have a default_project_id specified in your .boto file.
It should look something like this:
[GSUtil]
default_project_id = 1234567890
Alternatively, you can pass the -p option to the gsutil mb command to manually specify a project. From the gsutil mb documentation:
-p proj_id Specifies the project ID under which to create the bucket.
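If you're scripting against boto directly rather than going through gsutil, you can also pass the project ID per request as the x-goog-project-id header. A minimal sketch (project ID and bucket name are placeholders):

import boto

# Picks up credentials from your .boto file
conn = boto.connect_gs()
conn.create_bucket('64', headers={'x-goog-project-id': '1234567890'})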
I'm trying to use the S3 API to get the location (region) of a bucket. I'm following the docs (http://docs.amazonwebservices.com/AmazonS3/latest/API/RESTBucketGETlocation.html)
and I've constructed the following request:
GET http://s3.amazonaws.com/?location
Host: bucketname.s3.amazonaws.com
Date: Thu, 03 Mar 2011 18:21:59 GMT
Authorization: AWS <my auth string>
But rather than getting a "LocationConstraint" XML response, I get the "ListAllMyBucketsResult" (which just lists all the buckets in my account).
What am I doing wrong? BTW, the bucket I'm testing against is located in the EU.
Sounds like an error in your call. I would start with either s3cmd.rb or s3cmd, just to be sure that you get the information from publicly vetted tools. Try:
s3cmd info s3://my-bucket-name
or
s3cmd.rb location my-bucket-name
Either should give you the location info. Obviously you'll need to configure the S3 auth settings first.
Answering my own question, I found the solution:
The bucket name must also be included in the URL, like this:
GET http://bucketname.s3.amazonaws.com/?location
Host: bucketname.s3.amazonaws.com
Date: Thu, 03 Mar 2011 18:21:59 GMT
Authorization: AWS <my auth string>
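For anyone hitting this later: if you can use an SDK instead of hand-signing the request, the modern boto3 equivalent of the ?location call is a one-liner (bucket name is a placeholder; buckets in us-east-1 return None):

import boto3

s3 = boto3.client("s3")
resp = s3.get_bucket_location(Bucket="bucketname")
print(resp["LocationConstraint"])  # e.g. "EU" for older EU buckets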
As I have to parse it, I need to know how the returned data will be structured.
The GET operation on the Service endpoint (s3.amazonaws.com) returns a list of all of the buckets owned by the authenticated sender of the request.
Sample Request:
GET / HTTP/1.1
Host: s3.amazonaws.com
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS 15B4D3461F177624206A:xQE0diMbLRepdf3YB+FIEXAMPLE=
Sample Response:
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://doc.s3.amazonaws.com/2006-03-01">
<Owner>
<ID>bcaf1ffd86f461ca5fb16fd081034f</ID>
<DisplayName>webfile</DisplayName>
</Owner>
<Buckets>
<Bucket>
<Name>quotes</Name>
<CreationDate>2006-02-03T16:45:09.000Z</CreationDate>
</Bucket>
<Bucket>
<Name>samples</Name>
<CreationDate>2006-02-03T16:41:58.000Z</CreationDate>
</Bucket>
</Buckets>
</ListAllMyBucketsResult>
Source: S3 REST API » Operations on the Service » GET Service
The S3 API is described here.
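If you're parsing that response, the elements you care about live under Buckets/Bucket, and every element carries the 2006-03-01 namespace. A minimal sketch with Python's standard library (response_xml is a placeholder holding the XML body shown above):

import xml.etree.ElementTree as ET

NS = {'s3': 'http://doc.s3.amazonaws.com/2006-03-01'}
root = ET.fromstring(response_xml)
for bucket in root.findall('.//s3:Bucket', NS):
    name = bucket.find('s3:Name', NS).text
    created = bucket.find('s3:CreationDate', NS).text
    print(name, created)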