Update certificate for an individual enrollment in Azure DPS via REST API - azure-iot-hub

I am performing operations on Azure DPS via the REST APIs. I am able to create a new individual enrollment successfully ("Create new individual enrollment") via the REST API. The URL used is "https://name.azure-devices-provisioning.net/enrollments/registrationId?api-version=2019-03-31". My current task is to update the certificate for an individual enrollment in DPS. I am extracting the body of the certificate, attaching it to the request body, and doing the PUT operation, but I am getting the error "Enrollment already exists with different cert info". I am providing the Content-Type and the authorization SAS token as headers, and in the request body I am passing the eTag along. I thought I had satisfied all the prerequisites for an update operation. Please help me understand if I am doing something wrong here. Thanks in advance!

Let me walk through, step by step, the process I used to successfully update an x509 certificate in Azure DPS via a REST API call. I hope this helps with your query.
Step 1: Create the individual enrollment with the x509 certificate, passing the certificate in base64 format. Make a note of the 'eTag' value in the response.
Verify in the Azure DPS portal that the thumbprint matches your x509 certificate.
Step 2: Now update the existing enrollment with a new certificate, again in base64 format. Use the "If-Match" request header, with its value set to the "eTag" obtained in the previous step.
With the "If-Match: eTag" header in place, the update operation returns a success response. Verify in the Azure DPS portal that the thumbprint now matches the new x509 certificate.
Here is a sample request body, in case it is useful to anyone:
{
  "attestation": {
    "type": "x509",
    "x509": {
      "clientCertificates": {
        "primary": {
          "certificate": "base64 string of your cert"
        }
      }
    }
  },
  "registrationId": "testenrollment10",
  "capabilities": { "iotEdge": false },
  "provisioningStatus": "enabled"
}
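In case it is useful, here is a rough sketch of the update call in Python using the requests library. The host name, registration ID, SAS token, certificate string, and eTag below are all placeholders to replace with your own values:

import requests

# Placeholders -- substitute your own DPS instance, enrollment, and credentials.
DPS_HOST = "name.azure-devices-provisioning.net"
REGISTRATION_ID = "testenrollment10"
SAS_TOKEN = "SharedAccessSignature sr=...&sig=...&se=...&skn=..."
NEW_CERT_B64 = "base64 string of your new cert"
ETAG = '"AAAAAAAAAAE="'  # eTag returned by the previous create/get call (placeholder)

url = (f"https://{DPS_HOST}/enrollments/{REGISTRATION_ID}"
       "?api-version=2019-03-31")

body = {
    "registrationId": REGISTRATION_ID,
    "attestation": {
        "type": "x509",
        "x509": {
            "clientCertificates": {
                "primary": {"certificate": NEW_CERT_B64}
            }
        }
    },
    "capabilities": {"iotEdge": False},
    "provisioningStatus": "enabled",
}

resp = requests.put(
    url,
    json=body,
    headers={
        "Authorization": SAS_TOKEN,
        "Content-Type": "application/json",
        # Without If-Match carrying the current eTag, the service rejects
        # the update ("Enrollment already exists with different cert info").
        "If-Match": ETAG,
    },
)
resp.raise_for_status()
print(resp.json().get("etag"))  # note the new eTag for subsequent updates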
Please leave a comment below on this response if you need further help in this matter.

To update an individual enrollment in DPS, you also have to add the eTag value in the request header. The header property name for this is If-Match. For details, see https://learn.microsoft.com/en-us/rest/api/iot-dps/createorupdateindividualenrollment/createorupdateindividualenrollment#request-headers

Related

Is an authorization header required by the Azure Storage shared access signature (SAS) REST API

I need to read/write an Azure storage table.
The client program is configured to use a shared access signature to read/write the remote Azure table.
Can anyone give me a good example of how to construct the authorization header in order to use the SAS?
I am getting HTTP error code 403.
The Microsoft documentation specifies that every REST API call must include an authorization header. By default, the documentation suggests using the storage account access key to generate the HMAC-SHA code for the authorization header. I think I am missing something here.
The whole idea of using a shared access signature (SAS) is to protect the storage account access key, yet the documentation seems to suggest that the storage account owner needs to provide the account access key so the client can use it to generate the HMAC-SHA code. What am I missing here? Can anyone shed some light? Thanks.
If you're using a sas_token in the request URL, then you don't need to provide an Authorization header.
How do you check which headers must be provided? On the relevant API page, go to the Request Headers section and check each header; if a header is required, that is stated in its Description.
Here are the steps to query entities by using a sas_token:
1. Generate the sas_token from the Azure portal.
2. Check which headers are required. Per query-entities -> request-headers, x-ms-date is required (Authorization is not required here since we're using a sas_token). Provide a value for x-ms-date, like Wed, 13 Jan 2021 01:29:31 GMT.
If you don't know how to get the value for the x-ms-date header, you can open PowerShell and run the Get-Date command to generate the date.
3. Prepare the request URL with the sas_token, like below:
https://xxx.table.core.windows.net/testtable(PartitionKey='a1',RowKey='r11')?sv=2019-12-12&ss=t&srt=sco&sp=rwdlacu&se=2021-01-13T09:24:58Z&st=2021-01-13T01:24:58Z&spr=https&sig=xxxxx
4. Use a tool like Postman to send the request with the proper headers.
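For reference, here is a rough Python equivalent of that Postman request, using the requests library; the account name, table, entity keys, and sas_token are the placeholders from the example URL above:

from email.utils import formatdate

import requests

# Placeholders -- replace with your storage account, table, and sas_token.
ACCOUNT = "xxx"
SAS_TOKEN = "sv=2019-12-12&ss=t&srt=sco&sp=rwdlacu&se=...&st=...&spr=https&sig=xxxxx"
url = (f"https://{ACCOUNT}.table.core.windows.net/"
       f"testtable(PartitionKey='a1',RowKey='r11')?{SAS_TOKEN}")

headers = {
    # Required header, RFC 1123 GMT format, e.g. "Wed, 13 Jan 2021 01:29:31 GMT".
    "x-ms-date": formatdate(usegmt=True),
    "Accept": "application/json;odata=nometadata",
    # No Authorization header needed -- the sas_token in the URL authorizes the call.
}

resp = requests.get(url, headers=headers)
resp.raise_for_status()
print(resp.json())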

How to use the "Azure Storage Container URL" returned by the PNS Feedback service?

I'm trying to get feedback for push notifications, as described here: https://learn.microsoft.com/en-us/previous-versions/azure/reference/mt705560(v=azure.100).
Upon success, an Azure Storage Container URL is returned, complete with authentication token.
I have the URL:
https://pushpnsfb9bf61499e7c8fe.blob.core.windows.net/00000000002002698042?sv=2015-07-08&sr=c&sig=KbF1GtORNzAaCZH9UP7UFi9wMOYBmBgL%2BXLG3Qau9U0%3D&se=2020-08-29T19:10:17Z&sp=rl
But requesting it returns an authentication error:
<Error>
  <Code>AuthenticationFailed</Code>
  <Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature...</Message>
  <AuthenticationErrorDetail>Signature did not match. String to sign used was... </AuthenticationErrorDetail>
</Error>
I am trying to follow the docs at https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-service-sas-create-dotnet?tabs=dotnet#create-a-service-sas-for-a-blob-container. The URL has sr=c, which seems to mean I need a "SAS for a blob container".
But where do I get the token? The returned URL has a sig querystring parameter - I tried using that to sign the request, but it didn't work.
What am I doing wrong?
When we call the Get Platform Notification Services (PNS) feedback REST API, we get a container URL with a SAS token. The SAS token has read and list permissions at the container level, so with that token we can use the Azure Blob REST API to read the content, properties, metadata, or block list of any blob in the container, or to list the blobs in the container. For more details, please refer to here.
For example, after getting the container URL:
a. List blobs
GET https://pushpnsfb2f34ecd6733e74.blob.core.windows.net/00000000002000276266?<sas token, e.g. sv=2015-07-08&sr=c&sig=SQodHcRM6p04ag9rJZBqPDmr1NMd%2FbIWoPzMZrB9TpI%3D&se=2020-09-02T05%3A28%3A07Z&sp=rl>&restype=container&comp=list
b. Read blob content
GET https://pushpnsfb2f34ecd6733e74.blob.core.windows.net/00000000002000276266/<blob name>?<sas token, e.g. sv=2015-07-08&sr=c&sig=SQodHcRM6p04ag9rJZBqPDmr1NMd%2FbIWoPzMZrB9TpI%3D&se=2020-09-02T05%3A28%3A07Z&sp=rl>
For more details about the Azure Blob REST API, please refer to here.
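If you prefer code over raw requests, here is a minimal Python sketch of the same two calls using the requests library; the container URL and SAS token are placeholders modeled on the example above:

import xml.etree.ElementTree as ET

import requests

# Container URL and SAS token as returned by the PNS feedback API (placeholders).
container_url = ("https://pushpnsfb2f34ecd6733e74.blob.core.windows.net/"
                 "00000000002000276266")
sas = "sv=2015-07-08&sr=c&sig=...&se=...&sp=rl"

# a. List blobs in the container (sp=rl grants read + list permissions).
listing = requests.get(f"{container_url}?{sas}&restype=container&comp=list")
listing.raise_for_status()
names = [n.text for n in ET.fromstring(listing.content).iter("Name")]

# b. Read the content of each blob.
for name in names:
    blob = requests.get(f"{container_url}/{name}?{sas}")
    blob.raise_for_status()
    print(name, blob.text)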

Why do we have to specify server-side encryption in the http header?

This article in the AWS Developer Blog describes how to generate pre-signed URLs for S3 files that will be encrypted on the server side: https://aws.amazon.com/blogs/developer/generating-amazon-s3-pre-signed-urls-with-sse-kms-part-2/ . The part that describes how to generate a URL makes sense, but then the article goes on to describe how to use the URL in a PUT request, and it says that, in addition to the generated URL, one must add to the HTTP request a header specifying the encryption algorithm. Why is this necessary when the encryption algorithm was included in the URL's generation?
// Generate a pre-signed PUT URL for use with SSE-KMS
GeneratePresignedUrlRequest genreq = new GeneratePresignedUrlRequest(
        myExistingBucket, myKey, HttpMethod.PUT)
        .withSSEAlgorithm(SSEAlgorithm.KMS.getAlgorithm());
...
HttpPut putreq = new HttpPut(URI.create(puturl.toExternalForm()));
putreq.addHeader(new BasicHeader(Headers.SERVER_SIDE_ENCRYPTION,
        SSEAlgorithm.KMS.getAlgorithm()));
I ask partly out of curiosity, but also because the code that executes the PUT request in my case runs on a different machine from the one that generates the URL. I won't go into the details, but it is a real hassle to make sure that the header one machine generates matches the URL the other machine generates.
I don't know how "clear" the justification is, but my assumption is that the encryption parameters are required to be sent as headers in order to keep them from appearing in logs that log the query string.
Why is this necessary when the encryption algorithm was included in the url's generation
This aspect is easier to answer. A signed request is a way of proving to the system that someone in possession of your access-key-secret authorized this exact, specific request, down to the last byte. Change anything about the request that was included in the signature generation, and you have invalidated the signature, because now the request differs from what was authorized.
When S3 receives your request, it looks up your secret key and does exactly what your local code does... it signs the request it received and checks whether its generated signature matches the one you supplied.
A common misconception is that signed URLs are generated by the service, but they aren't: signed URLs are generated entirely locally. The algorithm is not computationally feasible to reverse-engineer, and for any given request there is exactly one possible valid signature.
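To illustrate, here is a toy Python sketch of request signing. It is deliberately simplified and is not AWS's actual Signature Version 4 algorithm; it only shows why changing any signed element, such as dropping the encryption header, invalidates the signature:

import hashlib
import hmac

SECRET = b"my-access-key-secret"  # known only to you and to the service

def sign(method: str, url: str, headers: dict) -> str:
    # Toy canonicalization; the real SigV4 canonical request is more involved.
    canonical = "\n".join([method, url] +
                          [f"{k.lower()}:{v}" for k, v in sorted(headers.items())])
    return hmac.new(SECRET, canonical.encode(), hashlib.sha256).hexdigest()

signed = sign("PUT", "https://bucket.s3.amazonaws.com/key",
              {"x-amz-server-side-encryption": "aws:kms"})

# The service recomputes the signature from the request it actually received.
# Drop or alter the signed header and the signatures no longer match:
tampered = sign("PUT", "https://bucket.s3.amazonaws.com/key", {})
print(signed == tampered)  # False -- the request differs from what was authorized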
It looks like the information about encryption doesn't get included in the pre-signed URL itself. I'm guessing the only reason it's included in the GeneratePresignedUrlRequest is for generating the hash that's checked for authentication. After reading up on when to use URL parameters vs. custom headers, I have to wonder whether there is any clear justification for S3's use of custom headers instead of URL parameters here. As mentioned in the original question, having to include these headers makes this API difficult to use. I wouldn't have this problem if URL parameters were used instead. Any comments on the matter would be appreciated.

Proof of API response

Let's say I have access to an HTTPS weather API.
Let's say I query its health status on Thursday 17/08/2017 at 23h30 and the API replies OK (a simple OK HTTP code).
As a client, I need to be able to prove in the future that the service actually responded with this data.
I'm thinking of asking the API to add a cryptographic signature of the data sent plus a timestamp, in order to prove that they actually responded OK at that specific time.
Is it overkill? Is there a simpler way of doing it?
A digital signature containing the current date/time, or even adding a timestamp issued by a third-party time stamp authority, is an appropriate way to prove that the content was actually issued on a given date.
In general, implementing a digital signature system on HTTP requests is not so simple, and you have to consider many elements:
- What content will you sign: headers, payload, attachments?
- Is it binary content or text? The algorithms and signature formats will differ.
- For text content, you must canonicalize it to avoid encoding problems when you verify the signature on the client side. You also need to define a signature algorithm to compute the content to sign.
- Do you also need to sign attachments when they are sent via streams? How are you going to handle big files?
- How are you going to attach the signature to the HTTPS response: a special header, an additional attribute in the payload?
- How is the server going to distribute the signing certificate? You should include it in a truststore on the client.
But if you only want to prove that a service response was OK/FAIL, then the server just needs to add a digital signature over the payload (or computed over a concatenation of the headers). If you want to implement something more complex, I suggest you take a look at JSON Web Signature (JWS).
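As a very rough illustration, here is a minimal Python sketch of signing a payload plus timestamp with an HMAC over a shared key. Note that an HMAC only proves integrity between the parties holding the key; for a proof you can show to a third party, you would want an asymmetric signature or a time stamp authority, as discussed above:

import hashlib
import hmac
import json
import time

SHARED_KEY = b"secret shared between the API provider and the client"

def signed_response(status: str) -> dict:
    # Server side: attach a timestamp and a signature over payload + timestamp.
    payload = {"status": status, "timestamp": int(time.time())}
    # Canonicalize so client and server serialize the content identically.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    payload["signature"] = hmac.new(SHARED_KEY, canonical.encode(),
                                    hashlib.sha256).hexdigest()
    return payload

def verify(response: dict) -> bool:
    # Client side: recompute the signature and compare in constant time.
    sig = response.pop("signature")
    canonical = json.dumps(response, sort_keys=True, separators=(",", ":"))
    expected = hmac.new(SHARED_KEY, canonical.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

print(verify(signed_response("OK")))  # True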

What is the hash field in the BigCommerce webhook?

How is it generated? How can I validate it?
https://developer.bigcommerce.com/api/webhooks-getting-started
{
  "store_id": 11111,
  "producer": "stores/abcde",
  "scope": "store/order/statusUpdated",
  "data": {
    "type": "order",
    "id": 173331
  },
  "hash": "3f9ea420af83450d7ef9f78b08c8af25b2213637"
}
This was answered by @KarenWhite, their developer evangelist, in this thread:
https://support.bigcommerce.com/s/question/0D51B00004G6kJf/incoming-webhook-posts-hash-field-in-payload
It is hashed with SHA-1, but it is not signed with the client secret:
$payload['hash'] = sha1(json_encode($payload));
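Based on that PHP line, a validation attempt in Python would look something like the sketch below. Be warned that it only works if the byte-for-byte JSON encoding matches PHP's json_encode (same key order, compact separators, and PHP's default escaping of forward slashes), so treat it as a best-effort check rather than a reliable verification:

import hashlib
import json

payload = {
    "store_id": 11111,
    "producer": "stores/abcde",
    "scope": "store/order/statusUpdated",
    "data": {"type": "order", "id": 173331},
    "hash": "3f9ea420af83450d7ef9f78b08c8af25b2213637",
}

# The hash is assigned after json_encode runs, so it covers the payload
# without the "hash" key itself.
received_hash = payload.pop("hash")

# Mimic PHP's json_encode: compact separators, insertion order, and
# forward slashes escaped as \/ (PHP's default behavior).
canonical = json.dumps(payload, separators=(",", ":")).replace("/", "\\/")
computed = hashlib.sha1(canonical.encode()).hexdigest()
print(computed == received_hash)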
Additionally, their stance on webhook security is documented in the February 2018 town hall:
https://support.bigcommerce.com/s/article/BigCommerce-Town-Hall-February-2018
Q. How can I make sure that a webhook callback is initiated by BigCommerce only, and that the data is not altered between BigCommerce and my server endpoint? Can the hash returned in the webhook payload be used to verify the request?
A. Our webhooks today contain very little information -- they only contain an I.D. to go look up additional information. You would need to be authorized to verify that I.D. against the store’s API to determine the actual information being requested. We also secure our webhooks with TLS encryption, and enable developers to add their own headers to events for additional security.
I believe the hash is simply a unique identifier for an event.
One good reason to have this is that, when you ingest events, if you ever get duplicates from BigCommerce (which I've seen happen recently), you can tell that an event is a duplicate based on the hash field.
I'd recommend using a custom header to validate that the payload came from BigCommerce, as noted in the getting started guide (see the sketch after the quoted passage below):
A headers object containing one or more name-value pairs, both string values (optional). If you choose to include a headers object, Bigcommerce will include the name-value pair(s) in the HTTP header of its POST requests to your callback URI at runtime. While this feature could be used for any purpose, one is to use it to set a secret authorization key and check it at runtime. This provides an additional level of assurance that the POST request came from Bigcommerce instead of some other party, such as a malicious actor.
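As an illustration of that approach, here is a minimal sketch of a webhook receiver (using Flask) that checks such a secret header; the header name and secret value are hypothetical and would be whatever you configured in the webhook's headers object at creation time:

import hmac

from flask import Flask, abort, request

app = Flask(__name__)

# Hypothetical name-value pair configured in the webhook's "headers" object.
WEBHOOK_SECRET = "my-random-secret-value"

@app.route("/webhooks/bigcommerce", methods=["POST"])
def bigcommerce_webhook():
    supplied = request.headers.get("X-Webhook-Secret", "")
    # Constant-time comparison of the secret set at webhook creation.
    if not hmac.compare_digest(supplied, WEBHOOK_SECRET):
        abort(401)
    event = request.get_json()
    # The payload carries only IDs; look up details via the store's API.
    print("order updated:", event["data"]["id"])
    return "", 204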