Trouble with Azure IoT C# SDK Certificate - azure-iot-hub

I am receiving this error when I try to connect to Azure IoT Hub: Microsoft.Azure.Devices.Client.Exceptions.UnauthorizedException.
I could not post the error screenshots here, since I do not have enough reputation points.
So instead I wrote the whole details at
http://azuregeon.blogspot.in/2017/10/azure-iot-c-certificate-connectivity.html
If anyone gets a chance, please have a look.

You should set the Common Name as "_demoDevice" when authenticating your X.509 device. The Common Name is used to identify this certificate in the Azure web interface. In addition, the operation will be simpler and more precise if you follow the official documentation (PowerShell scripts to manage CA-signed X.509 certificates).


Obtaining an AATL certificate to use in my cloud-based service

I'm looking to obtain a certificate from an AATL authority to use in iText to perform tamper-proofing signatures on PDF documents as part of a cloud application that I'm working on.
As best as I'm able to determine, AATL certificates can be delivered as USB HSMs to customers after a standard Adobe AATL verification process. Unfortunately, this restricts the usage to devices I have physical access to, which obviously isn't feasible for a cloud application.
I've been trying to research what my best options are on this front, but haven't been able to find any clear guidance on best practices or impartial sources for knowledge. I've come up with two possible ideas to illustrate in slightly more concrete terms what I am looking for.
Obviously any answer that results in the same outcome of either of these ideas is more than welcome as well!
1st Idea
Is there any way for me to obtain an AATL certificate by generating a CSR from Azure Key Vault, or Azure HSM (Gemalto), and having an AATL provider issue their response such that the certificate is loaded into Azure's standards-compliant store?
By doing this, my hope would be that I could then code my application using the Azure Key Vault APIs or the Gemalto HSM to perform signatures.
2nd Idea
If a USB HSM is my best option, is it possible to derive another certificate from my USB HSM and then load that into Azure Key Vault? Will a key derived from one issued to my company by an AATL authority still pass Acrobat (and any other) authenticity checks? Or will any certificate with intermediaries between it and the AATL authority fail?
I've been digging into this since I have a very similar requirement at the moment. YES, it is possible to store an AATL Document Signing certificate in Azure KeyVault because it is a FIPS 140-2 level 2 compliant HSM. You do not need the dedicated HSM, although it is also supported (Azure dedicated HSM is FIPS 140-2 level 3 compliant).
As for the process, you are correct that you would need to issue a CSR from KeyVault directly. If your certificate is delivered on a USB HSM, you will not be able to transfer it to Azure KeyVault since it will be locked to the HSM.
I do not want to list any certificate providers in this answer but I was easily able to find at least 4 that supported my use-case with a quick Google search. I'm currently in the process of getting quotes from each of these vendors.

Is there any security issue that can be expected when the MQTT client doesn't provide a public key certificate during the TLS handshake?

I am building up a small IoT-like system, where MQTT devices (clients) are sending and receiving security-critical information or commands.
I have learned that a TLS connection can optionally be established without client authentication through a PK certificate on the client side.
Normally, MQTT client devices don't have enough resources to support PKI, where they first have to store a certificate and then, from time to time, update it with newly issued ones when its validity has passed or when the original certificate has been revoked.
That is, I think, why many MQTT brokers have an option to turn client authentication on or off during the TLS handshake.
However, my concern is whether there would be any security issue from skipping the client authentication step, for example a chance that some other malicious device impersonating one of my devices could connect to the broker and obtain that critical information and those commands.
My question is what options and best practices I can adopt to minimize that kind of risk, considering the constrained resources of the devices.
Missing client authentication means that everybody including an attacker can claim to be a valid client. There can be use cases like public services where this is not a problem and there are other use cases where the server wants to restrict access to specific known clients only.
There is no definitive answer to this question, it will always depend on the following factors, and only you as the designer can answer them:
What is the threat model you are working with? E.g. who are you trying to keep out of the system and why, and what are the consequences of somebody connecting a rogue client?
How much are you prepared to spend? If you intend to deploy client certificate or even a unique username/password for each device, how will it be protected? Does the hardware you intend to use support a secure enclave/hardware secret store? Meaning how hard would it be for an attacker to extract the client username/password or secret key from the device?
What other security measures do you have in place? Do you have Access Control Lists to protect which topics a client can publish/subscribe to? Do you have monitoring in place to detect malicious actions from clients so they can be disconnected and banned?
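One way to turn the checklist above into something checkable, as an added example that is not part of the original question or answers: a small audit of the broker's listener configuration for the three gaps the thread discusses (no client authentication, no per-device identity, no topic ACL). Mosquitto is assumed as the broker (the thread names none), so the directive names and file paths are assumptions, and both configurations are constructed samples.

```python
# Added example (not from the original question or answers): audit a broker config for
# the gaps the answer above asks about. Mosquitto and its directives/paths are assumptions.

# Constructed sample: TLS protects the channel, but nothing authenticates the device.
WEAK_CONF = """
listener 8883
cafile /etc/mosquitto/ca.crt
certfile /etc/mosquitto/broker.crt
keyfile /etc/mosquitto/broker.key
require_certificate false
allow_anonymous true
"""

# Constructed sample: each device must present a CA-signed client certificate,
# its CN becomes the username, and an ACL file limits which topics it may use.
HARDENED_CONF = """
listener 8883
cafile /etc/mosquitto/ca.crt
certfile /etc/mosquitto/broker.crt
keyfile /etc/mosquitto/broker.key
require_certificate true
use_identity_as_username true
allow_anonymous false
acl_file /etc/mosquitto/acl
"""

def parse_conf(text: str) -> dict[str, str]:
    """Mosquitto config is one 'directive value' per line; '#' starts a comment."""
    conf: dict[str, str] = {}
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()
        if line:
            key, _, value = line.partition(" ")
            conf[key] = value.strip()
    return conf

def audit(text: str) -> list[str]:
    """Findings that let an unknown or over-privileged device use the broker. Missing directives count as permissive (defaults vary by version)."""
    conf = parse_conf(text)
    findings = []
    if conf.get("allow_anonymous", "true") == "true":
        findings.append("allow_anonymous: clients connect without any credentials")
    if conf.get("require_certificate", "false") != "true":
        findings.append("require_certificate off: TLS authenticates the broker, not the device")
    elif conf.get("use_identity_as_username", "false") != "true":
        findings.append("use_identity_as_username off: the verified CN does not drive authorization")
    if "acl_file" not in conf:
        findings.append("no acl_file: an authenticated device may use any topic")
    return findings
```

The referenced ACL file would, for instance, contain `pattern readwrite devices/%u/#` so that each device is confined to the topic tree named after its own certificate CN.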

When connecting one Cloud app to another Cloud app, is SSL required?

I apologize if I am asking something too obvious, but as I understand it, SSL should be used when making API calls to remote apps. This is particularly important if you are using a password or API key in the calls so network sniffing doesn't reveal the details.
Having said that, if I have a Cloud-hosted app, e.g. an AWS-hosted PHP script, that makes a call to a MongoDB database hosted by a third-party provider, e.g. ObjectRocket, do I need SSL between the two?
Secondly, as I understand it, if both apps are hosted on the same infrastructure, e.g. Heroku (which is AWS-based) and a third-party MongoDB provider (if they use AWS in the same region), then SSL is redundant when connecting from a Heroku-hosted app to the database, as AWS will prevent network sniffing.
Thank you for your help. I appreciate anyone who can provide some fundamental information here.

How can I secure internal roles in Azure (SSL at minimum)

I am working on a HIPAA cloud project and am implementing a Key Store as a central repository for all of the key pairs for PHI (Private Health Information) encryption. I am not worried about the actual data because it will be encrypted at rest and in transit.
However, when a worker or web role needs to work with the data, it needs to decrypt and re-encrypt it (if it does updates). That's where the Key Store comes into play. However, I don't want this internal service exposed, and I also need it to be SSLed, because sending keys in the clear, even inside a virtual network of roles, wouldn't pass a security audit.
So any suggestions on how I can get a web or worker role to use SSL with an internal endpoint?
Thanks
I don't think you can. Internal endpoints are on a closed network branch, so https would normally be redundant (although I understand your compliance issues). I found this answer (to my question) very useful in figuring out the security of internal endpoints: How secure are Windows Azure internal endpoints? - see the more detailed post that Brent links to.

Can't upgrade Azure deployment using Management REST API (SSL certificate issue)

I'm currently working on an automated deployment process for a hosted service for Windows Azure. The creation of the .cspkg and .cscfg files works perfectly using a call to msbuild. Now I'm writing a small .NET console app that should deploy these files to Azure using the Management REST API.
There is no problem concerning the API itself. I can send a request to the API using one of my management certificates. I upload the .cspkg file to Azure BLOB Storage and then try to call Upgrade Deployment. But every time I try, I get a "400 Bad Request" response stating that the certificate with thumbprint xy was not found. This certificate is the SSL certificate (not a management certificate) I'm using for HTTPS for my custom domain (DNS CNAME).
And now, the whole thing gets interesting:
When I deploy the files using the "Publish" command in my Visual Studio, there is no problem. (I compared the .cscfg/.cspkg files from VS and from my msbuild output: apart from a few GUIDs, they're identical). And furthermore, using the Silverlight Management thingy in my browser, I can even upload my generated files that could not be uploaded using the API.
When I retrieve a list of all certificates using the List Certificates call, the certificate which is said to be missing is apparently there. I can also retrieve its data using the Get Certificate call.
So why does Azure keep telling me that the certificate was not found when using the Upgrade Deployment call? Did anyone experience something similar? Does anyone have a hint for me? Thanks in advance.
P.S.: This is what Azure says when I use the API:
<Error xmlns="http://schemas.microsoft.com/windowsazure" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<Code>BadRequest</Code>
<Message>The certitficate with thumbprint 7b232c4a2d6e3deadbeef120d5dbc1fe8049fbea was not found.</Message>
</Error>
P.P.S.: Yes, the word in the response is certitficate, not certificate.
OK, after using the List Subscription Operations API call to find out what Visual Studio calls to deploy apps, I found the solution.
Turns out that the URL I used for the API request was wrong, but: with all due respect, I blame Microsoft for lousily documenting its Azure Management API.
In their documentation, they write the URL to use is:
https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deploymentslots/<deployment-slot>/?comp=upgrade
And the description is the following:
To generate the request URI, replace <subscription-id> with your subscription ID, <service-name> with the name of your service, <deployment-slot> with staging or production, and <deployment-name> with the unique name of your deployment.
What they forgot to mention is that you have to use the DNS Name of your service, and not the Name! They could at least return an appropriate error message telling you that the service name is invalid, non-existent or doesn't belong to your subscription ID, instead of complaining about some certificate issue.
Thank you Microsoft, that cost me more than two days.
The error indicates that you have not uploaded that certificate into the hosted service's secret store. Visual Studio might be doing that automagically for you, but if you want to replicate it programmatically, then use the Add Certificate API call and upload the PFX into the deployment.
You can see '400 BadRequest - The certificate with thumbprint XYZ was not found.' appear in the CreateDeployment or UpgradeDeployment scenario for the following reason (which I just debugged):
You use the same certificate for subscription management as you do for e.g. SSL or Remote Desktop password encryption in your hosted service. You therefore will use the certificate with thumbprint XYZ to authenticate your service management REST call that creates the deployment.
When specifying your deployment parameters you pass in your CSCFG which references that same cert by its thumbprint, because it needs to configure Remote Desktop/SSL etc.
That cert is not yet added to your hosted service certs.
In this case the 400 Bad Request error really is telling you that you have a bad request, because the certificate in your CSCFG is not yet attached to your hosted service. The confusion arises (for me) because, since it's a multi-purpose cert, you misinterpret the error message as referring to the authentication of the request, even though you are not getting a 401.
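A pre-flight check that would have surfaced the real problem described above, added here as an example and not code from the original question or answers: compare the certificate thumbprints the .cscfg references against the hosted service's List Certificates response before calling Upgrade Deployment, so a missing certificate fails clearly instead of as a misleading 400. Both XML documents are constructed samples (the first thumbprint is the one from the error message above, the second is made up), and the element and namespace names are assumed to follow the service-configuration and service-management schemas.

```python
# Added example (not from the original question or answers): verify that every
# certificate a .cscfg references is already attached to the hosted service.
# The XML below is constructed; schema/namespace names are assumptions.
import xml.etree.ElementTree as ET

CSCFG_NS = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
MGMT_NS = "http://schemas.microsoft.com/windowsazure"

# Constructed: a role referencing the SSL cert from the error above plus an RDP cert.
SAMPLE_CSCFG = f"""<ServiceConfiguration xmlns="{CSCFG_NS}" serviceName="MyService">
  <Role name="WebRole1">
    <Instances count="2" />
    <Certificates>
      <Certificate name="SslCert" thumbprint="7b232c4a2d6e3deadbeef120d5dbc1fe8049fbea" thumbprintAlgorithm="sha1" />
      <Certificate name="RdpCert" thumbprint="0123456789ABCDEF0123456789ABCDEF01234567" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>"""

# Constructed: a List Certificates response in which only the RDP cert is attached.
SAMPLE_LIST_CERTS = f"""<Certificates xmlns="{MGMT_NS}">
  <Certificate>
    <Thumbprint>0123456789abcdef0123456789abcdef01234567</Thumbprint>
    <ThumbprintAlgorithm>sha1</ThumbprintAlgorithm>
  </Certificate>
</Certificates>"""

def cscfg_thumbprints(cscfg_xml: str) -> set[str]:
    """Thumbprints referenced by any role in the .cscfg, normalised to upper case."""
    root = ET.fromstring(cscfg_xml)
    return {c.attrib["thumbprint"].upper()
            for c in root.iter(f"{{{CSCFG_NS}}}Certificate")}

def attached_thumbprints(list_certs_xml: str) -> set[str]:
    """Thumbprints present in a List Certificates response, normalised to upper case."""
    root = ET.fromstring(list_certs_xml)
    return {t.text.strip().upper()
            for t in root.iter(f"{{{MGMT_NS}}}Thumbprint")}

def missing_certificates(cscfg_xml: str, list_certs_xml: str) -> list[str]:
    """Thumbprints the deployment needs but the hosted service does not yet hold.
    Case is normalised because the two APIs do not agree on hex casing."""
    return sorted(cscfg_thumbprints(cscfg_xml) - attached_thumbprints(list_certs_xml))
```

An empty result means the Upgrade Deployment call can proceed; anything else names the certificates to upload with Add Certificate first.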