How clients are verified in Safenet Luna SA HSM? - cryptography

How are Safenet Luna SA HSM clients verified when the clients are registered using a hostname?

Safenet Luna HSMs use certificate-based authentication for clients. The certificate must be copied to the HSM, and its filename must match the hostname used in the client register command on the HSM.
A typical process for registration is:
Copy the server certificate to the client installation.
scp admin@10.10.10.10:server.pem /usr/lunasa/cert/server
Register the server locally
vtl addServer -n 10.10.10.10 -c /usr/lunasa/cert/server/server.pem
Create the client certificate on the client:
vtl createCert -n HOSTNAME
This creates a certificate and private key in the cert/client directory named:
HOSTNAME.pem (certificate)
HOSTNAMEKey.pem (private key)
Copy the client certificate to the Luna SA HSM using scp.
scp /usr/lunasa/cert/client/HOSTNAME.pem admin@10.10.10.10:
On the HSM, register the client and assign it to a partition.
client register -client HOSTNAME -hostname HOSTNAME
client assignPartition -client HOSTNAME -partition PARTITIONNAME
On the client, verify that the client is registered and operating properly:
$ vtl verify
The following Luna SA Slots/Partitions were found:
Slot    Serial #     Label
====    ========     =====
   1    123456789    myPartition1

Looking at your comments after Keith helped with the process of trust/cert exchange, below is the command that you might need:
ntls ipcheck disable

The HSM verifies clients based on the NTL (Network Trust Link) connection. Establishing an NTL connection is mandatory before a client makes calls to the HSM via Cryptoki. The procedure to establish the NTL connection is explained by Keith Bucher above.
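At its core, the trust exchange above is just each side holding the other's self-signed X.509 certificate, with the client cert's CN set to the client hostname. A rough openssl sketch of what vtl createCert -n HOSTNAME produces (HOSTNAME is a placeholder, and this is not the Luna tool itself):

```shell
# Self-signed client certificate with CN=HOSTNAME, roughly equivalent
# to the cert/key pair that `vtl createCert -n HOSTNAME` generates.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout HOSTNAMEKey.pem -out HOSTNAME.pem \
    -days 365 -subj "/CN=HOSTNAME"

# The HSM matches the registered client name against this subject CN:
openssl x509 -in HOSTNAME.pem -noout -subject
```

This is why the filename and the -client name in the register command must agree with the hostname: they all identify the same certificate.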

Related

Troubleshooting - Setting up private GitLab server and connecting Gitlab Runners

I have a GitLab instance running in Docker on a dedicated private server (accessible only from within our VPC). We want to start doing CI using GitLab runners, so I spun up another server to host our runners.
Now that gitlab-runner has been configured, I try to register a runner with the private IP of the GitLab server and the registration token:
Enter the GitLab instance URL (for example, https://gitlab.com/):
$GITLAB_PRIVATE_IP
Enter the registration token:
$TOKEN
Enter a description for the runner:
[BEG-GITLAB-RUNNER]: default
Enter tags for the runner (comma-separated):
default
ERROR: Registering runner... failed runner=m616FJy- status=couldn't execute POST against https://$GITLAB_PRIVATE_IP/api/v4/runners: Post "https://$GITLAB_PRIVATE_IP/api/v4/runners": x509: certificate has expired or is not yet valid: current time 2022-02-06T20:00:35Z is after 2021-12-24T04:54:28Z
It looks like our certs have expired, and to verify:
echo | openssl s_client -showcerts -connect $GITLAB_PRIVATE_IP:443 2>&1 | openssl x509 -noout -dates
notBefore=Nov 24 04:54:28 2021 GMT
notAfter=Dec 24 04:54:28 2021 GMT
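That openssl pipeline is a handy generic check. Going one step further, -checkend returns a non-zero exit code if the cert expires within the given number of seconds, which is convenient for scripting. A sketch against a throwaway self-signed cert (filenames and the CN are placeholders):

```shell
# Throwaway self-signed cert, valid for 1 day (placeholder names).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/key.pem -out /tmp/cert.pem \
    -days 1 -subj "/CN=gitlab.internal"

# Print the validity window, as in the question:
openssl x509 -in /tmp/cert.pem -noout -dates

# Exit 0 if the cert is still valid an hour from now,
# non-zero if it will have expired by then:
openssl x509 -in /tmp/cert.pem -noout -checkend 3600
```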
GitLab comes with Let's Encrypt, so I decided to enable Let's Encrypt and cert auto-renewal in the GitLab rails configuration; however, when I try to reconfigure I get the error message:
There was an error running gitlab-ctl reconfigure:
letsencrypt_certificate[$GITLAB_PRIVATE_IP] (letsencrypt::http_authorization line 6) had an error: Acme::Client::Error::RejectedIdentifier: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 41) had an error: Acme::Client::Error::RejectedIdentifier: Error creating new order :: Cannot issue for "$GITLAB_PRIVATE_IP": The ACME server can not issue a certificate for an IP address
So it looks like I can't use the Let's Encrypt option that is packaged with GitLab to enable the renewal of certs.
How can I create/renew SSL certs on a private Linux server without a domain?
If you've set up GitLab + runners on private servers, what does your rails configuration look like?
Is there a way to enable DNS on a private server for the sole purpose of a certificate authority granting certs?
I would suggest using a self-signed certificate. I have tested this before and it works fine, but it requires some work. I will try to summarize some of the steps needed:
1- Generate a self-signed certificate with the domain you choose and make sure to keep it in /etc/gitlab-runner/certs/
2- Add the domain and the certs path in /etc/gitlab/gitlab.rb
3- Reconfigure GitLab
4- When connecting the runner, make sure to manually copy the certs to the runner server and activate them.
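Step 1 can be sketched with plain openssl (requires OpenSSL 1.1.1+ for -addext; the domain gitlab.internal.example is a placeholder for whatever internal name you choose):

```shell
# Self-signed cert for an internal GitLab domain, with a SAN entry.
# Modern clients (git, gitlab-runner) validate the SAN, not just the CN.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout gitlab.internal.example.key \
    -out gitlab.internal.example.crt \
    -subj "/CN=gitlab.internal.example" \
    -addext "subjectAltName=DNS:gitlab.internal.example"

# Confirm the SAN made it in:
openssl x509 -in gitlab.internal.example.crt -noout -ext subjectAltName
```

The runner then resolves that domain to the private IP (e.g. via /etc/hosts) and trusts the copied .crt, which sidesteps the "ACME can not issue for an IP address" limitation entirely.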

Connection to channel with SSL enabled gives error about CipherSpec not being specified but it was

I have installed IBM MQ server with a developer license (https://developer.ibm.com/articles/mq-downloads/) and followed the tutorial from here: https://developer.ibm.com/tutorials/mq-connect-app-queue-manager-windows/.
So what I have now is:
IBM MQ Manager
One Queue Manager (QM1)
4 Queues (one dead letter queue and 3 dev queues DEV.QUEUE.x) all local
2 Channels (one admin and one 'normal' server connection channel)
I enabled SSL on the QM1 queue manager:
[![SSL Settings for QM1][1]][1]
and I also created a personal certificate:
[![Key management][2]][2]
EDIT: I connected with username/password instead of using an SSL certificate. I have fixed this but now I cannot connect either.
I also set the SSL CipherSpec for the channel to ANY.
amqsputc dev.queue.1 QM1 now gives me:
MQCONNX ended with reason code 2058
Which (https://www.ibm.com/docs/en/ibm-mq/9.2?topic=arc-2058-080a-rc2058-mqrc-q-mgr-name-error) says that the queue manager name is wrong. But as far as I can see, QM1 is the correct name.
EDIT: When connecting with the amqssslc tool with the following syntax I am getting this:
amqssslc -l ibmwebspheremq -k C:\ProgramData\IBM\MQ\qmgrs\QM1\ssl\key -c DEV.APP.SVRCONN -x DEV.APP.SVRCONN -s TLS_RSA_WITH_AES_128_CBC_SHA256 -m QM1
Sample AMQSSSLC start
Connecting to queue manager QM1
Using the server connection channel DEV.APP.SVRCONN
on connection name DEV.APP.SVRCONN.
Using SSL CipherSpec TLS_RSA_WITH_AES_128_CBC_SHA256
Using SSL key repository stem C:\ProgramData\IBM\MQ\qmgrs\QM1\ssl\key
Certificate Label: ibmwebspheremq
No OCSP configuration specified.
MQCONNX ended with reason code 2538
As the error message you have shown us says, your channel definition for DEV.APP.SVRCONN has not put a value in the SSLCIPH attribute.
If this is missing at the queue manager end, use the following MQSC command to rectify it:
ALTER CHANNEL(DEV.APP.SVRCONN) CHLTYPE(SVRCONN) SSLCIPH(ANY)
or alternatively put the same value in the SSLCIPH attribute that you are using for the client application.
If this is missing at the client application end (you can see that there is already a value in the SSLCIPH attribute on the SVRCONN), change your client application to use the same CipherSpec.
If you are unsure how to do this, please update your question with the SVRCONN definition and details about your client application for more help.

Enabling TLS in NiFi

I enabled TLS in NiFi by running the below command,
nifi-toolkit/nifi-toolkit-assembly/target/nifi-toolkit-1.4.0-SNAPSHOT-bin/nifi-toolkit-1.4.0-SNAPSHOT/bin/tls-toolkit.sh standalone -n "{my-ip},localhost" -C 'CN={my-ip}' -C 'CN=localhost' -o ./certs
This created the files required for TLS under the directory certs.
I moved the files under the directory certs into the conf folder of the deployment in my machine.
Installed the certificate to my machine's Keychain Access.
Now I started the server using bin/nifi.sh start. My server starts and I am able to hit it, but my request is not authorized.
I am getting the below error,
Not authorized for the requested resource. Contact the system
administrator.
Once TLS is enabled in Apache NiFi, anonymous access is no longer enabled by default. You will need to authenticate as a user in order to access the UI/API. There are three authentication mechanisms available -- client certificates, LDAP, or Kerberos. Once you configure an Initial Admin Identity in $NIFI_HOME/conf/authorizers.xml (this would be the exact CN of the client certificate you issued in the TLS Toolkit command), that user can authenticate and use the user management tools in NiFi to add additional users.
You can find more information in the NiFi Admin Guide. Bryan Bende has also written a detailed walkthrough of the process.
One note about the command you posted above -- I am not sure what your desired output is, but the command is issuing a server certificate for my-ip and another for hostname, but then two client certificates with those DNs as well. In general, you want a server certificate for hostname (possibly with a SAN entry for my-ip), and a client certificate with a DN like CN=alopresto, OU=Apache NiFi.
For example:
./bin/tls-toolkit.sh standalone \
    -n 'nifi.nifi.apache.org' \
    --subjectAlternativeNames '123.234.234.123' \
    -C 'CN=alopresto, OU=Apache NiFi' \
    -P password \
    -S password \
    -B password \
    -f ...conf/nifi.properties \
    -o ...conf/
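For comparison, the "server cert for hostname with a SAN entry for the IP" that the toolkit builds can be sketched with plain openssl (hostname and IP are the placeholders from the example above; the toolkit additionally builds the keystores/truststores for you):

```shell
# Server certificate with the hostname as CN and both the hostname
# and the IP as SAN entries (sketch only, requires OpenSSL 1.1.1+).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout nifi-key.pem -out nifi-cert.pem \
    -subj "/CN=nifi.nifi.apache.org" \
    -addext "subjectAltName=DNS:nifi.nifi.apache.org,IP:123.234.234.123"

# Inspect the SAN entries:
openssl x509 -in nifi-cert.pem -noout -ext subjectAltName
```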

What is the meaning of error=x509: certificate is valid for user A, not localhost in Docker?

I am using a Docker container to run a bunch of services, all of those services make use of certificates to communicate to each other.
When starting up those services, there is one in particular that complains with the following error:
> discovery_1 | INFO ttn: Got public keys for token validation
> discovery_1 | DEBUG Connected to gRPC server Address=localhost:1900
> discovery_1 | FATAL Could not start client for gRPC proxy error=x509: certificate is valid for discovery, not localhost
> ttnbackbone_discovery_1 exited with code 1
I have created the certificate for the "discovery" user, but Docker still uses localhost in some way, which I don't understand... I have also followed this tutorial on certificate usage from Docker, but I still get the same error.
What can I do further?
Thanks in advance,
Regards!
I encountered this today. x509 certificates have a Common Name attribute that some software use to match the DNS hostname of a server. Here was my error with a certificate with CN of localhost and a DNS hostname of docker1-staging:
error during connect: Get https://docker1-staging:2376/v1.26/containers/json: x509: certificate is valid for localhost, not docker1-staging
I'll have to regenerate the certificate used by the Docker server and make sure it has a CN value of docker1-staging. You'll have to do the same with a CN value of localhost.
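The matching behaviour is easy to reproduce with openssl itself: openssl verify -verify_hostname applies the same SAN/CN hostname check that the Docker and gRPC clients do. A sketch (certificate names are placeholders; with no SAN present, openssl falls back to checking the CN):

```shell
# Self-signed cert with CN=discovery and no SAN, like the one in the error.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout disc-key.pem -out disc-cert.pem -subj "/CN=discovery"

# Succeeds: the requested name matches the certificate.
openssl verify -CAfile disc-cert.pem -verify_hostname discovery disc-cert.pem

# Fails with a hostname mismatch, mirroring the x509 error in the question.
openssl verify -CAfile disc-cert.pem -verify_hostname localhost disc-cert.pem
```

So the fix is either to connect using the name the cert was issued for, or to reissue the cert for the name you actually connect with.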

Windows ServiceBus Beta 1.0 'The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel

I configured the Bus with the scripts below.
The new cert is in the LocalComputer\Personal\Certificates cert store.
The sample app throws an AuthorizationException:
'The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.' Inner exception: {"The remote certificate is invalid according to the validation procedure."}
$SBRunAsPassword = ConvertTo-SecureString -AsPlainText -Force -String [PASSWORD];
$SBCertAutoGenerationKey = ConvertTo-SecureString -AsPlainText -Force -String [PASSWORD];
New-SBFarm -CertAutoGenerationKey $SBCertAutoGenerationKey -RunAsName 'server\user' -AdminGroup 'BUILTIN\Administrators' -PortRangeStart 9000 -TcpPort 9354 -FarmMgmtDBConnectionString 'Data Source=[SERVER]\SQLEXPRESS;Integrated Security=True'
Add-SBHost -FarmMgmtDBConnectionString 'Data Source=[SERVER]\SQLEXPRESS;Integrated Security=True' -RunAsPassword $SBRunAsPassword -CertAutoGenerationKey $SBCertAutoGenerationKey;
New-SBNamespace -Name 'DemoNameSpace' -ManageUser '[USER]';
If you're running your client application on a different machine than the server, then you need to import the CA into your client machine to be able to trust the certificate ServiceBus presents.
This page has information on how to perform that:
http://msdn.microsoft.com/en-us/library/jj192993.aspx
Also, make sure that your client calls always use the fully qualified domain name of the machine (if your machine is domain joined). This is because the certificate that ServiceBus generates on install uses the FQDN of the box as the certificate's CN.
On a non-domain-joined computer you need to modify the URL format and remove the domain component.