ServiceFabric standalone: Failed to get private key file - ssl

I have a standalone ServiceFabric cluster (3 nodes). I created an SSL certificate for server and client authentication and assigned the certificate thumbprint in the cluster config. Everything works okay (cluster health is OK and my applications work as well), but there are a lot of errors in the Microsoft-ServiceFabric/Admin log. The following warnings and errors are written to the log every minute:
CryptAcquireCertificatePrivateKey failed. Error:0x80090014
Can't get private key filename for certificate. Error: 0x80090014
All tries to get private key filename failed.
Failed to get the Certificate's private key. Thumbprint: {Cert
Thumbprint}. Error: E_FAIL
Failed to get private key file. x509FindValue: {Cert Thumbprint},
x509StoreName: My, findType: FindByThumbprint, Error E_FAIL
SetCertificateAcls failed. ErrorCode: E_FAIL Can't ACL
FabricNode/ServerAuthX509FindValue, ErrorCode E_FAIL
I assigned write permissions on the private key store to NETWORK SERVICE and SYSTEM, and I also granted the gMSA account access to the private key store, but the errors still appear in the log.
On the other hand, everything looks fine; the cluster is up and running...
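For reference, a minimal PowerShell sketch of how such permissions can be granted on the key file (it assumes a CSP key stored under the default MachineKeys folder; the account names and the Read right are illustrative placeholders, not the exact commands I ran):
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Thumbprint -eq "{Cert Thumbprint}" }
$keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
$keyPath = Join-Path "$env:ProgramData\Microsoft\Crypto\RSA\MachineKeys" $keyName
# Grant access on the private key file to the service accounts (the gMSA name is a placeholder)
$acl = Get-Acl $keyPath
foreach ($account in "NETWORK SERVICE", "SYSTEM", "DOMAIN\gMSAccountName$") {
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule($account, "Read", "Allow")
    $acl.AddAccessRule($rule)
}
Set-Acl -Path $keyPath -AclObject $acl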
Here is my cluster config (security part):
"security":{
"ServerCredentialType":"X509",
"ClusterCredentialType":"Windows",
"WindowsIdentities":{
"ClustergMSAIdentity":"gMSAccountName#domain.com",
"ClusterSPN":"http/servicefabric"
},
"CertificateInformation":{
"ServerCertificate": {
"Thumbprint": "{Cert Thumbprint}",
"X509StoreName": "My"
},
"ClientCertificateThumbprints":[
{
"CertificateThumbprint":"{Cert Thumbprint}",
"IsAdmin":true
}
],
"X509StoreName": "My"
}
},
For the X.509 certificate creation I used OpenSSL 1.0.2k-fips (26 Jan 2017), following the steps from this article: https://gist.github.com/harishanchu/e82d759c0235379d1778f799992b5774
Could anyone clarify this issue?

It seems like you don't have a private key file in the MachineKeys folder.
To verify whether there is a physical key file in the folder, run this PowerShell script:
$certThumb = "1D6523F622E33DF46382D081BCA9AE9A2D8D78CC"
Try
{
    # Find the newest certificate in the LocalMachine\My store matching the thumbprint
    $WorkingCert = Get-ChildItem CERT:\LocalMachine\My |
        Where-Object { $_.Thumbprint -match $certThumb } |
        Sort-Object NotAfter -Descending |
        Select-Object -First 1 -ErrorAction STOP
    $TPrint = $WorkingCert.Thumbprint
    # Name of the key container file that should exist under the MachineKeys folder
    $rsaFile = $WorkingCert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
}
Catch
{
    "Error: unable to locate certificate for thumbprint $certThumb"
    Exit
}
if ($WorkingCert.PrivateKey)
{
    $WorkingCert.PrivateKey
}
else
{
    "No private key found"
}
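As an additional check you can verify that the key container file physically exists on disk (a sketch; the MachineKeys path below assumes a machine-level CSP key, which is the usual location):
$machineKeys = Join-Path $env:ProgramData "Microsoft\Crypto\RSA\MachineKeys"
if ($rsaFile -and (Test-Path (Join-Path $machineKeys $rsaFile)))
{
    "Private key file found: $rsaFile"
}
else
{
    "No physical key file found under $machineKeys"
}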
If you get the No private key found message, there is no private key file in the MachineKeys folder, even though the certificate properties may claim otherwise (there is a key icon and the message "You have a private key that corresponds to this certificate"). I don't know why, but this happens for some certificates.
As a workaround, follow these steps (a PowerShell sketch of the same steps follows the list):
1. Go to the local machine cert store and delete your certificate.
2. Import your certificate into the current user store first.
3. Then import your certificate into the local machine store.
4. Set access rights for the NETWORK SERVICE account.
If you follow the steps above, the private key file will be created in the MachineKeys folder and the errors will disappear.
Obviously you have to repeat these steps on every cluster node.
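Roughly, the same steps in PowerShell (assuming the certificate is available as a PFX file; the file name and password are placeholders):
# 1. Remove the existing certificate from the machine store
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Thumbprint -eq "{Cert Thumbprint}" } | Remove-Item
$pfxPassword = ConvertTo-SecureString "pfx-password" -AsPlainText -Force   # placeholder
# 2. Import into the current user store first
Import-PfxCertificate -FilePath .\mycert.pfx -CertStoreLocation Cert:\CurrentUser\My -Password $pfxPassword
# 3. Then import into the local machine store
Import-PfxCertificate -FilePath .\mycert.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword
# 4. Finally, grant NETWORK SERVICE access to the private key file (see the ACL sketch earlier in this post)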

Related

Apache Mina SSHD Client - Key based authentication - Unable to authenticate to server, using keys generated from puttygen

I've set up an SSH client with Apache Mina SSHD to SSH into a Linux server using a private key generated with PuTTYgen.
The key was generated with PuTTYgen and the public key was copied from the PuTTYgen UI into the authorized_keys file on the server.
The private key was exported in two ways:
Exported the private key from the PuTTYgen menu (Conversions – Export ssh.com key) and saved it as mykey.key
Clicked the 'Save private key' button in PuTTYgen and saved it as mykey.ppk
I'm using the following code to parse the keys in my client:
Collection<KeyPair> keyPairs = PuttyKeyUtils.DEFAULT_INSTANCE.loadKeyPairs(null, Paths.get(certFilePath), FilePasswordProvider.of(certFilePassword));
If I pass the .ppk file to the code above, I get the following exception while parsing the key file:
Exception in thread "main" java.io.StreamCorruptedException: Negative block length requested: -1875473298
at org.apache.sshd.common.config.keys.loader.putty.PuttyKeyReader.read(PuttyKeyReader.java:72)
at org.apache.sshd.common.config.keys.loader.putty.PuttyKeyReader.readInt(PuttyKeyReader.java:61)
at org.apache.sshd.common.config.keys.loader.putty.RSAPuttyKeyDecoder.loadKeyPairs(RSAPuttyKeyDecoder.java:64)
at org.apache.sshd.common.config.keys.loader.putty.AbstractPuttyKeyDecoder.loadKeyPairs(AbstractPuttyKeyDecoder.java:270)
at org.apache.sshd.common.config.keys.loader.putty.AbstractPuttyKeyDecoder.loadKeyPairs(AbstractPuttyKeyDecoder.java:259)
at org.apache.sshd.common.config.keys.loader.putty.AbstractPuttyKeyDecoder.loadKeyPairs(AbstractPuttyKeyDecoder.java:216)
at org.apache.sshd.common.config.keys.loader.putty.AbstractPuttyKeyDecoder.loadKeyPairs(AbstractPuttyKeyDecoder.java:161)
at org.apache.sshd.common.config.keys.loader.putty.AbstractPuttyKeyDecoder.loadKeyPairs(AbstractPuttyKeyDecoder.java:129)
at org.apache.sshd.common.config.keys.loader.KeyPairResourceParser$2.loadKeyPairs(KeyPairResourceParser.java:166)
at org.apache.sshd.common.config.keys.loader.KeyPairResourceLoader.loadKeyPairs(KeyPairResourceLoader.java:157)
at org.apache.sshd.common.config.keys.loader.KeyPairResourceLoader.loadKeyPairs(KeyPairResourceLoader.java:148)
at org.apache.sshd.common.config.keys.loader.KeyPairResourceLoader.loadKeyPairs(KeyPairResourceLoader.java:139)
at org.apache.sshd.common.config.keys.loader.KeyPairResourceLoader.loadKeyPairs(KeyPairResourceLoader.java:115)
at org.apache.sshd.common.config.keys.loader.KeyPairResourceLoader.loadKeyPairs(KeyPairResourceLoader.java:90)
at org.apache.sshd.common.config.keys.loader.KeyPairResourceLoader.loadKeyPairs(KeyPairResourceLoader.java:84)
at TestMina5keystringWithPwd.listFolderStructure(TestMina5keystringWithPwd.java:96)
at TestMina5keystringWithPwd.main(TestMina5keystringWithPwd.java:46)
If I pass the .key file, loadKeyPairs returns an empty collection and SSHD authentication simply fails with this exception:
Exception in thread "main" org.apache.sshd.common.SshException: No more authentication methods available
at org.apache.sshd.client.session.ClientUserAuthService.tryNext(ClientUserAuthService.java:330)
at org.apache.sshd.client.session.ClientUserAuthService.processUserAuth(ClientUserAuthService.java:264)
at org.apache.sshd.client.session.ClientUserAuthService.process(ClientUserAuthService.java:211)
at org.apache.sshd.common.session.helpers.AbstractSession.doHandleMessage(AbstractSession.java:462)
at org.apache.sshd.common.session.helpers.AbstractSession.handleMessage(AbstractSession.java:388)
at org.apache.sshd.common.session.helpers.AbstractSession.decode(AbstractSession.java:1399)
at org.apache.sshd.common.session.helpers.AbstractSession.messageReceived(AbstractSession.java:345)
at org.apache.sshd.common.session.helpers.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:64)
at org.apache.sshd.common.io.nio2.Nio2Session.handleReadCycleCompletion(Nio2Session.java:356)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:334)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:331)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.lambda$completed$0(Nio2CompletionHandler.java:38)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:318)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:37)
at java.base/sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:129)
at java.base/sun.nio.ch.Invoker$2.run(Invoker.java:221)
at java.base/sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:113)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Please let me know the right way to parse a private key generated by PuTTY in my SSHD client.
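Not a definitive answer, but one thing that may be worth checking is exporting the key from PuTTYgen in OpenSSH format (Conversions – Export OpenSSH key) and loading it with Mina's generic key-pair loader instead of the PuTTY-specific one. A minimal sketch, assuming a recent Apache Mina SSHD version where SecurityUtils.loadKeyPairIdentities and NamedResource.ofName are available; the file name and passphrase are placeholders:
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.KeyPair;
import org.apache.sshd.common.NamedResource;
import org.apache.sshd.common.config.keys.FilePasswordProvider;
import org.apache.sshd.common.util.security.SecurityUtils;

public class LoadOpenSshKey {
    public static void main(String[] args) throws Exception {
        // Placeholder path: key exported via PuTTYgen Conversions -> Export OpenSSH key
        Path keyPath = Paths.get("mykey_openssh");
        try (InputStream in = Files.newInputStream(keyPath)) {
            Iterable<KeyPair> keys = SecurityUtils.loadKeyPairIdentities(
                    null,                                   // no session context
                    NamedResource.ofName(keyPath.toString()),
                    in,
                    FilePasswordProvider.of("passphrase")); // key passphrase, if any
            for (KeyPair kp : keys) {
                System.out.println("Loaded key algorithm: " + kp.getPublic().getAlgorithm());
            }
        }
    }
}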

tls unsigned certificate when using terraform

The microstack.openstack project recently enabled/required TLS authentication as outlined here. I am working on deploying an OpenStack cluster to MicroStack using a terraform example here. As a result of the change, I receive a "certificate signed by unknown authority" error when trying to create an OpenStack networking client data source.
data "openstack_networking_network_v2" "terraform" {
name = "${var.pool}"
}
The error I get when calling terraform plan:
Error: Error creating OpenStack networking client: Post "https://XXX.XXX.XXX.132:5000/v3/auth/tokens": OpenStack connection error, retries exhausted. Aborting. Last error was: x509: certificate signed by unknown authority
with data.openstack_networking_network_v2.terraform,
on datasources.tf line 1, in data "openstack_networking_network_v2" "terraform":
1: data "openstack_networking_network_v2" "terraform" {
Is there a way to ignore the certificate error, so that I can successfully use terraform to create the openstack cluster? I have tried updating the generate-self-signed parameter, but I haven't seen any change in behavior:
sudo snap set microstack config.tls.generate-self-signed=false
I think the insecure provider parameter is what you are looking for:
(Optional) Trust self-signed SSL certificates. If omitted, the OS_INSECURE environment variable is used.
Try:
provider "openstack" {
insecure = true
}
Disclaimer: I haven't tried that.
The problem was that I did not source the admin-openrc.sh file that I had downloaded from the horizon web page:
$ source admin-openrc.sh
I faced the same problem; in case it helps, here is my contribution:
sudo snap get microstack config.tls
Key Value
config.tls.cacert-path /var/snap/microstack/common/etc/ssl/certs/cacert.pem
config.tls.cert-path /var/snap/microstack/common/etc/ssl/certs/cert.pem
config.tls.compute {...}
config.tls.generate-self-signed true
config.tls.key-path /var/snap/microstack/common/etc/ssl/private/key.pem
In your terraform directory, copy the contents of each of these files into a local file:
cat /var/snap/microstack/common/etc/ssl/certs/cacert.pem : copy/paste -> cacert.pem
cat /var/snap/microstack/common/etc/ssl/certs/cert.pem : copy/paste -> cert.pem
cat /var/snap/microstack/common/etc/ssl/private/key.pem : copy/paste -> key.pem
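Equivalently, you can copy the files directly (a sketch; it assumes the terraform directory is your current directory and that you have rights to read the snap's private key):
sudo cp /var/snap/microstack/common/etc/ssl/certs/cacert.pem ./cacert.pem
sudo cp /var/snap/microstack/common/etc/ssl/certs/cert.pem ./cert.pem
sudo cp /var/snap/microstack/common/etc/ssl/private/key.pem ./key.pem
sudo chown "$USER" cacert.pem cert.pem key.pem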
Then create a main.tf file in your terraform directory:
provider "openstack" {
  user_name   = "admin"
  tenant_name = "admin"
  password    = "pass"                        # get it with: sudo snap get microstack config.credentials.keystone-password
  auth_url    = "https://host_ip:5000/v3"
  #insecure   = true                          # uncomment this and comment out the cacert_file + key lines
  cacert_file = "/terraform_dir/cacert.pem"
  #cert       = "/terraform_dir/cert.pem"     # if needed
  key         = "/terraform_dir/key.pem"
  region      = "microstack"                  # or regionOne
}
Finally, run terraform plan / terraform apply.

Configure .pfx certificate

I'm working with .NET 5.0 and I get these errors when I deploy to the hosting server. After a while, my website returns HTTP error 500 because of them. I created the certificate with OpenSSL and the user profile set to true, but when I try to add the certificate I get these errors.
What should I do about this?
warn: Microsoft.AspNetCore.DataProtection.Repositories.EphemeralXmlRepository[50]
Using an in-memory repository. Keys will not be persisted to storage.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[59]
Neither user profile nor HKLM registry available. Using an ephemeral key repository. Protected data will be unavailable when application exits.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {f071590d-e902-4b6f-bbbe-d27d7415d96b} may be persisted to storage in unencrypted form.
crit: Microsoft.AspNetCore.Hosting.Diagnostics[6]
Application startup exception
Internal.Cryptography.CryptoThrowHelper+WindowsCryptographicException: The system cannot find the file specified.
My startup is like this:
services.AddDataProtection()
    .SetApplicationName("MyProjectName")
    .ProtectKeysWithCertificate(new X509Certificate2(certificate, "password",
        X509KeyStorageFlags.MachineKeySet
        | X509KeyStorageFlags.PersistKeySet
        | X509KeyStorageFlags.Exportable)) // my bad line of code
    .UseCryptographicAlgorithms(
        new AuthenticatedEncryptorConfiguration()
        {
            EncryptionAlgorithm = EncryptionAlgorithm.AES_256_CBC,
            ValidationAlgorithm = ValidationAlgorithm.HMACSHA256
        })
    .PersistKeysToFileSystem(new DirectoryInfo(keysFolder)) // shared network folder for key location
    .SetDefaultKeyLifetime(TimeSpan.FromDays(600));
The problem was that IIS was out of date on the server side. The provider updated the server and the problem was resolved.

Problem getting complete .pem from ansible letsencrypt / acme_certificate module

I was using Ansible 2.4 and included the letsencrypt module in one of my roles, hoping to get a complete .pem format file at the end (key, chain, cert). There was no problem generating the key or using the CSR to request the new cert, and no problem with the challenge, but when everything was done I was only getting the certificate back, no chain.
When I tried to use them, Apache would fail to start saying that the key and the cert did not match. I assumed that this was because I didn't include the chain which was missing.
According to the docs here: https://docs.ansible.com/ansible/latest/modules/acme_certificate_module.html the chain|chain_dest and fullchain|fullchain_dest parameters weren't added until Ansible 2.5. So I upgraded to Ansible 2.7 (via git), and I'm still running into the exact same error...
FAILED! => {
"changed": false,
"msg": "
Unsupported parameters for (letsencrypt) module: chain_dest, fullchain_dest
Supported parameters include: account_email, account_key, acme_directory, agreement,
challenge, csr, data, dest, remaining_days"
}
I've tried the aliases and current names for both but nothing is working. Here is my current challenge-response call:
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
  letsencrypt:
    account_key: /etc/ssl/lets_encrypt.key
    account_email: ###########.###
    csr: /etc/ssl/{{ myhost.public_hostname }}.csr
    dest: /etc/ssl/{{ myhost.public_hostname }}.crt
    chain_dest: /etc/ssl/{{ myhost.public_hostname }}.int
    fullchain_dest: /etc/ssl/{{ myhost.public_hostname }}.pem
    challenge: dns-01
    acme_directory: https://acme-v01.api.letsencrypt.org/directory
    remaining_days: 60
    data: "{{ le_com_challenge }}"
  tags: sslcert
The documentation says that this is valid, but the error response does not include chain|chain_dest or fullchain|fullchain_dest as valid parameters.
From the docs, I would expect this call to result in the new certificate being created (.crt), the chain being created (.int), and the fullchain being created (.pem).
Any help would be appreciated.
Should have waited 5 minutes... it seems the newer parameters are only available under the newer module name acme_certificate, even though the docs say letsencrypt is a valid alias. As soon as I updated the module name, it worked.
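For reference, the same task with only the module name changed (a sketch based on the call above):
- name: Let the challenge be validated and retrieve the cert and intermediate certificate
  acme_certificate:
    account_key: /etc/ssl/lets_encrypt.key
    account_email: ###########.###
    csr: /etc/ssl/{{ myhost.public_hostname }}.csr
    dest: /etc/ssl/{{ myhost.public_hostname }}.crt
    chain_dest: /etc/ssl/{{ myhost.public_hostname }}.int
    fullchain_dest: /etc/ssl/{{ myhost.public_hostname }}.pem
    challenge: dns-01
    acme_directory: https://acme-v01.api.letsencrypt.org/directory
    remaining_days: 60
    data: "{{ le_com_challenge }}"
  tags: sslcert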

rsyslogd-2291: imrelp: could not activate relp listner

I'm trying to configure rsyslog tls with relp but keep getting errors.
I'm using RHEL 7.2 with rsyslog 8.15.
I do manage to send messages using RELP + TLS but without using the certificates. When I add the certificates I get the following error:
Jan 20 11:00:17 ip-10-0-0-114 rsyslogd-2353: imrelp[514]: error 'Failed to set certificate trust files [gnutls error -64: Error while reading file.]', object 'lstn 514' - input may not work as intended [v8.15.0 try http://www.rsyslog.com/e/2353 ]
Jan 20 11:00:17 ip-10-0-0-114 rsyslogd-2291: imrelp: could not activate relp listner, code 10031 [v8.15.0 try http://www.rsyslog.com/e/2291 ]
Server conf:
module(load="imrelp" ruleset="relp")
input(type="imrelp" port="514" tls="on"
tls.caCert="/home/ec2-user/rsyslog/ca.pem"
tls.myCert="/home/ec2-user/rsyslog/server-cert.pem"
tls.myPrivKey="/home/ec2-user/rsyslog/server-key.pem"
tls.authmode="name"
tls.permittedpeer=["client.example.co"]
)
ruleset(name="relp") {
action(type="omfile" file="/var/log/relptls2")
}
The following is the client configuration:
module(load="omrelp")
action(type="omrelp" target="10.0.0.114" port="514" tls="on"
tls.caCert="/home/ec2-user/rsyslog/ca.pem"
tls.myCert="/home/ec2-user/rsyslog/client-cert.pem"
tls.myPrivKey="/home/ec2-user/rsyslog/client-key.pem"
tls.authmode="name"
tls.permittedpeer=["server.example.co"]
)
When I remove the tls certificate fields from the server configuration, I get this client error:
Jan 20 10:35:29 ip-10-0-0-206 rsyslogd-2353: omrelp[10.0.0.114:514]:
error 'Failed to set certificate trust file [gnutls error -64: Error
while reading file.]', object 'conn to srvr 10.0.0.114:514' - action
may not work as intended [v8.15.0 try http://www.rsyslog.com/e/2353 ]
Help would be really appreciated, as I've been stuck with this for a long time.
Thanks!
The gnutls error -64: Error while reading file message means one of two things:
The certificate's actual path is different from what is in the configuration file.
The rsyslog service cannot read the certificates because of a permission problem.
In case of a permission issue, you can move the certificates under /etc/rsyslog.d (see the sketch below).
In case of a path issue, just fix the path :)
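A rough sketch of the permission fix, assuming the certificates currently live under /home/ec2-user/rsyslog with the file names from the server configuration above (adjust owner, modes and paths to your setup):
sudo mkdir -p /etc/rsyslog.d/certs
sudo cp /home/ec2-user/rsyslog/ca.pem /home/ec2-user/rsyslog/server-cert.pem /home/ec2-user/rsyslog/server-key.pem /etc/rsyslog.d/certs/
sudo chown root:root /etc/rsyslog.d/certs/*
sudo chmod 644 /etc/rsyslog.d/certs/ca.pem /etc/rsyslog.d/certs/server-cert.pem
sudo chmod 600 /etc/rsyslog.d/certs/server-key.pem
# update tls.caCert / tls.myCert / tls.myPrivKey in the rsyslog config to the new paths, then:
sudo systemctl restart rsyslog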