asp.net core data protection key encrypt - asp.net-core

In this link :
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-5.0#data-protection
it says "If data protection isn't configured, the keys are held in memory and discarded when the app restarts." I don't want that to happen, so I configured data protection in Startup.cs:
services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(@"PATH-HERE"));
and when I started the app to test it, a warning showed up in the logs: "No XML encryptor configured. Key {GUID} may be persisted to storage in unencrypted form."
I found out that I need to use one of the ProtectKeysWith* methods to encrypt the key, but because I'm publishing the app to a Linux server, I can't use ProtectKeysWithDpapi or ProtectKeysWithDpapiNG (they can only be used on Windows servers), so the only option left was X.509.
Basically, I did some searching and found that I can use these commands to create a self-signed X.509 certificate:
"C:\Program Files\Git\usr\bin\openssl.exe" genrsa -out private.key 2048
"C:\Program Files\Git\usr\bin\openssl.exe" req -new -x509 -key private.key -out publickey.cer -days 2000
"C:\Program Files\Git\usr\bin\openssl.exe" pkcs12 -export -out idp.pfx -inkey private.key -in publickey.cer
and I can add this certificate in the startup like this :
services
    .AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(@"PATH-TO-SAVE-KEYS"))
    .SetDefaultKeyLifetime(TimeSpan.FromDays(90))
    .SetApplicationName("APPNAME-HERE")
    .ProtectKeysWithCertificate(new X509Certificate2(@"CERTIFICATE-PATH", "CERTIFICATE-PASSWORD"));
So my questions are: do I even need to encrypt the keys? If I should, is my solution valid? Can I use it in production without any problems? (Keep in mind that I'm going to use a Linux server for my app.)
Update 1:
I did more digging through Stack Overflow questions and found this:
https://stackoverflow.com/a/48867984/14951696
Apparently, using a self-signed certificate (like what I was doing) is fine as long as you are using it internally. I will update again after I have published my app, in case anyone has the same question.
Update 2:
I have decided to use Windows servers, and I have had no problems using the self-signed certificate to encrypt the keys. If anything comes up, I will update again.

My two cents on this: I don't know what the problem is with what you found, but if that approach doesn't work out, you can use the solution below.
In order to have a shared Data Protection key, you need to enforce it explicitly. One note: at startup, if the key doesn't exist in the store, a new one is created with an associated expiration. The idea is to save the XElement and its name to a storage location from which the value can be retrieved at startup.
At startup:
services.Configure<KeyManagementOptions>(options =>
{
    IDataProtectionRepo dataProtection = services.BuildServiceProvider().GetRequiredService<IDataProtectionRepo>();
    options.NewKeyLifetime = DateTime.Now.AddYears(10) - DateTime.Now; // lifetime of a newly created key
    options.XmlRepository = new DataProtectionKeyRepository(dataProtection);
});
where DataProtectionKeyRepository is an implementation of IXmlRepository:
public class DataProtectionKeyRepository : IXmlRepository
{
    private readonly IDataProtectionRepo dataProtectionRepo;

    public DataProtectionKeyRepository(IDataProtectionRepo dataProtectionRepo)
    {
        this.dataProtectionRepo = dataProtectionRepo;
    }

    public IReadOnlyCollection<XElement> GetAllElements()
    {
        return new ReadOnlyCollection<XElement>(dataProtectionRepo.GetAll().Select(k => XElement.Parse(k.XmlData)).ToList());
    }

    public void StoreElement(XElement element, string friendlyName)
    {
        dataProtectionRepo.AddOrUpdate(new ProtectionKeyModel { Name = friendlyName, XmlData = element.ToString() });
    }
}
The communication class:
public class ProtectionKeyModel
{
    public string Name { get; set; }
    public string XmlData { get; set; }
}
And the storage repo can be a DB, the file system, cloud storage, whatever works for you; implement the interface below however you like:
public interface IDataProtectionRepo
{
    IEnumerable<ProtectionKeyModel> GetAll();
    void AddOrUpdate(ProtectionKeyModel protectionKeyModel);
}
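For illustration, here is a minimal file-system-backed implementation of that interface. The FileDataProtectionRepo class, its one-file-per-key layout, and the folder handling are my own sketch, not part of the original answer:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Linq;

// Model and interface as defined in the answer above.
public class ProtectionKeyModel
{
    public string Name { get; set; }
    public string XmlData { get; set; }
}

public interface IDataProtectionRepo
{
    IEnumerable<ProtectionKeyModel> GetAll();
    void AddOrUpdate(ProtectionKeyModel protectionKeyModel);
}

// Sketch: one XML file per key, named after the key's friendly name.
public class FileDataProtectionRepo : IDataProtectionRepo
{
    private readonly string folder;

    public FileDataProtectionRepo(string folder)
    {
        this.folder = folder;
        Directory.CreateDirectory(folder);
    }

    public IEnumerable<ProtectionKeyModel> GetAll() =>
        Directory.EnumerateFiles(folder, "*.xml")
                 .Select(f => new ProtectionKeyModel
                 {
                     Name = Path.GetFileNameWithoutExtension(f),
                     XmlData = File.ReadAllText(f)
                 });

    public void AddOrUpdate(ProtectionKeyModel key) =>
        File.WriteAllText(Path.Combine(folder, key.Name + ".xml"), key.XmlData);
}
```

A database-backed variant would implement the same two methods against a keys table; Data Protection calls GetAllElements at startup and StoreElement whenever it creates a key.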

Related

Host name check in Custom Trust Manager

We have a java client that allows both secure and non-secure connections to LDAP hosts.
It comes as part of a software suite which
has its own server component.
We are good with non-secure connections but need to switch to secure only.
The trusted public certificates are maintained (root+intermediate+host are copy pasted into one PEM file) in a
central location with the server component external to the clients.
The custom trust manager downloads the externally held trusted certificates on demand
and builds the trusted certificate chain. This way, I guess, it avoids pre-saving the trusted certificate chain in each client.
Our LDAP hosts are load balanced and that setup has not gone well with the trust manager. When we investigated, we found two questionable lines
in the code.
1. An environment variable to bypass the host name verification:
if ("T".equals(System.getenv("IGNORE_HOSTNAME_CHECK"))) return true;
It seems to be doing something similar to the snippet below, which I have seen elsewhere:
HostnameVerifier allHostsValid = new HostnameVerifier() {
    public boolean verify(String hostname, SSLSession session) {
        return true;
    }
};
HttpsURLConnection.setDefaultHostnameVerifier(allHostsValid);
2. The host name check relies on the CN value of the subject alone:
if (this.tgtHostname.equalsIgnoreCase(leafCn)) return true;
I have skimmed through some RFCs related to TLS and have come across SNI, SAN dNSName entries, and MITM warnings,
but my rudimentary knowledge is not enough to make a case one way or the other.
Any advice on improvements (or against using this code altogether) around the lines labelled H1 and H2 in the comments will be greatly valued.
I intend to pass it on to the right entity later.
The cut-down version of checkServerTrusted() of the custom trust manager is pasted below.
public void checkServerTrusted(X509Certificate[] certsRcvdFromTgt, String authType) throws CertificateException
{
    // Some stuff
    // Verify that the last certificate in the chain corresponds to the tgt server we want to access.
    checkLastCertificate(certsRcvdFromTgt[certsRcvdFromTgt.length - 1]);
    // Some more stuff
}

private boolean checkLastCertificate(X509Certificate leafCert) throws CertificateException
{
    // need some advice here ... (H1)
    if ("T".equals(System.getenv("IGNORE_HOSTNAME_CHECK"))) return true;
    try
    {
        String leafCn = null;
        X500Principal subject = leafCert.getSubjectX500Principal();
        String dn = subject.getName();
        LdapName ldapDN = new LdapName(dn);
        for (Rdn rdn : ldapDN.getRdns())
        {
            if (rdn.getType().equalsIgnoreCase("cn"))
            {
                leafCn = rdn.getValue().toString();
                break;
            }
        }
        // need some advice here ... (H2)
        if (this.tgtHostname.equalsIgnoreCase(leafCn)) return true;
    }
    catch (InvalidNameException e) { /* error handling */ }
    throw new CertificateException("Failed to verify that the last certificate in the chain is for target " + this.tgtHostname);
}
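For what it's worth on H2: matching the subject CN alone is deprecated; RFC 6125 directs clients to match the dNSName entries of the Subject Alternative Name extension, with CN as at most a legacy fallback. Since the rest of this page is .NET, here is a sketch of that logic in C# (it assumes .NET 7+ for X509SubjectAlternativeNameExtension, and deliberately omits wildcard matching); the equivalent Java fix would walk getSubjectAlternativeNames() the same way:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography.X509Certificates;

static bool MatchesHostname(X509Certificate2 leafCert, string targetHost)
{
    // Prefer SAN dNSName entries (RFC 6125). X509SubjectAlternativeNameExtension
    // is available from .NET 7 onward.
    var san = leafCert.Extensions
        .OfType<X509SubjectAlternativeNameExtension>()
        .FirstOrDefault();
    if (san != null)
    {
        return san.EnumerateDnsNames()
            .Any(dns => string.Equals(dns, targetHost, StringComparison.OrdinalIgnoreCase));
    }

    // Fall back to the CN only when no SAN extension is present (legacy behavior;
    // modern browsers ignore the CN entirely).
    string cn = leafCert.GetNameInfo(X509NameType.SimpleName, forIssuer: false);
    return string.Equals(cn, targetHost, StringComparison.OrdinalIgnoreCase);
}
```

As for H1, an environment variable that silently disables verification is a MITM foothold; if an escape hatch is truly needed, it should at least log loudly and be off by default.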

Bind certificate to a micro service in pod (mTLS)

I am trying to implement mTLS across the micro services in a cluster for secured communication. I know that service meshes are available for this purpose, but we would like to stay away from a service mesh and implement mTLS in the cluster ourselves.
After going through several posts, I am able to create the TLS secret and mount the volume as part of the service deployment. I can retrieve this certificate from the X509Store:
using var certificateStore = new X509Store(StoreName.Root, StoreLocation.LocalMachine, OpenFlags.ReadOnly);
if (certificateStore.Certificates.Any())
{
    var certs = certificateStore.Certificates.Find(X509FindType.FindByIssuerName, issuerName, true);
    if (certs.Any())
    {
        return certs.First();
    }
}
return null;
But now, when I try to assign this certificate as part of the Kestrel HTTPS defaults:
kestrelServerOptions.ConfigureHttpsDefaults(listenOptions =>
{
    Log.Information($"Configuring the https defaults.");
    if (serverCertificate == null)
    {
        return;
    }
    // self signed certificate
    Log.Information($"Before : Private key: {serverCertificate?.HasPrivateKey}");
    Log.Information($"After : Server certificate: {listenOptions.ServerCertificate?.Issuer}");
    listenOptions.ServerCertificate = serverCertificate; // throws an exception saying the server certificate should have the private key
    ....
My secret volume has both the .crt (PEM) and .key files stored as part of the TLS secret, but the service is not able to attach the private .key to the certificate.
I am really lost here and not able to proceed further.
I would really appreciate it if someone could help me get this certificate working for mTLS.
Thanks in advance.
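One approach worth trying, assuming the mounted secret exposes the standard kubernetes.io/tls file names tls.crt and tls.key (the mount paths below are assumptions): on .NET 5+ you can build the certificate directly from the PEM pair instead of going through the store, so that HasPrivateKey is true:

```csharp
using System.Security.Cryptography.X509Certificates;

// Combine the PEM certificate and its PEM private key into one X509Certificate2.
// The re-export through PKCS#12 is a common workaround for platforms where a
// cert built from PEM carries an ephemeral key that Kestrel may reject.
var pemCert = X509Certificate2.CreateFromPemFile(
    "/etc/tls/tls.crt",   // assumed mount path of the certificate
    "/etc/tls/tls.key");  // assumed mount path of the private key
var serverCertificate = new X509Certificate2(
    pemCert.Export(X509ContentType.Pfx));

// serverCertificate.HasPrivateKey is now true, so it can be assigned to
// listenOptions.ServerCertificate without the "must have a private key" error.
```

This sidesteps the store lookup entirely; the store-based Find only returns the public .crt unless the private key was imported alongside it.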

ServiceFabric Local Cluster SSL

I have some confusion about running SF on a local cluster with SSL, on localhost.
Microsoft wrote a great article about configuring HTTPS on your endpoints, but it works well only if you use their certificate generator, CertSetup.ps1. If you try to install your own PFX, it will not work.
First, I created a localhost self-signed cert with OpenSSL:
set OPENSSL_CONF=W:\OpenSSL-Win32\bin\openssl.cfg
openssl genrsa -out W:\CERTS\wepapissl.key -passout pass:1234567890 -aes256 2048
openssl req -x509 -new -key W:\CERTS\wepapissl.key -days 10000 -out W:\CERTS\wepapissl.crt -passin pass:1234567890 -subj /CN="localhost"
openssl pkcs12 -export -inkey W:\CERTS\wepapissl.key -in W:\CERTS\wepapissl.crt -out W:\CERTS\wepapissl.pfx -passout pass:0987654321 -passin pass:1234567890
Second, I created a default ASP.NET Core Web Application (Core 2.0, API template) and added code to configure Kestrel to use HTTPS:
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseKestrel(opt =>
        {
            opt.Listen(IPAddress.Any, port, listenOptions =>
            {
                listenOptions.UseHttps(GetCertificateFromStore());
            });
        })
        .UseStartup<Startup>()
        .Build();

private static X509Certificate2 GetCertificateFromStore()
{
    var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
    try
    {
        store.Open(OpenFlags.ReadOnly);
        var certCollection = store.Certificates;
        var currentCerts = certCollection.Find(X509FindType.FindBySubjectDistinguishedName, "CN=localhost", false);
        return currentCerts.Count == 0 ? null : currentCerts[0];
    }
    finally
    {
        store.Close();
    }
}
I got the expected result: a page with a warning about the website's security certificate (the ValuesController response shown with the warning).
Third, I created a Service Fabric application (stateless ASP.NET Core template) and changed my ServiceManifest.xml by editing the Endpoint section:
<Endpoint Protocol="https" Name="ServiceEndpoint" Type="Input" Port="8256" />
And added code for configure Kestrel to use HTTPS (class Web1 : StatelessService):
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new ServiceInstanceListener[]
    {
        new ServiceInstanceListener(serviceContext =>
            new KestrelCommunicationListener(serviceContext, "ServiceEndpoint", (url, listener) =>
            {
                ServiceEventSource.Current.ServiceMessage(serviceContext, $"Starting Kestrel on {url}");
                return new WebHostBuilder()
                    .UseKestrel(opt =>
                    {
                        int port = serviceContext.CodePackageActivationContext.GetEndpoint("ServiceEndpoint").Port;
                        opt.Listen(IPAddress.IPv6Any, port, listenOptions =>
                        {
                            listenOptions.UseHttps(this.GetCertificateFromStore());
                        });
                    })
                    .ConfigureServices(
                        services => services
                            .AddSingleton<StatelessServiceContext>(serviceContext))
                    .UseContentRoot(Directory.GetCurrentDirectory())
                    .UseStartup<Startup>()
                    .UseServiceFabricIntegration(listener, ServiceFabricIntegrationOptions.None)
                    .UseUrls(url)
                    .Build();
            }))
    };
}

private X509Certificate2 GetCertificateFromStore()
{
    var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
    try
    {
        store.Open(OpenFlags.ReadOnly);
        var certCollection = store.Certificates;
        var currentCerts = certCollection.Find(X509FindType.FindBySubjectDistinguishedName, "CN=localhost", false);
        return currentCerts.Count == 0 ? null : currentCerts[0];
    }
    finally
    {
        store.Close();
    }
}
Result: the code builds and deploys successfully on the local SF cluster, but the resource can't be reached.
P.S. I will repeat: if you install a new cert using the PowerShell script provided by Microsoft, CertSetup.ps1, it works well for the SF application. I tried to dig into the PS script, but I cannot understand what I missed.
P.P.S. I am new to creating certificates, but this seems strange:
1. I installed the PFX with CertSetup.ps1. All works well (the resource is reachable).
2. Then I exported the cert to a PFX with the private key and all extended properties.
3. Deleted it from the LocalMachine (My and Root) and CurrentUser (My) stores.
4. Installed the exported PFX into the LocalMachine (My and Root) and CurrentUser (My) stores.
5. Rebuilt and redeployed the code.
6. The resource cannot be reached.
Is it magic? Or am I missing something?
A couple of details were not clear enough for me anyway. The answer:
If you use your own generated certificate (openssl, makecert, etc.), you should grant private key access to NETWORK SERVICE.
To do this manually on your dev box, open certlm.msc, expand Personal -> Certificates, and right-click your cert. Select All Tasks -> Manage Private Keys and then add NETWORK SERVICE.
More here: https://github.com/Azure/service-fabric-issues/issues/714#issuecomment-381281701

Using Asymmetric Key on .Net Core

I am trying to run code from this sample
https://learn.microsoft.com/en-us/dotnet/standard/security/how-to-store-asymmetric-keys-in-a-key-container
under .NET Core 2.0 (web application).
However when I try to execute any line using
CspParameters
I get the following error
'CspParameters' requires Windows Cryptographic API (CAPI), which is not available on this platform.
Suggestions, please, on how to work around this.
Thanks.
.NET does not store cryptographic keys, that's ultimately a feature that is (or isn't) provided by the cryptographic platform it builds on top of.
To use CspParameters with .NET Core you have to run on Windows; because that's a very thin wrapper over the (old) Windows Cryptographic API. You can't use it in UAP, because UAP only allows the newer Cryptography: Next Generation (CNG) API.
macOS can store keys in a Keychain, but .NET Core doesn't provide API to read them out.
Linux (OpenSSL) does not have any key storage mechanism other than "save this to a file and load it again", but .NET Core does not support loading asymmetric keys from files.
The only way to accomplish your goal in a cross-platform mechanism is to have your asymmetric key associated with an X.509 certificate. If you build the X509Certificate2 object for which HasPrivateKey returns true you can save it to a PFX/PKCS#12 file and then load from that file; or you can add it to an X509Store instance (the "My" store for CurrentUser is the one that works best across the platforms) and then read it back from the X509Store instance.
Despite the page you referenced claiming to be written in 2017, what it really means is the content was moved from its previous location on msdn.microsoft.com on that date. The original page was written in 2008 (at least, that's the first hit on web.archive.org), so it long predated .NET Core.
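To make the PFX round trip described above concrete, here is a minimal sketch; the subject name and password are illustrative placeholders, and CertificateRequest.CreateSelfSigned (available since .NET Core 2.0) is used only to obtain a certificate whose HasPrivateKey is true:

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

// Sketch: persist an asymmetric key cross-platform by associating it with an
// X.509 certificate and round-tripping it through a PFX (PKCS#12) blob.
using RSA rsa = RSA.Create(2048);
var req = new CertificateRequest("CN=key-container-demo", rsa,
    HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
using X509Certificate2 cert = req.CreateSelfSigned(
    DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddYears(1));

// Save: the PFX bytes can go to a file, or the cert into an X509Store.
byte[] pfx = cert.Export(X509ContentType.Pfx, "demo-password");

// Later (or on another machine): load the PFX and use the private key again.
using var reloaded = new X509Certificate2(pfx, "demo-password");
Console.WriteLine(reloaded.HasPrivateKey); // True
```

The same `reloaded` object could instead be added to the CurrentUser "My" X509Store and read back later, as the answer suggests.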
You can now do it cross-platform, and it works as long as you are on .NET Core 3.0 or higher and you add the latest System.Security.Cryptography.Cng NuGet package. (NB: this will ONLY work if your project is NOT multi-targeted; it can only target netcoreapp3.0.)
using (ECDsa key = ECDsa.Create())
{
    key.ImportPkcs8PrivateKey(Convert.FromBase64String(privateKey), out _);
    return Jose.JWT.Encode
    (
        payload: payload,
        key: key,
        algorithm: JwsAlgorithm.ES256,
        extraHeaders: extraHeader
    );
}
So I just wanted to offer another option we found once we encountered this error. The CspParameters error is related to RSACryptoServiceProvider, which has some issues with cross-platform .NET Core. We found a GitHub issue that suggested using the RSA.Create() method instead. I was using a Bouncy Castle library that still uses RSACryptoServiceProvider; at the time of writing this answer, it looked like this:
public static RSA ToRSA(RsaPrivateCrtKeyParameters privKey)
{
    RSAParameters rp = ToRSAParameters(privKey);
    RSACryptoServiceProvider rsaCsp = new RSACryptoServiceProvider();
    rsaCsp.ImportParameters(rp);
    return rsaCsp;
}
So we just replaced it with a private method in the class that looked like this.
private RSA ToRSA(RsaPrivateCrtKeyParameters parameters)
{
    RSAParameters rp = DotNetUtilities.ToRSAParameters(parameters);
    return RSA.Create(rp);
}
This ran on Linux with no errors. Bouncy Castle probably just needs to update their libs.
Use this method to import a public key from a PEM string; make sure to install the BouncyCastle.NetCore NuGet package:
public static RSACryptoServiceProvider ImportPublicKey(string pem)
{
    PemReader pr = new PemReader(new StringReader(pem));
    AsymmetricKeyParameter publicKey = (AsymmetricKeyParameter)pr.ReadObject();
    RSAParameters rsaParams = DotNetUtilities.ToRSAParameters((RsaKeyParameters)publicKey);
    RSACryptoServiceProvider csp = new RSACryptoServiceProvider();
    csp.ImportParameters(rsaParams);
    return csp;
}
And then you can encrypt your data as shown below
public static string Encryption(string data, string publickey)
{
    var testData = Encoding.GetEncoding("iso-8859-1").GetBytes(data);
    using (var rsa = ImportPublicKey(publickey))
    {
        try
        {
            var encryptedData = rsa.Encrypt(testData, false);
            var base64Encrypted = Convert.ToBase64String(encryptedData);
            return base64Encrypted;
        }
        finally
        {
            rsa.PersistKeyInCsp = false;
        }
    }
}

Disabling encryption in Windows Identity Foundation

Can I disable encryption of the request security token response and only manage signatures?
I'm creating a custom STS extending Microsoft.IdentityModel.SecurityTokenService.SecurityTokenService, based on the demos in the WIF SDK, and I cannot manage to set it up without encryption.
I just ran the "Add STS Reference" wizard in Visual Studio, selecting the option to create a new STS. The template that the tool generated does add support for token encryption, but if no cert is supplied, then it is disabled (I left all the default comments):
protected override Scope GetScope( IClaimsPrincipal principal, RequestSecurityToken request )
{
    ValidateAppliesTo( request.AppliesTo );
    //
    // Note: The signing certificate used by default has a Distinguished name of "CN=STSTestCert",
    // and is located in the Personal certificate store of the Local Computer. Before going into production,
    // ensure that you change this certificate to a valid CA-issued certificate as appropriate.
    //
    Scope scope = new Scope( request.AppliesTo.Uri.OriginalString, SecurityTokenServiceConfiguration.SigningCredentials );
    string encryptingCertificateName = WebConfigurationManager.AppSettings[ "EncryptingCertificateName" ];
    if ( !string.IsNullOrEmpty( encryptingCertificateName ) )
    {
        // Important note on setting the encrypting credentials.
        // In a production deployment, you would need to select a certificate that is specific to the RP that is requesting the token.
        // You can examine the 'request' to obtain information to determine the certificate to use.
        scope.EncryptingCredentials = new X509EncryptingCredentials( CertificateUtil.GetCertificate( StoreName.My, StoreLocation.LocalMachine, encryptingCertificateName ) );
    }
    else
    {
        // If there is no encryption certificate specified, the STS will not perform encryption.
        // This will succeed for tokens that are created without keys (BearerTokens) or asymmetric keys.
        scope.TokenEncryptionRequired = false;
    }
    // Set the ReplyTo address for the WS-Federation passive protocol (wreply). This is the address to which responses will be directed.
    // In this template, we have chosen to set this to the AppliesToAddress.
    scope.ReplyToAddress = scope.AppliesToAddress;
    return scope;
}
I created a custom security token handler and overrode its GetEncryptingCredentials method to return null, as in the following lines, and it works:
public class MyCustomSecurityTokenHandler : Saml11SecurityTokenHandler
{
    public MyCustomSecurityTokenHandler() : base() { }

    protected override EncryptingCredentials GetEncryptingCredentials(SecurityTokenDescriptor tokenDescriptor)
    {
        return null;
    }
}
Then, in the SecurityTokenService class, I override GetSecurityTokenHandler, returning the custom class created before:
protected override SecurityTokenHandler GetSecurityTokenHandler(string requestedTokenType)
{
    MyCustomSecurityTokenHandler tokenHandler = new MyCustomSecurityTokenHandler();
    return tokenHandler;
}