Service Fabric ApplicationManifest parameterized CertificateRefs? - ssl

I have a solution with multiple SF services. Some of them use HTTPS endpoints, so I have specs like these (details hidden or changed).
We are several developers on the team, and we are using self-signed certificates for local deployment.
<Parameters>
<Parameter Name="Api_SslCertHash" DefaultValue="<my-thumbprint-here!>" />
</Parameters>
and
<ServiceManifestImport>
<ServiceManifestRef ServiceManifestName="SomeAppPkg" ServiceManifestVersion="1.0.0" />
<Policies>
<EndpointBindingPolicy EndpointRef="ServiceEndpointHttps" CertificateRef="mycert" />
</Policies>
</ServiceManifestImport>
and then
<Certificates>
<EndpointCertificate X509FindValue="[Api_SslCertHash]" Name="mycert" />
</Certificates>
Now the problem is that we have this file checked in to Git, and since everyone has their own self-signed cert (note: I don't know if this has to be the case, maybe we could share certs, but the question remains anyway) the thumbprint is different per developer.
My question is: is it possible to have the thumbprint in an environment variable, or get it from another source, instead of changing it in the ApplicationManifest? I realise that could be hard in a cluster environment, but maybe when deploying locally?
I have a similar requirement for the ServiceManifest, where I would like to use different port numbers. I know I can override them in Local.1Node.xml, but it would be nice to be able to pick them up externally.

I don't see why you can't use a variable like you are doing in your last example, but that doesn't solve your multiple-developer problem.
The correct way to handle this is to generate a certificate, check it into source control and make everyone else use the same certificate. In fact, if you set up a secure cluster in production, anyone who wants to be able to view the Explorer will need the same certificate installed on their machine in order to authenticate.
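That said, if you do end up with per-developer thumbprints, you don't have to touch the manifest: [Api_SslCertHash] is an ordinary application parameter, and application parameters can be overridden at deployment time. Here is a minimal sketch using FabricClient from C#; the application name, type name, version, and the API_SSL_CERT_HASH environment variable are all assumptions, and the application type must already be provisioned on the cluster:

using System;
using System.Collections.Specialized;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

class DeployWithLocalThumbprint
{
    static async Task Main()
    {
        // The default constructor connects to the local development cluster.
        var client = new FabricClient();

        // The per-developer thumbprint comes from an environment variable,
        // so the DefaultValue checked in to Git never has to change.
        var parameters = new NameValueCollection
        {
            { "Api_SslCertHash", Environment.GetEnvironmentVariable("API_SSL_CERT_HASH") }
        };

        // Hypothetical application name, type name, and version.
        var description = new ApplicationDescription(
            new Uri("fabric:/MyApp"), "MyAppType", "1.0.0", parameters);

        await client.ApplicationManager.CreateApplicationAsync(description);
    }
}

The same override is available from PowerShell via the -ApplicationParameter hashtable of New-ServiceFabricApplication, so a per-developer environment variable (or a gitignored parameter file) keeps individual thumbprints out of the shared manifest.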


Are "normal" environment variables more secure than IIS environment variables?

I have an ASP.Net Core website running on IIS. I need to store some passwords that the site needs to access in production. No paid password storing systems are available to me. I chose to store my passwords in environment variables. So on the production machine I have:
a service account my_prod_service_account
an application pool MyProdAppPool that runs under my_prod_service_account
a website MyDotNetCoreSite that runs in the MyProdAppPool
Approach 1: Normal Environment Variables
I log in to the production machine as my_prod_service_account and set environment variables for this user in PowerShell:
[Environment]::SetEnvironmentVariable("Pwd1", "MyPrecioussss1", "User");
[Environment]::SetEnvironmentVariable("Pwd2", "MyPrecioussss2", "User");
After this MyDotNetCoreSite can read these environment variables.
Approach 2: system.webServer\aspNetCore Environment Variables
Something similar can be achieved with %WINDIR%\system32\inetsrv\config\applicationHost.config (the IIS configuration file) on the production machine. It can be edited manually or through the UI, but in the end it looks like this:
<configuration>
<location path="MyDotNetCoreSite">
<system.webServer>
<aspNetCore>
<environmentVariables>
<environmentVariable name="Pwd1" value="MyPrecioussss1" />
<environmentVariable name="Pwd2" value="MyPrecioussss2" />
</environmentVariables>
</aspNetCore>
</system.webServer>
</location>
</configuration>
After an iisreset, MyDotNetCoreSite can read these values as environment variables.
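Either way, the code reading them is identical; a minimal sketch (PasswordReader is a hypothetical class, and the default ASP.NET Core host builder registers the environment-variables configuration provider, so both routes below work):

using System;
using Microsoft.Extensions.Configuration;

public class PasswordReader
{
    private readonly IConfiguration _config;

    public PasswordReader(IConfiguration config) => _config = config;

    public (string Pwd1, string Pwd2) Read()
    {
        // Both calls see the same process environment, whether the variable
        // came from the user profile or from applicationHost.config.
        string pwd1 = Environment.GetEnvironmentVariable("Pwd1");
        string pwd2 = _config["Pwd2"]; // via the environment-variables provider
        return (pwd1, pwd2);
    }
}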
Question
I want to change my password storage method from Approach 1 to Approach 2. The former sets environment variables per user, the latter per site (which I think is neater). But I can't find enough documentation to judge whether Approach 2 has the same level of security as Approach 1. Setting a "normal" environment variable stores it in the registry at HKEY_Users\my_prod_service_account SID\Environment\Pwd1. Accessing the registry usually requires elevated permissions, and if someone breaks into it, we will have bigger problems than hackers knowing Pwd1. Is applicationHost.config as secure as the registry? Can I confidently store a password in it?
I can only offer some head-scratching questions/concerns:
I am curious: have you run set on a command line and checked whether any unwanted passwords are listed in the output in plaintext?
Also, if you store all passwords in one file as in Approach 2, that file makes a nice honeypot. I do not know how well the encryption Lex Li mentioned works.

How to get the secret certificate from C# in a Service Fabric application on Unix?

I have the following in my ApplicationManifest.xml:
<Principals>
<Users>
<User Name="IdentityService" AccountType="NetworkService" />
<User Name="ExplorerService" AccountType="NetworkService" />
</Users>
</Principals>
<Policies>
<SecurityAccessPolicies>
<SecurityAccessPolicy ResourceRef="IdentityCert" PrincipalRef="IdentityService" ResourceType="Certificate" />
<SecurityAccessPolicy ResourceRef="IdentityCert" PrincipalRef="ExplorerService" ResourceType="Certificate" />
</SecurityAccessPolicies>
</Policies>
<Certificates>
<SecretsCertificate X509FindValue="[IDENTITY_SERVICE_THUMBPRINT]" Name="IdentityCert" />
</Certificates>
On Windows clusters, I have been using the thumbprint to look the certificate up in the LocalMachine store
X509Certificate2 cert = X509.LocalMachine.My.Thumbprint.Find(options.Thumbprint, validOnly: false).FirstOrDefault();
without problems.
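(For reference, that fluent helper does roughly the equivalent of a plain X509Store lookup, sketched here:)

using System.Linq;
using System.Security.Cryptography.X509Certificates;

static class CertLookup
{
    // Plain-.NET equivalent of the helper call above.
    public static X509Certificate2 FindByThumbprint(string thumbprint)
    {
        using (var store = new X509Store(StoreName.My, StoreLocation.LocalMachine))
        {
            store.Open(OpenFlags.ReadOnly);
            return store.Certificates
                .Find(X509FindType.FindByThumbprint, thumbprint, validOnly: false)
                .OfType<X509Certificate2>()
                .FirstOrDefault();
        }
    }
}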
When deploying to a Unix cluster, I faced the following exception:
Unix LocalMachine X509Store is limited to the Root and CertificateAuthority stores.
I do understand what it is telling me; I can't use LocalMachine. But just to get this right, how would I locate the X509Certificate2 certificate on the Unix machines? (Is it a .NET Core or Service Fabric thing?)
From the docs:
Service Fabric generally expects X.509 certificates to be present in the /var/lib/sfcerts directory on Linux cluster nodes. This is true of cluster certificates, client certificates, etc. In some cases, you can specify a location other than the /var/lib/sfcerts folder for certificates.
and...
Certificates specified in the application manifest, for example, through the SecretsCertificate or EndpointCertificate elements, must be present in the /var/lib/sfcerts directory. The elements that are used to specify certificates in the application manifest do not take a path attribute, so the certificates must be present in the default directory. These elements do take an optional X509StoreName attribute. The default is "My", which points to the /var/lib/sfcerts directory on Linux nodes. Any other value is undefined on a Linux cluster. We recommend that you omit the X509StoreName attribute for apps that run on Linux clusters.
I haven't done SF on Linux in a while, so I don't have a script or snippet to help, but the docs should be straightforward.
The docs on Service Fabric Linux certificates are wrong - at least for a .NET Core service, StoreLocation.LocalMachine + StoreName.My provides no access to the cert files located at /var/lib/sfcerts. The docs may be correct for certificates used by the Service Fabric infrastructure, but they are misleading and plain wrong for SF services that require access to certificates.
The .NET Core document which acts as a public spec for X509Store support on Linux explicitly states that a new X509Store(StoreName.My, StoreLocation.LocalMachine) results in a CryptographicException, which is consistent with the original post and my experience.
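To make the failure mode concrete, this is the pattern that throws (a sketch; the exception text matches the one quoted above):

using System.Security.Cryptography.X509Certificates;

// .NET Core on Linux supports LocalMachine only for the Root and
// CertificateAuthority stores, so LocalMachine\My cannot be opened.
using (var store = new X509Store(StoreName.My, StoreLocation.LocalMachine))
{
    store.Open(OpenFlags.ReadOnly); // throws CryptographicException on Linux
}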
So, you'll have to come up with an alternate approach to obtain the certificates. Two approaches seem viable to me:
Copy the cert files to a location where the SF service account can pick them up on startup, and either read them directly or write them to new X509Store(StoreName.My, StoreLocation.CurrentUser) for subsequent use (a sketch follows after this list). You can use a SetupEntryPoint running under a principal with AccountType="LocalSystem", which executes as root on Linux; root is needed to read the files from /var/lib/sfcerts.
Obtain the certificate from another source, e.g. Key Vault. To secure this, you'll probably want to use the new Service Fabric support for Managed Service Identity.
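For the first approach, a minimal sketch, assuming the SetupEntryPoint has already exported the certificate as a PFX somewhere readable (the file name and password below are hypothetical):

using System.Security.Cryptography.X509Certificates;

// Load the exported certificate (hypothetical path and password), then
// persist it to CurrentUser\My, which .NET Core does support on Linux.
var cert = new X509Certificate2("/var/lib/sfcerts/identity.pfx", "pfx-password");

using (var store = new X509Store(StoreName.My, StoreLocation.CurrentUser))
{
    store.Open(OpenFlags.ReadWrite);
    store.Add(cert);
}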
Bottom line: It's definitely not trivial to obtain certificates from a Service Fabric service on Linux.

Which matcher should I use for a service hosted on localhost:{port} in a local Service Fabric cluster?

I have a question regarding Service Fabric and Traefik.
I have managed to successfully deploy the Traefik application to a local cluster (and actually out to my own Azure infrastructure too), alongside a service (MyService) in another application that I am trying to have Traefik (the reverse proxy, RP) sit in front of.
I can see the Traefik dashboard, and I can see a backend (seemingly indicating that it has successfully called the SF management API for my application and service).
I can also see an accompanying frontend, with some routing rules (matchers). However, for the life of me, I can't get a simple request through the RP to my service.
I can hit my service directly. Service Fabric (SF) says it's in a good state also.
My local SF cluster isn't secured, so that simplifies the .toml setup somewhat.
My service is hosted on localhost:9025 (the endpoint is exposed in the service manifest, and Kestrel in the API is set up on the same port).
Traefik is set up on port 5000 (as opposed to 80 - see below).
To hit a simple version check, explicitly, I would use http://localhost:9025/myservice/myaction/v1/version
Doing http://localhost:5000/myservice/myaction/v1/version gets me either a 404 or 503 (depending on what I'm doing with matcher/modifier).
I have modified the Traefik endpoint from port 80 to 5000 too, just to switch it up and avoid any port conflicts (I don't have any IIS sites up as it stands). Netstat confirms that nothing else is using the port either.
The matcher in the Service Manifest looks like this:
<Extensions>
<Extension Name="Traefik">
<Labels xmlns="http://schemas.microsoft.com/2015/03/fabact-no-schema">
<Label Key="traefik.frontend.rule">PathPrefix:/myservice</Label>
<Label Key="traefik.enable">true</Label>
</Labels>
</Extension>
</Extensions>
One last thing that would have really helped is the ability to see the "resolved" requests; that is, a request that comes into the RP and is then matched or modified, so that I can see what the RP actually resolves a request out to. Perhaps this already exists, but tweaking various logging settings didn't yield this info.
OK, so there is nothing wrong with the Service Manifest as far as Traefik is concerned; rather, it's the way the Endpoint is exposed in the manifest that is not understood by Traefik.
This won't work:
<Endpoint Name="MyService" Protocol="http" Type="Input" Port="9025" />
However, this will:
<Endpoint Name="MyService" UriScheme="http" Port="9025" />
(The other attributes I omitted can still be added, but this would seem to be the minimum needed for Traefik to enumerate it as a viable backend.)
A clear indication of correct wiring appears in the Traefik logs (this was previously absent):
Wiring frontend frontend-fabric:/MyApp/MyService to entryPoint http
And in the UI, the server URI is now displayed for the backend; again, this was not the case before.
Forgive me if this is documented somewhere, but I couldn't find anything. The only hint I had was that I considered the missing server URI suspicious, based on a screenshot on the setup website for Service Fabric and Traefik.
Another symptom is that the backend, if not wired up correctly, is displayed in red; when correctly configured, it is green.
As I say, it's all probably very obvious, but I lost many hours on this simple amendment.

Missing configuration for the issuer of security tokens error

I inherited an existing project without its development environment. I have UAT code and a backup of the Production database. I can run the site up locally via Visual Studio, but have hit an authentication problem trying to set up a fresh standalone DEV server on AWS (single server, no load balancer). The doco indicates the Prod environment is a dual-server setup with a load balancer.
The front-end site pages do display, although some search functionality is not working. On trying to log into the backend pages, Chrome returns "The xxx page isn't working. xxx redirected you too many times." Using developer tools, I can see the page redirecting back and forth between SWT?realm=... and sitefinity?wrap_defalted=true&wrap_access_token... On the second redirect, the response headers include "X-Authentication-Error: Missing configuration for the issuer of security tokens 'https://xxx/Sitefinity/Authenticate/SWT'".
I tried different values in the web.config lines:
<federatedAuthentication>
<wsFederation passiveRedirectEnabled="true" issuer="http://localhost" realm="http://localhost" requireHttps="true"/>
<cookieHandler requireSsl="false"/>
</federatedAuthentication>
but that actually made things worse so I have reverted.
I checked all the settings mentioned in http://docs.sitefinity.com/administration-switch-to-claims-based-authentication and they seem to be set correctly. I don't really know what else I can check to get this working.
I found http://docs.sitefinity.com/administration-configure-security, but it does not seem like these settings are set (I don't have access to the Prod server, so I can't confirm whether it is actually set up with load balancing). I am currently using a 30-day trial license, so I am not sure if this is contributing to the problem. The official license is in the process of being transferred by the client, and the domain name associated with it is different to the domain my new server is currently running on.
I am also running version 8 code on a version 9 install of Sitefinity. I wanted to get it working before I tried to upgrade the code. I think there was also an assembly-manifest mismatch on load when I tried upgrading my local version.
Found the solution: Don't mess with the SecurityConfig.config file.
<securityTokenIssuers>
<add key="B886AA7BFB5515BA63F577A44BBEB5C7AE674035514D128BC397346B11F4C97A" encoding="Hexadecimal" membershipProvider="Default" realm="http://localhost" />
</securityTokenIssuers>
<relyingParties>
<add key="B886AA7BFB5515BA63F577A44BBEB5C7AE674035514D128BC397346B11F4C97A" encoding="Hexadecimal" realm="http://localhost" />
</relyingParties>
Even though it is running on a server, the above lines should still point to localhost. It seems like these only need to be edited if you have a multi-server setup with an entirely separate STS.
I initially changed it to match the new domain name, but after some experimentation around adding localhost and HTTP variations, it seems like it works best with just localhost.
Even when I changed the web.config entry above to use the new domain as the issuer instead of localhost, and changed SecurityConfig.config to specify only the new domain as the realms, it didn't seem to work. I guess the authentication must try to hit localhost specifically.

ColdFusion SSL authentication failure

I have a simple cfhttp request (a login) going out to an SSL server:
<cfhttp url="https://www2.[domain].com/api/user/login" method="POST" port="443" >
<cfhttpparam type="formfield" name="username" value="[username]" >
<cfhttpparam type="formfield" name="password" value="[password]" >
</cfhttp>
The request fails before it begins, and the ColdFusion server says:
I/O Exception: peer not authenticated
Both development environments work smashingly. They receive the login session and then hand it to the collector process, which successfully taps the remote web service for data.
After I spent a day trying to get the correct certificate into the ColdFusion stores, I had the bright idea to actually compare them to the working development environments. I looked at them (keytool -list), and they are identical.
Now that the obvious has been ruled out, the questions I'm left with are twofold:
Is there some other certificate repository I need to check? Alternatively, is there a way to get ColdFusion to tell me which certificate repository it needs to find the certificate in (on the off chance that it can be, and has been, altered), if that is even possible?
What else could be causing this, and how do I identify and correct it?
Are the development and production environments the same? Are they all, for example, ColdFusion 9 Standard or ColdFusion 8 Enterprise?
In my experience, this error is usually caused by one of two things:
The administrator failed to install the certificate into the cacerts repository, or installed it into the wrong one.
ColdFusion Enterprise and ColdFusion Developer edition (for both ColdFusion 8 and ColdFusion 9, I believe) have an issue between the bundled BSafe CryptoJ library and certain types of certificates (I have not yet been able to determine a pattern) that causes this error. There are some workarounds if this is the case.
First, I would explore the possibility that you are importing into the wrong certificate repository. It can be hard to tell which repository is being used. In your CF Admin, under "Settings Summary", you should be able to find the location of the JRE that is being used; it is listed under "Java Home". Take that directory and append lib/security to it, and that should be the location of the cacerts file being used. I say should because I have seen at least one odd situation where it was not.
I had the same problem, tried everything, and couldn't fix it. The strange thing is that everything worked fine, then suddenly stopped working. It might be a Java update on the server causing the problem, or a change of the certificate on the website the CFHTTP is trying to access.
Anyway, here is a link I set up as a "demo" of this problem:
http://www.viaromania.eu/https.cfm
As you can see, I am trying to access an HTTPS service using the CFHTTP tag, and it is not working. I deleted the certificate from C:\ColdFusion9\runtime\jre\lib\security\cacerts, generated a new one from the website URL, imported it back, installed "certman" under CFIDE/administrator, and checked the certificate: it's there... and it's listed in my test page.
If you scroll to the bottom of my test page, you'll see a similar CFHTTP call to https://www.google.com, and this works fine, even though there is no certificate for it installed on the server.
It is important to mention that the request works perfectly on my development machine, where I also don't have any certificate installed...
AND THIS IS HOW I FIXED IT
1. Updated ColdFusion 9.0.2 with this - https://helpx.adobe.com/coldfusion/kb/cumulative-hotfix-1-coldfusion-902.html
2. Installed Java JDK 1.7.0_79 from here http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
3. Changed the Java Home in ColdFusion Administrator / Server Settings → Java and JVM from "C:\ColdFusion9\runtime\lib\jre" to "C:\Program Files\Java\jdk1.7.0_79\jre"
That's it. I don't know if it uses any certificates or not. They were installed in "C:\ColdFusion9\runtime\lib\jre\lib\security\cacerts" and were not moved from there or anything.