LESS files not loaded in FIPS mode - less

Once FIPS mode is enabled on the web server, the LESS files are no longer loaded in the client browser; the requests return a 500 error.
I am not sure whether the LESS compiler uses any managed cryptography algorithms.
Has anyone already enabled FIPS mode with LESS? Any help would be appreciated.
The same thing happens for the jQuery resource files.

Solved:
The IIS application pool was running under the default identity for the hosted projects. Once FIPS was enabled, that identity somehow stopped working properly, so the application pool could not run and return results.
After adding a domain service account and tying it to the application pool, everything started working correctly and all files requested by any browser are served again.
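For reference, tying a specific account to the application pool can be done with appcmd; the pool name and account below are placeholders:

    %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.identityType:SpecificUser /processModel.userName:"DOMAIN\svc_webapp" /processModel.password:"<password>"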

Related

Event ID 36887, A fatal alert was received from the remote endpoint. The TLS protocol defined fatal alert code is 40

This results from an outbound connection to Equifax's new TLS 1.2-enabled URL.
Background:
Servers: Windows Server 2012 R2, .NET 4.6.2, all TLS 1.x versions enabled in the Test, Stage, and Production tiers per this. IIS configurations match between servers (app pools, code, and IIS settings, except for tier-specific configuration).
Servers are load balanced via Citrix NetScaler, but this site uses port 80/HTTP with no HTTPS configuration.
Both tiers use the same Equifax URL, but with tier-specific credentials.
The Situation:
Production will not communicate with their site; we get the error above.
Our stage environment has no problem communicating.
What we have done:
- Validated that the TLS registry settings match (the values we checked are listed below)
- Swapped the prod web.config to the Stage server and the communication worked, so it seems unlikely that it is a web.config issue in production.
- Validated .NET versions
- Checked the LSA FIPS registry setting (set to 0)
- Checked for wonky updates known to cause issues
We are going to set up a network trace, but for the moment we are at a bit of a loss. I would appreciate any insights as to what I might be missing.
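For reference, these are roughly the registry values we validated; a sketch from memory, not an exhaustive list:

    HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client
        Enabled = 1 (DWORD)
        DisabledByDefault = 0 (DWORD)
    HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server
        Enabled = 1 (DWORD)
        DisabledByDefault = 0 (DWORD)
    HKLM\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy
        Enabled = 0 (DWORD)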
Developers had to do the following:
- Added the specification to target 4.6 per Microsoft recommendations.
- Updated some other .NET references in the web.config to point specifically to 4.6.2.
- Made changes in some older pieces of code to make them 4.6.2-compliant.
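The relevant web.config change was roughly the following (assuming 4.6.2 is the installed framework version); as far as I understand, targeting 4.6 or later in httpRuntime is what opts the application into the newer TLS defaults:

    <system.web>
      <compilation targetFramework="4.6.2" />
      <httpRuntime targetFramework="4.6.2" />
    </system.web>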

Is it possible to run ASP.NET 5 site directly on Kestrel in Azure WebApps?

I have checked that the Server header in the web response is IIS when I deploy ASP.NET 5 to an Azure Web App, so I guess the IIS platform handler is used to redirect requests to Kestrel. I am wondering whether it is possible to run directly on Kestrel, and what benefits/drawbacks that would have (probably regardless of whether it is in Azure or not). I suppose it would be a bit faster since IIS would be excluded from the pipeline, but the overhead should not be too large, I suppose...
On Azure Web Apps, you cannot bypass IIS.
But in the general case, you can definitely run Kestrel directly. It is, after all, just dnx web, and it is (almost) exactly what the cross-platform versions (Linux, OS X) will end up using.
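For example, a project.json of that era exposes Kestrel as a command, something along these lines (the exact string depends on which beta you are on; newer betas use just "Microsoft.AspNet.Server.Kestrel"):

    "commands": {
      "web": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://localhost:5000"
    }

Running dnx web then starts Kestrel directly on the configured URL.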
What you lose from not using IIS
- Security hardening (Kestrel is a much newer component than IIS)
- Easy setup of SSL
- A kernel-mode module that handles file caching and other things (kernel mode = faster)
- Application monitoring/keep-alive (what happens if Kestrel crashes?)
- Multiple hostnames reusing a single port (80)
- etc.
What you gain from not using IIS
- Complete control over your process
- Higher overall performance
- Simpler installation/execution
What you should do if you choose not to use IIS
If you are OK with the "lose" points above, I would still host Kestrel behind a reverse proxy such as NGINX. Kestrel was made to be "production ready", but it is not NGINX or IIS.
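If you go the NGINX route, a minimal reverse-proxy block would be something like this (port and server name are placeholders):

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://localhost:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }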
It will not keep itself alive as far as I know.
If I missed anything, please let me know.
Your question is a bit ambiguous, as it asks at the same time about Azure Web Apps and about the general case. @Maxime answered the general part, so I'll answer the Azure Web Apps part.
It is not possible to bypass IIS in Azure Web Apps. Stacks that normally run without IIS are typically handled using HttpPlatformHandler (as is the case for ASP.NET 5) or, in the case of Node, some variant of that (iisnode).
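For context, the web.config that dnu publish generates for an ASP.NET 5 app (and that Azure Web Apps relies on) looks roughly like this; the processPath/arguments values are placeholders for whatever your publish output uses:

    <system.webServer>
      <handlers>
        <add name="httpPlatformHandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified" />
      </handlers>
      <httpPlatform processPath="..\approot\web.cmd" arguments="" stdoutLogEnabled="false" startupTimeLimit="3600" />
    </system.webServer>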

Web service SSL handshake fails in production environment unless SSL debugging enabled

Scenario: calling a client web service over SSL (https) with mutual SSL authentication. Different service endpoint URLs and certs (both keystore and truststore) for the test vs. production environments. Both the test and production environments run clustered Tomcat/JBoss. The production environment has load balancing/BigIP and runs Blade and non-Blade machines.
The truststore is set at startup (using -Djavax.net.ssl.trustStore=value). The keystore is set using System.setProperty("javax.net.ssl.keyStore", "value") in Java code. The web service call is made using Axis2. All works fine in the test environment, but when we moved to the production environment (6 servers), it appears the certs are not being forwarded for the handshake. Here's what we've done:
- In the test environment, the handshake using the test versions of the certs has been working all along, with no SSL debugging enabled.
- Confirmed in the test environment that the handshake with the client's production endpoint succeeds (the production certs, both ours and theirs, are fine) -- this was done using -Djavax.net.debug=handshake,ssl.
- Confirmed that the error condition occurs on all 6 production servers.
- Took one server out of the cluster, turned on SSL debugging for just that one (with a restart), hit it directly: the handshake works!
- Switched to a different server without the debugging turned on: the handshake error condition occurs.
- Turned debugging on on that second server (with a restart), hit it directly: the handshake works!
From the evidence, it seems like the debugging being enabled somehow causes the certificates to be properly retrieved/conveyed, although that makes no sense! I wonder whether the enabled debugging somehow makes the system pay attention to the System.setProperty call and ignore it otherwise. However, in the local and test environments, the handshake worked without debugging enabled.
Do I maybe need to set the keystore at server startup, the way I'm setting the truststore? I have been avoiding that because the keystore differs for each of our test environments (16 of them).
Turns out that the debug setting was a red herring. What actually bit us was that there is an existing client with an SSL/basic-authentication web service that we call when one of their users logs in. Since the keystore isn't relevant in that context, the javax.net.ssl.keyStore property doesn't get set -- but the SSL exchange still tries to load a keystore (and ends up not loading any certs). Unfortunately, even if the javax.net.ssl.keyStore value is changed afterwards, it does not get reloaded, so calls to the other client's web service sent along no keystore certs.
The solution was to set the keyStore property at server startup rather than at the point of the web service call. If at some point in the future we need to be able to use different keyStores in different contexts, it looks like we'd need to implement a custom SocketFactory.
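In our case the startup options ended up looking roughly like this (paths and passwords are placeholders):

    -Djavax.net.ssl.trustStore=/path/to/truststore.jks
    -Djavax.net.ssl.trustStorePassword=<password>
    -Djavax.net.ssl.keyStore=/path/to/keystore.jks
    -Djavax.net.ssl.keyStorePassword=<password>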

Does JBoss cache authentication information?

When testing various authentication solutions (my own LoginModule, etc.) in JBoss, it seemed to me that sometimes, when I redeployed a change or otherwise provoked the login form to show, JBoss didn't actually call the authentication module.
Just wondering if there is some type of short term caching going on?
I tested both from a web application (taking care to delete cookies etc) and from a fat RMI java client.
Of course, if I restarted JBoss, the full authentication process was followed.
Is there a cache, and if so, can it be disabled for development purposes?
Yes, JBoss caches authentication information by default for a few minutes.
To disable caching, set DefaultCacheTimeout to 0 in the configuration for the JaasSecurityManagerService. The configuration is in the "jboss-service.xml" file.
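For example, in conf/jboss-service.xml the relevant mbean would look roughly like this, with the timeout attribute added or changed (the mbean's other attributes are omitted here):

    <mbean code="org.jboss.security.plugins.JaasSecurityManagerService"
           name="jboss.security:service=JaasSecurityManager">
      <attribute name="DefaultCacheTimeout">0</attribute>
    </mbean>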
For more info and various ways to flush the cache, see CachingLoginCredentials at jboss.org.

How does System.Net.Sockets perform its DNS lookups in the context of finding a WCF service?

I have a Web application and a WCF service hosted on the same Windows 2003 development server. They each have their own IIS website node responding to drs.displayscreen.web and drs.displayscreen.service host headers respectively. The hosts file contains entries for both headers pointing back to 127.0.0.1. The web site has a service reference to drs.displayscreen.service.
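That is, the hosts file contains something like:

    127.0.0.1    drs.displayscreen.web
    127.0.0.1    drs.displayscreen.service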
Both applications work perfectly when their application pool uses the 'Network Service' account.
I need to perform some COM processing under the hood on the service so I want to run the applications under a customised identity. Both sites run on a new application pool.
When I change the application pool identity to use a new windows account created for the purpose, I get the following (inner) exception:
[EndpointNotFoundException: Could not connect to http://drs.displayscreen.service/Handler.svc. TCP error code 10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.98.2:8080. ]
192.168.98.2:8080 is the address of a DNS server that is no longer in use. It is not referenced anywhere in the solution. It is not referenced by ipconfig at all.
I have made sure that the new account is a member of IIS_WPG and I have run aspnet_regiis -ga . I have also given the account explicit permission to read the hosts file.
Why does the application attempt to use the defunct DNS server to resolve the temporary URL (drs.displayscreen.service) instead of using the hosts file entry? It has to be a permissions issue of some sort, because the problem does not occur when running under the Network Service account. Help!!
Well, it appears that the answer might involve a bug in the .NET Framework. I found a blog post that clued me in to the fact that the MS .NET implementation of SocketCache.GetSocket might cache invalid sockets, and another one that suggests a workaround/hack in the form of an explicit don't-use-proxies configuration setting.
We don't actually use a proxy server in the environment where this problem cropped up, but it appears that SocketCache.GetSocket is overridden or behaves differently when the don't-use-proxies setting is in place. Strangely, removing the setting causes the problem to come back, so obviously the SocketCache is not repaired when a valid IP/hostname is discovered and successfully used. According to the author of the first post mentioned above, the bug does not exist in Mono. :)
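For reference, the don't-use-proxies workaround mentioned above is roughly this web.config fragment:

    <system.net>
      <defaultProxy enabled="false" />
    </system.net>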