How to enable certain cipher-suites in WildFly? - ssl

I want to explicitly enable certain cipher-suites on my WildFly application server.
To do so, I tried to edit the configuration in WildFly's standalone.xml.
Let's assume I want to enable the AES128-GCM-SHA256 cipher suite (name taken from the OpenSSL documentation).
I've edited the standalone.xml file of my WildFly server like this:
<https-listener name="listener" socket-binding="https" security-realm="ssl-realm" enabled-cipher-suites="AES128-GCM-SHA256"/>
WildFly boots up normally, but when I open the page in my browser an error message appears.
Chrome says:
ERR_SSL_PROTOCOL_ERROR
Firefox says:
ssl_error_internal_error_alert
I've tried this with WildFly 8.1 and 8.2.
Is there anybody out there who can give me advice on how to correctly enable certain cipher suites?
Regards Tom

You have to add an attribute called "enabled-cipher-suites" to the "https-listener" found under "subsystem undertow" -> "server".
An example for this configuration can be found here.
Unfortunately, that example is wrong when it comes to the value of this attribute. You must not use expressions such as "ALL:!MD5:!DHA"; instead, you have to list explicit cipher suites.
You have to refer to them by their standard SSL/TLS cipher suite names, not their OpenSSL names.
So instead of "AES128-GCM-SHA256" you have to write "TLS_RSA_WITH_AES_128_GCM_SHA256".
To make the confusion complete, you have to use "," instead of ":" as the delimiter if you want to name more than one suite.
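Put together, the listener from the question would then look roughly like this (a sketch reusing the realm and binding names from the question):
<https-listener name="listener" socket-binding="https" security-realm="ssl-realm" enabled-cipher-suites="TLS_RSA_WITH_AES_128_GCM_SHA256"/>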
Regards
Ben

I can confirm Ben's answer. The documentation for how to configure this is sparse. I would suggest supporting the following cipher suites:
TLS_RSA_WITH_AES_128_GCM_SHA256,
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
In addition, the 'ALL' tag does not work, and the best method is to list the suites you wish to include rather than the ones you wish to exclude, as the '!' marking does not appear to be supported.
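For example, to enable exactly those three suites, the attribute would look something like this (a sketch based on the listener from the question; note the comma delimiter):
<https-listener name="listener" socket-binding="https" security-realm="ssl-realm" enabled-cipher-suites="TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"/>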

Related

Jetty SSL problem after upgrade - unknown protocol

I upgraded Jetty 9.3.6 to Jetty 9.4.27 and now have a problem with the SSL connection.
When I run curl against a supported operation I get
error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol (error 35), whereas when I run the same command against the server running Jetty 9.3.6 everything works fine.
I configured the newer Jetty version the same way as the older one (including the keystore path and enabling HTTPS support). Do you have a clue what could have gone wrong during the upgrade, or what I could have missed?
Thanks a lot for your support.
Jetty 9.3.x and 9.4.x have different Cipher Suite exclusions.
Jetty 9.3.6.v20151106 looks like this ...
https://github.com/eclipse/jetty.project/blob/jetty-9.3.6.v20151106/jetty-util/src/main/java/org/eclipse/jetty/util/ssl/SslContextFactory.java#L248-L260
addExcludeProtocols("SSL", "SSLv2", "SSLv2Hello", "SSLv3");
setExcludeCipherSuites(
"SSL_RSA_WITH_DES_CBC_SHA",
"SSL_DHE_RSA_WITH_DES_CBC_SHA",
"SSL_DHE_DSS_WITH_DES_CBC_SHA",
"SSL_RSA_EXPORT_WITH_RC4_40_MD5",
"SSL_RSA_EXPORT_WITH_DES40_CBC_SHA",
"SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA",
"SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA");
Jetty 9.4.29.v20200521 looks like this ...
https://github.com/eclipse/jetty.project/blob/jetty-9.4.29.v20200521/jetty-util/src/main/java/org/eclipse/jetty/util/ssl/SslContextFactory.java#L119-L138
/**
* Default Excluded Protocols List
*/
private static final String[] DEFAULT_EXCLUDED_PROTOCOLS = {"SSL", "SSLv2", "SSLv2Hello", "SSLv3"};
/**
* Default Excluded Cipher Suite List
*/
private static final String[] DEFAULT_EXCLUDED_CIPHER_SUITES = {
// Exclude weak / insecure ciphers
"^.*_(MD5|SHA|SHA1)$",
// Exclude ciphers that don't support forward secrecy
"^TLS_RSA_.*$",
// The following exclusions are present to cleanup known bad cipher
// suites that may be accidentally included via include patterns.
// The default enabled cipher list in Java will not include these
// (but they are available in the supported list).
"^SSL_.*$",
"^.*_NULL_.*$",
"^.*_anon_.*$"
};
You'll want to evaluate why you need known vulnerable cipher suites to function.
Also, if you are using an IBM JVM, all bets are off, because IBM uses non-standard cipher suite names (unlike all other JVMs, which use the RFC-registered cipher suite names).
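If you genuinely need one of the newly excluded suites, the 9.4 defaults can be overridden on the SslContextFactory before it is handed to the connector. A minimal sketch (the keystore path, password and exclusion patterns are only illustrative; setExcludeCipherSuites replaces the defaults shown above, which widens what can be negotiated, so use it with care):
import org.eclipse.jetty.util.ssl.SslContextFactory;

SslContextFactory.Server sslContextFactory = new SslContextFactory.Server();
sslContextFactory.setKeyStorePath("/path/to/keystore.jks");   // illustrative path
sslContextFactory.setKeyStorePassword("changeit");            // illustrative password
// Keep only the clearly broken suites excluded instead of the full default list.
sslContextFactory.setExcludeCipherSuites("^.*_NULL_.*$", "^.*_anon_.*$");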

ERROR: Fetching the page failed because other errors. Twitter Cards Issue

When I go to https://cards-dev.twitter.com/validator and enter https://piktoria.com/blog/instagram-to-drive-sales/ or adlatch.com,
the validator says: Unable to render Card preview
ERROR: Fetching the page failed because other errors.
Because of that, when I share anything on Twitter I don't get any snippets. I tried Twitter support and they say:
"There's something wrong with your SSL setup - I am seeing SslHandshakeException: handshake alert: unrecognized_name at remote address in my debug log which I suspect means that your server name does not match the certificate, or something similar."
Can anyone help in solving this issue?
This problem happened to me as well, but I managed to fix it after Twitter told me to check my SSL settings.
The problem turned out to be the AES256/AES128 cipher configuration (on an Nginx web server): you need to enable AES128.
Here is the snippet:
ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384!AES128:!3DES';
As you can see in the snippet, AES128 is disabled (the '!' prefix).
You need to remove the '!' from AES128, so the configuration becomes:
ssl_ciphers 'ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:!3DES';
This post might be a bit old, but you can also get this error due to the TLS protocol configuration.
When I looked into my webserver error logs, I encountered the following error:
2021/05/12 19:41:31 [crit] 16585#16585: *44673 SSL_do_handshake() failed (SSL: error:14209102:SSL routines:tls_early_post_process_client_hello:unsupported protocol) while SSL handshaking, client: X.X.X.X, server: X.X.X.X:443
It looks like Twitter, as of now, does not support TLSv1.3 for fetching the cards, and the solution is to also enable TLSv1.2. If you use the intermediate configuration from Mozilla's ssl-config tool, that is good enough.
See https://ssl-config.mozilla.org/#server=nginx&version=1.17.7&config=intermediate&openssl=1.1.1d&guideline=5.6
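In nginx terms that boils down to allowing both protocol versions, roughly like this (a sketch following the Mozilla intermediate profile):
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;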

JBAS010153: Node identifier property is set to the default value. Please make sure it is unique

I am getting the following WARN message when I start my host, which is one of the Host Controllers (HC) attached to the Domain Controller (DC).
[Server:server-two] 14:06:13,822 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 33) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
My host-slave.xml has the following config:
<server-identities>
<!-- Replace this with either a base64 password of your own, or use a vault with a vault expression -->
<secret value="c2xhdmVfdXNlcl9wYXNzd29yZA=="/>
</server-identities>
I suspect this config is the reason (maybe I misunderstood), but I couldn't find a node identifier property; this is just the default secret value, which I assume could be the cause of this WARN message.
However, I didn't tell the HC to look up host-slave.xml. The command I ran to start my HC is:
[host-~-\-\-\bin]$./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
nnn.nn.nn.88 is my DC
Otherwise, please advise what the cause of the WARN message is.
Please also let me know the implications of this WARN message, and advise on the configuration required to overcome it and avoid any consequences it might lead to.
I'm new to WildFly and noticed this warning when I started it standalone from Eclipse (I'm doing the following tutorial: https://wwu-pi.github.io/tutorials/lectures/eai/020_tutorial_jboss_project.html).
The fix was to add a node-identifier to the core-environment in the subsystem:
<subsystem xmlns="urn:jboss:domain:transactions:2.0">
<core-environment node-identifier="meindertwillemhoving">
<process-id>
<uuid/>
</process-id>
</core-environment>
<recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
</subsystem>
This is in file [wildfly]\standalone\configuration\standalone.xml.
This is the same answer as https://developer.jboss.org/message/880136#880136
According to WFLY-10541, if you are using WildFly 14.0.0 or newer you can pass the following to the startup script to set the transaction node identifier:
-Djboss.tx.node.id=<some-unique-id>
Setting the node identifier to a unique value is only required for proper handling of XA transactions.
You can set it as follows in your XML configuration:
<subsystem xmlns="urn:jboss:domain:transactions:6.0">
<core-environment node-identifier="${jboss.tx.node.id}">
It needs to be a unique value up to 23 bytes long.
More about this here: http://www.mastertheboss.com/jboss-server/jboss-configuration/configuring-transactions-jta-using-jboss-as7-wildfly
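If you prefer the management CLI over editing the XML by hand, something like the following should also work (a sketch; run it from jboss-cli.sh against a running server and pick your own identifier):
/subsystem=transactions:write-attribute(name=node-identifier, value=node-1)
:reload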
Building on kaptan's answer, I added the following to the bottom of
bin/standalone.conf:
JAVA_OPTS="$JAVA_OPTS -Djboss.tx.node.id=`hostname -f`"
This way I don't have to remember to add "-Djboss.tx.node.id=" when starting WildFly by hand.
For this, <server-identities> is not the issue. In fact, it shouldn't be touched at all.
When JBoss is started in domain mode by domain.sh, by default there will be three servers: server-one, server-two and server-three. When you run one more HC attached to the DC, the default servers that are in auto-start mode will clash when the HC is started and attached to the DC, either by the following command:
[host-~-\-\-\bin]$./domain.sh -Djboss.domain.master.address=nnn.nn.nn.88 -b nnn.nn.nn.89 -bmanagement nnn.nn.nn.89 &
or by having the host configuration at the HC (the default host.xml, unless we choose a different one):
<domain-controller>
<remote host="${jboss.domain.master.address:nnn.nn.nn.88}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
In order to solve this, we need to set auto-start to false and create a new server group. To that group we add the DC-created server and the HC-created server; we can choose the same appropriate profile (either full-ha or full) for both created servers across DC and HC.
Then, when we start the group (after configuring the required heap size, including PermGen space), you can start both the DC and the HC, and in the DC you will see that both of your created servers are started in the created server group.
DC- Domain Controller
HC- Host Controller
To deploy, you need to upload the .ear or web archive in the application console. You cannot place it in the deployments folder with a .dodeploy file the way you do in standalone mode.
If you upload the next version of the same .ear, use the Replace option instead of the Remove & Add option in the upload process.

remove server header tomcat

I am able to change the value of org.apache.coyote.http11.Http11Protocol.SERVER to something else, so the HTTP response header contains something like:
Server:Apache
instead of the default
Server:Apache-Coyote/1.1
Using an empty value for org.apache.coyote.http11.Http11Protocol.SERVER does not remove the Server header.
How can I remove the Server header from my responses?
You can modify your Tomcat server.xml and add a "server" attribute set to whatever you want. The server attribute should be set on any HTTP or SSL connectors you have running. For example, below is a sample HTTP connector configuration from an example server.xml file:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" enableLookups="false" xpoweredby="false" server="Web"/>
Short answer: you can't remove the header, but you should modify it (see the other answers).
The Server header is defined in the RFC and is mandatory (it is not defined as optional in the spec).
Taken from http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.38
14.38 Server
The Server response-header field contains information about the software used by the origin server to handle the request. The field can contain multiple product tokens (section 3.8) and comments identifying the server and any significant subproducts. The product tokens are listed in order of their significance for identifying the application.
If the response is being forwarded through a proxy, the proxy application MUST NOT modify the Server response-header. Instead, it SHOULD include a Via field (as described in section 14.45).
Note: Revealing the specific software version of the server might allow the server machine to become more vulnerable to attacks against software that is known to contain security holes. Server implementors are encouraged to make this field a configurable option.
It should be possible since Tomcat 5.5. Check out this discussion: https://mail-archives.apache.org/mod_mbox/tomcat-users/200508.mbox/%3C42FBE8AA.1060401#joedog.org%3E
and this link:
https://tomcat.apache.org/tomcat-4.1-doc/config/coyote.html
Accordingly, the following should set the Server header to TEST; an empty value should make it empty.
<Connector className="org.apache.coyote.tomcat4.CoyoteConnector" port="8180"
           minProcessors="5" maxProcessors="75" enableLookups="true" acceptCount="10"
           debug="0" connectionTimeout="20000" useURIValidationHack="false" server="TEST"/>
Setting the Server header to Apache should, security-wise, be good enough in most cases. From that alone it won't be possible to infer the OS, the exact server version, or which modules (and which module versions) are running.
If you are using embedded Tomcat, you can try the code below.
import org.apache.catalina.startup.Tomcat;

final Tomcat server = new Tomcat();
// Disable the X-Powered-By header and blank out the Server header value on the default connector.
server.getConnector().setXpoweredBy(false);
server.getConnector().setAttribute("server", "");
For a web application, set the Server header from code. This worked for me in a Java Spring Boot project:
response.setHeader("Server", "none");
Try setting it from code if the application is deployed on Tomcat.
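For illustration, the same idea as a plain servlet filter so that it applies to every response (a sketch assuming the javax.servlet API; the class name and header value are arbitrary):
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

public class ServerHeaderFilter implements Filter {
    public void init(FilterConfig filterConfig) { }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        // Overwrite whatever Server header the container would otherwise send.
        ((HttpServletResponse) res).setHeader("Server", "none");
        chain.doFilter(req, res);
    }

    public void destroy() { }
}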

ColdFusion CFHTTP I/O Exception: peer not authenticated - even after adding certs to Keystore

I'm currently working with a payment processor. I can browse to the payment URL from our server, so it's not a firewall issue, but when I try to use CFHTTP I get an I/O Exception: peer not authenticated. I've downloaded and installed their latest security cert into the cacerts keystore and restarted CF, and am still getting the same error. Not only have I installed the provider's cert, but also the two other Verisign certificate authority certs in the certificate chain. The cert is one of the newer Class 3 Extended Validation certs.
Has anybody come across this before and found a solution?
A colleague of mine found the following after experiencing the same issue when connecting to a 3rd party.
http://www.coldfusionjedi.com/index.cfm/2011/1/12/Diagnosing-a-CFHTTP-issue--peer-not-authenticated
https://www.raymondcamden.com/2011/01/12/Diagnosing-a-CFHTTP-issue-peer-not-authenticated/
We used the solution provided in the comment by Pete Freitag further down the page. It works, but I think it should be used with caution, as it involves dynamically removing and adding back a particular property of the JsafeJCE provider.
For the sake of archiving, here is the original content of Pete Freitag's comment:
I've narrowed this down a bit further: removing the KeyAgreement.DiffieHellman from the RSA JsafeJCE provider (which causes the default Sun implementation to be used instead) seems to work, and probably has less of an effect on your server than removing the entire provider would. Here's how you do it:
<cfset objSecurity = createObject("java", "java.security.Security") />
<cfset storeProvider = objSecurity.getProvider("JsafeJCE") />
<cfset dhKeyAgreement = storeProvider.getProperty("KeyAgreement.DiffieHellman")>
<!--- dhKeyAgreement=com.rsa.jsafe.provider.JSA_DHKeyAgree --->
<cfset storeProvider.remove("KeyAgreement.DiffieHellman")>
Do your HTTP call, then put the key agreement back if you want:
<cfset storeProvider.put("KeyAgreement.DiffieHellman", dhKeyAgreement)>
I figured this out by using the SSLSocketFactory to create an HTTPS connection, which provided a bit more detail in the stack trace than when using cfhttp:
yadayadayada Caused by: java.security.InvalidKeyException: Cannot
build a secret key of algorithm TlsPremasterSecret at
com.rsa.jsafe.provider.JS_KeyAgree.engineGenerateSecret(Unknown
Source) at javax.crypto.KeyAgreement.generateSecret(DashoA13*..) at
com.sun.net.ssl.internal.ssl.DHCrypt.getAgreedSecret(DHCrypt.java:166)
Would be great if the exception thrown from ColdFusion was a bit less
generic.
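Putting Pete's steps together, the call site ends up looking roughly like this (a sketch; the URL and result variable are placeholders):
<cfset storeProvider.remove("KeyAgreement.DiffieHellman")>
<cfhttp url="https://payment.example.com/api" method="get" result="httpResult" />
<cfset storeProvider.put("KeyAgreement.DiffieHellman", dhKeyAgreement)>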
Specific to ColdFusion 8 with a web server using modern SSL ciphers:
I use ColdFusion 8 on JDK 1.6.45 and had problems with <cfdocument ...> giving me just red crosses instead of images, and also with cfhttp not being able to connect to the local web server over SSL.
My test script to reproduce this with ColdFusion 8 was:
<CFHTTP URL="https://www.onlineumfragen.com" METHOD="get" ></CFHTTP>
<CFDUMP VAR="#CFHTTP#">
This gave me the rather generic error "I/O Exception: peer not authenticated."
I then tried to add the server's certificates, including the root and intermediate certificates, to the Java keystore and also the ColdFusion keystore, but nothing helped.
Then I debugged the problem with
java SSLPoke www.onlineumfragen.com 443
and got
javax.net.ssl.SSLException: java.lang.RuntimeException: Could not generate DH keypair
and
Caused by: java.security.InvalidAlgorithmParameterException: Prime size must be
multiple of 64, and can only range from 512 to 1024 (inclusive)
at com.sun.crypto.provider.DHKeyPairGenerator.initialize(DashoA13*..)
at java.security.KeyPairGenerator$Delegate.initialize(KeyPairGenerator.java:627)
at com.sun.net.ssl.internal.ssl.DHCrypt.<init>(DHCrypt.java:107)
... 10 more
I then had the idea that the web server (Apache in my case) uses very modern SSL ciphers, is quite restrictive (Qualys score A+), and uses strong Diffie-Hellman groups with more than 1024 bits. Obviously, ColdFusion and Java JDK 1.6.45 cannot handle this.
The next step in the odyssey was to install an alternative security provider for Java, and I decided on Bouncy Castle.
see also http://www.itcsolutions.eu/2011/08/22/how-to-use-bouncy-castle-cryptographic-api-in-netbeans-or-eclipse-for-java-jse-projects/
I then downloaded bcprov-ext-jdk15on-156.jar from http://www.bouncycastle.org/latest_releases.html and installed it under C:\jdk6_45\jre\lib\ext (or wherever your JDK is; in an original install of ColdFusion 8 it would be under C:\JRun4\jre\lib\ext, but I use a newer JDK (1.6.45) located outside the ColdFusion directory). It is very important to put bcprov-ext-jdk15on-156.jar in the \ext directory (this cost me about two hours and some hair ;-).
Then I edited the file C:\jdk6_45\jre\lib\security\java.security (with WordPad, not with editor.exe!) and put in one line for the new provider. Afterwards the list looked like this:
#
# List of providers and their preference orders (see above):
#
security.provider.1=org.bouncycastle.jce.provider.BouncyCastleProvider
security.provider.2=sun.security.provider.Sun
security.provider.3=sun.security.rsa.SunRsaSign
security.provider.4=com.sun.net.ssl.internal.ssl.Provider
security.provider.5=com.sun.crypto.provider.SunJCE
security.provider.6=sun.security.jgss.SunProvider
security.provider.7=com.sun.security.sasl.Provider
security.provider.8=org.jcp.xml.dsig.internal.dom.XMLDSigRI
security.provider.9=sun.security.smartcardio.SunPCSC
security.provider.10=sun.security.mscapi.SunMSCAPI
(see the new one in position 1)
Then restart the ColdFusion service completely.
You can then run
java SSLPoke www.onlineumfragen.com 443 (or of course your URL!)
and enjoy the feeling...
And of course <cfhttp> and <cfdocument> worked like a charm, just as they did before we "hardened" our SSL ciphers in Apache!
What a night and what a day. Hopefully this will help (partially or fully) someone out there. If you have questions, just mail me at info ... (domain above).
Did you add it to the correct keystore? Remember that ColdFusion uses its own Java instance. I spent several hours on this once before remembering that fact. The one you want is somewhere like /ColdFusion8/runtime/jre/lib/security/.
Try this in CMD:
C:\ColdFusion9\runtime\jre\bin> keytool -import -keystore ../lib/security/cacerts -alias uniquename -file certificatename.cer
Note: We must choose the correct keystore inside the security folder, as there are other keystore files inside bin. If we import the certificate into those keystores it will not work.
What I just found out is referenced in this article: http://kb2.adobe.com/cps/400/kb400977.html and a few other places, after a lot of digging.
If you are looking at this article, you have most likely inserted your "server.crt" certificate into the proper root locations, and you have probably inserted it into the cacerts file in /ColdFusion9/runtime/jre/lib/security using the command
\ColdFusion9\runtime\jre\bin\keytool -import -v -alias someServer-cert -file someServerCertFile.crt -keystore cacerts -storepass changeit
(If you haven't done this, do it now.)
The thing I was running into was that I was setting up SSL on my localhost, so after doing these steps I was still getting the same error.
As it turns out, you also need to insert your "server.crt" into the "trustStore" file, commonly located in /ColdFusion9/runtime/jre/lib, using the command
\ColdFusion9\runtime\jre\bin\keytool -import -v -alias someServer-cert -file someServerCertFile.cer -keystore trustStore -storepass changeit
Hopefully this will save someone time.
I am using JRun. After trying a lot of different things, I came across a snippet of information that was applicable to my setup. I had configured an HTTPS SSLService (1) with my own truststore file. This made the piece of information in the following link important.
http://helpx.adobe.com/coldfusion/kb/import-certificates-certificate-stores-coldfusion.html
Note: If you are using JRun as the underlying J2EE server (either the Server Configuration or the Multiserver/J2EE with JRun Configuration) and have enabled SSL for the internal JRun Web server (JWS), you will need to import the certificate to the truststore defined in the jrun.xml file for the Secure JWS rather than the JRE key store. By default, the file is called "trustStore" and is typically located under jrun_root/lib for the Multiserver/J2EE with JRun configuration or cf_root/runtime/lib for the ColdFusion Server configuration. You use the same Java keytool to manage the trustStore.
Here is the excerpt from my jrun.xml file:
<service class="jrun.servlet.http.SSLService" name="SSLService">
<attribute name="port">8301</attribute>
<attribute name="keyStore">/app/jrun4/cert/cfusion.jks</attribute>
<attribute name="trustStore">/app/jrun4/cert/truststore.jks</attribute>
<attribute name="name">SSLService</attribute>
<attribute name="bindAddress">*</attribute>
<attribute name="socketFactoryName">jrun.servlet.http.JRunSSLServerSocketFactory</attribute>
<attribute name="interface">*</attribute>
<attribute name="keyStorePassword">cfadmin</attribute>
<attribute name="deactivated">false</attribute>
</service>
Once I imported the certificate into this truststore (/app/jrun4/cert/truststore.jks) it worked after restarting ColdFusion.
(1) http://helpx.adobe.com/legacy/kb/ssl-jrun-web-server-connector.html
Adding the cert to the keystore did not work for me on CF9 Enterprise. I ended up using the CFX tag CFX_HTTP5.
I realize this is a very old discussion, but since it still comes up near the top of a search for the "peer not authenticated" error in CF, I wanted to share that for most people, the simple solution is to update the JVM that CF uses. (More in a moment on how to do that.)
The cause of the problem is generally that the service BEING CALLED has made a change that requires a later version of TLS or SSL (and perhaps a change to the supported algorithms). Later JVMs offer that, while earlier ones did not. Since CF runs atop the JVM, it's the calls out of CF (via cfhttp, cfldap, cfmail, etc.) that "suddenly" start to fail.
And sure, sometimes a cert update is the answer (and even then, you have to do it carefully), but it's not always needed. And updating the JVM also gives other benefits, in terms of bug fixes, etc.
The only challenge is knowing what JVM your version of CF will support. (But even people still running on an old CF version have found that updating the JVM CF uses has solved this problem and not caused any others.)
I discuss all this in a 2019 post:
https://coldfusion.adobe.com/2019/06/error-calling-cf-via-https-solved-updating-jvm/
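For reference, changing the JVM that CF uses typically means installing a supported newer JDK and pointing java.home in jvm.config at it (a sketch; the path below is only illustrative, and the same change can be made in the CF Administrator under Server Settings > Java and JVM):
# cf_root/cfusion/bin/jvm.config (illustrative location)
java.home=C:/Java/jdk11.0.19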
Hope that may help someone.