I'm trying to import a certificate in PFX format into my GlassFish domain's keystore. The import appears to succeed, but the domain then refuses to start with the error "GRIZZLY0059: PortUnification exception."
I'm using GlassFish 3.1 with Grizzly 1.9.31-1. A few others have had this error, and one suggestion was to upgrade Grizzly (claiming this was a known bug, fixed in 1.9.43). pkg won't let me update Grizzly, though ("No updates necessary for this image"). Is there a way to force it to upgrade to a more recent version?
This may not even be the issue anyway, but I can't figure out what else it could be.
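For context, the import boils down to copying the key entry from the PFX into the domain's JKS keystore, roughly as in the sketch below; the file names, passwords, and the "s1as" alias are placeholders, not my real values.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;

public class PfxToJks {
    public static void main(String[] args) throws Exception {
        char[] pfxPass = "pfxPassword".toCharArray(); // placeholder
        char[] jksPass = "changeit".toCharArray();    // placeholder

        // Load the PFX (PKCS12) file containing the certificate and private key.
        KeyStore pfx = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("mycert.pfx")) {
            pfx.load(in, pfxPass);
        }

        // Load the domain's existing JKS keystore.
        KeyStore jks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("keystore.jks")) {
            jks.load(in, jksPass);
        }

        // Copy the first key entry across under a new alias ("s1as" here is just an example).
        String alias = pfx.aliases().nextElement();
        KeyStore.Entry entry = pfx.getEntry(alias, new KeyStore.PasswordProtection(pfxPass));
        jks.setEntry("s1as", entry, new KeyStore.PasswordProtection(jksPass));

        try (FileOutputStream out = new FileOutputStream("keystore.jks")) {
            jks.store(out, jksPass);
        }
    }
}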
I recently migrated from Gatling 3.3.1 to Gatling 3.4.0.
After the migration, everything works fine on my local machine but crashes in k8s with the following error:
Couldn't execute warm up request https://gatling.io
java.lang.IllegalArgumentException: TLSv1.3
at sun.security.ssl.ProtocolVersion.valueOf(ProtocolVersion.java:187)
at sun.security.ssl.ProtocolList.convert(ProtocolList.java:84)
at sun.security.ssl.ProtocolList.<init>(ProtocolList.java:52)
at sun.security.ssl.SSLEngineImpl.setEnabledProtocols(SSLEngineImpl.java:2081)
...
I migrated back to the working version.
I assumed from here that TLSv1.3 is switched on by default.
I searched for the appropriate setting in gatling-defaults.conf, but did not succeed.
I use Java 1.8 both locally and on the remote k8s cluster.
Please help me to resolve this issue!
Thanks in advance!
In order to support TLSv1.3, Gatling needs:
either to be able to load netty-tcnative (basically BoringSSL),
or to run on Java 11+, where TLSv1.3 is available.
We can see in the logs that the former fails. We can also see that netty_transport_native_epoll_x86_64 can't be loaded while netty_transport_native_epoll_x86 can. This means you're running on a 32-bit Linux. netty-tcnative/BoringSSL is only available on 64-bit.
The latter fails because, as you stated, you're running on Java 8.
We can probably improve things on our side, but you should switch to a 64-bit host.
Otherwise, you can enforce the list of supported protocols in gatling.conf; see https://github.com/gatling/gatling/blob/master/gatling-core/src/main/resources/gatling-defaults.conf#L57
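If you want to double-check which TLS versions a given JVM can actually negotiate before touching gatling.conf, a minimal check is the sketch below; it assumes a plain Oracle/OpenJDK runtime and nothing Gatling-specific.

import java.util.Arrays;
import javax.net.ssl.SSLContext;

public class TlsSupportCheck {
    public static void main(String[] args) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        // On older Java 8 builds this list stops at TLSv1.2; TLSv1.3 only
        // shows up on Java 11+ (or very recent Java 8 updates).
        System.out.println(Arrays.toString(ctx.getSupportedSSLParameters().getProtocols()));
    }
}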
I'm trying to use nginx-ingress to create a secure connection, but I'm getting this error:
I'm using helm chart stable/nginx-ingress version 1.34.2.
I've been searching for this kind of error. I've already configured ssl-ciphers and ssl-protocols and added more cipher suites so that the client and server have more ciphers in common, but I'm still getting this error.
The service I'm trying to build follows this flow:
I hope someone can figure this out or has any suggestions for my problem.
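In case it helps reproduce the problem, the sketch below is the kind of probe that shows which protocol and cipher the ingress actually negotiates; the host name is a placeholder for my ingress endpoint, and the code assumes nothing beyond the JDK.

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class HandshakeProbe {
    public static void main(String[] args) throws Exception {
        String host = "my-ingress.example.com"; // placeholder, not the real endpoint
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            socket.startHandshake();
            System.out.println("Protocol: " + socket.getSession().getProtocol());
            System.out.println("Cipher:   " + socket.getSession().getCipherSuite());
        }
    }
}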
A client of mine has trouble with TortoiseSVN. It was working fine until now; she made her last commit on Thursday, Feb. 23, 2013. But now she gets the following error:
OPTIONS SSL handshake failed: SSL error: sslv3 alert illegal parameter
She cannot access the Repository anymore. No update, no checkout, no log, etc.
It is difficult to locate the problem. It shows up with tsvn 1.7.4 and 1.7.11.
She cannot use tsvn with the ProjectRepository
She cannot use svn commandline client (http://www.sliksvn.com/en/download) with the ProjectRepository
She can use tsvn with a PlaygroundRepository on another Server
She can access ProjectRepository with IE and with Firefox
She can access ProjectRepository with SmartSvn
I can use tsvn in their network with the ProjectServer from my MacBook with Parallels.
I entirely uninstalled and reinstalled tsvn - no success.
I deleted %appdata%\Roaming\Subversion - no success.
As an act of desperation, I installed SmartSVN, which lets her work again, but this cannot be the solution.
It must be the combination of tsvn, her machine, and the ProjectRepository/server, since her machine works with the PlaygroundRepository on another server.
Any idea is highly welcome, in particular because it worked last week with tsvn 1.7.4.
So the only thing that might have changed is some updates on the Windows box.
Check for the installation of MS12-006 on the client. That hotfix broke a lot of things. Roll it back and see if connections are successful.
Basic S3 put-object calls suddenly stopped working (sometimes they succeed). They had been working for a long time.
It looks like an SSL cert issue.
Stack trace snippet:
org.jets3t.service.S3ServiceException: S3 PUT connection failed for '/s3_request_message-38afbd8e-7d65-428a-a708-5d34104ded95-4912660956668093023.xml'
at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:516)
at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRestPut(RestS3Service.java:800)
at org.jets3t.service.impl.rest.httpclient.RestS3Service.createObjectImpl(RestS3Service.java:1399)
at org.jets3t.service.impl.rest.httpclient.RestS3Service.putObjectImpl(RestS3Service.java:1317)
at org.jets3t.service.S3Service.putObject(S3Service.java:1661)
at org.jets3t.service.S3Service.putObject(S3Service.java:1914)
at com.amazon.lm.utils.aws.S3Box.putFile(S3Box.java:111)
at com.amazon.lm.engine.LMEngine.copyRequestS3(LMEngine.java:350)
at com.amazon.lm.engine.LMEngine.run(LMEngine.java:165)
at com.amazon.lm.engine.discover.DiscoveryEngine.run(DiscoveryEngine.java:156)
at com.amazon.lm.engine.discover.GoogleBaseSearch.run(GoogleBaseSearch.java:25)
at com.amazon.lm.ui.UIDiscoverTask.run(UIDiscoverTask.java:41)
at java.lang.Thread.run(Thread.java:662)
Caused by: javax.net.ssl.SSLPeerUnverifiedException: HTTPS hostname invalid: expected 'lm-requests-prod.s3.amazonaws.com', received '*.s3.amazonaws.com'
at org.apache.commons.httpclient.contrib.ssl.StrictSSLProtocolSocketFactory.verifyHostname(StrictSSLProtocolSocketFactory.java:293)
at org.apache.commons.httpclient.contrib.ssl.StrictSSLProtocolSocketFactory.createSocket(StrictSSLProtocolSocketFactory.java:215)
at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
at org.apache.commons.httpclient.MultiThreadedHttpConnectionManager$HttpConnectionAdapter.open(MultiThreadedHttpConnectionManager.java:1361)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:342)
... 12 more
It looks like Java does not like the wildcard domain presented as '*.s3.amazonaws.com'.
Per "Can Java connect to wildcard ssl...", wildcards can be problematic with Java.
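To see exactly what the endpoint presents, the sketch below opens a TLS connection and prints the server certificate's subject; the host name is taken from the stack trace above, and the code uses Java 7+ try-with-resources purely for brevity.

import java.security.cert.X509Certificate;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class PeerCertCheck {
    public static void main(String[] args) throws Exception {
        String host = "lm-requests-prod.s3.amazonaws.com"; // bucket host from the stack trace
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket(host, 443)) {
            socket.startHandshake();
            // The first certificate in the chain is the server's; its subject shows
            // whether a wildcard CN such as *.s3.amazonaws.com is being served.
            X509Certificate cert = (X509Certificate) socket.getSession().getPeerCertificates()[0];
            System.out.println("Subject: " + cert.getSubjectX500Principal());
        }
    }
}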
But as I said earlier, we had been using this for a long time and suddenly started facing this issue, and only intermittently.
We are using the following versions:
jdk: 1.6
jets3t: 0.7
openssl: 1.0
Has anyone faced this issue? If yes, is there any workaround?
This wasn't an issue with the AWSS3JavaClient code, based on the fact that this problem was happening both with our S3 library and with other Java S3 libraries, and the fact that SSL cert verification is done inside the JVM platform library code, not inside our S3 library code.
The problem is that our JVM's keystore didn't have the most recent certificate authorities (CAs) that allow the JVM to form a chain of trust for whatever cert we're getting from the S3 SSL endpoint. This is a fairly common problem with Java and SSL, since the JVM maintains its own keystore (i.e., it doesn't use certs from the OS).
If you face this problem, try reproducing the issue with other JVMs. Whenever customers have seen this issue in the past, it's been because their local JVM keystore (the keystore that ships with the JVM and is supposed to contain the current certs and CAs) has been out of date. Upgrading to the latest JVM version has always fixed this in the past.
Try upgrading your JVM to a recent version; it should help, because your keystore has most likely gone out of date. :)
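If you want to see what the JVM's trust store actually contains before upgrading, the sketch below lists the trusted entries; it assumes the default cacerts location and the default "changeit" password.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Collections;

public class CacertsInspector {
    public static void main(String[] args) throws Exception {
        // Default trust store location; java.home usually resolves to the JRE,
        // so this points at <jre>/lib/security/cacerts.
        String path = System.getProperty("java.home") + "/lib/security/cacerts";
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        try (FileInputStream in = new FileInputStream(path)) {
            trustStore.load(in, "changeit".toCharArray()); // "changeit" is the usual default password
        }
        System.out.println("Trusted entries in " + path + ": " + trustStore.size());
        for (String alias : Collections.list(trustStore.aliases())) {
            System.out.println("  " + alias);
        }
    }
}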
I have a simple cfhttp request (a login) going out to an SSL server:
<cfhttp url="https://www2.[domain].com/api/user/login" method="POST" port="443" >
<cfhttpparam type="formfield" name="username" value="[username]" >
<cfhttpparam type="formfield" name="password" value="[password]" >
</cfhttp>
The request fails before it begins, and the ColdFusion server says:
I/O Exception: peer not authenticated
Both development environments work smashingly. They receive the login session and then hand that to the collector process which successfully taps the remote web service for data.
After I spent a day trying to get the correct certificate into the ColdFusion stores, I had the bright idea to actually compare them to the working development environments. I looked at them (keytool -list), and they are identical.
Now that the obvious is ruled out, the questions I'm left with are twofold:
Is there some other certificate repository I need to check? Alternatively, is there a place where I can get ColdFusion to tell me which certificate repository it expects to find the certificate in (on the off chance that it can be, and has been, altered), if that is even possible?
Identify and correct whatever else could be causing this.
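For what it's worth, a small standalone check run with the same JRE can narrow down whether this is a trust-store problem or something ColdFusion-specific; this is a sketch, and the URL keeps the [domain] placeholder from the cfhttp tag above.

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class TrustCheck {
    public static void main(String[] args) throws Exception {
        // Same endpoint as the cfhttp call above; substitute the real domain.
        URL url = new URL("https://www2.[domain].com/api/user/login");
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        conn.connect();
        // If the JRE's trust store can't build a chain of trust, connect() fails
        // with an SSLHandshakeException before any HTTP status is returned.
        System.out.println("Cipher suite: " + conn.getCipherSuite());
        conn.disconnect();
    }
}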
Are the development and production environments the same? Are they all, for example, ColdFusion 9 Standard or ColdFusion 8 Enterprise?
In my experience, this error is usually caused by one of two things:
The administrator failed to install the certificate into the cacerts repository, or they installed it into the wrong one.
ColdFusion Enterprise and ColdFusion Developer edition (for both ColdFusion 8 and ColdFusion 9, I believe) have an issue between the bundled BSafe CryptoJ library and certain types of certificates (I have not yet been able to determine a pattern) that causes this error. There are some workarounds if this is the case.
First, I would explore the possibility that you are importing into the wrong certificate repository. It can be hard to tell which repository is being used. In your CF Admin, under "Settings Summary", you should be able to find the location of the JRE that is being used; it is listed under "Java Home". Take that directory and add lib/security to the end of it, and that should be the location of the cacerts file that is being used. I say "should" because I have seen at least one weird situation where it was not.
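A quick way to confirm that path is to ask the JRE itself; the sketch below is plain Java with nothing ColdFusion-specific, compiled and run with the same JRE that ColdFusion uses.

public class WhichCacerts {
    public static void main(String[] args) {
        // If javax.net.ssl.trustStore is set, that file wins; otherwise the JVM
        // falls back to <java.home>/lib/security/jssecacerts, then cacerts.
        String explicit = System.getProperty("javax.net.ssl.trustStore");
        String fallback = System.getProperty("java.home") + java.io.File.separator
                + "lib" + java.io.File.separator + "security" + java.io.File.separator + "cacerts";
        System.out.println("javax.net.ssl.trustStore = " + explicit);
        System.out.println("Default cacerts location = " + fallback);
    }
}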
I HAD the same problem; I tried everything and couldn't fix it. The strange thing is that everything worked fine and then suddenly stopped working. It might be a Java update on the server causing the problem, or a change in the certificate of the website the CFHTTP is trying to access.
Anyway, here is a link I set up for a "demo" of this problem:
http://www.viaromania.eu/https.cfm
As you can see, I am trying to access an HTTPS service using the CFHTTP tag, and it is not working. I deleted the certificate from C:\ColdFusion9\runtime\jre\lib\security\cacerts, generated a new one from the website URL, imported it back, installed "certman" under CFIDE/administrator, and checked the certificate; it's there... and it's listed in my test page.
If you scroll to the bottom of my test page, you'll see a similar CFHTTP call to https://www.google.com, and this works fine, even though there is no certificate installed on the server.
It is important to mention that the request works just fine on my development machine, where I also don't have any certificate installed...
AND THIS IS HOW I FIXED IT
1. Updated ColdFusion 9.0.2 with this - https://helpx.adobe.com/coldfusion/kb/cumulative-hotfix-1-coldfusion-902.html
2. Installed Java JDK 1.7.0_79 from here http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
3. Changed the Java Home in ColdFusion Administrator / Server Settings → Java and JVM from "C:\ColdFusion9\runtime\lib\jre" to "C:\Program Files\Java\jdk1.7.0_79\jre"
That's it. I don't know if it uses any certificate or not. They were installed in "C:\ColdFusion9\runtime\lib\jre\lib\security\cacerts" and were not moved from there or anything.