I have a functional app running in a Docker container on port 3000. I have Selenium tests that work when I set my host to http://localhost:3000. I created a container to launch the Selenium tests, and it fails with the following error:
WebDriverError: Reached error page: about:neterror?e=nssFailure2&u=https://app:3000/&c=UTF-8&f=regular&d=An error occurred during a connection to app:3000.
SSL received a record that exceeded the maximum permissible length.
Error code: <a id="errorCode" title="SSL_ERROR_RX_RECORD_TOO_LONG">SSL_ERROR_RX_RECORD_TOO_LONG</a>
A snippet of my docker-compose.yml:
app:
  build:
    context: .
    dockerfile: Dockerfile.dev
  volumes:
    - ./:/usr/src/app/
  ports:
    - "3000:3000"
    - "3001:3001"
  networks:
    tests:
selenium-tester:
  build:
    context: .
    dockerfile: Dockerfile.selenium.tests
  volumes:
    - ./:/usr/src/app/
    - /dev/shm:/dev/shm
  depends_on:
    - app
  networks:
    tests:
I replaced the host with http://app:3000, but Firefox seems to redirect this HTTP request to HTTPS (which is not working). Finally, I build my driver like this:
const { Builder } = require('selenium-webdriver');
const firefox = require('selenium-webdriver/firefox');

const ffoptions = new firefox.Options()
  .headless()
  .setPreference('browser.urlbar.autoFill', false); // attempt to disable the auto-HTTPS redirect… not working, obviously

const driver = new Builder()
  .setFirefoxOptions(ffoptions)
  .forBrowser('firefox')
  .build();
When manually contacting http://app:3000 using curl inside the selenium-tester container, it works as expected: I get my homepage.
I'm short on ideas now, and even decomposing my problem to write this question didn't give me new ones.
I had exactly the same problem: I couldn't successfully make HTTP requests to app from Selenium-controlled browsers (Chrome or Firefox) in another Docker container on the same network. cURL from that container, though, worked fine! It connected on HTTP, but something seemed to be trying to force HTTPS. It was an identical situation, right down to the container name "app".
The answer is... it's the name of the container!
"app" is a top-level domain on the HSTS preload list; that is, browsers will force access to it through HTTPS.
The fix is to use a container name that isn't on the HSTS preload list.
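For example, a minimal sketch of the rename in docker-compose.yml; "webapp" here is an arbitrary replacement name, anything off the preload list works:

webapp:                      # was "app", a TLD that browsers force to HTTPS
  build:
    context: .
    dockerfile: Dockerfile.dev
  ports:
    - "3000:3000"

The tests then target http://webapp:3000 instead of http://app:3000.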
HSTS - more reading
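If renaming is not an option, Firefox's built-in preload list can also be switched off via an about:config preference; a test-only sketch, adapting the options object from the question:

const firefox = require('selenium-webdriver/firefox');

// Disable Firefox's built-in HSTS preload list so "app" is not
// auto-upgraded to HTTPS; a workaround tied to Firefox internals,
// suitable for tests only
const ffoptions = new firefox.Options()
  .headless()
  .setPreference('network.stricttransportsecurity.preloadlist', false);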
As you mentioned, manually contacting http://app:3000 using curl inside the selenium-tester container works as expected.
This error message...
WebDriverError: Reached error page: about:neterror?e=nssFailure2&u=https://app:3000/&c=UTF-8&f=regular&d=An error occurred during a connection to app:3000.
SSL received a record that exceeded the maximum permissible length.
Error code: <a id="errorCode" title="SSL_ERROR_RX_RECORD_TOO_LONG">SSL_ERROR_RX_RECORD_TOO_LONG</a>
...implies that the SSL layer in the client or one of its dependencies seems broken.
@RussellFulton, in this discussion, mentioned:
This seems to be the result you see from Firefox when the server is not configured properly for SSL. Possibly Chrome would have just given a generic SSL-failed error.
This can happen when the browser sends an SSL handshake while the server is expecting a plain HTTP request. The server responds with a 400 code and an error message that is much bigger than the handshake record the browser expects; hence you see the message.
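The same mismatch can be reproduced outside the browser by forcing TLS against the plain-HTTP port (a hypothetical check, reusing the hostname from the question):

# Force a TLS handshake against the port serving plain HTTP; an HTTP
# error page comes back where a ServerHello is expected, which the
# TLS client reports as an oversized record
curl -v https://app:3000/
# curl: (35) SSL received a record that exceeded the maximum permissible length.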
Reasons and Solutions
The error-prone code tries to redirect to HTTPS while keeping port 80 (port 3000 in your case) in the URL.
Solution: remove port 80 (port 3000 in your case) from the URL; the redirect then works, since HTTPS by default runs over port 443.
This error also occurs when the SSL module is not enabled.
Solution: run
a2enmod ssl
# or
a2ensite default-ssl
A wrong IP was provided in the SSL config.
Solution: change the IP to what it should be.
Remove the IP from the SSL config if it is not needed.
Solution: change
<VirtualHost your.domain.com:443>
to
<VirtualHost _default_:443>
The curl: (35) SSL received a record that exceeded the maximum permissible length issue was discussed at length.
As per Curl: Support HTTPS proxy and SOCKS+HTTP(s), there was another attempt to get HTTPS proxy support into curl.
This curl commit should have addressed your issue.
Related
I have installed Oracle Developer Suite 10g Release 2 (Complete), which also installed the standalone OC4J. Now I am trying to test OC4J's default configuration.
I started the OC4J instance through Start > Search for oc4j > clicked the oc4j batch file. OC4J is now initialized.
According to the docs...
https://docs.oracle.com/cd/B14099_19/web.1012/b14361/config.htm#i1049203
I tried to test it by accessing http://localhost:8888/ from a Web browser.
Unfortunately, the result is...
localhost refused to connect. Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
According to the docs, I also tried...
http://localhost:8888/servlet/HelloWorldServlet, which should return a "Hello World" page.
Unfortunately, the same result was produced...
localhost refused to connect. Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
I also tried starting the server using...
java -jar oc4j.jar
from the j2ee/home directory, and tried both links mentioned above...
http://localhost:8888/
and
http://localhost:8888/servlet/HelloWorldServlet
And the result was...
404 Not Found
Resource / not found on this server
and
404 Not Found
Resource /servlet/HelloWorldServlet not found on this server
respectively.
P.S. I have a static IP address.
What could be the cause of that?
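For what it's worth, a quick way to check whether anything is listening on port 8888 at all (Windows, since OC4J was started from a batch file):

netstat -ano | findstr :8888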
I'm trying to enable HTTPS using this guide (https://thingsboard.io/docs/user-guide/install/pe/add-haproxy-ubuntu/#step-10-refresh-haproxy-configuration), but I got stuck on step 9, I believe.
sudo certbot-certonly --domain your_domain --email your_email
I get the following error:
certbot: error: unrecognized arguments: --tls-sni-01-port 8443
As far as I can tell, Let's Encrypt no longer supports this argument (--tls-sni-01-port), or ports other than 80 and 443. I got this from https://serverfault.com/questions/805666/certbot-letsencrypt-on-different-port-than-443.
I am uncertain how to solve this problem.
Here is my docker-compose.yml for Thingsboard + HTTPS through Nginx reverse proxy with automatic Let's Encrypt certificates: https://github.com/michalfapso/thingsboard_docker_https/
It uses linuxserver/swag which takes care of the certificates and is kept in sync with Let's Encrypt requirements by the linuxserver.io community.
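For orientation, a minimal sketch of such a swag service definition (domain, email, and paths are placeholders; see the repository above for the full working compose file):

swag:
  image: linuxserver/swag
  cap_add:
    - NET_ADMIN
  environment:
    - URL=your_domain        # placeholder: the domain the cert is issued for
    - VALIDATION=http        # Let's Encrypt HTTP-01 challenge
    - EMAIL=your_email       # placeholder
  ports:
    - "443:443"
    - "80:80"                # required for HTTP validation
  volumes:
    - ./swag-config:/config
  restart: unless-stopped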
Background:
I have an app running on port 8080 on a remote server, and an HTTPS ingress proxy at 443 on the same server, which forwards everything to the app on 8080 after handling the SSL.
What I want to do:
I want to communicate with the app through SSL remotely, while not having direct access to its domain (it is on a local network; I can access the server remotely via a different domain).
What I did:
I tunneled port 443 from my remote server: ssh -L 3001:0.0.0.0:443 user@example.com. I then added 127.0.0.1 example.com to my /etc/hosts to make sure the domain resolves properly on my system.
Now I can enter https://example.com:3001/some/thing/ in Firefox and get a proper response from the server, with everything running through SSL without any problems. I am also able to use curl without certificate checking: curl --insecure https://example.com:3001/some/thing works fine.
At the same time, a secure curl call fails: curl https://example.com:3001/some/thing, with the error:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Just to make sure both are using the same certificates, I used this tool, https://curl.haxx.se/docs/mk-ca-bundle.html, to create a ca-bundle.crt from the most recent Firefox certificates and passed it to curl with --cacert ca-bundle.crt. No luck; the same error. (I also tried following another curl tutorial on extracting the local Firefox installation's certs; also no luck.)
Question
What is going on? Why does curl's output differ from Firefox's even though I seem to use the same certificates? How can I debug this?
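For reference, one way to inspect exactly which chain the tunneled endpoint serves (hostname and port as in the setup above); a missing intermediate certificate here would explain curl failing while Firefox, which caches intermediates, succeeds:

# Print every certificate the server presents during the TLS handshake
openssl s_client -connect example.com:3001 -servername example.com -showcerts </dev/null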
Side note
The real reason I am concerned about this is that with normal (local) access to the server, I observed the same behaviour: I could connect to the server over HTTPS through Chrome, but my React Native app could not. I suspect the app uses libcurl or something similar under the hood, and I believe debugging this problem could help me understand what's wrong with the app.
I currently have an HTTPS load balancer set up with a 443 frontend, backend, and health check, serving a single-host nginx instance.
When navigating directly to the host via a browser, the page loads correctly with valid SSL certs.
When trying to access the site through the load balancer IP, I receive a 502 Server Error message. I checked the Google logs and noticed "failed_to_pick_backend" errors at the load balancer. I also noticed that it is failing health checks.
Some digging around leads me to these two links: https://cloudplatform.googleblog.com/2015/07/Debugging-Health-Checks-in-Load-Balancing-on-Google-Compute-Engine.html
https://github.com/coreos/bugs/issues/1195
Issue #1: Not sure if google-address-manager is running on the server (RHEL 7). I do not see an entry for the HTTPS load balancer IP in the routes. The Google SDK is installed. This is a Google-provided image, and if I update the IP address in the console, it also gets updated on the host. How do I check if google-address-manager is running on RHEL 7?
[root@server]# ip route ls table local type local scope host
10.212.2.40 dev eth0 proto kernel src 10.212.2.40
127.0.0.0/8 dev lo proto kernel src 127.0.0.1
127.0.0.1 dev lo proto kernel src 127.0.0.1
Output of all Google services:
[root@server]# systemctl list-unit-files
google-accounts-daemon.service enabled
google-clock-skew-daemon.service enabled
google-instance-setup.service enabled
google-ip-forwarding-daemon.service enabled
google-network-setup.service enabled
google-shutdown-scripts.service enabled
google-startup-scripts.service enabled
Issue #2: Not receiving a 200 OK response. The certificate is valid and the same on both the LB and the server. When running curl against the app server, I receive this response:
root@server.com curl -I https://app-server.com
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Thoughts?
You should add firewall rules for the health check service (https://cloud.google.com/compute/docs/load-balancing/health-checks#health_check_source_ips_and_firewall_rules) and make sure that your backend service listens on the load balancer IP (easiest is to bind to 0.0.0.0). This is definitely true for an internal load balancer; I'm not sure about HTTPS with an external IP.
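A sketch of such a rule with gcloud (the rule name and network are placeholders; the source ranges are the documented health-check probe ranges):

# Allow Google Cloud health-check probes to reach the backends on 443
gcloud compute firewall-rules create allow-health-checks \
  --network default \
  --source-ranges 130.211.0.0/22,35.191.0.0/16 \
  --allow tcp:443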
A couple of updates and lessons learned:
I have found out that google-address-manager is now deprecated and replaced by google-ip-forwarding-daemon, which is running.
[root@server ~]# sudo service google-ip-forwarding-daemon status
Redirecting to /bin/systemctl status google-ip-forwarding-daemon.service
google-ip-forwarding-daemon.service - Google Compute Engine IP Forwarding Daemon
Loaded: loaded (/usr/lib/systemd/system/google-ip-forwarding-daemon.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2017-12-22 20:45:27 UTC; 17h ago
Main PID: 1150 (google_ip_forwa)
CGroup: /system.slice/google-ip-forwarding-daemon.service
└─1150 /usr/bin/python /usr/bin/google_ip_forwarding_daemon
There is an active firewall rule allowing IP ranges 130.211.0.0/22 and 35.191.0.0/16 for port 443. The target is also properly set.
Finally, the health check was using the default "/" path. The developers had put authentication in front of the site during the development process; if I bypassed the SSL cert error, I received a 401 Unauthorized when running curl. This was the root cause of the issue we were experiencing. To remedy it, we modified the nginx basic authentication configuration to disable authentication on a new route (e.g. /health).
Once the nginx configuration was updated and the health check's path was pointed at the new /health route, we received valid 200 responses. This allowed the health check to return healthy instances and allowed the LB to pass traffic through.
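A sketch of the nginx change, assuming basic auth is enabled at the server level (paths and labels are illustrative, not the actual configuration):

server {
    listen 443 ssl;
    auth_basic           "Restricted";          # site-wide basic auth
    auth_basic_user_file /etc/nginx/.htpasswd;

    location /health {
        auth_basic off;                         # exempt health-check probes
        default_type text/plain;
        return 200 'OK';                        # always answer 200 here
    }
}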
I need to set up a reverse proxy which intercepts HTTPS requests, decrypts them, performs body adaptation, and finally forwards the re-encrypted request.
I'm now using Squid, which provides support for eCAP plugins and SSL bumping: http://wiki.squid-cache.org/Features/SslBump
If I understood correctly, by configuring SSL bumping I can do exactly what I described above. However, SSL bumping is not working so far.
Here is my Squid configuration:
https_port 8080 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
http_port 3128 ssl-bump cert=/etc/squid/cert.pem key=/etc/squid/key.pem dynamic_cert_mem_cache_size=4MB generate-host-certificates=on
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS
#always_direct allow all
ssl_bump allow all
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
Client-side, when trying to send a request to https://127.0.0.1:8080, I'm getting the following error:
Connection reset by peer
This happens if the destination server is running HTTPS. It looks like Squid is trying to establish a plain HTTP connection instead of an HTTPS one. Indeed, server-side I'm getting an SSL23_GET_CLIENT_HELLO error.
Is there anything wrong in my configuration? Did I miss anything in how SSL bump works?
I dug into the problem, and here is what I found:
1) The ssl-bump option is not needed.
2) The problem was that the ssl option was missing from the following line:
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS ssl
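Putting both findings together, the corrected configuration would look roughly like this (certificate paths and the peer address are the ones from the question):

https_port 8080 cert=/etc/squid/cert.pem key=/etc/squid/key.pem
# http_port no longer carries the ssl-bump flag (finding 1)
http_port 3128
# "ssl" makes Squid re-encrypt the connection to the origin server (finding 2)
cache_peer 52.170.25.214 parent 8080 0 no-query originserver login=PASS ssl
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER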