OpenSSL installed and working, one client can connect but another gets connection refused - Apache

I have an API server (Debian Apache2) with OpenSSL installed and working. I also have a staging and production web server (also Debian Apache2, exactly the same spec - they are VM clones). All servers are on the same subnet. I can browse to the wsdl from my local machine on 443 successfully, and I can wget the wsdl successfully from my staging server on 443, but a wget from my production web server will not connect:
--2015-04-16 10:26:18-- https://www.example.com/index.php/api?wsdl
Resolving www.example.com (www.example.com)... XX.XX.XX.XX
Connecting to www.example.com (www.example.com)|XX.XX.XX.XX|:443... failed: Connection refused.
I can connect over https from a PHP nusoap client on staging no problem, but the same code on my production server returns:
wsdl error: HTTP ERROR: cURL ERROR: 7: couldn't connect to host
url: https://www.example.com/index.php/api?wsdl
content_type:
http_code: 0
header_size: 0
request_size: 0
filetime: -1
ssl_verify_result: 0
redirect_count: 0
total_time: 5.272228
namelookup_time: 5.271805
connect_time: 0
pretransfer_time: 0
size_upload: 0
size_download: 0
speed_download: 0
speed_upload: 0
download_content_length: -1
upload_content_length: -1
starttransfer_time: 0
redirect_time: 0
certinfo: Array
primary_ip: XX.XX.XX.XX
primary_port: 443
local_ip:
local_port: 0
redirect_url:
An openssl s_client -connect from both web servers produces the same output.
After my production server gets connection refused there are no new entries in the API server's error.log, so the request never reaches Apache; this points to a client-side issue.
Is there a Debian-specific/internal firewall config I may have inadvertently changed that would prevent one client from connecting to a secure web server over HTTPS while the other client connects fine?

"Connection refused" usually indicates a failure to complete the initial TCP connection. Things to check include:
iptables, firewalls, hosts.deny
is Apache listening on the interface/IP address the client is attempting to connect to?
Does wget or curl work from the local server when you use http://127.0.0.1/ but not http://THE-SERVER'S-PUBLIC-IP-ADDRESS/ ?
What do you see when you run wget with --debug and --verbose ?
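A minimal set of checks along those lines, assuming a Debian box with iptables and the iproute2 tools (substitute your real hostname/IP for the placeholders):

# Firewall rules and TCP wrappers on both the client and the API server
sudo iptables -L -n
cat /etc/hosts.deny

# Is Apache actually listening on 443 on the expected interface?
sudo ss -tlnp | grep ':443'

# From the API server itself, compare loopback vs. the public IP
curl -kI https://127.0.0.1/
curl -kI https://XX.XX.XX.XX/

# Re-run the failing request from the production client with more detail
wget --verbose --debug https://www.example.com/index.php/api?wsdl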

Related

Use Stunnel to connect wss to wsServer

I am trying to use stunnel to turn a wss connection into a ws connection because wsServer doesn't support wss. The server is running Ubuntu, and the client I'm using is Chrome, if it matters.
This is my stunnel.conf file
foreground = yes
debug = info
output = /var/log/stunnel.log
[wsServer]
cert = /etc/letsencrypt/live/myurl.com/fullchain.pem
key = /etc/letsencrypt/live/myurl.com/privkey.pem
accept = 0.0.0.0:8443
connect = 127.0.0.1:8080
I'm trying to connect to it with a JavaScript call:
const socket = new WebSocket('wss://myurl.com:8433');
But I consistently get a connection error:
(index):13 WebSocket connection to 'wss://myurl.com:8433/' failed: (anonymous) # (index):13
Here's what I've checked:
That my port forwarding/system firewalls aren't eating the connection. If I kill stunnel and set up a regular socket listening on either port 8080 or 8433, I can connect to that socket from the client machine.
wsServer accepts unencrypted traffic: if I instead connect with ws://myurl.com:8080, it works fine.
wsServer accepts connections from localhost just fine, which I understand is necessary when stunnel is running on the same machine as the server
Chrome accepts my cert when used for https pages under the same domain, so I don't think I have a cert signing error, but I don't know how to tell if the cert is related to the connection failing
Stunnel does not print any errors when starting up
Nothing gets printed to /var/log/stunnel.log, although the file was created after I added the output field to the .conf file
Any ideas about what else I can try? Is there some reason the cert that works for https wouldn't work with wss?
Do people recommend using ProxyPass through apache and avoiding stunnel altogether?
Not a solution, but a next troubleshooting step. Get yourself openssl and attempt to connect to 8443. This should spit back the certificate information and at least confirm stunnel is presenting the certificate.
openssl s_client -connect myurl.com:8443
It's been a while since I configured stunnel, but IIRC you can't put a password on your key.
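On the ProxyPass question: a rough sketch of terminating TLS in Apache and proxying the WebSocket traffic straight to wsServer might look like the vhost below. It assumes mod_ssl, mod_proxy, and mod_proxy_wstunnel are enabled and a matching Listen 8443 exists, and it reuses the certificate paths from the stunnel config; treat it as a starting point rather than a drop-in replacement.

<VirtualHost *:8443>
    ServerName myurl.com

    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/myurl.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/myurl.com/privkey.pem

    # Hand incoming wss:// traffic to the plain ws:// backend
    ProxyPass        / ws://127.0.0.1:8080/
    ProxyPassReverse / ws://127.0.0.1:8080/
</VirtualHost>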

Can't Connect to Apache Server on LAN

I have a very simple LAN setup and am trying to connect to an Apache server running on the LAN. The server IP is 192.168.1.178. I'm trying to connect from a box on the same LAN with an IP of 192.168.1.161. Attempting to connect from a browser results in an error saying the site is unreachable. I can ping the server and SSH into the server, but telnet and curl result in 'no route to host' errors.
Both boxes are set up with static IPs. DNS for static connection is 192.168.1.1. Both boxes are running Manjaro and no firewalls are turned on. Apache access logs show no attempt to connect and there are no errors in the Apache error logs.
I also set up a test python server (sudo python -m http.server 80) to try that. Attempting to curl to that server results in 'connection refused' error as opposed to 'no route to host' error for the Apache server.
Traceroute results are:
traceroute 192.168.1.178
traceroute to 192.168.1.178 (192.168.1.178), 30 hops max, 60 byte packets
1 raptor (192.168.1.178) 0.434 ms !X 0.366 ms !X 0.400 ms !X
I discovered that a firewall daemon was running, which was causing the problem (the !X flags in the traceroute output mean "communication administratively prohibited", which points to exactly that). Disabling the firewall solved the issue.
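For anyone hitting the same thing, a quick way to check for and deal with a firewall daemon on a systemd-based distro like Manjaro (assuming firewalld or ufw is the culprit; adjust for whatever is actually installed):

# Is a firewall daemon active?
systemctl status firewalld
systemctl status ufw

# Temporarily stop it to confirm it is the cause
sudo systemctl stop firewalld

# Better: leave it running and just allow web traffic
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --reload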

Yaws with SSL gives the error "SSL accept failed: timeout"

I used certbot to generate a Let's Encrypt certificate for my website, but Yaws gives me an "SSL accept failed: timeout" error when I try to connect to it (after it times out, of course). Interestingly, it works when I redirect example.com to the local IP address of the server in the hosts file on my machine and connect to example.com:8080, but not when I connect to example.com without editing the hosts file, or when I connect from my phone over 4G. Here's my web server's configuration file (it is the only configuration file in conf.d):
<server www.example.com>
port = 8080
listen = 0.0.0.0
docroot = /usr/share/yaws
<ssl>
keyfile = /etc/letsencrypt/live/example.com/privkey.pem
certfile = /etc/letsencrypt/live/example.com/fullchain.pem
</ssl>
</server>
I made sure that the keyfile and the certificate are both readable by the yaws user. Next to the keyfiles is a README that contains the following:
`privkey.pem` : the private key for your certificate.
`fullchain.pem`: the certificate file used in most server software.
`chain.pem` : used for OCSP stapling in Nginx >=1.3.7.
`cert.pem` : will break many server configurations, and should not be used
without reading further documentation (see link below).
We recommend not moving these files. For more information, see the Certbot
User Guide at https://certbot.eff.org/docs/using.html#where-are-my-certificates.
So I'm relatively sure I've used the right files (the other ones gave me errors like badmatch and {tls_alert,"decrypt error"}). I also tried trivial things like writing https:// before the URL, but that didn't fix the issue; also, everything works fine when the server is running without SSL. The version of Erlang running on my server is Erlang/OTP 19. And, if it's unclear, the domain isn't actually example.com.
Also, example.com is redirected via cname to examplecom.duckdns.org, if that matters.
UPDATE:
My server was listening on port 8080, which was forwarded from external port 80, for HTTPS connections, while the default HTTPS port is 443. My other mistake was connecting to http://example.com instead of https://example.com. Forwarding external port 443 to internal port 8443 and configuring Yaws to listen on port 8443 fixed everything.
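For reference, the corrected server block would then look roughly like this (same certificate paths as above; only the listen port changes to match the 443 -> 8443 forward):

<server www.example.com>
port = 8443
listen = 0.0.0.0
docroot = /usr/share/yaws
<ssl>
keyfile = /etc/letsencrypt/live/example.com/privkey.pem
certfile = /etc/letsencrypt/live/example.com/fullchain.pem
</ssl>
</server>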
Just to be sure I understand: when you do something like curl -v https://example.com:8080, you get a timeout, and that's it? (Here the https scheme and port 8080 are mandatory, of course.)
An SSL timeout during accept can be triggered when an unencrypted request is received on an SSL vhost.
Could you also provide the output of the following command:
echo -e "HEAD / HTTP/1.0\r\n\r\n" | openssl s_client -connect mysite.com:8080 -ign_eof
And finally, which version of Yaws are you running, and on which OS?

Browser unable to connect to web2py over secure SSH tunnel

I followed the instructions in the web2py manual on how to connect to a remote web2py via ssh tunnel. SSH to my server appears to work just fine:
[~/prg]$ ssh -L 8002:127.0.0.1:8002 username@linux-server.com
Linux schemelab2 4.6.5-x86_64-linode71 #2 SMP Fri Jul 29 16:16:25 EDT 2016 x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
You have new mail.
but, just as it has failed for others, when I attempt to visit http://localhost:8002 or https://localhost:8002 I get a number of connection refused messages:
channel 3: open failed: connect failed: Connection refused
channel 4: open failed: connect failed: Connection refused
channel 3: open failed: connect failed: Connection refused
channel 3: open failed: connect failed: Connection refused
channel 3: open failed: connect failed: Connection refused
channel 3: open failed: connect failed: Connection refused
channel 4: open failed: connect failed: Connection refused
channel 3: open failed: connect failed: Connection refused
channel 3: open failed: connect failed: Connection refused
channel 3: open failed: connect failed: Connection refused
If it helps any, here is my sshd_config
Also note:
telnet localhost 8002 yields
schemelab@schemelab2:~$ telnet localhost 8002
Trying ::1...
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
schemelab@schemelab2:~$
Could be one of several possible reasons. I am assuming you are mostly interested in accessing the web2py admin page on your remote server, since web2py doesn't allow remote admin access over an insecure channel. So first things first: make sure your server's iptables rules allow access to services on the port you are trying to connect to, otherwise these remote connection solutions probably won't work (except perhaps Plan C). See here for more info: https://help.ubuntu.com/community/IptablesHowTo
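As a sketch of that check (assuming plain iptables rather than a frontend like ufw, and using port 8889 from the examples below), you could verify and temporarily open the port like this:

# List the current INPUT rules with line numbers
sudo iptables -L INPUT -n --line-numbers

# Temporarily allow the web2py port used in the plans below
sudo iptables -I INPUT -p tcp --dport 8889 -j ACCEPT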
Plan A - SSH tunnel
Firstly, let me show you how I've SSH tunneled to web2py on dozens of servers in the past. I'll be using port 8889 in my examples:
ssh -L 8889:127.0.0.1:8889 username@linux-server.com
Just like with a normal SSH, you should now see the shell of your server (which you have demonstrated). Now, in the same terminal, cd to your server's root web2py directory and do the following (do not close the terminal window after):
> cd mywebsite.com
> python web2py.py -a password -i 127.0.0.1 -p 8889
*web2py startup stuff*
Now on your local browser visit http://127.0.0.1:8889/admin and you should see the web2py admin page from your server.
Plan B - Using self-signed SSL certificate
If you're still having issues with ssh tunnel, another option you can try is using a self-signed SSL certificate.
Making a self-signed certificate is very easy with OpenSSL, and you can also use some online self-signed certificate generators (though I don't recommend this) to save you even more time.
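For example, one common OpenSSL invocation along these lines (the file names, validity period, and CN are just placeholders) produces the .crt/.key pair used below:

openssl req -x509 -newkey rsa:2048 -nodes -keyout server.key -out server.crt -days 365 -subj "/CN=mywebsite.com"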
Once you have your generated .crt and .key files, sftp to your server and upload the files to your server's root web2py directory (or upload them to Dropbox, ssh to your server, cd to your root web2py directory and wget the file links). Finally ssh to your server and do the following (do not close the terminal window after):
> cd mywebsite.com
> python web2py.py -a password -p 8889 -i 0.0.0.0 -c server.crt -k server.key
*web2py startup stuff*
Now on your browser enter (notice the https) https://xxx.xxx.xxx.xxx:8889/admin (xxx... being your server IP), or you can do https://mywebsite.com:8889/admin if you already have your domain name setup.
Now you should see an SSL security warning in your browser. Simply ignore this warning and add an exception, and you should finally be able to see the web2py admin page from your server.
Plan C - Edit web2py source
This is the least recommended way to allow admin over an insecure channel, and should be used as a last resort. You can simply edit the part of the web2py source code that disables admin by adding one line of code. In
<server's root web2py directory>/applications/admin/models/access.py (around line 21), put request.is_local=True before the part that disables admin over an insecure channel:
'...'
request.is_local = True  # TESTING ONLY. COMMENT OUT OR REMOVE IN PRODUCTION!
if request.env.http_x_forwarded_for or request.is_https:
    session.secure()
elif not request.is_local and not DEMO_MODE:
    raise HTTP(200, T('Admin is disabled because insecure channel'))
'...'
Now you can access your server's web2py admin by simply visiting http://xxx.xxx.xxx.xxx:8889/admin (xxx... being your server IP), or you can do http://mywebsite.com:8889/admin if you already have your domain name setup.
Note this is a quick and dirty solution and should be used only temporarily and for testing. Don't forget to remove or comment out that line in production!

Enable Remote SSL on Weblogic

I've enabled the SSL Listen Port from the Admin Console of WebLogic 11g, version 10.3.6.0.
I've created a self-signed cert following: https://oracle-base.com/articles/11g/weblogic-configure-ssl-for-a-managed-server
But when I try https in the browser of a remote machine, I get a timeout.
If I try from the local machine using curl -Ik, I get the proper response; it seems that only remote access is blocked.
Accessing via http works fine from my remote machine's browser. I also tried telnet, but it only works on 7001, not on 7002 (my secure port). I've already tried changing the secure port number, but the result is the same.
My WebLogic server is on a CentOS VM running on VMware ESXi.
What could be blocking the remote SSL connection?
A timeout indicates firewalling of some sort. As you say yourself, it works when you try locally with curl. When a connection succeeds locally but times out remotely, there is little else to check.
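A quick way to confirm and fix that on the CentOS guest (assuming firewalld; older CentOS releases use iptables directly, as in the last line):

# Which ports does the host firewall currently allow?
sudo firewall-cmd --list-ports

# Open WebLogic's SSL listen port and reload the rules
sudo firewall-cmd --permanent --add-port=7002/tcp
sudo firewall-cmd --reload

# On an iptables-only system instead:
sudo iptables -I INPUT -p tcp --dport 7002 -j ACCEPT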