HAproxy fails to start after changing ssl certificates

I've been struggling with this issue for two days now and it's becoming a big problem.
We have two load balancers with HAproxy installed.
I tried to change the certificates on the first one; it went down and I couldn't restart it, even after restoring the old haproxy.cfg file.
After a while I rebooted the server and that did the job: HAproxy was running again on the failed node, and I successfully changed the SSL certificates on it.
Then I tried to do the same on the second one, and it went down too, and nothing seemed to fix the problem: neither restarting HAproxy with the old haproxy.cfg file nor rebooting the whole server.
The error I get looks like this:
Starting frontend PAGEMAINTENANC_GUN: cannot bind socket [10.168.10.16:80]
Can you please give me some help?
Thank you all in advance.

It seems that HAproxy was trying to bind to an IP address that is not local to the machine; that's why it fails to start. (With two load balancers, the shared IP is typically held by only one node at a time, so the other node cannot bind to it without this setting.) The solution was to set ip_nonlocal_bind to 1.
To check the current value:
sysctl net.ipv4.ip_nonlocal_bind
To set net.ipv4.ip_nonlocal_bind to 1:
sysctl -w net.ipv4.ip_nonlocal_bind=1
After that, restarting HAproxy works.
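Note that sysctl -w only lasts until the next reboot. To make the setting persistent, the standard mechanism (not part of the original answer) is a sysctl configuration file, applied with sysctl --system:
# /etc/sysctl.d/99-haproxy.conf (the file name is just an example)
net.ipv4.ip_nonlocal_bind = 1
sudo sysctl --system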

Related

Influxdb over SSL connection

I'm a little bit confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. The machine also runs an Apache2 server, but for now I don't want to use it as a webserver to display web pages to clients; I want to use it as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt, and indeed the welcome page https://datavm.bo.cnr.it works properly over an encrypted connection.
Then I followed all the instructions in the docs to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, set the file permissions (not sure about the meaning of this step though), and edited influxdb.conf with https-enabled = true and the paths for https-certificate and https-private-key (fullchain.pem for both; is that right?). Then systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
Any help in understanding what I am doing wrong is much appreciated! Thank you.
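For reference, the settings described above live in the [http] section of influxdb.conf and would look roughly like this (the key names come from the InfluxDB 1.x documentation; the paths are examples, and a separate privkey.pem is the usual Let's Encrypt layout rather than pointing both settings at fullchain.pem):
[http]
  https-enabled = true
  https-certificate = "/etc/ssl/fullchain.pem"
  https-private-key = "/etc/ssl/privkey.pem"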
I figured out at least part of the problem: it was related to permissions on the *.pem files. This looks weird, because if I type the following, as the documentation says, it does not connect.
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I run the second command with 644, everything works perfectly. But that way I'm giving anyone permission to read the private key! I'm not able to figure out this point.
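A likely explanation (an assumption based on how the Debian/Ubuntu packages behave, not something stated in the thread): influxd runs as the unprivileged influxdb user, so a key file owned by root with mode 600 is unreadable to it. Handing the file to that user keeps the strict mode without making the key world-readable:
# assumes the service account is named 'influxdb'
sudo chown influxdb:influxdb /etc/ssl/<private-key-file>
sudo chmod 600 /etc/ssl/<private-key-file>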
UPDATE
If I put symlinks in /etc/ssl/ that point to the .pem files living inside /etc/letsencrypt/live/hostname, the connection is refused. The SSL connection only works if I put a copy of the files there.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
Can anyone help?
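The symlink behaviour is consistent with the permissions theory above: the links resolve into /etc/letsencrypt/live and /etc/letsencrypt/archive, which by default are readable only by root, so the influxdb user cannot follow them. One common workaround (a sketch, not from the original thread) is a certbot deploy hook that copies fresh files out on every renewal:
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/influxdb.sh (certbot runs deploy hooks after each successful renewal)
# copy the renewed cert and key where InfluxDB expects them, then fix ownership
cp /etc/letsencrypt/live/datavm.bo.cnr.it/fullchain.pem /etc/ssl/fullchain.pem
cp /etc/letsencrypt/live/datavm.bo.cnr.it/privkey.pem /etc/ssl/privkey.pem
chown influxdb:influxdb /etc/ssl/fullchain.pem /etc/ssl/privkey.pem
chmod 644 /etc/ssl/fullchain.pem
chmod 600 /etc/ssl/privkey.pem
systemctl restart influxdb
Remember to make the hook executable with chmod +x.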

Why can't I see https webpage after using sudo certbot --apache

I've just done a fresh install of Ubuntu 20.04 and followed the Digital Ocean instructions to get my apache server up and running:
https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-on-ubuntu-20-04
That worked fine for HTTP traffic. Then I used the Digital Ocean instructions (which I already knew, but followed anyway) to set up SSL (https) access:
https://www.digitalocean.com/community/tutorials/how-to-secure-apache-with-let-s-encrypt-on-ubuntu-20-04
I selected the option to redirect all traffic to https. I opened my firewall using sudo ufw allow 'Apache Full'.
But I am unable to see my sites; the browsers just time out. I have tried disabling ufw just to see, and nope, nothing.
SSL Labs just gives me an "Assessment failed: Unable to connect to the server" error.
I also ran https://check-your-website.server-daten.de/?q=juglugs.com and it timed out as well.
I have deleted the letsencrypt files and run through the process again three times with the same result, and now I'm stuck...
Everything I've searched points to a firewall error, but as I've said, I've disabled the firewall and get the same result. The router settings have not been changed since I did my fresh Ubuntu install.
Any help gratefully received.
Thanks in advance.
on8tom answered this one for me: in setting up the new build of Ubuntu, the local IP address of the Apache server had changed, and my Virgin Media Hub only had port 443 forwarded to the old IP address.
Many thanks for pointing me at that (but I should have checked it before posting this - kicking myself!)
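For anyone chasing the same symptom, two quick checks along these lines (standard tools, nothing from the original answer) show which address the server actually has and whether Apache is listening on 443, which you can then compare against the router's forwarding rules:
ip -4 addr show
sudo ss -tlnp | grep ':443'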

Haproxy won't recognize new certificate

I recently changed my certificate to LetsEncrypt's.
I placed the new certificate in the location of the old one:
cat /etc/haproxy/certs/fullchain.pem /etc/haproxy/certs/privkey.pem > /etc/haproxy/certs/mydomain.com.pem
And in my haproxy.cfg I have:
frontend https
bind :::8443 v4v6 ssl crt /etc/haproxy/certs/mydomain.com.pem no-sslv3
Then I ran systemctl reload haproxy, but it still serves the old one when I access it in my browser or via SSL Labs.
If I use curl -kv mydomain.com it shows the correct certificate.
I had this same issue: even after reloading the config, HAProxy would randomly serve old certs. After looking around for many days, I found that the "reload" operation creates a new process without killing the old one, and the old processes keep serving the outdated certs. You can check this with ps aux | grep haproxy.
Fix
If your environment allows for a few seconds of downtime, run service haproxy stop until no haproxy processes are left, and then start haproxy.
Or:
Sort the processes by start time and kill the old ones, checking in between that the service is still running.
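A minimal version of the first option (the service name may differ by distro; the [h] keeps grep from matching itself):
sudo service haproxy stop
ps aux | grep [h]aproxy    # repeat until no haproxy processes remain
sudo service haproxy start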
EDIT, 1 year later
Instead of manually applying the fix above after every reload, we added a hard-stop-after of 600 seconds. We made sure to kill all old processes after adding the parameter, and checked with ps aux. Now old processes have to die after at most 600 seconds and cannot keep serving outdated certs.
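The directive goes in the global section of haproxy.cfg (hard-stop-after is a standard HAProxy keyword; 600s matches the timeout described above):
global
    hard-stop-after 600s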
If you still have the old pem file in /etc/haproxy/certs, HAProxy might be using it instead of the new one: when the crt argument on a bind line points at a directory, HAProxy loads every certificate in it and picks one by SNI, so a stale file can still be served for the names it covers.
I had a similar problem. HAProxy was using an expired certificate that had first been created for dev.domain.com only, with Let's Encrypt. Later I changed the certificate creation process to include multiple domains:
domain.com, www.domain.com and dev.domain.com.
The old dev.domain.com.pem was still in the /etc/haproxy/certs folder. When I visited https://dev.domain.com, HAProxy used the old pem certificate file and Chrome issued a warning about the expired certificate.
When I deleted the dev.domain.com.pem file and reloaded HAProxy, it started using the new certificate and SSL worked correctly again.
My problem was a historic, outdated wildcard cert that HAProxy (HA-Proxy version 1.8.19-1+deb10u3 2020/08/01) erroneously picked up and served as an outdated subdomain cert, both in the browser and in cURL.
Reloading, restarting, stopping and starting, and even upgrading Debian did not help. What did help was removing the outdated wildcard cert and reloading.
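To confirm which certificate a frontend actually hands out for a given name, a standard openssl check like this works (substitute your own domain and port):
echo | openssl s_client -connect mydomain.com:443 -servername mydomain.com 2>/dev/null | openssl x509 -noout -subject -enddate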

While starting Squid on "cmd" it gives a "System error 1067 has occurred"

I am trying to use Squid's reverse proxy service (version 2.7STABLE8) on Windows 7. When I try to start Squid from "cmd" (as administrator) I get an error message like this:
"The Squid service could not be started
A system error has occurred
System error 1067 has occurred message
The process terminated unexpectedly"
To solve it I tried:
setting "http_port" to 80 and 8080,
disabling User Account Control settings,
and following the instructions on this website: Configuring a Basic Reverse Proxy in Squid on Windows.
However, none of these worked, so if you know of any other solution, that would be great.
Thanks
I checked my cache.log file, and it reported:
"FATAL: ipcache_init: DNS name lookup tests failed"
Therefore, in the squid.conf file I set:
"dns_testnames 0.0.0.0"
and that solved my problem.
Try this to check:
squid -k check
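If Squid won't start at all, squid -k parse (available in most Squid versions) validates the squid.conf syntax without needing a running service, which can surface configuration errors before startup:
squid -k parse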
I had the same issue. Just in case it helps someone: in my case the problem was low disk space. I did some cleanup and it was back up in seconds. Thanks.
When I checked the logs, I saw there was no "logs" folder under "c:\squid\var". After I created a "logs" folder there, it started working. Please make sure you have "c:\squid\var\logs".
My error was the same.
My solution: I checked the log in C:\squid\var\logs (the cache log file) and found this:
commBind: Cannot bind socket FD 14 to 192.168.3.12:3128: (10049) WSAEADDRNOTAVAIL, Cannot assign requested address.
FATAL: Cannot open HTTP Port
Some time earlier I had changed the IP address of my server, and that was the origin of my trouble. I updated the address in the C:\squid\etc\squid.conf file to match, and that solved it for me.
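The fix amounts to making the http_port line match an address the machine actually has (the address below is hypothetical; use your server's current IP):
# squid.conf - bind to the machine's current address (hypothetical value)
http_port 192.168.3.20:3128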

Apache Can't start because of a make_sock bind to address error on port 80

I have noticed something new going on with my server and I can't quite figure out what is causing it. I'm hoping someone out there has experience with this problem and can help me find a solution to stop it.
I rebooted my Ubuntu server tonight, which I have running at slicehost.com. Everything runs great until I go to start Apache, when I get the following error:
* Starting web server apache2
(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
Unable to open logs
...fail!
A little further research using netstat -ltnp | grep ':80' shows the following:
tcp    0    0 0.0.0.0:80    0.0.0.0:*    LISTEN    3948/apache2
I can then kill 3948 and Apache starts up as normal. The PID changes to a different number each time.
This is new, and the only thing I have done since my last successful boot without this happening was uninstalling a manual install of phpMyAdmin and re-installing it with aptitude. phpMyAdmin now runs fine on the server, but I don't understand what this error means or how I can go about resolving it.
Anyone that might offer some insight would be greatly appreciated!
Check that you aren't starting your server twice, and that your partitions are already mounted so Apache can access its log files.
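To see exactly which process is holding port 80 before killing anything, standard tools along these lines help (fuser may require the psmisc package):
sudo ss -ltnp '( sport = :80 )'
sudo fuser -v 80/tcp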
I advise restating that question on serverfault.com; your question is slightly misplaced here.
The problem is that port 80 is already in use (probably IIS7 is using it). To solve the problem, open the Apache/conf/httpd.conf file, find the line Listen 80 and change it to another port (e.g. Listen 5555). Then run httpd.exe and try to open localhost:5555. It works! :)