git not fully working with self-signed cert - ssl

I use a self-signed certificate for encryption.
After some work, HTTPS is working for git, but the git@xxxxx way does not work. Here's the output:
Cloning into 'test'...
/usr/lib/ruby/1.9.1/net/http.rb:762:in `initialize': Connection refused - connect(2) (Errno::ECONNREFUSED)
from /usr/lib/ruby/1.9.1/net/http.rb:762:in `open'
from /usr/lib/ruby/1.9.1/net/http.rb:762:in `block in connect'
from /usr/lib/ruby/1.9.1/timeout.rb:54:in `timeout'
from /usr/lib/ruby/1.9.1/timeout.rb:99:in `timeout'
from /usr/lib/ruby/1.9.1/net/http.rb:762:in `connect'
from /usr/lib/ruby/1.9.1/net/http.rb:755:in `do_start'
from /usr/lib/ruby/1.9.1/net/http.rb:744:in `start'
from /home/git/gitlab-shell/lib/gitlab_net.rb:56:in `get'
from /home/git/gitlab-shell/lib/gitlab_net.rb:17:in `allowed?'
from /home/git/gitlab-shell/lib/gitlab_shell.rb:51:in `validate_access'
from /home/git/gitlab-shell/lib/gitlab_shell.rb:21:in `exec'
from /home/git/gitlab-shell/bin/gitlab-shell:16:in `<main>'
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Both SSH and HTTP worked fine before I started the self-signed certificate setup, so now it's the ssh+ssl part that is not working.
I'm using nginx and GitLab 5.3, and I followed the installation instructions on the GitLab website.
I also ran the check script:
~> sudo -u git -H /home/git/gitlab-shell/bin/check
Check GitLab API access: FAILED. code: 301
Check directories and files:
/home/git/repositories: OK
/home/git/.ssh/authorized_keys: OK
I think the 301 might come from this part of my nginx config:
server {
listen 80;
server_name gitlab.MYDOMAIN.com;
rewrite ^ https://$server_name$request_uri? permanent;
}
I don't know if that's related.
Thanks.

The issue is that when you enabled SSL you also redirected HTTP to HTTPS.
Accessing the old http:// URL works for most clients, but gitlab-shell (used as part of the login process on the GitLab server) will not follow 3xx redirects and instead returns an error, thus disabling SSH-based access.
The fix is to edit /home/git/gitlab-shell/config.yml and replace the http:// in gitlab_url: with https://.
If you're using self-signed certificates, you may also have to set self_signed_cert: true under the http_settings: section.
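For reference (a sketch, with the host name as a placeholder), the relevant part of /home/git/gitlab-shell/config.yml would then look like this:
# /home/git/gitlab-shell/config.yml (excerpt)
gitlab_url: "https://gitlab.MYDOMAIN.com/"

http_settings:
  self_signed_cert: true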

For GitLab 6.0 this fixed the error for me: if you're using self-signed certificates, make sure that in gitlab-shell/config.yml your gitlab_url is https://... rather than http://..., and that you specify self_signed_cert: true.

ssh+ssl?
But the two aren't related from the client's perspective (unless you want to do some kind of SSH tunneling through nginx).
An SSH connection talks to the SSH daemon (which doesn't need any certificate) and requires that the correct SSH public key has been registered in the server account's ~/.ssh/authorized_keys (done by GitLab when a user registers said public key on their profile page).
The gitlab-shell/bin/check error is another issue, again not related to the SSH one.
It is gitlab-shell which tries to contact GitLab locally through an HTTPS API.
Solve that locally, and any connection (https or ssh) from the client will succeed.
In particular, check issue 3892, and see if you need to add a CA to the .crt file served by nginx.
LJ Vankuiken adds in the comments:
the self-signed flag needs to be set to "true" if the certificate chain presented by your gitlab server cannot be completely verified by the gitlab-shell.
I was able to set the self-signed flag to "false" by adding the signing authority's certificate to the system certificate store.
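As a sketch of that second approach, on Debian/Ubuntu (an assumption; the file name is a placeholder and other distributions use different paths and tools) it would look like:
# copy the signing CA into the local trust store and rebuild it
sudo cp my-ca.crt /usr/local/share/ca-certificates/my-ca.crt
sudo update-ca-certificates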

For what it's worth, in case anyone runs into something similar: I am running GitLab on port 8080, and because gitlab_url in gitlab-shell/config.yml was NOT pointing to port 8080, it was failing with a redirect error (which my server running on port 80 was issuing).
So to summarize: if you access GitLab via http://gitlab.mydomain.com:8080/, make sure gitlab_url points to http://gitlab.mydomain.com:8080/ as well!
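In other words, the line in gitlab-shell/config.yml should carry the port explicitly (host name here is just the example from above):
gitlab_url: "http://gitlab.mydomain.com:8080/"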

Related

Influxdb over SSL connection

I'm a little bit confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. It is an Apache2 server, but for now I am not willing to use it as a webserver to display web pages to clients; I want to use it as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt; indeed, the welcome page https://datavm.bo.cnr.it works properly over an encrypted connection.
Then I followed all the instructions in the docs in order to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, I set the file permissions (not sure about the meaning of this step though), I edited influxdb.conf with https-enabled = true and set the paths for https-certificate and https-private-key (fullchain.pem for both, is it right?). Then, systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
Any help in understanding what I am doing wrong is very appreciated! Thank you
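For reference, the settings described above sit in the [http] section of influxdb.conf; a minimal sketch (using the /etc/ssl paths from the question) would be the following. Note that Let's Encrypt's fullchain.pem contains only the certificate chain, while the private key lives in privkey.pem, so the two settings normally point at different files:
# /etc/influxdb/influxdb.conf (excerpt)
[http]
  https-enabled = true
  https-certificate = "/etc/ssl/fullchain.pem"
  https-private-key = "/etc/ssl/privkey.pem"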
I figured out at least a part of the problem. It was a problem related to permissions on the *.pem files. It looks weird, because if I type the following, as the documentation says, it does not connect:
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I run the second line with 644, everything works perfectly. But this way I'm giving anyone permission to read the private key! I'm not able to figure out this point.
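A likely explanation (an assumption based on the standard packaged install, where influxd runs as the influxdb user): chmod 600 only lets the file's owner read it, so if the key is still owned by root the daemon cannot open it, and relaxing it to 644 merely hides the ownership problem. A sketch of the usual fix:
# give the key to the service user, then 600 is enough
sudo chown influxdb:influxdb /etc/ssl/privkey.pem
sudo chmod 600 /etc/ssl/privkey.pem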
UPDATE
If I put inside /etc/ssl/ symlinks that point to the .pem files living inside /etc/letsencrypt/live/hostname, the connection is refused. Only if I put a copy of the files does the SSL connection start.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
Can anyone help?
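One common pattern for the renewal concern (a sketch, not from the original thread; it assumes certbot, a systemd unit named influxdb, and the influxdb service user): let certbot copy fresh files into /etc/ssl after each renewal instead of symlinking, via an executable deploy hook.
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/influxdb.sh (must be executable: chmod +x)
cp /etc/letsencrypt/live/datavm.bo.cnr.it/fullchain.pem /etc/ssl/fullchain.pem
cp /etc/letsencrypt/live/datavm.bo.cnr.it/privkey.pem /etc/ssl/privkey.pem
chown influxdb:influxdb /etc/ssl/privkey.pem
chmod 600 /etc/ssl/privkey.pem
chmod 644 /etc/ssl/fullchain.pem
systemctl restart influxdb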

Firefox can find certificate, but curl cannot (while tunneling https through ssh)

Background:
I have an app running on port 8080 on the remote server and an HTTPS ingress proxy on port 443 on the same server, which forwards everything to the app on 8080 after handling the SSL.
What I want to do:
I want to communicate with the app over SSL remotely, while not having direct access to this domain (it is on a local network; I can access the server remotely via a different domain).
What I did:
I tunneled port 443 from my remote server with ssh -L 3001:0.0.0.0:443 user@example.com. I then added 127.0.0.1 example.com to my /etc/hosts to make sure that the domain is resolved properly on my system.
Now, what I can do is enter https://example.com:3001/some/thing/ in Firefox and it gets me a proper response from the server, while everything runs over SSL without any problems. I am also able to use curl without checking the certificate: curl --insecure https://example.com:3001/some/thing works fine.
At the same time, the secure curl call curl https://example.com:3001/some/thing fails with the error:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
Just to make sure both are using the same certificates, I used this tool: https://curl.haxx.se/docs/mk-ca-bundle.html to create a ca-bundle.crt from the most recent Firefox certificates and passed it to curl with --cacert ca-bundle.crt. No luck, the same error. (I also tried following another curl tutorial on extracting the certificates from my local Firefox installation; also no luck.)
Question
What is going on? Why is curl's output different from Firefox's even though I seem to use the same certificates? How can I debug this?
Side note
The real reason I am concerned about this is that with normal (local) access to the server, I observed the same behaviour: I could connect to the server through Chrome over HTTPS, but my React Native app could not. I suspect the app uses libcurl under the hood or something similar, and I believe debugging this problem could help me understand what's wrong with the app.
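One way to investigate (a sketch, assuming the openssl CLI is available locally) is to dump the chain the proxy actually presents on the tunneled port; a frequent cause of this exact Firefox-works/curl-fails split is a missing intermediate certificate on the server, because Firefox can fall back on cached or bundled intermediates while curl only trusts what the server sends plus the CA bundle:
# show every certificate the server presents on the tunneled port
openssl s_client -connect 127.0.0.1:3001 -servername example.com -showcerts </dev/null
# verbose curl shows exactly where verification stops
curl -v https://example.com:3001/some/thing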

Let's encrypt SSL certificate on subdomain

I developed an application for a client which I host on a subdomain; the problem is that I don't own the main domain/website. They've added a DNS record to point to the IP on which I host that app. Now I want to request a free, automatic certificate from Let's Encrypt. But when I try the handshake it says:
Getting challenge for subdomain.example.com from acme-server...
Error: http://subdomain.example.com/.well-known/acme-challenge/letsencrypt_**** is not reachable. Aborting the script.
dig output for subdomain.example.com:subdomain.example.com
Please make sure /.well-known alias is setup in WWW server.
Which makes sense, because I don't host that domain on my server. But if I try to generate it without the main domain I get:
You must include your main domain: example.com.
Cannot Execute Your Request
Details
Must include your domain example.com in the LetsEncrypt entries.
So I'm curious how I can set up a certificate without owning the main domain. I tried googling the issue, but I couldn't find any relevant results. Any help would be much appreciated.
First
You don't need to own the domain; you just need to be able to copy a file to the location serving that domain. (It sounds like you're all set there.)
Second
What tool are you using? The error message you gave makes me think the client is misconfigured. The challenge name is usually something like https://example.com/.well-known/acme-challenge/jQqx6qlM8u3wpi88N6lwvFd7SA07oK468mB1x4YIk1g. Compare that to your error:
Error: http://example.com/.well-known/acme-challenge/letsencrypt_example.com is not reachable. Aborting the script.
Third
I'm the author of Greenlock, which is compatible with Let's Encrypt. I'm confident that it will work for you.
Install
# Feel free to read the source first
curl -fsS https://get.greenlock.app/ | bash
Usage with existing webserver:
Let's say that:
You're using Apache or Nginx.
You confirm that ping example.com gives the IP of your server
You're exposing http on port 80 (otherwise verification will fail)
Your website is located in /srv/www/example.com
Your email is jon@example.com (must be a real email address)
You want to store your certificate as /etc/acme/live/example.com/fullchain.pem
This is what the command would look like:
sudo greenlock certonly --webroot \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--root /srv/www/example.com \
--config-dir /etc/acme
If that doesn't work on the first try, then swap --acme-url https://acme-v02.api.letsencrypt.org/directory for --acme-url https://acme-staging-v02.api.letsencrypt.org/directory while you debug. Otherwise your server could become blocked by Let's Encrypt for too many bad requests. Just know that you'll have to delete the certificates from the staging environment and retry with the production URL, since the tool cannot tell which certificates are "production" and which ones are "testing".
The --community-member flag is optional, but will provide me with analytics and allow me to contact you about important or mandatory changes as well as other relevant updates.
After you get the success message you can then use those certificates in your webserver config and restart it.
That will work as a cron job as well. You could run it daily and it will only renew the certificate after about 75 days. You could also add a cron job to send the "update configuration" signal to your webserver (normally HUP or USR1) every few days to cause it to start using the new certificates without even restarting (...or just have it restart); a rough example is sketched below.
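A minimal sketch of those cron entries (assumptions on my part: the webroot command from above, an nginx webserver, and daily renewal attempts at 02:15):
# /etc/cron.d/greenlock (sketch)
15 2 * * * root greenlock certonly --webroot --acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory --agree-tos --email jon@example.com --domains example.com --root /srv/www/example.com --config-dir /etc/acme
# reload nginx a couple of times a week so it picks up renewed certificates
30 2 * * 1,4 root nginx -s reload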
Usage without a web server
If you just want to quickly test without even having a webserver running, this will do it for you:
sudo greenlock certonly --standalone \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--config-dir /etc/acme/
That runs expecting that you DO NOT have a webserver running on port 80, as it will start one temporarily just for the purpose of the certificate.
sudo is required for using port 80 and for writing to root and httpd-owned directories (like /etc and /srv/www). You can run the command as your webserver's user instead if that has the correct permissions.
Use Greenlock as your webserver
We're working on an option to bypass the middleman altogether and simply use greenlock as your webserver, which would probably work great for simple vhosting like it sounds like you're doing. Let me know if that's interesting to you and I'll make sure to update you about it.
Fourth
Let's Encrypt also has an official client called certbot which will likely work just as well, perhaps better, but back in the early days it was easier for me to build my own than to use theirs due to issues which they have long since fixed.
What's important is the subdomain's A record. It should point to the IP address of the machine from which you are trying to request the subdomain's certificate.
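A quick check (assuming the dig utility is installed); the answer should be the public IP of the server that runs the ACME client:
dig +short A subdomain.example.com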

Mattermost TLS issue

I'm having issues enabling TLS in Mattermost. On my server I configured a lot of virtual hosts plus the Mattermost files. Over HTTP everything was working fine.
Today I tried to set up TLS and HTTPS. I followed the instructions at https://docs.mattermost.com/install/config-tls-mattermost.html. Now I get this:
Please notice the error: I'm trying to access domain1.mywebsite.com and the error is "its security certificate is signed by domain2.mywebsite.com". domain2.mywebsite.com is one of the websites configured as virtual hosts in Apache.
I did not configure any virtual host for Mattermost, since I don't think any is needed (and it worked flawlessly without one, and without TLS). But how can I tell Mattermost (or the browser?) that the server of domain2.mywebsite.com is the same as that of domain1.mywebsite.com?
I generated the certificates using Let's Encrypt with the standalone option (sudo certbot certonly --standalone -d domain1.mywebsite.com) and didn't move any files; I just enabled "UseLetsEncrypt": true in the config.json file.
Do you happen to have any idea about how I could fix this?
Thank you
Marco
You'll need to configure TLS on Apache, and you'll need to use a separate certificate for each virtual host.
Here is information that might help you: https://httpd.apache.org/docs/2.4/ssl/ssl_howto.html
Don't configure TLS on Mattermost if TLS is being handled by the proxy.
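A minimal sketch of such an Apache virtual host (assumptions: Mattermost is listening on its default port 8065, mod_ssl/mod_proxy/mod_proxy_http are enabled, and the certificate lives in the usual Let's Encrypt path; Mattermost's WebSocket traffic needs extra ProxyPass rules via mod_proxy_wstunnel, not shown here):
<VirtualHost *:443>
    ServerName domain1.mywebsite.com
    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/domain1.mywebsite.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/domain1.mywebsite.com/privkey.pem
    # hand everything over plain HTTP to the local Mattermost instance
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8065/
    ProxyPassReverse / http://127.0.0.1:8065/
</VirtualHost>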

DNS NXDOMAIN error command certbot

I'm trying to install a Let's Encrypt SSL certificate on my domain and my subdomain.
I was successful installing the SSL certificate on my domain, but I did not succeed on my subdomain.
I used the following command:
certbot certonly --webroot -w /var/www/sub-domain/maxime-mazet.fr/owncloud/ -d cloud.maxime-mazet.fr
/var/www/sub-domain/maxime-mazet.fr/owncloud holds my code.
cloud.maxime-mazet.fr is my subdomain.
My domain maxime-mazet.fr is hosted at OVH.
For cloud.maxime-mazet.fr I have created an A record with the IP of the server.
With my domain (maxime-mazet.fr) there is no error, but with my subdomain (cloud.maxime-mazet.fr) the error is:
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for cloud.maxime-mazet.fr
Using the webroot path /var/www/sub-domain/maxime-mazet.fr/owncloud for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. cloud.maxime-mazet.fr (http-01): urn:acme:error:connection :: The server could not connect to the client to verify the domain :: DNS problem: NXDOMAIN looking up A for cloud.maxime-mazet.fr
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: cloud.maxime-mazet.fr
Type: connection
Detail: DNS problem: NXDOMAIN looking up A for
cloud.maxime-mazet.fr
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A record(s) for that domain
contain(s) the right IP address. Additionally, please check that
your computer has a publicly routable IP address and that no
firewalls are preventing the server from communicating with the
client. If you're using the webroot plugin, you should also verify
that you are serving files from the webroot path you provided.
The next picture shows my panel for the A records of my domain and my subdomain.
Thanks for your help
ns13.ovh.net and dns13.ovh.net do not appear to be authoritative for your domain name, as they do not properly reply to queries on it. You will first need to solve that problem. Ask OVH if they are indeed the correct hosts to use for your domain. Since you seem to have recently changed something on your domain name, you may just need to wait a little for things to settle.
Have a look at https://www.zonemaster.net/ to run tests on your zone. Until they all pass, do not play with Let's Encrypt.
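To check from your side (assuming dig is installed), you can query the listed name servers directly; the first command should return the server IP you configured:
dig @ns13.ovh.net A cloud.maxime-mazet.fr +short
dig +short NS maxime-mazet.fr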
I'm sorry about the screenshot, but ns13 and dns13 are fine; I have a new screenshot with all the entries ;)