I'm running the Omnibus version of GitLab in Docker.
I've edited my gitlab.rb file to enable HTTPS by prefixing external_url with https://. This seems to work well enough. However, when I also add my port to the URL:
external_url 'https://www.example.com:12345'
my browser shows me a "connection refused" error. Why is this?
If you want to change the default port where GitLab is running, you have to put this in your gitlab.rb file:
gitlab_rails['gitlab_host'] = 'example.com'
gitlab_rails['gitlab_port'] = 12345
gitlab_rails['gitlab_https'] = true
After you set these parameters, you'll have to run gitlab-ctl reconfigure as root.
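Since this GitLab runs in Docker, also make sure the chosen port is actually published from the container to the host; otherwise the browser gets "connection refused" no matter what gitlab.rb says. A minimal sketch, assuming the official gitlab/gitlab-ce image (add your usual volume mounts):

docker run -d --hostname www.example.com \
    -p 12345:12345 \
    gitlab/gitlab-ce:latest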
Also take a look at /opt/gitlab/embedded/service/gitlab-shell/config.yml: there you can find useful parameters such as the SSL certificate paths:
http_settings:
  #user: someone
  #password: somepass
  ca_file: /etc/ssl/cert.pem
  ca_path: /etc/pki/tls/certs
  self_signed_cert: false
I have been trying to access my local GitLab server over HTTPS by creating root and website certificates.
I used the guide below, substituting GitLab in place of the Node.js application:
https://www.section.io/engineering-education/how-to-get-ssl-https-for-localhost/
I changed the configuration in my gitlab.rb file to:
external_url "https://gitlab.mydomain.com"
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = "/etc/ssl/mydomain/gitlab.crt"
nginx['ssl_certificate_key'] = "/etc/ssl/mydomain/gitlab.key"
and reconfigured GitLab inside my Docker container.
I also imported my root PEM (CA.pem) into the browser, but it still shows the connection as not secure.
Can you please help me get my GitLab working over an HTTPS connection?
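As a debugging step, it can help to check which certificate the server actually presents and whether it chains to the imported CA. A sketch using openssl, with the hostname and file names taken from the question above:

echo | openssl s_client -connect gitlab.mydomain.com:443 -CAfile CA.pem 2>/dev/null | openssl x509 -noout -subject -issuer -dates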
I am trying to set up a self-hosted GitLab instance. Everything works except when I try to create an HTTPS connection using Let's Encrypt. I get the following error when trying to reconfigure the GitLab instance:
There was an error running gitlab-ctl reconfigure:
letsencrypt_certificate[gitlab.***.org] (letsencrypt::http_authorization line 6) had an error: Acme::Client::Error::AccountDoesNotExist: acme_certificate[staging] (/opt/gitlab/embedded/cookbooks/cache/cookbooks/letsencrypt/resources/certificate.rb line 41) had an error: Acme::Client::Error::AccountDoesNotExist: No account exists with the provided key
My external_url is https://gitlab.***.org, and on my network I have set up port forwarding for both ports 80 and 443. I also pointed the DNS at my IP; this works, as the site is reachable when not secured.
I hope someone recognizes the error; I looked all over and didn't see it pop up anywhere.
Best Regards
I had the same problem while I tried to change the URL of my GitLab.
I solved this issue thanks to https://gbe0.com/posts/linux/server/gitlab-acme-account-does-not-exist/, by deactivating the old ACME private key and then reloading the GitLab config:
sudo mv /etc/acme/account_private_key.pem /etc/acme/account_private_key.pem.backup
sudo gitlab-ctl reconfigure
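After the reconfigure, you can confirm that a fresh Let's Encrypt certificate was issued (a quick check; substitute your real hostname for the placeholder):

echo | openssl s_client -connect gitlab.example.org:443 -servername gitlab.example.org 2>/dev/null | openssl x509 -noout -issuer -dates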
Hello, I configured a GitLab server on an OL7 VM. I can reach the HTTP page without problems, but when I use my self-signed SSL certificate, generated using the method offered here, I can't reach my page and I get a timeout error. My configuration is simple and I have already tried different variants; could someone tell me how to configure this? Note that I don't want to use Let's Encrypt for this.
This is an example of my gitlab.rb:
external_url 'http://gitlab.icw19.lab'
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.icw19.lab.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.icw19.lab.key"
Your external_url value should include https:// to indicate that you wish to use SSL on port 443. GitLab will then listen on 443 and configure NGINX to use your SSL certificates.
external_url 'https://gitlab.icw19.lab'
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.icw19.lab.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.icw19.lab.key"
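Then reconfigure and check that NGINX answers on port 443. The -k flag makes curl skip certificate verification, which is needed here because the certificate is self-signed:

sudo gitlab-ctl reconfigure
curl -vk https://gitlab.icw19.lab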
I used certbot to generate a Let's Encrypt certificate for my website, but Yaws gives me an SSL accept failed: timeout error when I try to connect to it (after it times out, of course). Interestingly, it works when I redirect example.com to the local IP address of the server in the hosts file on my machine and connect to example.com:8080, but not when I connect to example.com without editing the hosts file, or when I connect from my phone over 4G. Here's my web server's configuration file (it is the only configuration file in conf.d):
<server www.example.com>
        port = 8080
        listen = 0.0.0.0
        docroot = /usr/share/yaws
        <ssl>
                keyfile = /etc/letsencrypt/live/example.com/privkey.pem
                certfile = /etc/letsencrypt/live/example.com/fullchain.pem
        </ssl>
</server>
I made sure that the keyfile and the certificate are both readable by the yaws user. Next to the keyfiles is a README that contains the following:
`privkey.pem` : the private key for your certificate.
`fullchain.pem`: the certificate file used in most server software.
`chain.pem` : used for OCSP stapling in Nginx >=1.3.7.
`cert.pem` : will break many server configurations, and should not be used
without reading further documentation (see link below).
We recommend not moving these files. For more information, see the Certbot
User Guide at https://certbot.eff.org/docs/using.html#where-are-my-certificates.
So I'm relatively sure I've used the right file (the other ones gave me errors like badmatch and {tls_alert,"decrypt error"}). I also tried trivial things like writing https:// before the URL, but that didn't fix the issue. Also, everything works fine when the server is running without SSL. The version of Erlang running on my server is Erlang/OTP 19. In case it's unclear, the domain isn't actually example.com.
Also, example.com is redirected via CNAME to examplecom.duckdns.org, if that matters.
UPDATE:
My server was listening on port 8080, which was forwarded from external port 80, for HTTPS connections, while the default HTTPS port is 443. My other mistake was connecting to http://example.com instead of https://example.com. Forwarding external port 443 to internal port 8443 and configuring Yaws to listen on port 8443 fixed everything.
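For reference, the working configuration would then look roughly like this (same certificate paths as before, only the port changed per the update):

<server www.example.com>
        port = 8443
        listen = 0.0.0.0
        docroot = /usr/share/yaws
        <ssl>
                keyfile = /etc/letsencrypt/live/example.com/privkey.pem
                certfile = /etc/letsencrypt/live/example.com/fullchain.pem
        </ssl>
</server>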
Just to be sure I understand: when you do something like curl -v https://example.com:8080, you get a timeout, is that it? (Here the https protocol and port 8080 are both mandatory, of course.)
An SSL timeout during accept can be triggered when an unencrypted request is received on an SSL vhost.
Could you also provide the output of the following command:
echo -e "HEAD / HTTP/1.0\r\n\r\n" | openssl s_client -connect mysite.com:8080 -ign_eof
And finally, which version of Yaws are you running, and on which OS?
I have an LDAP server + Kerberos setup in a CentOS VM (running via a boot2docker VM), and I am trying to use them for my web application's authentication (from the host, my MacBook).
For authentication, I need to use the GSSAPI mechanism, not a simple bind. The simple bind is working perfectly, but the GSSAPI-based approach is not.
I get the following error whenever I try the ldapwhoami command (I ran kinit before running ldapwhoami to make sure I have a valid Kerberos TGT):
ldap_sasl_interactive_bind_s: Local error (-2)
additional info: SASL(-1): generic failure: GSSAPI Error: Miscellaneous failure (see text (unable to reach any KDC in realm DEV.EXAMPLE.COM, tried 1 KDC)
Please note that the LDAP and Kerberos servers themselves are working perfectly: I tested them with ldapsearch and ldapwhoami inside the CentOS VM where the LDAP + Kerberos setup lives, and everything works fine there with proper output.
I get the above error only when I try the same command from my laptop (the client).
Note: I even created a host principal (host/mymacbook.dev@DEV.EXAMPLE.COM) from my laptop and added it to my local krb5.keytab file using kadmin.
Below are my client-side configurations.
/etc/krb5.conf file on the client (MacBook):
[libdefaults]
    default_realm = DEV.EXAMPLE.COM
    ticket_lifetime = 24000
    dns_lookup_realm = false
    dns_lookup_kdc = false

[realms]
    DEV.EXAMPLE.COM = {
        kdc = d4dc7089282c
        admin_server = krb.example.com
    }

[domain_realm]
    .dev.example.com = DEV.EXAMPLE.COM
    dev.example.com = DEV.EXAMPLE.COM
    .example.com = DEV.EXAMPLE.COM
    example.com = DEV.EXAMPLE.COM

[appdefaults]
    pam = {
        debug = false
        ticket_lifetime = 36000
        renew_lifetime = 36000
        forwardable = true
        krb4_convert = false
    }

[logging]
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmin.log
/etc/hosts file on the client (MacBook):
127.0.0.1 localhost
192.168.59.3 mymacbook.dev
255.255.255.255 broadcasthost
::1 localhost
192.168.59.103 ldapserver.example.com
192.168.59.103 d4dc7089282c
192.168.59.103 krb.example.com
192.168.59.103 is my boot2docker VM IP, and I am forwarding all the default LDAP and Kerberos ports (88, 389, 464 & 749) from the boot2docker VM to the Docker container.
Any idea why I am getting this error? Is it related to DNS, or to something else? Any suggestions?
On macOS the default client does not fall back to TCP.
In your krb5.conf, prefix your kdc entry with tcp/ to force the client to use TCP if your network blocks UDP traffic (as some network admins do):
kdc = tcp/ds01.int.domain.com:88
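In context, using the realm from the question (the KDC hostname and port are placeholders to adapt):

[realms]
    DEV.EXAMPLE.COM = {
        kdc = tcp/krb.example.com:88
    }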
You need several things to make a containerized KDC reachable from the outside.
Let's assume you are using port 88, as that is the default, and let's also assume your image is called docker-kdc.
1. Make sure your port 88 is exposed.
EXPOSE 88
2. Make sure your KDC daemon listens on that port. For the sake of this example, I am simply using the KDC as the entrypoint; you should be able to extrapolate if that doesn't apply to your specific setup.
ENTRYPOINT ["/usr/lib/heimdal-servers/kdc", "--config-file=/etc/heimdal-kdc/kdc.conf", "-P 88"]
3. When running the container, I am forwarding the port towards 48088. Note that the KDC uses both TCP and UDP.
docker run -d -h kdc --name kdc -p 48088:88/udp -p 48088:88 docker-kdc
From this point on, your KDC should be reachable from within the host system.
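A quick way to verify the TCP side from the host, assuming netcat is installed:

nc -zv 127.0.0.1 48088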
=== OSX Only ===
4. Now given that you are using OSX (boot2docker -> VirtualBox), you will also need to set up port forwarding towards your OSX environment.
VBoxManage controlvm boot2docker-vm natpf1 "48088/tcp,tcp,127.0.0.1,48088,,48088"
VBoxManage controlvm boot2docker-vm natpf1 "48088/udp,udp,127.0.0.1,48088,,48088"
5. Get the IP address of your docker container if needed.
When using plain docker (on linux), you can simply use the loopback 127.0.0.1.
When using boot2docker (on OSX), you will get that using: boot2docker ip
6. Prepare a minimal krb5.conf that makes use of the KDC. For the sake of this example, I am using a realm called EXAMPLE.COM on the domain example.com.
Note that you will have to replace IP with the result of step 5.
[libdefaults]
    default_realm = EXAMPLE.COM
    noaddresses = true

[realms]
    EXAMPLE.COM = {
        kdc = IP:48088
        admin_server = IP:48088
    }

[domain_realm]
    example.com = EXAMPLE.COM
    .example.com = EXAMPLE.COM
7. Now go ahead and test that configuration.
export KRB5_CONFIG=PATH_TO_THE_KRB5.CONF_FILE_FROM_STEP_6
kinit test/foo.example.com@EXAMPLE.COM
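If the kinit succeeds, listing the ticket cache should show the freshly issued TGT for EXAMPLE.COM:

klist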
Since I had to do this for a project of mine, I packed it all into a little script that might be helpful for your further research:
https://github.com/tillt/docker-kdc
Ensure that the krb5.conf file is in the /etc directory. I had the same issue with no firewall problems in play, yet I was still getting the same error. Finally, I was able to fix it by moving the krb5.conf file into the /etc directory.
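Alternatively, both MIT and Heimdal Kerberos let a client use a configuration file outside /etc via the KRB5_CONFIG environment variable; a small sketch with placeholder paths:

export KRB5_CONFIG=/path/to/krb5.conf
kinit user@DEV.EXAMPLE.COM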