GitLab update: curl returns SSL error 35

I am trying to update my GitLab installation from 7.7.2.
When I run the following command, nothing downloads:
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
And I get this error:
* Unknown SSL protocol error in connection to packages.gitlab.com:443
  0     0    0     0    0     0      0      0 --:--:--  0:02:00 --:--:--     0
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to packages.gitlab.com:443
curl is unable to connect to packagecloud.io over TLS when running:
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/config_file.list?os=Ubuntu&dist=trusty&name=git.curuba2.fr&source=script
This is usually due to one of two things:
1.) Missing CA root certificates (make sure the ca-certificates package is installed)
2.) An old version of libssl. Try upgrading libssl on your system to a more recent version
My Ubuntu Trusty is up to date, I have ca-certificates installed, and I have also run update-ca-certificates.
I have no idea what's wrong. I need to migrate my server: I installed GitLab properly on the new one, but I can't update the old one...
[EDIT]
I also tried with -k (insecure), with no luck...

I ran into the same problem while trying to install the runner via a non-HTTPS proxy.
I tried adding -x [proxy] --insecure to the command, but it still failed.
I decided to look at the script itself and realised the issue is with the curl calls inside the script.
I updated the calls I could find in a local copy of script.deb.sh to include -x [proxy] --insecure, then executed it with sudo ./script.deb.sh, and it worked.
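As a rough sketch of that workaround (the proxy URL below is a placeholder, and the blanket sed rewrite assumes every occurrence of "curl " in the script is a call worth patching, so review the patched file before running it):
# Fetch the script through the proxy instead of piping it straight into bash
curl -x http://proxy.example.com:3128 --insecure -O https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh
# Prepend the proxy options to every curl call inside the script
sed -i 's|curl |curl -x http://proxy.example.com:3128 --insecure |g' script.deb.sh
sudo bash ./script.deb.sh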

That's more of a workaround than an answer.
I finally downgraded my future server to 7.7.2, restored my backup there, and upgraded back to 7.12.0.
Here are the commands I ran on the future server:
sudo gitlab-ctl stop unicorn
sudo gitlab-ctl stop sidekiq
wget https://downloads-packages.s3.amazonaws.com/ubuntu-14.04/gitlab_7.7.2-omnibus.5.4.2.ci-1_amd64.deb
sudo dpkg -r gitlab-ce
sudo dpkg -i git*.deb
sudo gitlab-ctl reconfigure
cd /var/opt/gitlab/backups/ # This is where backups should be located
sudo gitlab-rake gitlab:backup:restore BACKUP=1435537802
sudo gitlab-ctl start unicorn
sudo gitlab-ctl start sidekiq
sudo gitlab-ctl status
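# With the restore verified, upgrade back to the latest package (7.12.0 at the time)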
sudo apt-get update
sudo apt-get install gitlab-ce

Related

Error when running API call in R using comtradr package

I get this error when I try to run an API call using ct_search() from the comtradr package in R.
Error in curl::curl_fetch_memory(url, handle = handle) :
SSL certificate problem: certificate has expired
Any ideas?
You haven't given enough details, but it could be related to this:
https://support.sectigo.com/articles/Knowledge/Sectigo-AddTrust-External-CA-Root-Expiring-May-30-2020
If you are running curl from a Linux machine, you can do the following:
$ sudo vi /etc/ca-certificates.conf
Add an exclamation point in front of the line that says "mozilla/AddTrust_External_Root.crt" and save the file, then:
$ sudo apt update
$ sudo apt install ca-certificates
$ sudo update-ca-certificates -f -v
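For reference, a leading exclamation mark in /etc/ca-certificates.conf tells update-ca-certificates to deselect that certificate, so after the edit the line should read:
!mozilla/AddTrust_External_Root.crt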

server certificate verification failed while installing Kubernetes on Ubuntu 16.04

I'm setting up a Kubernetes cluster and, as part of that, I ran the following command (mentioned in the official docs: https://kubernetes.io/docs/tasks/tools/install-kubectl/):
sudo apt-get update && sudo apt-get install -y apt-transport-https
However, it fails with the following error:
Err:3 https://packages.cloud.google.com/apt kubernetes-xenial/main amd64 Packages
server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
Now, I fetch the certificate with this command:
ex +'/BEGIN CERTIFICATE/,/END CERTIFICATE/p' <(echo | openssl s_client -showcerts -connect packages.cloud.google.com:443) -scq > kubecertificate.crt
I get the following response:
verify error:num=20:unable to get local issuer certificate
DONE
But since I see content inside my kubecertificate.crt file, I go ahead and copy the certificate into the /usr/local/share/ca-certificates/ directory.
Then I run:
update-ca-certificates
After updating my CA certificates bundle, I re-run the first command mentioned.
It again fails with the server certificate verification failed error.
Please help me understand where I am going wrong. Is it because I'm unable to get the local issuer certificate?
Are you using an i386 image, or is there some firewall involved? If it is the 64-bit version of Xenial, then it must be some kind of system issue.
Take a look at this case. In particular, I would check the current system time with date -R and apt-get install ntp, as advised by @davidthings, as I remember having a similar problem. There are also a lot of different solutions listed in the linked case which could help; check which one is applicable for you, and update the question if you succeed.
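For example (a minimal sketch; the ntp package is one option on Ubuntu 16.04, systemd-timesyncd is another):
date -R                       # check the current system time and timezone
sudo apt-get update
sudo apt-get install -y ntp   # keep the clock in sync from now on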
After that, you can try this to download kubectl, kubelet and kubeadm (or edit it accordingly if you want just one):
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl

Syntax error in Apache after installing Let's Encrypt

After installing a Let's Encrypt SSL certificate, I get this error from my Apache server:
AH00526: Syntax error on line 46 of
/opt/bitnami/apache2/conf/bitnami/bitnami.conf: SSLCertificateFile:
file '/opt/bitnami/apache2/conf/server.crt' does not exist or is empty
apache config test fails, aborting.
Kindly help me.
Bitnami Engineer here.
The section of our documentation about how to generate and configure a Let's Encrypt certificate is this one:
https://docs.bitnami.com/general/how-to/generate-install-lets-encrypt-ssl/
As the guide mentions, you need to:
1. Generate the certificates with the Lego tool
sudo /opt/bitnami/ctlscript.sh stop
sudo lego --email="EMAIL-ADDRESS" --domains="DOMAIN" --path="/etc/lego" run
2. Link the certificates to the files that Apache uses
sudo mv /opt/bitnami/apache2/conf/server.crt /opt/bitnami/apache2/conf/server.crt.old
sudo mv /opt/bitnami/apache2/conf/server.key /opt/bitnami/apache2/conf/server.key.old
sudo mv /opt/bitnami/apache2/conf/server.csr /opt/bitnami/apache2/conf/server.csr.old
sudo ln -fs /etc/lego/certificates/DOMAIN.key /opt/bitnami/apache2/conf/server.key
sudo ln -fs /etc/lego/certificates/DOMAIN.crt /opt/bitnami/apache2/conf/server.crt
sudo chown root:root /opt/bitnami/apache2/conf/server*
sudo chmod 600 /opt/bitnami/apache2/conf/server*
It's important that you replace DOMAIN with the domain you set when running that command. If not, the links will point to non-existing files and Apache will fail.
You can verify that the links point to the proper files by running this command:
ls -la /opt/bitnami/apache2/conf/server*
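If the links are set up correctly, the listing should show symlinks along these lines (DOMAIN stands for your actual domain; owner, dates and sizes will differ):
server.crt -> /etc/lego/certificates/DOMAIN.crt
server.key -> /etc/lego/certificates/DOMAIN.key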
If the file a link points to doesn't exist, run the commands of step 2 again after ensuring that the certificate files exist:
sudo ls -la /etc/lego/certificates
After that, restart the services again
sudo /opt/bitnami/ctlscript.sh start

Add trusted CA to Debian/Ubuntu image

I'm trying to deploy a CA certificate as a trusted root certificate in a Debian/Node.js container, as described in https://askubuntu.com/a/94861/88763 or http://blog.bigon.be/2014/03/22/add-a-new-ca-certificate-to-the-certificates-stash-in-debian/, but it fails for no apparent reason. My Dockerfile:
# (also tried with buildpack-deps:jessie and node:5)
FROM debian:jessie
RUN apt-get update -y && \
apt-get install ca-certificates netcat strace wget -y
ADD rootCa.pem /usr/local/share/ca-certificates/rootCa.crt
RUN update-ca-certificates --verbose
# just to keep the container running
CMD ["netcat", "-l", "12345"]
When building the container, it actually tells me a certificate was added (1 added, 0 removed; done.). Nonetheless, when I try to use the root CA with wget, it is not trusted:
$ sudo docker exec -it cleanslatehg_catests_1 wget https://foo.v3.testing
converted 'https://foo.v3.testing' (ANSI_X3.4-1968) -> 'https://foo.v3.testing' (UTF-8)
--2016-02-02 15:11:33-- https://foo.v3.testing/
Resolving foo.v3.testing (foo.v3.testing)... 172.19.0.7
Connecting to foo.v3.testing (foo.v3.testing)|172.19.0.7|:443... connected.
ERROR: The certificate of 'foo.v3.testing' is not trusted.
Using the Ubuntu base image, I can access https://foo.v3.testing successfully:
FROM ubuntu
RUN apt-get update -y && \
apt-get install ca-certificates netcat strace wget -y
ADD rootCa.pem /usr/local/share/ca-certificates/rootCa.crt
RUN update-ca-certificates --verbose
CMD ["netcat", "-l", "12345"]
$ sudo docker exec -it cleanslatehg_catests_1 wget https://foo.v3.testing
--2016-02-02 15:23:17-- https://foo.v3.testing/
Resolving foo.v3.testing (foo.v3.testing)... 172.19.0.7
Connecting to foo.v3.testing (foo.v3.testing)|172.19.0.7|:443... connected.
HTTP request sent, awaiting response... 200 OK
[…]
2016-02-02 15:23:17 (33.9 MB/s) - 'index.html' saved [170/170]
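For anyone debugging the same situation, two quick checks inside the running container can confirm whether the CA actually made it into the bundle (a sketch assuming the Debian default paths and the container name from the example above):
# update-ca-certificates should have created this symlink for rootCa.crt
sudo docker exec -it cleanslatehg_catests_1 ls -l /etc/ssl/certs/rootCa.pem
# and appended the certificate to the concatenated bundle
sudo docker exec -it cleanslatehg_catests_1 grep -c 'BEGIN CERTIFICATE' /etc/ssl/certs/ca-certificates.crt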

Apache2 error: see "systemctl status apache2.service" and "journalctl -xe"

A while ago I added a new domain on localhost and then left it for a few weeks. Now, when I try to start Apache with /etc/init.d/apache2 start, I get this error:
[....] Starting apache2 (via systemctl): apache2.serviceJob for apache2.service failed. See "systemctl status apache2.service" and "journalctl -xe" for details.
failed!
I tried reinstalling apache2, but it still doesn't work.
I just ran these two lines and it worked. Two web servers cannot be active on the same port at the same time; this holds for Apache & nginx alike.
If journalctl -xe reports that error, use this:
sudo apt-get install psmisc
sudo lsof -t -i tcp:80 -s tcp:listen | sudo xargs kill
A virtual host configuration might cause this error.
I solved this same problem by configuring my virtual host .conf files properly.
I had created a virtual host and then removed the example.conf file from /etc/apache2/sites-available/, but I didn't delete the example.conf file from /etc/apache2/sites-enabled/, and for this reason I was getting this error.
Once I removed the example.conf file from both folders (../sites-enabled and ../sites-available), the issue was solved.
If you tried to set up any virtual host recently, then try this solution.
Best of luck.
Kill the process running on the port. Hopefully it will work!
sudo apt-get install psmisc                              # provides the fuser tool
sudo fuser 80/tcp                                        # show the PID listening on port 80
sudo lsof -i tcp:80                                      # more detail about that process
sudo lsof -i tcp:80 -s tcp:listen                        # restrict to the listening socket
sudo lsof -t -i tcp:80 -s tcp:listen | sudo xargs kill   # kill it
Open the main Apache config:
sudo nano /etc/apache2/apache2.conf
Remove this line:
Include /etc/phpmyadmin/apache.conf
Then start (or restart) Apache:
sudo service apache2 restart
This problem may be the result of some missing Apache configuration files. One of the solutions would be to purge the apache2 package.
You can type:
sudo apt-get purge apache2
Then reinstall apache2 by typing:
sudo apt-get install apache2
As stated in the error message, we just have to execute:
systemctl status apache2.service
or
journalctl -xe
This will give you more detail about the error (the offending line, a misspelled command, a module not included in the server configuration, ...).
For example, you can see the detail Invalid command 'SSLEngine', perhaps misspelled or defined by a module not included in the server configuration ==> you then need to execute a2enmod ssl, and then service apache2 restart.
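In command form (straight from the case above):
sudo a2enmod ssl               # enable the SSL module
sudo service apache2 restart   # restart so the module is loaded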
Also note the difference between the service apache2 reload and service apache2 restart commands: in case of persisting errors, execute service apache2 restart, and then journalctl -xe again.
Type:
sudo netstat -pant
and check whether port 80 is already in use. If it is, stop the service holding it:
sudo service 'service_name' stop
and then start Apache:
sudo service apache2 start
This problem occurs because some configuration files have been deleted.
You can use the following command to replace configuration files that have been deleted, without purging the package:
sudo apt-get -o DPkg::Options::="--force-confmiss" --reinstall install apache2
Execute sudo service apache2 status and check the result; Apache might be trying to bind to a port that is already in use.