cURL: SSL connect error while trying to install kubectl on RHEL 7

I am following the Kubernetes documentation to Install kubectl on Linux on my RHEL 7 server, but I get a
curl: (35) SSL connect error
while running the following command:
curl -kLO https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl
Any pointers to fix this issue would be very helpful so I can move forward.

I have just checked, and it appears that https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl doesn't exist. It looks like you need to add ".exe" at the end of the URL.
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Details>
No such object: kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl
</Details>
</Error>
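For what it's worth, a quick way to check this yourself is a HEAD request against both keys (a sketch; the status lines in the comments are illustrative, not captured output):
# Without the .exe suffix the object is missing (the NoSuchKey error above):
curl -sI https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl | head -1
# Expected (illustrative): HTTP/1.1 404 Not Found
# With the .exe suffix the Windows binary should exist:
curl -sI https://storage.googleapis.com/kubernetes-release/release/v1.7.0/bin/windows/amd64/kubectl.exe | head -1
# Expected (illustrative): HTTP/1.1 200 OK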
The official documentation on how to install kubectl on Linux asks you to download the latest release for Linux with the following command:
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
Running that command works fine:
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 44.5M  100 44.5M    0     0  15.8M      0  0:00:02  0:00:02 --:--:-- 15.8M
Additionally, with the URL you provided you are trying to download kubectl for Windows on RHEL (/bin/windows/amd64/kubectl in your URL).
So you just need to add .exe at the end of kubectl if you want to download it for Windows, or replace windows with linux in the URL :)
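For completeness, a minimal sketch of what to do after the Linux download above (the /usr/local/bin install path is an assumption; adjust to taste):
# Make the downloaded binary executable and put it on the PATH (assumed location)
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
# Verify the client works
kubectl version --client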

Related

Couldn't find the expected Java Key Stores (JKS) files! They are mandatory when encryption via TLS is enabled

I'm trying to get Bitnami Kafka (the Helm chart version) working with TLS certificates, but I am getting the above error in the log file of
my Kafka pod, mykafka-0, when its status becomes ERROR. Here is what I do:
To generate the truststore.jks and keystore.jks, I use a script, kafka-generate-ssl.sh, given in "Enable TLS"
(https://docs.bitnami.com/kubernetes/infrastructure/kafka/administration/enable-tls/).
To create a secret containing the truststore and keystore, I execute the command given on the above mentioned web page, i.e.
kb create secret generic kafka-tls \
--from-file=./truststore/kafka.truststore.jks \
--from-file=./keystore/kafka.keystore.jks
NOTE: I only have 1 broker, so I only have 1 kafka.keystore.jks
I look at the secret using this command: kubectl describe secret kafka-tls. And here is what is output:
Name: kafka-tls
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
kafka.keystore.jks: 5069 bytes
kafka.truststore.jks: 1306 bytes
I start Kafka with the following command:
helm install mykafka ./kafka \
--set auth.interBrokerProtocol=tls \
--set auth.tls.type=jks \
--set auth.tls.existingSecret=kafka-tls \
--set auth.tls.password=mypassword
NOTE: the '--set' flags are those given on the above-mentioned web page.
Maybe 5 minutes later, I check the status of my pods with the command kubectl get pods. Here is the output:
kafka-64fb77b646-mm4kd 0/1 Pending 0 7m34s
mykafka-0 0/1 ERROR 4 7m34s
mykafka-zookeeper-0 1/1 Running 0 7m34s
zookeeper-6f99fcbbb6-sd4vk 0/1 Pending 0 7m34s
I looked at the log file for the pod using this command: kubectl logs mykafka-0. Here is the output:
Couldn't find the expected Java Key Stores (JKS) files! They are mandatory when encryption via TLS is enabled
By the way, my Kubernetes cluster is on Azure.
Rename your keystores to "kafka-X.keystore.jks" where X is the ID of each Kafka broker.
https://github.com/bitnami/charts/blob/8ebbf6e0af566e05e794562d9a4d1e4f73ce1502/bitnami/kafka/values.yaml#L307
Before you create your secret, rename the kafka.keystore.jks file to kafka-0.keystore.jks. This should get you past the error message.
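A minimal sketch of the rename-and-recreate steps, reusing the paths and secret name from the question:
# Broker ID 0, so the keystore must be named kafka-0.keystore.jks
mv ./keystore/kafka.keystore.jks ./keystore/kafka-0.keystore.jks
# Recreate the secret with the renamed file (delete it first if it already exists)
kubectl delete secret kafka-tls --ignore-not-found
kubectl create secret generic kafka-tls \
  --from-file=./truststore/kafka.truststore.jks \
  --from-file=./keystore/kafka-0.keystore.jks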
Please take a look at the documentation here: the keystore filename inside the secret needs the ID of the broker.

gitlab runner - x509: certificate signed by unknown authority

Well, I am trying to run gitlab-runner on my PC, which should connect to our GitLab instance on the server.
I am getting
ERROR: Registering runner... failed runner=XXXXXX status=couldn't execute POST against https://XXXXXXXXXX/api/v4/runners: Post https://XXXXXXXXXX/api/v4/runners: x509: certificate signed by unknown authority
PANIC: Failed to register this runner. Perhaps you are having network problems
I have gone through various pieces of advice, but nothing really changed.
My current setup is a self-signed certificate generated by
wget "https://letsencrypt.org/certs/lets-encrypt-x3-cross-signed.pem.txt" -O "/Users/admin/gitlab-runner-certs/fs-tul-letsencrypt.pem"
(I also tried https://futurestud.io/tutorials/how-to-run-gitlab-with-self-signed-ssl-certificate),
and this script for gitlab-runner registration:
#!/usr/bin/env bash
# tried also without sudo
sudo gitlab-runner register \
--non-interactive \
--registration-token OUR_GITLAB_TOKEN \
--url OUR_GITLAB_HOST_URL \
--tls-ca-file /Users/admin/gitlab-runner-certs/fs-tul-letsencrypt.pem \
--executor docker
And I am still getting that error. Any ideas?
I also did not change anything on the server side. Shouldn't I have to do something there? (I did not find any mention of it, but I'm still asking.)
PS: gitlab-runner x509: certificate signed by unknown authority did not fix my problem
There was a problem on the server side where GitLab was running:
the path to the full-chain certificate was wrong.
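If you want to verify the fix, one way is to inspect the chain the GitLab host actually serves (a sketch with a hypothetical hostname; a correctly configured server presents the full chain and the session summary ends with "Verify return code: 0 (ok)"):
# Show the certificate chain presented by the GitLab server (replace the host)
echo | openssl s_client -connect gitlab.example.com:443 -showcerts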

Docker: how to force graylog web interface over https?

I'm currently struggling to get Graylog working over HTTPS in a Docker environment. I'm using jwilder/nginx-proxy and I have the certificates in place.
When I run:
docker run --name=graylog-prod \
  --link mongo-prod:mongo \
  --link elastic-prod:elasticsearch \
  -e VIRTUAL_PORT=9000 \
  -e VIRTUAL_HOST=test.myserver.com \
  -e GRAYLOG_WEB_ENDPOINT_URI="http://test.myserver.com/api" \
  -e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -d graylog2/server
I get the following error:
We are experiencing problems connecting to the Graylog server running
on http://test.myserver.com:9000/api. Please verify that the server is
healthy and working correctly.
You will be automatically redirected to the previous page once we can
connect to the server.
This is the last response we received from the server:
Error message
Bad request Original Request
GET http://test.myserver.com/api/system/sessions Status code
undefined Full error message
Error: Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
When I go to the URL in the message, I get a reply: {"session_id":null,"username":null,"is_valid":false}
This is the same reply I get when running Graylog without https.
Nothing about this is mentioned in the Docker log file from Graylog.
docker ps:
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                 NAMES
56c9b3b4fc74   graylog2/server   "/docker-entrypoint.s"   5 minutes ago   Up 5 minutes   9000/tcp, 12900/tcp   graylog-prod
When running Docker with the option -p 9000:9000, everything works fine without HTTPS, but as soon as I force it to go over HTTPS I get this error.
Does anyone have an idea what I'm doing wrong here?
Thanks a lot!
Did you try GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" ?
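In other words, the same docker run as in the question with only the endpoint scheme changed (a sketch; all other flags are taken verbatim from the question):
docker run --name=graylog-prod \
  --link mongo-prod:mongo \
  --link elastic-prod:elasticsearch \
  -e VIRTUAL_PORT=9000 \
  -e VIRTUAL_HOST=test.myserver.com \
  -e GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" \
  -e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -d graylog2/server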

Setting up a Docker registry with Letsencrypt certificate

I'm setting up a Docker registry as described here:
https://docs.docker.com/registry/deploying/
I generated a certificate for docker.mydomain.com and started the docker using their command on my server:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
I've started the docker and pointed to certificates I obtained using letsencrypt (https://letsencrypt.org/).
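For reference, a sketch of how the Let's Encrypt files typically map onto the names used above (hypothetical certbot paths; adjust to where your certificates actually live):
# certbot writes fullchain.pem and privkey.pem; the registry command above expects
# domain.crt and domain.key inside ./certs
cp /etc/letsencrypt/live/docker.mydomain.com/fullchain.pem ./certs/domain.crt
cp /etc/letsencrypt/live/docker.mydomain.com/privkey.pem ./certs/domain.key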
Now, when I browse to https://docker.mydomain.com:5000/v2/ I get a page with just '{}', with a green lock (successful secure page request).
But when I try to do a docker login docker.mydomain.com:5000 from a different server, I see an error in the registry container:
TLS handshake error from xxx.xxx.xxx.xxx:51773: remote error: bad certificate
I've tried some different variations in setting up the certificates, and gotten errors like:
remote error: unknown certificate authority
and
tls: first record does not look like a TLS handshake
What am I missing?
Docker seems to not support SNI: https://github.com/docker/docker/issues/9969
Update: Docker should now support SNI.
This means that during the TLS handshake the Docker client does not specify the domain name it is connecting to, so your server presents its default certificate.
The solution could be to change the default certificate of your server to one that is valid for the Docker registry domain.
To check whether your (sub-)domain works with clients that are not SNI-aware, you can use ssllabs.com/ssltest: if you DON'T see the message "This site works only in browsers with SNI support.", then it will work.
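If you prefer the command line, a sketch of the same check with openssl (domain and port taken from the question; -noservername requires a reasonably recent OpenSSL):
# Without SNI -- roughly what an old, non-SNI Docker client saw (the default certificate):
openssl s_client -connect docker.mydomain.com:5000 -noservername </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
# With SNI -- what SNI-aware clients such as browsers see:
openssl s_client -connect docker.mydomain.com:5000 -servername docker.mydomain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject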

Debugging Partial file aka net::ERR_INCOMPLETE_CHUNKED_ENCODING

I have a Dockerized service, so theoretically it should be exactly the same across my two servers. The only difference is that production is running on Digital Ocean with CoreOS stable (835.9.0) and dev is running on my home server under Arch Linux.
Problem: I noticed that when my API returns a lot of results, on production the response seems to be cut short, resulting in the infamous net::ERR_INCOMPLETE_CHUNKED_ENCODING in the browser. I can reproduce this issue like so:
curl -i 'http://greptweet.com/u/kaihendry/grep.php?q=http' >/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 41274    0 41274    0     0  17846      0 --:--:--  0:00:02 --:--:-- 17852
curl: (18) transfer closed with outstanding read data remaining
However, it works fine on my home server:
curl -i 'http://gt.dabase.com/u/kaihendry/grep.php?q=http' >/dev/null
I am waiting to hear back from Digital Ocean. Is there anything else I might have missed? Content length? Compression?
The answer was actually in my error log if I cared to look closely:
[crit] 14#0: *3888 open() "/var/lib/nginx/tmp/fastcgi/2/03/0000000032" failed (13: Permission denied) while reading upstream, client:...
The fix was chmod -R 755 /var/lib/nginx.
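For context, a short sketch of why only large responses failed and how to confirm the cause (paths taken from the log line above):
# nginx spools FastCGI responses larger than its in-memory buffers to disk,
# which is why only large API results triggered the truncation.
# Check the ownership and permissions of the temp directory:
ls -ld /var/lib/nginx /var/lib/nginx/tmp /var/lib/nginx/tmp/fastcgi
# Apply the fix from the answer:
chmod -R 755 /var/lib/nginx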
This serverfault question is also related.