`oc cluster up` fails during initial startup - openshift-origin

I am trying out OKD, but it fails for me during the oc cluster up port check step. To put it politely, the debug output is not very verbose. Do you have an idea what to look for?
$ oc cluster up
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
error: a port needed by OpenShift is not available
But the required ports 53 and 8443 are not taken:
sudo netstat -tulpn | grep '\(:8443\|:53\)'
At least netstat returns nothing
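For completeness, the same check with ss (which reads the kernel socket tables directly) can rule out netstat quirks:
sudo ss -tulpn | grep -E ':(53|8443)\b'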
Versions:
$ oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
and
CentOS Linux release 7.6.1810 (Core)
I have not been able to find out how to turn on debugging so that I can see what it actually checks for.

Does the user you are running the command as have enough privileges to open privileged ports (ports <1024) on your host machine?
Try running cluster up as root or with sudo.
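If that doesn't help, a hedged suggestion for the debugging question: oc accepts a global --loglevel flag (higher is more verbose), which should make the preflight checks print what they actually probe:
oc cluster up --loglevel=5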

Yes, I am starting the whole OKD cluster as the root user.

Related

How to resolve peer unverified exception in a secure nifi cluster?

I set up a secured NiFi cluster with TLS certificates provided by the organisation. On accessing the UI, I am getting the error "javax.net.ssl.SSLPeerUnverifiedException: Hostname abc.com not verified: certificate: sha256/abc/abcabc= DN: CN=abc.com, OU=Abc Operations, O=Abc Corporation Limited, C=SG subjectAltNames: [abc.com]". I have referred to the link https://nifi.apache.org/docs/nifi-docs/html/walkthroughs.html#securing-nifi-with-provided-certificates.
Is there anything I missed to enable peer to peer communication while using SSL?
I had the same problem and found a solution in the NiFi TLS Toolkit.
Note: on my cluster, authentication worked correctly; the problem was only in Java's SSL verification.
In short: the problem was indeed in --subjectAlternativeNames.
Generating SSL keys with my own root CA did not work for me. A good instruction (but old): https://community.cloudera.com/t5/Community-Articles/How-to-create-user-generated-keys-for-securing-NiFi/ta-p/245551
CentOS Linux 8
NiFi 1.14.0
nifi-toolkit 1.15.2
My way with NiFi TLS-toolkit:
Download nifi-toolkit-*.tar.gz to a Linux machine (let's say the machine's IP is 0.0.0.1; we need it because this VM will act as the "certificateAuthorityHostname"); the link is on this page:
sudo wget https://dlcdn.apache.org/nifi/1.15.2/nifi-toolkit-1.15.2-bin.tar.gz
Unarchive it
sudo tar -xvf nifi-toolkit-1.15.2-bin.tar.gz
Generate all the keys with one long command:
../security_output: this dir (or any other name) needs to be created before running the main command (it's useful to keep all key files in one place)
sudo ./bin/tls-toolkit.sh standalone -h: this help command explains the args
OU: equals the VM names in my cluster
!!! --subjectAlternativeNames: this is the main reason the error javax.net.ssl.SSLPeerUnverifiedException: Hostname <ip / dns> not verified is raised
-O: this arg overwrites the keys in the folder, be careful
Generate command: sudo ./bin/tls-toolkit.sh standalone --hostnames '0.0.0.1,0.0.0.2,0.0.0.3' -c '0.0.0.1' -C 'CN=0.0.0.1,OU=nifi-prod-cluster-01' -C 'CN=0.0.0.2,OU=nifi-prod-cluster-02' -C 'CN=0.0.0.3,OU=nifi-prod-cluster-03' -O -o ../security_output --subjectAlternativeNames '0.0.0.1,0.0.0.2,0.0.0.3,nifi-prod-cluster-01,nifi-prod-cluster-02,nifi-prod-cluster-03'
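To confirm the SANs actually made it into a generated keystore, a quick keytool check works (the per-host path follows the toolkit's output layout; substitute the keystorePasswd value from the matching generated nifi.properties):
sudo keytool -list -v -keystore ../security_output/0.0.0.1/keystore.jks -storepass '<keystorePasswd>' | grep -A1 SubjectAlternativeName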
After generating the keys, I archive the full security_output dir:
sudo tar -zcvf security_output.tar.gz security_output
And copy this tar/dir to the other VMs of the cluster: to 0.0.0.2 and 0.0.0.3 in my example.
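For the copy itself, plain scp works (the user and target path are placeholders):
sudo scp security_output.tar.gz user@0.0.0.2:/tmp/
sudo scp security_output.tar.gz user@0.0.0.3:/tmp/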
Then we need to move keystore.jks and truststore.jks to the nifi/conf/ directory, next to nifi.properties.
Edit nifi.properties. The key passwords are in security_output/0.0.0.X/nifi.properties. I replace only these params:
nifi.security.autoreload.enabled=false
nifi.security.autoreload.interval=10 secs
nifi.security.keystore=./conf/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=34dgsOBKdS+9DGHIm849ALK3JaNBdd738ddsgjfghb4J
nifi.security.keyPasswd=34dgsOBKdS+9DGHIm849ALK3Jaddsgjfghb4J
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=/n1xI9AjcwutNBdd738uOQeQL5O9ALK3i3KwylEYMW5
nifi.security.user.authorizer=single-user-authorizer
nifi.security.allow.anonymous.authentication=false
nifi.security.user.login.identity.provider=single-user-provider
nifi.security.user.jws.key.rotation.period=PT1H
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
Restart nifi:
sudo service nifi restart && tail -f /opt/nifi/logs/nifi-app.log
Update: maybe you want to set one password for the keys on all machines (it's easier to set up) or set the number of days the keys are valid: https://nifi.apache.org/docs/nifi-docs/html/toolkit-guide.html#standalone
Links:
Useful link for my guide (but old): https://pierrevillard.com/tag/tls-toolkit/
This helped me find a good idea: https://community.cloudera.com/t5/Community-Articles/Using-the-TLS-Toolkit-to-simplify-security/ta-p/247531

Why SSH is not working in kubernetes pods/container?

We have an application which uses SSH to copy artifacts from one node to another. While creating the Docker image (CentOS 8 based), I installed the OpenSSH server and client. When I run the image from the docker command and exec into it, I am able to run the SSH command successfully, and I also see port 22 enabled and listening ($ lsof -i -P -n | grep LISTEN).
But if I start a pod/container using the same image in the Kubernetes cluster, I do not see port 22 enabled and listening inside the container. Even if I try to start sshd from inside the k8s container, it gives me the error below:
Redirecting to /bin/systemctl start sshd.service Failed to get D-Bus connection: Operation not permitted.
Is there any way to start the K8s container with SSH enabled?
There are three things to consider:
Like David said in his comment:
I'd redesign your system to use a communication system that's easier
to set up, like with HTTP calls between pods.
If you put a service in front of your deployment, it is not going to relay any SSH connections. So you have to point to the pods directly, which might be pretty inconvenient.
In case you have missed it: you need to declare port 22 in your deployment template, as sketched below.
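Putting it together: the D-Bus error above happens because there is no systemd init inside the container, so sshd has to run as the container's main (foreground) process instead of as a service. A minimal sketch of such a deployment (the image name is a placeholder; it assumes openssh-server is installed and host keys were generated at image build time, e.g. with ssh-keygen -A):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ssh-enabled
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ssh-enabled
  template:
    metadata:
      labels:
        app: ssh-enabled
    spec:
      containers:
      - name: app
        image: your-registry/centos8-ssh:latest   # placeholder image
        command: ["/usr/sbin/sshd", "-D", "-e"]   # run sshd in the foreground, log to stderr
        ports:
        - containerPort: 22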
Please let me know if that helped.

RabbitMQ accepting connections but closing them before accepting any input

So I just installed the latest version of rabbitmq and I've been trying to get it to work. The server is running and I've restarted it once just to be sure it's a consistent problem.
If I telnet localhost 5672, I get
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
As you can see, the connection is accepted but rabbitmq does not accept any input. The connection is closed immediately. No further information shows up in logs.
rabbitmqctl works without any problems.
This is running on Windows Subsystem for Linux / Ubuntu. I don't have any other options for a local dev environment because I'm on a work computer which is locked down pretty tightly.
I ran into the same issue, using Ubuntu (16.04) as a subsystem on Windows and RabbitMQ 3.7.8. I noticed that when running sudo rabbitmqctl status, the listeners showed the following:
{listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]}
I fixed this issue by creating a RabbitMQ config file and specifying localhost and port 5672.
Here is what I did, step by step.
Using sudo and vim, I created a rabbitmq.conf file, located in
/etc/rabbitmq/
sudo vim /etc/rabbitmq/rabbitmq.conf
I specified localhost (127.0.0.1) and port (5672) for the default TCP listener in the rabbitmq.conf file:
listeners.tcp.default = 127.0.0.1:5672
Restart rabbitmq
sudo service rabbitmq-server stop
then
sudo service rabbitmq-server start
Check sudo rabbitmqctl status and look at the listeners; you should see your new TCP listener with the localhost IP specified:
{listeners,[{clustering,25672,"::"},{amqp,5672,"127.0.0.1"}]}
Here are the config docs from RabbitMQ that may help clarify some of these steps.
Telnet lets you confirm the system is listening and allows incoming connections.
But even an "out of the box" install of RabbitMQ expects credentials for connections.
Run rabbitmqctl list_users to see which users are configured.
If guest is present, the typical creds are guest / guest.
Either install the management plugin (or confirm it is installed),
or script your test; most languages have a package available for connecting to RabbitMQ.
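One more detail that bites people: out of the box, the guest user is only permitted to connect from localhost. If you test from another machine, create a dedicated user first (myuser / mypassword are placeholders):
sudo rabbitmqctl add_user myuser mypassword
sudo rabbitmqctl set_user_tags myuser administrator
sudo rabbitmqctl set_permissions -p / myuser ".*" ".*" ".*"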

Could not connect to Redis at 127.0.0.1:6379: Connection refused with homebrew

I used Homebrew to install Redis, but when I try to ping Redis it shows this error:
Could not connect to Redis at 127.0.0.1:6379: Connection refused
Note:
I tried turning off the firewall and editing the conf file, but I still cannot ping.
I am using macOS Sierra and Homebrew version 1.1.11.
After installing Redis, type this in the terminal:
redis-server
and the Redis server will be started.
I found this question while trying to figure out why I could not connect to redis after starting it via brew services start redis.
tl;dr
Depending on how fresh your machine or install is, you're likely missing a config file or a directory for the Redis defaults.
You need a config file at /usr/local/etc/redis.conf. Without this file redis-server will not start. You can copy over the default config file and modify it from there with
cp /usr/local/etc/redis.conf.default /usr/local/etc/redis.conf
You need /usr/local/var/db/redis/ to exist. You can do this easily with
mkdir -p /usr/local/var/db/redis
Finally just restart redis with brew services restart redis.
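If the fix took, a quick sanity check against the default port should answer with PONG:
redis-cli ping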
How do you find this out!?
I wasted a lot of time trying to figure out if redis wasn't using the defaults through homebrew and what port it was on. Services was misleading because even though redis-server had not actually started, brew services list would still show redis as "started." The best approach is to use brew services --verbose start redis which will show you that the log file is at /usr/local/var/log/redis.log. Looking in there I found the smoking gun(s)
Fatal error, can't open config file '/usr/local/etc/redis.conf'
or
Can't chdir to '/usr/local/var/db/redis/': No such file or directory
Thankfully the log made the solution above obvious.
Can't I just run redis-server?
You sure can. It'll just take up a terminal or interrupt your terminal occasionally if you run redis-server &. Also it will put dump.rdb in whatever directory you run it in (pwd). I got annoyed having to remove the file or ignore it in git so I figured I'd let brew do the work with services.
If after installing you need Redis running all the time, just type in the terminal:
redis-server &
Running redis using upstart on Ubuntu
I've been trying to understand how to set up systems from the ground up on Ubuntu. I just installed Redis onto the box, and here's how I did it and some things to look out for.
To install:
sudo apt-get install redis-server
That will create a redis user and install the init.d script for it. Since upstart is now the replacement for using init.d, I figure I should convert it to run using upstart.
To disable the default init.d script for redis:
sudo update-rc.d redis-server disable
Then create /etc/init/redis-server.conf with the following script:
description "redis server"
start on runlevel [23]
stop on shutdown
exec sudo -u redis /usr/bin/redis-server /etc/redis/redis.conf
respawn
This is the script upstart uses to know what command to run to start the process. The last line also tells upstart to keep trying to respawn it if it dies.
One thing I had to change in /etc/redis/redis.conf is daemonize yes to daemonize no. If you don't change it, redis-server will fork and daemonize itself, and the parent process goes away. When this happens, upstart thinks that the process has died/stopped, and you won't have control over the process from within upstart.
Now you can use the following commands to control your redis-server:
sudo start redis-server
sudo restart redis-server
sudo stop redis-server
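You can also confirm the job state and that Redis answers (standard upstart and Redis commands):
sudo status redis-server
redis-cli ping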
Hope this was helpful!
I solved this issue by running this command:
redis-server --daemonize yes
This worked for me:
sudo service redis-server start
Date: Dec 2021
There are a couple of reasons for this error. I read one article that fixed the issue for me, so I'll just summarize what to check, one by one.
1. Check: Redis server not started
redis-server
Also, to run Redis in the background, the following command can be used:
redis-server --daemonize yes
2. Check: Firewall restriction
sudo ufw status (shows whether the firewall is active or inactive)
sudo ufw enable (the first time you enable it, it might block SSH, so allow port 22 to keep SSH access)
sudo ufw allow 22
sudo ufw allow 6379
3. Check: Resource usage
ps aux | grep redis
4. Check: Config setup restriction
sudo vi /etc/redis/redis.conf
Comment out the following line:
# bind 127.0.0.1 ::1
Note: binding to specific addresses makes it more difficult for malicious actors to make requests or gain access to your server, and commenting it out opens Redis to all interfaces. Make sure you bind to the correct IP address/network.
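Instead of commenting the line out entirely, a more cautious sketch is to bind the specific addresses you need and set a password in the same /etc/redis/redis.conf (the LAN address and password below are placeholders):
bind 127.0.0.1 192.168.1.10
requirepass use-a-long-random-password-here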
Hope it helps someone. For more information, read the following article:
https://bobcares.com/blog/could-not-connect-to-redis-connection-refused/
Here is a better way to connect to your Redis.
First, check the IP address of the Redis server like this:
ps -ef | grep redis
The result is something like: redis 1184 1 0 .... /usr/bin/redis-server 172.x.x.x:6379
Then you can connect to Redis with the -h (hostname) option, like this:
redis-cli -h 172.x.x.x
Try this:
sudo service redis-server restart
For errors connecting to Redis on Apple Silicon (MacBook Pro M1, Dec 2020), you just have to know 2 things:
Running redis-server with sudo will remove the server starting error:
shell% sudo redis-server
To run it as a service, "daemonize" it; this allows it to run in the background:
shell% sudo redis-server --daemonize yes
Verify using the step below:
shell% redis-cli ping
Hope this helps all MacBook Pro M1 users who are worried about the lack of documentation on this.
I was stuck on this for a long time. After a lot of tries, I was able to configure it properly.
There can be different reasons for this error. I am trying to provide the reasons and the solutions to overcome them. First, make sure you have installed redis-server properly.
Port 6379 is not allowed by the ufw firewall.
Solution: type the following command: sudo ufw allow 6379
The issue can be related to the permissions of the redis user. Maybe the redis user doesn't have permission to modify the necessary Redis directories. The redis user should have permissions on the following directories:
/var/lib/redis
/var/log/redis
/run/redis
/etc/redis
To give ownership to the redis user, type the following commands:
sudo chown -R redis:redis /var/lib/redis
sudo chown -R redis:redis /var/log/redis
sudo chown -R redis:redis /run/redis
sudo chown -R redis:redis /etc/redis
Now restart redis-server with the following command:
sudo systemctl restart redis-server
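After the restart, it's worth confirming the service is up and responding:
sudo systemctl status redis-server
redis-cli ping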
Hope this will be helpful for somebody.
First you need to start all the Redis nodes using the command below, one by one for all conf files.
Note: if you are setting up a cluster, you should have 6 nodes; 3 will be masters and 3 will be slaves. redis-cli will automatically select masters and slaves out of the 6 nodes using the --cluster command, as shown in my commands below.
[xxxxx@localhost redis-stable]$ redis-server xxxx.conf
then run
[xxxxx@localhost redis-stable]$ redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 --cluster-replicas 1
The output of the above should look like:
>>> Performing hash slots allocation on 6 nodes...
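To verify the cluster actually formed, you can query any node; port 7000 here matches the example above:
redis-cli -c -p 7000 cluster info
redis-cli -c -p 7000 cluster nodes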
A second way to set everything up automatically:
You can use the utils/create-cluster script to set everything up for you:
starting all nodes, creating the cluster.
You can follow https://redis.io/topics/cluster-tutorial
Thanks
Actually you need to run "redis-server &" after installation to start the service; when you only run "redis-server", the service runs in undetached mode. Emphasis on the "&".
I just had this same problem because I had used improper syntax in my config file. I meant to add:
maxmemory-policy allkeys-lru
to my config file, but instead only added:
allkeys-lru
which evidently prevented Redis from parsing the config file, which in turn prevented me from connecting through the cli. Fixing this syntax allowed me to connect to Redis.
I had that issue with Homebrew on macOS; the problem was some sort of permission missing on the /usr/local/var/log directory (see the issue here).
In order to solve it, I deleted /usr/local/var/log and reinstalled Redis with brew reinstall redis.
In my case, it was the password, which contained some characters like '; after changing it, the server started without problems.
Just like Aaron, in my case brew services list claimed redis was running, but it wasn't. I found the following information in my log file at /usr/local/var/log/redis.log:
4469:C 28 Feb 09:03:56.197 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
4469:C 28 Feb 09:03:56.197 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=4469, just started
4469:C 28 Feb 09:03:56.197 # Configuration loaded
4469:M 28 Feb 09:03:56.198 * Increased maximum number of open files to 10032 (it was originally set to 256).
4469:M 28 Feb 09:03:56.199 # Creating Server TCP listening socket 192.168.161.1:6379: bind: Can't assign requested address
That turns out to be caused by the following configuration:
bind 127.0.0.1 ::1 192.168.161.1
which was necessary to give my VMWare Fusion virtual machine access to the redis server on macOS, the host. However, if the virtual machine wasn't started, this binding failure caused redis not to start up at all. So starting the virtual machine solved the problem.
I was trying to connect to my Redis running in WSL2 from VS Code running on Windows.
I have listed what worked for me, in the order in which I performed these actions:
1) sudo ufw allow 6379
2) Update redis.conf to bind 127.0.0.1 ::1 192.168.1.7
3) sudo service redis-server restart
NOTE: This is the first time I have installed Redis on WSL2, and I have not run a single command yet.
Let me know if it works for you.
Thanks.
Redis for Mac:
1- brew install redis
2- brew services start redis
3- redis-cli ping
$ brew services start redis
$ brew services stop redis
$ brew services restart redis
Launch autostart options:
$ ln -sfv /usr/local/opt/redis/*.plist ~/Library/LaunchAgents
# autostart activate
$ launchctl load ~/Library/LaunchAgents/homebrew.mxcl.redis.plist
# autostart deactivate
$ launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.redis.plist
Redis conf default path: /usr/local/etc/redis.conf
In my case, someone had come along and incorrectly edited the redis.conf file to this:
bind 127.0.0.1 ::1
bind 192.168.1.7
when, it really needed to be this (one line):
bind 127.0.0.1 ::1 192.168.1.7
I am using Ubuntu 18.04.
I just entered this command in the terminal:
sudo systemctl start redis-server
And it is now working, so I think my Redis server was not started; that is why it was showing me the error
Could not connect to Redis at 127.0.0.1:6379: Connection refused.

How can I connect to a Google Compute Engine virtual server with a GUI?

I am testing Google Compute Engine, and I created a VM with Ubuntu OS. When I connect to it by clicking the Connect SSH button, it opens a console window.
Is that the connection you get?
How do I open a real screen with a GUI on it? I don't want the console.
A much better solution from Google themselves:
https://medium.com/google-cloud/linux-gui-on-the-google-cloud-platform-800719ab27c5
You need to forward the X11 session from the VM to your local machine. This has been covered in the Unix and Linux stack site before:
https://unix.stackexchange.com/questions/12755/how-to-forward-x-over-ssh-from-ubuntu-machine
Since you are connecting to a server that is expected to run compute tasks there may well be no X11 server installed on it. You may need to install X11 and similar. You can do that by following the instructions here:
https://help.ubuntu.com/community/ServerGUI
Since I have needed to do this recently, I am going to briefly write up the required changes here:
Configure the Server
$ sudo vim /etc/ssh/sshd_config
Ensure that X11Forwarding yes is present. Restart the SSH daemon if you change the settings (on Ubuntu the service may be called ssh rather than sshd):
$ sudo /etc/init.d/sshd restart
Configure the Client
$ vim ~/.ssh/config
Ensure that ForwardX11 yes is present for the host. For example:
Host example.com
ForwardX11 yes
Forwarding X11
$ ssh -X -C example.com
...
$ gedit example.txt
Trusted X11 Forwarding
http://dailypackage.fedorabook.com/index.php?/archives/48-Wednesday-Why-Trusted-and-Untrusted-X11-Forwarding-with-SSH.html
You may wish to enable trusted forwarding if applications have trouble with untrusted forwarding.
You can enable this permanently by using ForwardX11Trusted yes in the ~/.ssh/config file.
You can enable this for a single connection by using the -Y argument in place of the -X argument.
These instructions are for setting up Ubuntu 16.04 LTS with LXDE (I use SSH port forwarding instead of opening port 5901 in the VM instance firewall)
1. Build a new Ubuntu VM instance using the GCP Console
2. Connect to your instance using the Google cloud shell
gcloud compute --project "project_name" ssh --zone "project_zone" "instance_name"
3. Install the necessary packages
sudo apt update && sudo apt upgrade
sudo apt-get install xorg lxde vnc4server
4. Set up vncserver (you will be asked to provide a password for the vncserver)
vncserver
sudo echo "lxpanel & /usr/bin/lxsession -s LXDE &" >> ~/.vnc/xstartup
5. Reboot your instance (this returns you to the Google cloud shell prompt)
sudo reboot
6. Use the Google cloud shell download-file facility to download the auto-generated private key stored at $HOME/.ssh/google_compute_engine and save it on your local machine*****
cloudshell download-files $HOME/.ssh/google_compute_engine
7. From your local machine, SSH to your VM instance (forwarding port 5901) using your private key (downloaded at step 6)
ssh -L 5901:localhost:5901 -i "google_compute_engine" username@instance_external_ip -v -4
8. Run the vncserver in your VM instance
vncserver -geometry 1280x800
9. In your local machine's Remote Desktop Client (e.g. Remmina), set Server to localhost:5901 and Protocol to VNC
Note 1: to check if the vncserver is working OK, use:
netstat -na | grep '[:.]5901'
tail -f /home/user_id/.vnc/instance-1:1.log
Note 2: to restart the vncserver use:
sudo vncserver -kill :1 && vncserver
***** When you first connect via the Google cloud shell, the public and private keys are auto-generated and stored in the cloud shell instance at $HOME/.ssh/
ls $HOME/.ssh/
google_compute_engine google_compute_engine.pub google_compute_known_hosts
The public key should be added to /home/user_id/.ssh/authorized_keys
in the VM instance (this is done automatically when you first SSH to the VM instance from the Google cloud shell, i.e. in step 2).
You can confirm this in the instance metadata.
Chrome Remote Desktop allows you to remotely access applications with a graphical user interface from a local computer or mobile device. For this approach, you don't need to open firewall ports, and you use your Google Account for authentication and authorization.
Check out this Google tutorial on using it with Compute Engine: https://cloud.google.com/solutions/chrome-desktop-remote-on-compute-engine