Add nodes in OpenShift Origin

I am running an OpenShift Origin All-in-One setup using the available binaries. Is it possible to add multiple nodes to this existing installation?
What are the prerequisites for this? Do I need to set up SSH connections between the hosts?
Please let me know how, or whether there is a link available for this.
Thanks a lot!

Yes, you can add additional nodes. Check this link.
Basically, you export the config for the new node, move it over to that node, and start the node using that node-config.yaml.
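Roughly, on an Origin 3.x install the flow looks like the sketch below; the hostname, IP, and directories are placeholders, and flag names may differ slightly between Origin versions:

# On the master: generate a config bundle for the new node (names/IP are placeholders)
oc adm create-node-config --node-dir=/tmp/node-newnode --node=newnode.example.com --hostnames=newnode.example.com,192.168.1.20

# Copy the generated config to the new node
scp -r /tmp/node-newnode newnode.example.com:/etc/origin/node/

# On the new node: start the node service with that config
openshift start node --config=/etc/origin/node/node-config.yaml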

Related

How to configure Redis cache locally?

I have implemented Redis cache with a .NET Core 2.1 application. Now the issue is that I only have the development connection string. I want to configure and test Redis cache somehow on my local PC. I have read somewhere that it is possible using Chocolatey. Can anybody refer me to a link?
PS: When I tried to hit the Redis cache on the development server over VPN, I got a popup asking me to select a "ResultBox.cs" file. So I created a new ResultBox.cs file and gave it that path, but when I call rediscache.Get() it opens ResultBox.cs and then nothing happens. Can anybody tell me what ResultBox.cs is for?
I have found a way to configure Redis locally using Chocolatey. Use this link. If you face MISCONF issues while testing with redis-cli, this link will be helpful.
You can also run a local Docker Redis image. See this and this for reference.
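For completeness, two common ways to get a local Redis to test against; the package and image names below are the usual defaults, so verify them for your environment:

# Option 1: Chocolatey on Windows (package name may vary)
choco install redis-64
redis-server

# Option 2: Docker (official redis image)
docker run -d --name local-redis -p 6379:6379 redis
docker exec -it local-redis redis-cli ping   # should answer PONG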

OpenShift Origin Offline/Disconnected Installation

I want to perform a containerized OpenShift Origin installation (advanced installation) on a machine without internet access.
I referred to this URL: https://docs.openshift.com/enterprise/3.1/install_config/install/disconnected_install.html
But the install still goes out to docker.io and registry.*.redhat.com.
Please suggest a possible approach to achieve this.
There is a way to accomplish this install. The updated documentation does list the steps.
Let me know if you are still looking for the steps.
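The usual pattern from the disconnected-install docs, sketched below with example image names and tags only: pull the required images on an internet-connected host, save them to an archive, move the archive over, and load it on the disconnected host (or push the images into a local registry and point the installer inventory at it).

# On a connected host (image name/tag are examples only)
docker pull docker.io/openshift/origin:v3.6.0
docker save -o origin-images.tar docker.io/openshift/origin:v3.6.0

# Transfer origin-images.tar to the disconnected host, then:
docker load -i origin-images.tar

# Optionally push into a local registry and reference it from the
# Ansible inventory (e.g. openshift_docker_additional_registries).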

Restart Kubernetes API server with different options

I'm pretty new to Kubernetes and clusters, so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm, following this guide. I ran into some issues, but it all worked in the end. Now I want to install the Web UI (Dashboard). To do so, I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I read the authentication page of the documentation, and I decided to add authentication via a static password file. To do so, I have to append the option --basic-auth-file=SOMEFILE to the API server.
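For reference, the file passed to --basic-auth-file is a CSV with at least three columns, password,user,uid, plus an optional fourth column holding a quoted list of group names, for example:

adminpassword,admin,1
otherpassword,jane,2,"group1,group2"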
When I do ps -aux | grep kube-apiserver, this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
Where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started again when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file and contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after changing the file, kubectl stopped working and the API was down).
After the reboot the whole system was working again.
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the keys from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui.
I just found this for a similar use case, where the API server was crashing after adding an option with a file path.
I was able to solve it, and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths, the files in /etc/kubernetes/manifests are static pod definitions, so container rules apply.
So if you add an option with a file path, make sure you make that file available to the pod with a hostPath volume, as in the sketch below.
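For example, in the kube-apiserver static pod manifest (newer kubeadm versions write kube-apiserver.yaml rather than .json; the /etc/kubernetes/auth path below is just a placeholder) the flag and the matching hostPath mount would look roughly like this:

spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --basic-auth-file=/etc/kubernetes/auth/users.csv   # placeholder path
    # ... existing flags ...
    volumeMounts:
    - name: basic-auth
      mountPath: /etc/kubernetes/auth
      readOnly: true
  volumes:
  - name: basic-auth
    hostPath:
      path: /etc/kubernetes/auth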

Why does Visor show "Empty topology" after open?

I deployed a 6-node Ignite cluster for a Spark job, and it works fine. However, when I try to use Visor for administration, I cannot see the topology.
Here is the config file.
When I enter the Visor console and open the config file, I get "Empty topology".
Any suggestions? How can I see the topology? Thanks.
I found the root cause. I use static IP based discovery, and I only configured the IP address list.
I ran Visor on one of the nodes, so Visor uses port 47501, but the static IP list in the config file did not contain that host:port. As a result, Visor could not join the Ignite cluster and could not communicate with the other nodes.
Use a port range in the address list to fix this:
172.16.59.141:47500..47509
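In the Spring XML configuration, the address entry with a port range would look roughly like this (IP taken from above; the range leaves room for the extra local node that Visor starts):

<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <value>172.16.59.141:47500..47509</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>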
I don't see anything suspicious in your config.
I would like to see logs from all nodes: the 6 server nodes and the Visor node.
To get logs from Visor, please run:
./bin/ignitevisor.sh -v
To get logs from the server nodes, see <ignite_home>/work/logs.
Could you please zip all these logs and provide a link to them (on any file sharing service)?
Also, I don't see any reason why you need to run Visor with sudo rights. Could you please run Visor without sudo?

How to add a new project in Trac via Bitnami hosting

I have launched a Trac demo server in the cloud using Bitnami hosting. I just want to check how to work with multiple projects in Trac. Right now I can see only one project on the demo server, and there is no option to add a new project.
The Bitnami wiki explains how to create a new project on Windows/Mac via the command line, but I can't find any info about project creation in the cloud. Can somebody help me with this?
It's not fully clear what you mean by "projects"; that term is ambiguous. But there are 2 possibilities:
On the level of Apache and its Trac instances:
bitnami-trac-1.0.1\apache2\conf\httpd.conf contains an include of bitnami-trac-1.0.1\apps\trac\conf\trac.conf, and you can add another <Location> for a new Trac instance there. This will allow you to run multiple Tracs within one Apache. See the Trac wiki page about MultipleProject for details. Basically, you first need to create a second Trac instance (with its own database) by calling bitnami-trac-1.0.1\apps\trac\Scripts\trac-admin.exe from the Bitnami command line shell; a minimal sketch follows below the second option.
On the level of Trac and its plugins:
you may want to set up several "user projects" within one running Trac instance. Read the Trac wiki page about MultipleProjects/SingleEnvironment for details. Basically, you'll need to install and set up a plugin called SimpleMultiProjectPlugin.
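For the first option, a minimal sketch of creating the second environment from the Bitnami command line shell; the target path below is a placeholder, and the matching <Location> wiring is described on the MultipleProject wiki page:

rem Create a second Trac environment (target path is a placeholder)
bitnami-trac-1.0.1\apps\trac\Scripts\trac-admin.exe C:\Bitnami\trac\second-project initenv

rem Then add a matching <Location> for it in bitnami-trac-1.0.1\apps\trac\conf\trac.conf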
You need to access your machine and use the command line. Take a look at the documentation for accessing your server: http://wiki.bitnami.com/BitNami_Cloud_Hosting/Servers/Access_your_machine