What is 'oadm' in CLI? - openshift-origin

I'm working with OpenShift V3, and for some tasks I need to run commands like:
oadm registry --config=admin.kubeconfig \
--credentials=openshift-registry.kubeconfig
oadm router <router_name> --replicas=<number> \
--credentials='/etc/openshift/master/openshift-router.kubeconfig' \
--service-account=router
The question is: what does 'oadm' mean, and why do I have to use it in this case? Inside OpenShift itself I have to use 'oc', so 'oadm' is probably not a command specific to OpenShift.

oadm is an OpenShift command focused on admin-level tasks that usually require elevated cluster authority to run. oc is an OpenShift command focused on common user-level activities.
For instance, the oadm registry and oadm router commands both require edit privileges inside the "default" namespace. Most common users should not have that authority, so the commands are considered elevated. You'll also notice commands like oadm policy add-cluster-role-to-* and oadm manage-node, which are other admin-level tasks.
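To make the split concrete, here is a rough sketch; the URL, project, and node names are only examples:
oc login https://master.example.com:8443
oc new-project my-project
oc get pods
oadm policy add-cluster-role-to-user cluster-admin admin
oadm manage-node node1.example.com --schedulable=false
The first three commands are ordinary user-level work; the last two change cluster-wide state and therefore live under oadm.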

Related

dokku - giving another user root access

I added another user's public ssh-key to my dokku server, but they can't log in using ssh root@appname.com.
I can see their ssh-key in authorized_keys and also if I run sshcommand list dokku or sshcommand list root.
I have checked in the sudoers config, and it seems that all ssh-keys are given root permissions:
$ cat sudoers
/...
# User privilege specification
root ALL=(ALL:ALL) ALL
I am using the dokku-acl plugin, but haven't found anything in the docs that would help.
The server is an Aliyun ECS (China).
Feel like I am missing something simple. Any advice is very much appreciated!
Root SSH access and dokku SSH access are governed separately. The user's public key should be added as-is to the /root/.ssh/authorized_keys file, whereas the /home/dokku/.ssh/authorized_keys file should only be managed by the subcommands of the ssh-keys plugin (included with Dokku).
You may wish to remove the entries from both files manually, then add them back as described above. For dokku user access - which would grant the ability to push code as well as perform remote commands via ssh dokku@host $command - you can use the command dokku ssh-keys:add to add a specific user:
echo "PUBLIC_KEY_CONTENTS" | dokku ssh-keys:add some-user-name

How to activate authentication in Apache Airflow

Airflow version: 1.9.0
I have installed Apache Airflow and, after configuration, I am able to run sample DAGs with the sequential executor.
I have also created a new sample user, which I can see under Admin > Users.
But I am unable to get a login window/screen when visiting the webserver address at :8080/; it opens the Airflow webserver directly as the admin user.
It would be a great help if anyone could provide some info on how to activate the login screen/page, so that user credentials can be used to log in to the webserver.
Steps followed to enable web user authentication:
https://airflow.apache.org/security.html?highlight=authentication
Check the following in your airflow.cfg file:
[webserver]
authenticate = True
auth_backend = airflow.contrib.auth.backends.password_auth
Also remember to restart the Airflow webserver. If it still doesn't work, run airflow initdb and restart the webserver.
In addition, double-check that airflow.cfg does not contain multiple configurations for authenticate or auth_backend; if either key occurs more than once, that can cause this issue.
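A quick way to spot such duplicates (the path is the default location and may differ on your setup):
grep -n -E '^(authenticate|auth_backend)' ~/airflow/airflow.cfg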
If necessary, install the flask_bcrypt package for Python 2.x/3.x.
For instance,
$ python3.7 -m pip install flask_bcrypt
Make sure you have an admin user created:
airflow create_user -r Admin -u admin -e admin@acme.com -f admin -l user -p *****
Edit airflow.cfg, inside the [webserver] section:
set authenticate = True (by default it is set to False),
add auth_backend = airflow.contrib.auth.backends.password_auth,
set rbac = True for role-based access control (RBAC).
Then run airflow initdb and restart the Airflow webserver; a consolidated sketch of these steps follows.
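Putting it together, a consolidated sketch of the above; the username, email, and password are placeholders:
pip install flask_bcrypt
# in airflow.cfg, under [webserver]:
#   authenticate = True
#   auth_backend = airflow.contrib.auth.backends.password_auth
#   rbac = True
airflow initdb
airflow create_user -r Admin -u admin -e admin@example.com -f admin -l user -p changeme
airflow webserver -p 8080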
Just add rbac = True to airflow.cfg and you are good to go; all you need to do now is restart your Airflow webserver.
And in case you want to add a new user, you can use this command:
airflow create_user -r Admin -u admin -f Ashish -l malgawa -p test123 -e ashishmalgawa@gmail.com
“-r” is the role we want for the user
“-u” is the username
“-f” is the first name
“-l” is the last name
“-e” is the email id
“-p” is the password
For more details, you can follow this article
https://www.cloudwalker.io/2020/03/01/airflow-rbac-role-based-access-control/#:~:text=RBAC%20is%20the%20quickest%20way,access%20to%20DAGs%20as%20well

How can I set up the Kubernetes API server to allow kubectl from outside the cluster

I have a single-master, multi-node Kubernetes cluster going. It works great. However, I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop, for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorisation documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
Configurations of all the Kubernetes clusters you manage are stored in the $HOME/.kube/config file. If you have that file on the master node, the easiest way is to copy it to $HOME/.kube/config on your local machine.
You can choose another location and then specify it with the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command-line parameter instead.
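For example, a minimal sketch of copying the file over and pointing kubectl at it; the host name and paths are only examples:
scp root@master:/root/.kube/config ~/.kube/config
export KUBECONFIG=~/clusters/prod.conf
kubectl get nodes
The scp line assumes the admin kubeconfig lives in /root/.kube/config on the master; the export line is only needed if you keep the file somewhere other than $HOME/.kube/config.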
Cloud providers often give you the possibility to download the config to a local machine from the web interface or via a cloud management command.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using Kops utility, you could get the config file by:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to the corresponding location on your laptop (e.g. C:\Users\<username>\.kube on Windows).
kubectl will pick up the certificate from the config file automatically.
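To verify from the laptop that the copied config is picked up:
kubectl config view --minify
kubectl get nodes
The first command prints the active cluster and user entries; the second should list your nodes without any extra flags.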

Can't log in as root user in native templates of Jelastic environments

When I create a new environment with some nodes (e.g. with Nginx), I can't access the node as the root user.
I am logged in as a regular user, not as root.
Using username "251X-XXX".
Authenticating with public key "rsa-key-XXXXXXXX"
Last login: Thu Sep 28 09:11:56 2017
nginx@node251X-delete ~ $ sudo date
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
[sudo] password for nginx:
Sorry, try again.
Brief:
I didn't receive a root password to my email (I'm the owner of this environment).
I can't change this node to a Docker image.
There's no Reset Password option on the dashboard.
sudo doesn't work.
It also happens with other non-Docker nodes (Tomcat, MySQL, ...).
Is there any alternative or configuration to access this node as the root user?
Thanks
Jelastic doesn't provide root access to separate containers. At the same time, while accessing containers via SSH, a user receives all required permissions and can additionally manage the main services with sudo commands of the following kind (and others):
sudo /etc/init.d/jetty start
sudo /etc/init.d/mysql stop
sudo /etc/init.d/tomcat restart
sudo /etc/init.d/memcached status
sudo /etc/init.d/mongod reload
sudo /etc/init.d/nginx upgrade
sudo /etc/init.d/httpd help
For example, you can restart nginx with the following command:
sudo /etc/init.d/nginx restart
No password will be requested.
Note: if you deploy any application, change configurations, or add any extra functionality via SSH to your Jelastic environment, this will not be displayed on the Jelastic dashboard.
Using our documentation you’ll find out how to:
use SFTP and FISH protocols
manage containers via SSH with Capistrano
Root user is only provided for self-managed nodes (custom Docker / Elastic VPS).
You can execute specific whitelisted commands with sudo (e.g. sudo service nginx restart). Besides that you shouldn't need root access.
If you feel otherwise then contact your hosting provider to discuss your needs and they can find a solution for you.

Why can't I use oadm policy add-cluster-role-to-user ... but I can use oc adm policy add-cluster-role-to-user ...?

Most examples out there use something like:
$ oadm policy add-cluster-role-to-user cluster-admin admin
however, if I run it I get:
$ oadm policy add-cluster-role-to-user cluster-admin admin
error: unknown command "add-cluster-role-to-user cluster-admin
with oadm version:
$ oadm version
oadm v1.4.0-rc1+b4e0954
kubernetes v1.4.0+776c994
features: Basic-Auth
Server https://192.168.64.18:8443
openshift v1.3.2
kubernetes v1.3.0+52492b4
However, using:
oc adm policy add-cluster-role-to-user cluster-admin admin
does work.
I have not seen (I could have missed it) anything in the OpenShift Origin documentation, or anywhere else, that mentions when one should be used over the other, or whether the former has been deprecated, etc.
I did find this issue (#1845), but not much clarification.
Can anyone please clarify?