Why can't I use oadm policy add-cluster-role-to-user ... but I can use oc adm policy add-cluster-role-to-user ...? - openshift-origin

Most examples out there use something like:
$ oadm policy add-cluster-role-to-user cluster-admin admin
however, if I run it I get:
$ oadm policy add-cluster-role-to-user cluster-admin admin
error: unknown command "add-cluster-role-to-user cluster-admin
with oadm version:
$ oadm version
oadm v1.4.0-rc1+b4e0954
kubernetes v1.4.0+776c994
features: Basic-Auth
Server https://192.168.64.18:8443
openshift v1.3.2
kubernetes v1.3.0+52492b4
However, using:
oc adm policy add-cluster-role-to-user cluster-admin admin
does work.
I have not seen (I could have missed it...) anything in the OpenShift Origin documentation, or anywhere else, that mentions when one should be used over the other, or whether the former has been deprecated, etc.
I did find this issue (#1845), but not much clarification.
Can anyone please clarify?

Related

What are the differences between tokens generated by `aws-iam-authenticator` and `aws eks get-token` when authenticate to kubernetes-dashboard?

kubectl is using aws eks get-token and works perfectly.
But when I try to log in to kubernetes-dashboard with the token generated below, I get Unauthorized (401): Invalid credentials provided:
AWS_PROFILE=MYPROFILE aws eks get-token --cluster-name myclustername | jq -r '.status.token'
But if I use the token generated with:
AWS_PROFILE=MYPROFILE aws-iam-authenticator -i myclustername token --token-only
then I can log in to kubernetes-dashboard.
So in which way are those tokens different? I thought they were equivalent.
There should be no difference between the tokens generated by aws-iam-authenticator and aws eks get-token.
Make sure that you spelled the cluster name correctly in both commands, as you can generate tokens for clusters that do not exist.
Double check that both commands authenticate:
kubectl --token=`AWS_PROFILE=MYPROFILE aws-iam-authenticator -i MYCLUSTERNAME token --token-only` get nodes
kubectl --token=`AWS_PROFILE=MYPROFILE aws --region eu-north-1 eks get-token --cluster-name MYCLUSTERNAME | jq -r '.status.token'` get nodes
Sometimes it is very easy to misspell the cluster name, and the tools will happily generate a token for it without producing any visible error or warning.
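One way to rule out a typo is to check which clusters the profile can actually see before generating a token. A rough sketch, reusing the profile, region, and cluster name placeholders from the commands above:
# List the clusters this profile can see in the region
AWS_PROFILE=MYPROFILE aws --region eu-north-1 eks list-clusters
# Confirm the exact spelling resolves before requesting a token
AWS_PROFILE=MYPROFILE aws --region eu-north-1 eks describe-cluster --name MYCLUSTERNAME --query 'cluster.name'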

How can I setup kubeapi server to allow kubectl from outside the cluster

I have a single-master, multi-node Kubernetes cluster going. It works great. However, I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop, for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorisation documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
Configurations of all the Kubernetes clusters you are managing are stored in the $HOME/.kube/config file. If you have that file on the master node, the easy way is to copy it to $HOME/.kube/config on your local machine.
You can choose another location, and then point to it with the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command line parameter instead.
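As a small sketch of how that looks in practice (the path below is just a placeholder for wherever you copied the file):
# Point kubectl at a copied config file (path is an example placeholder)
export KUBECONFIG=$HOME/.kube/mycluster-config
kubectl get nodes
# Or pass the file explicitly for a single command
kubectl --kubeconfig=$HOME/.kube/mycluster-config get nodes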
Cloud providers often give you the possibility to download the config to your local machine from the web interface or with their management CLI.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using the kops utility, you can get the config file with:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to the C:\Users\.kube location on your laptop.
kubectl will pick up the certificates from the config file automatically.
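On a Linux or macOS laptop, a minimal sketch of the copy-and-verify steps could look like this (the master address and source path are assumptions, adjust them to your setup):
# Copy the admin kubeconfig from the master (address is an example)
mkdir -p $HOME/.kube
scp root@MASTER_IP:/root/.kube/config $HOME/.kube/config
# Verify that kubectl now reaches the cluster from the laptop
kubectl cluster-info
kubectl get nodes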

Kerberos auth with Apache/PHP on CentOS7

I want to configure Kerberos auth with Apache, with no success so far.
It looks like the problem is with the keytab file.
When I run:
klist -kte /path/to/website.HTTP.keytab
I get:
KVNO Timestamp Principal
---- ------------------- ------------------------------------------------------
12 01/01/1970 02:00:00 HTTP/website.domain@DOMAIN
Then I run:
kinit -k -t /path/to/website.HTTP.keytab HTTP/website.domain@DOMAIN
kinit: Client 'HTTP/website.domain@DOMAIN' not found in Kerberos database while getting initial credentials
Any idea what goes wrong?
HTTP/website.domain@DOMAIN is an incomplete SPN, which seems the most likely reason kinit can't find the SPN in the database. A full SPN, as an example, would look like this: HTTP/website.domain@DOMAIN.COM. To fix, you will need to re-create the keytab using that fully-qualified syntax. Example:
ktpass -out HTTP.keytab -mapUser AD_Account_Name@DOMAIN.COM +rndPass -mapOp set +DumpSalt -crypto AES128-SHA1 -ptype KRB5_NT_PRINCIPAL -princ HTTP/website.domain@DOMAIN.COM
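Once the keytab has been re-created, a quick sanity check is to list it again and request a ticket with it. The principal below just mirrors the example syntax above, so substitute your real host and realm:
# Confirm the entries now carry the full realm
klist -kte /path/to/HTTP.keytab
# Request a TGT using the keytab; no password prompt should appear
kinit -k -t /path/to/HTTP.keytab HTTP/website.domain@DOMAIN.COM
# Show the ticket that was obtained
klist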

User "system" cannot list all services in the cluster

I'm new to OpenShift. I'm trying to work through some basic install options. First I was able to download and run the Vagrant image. When I did that I was able to log in and see several projects and containers running. Next I tried the binary install, so I downloaded the OpenShift Origin server v1.3.1, untarred it, and ran the following:
sudo openshift start
It seems that OpenShift started, but I did notice a few questionable lines in the output, as follows:
W1103 09:06:47.360850 4647 start_master.go:272] Warning: assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console, master start will continue.
W1103 09:06:47.360906 4647 start_master.go:272] Warning: assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console, master start will continue.
E1103 09:06:47.373823 4647 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.ClusterPolicy: client: etcd cluster is unavailable or misconfigured
E1103 09:06:47.374026 4647 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.ClusterPolicyBinding: client: etcd cluster is unavailable or misconfigured
E1103 09:06:47.374102 4647 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.PolicyBinding: client: etcd cluster is unavailable or misconfigured
E1103 09:06:47.374254 4647 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.Group: client: etcd cluster is unavailable or misconfigured
E1103 09:06:47.374420 4647 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.Policy: client: etcd cluster is unavailable or misconfigured
E1103 09:06:47.376485 4647 reflector.go:203] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: Get https://XXX.XXX.XXX.XXX:8443/api/v1/limitranges?resourceVersion=0: dial tcp XXX.XXX.XXX.XXX:8443: getsockopt: connection refused
Once the server is started I can log in, but the system user doesn't seem to have permission to do very much. For example, the system user can't see any projects or the services in the cluster. Running some of the oc commands seems to indicate that the system user does not have the proper permissions, as follows:
# ./oc login https://localhost:8443
Authentication required for https://localhost:8443 (openshift)
Username: system
Password:
Login successful.
You don't have any projects. You can try to create a new project, by
running
oc new-project <projectname>
# ./oc new-project default
Error from server: project "default" already exists
# ./oc get services --all-namespaces
User "system" cannot list all services in the cluster
It seems I must be missing something very basic about how to start OpenShift from the binary distribution. I can't find anything in the documentation that speaks to this problem.
Not sure what your environment looks like, so the following might not work 100%.
But can you try the following:
oc whoami
oc login -u system:admin
oc whoami
The system:admin account is your root account, and from there you can create additional user accounts.
The best way that I've found to run a development instance of OpenShift is through oc cluster up: https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md. This runs a containerised version of OpenShift in Docker. It might be worth a spin, as it seems your previous install method hit a few errors.
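If you want to try that route, a rough sketch of the workflow looks like this (exact flags vary between Origin releases, so treat it as an outline):
# Start a local all-in-one cluster inside Docker
oc cluster up
# Log in as the cluster administrator of that local cluster
oc login -u system:admin
# Tear the local cluster down again when finished
oc cluster down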
If you want to do this manually (without oc cluster up as mentioned above):
export KUBECONFIG=/full/path/to/openshift.local.config/master/admin.kubeconfig
sudo chmod a+rwX -R /path/to/openshift.local.config/
oadm policy add-cluster-role-to-user cluster-admin demo (demigod mode)
oc whoami
system:admin
oc projects
You have access to the following projects and can switch between them with 'oc project <projectname>':
default
kube-system
openshift
openshift-infra
* test
This isn't a production setup; it's just for messing around.
P.S.: ignore the cluster policy binding errors; the issue is known and doesn't affect you logging in.
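To tie this back to the oadm vs oc adm question at the top: on newer clients the same grant goes through oc adm, and you can then re-run the listing that was forbidden earlier. A rough sketch, reusing the demo user from the example above:
# Equivalent grant through the oc binary instead of oadm
oc adm policy add-cluster-role-to-user cluster-admin demo
# Verify that the previously forbidden listing now works
oc get services --all-namespaces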

What is 'oadm' in CLI?

I'm busy with OpenShift V3, and for some tasks I need to run commands like:
oadm registry --config=admin.kubeconfig \
--credentials=openshift-registry.kubeconfig
oadm router <router_name> --replicas=<number> \
--credentials='/etc/openshift/master/openshift-router.kubeconfig' \
--service-account=router
The question is: I don't know the meaning of 'oadm' or why I have to use it in this case. In OpenShift itself I have to use 'oc', so 'oadm' is probably not a command specific to OpenShift.
oadm is an OpenShift command that is focused on admin-level tasks that usually require elevated cluster authority to run. oc is an OpenShift command that is focused on common user-level activities.
For instance, the oadm registry and oadm router commands both require edit privileges inside the "default" namespace. Most common users should not have that authority, so the commands are considered elevated. You'll also notice commands like oadm policy add-cluster-role-to-* and oadm manage-node, which are other admin-level tasks.
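As a loose illustration of that split (these are just sample commands, not an exhaustive list), day-to-day developer work goes through oc while cluster administration goes through oadm:
# Typical user-level work with oc
oc new-app https://github.com/openshift/ruby-hello-world
oc get pods
oc logs <pod-name>
# Typical admin-level work with oadm, run with elevated cluster authority
oadm policy add-cluster-role-to-user cluster-admin admin
oadm manage-node <node-name> --schedulable=false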