User "system" cannot list all services in the cluster - authentication

I'm new to OpenShift and trying to work through some basic install options. First I downloaded and ran the Vagrant image; with that I was able to log in and see several projects and containers running. Next I tried the binary install: I downloaded the OpenShift Origin server v1.3.1, untarred it, and ran the following:
sudo openshift start
It seems that OpenShift started, but I noticed a few questionable lines in the output:
W1103 09:06:47.360850 4647 start_master.go:272] Warning: assetConfig.loggingPublicURL: Invalid value: "": required to view aggregated container logs in the console, master start will continue.
W1103 09:06:47.360906 4647 start_master.go:272] Warning: assetConfig.metricsPublicURL: Invalid value: "": required to view cluster metrics in the console, master start will continue.
E1103 09:06:47.373823 4647 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.ClusterPolicy: client: etcd cluster is unavailable or misconfigured
E1103 09:06:47.374026 4647 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.ClusterPolicyBinding: client: etcd cluster is unavailable or misconfigured
E1103 09:06:47.374102 4647 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.PolicyBinding: client: etcd cluster is unavailable or misconfigured
E1103 09:06:47.374254 4647 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.Group: client: etcd cluster is unavailable or misconfigured
E1103 09:06:47.374420 4647 cacher.go:220] unexpected ListAndWatch error: pkg/storage/cacher.go:163: Failed to list *api.Policy: client: etcd cluster is unavailable or misconfigured
E1103 09:06:47.376485 4647 reflector.go:203] github.com/openshift/origin/vendor/k8s.io/kubernetes/plugin/pkg/admission/limitranger/admission.go:154: Failed to list *api.LimitRange: Get https://XXX.XXX.XXX.XXX:8443/api/v1/limitranges?resourceVersion=0: dial tcp XXX.XXX.XXX.XXX:8443: getsockopt: connection refused
Once the server is started I can log in, but the system user doesn't seem to have permission to do very much. For example, it can't see any project or the services in the cluster. Running some oc commands seems to confirm that the system user lacks the proper permissions:
# ./oc login https://localhost:8443
Authentication required for https://localhost:8443 (openshift)
Username: system
Password:
Login successful.
You don't have any projects. You can try to create a new project, by
running
oc new-project <projectname>
# ./oc new-project default
Error from server: project "default" already exists
# ./oc get services --all-namespaces
User "system" cannot list all services in the cluster
It seems I must be missing something very basic about how to start OpenShift from the binary distribution. I can't find anything in the documentation that speaks to this problem.

Not sure what your environment looks like, so the following might not work 100%.
But can you try the following:
oc whoami
oc login -u system:admin
oc whoami
The system:admin account is your root account, and from there you can create additional user accounts.
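A minimal sketch of that, assuming the default development identity provider (which accepts any username/password on first login); the user "demo", the project "myproject", and the role are just example values:
oc login -u demo -p demo
oc login -u system:admin
oadm policy add-role-to-user admin demo -n myproject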
The best way that I've found to run a development instance of OpenShift is through oc cluster up: https://github.com/openshift/origin/blob/master/docs/cluster_up_down.md. This runs a containerised version of OpenShift in Docker. It might be worth a spin, as it seems your previous install method hit a few errors.
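A minimal sketch, assuming a working Docker daemon on the host (both commands come from the cluster_up_down doc linked above):
oc cluster up
oc cluster down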

If you want to do this manually (without oc cluster up as mentioned above):
export KUBECONFIG=/full/path/to/openshift.local.config/master/admin.kubeconfig
sudo chmod a+rwX -R /path/to/openshift.local.config/
oadm policy add-cluster-role-to-user cluster-admin demo   # "demigod" mode
oc whoami
system:admin
oc projects
You have access to the following projects and can switch between them with 'oc project <projectname>':
default
kube-system
openshift
openshift-infra
* test
This isn't a production setup; it's just for messing around.
P.S.: ignore the ClusterPolicyBinding errors; the issue is known and doesn't affect logging in.

Related

Using PROXY: command-line: line 0: Bad configuration option: ProxyUseFdpass?

For security reasons, I needed to set a GCP Compute Engine instance to have no external IP (external IP = None). In that case, access defaults to Identity-Aware Proxy (IAP). IAP to the same targets does succeed from other machines, but not from some in my data center.
Even after fully configuring gcloud, logging in/authenticating, and:
gcloud config set project $PROJECTNAME
gcloud config set compute/zone us-central1-c
then running: gcloud compute ssh $INSTANCENAME --tunnel-through-iap
Returns:
command-line: line 0: Bad configuration option: ProxyUseFdpass
ERROR: (gcloud.compute.ssh) [/usr/bin/ssh] exited with return code [255].
It's unclear whether this points to an ssh_config issue or something else, but this is not my area, so I'm a bit lost and not seeing anything else related to this error. Any thoughts? The desired behavior is to not get the ProxyUseFdpass error, and for ssh to connect successfully.
I also ran gcloud compute ssh $INSTANCENAME --tunnel-through-iap --dry-run, and what gets returned matches the results from machines that connect successfully.
Also, check whether "Private Google access" is turned on for the subnet. This allows Google services to reach your VM. I had the same problem, and turning on "Private Google access" solved the issue for me.
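If you prefer the gcloud CLI over the console, enabling it should look roughly like this (the subnet name and region here are just examples; substitute your own):
gcloud compute networks subnets update my-subnet --region us-central1 --enable-private-ip-google-access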

AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000; Not able to connect to local node from aql

I have installed Aerospike on my Mac by following these installation steps.
All the validations are working fine, and I am able to connect to the cluster from Chrome (screenshot omitted).
I have also installed the AQL tools following the instructions here.
But I'm unable to connect to the local node from aql.
$ aql
2017-11-21 16:06:09 WARN Failed to connect to seed 127.0.0.1 3000.
AEROSPIKE_ERR_CONNECTION Bad file descriptor, 127.0.0.1:3000
Error -1: Failed to connect
$ asadm
Aerospike Interactive Shell, version 0.1.11
ERROR: Not able to connect any cluster.
Also, I have noticed the Java client is giving an error:
AerospikeClient client = new AerospikeClient("localhost", 3000);
When I changed localhost to the actual IP returned by vagrant ssh -c "ip addr" | grep 'global eth1', it works fine.
How do I connect with aql using custom parameters? I want to pass the IP address and port as parameters to aql. Any suggestions?
$ aql --help
https://www.aerospike.com/docs/tools/aql/index.html discusses all the various command-line options.
$ aql -h a.b.c.d -p 1234
There is another possibility: you may be using your own port instead of the default 3000, so when you try to connect to Aerospike you can run a command like: aql -p 4000
Hope this helps.
It seems the port was not getting freed even after exiting the Vagrant console.
I tried closing all the terminal windows and starting again, but no luck.
Finally, restarting the system resolved the issue.
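Before resorting to a restart, it may be worth checking which process is still holding the port; assuming the default Aerospike port 3000, something like this should show it:
$ sudo lsof -i :3000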

Docker login error with Get Started tutorial

I'm trying to follow the beginner tutorial on Docker's website and I'm stuck on an error at login.
The OS is Ubuntu 14.04. I'm not using VirtualBox, I'm not behind any proxy, and I want to push to the regular Docker repository (not a private one).
All the threads I've found mention proxies and private repositories, but that isn't my case; I'm just trying to do the simple beginner tutorial.
Here is my attempt:
$ sudo docker login
[sudo] password for myuname:
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: myDHuname
Password:
Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
My docker info:
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 5
Server Version: 1.11.0
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 28
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host
Kernel Version: 3.19.0-58-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.686 GiB
Name: myuname-ThinkPad-T420
ID: 6RW3:X3FC:T62N:CWKI:JQW5:YIPY:RAHO:ZHF4:DFZ6:ZL7X:JPOD:V7EC
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Epilogue
Now docker login is passing. I haven't touched anything since yesterday, when it was broken...
I can't reproduce the behavior anymore.
I encountered this issue when I first used Docker. I had a Shadowsocks proxy on, configured in PAC mode. When I tried to run docker run hello-world, I got this timeout error. When I set the proxy mode to global, the error was still there.
But when I disabled the proxy, Docker ran well and pulled the remote image successfully.
Docker for Windows
Note: Some users reported problems connecting to Docker Hub on the Docker for Windows stable version. This would manifest as an error when trying to run docker commands that pull images from Docker Hub that are not already downloaded, such as a first-time run of docker run hello-world. If you encounter this, reset the DNS server to use the Google DNS fixed address: 8.8.8.8. For more information, see Networking issues in Troubleshooting.
The error Client.Timeout exceeded while awaiting headers indicates one of the following:
The GET request to the registry https://registry-1.docker.io/v2/ timed out
The library responsible (most likely libcurl) timed out before a response was heard
The connection never formed (a proxy/firewall gobbled it up)
If you see the result below, you can rule out a timeout and network connectivity problems:
$ curl https://registry-1.docker.io/v2/
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
If you get the above response, the next step would be to check whether your user environment has some proxy configuration:
env | grep -i proxy
Note: Docker runs as root, so an http_proxy in your user env may not be what the daemon sees. Most likely I am wrong. Anyhow, see what happens with the curl GET request.
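On Ubuntu 14.04 the daemon's environment (including any proxy settings) is typically set in /etc/default/docker, so a quick check might be (assuming that file exists on your system):
$ grep -i proxy /etc/default/docker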
Change the proxy settings in Firefox; maybe you are in an access-restricted mode. Go to Firefox Preferences -> Advanced -> Network -> Connection (Settings) and add the server IP to "No proxy for"; that can resolve the issue.

Ice namespace error

I'm trying to log in to stage1 Bluemix (for IBMers) using the command:
ice --verbose login --user username --psswd password --registry 'registry-ice.bluemix_staging_server'
Once invoked I'm prompted:
Namespace(api_key=None, api_url=None, cf=False, cloud=False, host=None, local=False, org=None, psswd='password', reg_host='registry-ice.bluemix_staging_server', skip_docker=False, space=None, subparser_name='login', user='username', verbose=True)
Executing: cf login -u username -p password -a https://api.bluemix_staging_server
API endpoint: https://api.bluemix_staging_server
Authenticating...
OK
Targeted org 'user org'
Select a space (or press enter to skip):
1. dev
2. docker
Once I choose dev or docker I get the following error:
------------------------
Error response from daemon: Login: You must set a namespace before you login to the registry. See 'ice help namespace' (Code: 404; Headers: map[Server:[nginx] Date:[Tue, 10 Nov 2015 10:54:06 GMT] Content-Type:[text/plain] Content-Length:[84] Connection:[keep-alive]])
docker is not available on your system or is not properly configured
Could not authenticate with cloud registry at registry-ice.bluemix_staging_server
You can still use IBM Containers but will not be able to run local docker containers, push, or pull images
------------------------
I defined a space named 'docker' using the console before logging in.
Any idea what I'm missing here?
Thanks in advance!
You need to set the namespace before doing the login (the namespace is your area within the images registry). This is done just once.
Check the docs here: https://www.ng.bluemix.net/docs/containers/container_cli_ov.html#container_cli_login
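If you haven't set one yet, the ice CLI has a namespace subcommand for this; something like the following should work (the namespace name is just an example):
ice namespace set mynamespace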
If it doesn't ask for a namespace, it means you already have one. Then try: ice login -a api.bluemix_staging_server -H containers-api.bluemix_staging_server/v2/containers -R registry.bluemix_staging_server
Note: replace bluemix_staging_server with the correct hostname you are connecting to.

rabbitmqadmin - Could not connect: [Errno -2] Name or service not known

I have RabbitMQ installed on a CentOS 5.x server which I use for message passing between my programs. I've installed rabbitmqadmin following the directions on https://www.rabbitmq.com/management-cli.html and have used it on my servers in the past.
From what I can tell, it looks like this particular server is misconfigured. My web searches have failed me in trying to find more information on how to troubleshoot this issue.
The error:
[root@server ~]# python26 /usr/local/bin/rabbitmqadmin list nodes
*** Could not connect: [Errno -2] Name or service not known
[root@server ~]#
I have tried several different rabbitmqadmin commands and they give the same result. If I run the command without the extra params it displays the normal help dialog. I have this setup and working on several other servers.
Any idea what the root issue is? If not, is there any way to get more details, like a verbose mode?
Update:
I just tried to check the version of RabbitMQ and it's yielding an error too:
[root@server ~]# rabbitmqctl status
Status of node rabbit@server ...
Error: unable to connect to node rabbit@server: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@server]
rabbit@server:
* connected to epmd (port 4369) on server
* epmd reports node 'rabbit' running on port 25672
* TCP connection succeeded but Erlang distribution failed
* suggestion: hostname mismatch?
* suggestion: is the cookie set correctly?
current node details:
- node name: rabbitmqctl25451@server
- home dir: /var/lib/rabbitmq
- cookie hash: WXaeZT7XXm13naagfRX5cg==
[root@server ~]#
I'm going to see if I can find something from this... I find this weird because the server is passing messages fine and can be monitored through the web console.
Erlang version:
[root@server rabbitmq]# erl -eval 'erlang:display(erlang:system_info(otp_release)), halt().' -noshell
"R14B04"
[root@server rabbitmq]#
Rabbitmq Version:
[root@server rabbitmq]# python26 /usr/local/bin/rabbitmqadmin --version
rabbitmqadmin 3.3.5
[root@server rabbitmq]#
After much digging and frustration, I found my problem... I'm posting the solution in case anyone else has a similar experience.
Previously, I found that if you set up RabbitMQ on a Linux server and then change the hostname, it can break some of the Rabbit configuration.
The awesome part about this problem is that someone changed the name of the server from all capital letters to lowercase...
I've solved this in one of two ways:
Solution 1:
Revert the hostname back to the previous name, so that the RabbitMQ references with the appended server name work again.
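On CentOS 5.x that would be something like the following (OLDHOSTNAME is a placeholder; the second step is what makes the change survive a reboot, by editing the HOSTNAME= line):
# hostname OLDHOSTNAME
# vi /etc/sysconfig/network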
Solution 2:
If you want to keep the server name change, you can create a rabbitmq-env.conf file in /etc/rabbitmq like:
NODENAME=rabbit@OLDHOSTNAME
If you aren't sure what your previous name was, you can find it by doing an ls in your /var/lib/rabbitmq/mnesia/ folder. You'll see a folder that matches the node name you need to specify.
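For example (the node-name directory mentioned below is illustrative):
# ls /var/lib/rabbitmq/mnesia/
A directory such as rabbit@OLDHOSTNAME tells you the NODENAME to use.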
Reference: https://www.rabbitmq.com/man/rabbitmq-env.conf.5.man.html
UPDATE:
Host names are CaSE SeNSiTIve... someone changed a hostname on me and the only difference was the case, so it took a while to notice...
Yesterday I lost a few hours on this same problem, and it was on a fresh install; the problem was that the Erlang cookie for my user and the root user differed from the one for the rabbitmq user.
Find out the HOME for the user rabbitmq:
# cat /etc/passwd | grep rabbitmq
Check whether the cookies differ from each other:
# vimdiff /var/lib/rabbitmq/.erlang.cookie ~/.erlang.cookie
If they differ, copy the cookie from rabbitmq to the user that you want to have access to the server:
# cp /var/lib/rabbitmq/.erlang.cookie ~/.erlang.cookie
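One caveat worth checking afterwards: Erlang refuses to use a cookie file with permissive modes, so make sure the copied file is readable only by its owner:
# chmod 400 ~/.erlang.cookie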
References:
rabbitmqctl status says "TCP connection succeeded but Erlang distribution failed"
How Nodes (and CLI tools) Authenticate to Each Other: the Erlang Cookie