Minishift: Could not resolve: *.192.168.64.2.nip.io - openshift-origin

I have installed minishift on OSX with brew:
brew cask install minishift-beta
...
$ minishift version
Minishift version: 1.0.0
I have successfully started Minishift, created the nodejs-ex example application, and exposed it:
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
nodejs-ex nodejs-ex-myproject.192.168.64.2.nip.io nodejs-ex 8080-tcp None
However, I cannot reach nodejs-ex-myproject.192.168.64.2.nip.io:
$ curl nodejs-ex-myproject.192.168.64.2.nip.io
curl: (6) Could not resolve host: nodejs-ex-myproject.192.168.64.2.nip.io
$ dig +short nodejs-ex-myproject.192.168.64.2.nip.io
$
Everything works via the Minishift web console and the oc command, but I cannot reach the application domain.

Thank you @enj. The explanation at http://nip.io is clear about how it works.
I have seen that queries to 8.8.8.8 and to my ISP's DNS resolve to my private IP, but it is my router (my primary DNS) that was not resolving nip.io names.
My router runs DD-WRT and has this option enabled:
Rebind protection: Discard upstream RFC1918 responses
I then added nip.io to the whitelist:
Domain whitelist: nip.io
and now the queries resolve:
$ dig +short test.10.0.0.1.nip.io
10.0.0.1

Is something on your machine or network blocking DNS queries to nip.io?
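One way to check is to compare your default resolver with a public one (a sketch; 192.168.1.1 is just a stand-in for your router's address):
$ dig +short test.10.0.0.1.nip.io @8.8.8.8
$ dig +short test.10.0.0.1.nip.io @192.168.1.1
If the first query returns 10.0.0.1 and the second returns nothing, your local resolver is dropping the RFC1918 answer.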

When playing with Minishift at home, where I am connected to the internet via Deutsche Telekom VDSL and a Speedport router, I cannot resolve these xip.io or nip.io addresses.
My workaround is to put 8.8.8.8 into /etc/resolv.conf.
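For reference, a minimal sketch of that workaround (assuming your network manager does not rewrite /etc/resolv.conf behind your back):
$ echo 'nameserver 8.8.8.8' | sudo tee -a /etc/resolv.conf
$ dig +short test.10.0.0.1.nip.io    # should now return 10.0.0.1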

I had the same issue on Windows 10. My workaround was to add an entry to the C:\Windows\System32\drivers\etc\hosts file. Here is an example:
192.160.90.101 nodejs-ex-nodejs-echo.192.160.90.101.nip.io # needed for minishift to work

Related

`oc cluster up` fails during initial startup

I am trying out OKD, but it fails for me during the `oc cluster up` port check step. The debug output is, to put it politely, not very verbose. Do you have an idea what to look for?
$ oc cluster up
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
error: a port needed by OpenShift is not available
But the required ports 53 and 8443 are not taken:
sudo netstat -tulpn | grep '\(:8443\|:53\)'
At least, netstat returns nothing.
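A roughly equivalent check with ss (in case netstat and ss disagree on this system) would be:
$ sudo ss -tulpn | grep '\(:8443\|:53\)'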
Versions:
$ oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
and
CentOS Linux release 7.6.1810 (Core)
I have not been able to find out how to turn on debugging so that I can see what it really checks for.
Does the user you are running the command as have enough privileges to open privileged ports (ports <1024) on your host machine?
Try running `oc cluster up` as root or with sudo.
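For example (a sketch):
$ sudo oc cluster up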
Yes, I am starting the whole OKD cluster as the root user.

Unable to resolve github.com from OpenShift origin pods

I have a basic OpenShift origin cluster started with oc cluster up
Now, in the default project ('My Project') I wanted to build a source from a Git repo, and it fails with the error:
Could not resolve host: github.com; Name or service not known
I even tried setting up Gogs and migrating the publicly hosted source code from github.com to the Gogs pod, but it throws the same error.
Kindly advise whether any additional network settings are required during OpenShift cluster setup in order to access github.com or any other public domain. I can sense it is a network issue, but I am not sure what exactly needs to be configured on the cluster.
I know this is an old ticket, but I came across it while looking for a solution to my problem, which was exactly the same as described here. For me, the problem lay in the combination of Ubuntu 18.04 and Docker. I followed solution B from this answer.
Hopefully this helps someone, as I lost a lot of time treating this as an OpenShift/OKD problem when the actual cause was the combination of Docker and Ubuntu (at least for me).
You can edit the node ConfigMap on the master server in order to provide the correct nameserver information to the pods.
# oc get cm -n openshift-node
For all compute nodes, edit the config map with the command below (this only needs to be done on the master server):
# oc edit cm node-config-compute -n openshift-node
......
dnsBindAddress: 127.0.0.1:53
dnsDomain: cluster.local
dnsIP: 10.0.80.11
dnsNameservers: null
dnsRecursiveResolvConf: /etc/origin/node/resolv.conf
.......
Edit the dnsIP field to point at your DNS IP, then restart the node service:
# systemctl restart atomic-openshift-node.service
This DNS IP will be prepended to the /etc/resolv.conf file of every pod.
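To confirm the change reached the pods, you can check the resolv.conf of any running pod (a sketch; the pod name is a placeholder):
# oc rsh <some-pod> cat /etc/resolv.conf
The dnsIP you configured should now appear as the first nameserver entry.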
Shut down the cluster with: oc cluster down
Edit the file openshift.local.clusterup/node/node-config.yml and change dnsIP: "" to dnsIP: "8.8.8.8".
Edit the file openshift.local.clusterup/kubedns/resolv.conf
and add
nameserver 8.8.8.8
nameserver 8.8.4.4
Also make sure you have the DNS options set in the Docker config file.
Edit /etc/docker/daemon.json and add:
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
Then start your cluster with
oc cluster up
and now it should work fine.

How can I set virtual host in Codeship?

I'm using Codeship to automate testing of a multi-tenancy application.
My app needs subdomain configuration to run acceptance tests using Selenium WebDriver.
So I need to configure virtual domains for my app.
For example, I need the following virtual domains:
127.0.0.1 test.my-app.test
127.0.0.1 my-app.test
If I do not use a subdomain when making requests to my app, it does not work as required.
I tried the following commands in the Setup Commands section before the Test Pipelines:
sudo echo '127.0.0.1 test.my-app.test' >> /etc/hosts
sudo echo '127.0.0.1 my-app.test' >> /etc/hosts
But it doesn't work, because I have no permission. The error message was:
bash: /etc/hosts: Permission denied
Would you mind telling me how to make it work?
Thank you in advance!
Update:
I received reply from Codeship team:
this is not possible in our classic infrastructure due to technical limitations. You could move to our Docker Platform, which allows more customization of your build environment.
So we need to use their Docker platform to solve this issue.
The redirection in your command is not executed with root privileges; that's why you got the 'Permission denied' error.
Your command means 'run the echo as root, then redirect the output to /etc/hosts as the regular user'.
Try this:
sudo sh -c 'echo "Your text" >> /path/to/file'
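Applied to your case, that would be something like:
sudo sh -c 'echo "127.0.0.1 test.my-app.test" >> /etc/hosts'
sudo sh -c 'echo "127.0.0.1 my-app.test" >> /etc/hosts'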
We don't allow access via sudo on the build VMs because of security considerations.
However, you can use a service like http://xip.io/ or lvh.me to access your application via DNS names.
$ nslookup codeship.lvh.me
Server: 8.8.8.8
Address: 8.8.8.8#53
Non-authoritative answer:
Name: codeship.lvh.me
Address: 127.0.0.1
lvh.me resolves any subdomain to 127.0.0.1; xip.io offers more functionality, which is explained in more detail on its homepage.
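For example, you can point your tests at names like these without touching /etc/hosts (a sketch based on your app's domains):
$ dig +short test.my-app.lvh.me          # resolves to 127.0.0.1
$ dig +short my-app.127.0.0.1.xip.io     # xip.io embeds the target IP in the name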

openshift create-app asks for passwd

Hi all. When I try 'rhc create-app demo python-2.7', I hit an issue where it is not able to check out the Git repo: the system asks for the password of the cartridge or something, even though I have uploaded the default key from the OpenShift console.
Here is what I have done:
install openshift from puppet
oo-diagnostics check pass
create app
Then I removed the default files in /root/.ssh, removed the key from the OpenShift console, recreated the SSH key, and ran rhc setup again to upload it. Then I created the app again, but it failed again.
In the Broker virtual machine, while running:
oo-register-dns -h node -d domainX.example.com XXX.XXX.XXX.XXX -k /var/named/domainX.example.com.key
the IP XXX.XXX.XXX.XXX should be your Node virtual machine's IP address (I think you have most probably used the Broker's IP address). Change it accordingly and run this command again; it will work.
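A quick way to confirm the record afterwards (a sketch; <broker-ip> is a placeholder for your Broker's address):
dig +short node.domainX.example.com @<broker-ip>
It should print the Node VM's IP address, not the Broker's.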
Can you try a different (main) domain name instead of example.com? I think it might be the issue, as per the Wikipedia explanation:
Example.com, example.net, example.org, and example.edu are second-level domain names reserved for documentation purposes and examples of the use of domain names.
Even if you've masked it with your hosts file or local DNS, it might still be confusing OpenShift's DNS.

Why does running "apachectl -k start" not work, but "sudo apachectl -k start" does?

I'm working on OS X with the default installation of Apache. For some reason, when I run the "apachectl" command without "sudo" I get "no listening sockets available / unable to open logs". I'm guessing this is a permissions thing, so can someone help me out? I'm using Apache 2.2.
Also, a side question: where is the Apache binary, i.e. the equivalent of the "exe" that gets executed? I'm trying to integrate my server with Aptana Studio, and it requires the path to the Apache install. I know that in Windows this would be "C:\path\to\httpd.exe", but I don't know how this works on Linux or OS X.
Is your server listening on port 80? (Usually) only root is allowed to open ports below 1024. Hence the need for sudo.
Lots of people wonder how to get around this. One possible solution is to set up port forwarding on your router (I'm assuming here that you are behind a router). Incoming connections on port 80 can then be forwarded to, e.g., port 8080, so you only need to use port 8080 when connecting locally. (There may be more elegant solutions; somebody else will post them.)
I think generally (on both OS X and Linux - I'm not sure which one you're referring to) the httpd binary is located at: /usr/sbin/httpd
If you need to be able to restart Apache and you can't do so as root (for whatever reason), then you may have to settle for a non-"well-known" port, i.e. one above 1024.
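If you go that route, here is a sketch of running the bundled Apache on an unprivileged port as a normal user (paths assume OS X's default install; you also need log and PID file locations your user can write to):
$ cp /etc/apache2/httpd.conf ~/httpd-dev.conf
$ mkdir -p ~/apache-logs
$ # in ~/httpd-dev.conf: change "Listen 80" to "Listen 8080" and point
$ # ErrorLog, CustomLog and PidFile at ~/apache-logs/
$ apachectl -f ~/httpd-dev.conf -k start    # no sudo needed on port 8080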
Try this (with PHP):
$user = 'youruser';  // adjust to the account that owns the password file
$output = shell_exec("sudo -u root -S /etc/init.d/apache2 restart < /home/$user/passfile");
The sudo password should be stored in passfile; the -S flag makes sudo read it from standard input.