** server can't find hostname.com nxdomain - apache

I am trying to set up an Apache web server on my VM and I'm running into some issues. When I do an nslookup on the hostname of the machine, this is what I get:
nslookup rhel64.xxxxx.xxxxx.com
Server: xxx.xxx.32.1
Address: xxx.xxx.32.1#53
** server can't find rhel64.xxxxx.xxxxx.com: NXDOMAIN
I'm sure this is a common problem but I'm not sure how to fix it. It seems that dnsmasq can't resolve the hostname. Adding the hostname to /etc/hosts doesn't fix it.
This is running on an RHEL 6.4 machine.
Thanks in advance.

You should use a DNS server that is able to resolve the name; the one you are using now, at xxx.xxx.32.1, isn't.
"Adding the hostname to /etc/hosts doesn't fix it."
This is because nslookup always does a DNS query; it never reads the hosts file. Try getent instead, which resolves names the same way the rest of the system does (via /etc/nsswitch.conf, so /etc/hosts is consulted). For example, I get:
$ getent hosts rhel64.xxxxx.xxxxx.com
176.74.176.178 rhel64.xxxxx.xxxxx.com
(By the way, you should almost always use example.com in examples, so you don't inadvertently link to adult-only websites.)
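If all you need is for the VM itself to resolve its own name (for example, so Apache stops complaining about ServerName), a hosts entry plus a getent check is usually enough. A minimal sketch, where 192.0.2.10 is a hypothetical placeholder for your VM's real address:
# /etc/hosts — 192.0.2.10 is a placeholder; use the VM's actual IP
192.0.2.10      rhel64.xxxxx.xxxxx.com rhel64
$ getent hosts rhel64.xxxxx.xxxxx.com
192.0.2.10      rhel64.xxxxx.xxxxx.com rhel64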

Related

InfluxDB over SSL connection

I'm a little confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. The machine also runs an Apache2 server, but for now I don't intend to use it as a web server to serve pages to clients; I want to use the machine as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt; indeed, the welcome page https://datavm.bo.cnr.it works properly over an encrypted connection.
Then I followed all the instructions in the docs to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, set the file permissions (not sure about the meaning of this step, though), and edited influxdb.conf with https-enabled = true and the paths for https-certificate and https-private-key (fullchain.pem for both, is that right?). Then systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
Any help in understanding what I am doing wrong would be much appreciated! Thank you.
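For reference, the relevant part of influxdb.conf would look roughly like this — a sketch only, assuming the Let's Encrypt files were copied into /etc/ssl (with Let's Encrypt, the private key normally lives in privkey.pem, not fullchain.pem):
[http]
  # enable TLS on the HTTP API (port 8086)
  https-enabled = true
  # certificate (chain) presented to clients
  https-certificate = "/etc/ssl/fullchain.pem"
  # private key; if left blank, InfluxDB falls back to the https-certificate file
  https-private-key = "/etc/ssl/privkey.pem"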
I figured out at least part of the problem. It was related to permissions on the *.pem files. This looks weird, because if I type the following, as the documentation says, it does not connect:
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I run the second line with 644, everything works perfectly. But this way I'm giving anyone permission to read the private key! I'm not able to figure this part out.
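For what it's worth, a likely reason 600 fails while 644 works is that the key file is owned by root while influxd runs as an unprivileged user. A sketch like the following (assuming the service runs as the influxdb user — check your systemd unit) keeps the key private while still readable by the daemon:
sudo chown influxdb:influxdb /etc/ssl/<private-key-file>
sudo chmod 600 /etc/ssl/<private-key-file>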
UPDATE
If I put symlinks inside /etc/ssl/ that point to the .pem files living inside /etc/letsencrypt/live/hostname, the connection is refused. The SSL connection only works if I put copies of the files there.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
Can anyone help?
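One possible way to reconcile copies with automatic renewal (not from this thread, just a sketch assuming certbot and the paths above) is a deploy hook that re-copies the files after every successful renewal, e.g. an executable script placed in /etc/letsencrypt/renewal-hooks/deploy/:
#!/bin/sh
# hypothetical deploy hook: copy the renewed certs for InfluxDB and restart it
cp /etc/letsencrypt/live/datavm.bo.cnr.it/fullchain.pem /etc/ssl/
cp /etc/letsencrypt/live/datavm.bo.cnr.it/privkey.pem /etc/ssl/
chown influxdb:influxdb /etc/ssl/privkey.pem
chmod 600 /etc/ssl/privkey.pem
systemctl restart influxdb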

Apache Drill 'Failure in Starting Embedded drillbit'

I have Drill on a VM and was previously able to connect to it successfully. I restarted the VM after a power outage, and now when I try to start Drill in embedded mode I get the following message:
Error: Failure in starting embedded Drillbit: java.net.UnknownHostException: xxxx.localdomain: xxxx.localdomain: Name or service not known (state=,code=0)
Is there a dependency that I need to restart?
Please verify that /etc/hosts has an entry for that hostname with the machine's IP address. Hope this resolves your problem.
Add an entry to /etc/hosts like "127.0.0.1 yourhostname", which should solve this problem.
Adding something like the following to /etc/hosts also works:
# keeping in mind that 192.168.0.10 is the IP of your server
127.0.0.1 localhost
192.168.0.10 your-server-01
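To see which name Java is actually failing to resolve, it can help to compare the machine's hostname with what the system resolver returns — a quick check, roughly:
$ hostname                      # the name the embedded Drillbit will try to resolve
$ getent hosts "$(hostname)"    # should print an address; no output means the /etc/hosts entry is missing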

Vagrant share producing a 400 bad request

I'm using Vagrant with Apache2, specifically the command
vagrant share --https 443
It all starts fine and provides a URL. When I access that URL I'm presented with a 400 error:
Bad Request
Your browser sent a request that this server could not understand.
Apache/2.4.12 (Ubuntu) Server at *.vagrantshare.com Port 443
I have been accessing the Vagrant machine over HTTPS just fine, but it doesn't seem to work with vagrant share.
This is a known Vagrant Share bug: https://github.com/webdevops/vagrant-docker-vm/issues/51
The only workarounds I've seen discussed are to use a custom domain or to use another product entirely (e.g. ngrok) to create the share. See the bug discussion here: https://github.com/mitchellh/vagrant/issues/5493#issuecomment-159792794
Vagrant Share docs for custom domains are here: https://atlas.hashicorp.com/help/vagrant/shares/custom-domains
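For the ngrok route, the share boils down to a single command — a sketch, assuming a reasonably recent ngrok client (older versions use slightly different flags; check ngrok's help output):
$ ngrok http https://localhost:443   # expose the local HTTPS site through a public ngrok URL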

IP based virtual hosts are not working properly after upgrading Mozilla NSS

We are using NSS as the SSL engine in our Apache server. Recently we applied the latest SUSE Linux Enterprise Server patches on the Apache server, which hosts two IP-based virtual hosts. After the upgrade the first virtual host works fine, but the second one does not.
The error log shows "Hostname vhost1.xxyyzz.com provided via SNI and hostname vhost2.xxyyzz.com provided via HTTP are different" when accessing vhost2.xxyyzz.com.
If we switch back to mod_ssl, the issue goes away. The issue is evidently related to the following patches. Any help would be appreciated.
mozilla-nss 3.16.4-0.8.1
mozilla-nss-tools 3.16.4-0.8.1
apache2-mod_nss 1.0.8-0.4.9.1
Check your /etc/hosts file to see if you might be assigning the domain name to a local internal IP address or interface.
This caused the same error message for me and many 400 errors.
After changing /etc/hosts, don't forget to restart the name service cache daemon (service nscd restart).
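Roughly, the check looks like this (the vhost names are taken from the question; substitute your own):
$ grep -E 'vhost1|vhost2' /etc/hosts   # make sure neither name is pinned to an internal address
$ sudo service nscd restart            # flush the name service cache after editing /etc/hosts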
SNI isn't fully supported in that version of mod_nss, but support has since been added: https://www.suse.com/support/update/announcement/2015/suse-ru-20150591-1.html
I saw the same error, and it went away after applying the referenced patch.

resolve subdomains on a LAN test server

I have a development machine "A" and a test server "B". "A" runs Windows; "B" runs Ubuntu. I've set up machine "B" correctly (Apache, /etc/hosts) so that, e.g., curl site.B and curl site.localhost both give the correct result. From Windows (machine "A"), when I curl site.B I get "curl: (6) Couldn't resolve host 'site.B'". Do you have any clues on how to resolve this issue? (Hint: it might be a Windows or router hostname caching issue.)
Windows also has a hosts file that you need to configure. The problem is that Microsoft spells /etc as %WINDIR%\System32\drivers\etc. :-)
See http://en.wikipedia.org/wiki/Hosts_(file) for details.
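A minimal sketch, assuming "B" has the LAN address 192.168.1.20 (a placeholder): add this line to %WINDIR%\System32\drivers\etc\hosts on the Windows machine (editing as Administrator), then flush the Windows resolver cache:
192.168.1.20    site.B
C:\> ipconfig /flushdns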