How do I make Rails use SSL to connect to PostgreSQL? - ruby-on-rails-3

When I try to connect to the remote PostgreSQL database with a Rails 3.2 project I get this error:
FATAL: no pg_hba.conf entry for host "10.0.0.3", user "projectx", database "projectx", SSL off
My configuration on Rails looks like this:
staging:
  adapter: postgresql
  database: projectx
  username: projectx
  password: 123456
  host: 10.0.0.3
  encoding: utf8
  template: template0
  min_messages: warning
and on PostgreSQL looks like this:
hostssl all all 0.0.0.0/0 md5
hostssl all all ::/0 md5
Both machines are running on an Ubuntu 12.04.
I found posts saying that it should work automatically, which clearly doesn't happen. I found others saying that libpq didn't have SSL enabled and that enabling it solved the problem, but with no explanation of how to enable it. When I look at the dependencies of libpq I can see that it depends on SSL packages, so I would assume SSL support is compiled in.
Some posts recommended adding this:
sslmode: require
or this:
sslmode: enabled
to enable SSL mode, but it had no effect for me. I read that it's silently ignored.
I also tried the database string approach, ending up with:
staging:
  adapter: postgresql
  database: "host=10.0.0.3 dbname=projectx user=projectx password=123456 sslmode=require"
and then I got the error:
could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
which seems to indicate that Rails was trying to connect to localhost, or rather to the local PostgreSQL (there is none), instead of 10.0.0.3.
Any ideas?

As you wrote, the Ubuntu 12.x packages are normally set up so that SSL is activated and works out of the box; in addition, it is the first method tried by Rails, or by any client that lets libpq deal with this, which means almost all clients.
This automatic enabling is not necessarily true with other PostgreSQL packages or with a self-compiled server, so answers or advice that apply to those other contexts don't help with yours.
As your setup should work directly, this answer is a list of things to check to find out what is going wrong. Preferably, use psql first to test a connection setup rather than Rails, so that generic PostgreSQL issues can be ruled out first.
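For example, to force an SSL attempt with the same values as in your database.yml, something like this could be run from the Rails host (a sketch; you will be prompted for the password):
psql "host=10.0.0.3 dbname=projectx user=projectx sslmode=require"
If this works, the problem is confined to the Rails side; if it fails, it is a generic PostgreSQL/SSL issue that psql's error message should help narrow down.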
Client-side
The client-side sslmode parameter controls the sequence of connect attempts.
To voluntarily avoid SSL, a client would need to put sslmode=disable somewhere in the connection string, or PGSSLMODE=disable in the environment, or misconfigure one of the other PGSSL* environment variables. In the unlikely case that your Rails process had this in its environment, it would explain the error you're getting, given that pg_hba.conf does not allow non-SSL connections.
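To rule that out, one option is to inspect the environment of the running Rails process (a sketch; <pid> is a placeholder for the process id of your Rails worker):
tr '\0' '\n' < /proc/<pid>/environ | grep PGSSL
No output means no PGSSL* variables are set for that process.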
Another reason not to try SSL would be libpq compiled without SSL support, but that's not the case with the Ubuntu packages.
The default for sslmode is prefer, described as:
prefer (default)
first try an SSL connection; if that fails, try a non-SSL connection
The SSL off at the end of your error message relates to the last connection attempt, the one that fails. It may be that SSL was tried and failed, or that it was not tried at all; we can't know from this message alone. The connection attempt with SSL off is rejected normally by the server, per the policy set in pg_hba.conf (hostssl in the first column).
It's more plausible that the problem is server-side, because there are more things that can go wrong there.
Server-side
Here are various things to check server-side (a consolidated set of shell checks is sketched after this list):
There should be ssl=on in postgresql.conf (default location: /etc/postgresql/9.1/main/)
When connecting to localhost with psql, you should be greeted with a message like this:
psql (9.1.13)
SSL connection (cipher: DHE-RSA-AES256-SHA, bits: 256)
Type "help" for help.
The ca-certificates package should be installed and up-to-date.
The ssl-cert package should be installed and up-to-date.
Inside the postgres data directory (/var/lib/postgresql/9.1/main by default), there should be soft links:
server.crt -> /etc/ssl/certs/ssl-cert-snakeoil.pem or another valid certificate, and
server.key -> /etc/ssl/private/ssl-cert-snakeoil.key or another valid key.
/etc/ssl/certs and parent directories should be readable and cd'able by postgres.
The postgres unix user should be in the ssl-cert unix group (check with id -a postgres) otherwise it can't read the private key.
If changing postgresql.conf, be sure that postgresql gets restarted before doing any other test.
There shouldn't be any suspicious message about SSL in /var/log/postgresql/postgresql-9.1-main.log at startup time or at the time of the failed connection attempt.
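A consolidated way to run through these checks from a shell on the server could look like this (a sketch assuming the default Ubuntu paths for PostgreSQL 9.1 mentioned above):
grep '^ssl' /etc/postgresql/9.1/main/postgresql.conf     # expect: ssl = on (or true)
sudo -u postgres psql -h localhost                       # greeting should mention "SSL connection (cipher: ...)"; may prompt for a password depending on pg_hba.conf
ls -l /var/lib/postgresql/9.1/main/server.crt /var/lib/postgresql/9.1/main/server.key   # should be links to a valid cert/key
id -a postgres                                           # should list the ssl-cert group
sudo service postgresql restart                          # after any change to postgresql.conf
tail -n 50 /var/log/postgresql/postgresql-9.1-main.log   # look for SSL-related messages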

Rails uses the pg gem to connect to PostgreSQL; see here for the implementation:
https://github.com/rails/rails/blob/02a3c0e771b3e09173412f93d8699d4825a366d6/activerecord/lib/active_record/connection_adapters/postgresql_adapter.rb#L881
The pg gem uses libpq (a C library), and the documentation for PG::Connection.new found here:
http://deveiate.org/code/pg/PGconn.html
suggests the following options:
host - server hostname
hostaddr - server address (avoids hostname lookup, overrides host)
port - server port number
dbname - connecting database name
user - login user name
password - login password
connect_timeout - maximum time to wait for the connection to succeed
options - backend options
tty - (ignored in newer versions of PostgreSQL)
sslmode - (disable|allow|prefer|require)
krbsrvname - Kerberos service name
gsslib - GSS library to use for GSSAPI authentication
service - service name to use for additional parameters
So this would indicate that the connection-string approach will not work (it is not recognised by the adapter; it might be a MySQL adapter option).
It also indicates that the sslmode=require option should work, as this is a basic feature of libpq.
So:
database.yml
staging:
  ...
  sslmode: "require"
  ...
should definitely do the trick. Are you sure you are running in the staging environment? Add sslmode to the other environments too, to be sure.
Also, libpq tries SSL first by default; maybe you see the error with SSL off because the SSL attempt failed first, then libpq retried without SSL and eventually raised the error.
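To take Rails and database.yml out of the picture entirely, you could also open a connection with the pg gem directly from irb (a minimal sketch using the credentials from the question, not the author's code):

require 'pg'

# Force an SSL connection; PG::Error is raised if the server rejects it.
conn = PG.connect(
  host:     '10.0.0.3',
  dbname:   'projectx',
  user:     'projectx',
  password: '123456',
  sslmode:  'require'
)
puts conn.exec('SELECT version()').getvalue(0, 0)
conn.close

If this succeeds, the SSL path through libpq is fine and the remaining problem is in how Rails builds its connection options.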

Please check your psql version; older versions do not support sslmode=require.
It worked for me after upgrading psql to the latest version.
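A quick way to check the client side (a sketch for Ubuntu; package names may differ elsewhere):
psql --version                      # version of the psql client
dpkg -s libpq5 | grep '^Version'    # version of the libpq package actually used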

Related

Influxdb over SSL connection

I'm a little bit confused about HTTPS communication with InfluxDB. I am running an InfluxDB 1.8 instance on a virtual machine with a public IP. The machine also runs an Apache2 server, but for now I am not willing to use it as a webserver to display web pages to clients; I want to use it as a database server for InfluxDB.
I obtained a valid certificate from Let's Encrypt, indeed the welcome page https://datavm.bo.cnr.it works properly over encrypted connection.
Then I followed all the instructions in the docs to enable HTTPS: I put the fullchain.pem file in the /etc/ssl directory, set the file permissions (not sure about the meaning of this step though), and edited influxdb.conf with https-enabled = true and the paths for https-certificate and https-private-key (fullchain.pem for both, is that right?). Then systemctl restart influxdb. When I run influx -ssl -host datavm.bo.cnr.it I get the following:
Failed to connect to https://datavm.bo.cnr.it:8086: Get https://datavm.bo.cnr.it:8086/ping: http: server gave HTTP response to HTTPS client
Please check your connection settings and ensure 'influxd' is running.
Any help in understanding what I am doing wrong is very appreciated! Thank you
I figured out at least a part of the problem. It was related to permissions on the *.pem files. This looks weird, because if I type the following, as the documentation says, it does not connect:
sudo chmod 644 /etc/ssl/<CA-certificate-file>
sudo chmod 600 /etc/ssl/<private-key-file>
If, instead, I run the second line with 644, everything works perfectly. But this way I'm giving anyone permission to read the private key! I'm not able to figure out this point.
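A common way to avoid a world-readable key (a sketch, assuming the InfluxDB service runs as the influxdb user, which is the default for package installs) is to grant read access through that user's group instead of to everyone:
sudo chown root:influxdb /etc/ssl/<private-key-file>
sudo chmod 640 /etc/ssl/<private-key-file>
sudo systemctl restart influxdb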
UPDATE
If I put symlinks inside /etc/ssl/ that point to the .pem files living inside /etc/letsencrypt/live/hostname, the connection is refused. Only if I put a copy of the files does the SSL connection start.
The reason I want to put the links inside /etc/ssl/ is the automatic renewal of the certificates.
Anyone can help?

WebRTC app says my TURN server is broken, but it works

I have my STUN/TURN server (coturn) running on a local PC. It was tested on "https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/" and works. I have a domain name and configured the modem with the public IP. I configured apache2 to make the site visible to the world. I have active and valid Let's Encrypt certificates. In short, everything works. But in the test application the connection starts (the external PC communicates with the local one via socket.io), then the video is not shown and the console returns the error: ICE failed, your TURN server appears to be broken, see about:webrtc for more details.
Below is the link to the application I use as a test, since with my original one I had nothing to compare against. It's my first time with socket.io, but socket.io sends and receives messages, so that does not appear to be the problem for now.
https://github.com/anoek/webrtc-group-chat-example
P.S.:
OK, the server is behind NAT. My app (and the linked app too) works fine on the local network (sorry, I should have checked this point first), both with my TURN/STUN server and with the public Google STUN/TURN servers. This evidently indicates a bad setting of the apache2 server and/or the TURN server. Where could I find a guide about it?
My server situation: myServerIpLocal xxx.xxx.xxx.xxx -> NAT/router/modem with static public IP xx.xx.xx.xx. I can see my sites from all over the world, but the TURN server does not work outside the local network; inside the local network it works fine.
This is my turn config:
listening-port=3478
tls-listening-port=5349
alt-listening-port=3479
alt-tls-listening-port=5350
listening-ip=xxx.xxx.xxx.xxx   # my local ip
relay-ip=xxx.xxx.xxx.xxx       # my local ip
external-ip=xx.xx.xx.xx        # my public ip on the NAT/router/modem
min-port=49152
max-port=65535
verbose
fingerprint
userdb=/var/lib/turn/turndb
realm=mysite.com
cert=/etc/ssl/certificate.pem
pkey=/etc/ssl/private.key
dh-file=/etc/turn/dhparam.pem
no-stdout-log
log-file=/myhome/.turn/turn.log
lt-cred-mech
user=myusername:mypasswd
# Turn OFF the CLI support.
# By default it is always ON.
# See also options cli-ip and cli-port.
#
no-cli
#Local system IP address to be used for CLI server endpoint. Default value
# is 127.0.0.1.
#
cli-ip=127.0.1.1
# CLI server port. Default is 5766.
#
cli-port=5766
# CLI access password. Default is empty (no password).
#
cli-password=logen
no-sslv3
no-tlsv1
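If the server sits behind NAT, the listening ports and the relay range configured above also have to be reachable from outside; a sketch of the corresponding firewall rules (ufw syntax, assuming ufw is in use; the same ports must also be forwarded on the router):
sudo ufw allow 3478/tcp
sudo ufw allow 3478/udp
sudo ufw allow 5349/tcp
sudo ufw allow 5349/udp
sudo ufw allow 49152:65535/udp    # relay range from min-port/max-port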
My old turn.conf contained only:
lt-cred-mech
user=myusername:mypasswd
but TURN worked only locally... probably because I was running:
sudo turnserver -L myPublicIp -o -a myrealm
every time I started coturn...
Now I no longer use the "turnserver" command and just use sudo coturn start...
Basically, in my turn.conf file I changed these:
lt-cred-mech
user=mypasswd:myusername   # mind the gap ;)
because the debug output in my index.js never saw my external connection as an authorized user... At that point, magically, my app performed WebRTC connections with every PC and mobile, inside and outside my LAN (I tried connecting my app from a phone in Barcelona, Spain, to another one in London with good results).
Maybe the coturn wiki needs an update?
Finally, I would like to thank the Server Fault and Super User folks who rejected my question. Since I had to work things out for myself, I was able to acquire new and interesting information on this subject.
Regards

Unable to register host while creating Apache Ambari cluster

I am trying to create a localhost Apache Ambari cluster on CentOS 7. I am using Ambari 2.2.2 binaries downloaded and installed from the Ambari repository with the following commands:
cd /etc/yum.repos.d/
wget http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.2.2.0/ambari.repo
yum install ambari-server
ambari-server setup
ambari-server start
Before starting the server I did all the necessary preparation steps described in the Hortonworks documentation, including the setup of passwordless SSH, which is a frequent cause of problems according to posts found on the internet. I verify it with:
ssh root@localhost
During the creation of the cluster, in the "Install options" window, I enter the name of the host I want to create (localhost in my case) and have already tried both of the options, which are:
providing the RSA private key directly - in this case the next window simply gets stuck in the "Installing" stage and does not go any further, showing no errors
performing manual registration of hosts.
For the second option I have downloaded and installed ambari-agent
yum install ambari-agent
ambari-agent start
In case of manual host registration I am getting the following error
"Host checks were skipped on 1 hosts that failed to register.".
When I click on "Failed", which according to some posts on the internet is supposed to give a more precise description of the problem, I see the following:
"Registering with the server...
Registration with the server failed."
As a result I don't even know where to start searching for the possible reasons for this error.
Ambari cluster nodes need to be configured with a Fully Qualified Domain Name (FQDN). localhost is not an FQDN. You will need to configure the node with an FQDN and then retry the installation. You could use something like localhost.local, which is an FQDN. This requirement, and how to configure the node to meet it, is documented in the prerequisites. From the HDP documentation:
All hosts in your system must be configured for both forward and reverse DNS.
If you are unable to configure DNS in this way, you should edit the /etc/hosts file on every host in your cluster to contain the IP address and Fully Qualified Domain Name of each of your hosts.
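For a single-node setup that could look something like this (a sketch with a made-up FQDN and IP; substitute your node's real address and the name you choose):
sudo hostnamectl set-hostname ambari-node1.local
echo "192.168.1.10   ambari-node1.local   ambari-node1" | sudo tee -a /etc/hosts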
I had the same "Registering with the server... Registration with the server failed." problem just recently.
I found a response on the same topic recommending taking a look at the log file, which is located at /var/log/ambari-agent/ambari-agent.log. From there I was able to see that the hostname had been set up incorrectly during some other phase of the installation (I had something like ambari.hadoop instead of localhost). So I went to /etc/ambari-agent/conf/ambari-agent.ini and fixed it there.
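For reference, the relevant entry lives in the [server] section of that file and must match the hostname you register in the wizard (a sketch; other settings omitted):
[server]
hostname=localhost
After editing it, restart the agent with sudo ambari-agent restart.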
I know that I'm digging up quite an old question, but compiling all of that in one place might help someone with the same problem.

Why does SSH seem to remember my valid connection settings even though they're now invalid?

I'm troubleshooting some stuff with an application I'm working on that uses SFTP. Along the way, I'm using the openSSH command line client to connect to a separate SFTP server, using a configuration file (~/.ssh/config). In between tests, I'm changing the configurations, and at times I try to deliberately test an invalid configuration.
Currently, I just changed my config file to remove the IdentityFile line. Without this, it shouldn't know what key file to use to try and make the connection, and as such, the connection should fail. However, every time I ssh to that hostname, the connection succeeds without even so much as a password prompt.
This is BAD. My server requires the use of the key file; I know this because my application cannot connect without one. Yet it's almost like SSH is remembering an old, valid configuration for the server even though my current configuration is invalid.
What can I do to fix this? I don't want SSH to be hanging onto old configurations like this.
If you don't specify IdentityFile, ssh will use the keys in the default location (~/.ssh/id_{rsa,dsa,ecdsa}), as described in the manual page for ssh:
IdentityFile
Specifies a file from which the user's DSA, ECDSA, Ed25519 or RSA authentication identity is read. The default is ~/.ssh/identity for protocol version 1, and ~/.ssh/id_dsa,
~/.ssh/id_ecdsa, ~/.ssh/id_ed25519 and ~/.ssh/id_rsa for protocol version 2. [...]
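To see which identities the client actually offers, or to test the connection with public-key authentication ruled out entirely, something like this may help (a sketch; example-host is a placeholder for your Host entry):
ssh -v example-host 2>&1 | grep -Ei 'identity file|offering'
ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password example-host
Adding IdentitiesOnly yes to the Host block in ~/.ssh/config makes ssh offer only the explicitly configured identities instead of falling back to the default key files.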

Smartcvs error: Authentication Failed, You could not get authenticated by the CVS-server

I am trying to connect from a Windows computer to an Ubuntu Linux server. It is about CVS; I want to do a checkout. I use SmartCVS 7.1.9.
I get this error when I try to connect to the server: (Project > Checkout > Next)
Authentication Failed: You could not get authenticated by the
CVS-server. Details: I/O-Exception: Failed to negotiate a transport
component [diffie-hellman-group-exchange-sha1]
[diffie-hellman-group14-sha1]
Anybody ideas what I can do?
This is a CVS server issue.
SmartCVS relies on Diffie-Hellman key exchange methods that are known to have security issues and have therefore been disabled by default in current standard OpenSSH server configurations.
If you know what you are doing and don't care about the security implications, just add the following lines to sshd_config:
starts here
KexAlgorithms diffie-hellman-group1-sha1,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,blowfish-cbc,aes128-cbc,3des-cbc,cast128-cbc,arcfour,aes192-cbc,aes256-cbc
ends here
If you're on Linux, recreate the keys and restart the openssh server:
dpkg-reconfigure openssh-server
/etc/init.d/ssh restart
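After the restart, a quick way to confirm which algorithms the server now offers (a sketch, run on the Ubuntu server):
sudo sshd -T | grep -Ei 'kexalgorithms|ciphers'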
Regards
Erwin