How to hide/encrypt password in Sensu configuration? - rabbitmq

I have the following configuration to connect to a RabbitMQ server. While configuring Sensu, the RabbitMQ password is given in plain text. For security reasons, I need to hide or encrypt this password. How can we do that?
File content : /etc/sensu/rabbitmq.json
{
  "rabbitmq": {
    "host": "127.0.0.1",
    "port": 5672,
    "vhost": "/sensu",
    "user": "sensu",
    "password": "**secret**"
  }
}
Thanks in advance!!

If your goal is to have encryption in-flight with TLS, you'll want to look into enabling TLS listeners for RabbitMQ.
Sensu's official documentation also shows how to configure Sensu to communicate with a TLS-enabled RabbitMQ.
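Note that TLS protects the password in transit only; the plain-text file itself is usually protected by tightening its permissions (e.g. making it readable only by the sensu user). As a sketch of what the TLS-enabled client side might look like (the certificate paths are illustrative; check the Sensu reference for the exact option names on your version):

```json
{
  "rabbitmq": {
    "host": "127.0.0.1",
    "port": 5671,
    "vhost": "/sensu",
    "user": "sensu",
    "password": "**secret**",
    "ssl": {
      "cert_chain_file": "/etc/sensu/ssl/cert.pem",
      "private_key_file": "/etc/sensu/ssl/key.pem"
    }
  }
}
```

Port 5671 is the conventional AMQPS port; the matching TLS listener has to be enabled on the RabbitMQ side as well.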

Related

Self hosted Kestrel with SSL on an IIS server side by side issue

I have a development server which has IIS installed and multiple assigned static private IPs, but I also want to use it to run a Kestrel self-hosted web service. When I set the Kestrel service endpoint config to run HTTP on port 80 with a specific IP, it runs fine side by side with IIS as long as I don't overlap the endpoints/bindings.
That's great; however, when I try to set up SSL on Kestrel's service, it will not start at all. It fails with the following exception:
System.Net.Sockets.SocketException (10013): An attempt was made to access a socket in a way forbidden by its access permissions.
This only happens when I set the endpoint's port to 443; 5001 works, 446 works, but not 443.
Here's an example endpoint config that fails for me.
"Kestrel": {
"Endpoints": {
"Http": {
"Url": "http://10.10.13.11:80"
},
"Https": {
"Url": "https://10.10.13.11:443",
"Certificate": {
"Location": "LocalMachine",
"Store": "My",
"Subject": "portaldev.mydomain.com",
"AllowInvalid": false
}
}
}
Only port 443 is the issue. IIS also has all its bindings set to other IPs; none of them attempt to reuse the 10.10.13.11 IP that the Kestrel service is using.
What could be going on here? Why does this happen only with the SSL port and not the HTTP port? Do I need to give permissions to something to let it bind to that 443 port/IP socket?

Can't seem to connect to FTPS via Atom editor Remote FTP

I am trying to connect to my web server via my Atom editor; however, whenever I try to connect with {"rejectUnauthorized": true}, I receive this error:
Hostname/IP does not match certificate's altnames: Host: myhost.com. is not in the cert's altnames: DNS:dns.name
I can connect fine with the following code:
{
  "protocol": "ftp",
  "host": "myhost.com",
  "port": 21,
  "user": "username**",
  "pass": "password**",
  "promptForPass": false,
  "remote": "/",
  "local": "",
  "secure": true,
  "secureOptions": {"rejectUnauthorized": false, "requestCert": true, "agent": false},
  "connTimeout": 10000,
  "pasvTimeout": 10000,
  "keepalive": 10000,
  "watch": [],
  "watchTimeout": 500
}
However, from what I read, "rejectUnauthorized": false is not a very smart way to transfer files, as it leaves the connection open to a MITM attack.
I am using an automatically created Let's Encrypt SSL cert & Siteground for my hosting. Any help would be greatly appreciated.
Thanks in advance.
In case anyone arrives here as I did looking for some guidance on how to connect from Atom to a SiteGround server when using one of their Cloud Hosting plans, here's how I did it...
My local workstation is a Mac.
I'm using the Remote FTP package for Atom: https://atom.io/packages/remote-ftp
Add an SSH key in SiteGround under Site Tools > Devs > SSH Keys Manager. You can also use that page to get a copy of the private key; put this in a text file on your local workstation.
In Atom, use the Remote FTP package menu to create an sftp config file. Here's mine:
{
  "protocol": "sftp",
  "host": "example.sg-host.com",
  "port": 18765,
  "user": "u30-xxxxxxxxxx",
  "remote": "/home/u30-xxxxxxxxxx/www/example.sg-host.com",
  "local": "",
  "agent": "env",
  "privatekey": "/Path/to/local/keyfile",
  "passphrase": "the password used when you created the SiteGround SSH key",
  "hosthash": "",
  "ignorehost": true,
  "connTimeout": 10000,
  "keepalive": 10000,
  "keyboardInteractive": false,
  "keyboardInteractiveForPass": false,
  "remoteCommand": "",
  "remoteShell": "",
  "watch": [],
  "watchTimeout": 500
}
The thing that foxed me for a while is that the user you need to specify in the Remote FTP config file is the user in the SSH credentials. Maybe this is obvious to some. But it took me a while to realise you don't need an FTP user in SiteGround. SFTP is FTP over SSH so the SSH key and credentials are sufficient.
I'm using a temporary SiteGround domain name in this example. I think, although I haven't tested it, that if you've assigned a domain name to the SiteGround website, you would use that as the host.
The remote directory /home/u30-xxxxxxxxxx/www/example.sg-host.com is where you'll find the document root for the website, which on SiteGround is public_html.
Enjoy!
I am using an automatically created Let's Encrypt SSL cert & Siteground for my hosting. Any help would be greatly appreciated.
I don't know the setup run by SiteGround, but my guess is that:
This is shared hosting, i.e. you don't get a dedicated IP address for your domain but share it with others.
The Let's Encrypt certificate is only installed for HTTPS (i.e. web access).
With FTPS a single certificate is used per IP address, and that is what you get. While with HTTPS it is common to have multiple certificates per IP address by using Server Name Indication (SNI), this is usually not the case for other protocols like FTPS, SMTPS, etc.
If my guess is correct, then this is a shared FTPS server for all domains hosted on the system, and access to the users' data is restricted by username+password, not by the domain name used to connect. In this case you are actually not expected to use your own domain name to access FTPS; instead you should use the common name (which is found in the certificate) and then log in with your specific account.
It looks like this is even documented. From the SiteGround FAQ, How to establish an FTP connection to your hosting account?:
FTP Hostname - This is the hosting server name.
Thus, you are not expected to use your own domain name but the name of the hosting server. This name can be found in your Account Information.
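To illustrate the mechanics behind that error message, here is a minimal sketch of the altname comparison a TLS client performs (the hostnames are made up, and real validation, per RFC 6125, handles more edge cases than this):

```python
# Why "Hostname/IP does not match certificate's altnames" happens: the TLS
# client compares the host you dialed against the DNS names listed in the
# certificate's subjectAltName extension. A simplified version of that check:

def hostname_matches(hostname, san_dns_names):
    """Return True if hostname matches one of the cert's DNS altnames."""
    host = hostname.rstrip(".").lower()  # "myhost.com." -> "myhost.com"
    for name in san_dns_names:
        name = name.lower()
        if name.startswith("*."):
            # A wildcard covers exactly one leading label: *.example.com
            # matches ftp.example.com but not example.com itself.
            if "." in host and host.split(".", 1)[1] == name[2:]:
                return True
        elif host == name:
            return True
    return False

# A shared-hosting cert typically names the hosting machine, not the
# customer domain, so connecting by the customer domain fails:
print(hostname_matches("myhost.com.", ["server.sg-host.example"]))   # False
print(hostname_matches("server.sg-host.example",
                       ["server.sg-host.example"]))                  # True
```

This is exactly why connecting by the hosting server's own name (the one in the certificate) succeeds while your domain name does not.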

Failover redis with Consul DNS

How does Consul resolve the new Redis master IP address after the old Redis master goes down?
For example :
I ran while true; do dig redis.service.google.consul +short; sleep 2; done and the response was:
192.168.248.43
192.168.248.41
192.168.248.42
192.168.248.41
192.168.248.42
192.168.248.43
...
My expectation was that it would resolve only to 192.168.248.41 because it is the master. When the master goes down, Consul should resolve to 192.168.248.42 or 192.168.248.43, according to which one becomes the new master.
Here is my consul services config in 192.168.248.41
...
"services": [
{
"id": "redis01-xxx",
"name": "redis",
"tags": ["staging"],
"address": "192.168.248.41",
"port": 6379
}
]
...
Here is my consul services config in 192.168.248.42
...
"services": [
{
"id": "redis01-xxx",
"name": "redis",
"tags": ["staging"],
"address": "192.168.248.42",
"port": 6379
}
]
...
And the same for 192.168.248.43.
My expectation is like in this video: https://www.youtube.com/watch?v=VHpZ0o8P-Ts
When I do a dig, Consul should resolve to only one IP address (the master's). When the master goes down and Redis Sentinel selects a new master, Consul should resolve to the new Redis master's IP address.
I am very new to Consul, so I would really appreciate it if someone could give a short example and point me at the right Consul feature so I can catch on faster.
When you register multiple services with the same "name", they will all appear under <name>.service.consul (if they are healthy, of course). In your case, you didn't define any health checks for your Redis services; that's why you see all of the service IPs in your DNS query response.
For the use case where you only need the IP of the Redis master, I would recommend registering the service with a health check. I'm not familiar with how one would query Redis to find out whether the node is the master, but you could use, for example, a script check, where your script returns 0 if the current Redis node has been elected master by Redis Sentinel. Consul will automatically recognize that service instance as healthy, and you will see only its IP in the dig redis.service.consul query against the Consul DNS interface.
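As a sketch of that suggestion (the check field names follow recent Consul agent versions, and the script path is hypothetical, not something from the question), the service registration on 192.168.248.41 could gain a script check that only passes on the current master:

```json
"services": [
  {
    "id": "redis01-xxx",
    "name": "redis",
    "tags": ["staging"],
    "address": "192.168.248.41",
    "port": 6379,
    "check": {
      "name": "redis-is-master",
      "args": ["/usr/local/bin/check_redis_master.sh"],
      "interval": "10s"
    }
  }
]
```

Here /usr/local/bin/check_redis_master.sh would be a small script that exits 0 only when redis-cli -p 6379 info replication reports role:master. Note that on current Consul versions, script checks additionally require enable_local_script_checks (or enable_script_checks) to be set to true in the agent configuration.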

Rabbitmq MQTT Bad username or bad password

I installed RabbitMQ on my Ubuntu 16.04 server. After that, I enabled the MQTT plugin for RabbitMQ. In the rabbitmq-plugins list output I can see that the MQTT plugin is enabled and running on the server.
I then added the following configuration file for MQTT at this location
/etc/rabbitmq/rabbitmq.config
and restarted the server:
[{rabbit, [{tcp_listeners, [5672]}]},
 {rabbitmq_mqtt, [{default_user, <<"myuser">>},
                  {default_pass, <<"mypass">>},
                  {allow_anonymous, false},
                  {vhost, <<"/">>},
                  {exchange, <<"amq.topic">>},
                  {subscription_ttl, 1800000},
                  {prefetch, 10},
                  {ssl_listeners, []},
                  %% Default MQTT with TLS port is 8883
                  %% {ssl_listeners, [8883]}
                  {tcp_listeners, [1883]},
                  {tcp_listen_options, [{backlog, 128},
                                        {nodelay, true}]}]}
].
Now when I try to publish a message to the RabbitMQ server like this:
import paho.mqtt.publish as publish
import paho.mqtt.client as mqtt

publish.single('/',
               payload='hello world',
               hostname='xxx.xxx.xxx.xxx',  # my server's IP address
               auth={'username': 'myuser', 'password': 'mypass'},
               port=1883,
               protocol=mqtt.MQTTv311)
It gives me this error:
paho.mqtt.MQTTException: Connection Refused: bad user name or password.
There is no encryption for now. So what am I doing wrong?
I tried the same procedure with the Mosquitto MQTT broker and it worked fine, so I think the issue is with my RabbitMQ configuration.

Restrict access to RabbitMQ via IP

I installed RabbitMQ via a Docker image on a machine, including the management and rabbitmq_auth_backend_ip_range plugins. I want to restrict access to ports 5671/2 and 15672 so that only certain IPs are allowed to reach them.
As 15672 is the web interface, I have no current solution for that. Any ideas?
For 5671/2 (which one is the secure one?) I want to use the rabbitmq_auth_backend_ip_range plugin because, as far as I understood, that's its purpose.
My current rabbitmq.config looks like this:
[
  {rabbit, [
    {auth_backends, [{rabbit_auth_backend_ip_range}]}
  ]},
  {rabbitmq_auth_backend_ip_range, [
    {tag_masks,
      [{'administrator', [<<"::FFFF:192.168.0.0/112">>]}]}
  ]}
].
According to the documentation, that allows access only for accounts tagged with administrator. But if I do a telnet, nothing has changed:
telnet ip-address 5672
I can still access it. How do you pass credentials via telnet? How is IP restriction done with RabbitMQ?
rabbitmq-auth-backend-ip-range only provides an authentication/authorization mechanism for logging in to and talking to the RabbitMQ server. That doesn't mean your 5672 port is not open.
You will still be able to telnet to 5672, but when an administrator user tries to connect to the RabbitMQ server, the connection must come from the given IP address; otherwise authentication fails.
For the RabbitMQ management interface you can bind the listener to an IP address like this:
{rabbitmq_management, [
  {listener, [{port, 15672}, {ip, "127.0.0.1"}]}
]}
rabbitmq-auth-backend-ip-range is a community plugin for client authorization based on source IP address. With this plugin, we can restrict client access on the basis of IP address.
Steps to configure the plugin in RabbitMQ 3.6.x:
wget https://dl.bintray.com/rabbitmq/community-plugins/3.6.x/rabbitmq_auth_backend_ip_range/rabbitmq_auth_backend_ip_range-20180116-3.6.x.zip
Unzip the content into /usr/lib/rabbitmq/lib/rabbitmq_server-3.x/plugins.
Enable the plugin: rabbitmq-plugins enable rabbitmq_auth_backend_ip_range
Set a custom tag to which this plugin will apply IP restrictions:
rabbitmqctl set_user_tags custom_user custom_tag
Edit the RabbitMQ configuration file:
vi /etc/rabbitmq/rabbitmq.config
[
  {rabbit, [
    {tcp_listeners, [5672]},
    {auth_backends, [
      {rabbit_auth_backend_internal,
        [rabbit_auth_backend_internal, rabbit_auth_backend_ip_range]}
    ]}
  ]},
  {rabbitmq_auth_backend_ip_range, [
    {tag_masks,
      [{'customtag', [<<"::FFFF:172.xx.xx.xxx">>]}]},
    {default_masks, [<<"::0/0">>]}
  ]}
].
This configuration takes effect in such a way that the user with tag customtag will only be able to connect to the RabbitMQ server from IP address 172.xx.xx.xxx, while users with other tags can connect from any IP address. Then restart the server:
sudo service rabbitmq-server restart
PS: As there is no valid link online describing how to configure the rabbitmq_auth_backend_ip_range plugin, I answered this question with the configuration steps.