AWS Amazon MQ (RabbitMQ) Broker - Can't find IP Address

So I've created our RabbitMQ broker via Amazon MQ within a private subnet, not publicly accessible. This generates a web console URL, https://xxxx.mq.us-west-2.amazonaws.com. I wanted to create a Route 53 record, xxxxx.ourdomain.com, and use that to access the broker web console. What I did was create a CNAME record with https://xxxx.mq.us-west-2.amazonaws.com as the "route traffic to" value. The problem is that accessing it via https://xxxxx.ourdomain.com gives me this:
"This server could not prove that it is ourdomain.com; its security certificate is from *.mq.us-west-2.amazonaws.com"
Usually we create a load balancer (with the ACM certificate for ourdomain.com associated with the attached target group) in front of our services and use that to create the Route 53 A record. But I don't see any option to do this for Amazon MQ (RabbitMQ), as it doesn't give me any targets or IP addresses. I've seen documentation that shows the broker's IP address in the console (see: https://aws.amazon.com/blogs/compute/creating-static-custom-domain-endpoints-with-amazon-mq/), but I don't see it, neither in the Amazon MQ console nor under Network Interfaces.
(Screenshot: Amazon MQ console)

We need to do a DNS lookup to retrieve the private IP address:
module "shell_execute" {
source = "github.com/matti/terraform-shell-resource"
command = "dig +short $(echo $URL | cut -d'/' -f3 | cut -d':' -f1) | grep -v '\\.$'"
environment = {
URL = module.rabbitmq01.primary_ssl_endpoint
}
}
output "mq_private_ip" {
value = module.shell_execute.stdout
}
Far too complicated, but this one works.
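If you'd rather sanity-check the lookup outside of Terraform, here is a minimal Python sketch of the same idea; the endpoint value below is only a placeholder for your broker's primary_ssl_endpoint:

import socket
from urllib.parse import urlparse

# Placeholder: the broker's primary SSL endpoint as shown in the Amazon MQ console
endpoint = "amqps://xxxx.mq.us-west-2.amazonaws.com:5671"

# Strip the scheme and port to get the hostname, then resolve it to the private IP(s)
hostname = urlparse(endpoint).hostname
ips = sorted({info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)})
print(ips)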
Now we can go on configuring our target group attachment.
Best of luck!

You can get the private IP with the following command on Linux:
host {hostname-of-amazonmq-service}
As described in more detail here: https://aws.amazon.com/blogs/compute/creating-static-custom-domain-endpoints-with-amazon-mq-for-rabbitmq/
This link explains how to set a CNAME for your RabbitMQ instance. Most importantly, you need a load balancer to be able to use the secure connection.

As of September 2021, Amazon MQ for RabbitMQ brokers don't expose IP addresses, which makes them nearly useless in real-world applications.

Related

Bastionhost configuration with NaviServer on GCP?

How to add a TLS/SSL Let's Encrypt or GCP-provided certificate to a VM instance in GCP with an internal IP address and a static external address?
When I create one via a Let's Encrypt certificate install script, the resulting connections break because the VM doesn't have an external-facing IP address, only an internal one.
The traffic passes through a firewall (or load balancer) of sorts.
I'm used to bastion host VM servers in the wild.
Details: The NaviServer web server is running on GCP Compute Engine with a FreeBSD 11.3 image.
(Shielded Linux OS images aren't letting me compile NaviServer and use it on any port.)
Everything works on ports 80 and 8000 on an internal IP address, and on a static IP address pointed externally but not connected to the VM.
I can't find any proxy/firewall settings to navigate via GCP menus.
How to resolve?
Is there some special term I should use to search for docs?
Any link with instructions to follow?
Is there a way to expose a VM instance directly to an external ip address?
Any other creative way I may get SSL/TLS to work with NaviServer?
thank you
Links to some things I've tried:
Enable SSL on Tomcat on Google Compute Engine
How to setup Letsencrypt for Google Cloud Compute Engine load balancer? <-- this is for Kubernetes clusters
I'm currently trying to add a load balancer:
https://cloud.google.com/load-balancing/docs/ssl-certificates/google-managed-certs
This appears to be the solution: Use a GCP HTTP/S load balancer: https://cloud.google.com/load-balancing/docs/https
and specifically:
https://cloud.google.com/load-balancing/docs/https/ext-https-lb-simple
Argh. Actually, no.
The GCP team kindly suggested this URL: https://cloud.google.com/compute/docs/instances/custom-hostname-vm#create-custom-hostname
Set the hostname to the domain name. Treat this as if there's no proxy, just a firewall.

OpenShift v3 connect app with redis. Connection Refused

I have created a Redis 3.2 application from the default image catalog.
I'm trying to connect a Python app that runs inside the same project to the Redis DB.
This is what the Python application uses to connect to redis:
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_PASSWORD = os.environ.get('REDIS_PASSWORD') or 'test'

redis = aioredis.create_redis_pool(
    (REDIS_HOST, int(REDIS_PORT)),
    password=REDIS_PASSWORD,
    minsize=5,
    maxsize=10,
    loop=loop,
)
The deployment fails with a ConnectionRefusedError: [Errno 111] Connection refused.
My guess is that I need to use another value for REDIS_HOST, but I couldn't figure what to use.
Does anyone know how to fix this?
After you deploy from the image catalog, a number of objects will have been created for you. One of those objects is a Service, which is used to load-balance requests to the Pods it fronts. Service names for a project can be retrieved with the client tools via oc get svc.
This Service name should be used to connect to your Redis instance. If you deploy Redis before your Python application, some environment variables should already be populated which you can use, for example REDIS_SERVICE_HOST and REDIS_SERVICE_PORT.
So from your application you can connect via the Service IP or Service name. With a Service named redis, that would be redis.StrictRedis(host='redis', port=6379, password='secret').
The Redis password may have been generated for you. In that case it is retrievable from the redis secret, which can also be mounted into your Python app.
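Applied to the code in the question, a minimal sketch, assuming the Service is really named redis (check with oc get svc):

import os
import asyncio
import aioredis  # aioredis 1.x, matching the create_redis_pool usage in the question

REDIS_HOST = 'redis'  # the OpenShift Service name instead of 'localhost'
REDIS_PORT = int(os.environ.get('REDIS_SERVICE_PORT', 6379))
REDIS_PASSWORD = os.environ.get('REDIS_PASSWORD') or 'test'

async def main():
    # create_redis_pool is a coroutine in aioredis 1.x and must be awaited
    redis = await aioredis.create_redis_pool(
        (REDIS_HOST, REDIS_PORT),
        password=REDIS_PASSWORD,
        minsize=5,
        maxsize=10,
    )
    print(await redis.ping())  # should print b'PONG' once the host is correct
    redis.close()
    await redis.wait_closed()

asyncio.get_event_loop().run_until_complete(main())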
Databases in general do not use standard HTTP but custom TCP protocols. This is why in OpenShift we need to connect directly to the Service, using OpenShift's Service hostname or IP address (caution: only the Service hostname is predictable), instead of the usual Route; this applies to Redis as well. Bypassing Routes in OpenShift is like bypassing a reverse proxy such as nginx and connecting directly to the DB backend.
There is no need to use env variables, because Service hostnames are auto-generated by OpenShift using this predictable pattern:
service_name.project_name.svc, e.g.:
redis.db.svc
More info
"When a web application is made visible outside of the OpenShift cluster a route is created. This enables a user to use a URL to access the web application from a web browser. A route is usually used for web applications which use the HTTP protocol. A route cannot be used to expose a database, as they would typically use their own distinct protocol, and routes would not be able to work with the database protocol."
https://blog.openshift.com/openshift-connecting-database-using-port-forwarding/

How to find RabbitMQ URL?

A RabbitMQ URL looks like:
BROKER_URL: "amqp://user:password@remote.server.com:port//vhost"
It is not clear where to find the URL, login, and password of RabbitMQ when we need to access it from a remote worker (outside of localhost).
In other words, how do you set the RabbitMQ IP address, login, and password for Celery / RabbitMQ?
You can create a new user for accessing your RabbitMQ broker.
Normally the port used is 5672, but you can change it in your configuration file.
So suppose your IP is 1.1.1.1, you created the user test with password test, and you want to access the vhost "dev" (without quotes); then it will look something like this:
amqp://test:test@1.1.1.1:5672/dev
I recommend enabling the RabbitMQ management plugin to play around with RabbitMQ.
https://www.rabbitmq.com/management.html
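Since the question mentions Celery, here is a minimal sketch of wiring that URL into a Celery app; the user test, password test, host 1.1.1.1:5672, and vhost dev are just the example values from above:

from celery import Celery

# Broker URL built from the example values above
app = Celery('tasks', broker='amqp://test:test@1.1.1.1:5672/dev')

@app.task
def add(x, y):
    return x + y

Starting a worker with celery -A tasks worker should show in the startup banner whether the broker URL, credentials, and vhost are accepted.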
To add to the accepted answer:
As of 2022, the default username and password are both guest.
In my experience, ignoring the vhost is safe while getting started with RabbitMQ.
If using RabbitMQ as part of a Docker Compose setup (e.g. for testing), other containers in the same application should be able to reach RabbitMQ via its service name. For example, if the name of the service in docker-compose.yml is rabbitmq, passing amqp://guest:guest@rabbitmq:5672/ should work.
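As a quick check of that URL from another container, a minimal sketch with pika, assuming a Compose service named rabbitmq and the default guest credentials:

import pika

# Connect to the "rabbitmq" service over the shared Compose network
params = pika.URLParameters('amqp://guest:guest@rabbitmq:5672/')
connection = pika.BlockingConnection(params)
channel = connection.channel()

channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='hello from another container')
connection.close()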

Pod to Pod connection with using multiple port

I have a Google Cloud Container Engine cluster with 2 Pods, master and slave. Each of them runs a RabbitMQ instance, and they are supposed to be joined into one cluster.
Ports exposed from Docker aren't available from other machines; they can only be accessed through a Service. That's not a problem: I could establish a Service for each instance (one-to-one, service-to-pod) and point each Pod at the opposite Service's IP.
The problem is that RabbitMQ uses more than one port for communication. That means the Service IP should expose all of these ports from the underlying Pod. But I cannot specify a list of ports for a Service, and if I create a new Service for each port, each of them will have its own IP.
Is there any way to expose a list of ports from the same Docker/Pod on the same internal IP address using a Container Engine cluster? Maybe some special routing configuration?
Your question is similar to this question, and unfortunately has the same response: Kubernetes / Google Container Engine does not currently have a way to expose a range of ports for a Service. There is an open issue on GitHub to address this use case.

Amazon EC2 SSH server sent: (publickey, gssapi-keyex, gssapi-with-mic)

I get this error message when trying to connect with ssh.
Disconnected: No supported authentication methods available (server sent: publickey,gssapi-keyex,gssapi-with-mic)
I created an instance (CentOS), generated my webserver.pem, imported that into PuTTYgen, and output a .ppk.
I have seen that it may be a permissions issue with ~/.ssh on the server, but how can I change the permissions on the server without SSH access to it? Is there another way to connect that I am not aware of? I am quite new to Amazon EC2.
I am on a Windows system right now, using PuTTY.
My security groups were incorrect. I remade the instance with the correct security groups.
The below steps worked for me.
Edit the sshd_config file: sudo vi /etc/ssh/sshd_config.
Search for PasswordAuthentication.
If it is no, change it to yes. For me it was commented out; if so, uncomment it.
Restart the sshd service: sudo systemctl restart sshd.service
Done.
These are generally the basic steps when working with a public cloud to create a virtual machine and connect to it:
Create a Virtual Cloud Network / Virtual Private Cloud.
Create an Internet Gateway and ensure the route table for the VCN has an entry routing internet-bound traffic (destination 0.0.0.0/0) to the internet gateway.
Create a Virtual Machine (Linux in this case), ensure it has a public IP (the VM must be created in a public subnet), and download the key pair (for example, in PEM format).
Create a Security Group and ensure an ingress rule with source 0.0.0.0/0, protocol TCP, destination port 22 (a boto3 sketch of this rule follows this list).
Associate the VM with the Security Group at the VNIC level, either when creating the VM or after creation.
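Since the original question is about EC2, the security-group ingress rule from the list above could be added with boto3 roughly like this; the group ID and region are placeholders:

import boto3

ec2 = boto3.client('ec2', region_name='us-west-2')  # placeholder region

# Allow inbound SSH (TCP 22) from anywhere, as described in the list above
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',  # placeholder security group ID
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        'IpRanges': [{'CidrIp': '0.0.0.0/0', 'Description': 'SSH access'}],
    }],
)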
From the Oracle Cloud documentation:
"Just having an internet gateway alone does not expose the instances in the VCN's subnets directly to the internet. The following requirements must also be met: the internet gateway must be enabled (by default, the internet gateway is enabled upon creation); the subnet must be public; the subnet must have a route rule that directs traffic to the internet gateway; the subnet must have security list rules that allow the traffic (and each instance's firewall must allow the traffic); the instance must have a public IP address."
Now, connecting to the VM using PuTTY, you are basically doing:
ssh user@ip_address -i private_key
(a Python equivalent with paramiko is sketched at the end of this answer)
a. Use PuTTYgen and load the private PEM key that you downloaded. Once it is successfully imported, save the private key (optionally with a passphrase) as a PPK on your local machine (for example, "your_pvt_key_name.ppk").
b. Use PuTTY to connect to the VM's public IP. In PuTTY, make sure the private key is provided for authentication: in the Connection -> SSH -> Auth section, browse for "your_pvt_key_name.ppk", then go back to Session and "Open" the VM. If the VM is on a public subnet with the correct route table entry, you should see the login screen; if the VM is not reachable from the internet, it won't connect.
c. Once you see the login screen, and this is the probable cause of the above error, log in with the correct user name, such as "ec2-user" on AWS or "opc" on OCI. Using an incorrect user name results in this error:
No supported authentication methods available (server sent: publickey,gssapi-keyex,gssapi-with-mic)
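For completeness, the same connection sketched in Python with paramiko; the host is a placeholder, and webserver.pem / ec2-user are just the example values from this thread:

import paramiko

host = 'ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com'  # placeholder public DNS or IP

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Using the wrong username here fails with the same kind of
# "No supported authentication methods available" error as in PuTTY.
client.connect(host, username='ec2-user', key_filename='webserver.pem')

stdin, stdout, stderr = client.exec_command('whoami')
print(stdout.read().decode())
client.close()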