Azure Container Group IP Address disappeared - azure-container-instances

We have an Azure Container Group that had an IP Address assigned upon creation. The IP address is now missing and our application suddenly stopped working.
(i.e. it is now set to null, whereas previously the resource had an associated IP address.)
Note that our subscription was suspended for 2 weeks and is now back to an active state.
Could someone please help us understand the following:
Q1: Where did the IP go?
Q2: If the subscription was suspended (as in our case), do they deallocate the IP address?
We would appreciate your feedback as soon as possible on how we can recover the Container Group to a working state.
Thank You

ACI's public IP addresses are ephemeral, so you are correct that you are not guaranteed to retain the public IP. To work around this and get a reliable endpoint for your application, use the dns-name-label property on each container group. Details can be found in the Azure CLI and REST API documentation.
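For example, a minimal sketch of setting the label at creation time with the Azure CLI (the resource group, container name, image, and label below are placeholders); the container group is then reachable at <dns-name-label>.<region>.azurecontainer.io rather than by IP:
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --ports 80 \
  --dns-name-label my-aci-app \
  --location eastus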

The IP address of a container won't typically change between updates, but it's not guaranteed to remain the same. As long as the container group is deployed to the same underlying host, the container group retains its IP address. Although rare, and while Azure Container Instances makes every effort to redeploy to the same host, there are some Azure-internal events that can cause redeployment to a different host. To mitigate this issue, always use a DNS name label for your container instances.
Refer to: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-update#limitations

If you are using a template for container deployment/creation, you can include dnsNameLabel and then use the FQDN instead of an IP.
Template parameters:
"parameters": {
...
"dnsNameLabel": {
"type": "String",
"metadata": {
"description": "FQDN prefix (DNS name label)"
}
},
...
and then the IP address part:
"ipAddress": {
"type": "Public",
"ports": [
{
"protocol": "TCP",
"port": 22
}
],
"dnsNameLabel": "[parameters('dnsNameLabel')]"
},
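For completeness, a hedged example of passing that parameter when deploying the template with the Azure CLI (the resource group, template file name, and label value are placeholders):
az deployment group create \
  --resource-group myResourceGroup \
  --template-file aci-template.json \
  --parameters dnsNameLabel=my-aci-app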

Related

Consul Watch triggered on each server, causing multiple HTTP calls

I have a Consul cluster of 3 servers. I set up a watch config file for a keyprefix (see below) and deployed it to the config folder of each of the servers, so that if the leader goes down, the next leader will still have the watch.
The problem I am facing is that when a key change triggers the watch, each of the 3 servers runs the handler, resulting in 3 POST calls to the handler service.
How can I make it so only one handler event is called?
Preferably from the cluster leader.
Or is there a way to only enable watches on a specific server instance?
An alternative I thought of is to create a script handler and, within the script, check if it is running on the leader; if so, make the HTTP call manually.
{
  "watches": [
    {
      "type": "keyprefix",
      "prefix": "port-list/",
      "handler_type": "http",
      "http_handler_config": {
        "path": "http://XX.XX.XX.XX:XXXX/alert",
        "method": "POST",
        "header": {
          "x-foo": ["bar", "baz"]
        },
        "timeout": "10s",
        "tls_skip_verify": false
      }
    }
  ]
}
The solution we came up with was to have a dedicated server that has the watches. Obviously, this is not ideal, because if that server goes down then the watches go with it.
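For reference, a rough sketch of the leader-check script handler mentioned in the question, assuming the local agent HTTP API on 127.0.0.1:8500 and that jq is installed (the alert URL is the same placeholder as above):
#!/bin/sh
# Only forward the watch event if this node is the current raft leader.
LEADER_IP=$(curl -s http://127.0.0.1:8500/v1/status/leader | tr -d '"' | cut -d: -f1)
SELF_IP=$(curl -s http://127.0.0.1:8500/v1/agent/self | jq -r '.Member.Addr')
if [ "$LEADER_IP" = "$SELF_IP" ]; then
  # The watch payload arrives on stdin; pass it through to the alert service.
  curl -s -X POST -H 'Content-Type: application/json' --data @- http://XX.XX.XX.XX:XXXX/alert
fi
The watch would then use "handler_type": "script" with "args" pointing at this script instead of the http handler.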

Consul - External service registration with more than 1 service

I want to know if I'm doing something wrong or if support for Consul external services is actually kind of limited (or maybe designed that way).
I can't use ESM because I cannot install anything else, not even in containers :(.
Case:
- I have several hosts where MySQL has at least 4 processes running.
- I installed exporters on those hosts for each MySQL process, which are already exposing the metrics for Prometheus.
- I want those exporters to be registered in Consul as external services, as I can't install the Consul agent.
I already checked the Consul documentation and it seems that I can't register an external node with several services, just 1 service per node.
{
  "Node": "ltmysqldb01-1.com",
  "Address": "ltmysqldb01-1.com",
  "NodeMeta": {
    "external-node": "true",
    "external-probe": "true"
  },
  "Service": {
    "ID": "ltmysqldb01-1-node_exporter",
    "Service": "node_exporter",
    "Port": 9100
  },
  "Checks": [{
    "Name": "http-check",
    "status": "passing",
    "Definition": {
      "http": "ltmysqldb01-1.com",
      "interval": "30s"
    }
  }]
}
curl --request PUT --data @external_mysql_ltmysqldb01-1.json https://consul-instance.com/v1/catalog/register
Multiple services can easily be defined for a single node (agent): you basically set up an agent and configure it with several external services.
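For illustration, a hedged sketch of an agent service-definition file carrying several services for one node (the exporter names and ports are made up for the example):
{
  "services": [
    { "id": "mysqld_exporter-3306", "name": "mysqld_exporter", "port": 9104 },
    { "id": "mysqld_exporter-3307", "name": "mysqld_exporter", "port": 9105 },
    { "id": "node_exporter", "name": "node_exporter", "port": 9100 }
  ]
}
If running an agent really is not an option, the same effect should be achievable by calling /v1/catalog/register once per service against the same Node, since each call only registers or updates the single service it names.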

RabbitMQ - ACCESS_REFUSED - Login was refused

I'm using rabbitmq-server and fetch messages from it using a consumer written in Scala. This has been working like a charm, but since I migrated my RabbitMQ server from one server to another, I get the following error when trying to connect to it:
com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
In addition, the rabbitmq-server logs:
=INFO REPORT==== 18-Jul-2018::15:28:05 ===
accepting AMQP connection <0.7107.0> (127.0.0.1:42632 -> 127.0.0.1:5672)
=ERROR REPORT==== 18-Jul-2018::15:28:05 ===
Error on AMQP connection <0.7107.0> (127.0.0.1:42632 -> 127.0.0.1:5672, state: starting):
PLAIN login refused: user 'my_personal_user' - invalid credentials
=INFO REPORT==== 18-Jul-2018::15:28:05 ===
closing AMQP connection <0.7107.0> (127.0.0.1:42632 -> 127.0.0.1:5672)
I went through every SO question about authentication problems and found the following leads:
- My credentials are wrong
- I'm trying to connect with guest from remote
- My RabbitMQ version is not compatible with the consumer
None of those leads helped me. My credentials are good, I'm not using guest to connect but a privileged user with full access and admin rights that I created, and my RabbitMQ version did not change through the migration.
NB: I migrated my RabbitMQ server from a separate machine to the same one as my consumer, so now the consumer is fetching from localhost. I don't know the consequences, but I figured it could help you guys help me.
So I just had a similar problem and googled for solutions, which is how I found this page. I didn't find a direct answer to my question, but I ended up discovering that RabbitMQ has 2 different sets of rights to configure that don't exactly overlap with each other; in my case I had no rights for one set and admin rights for the other. I wonder if you could be running into a similar scenario.
Seeing the config will make the 2 sets of rights make more sense, but first some background context:
My RMQ is hosted on Kubernetes, where stuff is ephemeral, and I needed some usernames and passwords to ship preloaded with a fresh RabbitMQ instance. In Kubernetes there's an option to inject a preconfigured broker definition on first startup. (When I say broker definition, I'm referring to the spot in the management web GUI where there's an option to import and export broker definitions, i.e. back up or replace your RMQ live configuration.)
Here's a shortened version of my config with sensitive stuff removed:
{
  "vhosts": [
    {"name": "/"}
  ],
  "policies": [
    {
      "name": "ha",
      "vhost": "/",
      "pattern": ".*",
      "definition": {
        "ha-mode": "all",
        "ha-sync-mode": "automatic",
        "ha-sync-batch-size": 2
      }
    }
  ],
  "users": [
    {
      "name": "guest",
      "password": "guest",
      "tags": "management"
    },
    {
      "name": "admin",
      "password": "PASSWORD",
      "tags": "administrator"
    }
  ],
  "permissions": [
    {
      "user": "guest",
      "vhost": "/",
      "configure": "^$",
      "write": "^$",
      "read": "^$"
    },
    {
      "user": "admin",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ]
}
OK, so when I originally saw that tags attribute, I assumed it was an arbitrary value and put a self-documenting tag there, which was effectively equivalent to "", and that resulted in me having no rights to the web management GUI/REST API, while in the permissions below I had ".*" everywhere, so that part had full admin rights. It was really confusing because I was getting a misleading error message saying I was supplying invalid credentials, but the credentials were correct; I just didn't have access.
If it's not that, then there's also the configuration where the guest user is limited to localhost access only by default, but you can override it.
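If that is the culprit, a minimal sketch of the override in the new-style rabbitmq.conf format:
# Allow the guest user to connect from hosts other than localhost.
# (The classic rabbitmq.config equivalent is {loopback_users, []}.)
loopback_users.guest = false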
We were also facing a similar problem with a different tech stack. In our case the tech stack was:
- RabbitMQ deployed in Kubernetes (AKS) using the Bitnami package in HA mode
- Consumer and producer created in a microservice using Java 8 with Spring Boot and Apache Camel, also running in the same Kubernetes cluster
We verified the following points:
- User and password are correct
- User is associated with the required vhost
- Required permissions are given (administrator tag)
- User was able to log in from the RabbitMQ web console
- Connectivity on host and port was there from the microservice pod to the RabbitMQ service (checked with various tools like telnet)
- All code and configuration were exactly the same (the same configuration works correctly in a lower environment)
We were getting this issue:
com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
After much investigation and troubleshooting, we found that the username was longer than the consumer API supported.
For example, we used the username 'productionappuser'. This user was able to log in to the management web console but was failing from the microservice.
We just changed to a new username with 8 characters and it started working.
This looks very weird, as the same user was able to log in via the web console, hence sharing our findings.

Using Google Cloud SQL WITHOUT Network Authorization

I am using Google Cloud SQL for PostgreSQL with my Django REST API developed locally, and to connect to the database you are required to enter the IP address you want to connect from. My team and I are using dynamic IP addresses, and we have to change the IP address in the cloud interface every time in order to connect. Is there any other way? I wanted to try the SSL thing, but it's too complicated. Any thoughts?
Thanks
Edit:
I am trying to use SSL and this is what I added to my settings.py but I am getting an error:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'HOST': '00.000.00.000',
        'NAME': 'dbname',
        'USER': 'username',
        'PASSWORD': 'mypassword',
        'OPTIONS': {
            'sslmode': 'require',
            'ssl': {
                'ca': 'certs/server-ca.pem',
                'cert': 'certs/client-cert.pem',
                'key': 'certs/client-key.pem'
            }
        },
    }
}
The ssl files are located in a folder called certs and this folder is in the same directory as the settings.py file.
This is the error I get when running the server:
django.db.utils.ProgrammingError: invalid dsn: invalid connection option "ssl"
Try using the Cloud SQL Proxy. This enables the use of an authenticated connection, so you will not need to worry about the IP or authentication aspect.
Using the Cloud SQL Proxy allows a dedicated connection to your Cloud SQL instance (this is an authenticated connection). Once it is correctly set up, simply point your application to the proxy and all traffic will be sent to the back-end Cloud SQL instance.
For bonus points, the proxy can be configured dynamically using the VM metadata to set up the environment. I have used it like this previously with Terraform, where it pointed to a specific Cloud SQL instance, and it saves a lot of effort.
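As a rough sketch, the proxy can be run next to the application and Django pointed at it (PROJECT:REGION:INSTANCE is the instance connection name, a placeholder here):
# Open a local tunnel to the Cloud SQL instance on port 5432.
./cloud_sql_proxy -instances=PROJECT:REGION:INSTANCE=tcp:5432
Then set 'HOST': '127.0.0.1' and 'PORT': '5432' in the DATABASES settings; the 'ssl' block in OPTIONS is no longer needed, since the proxy handles encryption and authentication.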

How to create a user-provided Redis service which the Spring auto-configuration cloud connectors pick up?

I have created a user provided service for redis as below
cf cups p-redis -p "{\"host\":\"xx.xx.xxx.xxx\",\"password\":\"xxxxxxxx\",\"port\":6379}"
This is not getting picked up automatically by the Redis auto-reconfiguration or the service connectors, and I am getting a Jedis connection pool exception.
When I bind to the Redis service created from the marketplace, it works fine with the Spring Boot application. This confirms there is no issue with the code or configuration. I want a custom service for Redis to work with the Spring Boot app. How can I create such a service? What am I missing here? Is this possible?
System-Provided:
{
  "VCAP_SERVICES": {
    "user-provided": [
      {
        "credentials": {
          "host": "xx.xx.xxx.xxx",
          "password": "xxxxxxxx",
          "port": 6379
        },
        "label": "user-provided",
        "name": "p-redis",
        "syslog_drain_url": "",
        "tags": []
      }
    ]
  }
}
I could extend the abstract cloud connector and create the Redis factory myself, but I want to make it work out of the box with a custom service and auto-configuration.
All routes to mapping this service automatically lead to the spring-cloud-connectors project. If you look at the implementation, services must either be tagged with redis or expose a uri with a redis scheme in the credential keys (based on a permutation of uri).
If you'd like additional detection behavior, I'd recommend opening an issue in the GitHub repo.
What worked for me:
cf cups redis -p '{"uri":"redis://:PASSWORD@HOSTNAME:PORT"}' -t "redis"
Thanks to earlier answers that led me to this solution.