Azure Container Instances: Isolate containers inside a group

I want to be able to deploy an ACI container group but I want none of the containers in the group to be able to communicate with one another. According to the documentation, containers can communicate on any port even if it's not exposed. Is there a way to lock down all containers within a group?

For your requirements, I don't think there is an appropriate way to achieve this with ACI itself. You could install a firewall inside each image and use that, but it's not a good approach and it will make the images bigger.
I recommend you try AKS instead: it has network policies between pods, and you can deploy each image as its own single-container pod. You can get more details from the AKS documentation on network policies; a rough sketch of such a policy is shown below.
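As an illustration only (not from the original answer), a default-deny policy of the kind AKS network policies support might look roughly like this; the namespace and policy name are made up:

```yaml
# Hypothetical default-deny policy: with one container per pod, this blocks
# all pod-to-pod ingress inside the "models" namespace (name is illustrative).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: models
spec:
  podSelector: {}   # selects every pod in the namespace
  policyTypes:
  - Ingress         # no ingress rules are listed, so all inbound pod traffic is denied
```

Because the pod selector is empty and no ingress rules are given, every pod in that namespace rejects traffic from the other pods unless a later policy explicitly allows it.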

Related

Is it possible to make Redis cluster join on a particular path?

I'm looking into altering the architecture of a hosting service intended to scale arbitrarily.
On a given machine, the service works roughly as follows:
Start a container running Redis cluster client that joins a global cluster.
Start containers for each of the "Models" to be hosted.
Use upstream Redis cluster for managing model global state. Handle namespacing via keys themselves.
I'm wondering if it might be possible to change to something like this:
For each Model, start a container running the Model and a Redis cluster client.
Reverse proxy the Redis service using something like Nginx so it is available on a certain path, e.g., <host_ip>:6379/redis-<model_name>. (Note: I can't just proxy on different ports, because in theory this is supposed to be able to scale past 65,535 models running globally.)
Join the Redis cluster by using said path.
Internalizing the Redis service to the container is an appealing idea to me because it is closer to what the hosting service is supposed to achieve. We do want to share compute; we don't want to share a KV store.
Anyway, I haven't seen anything that suggests this is possible, so sticking with the upstream cluster may be my only option. But in case anyone knows otherwise, I wanted to check.

How to redirect the Apache log in Kubernetes

I have one namespace and one deployment (replica set). My Apache logs should be written outside the pod; how is this possible in Kubernetes?
This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.
You should specify more precisely what you mean by outside the pod, but as David Maze has already suggested in his comment, take a closer look at the Logging Architecture section in the official Kubernetes documentation.
Depending on what you mean by "outside the Pod", different solutions may be optimal in your case.
As you can read there:
Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes
cluster ... Cluster-level logging architectures are described in assumption that a logging backend is present inside or outside of your cluster.
The 3 most popular cluster-level logging architectures mentioned there are:
Use a node-level logging agent that runs on every node.
Include a dedicated sidecar container for logging in an application pod.
Push logs directly to a backend from within an application.
The second solution is widely used. Unlike the third one, where pushing the logs has to be handled by your application container, the sidecar approach is application independent, which makes it a much more flexible solution.
To complicate matters a little, it can be implemented in two different ways (a sketch of the first follows the list below):
Streaming sidecar container
Sidecar container with a logging agent
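Here is a hedged sketch of the streaming sidecar container variant, assuming Apache is configured to write its logs to a file on a shared emptyDir volume (the image names, paths and file names below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apache-with-log-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}                          # shared scratch volume for the log files
  containers:
  - name: apache
    image: httpd:2.4                      # assumes Apache writes access/error logs to files here
    volumeMounts:
    - name: logs
      mountPath: /usr/local/apache2/logs
  - name: log-streamer
    image: busybox
    # The sidecar tails the log file and writes it to its own stdout, where the
    # node-level logging agent (or kubectl logs) can pick it up.
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/apache/access_log']
    volumeMounts:
    - name: logs
      mountPath: /var/log/apache
```

The second variant replaces the tail command with a full logging agent (for example Fluentd) that ships the files straight to an external backend.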

Update a single container in an Azure container instance with multiple containers

We've got an Azure container instance running in Azure with multiple containers deployed to it (via a YAML file). When we run updates, we have to submit the full YAML file every time, with some of the values (e.g. the image ID) amended.
We'd like to break up our code so that we have more of a microservice approach to development (separate repos, separate devops pipelines). Is it possible to instruct a container instance to update one container (from a set of 4 for example) without submitting values for all containers?
For example, it would be great if each repo contained a pipeline that only updates one container in the ACI. Note, what I think might happen is that we may get an error when submitting an update for one container, because ACI thinks we are trying to raise 3 containers and update one of them (if we have a group of 4).
If it's not possible, is there any other way of achieving the same, without having to step up to Kubernetes? Ideally we'd like to not have to use Kubernetes just because of the management overhead required.
You cannot update a single container in a container group. All containers will restart whenever any part of the group is updated.
Every container you want to update separately needs to be in its own group. If you split the containers, the containers will no longer be running on the same host and you will lose the ability to access the other services via localhost (you will have to use the DNS name of the container group).
If some of your containers serve endpoints that are exposed as paths of a single server, you will need to set up something like Azure Front Door to enable path-based routing so traffic can hit the correct service via a single hostname.
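For what it's worth, a single-container group for one of the services might look roughly like the sketch below; the group name, registry, image, region and ports are placeholders, and the schema follows the ACI YAML reference as I understand it:

```yaml
apiVersion: '2019-12-01'             # ACI YAML API version; check the current reference
location: westeurope
name: orders-service-group           # one group per independently deployable service
type: Microsoft.ContainerInstance/containerGroups
properties:
  osType: Linux
  containers:
  - name: orders-service
    properties:
      image: myregistry.azurecr.io/orders-service:1.2.3   # only this tag changes per pipeline run
      resources:
        requests:
          cpu: 1
          memoryInGB: 1.5
      ports:
      - port: 80
  ipAddress:
    type: Public
    dnsNameLabel: orders-service     # other groups reach it at <label>.<region>.azurecontainer.io
    ports:
    - protocol: tcp
      port: 80
```

Each repo's pipeline could then run az container create --resource-group <rg> --file <its-own-file>.yaml and only touch its own group.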

Prometheus target management

We recently started using Prometheus in our production environment. Before, we only had 30-40 nodes for each service and those servers did not change very often, so we just wrote them in prometheus.yml. Now the file has become too long to keep in one place and it changes much more frequently than before. My question is: should I use file_sd_config to move those server lists out of the yml file and change those config files separately, or use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in the data center, and as far as I can see, if I switch to Consul to solve this problem, I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have better advice?
Thanks
I totally advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations, so it's possible that you don't need a Consul cluster. If your servers are in a cloud provider like AWS, GCP, Azure, OpenStack, etc., Prometheus is able to autodiscover the instances.
If you keep running with Consul, the answer is yes, the agent must be running on every node. You can also register services and nodes via the API, but it's easier to deploy the agent.
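To illustrate the file_sd_config alternative mentioned in the question, a minimal sketch (the job name, file paths and target addresses are invented for the example):

```yaml
# prometheus.yml fragment: targets are read from external files and are
# re-read automatically whenever those files change.
scrape_configs:
  - job_name: node
    file_sd_configs:
      - files:
          - targets/*.yml
        refresh_interval: 5m
```

Each matched file then just lists targets and optional labels, so the per-service lists can be edited or generated without touching prometheus.yml:

```yaml
# targets/web.yml (illustrative): one file per service
- targets: ['10.0.0.11:9100', '10.0.0.12:9100']
  labels:
    service: web
```

A consul_sd_configs block would replace the file-based discovery entirely if you go the Consul route.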

Can my requirements be met with JMX?

I am completely new to JMX. I have a specific requirement and wanted to know if it is possible to accomplish within the scope of JMX.
Requirements:
I have a set of resources which include many WebLogic, JBoss and Tomcat instances running across many servers. I need a one-stop solution: a UI to monitor these resources, check their current status and, if they are down, start and stop them from that webpage.
Is this possible using JMX?
You could use Nagios combined with check_jmx to monitor (create statistics) and possibly trigger a restart of a resource. (I'm not sure if you can trigger a restart directly via JMX.)
Check out Jopr, http://www.jboss.org/jopr/
jmx4perl comes with a full-featured Nagios plugin, check_jmx4perl, for accessing JMX information. It comes with a set of preconfigured checks for various resources, currently for JBoss, Tomcat and Jetty (more are in the pipeline).