Update a single container in an Azure container instance with multiple containers - azure-container-instances

We've got an Azure container instance with multiple containers deployed to it (via a YAML file). When we run updates, we have to submit the full YAML file every time, with some of the values (e.g. the image ID) amended.
We'd like to break up our code so that we have more of a microservice approach to development (separate repos, separate DevOps pipelines). Is it possible to instruct a container instance to update one container (from a set of 4, for example) without submitting values for all containers?
For example, it would be great if each repo could contain a pipeline that only updates one container in the ACI. Note: what I suspect might happen is that we'd get an error when submitting an update for a single container, because ACI would treat the submission as the new desired state for the whole group, i.e. keep only that container and drop the other 3 (if we have a group of 4).
If it's not possible, is there any other way of achieving the same thing without having to step up to Kubernetes? Ideally we'd like to avoid Kubernetes purely because of the management overhead it requires.

You cannot update a single container in a container group. All containers will restart whenever any part of the group is updated.
Every container you want to update separately needs to be in its own container group. If you split them up, the containers will no longer be running on the same host, so you lose the ability to reach the other services via localhost (you will have to use the DNS name of each container group instead).
If some of your containers serve endpoints that are exposed as paths of a single server, you will need to set up something like Azure Front Door to enable path-based routing so traffic can hit the correct service via a single hostname.
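If you do go down the one-group-per-service route, each repo's pipeline can own a small single-container YAML of its own. The group name, registry, image and ports below are placeholders, so treat this as a rough sketch rather than a drop-in file:

apiVersion: 2019-12-01
location: westeurope
name: orders-service                # hypothetical group, one per repo/pipeline
type: Microsoft.ContainerInstance/containerGroups
properties:
  osType: Linux
  restartPolicy: Always
  containers:
  - name: orders
    properties:
      image: myregistry.azurecr.io/orders:1.2.3   # the only value a release normally changes
      resources:
        requests:
          cpu: 1
          memoryInGB: 1.5
      ports:
      - port: 80
  ipAddress:
    type: Public
    dnsNameLabel: orders-service    # other groups reach this service via its DNS name, not localhost
    ports:
    - protocol: tcp
      port: 80

The pipeline for that repo can then run something like az container create --resource-group <rg> --file orders-service.yaml on each release; re-running create against an existing group name applies the update without touching the other services' groups.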

Related

Is it possible to make a Redis cluster join on a particular path?

I'm looking into altering the architecture of a hosting service intended to scale arbitrarily.
On a given machine, the service works roughly as follows:
Start a container running Redis cluster client that joins a global cluster.
Start containers for each of the "Models" to be hosted.
Use upstream Redis cluster for managing model global state. Handle namespacing via keys themselves.
I'm wondering if it might be possible to change to something like this:
For each Model, start a container running the Model and a Redis cluster client.
Reverse proxy the Redis service using something like Nginx so it is available on a certain path, e.g. <host_ip>:6379/redis-<model_name>. (Note: I can't just proxy from different ports, because in theory this is supposed to be able to scale past 65,535 models running globally.)
Join the Redis cluster by using said path.
Internalizing the Redis service to the container is an appealing idea to me because it is closer to what the hosting service is supposed to achieve. We do want to share compute; we don't want to share a KV store.
Anyways, I haven't seen anything that suggests this is possible. So, sticking with the upstream may be my only option. But, in case anyone knows otherwise, I wanted to check and see.

Azure Container Instances: isolate containers inside a group

I want to be able to deploy an ACI container group but I want none of the containers in the group to be able to communicate with one another. According to the documentation, containers can communicate on any port even if it's not exposed. Is there a way to lock down all containers within a group?
For your requirements, I don't think there is a good way to achieve this with ACI itself. You could install a firewall inside each image and configure it there, but that's not a clean solution and it makes the images bigger.
I'd recommend giving AKS a try instead: it supports network policies between pods, and you can deploy each image as its own single-container pod. You can get more details from the Network policies documentation for AKS.
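For reference, the pod-level isolation you're describing maps onto a default-deny NetworkPolicy in AKS; the namespace and policy names below are just placeholders:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: my-app           # hypothetical namespace holding the single-container pods
spec:
  podSelector: {}             # empty selector = applies to every pod in the namespace
  policyTypes:
  - Ingress                   # no ingress rules listed, so all inbound pod-to-pod traffic is blocked

Note that AKS only enforces policies like this if the cluster was created with a network policy plugin (Azure or Calico) enabled.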

DC/OS running a service on each agent

Is there any way of running a service (single instance) on each deployed agent node? I need this because each agent needs to mount storage from S3 using s3fs.
The name of the feature you're looking for is "daemon tasks", but unfortunately, it's still in the planning phase for Mesos itself.
Because schedulers don't know the entire state of the cluster, Mesos needs to add a feature to enable this functionality. Once it lands in Mesos, it can be integrated with DC/OS.
The primary workaround is to use Marathon to deploy an app with the UNIQUE constraint ("constraints": [["hostname", "UNIQUE"]]) and set the app's instance count to the number of agent nodes. Unfortunately this means you have to adjust the instance count whenever you add new nodes.
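As an illustration (the app id, command and resource numbers are made up), a Marathon app definition using that constraint might look roughly like this, with the instance count pinned to the current number of agents:

{
  "id": "/s3fs-mounter",
  "cmd": "s3fs my-bucket /mnt/s3 -f",
  "cpus": 0.1,
  "mem": 128,
  "instances": 3,
  "constraints": [["hostname", "UNIQUE"]]
}

With the UNIQUE constraint Marathon places at most one instance per agent, so setting instances equal to the node count gives you one task everywhere; as noted above, instances has to be bumped by hand when new agents join.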

Kubernetes Architecture: Master-node

I have 2 questions about the orchestration tool Kubernetes.
1) What is the kube controller actually doing? Sometimes I read that it really creates the pods (with the API server telling it how), and sometimes I read that it just watches the whole process and reacts to changes in etcd.
2) Why do I see the Replication Controller on the master in so many architecture overviews of Kubernetes? I thought it was created for a service (which contains pods), so I assumed it would always be placed on the node.
The kube-controller-manager is managing a bunch of the cluster's state asynchronously, including the replication controllers. It's made up of a number of different "controllers" that watch the apiserver to know what the desired state of the world is, then do work to try to get there when the actual state differs from the desired state.
For example, it's the component that creates more pods for a replication controller when not enough exist, or tears one down when too many exist.
It also manages things like external load balancers for services running in the cloud, which endpoints make up a service, persistent volumes and their claims, and many of the new features coming up in 1.1 like daemon sets and pod autoscaling.
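To make the desired-vs-actual distinction concrete, a replication controller is just a declared target that the controller manager reconciles against; a minimal, made-up manifest might look like this:

apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc
spec:
  replicas: 3                 # desired state, recorded via the apiserver (backed by etcd)
  selector:
    app: web                  # pods counted against the desired number
  template:                   # what to create when too few matching pods exist
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.9      # image tag is illustrative
        ports:
        - containerPort: 80

The apiserver only stores this desired state; the replication controller loop inside kube-controller-manager (running on the master) notices when fewer or more than 3 matching pods exist and creates or deletes pods through the apiserver, while the kubelets on the nodes do the actual running.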

How to handle configuration for accept and production environments in GlassFish

I want to create an application that is not aware of the environment it runs in.
I want to leave the environment-specific configuration up to GlassFish.
So, for example, I have a persistence.xml which 'points' to a JTA data source:
<jta-data-source>jdbc/DB_PRODUCTSUPPLIER</jta-data-source>
In glassfish this datasource is configured to 'point' to a connection pool.
This connection pool is configured to connect to a database.
I would like a mechanism that lets me define these resources for a production and an accept environment without having to change the JNDI name, because changing it would make my application environment-aware.
Do I need to create two domains for this? Or do I need two completely separate glassfish installations?
One way to do this is to use the clustering features (the GF 2.1 default install is often in developer mode, so you'll have to enable clustering; in GF 3.1 clustering seems to be on by default).
As part of clustering, you can create stand-alone instances that do not participate in a cluster. Each instance can have its own config. These instances share everything under the Resources section, but each instance can have separate values for its system properties, most importantly separate port numbers.
So one usage scenario would be that your accept/beta environment runs on its own instance with different ports (defaults being 38080, 38181, etc., assuming you're running an HTTP app). When running this way, your new instance will be running in a separate JVM. With GF 2.1 you need to learn how to manage the node agent; with GF 3.1 you won't have to worry about that.
When you deploy an application, you must choose the destination, called a Target, so you can have an accept/beta version on one instance, and a production version on the other instance.
This is how I run beta deployments with our current GF 2.1 non-clustered setup and it works pretty well.
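For reference, on GF 3.1 (where asadmin create-local-instance is available) the steps above can be sketched roughly like this; the instance name, ports and WAR name are placeholders, and exact option names may differ between versions:

asadmin create-local-instance --systemproperties HTTP_LISTENER_PORT=38080:HTTP_SSL_LISTENER_PORT=38181 accept
asadmin start-local-instance accept                  # runs in its own JVM, on its own ports
asadmin deploy --target accept productsupplier.war   # the Target mentioned above

A production instance would be created the same way with its own system properties and deployed to with --target production, while the resources defined in the domain stay shared across instances, as described above.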