I have an existing ACI. Can I add it to a VNet and subnet via the Azure CLI / Azure Cloud Shell?
Unfortunately, this isn't possible right now from the CLI; the az container commands don't support patching or updating the network profile property of an existing container instance or container group.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-update#properties-that-require-container-delete
If you don't want to lose the traffic going to your existing ACI, you should deploy a new ACI into a private VNet/subnet and front both of your containers with an Application Gateway or Load Balancer on the new VNet/subnet.
Once you're ready to make the move, direct all traffic to the ACI running in the VNet.
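For reference, a minimal sketch of deploying the replacement container group into a VNet with the Azure CLI (the resource group, VNet, and subnet names here are hypothetical):

```
# Create (or reuse) a VNet with a subnet for the container group;
# the CLI delegates the subnet and creates a network profile for you
az network vnet create \
  --resource-group myRG \
  --name myVNet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name aciSubnet \
  --subnet-prefix 10.0.0.0/24

# Deploy the new container group into that subnet
az container create \
  --resource-group myRG \
  --name myaci-vnet \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --vnet myVNet \
  --subnet aciSubnet
```

The container group then gets a private IP in the subnet, which is what you'd put behind the Application Gateway or Load Balancer.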
I have a Redis Enterprise subscription hosted on Google (Redis Cloud / Fixed Plan / GCP / us-east1 / Standard / 100MB).
I can connect to the database from my local DEVELOPMENT environment.
BUT I CANNOT connect when I publish the app to Google Cloud Platform (Cloud Run).
My Cloud Run app is in the same region as the Redis instance (us-east1).
The connection between your GCP project and the Redis instance is achieved through VPC network peering, as specified in the docs. Check all the restrictions and considerations for VPC network peering in GCP here. So I believe that routing all traffic from your service through a Serverless VPC Access connector attached to the VPC network that is peered with your Redis instance could do the trick.
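As a sketch of that idea, assuming the peered network is named my-vpc and the service is named my-service (both hypothetical):

```
# Create a Serverless VPC Access connector in the VPC that is
# peered with the Redis subscription (the /28 range must be unused)
gcloud compute networks vpc-access connectors create redis-connector \
  --region us-east1 \
  --network my-vpc \
  --range 10.8.0.0/28

# Route all outbound traffic from the Cloud Run service through it
gcloud run services update my-service \
  --region us-east1 \
  --vpc-connector redis-connector \
  --vpc-egress all-traffic
```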
Anyhow, assigning your Cloud Run service a static outbound IP address by following this section of the docs should also guarantee that the connection is achieved. Notice that you'll basically need to configure the Cloud Run service's VPC egress to route all outbound traffic through a VPC network (using a Serverless VPC Access connector) that has a Cloud NAT gateway configured with the static IP address. Making sure that this IP address is allowed under the source IP ACL of your Redis Enterprise instance should guarantee the connection.
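A rough sketch of the Cloud NAT part, reusing the hypothetical names from above:

```
# Reserve a static egress IP and attach it to a Cloud NAT gateway
# on the connector's VPC
gcloud compute addresses create redis-egress-ip --region us-east1

gcloud compute routers create my-router \
  --network my-vpc \
  --region us-east1

gcloud compute routers nats create my-nat \
  --router my-router \
  --region us-east1 \
  --nat-external-ip-pool redis-egress-ip \
  --nat-all-subnet-ip-ranges
```

The reserved address is then what you would allow in the Redis instance's source IP ACL.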
Finally, if you face too many difficulties achieving such a connection, you could try hosting your Redis instance in Cloud Memorystore and follow this section of the docs (notice that you'll basically need to create a VPC connector once again).
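For completeness, a minimal sketch of that alternative (the instance name, size, and network are hypothetical):

```
# Create a Memorystore for Redis instance in the same region,
# attached to the VPC the connector routes into
gcloud redis instances create my-redis \
  --region us-east1 \
  --size 1 \
  --network my-vpc

# Grab the host/port to configure the app with
gcloud redis instances describe my-redis --region us-east1
```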
We have our AKS cluster running in a private VNet, behind a corporate proxy. The proxy is not a "transparent" proxy and needs to be configured manually on all nodes. Is this a supported setup? Is it possible to configure worker nodes and all system containers to work via the proxy?
Actually, AKS is managed by Azure and runs in a private VNet that you either create yourself or let Azure create. You can use the Load Balancer to transfer the traffic (see the sketch below), or use an ingress. But you can only select one size and type for the nodes when you create the cluster, and it seems multiple node sizes are not supported currently. Maybe it will be supported in the future on Azure.
For more details about AKS, see Azure Kubernetes Service.
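As a sketch of the Load Balancer route mentioned above (the deployment name and ports are hypothetical):

```
# Expose an existing deployment through an Azure load balancer;
# AKS provisions the LB rule and a public IP for the service
kubectl expose deployment my-app --port=80 --target-port=8080 --type=LoadBalancer

# Watch for the external IP to be assigned
kubectl get service my-app --watch
```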
My client is currently evaluating AKS, which seems really promising. Our current platform is based on Azure VMs we provision ourselves. We would like to create private communication between our existing platform and the managed AKS cluster, but so far that does not seem to be supported yet.
Some example use cases for us are:
- Proxying incoming HTTP traffic via our main entry point, a Varnish server, to the new AKS environment so we don't have to change URLs
- Accessing non-publicly-exposed APIs from the AKS environment
Right now the AKS cluster is in a different subscription and resource group than the other parts of our platform. The main reason we can't connect, though, seems to be that it's not possible to specify which private IP range should be used when creating an AKS cluster.
Is there support planned for this or is there a reliable workaround?
Thanks for the inquiry. There's a workaround for the stated case: the ACS Engine. "ACS Engine, for Azure Container Service Engine, is a CLI tool that helps to generate Azure Resource Manager templates to deploy Docker enabled clusters on Microsoft Azure. It works with all the orchestrators supported by ACS: Docker Swarm, Mesosphere DC/OS and Kubernetes."
So using this solution will allow you to integrate an Azure Container Service cluster into an existing virtual network. More details and a step-by-step guide can be found here: https://blogs.msdn.microsoft.com/jcorioland/2017/01/10/how-to-integrate-a-new-azure-container-service-cluster-into-an-existing-virtual-network-using-acs-engine/
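A rough outline of that flow, assuming an API model file kubernetes.json whose agent pool profile points at the existing subnet via vnetSubnetId (the file and output paths here are hypothetical):

```
# Generate ARM templates from the API model that references
# the existing subnet
acs-engine generate kubernetes.json

# Deploy the generated templates into the target resource group
az group deployment create \
  --resource-group myRG \
  --template-file _output/mycluster/azuredeploy.json \
  --parameters @_output/mycluster/azuredeploy.parameters.json
```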
I have an Express API server running on localhost on my own machine. How do I make it accessible from the Internet and not just my own machine?
Preferably, it would be deployed on AWS.
In AWS there are multiple ways of hosting your Express application, depending on the trade-off between flexibility and convenience.
AWS Elastic Beanstalk:
This gives you the most convenience by creating an autoscaling and load-balancing environment with version management and rollback support, all from one place in the AWS web console. It also provides IDE support for deployments and CLI commands for CI/CD.
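A minimal flow with the EB CLI might look like this (the app and environment names are hypothetical):

```
# Initialise the app on the Node.js platform and create an
# autoscaled, load-balanced environment
eb init my-express-app --platform node.js --region us-east-1
eb create my-express-env

# Ship subsequent versions
eb deploy
```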
AWS ECS:
If you plan to dockerize your application (which I highly recommend), you can use AWS ECS to manage your Docker cluster, with container-level autoscaling and load-balancing support for more convenience. This also provides a CLI for CI/CD.
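As a sketch of the Docker side of that, pushing the image to ECR so an ECS service can run it (assumes AWS CLI v2; the account ID, region, and repository name are hypothetical):

```
# Create a repository and log Docker in to ECR
aws ecr create-repository --repository-name my-express-app

aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS \
      --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build and push the image that the ECS service will run
docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-express-app:latest .
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-express-app:latest
```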
AWS EC2:
If you need more flexibility, you can get a virtual server in AWS and manually configure autoscaling and load balancing yourself. I'd rank this as the last option for a simple web app, since you have to do most things manually.
All these services will give you a publicly accessible URL if you configure them properly to grant access from outside. You need to configure networking and security groups correctly, exposing either the load balancer or the instance IP/DNS URL to the outside.
While trying to port my application, which runs on Docker Swarm locally, to Azure Container Service, I am stuck on the load balancer part of Azure.
Locally I have an HAProxy container instance running on the Swarm master, and multiple web containers.
The web containers just expose their ports; the ports are not mapped to the machines they run on.
The HAProxy container has its port mapped to the master and internally talks to my web containers for load balancing.
This gives me the leverage to run any number of containers with a limited number of workers in Docker Swarm.
In Azure Container Service I see that the Azure load balancer will only talk to ports that are mapped, which means I can either run just one container per agent or keep an internal load balancer in my containers, implying that users would go through two load balancers before hitting my application.
Not an ideal scenario when my application uses sticky sessions.
So apparently Microsoft's statement "Everything works same in Azure containers" goes for a toss?
What are the solutions available, or am I doing something wrong here?
Regards,
Harneet
The solution in ACS is almost identical: use HAProxy and have the Azure LB talk to that. The only difference is that you will not be running the proxy on the master; you will have Swarm deploy it to an agent for you.
You shouldn't really be running workloads on your masters. What would you do if you had a DDoS attack and couldn't reach your masters, for example? Having Swarm deploy the proxy for you also means that Swarm can monitor the health of the proxy.
You could, if you really wanted to, run the proxy on the master as you do now. The solution would be the same: have the Azure LB provide a public connection to the proxy, just as you currently do.
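Assuming Swarm mode is available, a sketch of deploying the proxy as a service pinned to the agents (the image, replica count, and ports are illustrative):

```
# Deploy HAProxy as a Swarm service constrained to worker/agent
# nodes, so the Azure LB can reach it on the published port
docker service create \
  --name proxy \
  --publish 80:80 \
  --constraint 'node.role == worker' \
  --replicas 2 \
  haproxy:latest

# Swarm reschedules the task if the proxy container dies
docker service ps proxy
```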