In vnet-a, subnet-a there is aci-a.
In vnet-b, subnet-b there is aci-b.
If the virtual networks are peered both ways, shouldn't the containers be able to ping each other?
In my case, they can't. I've followed these pages:
Creating ACI in subnet:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet
Peering:
https://learn.microsoft.com/en-us/azure/virtual-network/tutorial-connect-virtual-networks-portal
Any help is appreciated!
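For reference, the two-way peering I set up in the portal corresponds roughly to the following sketch using the azure-mgmt-network Python SDK (the resource group names, VNet names, and resource IDs below are placeholders matching the example above, not my exact values):

    # Rough SDK equivalent of the two-way peering created via the portal tutorial.
    # Assumes the azure-identity and azure-mgmt-network packages; all names/IDs are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    def peer(resource_group, local_vnet, peering_name, remote_vnet_id):
        # Each direction of the peering is a separate resource on the local VNet.
        return client.virtual_network_peerings.begin_create_or_update(
            resource_group,
            local_vnet,
            peering_name,
            {
                "remote_virtual_network": {"id": remote_vnet_id},
                "allow_virtual_network_access": True,
            },
        ).result()

    vnet_a_id = "/subscriptions/<sub>/resourceGroups/rg-a/providers/Microsoft.Network/virtualNetworks/vnet-a"
    vnet_b_id = "/subscriptions/<sub>/resourceGroups/rg-b/providers/Microsoft.Network/virtualNetworks/vnet-b"

    peer("rg-a", "vnet-a", "vnet-a-to-vnet-b", vnet_b_id)  # peering in vnet-a pointing at vnet-b
    peer("rg-b", "vnet-b", "vnet-b-to-vnet-a", vnet_a_id)  # peering in vnet-b pointing at vnet-a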
If the virtual networks are peered both ways, shouldn't the containers
be able to ping each other?
Actually, no. The virtual network deployment for ACI is still only a preview. It seems that your containers can only communicate securely with other resources in the same virtual network.
From my tests, container groups can only communicate with each other within the same subnet or across different subnets of the same VNet. Communication between different VNets through peering may come later in a new version.
Currently, VNet regional and global peering are not supported by ACI. We're working on this: regional peering should be enabled in the next few months, with global peering later down the road. The scenario you describe should be supported by regional peering if both of your VNets reside within the same Azure region.
Check back with us at aka.ms/aci/updates for the latest news and leave us feedback at aka.ms/aci/feedback if peering is a requirement for you!
Thanks for using our service!
Using a Partner Interconnect, I'm trying to get access to restricted.googleapis.com working and I'm having some issues.
The BGP sessions need to advertise 199.36.153.4/30 for that to work. Do they also need to advertise all the VPC networks? Just the region the Cloud Router is in? None of them?
GCP allows you to advertise the 199.36.153.4/30 range on the Cloud Router so that it applies to all of its BGP sessions, or you can advertise it only on specific sessions; it depends on your needs. You only need to advertise this range so that it becomes known to the on-prem devices that need to reach it.
Keep in mind that you also need to define a static route in your VPC for this same range, with the default internet gateway as the next hop, so that the traffic is forwarded to the correct destination. For your VMs, you need to set firewall rules to allow egress/ingress traffic for this range.
If you need to refer to restricted.googleapis.com from the on-prem network, you can define the necessary A/CNAME records in your on-prem DNS system.
You can read more about these topics here and here.
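If it helps, here is a small standard-library Python check (just a sketch, nothing GCP-specific) you can run from an on-prem host to confirm that restricted.googleapis.com resolves into the 199.36.153.4/30 range once your DNS records and routes are in place:

    # Resolve restricted.googleapis.com and verify every returned address is inside 199.36.153.4/30.
    import ipaddress
    import socket

    RESTRICTED_RANGE = ipaddress.ip_network("199.36.153.4/30")

    addresses = {info[4][0] for info in socket.getaddrinfo("restricted.googleapis.com", 443, socket.AF_INET)}
    for addr in sorted(addresses):
        ok = ipaddress.ip_address(addr) in RESTRICTED_RANGE
        print(f"{addr}: {'in restricted range' if ok else 'NOT in restricted range'}")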
Configuring a VNet-to-VNet connection is the preferred option for easily connecting VNets when you need a secure tunnel using IPsec/IKE. In this case, the documentation says that traffic between VNets is routed through the Microsoft backbone infrastructure.
According to the documentation, a Site-to-Site connection is also possible:
If you are working with a complicated network configuration, you may prefer to connect your VNets using the Site-to-Site steps instead of the VNet-to-VNet steps. When you use the Site-to-Site steps, you create and configure the local network gateways manually.
In this case we have control over the configuration of the local network address space, but we need to expose public IPs. The documentation doesn't say anything about where the traffic goes (Azure internal or the public internet).
My question is: in this scenario, S2S between VNets, is the traffic routed through the Azure infrastructure, as in the VNet-to-VNet case, or does the communication go over the public internet?
Edit:
The traffic in an S2S connection between VNets is routed through the Microsoft backbone network. See this doc.
Microsoft Azure offers the richest portfolio of services and
capabilities, allowing customers to quickly and easily build, expand,
and meet networking requirements anywhere. Our family of connectivity
services span virtual network peering between regions, hybrid, and
in-cloud point-to-site and site-to-site architectures as well as
global IP transit scenarios.
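For what it's worth, the "create and configure the local network gateways manually" step from the Site-to-Site approach looks roughly like this with the azure-mgmt-network Python SDK (just a sketch; the names, IP, and address space are placeholders, and each side needs its own local network gateway describing the other VNet):

    # Sketch: defining the remote VNet as a "local network gateway", exactly as if it were an
    # on-premises site. Assumes azure-identity and azure-mgmt-network; all values are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

    poller = client.local_network_gateways.begin_create_or_update(
        "rg-vnet-a",       # resource group on the local side (placeholder)
        "lng-vnet-b",      # represents the remote VNet (placeholder)
        {
            "location": "westeurope",
            "gateway_ip_address": "203.0.113.10",  # public IP of the remote VNet's VPN gateway
            "local_network_address_space": {"address_prefixes": ["10.2.0.0/16"]},
        },
    )
    print(poller.result().provisioning_state)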
I've been asked by Azure support to open the question here, though I think this is an AKS bug.
When deploying a cluster, each node's 'node.status.addresses' should by design show an ExternalIP or Hostname for the node, but in an AKS-created cluster the Hostname address contains the VM name instead. This makes it really hard to know the node public IPs, which we need for various reasons.
Is there any standard or non-standard way to get a node's public IP?
There is a public IP exposed for the Azure Kubernetes Service, but it does not belong directly to a node. The Kubernetes nodes themselves are not exposed to the internet with a public IP.
The AKS nodes are created in a VNet on Azure, and they reach, or can be reached from, the outside through the Azure Load Balancer with a public IP. The VNet is a private network resource in Azure, and for AKS there are two networking types, Basic and Advanced. For more details, see Network concepts for applications in Azure Kubernetes Service (AKS).
AKS nodes are not exposed to the public internet and therefore will not have an exposed public IP.
With that said, I’ve been investigating an issue where nodes either lose or fail to ever get an internal IP. We (AKS) have implemented an initial fix, which restarts kubelet, and does seem to at least temporarily mitigate the lack of an internal IP. There are ongoing efforts upstream to find and fix the real root cause.
I don’t think I’ve come across the scenario of a node not having a hostname address though. I’m going to log a backlog item to investigate any clusters that appear to be experiencing this symptom. I can’t promise an immediate fix, but I am definitely going to look into this further early next week.
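In the meantime, if you want to see exactly which address types your nodes report, a quick sketch with the official kubernetes Python client (assuming a working kubeconfig for the cluster) is:

    # List every address (InternalIP, Hostname, ExternalIP, ...) reported in node.status.addresses.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        print(node.metadata.name)
        for addr in node.status.addresses or []:
            print(f"  {addr.type}: {addr.address}")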
There is a preview of a feature enabling a public IP per node. Please see https://learn.microsoft.com/en-us/azure/aks/use-multiple-node-pools#assign-a-public-ip-per-node-in-a-node-pool
In common scenarios, each AKS cluster will be behind a Load Balancer, which in turn will have a public IP. You can get the public IP by going to your AKS cluster -> Services and ingresses -> check for a Service of type LoadBalancer. This will have a public IP.
You can also configure the cluster so that each node has a public IP. You can then access the details from the node pool tab.
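If you prefer to do this programmatically rather than through the portal, a sketch with the kubernetes Python client (assuming a working kubeconfig) that prints the public address of every LoadBalancer-type Service is:

    # Print the external IP/hostname assigned to each LoadBalancer Service in the cluster.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for svc in v1.list_service_for_all_namespaces().items:
        ingress = (svc.status.load_balancer.ingress or []) if svc.spec.type == "LoadBalancer" else []
        for ing in ingress:
            print(f"{svc.metadata.namespace}/{svc.metadata.name}: {ing.ip or ing.hostname}")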
I'm looking into setting up a VPC in CloudHub and just wanted to know whether I should set up one VPC for the dev, test, and production environments, or one VPC for the dev and test environments and a separate one for production.
Also, is there a best practice for how to set up private and public subnets in a CloudHub VPC?
Thanks
Typically, yes. Most clients I've worked with have used one VPC for non-prod and a separate VPC for prod. It's good practice to have your production environment completely isolated from your non-production environments, especially at the networking level.
I'm going to provide some additional details because I think they may be relevant to where you're at with your VPC setup.
Deciding how many of your company's internal IP addresses you should allocate to your non-prod and prod VPCs can be a bit of a headache. This decision needs to be made upfront, as the VPC is immutable: additional IP addresses cannot be dynamically added or subtracted after the VPC is created. The VPC needs to be completely torn down, and a new one stood up. This means all applications in that VPC will need to come down and be re-deployed in the new VPC as well. You'll want to avoid this if at all possible.
You should know that you will use an IP address for every worker and every proxy across the entire VPC. So if you have a non-prod VPC servicing 2 environments (dev and test), and you have 4 applications using 2 workers each per environment, you will need at least 4 apps * 2 workers * 2 envs = 16 IP addresses allocated.
If I'm remembering correctly, MuleSoft was last recommending that you take however many IP addresses you think you will need (using the calculation above), and double it to determine how many IP addresses you should allocate per VPC.
Not sure about private/public subnets or how they apply to this situation.
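To make the sizing math above concrete, here is a small sketch in plain Python (the 4 apps / 2 workers / 2 environments figures are just the example numbers from above, not a recommendation):

    # Worked example of the worker/IP calculation and the size of a few candidate CIDR blocks.
    import ipaddress

    apps, workers_per_app, envs = 4, 2, 2
    needed = apps * workers_per_app * envs      # 16 addresses actually in use
    recommended = needed * 2                    # double it for headroom -> 32

    for prefix in ("/27", "/26", "/24"):
        net = ipaddress.ip_network(f"10.0.0.0{prefix}")
        print(f"{prefix}: {net.num_addresses} addresses")

    print(f"in use: {needed}, recommended allocation: at least {recommended}")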
You can have any number of environments hosted in a single VPC, so all three of your environments (dev, ST, prod) can reside in the same VPC; DNS entries route the traffic to the different environments. As a recommended approach, host your test environments (dev, ST, SI, etc.) in the same VPC and set up a separate VPC for production.
Thanks,
Vikas
The recommendation is to keep your production environment separate from your non-production environments.
Before creating the VPCs, make sure to confirm the CIDRs with your network team to be sure they do not overlap with any other network in your organization.
Regarding the size of the networks, it also depends on your business and the plans you have for the future. I can suggest that it is better to overestimate: for example, I would use a /16 even if a /24 is enough, because the VPC is immutable and once you create it you can't change its size, so if your business grows fast you may need more IPs.
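A quick way to check a candidate CIDR against the ranges your network team gives you (a plain-Python sketch; the ranges below are made up) is:

    # Check a proposed VPC CIDR for overlaps with existing corporate ranges before creating the VPC,
    # since the VPC cannot be resized or re-addressed later.
    import ipaddress

    proposed = ipaddress.ip_network("10.80.0.0/16")
    existing = [ipaddress.ip_network(c) for c in ("10.0.0.0/12", "172.16.0.0/16", "192.168.0.0/16")]

    conflicts = [str(net) for net in existing if proposed.overlaps(net)]
    print("OK to use" if not conflicts else f"Overlaps with: {', '.join(conflicts)}")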
In CloudHub you don't have any control over the VPC subnets. Those are managed internally by MuleSoft.
However, using Dedicated Load Balancers (DLBs), you can have a secure separation between exposed resources and internal resources.
Inside each VPC, you can create a public DLB that exposes only your public APIs and an internal DLB that serves all the APIs.
What distinguishes the two DLBs is the whitelist section, where you specify who can connect to each.
The public DLB will whitelist: 0.0.0.0/0
The internal DLB will whitelist: internal networks
Please keep in mind that to connect to the internal DLBs you will need to set up a VPN or Direct Connect between your company networks and the VPCs.
Hope this helps ...
I have a classic network setup in Azure, complete with VMs, VNets, and a site-to-site VPN.
I need to introduce an RM (Resource Manager) VM and integrate it with this network. Are there any special considerations I need to make to ensure that the RM VM can integrate with the classic network?
Thanks
The only thing you have to do is create a VNet-to-VNet connection between the ASM (Azure Service Manager, i.e. classic) network and the ARM network. You can do this by creating a gateway in each network and connecting them. The only consideration is to use non-overlapping subnets, the same consideration you have when creating a VPN between on-premises and Azure.