Should I have separate VPCs for production environments in CloudHub? - mule

I'm looking into setting up a VPC in CloudHub and just wanted to know whether I should set up one VPC for the dev, test, and production environments, or one VPC for dev and test and a separate one for production.
Also, is there a best practice for how to set up private and public subnets in a CloudHub VPC?
Thanks

Typically, yes. Most clients I've worked with have used one VPC for non-prod and a separate VPC for prod. It's good practice to have your production environment completely isolated from your non-production environments, especially at the networking level.
I'm going to provide some additional details because I think they may be relevant to where you're at with your VPC setup.
Deciding how many of your company's internal IP addresses you should allocate to your non-prod and prod VPCs can be a bit of a headache. This decision needs to be made upfront, as the VPC is immutable: additional IP addresses cannot be dynamically added or subtracted after the VPC is created. The VPC needs to be completely torn down, and a new one stood up. This means all applications in that VPC will need to come down and be re-deployed in the new VPC as well. You'll want to avoid this if at all possible.
You should know that you will use an IP address for every worker and every proxy across the entire VPC. So if you have a non-prod VPC servicing 2 environments (dev and test), and you have 4 applications using 2 workers each per environment, you will need at least 4 apps * 2 workers * 2 envs = 16 IP addresses allocated.
If I'm remembering correctly, MuleSoft was last recommending that you take however many IP addresses you think you will need (using the calculation above), and double it to determine how many IP addresses you should allocate per VPC.
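A quick back-of-the-envelope sketch of that calculation in Python, using the example numbers from above (adjust the inputs to your own landscape):

```python
# Rough CloudHub VPC sizing sketch (illustrative numbers, not a MuleSoft tool).
# An IP is consumed per worker (and per proxy) across all environments in the VPC.

apps_per_env = 4      # applications deployed in each environment
workers_per_app = 2   # CloudHub workers per application
environments = 2      # e.g. dev + test sharing the non-prod VPC

base_ips = apps_per_env * workers_per_app * environments  # 16
recommended = base_ips * 2                                # double it for headroom

print(f"Base IPs needed: {base_ips}, recommended allocation: {recommended}")
```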
Not sure about private/public subnets or how they apply to this situation.

You can have any number of environments hosted in a single VPC, so all 3 of your environments (Dev, ST, Prod) can reside in the same VPC; DNS entries route the traffic to the different environments. The recommended approach is to host your test environments (DEV, ST, SI, etc.) in the same VPC and set up a separate VPC for production.
Thanks,
Vikas

The recommendation is to keep your production environment separate from your non-production environments.
Before creating the VPCs, make sure to confirm the CIDRs with your network team to be sure they do not overlap any other network in your organization.
Regarding the size of the networks, it also depends on your business and the plans you have for the future. I suggest it is better to overestimate: for example, I would use a /16 even if a /24 is enough, because the VPC is immutable and once you create it you can't change its size, so if your business grows fast you may need more IPs.
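If it helps to see the difference concretely, Python's standard ipaddress module shows how much headroom each prefix length gives (placeholder ranges):

```python
import ipaddress

# Compare how many addresses a /24 and a /16 actually provide.
for cidr in ("10.0.0.0/24", "10.0.0.0/16"):
    net = ipaddress.ip_network(cidr)
    print(f"{cidr}: {net.num_addresses} addresses")

# 10.0.0.0/24: 256 addresses
# 10.0.0.0/16: 65536 addresses
```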
In CloudHub you don't have any control over the VPC subnets; those are managed internally by MuleSoft.
However, using Dedicated Load Balancers (DLBs), you can have a secure separation between exposed resources and internal resources.
Inside each VPC, you can create a public DLB that expose only your public APIs and an internal DLB that serves all the APIs.
What distinguishes the 2 DLBs is the whitelist section, where you specify who can connect to each:
The public DLB will whitelist: 0.0.0.0/0
The internal DLB will whitelist: your internal networks
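To make the split concrete, here is a hypothetical sketch of the two whitelist configurations in Python; the dict shape and the internal ranges are illustrative assumptions, not the actual Anypoint schema:

```python
# Hypothetical representation of the two DLB whitelist configurations.
# The structure below is illustrative only, not the real Anypoint API schema.

INTERNAL_NETWORKS = ["10.0.0.0/8", "172.16.0.0/12"]  # assumed corporate CIDRs

dlb_whitelists = {
    "public-dlb": ["0.0.0.0/0"],        # anyone on the internet can connect
    "internal-dlb": INTERNAL_NETWORKS,  # only reachable from internal networks
}

for name, cidrs in dlb_whitelists.items():
    print(f"{name} allows: {', '.join(cidrs)}")
```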
Please keep in mind that to connect to the internal DLB you will need to set up a VPN or Direct Connect between your company networks and the VPCs.
Hope this helps ...

Related

Site-2-Site between 2 Azure VNETs

Configuring a VNet-to-VNet connection is the preferred option to easily connect VNets if you need a secure tunnel using IPsec/IKE. In this case the documentation says that traffic between VNets is routed through the Microsoft backbone infrastructure.
According to the documentation, a Site-to-Site connection is also possible:
If you are working with a complicated network configuration, you may prefer to connect your VNets using the Site-to-Site steps instead of the VNet-to-VNet steps. When you use the Site-to-Site steps, you create and configure the local network gateways manually.
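For illustration only, manually creating such a local network gateway might look roughly like this with the azure-mgmt-network Python SDK; the resource names, region, and addresses are placeholders, and the exact parameters should be checked against the SDK docs:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

credential = DefaultAzureCredential()
client = NetworkManagementClient(credential, "<subscription-id>")

# Manually define the "local" side: the address space of the other VNet and
# the public IP of its VPN gateway (all values here are placeholders).
poller = client.local_network_gateways.begin_create_or_update(
    "my-resource-group",
    "vnet-b-as-local-gateway",
    {
        "location": "westeurope",
        "gateway_ip_address": "52.0.0.1",  # public IP of the remote VPN gateway
        "local_network_address_space": {"address_prefixes": ["10.2.0.0/16"]},
    },
)
local_gw = poller.result()
print(f"Created local network gateway: {local_gw.name}")
```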
In this case we have control over the configuration of the virtual local network address space, but we need to expose public IPs. The documentation doesn't say anything about where the traffic goes (Azure internal or public internet).
My question is: in this scenario, S2S between VNets, is the traffic routed through Azure infrastructure as in the VNet-to-VNet case, or does the communication go over the public internet?
edit
The traffic in an S2S connection between VNets is routed through the Microsoft backbone network. See this doc:
Microsoft Azure offers the richest portfolio of services and capabilities, allowing customers to quickly and easily build, expand, and meet networking requirements anywhere. Our family of connectivity services span virtual network peering between regions, hybrid, and in-cloud point-to-site and site-to-site architectures as well as global IP transit scenarios.

Connecting remote filesystem securely to Kubernetes Cluster

Here is the situation I am facing. I work for a company that is designing a product in which, due to legal constraints, certain pieces of data need to reside on physical machines in specific geopolitical jurisdictions. For example, some of our data must reside on machines within the borders of the "Vulgarian Federation".
We are using Kubernetes to host the system, and will probably settle on either GKE or AWS as the cloud provider.
A solution I have come up with creates a pod to host a locale-specific MongoDB instance (say, Vulgaria-MongoDB), which then seamlessly stores the data on physical drives in that locale. My plan is to export the storage from the Vulgarian machine to our Kubernetes cluster using NFS.
The problem that I am facing is that I cannot find a secure means of achieving this NFS export. I know that NFSv4 supports Kerberos, but I do not believe that NFS was ever intended to be used over the open web, even with Kerberos. Another option would be creating a VPN server in the cluster and adding the remote machine to the VPN. I have also considered SSHFS, but I think it would be too unstable for this particular use case. What would be an efficient & secure way to accomplish this task?
As mentioned in the comment, running the database far away from the storage is likely to result in all kinds of weirdness. Modern DB engines allow for some storage latency, but generally not tens of seconds. But if you must, the VPN approach is correct: some kind of protected network bridge. I don't know of any remote storage protocols I would trust over the open internet.
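If you do go the NFS-over-VPN route, a minimal sketch with the official Kubernetes Python client might look like the following; the server address and export path are assumptions (a VPN-side private address), not values from your setup:

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside the cluster

# Hypothetical NFS-backed PersistentVolume; the server address assumes the
# remote machine is reachable over the VPN (e.g. a private 10.x address).
pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="vulgaria-mongo-pv"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "100Gi"},
        access_modes=["ReadWriteOnce"],
        nfs=client.V1NFSVolumeSource(
            server="10.8.0.5",       # VPN-side address of the NFS export
            path="/exports/mongo",   # assumed export path
        ),
    ),
)
client.CoreV1Api().create_persistent_volume(pv)
```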

Can ACI ( Azure Container Instance ) in peered vnets communicate?

In vnet-a, subnet-a there is aci-a.
In vnet-b, subnet-b there is aci-b.
If the virtual networks are peered both ways, shouldn't the containers be able to ping each other?
In my case, they can't. I've followed these pages:
Creating ACI in subnet:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-vnet
Peering:
https://learn.microsoft.com/en-us/azure/virtual-network/tutorial-connect-virtual-networks-portal
Any help is appreciated!
If the virtual networks are peered both ways, shouldn't the containers be able to ping each other?
Actually, no. Virtual network deployment for ACI is still just a preview feature, and it seems that your containers can only communicate securely with other resources in the same virtual network.
Through testing, container groups can only communicate with each other in the same subnet or in different subnets within the same VNet. Perhaps they will be able to communicate across peered VNets in a later version.
Currently, VNet regional and global peering are not supported by ACI. We're working on this: regional peering should be enabled in the next few months, with global peering later down the road. The scenario you describe should be supported by regional peering if both of your VNets reside within the same Azure region.
Check back with us at aka.ms/aci/updates for the latest news and leave us feedback at aka.ms/aci/feedback if peering is a requirement for you!
Thanks for using our service!

AWS - NLB Performance Issue

AWS
I am using a Network Load Balancer in front of a private VPC in the API gateway. Basically, for APIs in the gateway, the endpoint is the Network Load Balancer's DNS name.
The issue is that performance sucks (5+ seconds). If I use the IP address of the EC2 instance instead of the NLB DNS name, the response is very good (less than 100 ms).
Can somebody point out what the issue is? Did I screw up any configuration while creating the NLB?
I have been researching for the past 2 days and couldn't find any solution.
Appreciate your response.
I had a similar issue that was due to failing health checks. When all health checks fail, the targets are tried randomly (typically a target in each AZ); however, at that stage I had only configured an EC2 instance in one of the AZs. The solution was to fix the health checks: they require the security group (on the EC2 instances) to allow the entire VPC CIDR range (or at least the port the health checks use).
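For reference, opening the health-check port to the VPC CIDR could look like this with boto3; the security group ID, port, and CIDR are placeholders for your own values:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow NLB health checks from anywhere inside the VPC CIDR (placeholder values).
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,   # the port your health checks target
        "ToPort": 80,
        "IpRanges": [{
            "CidrIp": "10.0.0.0/16",
            "Description": "NLB health checks from VPC CIDR",
        }],
    }],
)
```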

One domain name "load balanced" over multiple regions in Google Compute Engine

I have a service running on Google Compute Engine. I've got a few instances in Europe in a target pool and a few instances in the US in a target pool. At the moment I have a domain name hooked up to the Europe target pool's IP, and it can load balance between those instances very nicely.
Now, can I configure the Compute Engine Load Balancer so that the one domain name is connected to both regions? All load balancing rules seem to be related to a single region, and I don't know how I could get all the instances involved.
Thanks!
You can point one domain name (A record) at multiple IP addresses, e.g. mydomain.com -> 196.240.7.22, 204.80.5.130, but this setup will send half the users to the U.S. and the other half to Europe.
What you probably want to look for is a service that provides geo-aware or geo-located DNS. A few examples include loaddns.com, Dyn, or geoipdns.com, and it also looks like there are patches to do the same thing with BIND.
You should configure your DNS provider instead. Google does not offer a DNS service as part of their platform at the moment. You can use Amazon's Route 53 to route your requests. It has a nice feature called latency-based routing, which allows you to route clients to different IP addresses (in your case, target pools) based on latency. You can find more information here: http://aws.amazon.com/about-aws/whats-new/2012/03/21/amazon-route-53-adds-latency-based-routing/
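As a sketch of what latency-based routing looks like in practice, here is a boto3 example that upserts two latency records for the same name; the hosted zone ID and regions are placeholders, and the IPs are the example addresses from the earlier answer:

```python
import boto3

route53 = boto3.client("route53")

# Two latency-based A records for the same name, one per region. Route 53
# answers each client with the record that has the lower measured latency.
changes = []
for region, ip, ident in [("us-east-1", "204.80.5.130", "us"),
                          ("eu-west-1", "196.240.7.22", "eu")]:
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "mydomain.com.",
            "Type": "A",
            "SetIdentifier": ident,   # distinguishes the two records
            "Region": region,         # enables latency-based routing
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    })

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder hosted zone ID
    ChangeBatch={"Changes": changes},
)
```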
With Google's HTTP load balancing, you can balance traffic over VMs in different regions by exposing them via one IP address. Google eliminates the need for geo-DNS. Have a look at the doc:
https://developers.google.com/compute/docs/load-balancing/
Hope it helps.