Cannot Connect to Redis Enterprise Cloud (gcloud)

I have a Redis Enterprise Cloud subscription on GCP (Redis Cloud / Fixed Plan / GCP / us-east1 / Standard / 100MB).
I can connect to the database from my local DEVELOPMENT environment.
BUT I CANNOT connect when I publish the app to the Google Cloud Platform (Cloud Run)
My Cloud Run app is in the same region as the Redis instance (us-east1).

The connection between your GCP project and the Redis instance is achieved through VPC network peering, as specified in the docs. Check all the restrictions and considerations for VPC network peering in GCP here. So I believe that routing all traffic from your service through a Serverless VPC Access connector attached to the VPC network that is peered with your Redis instance could do the trick.
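For illustration, roughly what that could look like in gcloud (the connector name, VPC name, IP range and region here are placeholders; my-peered-vpc stands for whatever VPC is peered with your Redis Cloud subscription):

    # Create a Serverless VPC Access connector on the VPC peered with Redis Cloud
    gcloud compute networks vpc-access connectors create redis-connector \
        --region=us-east1 \
        --network=my-peered-vpc \
        --range=10.8.0.0/28

    # Attach the connector to the Cloud Run service and send all egress through it
    gcloud run services update my-service \
        --region=us-east1 \
        --vpc-connector=redis-connector \
        --vpc-egress=all-traffic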
Anyhow, assigning your Cloud Run service a static outbound IP address by following this section of the docs should also guarantee that the connection is achieved. Notice that you'll basically need to configure the Cloud Run service's VPC egress to route all outbound traffic through a VPC network (using a Serverless VPC Access connector) that has a Cloud NAT gateway configured with the static IP address. Making sure that this IP address is allowed under the source IP ACL of your Redis Enterprise instance should guarantee the connection.
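A hedged sketch of that static-egress-IP setup (all resource names, ranges and the region are placeholders; the reserved address is the one you would then allowlist in the Redis source IP ACL, and you attach the connector to the service with --vpc-connector / --vpc-egress=all-traffic as in the previous sketch):

    # Reserve a static external IP for egress
    gcloud compute addresses create redis-egress-ip --region=us-east1

    # Dedicated subnet for the connector, and a connector on it
    gcloud compute networks subnets create serverless-egress-subnet \
        --network=my-peered-vpc --region=us-east1 --range=10.124.0.0/28
    gcloud compute networks vpc-access connectors create redis-egress-connector \
        --region=us-east1 --subnet=serverless-egress-subnet

    # Cloud NAT that maps the connector subnet to the reserved static IP
    gcloud compute routers create redis-egress-router \
        --network=my-peered-vpc --region=us-east1
    gcloud compute routers nats create redis-egress-nat \
        --router=redis-egress-router --region=us-east1 \
        --nat-custom-subnet-ip-ranges=serverless-egress-subnet \
        --nat-external-ip-pool=redis-egress-ip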
Finally, if you face too many difficulties achieving such a connection, you could try hosting your Redis instance in Cloud Memorystore and follow this section of the docs (notice that you'll basically need to once again create a VPC connector).
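If you do go the Memorystore route, creating the instance itself is one command (a sketch with placeholder names and region; you would still reach it from Cloud Run through a Serverless VPC Access connector on the same authorized VPC):

    gcloud redis instances create my-memorystore-redis \
        --size=1 --region=us-east1 --tier=basic \
        --network=my-vpc --redis-version=redis_6_x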

Related

Communication between AWS Fargate Services not working even with route53 setup

I have 2 services running in a single private VPC subnet (same availability zone). Each service is based on a container from here: https://github.com/spring-petclinic/spring-petclinic-microservices .
I've set up Route 53 service endpoints for both services.
When I run my tasks (each within their own service), service A times out calling service B over service B's Route 53 endpoint. Using localhost doesn't work because these containers are in separate services.
When I create a container for my task definition, I assign the port that my container is using (using port mapping field). However I notice in the console there is this note: "Host port mappings are not valid when the network mode for a task definition is host or awsvpc. To specify different host and container port mappings, choose the Bridge network mode."
Since I'm using Fargate, I am using awsvpc mode. So is this telling me my port mapping setting isn't doing anything? Is that why my services are timing out?
Then when I google bridge mode, this seems to tell me that the awsvpc networking mode supports service discovery: https://aws.amazon.com/about-aws/whats-new/2018/05/amazon-ecs-service-discovery-supports-bridge-and-host-container-/
So how does "bridge mode" work here ? Why does port mapping field not work for awsvpc ?
Edit:
I read this How to communicate between Fargate services on AWS ECS? and he just says "I created a new service and things started working." That's a bit disheartening.
Edit2:
Yes, my VPC has DNS resolution enabled.
As it turns out, the security group on my service was only allowing HTTP on port 80. Those are the inbound rules of the default SG that the service wizard gives you. I updated it to allow traffic on my container ports and the services seem to be talking to each other now.
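For anyone hitting the same thing, roughly the kind of rule I added, as an aws CLI sketch (the security group ID and port range are placeholders for your own values; allowing the SG to reach itself covers service-to-service traffic when both services share that SG):

    # Allow the services' security group to reach the container ports on itself
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 8080-8090 \
        --source-group sg-0123456789abcdef0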

Site-2-Site between 2 Azure VNETs

Configuring a VNet-to-VNet connection is the preferred option to easily connect VNets if you need a secure tunnel using IPsec/IKE. In this case the documentation says that traffic between VNets is routed through the Microsoft backbone infrastructure.
According to the documentation, a Site-to-Site connection is also possible:
If you are working with a complicated network configuration, you may prefer to connect your VNets using the Site-to-Site steps, instead the VNet-to-VNet steps. When you use the Site-to-Site steps, you create and configure the local network gateways manually.
In this case we have control over the configuration of the local network address space, but we need to expose public IPs. The documentation doesn't say where the traffic goes (Azure internal or the public internet).
My question is: in this scenario, S2S between VNets, is the traffic routed through the Azure infrastructure as in the VNet-to-VNet case, or does the communication go over the public internet?
Edit:
The traffic in an S2S connection between VNets is routed through the Microsoft backbone network. See this doc:
Microsoft Azure offers the richest portfolio of services and capabilities, allowing customers to quickly and easily build, expand, and meet networking requirements anywhere. Our family of connectivity services span virtual network peering between regions, hybrid, and in-cloud point-to-site and site-to-site architectures as well as global IP transit scenarios.
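For reference, a hedged az CLI sketch of the S2S variant between two VNets (resource group, gateway and VNet names, address prefixes and the shared key are placeholders; each side needs a VPN gateway, a local network gateway describing the other side, and a connection, and the gateway public IPs only serve as IKE tunnel endpoints while the traffic itself stays on the Microsoft backbone, as quoted above):

    # VNET1 side (mirror these commands on the VNET2 side with names swapped)
    az network vnet-gateway create --resource-group RG1 --name GW-VNET1 \
        --vnet VNET1 --public-ip-address GW-VNET1-PIP \
        --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

    # Local network gateway describing VNET2 (its gateway public IP and address space)
    az network local-gateway create --resource-group RG1 --name LNG-VNET2 \
        --gateway-ip-address <GW-VNET2 public IP> --local-address-prefixes 10.2.0.0/16

    # IPsec/IKE connection from VNET1's gateway to the local network gateway
    az network vpn-connection create --resource-group RG1 --name VNET1-to-VNET2 \
        --vnet-gateway1 GW-VNET1 --local-gateway2 LNG-VNET2 --shared-key MyS2SKey123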

Deploying Azure application internally

Can anyone suggest a solution for this scenario?
I have two resources deployed in a VNet: an Application Gateway and a VM behind the Application Gateway (the Application Gateway is in subnet1 and the VM in subnet2). There is no public IP associated with the Application Gateway (an internal app gateway with only a private IP). I have automation scripts in a storage account in another tenant and I need to be able to download them onto the VM using the Azure CLI. With the given architecture, I want to be able to download the scripts onto the VM from the storage account. Currently, if I run "az login" from the VM, nothing happens. I found some help in the Azure documentation: https://learn.microsoft.com/en-us/azure/application-gateway/configuration-overview#allow-application-gateway-access-to-a-few-source-ips but it's not helpful.
I have also attached a network security group which allows VnetInbound for the VM. In the whole architecture, I cannot use any public IP because of customer requirements; they do not want any connectivity to the internet.
Any suggestions?
Thanks in advance!
Since the Azure VM does not have a public IP attached, the storage account cannot communicate directly with your Azure VM over the internet.
In this scenario, I would like to provide two suggestions:
The first is to use virtual network service endpoints, which allow you to secure Azure Storage accounts to your virtual networks, fully removing public internet access to these resources. You could create service endpoints for Microsoft.Storage on that VM subnet. Your VM instance will then access the storage account over the Azure backbone network, but this approach has some limitations, as below (a CLI sketch follows these notes):
The virtual network where the endpoint is configured can be in the same or different subscription than the Azure service resource. For more information on permissions required for setting up endpoints and securing Azure services, see Provisioning.
Virtual networks and Azure service resources can be in the same or different subscriptions. If the virtual network and Azure service resources are in different subscriptions, the resources must be under the same Active Directory (AD) tenant.
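A rough az CLI sketch of the service endpoint approach (resource group, VNet, subnet and storage account names are placeholders; note that the tenant limitation quoted above applies to the "storage account in another tenant" part of the question):

    # Enable the Microsoft.Storage service endpoint on the VM's subnet
    az network vnet subnet update --resource-group myRG \
        --vnet-name myVNet --name vm-subnet \
        --service-endpoints Microsoft.Storage

    # On the storage account, deny public access and allow only that subnet
    az storage account update --resource-group storageRG \
        --name mystorageacct --default-action Deny
    az storage account network-rule add --resource-group storageRG \
        --account-name mystorageacct \
        --vnet-name myVNet --subnet vm-subnet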
The other suggestion is to use private endpoints for Azure Storage. You could create a private endpoint connection for the storage account in a VNet, then peer this VNet with the VNet where your Azure VM lives.
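And a sketch of the private endpoint route (again placeholder names and IDs; you would also need a privatelink.blob.core.windows.net private DNS zone so that the storage hostname resolves to the private IP from the VM's VNet):

    # Private endpoint for the storage account's blob service
    az network private-endpoint create --resource-group myRG \
        --vnet-name storage-vnet --subnet pe-subnet \
        --name mystorage-pe --connection-name mystorage-pe-conn \
        --private-connection-resource-id "/subscriptions/<sub-id>/resourceGroups/storageRG/providers/Microsoft.Storage/storageAccounts/mystorageacct" \
        --group-id blob

    # Peer the private-endpoint VNet with the VNet the VM lives in
    az network vnet peering create --resource-group myRG \
        --vnet-name storage-vnet --name peer-to-vm-vnet \
        --remote-vnet vm-vnet --allow-vnet-access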
For more details and steps, see these blogs: https://stefanstranger.github.io/2019/11/03/UsingAzurePrivateLinkForStorageAccounts/ and https://kvaes.wordpress.com/2019/03/10/hardening-your-azure-storage-account-by-using-service-endpoints/

Accessing memorystore in Shared VPC

I have created a Memorystore instance with IP 10.190.50.3 (and this is in us-east1).
I have a shared VPC named my-gcp, and I authorised that same network when creating the Memorystore instance.
In the shared VPC, I have a service project dev with a Windows machine (10.190.5.7). When I try to connect to Memorystore from that Windows machine, I am not able to reach the Memorystore instance.
I have also enabled egress traffic to 10.190.50.3 from all instances of the my-gcp VPC. This VPC is set up in us-east4.
tracert and ping from the Windows machine to 10.190.50.3 are also not working.
This Memorystore instance is created in the host project of the VPC.
I found the public documentation updated recently:
1. The connecting client must be on the same network and in the same region (a different zone within the same region is also fine) as your Cloud Memorystore for Redis instance.
2. If you are using a Shared VPC network across multiple projects, you can connect to a Redis instance that is deployed on the shared VPC network in the host project. Connecting to a Redis instance that is deployed on a shared VPC network in a service project is not supported.
Also here is the link on how to Connect to the Redis instance from a Compute Engine VM.
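As a quick check from a Compute Engine VM in the host project (on the authorized network and in us-east1), something like this should tell you whether the instance is reachable; 6379 is the default Memorystore Redis port, and redis-cli/netcat are assumed to be installed:

    # From a VM on the authorized VPC network, same region as the instance
    redis-cli -h 10.190.50.3 -p 6379 PING
    # or, if redis-cli is not installed, just test the port
    nc -zv 10.190.50.3 6379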
Unfortunately, accessing Memorystore from a service project is currently not supported with Shared VPC.
Now we can deploy GCP's Memorystore on the shared network using a private services access connection. Refer to this link for more details.
Steps below (a gcloud sketch follows the list):
1. Verify or establish a private services access connection for the network in the host project that you use to create your Redis instance.
2. Make sure the Service Networking API is enabled for both the host project and the service project.
3. Follow the steps from Creating a Redis instance on a VPC network, but make the following modifications:
a. Complete the optional step for setting up a private services access connection.
b. Use the Authorized VPC Network dropdown to select the Shared VPC network from the host project. It is listed under Shared VPC Networks.
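Roughly, in gcloud terms (project IDs, the network name, range name, region and instance name below are placeholders; the range reservation and peering are done in the host project, and the Redis instance is created against the host project's shared VPC network):

    # 1. In the host project: reserve a range and create the private services access connection
    gcloud compute addresses create google-managed-services-range \
        --global --purpose=VPC_PEERING --prefix-length=16 \
        --network=my-shared-vpc --project=host-project-id
    gcloud services vpc-peerings connect \
        --service=servicenetworking.googleapis.com \
        --ranges=google-managed-services-range \
        --network=my-shared-vpc --project=host-project-id

    # 2. Enable the Service Networking API in both projects
    gcloud services enable servicenetworking.googleapis.com --project=host-project-id
    gcloud services enable servicenetworking.googleapis.com --project=service-project-id

    # 3. Create the Redis instance on the shared VPC with private services access
    gcloud redis instances create my-redis \
        --size=1 --region=us-east1 \
        --network=projects/host-project-id/global/networks/my-shared-vpc \
        --connect-mode=private-service-access \
        --project=service-project-id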

Pod to Pod connection using multiple ports

I have a Google Cloud Container Engine cluster with 2 Pods, master and slave. Each of them runs a RabbitMQ instance, and the two are supposed to be joined into one cluster.
Ports exposed from the Docker containers aren't available from other machines; they can only be accessed through a Service. That's not a problem: I could establish a service for each instance (one-to-one, service-to-pod) and point each Pod at the opposite service IP.
The problem is that RabbitMQ uses more than one port for communication. That means the service IP should expose all of these ports from the underlying Pod. But I cannot specify a list of ports for a Service, and if I create a new service for each port, each of them will get its own IP.
Is there any way to expose a list of ports from the same container/Pod on the same internal IP address using a Container Engine cluster? Maybe some special routing configuration?
Your question is similar to this question, and unfortunately has the same response: Kubernetes / Google Container Engine does not currently have a way to expose a range of ports for a service. There is an open issue on GitHub to address this use case.
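That said, on current Kubernetes versions a single Service can list several discrete named ports (a contiguous range still can't be expressed), which may be enough for RabbitMQ's well-known ports. A minimal sketch, assuming the standard AMQP/epmd/inter-node ports and a placeholder pod label of app=rabbitmq-master:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: rabbitmq-master
    spec:
      selector:
        app: rabbitmq-master   # placeholder pod label
      ports:
        - name: amqp
          port: 5672
          targetPort: 5672
        - name: epmd
          port: 4369
          targetPort: 4369
        - name: clustering
          port: 25672
          targetPort: 25672
    EOF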