Accessing memorystore in Shared VPC - redis

I have created a Memorystore instance with IP 10.190.50.3 (and this is in us-east1).
I have a Shared VPC set up with the name my-gcp, and I authorized that network when creating the Memorystore instance.
In the shared VPC, I have a service project dev with a Windows machine (10.190.5.7). When I try to connect to Memorystore from that Windows machine, I am not able to reach the instance.
I have also enabled egress traffic to 10.190.50.3 from all instances of the my-gcp VPC. This VPC is set up in us-east4.
tracert and ping to 10.190.50.3 from the Windows machine are also not working.
This Memorystore instance is created in the host project of the VPC.

I found that the public documentation was updated recently:
1. The connecting client must be on the same network and in the same region (a different zone within the same region is also fine) as your Cloud Memorystore for Redis instance.
2. If you are using a Shared VPC network across multiple projects, you can connect to a Redis instance that is deployed on the Shared VPC network in the host project. Connecting to a Redis instance that is deployed on a Shared VPC network in a service project is not supported.
Also, here is the link on how to connect to the Redis instance from a Compute Engine VM.
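For a quick sanity check, here is a minimal sketch of testing connectivity from a Compute Engine VM on the authorized network and in the same region (the IP comes from the question, 6379 is the Memorystore default port, and the install line assumes a Debian/Ubuntu image):

sudo apt-get install -y redis-tools    # assumption: Debian/Ubuntu image
redis-cli -h 10.190.50.3 -p 6379 PING  # expect PONG if the network path works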

Unfortunately, accessing Memorystore from a service project is currently not supported in a Shared VPC. Note also that your client appears to be in us-east4 while the instance is in us-east1, which by itself already breaks the same-region requirement above.

Now you can deploy GCP's Memorystore in the shared network using a private services access connection. Refer to this link for more details.
Steps below (a gcloud sketch follows the list):
Verify or establish a private services access connection for the network in the host project that you use to create your Redis instance.
Make sure the Service Networking API is enabled for both the host project and the service project.
Follow the steps from Creating a Redis instance on a VPC network, but make the following modifications:
a. Complete the optional step for setting up a private services access connection.
b. Use the Authorized VPC Network dropdown to select the Shared VPC network from the host project. It is listed under Shared VPC Networks.
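A hedged gcloud sketch of the steps above, run against the host project (the project ID, range name, and instance name below are placeholders; my-gcp and us-east1 come from the question):

# 1. Enable the Service Networking API in the host (and service) project
gcloud services enable servicenetworking.googleapis.com --project=HOST_PROJECT_ID

# 2. Allocate a range and create the private services access connection on the shared VPC
gcloud compute addresses create google-managed-services-range \
    --global --purpose=VPC_PEERING --prefix-length=16 \
    --network=my-gcp --project=HOST_PROJECT_ID
gcloud services vpc-peerings connect \
    --service=servicenetworking.googleapis.com \
    --ranges=google-managed-services-range \
    --network=my-gcp --project=HOST_PROJECT_ID

# 3. Create the Redis instance on the shared VPC using private services access
gcloud redis instances create my-redis --size=1 --region=us-east1 \
    --network=projects/HOST_PROJECT_ID/global/networks/my-gcp \
    --connect-mode=private-service-access --project=HOST_PROJECT_ID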

Communication between AWS Fargate Services not working even with route53 setup

I have 2 services running in a single private vpc subnet (same available zone). Each service is based on a container here: https://github.com/spring-petclinic/spring-petclinic-microservices .
I've set up Route 53 service endpoints for both services.
When I run my tasks (each within its own service), service A times out calling service B over service B's Route 53 endpoint. Using localhost doesn't work because these containers are in separate services.
When I create a container for my task definition, I assign the port that my container is using (using port mapping field). However I notice in the console there is this note: "Host port mappings are not valid when the network mode for a task definition is host or awsvpc. To specify different host and container port mappings, choose the Bridge network mode."
Since I'm using Fargate, I am using awsvpc mode. So is this telling me my port mapping setting isn't doing anything? Is that why my services are timing out?
Then when I google bridge mode, this seems to tell me that awsvpc networking mode supports service discovery: https://aws.amazon.com/about-aws/whats-new/2018/05/amazon-ecs-service-discovery-supports-bridge-and-host-container-/
So how does "bridge mode" work here? Why does the port mapping field not work for awsvpc?
Edit:
I read this How to communicate between Fargate services on AWS ECS? and he just says "I created a new service and things started working." That's a bit disheartening.
Edit2:
Yes, my VPC has DNS resolution enabled.
As it turns out, the security group on my service was only allowing HTTP on port 80. Those are the inbound rules of the default SG that the service wizard gives you. I updated it to allow traffic on my container ports, and the services seem to be talking to each other now.
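For reference, a minimal AWS CLI sketch of that security group change (the group ID and container port are placeholders; using the group itself as the source keeps the rule limited to traffic between the services):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8080 \
    --source-group sg-0123456789abcdef0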

Cannot Connect to Redis Enterprise Cloud (gcloud)

I have a Google Enterprise Subscription ( Redis Cloud/Fixed Plan/GCP/us-east1/Standard/100MB)
I can connect to the database from my local DEVELOPMENT environment.
BUT I CANNOT connect when I publish the app to the Google Cloud Platform (Cloud Run)
My Cloud Run app is in the same region as the Redis instance (us-east1)
The connection between your GCP project and the Redis instance is achieved through VPC network peering, as specified in the docs. Check all the restrictions and considerations for VPC network peering in GCP here. So I believe that routing all traffic from your service through a Serverless VPC Access connector attached to the VPC network that is peered with your Redis instance could do the trick.
Anyhow, assigning your Cloud Run service a static outbound IP address by following this section of the docs should also guarantee that the connection is achieved. Notice that you'll basically need to configure the Cloud Run service's VPC egress to route all outbound traffic through a VPC network (using a Serverless VPC Access connector) that has a Cloud NAT gateway configured with the static IP address. Making sure that this IP address is allowed in the Source IP ACL of your Redis Enterprise instance should guarantee the connection.
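A hedged gcloud sketch of that setup (the service, connector, router, NAT, and address names are placeholders; it assumes the default network and the us-east1 region from the question):

# Serverless VPC Access connector and all-traffic egress for the Cloud Run service
gcloud compute networks vpc-access connectors create my-connector \
    --region=us-east1 --network=default --range=10.8.0.0/28
gcloud run services update my-service --region=us-east1 \
    --vpc-connector=my-connector --vpc-egress=all-traffic

# Static outbound IP via Cloud NAT on the same network
gcloud compute addresses create my-static-ip --region=us-east1
gcloud compute routers create my-router --network=default --region=us-east1
gcloud compute routers nats create my-nat --router=my-router --region=us-east1 \
    --nat-external-ip-pool=my-static-ip --nat-all-subnet-ip-ranges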
Finally, if you face too many difficulties achieving such a connection, you could try hosting your Redis instance in Cloud Memorystore and follow this section of the docs (notice that you'll once again need to create a VPC connector).

Create GCloud VM instance with no VPC

I need to create a Google Compute Engine Virtual Machine instance with no VPC.
For the app environment that I am using, I need the public IP address assigned directly to the network interface, like a DigitalOcean Droplet, so that running ifconfig shows an interface with the public IP address.
Each Compute Engine instance belongs to at least one VPC network. The use case you are describing is likely impossible given GCP's software-defined network architecture.
You can't create a VM in GCP without it belonging to some VPC. The Console GUI won't allow it; you have to have at least one interface.
But there's a workaround:
ssh to your VM and create an additional user & password; add this user to the google-sudoers group: (adduser sudouser; echo 'sudouser:userspass' | chpasswd; usermod -aG google-sudoers sudouser)
logout
enable serial-console interactive access
login using serial console
disable all network interfaces
This way you will have a VM with only serial console access; however, I didn't try this myself.
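If you want to try that untested path, a minimal sketch of enabling and using the interactive serial console (the instance name and zone are placeholders):

gcloud compute instances add-metadata my-vm --zone=us-east4-a \
    --metadata=serial-port-enable=TRUE
gcloud compute connect-to-serial-port my-vm --zone=us-east4-a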
There is a way to do it (not the no-VPC part, because that's not possible, but to see the external IP directly on the VM). Steps are below (a gcloud sketch follows the list):
Launch a VM in a VPC first; while launching, in the networking section, turn IP forwarding on. Do this while creating the VM; once it is created, you can't change it.
Reserve an external IP in your project and VPC.
In the VPC routing, create a route for destination network x.x.x.x/32 (the reserved public IP) and point the next hop at the VM.
In the VM, create a sub-interface and assign the public IP directly using ip addr.
Note: This works only if you're able to reach the VPC, for example over a VPN, to access the VM on the public IP.
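A hedged sketch of those steps with gcloud (the VM name, zone, route name, interface name, and the example address 203.0.113.10 are all placeholders; your reserved IP and primary interface name will differ):

# 1. Create the VM with IP forwarding enabled (must be set at creation time)
gcloud compute instances create my-vm --zone=us-east4-a --can-ip-forward

# 2. Reserve an external IP (suppose it comes back as 203.0.113.10)
gcloud compute addresses create my-ext-ip --region=us-east4

# 3. Route the /32 of the reserved IP to the VM
gcloud compute routes create route-to-my-vm --network=default \
    --destination-range=203.0.113.10/32 \
    --next-hop-instance=my-vm --next-hop-instance-zone=us-east4-a

# 4. On the VM, add the public IP to a sub-interface (interface name varies, e.g. ens4)
sudo ip addr add 203.0.113.10/32 dev ens4 label ens4:0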

Deploying Azure application internally

Can anyone suggest some solution for this scenario?:
I have two resources deployed in a VNet: an Application Gateway and a VM behind the Application Gateway (the Application Gateway is in subnet1 and the VM is in subnet2). There is no public IP associated with the Application Gateway (internal app gateway with only a private IP). I have automation scripts in a storage account in another tenant, and I need to be able to download those inside the VM using the Azure CLI. With the given architecture, I want to be able to download the scripts in the VM from the storage account. Currently, if I run "az login" from the VM, nothing happens. I found some help in the Azure documentation: https://learn.microsoft.com/en-us/azure/application-gateway/configuration-overview#allow-application-gateway-access-to-a-few-source-ips but it's not helpful.
I have also attached a network security group which allows VnetInbound for the VM. In the whole architecture, I cannot use any public IP because of customer requirements; they do not want any connectivity to the internet.
Any suggestions?
Thanks in advance!
Since the Azure VM does not have a public IP attached, the storage account cannot directly communicate with your Azure VM over the Internet.
In this scenario, I would like to provide two suggestions:
The first one is to use virtual network service endpoints, which allow you to secure Azure Storage accounts to your virtual networks, fully removing public internet access to these resources. You could create service endpoints for Microsoft.Storage in that VM subnet. Your VM instance will access the storage account over the Azure backbone network, but it has some limitations as below (a CLI sketch follows the quoted limitations):
The virtual network where the endpoint is configured can be in the same or different subscription than the Azure service resource. For more information on permissions required for setting up endpoints and securing Azure services, see Provisioning.
Virtual networks and Azure service resources can be in the same or different subscriptions. If the virtual network and Azure service resources are in different subscriptions, the resources must be under the same Active Directory (AD) tenant.
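If you go the service endpoint route, a minimal Azure CLI sketch could look like this (the resource group, VNet, subnet, and storage account names are placeholders, and it assumes the subscriptions are under the same AD tenant as noted above):

az network vnet subnet update \
    --resource-group myRG --vnet-name myVnet --name vmSubnet \
    --service-endpoints Microsoft.Storage
# If the VNet and storage account are in different subscriptions, pass the full
# subnet resource ID to --subnet instead of the name.
az storage account network-rule add \
    --resource-group storageRG --account-name mystorageacct \
    --vnet-name myVnet --subnet vmSubnet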
Another suggestion is to use private endpoints for Azure Storage. You could create private endpoint connections for the storage account in a VNet, then peer this VNet with the VNet where your Azure VM is created.
For more references, you can find more details and steps in these blogs: https://stefanstranger.github.io/2019/11/03/UsingAzurePrivateLinkForStorageAccounts/ and https://kvaes.wordpress.com/2019/03/10/hardening-your-azure-storage-account-by-using-service-endpoints/

Setup RD gateway on a single ec2 instance VPC

I have an AWS environment where
for each client, there is a dedicated EC2 Windows instance.
There is NO Active Directory; each EC2 instance is effectively in its own workgroup.
Each instance is deployed in its own dedicated VPC, security group, etc.
Clients use RDP over port 3389 to connect from their site to the EC2 instances whenever required.
The clients' IP addresses are known upfront, and we open port 3389 to allow the RDP connection.
Now we want to introduce RDP over SSL (port 443).
The typical guides from Amazon and other books walk through setting up an RD Gateway on a SEPARATE EC2 instance and using that as the jump box.
https://docs.aws.amazon.com/quickstart/latest/rd-gateway/architecture.html#best-practices
This is all good, except that
I do not want to have an additional EC2 instance within each VPC.
(I understand that there are other options, such as a centralized RD Gateway in its own VPC combined with VPC peering, etc., but I don't want to go that route for various reasons.)
So, my question is:
Is it possible to set up the RD Gateway directly on the EC2 instance to which I ultimately want to RDP, and use SSL (port 443) for connecting through RDP?
Thanks in advance.
I tried this out successfully. I created an EC2 Windows 2016 server.
I installed RD Gateway using the PowerShell command:
Install-WindowsFeature RDS-Gateway -IncludeManagementTools
Then I launched the RD Gateway Manager.
Configured the CAP and RAP to allow my Remote Desktop Users to access any resource.
Used the SSL certificate which I created using certroot in Linux.
From the AWS console, opened port 443 in the security group to allow connections from my public IP to the EC2 instance. (No other ports were opened.)
From my local computer, I set up an RDP connection such that:
The RD Gateway server setting had the RD Gateway server name (e.g. poc.mydomain.com).
This should match the SSL certificate.
The remote computer name was specified as "localhost" (implying that the same server needs to be connected to).
After providing the right credentials, I was connected to the EC2 instance using RDP.
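For completeness, a hedged AWS CLI sketch of that security group change (the group ID and client IP below are placeholders):

aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 \
    --cidr 198.51.100.25/32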