Route to private Docker container using Traefik

I have a VPC with one public and one private subnet. In the private subnet there are a couple of Docker container services running on specific ports. I can access the private EC2 instance only through the EC2 instance in the public subnet.
Is it possible to run a Traefik Docker container in the public subnet and have it route to the Docker containers in the private subnet? I just want all my API services in the private subnet, not publicly accessible to users.
I went through some of the Traefik documentation and got the impression that Traefik can only discover Docker containers within the same Docker network. Is this correct?
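For illustration: if cross-network discovery does turn out to be the blocker, Traefik's file provider can declare a backend by plain IP and port, so the container in the private subnet does not need to share a Docker network with Traefik. A minimal Traefik v2 sketch, where the hostname, IP, and port are placeholders:

    # dynamic.yml, loaded via Traefik's file provider
    http:
      routers:
        api-router:
          rule: "Host(`api.example.com`)"   # placeholder hostname
          service: private-api
      services:
        private-api:
          loadBalancer:
            servers:
              # placeholder private-subnet instance IP:port
              - url: "http://10.0.2.15:8080"

Reachability then depends only on the public-subnet instance being able to route to the private subnet, which the VPC already provides.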

Related

Using an AWS S3 File Gateway in EC2, why does it only work in a public subnet and not a private subnet?

Whether I try to create an AWS S3 File Gateway (EC2) in the management console or with Terraform, I get the same problem below...
If I launch the EC2 instance in a public subnet, the gateway is created. If I try to launch the gateway in a private subnet (with NAT, and all ports open in and out), it won't work. I get...
HTTP ERROR 500
I am running a VPN and am able to ping the instance's private IP when I use the management console. I get the same error code from Terraform on a Cloud9 instance, which is also able to ping the instance.
Since I intend to share the S3 bucket over NFS, it's important that the instance reside in a private subnet. I'm new to the AWS S3 File Gateway; I have read over the documentation, but nothing clearly states how to do this or why a private subnet would behave differently, so if you have any pointers I could look into, I'd love to know!
For further reference (not really needed), my Terraform testing is mostly based on this GitHub repository:
https://github.com/davebuildscloud/terraform_file_gateway/tree/master/terraform
I was unable to get the AWS console test to work, but I realised my Terraform test was poorly done: I was mistakenly skipping over a dependency that established the VPC peering connection to the Cloud9 instance. Once I fixed that, it worked. Still, I would love to know what would be required to get this to work through the management console too...
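For what it's worth, that kind of missing dependency can be made explicit in Terraform. A hedged sketch with placeholder resource names (the actual repository's layout may differ):

    # Peering that gives the Cloud9 VPC a route to the gateway instance.
    resource "aws_vpc_peering_connection" "cloud9_to_gateway" {
      vpc_id      = aws_vpc.gateway_vpc.id
      peer_vpc_id = aws_vpc.cloud9_vpc.id
      auto_accept = true
    }

    resource "aws_storagegateway_gateway" "file_gateway" {
      gateway_ip_address = aws_instance.gateway.private_ip
      gateway_name       = "s3-file-gateway"
      gateway_timezone   = "GMT"
      gateway_type       = "FILE_S3"

      # Activation makes an HTTP call to the instance; without this, the call
      # can run before the peering route exists and fail with HTTP 500.
      depends_on = [aws_vpc_peering_connection.cloud9_to_gateway]
    }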

Create GCloud VM instance with no VPC

I need to create a Google Compute Engine Virtual Machine instance with no VPC.
For the app environment I am using, I need the public IP address directly on the VM, as on a DigitalOcean Droplet, so that running ifconfig shows the interface holding the public IP address.
Each Compute Engine instance belongs to at least one VPC network. The use case you are describing is likely impossible given GCP's software-defined network architecture.
You can't create a VM in GCP without it belonging to some VPC; the console GUI won't allow it, since every VM has to have at least one interface.
But there's a workaround (sketched in commands after the list):
ssh to your VM and create an additional user with a password; add this user to the sudo group: (adduser sudouser; echo 'sudouser:userspass' | chpasswd; usermod -aG google-sudoers sudouser)
logout
enable interactive serial-console access
login using serial console
disable all network interfaces
This way you will have a VM with only serial console access; however, I haven't tried this myself.
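Roughly, in commands (untested, per the above; names are placeholders and the interface name varies by image):

    # 1. On the VM: create a user that can log in on the serial console.
    sudo adduser sudouser
    echo 'sudouser:userspass' | sudo chpasswd
    sudo usermod -aG google-sudoers sudouser

    # 2. From your workstation: enable interactive serial console access.
    gcloud compute instances add-metadata INSTANCE_NAME \
        --metadata serial-port-enable=TRUE

    # 3. Log in over the serial console, then disable the network interface.
    gcloud compute connect-to-serial-port INSTANCE_NAME
    sudo ip link set dev ens4 down   # interface name varies by image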
There is a way to do it (not the no-VPC part, because that's not possible, but seeing the external IP directly on the VM). The steps are below, with a command-level sketch after the list:
Launch a VM in a VPC and, while creating it, turn on IP forwarding in the networking section. This must be done at creation time; once the VM is created, you can't change it.
Reserve an external IP in your project and VPC.
In the VPC routing, create a route with destination network x.x.x.x/32 (the reserved public IP) and the VM as the next hop.
On the VM, create a sub-interface and assign the public IP directly using ip addr.
Note: this works only if you're able to reach the VPC, for example over a VPN, so you can access the VM via the public IP.
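A gcloud-level sketch of those steps, with placeholder names, addresses, and regions:

    # 1. Create the VM with IP forwarding on (only possible at creation time).
    gcloud compute instances create my-vm --zone us-central1-a \
        --network my-vpc --subnet my-subnet --can-ip-forward

    # 2. Reserve a static external IP.
    gcloud compute addresses create my-public-ip --region us-central1

    # 3. Route the reserved /32 to the VM inside the VPC.
    gcloud compute routes create to-public-ip \
        --network my-vpc \
        --destination-range 203.0.113.10/32 \
        --next-hop-instance my-vm \
        --next-hop-instance-zone us-central1-a

    # 4. On the VM: put the public IP on a sub-interface.
    sudo ip addr add 203.0.113.10/32 dev ens4 label ens4:0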

AKS in a private VNET behind a corporate proxy

We have our AKS cluster running in a private VNET, behind a corporate proxy. The proxy is not a "transparent" proxy and needs to be configured manually on all nodes. Is this a supported setup? Is it possible to configure the worker nodes and all system containers to work via the proxy?
Actually, AKS is managed by Azure and runs in a VNET that either you create yourself or Azure creates for you. You can use the Load Balancer to carry the traffic, or use an ingress. But you can only select one size and type for the nodes when you create the cluster; multiple node sizes do not appear to be supported currently. Maybe it will be supported on Azure in the future.
For more details about AKS, see Azure Kubernetes Service.
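As an aside, newer AKS versions added cluster-wide HTTP proxy support; assuming that feature is available in your CLI version and region, proxy settings can be passed at cluster creation so nodes and pods pick them up. A sketch with placeholder values:

    # proxy-config.json (placeholder values; trustedCa is a base64-encoded CA cert)
    {
      "httpProxy": "http://proxy.corp.example:3128/",
      "httpsProxy": "http://proxy.corp.example:3128/",
      "noProxy": ["localhost", "127.0.0.1", "10.0.0.0/8"],
      "trustedCa": "BASE64_ENCODED_CA_CERT"
    }

    az aks create -g my-rg -n my-aks --http-proxy-config proxy-config.json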

Accessing memorystore in Shared VPC

I have created a Memorystore instance with IP 10.190.50.3 (and this is in us-east1).
I have a shared VPC named my-gcp, which I also authorised when creating the Memorystore instance.
In the shared VPC I have a service project, dev, containing a Windows machine (10.190.5.7). When I try to connect to the Memorystore instance from that Windows machine, I cannot connect.
I have also enabled egress traffic to 10.190.50.3 from all instances of the my-gcp VPC. This VPC is set up in us-east4.
tracert and ping to 10.190.50.3 also do not work from the Windows machine.
The Memorystore instance is created in the host project of the VPC.
I found the public documentation updated recently:
1. The connecting client must be on the same network and in the same region (a different zone within the same region is also fine) as your Cloud Memorystore for Redis instance.
2. If you are using a Shared VPC network across multiple projects, you can connect to a Redis instance that is deployed on the shared VPC network in the host project. Connecting to a Redis instance that is deployed on a shared VPC network in a service project is not supported.
Also here is the link on how to Connect to the Redis instance from a Compute Engine VM.
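As a quick check from a Compute Engine VM in the same region and network: Memorystore does not answer ICMP, so test the Redis port rather than ping. A sketch using the instance IP from the question (Debian/Ubuntu assumed):

    sudo apt-get install -y redis-tools
    redis-cli -h 10.190.50.3 -p 6379 ping   # expect: PONG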
Unfortunately, accessing Memorystore from a service project is currently not supported in a Shared VPC.
Now we can deploy GCP's Memorystore in the shared network using a private services access connection. Refer to this link for more details.
Steps below (a gcloud sketch follows the list):
Verify or establish a private services access connection for the network in the host project that you use to create your Redis instance.
Make sure the Service Networking API is enabled for both the host project and the service project.
Follow the steps from Creating a Redis instance on a VPC network, but make the following modifications:
a. Complete the optional step for setting up a private services access connection.
b. Use the Authorized VPC Network dropdown to select the Shared VPC network from the host project. It is listed under Shared VPC Networks.
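A gcloud sketch of those steps, using the network name from the question; the project IDs, range name, and prefix length are placeholders:

    # 1. In the host project: allocate a range and create the private
    #    services access connection for the shared VPC network.
    gcloud compute addresses create redis-range --project=host-project \
        --global --purpose=VPC_PEERING --prefix-length=24 --network=my-gcp
    gcloud services vpc-peerings connect --project=host-project \
        --service=servicenetworking.googleapis.com \
        --ranges=redis-range --network=my-gcp

    # 2. Enable the Service Networking API in both projects.
    gcloud services enable servicenetworking.googleapis.com --project=host-project
    gcloud services enable servicenetworking.googleapis.com --project=dev

    # 3. Create the Redis instance against the shared VPC network.
    gcloud redis instances create my-redis --region=us-east1 \
        --network=projects/host-project/global/networks/my-gcp \
        --connect-mode=private-service-access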

Remote connections to Infinispan server - and work with JGroups

My setup is an Infinispan 8.1.2 server running on AWS using a distributed cache. For local development, I would like to be able to connect to the instance on AWS, but the server will only start using either 0.0.0.0 or the AWS private IP address. Since JGroups does not work with the 0.0.0.0 address, it seems my only option is to use the AWS private IP. But this address is not accessible remotely!
Has anyone else run infinispan server and tried to connect from a different subnet?
Not sure if this helps but anyway...
You do have a public IP address on AWS, which you can query over HTTP from the instance metadata service (check the docs).
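For example (IMDSv1 style; newer instances that enforce IMDSv2 need a session token first):

    curl http://169.254.169.254/latest/meta-data/public-ipv4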
Now, if you can add a NAT rule which forwards traffic between the private and public addresses, you can use external_addr and external_port in TCP to bind to the private address but advertise the public address.
This would allow you to access a JGroups node from another subnet, or even the internet. You probably have to modify your security policy and expose the externally accessible ports. YMMV
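Concretely, the TCP stanza in the JGroups stack would look something like this sketch (addresses and ports are placeholders):

    <!-- Bind to the AWS private address, advertise the public one. -->
    <TCP bind_addr="10.0.1.23"
         bind_port="7800"
         external_addr="54.12.34.56"
         external_port="7800"/>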