How to restrict direct access from the internet to an Azure public Load Balancer backend pool VM with an NSG

As the title says, I'm setting up the following architecture on Azure and am having trouble restricting direct access from the internet to the VMs.
Here are the architecture requirements:
1. Both VMs must have public IPs (so the SysAdmin can access them via SSH)
2. Direct traffic from the internet to the web service on the VMs (via port 80) must be denied
3. Web traffic from the internet must go through the public LB to the VMs
Both VMs are in WebASG (an Application Security Group). In the NSG applied to the VMs' subnet, I've added some rules (with higher priority than the three default Azure NSG rules):
Scenario A (adding 1 custom rule):
Port: 80 - Protocol: Tcp - Source: Internet - Destination: WebASG - Action: Allow
With this NSG setting, I can access the web service via the load balancer IP (satisfying requirement #3), but the web service on port 80 of both VMs is also exposed to the internet (violating requirement #2).
Scenario B (adding 2 custom rules):
Port: 80 - Protocol: Tcp - Source: AzureLoadBalancer - Destination: WebASG - Action: Allow
Port: 80 - Protocol: Tcp - Source: Internet - Destination: WebASG - Action: Deny
With this NSG setting, requirement #2 is satisfied, but I cannot access the web service via the load balancer IP (violating requirement #3).
Please note: using AGW (Azure Application Gateway), I could satisfy all the requirements with this NSG configuration:
RuleName: AllowSSH - Port: 22 - Protocol: Tcp - Source: sys-admin-ip-address - Destination: WebASG - Action: Allow
RuleName: DenyInternet2Web - Port: Any - Protocol: Any - Source: Internet - Destination: WebASG - Action: Deny
RuleName: AllowProbe2Web - Port: 80 - Protocol: Tcp - Source: VirtualNetwork - Destination: WebASG - Action: Allow
I don't want to use AGW because it costs more than Azure Load Balancer (the Basic Load Balancer is actually free). So how can I change the NSG to satisfy all the requirements when using the Load Balancer?
Thanks in advance for any help!

I don't think there is a set of NSG rules that will satisfy all the requirements, because requirements #1 and #2 are contradictory.
If the VMs have public IP addresses, they are inherently exposed to the internet: any client can reach them directly via those public IPs, and it works the same way when traffic comes through the load balancer frontend IP. (The AzureLoadBalancer service tag in Scenario B matches only Azure's health-probe traffic; since the load balancer preserves the original client source IP, forwarded web traffic still counts as Internet traffic and is blocked by the Deny rule.) See the Load Balancer concepts documentation: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-overview#load-balancer-concepts
Load Balancer doesn't terminate or originate flows, interact with the payload of the flow, or provide any application layer gateway function. Protocol handshakes always occur directly between the client and the back-end pool instance. A response to an inbound flow is always a response from a virtual machine. When the flow arrives on the virtual machine, the original source IP address is also preserved.
In this case, you could remove the instance-level public IP addresses and use the load balancer frontend for both the web traffic and the SSH connections. You can configure port forwarding (inbound NAT rules) in Azure Load Balancer for the SSH connections to individual instances, and a load-balancing rule for the web traffic, following this quickstart, which works with a Standard LB. Then you only need to allow ports 80 and 22 from your clients' IP addresses. The NSG will look like this:
Port: 80,22 - Protocol: Tcp - Source: client's IP list - Destination: WebASG - Action: Allow
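For illustration, a minimal Azure CLI sketch of that setup might look like the following; the resource names (myRG, myLB, myFrontend, myBackendPool, myNSG) and the client IP 203.0.113.10 are placeholders, not values from the question:

# Inbound NAT rule: SSH to one instance via frontend port 50001 -> backend port 22
az network lb inbound-nat-rule create --resource-group myRG --lb-name myLB \
    --name sshToVM1 --protocol Tcp --frontend-ip-name myFrontend \
    --frontend-port 50001 --backend-port 22

# Load-balancing rule: web traffic on port 80 to the backend pool
az network lb rule create --resource-group myRG --lb-name myLB \
    --name webRule --protocol Tcp --frontend-ip-name myFrontend \
    --frontend-port 80 --backend-port 80 --backend-pool-name myBackendPool

# NSG rule: allow 80 and 22 only from the known client IPs
az network nsg rule create --resource-group myRG --nsg-name myNSG \
    --name AllowClients --priority 100 --direction Inbound --access Allow \
    --protocol Tcp --source-address-prefixes 203.0.113.10/32 \
    --destination-asgs WebASG --destination-port-ranges 80 22

With this, each VM gets SSH through its own NAT-rule frontend port, web traffic is load-balanced on port 80, and nothing else from the internet reaches the backend instances.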

Related

Which is the correct IP to run API tests on kubernetes cluster

I have a Kubernetes cluster with pods behind a Service of type ClusterIP. Which is the correct IP to hit if I want to run integration tests: the IP 10.102.222.181 or the Endpoints 10.244.0.157:80,10.244.5.243:80?
for example:
Type: ClusterIP
IP Families: <none>
IP: 10.102.222.181
IPs: <none>
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.0.157:80,10.244.5.243:80
Session Affinity: None
Events: <none>
If your test runner is running inside the cluster, use the name: of the Service as a host name. Don't use any of these IP addresses directly. Kubernetes provides a DNS service that will translate the Service's name to its address (the IP: from the kubectl describe service output), and the Service itself just forwards network traffic to the Endpoints: (individual pod addresses).
If the test runner is outside the cluster, none of these DNS names or IP addresses are reachable at all. For basic integration tests, it should be enough to kubectl port-forward service/its-name 12345:80, and then you can use http://localhost:12345 to reach the service (actually a fixed single pod from it). This isn't a good match for performance or load tests, and you'll either need to launch these from inside the cluster, or to use a NodePort or LoadBalancer service to make the service accessible from outside.
The IPs in Endpoints are individual pod IPs, which are subject to change when new pods are created and replace the old ones. The ClusterIP is a stable IP that does not change unless you delete the Service and recreate it, so the recommendation is to use the ClusterIP.
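As a concrete sketch of the out-of-cluster approach described above (the Service name my-service, namespace default, and local port 12345 are assumed placeholders, not values from the question):

# Forward a local port to port 80 of the Service
kubectl port-forward service/my-service 12345:80

# In another terminal, point the integration tests at the forwarded port
curl http://localhost:12345/

# From inside the cluster, use the Service's DNS name instead of any IP
curl http://my-service.default.svc.cluster.local/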

How can I set firewall rule to allow ssh to a instance from Google Cloud console only

I could allow the IP of the Bastion host, but how do I allow the IP of the Google Cloud Console in a firewall rule?
1. If you use the Default network configuration, Compute Engine creates firewall rules that allow TCP connections through port 22 for you. You can see them in the GCP Console:
GCP Console => VPC network => Firewall rules
The Default network has preconfigured firewall rules that allow all instances in the network to talk with each other. In particular, these firewall rules allow ICMP, RDP, and SSH ingress traffic from anywhere (0.0.0.0/0). There should be an Ingress firewall rule for SSH: default-allow-ssh.
2. If you use a Custom network, the firewall rule for SSH must be created manually.
With Cloud Console
GCP Console => VPC network => Firewall rules => Create Firewall Rule
Name: mynet-allow-ssh
Network: mynet
Targets: All instances in the network
Source filter: IP Ranges
Source IP ranges: 0.0.0.0/0
Protocols and ports: Specified protocols and ports
tcp: ports 22
With command line
$ gcloud compute --project=myproject firewall-rules create mynet-allow-ssh --direction=INGRESS --priority=1000 --network=mynet --action=ALLOW --rules=tcp:22 --source-ranges=0.0.0.0/0
For more details see Compute Engine => Documentation => Connecting to instances
As for whitelisting the "IP of the Google Cloud Console" for the case when you press the "SSH" button in the Cloud Console: this is rather infeasible, because the SSH connection is established over HTTPS via a relay server whose address is unpredictable, drawn from Google's external pool of IPs. Using a Bastion host with a single static IP is more practical from this perspective.
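Following that suggestion, a tightened variant of the rule above that admits only the bastion host might look like this (198.51.100.7 is a placeholder for the bastion's static IP, not a real value):

$ gcloud compute --project=myproject firewall-rules create mynet-allow-ssh-bastion \
    --direction=INGRESS --priority=1000 --network=mynet --action=ALLOW \
    --rules=tcp:22 --source-ranges=198.51.100.7/32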
If you're using the SSH button, it's your external IP.
If you're using Cloud Shell, it's a random external IP (of Google Cloud) since it's technically a VM instance.
The answer to "GCP open firewall only to Cloud Shell" can be an option for you if you want access from the console.

asp.net core application docker swarm hosted client IP

I want to log my clients' request IP addresses, and I have a Docker service running ASP.NET Core on Linux.
Right now we always see the Docker network's IP address!
How can I get my clients' real IP addresses?
You can get the real IP address of the clients if you change the port publishing mode to host. Below is an example of how to do this:
traefikedge:
  image: traefik:1.4.3-alpine
  ports:
    - target: 80
      published: 80    # for redirect to HTTPS
      protocol: tcp
      mode: host       # bypass the ingress mesh to preserve the client IP
    - target: 443
      published: 443
      protocol: tcp
      mode: host       # bypass the ingress mesh to preserve the client IP
There is an open issue here about this.
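If you create the service from the CLI rather than a compose file, the equivalent host-mode publishing (using the same image and ports as the snippet above) would be roughly:

docker service create --name traefikedge \
    --publish mode=host,target=80,published=80 \
    --publish mode=host,target=443,published=443 \
    traefik:1.4.3-alpine

Note that host-mode publishing binds the ports directly on each node running a task, so the routing mesh no longer spreads traffic across nodes for those ports.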

IP/hostname whitelist way for a call to API from openshift

This is more of a how-to question, as I am still exploring OpenShift.
We have an orchestrator running on OpenShift that calls a REST API written in Flask, hosted on Apache/RHEL.
While our endpoint is token-authenticated, we wanted to add a second level of restriction by allowing access only from a whitelisted set of source hosts.
But given that OpenShift can span containers across any number of servers in its cluster, what is the best way to whitelist traffic coming from the cluster's machines?
I tried taking a look at an external LoadBalancer for the orchestrator service:
spec:
  clusterIP: 172.30.65.163
  externalIPs:
    - 10.198.40.123
    - 172.29.29.133
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.198.40.123
  ports:
    - nodePort: 30768
      port: 5023
      protocol: TCP
      targetPort: 5023
  selector:
    app: dbrun-x2
    deploymentconfig: dbrun-x2
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 172.29.29.133
What I am unsure about is: what IP should I expect to see on the other side [in my API's Apache access logs] with this setup?
Or does this LoadBalancer act as a gateway only for incoming calls to OpenShift?
Sorry about the long post; I'd appreciate some input.

Unable to set up ExternalIP port forwarding in OpenShift Origin pods

I have a use case where I am running services on my local machine, which is behind a router behind NAT, so I can't port-forward to my public IP. The only way to do it is via SSH tunneling or a VPN, and for both I would need an exposed port to connect to from my local machine.
So I tried setting up my YAML config with an external IP, and it looks like this:
spec:
  ports:
    - name: 8022-tcp
      protocol: TCP
      port: 8022
      targetPort: 8022
      nodePort: 31043
    - name: 8080-tcp
      protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30826
  selector:
    deploymentconfig: remote-forward
  clusterIP: 172.30.83.16
  type: LoadBalancer
  externalIPs:
    - 10.130.79.198
  deprecatedPublicIPs:
    - 10.130.79.198
  sessionAffinity: None
If I understand correctly, 10.130.79.198 is the IP I need to connect to from my local SSH client on port 31043, which then forwards to service port 8022, which in turn forwards to container port 8022 where the SSH server is running.
The problem is that I am not able to connect to this external IP.
SSH logs:
"debug1: connect to address 10.130.79.198 port 31043: Connection timed out"
I got this external IP from the pod dashboard page's "external IP" field. Does this external IP need to be configured anywhere, or is there an issue with my config above?
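For reference, under standard Kubernetes Service semantics this spec exposes two different address/port pairings, sketched below; user and <node-ip> are placeholders, and the external IP must actually be routed to a cluster node for the first form to work at all:

# Via the external IP, the Service port applies (not the nodePort)
ssh -p 8022 user@10.130.79.198

# Via any node's IP, the nodePort applies
ssh -p 31043 user@<node-ip>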