Azure Function Apps Hybrid Connections connect to Azure Load Balancer - azure-function-app

Regarding Hybrid Connections, I'm attaching the relevant documentation here.
What we currently have is:
(WebApp) <--> (Hybrid Connection Manager) <--> (EndPoint A)
In the configuration of Hybrid Connection we specify:
Hybrid Connection name
Endpoint hostname (Hostname of VM endpoint)
Endpoint port
What we need is, instead of connecting to a VM, to connect to an Azure Load Balancer which will redirect the traffic to other VMs (e.g. main traffic to EndPoint A and failover to EndPoint B). So:
(WebApp) <--> (Hybrid Connection Manager) <--> (Load Balancer) <--> (Endpoint A)
The question is:
"Is this something feasible if we take into consideration that Load Balancer is not a VM so the connection manager (.msi) cannot be installed on Load Balancer (our new endpoint). If not what are the alternatives?"
Thanks in advance.

After some research and testing, I found the solution!
(Hybrid Connection Manager) <--> (Load Balancer)
For this part, in the Hybrid Connection configuration we assign the IP of the Load Balancer as the endpoint, and we install the client (the Hybrid Connection Manager .msi) on every "sub-endpoint" (backend VM) behind the Load Balancer!

Related

Azure Regional VNet Integration and Service Endpoint

I am trying to integrate a web app with a VNet. This is a brand new subscription.
According to: Integrate your app with an Azure virtual network
In the first step I wanted to check whether the web app could reach the VNet. In the second step I want to connect the web app to a SQL database through a Service Endpoint.
I created a VNET with 2 subnets:
Jumpbox-subnet
integration-subnet
integration-subnet setting
There is a service endpoint pointing to the integration-subnet. I also integrated the app with the VNet; it is delegated to the integration-subnet.
I tried to connect to the VM from the app using tcpping 172.16.1.0 (the jumpbox VM's private address) from the app console, but it failed.
The app also cannot connect to the SQL database.
What are the missing pieces here? Is a DNS server required to make this work?
Update (Resolved):
The question above was the key: it needs a way to resolve the name with some sort of DNS server.
Also, tcpping's default port is 80, and nothing was listening on that port on that box.
You could use the tool tcpping to test for TCP connectivity to a host and port combination.
The syntax is: tcpping.exe hostname [optional: port]
For example, run tcpping 172.19.1.10:3389
See troubleshooting app service networking for more details.
For more references, here is an ARM example that deploys an App Service with VNet integration and enables the Microsoft.Storage service endpoint.

Are point-to-site VPN clients supposed to support peered VNets?

I have a P2S SSTP connection to VNet1. VNet1 is peered to VNet2. The VPN client does not add the address space of VNet2, and hence routing from the P2S client to VNet2 never works. Are SSTP VPN clients supposed to add routes for all peered VNets to the installation file?
Yes, the VPN clients will get routes to the directly peered VNets in the downloaded VPN client file. In this case, you need to configure a hub-spoke network topology in Azure.
To allow gateway traffic to flow from spoke to hub and connect to remote networks, you must:
Configure the peering connection in the hub (VNet1, which has the gateway subnet) to allow gateway transit.
Configure the peering connection in each spoke (VNet2, which has no gateway subnet) to use remote gateways (see the sketch below).
You can get more information from the step-by-step guide on working with P2S VPN in a similar scenario.
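As a rough sketch only (the resource group and VNet names below are placeholders, and the flags assume a reasonably current Azure CLI), the two peering links could be configured like this, with gateway transit on the hub side and remote gateways on the spoke side:

# Hub side (VNet1, contains the gateway subnet): allow gateway transit
az network vnet peering create \
    --name VNet1-to-VNet2 \
    --resource-group MyResourceGroup \
    --vnet-name VNet1 \
    --remote-vnet VNet2 \
    --allow-vnet-access \
    --allow-gateway-transit

# Spoke side (VNet2, no gateway subnet): use the hub's gateway
az network vnet peering create \
    --name VNet2-to-VNet1 \
    --resource-group MyResourceGroup \
    --vnet-name VNet2 \
    --remote-vnet VNet1 \
    --allow-vnet-access \
    --use-remote-gateways

After changing the peering settings, you may need to re-download the VPN client package so the new routes are included.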

Static IP address for IoT Hub

For the scenario where a firewall/proxy doesn't support IoT Hub's FQDN.
The recommended approach is to script updates to the firewall's whitelist, which is not going to happen in our case.
My plan B is to introduce a "gateway" on the IoT Hub side to provide a static IP address and forward traffic to IoT Hub. I can see a few Azure appliances which might serve here:
Azure Application Gateway
Azure Firewall
Azure Load Balancer
Proxy Server on VM
Has somebody been through this? What was your experience, and where did you land?
I have implemented something like this by building an HA proxy solution (based on Squid proxy) on a VM Scale Set with a Load Balancer in front. You can find the full solution here: https://github.com/sebader/azure-samples-collection/tree/master/VmssProxySolution
This one uses an internal LB (private IP) but you can also easily modify this to expose a static, public IP.

How to redirect a connection to implement a load balancer?

I have to write a load balancer for our custom server, which is not HTTP.
I have gone through a lot of articles on the internet. Everywhere it is mentioned that the load balancer redirects the connection to the actual server.
But nowhere is it mentioned how to redirect the connections.
Can somebody tell me how to implement the connection redirection in C?
Thanks
Redirecting a connection in this context means creating a proxy between two connections: an external one (client facing) and an internal one (server facing). On one end you listen for incoming connections; on the other you pick a backend server and redirect traffic from the client connection there. In essence you're creating a flow between two IP tuples:
((external ip, external port, external interface), (internal ip, internal port, internal interface))
The data flow is:
  client                 load balancer                 server
[c1 sock]<--->[external socket | internal socket]<--->[s1 sock]
The basic operation mode would be:
When a client connects, the load balancer picks a server from the server pool.
When data is transferred on either end, the load balancer copies the data between the two sockets.
When the connection state changes on either end (it is closed), the load balancer replicates the state to the other socket.
When a backend server is down, the load balancer excludes it from the pool (some kind of monitoring is required).
You can implement it without using sockets, at the network layer, but that requires a userspace TCP/IP stack implementation and the ability to read packets directly from the network adapter queue.
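To make the socket-based approach concrete, here is a minimal single-connection sketch in C. The backend addresses and ports are placeholders, error handling is minimal, and a real balancer would serve many clients concurrently (threads or an event loop) and add health checks; this only illustrates the accept / pick-backend / copy-bytes flow described above.

/*
 * Minimal TCP load balancer sketch: accept a client, pick a backend
 * round-robin, and copy bytes in both directions with select() until
 * either side closes. Handles one connection at a time, for clarity only.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static const char *backends[] = { "10.0.0.11", "10.0.0.12" };  /* placeholder pool */
static const int backend_port = 9000;                          /* placeholder port */
static const int listen_port  = 8000;

/* Connect to one backend server; returns a connected socket or -1. */
static int connect_backend(const char *ip, int port)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    if (s < 0) return -1;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(s);
        return -1;
    }
    return s;
}

/* Copy one chunk of data from one socket to the other; returns 0 on EOF or error. */
static int pump(int from, int to)
{
    char buf[4096];
    ssize_t n = recv(from, buf, sizeof(buf), 0);
    if (n <= 0) return 0;
    return send(to, buf, (size_t)n, 0) == n;
}

int main(void)
{
    struct sockaddr_in laddr;
    int one = 1;
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    size_t next = 0;                              /* round-robin index */

    setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    memset(&laddr, 0, sizeof(laddr));
    laddr.sin_family = AF_INET;
    laddr.sin_addr.s_addr = htonl(INADDR_ANY);
    laddr.sin_port = htons(listen_port);
    if (bind(lsock, (struct sockaddr *)&laddr, sizeof(laddr)) < 0 || listen(lsock, 16) < 0) {
        perror("bind/listen");
        return 1;
    }

    for (;;) {
        /* External side: accept an incoming client connection. */
        int client = accept(lsock, NULL, NULL);
        if (client < 0) continue;

        /* Pick the next backend from the pool (no health checks here). */
        const char *ip = backends[next++ % (sizeof(backends) / sizeof(backends[0]))];
        int server = connect_backend(ip, backend_port);
        if (server < 0) { close(client); continue; }

        /* Internal side: relay data in both directions until one side closes. */
        for (;;) {
            fd_set rfds;
            int maxfd = (client > server ? client : server) + 1;
            FD_ZERO(&rfds);
            FD_SET(client, &rfds);
            FD_SET(server, &rfds);
            if (select(maxfd, &rfds, NULL, NULL, NULL) < 0) break;
            if (FD_ISSET(client, &rfds) && !pump(client, server)) break;
            if (FD_ISSET(server, &rfds) && !pump(server, client)) break;
        }
        close(client);
        close(server);
    }
}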
nginx can load balance TCP and UDP connections. Why not use it instead of reinventing the wheel? It is probably way more tuned and battle-tested than your solution will be in a few years.
https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/
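For reference, a TCP pass-through setup in nginx is just a stream block, roughly like the following (the upstream name, addresses, and ports are placeholders; nginx must be built with the stream module):

# Requires nginx built with --with-stream (ngx_stream_core_module).
stream {
    upstream custom_backend {
        server 10.0.0.11:9000;
        server 10.0.0.12:9000;
    }
    server {
        listen 9000;
        proxy_pass custom_backend;
    }
}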

Hosting a server farm behind a VPN

I have a setup I would like to implement but am just not sure about the details. As you can see in the image below, I have a single VPS on the web which I would like to use as a gateway to a number of locally running web servers. I'm using the VPN to hide the IP/location of the server farm while maintaining the ability to host locally.
What I am not sure about is the implementation, as I have never used a VPN before. My understanding is that I can host the VPN server on the server farm and have the VPS connect to it, which will give me another 'local' network interface that I can then use Apache to proxy traffic through?
The server farm is basically a small Kubernetes cluster, give or take a little.
Is my understanding correct, and can you offer any advice on implementation?
Thanks in advance!
server farm example image
The VPN server should have two network interfaces. The first is the public interface that connects to the Internet and the second is the local interface that connects to the server farm. All the servers in the farm should connect only to the local interface and have the gateway set as the VPN server.
You can use the Reverse Proxy functionality in Apache to route incoming traffic to the appropriate server. See Reverse Proxy Guide
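If you go that route, the Apache configuration on the VPS could look roughly like this sketch. The backend address 10.8.0.2 and the server name are placeholders for whatever the cluster ingress is reachable at over the VPN, and mod_proxy plus mod_proxy_http must be enabled:

# Requires mod_proxy and mod_proxy_http to be enabled.
<VirtualHost *:80>
    ServerName example.com
    ProxyPreserveHost On
    # Forward incoming requests over the VPN interface to the farm's ingress
    ProxyPass        "/" "http://10.8.0.2/"
    ProxyPassReverse "/" "http://10.8.0.2/"
</VirtualHost>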