Provide access to Azure SQL from VNet only

I've got a VNet in Azure (10.1.0.0/24). A VM is connected to the VNet directly with the static IP 10.1.0.5 and has no public endpoint. I connect to the VNet using a VPN and am able to connect to the VM. That works fine.
I've created an Azure SQL database and want to limit connections to the VNet 10.1.0.0/24 only (no public endpoint). So my VM should be able to connect to Azure SQL, and I should be able to connect to it when connected through the VPN.
How can I configure this?

In this case, since you have set up a private connection to Azure (via P2S VPN, S2S VPN, or ExpressRoute), you can use a TCP proxy server to forward traffic to the public IP address of SQL Database, because virtual network rules are not supported in the VPN scenario; read the documented limitations.
Main Steps:
Add the Azure VM's VNet and subnet to a virtual network rule on the SQL Database server, as described here.
Download Nginx and change only the body of the nginx.conf file (leave the headers unmodified) so that any traffic received on port 1433 is forwarded to your SQL Database server, sqlserver.database.windows.net:1433; see the configuration sketch after these steps.
You could read this blog for more details.
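For reference, a minimal sketch of what that forwarding rule can look like in nginx.conf, using the stream module (it must be built in or loaded); the server name comes from the step above, everything else is illustrative:

    # nginx.conf (top level, outside the http block) -- minimal TCP proxy sketch
    stream {
        server {
            listen 1433;                                      # accept SQL traffic arriving from the VNet
            proxy_pass sqlserver.database.windows.net:1433;   # forward it to the Azure SQL public endpoint
        }
    }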

EF Core connect from Google Cloud Run to Google Cloud SQL

I have tried these:
Data Source=/cloudsql/*****:asia-southeast2:*****;Initial Catalog=*****;Integrated Security=False;User ID=sqlserver;Password=MyPassword0!;MultipleActiveResultSets=True
where /cloudsql/*****:asia-southeast2:***** is my instance connection name, as described here.
I tried the public IP too, like this:
Data Source=***.***.***.***;Initial Catalog=*****;Integrated Security=False;User ID=sqlserver;Password=MyPassword0!;MultipleActiveResultSets=True
with the IP address being my SQL instance's public IP, but it is not working.
I have enabled the SQL instance connection from Cloud Run.
How can I fix the connection string using EF Core?
This is the error I got:
Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related
or instance-specific error occurred while establishing a connection to
SQL Server. The server was not found or was not accessible. Verify
that the instance name is correct and that SQL Server is configured to
allow remote connections. (provider: SQL Network Interfaces, error: 25 - Connection string is not valid)
You are trying to use Cloud SQL for SQL Server with Cloud Run. But if you have a look at the documentation, this connection is not supported.
In reality, the connection is supported, but the Cloud Run service opens a Unix socket to connect to the SQL Server instance, and there is no SQL Server client that supports Unix sockets; therefore, you can't access it that way.
To solve your issue, I recommend you use the Private IP section of this page. You can also achieve the same configuration with the public IP (don't use a serverless VPC connector; go to your Cloud SQL instance and authorize the network 0.0.0.0/0 to access it), but because you need to open the authorized network so broadly, I don't recommend this option, for security reasons.
EDIT 1
Because of my bad English, let me explain more!
The best way is to follow the documentation page: connect Cloud SQL to your VPC with a private IP, use a serverless VPC connector with Cloud Run, and in your code use that private IP in your connection string to access the database.
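As a rough illustration of that last point, an EF Core setup could use the private IP directly in the connection string; the IP address, database name, and context class below are placeholders, not values from the question:

    // Hypothetical sketch: AppDbContext, 10.11.0.3 and MyDatabase are placeholders.
    using Microsoft.EntityFrameworkCore;

    public class AppDbContext : DbContext
    {
        protected override void OnConfiguring(DbContextOptionsBuilder options) =>
            // Point EF Core at the Cloud SQL instance's private IP, reached through the
            // serverless VPC connector; same SqlClient keywords as in the question.
            options.UseSqlServer(
                "Data Source=10.11.0.3;Initial Catalog=MyDatabase;" +
                "User ID=sqlserver;Password=<your-password>;MultipleActiveResultSets=True");
    }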
But you can also use the public IP, which I don't recommend (see below why), at least in its naive form. In fact, you can use the public IP instead of the private IP in your code. Because you use the public IP, you no longer need the serverless VPC connector on your Cloud Run service (you don't use the VPC but the internet to reach the database).
However, because you use the internet and Cloud Run is a multi-customer shared service, you don't know your source IP. On Cloud SQL, you need to allow any IP (0.0.0.0/0) in the authorized networks section to access your database, which is not a very secure configuration.
Alternatively, you can create a more complex configuration on Cloud Run to use the Cloud SQL public IP securely (but it becomes really complex). Let me dig into it.
I said previously that Cloud Run is a shared service and that you don't control the source IP when you initiate outgoing calls (like connections to the database). That's true, but you can change that!
First, you need (again) a serverless VPC connector on your Cloud Run service, and you need to set its egress to ALL (route both public and private traffic through the serverless VPC connector).
Then create a Cloud NAT gateway in your VPC and select, at least, your serverless VPC connector's subnet to be NATed when going to the internet.
Reserve a static public IP in your Cloud NAT configuration.
Now you have a static public IP for your Cloud Run service's outbound traffic. You can grant only that IP in your Cloud SQL authorized networks, which improves security and doesn't let just anybody reach your Cloud SQL instance. (A gcloud sketch of these steps follows.)
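If you prefer the CLI over the console, the steps above could look roughly like this; all resource names, the region, and the IP range are assumptions to adapt to your project:

    # Illustrative only -- names, region and range are placeholders.
    # 1. Serverless VPC connector, and route all Cloud Run egress through it
    gcloud compute networks vpc-access connectors create run-connector \
        --region=asia-southeast2 --network=default --range=10.8.0.0/28
    gcloud run services update my-service --region=asia-southeast2 \
        --vpc-connector=run-connector --vpc-egress=all-traffic

    # 2. Cloud Router + reserved static IP + Cloud NAT for the outbound traffic
    gcloud compute routers create nat-router --network=default --region=asia-southeast2
    gcloud compute addresses create run-egress-ip --region=asia-southeast2
    gcloud compute routers nats create run-nat --router=nat-router --region=asia-southeast2 \
        --nat-all-subnet-ip-ranges --nat-external-ip-pool=run-egress-ip

    # 3. Finally, authorize only that reserved IP in the Cloud SQL "Authorized networks" list.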

Connection to Azure SQL database on Azure Private Link/Endpoint using Azure VPN Client not working

I'm trying to set up an Azure SQL database using P2S VPN for users who are working remotely. They are using applications like SSMS and Visual Studio that require access to the database. We currently allow them to connect by whitelisting their IP addresses, but we would like to stop this and use the "Deny public network access" option on the SQL server in Azure.
Whenever I try to connect using SSMS, the connection fails with an error message.
I've followed the steps outlined in the documentation and tutorials on MS Docs but I have not been able to get the private endpoint to work with the database.
I have created the virtual network gateway and connected it to Azure Active Directory and I can see the sessions being created by the users as they log in.
I have created the virtual network using the address range = 10.1.0.0/16 and the subnet address range = 10.1.0.0/24. I have attached the private endpoint connection to the Azure SQL server and added the virtual network to the firewall.
Is there some setting required to allow the user to connect to the database from their PC without whitelisting IP addresses?
WAY 1:
You may use the domain name instead of the IP directly from your virtual network. So you need some service in Azure which can translate the domain name to an IP.
It is necessary to configure your DNS settings properly so that the fully qualified domain name (FQDN) in the connection string resolves to the private endpoint IP address.
For on-premises workloads, use a DNS forwarder deployed in Azure to resolve the FQDN of a private endpoint against the Azure service DNS zone.
A DNS forwarder is a virtual machine running on the virtual network linked to the private DNS zone that can proxy DNS queries coming from other virtual networks or from on-premises. This is required because the query must originate from the virtual network to Azure DNS.
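For example, if that forwarder VM runs BIND (an assumption; any DNS server with conditional forwarding works), the relevant forwarding zone could look roughly like this, pointing at Azure's recursive resolver 168.63.129.16:

    // named.conf on the forwarder VM -- illustrative sketch
    zone "database.windows.net" {
        type forward;
        forward only;
        forwarders { 168.63.129.16; };   // Azure-provided DNS, which can see the private DNS zone
    };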
Use the hosts file on a virtual machine to override the DNS: Azure creates a canonical name (CNAME) DNS record on the public DNS. The CNAME record redirects the resolution to the private domain name (privatelink.database.windows.net). You can override the resolution with the private IP address of your private endpoints. See azure-provided-name-resolution.
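As a quick illustration of the hosts-file approach (the server name and private endpoint IP below are made up), on a Windows client the entry would go in C:\Windows\System32\drivers\etc\hosts:

    # C:\Windows\System32\drivers\etc\hosts -- illustrative entry only
    # Map the logical server FQDN straight to the private endpoint's IP,
    # bypassing the public CNAME chain.
    10.1.0.4    myserver.database.windows.net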
References:
Azure services DNS zone configuration and
on-premises-workloads-using-a-dns-forwarder
Refer to this for connectivity troubleshooting using Private Link.
See how to resolve-azure-internal-dns-from-your-on-prem-network
WAY 2:
You may go for SQL Managed Instance, which is another Azure SQL PaaS offering. It is deployed within a VNet with no public service endpoints and uses root and client certificates to authenticate in Azure.
(Go for this when you prefer not to use a private endpoint.)
To configure P2S VPN using certificates, refer to: configure-p2s-vpn-using-certificates-and-connect-to-sql-managed-instance-from-on-premise-machine.
Other references:
DNS-Client-Configuration-Options
DNS-Integration-Scenarios
DNS-Scenario-Using-AD

How do I make the VPN connection work from an Azure VM?

I have created a virtual network connection.
I have created a Site-to-Site (IPsec) connection which connects to VMware NSX.
The connection status is "Connecting" (also on the VMware NSX side).
I have a VM in a subnet; the VNet is the same one that contains the gateway subnet.
I try to ping or RDP to a VM on the VMware side, but there is no connection.
Did I understand correctly that I should automatically have connectivity from all subnets in the VNet?
Is no routing needed between the gateway subnet and the others?
Is there any way to troubleshoot whether the ping passed through the Azure VPN?
https://vzerotohero.com/2017/03/step-by-step-deploy-vmware-nsx-with-microsoft-azure-ipsec-vpn-site-to-site/
If the VPN connection is set up correctly, the connection status should be "Connected". Please follow the step-by-step instructions in the article, especially these notes:
NSX VPN, as of now, only supports the policy-based VPN type.
PFS: disable Perfect Forward Secrecy, since it's not supported with Azure static (policy-based) VPN.

Access Azure SQL Database when connected to VNET via Client Gateway

I have an Azure virtual network and I connect to it using Point-to-Site with the VPN client downloaded from Azure. This works as expected, as I can now RDP to VMs in the VNet if required.
I also have an Azure SQL Server instance, and in the firewall section I have added the VNet above to the virtual network rule list.
With my work laptop, I was now hoping that I would be able to connect to the VNet using the VPN client and then access the SQL database using SSMS. However, when I try to connect I get a message telling me that I cannot access the server and instead need to add my client IP to the firewall rule list, which is what I was trying to avoid doing.
Is there something else I need to be doing here to get this working?
If you just use an Azure SQL Database, which is a PaaS offering in Azure, it is not itself located inside a VNet. You can directly add the client's public IP to the firewall of the Azure SQL server, but that is not what you want. You need the database to sit inside a VNet, and then you can do the following.
If you are using a SQL Managed Instance, which is located inside a VNet, and want to access the database instance from on-premises via a private address, you need to make a VPN or ExpressRoute connection between the on-premises network and the Managed Instance VNet.
Now that you have a P2S VPN connection, you still need to create VNet peering with gateway transit between the P2S VNet and the SQL instance VNet. Note: to use remote gateways or allow gateway transit, the peered virtual networks must be in the same region. To do so, make the following specific changes under the peering settings (an Azure CLI sketch follows the two steps below).
In the VNet that hosts the VPN gateway, go to Peerings, then to the Managed Instance peered VNet connection, and then click Allow gateway transit.
In the VNet that hosts the Managed Instance, go to Peerings, then to the VPN gateway peered VNet connection, and then click Use remote gateways.
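If you prefer scripting this, an equivalent Azure CLI sketch might look as follows; the resource group and VNet names are placeholders:

    # Illustrative only -- resource group and VNet names are placeholders.
    # Peering on the VNet that hosts the VPN gateway (allows gateway transit):
    az network vnet peering create --name GatewayVnet-to-SqlVnet \
        --resource-group my-rg --vnet-name GatewayVnet --remote-vnet SqlVnet \
        --allow-vnet-access --allow-gateway-transit

    # Peering on the VNet that hosts the Managed Instance (uses the remote gateway):
    az network vnet peering create --name SqlVnet-to-GatewayVnet \
        --resource-group my-rg --vnet-name SqlVnet --remote-vnet GatewayVnet \
        --allow-vnet-access --use-remote-gateways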
Once the peering is complete, you can check the status in the Azure portal. You then need to remove the VPN client, re-download it, and re-install it on your laptop; this will update the routes on the client side.
If you've established the on-premises-to-Azure connection successfully and still can't establish a connection to the Managed Instance, check whether your firewall allows outbound connections on SQL port 1433 as well as the 11000-12000 port range used for redirection.
For more reference, you can read Connect your application to Azure SQL Database Managed Instance.

Hosting a server farm behind a VPN

I have a setup I would like to implement but am just not sure of the details. As you can see in the image below, I have a single VPS on the web which I would like to use as a gateway to a number of locally running web servers. I'm using the VPN to hide the IP/location of the server farm while maintaining the ability to host locally.
What I am not sure about is the implementation, as I have never used a VPN before. My understanding is that I can host the VPN server on the server farm and have the VPS connect to it, which will give me another 'local' network interface that I can then use Apache to proxy traffic through?
The server farm is basically a small Kubernetes cluster, give or take a little.
Is my understanding correct, and can you offer any advice on implementation?
Thanks in advance!
server farm example image
The VPN server should have two network interfaces. The first is the public interface that connects to the internet, and the second is the local interface that connects to the server farm. All the servers in the farm should connect only to the local interface and have their gateway set to the VPN server.
You can use the reverse proxy functionality in Apache to route incoming traffic to the appropriate server. See the Reverse Proxy Guide.
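A minimal sketch of such a virtual host on the VPS, assuming mod_proxy and mod_proxy_http are enabled and that 10.8.0.10 is the VPN-side address of one backend (both assumptions):

    # /etc/apache2/sites-available/farm.conf -- illustrative sketch on the VPS
    <VirtualHost *:80>
        ServerName example.com
        ProxyPreserveHost On
        # Forward incoming requests over the VPN interface to a backend in the farm
        ProxyPass        / http://10.8.0.10/
        ProxyPassReverse / http://10.8.0.10/
    </VirtualHost>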