Azure route table not showing effective routes - azure-virtual-network

I have a VNet and a subnet, and I created a route table and associated it with the subnet. However, under Effective routes, it is empty... I also have a VNet gateway with a VPN to on-prem, and the gateway is learning BGP routes, but they are not listed here. I do have a NIC in the subnet as well, and the NIC's blade shows all the routes. Why is that? Here is a screenshot of it; the route table's effective routes are empty here.
The route table is associated with this subnet.
Here is the NIC in the subnet, and it shows all the routes.
Thanks!
Difan

I tried to reproduce the same in my environment and it works fine, so this issue may be caused by missing permissions or a failed connection.
Go to the Azure portal -> your VM -> Settings -> Networking -> select your network interface -> click Effective routes.
The effective routes are listed as shown below:
Otherwise, try to check it via PowerShell.
To get the effective route table on a network interface:
Get-AzEffectiveRouteTable -NetworkInterfaceName "MyNetworkInterface" -ResourceGroupName "MyResourceGroup"
To get the effective routes for a network interface and format the output as a table:
Get-AzEffectiveRouteTable `
-NetworkInterfaceName myVMNic1 `
-ResourceGroupName myResourceGroup `
| Format-Table
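If you prefer the Azure CLI, the equivalent command should be the following (using the same placeholder names as above). Note that effective routes are evaluated per network interface, and only while the VM the NIC is attached to is running:

az network nic show-effective-route-table --resource-group MyResourceGroup --name MyNetworkInterface --output table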
For your reference:
az network nic | Microsoft Docs

Related

Not able to get Azure SQL Server Extended Events to work when Blob Storage is set to Enabled from selected virtual networks and IP addresses

So I have an Azure Database and want to test extended events with the database.
I was able to set up my Blob Storage container and got Extended Events on the Azure database to work as long as the Blob Storage network setting Public network access is set to Enabled from all networks. If I set it to Enabled from selected virtual networks and IP addresses, with Microsoft network routing checked and a Resource type entry of Microsoft.Sql/servers with its value set to All in current subscription, it still doesn't work.
I'm not exactly sure what I'm doing wrong and I'm not able to find any documentation on how to make it work without opening up to all networks.
The error I'm getting is:
The target, "5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file", encountered a configuration error during initialization. Object cannot be added to the event session. (null) (Microsoft SQL Server, Error: 25602)
Edit - Steps to fix the issue
@Imran: Your answer led me to get everything working; the information you gave and the link provided were enough for me to figure it out.
However, for anyone in the future, I want to give better instructions.
The first step was:
Run Set-AzSqlServer -ResourceGroupName [ResourcegroupName] -ServerName [AzureSQLServerName] -AssignIdentity.
This assigns the SQL server an Azure Active Directory identity. After running the above command, you can see your new identity in Azure Active Directory under Enterprise applications: where you see the Application type == Enterprise Applications filter header, click it, change it to Managed Identities, and click Apply. You should see your new identity.
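To confirm the identity was assigned, a quick check from PowerShell is possible (the bracketed names are the same placeholders as above):

# Shows the PrincipalId/TenantId of the server's system-assigned identity
(Get-AzSqlServer -ResourceGroupName [ResourcegroupName] -ServerName [AzureSQLServerName]).Identity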
The next step is to give your new identity the Storage Blob Data Contributor role on your container in Blob Storage. Go to your new container and click Access Control (IAM) => Role assignments => Add => Add role assignment => Storage Blob Data Contributor => Managed identity => Select members => click your new identity, click Select, and then Review + assign. (A scripted alternative is sketched below.)
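If you prefer scripting that role assignment instead of clicking through the portal, a minimal PowerShell sketch might look like this (the principal ID, subscription, resource group, and storage account names are hypothetical placeholders):

# Grant the SQL server's managed identity "Storage Blob Data Contributor" on the storage account
New-AzRoleAssignment -ObjectId <identity-principal-id> `
-RoleDefinitionName "Storage Blob Data Contributor" `
-Scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<mystorageaccountname>"

Scoping at the storage-account level covers the container; you can also scope down to the individual container for tighter access.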
The last step is to get SQL Server to use the identity when connecting to Blob Storage.
You do that by running the command below on your Azure SQL Server database.
CREATE DATABASE SCOPED CREDENTIAL [https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>]
WITH IDENTITY = 'Managed Identity';
GO
You can see your new credentials when running
SELECT * FROM sys.database_scoped_credentials
The last thing I want to mention is that when creating Extended Events on an Azure SQL database using SSMS, it gives you this link. That only works if you want your Blob Storage wide open. I think this is a disservice, and I wish there were instructions for the case where you don't want your Blob Storage wide open, using RBAC instead of SAS.
I tried to reproduce the same in my environment and got a successful result, as below:
To resolve this issue, check whether your account type is StorageV2 (general-purpose v2). If you have a general-purpose v1 or Blob storage account, try upgrading it as below:
In the storage account -> under Settings, select Configuration -> Upgrade
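If you'd rather script the upgrade, the same thing should be doable from PowerShell (account and resource group names are placeholders):

# Upgrade a general-purpose v1 account to StorageV2
Set-AzStorageAccount -ResourceGroupName <MyResourceGroup> -Name <mystorageaccount> -UpgradeToStorageV2 -AccessTier Hot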
Check whether you have chosen Allow trusted Microsoft services to access this storage account under Exceptions. I also added the client IP address range and the VNet to the firewall, as below.
Make sure you have the Microsoft.Authorization/roleAssignments/write permission on your storage account.
After enabling the firewall, write access to the storage account and audit logs is lost; re-saving the audit settings from the portal is required for auditing to function, as below.
Note: Auditing to storage behind firewalls using the user-assigned managed identity authentication type is not presently supported.
When I tried to connect, I got a successful result, as below:
Reference:
Configure extended events in SQL Azure to the blob storage with Private Endpoint - Microsoft Community Hub by Sakshi Gupta

What is the correct Cloud SQL connection string syntax for dotnetcore app with Cloud Run?

I want to set up a .NET Core web application on Cloud Run with a Google Cloud SQL database. I easily deployed the database, which has a public IP, on Cloud SQL, and my web application as a Docker container on Cloud Run. I can access the database with SQL Server Management Studio without any difficulty, and the web app is up and running as expected. The only missing piece is the link between them that allows them to connect.
In my web app, I have a connection string in this format:
Data Source=***;Initial Catalog=***;User ID=***;Password=***;Pooling=true;Trusted_Connection=false;Connection Timeout=60;Integrated Security=false;Persist Security Info={0};Encrypt=true;TrustServerCertificate=true;MultipleActiveResultSets=true;
Once I have the public IP and the connection name from Cloud SQL, what exactly should the connection string be, and/or what are the next steps?
Furthermore, in the connections tab under Cloud Run Service, I added the Cloud SQL connection. This is supposed to configure a Cloud SQL Proxy for me.
In order to connect to Cloud SQL from Cloud Run, you must follow this guide
You have already made some configurations in the Connections tab as stated in the Configuring Cloud Run section. You can check the guide for the Public IP since you configured your instance that way, to be sure that all steps were followed.
Briefly, the steps are:
Configure the service account for your service. Make sure that the service account has the appropriate Cloud SQL roles and permissions to connect to Cloud SQL.
The service account for your service needs one of the following IAM roles:
Cloud SQL Client (preferred)
Cloud SQL Admin
If the authorizing service account belongs to a different project than the Cloud SQL instance, the Cloud SQL Admin API and IAM permissions will need to be added for both projects.
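For reference, granting the Cloud SQL Client role from the command line might look like the following (the project ID and service account email are placeholders):

gcloud projects add-iam-policy-binding MY_PROJECT --member="serviceAccount:MY_SA@MY_PROJECT.iam.gserviceaccount.com" --role="roles/cloudsql.client"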
Like any configuration change, setting a new configuration for the Cloud SQL connection leads to the creation of a new Cloud Run revision. Subsequent revisions will also automatically get this Cloud SQL connection, unless you make explicit updates to change it.
Go to Cloud Run
Configure the service:
If you are adding Cloud SQL connections to an existing service:
Click on the service name.
Click on the Connections tab.
Click Deploy.
Enable connecting to a Cloud SQL instance:
Click Advanced Settings.
Click on the Connections tab.
If you are adding a connection to a Cloud SQL instance in your project, select the desired Cloud SQL instance from the dropdown menu.
If you are deleting a connection, hover your cursor to the right of the connection to display the Trash icon, and click it.
Click Create or Deploy.
After you've double checked the steps above, you could continue with the section Connecting to Cloud SQL. You can follow the steps on the Public IP tab.
Connect with Unix sockets
Once correctly configured, you can connect your service to your Cloud SQL instance's Unix domain socket accessed on the environment's filesystem at the following path: /cloudsql/INSTANCE_CONNECTION_NAME.
The INSTANCE_CONNECTION_NAME can be found on the Overview page for your instance in the Google Cloud Console or by running the following command:
gcloud sql instances describe [INSTANCE_NAME].
These connections are automatically encrypted without any additional configuration.
The code samples shown below are extracts from more complete examples on the GitHub site. To see this snippet in the context of a web application, view the README on GitHub.
// Equivalent connection string:
// "Server=<dbSocketDir>/<INSTANCE_CONNECTION_NAME>;Uid=<DB_USER>;Pwd=<DB_PASS>;Database=<DB_NAME>;Protocol=unix"
// Requires the MySql.Data NuGet package: using System; using MySql.Data.MySqlClient;
string dbSocketDir = Environment.GetEnvironmentVariable("DB_SOCKET_PATH") ?? "/cloudsql";
string instanceConnectionName = Environment.GetEnvironmentVariable("INSTANCE_CONNECTION_NAME");
var connectionString = new MySqlConnectionStringBuilder()
{
    // The Cloud SQL proxy provides encryption between the proxy and instance.
    SslMode = MySqlSslMode.None,
    // Remember - storing secrets in plain text is potentially unsafe. Consider using
    // something like https://cloud.google.com/secret-manager/docs/overview to help keep
    // secrets secret.
    Server = String.Format("{0}/{1}", dbSocketDir, instanceConnectionName),
    UserID = Environment.GetEnvironmentVariable("DB_USER"),     // e.g. 'my-db-user'
    Password = Environment.GetEnvironmentVariable("DB_PASS"),   // e.g. 'my-db-password'
    Database = Environment.GetEnvironmentVariable("DB_NAME"),   // e.g. 'my-database'
    ConnectionProtocol = MySqlConnectionProtocol.UnixSocket
};
connectionString.Pooling = true;
// Specify additional properties here.
return connectionString;
Google recommends that you use Secret Manager to store sensitive information such as SQL credentials. You can pass secrets as environment variables or mount as a volume with Cloud Run.
After creating a secret in Secret Manager, update an existing service with the following command:
gcloud run services update SERVICE_NAME \
--add-cloudsql-instances=INSTANCE_CONNECTION_NAME \
--update-env-vars=INSTANCE_CONNECTION_NAME=INSTANCE_CONNECTION_NAME_SECRET \
--update-secrets=DB_USER=DB_USER_SECRET:latest \
--update-secrets=DB_PASS=DB_PASS_SECRET:latest \
--update-secrets=DB_NAME=DB_NAME_SECRET:latest
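Afterwards, you can verify that the Cloud SQL connection and the secret-backed environment variables were applied to the new revision (service name and region are placeholders):

gcloud run services describe SERVICE_NAME --region REGION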
See also:
GoogleCloudPlatform/dotnet-docs-samples on GitHub

(NetcfgInvalidSubnet) Subnet 'mySubnet' is not valid in virtual network 'myVnet'

I have a weird issue while trying to create an Azure container instance referencing an existing virtual network and subnet.
For that I am using the following command, described in the Microsoft documentation, from the Azure CLI:
az container create --name mycontainer --resource-group myresourcegroup --image crazlabjira01.azurecr.io/jira-servicemanagement:4.19.0 --vnet myVnet --vnet-address-prefix 172.27.0.0/16 --subnet mySubnet --subnet-address-prefix 172.27.14.0/24
My subnet is in the range of the VNet, so why does the command return the following error:
(NetcfgInvalidSubnet) Subnet 'mySubnet' is not valid in virtual network 'myVnet'
Please note that if I create the container using the UI and the network defined above, it works without any trouble.
Thanks for the help!
This is because you are adding a new subnet to an existing VNet that already has a subnet with the same address space as your new one.
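To confirm, you can list the subnets already defined in the VNet and compare their prefixes against the one you are passing in (using the names from the question):

az network vnet subnet list --resource-group myresourcegroup --vnet-name myVnet --query "[].{name:name, prefix:addressPrefix}" --output table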
I tested in my environment and it is working fine for me.
az container create --name myconatainer98044 -g v-ra****-***tr**e --image mycontainer7723.azurecr.io/atlassian/jira-servicemanagement:latest --vnet RG-VNET --vnet-address-prefix 10.0.0.0/16 --subnet TestSubnet --subnet-address-prefix 10.0.1.0/24
When you create a virtual network you can configure the address space. By default, I was given an address space of 10.0.0.0/16 which allows addresses from 10.0.0.0 to 10.0.255.255.
By changing the subnet to a valid value for a 10.0.0.0/16 address space, like 10.0.1.0/24, you will likely be successful:
In your case, make sure the subnet you are trying to use is not already in use by another resource provider. Here you might be trying to use a subnet (172.27.14.0/24) which is already in use.
Example: if you use a subnet to create a SQL instance and then use the same subnet to create the container instance, it will throw the error you are getting. Microsoft doesn't support using the same subnet to create a different resource provider.
In my case, my TestSubnet is delegated to Microsoft.ContainerInstance.
Please refer to this document to get a clearer understanding.
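As a quick check before reusing a subnet, you can also inspect its existing delegations from the CLI (again using the names from the question):

az network vnet subnet show --resource-group myresourcegroup --vnet-name myVnet --name mySubnet --query delegations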

authpuppy "The node you're trying to access does not exist on the server"

Can somebody help me with how to configure the WiFiDog gateway for AuthPuppy? When a user connects, they always get the error notification "The node you're trying to access does not exist on the server".
You have to create a node so that when a request goes to AuthPuppy it can refer you to a specific node, i.e. the configured router.
Steps to create a node:
Manage Nodes
Create Node
Name the node exactly what you named the Gateway ID when you configured the router in the wifidog.conf file (see the excerpt below)
Save it
Then connect to the Wi-Fi and your problem should be solved.
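For context, the Gateway ID that the node name has to match is set in wifidog.conf on the router. A minimal excerpt might look like this (the ID and hostname are hypothetical examples):

# /etc/wifidog.conf (excerpt)
GatewayID MyGateway01          # must match the node name created in AuthPuppy
AuthServer {
    Hostname auth.example.com  # the server running AuthPuppy
    Path /authpuppy/web/
}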

How to find the global catalog of my network in ADDC?

I'm learning IT right now, and I have this situation.
The employee who was the administrator left the company, but he didn't leave any documentation telling me which of my AD DCs (Active Directory Domain Controllers) is the PDC. I'm interested in finding the global catalog and the structure of my network.
Do you know a post on TechNet or another site about finding this PDC on Windows Server 2008 R2?
You can open Active Directory Sites and Services, expand Sites -> Servers, and look at the NTDS Settings of each server you have; there is a checkbox on the General tab that will be ticked if the server is a global catalog.
Alternatively, if you have quite a lot of servers and don't want to do this for each one, you can use nslookup:
Find a list of global catalogs using nslookup
As for the PDC, these haven't really existed since Windows NT; there is, however, a PDC Emulator FSMO role, held by one domain controller, which you can find using the following command:
dsquery server -hasfsmo pdc
You can see the other FSMO roles here:
Identify Operations Master Roles
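If the AD DS command-line tools are installed, you can also list all five FSMO role holders at once:

netdom query fsmo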
You can display the global catalog servers in the domain you are logged on to using Nslookup.exe:
Open a CMD.EXE window.
Type the following command and press Enter:
nslookup gc._msdcs.%USERDNSDOMAIN%
Run the following from a command prompt:
nslookup
set type=srv
_gc._tcp.<FQDN>
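If you have the Active Directory PowerShell module available (it ships with Windows Server 2008 R2), the global catalog question can also be answered directly; a small sketch:

Import-Module ActiveDirectory
# List every domain controller in the domain that is a global catalog
Get-ADDomainController -Filter { IsGlobalCatalog -eq $true } | Select-Object Name, Site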