ASR failover fails for Bitlocker encrypted VM - azure-backup-vault

I have an Azure VM in North Europe with its disk encrypted with BitLocker. Everything has replicated well to West Europe, but when doing a Test Failover, I get the error below.
Failover Error: ID28031
Error Message: Virtual machine XXX-AZ-WEB01-test' could not be created under the resource group 'XXXX-Destination-RG'. Azure error message: 'Key Vault https://XXX-keyvault-ne.vault.azure.net/keys/Bitlocker/XXXX either has not been enabled for Volume Encryption or the vault id provided does not match /subscriptions/XXXX-XX-XXXX-XXX-XXXX/resourceGroups/XXX-Destination-RG/providers/Microsoft.KeyVault/vaults/XXX-KEYVAULT-WE's true resource id. (Provisioning failed)'.
Everything the error mentions is already in place:
Volume encryption is enabled on both the source and destination Key Vaults.
The user has been assigned all the permissions as per this doc.
Thanks in advance.

Based on the error message, failover failed with Error ID 28031, which is raised when the VM can't be created (for example, due to quota issues). A few things to check:
Are you trying to fail over to a different resource group or key vault? When the VM is restored and re-encrypted with the existing keys, ASR tries to store the keys in the target key vault.
Cross-check that the user has the required Key Vault permissions as mentioned in https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms#required-user-permissions.
While enabling the mentioned Key Vault permissions (on both the primary and recovery vaults) under access policies, please also enable volume encryption under advanced access policies to make failover work.
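For instance, a minimal Az PowerShell sketch of enabling a vault for volume (disk) encryption; the vault name is a placeholder:

# Assumes the Az.KeyVault module; run against both the primary and recovery vaults.
Set-AzKeyVaultAccessPolicy -VaultName "XXX-KEYVAULT-WE" -EnabledForDiskEncryption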
Also try creating the resource group and storage account manually; in a similar case, Enable Replication succeeded after doing so.
There is a known Key Vault limitation that can make failover fail: https://github.com/Azure/azure-cli/issues/4318
Kindly let us know if the above helps or you need further assistance on this issue.

The mistake was that the destination Key Vault had been created, and the keys imported, manually. The destination Key Vault must be created by the script linked below:
https://learn.microsoft.com/en-us/azure/site-recovery/azure-to-azure-how-to-enable-replication-ade-vms#copy-disk-encryption-keys-to-the-dr-region-by-using-the-powershell-script
Once I created the destination Key Vault with the script, everything went smoothly.
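For anyone curious what that script does, here is a rough sketch of the equivalent Az PowerShell steps; the vault and key names are taken from the error above and may differ in your environment, and the official script additionally handles secrets and KEKs:

# Create the target vault in the DR region, enabled for disk encryption.
New-AzKeyVault -Name "XXX-KEYVAULT-WE" -ResourceGroupName "XXX-Destination-RG" -Location "westeurope" -EnabledForDiskEncryption
# Copy the BitLocker key by backing it up from the source vault and restoring it into the target.
Backup-AzKeyVaultKey -VaultName "XXX-keyvault-ne" -Name "Bitlocker" -OutputFile "bitlocker-key.blob"
Restore-AzKeyVaultKey -VaultName "XXX-KEYVAULT-WE" -InputFile "bitlocker-key.blob"

Note that Restore-AzKeyVaultKey only works into a vault in the same Azure geography and subscription, which North Europe and West Europe satisfy.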

Related

Not able to get Azure SQL Server Extended Events to work when Blob Storage is set to Enabled from selected virtual networks and IP addresses

So I have an Azure Database and want to test extended events with the database.
I was able to set up my Blob Storage container and got Extended Events from the Azure database to work as long as the Blob Storage network setting Public network access is set to Enabled from all networks. If I set it to Enabled from selected virtual networks and IP addresses, with Microsoft network routing checked and a Resource type entry of Microsoft.Sql/servers with the value All in current subscription, it still doesn't work.
I'm not exactly sure what I'm doing wrong and I'm not able to find any documentation on how to make it work without opening up to all networks.
The error I'm getting is:
The target, "5B2DA06D-898A-43C8-9309-39BBBE93EBBD.package0.event_file", encountered a configuration error during initialization. Object cannot be added to the event session. (null) (Microsoft SQL Server, Error: 25602)
Edit - Steps to fix the issue
@Imran: Your answer led me to get everything working. The information you gave and the link provided were enough for me to figure it out.
However, for anyone in the future I want to give better instructions.
The first step I had to do was run Set-AzSqlServer -ResourceGroupName [ResourcegroupName] -ServerName [AzureSQLServerName] -AssignIdentity.
This assigns the SQL Server an Azure Active Directory identity. After running the above command, you can see your new identity in Azure Active Directory under Enterprise applications: click the Application type == Enterprise Applications header, change it to Managed Identities, and click apply. You should see your new identity.
The next step is to give your new identity the Storage Blob Data Contributor role on your container in Blob Storage. Go to your container and click Access Control (IAM) => Role assignments => Add => Add role assignment => Storage Blob Data Contributor => Managed identity => Select member => select your new identity => Review + assign.
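If you prefer scripting the role assignment over the portal clicks, here is a sketch with Az PowerShell; the resource names and IDs are placeholders:

# Look up the SQL server's managed identity and grant it Storage Blob Data Contributor on the container.
$principalId = (Get-AzSqlServer -ResourceGroupName "<ResourceGroupName>" -ServerName "<AzureSQLServerName>").Identity.PrincipalId
New-AzRoleAssignment -ObjectId $principalId -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/<container>"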
The last step is to get SQL Server to use an identity when connecting to Blob Storage.
You do that by running the command below on your Azure SQL Server database.
CREATE DATABASE SCOPED CREDENTIAL [https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>]
WITH IDENTITY = 'Managed Identity';
GO
You can see your new credentials when running
SELECT * FROM sys.database_scoped_credentials
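To tie the steps together, here is a sketch (not from the original post) of creating and starting an event session that writes to the container via the credential above; the server, database, and session names are placeholders, and -AccessToken requires a recent SqlServer module:

# Assumes the SqlServer and Az.Accounts modules; authenticates with an AAD token (adjust as needed).
$token = (Get-AzAccessToken -ResourceUrl "https://database.windows.net/").Token
Invoke-Sqlcmd -ServerInstance "<AzureSQLServerName>.database.windows.net" -Database "<DatabaseName>" -AccessToken $token -Query @"
CREATE EVENT SESSION [MyXESession] ON DATABASE
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.event_file(SET filename = 'https://<mystorageaccountname>.blob.core.windows.net/<mystorageaccountcontainername>/MyXESession.xel');
ALTER EVENT SESSION [MyXESession] ON DATABASE STATE = START;
"@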
The last thing I want to mention: when creating Extended Events on an Azure SQL server using SSMS, it gives you this link. That only works if you want your Blob Storage wide open. I think this is a disservice, and I wish they had instructions for when you don't want your Blob Storage wide open, using RBAC instead of SAS.
I tried to reproduce the same in my environment and got it to work successfully.
To resolve this issue, check whether your account type is StorageV2 (general-purpose v2). If you have a general-purpose v1 or blob storage account, try to upgrade it:
In the storage account -> under Settings, Configuration -> Upgrade
Check whether you have chosen Allow trusted Microsoft services to access this storage account under Exceptions; I also added my firewall client IP address range and VNet.
Make sure you have the Microsoft.Authorization/roleAssignments/write permission on your storage account.
After enabling the firewall, we lose write access to the storage account, so re-saving the audit settings from the portal is required for auditing to function.
Note: Auditing to storage behind firewalls using the user-assigned managed identity authentication type is not presently supported.
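For reference, the trusted-services exception and firewall rules can also be set with Az PowerShell; a sketch, with account names as placeholders:

# Allow trusted Microsoft services to bypass the firewall while keeping the default deny.
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "<rg>" -Name "<storageaccount>" -Bypass AzureServices -DefaultAction Deny
# Optionally add your client IP range as well.
Add-AzStorageAccountNetworkRule -ResourceGroupName "<rg>" -Name "<storageaccount>" -IPAddressOrRange "<your-public-ip>"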
When I tried to connect, I got a successful result.
Reference:
Configure extended events in SQL Azure to the blob storage with Private Endpoint - Microsoft Community Hub by Sakshi Gupta

Can I restrict batch account linked auto storage with Firewall and azure virtual network setting?

I have a Batch account with auto-storage linked, where the application packages are stored. I want to restrict access to this Batch-linked auto-storage with virtual network settings.
I tried adding a VNet setting and allowed the subnet of my self-hosted virtual machine scale set agents. From a DevOps pipeline I am trying to execute a PowerShell script which uploads the application package to the Batch account using the command below:
New-AzBatchApplicationPackage -AccountName $BatchAccountName -ResourceGroupName $ResourceGroupName -ApplicationId $ApplicationName -ApplicationVersion $newVersionNumber -Format zip -FilePath $PackageFilePath
This command works when the storage network setting is Enabled from all networks, but when I choose selected networks, the command fails to upload the package with the error:
Failed to add application package DataExportProcessor version 89.0. The auto storage account keys are invalid, please sync auto storage keys.
In the storage account's selected networks I am allowing my DevOps scale set agent subnet, but I am not uploading the package directly to storage from the scale set machine; the New-AzBatchApplicationPackage command uploads the application package to storage. I am not sure which IP I should whitelist in my storage account so that the Batch account can update the application package.
Please note that when setting the firewall of the storage account, you need to select All networks.
If you want to choose selected networks instead, then you have to add your public IP address and the list of BatchNodeManagement IPs to your storage account firewall.
To get the list of those IPs, you can refer to this blog by Amine Charot.
Make sure to add those IPs, for example with the sketch below:
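A hedged sketch that pulls the BatchNodeManagement service-tag ranges and adds them to the storage firewall; it assumes Az.Network and Az.Storage, the region and account names are placeholders, and IPv6 prefixes are skipped because the storage firewall only accepts public IPv4 ranges:

$tags = Get-AzNetworkServiceTag -Location "westeurope"
$batch = $tags.Values | Where-Object { $_.Name -eq "BatchNodeManagement.WestEurope" }
# Add each IPv4 prefix as a storage firewall rule.
$batch.Properties.AddressPrefixes | Where-Object { $_ -notmatch ":" } | ForEach-Object {
    Add-AzStorageAccountNetworkRule -ResourceGroupName $ResourceGroupName -Name "<storageaccount>" -IPAddressOrRange $_
}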
To resolve the "Failed to add application package DataExportProcessor version 89.0. The auto storage account keys are invalid, please sync auto storage keys" please check whether the keys in storage account and batch account are same or not.
If not sync like below:
Go to Azure Portal -> Your Batch Account -> Storage Account -> SyncKeys
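There is also an ARM operation for this sync; a sketch using Invoke-AzRestMethod, where the IDs are placeholders and the api-version is an assumption, so check the current one:

# POST to the Batch account's syncAutoStorageKeys operation.
Invoke-AzRestMethod -Method POST -Path "/subscriptions/<subId>/resourceGroups/<rg>/providers/Microsoft.Batch/batchAccounts/<batchaccount>/syncAutoStorageKeys?api-version=2022-10-01"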
Reference:
Package deployment failures (microsoft.com)

My IBM Directory Server P2P replication blocks on add new entry and changes to operational attributes by the pwdpolicy mechanism. How do I avoid this?

I set up a peer-to-peer replication topology on two IBM LDAP servers (version 6.4). It works both ways with simple attribute modifications, like changing the description or displayName attributes. But it blocks when I add a new entry on either server. I checked the logs and see an error 50 (insufficient access) for the change. The audit logs show an "extra" operational attribute, ibm-entryuuid, being added on the other server, which may be causing the error.
It also blocks when I try to log in to an account with an invalid password: I get an error 65 (object class violation). This is probably because the password policy mechanism modifies/adds/deletes certain operational attributes (e.g. PWDFAILURETIME).
The schema files are the same for both servers. And both servers are cryptographically synched.
I use JXplorer to test. I use admin credentials.
What should I do to allow these operations to replicate? Thanks in advance for any help.
Update:
I have checked the supplier credentials, and when I tried to change ibm-slapdmasterdn and ibm-slapdmasterpw, I got an Already Exists error. What do I do?
I found the problem. I didn't quite understand what the credential attributes meant until I re-read the IBM tutorial. I was trying to set the replica DN to the admin DN; that's why I got the error.
It replicates smoothly now.

External tables not working when "Deny public network access" is set to Yes

I have enabled Private Link by setting the "Deny public network access" knob to Yes in the Firewall settings on my Azure SQL Database server. Everything is working as expected except external data sources (external tables). The external tables are simply links to tables in another Azure SQL database that belongs to the same server. Before I enabled Private Link, everything worked fine. If I try to query the external tables I get this error message:
"Error retrieving data from [mydbserver].database.windows.net.[mydbname]. The underlying error message received was: 'Reason: An instance-specific error occurred while establishing a connection to SQL Server. Connection was denied since Deny Public Network Access is set to Yes (https://learn.microsoft.com/azure/azure-sql/database/connectivity-settings#deny-public-network-access). To connect to this server, use the Private Endpoint from inside your virtual network (https://learn.microsoft.com/azure/sql-database/sql-database-private-endpoint-overview#how-to-set-up-private-link-for-azure-sql-database)."
I can't find anything in the docs about any limitation regarding external data sources and external tables in combination with Private Link setup.
The external tables were created the standard way: "CREATE EXTERNAL DATA SOURCE" and "CREATE EXTERNAL TABLE". I have also tried to recreate the data source and the tables after enabling Private Link, but the error remains...
I want to reiterate the answer to the same question posted on Microsoft Q&A: External tables not working when "Deny public network access" is set to Yes.
The limitation is with PolyBase, which does not support Private Link at this time. As per the PG:
Polybase does not support using private link at this time. Please direct the customer to use Managed Identity to secure the connection to Azure Storage.
This may not be ideal for you, but if the data you need to access is extracted to a storage account and then imported via the method the PG references, it could be a workable solution. The same process can be reversed between the two endpoints, all within the security of a VNet plus Managed Identity.
You need to use the name yourdbname.privatelink.database.windows.net.
Afterwards you may receive another error saying that this name is incorrect. In that case you're experiencing a DNS problem, and you need to add an entry to the hosts file of your VM with the IP of the endpoint. If your VM is outside of that VNet, it's another story.
Then you need to add the public IP of your endpoint to your hosts file. I'm still trying to solve this with a proper DNS setup; I haven't figured it out yet.
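As a stopgap, the hosts-file entry can be added like this (run PowerShell elevated; the IP shown is a placeholder for your endpoint's IP):

# Point the private-link FQDN at the endpoint IP until proper DNS is in place.
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "10.0.0.5`tyourdbname.privatelink.database.windows.net"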
For more information, see:
https://techcommunity.microsoft.com/t5/azure-database-support-blog/lesson-learned-126-deny-public-network-access-allow-azure/ba-p/1244037

SQL Server - Enable TDE Encryption trying to connect to Azure Key Vault

The goal here is to assist the client in configuring his Key Vault so that he can enable TDE encryption and access the vault over the government cloud URL.
Customer Verbatim:
"I am running into an issue when trying to enable TDE for SQL Server 2016. I have attached a few files with show the problem. Basically the problem is when SQL tries to connect to the Azure Key Vault it is using the public suffix (azure.net) instead of the the govcloud suffix (usgovcloudapi.net).
How do I force it to use the correct URL?"
https://vant4gekeyvault.vault.usgovcloudapi.net/
I think the issue is that this is a gov tenant and he's stuck using the commercial URL, but we were unable to force the correct one. I sent him instructions on using
Set-AzureRmEnvironment to set AzureKeyVaultServiceEndpointResourceId for *.vault.usgovcloudapi.net, which should be https://vault.usgovcloudapi.net,
but that didn't seem to work. I may be way off on this assumption too, as I'm not really that great in KV. Any ideas or a known fix?
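For what it's worth, the built-in AzureUSGovernment environment already carries the right Key Vault endpoint, so one sanity check is to sign in against it and inspect the value (AzureRM-era cmdlets, matching the ones mentioned above):

# Sign in to the US Gov cloud and confirm the Key Vault resource ID it advertises.
Connect-AzureRmAccount -Environment AzureUSGovernment
(Get-AzureRmEnvironment -Name AzureUSGovernment).AzureKeyVaultServiceEndpointResourceId
# Expected output: https://vault.usgovcloudapi.net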
Here is his error message:
---SQL
Msg 33049, Level 16, State 2, Line 17
Key with name 'SqlTDEKey' does not exist in the provider or access is denied. Provider error code: 2058. (Provider Error - No explanation is available, consult EKM Provider for details)
---EVENT LOG
The description for Event ID 2 from source SQL Server Connector for Microsoft Azure Key Vault cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
If the event originated on another computer, the display information had to be saved with the event.
The following information was included with the event:
Vault Name: EKM Operation
Operation: SqlCryptGetKeyInfoByName
Key Name: N/A
Message: Error when accessing registry:5
Read the message again: the account doesn't have permission to modify the registry. It's an issue introduced in the February release of the connector. I ran into a similar issue: the provider tries to create a registry key but doesn't have permission to do so, therefore it fails. Try the following steps, taken from this blog post [1]:
Open regedit
Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft
Create a new Key called “SQL Server Cryptographic Provider” (without quotes)
Right-click the key and select 'Permissions' from the context menu.
Give Full Control permissions on this key to the Windows service account that runs SQL Server.
[1] https://www.visualstudiogeeks.com/devops/SqlServerKeyVaultConnectorProviderError2058RegistryConsultEKMProvider
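Those steps could also be scripted; a sketch, assuming an elevated PowerShell session and with the service account as a placeholder:

# Create the key the connector expects and grant the SQL Server service account full control.
New-Item -Path "HKLM:\SOFTWARE\Microsoft\SQL Server Cryptographic Provider" -Force | Out-Null
$acl = Get-Acl "HKLM:\SOFTWARE\Microsoft\SQL Server Cryptographic Provider"
$rule = New-Object System.Security.AccessControl.RegistryAccessRule(
    "DOMAIN\sqlserviceaccount", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
$acl.AddAccessRule($rule)
Set-Acl -Path "HKLM:\SOFTWARE\Microsoft\SQL Server Cryptographic Provider" -AclObject $acl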