I'm trying to re-configure long-term backup retention for my Azure SQL Database from a previously deleted Recovery Services vault (deleted via PowerShell) to a new Recovery Services vault.
Now when I try to configure it, I get an error saying:
TemplateBladeVirtualPart
SQLAZUREEXTENSION
Here is the script I used to remove the old Recovery Services vault (if it matters):
$vault = Get-AzureRmRecoveryServicesVault -Name "is-vault-prod"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
$container = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureSQL -FriendlyName $vault.Name
$item = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureSQLDatabase
$availableBackups = Get-AzureRmRecoveryServicesBackupRecoveryPoint -Item $item
$containers = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureSQL -FriendlyName $vault.Name
ForEach ($container in $containers)
{
    $items = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType AzureSQLDatabase
    ForEach ($item in $items)
    {
        Disable-AzureRmRecoveryServicesBackupProtection -Item $item -RemoveRecoveryPoints -ea SilentlyContinue
    }
    Unregister-AzureRmRecoveryServicesBackupContainer -Container $container
}
Remove-AzureRmRecoveryServicesVault -Vault $vault
Unfortunately, you just cannot choose another Recovery Services vault once you have already used one.
I did a test in my lab and tried to disable long-term backup retention, but it still failed. What I found is that once you have configured a Recovery Services vault for a SQL server, it is locked; you cannot use another vault.
I also found this in a FAQ:
Can I register my server to store backups to more than one vault?
No, you can currently store backups to only one vault at a time.
I understand why you want to use another vault.
However, for now we can only use the original Recovery Services vault; if it has been deleted, we cannot use long-term backup retention at all. This seems like a design flaw. I will report this issue, and I believe this feature will improve in the future.
You can also post your idea in this Feedback Forum.
Hope this helps!
Can anyone suggest how to load pipeline details to Azure Blob Storage?
For example, I want to load the pipeline details into a master file (info like pipeline name, pipeline activities, data factory name, and so on) and a child file (info like pipeline ID, activity details at the transformation level, created date, and so on).
I've tried with Azure PowerShell
Get-AzDataFactoryV2Pipeline
but it only gives the master-level data. I also need to load this data into Azure Blob Storage, and I need to update the file whenever a pipeline is created or deleted.
I tried this approach with Azure PowerShell, but I am getting an error when I run it:
Get-AzDataFactoryV2Pipeline -ResourceGroupName "<ResourceGroupName>" -DataFactoryName "<DFName>" | Export-Csv "$Env:temp/AutomationFile.csv"
$Context = New-AzureStorageContext -StorageAccountName "<StorageAccountName>" -StorageAccountKey "<StorageAccountKey>"
Set-AzureStorageBlobContent -Context $Context -Container "111" -File "$Env:temp/AutomationFile.csv" -Blob "SavedFile.csv"
The error is:
Export-Csv: Access to the path '/AutomationFile.csv' is denied.
Please help me on this request.
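One likely cause of that error (an assumption, since the shell environment is not stated): $Env:temp resolves to nothing in the session, so the path collapses to /AutomationFile.csv at the filesystem root, which is not writable. Below is a minimal sketch that builds the temp path explicitly and uses the Az storage cmdlets (New-AzStorageContext / Set-AzStorageBlobContent instead of the older AzureStorage ones), keeping the placeholders from the question:
# Sketch: export pipeline metadata to a temp CSV and upload it to blob storage.
# Assumes the Az.DataFactory and Az.Storage modules are installed and you are signed in (Connect-AzAccount).
$tempFile = Join-Path ([System.IO.Path]::GetTempPath()) 'AutomationFile.csv'

Get-AzDataFactoryV2Pipeline -ResourceGroupName "<ResourceGroupName>" -DataFactoryName "<DFName>" |
    Select-Object Name, DataFactoryName, ResourceGroupName |
    Export-Csv -Path $tempFile -NoTypeInformation

$context = New-AzStorageContext -StorageAccountName "<StorageAccountName>" -StorageAccountKey "<StorageAccountKey>"

# -Force overwrites the existing blob, so re-running the script refreshes the file after pipelines are added or deleted.
Set-AzStorageBlobContent -Context $context -Container "111" -File $tempFile -Blob "SavedFile.csv" -Force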
Using Terraform, how do I set the Azure SQL Database (and Azure Elastic Pool) LicenseType property to enable Azure Hybrid Use Benefit (aka AHUB, aka AHB)?
Here's an example using Powershell:
# Azure SQL Database:
Set-AzSqlDatabase -DatabaseName $sqlDb.DatabaseName -ResourceGroupName $sqlDb.ResourceGroupName -ServerName $sqlDb.ServerName -LicenseType "BasePrice"
# Azure SQL Database Elastic Pool:
Set-AzSqlElasticPool -ElasticPoolName $elasticPool.elasticPoolName -ResourceGroupName $elasticPool.ResourceGroupName -ServerName $elasticPool.ServerName -LicenseType "BasePrice"
The property is easily set using Az CLI too.
This is a very important property (from a cost perspective) and I cannot find mention of it anywhere in the context of Terraform.
Thanks!
From the Terraform documentation:
license_type - (Optional) Specifies the license type applied to this database. Possible values are LicenseIncluded and BasePrice.
Here is the link
https://www.terraform.io/docs/providers/azurerm/r/mssql_elasticpool.html#license_type
Why does it seem that LicenseIncluded corresponds to the "Save Money" box being unchecked? I would have thought LicenseIncluded would mean the box is checked and BasePrice would mean it is unchecked, but in practice it is the opposite.
Hashicorp's site doesn't make this setting clear. The setting description is present, but the expanded description of the possible values is not. Combining Hashicorp's site with Microsoft's, we get:
license_type - (Optional) Specifies the license type applied to this database. Possible values are:
'LicenseIncluded' if you need a license
'BasePrice' if you have a license and are eligible for the Azure Hybrid Benefit
Sources:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/mssql_database#license_type
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.management.sql.models.database.licensetype?view=azure-dotnet
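If it helps to confirm which value a given database or elastic pool actually ended up with after an apply, here is a quick read-back with Az PowerShell (a small sketch; the resource names are placeholders):
# Sketch: read back the effective license type after Terraform applies.
# LicenseIncluded => you pay for the SQL license; BasePrice => Azure Hybrid Benefit is applied.
(Get-AzSqlDatabase -ResourceGroupName "<rg>" -ServerName "<server>" -DatabaseName "<db>").LicenseType
(Get-AzSqlElasticPool -ResourceGroupName "<rg>" -ServerName "<server>" -ElasticPoolName "<pool>").LicenseType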
TL;DR: How do I run Start-DscConfiguration over SSL with multiple nodes?
DSC supports providing multiple nodes in a single configuration. A common example:
$configData = @{
    AllNodes = @(
        @{
            NodeName   = "COMPUTER1";
            Parameter1 = "Foo";
        },
        @{
            NodeName   = "COMPUTER2";
            Parameter1 = "Bar";
        }
    )
}
configuration InstallIIS {
    Node $AllNodes.NodeName {
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }
    }
}
$mofs = InstallIIS -ConfigurationData $configData
Start-DscConfiguration -Path $mofs -Verbose -Wait
As you're probably aware, this configuration (that installs IIS) will be applied to both COMPUTER1 and COMPUTER2 (2 mof files will be generated).
By default this example will use WinRM over HTTP. As all good programmers know, you should really consider the HTTPS option; so I am.
Here is the Start-DscConfiguration example again, using HTTPS (WinRM) with the -UseSsl flag:
$creds = New-Object System.Management.Automation.PSCredential ("mydomain\mike", (ConvertTo-SecureString "ILikeCatsAlot" -AsPlainText -Force))
$sessionOptions = New-CimSessionOption -UseSsl #note the -UseSsl
$computerName = "COMPUTER1" #Oh dear i have to pass the computer name
$session = New-CimSession -Credential $creds -SessionOption $sessionOptions -ComputerName $computerName
Start-DscConfiguration -Path $mofs -CimSession $session
As you can see, I need to supply the -ComputerName parameter to the CIM session, which is no good because I want to apply this configuration to all the nodes. It seems like I've lost the ability to provide multiple nodes via $configData.
With DSC, how do I start a configuration over WinRM (transport: HTTPS) without needing to supply a specific computer for the session?
Based on the clarification in the comments, this is not currently a feature supported in Start-DscConfiguration. If you would like to suggest the feature to the product team, please file an issue on the PowerShell UserVoice.
Update: Thanks for filing this issue on UserVoice.
Did you try building a DSC pull server (HTTPS or SMB)? SMB is encrypted (SMBv3 uses AES-256) and very simple to set up (some syntax and renaming is involved). An HTTPS pull server allows a report server to manage your machines centrally and uses TLS, as you mentioned.
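One possible workaround (a sketch reusing the $configData and $mofs variables from above, not an official multi-node SSL feature): New-CimSession accepts an array of computer names and Start-DscConfiguration accepts an array of sessions, so the node names in the configuration data can drive the session creation.
# Sketch: build one HTTPS CIM session per node from the same node names used in $configData.
$creds = Get-Credential "mydomain\mike"
$sessionOptions = New-CimSessionOption -UseSsl

# New-CimSession accepts an array of computer names and returns one session per name.
$sessions = New-CimSession -ComputerName $configData.AllNodes.NodeName -Credential $creds -SessionOption $sessionOptions

# -CimSession accepts an array, so every node receives its MOF over WinRM/HTTPS.
Start-DscConfiguration -Path $mofs -CimSession $sessions -Verbose -Wait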
I would like to know if it is possible to tell whether an instance of SQL Server is the mirror or the principal by running a SQL query. Secondly, I want to run this on, say, 60-80 instances every day at 4 AM automatically; is that possible? I would like to use PowerShell; I have used it before and from experience it is quite easy to use. Thanks.
It is possible. You will need to play around with SMO objects.
$server = "dwhtest-new"
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server $server
$db = New-Object Microsoft.SqlServer.Management.Smo.Database
$dbs = $srv.Databases
foreach ($db1 in $dbs)
{
$db = New-Object Microsoft.SqlServer.Management.Smo.Database
$db = $db1
$DatabaseName = $db.Name
Write-Host $DatabaseName
Write-Host "MirroringStatus:" $db.MirroringStatus
Write-Host "DBState:" $db.Status
Write-Host
}
If your DB's mirroring is still intact you will receive 'Synchronized' for MirroringStatus; if it is the principal, the status will say 'Normal', and if it is the failover partner it will say 'Restoring'. Unfortunately there is no way, that I'm aware of, to just pull out a status of 'Mirror' or 'Principal'. You will just have to build logic to check both of those values.
It depends on how you are going to set up the job.
If you want to run it from one central server that collects all the information, then SMO would be the way to go with PowerShell. The answer provided by KickerCost can work but would need some more work to run against multiple servers. It would be best to take his example and turn it into a working function that allows the server names to be piped in, as sketched below.
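A minimal sketch of what that function could look like (the function name, parameters, and role logic are illustrative, not from the original answer):
# Sketch: report the mirroring role per database for one or more servers piped in.
function Get-MirroringRole {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory, ValueFromPipeline)]
        [string]$ServerName
    )
    begin {
        # Assumes the SMO assembly is available on the central server
        [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO")
    }
    process {
        $srv = New-Object Microsoft.SqlServer.Management.Smo.Server $ServerName
        foreach ($db in $srv.Databases) {
            if ($db.MirroringStatus -ne 'None') {
                [pscustomobject]@{
                    Server          = $ServerName
                    Database        = $db.Name
                    MirroringStatus = $db.MirroringStatus
                    Status          = $db.Status
                    # Per the answer above: 'Normal' => principal, 'Restoring' => mirror
                    Role            = if ("$($db.Status)" -match 'Restoring') { 'Mirror' } else { 'Principal' }
                }
            }
        }
    }
}
# Usage: feed the 60-80 instance names from a file and schedule the script for 4 AM.
# Get-Content .\servers.txt | Get-MirroringRole | Export-Csv .\mirroring.csv -NoTypeInformation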
If you are going to just run a job locally on each server (a scheduled task or SQL Agent job, possibly pointing to a script on a network share) and output that info to a file (like servername_instance.log), you can use a one-liner with SQLPS:
dir SQLSERVER:\SQL\KRINGER\Default\Databases | Select Name, MirroringStatus
KRINGER is my server name, with a default instance. If you have named instances then replace the "default" with the instance name.
Your output from this command would be similar to this:
Name MirroringStatus
---- ---------------
AdventureWorks None
AdventureWorksDW None
Obviously I don't have any databases involved in mirroring.
I used CreateImageRequest to take a snapshot of a running EC2 machine. When I log into the EC2 console I see the following:
AMI - An image that I can launch
Volume - I believe that this is the disk image?
Snapshot - Another entry related to the snapshot?
Can anyone explain the difference in usage of each of these? For example, is there any way to create a 'snapshot' without also having an associated 'AMI', and in that case how do I launch an EBS-backed copy of this snapshot?
Finally, is there a simple API to delete an AMI and all associated data (snapshot, volume and AMI). It turns out that our scripts only store the AMI identifier, and not the rest of the data, and so it seems that that's only enough information to just Deregister an image.
The AMI represents the launchable machine configuration - it does NOT actually contain any of the machine's data, just references to it. An AMI can get its disk image either from S3 or (in your case) an EBS snapshot.
The EBS Volume is associated with a running instance. It's basically a read-write disk image. When you terminate the instance, the volume will automatically be destroyed (this may take a few minutes, note).
The snapshot is a frozen image of the EBS volume at the point in time when you created the AMI. Snapshots can be associated with AMIs, but not all snapshots are part of an AMI - you can create them manually too.
More information on EBS-backed AMIs can be found in the user's guide. It is important to have a good grasp on these concepts, so I would recommend giving the entire users guide a good read-over before going any further.
If you want to delete all data associated with an AMI, you will have to use the DescribeImageAttribute API call on the AMI's blockDeviceMapping attribute to find the snapshot ID; then delete the AMI and snapshot, in that order.
This small PowerShell script takes the AMI ID (stored in a variable), grabs the snapshot IDs of the given AMI by storing them in an array, and finally performs the required clean-up (unregisters the AMI and removes the snapshots).
# Unregister and clean AMI snapshots
$amiName = 'ami-XXXX' # replace this with the AMI ID you need to clean up
$myImage = Get-EC2Image $amiName
$count = $myImage[0].BlockDeviceMapping.Count

# Loop and store snapshotID(s) to an array
$mySnaps = @()
for ($i = 0; $i -lt $count; $i++)
{
    $snapId = $myImage[0].BlockDeviceMapping[$i].Ebs | ForEach-Object { $_.SnapshotId }
    $mySnaps += $snapId
}

# Perform the clean up
Write-Host "Unregistering" $amiName
Unregister-EC2Image $amiName

foreach ($item in $mySnaps)
{
    Write-Host 'Removing' $item
    Remove-EC2Snapshot $item
}

Clear-Variable mySnaps