Adding all databases in an elastic pool to an availability group - azure-sql-database

Running get-help Add-AzureRmSqlDatabaseToFailoverGroup -examples in PowerShell gives three examples; example #3 is:
PS C:\> $failoverGroup = Get-AzureRmSqlDatabaseFailoverGroup -ResourceGroupName rg -ServerName primaryserver -FailoverGroupName fg
PS C:\> $databases = Get-AzureRmSqlElasticPoolDatabase -ResourceGroupName rg -ServerName primaryserver -ElasticPoolName pool1
PS C:\> $failoverGroup = $failoverGroup | Add-AzureRmSqlDatabaseToFailoverGroup -Database $databases
"This command adds all databases in an Elastic Pool to a Failover Group."
Has anyone gotten this to work as presented?
I can successfully execute the Get-AzureRmSqlDatabaseFailoverGroup and Get-AzureRmSqlElasticPoolDatabase steps, but the third step returns:
Add-AzureRmSqlDatabaseToFailoverGroup : FailoverGroupUnableToPerformGroupOperationOnDatabases: The operation cannot be
performed due to multiple errors.

"Now just have to figure out why it expects my secondary server to be associated with the primary elastic pool." Azure expects DR server elastic pool name would be the same as your Primary elastic pool name. If name is different you will not be able add db's to failover group.
I believe this is due to azure allow have multiple elastic pools on the same server and it have to know somehow in which one it should place database after you add it to failover group.
Hope it helps.
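In other words, make sure the secondary server has a pool with the same name before retrying step 3. A minimal sketch using the same AzureRm cmdlets and the example's names, where secondaryserver stands in for your DR server and the Edition/Dtu values are only illustrative:
# create a pool with the same name on the secondary server, then retry step 3
PS C:\> New-AzureRmSqlElasticPool -ResourceGroupName rg -ServerName secondaryserver -ElasticPoolName pool1 -Edition "Standard" -Dtu 100
PS C:\> $failoverGroup = $failoverGroup | Add-AzureRmSqlDatabaseToFailoverGroup -Database $databases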

Related

List / discover all Azure SQL Database backup retention policies

I have a large number of Azure SQL Databases and I would like to create a list or report of some kind that shows what backup retention policies are in place for each one.
All I can find is how to check on a per-database or per-server basis. That would take me a long time, is error-prone, and is not something I can check on a regular basis or easily provide to an auditor/manager who wants confirmation that everything is being backed up and retained properly.
Is there a way to obtain all this information in one place? A PowerShell solution would be acceptable.
You can use PowerShell commands to get the long-term retention (LTR) policies for your SQL server, or even for each database, using the commands below:
# get all LTR policies within a server
$ltrPolicies = Get-AzSqlDatabase -ResourceGroupName $resourceGroup -ServerName $serverName | `
    Get-AzSqlDatabaseBackupLongTermRetentionPolicy
# get the LTR policy of a specific database
$ltrPolicies = Get-AzSqlDatabaseBackupLongTermRetentionPolicy -ServerName $serverName -DatabaseName $dbName `
-ResourceGroupName $resourceGroup
You can also use an Azure CLI command to get the LTR policy for each database:
az sql db ltr-policy show \
--resource-group mygroup \
--server myserver \
--name mydb
Note that this CLI command works on a single database, so you would have to run it once per database to gather all the LTR policies; a scripted version is sketched below.
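To get the one-place report you asked for, you can loop over every server and database in the subscription. A minimal sketch, assuming the Az.Sql module is installed and you are signed in with Connect-AzAccount:
# enumerate every server and database, collecting each database's LTR policy
$report = foreach ($server in Get-AzSqlServer) {
    foreach ($db in Get-AzSqlDatabase -ResourceGroupName $server.ResourceGroupName -ServerName $server.ServerName) {
        if ($db.DatabaseName -eq 'master') { continue }   # master has no LTR policy
        $ltr = Get-AzSqlDatabaseBackupLongTermRetentionPolicy -ResourceGroupName $db.ResourceGroupName `
            -ServerName $db.ServerName -DatabaseName $db.DatabaseName
        [pscustomobject]@{
            Server   = $db.ServerName
            Database = $db.DatabaseName
            Weekly   = $ltr.WeeklyRetention
            Monthly  = $ltr.MonthlyRetention
            Yearly   = $ltr.YearlyRetention
        }
    }
}
# export the report for the auditor/manager
$report | Export-Csv -Path ltr-report.csv -NoTypeInformation
The same loop works for the short-term (PITR) policies if you swap in Get-AzSqlDatabaseBackupShortTermRetentionPolicy.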
Refer: Manage Azure SQL Database long-term backup retention

Azure SQL DB Error, This location is not available for subscription

I have a pay-as-you-go subscription and I am creating an Azure SQL server.
While adding the server, on selecting a location, I get this error:
This location is not available for subscriptions
Please help.
There's an actual issue with Microsoft's servers: they have too many Azure SQL database creation requests and are currently trying to handle the situation. This seems to affect all types of subscriptions, even paid ones. I have a Visual Studio Enterprise subscription and I get the same error (This location is not available for subscriptions) for all locations.
See following Microsoft forum thread for more information:
https://social.msdn.microsoft.com/Forums/en-US/ac0376cb-2a0e-4dc2-a52c-d986989e6801/ongoing-issue-unable-to-create-sql-database-server?forum=ssdsgetstarted
As the other answer states, this is a (poorly handled) restriction on Azure as of now, and there seems to be no ETA on when it will be lifted.
In the meantime, you can still get a SQL database up and running in Azure if you don't mind doing a bit of extra work and don't want to wait: just set up a Docker instance and put MSSQL on it!
In the Azure Portal, create a container instance using the following Docker image: https://hub.docker.com/r/microsoft/mssql-server-windows-express/
While creating it, you might have to set the ACCEPT_EULA environment variable to "Y".
After it boots up (10-20 minutes for me), connect to it in the portal with the "sqlcmd" command and set up your login. In my case, I just needed a quick demo db, so I took the "sa" login, ran "alter login SA with password ='{insert your password}'" and "alter login SA enable". See here for details: https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-login-transact-sql?view=sql-server-ver15#examples
And voila, you have a SQL instance on Azure. Although it's unmanaged and poorly monitored, it might be enough for a short-term solution. The IP address of the Docker instance can be found in the Properties section of the container instance blade.
Maybe you can reference this blog: Azure / SQL Server / This location is not available for subscription. It describes the same error.
Run this PowerShell command to check whether the location you chose is available:
Get-AzureRmLocation | select displayname
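To narrow that list down to locations where Azure SQL itself is offered to your subscription, a hedged variation using the same AzureRm module is to ask the Microsoft.Sql resource provider:
# list only the locations where the Microsoft.Sql "servers" resource type is available
(Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Sql).ResourceTypes |
    Where-Object { $_.ResourceTypeName -eq 'servers' } |
    Select-Object -ExpandProperty Locations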
If the location is available, the best way to resolve this issue is to contact Azure support to have it enabled for you. You can do this for free using the support page on your Azure Portal.
They will contact you and help you solve it.
Hope this helps.
This is how I solved it myself. Let me tell you the problem first, then the solution.
Problem: I created a brand-new free Azure account (which comes with $250 free credit) for a client, then upgraded it to a pay-as-you-go subscription. I was unable to create an Azure SQL db; the error was 'location is not available'.
How I solved it: I created another pay-as-you-go subscription in the same account. Guess what: I was able to create a SQL db in my new subscription right away. Then I deleted the first subscription from my account. And yes, I lost the free credit.
If your situation is similar to mine, you can try this.
PS: I have 3 clients with their own Azure accounts. I was able to create SQL DBs in all of their accounts. I think the problem arises only for free accounts and/or free accounts that were upgraded to pay-as-you-go.
EDIT - 2020/04/22
This is still an ongoing problem as of today, but I was told by Microsoft support that on April 24th a new Azure cluster will become available in Europe, so it might finally become possible to deploy SQL Server instances on free accounts there.
Deploy a Docker container running SQL Server
To complement Filip's answer, and given that the problem still remains with Azure SQL Server, a Docker container running SQL Server is a great alternative. You can set one up very easily by running the following command in the Cloud Shell:
az container create --image microsoft/mssql-server-windows-express --os-type Windows --name <ContainerName> --resource-group <ResourceGroupName> --cpu <NumberOfCPUs> --memory <Memory> --port 1433 --ip-address public --environment-variables ACCEPT_EULA=Y SA_PASSWORD=<Password> MSSQL_PID=Developer --location <SomeLocationNearYou>
<ContainerName> : A container name of your choice
<ResourceGroupName> : The name of a previously created Resource Group
<NumberOfCPUs> : Number of CPUs you want to use
<Memory> : Memory you want to use
<Password> : Your password
<SomeLocationNearYou> : A location near you. For example,
westeurope
Access SQL Server
Once the container instance is deployed, you will find an IP address in the Overview. Use that IP address and the password you chose in the az container command to connect to the SQL Server, either with Microsoft's SSMS or the sqlcmd utility.
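For a quick scripted smoke test of the new instance, something like this should work (a sketch; it assumes the SqlServer PowerShell module is installed, and <IP> and <Password> are the values from your deployment):
# connect to the container and print the engine version
Invoke-Sqlcmd -ServerInstance "<IP>,1433" -Username "sa" -Password "<Password>" -Query "SELECT @@VERSION"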
Some documentation regarding the image I have used can be found here.
More information on the command I have used here.

How to add AD Administrator to Azure SQL Managed Instance with Powershell

I need to add an AD administrator to Azure SQL Managed Instances through PowerShell in order to automate deployments.
But it seems there's no way to do it with Azure PowerShell or the REST APIs.
So far I've been trying to set it up like a normal SQL Server.
$sql = Get-AzureRmResource -ResourceGroupName "RSGName" -Name "InstanceName" `
    -ResourceType "Microsoft.Sql/managedInstances" -ExpandProperties
$dbaId = Get-AzureRmADGroup -DisplayName "ADGroupName" | Select-Object Id
Set-AzureRmSqlServerActiveDirectoryAdministrator -DisplayName "ADGroupName" `
    -ResourceGroupName "RSGName" -ServerName "InstanceName" -ObjectId $dbaId.Id
But it gives me errors saying the server cannot be found in the resource group.
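That error is consistent with Set-AzureRmSqlServerActiveDirectoryAdministrator only targeting Microsoft.Sql/servers resources, not managed instances. If I'm not mistaken, newer versions of the Az.Sql module added a managed-instance-specific cmdlet; a hedged sketch, with availability depending on your module version:
# sketch: set the AD admin on a managed instance (assumes a recent Az.Sql module)
$group = Get-AzADGroup -DisplayName "ADGroupName"
Set-AzSqlInstanceActiveDirectoryAdministrator -ResourceGroupName "RSGName" `
    -InstanceName "InstanceName" -DisplayName $group.DisplayName -ObjectId $group.Id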

Google cloud dataproc failing to create new cluster with initialization scripts

I am using the command below to create a Dataproc cluster:
gcloud dataproc clusters create informetis-dev \
    --initialization-actions "gs://dataproc-initialization-actions/jupyter/jupyter.sh,gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh,gs://dataproc-initialization-actions/hue/hue.sh,gs://dataproc-initialization-actions/ipython-notebook/ipython.sh,gs://dataproc-initialization-actions/tez/tez.sh,gs://dataproc-initialization-actions/oozie/oozie.sh,gs://dataproc-initialization-actions/zeppelin/zeppelin.sh,gs://dataproc-initialization-actions/user-environment/user-environment.sh,gs://dataproc-initialization-actions/list-consistency-cache/shared-list-consistency-cache.sh,gs://dataproc-initialization-actions/kafka/kafka.sh,gs://dataproc-initialization-actions/ganglia/ganglia.sh,gs://dataproc-initialization-actions/flink/flink.sh" \
    --image-version 1.1 --master-boot-disk-size 100GB --master-machine-type n1-standard-1 \
    --metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance" \
    --num-preemptible-workers 2 --num-workers 2 --preemptible-worker-boot-disk-size 1TB \
    --properties hive:hive.metastore.warehouse.dir=gs://informetis-dev/hive-warehouse \
    --worker-machine-type n1-standard-2 --zone asia-east1-b --bucket info-dev
But Dataproc failed to create the cluster, with the following errors in the failure file:
cat
+ mysql -u hive -phive-password -e '' ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
+ mysql -e 'CREATE USER '\''hive'\'' IDENTIFIED BY '\''hive-password'\'';' ERROR 2003 (HY000): Can't connect to MySQL
server on 'localhost' (111)
Does anyone have any idea what is behind this failure?
It looks like you're missing the --scopes sql-admin flag as described in the initialization action's documentation, which will prevent the CloudSQL proxy from being able to authorize its tunnel into your CloudSQL instance.
Additionally, aside from just the scopes, you need to make sure the default Compute Engine service account has the right project-level permissions in whichever project holds your CloudSQL instance. Normally the default service account is a project editor in the GCE project, so that should be sufficient when combined with the sql-admin scopes to access a CloudSQL instance in the same project, but if you're accessing a CloudSQL instance in a separate project, you'll also have to add that service account as a project editor in the project which owns the CloudSQL instance.
You can find the email address of your default compute service account on the IAM page of the project deploying Dataproc clusters, under the name "Compute Engine default service account"; it should look something like <number>@project.gserviceaccount.com.
I am assuming that you already created the Cloud SQL instance with something like this, correct?
gcloud sql instances create g-test-1022 \
--tier db-n1-standard-1 \
--activation-policy=ALWAYS
If so, then it looks like the error is in how the metadata argument is formatted. You have this:
--metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance"
Unfortunately, the zone looks to be incomplete (asia-east1 instead of asia-east1-b).
Additionally, when running that many initialization actions, you'll want to provide a pretty generous initialization action timeout so the cluster does not assume something has failed while your actions take a while to install. You can do that by specifying:
--initialization-action-timeout 30m
That will allow the cluster to give the initialization actions 30 minutes to bootstrap.
At the time you reported this, there was a known issue with the Cloud SQL proxy initialization action, which most probably is what affected you. Nowadays, it shouldn't be an issue.

Failed to connect to database server. How do I connect to a database that is not on my localhost using powershell and integrated security?

Background to my question
At any moment I am expecting the security people in black suits and black sun glasses to come and take me away because of all my sql server login attempts...
I used and adapted Iris Classon's example to connect to a database via PowerShell. The adapted code uses Integrated Security=True:
$dataSource = "my_enterprise_db_server"
$database = "my_db"
$connectionString = "Server=$dataSource;Database=$database;Integrated Security=True;"
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$table = New-Object "System.Data.DataTable"
$query = "..."
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText = $query
...
Hot diggity dog, that worked. Thanks, Iris.
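For completeness, here is one way the elided part can load results into the DataTable; this is my sketch, not Iris Classon's original code:
# fill the DataTable from the command and show the rows
$adapter = New-Object System.Data.SqlClient.SqlDataAdapter $command
[void]$adapter.Fill($table)
$connection.Close()
$table | Format-Table -AutoSize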
I read about the snap-in versus the Import-Module sqlps way of executing a SQL command. I also read all the links that Michael Sorens provided in his answer. I can mount a SQL Server connection with mount mydb SQLSERVER SQLSERVER:\SQL, use ls or dir, walk the path down the objects, etc. I also revised the main part of what Iris provided to
$table = Invoke-Sqlcmd -ServerInstance $dataSource -Database $database -Query $query
This version with Invoke-Sqlcmd allows me to connect to an "enterprise" database. The problem with all the references provided is that they expect you to work with a localhost sqlexpress database. The moment I try to use
Set-Location SQLSERVER:\SQL\my_enterprise_db_server\my_db
or similar constructs, I receive a message that ends with
...WARNING: Could not obtain SQL Server Service information. An attempt to connect to WMI on 'my_enterprise_db_server' failed with the following error: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))
I also saw mention of the SQLCMDSERVER and SQLCMDDBNAME environment variables. I set these to
$env:SQLCMDDBNAME = "my_db"
$env:SQLCMDSERVER = "my_enterprise_db_server"
set-location sqlserver:\sql
ls
produces
MachineName
-----------
localhost
Question
How do I correctly use Set-Location or New-PSDrive -Name for a database that does not reside on my local computer?
I found the answer by a serendipitous route. I right-clicked on a database object in SQL Server Management Studio, and there was an option to start PowerShell. Even though this looks like the older sqlps option, SSMS gave me the right way to set the location.
Option 1. If the server does not have instances, then add DEFAULT after the server_name in the slashy path.
Set-Location SQLSERVER:\SQL\server_name\DEFAULT\Databases\database_name\Tables\dbo.table_name
Option 2. If you have a server with an instance, then set the instance name after the server_name in the slashy path.
Set-Location SQLSERVER:\SQL\server_name\instance_name\Databases\database_name\Tables\dbo.table_name
I am a mere mortal as far as database security goes. Many of the features of SSMS are turned off to me because of my security settings versus how the DBA security settings are configured, and I receive errors in SSMS all the time. Well, that is no different in PowerShell when using Set-Location. I did not realize that the two error messages were related, because of the security policy configuration versus pilot error.
If I set a location to a table, then I only get two warnings of access denied. If I set the location to the database level, then PowerShell blows chunks for a bit, but I have my slashy path setting. I do not see the errors if I use Invoke-SqlCmd. I see now that the way the security errors were presented in PowerShell is why I thought there was a problem with how I was attempting to connect to the database. Now I can do this:
mount rb SQLSERVER SQLSERVER:\SQL\server_name\DEFAULT\Databases\database_name\Tables
# Look at a list of tables.
ls
# Go to a traditional file system
cd F:\
# Go to the Linux Style mounted file system
cd rb:\
# Go to a table like a directory
cd dbo.my_table_name
# Look at the column names
ls
# Use relative navigation
cd ..\dbo.my_other_table_name
ls
# Compare column names with another table using relative navigation after I have just
# listed the current directory/table that I am in.
ls ..\dbo.my_table_name
That just rocks! Now all I need to do is come up with an array of server names and databases to create mount points for all the databases I can connect to. An array like that is just begging for an iteration to create all the mount points.
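Something along these lines, as a sketch (the drive, server, and database names are hypothetical; it assumes the sqlps module is loaded so the SQLSERVER: provider exists):
# one entry per database to mount as a drive
$targets = @(
    @{ Drive = 'rb';  Server = 'server_name';  Database = 'database_name' },
    @{ Drive = 'rb2'; Server = 'server_name2'; Database = 'database_name2' }
)
foreach ($t in $targets) {
    New-PSDrive -Name $t.Drive -PSProvider SqlServer `
        -Root "SQLSERVER:\SQL\$($t.Server)\DEFAULT\Databases\$($t.Database)\Tables"
}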