How to determine SQL database replication roles using the Azure PowerShell command Get-AzureRmSqlDatabase

Using the Azure Resource Manager PowerShell commands, is there a simple way to tell whether a database is involved in a geo-replication relationship as either a Primary or a Secondary? I used to read the Status property returned by Get-AzureSqlDatabase, and a value of 0 meant that the database was Primary. However, there is no corresponding property returned by Get-AzureRmSqlDatabase; it still returns a Status column, but the value is "Online" for both primary and secondary databases.
The reason I need this is that I maintain dozens of databases across multiple subscriptions and servers, and I want to automate actions that should only be taken on the primary databases.

I found a reasonable solution to this problem, at the cost of one extra call per database. The cmdlet Get-AzureRmSqlDatabaseReplicationLink does exactly what I needed, with one caveat: I know that I'm not supposed to pass the same value as both ResourceGroupName and PartnerResourceGroupName, but it seems to work (at least for now), so I'm going with it to avoid having to make one call per resource group in the subscription.
Using that, I was able to create this simple function:
Function IsSecondarySqlDatabase {
    # This function determines whether the specified database is performing a secondary replication role.
    # You can use the Get-AzureRmSqlDatabase command to get an instance of a [Microsoft.Azure.Commands.Sql.Database.Model.AzureSqlDatabaseModel] object.
    param
    (
        [Microsoft.Azure.Commands.Sql.Database.Model.AzureSqlDatabaseModel] $SqlDB
    )
    process {
        $IsSecondary = $false
        $ReplicationLinks = Get-AzureRmSqlDatabaseReplicationLink `
            -ResourceGroupName $SqlDB.ResourceGroupName `
            -ServerName $SqlDB.ServerName `
            -DatabaseName $SqlDB.DatabaseName `
            -PartnerResourceGroupName $SqlDB.ResourceGroupName
        $ReplicationLinks | ForEach-Object -Process {
            if ($_.Role -ne "Primary")
            {
                $IsSecondary = $true
            }
        }
        return $IsSecondary
    }
}
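With that function loaded, acting only on primaries becomes straightforward. A minimal sketch, where the resource group and server names are placeholders:
# Hypothetical usage: skip any database that is currently a replication secondary.
$databases = Get-AzureRmSqlDatabase -ResourceGroupName "MyResourceGroup" -ServerName "myserver" |
    Where-Object { $_.DatabaseName -ne "master" }
foreach ($db in $databases) {
    if (-not (IsSecondarySqlDatabase -SqlDB $db)) {
        # Safe to run primary-only maintenance here.
        Write-Host "$($db.DatabaseName) is not a secondary."
    }
}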

Related

Flask-migrate change db before upgrade

I have a multi-tenancy structure set up where each client has a schema set up for them. The structure mirrors the "parent" schema, so any migration that happens needs to happen for each schema identically.
I am using Flask-Script with Flask-Migrate to handle migrations.
What I tried so far is iterating over my schema names, building a URI for them, scoping a new db.session with the engine generated from the URI, and finally running the upgrade function from flask_migrate.
@manager.command
def upgrade_all_clients():
    clients = clients_model.query.all()
    for c in clients:
        application.extensions["migrate"].migrate.db.session.close_all()
        application.extensions["migrate"].migrate.db.session = db.create_scoped_session(
            options={
                "bind": create_engine(generateURIForSchema(c.subdomain)),
                "binds": {},
            }
        )
        upgrade()
    return
I am not entirely sure why this doesn't work, but the result is that it only runs the migration for the db that was set up when the application starts.
My theory is that I am not changing the session that was originally set up when the manager script runs.
Is there a better way to migrate each of these schemas without setting multiple binds and using the --multidb parameter? I don't think I can use SQLALCHEMY_BINDS in the config since these schemas need to be able to be dynamically created/destroyed.
For those who are encountering the same issue, the answer to my specific situation was incredibly simple.
@manager.command
def upgrade_all_clients():
    clients = clients_model.query.all()
    for c in clients:
        print("Upgrading client '{}'...".format(c.subdomain))
        db.engine.url.database = c.subdomain
        _upgrade()  # flask_migrate's upgrade, presumably imported under an alias
    return
The database attribute of db.engine.url is what targets the schema. I don't know if this is the best way to solve this, but it does work, and I can migrate each schema individually.

Elastic APM show total number of SQL Queries executed on .Net Core API Endpoint

I currently have Elastic APM set up with app.UseAllElasticApm(Configuration), which is working correctly. I've been trying to find a way to record exactly how many SQL queries are run via Entity Framework for each transaction.
Ideally, when viewing the APM data in Kibana, the metadata tab could just include an EntityFramework.ExecutedSqlQueriesCount.
Currently on .NET Core 2.2.3.
One thing you can use for this is the Filter API.
With that you have access to all transactions and spans before they are sent to the APM Server.
You can't iterate over all the spans of a given transaction, so you need some tweaking; for this I use a Dictionary in my sample.
var numberOfSqlQueries = new Dictionary<string, int>();

Elastic.Apm.Agent.AddFilter((ITransaction transaction) =>
{
    if (numberOfSqlQueries.ContainsKey(transaction.Id))
    {
        // We make an assumption here: all SQL requests on a given transaction end before the transaction ends.
        // In practice this means that you don't do any "fire and forget" type of query. If you do, you need to
        // make sure that numberOfSqlQueries does not leak.
        transaction.Labels["NumberOfSqlQueries"] = numberOfSqlQueries[transaction.Id].ToString();
        numberOfSqlQueries.Remove(transaction.Id);
    }
    return transaction;
});

Elastic.Apm.Agent.AddFilter((ISpan span) =>
{
    // You can't really filter on whether the query was issued by EF Core or another database library,
    // but you have all sorts of other info like the db instance; span.Subtype and span.Action could also help you filter properly.
    if (span.Context.Db != null && span.Context.Db.Instance == "MyDbInstance")
    {
        if (numberOfSqlQueries.ContainsKey(span.TransactionId))
            numberOfSqlQueries[span.TransactionId]++;
        else
            numberOfSqlQueries[span.TransactionId] = 1;
    }
    return span;
});
A couple of things here:
I assume you don't do "fire and forget" type queries; if you do, you need to handle those separately.
The counting isn't really specific to EF Core queries, but you have info like the db name and database type (mssql, etc.); hopefully based on that you'll be able to filter the queries you want.
With transaction.Labels["NumberOfSqlQueries"] we add a label to the given transaction, and you'll be able to see this data on the transaction in Kibana.

SQL Server audit extended event metadata change

We have an Extended Events (XEvent) session running in SQL Server 2014. We also have a task that runs every few minutes, checks that the XEvent session is running, and audits the XEvent trace.
I'm trying to find a way (in T-SQL) to also audit changes to the XEvent metadata:
whether the XEvent session was changed (via ALTER), or dropped and recreated.
There is no need to monitor session stops and starts.
I thought about using a hash function on the XEvent metadata, but I cannot find a T-SQL way to get the XEvent definition as a string.
Any idea how this can be achieved?
Thanks,
I managed to get the XEvent metadata by using the PowerShell SQLPS module, then applied a hash function to the string result:
$server="localhost"
Import-Module sqlps -DisableNameChecking
CD SQLSERVER:\
CD xevent\$server\default\Sessions
$xe=Get-ChildItem |Where-Object {$_.name -eq $XeName}
#$xe.Targets
#$xe.IsRunning
$Xeventmetadata=$xe.ScriptCreate().GetScript()
$Xeventmetadata
#Getting String hash value
function Hash($textToHash)
{
$hasher = new-object System.Security.Cryptography.SHA256Managed
$toHash = [System.Text.Encoding]::UTF8.GetBytes($textToHash)
$hashByteArray = $hasher.ComputeHash($toHash)
foreach($byte in $hashByteArray)
{
$res += $byte.ToString()
}
return $res;
}
Hash("$Xeventmetadata")
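With that in place, detecting a change is just a matter of comparing the current hash against the last one you stored. A minimal sketch of that comparison; the baseline file location is an assumption:
# Hypothetical change detection: compare the current hash to the last stored one.
$baselinePath = "C:\Audit\$XeName.hash"   # placeholder path for the stored baseline
$currentHash = Hash("$Xeventmetadata")

if (Test-Path $baselinePath) {
    $previousHash = Get-Content $baselinePath
    if ($previousHash -ne $currentHash) {
        Write-Host "XEvent session '$XeName' metadata changed (ALTER or DROP/CREATE)."
    }
}
$currentHash | Set-Content $baselinePath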

ServiceStack Redis: when using SetEntry, it automatically generates a set with the key "ids:" + objectName in the Redis db; how can I disable it?

When using SetEntry, it will automatically generate a set with the key "ids:" + objectName in the Redis db.
For example:
typedClient.SetEntry("famyly:username:jhon", new Family { FatherName = "Jhon", ... });
a set with the key name "ids:Family" and a member like "2343443" will be automatically created in the Redis db,
and each time I update or modify the same key with SetEntry, the "ids:Family" set gains a new auto-generated member. This set will grow extremely large if I update the key frequently.
How can I disable the auto-generated set? It seems useless in the current circumstances.
thanks
I ran into this same problem. I discovered that our database contained a couple dozen of these "ids:XXX" sets, each containing tens of millions of items, which were consuming significant amounts of memory.
The solution is to switch to untyped clients. You can still use typed methods on the client, so you're really not giving up any type safety or automatic serialization at all. There are a couple of ways to create clients; we tend to use the get-in-get-out Exec shortcuts on RedisClientsManager. You should be able to adapt this to the way you do it.
Typed client - creates "ids" sets:
// set:
redis.ExecAs<T>(c => c.SetEntry(key, value));
// get:
T value = redis.ExecAs<T>(c => c.GetValue(key));
Untyped client - no "ids" sets created:
// set:
redis.Exec(c => c.Set(key, value));
// get:
using (var cli = _redis.GetClient())
{
    T value = cli.Get<T>(key);
}
The inferred auto-generated ids appear when you use the high-level typed Redis client. Use IRedisClient.SetEntry on the string-based RedisClient API instead.

PowerShell to find the principal/mirror in SQL servers

I would like to know if it is possible to tell whether an instance of SQL Server is the mirror or the principal by running a SQL query. Secondly, I want to run this on, say, 60-80 instances every day at 4 AM automatically; is that possible? I would like to use PowerShell; I have used it before and found it quite easy to use. Thanks.
It is possible. You will need to play around with SMO objects.
$server = "dwhtest-new"
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server $server
$db = New-Object Microsoft.SqlServer.Management.Smo.Database
$dbs = $srv.Databases
foreach ($db1 in $dbs)
{
$db = New-Object Microsoft.SqlServer.Management.Smo.Database
$db = $db1
$DatabaseName = $db.Name
Write-Host $DatabaseName
Write-Host "MirroringStatus:" $db.MirroringStatus
Write-Host "DBState:" $db.Status
Write-Host
}
If your DB's mirroring is still intact you will receive 'Synchronized' for MirroringStatus; if it's the principal, the status will say "Normal", and if it's the failover, it will say "Restoring". Unfortunately there is no way, that I'm aware of, to just pull out a status of "Mirror" or "Principal". You will just have to build logic that checks both of those values, as sketched below.
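A minimal sketch of that logic, reusing the $srv object from the example above (the string checks follow the statuses described here; treat it as a starting point, not a definitive implementation):
# Hypothetical role check based on MirroringStatus and Status, as described above.
foreach ($db in $srv.Databases)
{
    if ($db.MirroringStatus -ne "None")   # only databases that participate in mirroring
    {
        # The mirror side reports a "Restoring" status; the principal reports "Normal".
        $role = if ("$($db.Status)" -match "Restoring") { "Mirror" } else { "Principal" }
        Write-Host "$($db.Name): $role"
    }
}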
It depends on how you are going to set up the job.
If you want to run it from one central server that collects all the information, then SMO would be the way to go with PowerShell. The answer provided by KickerCost can work but would need some more work to run against multiple servers. It would be best to take his example and turn it into a working function that allows the server names to be piped in, for example something along these lines:
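A rough sketch of what that function could look like (untested; it assumes the SMO assemblies are already loaded, for example via Import-Module sqlps, and the servers.txt path is a placeholder):
# Hypothetical pipeline-friendly wrapper around the SMO example above.
function Get-MirroringStatus
{
    param
    (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string] $ServerName
    )
    process
    {
        $srv = New-Object Microsoft.SqlServer.Management.Smo.Server $ServerName
        foreach ($db in $srv.Databases)
        {
            # Emit one record per database so results can be filtered or exported downstream.
            [PSCustomObject]@{
                Server          = $ServerName
                Database        = $db.Name
                MirroringStatus = $db.MirroringStatus
                DBState         = $db.Status
            }
        }
    }
}

# Usage: pipe in your 60-80 instance names, e.g. from a text file.
Get-Content .\servers.txt | Get-MirroringStatus | Format-Table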
If you are going to just run a job locally on each server (scheduled task or SQL Agent job) that points to the script on a network share, and perhaps output that info to a file (like servername_instance.log), you can use a one-liner with SQLPS:
dir SQLSERVER:\SQL\KRINGER\Default\Databases | Select Name, MirroringStatus
KRINGER is my server name, with a default instance. If you have named instances then replace the "default" with the instance name.
Your output from this command would be similar to this:
Name              MirroringStatus
----              ---------------
AdventureWorks    None
AdventureWorksDW  None
Obviously I don't have any databases involved in mirroring.
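As for running this automatically at 4 AM, neither answer covers scheduling; one option is a Windows scheduled task. A hedged sketch, where the script path and task name are placeholders:
# Hypothetical: register a daily 4 AM task that runs the collection script.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-File C:\Scripts\Check-MirroringRoles.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 4am
Register-ScheduledTask -TaskName "CheckMirroringRoles" -Action $action -Trigger $trigger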