ClickHouse SQL-driven access control replication - access-control

I'd like to enable SQL-driven Access Control and Account Management as described in the ClickHouse docs: https://clickhouse.tech/docs/en/operations/access-rights/
However, the docs don't state whether SQL-managed users are then replicated across the cluster or have to be set up per replica.
I would move to SQL-driven access control only if they were replicated. Right now I have to manage XML files per replica, and I see no big advantage in moving to SQL if it's not replicated either.

SQL-managed users are NOT replicated; you have to create them on each replica.
Still, SQL-managed users do have one advantage: they allow you to GRANT SELECT per table, which the XML configuration cannot do.
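As a concrete sketch of what per-table grants look like (the user and table names here are hypothetical), note that the statements have to be run on each replica, since they are not replicated:

```sql
-- Hypothetical user/table names; run on each replica (not replicated).
CREATE USER analyst IDENTIFIED WITH sha256_password BY 'changeme';
GRANT SELECT ON db.events TO analyst;

-- On recent ClickHouse versions, ON CLUSTER can fan the DDL out to all
-- nodes in one statement (distributed DDL, not replication):
-- CREATE USER analyst ON CLUSTER my_cluster
--     IDENTIFIED WITH sha256_password BY 'changeme';
```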

Related

What is the main difference between Active Geo Replication and Auto Failover Groups for Azure SQL DB

I would like to know the difference between Active Geo Replication and Auto Failover Groups in Azure SQL DB. I read that with Auto Failover Groups the secondary database is always created in a secondary region, whereas active geo-replication can also happen within the same region. So when should one be used over the other?
According to Microsoft documentation, auto-failover groups are "a declarative abstraction on top of the existing active geo-replication feature, designed to simplify deployment and management of geo-replicated databases at scale". BCDR is the biggest use case: manual or automatic failover of SQL data to another region.
The auto-failover group feature imposes some limitations while adding convenience:
A listener concept enables your apps to use the same endpoint for your SQL, while with geo-replication your app is responsible for manipulating connection strings to target the required SQL instance.
On the other hand, geo-replication supports multiple read-only targets, including in the same region, while a failover group supports only two SQL instances in different regions, one read-write and one read-only.
As validly pointed out in another answer, SQL managed instances support failover groups only via vNet peering.
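To illustrate the listener concept: a failover group exposes two stable DNS endpoints that always resolve to the current primary and secondary, so connection strings never change on failover (the group name below is a hypothetical example):

```
myfog.database.windows.net            -- read-write listener (current primary)
myfog.secondary.database.windows.net  -- read-only listener (secondary)
```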
There is little difference between Active Geo Replication and Auto Failover Groups.
Active geo-replication is not supported by Azure SQL Managed Instance, but Auto Failover Groups are.
Active geo-replication replicates changes by streaming database transaction log. It is unrelated to transactional replication, which replicates changes by executing DML (INSERT, UPDATE, DELETE) commands. It seems that Active geo-replication is more lightweight and efficient.
Active-geo-replication document
Auto-failover-group document

What permission are required on the source to copy a SQL Azure database?

I need to grant permissions to a remote development team so they can copy schema changes on a database to their local dev instances. I see many posts similar to this, but they seem to focus on what is required in the destination server, rather than rights to read everything necessary on the source.
Currently, the user is in the db_datareader role, and while they seem to be able to read a good portion of the table structure, configuration items such as defaults seem to be obscured, and stored proc and view definitions don't seem to be available either.
I need the team to be able to copy from our Test/UAT instance, but I don't want them to be able to modify it. They should already have sa access to their local dev instances.
I need to grant permissions to a remote development team so they can copy schema changes on a database to their local dev instances.
I think you can use Azure SQL Database Data Sync.
Data Sync is useful in cases where data needs to be kept up-to-date across several Azure SQL databases or SQL Server databases. Here are the main use cases for Data Sync:
Hybrid Data Synchronization: With Data Sync, you can keep data synchronized between your on-premises databases and Azure SQL databases to enable hybrid applications. This capability may appeal to customers who are considering moving to the cloud and would like to put some of their application in Azure.
Distributed Applications: In many cases, it's beneficial to separate different workloads across different databases. For example, if you have a large production database, but you also need to run a reporting or analytics workload on this data, it's helpful to have a second database for this additional workload. This approach minimizes the performance impact on your production workload. You can use Data Sync to keep these two databases synchronized.
Globally Distributed Applications: Many businesses span several regions and even several countries/regions. To minimize network latency, it's best to have your data in a region close to you. With Data Sync, you can easily keep databases in regions around the world synchronized.
Data Sync is based around the concept of a Sync Group. A Sync Group is a group of databases that you want to synchronize.
A Sync Group has the following properties:
The Sync Schema describes which data is being synchronized.
The Sync Direction can be bi-directional or can flow in only one direction: Hub to Member, Member to Hub, or both.
The Sync Interval describes how often synchronization occurs.
The Conflict Resolution Policy is a group-level policy: Hub wins or Member wins.
For more detail, please see Overview of SQL Data Sync.
With Data Sync, you can set your Azure SQL database as the Hub database and the team's local dev instances as Member databases, with the Sync Direction set to 'Hub to Member'.
Then you can sync schema changes on the database to their local dev instances manually or automatically. Reference: Tutorial: Set up SQL Data Sync between Azure SQL Database and SQL Server on-premises
Hope this helps.
GRANT VIEW DEFINITION was what I needed.
Not sure how I didn't stumble on that in my searches, but there it is.
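For reference, a minimal sketch of the grant (the user name is a hypothetical stand-in for the dev team's login):

```sql
-- 'devreader' is a hypothetical user already in db_datareader.
-- Database-wide: allows reading object definitions (views, procs,
-- defaults) without granting any write access.
GRANT VIEW DEFINITION TO devreader;

-- Or scoped to a single object:
GRANT VIEW DEFINITION ON OBJECT::dbo.SomeProc TO devreader;
```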

Is it possible to turn off the possibility of FT-indexing on a per database level

I understand there is a Domino ini setting for turning off all FT-indexing for an entire server. But is there any way to do this for only some databases on the server, possibly on a per-folder basis?
A full-text index can only be created by a user with Manager access to the database.
In a well-configured environment NO USER needs Manager access to ANY database.
Even administrators don't need it (there is Full Administration Mode for that).
So: give users Editor access to the databases, manage access to databases with groups (user-managed groups if you want), and then decide which databases to index.
In the end, leave the decision about which databases should have an index to the admins...

SQL Mirroring or Failover Clustering VS Azure built in infrastructure

I have read in a few places that SQL Azure data is automatically replicated and that the Azure platform provides redundant copies of the data; therefore, SQL Server high-availability features such as database mirroring and failover clustering aren't needed.
Has anyone had a chance to investigate this more deeply? Are all those availability enhancements really not needed in Azure? Thanks!
To clarify, I'm talking about SQL as a service, not SQL hosted in a VM.
The SQL Database service (database-as-a-service) is a multi-tenant database service, and your databases are triple-replicated within the data center, providing durable storage. The service itself, being large-scale, provides high availability (since there are many VMs running the service itself, along with replicated data). Nothing is needed in terms of mirroring or failover clusters. Having said that: if, say, your particular database became unavailable for a period of time, you'll need to consider how you'll handle that situation (perhaps syncing to another SQL Database, maybe even in another data center).
If you go with SQL Database (DBaaS), you'll still need to work out your backup strategy, and possibly syncing with another DC (or on-premises database server) for DR purposes.
More info on SQL Database fault tolerance is here.
Your desired detail is probably contained in this MSDN article on Business Continuity and Azure SQL Database (see: http://msdn.microsoft.com/en-us/library/windowsazure/hh852669.aspx). At the most basic level, Azure SQL Database will keep three replicas of your database - one primary and two secondary.
While this helps with BCP / DR scenarios you may also wish to investigate ways to backup your database so you have point-in-time restore capabilities. More information on backup / restore can be found here: http://msdn.microsoft.com/en-us/library/windowsazure/jj650016.aspx

How to trigger SPLIT's and DROP's when sharding in SQL Azure

I am setting up a system running on Windows Azure for which I expect a high volume of data and high traffic. To handle it, I am designing a Federated database, and I want the application itself to SPLIT (or DROP) federation members when needed. Two conditions should trigger these operations: 1) the size of the database is reaching the limit allowed in Windows Azure, and 2) the amount of traffic on the server is too high, and a SPLIT operation would improve performance by keeping response times low. (The inverse operations are based on similar reasoning.)
My question is: How can I detect these 2 conditions programmatically?
You can use the Sql Azure Dynamic Management Views to programmatically monitor Sql Azure databases. Note that you will not be able to monitor the entire federated database at once, but rather each of its individual members.
Using the Dynamic Management Views to check for condition 1), the one related to size, should be straight forward. Detecting condition number 2), the one related to traffic / performance, is a bit more difficult since you will first need to identify the exact metrics that make sense and their threshold values.
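For condition 1, a minimal sketch using the DMVs (run inside each federation member; the thresholds you compare against are up to you):

```sql
-- Approximate database size in MB from reserved pages (8 KB each).
SELECT SUM(reserved_page_count) * 8.0 / 1024 AS size_mb
FROM sys.dm_db_partition_stats;

-- A rough traffic signal for condition 2: currently executing requests.
SELECT COUNT(*) AS active_requests
FROM sys.dm_exec_requests;
```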
One very important thing to keep in mind is that the SPLIT and DROP operations behave very differently. A SPLIT is an online operation (it does not involve any down time) through which a federation member is divided into two databases. The data is going to be automatically split between the two. This behavior means that splits might indeed be triggered from an automated scaling process.
The DROP however is quite different. When dropping a federation member, Sql Azure will move its range of key values to the lower or upper neighbor federation member, but the data itself is simply deleted. You can get a more detailed description in this article (search for "Scaling down" inside it). Basically you will have to manually export the data from the dropped database and manually merge it into the destination database. Technically speaking you might be able to automate the merge operation through the command line version of the Sql Azure Migration Wizard, but it's risky. It would require a lot of testing before putting it into production.
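For reference, the federation DDL for these operations looked roughly like this (the federation name, distribution key, and boundary value are hypothetical; SQL Azure Federations were later retired, so treat this as a historical sketch):

```sql
-- Online split of a member at key value 1000 into two databases:
ALTER FEDERATION CustomerFederation SPLIT AT (cust_id = 1000);

-- Drop the member below the boundary; its key range moves to a neighbor,
-- but the data in the dropped member is deleted, not merged:
ALTER FEDERATION CustomerFederation DROP AT (LOW cust_id = 1000);
```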
Microsoft is planning to implement an automated merge when federation members are dropped, but that will happen in a future release. As things stand at the moment, automated scaling down is not something I would recommend.
Update
For those interested, you can vote for the MERGE operation on federated SQL Azure databases here.