Most efficient way to implement RLS and CLS on SQL Server, considering that the user and user group metadata is stored in the database

I'm looking for a very efficient way to control row-level and column-level access, considering that the user and user group metadata is stored in the database. Our application uses Entity Framework, and we have to ensure that all code access to a record and its columns is filtered based on the user's access to the requested data.
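For context, the native SQL Server (2016+) primitives for this look roughly like the minimal sketch below. All names here (dbo.Orders, dbo.UserGroupMembers, OwnerGroupId, ReportingRole) are hypothetical stand-ins for your own schema: row-level access is enforced by a security policy over an inline predicate function that joins your user metadata, and column-level access by column-scoped grants.

-- Hypothetical metadata table dbo.UserGroupMembers(UserName, GroupId) drives the predicate.
CREATE FUNCTION dbo.fn_RowAccessPredicate (@OwnerGroupId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS AccessGranted
    FROM dbo.UserGroupMembers AS m
    WHERE m.GroupId = @OwnerGroupId
      AND m.UserName = USER_NAME();  -- with EF and a shared login, match on SESSION_CONTEXT(N'UserName') instead
GO

-- RLS: filter the rows visible to SELECT, UPDATE, and DELETE on dbo.Orders.
CREATE SECURITY POLICY dbo.OrdersPolicy
    ADD FILTER PREDICATE dbo.fn_RowAccessPredicate(OwnerGroupId) ON dbo.Orders
    WITH (STATE = ON);
GO

-- CLS: grant access only to the permitted columns.
GRANT SELECT ON dbo.Orders (OrderId, OrderDate) TO ReportingRole;

Since EF typically connects with a single application login, you would set the effective user via sp_set_session_context when each connection opens, so the predicate can read it from SESSION_CONTEXT.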

Related

Power BI Direct query mode - handle multiple tables

I have an Azure SQL DB where all the tables sit: fact, dimension, and other lookups. I have a requirement to pull three tables via DirectQuery (a fact table, a dimension, and another lookup table that is not part of the star schema) and make them part of the data model within Power BI.
DirectQuery doesn't seem to allow querying against more than one table (from a single source).
Any thoughts/suggestions?
Sorry... thanks to Jon's response, which prompted me to look further: I have found that for each table (from the same source) I have to go through the 'Get Data' process.
Initially, I thought I could multi-select tables in one 'Get Data' pass, but obviously not.
All OK here now.
I tried with a MySQL DB, and I was able to select multiple tables and load them into Power BI at once:
New Source -> Database -> MySQL Database -> add host and password -> a window then appears with the table list, which shows tables, views, procedures, etc.

PostgreSQL dump with data restriction

I'm working on a fast way to make a clone of a database for testing an application. My database has some specific tables that are quite big (50+ GB), but the large majority of the tables are only a few MB. On my current server, the dump plus restore takes some hours. These big tables have date fields.
With that context in mind, my question is: is it possible to apply some kind of restriction to the table rows so that only selected data is dumped? E.g., on table X, only dump the rows whose date is Y.
If this is possible, how can I do it? If it's not possible, what would be a good alternative?
You can use COPY (SELECT whatever FROM yourtable WHERE ...) TO '/some/file' to limit what you export; see the documentation of the COPY command.
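As a concrete sketch (the table x and its column date_col are hypothetical):

-- Export only the rows of x from 2020 onward; server-side COPY TO a file
-- requires superuser or pg_write_server_files (use psql's \copy for a client-side file).
COPY (SELECT * FROM x WHERE date_col >= DATE '2020-01-01')
    TO '/some/file.csv' WITH (FORMAT csv, HEADER);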
You could use row-level security and create a policy that lets the dumping database user see only those rows that you want to dump (make sure that user is neither a superuser nor the owner of the tables, because such users are exempt from row-level security).
Then dump the database as that user, using the --enable-row-security option of pg_dump.
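A minimal sketch of that approach, again assuming a hypothetical big table x with a date_col column and a dedicated dump_user role:

-- As a privileged user: create a restricted role and limit what it sees on x.
CREATE ROLE dump_user LOGIN PASSWORD 'secret';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO dump_user;
ALTER TABLE x ENABLE ROW LEVEL SECURITY;
CREATE POLICY dump_policy ON x
    FOR SELECT TO dump_user
    USING (date_col >= DATE '2020-01-01');

Then dump as that user:

pg_dump --enable-row-security -U dump_user mydb > mydb_subset.sql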

Azure Machine Learning Write output to Azure SQL Database

I am using Azure Machine Learning to cluster data.
The input data is from an Azure SQL Database, and it works fine.
At the end of everything, I want to write the output to a table in the same Azure SQL Database, but I get this error:
Error: Error 1000: AFx Library library exception:
Sql encountered an error: Login failed for user
Does anyone have any idea?
Thank you very much!
Please follow the instructions and examine the examples provided here to properly use the Export Data module to save data from ML to an Azure SQL Database.
How to Export Data to an Azure SQL Database
Add the Export Data module to your experiment. You can find this module in the Data Input and Output group in the experiment items list in Azure Machine Learning Studio.
Connect it to the module that produces the data that you want to export to Azure SQL DB.
For Data destination, select Azure SQL Database. This option supports Azure SQL Data Warehouse as well.
Set the following options specific to Azure SQL Database or Azure SQL Data Warehouse.
Database server name
Type the server name that is generated by Azure. Typically it has the form <generated_identifier>.database.windows.net.
Database name
Type the name of a database on the server you just specified. The database must already exist; Export Data cannot create it.
Server user account name
Type the user name of an account that has access permissions for the database.
Server user account password
Provide the password for the specified user account.
Comma-separated list of columns to be saved
Type the names of the columns in the experiment that you want to write to the database.
Data table name
Type the name of the table where data will be stored.
For Azure SQL Database, if the table does not exist, it will be created. For Azure SQL Data Warehouse, the table must already exist and have the correct schema, so be sure to create it in advance.
Comma-separated list of datatable columns
Type the names of the columns as you wish them to appear in the destination table. The columns should correspond in order with the column names that you list in Comma-separated list of columns to be saved.
If you are writing to Azure SQL Data Warehouse, the column names must match those already in the destination table schema.
Number of rows written per SQL Azure operation
Indicate how many rows should be written to the destination table in each batch. By default, the value is set to 50, which is the default batch size for Azure SQL Database. However, you should increase this value if you have a large number of rows to write.
TIP:
For Azure SQL Data Warehouse, we recommend that you set this value to 1. If you use a larger batch size, the size of the command string that is sent to Azure SQL Data Warehouse can exceed the allowed string length, causing an error.
If you don't want to write new results each time you run the experiment, select the Use cached results option. If there are no other changes to module parameters, the experiment will write the data the first time the module is run, and thereafter not perform writes.
However, a write will always be performed if any parameters have been changed in Export Data that would change the results.
Run the experiment.
Found the issue!
I needed to create a specific user with this SQL code:
CREATE USER AMLApplicationUser WITH PASSWORD = '************';
and then add the user to these roles on the database I want to write to:
ALTER ROLE db_datareader ADD MEMBER AMLApplicationUser;
ALTER ROLE db_datawriter ADD MEMBER AMLApplicationUser;
I guess the datawriter role alone would be enough, but I needed datareader too.
So, in conclusion, it seems that from AML the database admin role can be used to read data, but not to write it.
Thank you for your help!

Dynamic data segregation

I am trying to create a framework in SQL (mainly tables) that could help me segregate SQL data dynamically based on user roles.
E.g., a user with role A should have access to data from country XYZ.
I have a bunch of stored procedures that fetch different attributes of the data, and I am trying to update them in such a way that they need to be modified only once.
I might get different filter criteria in the future, so I am trying to create a matrix of filter conditions that could be read dynamically inside the stored procedures to filter the data, along the lines of the sketch below.
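One possible shape for such a matrix, as a minimal sketch: all names here (dbo.RoleFilter, dbo.SalesData, the 'Country' key) are hypothetical. Each stored procedure filters through the matrix with an EXISTS test instead of hard-coding criteria, so new filter criteria only require new rows, not code changes.

-- Hypothetical filter matrix: which database role may see which attribute value.
CREATE TABLE dbo.RoleFilter (
    RoleName    sysname      NOT NULL,
    FilterKey   varchar(50)  NOT NULL,  -- e.g. 'Country'
    FilterValue varchar(100) NOT NULL   -- e.g. 'XYZ'
);

-- Inside each stored procedure: keep only rows the caller's roles allow.
SELECT s.*
FROM dbo.SalesData AS s
WHERE EXISTS (
    SELECT 1
    FROM dbo.RoleFilter AS f
    WHERE f.FilterKey   = 'Country'
      AND f.FilterValue = s.Country
      AND IS_ROLEMEMBER(f.RoleName) = 1  -- 1 when the current user is a member of the role
);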

SSRS Report Data Source for Query with Multiple Databases

I have a dataset that pulls from multiple databases on the same server. Historically (without doing research), in this case I would set the data source to ReportServer (the database that houses the execution log for the server, etc.), and I noticed the dataset doesn't seem to care what the data source is.
I did a few hours of digging and couldn't find an answer. When using (or, in my case, unioning) multiple databases in a dataset, what should the dataset's data source be in Visual Studio?
Specifying the database in the connection string sets the starting, default database for the query. If your permissions are adequate, then there is nothing to stop you from accessing other databases.
The database in the connection string gives your query the default context that is used when you don't qualify a table with a database name. If your query is simply:
SELECT * FROM vw_Interactions
then this will run against the database specified in your connection string.
For your case, when using a table with the same name across multiple databases, the default database doesn't matter much, as long as the data access account has permissions that let the query work.
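As a sketch, assuming hypothetical databases SalesDb and SupportDb that each contain the vw_Interactions view (the column names are also hypothetical), three-part names make the data source's default database irrelevant:

-- The connection string's database is only the default context;
-- three-part names reach any database the account can read.
SELECT InteractionId, CreatedAt FROM SalesDb.dbo.vw_Interactions
UNION ALL
SELECT InteractionId, CreatedAt FROM SupportDb.dbo.vw_Interactions;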