How do you dynamically mask data that is saved as JSON in Azure SQL?

I'm trying to mask sensitive data in an Azure SQL database.
The data is saved as plain text, with one column stored as XML and another stored as JSON.
I've tried adding masking rules to the database, but when I open SSMS and run a SELECT statement, the masking is not applied to any of the data in the columns (plain text, XML, or JSON).
No user has been excluded from masking, i.e. no one is configured to see the unmasked data.
I just want to understand why the data is not masked when I run a SELECT in SSMS.
My rules look like the below:
XML rule:
JSON rule:
Text rule:
My SQL statement:
SELECT TOP (1000) * FROM database_Name

As mentioned in the Microsoft documentation:
Identities with administrative privileges in Azure Active Directory (Azure AD) or SQL are excluded from the masking process and always have access to the unmasked sensitive data.
You are probably accessing the data as the SQL admin or an Azure AD admin user, which is why you can see the sensitive data.
By hiding important information from unintended users at multiple layers of the database, you can prevent access and gain control. You can grant or revoke the UNMASK permission on a per-user basis.
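As a side note, masking rules themselves can also be defined in T-SQL instead of the portal blade. A minimal sketch, assuming a placeholder table dbo.Customer with an nvarchar column holding the JSON:
-- Placeholder names; default() fully masks the value according to its data type.
ALTER TABLE dbo.Customer
ALTER COLUMN JsonPayload ADD MASKED WITH (FUNCTION = 'default()');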
The code below is taken from the Microsoft documentation.
Grant the UNMASK permission to a user:
GRANT UNMASK ON Data.Membership TO USER;
Query the data under the context of that user:
EXECUTE AS USER = 'USER';
Revoke the UNMASK permission:
REVOKE UNMASK ON Data.Membership FROM USER;
Data after granting permission to user
Data after removing permission from user
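To verify the rules are actually applied, query as a low-privileged user rather than as the admin. A minimal sketch, reusing the Data.Membership sample table from the documentation with a placeholder user name:
-- MaskTestUser is a placeholder name.
CREATE USER MaskTestUser WITHOUT LOGIN;
GRANT SELECT ON Data.Membership TO MaskTestUser;

EXECUTE AS USER = 'MaskTestUser';
SELECT TOP (10) * FROM Data.Membership;  -- masked values are returned here
REVERT;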
References:
SQL Database dynamic data masking with the Azure portal
Granting and Revoking the Permission

Related

Unable to Query Serverless Pool View in Azure Synapse using SQL Admin Credentials

I have set up a serverless SQL pool in Azure Synapse that is querying a view I had set up over a linked Azure Data Lake.
CREATE VIEW DeviceTelemetryView
AS
SELECT corporationid, deviceid, version,
       CONVERT(datetime, dateTimestamp, 126) AS dateTimeStamp,
       deviceData
FROM OPENROWSET(
    BULK 'https://test123.dfs.core.windows.net/devicetelemetry/*/*/*/*/*/',
    FORMAT = 'PARQUET'
) AS [result]
GO
Using my Azure AD credentials from within Synapse Studio or SSMS, I have no issues querying this view. When I try to query using my SQL admin account I get the following error:
Cannot find the CREDENTIAL 'https://test123.dfs.core.windows.net/devicetelemetry/*/*/*/*/*/', because it does not exist or you do not have permission.
It is important that I am able to query using SQL admin credentials, as we want to query this view from our application for various reports and thus don't want to use AAD credentials.
I have tried the SO solution provided here: GRANT Database Scoped Credential syntax gives mismatched input error
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::[WorkspaceSystemIdentity] TO [sqlAdmin];
This seems to be the default credential that was created when linking my Data Lake to Synapse; however, the statement gives me the following error when run against the database where my view exists:
Cannot find the database scoped credential 'WorkspaceSystemIdentity', because it does not exist or you do not have permission.
You would need to create a server-scoped credential to allow access to the storage files.
Server-scoped credential
These are used when a SQL login calls the OPENROWSET function without a DATA_SOURCE to read files on some storage account. The name of the server-scoped credential must match the base URL of the Azure storage account (optionally followed by a container name). However, SQL users can't use Azure AD authentication to access storage, and a serverless SQL pool doesn't return subfolders unless you specify /** at the end of the path.
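A minimal sketch of such a server-scoped credential, assuming SAS authentication (typically run in the master database; the SECRET value is a placeholder):
-- The credential name must match the base URL of the storage, optionally with container.
CREATE CREDENTIAL [https://test123.dfs.core.windows.net/devicetelemetry]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<sas-token-without-leading-?>';  -- placeholder SAS token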

Error in SSMS when running query from SQL On-Demand endpoint

I am attempting to pull in data from a CSV file that is stored in an Azure Blob container, and when I try to query the file I get the following error:
File 'https://<storageaccount>.blob.core.windows.net/<container>/Sales/2020-10-01/Iris.csv' cannot be opened because it does not exist or it is used by another process.
The file does exist, and as far as I know it is not being used by anything else.
I am using SSMS and also a SQL On-Demand endpoint from Azure Synapse.
What I did in SSMS was run the following commands after connecting to the endpoint:
CREATE DATABASE [Demo2];
CREATE EXTERNAL DATA SOURCE AzureBlob WITH ( LOCATION = 'wasbs://<container>@<storageaccount>.blob.core.windows.net/' );
SELECT * FROM OPENROWSET (
    BULK 'Sales/2020-10-01/Iris.csv',
    DATA_SOURCE = 'AzureBlob',
    FORMAT = 'CSV'
) AS tv1;
I am not sure where my issue is or where to go next. Did I mess anything up when creating the external data source? Do I need to use a SAS token there, and if so, what is the syntax for that?
@Ubiquitinoob44, you need to create a database-scoped credential:
https://learn.microsoft.com/en-us/azure/synapse-analytics/sql/develop-storage-files-storage-access-control?tabs=shared-access-signature
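Following that doc, a minimal sketch using a SAS token (the master key password, credential name, and SAS secret are placeholders), which the external data source can then reference:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL BlobSasCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<sas-token-without-leading-?>';

CREATE EXTERNAL DATA SOURCE AzureBlob WITH (
    LOCATION = 'wasbs://<container>@<storageaccount>.blob.core.windows.net/',
    CREDENTIAL = BlobSasCredential );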
I figured out what the issue was. I haven't tried Armando's suggestion yet.
First I had to go to the container and edit the IAM policies to give my Azure Active Directory login the Storage Blob Data Contributor role. The user to give access to is the email address you use to log in to the Azure portal.
https://learn.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-portal?toc=/azure/synapse-analytics/toc.json&bc=/azure/synapse-analytics/breadcrumb/toc.json
After that I had to re-connect to the On-Demand endpoint in SSMS. Make sure you log in through the Azure AD - MFA option. Originally I was using the On-Demand endpoint username and password, which had not been given the Storage Blob Data Contributor role on the container.
https://learn.microsoft.com/en-us/azure/synapse-analytics/sql/resources-self-help-sql-on-demand

Using Microsoft Access program to upload images to be stored in SQL Server database

I have an Access back-end that is going to be converted to SQL Server. The front-end will stay the same, using Access. The issue I am having is that SQL Server handles images differently than MS Access does.
Currently, a user adds a picture to a record via the attachment data type, which, to my understanding, isn't available in SQL Server. I saw that the image data type is deprecated, which leaves varbinary(MAX) and/or FILESTREAM as the options.
I want to go with storing the images in the file system (FILESTREAM), as the sizes are greater than 256 KB, but I'm not finding any documentation on accomplishing that with an Access front-end.
Consider running an MS Access pass-through query to upload the user's image. Specifically, pass the file name into an SQL query as shown in the MSDN docs for large-value data types. For this, the user will need OPENROWSET privileges, and the image file may need to be accessible from the client machine or the server.
INSERT INTO myTable (myImageColumn /* , ...other columns... */)
SELECT myPicData.* /* , ...other values... */
FROM OPENROWSET
    (BULK 'C:\Path\To\Image.jpg', SINGLE_BLOB) AS myPicData;
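The privilege in question is the server-level ADMINISTER BULK OPERATIONS permission, which OPENROWSET(BULK ...) requires; a one-line sketch with a placeholder login name:
GRANT ADMINISTER BULK OPERATIONS TO [AccessAppLogin];  -- placeholder login name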

Insert image into varbinary(max) column from Azure Storage into Azure SQL Database

As the title suggests, I'm trying to take an image from Azure Blob Storage and put it into a varbinary(max) column in my Azure SQL Database.
I'm building an application in Unity where each user has a logo, and this logo is specific to each user. I send a web request through to PHP code, which then queries the database for the information I need. So I'm trying to find a way to ensure each user (row in the table) has a logo attached to it. If it's not right to store images in the database itself, would it be possible to store a URL in the logo column, send a web request to that URL, and use the downloaded image in the application? If so, does anyone know how I would do this?
I know Blob Storage provides a URL to the image. Additionally, if possible, I want to add the images to currently created rows. Thanks!
Assumptions:
You have user icons stored as blobs in your Azure Storage account, with anonymous read access at the blob level.
The blob URL is https://my-az.blob.core.windows.net/usericon/staff icon.png (replace 'my-az' with your storage account name).
You have two separate tables, UserDets and UserImg. It is always better to keep the image data in a separate table.
In SSMS, execute this command:
CREATE EXTERNAL DATA SOURCE MyUserAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE, LOCATION = 'https://my-az.blob.core.windows.net/usericon');
Then insert the image into the VARBINARY(MAX) column:
INSERT INTO UserImg (UserId, UserImg)
SELECT 1, BulkColumn
FROM OPENROWSET(
    BULK 'staff icon.png',
    DATA_SOURCE = 'MyUserAzureBlobStorage',
    SINGLE_BLOB) AS ImageFile;
If you are using credentials for blob access, there are some additional steps, but they are not required in the current instance.
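For reference, if the container were not publicly readable, those additional steps would look roughly like this (the master key password, credential name, and SAS token are placeholders):
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL UserIconSas
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<sas-token-without-leading-?>';

CREATE EXTERNAL DATA SOURCE MyUserAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
       LOCATION = 'https://my-az.blob.core.windows.net/usericon',
       CREDENTIAL = UserIconSas );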

Azure Machine Learning Write output to Azure SQL Database

I am using Azure Machine Learning to cluster data.
The input data comes from an Azure SQL Database, and that part works fine.
At the end of everything I want to write the output to a table in the same Azure SQL Database, but I get this error:
Error: Error 1000: AFx Library library exception:
Sql encountered an error: Login failed for user
Does anyone have any idea?
Thank you very much!
Please follow the instructions and examine the examples provided here to properly use the Export Data module to save ML output to an Azure SQL Database:
How to Export Data to an Azure SQL Database
Add the Export Data module to your experiment. You can find this module in the Data Input and Output group in the experiment items list in Azure Machine Learning Studio.
Connect it to the module that produces the data that you want to export to Azure SQL DB.
For Data destination, select Azure SQL Database. This option supports Azure SQL Data Warehouse as well.
Set the following options, which are specific to Azure SQL Database and Azure SQL Data Warehouse:
Database server name: type the server name that is generated by Azure. Typically it has the form <generated_identifier>.database.windows.net.
Database name: type the name of a database on the server you just specified. The database must already exist; Export Data cannot create it.
Server user account name: type the user name of an account that has access permissions for the database.
Server user account password: provide the password for the specified user account.
Comma-separated list of columns to be saved: type the names of the columns in the experiment that you want to write to the database.
Data table name: type the name of the table where the data will be stored. For Azure SQL Database, if the table does not exist, it will be created. For Azure SQL Data Warehouse, the table must already exist and have the correct schema, so be sure to create it in advance.
Comma-separated list of datatable columns: type the names of the columns as you wish them to appear in the destination table. The columns should correspond in order with the column names listed in Comma-separated list of columns to be saved. If you are writing to Azure SQL Data Warehouse, the column names must match those already in the destination table schema.
Number of rows written per SQL Azure operation: indicate how many rows should be written to the destination table in each batch. By default, the value is set to 50, which is the default batch size for Azure SQL Database; increase this value if you have a large number of rows to write.
TIP: for Azure SQL Data Warehouse, we recommend that you set this value to 1. If you use a larger batch size, the size of the command string that is sent to Azure SQL Data Warehouse can exceed the allowed string length, causing an error.
If you don't want to write new results each time you run the experiment, select the Use cached results option. If there are no other changes to module parameters, the experiment will write the data the first time the module is run, and thereafter not perform writes. However, a write will always be performed if any parameters have been changed in Export Data that would change the results.
Run the experiment.
Found the issue!
I needed to create a specific user with this SQL code:
CREATE USER AMLApplicationUser WITH PASSWORD = '************';
and then add the user to these roles on the database I want to write to:
ALTER ROLE db_datareader ADD MEMBER AMLApplicationUser;
ALTER ROLE db_datawriter ADD MEMBER AMLApplicationUser;
I guess the db_datawriter role alone might be enough, but I needed db_datareader too.
So, in conclusion, it seems that the database admin role can be used to read data, but not to write data, from AML.
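One assumption worth noting (not something I verified): since Export Data creates the destination table when it does not exist, the user may also need DDL rights in that case, along these lines:
-- Assumes the table is created in the dbo schema.
GRANT CREATE TABLE TO AMLApplicationUser;
GRANT ALTER ON SCHEMA::dbo TO AMLApplicationUser;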
Thank you for your help!