How to Load data from Azure SQL DB to Synapse Spark Datalake? - azure-sql-database

I am looking for a solution to load data from Azure SQL DB into the Azure Synapse Spark data lake (not into a dedicated SQL pool).
The requirements are:
We have a CSV file that holds the data. Currently we update or insert the data in this CSV file, which we read through the Spark data lake and load into dataframes.
Rather than using a CSV file, we want to load the CSV data into Azure SQL DB, and in future any updates or inserts should be made directly in Azure SQL DB only.
Currently we do our transformations in Synapse using PySpark and read the file data through Spark tables in our lake database; we have put those CSV files in our Synapse ADLS and read the data from there.
We want to make a connection from Azure SQL DB to the Synapse Spark data lake, so that if any upsert happens in SQL DB those changes are also reflected in our table in the Spark data lake, and when we load those tables in our Synapse notebook as a dataframe it always picks up the latest data.
Thanks in advance for your responses.

You can do this in the following ways.
By connecting Azure SQL Database to the Synapse notebook via a JDBC connection:
First, go to the SQL database and copy the JDBC credentials from its connection strings.
In this approach the table should have a last_modified date column, which is what lets you pick up only new or changed data.
Now use the following code in the Synapse notebook.
# Azure SQL Database connection details
jdbcHostname = "rakeshserver.database.windows.net"
jdbcDatabase = "rakeshsqldatbase"
jdbcPort = "1433"
username = "rakeshadmin"
password = "< your password >"

# Build the JDBC URL and connection properties
jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
connectionProperties = {
    "user": username,
    "password": password,
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}

# Pushdown query: only rows modified in the given date window are read from the table
pushdown_query = "(select * from ok where last_modified >= '2022-10-01' and last_modified <= '2022-11-01') ok2"
df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)
display(df)
Get the data from the table as above, and schedule this notebook to run every day using a Notebook activity in a Synapse pipeline. To make the dates dynamic, pass them into the notebook from the Notebook activity, as in the sketch below.
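A minimal sketch of parameterizing the dates (the parameter names start_date and end_date are assumptions; mark the cell that defines them as the notebook's parameters cell so the Notebook activity can override them):
# Defaults used when the notebook runs interactively; the Notebook activity
# overrides these values at pipeline run time.
start_date = "2022-10-01"
end_date = "2022-11-01"

# Build the pushdown query from the pipeline-supplied dates
pushdown_query = (
    "(select * from ok where last_modified >= '{0}' "
    "and last_modified <= '{1}') ok2".format(start_date, end_date)
)
df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)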
By using a Lookup activity against SQL and passing the JSON to the Notebook activity as a parameter:
First, use a Lookup query (filtering on the last_modified date) to get the desired rows as JSON, then pass that output array as a parameter to the Synapse notebook via the Notebook activity. You can read it as a dataframe in the Synapse code, for example as sketched below.
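A minimal sketch of reading such a parameter in the notebook (the parameter name lookup_output and the sample payload are assumptions; in practice the Lookup activity's output array arrives as a JSON string):
# 'lookup_output' is a notebook parameter that receives the Lookup activity's
# output array as a JSON string via the Notebook activity.
lookup_output = '[{"id": 1, "last_modified": "2022-10-05"}]'   # example payload

# Parse the JSON string into a Spark dataframe
json_rdd = spark.sparkContext.parallelize([lookup_output])
df = spark.read.json(json_rdd)
display(df)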

Related

How to connect databricks SQL notebook to a parquet file via filepath

I'm trying to connect to a list of Parquet files that contain our data tables. I need to retrieve them to create a new table within a Databricks notebook that has the following fields:
Field Name
Data Type
Table Name
I just need the syntax for connecting to these Parquet files via SQL in a Databricks notebook, and any help with setting up these fields to display the data as if it were pulled from information_schema in SSMS. Thanks.
The syntax below creates a table from a given Parquet file path:
%sql
CREATE TABLE <Table_Name>
USING parquet
OPTIONS (path "</path/to/Parquet>")
Replace <Table_Name> and </path/to/Parquet> with your own values.
You can read the data by using a SELECT statement:
SELECT * FROM <Table_Name>
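If you also want the Field Name / Data Type / Table Name view the question asks about, one hedged sketch (the table names below are placeholders) is to loop over your registered tables and collect their DESCRIBE output:
# Build rows of (field_name, data_type, table_name), similar to information_schema
table_names = ["table_a", "table_b"]   # replace with your Parquet-backed tables

rows = []
for t in table_names:
    for c in spark.sql(f"DESCRIBE TABLE {t}").collect():
        rows.append((c["col_name"], c["data_type"], t))

schema_df = spark.createDataFrame(rows, ["Field Name", "Data Type", "Table Name"])
display(schema_df)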
Apache Spark also makes it easy to read Parquet data and write it to Azure SQL Database.
# servername, databasename, tablename, localusername and localpassword are
# placeholder variables; set them before running.
# Write the dataframe to Azure SQL Database over JDBC (overwrites the target table).
(df.write
    .mode("overwrite")
    .format("jdbc")
    .option("url", f"jdbc:sqlserver://{servername}.database.windows.net;databaseName={databasename};")
    .option("dbtable", tablename)
    .option("user", localusername)
    .option("password", localpassword)
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .save())
We have to specify the JDBC connection string, including the SQL user name and password, and reference the table by its schema-qualified name (for example dbo.<table>).
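For completeness, a sketch of reading the table back from Azure SQL Database (using the same placeholder variables as the write above):
# Read the table back from Azure SQL Database over JDBC
read_df = (spark.read
    .format("jdbc")
    .option("url", f"jdbc:sqlserver://{servername}.database.windows.net;databaseName={databasename};")
    .option("dbtable", tablename)
    .option("user", localusername)
    .option("password", localpassword)
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load())
display(read_df)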
Useful link: https://www.c-sharpcorner.com/article/ingest-data-to-azure-sql-database-using-azure-databricks/
You can also open the Azure SQL database in SSMS simply by using the server name, user name and password.

Is it possible to export a single table from an Azure SQL database and save it directly into Azure Blob storage without downloading it to a local disk?

I've tried to use SSMS, but it requires a local temporary location for the BACPAC file. I don't want to download it locally; I would like to export a single table directly to Azure Blob storage.
In my experience, we can import table data from a CSV file stored in Blob storage, but I haven't found a way to export the table data to Blob storage as a CSV file directly.
You could think about using Data Factory.
It can achieve that; please refer to these tutorials:
Copy and transform data in Azure SQL Database by using Azure Data Factory
Copy and transform data in Azure Blob storage by using Azure Data Factory
Use the Azure SQL database as the source, choosing the table as the source dataset.
Use Blob storage as the sink, choosing DelimitedText as the sink format.
Run the pipeline and you will get the CSV file in Blob storage.
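If you prefer a code-based route instead of the Data Factory wizard, a Spark notebook can do the same thing: read the single table over JDBC and write it straight to Blob storage as CSV, so nothing touches a local disk. A minimal sketch; the server, credentials, storage account and container names are placeholders:
# Read the single table from Azure SQL over JDBC
jdbc_url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>"
props = {"user": "<user>", "password": "<password>",
         "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"}
table_df = spark.read.jdbc(url=jdbc_url, table="dbo.MyTable", properties=props)

# Storage account key for the target container (assumed to be available)
spark.conf.set("fs.azure.account.key.<account>.blob.core.windows.net", "<storage-key>")

# Write the table to Blob storage as CSV; coalesce(1) keeps it to a single file
(table_df.coalesce(1)
    .write.mode("overwrite")
    .option("header", "true")
    .csv("wasbs://<container>@<account>.blob.core.windows.net/export/MyTable"))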
Thanks also to @Hemant Halwai for the tutorial provided.

Loading a 50 GB CSV file from Azure Blob to Azure SQL DB in less time - performance

I am loading a 50 GB CSV file from Azure Blob to Azure SQL DB using OPENROWSET.
It takes 7 hours to load this file.
Can you please help me with possible ways to reduce this time?
The easiest option IMHO is to just use BULK INSERT. Move the CSV file into Blob storage and then import it directly with BULK INSERT from Azure SQL. Make sure the Azure Blob storage account and Azure SQL are in the same Azure region.
To make it as fast as possible:
split the CSV into more than one file (for example using something like a CSV splitter; https://www.erdconcepts.com/dbtoolbox.html looks nice; never tried it, it just came up in a quick search, but it looks good)
run multiple BULK INSERT operations in parallel using the TABLOCK option (https://learn.microsoft.com/en-us/sql/t-sql/statements/bulk-insert-transact-sql?view=sql-server-2017#arguments). If the target table is empty, this allows multiple concurrent bulk operations in parallel; a sketch of this split-and-parallel approach follows this list.
make sure you are using a higher SKU for the duration of the operation. Depending on the SLO (Service Level Objective) you're using (S4? P1? vCore?) you will get a different amount of log throughput, up to close to 100 MB/sec; that's the maximum speed you can actually achieve (https://learn.microsoft.com/en-us/azure/sql-database/sql-database-resource-limits-database-server)
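A hedged sketch of the split-and-parallel idea using pyodbc. It assumes the CSV has already been split into several files in the blob container and that an external data source named MyAzureBlobStorage, backed by a database scoped credential, already exists in the database; server, credentials, file and table names are placeholders.
import pyodbc
from concurrent.futures import ThreadPoolExecutor

conn_str = ("Driver={ODBC Driver 17 for SQL Server};"
            "Server=<server>.database.windows.net;Database=<db>;"
            "Uid=<user>;Pwd=<password>;")

# One split file per BULK INSERT; an empty target table plus TABLOCK allows
# these bulk loads to run concurrently.
files = ["part1.csv", "part2.csv", "part3.csv", "part4.csv"]

def bulk_insert(file_name):
    sql = ("BULK INSERT dbo.TargetTable FROM '{0}' "
           "WITH (DATA_SOURCE = 'MyAzureBlobStorage', FORMAT = 'CSV', "
           "FIRSTROW = 2, TABLOCK);").format(file_name)
    conn = pyodbc.connect(conn_str, autocommit=True)
    try:
        conn.cursor().execute(sql)
    finally:
        conn.close()

# Run one BULK INSERT per file in parallel
with ThreadPoolExecutor(max_workers=len(files)) as pool:
    list(pool.map(bulk_insert, files))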
Please try using Azure Data Factory.
First create the destination table in Azure SQL Database; let's call it USDJPY. After that, upload the CSV to an Azure storage account. Now create your Azure Data Factory instance and choose Copy Data.
Next, choose "Run once now" to copy your CSV files.
Choose "Azure Blob Storage" as your "source data store", specify your Azure Storage which you stored CSV files.
Provide information about Azure Storage account.
Choose your CSV files from your Azure Storage.
Choose "Comma" as your CSV files delimiter and input "Skip line count" number if your CSV file has headers
Choose "Azure SQL Database" as your "destination data store".
Type your Azure SQL Database information.
Select your table from your SQL Database instance.
Verify the data mapping.
Execute the data copy from the CSV files to SQL Database by confirming the remaining wizard steps.

Comparing Azure table data against Cosmos Db

We need to do a weekly sync of data in an Azure SQL table into a Cosmos db table. The Azure SQL table is the source and has millions of records. Has anyone done this before and was there a tool used to do this?
I'd suggest using the Copy activity in Azure Data Factory, which is designed for data transfer. You could configure Azure SQL DB as the source and Cosmos DB as the sink. Please see the documentation: https://learn.microsoft.com/en-us/azure/data-factory/copy-activity-overview#supported-data-stores-and-formats
Since you need a weekly sync of the data, you can create a schedule trigger for the copy activity.

How to insert bulk data from a Spark dataframe into SQL Server data warehouse using a service principal in Databricks

I'm trying to insert bulk data from a Spark dataframe into SQL Server data warehouse from Databricks. For this I'm using the pyodbc module with a service principal (not JDBC). I have managed single-row inserts, but I couldn't find a way to insert bulk data into the SQL Server data warehouse. Can someone suggest a way to insert data in bulk?
Examples here: https://docs.databricks.com/spark/latest/data-sources/azure/sql-data-warehouse.html
Though this approach tends to recommend a blob storage account between the two.
You can also use the standard SQL interface: https://docs.databricks.com/spark/latest/data-sources/sql-databases.html
But you cannot use a service principal there - you will need a SQL login. I would store a connection string in Key Vault as a secret (using the SQL login), get the secret using your service principal, and then connect to SQL using that connection string, for example as sketched below.
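A minimal sketch of that pattern combined with pyodbc's fast_executemany option, which batches the parameterized inserts (the secret scope, secret name, table and columns are assumptions):
import pyodbc

# Connection string (with the SQL login) stored as a Key Vault-backed secret
conn_str = dbutils.secrets.get(scope="kv-scope", key="sql-dw-connection-string")

# Collect the dataframe rows to the driver; suitable for small/medium volumes only
rows = [tuple(r) for r in df.select("col1", "col2").collect()]

conn = pyodbc.connect(conn_str, autocommit=False)
cursor = conn.cursor()
cursor.fast_executemany = True   # send the parameters in bulk batches
cursor.executemany("INSERT INTO dbo.target_table (col1, col2) VALUES (?, ?)", rows)
conn.commit()
conn.close()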
You can do this nicely using PolyBase; it will require a location to store the temp files:
https://docs.databricks.com/data/data-sources/azure/sql-data-warehouse.html#azure-sql-data-warehouse
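A hedged sketch of that connector from a Databricks notebook (server, database, table, storage account and credentials are placeholders); it stages the data in tempDir and loads it via PolyBase:
# Storage account key for the staging location used by the connector
spark.conf.set(
    "fs.azure.account.key.<storageaccount>.blob.core.windows.net", "<storage-key>")

# Write the dataframe to the data warehouse through the SQL DW connector
(df.write
    .format("com.databricks.spark.sqldw")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;"
                   "database=<dw>;user=<sqluser>;password=<sqlpassword>")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "dbo.bulk_target")
    .option("tempDir", "wasbs://<container>@<storageaccount>.blob.core.windows.net/tempdir")
    .mode("append")
    .save())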