How to write to Blob Storage from Azure SQL using T-SQL?

I'm creating a stored procedure which gets executed when a CSV is uploaded to Blob Storage. The file is then processed using T-SQL, and I wish to write the result to a file.
I have been able to read the file and process it using DATA_SOURCE, a database scoped credential and an external data source. However, I'm stuck on writing the output back to a different blob container. How would I do this?
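For context, here is a minimal, hedged sketch of the read side described above; the credential name, data source name, container URL, SAS secret and file name are all placeholders:

    -- Placeholder names throughout; the SAS token must grant read access to the container.
    CREATE DATABASE SCOPED CREDENTIAL MyBlobCredential
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = '<SAS token without the leading ?>';

    CREATE EXTERNAL DATA SOURCE MyBlobSource
    WITH (
        TYPE = BLOB_STORAGE,
        LOCATION = 'https://mystorageaccount.blob.core.windows.net/input-container',
        CREDENTIAL = MyBlobCredential
    );

    -- Read the uploaded CSV into a staging table for further T-SQL processing.
    BULK INSERT dbo.StagingCsv
    FROM 'uploaded-file.csv'
    WITH (DATA_SOURCE = 'MyBlobSource', FORMAT = 'CSV', FIRSTROW = 2);

The missing piece the question asks about is the reverse direction: T-SQL has no built-in equivalent for exporting query results to a blob, which is why the answers below hand the write step to another service.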

If it were me, I'd use Azure Data Factory: you can create a pipeline that's triggered when a file is added to a blob container, have it import that file, run a stored procedure, and export the results to a blob.
Alternatively, that could be an Azure Function that is triggered on changes to a blob container.

Related

Access Azure Blob Storage data from Azure SQL Database/SQL Server as a blob (for read and write), not for a table import

All other questions and answers I have found on this topic reference accessing a CSV/Excel file as tabular data (e.g. via OPENROWSET or BULK INSERT) rather than as a raw blob.
Is it possible, from within a Stored Procedure, to access Azure Blob Storage (for a particular, known file URL), and output the actual file's data as a varbinary(max) column from a stored procedure? Similarly, in reverse, is it possible to accept a varbinary(max) as a stored procedure parameter and subsequently write that file to Blob Storage from within Azure SQL Database?
My first reaction would be: why would you?
If there's a reason to combine data from SQL with data from a file, your logic should probably do that. Use tools and services for what they're there for, and what they're good at. Reading (information from) files from within (Azure) SQL is not one of them.
With that being said: it does not seem possible without OPENROWSET or BULK INSERT.
The closest thing out there is the option to back up SQL Server to URL for Microsoft Azure Blob Storage, but even that is only available for SQL Server and Managed Instance. The fact that even that is not possible with Azure SQL Database probably tells us that something like this is not going to be possible.
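To illustrate the read half that OPENROWSET does cover, here is a hedged sketch; 'MyBlobSource' and the file path are placeholders, and an external data source of TYPE = BLOB_STORAGE pointing at the container is assumed to exist already:

    -- Assumed setup: an external data source named MyBlobSource over the container.
    -- SINGLE_BLOB returns the whole file as a single varbinary(max) value.
    SELECT BulkColumn AS FileContents
    FROM OPENROWSET(
             BULK 'photos/employee-123.jpg',
             DATA_SOURCE = 'MyBlobSource',
             SINGLE_BLOB) AS FileData;

The reverse direction, accepting a varbinary(max) parameter and writing it out to Blob Storage from within Azure SQL Database, has no T-SQL counterpart, which is the limitation described above.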

Excel into Azure Data Factory into SQL

I read a few threads on this but noticed most are outdated, with Excel becoming a supported integration in 2020.
I have a few Excel files stored in Dropbox. I would like to automate the extraction of that data into Azure Data Factory, perform some ETL functions with data coming from other sources, and finally push the final, complete table to Azure SQL.
What is the most efficient way of doing this?
Would it be to automate a Logic App to extract the .xlsx files into Azure Blob Storage, use Data Factory for ETL, join with other SQL tables, and finally push the final table to Azure SQL?
Appreciate it!
Before using a Logic App to extract the Excel files, review the known issues and limitations of the Excel connectors.
If you are importing large files with a Logic App, then depending on the size of the files also consider this thread: logic apps vs azure functions for large files
To summarize the approach, I have listed the steps below:
Step 1: Use an Azure Logic App to upload the Excel files from Dropbox to Blob Storage.
Step 2: Create a Data Factory pipeline with a Copy Data activity.
Step 3: Use the Blob Storage service as the source dataset.
Step 4: Create the SQL database table with the required schema (a hedged sketch follows this list).
Step 5: Do the schema mapping.
Step 6: Finally, use the SQL database table as the sink.
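As an illustration of Step 4, a minimal sketch of a possible sink table; the table and column names are purely hypothetical and would need to match the columns produced by the Excel extraction and ETL:

    -- Hypothetical sink table for the Copy Data activity; adjust names and
    -- types to match the actual columns coming out of the ETL step.
    CREATE TABLE dbo.SalesReport (
        ReportDate   date          NOT NULL,
        Region       nvarchar(50)  NOT NULL,
        ProductCode  nvarchar(20)  NOT NULL,
        Quantity     int           NOT NULL,
        Amount       decimal(18,2) NOT NULL
    );

The schema mapping in Step 5 then maps each source column from the Excel-derived dataset onto these sink columns.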

File Handling in stored procedure - Azure SQL Serverless Database

I'm looking for a solution to store a file in an Azure Storage Account File Share from a stored procedure. I'm not using this file content in my tables; they only hold photo references. Could you please suggest whether any of these approaches would help, or any other alternatives?
Calling Azure Serverless Functions from the Stored Procedure
Accessing a physical path from the stored procedure, for example using the CREATE EXTERNAL DATA SOURCE command.
Calling xp_cmdshell to store the files.
Thanks in advance.
Azure SQL Database doesn't support accessing local files, and it only supports Blob Storage. That's why those approaches don't work.
For now, there is no support for File Share storage. You could add new feedback so the Azure SQL Database product team can see it.

Quickest way to import a large (50 GB) CSV file into an Azure database

I've just consolidated 100 CSV files into a single monster file with a total size of about 50 GB.
I now need to load this into my Azure database. Given that I have already created my table in the database, what would be the quickest method to get this single file into the table?
The methods I've read about include: Import Flat File, Blob Storage/Data Factory, and BCP.
Which is the quickest method that someone can recommend, please?
Azure Data Factory should be a good fit for this scenario, as it is built to process and transform data without worrying about scale.
Assuming that you have the large CSV file stored somewhere on local disk and you do not want to move it to any external storage (to save time and cost), it would be better to simply create a self-hosted integration runtime pointing to the machine hosting your CSV file and create a linked service in ADF to read the file. Once that is done, simply ingest the file and point it at the sink, which is your Azure SQL database.
https://learn.microsoft.com/en-us/azure/data-factory/connector-file-system
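If the file is instead staged in Blob Storage first (one of the options mentioned in the question), a hedged T-SQL alternative is BULK INSERT over an external data source; the data source, file and table names below are placeholders:

    -- Placeholder names; 'CsvBlobSource' must be an external data source with
    -- TYPE = BLOB_STORAGE, and dbo.TargetTable is the table already created.
    BULK INSERT dbo.TargetTable
    FROM 'monster-file.csv'
    WITH (
        DATA_SOURCE = 'CsvBlobSource',
        FORMAT = 'CSV',
        FIRSTROW = 2,
        ROWTERMINATOR = '0x0a',
        TABLOCK,
        BATCHSIZE = 100000   -- commit in batches so the log stays manageable
    );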

Import a large table to an Azure SQL database

I want to transfer one table from my SQL Server instance to a newly created database on Azure. The problem is that the insert script is 60 GB.
I know that one approach is to create a backup file, load it into storage, and then run an import on Azure. But the problem is that when I try to do so, the import on Azure fails with an error:
Could not load package.
File contains corrupted data.
File contains corrupted data.
The second problem is that using this approach I can't copy only one table; the whole database has to be in the backup file.
So is there any other way to perform such an operation? What is the best solution? And if the backup is the best, why do I get this error?
You can use tools out there that make this very easy (point and click). If it's a one-time thing, you can use virtually any tool (Red Gate, BlueSyntax...). You always have BCP as well. Most of these approaches will allow you to back up or restore a single table.
If you need something more repeatable, you should consider using a backup API or coding this yourself using the SqlBulkCopy class.
I don't know that I'd ever try to execute a 60 GB script. Scripts generally do single-row inserts, which aren't very optimized. Have you explored the various bulk import/export options?
http://msdn.microsoft.com/en-us/library/ms175937.aspx/css
http://msdn.microsoft.com/en-us/library/ms188609.aspx/css
If this is a one-time load, using an IaaS VM to do the import into the Azure SQL database might be a good alternative. The data file, once exported, could be compressed/zipped and uploaded to Blob Storage. Then pull that file back out of storage onto your VM so you can operate on it.
Have you tried using BCP in the command prompt?
As explained here: Bulk Insert Azure SQL.
You basically create a text file with all your table data in it and bulk copy it to your Azure SQL database by using the BCP command at the command prompt.
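As a rough sketch of that BCP round trip (server names, database names, credentials and the table name are all placeholders, and character format via -c is one reasonable choice):

    rem Export the table from the local SQL Server instance to a data file.
    bcp dbo.MyLargeTable out MyLargeTable.dat -S localhost -d SourceDb -T -c

    rem Import the data file into the Azure SQL database using a SQL login.
    bcp dbo.MyLargeTable in MyLargeTable.dat -S myserver.database.windows.net -d TargetDb -U myuser -P <password> -c

The -c flag uses character format, which keeps the data file portable between the on-premises server and Azure at the cost of a larger file.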