Copy Data From On-Premises SQL Server To Azure SQL - Azure Private Network

Requirement: I want to copy data from a specific table/view residing on an on-premises SQL Server to an Azure SQL DB.
Infrastructure: As depicted in the picture below, the Azure network is directly connected to the corporate network over ExpressRoute. It is therefore a purely private network connection, effectively as good as the corporate network itself.
Issue/Question: I know there are multiple approaches to get this done, and I am not restricted to the ADF Copy Data tool. However, all of them come with caveats or extra steps, as below:
ADF Copy Data tool: Needs a self-hosted integration runtime (SHIR), which means a small MSI package must be installed on the on-premises machine that hosts the SQL Server in order to register it.
Logic Apps: Needs a virtual gateway or an App Service Environment (ASE).
App Service: If the operation is wrapped in a C# application deployed to Azure Web Apps, then in order to connect to the on-premises SQL Server we need to set up the Hybrid Connection Manager, and as in #1 we need to install something on the on-premises machine.
For my case, none of these extra steps can be done: the on-premises SQL Server belongs to a different business unit, so I have no permissions on that machine beyond the grant they have given me on a table/view.
Moreover, since everything is connected over ExpressRoute as a direct connection, both the on-premises SQL Server and Azure SQL are essentially inside the same corporate network, as the picture above shows. I should therefore be able to access them directly, without configuring any of the extra steps mentioned above.
Please confirm these points and provide a suggestion.
Thank you.

You can still go with the ADF scenario without a SHIR by creating the ADF in a managed virtual network and using a private endpoint. Since you already have an ExpressRoute circuit and the flexibility to configure the Azure side, you can do this with the Azure IR; see: Access on-premises SQL Server from Data Factory Managed VNet using Private Endpoint - Azure Data Factory | Microsoft Docs

There are two solutions that could work for your scenario, but even for them you would need some access to the on-premises SQL Server machine, at least for one-time configuration, and the Azure SQL DB should be reachable from SSMS installed on the on-premises machine.
Using a linked server
You can create a linked server (the process is explained here: https://www.sqlshack.com/create-linked-server-azure-sql-database/) on the on-premises server and create a SQL Server Agent job to insert data into the Azure SQL DB table.
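For illustration, here is a minimal sketch of that one-time setup, with the T-SQL driven through pyodbc. All server, database, login, and table names are hypothetical placeholders; the linked article walks through the same steps in SSMS.

```python
# Sketch: register the Azure SQL logical server as a linked server on the
# on-premises instance (run once). All names below are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=onprem-sql;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,
)
cur = conn.cursor()

# Create the linked server entry pointing at the Azure SQL logical server.
cur.execute("""
EXEC master.dbo.sp_addlinkedserver
     @server     = N'AZURESQL',
     @srvproduct = N'',
     @provider   = N'MSOLEDBSQL',
     @datasrc    = N'myserver.database.windows.net',
     @catalog    = N'TargetDb';
""")

# Map local logins to a SQL authentication login on the Azure side.
cur.execute("""
EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname  = N'AZURESQL',
     @useself     = 'FALSE',
     @rmtuser     = N'azure_sql_user',
     @rmtpassword = N'<password>';
""")

# The SQL Server Agent job would then run something like:
#   INSERT INTO AZURESQL.TargetDb.dbo.MyTable (Col1, Col2)
#   SELECT Col1, Col2 FROM SourceDb.dbo.MyView;
```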
Via a Python script
This would need a Python installation on the on-premises machine. Once it is installed, you can write a script to transfer data between the on-premises SQL Server and the Azure SQL DB, and schedule it, again using a SQL Server Agent job.
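A minimal sketch of such a script, assuming SQL authentication on the Azure side and hypothetical server, database, and object names:

```python
# Minimal sketch: copy rows from an on-premises view into an Azure SQL
# table. Server, database, credential, and object names are hypothetical.
import pyodbc

SRC = ("DRIVER={ODBC Driver 17 for SQL Server};SERVER=onprem-sql;"
       "DATABASE=SourceDb;Trusted_Connection=yes;")
DST = ("DRIVER={ODBC Driver 17 for SQL Server};"
       "SERVER=myserver.database.windows.net;DATABASE=TargetDb;"
       "UID=azure_sql_user;PWD=<password>;Encrypt=yes;")

BATCH = 10_000  # copy in chunks to keep memory bounded

src = pyodbc.connect(SRC)
dst = pyodbc.connect(DST)
read, write = src.cursor(), dst.cursor()
write.fast_executemany = True  # bulk parameter binding for speed

read.execute("SELECT Col1, Col2 FROM dbo.MyView")
while True:
    rows = read.fetchmany(BATCH)
    if not rows:
        break
    write.executemany(
        "INSERT INTO dbo.MyTable (Col1, Col2) VALUES (?, ?)", rows)
    dst.commit()  # commit per batch to avoid one huge transaction

src.close()
dst.close()
```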


Why does external I.P. need access to on-prem sql database when copying data with ADF to Azure SQL?
It looks like the on-prem SQL Server makes a direct connection to Azure SQL (bypassing ADF). Is this by design, or am I following the wrong workflow?
Data Factory uses an integration runtime to create the connection to the source/sink dataset: the Azure integration runtime for cloud datasets and the self-hosted integration runtime for on-premises source/sink datasets.
The integration runtime (IR) is the compute infrastructure that Azure Data Factory uses to provide data-integration capabilities across different network environments. For details about IR, see Integration runtime overview.
A self-hosted integration runtime can run copy activities between a cloud data store and a data store in a private network. It also can dispatch transform activities against compute resources in an on-premises network or an Azure virtual network. The installation of a self-hosted integration runtime needs an on-premises machine or a virtual machine inside a private network.
The Azure integration runtime is provided by ADF by default; the self-hosted integration runtime must be created manually.
That means Data Factory cannot access the on-prem SQL database directly. It needs the self-hosted integration runtime to connect to the on-prem SQL database.
In other words, the on-prem SQL Server does not make a direct connection to Azure SQL (bypassing ADF). That is why an external IP needs access to the on-prem SQL database when copying data with ADF to Azure SQL.
HTH.

SQL server to Azure process workflow migration

We are supporting a legacy system for our organisation. In the current setup, we receive SQL Server backups (.bak files) from the application vendor on an FTP location: every Sunday a full backup, and on every other day a differential.
On our side, we have a SQL Server instance with custom stored procedures that are scheduled to check the location every morning and restore the backups each day. The restored databases are then used by the organisation for internal reporting, and there are hundreds of other stored procedures written for different reports in different DBs on the same instance. The daily restore boils down to something like the sketch below.
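```python
# Simplified sketch of the daily restore (paths and names here are made
# up): restore Sunday's full backup without recovery, then apply the
# latest differential and recover.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=reporting-sql;"
    "DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # RESTORE cannot run inside a user transaction
)
cur = conn.cursor()

# Base full backup from Sunday; leave the DB ready for differentials.
cur.execute("""
RESTORE DATABASE [VendorDb]
FROM DISK = N'D:\\ftp\\VendorDb_full.bak'
WITH NORECOVERY, REPLACE
""")
while cur.nextset():  # drain progress messages until the restore completes
    pass

# Latest differential brings the copy up to date and recovers it.
cur.execute("""
RESTORE DATABASE [VendorDb]
FROM DISK = N'D:\\ftp\\VendorDb_diff.bak'
WITH RECOVERY
""")
while cur.nextset():
    pass
```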
Since SQL Server 2008 is now out of support, and to save the cost of running an on-premises system, my team has been tasked with looking into migrating this whole system to Azure SQL Database.
My question is: what is the most effective way to move this workflow to the cloud? I have an Azure trial account set up to try things out, but I haven't been successful in restoring the .bak files on an Azure SQL instance.
Thanks.
You essentially have two options for Azure: either perform a fairly linear lift-and-shift to SQL Server on an Azure VM, or go with the more advanced Azure PaaS offering, Azure SQL Database Managed Instance. The specific deployment option Azure SQL Database (single database) will not support what your current solution requires with regard to .bak files, and I have detailed that below. For further details on the differences between Azure SQL Database single database and Managed Instance, please see: Features comparison: Azure SQL Database and Azure SQL Managed Instance
The second option is to leverage the Azure Enterprise Ready Analytics Architecture (AERAA) (link) of Azure (PaaS) analytics services. With Azure SQL Database (PaaS) services, as opposed to on-premises SQL Server or SQL Server on an Azure VM, there is no integration runtime or Analysis Services bundled as a service component; these are separate PaaS offerings. The linked AERAA blog will give you a better understanding of the Azure analytics services.
The .bak versus .bacpac support dilemma:
Since the main requirement for your solution is support for .bak files, you need to understand where .bak and where .bacpac files are supported. The term Azure SQL Database applies both to a specific deployment option of the Azure SQL (PaaS) database service and as a general term for Azure SQL cloud databases. As for the specific deployment option, neither Azure SQL Database single databases nor elastic pools will support your scenario with .bak files. This deployment option supports export/import functionality via the .bacpac file format; it does not support full/partial restore functionality. The backup/restore functionality, although configurable, is only in scope for the specific databases hosted by an Azure SQL (logical) server instance. Basically, you cannot restore an external file; you can only import, which is always a full copy. For that reason, for an Azure PaaS database service you will need Azure SQL Database Managed Instance for .bak file support, or you can deploy a SQL Server image to an Azure VM and migrate your objects via the Azure Database Migration Service.
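To make the .bacpac route concrete, here is a hedged sketch that drives the SqlPackage utility from Python. The connection strings and paths are assumptions, SqlPackage must already be installed, and note that an import is always a full copy, so this does not solve the differential part of your workflow.

```python
# Sketch: export an on-premises database to a .bacpac and import it into
# Azure SQL Database with SqlPackage. Paths and connection strings are
# hypothetical; SqlPackage (from SSMS/DacFx) must be on PATH.
import subprocess

BACPAC = r"C:\temp\LegacyDb.bacpac"

# Export the source database schema + data to a .bacpac file.
subprocess.run([
    "SqlPackage", "/Action:Export",
    "/SourceConnectionString:Server=onprem-sql;Database=LegacyDb;"
    "Integrated Security=true;",
    f"/TargetFile:{BACPAC}",
], check=True)

# Import the .bacpac as a new database on the Azure SQL logical server.
subprocess.run([
    "SqlPackage", "/Action:Import",
    f"/SourceFile:{BACPAC}",
    "/TargetConnectionString:Server=myserver.database.windows.net;"
    "Database=LegacyDb;User Id=azure_sql_user;Password=<password>;"
    "Encrypt=true;",
], check=True)
```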
Regards,
Mike

Remote SQL Server backups using Azure

I've got a handful of databases running on a SQL Server instance. I don't have access to install the Azure Backup agent, but I do have connection details and credentials to access the databases and perform backups in SQL Server Management Studio.
What I want to do is perform and schedule these backups and save them into Azure Blob Storage. I could run the schedule on my local computer, but that's not an ideal solution.
I've got a PowerShell script that performs this action for me, but it relies on SQL Server assemblies to run. I've tried running it as a DevOps build task, but I am unable to do so without the assemblies it requires.
Does anybody know a way of setting this up, in Azure for example? Is there a resource that will let me connect to a SQL instance via a connection string, back it up, and save the backup down to Blob Storage? Or an Azure Function, perhaps?
Is there a resource that will allow me to connect and back up a SQL instance via a connection string and save down to Blob Storage?
I'm afraid the answer is no; we can't find any API support in Azure to help you achieve that directly.
I think SQL Server Management Studio plus a PowerShell script is the more suitable approach for you.
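If you do script it yourself, one way to avoid the SQL Server assembly dependency is to have the instance run a native BACKUP ... TO URL statement over a plain connection string. A hedged sketch, assuming a SAS-based credential for the container already exists on the instance and the login has backup permission (all names are placeholders):

```python
# Sketch: trigger a native backup straight to Azure Blob Storage over a
# plain connection string. Assumes a server credential for the container
# URL (created from a SAS token) already exists on the instance and that
# the login has BACKUP DATABASE permission. All names are hypothetical.
import datetime
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=remote-sql;"
    "DATABASE=master;UID=backup_user;PWD=<password>;",
    autocommit=True,  # BACKUP cannot run inside a user transaction
)
cur = conn.cursor()

stamp = datetime.datetime.utcnow().strftime("%Y%m%d_%H%M%S")
url = f"https://mystorage.blob.core.windows.net/backups/MyDb_{stamp}.bak"

cur.execute(f"BACKUP DATABASE [MyDb] TO URL = '{url}' WITH COMPRESSION")

# BACKUP reports progress through extra result sets/messages; drain them
# so the backup finishes before the connection is closed.
while cur.nextset():
    pass
```

Because this needs only pyodbc and network access to the instance, it could in principle run on a schedule from somewhere other than your local computer.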
Alternatively, you could consider the third-party tool SQLBackupAndFTP, which can schedule SQL Server backups to Azure Blob Storage.
Hope this helps.

Azure SQL PaaS - Limitations

We are trying to evaluate the possibility of migrating our in-house SQL DB server to Azure SQL as a PaaS.
Our legacy Windows application was written in VB6 and is now running on VB.NET (.NET Framework 4.5).
Clarifications I need if I migrate only the DB server to Azure:
We use both trusted and credential-based SQL connections from our desktop application to the SQL DB. If we migrate to Azure SQL, will it support trusted connections that authenticate the organisation's current NT users?
We have a lot of cross-DB queries; will we face any challenges running these queries as they are?
At runtime we take a DB backup/restore for some business cases. Does this still work?
Are there any restrictions on the number of admin users on an Azure DB?
Probably yes, if you sync your local AD with Azure AD (see: Use Azure Active Directory Authentication for authentication with SQL Database, Managed Instance, or SQL Data Warehouse).
Azure SQL Database (PaaS) doesn't support cross-DB queries by default; you have to set up and use Elastic Query for that.
Yes, you can take a DB backup at runtime and also restore it. There is also a point-in-time restore feature available. See: Learn about automatic SQL Database backups.
I think you can only specify one server administrator (at least within the portal), but I doubt you will reach any limit on DB users.
Instead of using the single-database SQL PaaS service, you should also consider Managed Instance (preview).
You will have to extend your Active Directory to Azure Active Directory to keep using trusted connections. You can learn how to do that in this documentation and this one.
On Azure SQL Database you have elastic queries, which allow you to run cross-database queries. Learn how to create elastic queries here; a sketch of the setup follows below.
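As an illustration, here is a hedged sketch of the one-time elastic query setup, executed against the database that should issue the cross-database queries; all names and secrets are placeholders.

```python
# Sketch: set up an elastic query so TargetDb can query a table that lives
# in OtherDb on the same logical server. Names/secrets are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=TargetDb;"
    "UID=admin_user;PWD=<password>;Encrypt=yes;",
    autocommit=True,
)
cur = conn.cursor()

for stmt in [
    # One-time security objects for the remote connection.
    "CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>'",
    """CREATE DATABASE SCOPED CREDENTIAL ElasticCred
       WITH IDENTITY = 'remote_user', SECRET = '<password>'""",
    # Point at the remote database.
    """CREATE EXTERNAL DATA SOURCE RemoteDb WITH (
       TYPE = RDBMS,
       LOCATION = 'myserver.database.windows.net',
       DATABASE_NAME = 'OtherDb',
       CREDENTIAL = ElasticCred)""",
    # External table mirroring the remote table's schema.
    """CREATE EXTERNAL TABLE dbo.RemoteOrders (
       Id int NOT NULL, Amount decimal(10, 2))
       WITH (DATA_SOURCE = RemoteDb)""",
]:
    cur.execute(stmt)

# Cross-database queries now work as if the table were local:
cur.execute("SELECT COUNT(*) FROM dbo.RemoteOrders")
print(cur.fetchone()[0])
```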
You can export your databases as .bacpac files to Azure Storage or to an on-premises location very easily.
You can configure one server admin or one Azure Active Directory admin (it can be a group) for your Azure SQL server. However, at the database level you can add many database users to the dbmanager role. You can find more information about this topic here.

SQL Server repository for informatica powercenter

I have a PowerCenter 9.1 installation on Windows Server 2008 R2.
The repository is on the same box, hosted on SQL Server 2012. I have configured a new user (with SQL Server authentication) and made the repository DB owned by that user (it has the db_owner role).
The core problem: I am not able to run a simple test workflow on this setup.
Here's what I have tried:
The Windows firewall has been down for about an hour now.
The repository service and integration service are running in trace and debug mode, respectively.
The integration service log complains that it can't find a certain session for a certain workflow in a certain folder (with IDs for all of them).
When I log into SQL Server Management Studio and query the repository tables for those exact items (since I have the IDs from the logs), all the data is present...
I fail to understand what I am messing up...
Disclaimer: my knowledge of SQL Server is really low, maybe 1 or 2 on a scale of 10, since I have been living on the other side of the fence (with Oracle) for all of my career...
Did you try keeping the SQL Server login/user name and the associated default schema name the same? A sketch of how to check and fix that follows below.
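```python
# Sketch (hypothetical names): align the repository user's default schema
# with the user name so unqualified table references resolve correctly.
# Connect as an admin with permission to alter users.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=infa_repo;UID=sql_admin;PWD=<password>;",
    autocommit=True,
)
cur = conn.cursor()

# Check the current default schema for the repository user.
cur.execute(
    "SELECT default_schema_name FROM sys.database_principals WHERE name = ?",
    "infa_user")
print(cur.fetchone())

# Make the default schema match the user name, so the repository's
# unqualified queries hit the schema its tables were created in.
cur.execute("ALTER USER infa_user WITH DEFAULT_SCHEMA = infa_user")
```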