SQL Server 2012 Database Transaction Replication performance issue

We have configured SQL Server 2012 transactional replication for our client's .NET web application to split the transactional and reporting workloads across different SQL Servers.
Transactional replication is set up with SQL-Node1 acting as the master (publisher) database server, and the master database is replicated to SQL-Node2 so that our web application can pull reports from it. The application handles a large volume of transactions and uploads data from Excel sheets, roughly 10 million entries each day.
A few weeks after configuring replication on the two SQL Server 2012 instances, we started facing performance issues: some resources get locked while files are being uploaded into the database, so the application cannot access those tables and data, and the server becomes very slow during the daytime when users access the web application.
We are now looking to distribute the load across three SQL Server 2012 nodes: the web application would access and transact data on SQL-Node1, reporting queries would pull data from SQL-Node2, and SQL-Node3 would be used to upload the Excel sheet data, which would then be replicated to all the other nodes.
Current setup: all servers run Windows Server 2008 Standard and SQL Server 2012 Enterprise Edition.
Database size: approx. 15 GB / Replication used: transactional / Distributor role configured on SQL-Node1 / Subscriber role configured on SQL-Node2.
We are looking for a solution that resolves the issues above by distributing the different workloads (reporting, data uploading, transactions) and replicating data between all SQL nodes.
Which feature would perform best for this scenario: SQL Server 2012 AlwaysOn high availability, SQL Server replication, or database mirroring?
A quick response would be highly appreciated.

Because you have changes happening at more than one node (transactional data at node 1, Excel uploads at node 3), "none of the above". All of the technologies mentioned are built around having data changes happen in one location and propagating them to the others. You could look at peer-to-peer replication, but it seems like overkill.
If it were me, I'd try to diagnose why your file upload process is killing performance and fix or work around that. Once you do that, I'd move that process back to node 1 and implement an availability group with a readable secondary to cover your reporting needs (with the added bonus of HA).
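For the reporting piece, a minimal sketch of what such an availability group might look like, assuming the Windows failover-cluster and database-mirroring endpoint prerequisites are already in place (the group, database, and domain names below are placeholders):

    -- Hypothetical two-replica availability group: SQL-NODE1 stays the read/write primary,
    -- SQL-NODE2 is a readable secondary that reporting queries can be pointed at.
    CREATE AVAILABILITY GROUP [AG_WebApp]
    FOR DATABASE [WebAppDB]
    REPLICA ON
        N'SQL-NODE1' WITH (
            ENDPOINT_URL      = N'TCP://SQL-NODE1.example.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = AUTOMATIC,
            SECONDARY_ROLE (ALLOW_CONNECTIONS = NO)),
        N'SQL-NODE2' WITH (
            ENDPOINT_URL      = N'TCP://SQL-NODE2.example.local:5022',
            AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT,
            FAILOVER_MODE     = MANUAL,
            SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));  -- reporting connects here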

All of these technologies would bog down on a large data import that is done in one big transaction. I suggest doing it as an ETL-like process: import into a staging table and migrate the data into the production table in bite-sized chunks (test several batch sizes to find the one that works best in your environment). Two servers should be fine with replication on a cluster for HA for the workloads you are talking about.
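A minimal sketch of that chunked pattern, assuming the staging table has an incrementing key (the table, column, and batch-size names below are placeholders to tune per environment):

    -- Move rows from staging into production in small batches so no single huge
    -- transaction hits the transaction log or the replication log reader.
    DECLARE @BatchSize int = 50000,
            @LastId    bigint = 0,
            @MaxId     bigint;

    SELECT @MaxId = MAX(StagingId) FROM dbo.StagingTable;

    WHILE @LastId < ISNULL(@MaxId, 0)
    BEGIN
        INSERT INTO dbo.ProductionTable (Col1, Col2, Col3)
        SELECT Col1, Col2, Col3
        FROM dbo.StagingTable
        WHERE StagingId >  @LastId
          AND StagingId <= @LastId + @BatchSize;

        SET @LastId = @LastId + @BatchSize;   -- advance the watermark one chunk at a time
    END;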

Related

SQL Server 2016 point in time replication tool

I need to run an end-of-day 'replication tool/application' against a SQL Server 2016 Always On database (1 primary, 2 synchronous replicas, 1 asynchronous replica). The job needs to run at a point in time when the database is serving limited web traffic, but I can't stop the web/application layer, which is up 24/7.
E.g., my requirement is to replicate all data in the DB at 1 am.
I was wondering how best to solve this:
Create a backup and restore of the async DB, then run the replication tool off this "new DB".
Create a database snapshot of the async DB, then run the replication tool off the snapshot (see the snapshot sketch after this question).
Something much easier that I'm missing?
*Note: the tool I'm using is proprietary to a product, and I can't use any native SQL replication tools.
Many thanks
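For the database snapshot option mentioned above, a snapshot can be created with plain T-SQL; a minimal sketch, where the database name, logical file name, and snapshot path are placeholders and the source is assumed to be a readable replica:

    -- Point-in-time, read-only snapshot of the async replica's copy of the database.
    CREATE DATABASE SalesDB_Snapshot_0100
    ON ( NAME = SalesDB_Data,                        -- logical data file name of SalesDB
         FILENAME = 'D:\Snapshots\SalesDB_0100.ss' )
    AS SNAPSHOT OF SalesDB;

    -- Drop the snapshot once the end-of-day tool has finished reading from it.
    DROP DATABASE SalesDB_Snapshot_0100;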

Migrate on-prem SQL Server database to Azure SQL database

We're in the process of a server migration from an on-prem server (Win2008R2) to Azure PaaS.
To move the DBs, we used the Microsoft Data Migration Assistant (DMA) tool, which worked great and we can connect to the migrated Azure DB via SQL Server Management Studio.
Considering:
Made quite a few changes to the migrated Azure DB (tables, stored procedures, indexes) to work with the apps in Azure
Combined multiple on-prem DBs into one DB in Azure via DMA to save costs
On-prem DB is continually being modified by insert/update operations (multiple tables) during the migration process
Question: what is the best and fastest way to migrate the data (everything vs. only missing/updated rows), considering the above?
I would recommend migrating only the schema of your on-premises databases to Azure SQL Database first, and then letting Azure SQL Data Sync migrate the data to Azure and keep it updated in the Azure SQL database.
My suggestion to start with an empty schema on the Azure SQL Database side is that when SQL Data Sync finds data both on-premises and in Azure, it starts comparing the two databases, and that consumes a lot of resources.
On the initial sync, SQL Data Sync may still consume a lot of resources on the on-premises database server even with an empty schema on the Azure side. To limit that, you can use SQL Server Resource Governor to cap the CPU used by the Data Sync sessions on your on-premises SQL Server and avoid a big performance impact on database users.
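A minimal Resource Governor sketch along those lines, run on the on-premises instance (the pool and group names, the 30% cap, and the login used by the sync sessions are all placeholders):

    -- Pool and workload group that cap CPU for the Data Sync sessions.
    CREATE RESOURCE POOL DataSyncPool WITH (MAX_CPU_PERCENT = 30);
    CREATE WORKLOAD GROUP DataSyncGroup USING DataSyncPool;
    GO

    -- Classifier function (created in master) that routes the sync login into the capped group.
    CREATE FUNCTION dbo.fn_DataSyncClassifier()
    RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        IF SUSER_SNAME() = N'DataSyncLogin'    -- placeholder login used by Data Sync
            RETURN N'DataSyncGroup';
        RETURN N'default';
    END;
    GO

    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_DataSyncClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;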
When you are ready, you can switch your users over to Azure (gradually, if SQL Data Sync is running in bi-directional mode, or all at once). Once your users have been migrated, you can remove the member database (the on-premises database) from the SQL Data Sync configuration and stop the SQL Data Sync operation.
I disagree with all the answers here.
If you are running on Windows Server 2008 R2, there is a high chance that you are on an old SQL Server (2008? 2012?), both of which are deprecated and poorly suited to Azure SQL Database. The application is probably also old and not well suited to the cloud in general. I suggest a thorough testing phase.
Here is my to-do list:
Upgrade SQL Server to SQL Server 2016 on-prem and test whether all your queries still run correctly.
Test how ready your SQL Server is for Azure SQL Database using the Microsoft Data Migration Assistant (DMA) tool or the new Azure SQL Migration extension for Azure Data Studio (which came out this month).
Don't think for a second that merging databases will reduce your overall costs. Decide between multi-tenant and single-tenant on its own merits, not because of the price of the database.
Plan for hours of downtime based on the size of the migration. Don't migrate while your database is being modified; expect downtime. The best approach is to take a backup from the day before and then replay the logs.
And test like crazy. This is not going to be easy because the app is old.
Good luck.
Visual Studio also has a great tool for comparing both schema and data between two databases on different servers.
It can then update the target database with any changes, after which you can switch over to use the Azure DB.
This method would require downtime of around 5-30 minutes depending on the amount of data, but that might be acceptable depending on your requirements.

Azure Growing SQL Database

1. I am new to Azure; I want to know whether we can have the same replication mechanism provided by on-premises SQL Server on an Azure SQL DB.
2. The issue we are facing is that a few of the tables are growing fast, with around 10k records inserted daily, so we are planning to keep only a few months of data (say 6 months) in the main DB and copy all data to another DB using replication (not sure if that's feasible).
We also need the application to read data from that copy for some reports.
Please suggest whether replication will work for this, or any other solution.
Geo-replication uses a version of AlwaysOn with async replicas under the hood. It is very similar to a distributed availability group in SQL Server 2016, but you cannot control it; you can only turn it on or off.
Replication will work for that, but it would replicate all the data in the DB, not just the tables you want.
Link to Azure Documentation: https://azure.microsoft.com/en-us/documentation/articles/sql-database-geo-replication-overview/
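For completeness, active geo-replication can also be enabled per database with T-SQL; a minimal sketch, run in the master database of the primary logical server (the database and partner server names are placeholders, and the partner server must already exist):

    -- Creates a readable secondary copy of MyAppDb on the partner logical server,
    -- which reporting queries could then be pointed at.
    ALTER DATABASE [MyAppDb]
        ADD SECONDARY ON SERVER [my-report-server]
        WITH (ALLOW_CONNECTIONS = ALL);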

SQL Server Distributed Partitioned Views: how to add a new node online

We are using distributed partitioned views in SQL Server 2012 Enterprise Edition to scale out our data across more than one server. Now we face the question of how to add a new node (server) to the scaled-out DB server system without taking the servers down, so that our users can keep working during the process.
For example, we have 4 servers with scaled-out data. When we add a new empty server, the CHECK constraints on the partitioning columns have to be reorganized, but while that happens the partitioned views do not work.
The high availability, AlwaysOn, and failover cluster approaches do not seem to solve the problem.
So my question is: how do we add a new node online?
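For context, a minimal sketch of what such a distributed partitioned view looks like (server, database, table, and range values are placeholders); adding a node means re-slicing these CHECK ranges and recreating the view on every member, which is exactly the step the question wants to perform online:

    -- Member table on SERVER1; SERVER2 holds dbo.Orders_2 covering the next OrderId range.
    CREATE TABLE dbo.Orders_1 (
        OrderId   int      NOT NULL
            CONSTRAINT CK_Orders_1 CHECK (OrderId BETWEEN 1 AND 1000000),
        OrderDate datetime NOT NULL,
        CONSTRAINT PK_Orders_1 PRIMARY KEY (OrderId)
    );

    -- The partitioned view, created on every participating server over linked servers.
    CREATE VIEW dbo.Orders
    AS
    SELECT OrderId, OrderDate FROM SERVER1.SalesDB.dbo.Orders_1
    UNION ALL
    SELECT OrderId, OrderDate FROM SERVER2.SalesDB.dbo.Orders_2;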

How does the SSIS data flow really work?

I have an SSIS package which transfers the data from one database to another.
The SSIS package runs on an application server.
I am thinking of moving one of the two databases to another database server. Will there be an impact on performance? How does the data flow in SSIS, i.e. does all the data pass through the application server where SSIS runs and then on to the destination database?
SSIS is a client-side process, so if it is running on a server other than the machine running the DBMS, the traffic will go over the network. Your question is not very clearly worded, but I think you want to know whether moving a DB will affect performance given that the SSIS package is already running on a separate machine.
If the SSIS job is already running on an application server that is a physically separate machine from the DB server, then moving one of the databases will probably not affect performance, unless the new server has a radically slower network connection than the other.
I recently came across the same situation, and we upgraded our source system to a better-configured box. I didn't have to do anything on my part, but the data load times from the source to the SQL box were cut from approximately 40 minutes to under 12 minutes on average. To answer your question, you will only see a performance variance depending on 1) the new system's resources and 2) whether you make changes to the box hosting your SQL Server.