Please advise what suits my problem better. I have a high-load web app hosted on the same server where SQL Server is hosted. I also have SQL Server Reporting Services running on the same server, generating user reports.
So my server is essentially bound by disk read/write speed. I'm going to get another server and install a second SQL Server instance on it to host SSRS. My main criterion is to get data that is as fresh as possible.
I've looked at a couple of solutions. Currently I take backups via jobs, copy them to the second server, and restore them there, also via jobs. But that's not the best solution.
All the replication mechanisms (transactional, merge, snapshot) affect the publisher database by locking its tables, which is unacceptable for me.
So I wonder: is there any way to create a read-only replica that is synced periodically without affecting the main DB? I would put all the reporting load on that replica and leave the primary DB to the web app alone.
What solution might suit my problem? I'm not a DBA, so point me in the right direction and I'll start investigating. Thanks.
Transactional Replication is typically used to off-load reporting to another server/instance and can be near real-time in a best case scenario. The benefit of Transactional Replication is you can place different indexes on the subscriber(s) to optimize reporting. You can also choose to replicate only a portion of the data if only a subset is needed for reporting.
The only time locking occurs with Transactional Replication is when you generate a snapshot. With concurrent snapshot processing, which is the default for Transactional Replication, the shared locks are only held for a short period of time, so users are able to continue working uninterrupted. Either way, this shouldn't be an issue since you'll likely be generating the snapshot during a period of low user activity anyway.
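For illustration, a minimal T-SQL sketch of such a publication might look like this (the database and table names, SalesDb and Orders, are made up, a Distributor must already be configured, and in practice you would more likely use the New Publication wizard in Management Studio):

```sql
-- A rough sketch only; SalesDb and Orders are assumed names.
USE SalesDb;
GO

-- Enable the database for transactional publishing.
EXEC sp_replicationdboption
    @dbname = N'SalesDb',
    @optname = N'publish',
    @value = N'true';
GO

-- Create the publication; 'concurrent' snapshot processing is the
-- default for transactional replication and keeps shared locks brief.
EXEC sp_addpublication
    @publication = N'SalesDb_Reporting',
    @sync_method = N'concurrent',
    @repl_freq = N'continuous',
    @status = N'active';
GO

-- Publish only the tables needed for reporting.
EXEC sp_addarticle
    @publication = N'SalesDb_Reporting',
    @article = N'Orders',
    @source_object = N'Orders',
    @source_owner = N'dbo';
GO
```

On the subscriber you are then free to add whatever indexes the reports need, without touching the publisher.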
Can someone give me a clear idea of which technique/method is more reliable, less memory-consuming, and faster for replicating data from one database to another in SQL Server 2012, and why? We are in the process of developing a live GPS-based tracking application and I am unsure which method to proceed with:
Trigger Based Replication (Live Sync)
(OR)
Transactional Replication
Thanks in Advance ☺
I would recommend using standardised solutions whenever possible. Of the choices given to you, transactional replication should be the obvious favourite, because:
It doesn't require any coding and can be deployed using standard tools. This makes it much faster to deploy and maintain: any proper DBA can do it, some of them even blindfolded.
Actual data transfer is done by replication agents, which are separate applications external to the SQL Server process and client connections. Any network issues within the publisher-distributor-subscriber(s) chain will lead to delays in copying the data, but they will not affect the performance of the publisher database itself.
With triggers, you have neither of these advantages: you will have to add a lot of code, and a sluggish network will make data-changing queries slower, potentially leading to timeouts, as the sketch below illustrates.
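For instance, a hypothetical trigger-based "live sync" might look like this (table, column, and linked server names are all invented for illustration):

```sql
-- Hypothetical "live sync" trigger; [ReportServer], TrackingDb, and
-- dbo.Positions are assumed names. The remote INSERT runs inside the
-- same transaction as the local one (typically promoted to a
-- distributed transaction via MSDTC), so any delay or failure on
-- [ReportServer] slows down or rolls back the local write.
CREATE TRIGGER trg_Positions_Sync
ON dbo.Positions
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO [ReportServer].[TrackingDb].[dbo].[Positions]
        (DeviceId, Latitude, Longitude, RecordedAt)
    SELECT DeviceId, Latitude, Longitude, RecordedAt
    FROM inserted;
END;
```

Every write to dbo.Positions now pays for a network round trip, which is exactly the coupling that replication agents avoid.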
Of course, there are many more ways to move the data between the databases in SQL Server, such as (in no particular order):
AlwaysOn Availability Groups (or their predecessor, database mirroring);
Log shipping;
CDC (Change Data Capture);
Service Broker.
However, given your needs, transactional replication still looks like your best bet, overall.
I have a SQL Server 2012 database which is currently used as both a transactional database and a reporting database. The application reads/writes into the same database, and the reports are also generated against the same database.
Due to some performance issues, I have decided to maintain two copies of the database. One will be the transactional database, accessed by the application. The other will be an exact copy of the transactional database, used only by the reporting service.
These are the requirements:
The reporting database should be synced with the transactional database every hour. That is, the reporting database can have data that is stale by at most one hour.
It must be a read-only database.
The main intention is NOT recovery or availability.
I am not sure which strategy (transaction log shipping, mirroring, or replication) is best suited to my case. Also, if I run the sync more frequently (say, every 10 minutes), will there be any impact on the transactional database or the reporting service?
Thanks
I strongly recommend using a standby database in a read-only state, with a scheduled SQL Server Agent job that every 15 minutes: a) generates a new .trn log file from the main DB, and b) restores it into the standby one (your reports DB). The only issue is that with this technique, your sessions are disconnected from the standby while the agent restores the .trn log file. But if you can stop the restore job, run your reports, and then re-enable it, there is no problem. This seems to be exactly what you need. And if your reports run quickly, you probably won't be disconnected at all; if I'm not wrong, the restore job can also be configured to either wait for open sessions to finish or to kill them. I can double-check that last point for you tomorrow if you don't find it.
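A minimal sketch of the two job steps (the names MainDb, ReportsDb, and the paths are assumptions, and the standby copy must first be initialised from a full backup restored WITH STANDBY):

```sql
-- Step 1, job on the primary server: back up the transaction log.
BACKUP LOG MainDb
TO DISK = N'\\fileserver\logs\MainDb.trn'
WITH INIT;

-- Step 2, job on the reporting instance: restore the log into the
-- standby copy. WITH STANDBY keeps the database readable between
-- restores; the restore cannot proceed while sessions are connected,
-- which is why reporting users get disconnected (or block the job).
RESTORE LOG ReportsDb
FROM DISK = N'\\fileserver\logs\MainDb.trn'
WITH STANDBY = N'D:\SQLData\ReportsDb_undo.dat';
```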
And since this runs in the same SQL Server instance, you don't have to worry about extra licensing...
I am not a DBA; however, my small company is using SQL Server for a project that we are working on. On the same SQL Server instance there is an MS Great Plains (Dynamics GP) database, as we pass data back and forth between the two databases (mainly a Scribe process pulling our data and transferring it into GP).
We are using database replication (snapshot) as a means of syncing our production and development (and soon DR) environments. Right now it's set to replicate every three hours during core business hours, mainly to keep production and development up to date for us while we are working.
1) Is this the correct way of doing such a thing? Is there a better way?
2) Does this stress the server or the SQL Server? Is this a possible cause of GP database issues because they are on the same server and instance?
3) Replication only occurs on the non GP database - this shouldn't affect the GP database at all right?
Our database should stay rather small. It is my understanding that tables get locked while the snapshot is being generated. Do the tables stay locked until the entire snapshot is done, or is each one released as soon as it has been copied, while the process continues?
There are many ways to sync one SQL Server with another. There is replication (which you are currently using), log shipping, backup/restore, mirroring, and Always On, to name a few methods.
The "best" method depends on your requirements. If you're concerned about disaster recovery, snapshot replication is not a great option and I would look into AlwaysOn Availability Groups.
If load on your production system is a concern, I would look into nightly restoring a backup of the production system.
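For example, a nightly refresh can be as simple as this sketch (the database names, logical file names, and paths are all assumptions):

```sql
-- On production: take a full backup to a shared location.
BACKUP DATABASE ProdDb
TO DISK = N'\\fileserver\backups\ProdDb.bak'
WITH INIT, COMPRESSION;

-- On the dev/DR server: restore it, replacing the previous copy.
-- MOVE remaps the assumed logical file names to local paths.
RESTORE DATABASE DevDb
FROM DISK = N'\\fileserver\backups\ProdDb.bak'
WITH REPLACE,
     MOVE N'ProdDb'     TO N'D:\SQLData\DevDb.mdf',
     MOVE N'ProdDb_log' TO N'D:\SQLData\DevDb_log.ldf';
```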
To answer your specific questions:
1) Is this the correct way of doing such a thing? Is there a better way?
This answer depends on your exact requirements.
2) Does this stress the server or the SQL Server?
Doing something is always more work than doing nothing. Depending on many factors, this could affect your production server.
3) Replication only occurs on the non GP database - this shouldn't affect the GP database at all right?
Your server only has a finite amount of hardware resources. It could affect the performance of queries against the GP database.
We have found that having replication in place also adds complexity when it comes to upgrades and schema changes. If you must have dev and prod in sync (and I would question that requirement), Always On or log shipping would be my preferred techniques.
DR is a separate issue. You have to determine your Recovery Point Objective (RPO) and Recovery Time Objective (RTO) and adopt the appropriate technology to satisfy your requirements.
I have recently been reading about configuring jobs within SQL Server and how they can be set up to perform specific tasks.
I recently had issues whereby all the DB indexes were > 75% fragmented, and I wondered if there was a way to have SQL Server automatically manage itself.
Now, the material on setting up and configuring jobs mentions the SQL Server Agent.
On the DB server I was looking at, the SQL Server Agent was switched off.
This made me think that having a "job" to handle the rebuilding/reorganising of indexes may not be great if this agent can simply be disabled...
Is there anything at a DB level which can be configured to do this, or is this still really in the hands of a "DBA"?
So to summarise, my question is, what is the best way to handle rebuilding/reorganising indexes?
A job calling some stored procedures could be your answer.
Automation of this task depends on your DB: volume of data, fragmentation degree, batch updates, etc.
I recommend you check your index fragmentation regularly before applying an automated solution.
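A quick way to check, as a sketch (the 5%/30% thresholds below are common rules of thumb, not hard rules; tune them for your workload):

```sql
-- Report fragmentation for the current database and suggest an action.
SELECT
    OBJECT_NAME(ips.object_id)        AS table_name,
    i.name                            AS index_name,
    ips.avg_fragmentation_in_percent,
    CASE
        WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
        WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
        ELSE 'OK'
    END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON  i.object_id = ips.object_id
    AND i.index_id  = ips.index_id
WHERE ips.index_id > 0  -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then, per index, one of (IX_SomeIndex / dbo.SomeTable are placeholders):
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;
```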
Also, you can programmatically check whether SQL Server Agent is running.
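One way to do that, assuming SQL Server 2008 R2 SP1 or later and VIEW SERVER STATE permission:

```sql
-- Shows the state of the Agent service for this instance.
SELECT servicename, status_desc
FROM sys.dm_server_services
WHERE servicename LIKE N'SQL Server Agent%';
```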
I am new to SQL Server. I have one SQL Server and a second SQL Server (a backup server). I want to copy data from the first server to the second, and all my transactions (create, update, delete) must be reflected on the second server immediately, in real time. I have a very large amount of data in my table, millions of rows or so. How can I achieve this? I don't have the privilege to use third-party tools.
You specify that you need data to be reflected on the second server immediately, in real time.
This requirement will add latency to every transaction that occurs on your primary server, since every action will need to be committed on the secondary server before it can be committed on your primary server.
In addition to the latency, this requirement will also likely reduce your availability. If the secondary server is no longer able to successfully commit transactions from the primary, then the primary can no longer commit transactions either, and your system is down.
For more information about these constraints, refer to the extensive discussion around this topic (the CAP theorem).
If you're OK with these restrictions, you might consider using Synchronous Database Mirroring (High-Safety Mode).
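As a rough illustration, high-safety mirroring comes down to a few ALTER DATABASE statements once both servers have mirroring endpoints and the mirror holds a copy of the database restored WITH NORECOVERY (the database name, server names, and port below are assumptions):

```sql
-- On the mirror server: point at the principal's endpoint.
ALTER DATABASE AppDb
SET PARTNER = N'TCP://principal.example.com:5022';

-- On the principal server: point at the mirror's endpoint.
ALTER DATABASE AppDb
SET PARTNER = N'TCP://mirror.example.com:5022';

-- High-safety mode: every transaction commits on both partners,
-- which is the source of the latency discussed above.
ALTER DATABASE AppDb SET PARTNER SAFETY FULL;
```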
If you're not OK with these restrictions, please adjust / clarify your requirements.
You may try to get some help from these two references:
Replication in MS SQL Server
SQL Server Replication
On a side note, an important thing you should plan before setting up replication is the replication model you will use.
Here is a list of replication models you can use:
Peer-to-peer model
Central publisher model
Central publisher with remote distributor model
Central subscriber model
Publishing subscriber model
Each of the above has its own advantages; check them against your needs.
In addition, there are three types of replication:
Transactional replication
Merge replication
Snapshot replication
Check out this tutorial on SQL Server 2008 replication:
Tutorial: Replicating Data Between Continuously Connected Servers
Since you said "... reflected in the 2nd server immediately in real time", that means you want transactional replication. You may still choose to replicate only certain tables.
Does the "millions of rows" represent some kind of history? If so, consider the risks that Michael mentioned... and whether you need all the history in the 2nd server, or just current / recent activity. If it's just current/recent, it may be safer and less of a system drain, to write something in T-SQL or SSIS, for a job to execute that loops, reads, and copies the data.
That could be done with linked servers and triggers... but the risks Michael mentions, about preventing the primary server from committing transactions, are as much or more of a concern with triggers, and you can avoid them with your own T-SQL/SSIS + job, as sketched below.
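A hypothetical version of such a job step (all server, database, table, and column names are assumptions, and Id is assumed to be an ever-increasing key):

```sql
-- Run from an Agent job every few minutes: copy rows newer than the
-- last synced point across a linked server, outside any user transaction.
DECLARE @last_id BIGINT;

SELECT @last_id = COALESCE(MAX(Id), 0)
FROM [SecondServer].[BackupDb].[dbo].[Orders];

INSERT INTO [SecondServer].[BackupDb].[dbo].[Orders]
    (Id, CustomerId, Amount, CreatedAt)
SELECT Id, CustomerId, Amount, CreatedAt
FROM dbo.Orders
WHERE Id > @last_id;
```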
Hope that helps...