We are using distributed partitioned views in SQL Server 2012 Enterprise Edition to scale out our data across more than one server. Now we face the question of how to add a new node (server) to the scaled-out database system without taking the servers down, so that our users can keep working during the process.
For example, we have 4 servers with scaled-out data. When we add a new, empty server, the CHECK constraints on the partitioning columns have to be reorganized, and while that happens the partitioned views do not work.
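For context, a distributed partitioned view is held together by CHECK constraints like the ones below; this is only a minimal sketch, and all server, database, table, and column names here are hypothetical:

-- on server SRV1: member table for the first key range
create table dbo.Customers_1 (
    CustomerID int not null
        constraint CK_Customers_1 check (CustomerID between 1 and 1000000),
    Name nvarchar(100) not null,
    constraint PK_Customers_1 primary key (CustomerID)
)

-- the view unions the member tables across linked servers
create view dbo.Customers
as
select CustomerID, Name from SRV1.SalesDB.dbo.Customers_1
union all
select CustomerID, Name from SRV2.SalesDB.dbo.Customers_2

Adding a fifth server means re-cutting these CHECK ranges and redefining the view, which is exactly the window during which the view is unusable.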
The high-availability approaches (AlwaysOn Availability Groups, failover clustering) do not seem to solve this problem.
So my question is: how do I add a new node online?
I have read in a few places that SQL Azure data is automatically replicated and that the Azure platform provides redundant copies of the data, and that therefore SQL Server high-availability features such as database mirroring and failover clustering aren't needed.
Has anyone had a chance to investigate this more deeply? Are all those availability enhancements really not needed in Azure? Thanks!
To clarify, I'm talking about SQL as a service, not SQL Server hosted in a VM.
The SQL Database service (database-as-a-service) is a multi-tenant database service, and your databases are triple-replicated within the data center, providing durable storage. The service itself, being large-scale, provides high availability (since there are many VMs running the service, along with replicated data). Nothing is needed in terms of mirroring or failover clusters. Having said that: if, say, your particular database became unavailable for a period of time, you'll need to consider how you'll handle that situation (perhaps syncing to another SQL Database, maybe even in another data center).
If you go with SQL Database (DBaaS), you'll still need to work out your backup strategy, and possibly sync with another data center (or an on-premises database server) for DR purposes.
More info on SQL Database fault tolerance is here.
Your desired detail is probably contained in this MSDN article on Business Continuity and Azure SQL Database (see http://msdn.microsoft.com/en-us/library/windowsazure/hh852669.aspx). At the most basic level, Azure SQL Database keeps three replicas of your database: one primary and two secondaries.
While this helps with BCP/DR scenarios, you may also wish to investigate ways to back up your database so that you have point-in-time restore capability. More information on backup/restore can be found here: http://msdn.microsoft.com/en-us/library/windowsazure/jj650016.aspx
We have configured SQL Server 2012 transactional replication for our client's .NET web application to distribute transaction processing and reporting across different SQL Servers.
In our implementation, SQL-Node1 works as the master database server, and we configured replication of the master database to SQL-Node2 so that our web application can pull reports from it. The application handles a heavy transaction load and uploads data from Excel sheets, around 10 million entries each day.
A few weeks after configuring replication on the two SQL Server 2012 instances, we started facing performance issues: resources get locked while files are being uploaded to the database, so the application cannot access those tables and their data. We also found that the server is far too slow during the day, when users are accessing our web application.
Now we are looking to distribute the load across three SQL Server 2012 nodes: the web application will read and write data on SQL-Node1, reporting queries will pull data from SQL-Node2, and SQL-Node3 will be used to upload Excel sheet data, which will then be replicated to all the other nodes.
Current setup: all servers run Windows Server 2008 Standard and SQL Server 2012 Enterprise Edition.
Database size: approx. 15 GB / Replication type: transactional / Distributor role: SQL-Node1 / Subscriber role: SQL-Node2.
We are looking for a solution that can distribute these different loads (reporting, data uploading, transactions) and replicate the data between all the SQL nodes.
Which feature would perform best for this scenario: SQL Server 2012 AlwaysOn (HA), SQL Server replication, or database mirroring?
A quick response would be highly appreciated.
Because you have changes happening at more than one node (transactional data at node 1, Excel uploads at node 3), the answer is "none of the above". All of the technologies mentioned are built on having data changes happen in one location and propagate out to others. You could look at peer-to-peer replication, but that seems like overkill here.
If it were me, I'd first diagnose why your file upload process is killing performance and fix or work around that. Once that's done, I'd move that process back to node 1 and implement an availability group to cover your reporting needs (with the added bonus of HA).
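As a rough illustration of the availability group idea (SQL Server 2012 or later; the node names, database name, and endpoint URLs below are made up, and the cluster, HADR, and endpoint prerequisites are omitted):

-- run on the primary, after WSFC and the mirroring endpoints exist
create availability group ReportingAG
for database SalesDB
replica on
    N'NODE1' with (
        endpoint_url = N'TCP://node1.corp.local:5022',
        availability_mode = synchronous_commit,
        failover_mode = automatic),
    N'NODE2' with (
        endpoint_url = N'TCP://node2.corp.local:5022',
        availability_mode = synchronous_commit,
        failover_mode = automatic,
        secondary_role (allow_connections = read_only))

Reporting connections can then be pointed at the readable secondary, so the report workload never touches the primary.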
All of these technologies would bog down on a large data import done in one big transaction. I suggest doing it as an ETL-style process: import into a staging table, then migrate the data into the production table in bite-sized chunks (test several batch sizes to find the one that works best in your environment). Two servers should be fine with replication on a cluster for HA at the workloads you are describing.
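A minimal sketch of that chunked move, with hypothetical table and column names (load the Excel data into dbo.StagingUploads first via bcp or SSIS, then drain it in small batches so each transaction stays short):

declare @batch int = 10000  -- tune per environment

while 1 = 1
begin
    -- move one batch; each iteration commits as its own short transaction
    delete top (@batch)
    from dbo.StagingUploads
    output deleted.Col1, deleted.Col2, deleted.Col3
    into dbo.ProductionTable (Col1, Col2, Col3)

    if @@rowcount < @batch break  -- staging table is drained
end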
I'm looking at developing a simple e-commerce platform and need to replicate product and customer data to the web host over the internet so the website can run disconnected. The two options I can think of at present are enterprise messaging and database replication.
I'm leaning towards database replication over enterprise messaging, as messaging would require additional developer resources to write all the plumbing code. Has anyone had success using SQL Server one-way replication over unreliable WAN links across the internet?
I'm sorry I missed this... NitroAccelerator from Nitrosphere.com is built exactly for speeding up replication over the internet. It compresses the TDS packets very efficiently, which results in an 80-90% improvement in replication times.
In the last company I worked for, we had full merge replication for some of our customers.
There were two scenarios:
Merge replication for handheld devices
Some of our customers had PDAs that subscribed to some published tables of our main database. They were disconnected for long periods, and merge replication worked fine, updating changes on both sides when the connection was restored.
Full site-to-site merge replication
This was used for customers that had remote offices but required a fully synchronized local database for performance reasons. In most cases the VPN was extremely poor, and we had some instances of the VPN being down for a week; on restoration, replication synchronized both databases without an issue.
In both cases replication proved very fault tolerant and performed very well.
In your case it's one-way replication, so there should be no merge conflicts to deal with, which makes the situation easier.
There is a learning curve with replication, but as a technology I found it works very well, even over poor connections.
Liam
We want to distribute/synchronize data from our data warehouse (MS SQL Server) to external customers (also MS SQL Server). The connection has to be secure, because we are dealing with sensitive data, and transmission from our system to the external client systems must go via HTTP/HTTPS.
In addition, it is possible that some clients still run their systems on an older database schema, so tables and columns that already exist on the client should be transmitted, and non-existing ones should be ignored.
Most likely we will have large database updates, and the updates have to arrive in near real-time.
And it is definitely necessary that the data is stored in a client-side data warehouse / SQL database.
The whole process should also include good monitoring, in case something goes wrong.
We started to develop our own .NET solution, but I thought exchanging data between different systems must be a fairly common problem.
Does anybody know about an existing solution which we can adapt to our scenario?
Any help is appreciated!
The problem is so common that it has a dedicated component in SQL Server: Service Broker. Rather than building your own .NET solution, let it take care of the many problems you would otherwise face yourself: how are you going to handle downtime? Retries? Duplicates? Out-of-order delivery? Authentication of non-domain-joined computers? Routing for machines that change names? Service upgrades? Transactional consistency and rollbacks? Are you going to use DTC? You can look at the demo I gave at SQL Connections to see how easily SSB can be scaled to a throughput of well over 1,000 msgs/sec (1 KB payload) on commodity hardware.
The only requirement is that all participants run at least SQL Server 2005 (there is no Service Broker in SQL Server 2000).
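To give a flavor of what's involved, here is a minimal Service Broker sketch; the service and message type names are hypothetical, and the routing and security setup between the two servers is omitted:

-- on the target: objects that define the conversation
create message type [//DW/Sync/Row] validation = well_formed_xml
create contract [//DW/Sync/Contract] ([//DW/Sync/Row] sent by initiator)
create queue dbo.SyncQueue
create service [//DW/Sync/Target] on queue dbo.SyncQueue ([//DW/Sync/Contract])

-- on the initiator (which needs its own queue and service, created
-- the same way): open a dialog and send one message
declare @dialog uniqueidentifier
begin dialog conversation @dialog
    from service [//DW/Sync/Initiator]
    to service '//DW/Sync/Target'
    on contract [//DW/Sync/Contract]
    with encryption = off
send on conversation @dialog
    message type [//DW/Sync/Row] (N'<row id="1" name="example"/>')

Delivery is transactional and survives restarts and link outages, which is exactly the plumbing you would otherwise have to write yourself.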
Just use regular SQL connections over a secure VPN or an SSH tunnel. It should be very easy for your networking people to set up.
For example, you can create a linked server, and a scheduled SQL Agent job can then move the data:
-- TRUNCATE TABLE does not accept four-part (linked server) names,
-- so clear the remote table with DELETE instead (or run TRUNCATE
-- remotely via EXEC ... AT targetserver)
delete from targetserver.dbname.dbo.tablename

insert into targetserver.dbname.dbo.tablename (a, b, c)
select a, b, c
from dbname.dbo.sourcetable
Since the linked server talks to the remote server over the VPN or SSH tunnel, all data is sent encrypted over the internet.
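For completeness, a sketch of setting up such a linked server; the server name, address, and login below are hypothetical:

-- register the remote server reachable through the VPN/SSH tunnel
exec sp_addlinkedserver
    @server = N'targetserver',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'10.8.0.2'  -- the tunnel endpoint

-- map local logins to a SQL login on the remote side
exec sp_addlinkedsrvlogin
    @rmtsrvname = N'targetserver',
    @useself = N'false',
    @rmtuser = N'syncuser',
    @rmtpassword = N'...'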
I am having some difficulty determining how best to implement the different parts of Reporting Services.
Our company has just bought a new slew of servers and a shared SAN to support our growing infrastructure.
Our servers run VMware, and we have several virtual machines, each with its own tasks, load-balanced across the set of physical machines. We currently have an application server that runs Terminal Services, a SQL box running SQL Server 2005 to hold our data, and several other machines for purposes unrelated to our database.
My question is:
What would be the ideal installation of Reporting Services in a virtual environment? We will be dealing with the same total resources whether we install everything on our current SQL box or slice the installation across several virtual machines. Splitting the configuration onto distinct machines would help with load balancing, but it also requires more licenses.
My current thought is to install the report server database on the box that currently holds our SQL databases, and to install the Report Server service on another box, keeping IIS off the SQL box that holds our operational data.
How difficult would it be to migrate from one configuration to another, or would I pretty much be locking myself in once the decision is made?
Edit: adding my options.
The different configurations I can think of:
A) Benefit: easiest to set up. Downside: scaling out requires migrating the back end.
1. SQL Data Box holding our production data
2. Reporting Services DB, Reporting Services service, and IIS
B) Benefit: supports scale-out without migrating the back end.
1. SQL Data Box holding our production data, and Reporting Services DB
2. Reporting Services Service, and IIS
C) Benefit: best for load balancing virtual machines across hardware; supports scale-out without changes. Downside: most expensive for licenses.
1. SQL Data Box holding our production data
2. Reporting Services DB
3. Reporting Services Service, and IIS
D) Benefit: cheapest for licenses. Downside: lots.
1. Everything on one box
So options A and B are my front-runners. B has no drawbacks I can think of, but I'm not sure what kind of load Reporting Services puts on its own database, and whether that would have a noticeable impact given that the production data box will also be queried for the raw data. Option A would let me slice off a new virtual server to play with during development, keeping everything off our production box; we could then point the data sources at the production box and roll it out.
I'm still not sure which option is best, so any other opinions are welcome.
Thanks again, Wesley
I have never done it, but this article tells you how to do a remote install, which is pretty common.
This article tells you how to move the database to another machine; presumably that would let you migrate between the two approaches.
Personally, I would try to pick the option you hope to stick with, because in my experience with Microsoft, installation instructions are more reliable than migration instructions.