Is there any way to give a particular SQL login higher priority for running queries? We have one server that hosts multiple databases; unfortunately, one of the databases occasionally runs very intensive queries (which aren't particularly time-dependent), and it slows down the rest of the databases on the server.
I'd like to be able to tell the server to run queries from a particular login at a higher priority, to avoid slowing down the other systems.
I understand that typically there would be issues with locking - however in this case, there is one database table that all the databases reference (user information) that is read only - so there wouldn't be any of these issues.
We can't separate out the databases, and we can't add more servers - any ideas?
Thanks
The only way to handle resources in SQL Server 2005 is to create separate instances; however, this only hides memory/CPU from the other instances, it doesn't allow under-utilized instances to share their memory/CPU with busy instances.
In SQL Server 2008, they added the Resource Governor, which can limit CPU and memory for workloads classified by user or database (http://msdn.microsoft.com/en-us/library/bb933866.aspx).
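To give an idea of what that looks like, here is a rough sketch of a Resource Governor setup that caps the heavy login rather than boosting the others (pool, group and login names are made up; run it in master):

-- Pool that caps CPU/memory for the heavy workload
CREATE RESOURCE POOL LowPriorityPool
    WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 30);
GO

-- Workload group that uses the pool
CREATE WORKLOAD GROUP LowPriorityGroup
    USING LowPriorityPool;
GO

-- Classifier function: route sessions from the heavy login into the group
CREATE FUNCTION dbo.rg_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF SUSER_SNAME() = N'heavy_report_login'   -- hypothetical login name
        SET @grp = N'LowPriorityGroup';
    RETURN @grp;
END;
GO

-- Register the classifier and enable Resource Governor
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO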
Thanks,
Matt
I'm currently running an instance of MS SQL Server 2014 (12.1.4100.1) on a dedicated machine I rent for $270/month with the following specs:
Intel Xeon E5-1660 processor (six physical 3.3 GHz cores, hyperthreading, turbo to 3.9 GHz)
64 GB registered DDR3 ECC memory
240 GB Intel SSD
45,000 GB of bandwidth transfer
I've been toying around with Azure SQL Database for a bit now, and have been entertaining the idea of switching over to their platform. I fired up an Azure SQL Database using their P2 Premium pricing tier on a V12 server (just to test things out), and loaded a copy of my existing database (from the dedicated machine).
I ran several sets of queries side-by-side, one against the database on the dedicated machine and one against the P2 Azure SQL Database. The results were sort of shocking: my dedicated machine outperformed (in terms of execution time) the Azure db by a huge margin each time. Typically, the dedicated db instance would finish in between a third and a half of the time it took the Azure db to execute.
Now, I understand the many benefits of the Azure platform. It's managed vs. my non-managed setup on the dedicated machine, they have point-in-time restore better than what I have, the firewall is easily configured, there's geo-replication, etc., etc. But I have a database with hundreds of tables with tens to hundreds of millions of records in each table, and sometimes need to query across multiple joins, etc., so performance in terms of execution time really matters. I just find it shocking that a ~$930/month service performs that poorly next to a $270/month dedicated machine rental. I'm still pretty new to SQL as a whole, and very new to servers/etc., but does this not add up to anyone else? Does anyone perhaps have some insight into something I'm missing here, or are those other, "managed" features of Azure SQL Database supposed to make up the difference in price?
Bottom line is I'm beginning to outgrow even my dedicated machine's capabilities, and I had really been hoping that Azure's SQL Database would be a nice, next stepping stone, but unless I'm missing something, it's not. I'm too small of a business still to go out and spend hundreds of thousands on some other platform.
Anyone have any advice on if I'm missing something, or is the performance I'm seeing in line with what you would expect? Do I have any other options that can produce better performance than the dedicated machine I'm running currently, but don't cost in the tens of thousand/month? Is there something I can do (configuration/setting) for my Azure SQL Database that would boost execution time? Again, any help is appreciated.
EDIT: Let me revise my question to maybe make it a little more clear: is what I'm seeing in terms of sheer execution time performance to be expected, where a dedicated server at $270/month is well outperforming Microsoft's Azure SQL DB P2 tier at $930/month? Ignore the other "perks" like managed vs. unmanaged, ignore intended use like Azure being meant for production, etc. I just need to know if I'm missing something with Azure SQL DB, or if I really am supposed to get MUCH better performance out of a single dedicated machine.
(Disclaimer: I work for Microsoft, though not on Azure or SQL Server).
"Azure SQL" isn't equivalent to "SQL Server" - and I personally wish that we did offer a kind of "hosted SQL Server" instead of Azure SQL.
On the surface the two are the same: they're both relational database systems with the power of T-SQL to query them (indeed, under the hood they both use the same DBMS engine).
Azure SQL is different in that the idea is that you have two databases: a development database using a local SQL Server (ideally 2012 or later) and a production database on Azure SQL. You (should) never modify the Azure SQL database directly, and indeed you'll find that SSMS does not offer design tools (Table Designer, View Designer, etc) for Azure SQL. Instead, you design and work with your local SQL Server database and create "DACPAC" files (or special "change" XML files, which can be generated by SSDT) which then modify your Azure DB such that it copies your dev DB, a kind of "design replication" system.
Otherwise, as you noticed, Azure SQL offers built-in resiliency, backups, simplified administration, etc.
As for performance, is it possible you were missing indexes or other optimizations? You also might notice slightly higher latency with Azure SQL compared to a local SQL Server, I've seen ping times (from an Azure VM to an Azure SQL host) around 5-10ms, which means you should design your application to be less-chatty or to parallelise data retrieval operations in order to reduce page load times (assuming this is a web-application you're building).
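If you want to rule out missing indexes, a quick (and admittedly rough) first check is the missing-index DMVs, which exist both on your SQL Server 2014 box and in Azure SQL Database:

-- Top index suggestions the optimizer recorded while running your workload
SELECT TOP (20)
       d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
     ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
     ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;

Treat the output as hints rather than commands; it says nothing about the write overhead the suggested indexes would add.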
Perf and availability aside, there are several other important factors to consider:
Total cost: your $270 rental cost is only one of many cost factors. Space, power and HVAC are other physical costs. Then there's the cost of administration. Think of the work you have to do each Patch Tuesday, and whenever either Windows or SQL Server ships a service pack or cumulative update. Even if you don't test them before rolling out, it still takes time and effort. If you do test, then there's a second machine and the effort of duplicating the production instance and workload for test.
Security: there is a LOT written about how bad and dangerous and risky it is to store any data you care about in the cloud. Personally, I've seen way worse implementations and processes on security with local servers (even in banks and federal agencies) than I've seen with any of the major cloud providers (Microsoft, Amazon, Google). It's a lot of work getting things right then even more work keeping them right. Also, you can see and audit their security SLAs (See Azure's at http://azure.microsoft.com/en-us/support/trust-center/).
Scalability: not just raw scalability but the cost and effort to scale. Azure SQL DB recently released the huge P11 edition which has 7x the compute capacity of the P2 you tested with. Scaling up and down is not instantaneous but really easy and reasonably quick. Best part is (for me anyway), it can be bumped to some higher edition when I run large queries or reindex operations then back down again for "normal" loads. This is hard to do with a regular SQL Server on bare metal - either rent/buy a really big box that sits idle 90% of the time or take downtime to move. Slightly easier if in a VM; you can increase memory online but still need to bounce the instance to increase CPU; your Azure SQL DB stays online during scale up/down operations.
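For illustration, the scale up/down is just T-SQL (the database name is made up; run it while connected to the logical server's master database, and note the change is online but not instantaneous):

-- Bump up before a reindex or heavy reporting run
ALTER DATABASE [MyDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P11');

-- ...run the heavy work...

-- Drop back down for normal load
ALTER DATABASE [MyDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');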
There is an alternative from Microsoft to Azure SQL DB:
“Provision a SQL Server virtual machine in Azure”
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-provision-sql-server/
A detailed explanation of the differences between the two offerings: “Understanding Azure SQL Database and SQL Server in Azure VMs”
https://azure.microsoft.com/en-us/documentation/articles/data-management-azure-sql-database-and-sql-server-iaas/
One significant difference between your standalone SQL Server and Azure SQL DB is that with SQL DB you are paying for high levels of availability, which is achieved by running multiple instances on different machines. This would be like renting four of your dedicated machines and running them in an AlwaysOn Availability Group, which would change both your cost and performance. However, as you never mentioned availability, I'm guessing this isn't a concern in your scenario. SQL Server in a VM may better match your needs.
SQL DB has built-in availability (which can impact performance), point-in-time restore capability and DR features. You have the option to scale your DB up or down based on your usage to reduce the cost. You can improve your query performance using global queries (sharded data). SQL DB manages automatic upgrades and patching and greatly improves the manageability story. You may need to pay a small premium for that. Application-level caching, evenly distributing the load, downgrading the tier when the database is cold, etc. may help improve your database performance and optimize the cost.
Is it a good idea to use different schemas inside one large database, instead of different db instances, to reduce cost?
The schema would be absolutely the same, just different names.
For example, I have one db for the test environment, one for beta, one for production, etc. Can I collapse all these dbs into one large database with different schema names without any issues in the future?
Does this approach have any pitfalls?
Best practice is definitely to separate dev/test from production. You don't want your developers or testers running some test case where a rogue query brings the entire server to its knees.
But for the dev and test/qa environments you could use the same server but separate instances (SQL server installations on same physical hardware).
You might even be able to get by with SQL Server Express for the dev and test/qa environments, which is a free edition. SQL Server Express 2012 allows a 10 GB database size now, I think.
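Purely to illustrate the schema-per-environment idea from the question (schema, table and user names are made up): permissions can be scoped per schema, but backups, restores and resource usage are still shared by the one database, which is part of why keeping production separate is recommended above.

CREATE SCHEMA beta AUTHORIZATION dbo;
GO
CREATE TABLE beta.Users (UserId INT PRIMARY KEY, UserName NVARCHAR(100));
GO
-- A beta-only database user limited to its own schema
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::beta TO beta_app_user;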
I have a database server that currently has two databases, call them [A] and [B].
[A] is the standard database used for my application
[B] is used by another application (that puts considerable load on the server), however, we use a few tables sparingly (that are shared between my application and the primary application that utilizes [B]). Recently the use of [B] has been increasing and it's causing long wait periods and timeouts in my application.
I'm open to alternative ideas, but at the moment the quickest potential solution I've come up with is to get a second database server and move [A] to that. However, I do need access to some of the tables in [B] - ideally with as little code changes as possible.
We were thinking a setup something like:
Current:
DB Server 1 {[A],[B]} (SQL Server 2005)
New Setup:
DB Server 1 {[B]} (SQL Server 2005)
DB Server 2 {[A], [B]} (SQL Server 2008)
Where in the new setup, DBServer2.[B] is a linked table (kind of) to DBServer1.[B]
I looked into linked servers; from my (limited) understanding, those work as an alias to a database on another server, which is almost what we want, except it adds the extra server qualifier, and it works at the db level. I'd like to do something like this, but ideally without the server qualifier and at the table level.
So for example, [B] has tables Users and Events. The Users table is updated weekly by batch, and we use it often, so we'd like to have a local copy on the new DBServer2. However, Events we use far less often and needs to be real-time between the two servers. In this case, it would be great to have Events as a linked table.
Further, it would be fantastic if we could use the same markup for queries. Today to query one db from the other we'd do something like
select * from b.events join a.dates
We'd like to continue that, except have the database server know that when we touch events it's really located at dbserver1.b.events.
Does that make sense? I confuse myself sometimes.
Thanks for any help
~P
You can use synonyms for linked objects - http://msdn.microsoft.com/en-us/library/ms177544%28SQL.90%29.aspx
Unfortunately, this only works for single objects; you CANNOT make a synonym for linkedserver.databasename and then reference synDBName.table.
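As a sketch of how that could look for the Events table from the question (it assumes DBServer1 is already set up, or gets set up, as a linked server on DB Server 2, and the join column is made up):

-- One-time setup on DB Server 2, if the linked server doesn't exist yet
EXEC sp_addlinkedserver @server = N'DBServer1';
GO

-- One synonym per remote table keeps existing query text free of the four-part name
CREATE SYNONYM dbo.Events FOR DBServer1.B.dbo.Events;
GO

-- Queries on DB Server 2 then reference Events as if it were local
SELECT *
FROM dbo.Events AS e
JOIN dbo.Dates AS d ON d.EventId = e.EventId;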
"Alternative Idea" from me since you said you are open...
How about looking into the cause of the slowness? What is the "Load" you are measuring?
How are your disks laid out on the server?
Maybe you could use more memory, another CPU, or some SQL tuning?
Fixing your issues with a software or hardware fix MAY be faster than getting a server and doing all the installs and then working through the integration problems you may run into.
I am developing a web portal in ASP.NET. The primary target users will be in India only, but in the future I may target overseas users as well. I want to know if I should use SQL Server replication or not. Should I be concerned about replication at the initial stage, or can I add it at a later stage? Thanks in advance.
I don't think replication means what you think it means. Replication is a means of synchronizing multiple databases with data and it is most useful where databases need to be disconnected at times.
I don't think you need to be concerned about replication if you are developing a web app. If you will have multiple sites in the future, you can still have a single database hosted separately from any of the sites that all the localized sites reference and update independently.
Unless you will have multiple databases to be consolidated into a central db, NO, don't worry about it.
It will bring a whole world of hurt.
I have a database on MS SQL 2000 that is being hit by hundreds of users at a time. There are intensive reports, built with Reporting Services 2005, hitting the same database.
When there are lots of reports running and people using the database concurrently, we see blocking processes to the point that the system starts to time out any transaction made after some time in that situation.
Is there a global way to minimize blocking so that transactions can continue to flow?
Use optimistic locking if updates are not happening often and the database is mainly used for reporting.
SQL Server has quite a pessimistic locking default.
A look into SQL Server Table Hints might get you started.
The reports can use WITH(NOLOCK).
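For illustration, a report query with NOLOCK hints might look like this (table and column names are made up); keep in mind NOLOCK reads uncommitted data, so it's only appropriate where dirty reads are acceptable for reporting:

SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM dbo.Orders AS o WITH (NOLOCK)
JOIN dbo.Customers AS c WITH (NOLOCK)
    ON c.CustomerId = o.CustomerId
WHERE o.OrderDate >= '20090101';

-- Or set it once for the whole report connection
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;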
Other possibilities are having the reports run off a read-only replica of the database or running off a datawarehouse version of the database which is optimized for the reporting needs.
Since you are already using NOLOCK hints and READ UNCOMMITTED isolation level for your reports, the investigation needs to turn to the transactional queries coming in. This may get deep. Perhaps applications are keeping transactions open too long. It may also be the case that you have a lot of table scans or range scans in some of the other query volume, and those may be holding shared locks for long-running transactions. Those shared locks will block your writers.
You need to start looking at sp_lock, and seeing what kinds of locks are outstanding, see what locks the blocked queries are trying to obtain, and then examine the queries that are blocking the requestors.
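A rough sketch of that investigation with the SQL Server 2000-era tools (the spid in the last line is just an example):

EXEC sp_lock;    -- all outstanding locks
EXEC sp_who2;    -- sessions, with the BlkBy column showing who blocks whom

-- Blocked sessions and their blockers
SELECT spid, blocked, waittype, waittime, lastwaittype, cmd
FROM master..sysprocesses
WHERE blocked <> 0;

-- Last statement sent by a blocking spid
DBCC INPUTBUFFER(53);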
This will help you if you are unfamiliar with SQL Server locking:
Understanding SQL Server 2000 Locking
Also, perhaps you could describe your disk subsystem. It may be undersized.
Thanks everyone for your support. What we did to mitigate the problem was to create a new database with a log shipping procedure every hour to keep it in sync with the real one. The reports that do not need real-time data were pointed to that database, and the ones that need real-time data were restricted so only a few people can access them. The drawback with this method is that the data will be up to one hour out of sync, and we needed to create a new server for that purpose only. Also, when the log shipping procedure runs, every connection is dropped for a very short period of time, but that can be a problem for really long procedures or reports. After this I will review the queries from the reports so I can understand what can be optimized. Thanks, and I will recommend the site to the whole IT department.