Azure database performance is very bad - azure-sql-database

I am facing performance issues with a database hosted in Azure. If I run the same query on my own server, the response time is 4 seconds, but against the Azure database it takes around 12 seconds and sometimes times out; basically it takes 3x the response time of the server hosted in my company environment. We were using the Shared plan, then tried upgrading to S01 Standard and then to Default1 (Basic: 1 Large), but upgrading the plan didn't make any difference in performance. Can you please help make the performance faster?
Thanks.

Azure SQL Database charges based on performance, so if you want more performance you need to move to a higher-priced performance tier.
What performance level are you currently in?
Also, a comparison with your desktop/laptop is an apples-to-oranges comparison. A typical laptop has 4 or 6 cores, 4GB or 8GB of memory, and more. Only a larger Premium performance level could compare to that laptop's potential performance. Lastly, your laptop doesn't have any HA available, while SQL DB has a 99.99% SLA.
I hope this helps.

Related

Azure DTUs for a medium size application

I am trying to migrate my ASP (IIS) + SQL Server application from SQL Server Express Edition to Azure SQL Database. Currently, we have one dedicated server with both IIS and SQL Server Express Edition on it. The planned setup will be ASP (IIS) on an Azure virtual machine plus Azure SQL Database.
Per my search on Google, it seems SQL Server Express Edition has performance issues which are resolved in the Standard and Enterprise editions. The DTU calculator indicates that I should move to 200 DTUs. However, that is based on a test run on the SQL Express setup with IIS on the same dedicated server.
Some more information:
The database size is around 5 GB currently including backup files.
Total users are around 500.
Concurrent usage is limited, say around 30-40 users at a time.
Bulk usage happens for report retrieval during a certain time frame only by a limited number of users.
I am skeptical about moving to 300 DTUs given the low number of total users. I am initially assuming 100 DTUs is good enough, but I'm looking for advice from someone who has dealt with this before.
Database size and number of users isn't a solid way to estimate DTU usage. A poorly indexed database with a handful of users can consume ridiculous amounts of DTUs. A well-tuned database with lively traffic can consume a comparatively small number of DTUs. At one of my clients, we have a database that handles several million CRUD ops per day over 3,000+ users and rarely breaks 40 DTUs.
That being said, don't agonize over your DTU settings. It is REALLY easy to monitor and change. You can scale up or scale down without interrupting service. I'd make a best guess, over-allocate slightly, then move your allocated DTUs up or down based on what you see.
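To make that monitor-then-adjust loop concrete, one way to watch recent usage from inside the database itself is the sys.dm_db_resource_stats view (a sketch; it reports 15-second intervals and retains roughly the last hour):

```sql
-- Recent resource usage for the current Azure SQL database, newest first.
-- Each row covers a 15-second interval; about the last hour is retained.
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

If these percentages sit near 100%, you are DTU-bound and should tune or scale up; if they idle in the single digits, you can likely scale down.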
it seems SQL server Express Edition has performance issues
This is not correct. Express Edition has certain limitations, such as the 10GB database size cap, a single-CPU limit, and some disabled features, but those are limits rather than performance problems.
I am initially assuming 100 DTUs is good enough but looking for some advice on someone who has dealt with this before.
I would go with the DTU calculator's advice, but if you want to start with 100 DTUs, go ahead; just consistently evaluate performance.
The query below will give you DTU metrics for your database. If any one of the metrics is consistently over 90% over a period of time, I would first try to tune for that metric, and upgrade to a new tier only if that isn't successful.
DTU query
SELECT start_time, end_time,
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_physical_data_read_percent),
                     (avg_log_write_percent)) AS value(v)) AS [avg_DTU_percent]
FROM sys.resource_stats
WHERE database_name = '<your db name>'
ORDER BY end_time DESC;
-- Run the following SELECT on the Azure database you are using
SELECT MAX(avg_cpu_percent),
       MAX(avg_data_io_percent),
       MAX(avg_log_write_percent)
FROM sys.resource_stats
WHERE database_name = 'Database_Name';
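Building on the queries above, here is a sketch of the "consistently over 90%" check; the 7-day window and the 90% threshold are illustrative choices, not fixed rules:

```sql
-- Count how many sampled intervals in the last 7 days pushed any single
-- DTU component above 90% for the database in question.
SELECT COUNT(*) AS intervals_over_90
FROM sys.resource_stats
WHERE database_name = 'Database_Name'
  AND start_time > DATEADD(day, -7, SYSUTCDATETIME())
  AND (avg_cpu_percent > 90
       OR avg_data_io_percent > 90
       OR avg_log_write_percent > 90);
```

A handful of hot intervals is normal; a large count suggests tuning the dominant metric or, failing that, moving up a tier.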

Azure SQL Database vs. MS SQL Server on Dedicated Machine

I'm currently running an instance of MS SQL Server 2014 (12.1.4100.1) on a dedicated machine I rent for $270/month with the following specs:
Intel Xeon E5-1660 processor (six physical 3.3GHz cores + hyperthreading, turbo to 3.9GHz)
64 GB registered DDR3 ECC memory
240GB Intel SSD
45000 GB of bandwidth transfer
I've been toying around with Azure SQL Database for a bit now, and have been entertaining the idea of switching over to their platform. I fired up an Azure SQL Database using their P2 Premium pricing tier on a V12 server (just to test things out), and loaded a copy of my existing database (from the dedicated machine).
I ran several sets of queries side-by-side, one against the database on the dedicated machine, and one against the P2 Azure SQL Database. The results were sort of shocking: my dedicated machine outperformed (in terms of execution time) the Azure db by a huge margin each time. Typically, the dedicated db instance would finish in under 1/2 to 1/3 of the time that it took the Azure db to execute.
Now, I understand the many benefits of the Azure platform. It's managed vs. my non-managed setup on the dedicated machine, they have point-in-time restore better than what I have, the firewall is easily configured, there's geo-replication, etc., etc. But I have a database with hundreds of tables with tens to hundreds of millions of records in each table, and sometimes need to query across multiple joins, etc., so performance in terms of execution time really matters. I just find it shocking that a ~$930/month service performs that poorly next to a $270/month dedicated machine rental. I'm still pretty new to SQL as a whole, and very new to servers/etc., but does this not add up to anyone else? Does anyone perhaps have some insight into something I'm missing here, or are those other, "managed" features of Azure SQL Database supposed to make up the difference in price?
Bottom line is I'm beginning to outgrow even my dedicated machine's capabilities, and I had really been hoping that Azure's SQL Database would be a nice, next stepping stone, but unless I'm missing something, it's not. I'm too small of a business still to go out and spend hundreds of thousands on some other platform.
Anyone have any advice on if I'm missing something, or is the performance I'm seeing in line with what you would expect? Do I have any other options that can produce better performance than the dedicated machine I'm running currently, but don't cost in the tens of thousand/month? Is there something I can do (configuration/setting) for my Azure SQL Database that would boost execution time? Again, any help is appreciated.
EDIT: Let me revise my question to maybe make it a little more clear: is what I'm seeing in terms of sheer execution time performance to be expected, where a dedicated server @ $270/month is well outperforming Microsoft's Azure SQL DB P2 tier @ $930/month? Ignore the other "perks" like managed vs. unmanaged, ignore intended use like Azure being meant for production, etc. I just need to know if I'm missing something with Azure SQL DB, or if I really am supposed to get MUCH better performance out of a single dedicated machine.
(Disclaimer: I work for Microsoft, though not on Azure or SQL Server).
"Azure SQL" isn't equivalent to "SQL Server" - and I personally wish that we did offer a kind of "hosted SQL Server" instead of Azure SQL.
On the surface the two are the same: they're both relational database systems with the power of T-SQL to query them (indeed, under the hood they both use the same DBMS).
Azure SQL is different in that the idea is that you have two databases: a development database using a local SQL Server (ideally 2012 or later) and a production database on Azure SQL. You (should) never modify the Azure SQL database directly, and indeed you'll find that SSMS does not offer design tools (Table Designer, View Designer, etc) for Azure SQL. Instead, you design and work with your local SQL Server database and create "DACPAC" files (or special "change" XML files, which can be generated by SSDT) which then modify your Azure DB such that it copies your dev DB, a kind of "design replication" system.
Otherwise, as you noticed, Azure SQL offers built-in resiliency, backups, simplified administration, etc.
As for performance, is it possible you were missing indexes or other optimizations? You also might notice slightly higher latency with Azure SQL compared to a local SQL Server, I've seen ping times (from an Azure VM to an Azure SQL host) around 5-10ms, which means you should design your application to be less-chatty or to parallelise data retrieval operations in order to reduce page load times (assuming this is a web-application you're building).
Perf and availability aside, there are several other important factors to consider:
Total cost: your $270 rental is only one of many cost factors. Space, power and HVAC are other physical costs. Then there's the cost of administration. Think of the work you have to do each Patch Tuesday, and whenever Windows or SQL Server ships a service pack or cumulative update. Even if you don't test them before rolling out, it still takes time and effort. If you do test, then there's a second machine and duplicating the product instance and workload for test.
Security: there is a LOT written about how bad and dangerous and risky it is to store any data you care about in the cloud. Personally, I've seen way worse implementations and processes on security with local servers (even in banks and federal agencies) than I've seen with any of the major cloud providers (Microsoft, Amazon, Google). It's a lot of work getting things right then even more work keeping them right. Also, you can see and audit their security SLAs (See Azure's at http://azure.microsoft.com/en-us/support/trust-center/).
Scalability: not just raw scalability but the cost and effort to scale. Azure SQL DB recently released the huge P11 edition which has 7x the compute capacity of the P2 you tested with. Scaling up and down is not instantaneous but really easy and reasonably quick. Best part is (for me anyway), it can be bumped to some higher edition when I run large queries or reindex operations then back down again for "normal" loads. This is hard to do with a regular SQL Server on bare metal - either rent/buy a really big box that sits idle 90% of the time or take downtime to move. Slightly easier if in a VM; you can increase memory online but still need to bounce the instance to increase CPU; your Azure SQL DB stays online during scale up/down operations.
There is an alternative from Microsoft to Azure SQL DB:
“Provision a SQL Server virtual machine in Azure”
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-provision-sql-server/
A detailed explanation of the differences between the two offerings: “Understanding Azure SQL Database and SQL Server in Azure VMs”
https://azure.microsoft.com/en-us/documentation/articles/data-management-azure-sql-database-and-sql-server-iaas/
One significant difference between your standalone SQL Server and Azure SQL DB is that with SQL DB you are paying for high availability, which is achieved by running multiple instances on different machines. This would be like renting four of your dedicated machines and running them in an AlwaysOn Availability Group, which would change both your cost and your performance. However, as you never mentioned availability, I'm guessing this isn't a concern in your scenario. SQL Server in a VM may better match your needs.
SQL DB has built-in availability (which can impact performance), point-in-time restore capability and DR features. You have the option to scale your DB up or down based on usage to reduce cost. You can improve query performance using global queries (sharded data). SQL DB manages automatic upgrades and patching and greatly improves the manageability story; you may need to pay a little premium for that. Application-level caching, evenly distributing the load, downgrading when cold, etc. may help improve your database performance and optimize the cost.

SQL Server 2012 developer and express version

I'm not sure whether SQL Server 2012 Developer Edition can be installed on Windows Server 2008/2012 for testing and development purposes. On the other hand, it seems the Express edition has a limitation of 10GB database size. Does that mean 10GB across all databases, or can it create multiple databases of 10GB each?
Thanks,
Joe
First Question
Yes, you can. Have a look at this link http://www.mytechmantra.com/LearnSQLServer/Install-SQL-Server-2012-P1.html
Second Question
It is 10 GB per database. You can have, say, ten databases of 10GB each, which means 100GB of databases.
Not sure about Developer Edition, but I suppose yes; it doesn't matter whether it's installed on a server or on a developer machine, as long as it's used for testing and development only and not for real production usage.
About the Express edition, the limitation is 10GB per database. You can have as many databases as you want, and their total size can be as big as you want, as long as each one individually is smaller than 10GB.
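To see how close each database is to that cap (the 10GB limit applies to data files per database; log files don't count), something like this works on the instance:

```sql
-- Data-file size per database, in MB (size is stored in 8KB pages).
SELECT DB_NAME(database_id) AS database_name,
       SUM(size) * 8 / 1024 AS data_size_mb
FROM sys.master_files
WHERE type_desc = 'ROWS'
GROUP BY database_id
ORDER BY data_size_mb DESC;
```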

Azure SQL Database Billing on new service tiers (Basic,Standard,and Premium)

As per title, I am confused with the new service tiers for azure SQL.
I understand that the current pricing for Web and Business is calculated based on the actual size stored on the server, but I am confused by the new service tiers: Basic, Standard and Premium.
As from here, http://azure.microsoft.com/en-us/pricing/details/sql-database/#basic-standard-and-premium, they are saying, for example, a STANDARD tier database will cost me ~$20(preview price with 50% discount).
My question is: if I create two Standard-tier databases (the tier supports up to 250GB) of 5GB each, will I be billed $20 per database (costing me $40), or $20 for the two databases together (since they don't exceed the 250GB limit)? P.S. I did use the pricing calculator provided here, http://azure.microsoft.com/en-us/pricing/calculator/?scenario=data-management, but it just sounds weird and ridiculous to me to create a database of only 1-2GB and pay $20 (which may increase to $40) for each DB. I just need some clarification. Thanks.
Cheaper and much slower. Azure SQL Database just became much less useful for small databases that need high performance.
You will pay 2 x $20, so $40 (preview price). Billing is largely no longer based on database size but on other performance metrics. For low-usage databases you could use the Basic tier, which is cheaper.

Active - Active DR Strategy for SQL Sever 2005

We are trying to come up with an Active-Active DR strategy for our 6 TB data warehouse. Our data warehouse has 40 DBs, and everything has to be replicated on a real-time basis.
Site 1 : Needs to handle all the ETL
Site 2 : Will handle all the reporting queries.
Database Mirroring (cannot afford to drop and create snapshots as we cannot kill any connections)
Replication
Log shipping
Migrating to SQL Server 2008 is an option.
Which is the best way for performance and availability?
Regards,
Nagy
Since you can't afford to drop active connections, log shipping isn't an option either: you need exclusive access to the database to restore each log backup. Hardware support (SAN) will be a big help here. I'd almost like to see you ETL into one server and then snap over, making that the active server for reporting, while using the other server for ETL. Thus you have a reporting server with no ETL process and an ETL server with no reporting, but you swap which is which on a (nightly?) basis.
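To see why log shipping conflicts with always-on reporting, here is roughly what each restore cycle on the secondary looks like (names and paths are illustrative); the SET SINGLE_USER step is what throws the reporting connections off:

```sql
-- Each log restore needs exclusive access, so readers get disconnected first.
ALTER DATABASE [ReportDW] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE LOG [ReportDW]
    FROM DISK = N'\\backupshare\logs\ReportDW_0100.trn'
    WITH STANDBY = N'D:\standby\ReportDW_undo.dat';  -- readable again between restores

ALTER DATABASE [ReportDW] SET MULTI_USER;
```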
You need to talk with your hardware vendor - especially the storage one to see if they provide some sort of hardware based replication. Looking at the volume of the data, I don't think software based solution will be optimal.
Here is how I handle it for 3 databases (11, 17 and 23 TB) right now.
1. We host the databases on an EMC SAN.
2. Every 12 hours the databases are cloned to different LUNs on the same SAN and then mounted on different servers. This is the backup in case the primary servers get hosed. These databases are generally 12 hours behind the primary databases; we use them for reporting where we can live with 12-hour-old data.
3. Every 24 hours, the clones from step 2 are copied to a different SAN in a different building and mounted. This is the secondary backup. On these databases we run diagnostics, DBCC checks, etc.
In total we are running 9 SQL Server Enterprise Edition instances (3 prod, 3 first-line DR and 3 second-line DR).
We decided to go this way, as we could live with up to 24 hours of lag in the data.
This is certainly doable, but it will require a fair bit of planning as well as investment on your part. For us, the cost of 9 EE licenses was small compared to the price of two SANs and the interconnect between them.
Peer to Peer transactional replication is probably the best option for you unless you want to go down the expensive SAN hardware replication path.
It offers near real-time replication, so this should be good enough for reporting.
Pretty much SQL Server replication, or some sort of custom solution using SQL Service Broker, is going to be your best bet. If your tables are static and all data changes are made at one site, then transactional replication may be the way to go. You'll need a large WAN pipe to handle the replication, as transactional consistency is maintained even if multiple threads are used.
SQL Server 2008 has some improvements to replication performance, as it allows multiple threads to the distributor, so that may help you.