SQL Azure DB extremely slow today - azure-sql-database

I've been a SQL Azure Database user for some time (over a year). I have a mostly read-only 5 GB database that fuels my website. Queries hit the database about once or twice a second, and response times are generally under 100 ms.
There have been a few times when performance for all queries has gone down the toilet. Today, for example, I awoke to alarms that the database was performing poorly. Simple queries that normally take 30 ms are taking over 3 minutes! My load on the server is no greater than usual, so I attribute this decline in performance to my DB sharing an instance with one or more DBs from other Azure users.
To solve this problem, I copy the database to a new instance (CREATE DATABASE NEW_DB AS COPY OF OLD_DB) and point the website to the new instance. All is well until it happens again. Over the past year or so, this has happened 4 or 5 times.
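For reference, the copy-and-repoint workaround looks roughly like this (a minimal sketch; NEW_DB and OLD_DB are just the placeholder names from above, and the copy has to be started while connected to the logical server's master database):

```sql
-- Start an asynchronous copy of the slow database onto a new instance
CREATE DATABASE NEW_DB AS COPY OF OLD_DB;

-- Watch the copy until the new database comes online (run in master)
SELECT name, state_desc FROM sys.databases WHERE name = 'NEW_DB';
SELECT * FROM sys.dm_database_copies;   -- percent_complete shows copy progress

-- Once NEW_DB is ONLINE, update the website's connection string to point at it
```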
My question: does anyone have some advice on how to mitigate this? If this is just life under Azure, it's pretty unacceptable.

EDIT: just realized that this question is from 2014. If you're still having issues, the questions and suggestions below may guide you in the right direction. If you've resolved the performance issues, feel free to share what actions you took to improve performance.
What tier are you on right now?
Reference: http://searchsqlserver.techtarget.com/tip/SQL-Azure-database-recommendations-and-best-practices
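If you're not sure whether you're simply hitting the limits of your current tier, one thing worth checking is the resource usage DMV; here's a minimal sketch (assuming the database is on a service tier where sys.dm_db_resource_stats is available):

```sql
-- Run inside the user database: recent resource use as a percentage of the
-- current tier's limits (roughly one row per 15 seconds, about an hour retained)
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```

If those percentages sit near 100 during the slow periods, you're being throttled by the tier rather than suffering from a noisy neighbor.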
Are your users coming from different geographical regions? If so, are you using endpoint monitoring for the web app that accesses your SQL Azure db?
Reference: https://azure.microsoft.com/en-us/documentation/articles/web-sites-monitor/#webendpointstatus
Have you tried reading through the official performance guide?
Reference: https://msdn.microsoft.com/en-us/library/azure/dn369873.aspx
Here's a 3rd-party writeup that mentions that "the differences in connectivity behavior or that SQL Azure resources get throttled when you overload the database require you to take such things into account and code your application to handle issues you may not have had using a traditional SQL Server application."
Reference: http://searchsqlserver.techtarget.com/tip/SQL-Azure-database-recommendations-and-best-practices
The article requires a (free) email signup before you can read it in full, but it may help you with some recommendations and best practices.
Hope that helps!

Related

Viewing Logs in Azure SQL

We have some queries in an Azure SQL database that occasionally run very slowly. The issue has been difficult to diagnose properly, as the same queries will run fine at other times, even when the server is under a similar load.
To help, I'd like to be able to view log information for the server. If I could see a list of transactions, by time, and their outcome (completed, terminated/rolled back, etc) I believe it would be helpful. Several other SQL pages seem to allude to log-files you can access, but since this is an Azure SQL instance, there isn't a physical server I can just download a file from.
I know I can query sys.event_log to see when particular events are occurring (and in fact, I do see a high number of deadlocks around our problem times), but I'm unaware of any way to see which queries were being handled at the time of these locks.
I'd like to be able to view log information for the server. If I could see a list of transactions, by time, and their outcome (completed, terminated/rolled back, etc) I believe it would be helpful.
The log information you are trying to view is not going to be helpful for this.
You can view slow-running queries in the same way as on-premises, using DMVs.
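For example, here's a sketch of the usual plan-cache query (the same DMVs you'd use on a regular SQL Server; pick your own ordering and thresholds):

```sql
-- Top statements by average elapsed time, from the plan cache
SELECT TOP (10)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microseconds,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time / qs.execution_count DESC;
```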
You can also enable Query Store, which can show you how a query's plans and runtime statistics change over time. I think this will help you more in troubleshooting slow queries, and it is not tied to Premium databases only.
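A minimal sketch of turning it on and reading it back (these are the standard Query Store catalog views; tune capture and retention settings to your needs):

```sql
-- Enable Query Store so plan and runtime history is kept across the problem windows
ALTER DATABASE CURRENT SET QUERY_STORE = ON;

-- Later: the slowest queries Query Store has captured
SELECT TOP (10)
       q.query_id,
       rs.avg_duration,          -- microseconds
       rs.count_executions,
       qt.query_sql_text
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan        AS p  ON p.plan_id = rs.plan_id
JOIN sys.query_store_query       AS q  ON q.query_id = p.query_id
JOIN sys.query_store_query_text  AS qt ON qt.query_text_id = q.query_text_id
ORDER BY rs.avg_duration DESC;
```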

Mimicking the setup of an Azure SQL Server database locally

We are having significant performance problems on Azure. Various factors have made this difficult to examine precisely on Azure itself. If the problems are in the performance of the code or of the database, I would like to examine them by running locally. However, it appears that the default configuration of our database on Azure is different than it is locally: for example, as I understand it, an Azure-created database defaults to read committed snapshot, which is not the default for a database I create in SQL Server. That means that performance issues are different for the two.
My question is: how can I find all such discrepancies between the two setups and correct them, so that when I find speed issues locally I will know they represent speed issues on Azure? I am a SQL Server novice. I recognize that I cannot recreate "time to database" and "network time" issues that way, but I don't think those are what is killing us.
You might find my answer to this post useful.
We saw great benefits from implementing telemetry to gather information and use it later for analysis, to find out where and how the application spends its time interacting with SQL and therefore how to improve the query plans. Here is a link to the CAT blog post: http://blogs.msdn.com/b/windowsazure/archive/2013/06/28/telemetry-basics-and-troubleshooting.aspx
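Separately, for the specific default the question calls out (read committed snapshot), a quick way to check what Azure reports and align a local database is something like this sketch ([MyLocalDb] is a placeholder name):

```sql
-- On Azure: see which snapshot-related options are on for the current database
SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = DB_NAME();

-- Locally: align the local copy with what Azure reports (needs exclusive access)
ALTER DATABASE [MyLocalDb] SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [MyLocalDb] SET ALLOW_SNAPSHOT_ISOLATION ON;
```

Comparing the rest of sys.databases between the two environments is a reasonable way to hunt for the remaining discrepancies.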

SQL Server full copy of database for read operations

Please advise what suits my problem better. I have a high-load web app hosted on the same server where SQL Server is hosted. I also have SQL Server Reporting Services running on the same server, generating user reports.
So my server is essentially limited by disk read/write speed. I'm going to get another server and install another SQL Server instance on it to host SSRS. My main criterion is to get data that is as fresh as possible.
I've looked at a couple of solutions. Currently I take a backup via jobs, copy it to the second server, and restore it there, also via jobs. But that's not the best solution.
All the replication mechanisms (transactional, merge, snapshot) affect the publisher database by locking its tables, which is unacceptable for me.
So I wonder: is there any way to create a read-only replica that is synced periodically without affecting the main DB? I would put all the reporting load on that replica and have the primary DB used only by the web app.
What solution might suit my problem? As I'm not a DBA, I'd appreciate a direction to start investigating. Thanks.
Transactional Replication is typically used to off-load reporting to another server/instance and can be near real-time in a best case scenario. The benefit of Transactional Replication is you can place different indexes on the subscriber(s) to optimize reporting. You can also choose to replicate only a portion of the data if only a subset is needed for reporting.
The only time locking occurs with Transactional Replication is when you generate a snapshot. With concurrent snapshot processing, which is the default for Transactional Replication, the shared locks are only held for a short period of time, so users are able to continue working uninterrupted. Either way, this shouldn't be an issue since you'll likely be generating the snapshot during a period of low user activity anyway.
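For orientation, here is a heavily trimmed sketch of what the publication side can look like in T-SQL (distributor configuration, the snapshot agent, and agent jobs are omitted; every name below is a placeholder):

```sql
-- On the publisher: enable the database for publishing
EXEC sp_replicationdboption
     @dbname = N'SourceDb', @optname = N'publish', @value = N'true';

-- Create a transactional publication using a concurrent snapshot,
-- so shared locks are held only briefly during snapshot generation
EXEC sp_addpublication
     @publication = N'ReportingPub',
     @sync_method = N'concurrent',
     @repl_freq   = N'continuous',
     @status      = N'active';

-- Publish only the tables the reports actually need
EXEC sp_addarticle
     @publication   = N'ReportingPub',
     @article       = N'Orders',
     @source_object = N'Orders';

-- Push subscription pointing at the reporting server
EXEC sp_addsubscription
     @publication       = N'ReportingPub',
     @subscriber        = N'REPORTSRV',
     @destination_db    = N'SourceDb_Reporting',
     @subscription_type = N'push';
```

On the subscriber you're then free to add whatever reporting indexes you like without touching the publisher.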

Atlassian Crowd with SQL Server Azure, anyone encountering performance issues?

I have been trying to use SQL Azure as the data store for an on-premise Atlassian Crowd installation for a few days, and I am encountering huge performance issues.
For example, the Crowd admin application is extremely slow, almost unusable.
I was wondering if someone who has successfully set up this kind of solution could give me some advice.
What I have done so far:
- setting up an on-premise Atlassian Crowd 2.4.2 against an on-premise SQL 2008 R2 database
- scripting the database and running the script against an Azure database (I could not set it up directly on Azure, as the setup scripts lack the Azure-mandatory clustered index on the hibernate_unique_key table)
- adding the mandatory clustered index to the Azure hibernate_unique_key table (see the sketch after this list)
- setting up the JDBC connection with SSL
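That clustered index was added with something along these lines (a sketch: Crowd's hibernate_unique_key table normally holds a single hi-value column, but the next_hi column name is an assumption on my part):

```sql
-- SQL Azure (at the time) required every table to have a clustered index;
-- the Crowd setup script creates hibernate_unique_key as a heap, so add one.
CREATE CLUSTERED INDEX IX_hibernate_unique_key
    ON dbo.hibernate_unique_key (next_hi);   -- column name assumed
```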
I encounter no problem connecting Crowd to the DB, but everything is very slow. Crowd startup takes something like 5 minutes, whereas it takes about 20 seconds with an on-premise SQL Server.
Every round trip to the Crowd admin web console takes something like 30 seconds.
My database is less than 1 MB in size. The query execution summary in Azure does not show any problematic query.
I forgot to mention that the SQL Azure DB is very responsive from SQL Server Manager or from an on-premise .NET web app.
I tried both the jTDS JDBC driver and the MS JDBC driver 4.0, both with data encryption. I tried both SQL dialects offered by Crowd. It stays desperately slow.
I tried setting the special registry keys for Azure recommended for the MS JDBC 4.0 driver (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters: KeepAliveTime, KeepAliveInterval, TcpMaxDataRetransmission).
Maybe it comes from:
- the fact that I don't start from a clean setup done by Crowd on Azure (because of the clustered index problem)
- SQL Azure using UTC time, making "something" expire every time
I would be glad if someone had advice on this problem.
Sorry - I don't have direct experience with Crowd.
I may be going out on a limb here, but 9 times out of 10, client applications installed remotely fail basic performance tests against SQL Database (that's what it's called nowadays) when the application layer is exceedingly chatty (or exceedingly chunky), with dozens of round trips or worse for every screen/function, or returning all the records all the time. This usually matters because SQL Database is far away, reached over a network link that is usually slower than a local network, and on top of that the traffic is always encrypted (meaning there are more packets to transfer).
The only way to work around this type of issue, other than rewriting the application with a better design, would be to deploy your Crowd console in a VM in the cloud, in the same data center as your SQL Database instance. At that point, your console will be on the same network as your database, and it should, if my theory holds, be much faster.

should i advocate migrating from access to (my)sql

We have a windows MFC app that is written against an access database on a company server. The db is not that big: 19 MB. There are at most 2-3 users accessing it at any one time. It is used in a factory environment where access speed (or lack thereof) over the intranet becomes noticeable as it is part of the manufacturing time for our widgets.
The scenario is this: as each widget is completed, it gets a record in the DB. By the end of the year, the DB is larger and searching for a record takes longer and longer. The solution so far has been to manually move older records to an archival table about once a year.
We are reworking other portions of this app right now, and it would be a good time to move to another db if we are going to do it.
It is my understanding that if we were using SQL, the search time would not go up as the table gets bigger, because the entire .mdb does not have to be sent over the network each time. Is this correct? Does anyone have any insight about whether it could be worth the trouble (time and money) of migrating to a new DB? Or should I just add more functionality to the application we have now, perhaps automatically purge the older records from time to time, and add facilities to the app to get at the older records when needed?
Thanks for any wisdom you can share.
Since your database is small and has very few users, I could not make a solid case for migration. I would definitely set up a script to archive old records on a more frequent basis (don't archive into the same DB; that would somewhat defeat the purpose).
But make sure two things are correct as well.
INDEXES. If your queries start slowing down, make sure you have proper indexes (see the sketch after this list).
http://support.microsoft.com/kb/304272
Your network connection between the computers should be fast. Maybe upgrade to gigabit cards and a router? Possibly put the DB on a SCSI drive (RAID 10 for speed and redundancy).
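On the index point, an index on the fields you search and sort by can be created in the Access table designer or with a plain DDL statement; a sketch with made-up table and column names:

```sql
-- Hypothetical example: widgets are looked up by completion date,
-- so index that column in the Access (JET) database
CREATE INDEX idx_widget_completed
    ON Widgets (CompletedDate);
```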
Throwing advanced technology at simple problems is an expensive way to go and not always the answer!
First of all, the claim that the whole table or the whole database is transferred across the network is simply incorrect. If the queried columns are indexed, then search times should not go up that much over time.
As others have mentioned, spending the time and money to set up a database server, and then having someone maintain, manage, and support it, is certainly a possibility here. However, keep in mind that an application simply migrated from JET to SQL Server will in many cases run slower; in fact, SQL Server is slower than JET when no network is involved.
So, I would take some time to ascertain why things slow down so much, and also check into how indexing is set up.
So, just keep in mind that it is pure folklore and myth that whole tables or the whole database are transferred over the network. This idea persists only because most people have had no real computer training and do not know or understand how the JET data engine works.
I would probably move to either Microsoft SQL Server 2008 R2 Express Edition (free) or MySQL (free) if there is both funding and time to put in a data access layer. Because you will be making requests of a remote server rather than operating on data at the local workstation, this move is very involved from a development standpoint.
However, you should analyze whether it's more cost-effective to perform your archival process quarterly or monthly, and just move the archive database to SQL Server 2008 R2 Express Edition. (You can install the Microsoft SQL Server Management Studio client tools on workstations and query the archival database for faster reports on historical data without rewriting your entire production application; similar solutions exist for MySQL or other OSS/free RDBMSs.)
I have clients with 300 MB databases, although they should be upsizing to SQL Server for other reasons. 19 MB is relatively small. If performance is bad enough that archiving speeds things up, then check the indexes on the tables for all your sorting and selection fields. Albert gave you a good URL there to check.
Entire MDB files do not go down the wire, unless you are missing indexes.
Instead of shipping the DB over the network to the client and then performing queries, you could write a small wrapper on the server that handles requests, looks up the result in the Access DB (using SQL plus the Access ODBC driver), and returns the result. This avoids the overhead of a large migration you might not need and still addresses the basic problem the users are experiencing.
Moving to a "proper" database solution is the best long term solution, but if your needs scale linearly and slowly over the next 30 years, it's hard to justify an expensive migration. That said, if you expect to really ramp up, or want to be more "future-proof", migrating now will likely save money/time.
It is my understanding that if we were using SQL, the search time would not go up as the table gets bigger because the entire .mdb does not have to be sent over the network each time. Is this correct?
This general idea is true for almost all databases. The idea of a database is to separate your application from the actual data. The data resides in a database server. Your application doesn't.
Does anyone have any insight about whether it could be worth it to go to the trouble (time and money) of migrating to a new DB?
Yes. I have proposed this many times. It's expensive. It's complicated. And your MS Access database will never get better or faster.
Other database servers will (and can) get faster and more sophisticated. After all, you're not sending .MDB files through a network anymore. The limitations are reduced. You're working with standard SQL through ODBC, and any database can sit at the other end of ODBC. You can fire vendors to find better, faster, cheaper products. Once you stop using Access you have choices.
Either stop using Access now or plan to suffer with it forever. And remake this decision every year until the end of time.