I have a Standard S1 SQL database, which is fine for most tasks. However, I have an overnight task that needs much more computing power.
I am wondering if anyone has experience using scheduling to scale the database up (in terms of Service Tier and Performance Level), execute one or more specific SQL tasks, and then scale back down to the original level.
I wrote the following Azure Automation workflow for your exact scenario: Azure SQL Database: Vertically Scale. In full disclosure, there is currently an open issue with running the script against SQL Database v12. I am actively working on it and will post an update on the Script Center page when it is resolved.
(2/28) Update: the issue has been mitigated, and detailed steps for the temporary workaround have been posted on the main Script Center page.
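In the meantime, the scale operation itself can also be issued as plain T-SQL against the logical server, which is easy to schedule from any job runner. A minimal sketch (the database name and service objectives are placeholders; substitute your own):

```sql
-- Run while connected to the logical server's master database.

-- Scale up before the overnight task:
ALTER DATABASE [MyDb] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');

-- The change is asynchronous; poll until the new objective is reported:
SELECT DATABASEPROPERTYEX('MyDb', 'ServiceObjective') AS current_objective;

-- Scale back down once the task completes:
ALTER DATABASE [MyDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1');
```

Note that `ALTER DATABASE ... MODIFY` returns immediately while the scale operation continues in the background, so the overnight task should wait for the objective change to take effect before starting.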
Related
We use Power BI Premium, embedded analytics, and deployment pipelines to publish reports to our production environment. One of our reports has performance issues when we publish it to production via deployment pipelines. The report uses DirectQuery, with a native SQL query for all of its data sources. It is also embedded in our homegrown production system and uses Row-Level Security to restrict users to seeing only data from their own company. In our test environment this report runs fine and performance is OK, but when we use deployment pipelines to publish it to production, performance degrades and the report runs slowly. I am not sure whether the deployment pipelines are causing the performance issues or whether it's the DirectQuery component. We have tried query caching, and I have tried using aggregations, without success. If anyone has experienced similar issues, please let me know what your solution was. Thanks!
I've been getting some warnings about high utilization on our SQL Azure database server. What is the best way to monitor the utilization of that server and analyze what is causing the high-utilization spikes?
1. Log into the Azure management portal: http://manage.windowsazure.com
2. Select the SQL database you are interested in getting details on.
3. Select Monitor.
4. Let's say you want to monitor your DTU %; click on that line item.
5. Select ADD RULE.
6. Name the rule and give it a description.
7. Specify who you want to receive the alerts.

This flow, with screenshots, can be found here:
http://blogs.msdn.com/b/mschray/archive/2015/09/04/monitoring-your-sql-database-in-azure.aspx
sys.resource_stats and sys.resource_usage can be used for monitoring resource usage. The Query Store feature in SQL DB v12 helps you debug performance issues:
http://azure.microsoft.com/en-us/blog/query-store-a-flight-data-recorder-for-your-database/
Using Dynamic Management Views
Azure SQL Database enables a subset of dynamic management views to diagnose performance problems.
Take a look here: DMV sql azure
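For example, these two queries cover the most common starting points; a sketch using documented views (the database name is a placeholder):

```sql
-- Hourly resource consumption over the last day
-- (run in the master database; history is kept for roughly 14 days):
SELECT start_time, avg_cpu_percent, avg_data_io_percent, avg_log_write_percent
FROM sys.resource_stats
WHERE database_name = 'MyDb'
  AND start_time > DATEADD(day, -1, GETUTCDATE())
ORDER BY start_time DESC;

-- Top 10 statements by total CPU time, from the plan cache
-- (run in the user database):
SELECT TOP 10
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

The first query tells you when the spikes happen and which resource (CPU, data I/O, or log writes) is saturating; the second tells you which statements are the likely culprits.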
I've been a SQL Azure Database user for some time (over a year). I have a mostly read-only 5 GB database that fuels my website. Queries hit the database about once or twice a second, and response times are generally sub-100ms.
There have been a few times when performance for all queries goes down the toilet. Today for example, I awoke to alarms that the database was performing poorly. Simple queries that normally take 30ms are taking over 3 minutes! My load on the server is no greater than usual, so I attribute this decline in performance to my DB sharing an instance with one or more DBs from other Azure users.
To solve this problem, I copy the database to a new instance (CREATE DATABASE NEW_DB AS COPY OF OLD_DB), and point the website to the new instance. All is well until this happens the next time. In about a year's time, this has happened 4 or 5 times.
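Concretely, the sequence looks like this (the copy is asynchronous, so I wait for it to finish before repointing the site; names are placeholders):

```sql
-- Issue the copy (run in master on the target server):
CREATE DATABASE NEW_DB AS COPY OF OLD_DB;

-- Poll until the copy finishes; the row disappears from
-- sys.dm_database_copies once the copy completes:
SELECT partner_database, percent_complete, start_date
FROM sys.dm_database_copies;
```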
My question: does anyone have some advice on how to mitigate this? If this is just life under Azure, it's pretty unacceptable.
EDIT: I just realized that this question is from 2014. If you're still having issues, the questions and suggestions below may guide you in the right direction. If you've resolved the performance issues, feel free to share any actions you took to improve performance.
What tier are you on right now?
Reference: http://searchsqlserver.techtarget.com/tip/SQL-Azure-database-recommendations-and-best-practices
Are your users coming from different geographical regions? If so, are you using endpoint monitoring for the web app that accesses your SQL Azure db?
Reference: https://azure.microsoft.com/en-us/documentation/articles/web-sites-monitor/#webendpointstatus
Have you tried reading through the official performance guide?
Reference: https://msdn.microsoft.com/en-us/library/azure/dn369873.aspx
Here's a third-party writeup that mentions that "the differences in connectivity behavior or that SQL Azure resources get throttled when you overload the database require you to take such things into account and code your application to handle issues you may not have using a traditional SQL Server application."
Reference: http://searchsqlserver.techtarget.com/tip/SQL-Azure-database-recommendations-and-best-practices
This article requires (free) email signup before reading the full article, but it may help you with some recommendations and best practices.
Hope that helps!
I have been recently reading about configuring jobs within SQL Server and that they can be configured to do specific tasks.
I recently had issues whereby all the DB indexes were more than 75% fragmented, and I wondered if there was a way to have SQL Server manage this automatically.
Now when reading about setting up and configuring jobs it mentions the SQL Server Agent.
In the DB Server I was looking at the SQL Server Agent was switched off.
This made me think that having a "job" to handle the rebuilding/reorganising of indexes may not be great if this agent can simply be disabled...
Is there anything at a DB level which can be configured to do this, or is this still really in the hands of a "DBA"?
So to summarise, my question is, what is the best way to handle rebuilding/reorganising indexes?
A job calling some stored procedures could be your answer.
Automation of this task depends on your DB: volume of data, fragmentation degree, batch updates, etc.
I recommend checking your index fragmentation regularly before applying an automated solution.
Also, you can programmatically check if SQL Server Agent is running.
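As a starting point, here is a sketch that checks fragmentation via sys.dm_db_index_physical_stats and labels each index with the commonly suggested 5%/30% reorganize/rebuild thresholds (table and index names come from your own database):

```sql
-- List fragmented indexes in the current database with a suggested action:
SELECT
    OBJECT_NAME(ips.object_id) AS table_name,
    i.name AS index_name,
    ips.avg_fragmentation_in_percent,
    CASE
        WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
        WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
        ELSE 'OK'
    END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0   -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then, per index, for example:
-- ALTER INDEX [IX_MyIndex] ON [dbo].[MyTable] REBUILD;
-- ALTER INDEX [IX_MyIndex] ON [dbo].[MyTable] REORGANIZE;

-- On an on-premises server, you can also verify the Agent is running:
SELECT status_desc
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server Agent%';
```

A job that wraps a query like this in a stored procedure and iterates over the results is the usual pattern; the thresholds are only rules of thumb and should be tuned to your data volume and update patterns.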
We are having significant performance problems on Azure. Various factors have made this difficult to examine precisely on Azure itself. If the problems are in the performance of the code or of the database, I would like to examine them by running locally. However, the default configuration of an Azure-created database differs from a locally created one: for example, Azure enables read committed snapshot by default, as I understand it, whereas that is not the default for a database I create in SQL Server. That means the performance issues are different in the two environments.
My question is: how can I find all such discrepancies between the two setups and correct them, so that when I find speed issues locally I will know they represent speed issues on Azure? I am a SQL Server novice. I recognize that I cannot recreate "time to database" and "network time" issues that way, but I don't think those are what are killing us.
You might find my answer to this post useful.
Implementing telemetry to gather information for later analysis gave us great advantages: it let us find out where and how time was being spent interacting with SQL, and therefore how to improve the query plans. Here is a link to the CAT blog post: http://blogs.msdn.com/b/windowsazure/archive/2013/06/28/telemetry-basics-and-troubleshooting.aspx
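For the configuration discrepancies specifically, sys.databases exposes most per-database options in a single row, so you can run the same query on both servers and diff the output; a sketch (the database name is a placeholder):

```sql
-- Run on both the Azure and the local server and compare the results:
SELECT name,
       compatibility_level,
       collation_name,
       snapshot_isolation_state_desc,
       is_read_committed_snapshot_on,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = 'MyDb';

-- To align the local database with Azure's defaults, for example:
-- ALTER DATABASE [MyDb] SET READ_COMMITTED_SNAPSHOT ON;
-- ALTER DATABASE [MyDb] SET ALLOW_SNAPSHOT_ISOLATION ON;
```

Read committed snapshot in particular changes locking and tempdb behavior, so aligning it is a prerequisite for meaningful local performance comparisons.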