Azure database displays high utilization with no active processes - azure-sql-database

I am using two Basic databases and one S0 database (just upgraded to V12). I noticed (before the upgrade) that the S0 database is really slow while the Basic databases do fine: a COUNT(*) on a table with 2 million records takes about 90 seconds.
I checked the monitoring in the new portal: CPU 55% average, DTU 81%, and Data IO 12%. That looks rather busy to me, yet there are no active processes: sp_who2 displays four processes, three of them awaiting command (idle) plus the sp_who2 process itself, and that's it. The utilization has been constant (with spikes to 100%) for hours now.
The monitoring for the Basic databases shows nearly no utilization (although those databases actually do get some requests).
Am I reading the monitoring incorrectly? That is, could this be a server-level monitor, with other processes I don't know about using the same server (as in a shared environment)? I thought the readings were actual values for my instance.
What I also don't really understand is the server/database distinction. I can use one server with three databases or three individual servers and pay the same price, so performance does not seem to be bound to a server (I am not using the elastic model).

My bad. I found out that three of my own processes (whose GUI had gone to heaven) were producing the load. I killed them and zero load remained. Obviously sp_who2 does not display all processes; I had more luck getting process information from the Dynamic Management Views: https://azure.microsoft.com/en-us/documentation/articles/sql-database-monitoring-with-dmvs/
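For reference, the kind of DMV query that article describes looks roughly like this (a sketch, not the article's exact code); it surfaces the sessions that sp_who2 glosses over, together with the statement each one is running:

    -- List active requests with their current statement text,
    -- ordered by CPU so the load producers surface first
    SELECT r.session_id, r.status, r.cpu_time, r.total_elapsed_time,
           t.text AS running_statement
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    ORDER BY r.cpu_time DESC;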

Related

Diagnosing SQL speeds

I just migrated a production DB onto new hardware (in a sandbox setting) because it was suffering from poor IO performance.
Testing a simple SELECT * FROM query on one of the large tables (123m rows, 24 columns) takes about 20 minutes. I can see that the query maxes out a single core on the SQL Server box, but memory consumption and disk IO are non-existent.
In the Resource Monitor there are zero waits, other than Network I/O, which sits at 700-800.
The query is being run from a local install of SQL Server Management Studio.
Data file I/O is 0 in the Activity Monitor.
Wait time in the query is about double the active CPU time.
I am not sure if this is a problem I need to solve, or if that's just the way it works.
I was actually testing the speed of the query directly on the server versus it being called from my users' app, to diagnose whether an ODBC driver might be holding things up, as reading from the database took 98% of the script's time.
I ran a SELECT * FROM query and expected it to complete much faster than it did.
EDIT: It's SQL Server 2017.
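Activity Monitor's "Network I/O" resource wait generally corresponds to the ASYNC_NETWORK_IO wait type, which accrues when the client (here, Management Studio rendering 123m rows) cannot consume the result set as fast as the server produces it. One way to confirm is to check what the session is waiting on while the query runs; a sketch (the session_id is hypothetical):

    -- Run this while the SELECT is executing; substitute the
    -- real session_id of the long-running query
    SELECT session_id, status, wait_type, wait_time,
           last_wait_type, cpu_time, total_elapsed_time
    FROM sys.dm_exec_requests
    WHERE session_id = 53;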

SQL Resource Waits - Buffer I/O - PAGEIOLATCH_SH

I am not a DBA, but I have recently been charged with monitoring my company's DB. A MERGE that normally takes 12-18 minutes is now taking over 2 hours. It was set up by a DBA and I am told that all the indexing was done correctly. In the Activity Monitor I found that the process is hitting PAGEIOLATCH_SH/EX wait types, and the Resource Waits - Buffer I/O wait times are above 3000 (I can't find a baseline to tell whether this is normal, but it doesn't seem to be). Does anyone have any information on where I could go from here? I have already tried restarting the SQL Server service and forcing a recompile of the stored procedure that calls the MERGE. We are not working with a massive DB, only about 2 million rows or so.
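One way to build the baseline you're missing is to snapshot the cumulative PAGEIOLATCH waits and work out their average duration; a sketch:

    -- Cumulative page I/O latch waits since the last restart;
    -- an average wait well above roughly 10-20 ms per read
    -- usually points at slow or overloaded storage
    SELECT wait_type, waiting_tasks_count, wait_time_ms,
           wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'PAGEIOLATCH%'
    ORDER BY wait_time_ms DESC;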

Multi-threaded performance testing MS SQL server DB

Let's assume the following situation:
I have a database server with a 4-core CPU;
my machine has a 2-core CPU;
assume they are of equal speed in terms of GHz;
the systems are connected over a network (two lines, 200 Mb/s each);
the test tool I use provides a number-of-threads parameter and issues commands to the server in parallel.
QUESTIONS:
1. How would you test parallel reads/writes via a stored procedure? Please brainstorm; any advice is appreciated.
2. How can I prove that many threads are executing the queries on the server (or should I not pay attention to this, as it is the server's and DB's responsibility)?
3. What controls how many threads execute at any one time, primarily in the case of SQL Server? I checked "server properties" > Processors > the number-of-processors-and-threads section; what more should I check?
4. How can I check that my application truly executes on all my machine's cores, in other words, uses real threads instead of virtual ones? Or should I pay attention only to the virtual ones?
5. Should I pay attention to the network bandwidth? Can it be a bottleneck (I don't send any big data, only commands with variables)?
1.) Not sure; perhaps someone else can answer.
2.) SQL Sentry lets you monitor your SQL Server activity (use the free trial and buy it if you like it).
3.) MAXDOP controls the number of processors used, and the cost threshold for parallelism also affects whether a query goes parallel (see the sketch after this list).
4.) Same as 2, perhaps; I'm not sure I understand the question.
5.) Depends on what you are doing and where you see a problem; SQL Sentry will show wait stats that may help.
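For 3.), both knobs can be inspected (and changed) with sp_configure; a sketch:

    -- Show the instance-wide parallelism settings the answer refers to
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism';      -- 0 = use all schedulers
    EXEC sp_configure 'cost threshold for parallelism'; -- minimum plan cost before a query goes parallel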

SQL Server running slow due to thread count increase

On our production server, at a specific time of day the thread count climbs and climbs to the point where, although CPU utilization is normal (30-50%), queries start to run slowly and we see a lot more blocking statements.
I am not sure where to look. When our site runs normally the thread count is around 150, but during a specific window (1:30 to 2:30) it climbs to 270 threads. There are no extra SQL transactions going on; everything is as normal as before, but the thread count grows and SQL Server starts behaving very, very slowly.
After restarting the SQL Server service the thread count immediately returns to normal, and our site functions fine for another 24 hours.
We are using SQL Server 2005 on a 24-core machine.
Any ideas?
Blocking ties up workers (sys.dm_os_workers), so the server spawns more workers to handle the incoming tasks. With 24 cores you get roughly 700 max worker threads out of the box, so seeing 270 threads is not an issue; it is well within normal operating parameters.

Your real problem must be the blocking, and you have to investigate it accordingly: who is blocking whom, and why. My bet is that you have a job running between 1:30 and 2:30 that locks large portions of the database (a delete job, perhaps?) and your queries block on the locked rows. You'll have to investigate, find the root cause, and act accordingly. Rebooting is not a solution, nor is blaming unrelated components (the thread count).

Use Activity Monitor, use Who Is Active, and follow the methodical Waits and Queues approach. There are plenty of ways to identify the real problem. SQL Server will never appear slow because of thread count; it simply doesn't work like that.
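To see who is blocking whom while it happens, a query against the request DMVs is usually enough (this works on SQL Server 2005); a sketch:

    -- Sessions currently blocked, the session blocking each one,
    -- and the statement they are stuck on
    SELECT r.session_id, r.blocking_session_id, r.wait_type,
           r.wait_time, t.text AS blocked_statement
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;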
You can control the degree of parallelism using the MAXDOP query hint. For more details, see this article:
http://blog.sqlauthority.com/2010/03/15/sql-server-maxdop-settings-to-limit-query-to-run-on-specific-cpu/
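As a query-level example (the table name is hypothetical), the hint goes at the end of the statement:

    -- Cap this one query at four schedulers without touching
    -- the instance-wide 'max degree of parallelism' setting
    SELECT OrderId, SUM(Quantity) AS TotalQty
    FROM dbo.OrderLines
    GROUP BY OrderId
    OPTION (MAXDOP 4);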
Thanks for your valuable feedback. You are right that SQL Server was not behaving strangely on its own; our site, which is based on Ektron CMS, was responsible. One of Ektron CMS's features (PageBuilder) was holding locks on a table while operating on a piece of content. We have around 10 million users on our site, and since the tables were being blocked, SQL Server went nuts and stopped responding well.
We have finally eliminated the issue.

Is it possible to get sub-1-second latency with transactional replication?

Our database architecture consists of two SQL Server 2005 servers, each with an instance of the same database structure: one for all reads and one for all writes. We use transactional replication to keep the read database up to date.
The two servers are very high-spec indeed (the write server has 32GB of RAM), and are connected via a fibre network.
When deciding on this architecture we were led to believe that the latency for data to replicate to the read server would be on the order of a few milliseconds (depending on load, obviously). In practice we are seeing latency of around 2-5 seconds even in the simplest of cases, which is unsatisfactory. By the simplest case, I mean updating a single value in a single row of a single table on the write DB and seeing how long it takes to observe the new value in the read database.
What factors should we be looking at to achieve latency below 1 second? Is this even achievable?
Alternatively, is there a different mode of replication we should consider? What is the best practice for the locations of the data and log files?
Edit
Thanks to all for the advice and insight. I believe the latency we are experiencing is normal; we were misled by our DB hosting company about what latency to expect!
We're using the technique described near the bottom of this MSDN article (under the heading "scaling databases"), and we'd failed to deal properly with this warning:
The consequence of creating such specialized databases is latency: a write is now going to take time to be distributed to the reader databases. But if you can deal with the latency, the scaling potential is huge.
We're now looking at implementing a change to our caching mechanism that enforces reads from the write database when an item of data is considered to be "volatile".
No. It's highly unlikely that you could achieve sub-1-second latency with SQL Server transactional replication, even with fast hardware.
If you can get 1-5 seconds of latency, you are doing well.
From here:
Using transactional replication, it is possible for a Subscriber to be a few seconds behind the Publisher. With a latency of only a few seconds, the Subscriber can easily be used as a reporting server, offloading expensive user queries and reporting from the Publisher to the Subscriber.

In the following scenario (using the Customer table shown later in this section) the Subscriber was only four seconds behind the Publisher. Even more impressive, 60 percent of the time it had a latency of two seconds or less. The time is measured from when the record was inserted or updated at the Publisher until it was actually written to the subscribing database.
I would say it's definitely possible.
I would look at:
Your network: run ping between the two servers and see if there are any issues. If the servers are next to each other you should see < 1 ms.
Bottlenecks on the server: this could be network traffic volume, network cards not configured for 1 Gb/sec, anti-virus, or other things.
Do some analysis on a few queries and see if you can identify indexes or locking that might be a problem. See if any of the SELECTs on the read database might be blocking the writes; add WITH (NOLOCK) to one or two queries you're analyzing and see if it makes a difference (see the sketch after this list).
Essentially you have a complicated system with a problem, and you need to determine which component is at fault and fix it.
Transactional replication is probably best if the reports/SELECTs you run need to be up to date. If they don't, you could look at log shipping, although that would add some downtime with each import.
For the data/log files, make sure they're on separate drives so that performance is maximized.
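The NOLOCK experiment from the list above, sketched against a hypothetical table; it permits dirty reads, so use it only on queries where that is acceptable:

    -- Read without taking shared locks, so this SELECT cannot
    -- block (or be blocked by) the replicated writes; uncommitted
    -- or duplicate rows may be returned
    SELECT CustomerId, Name, Balance
    FROM dbo.Customer WITH (NOLOCK)
    WHERE Region = 'EU';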
Something to remember about transactional replication is that a single update now requires several operations for the change to propagate.
First you update the source table.
Next the Log Reader Agent sees the change and writes it to the distribution database.
Next the Distribution Agent sees the new entry in the distribution database, reads it, and runs the correct stored procedure on the subscriber to update the row.
If you monitor the statement run times on the two servers, you'll probably see that each statement completes in just a few milliseconds. What kills you is the lag while the Log Reader and Distribution Agents wait to notice that they have work to do.
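You can measure where that lag accrues with tracer tokens, which time a marker through both hops. A sketch, assuming a publication named MyPublication; run it in the published database at the Publisher:

    -- Post a tracer token into the replication stream
    EXEC sys.sp_posttracertoken @publication = N'MyPublication';

    -- List posted tokens to pick up the tracer id
    EXEC sys.sp_helptracertokens @publication = N'MyPublication';

    -- Publisher-to-distributor and distributor-to-subscriber latency
    -- for one token (id taken from the previous result set)
    EXEC sys.sp_helptracertokenhistory
         @publication = N'MyPublication', @tracer_id = 1;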
If you truly need sub-second processing time, you will want to look into writing your own engine to move data from one server to the other. I would recommend SQL Service Broker for this, since everything stays native to SQL Server and no third-party code has to be written.
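For flavour, the Service Broker plumbing for such an engine looks roughly like this (a minimal sketch: all names are hypothetical, and the cross-server pieces such as endpoints and routes, plus error handling and activation, are omitted):

    -- One-time setup in the database that originates changes
    CREATE MESSAGE TYPE RowChange VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT RowChangeContract (RowChange SENT BY INITIATOR);
    CREATE QUEUE WriterQueue;
    CREATE QUEUE ReaderQueue;
    CREATE SERVICE WriterService ON QUEUE WriterQueue (RowChangeContract);
    CREATE SERVICE ReaderService ON QUEUE ReaderQueue (RowChangeContract);
    GO

    -- After each write, send the change as a message
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE WriterService
        TO SERVICE 'ReaderService'
        ON CONTRACT RowChangeContract
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h
        MESSAGE TYPE RowChange (N'<row table="Customer" id="42"/>');
    GO

    -- The reader side drains its queue and applies each change
    DECLARE @msg XML;
    RECEIVE TOP (1) @msg = CAST(message_body AS XML) FROM ReaderQueue;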