Diagnosing SQL speeds - sql

I just migrated a production DB onto some new hardware (in a sandbox setting) because it was suffering from poor I/O performance.
Testing a simple SELECT * FROM TABLE query on one of the large tables (123m rows, 24 columns) takes about 20 minutes. I can see that the query maxes out a single core on the SQL server, but memory consumption and disk I/O are non-existent.
In the resource monitor there are 0 waits, other than Network I/O which is at 700-800.
The query is being run from a local install of MSSQL Mgmt studio.
Data file I/O is 0 in the Activity Monitor.
Wait time in the query is about double the active CPU time.
I am not sure if this is a problem that I need to solve, or that's just the way it works.
I was actually testing the speed of the query directly on the server vs. it being called by my users' app, to diagnose whether an ODBC driver might be holding things up, as reading from the database took 98% of the script's time.
Ran the SELECT * query and was expecting it to complete much faster than it did.
EDIT: It's SQL Server 2017.
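
For anyone reproducing this, a query along these lines against sys.dm_exec_requests (run from a second SSMS window) shows the same picture as the resource monitor; the session id is a placeholder for the SPID of the long-running SELECT:

    -- Check what the long-running SELECT is actually waiting on.
    -- 53 is a placeholder session_id; substitute the SPID of your query.
    SELECT
        r.session_id,
        r.status,
        r.wait_type,           -- ASYNC_NETWORK_IO means the client isn't consuming rows fast enough
        r.wait_time,
        r.cpu_time,
        r.total_elapsed_time
    FROM sys.dm_exec_requests AS r
    WHERE r.session_id = 53;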

Related

Azure database displays high utilization with no active processes

I am using 2 Basic and 1 S0 database (just upgraded to V12). I noticed (before the upgrade) that the S0 database is really slow while the Basic DBs do fine. A COUNT(*) for a table with 2 million records takes about 90 seconds.
I checked the monitoring in the new portal: CPU 55% avg, DTU 81%, and Data IO 12%. This looks rather busy to me. But there are no active processes: sp_who2 displays 4 processes, three awaiting command (idle) plus the sp_who2 process itself, and that's it. The utilization has been constant (with spikes to 100%) for hours now.
The monitoring for the basic machines show nearly no utilization (although these databases actually do get some requests).
Am I reading the monitoring incorrectly, i.e. is this maybe a server-level monitor, with other processes I don't know about using the same server (like in a shared environment)? I thought the readings were actual values for my instance.
What I don't really understand is the server / database distinction. I can use one server with 3 databases or 3 individual servers but will pay the same price, so the performance does not seem to be bound to a server (I am not using the elastic model).
My bad. I found out that three of my own processes (whose GUI had long since gone to heaven) were producing the load. I killed the processes and zero load remained. Obviously sp_who2 does not display all processes. I had more luck getting process information with Dynamic Management Views: https://azure.microsoft.com/en-us/documentation/articles/sql-database-monitoring-with-dmvs/
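
A query along these lines against the session and request DMVs (a sketch; the linked article has fuller examples) shows every session with its accumulated CPU and I/O, including the ones sp_who2 glossed over:

    -- List all user sessions with their accumulated CPU and I/O,
    -- plus the current request (if any).
    SELECT
        s.session_id,
        s.login_name,
        s.program_name,
        s.status,
        s.cpu_time,
        s.reads,
        s.writes,
        r.command,
        r.wait_type
    FROM sys.dm_exec_sessions AS s
    LEFT JOIN sys.dm_exec_requests AS r
        ON r.session_id = s.session_id
    WHERE s.is_user_process = 1
    ORDER BY s.cpu_time DESC;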

SSIS vs. DTS performance

Seems crazy to be doing this at this late date, but...
I am rebuilding some ETL infrastructure with a Rocket Software UniVerse source and a SQL Server destination. The old destination platform was SQL 2000 on Windows Server 2003; the new platform is SQL 2012 on Windows Server 2012. In both cases, an ODBC driver is used to connect to the source. Everything seems to work fine on the new platform, but the execution time for a package is dramatically slower. For example, one table with roughly 1.3 million rows and 28 columns takes about an hour using SQL 2000/DTS and over 3.5 hours using SQL 2012/SSIS. Both SQL servers are virtualized on Xen Server; the 2012 server has more RAM and more vCPUs, and neither machine has an advantage in disk infrastructure. No metrics (memory, disk IO, etc.) are red-lining (or really even coming close) on the 2012 server during package execution.
I have read several forum posts describing the same scenario, but none really seemed to have a solution that works for me. Since all of these posts were quite dated (most of these conversions from DTS to SSIS happened in the SQL 2005 days), I was curious if there was any fresher info out there.
The packages are very simple table copies, no transforms. I am using a "SELECT column, column,.. FROM sourcetable" for my source connection and 'Table or View - Fast Load' for my destination. The slow down APPEARS to be on the source side of the equation, though I can't be certain.
Any help appreciated.
One option to investigate is lowering the buffer size in your data flow. By default, it's set at 10,000 rows. If you have a slow data source, it can take quite a while to fill up the "bucket" of data just to start sending a batch of information down to the destination. While it might seem counterintuitive, lowering that number can increase performance, as 5k, 1k, or 100 rows of data fill the bucket much sooner. That data then gets shuffled through the data flow and lands in the destination while buckets 2, 3, etc. are being filled.
If you have a SQL Server source, you can optimize your query by hinting that you'd like the first N rows fast (OPTION (FAST N)), aligning N with your SSIS package's buffer row size.
See Rob Farley's article for more details about that.
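Roughly what that looks like on a SQL Server source, assuming a hypothetical dbo.sourcetable and the default buffer size of 10,000 rows (match the number to your package's DefaultBufferMaxRows):

    -- Hypothetical SSIS source query: ask the optimizer to favour getting the
    -- first 10,000 rows out quickly, matching the data flow buffer size.
    SELECT column1, column2, column3
    FROM dbo.sourcetable
    OPTION (FAST 10000);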

SQL Server running slow

I have a SQL job that runs every night and does various inserts/updates/deletes. The job contains 40 steps, which mainly execute stored procedures.
It's been running fine up until a week ago when suddenly the run time went up from 2.5 hours to over 5 hours, sometimes even 8,9,10!
Could one of you please give me some pointers?
First of all, let me recommend a valuable resource on the Simple-Talk site. It is a detailed methodology for troubleshooting performance issues on SQL Server.
Was the insert work you mention a huge bulk load that could affect performance? If the data volume changed significantly, the query execution plans could be different and you may need to re-tune your table structure, indexes, statistics, etc.
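If stale statistics after a big load are the suspect, refreshing them is cheap to try; a minimal sketch against a hypothetical table:

    -- Hypothetical example: refresh statistics on a table that received a large load,
    -- so the next run compiles against up-to-date row counts.
    UPDATE STATISTICS dbo.BigImportTable WITH FULLSCAN;

    -- Or, more broadly, refresh out-of-date statistics across the whole database:
    EXEC sp_updatestats;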
If the run time suddenly changed and no changes were made to the queries or your database structure, then I would ask myself several questions:
First, is the process still taking that long, or did it only run slowly once? Maybe it is running smoothly now and the issue only arose once. Nevertheless, try to find what triggered the bad performance; it can happen again and take down your server.
Is the server a dedicated SQL Server? If not, check whether any new tasks unrelated to the SQL engine have been configured; maybe a new task is doing some heavy I/O work and therefore your CRUD operations take longer.
If it is a dedicated server, then check that no new job has been added that could slow down your existing jobs. Check this SO link for details on jobs set up from SQL Agent.
Maybe low memory due to another process on the same server?
And there is a lot more to check, but before going deeper I would verify that nothing external (not SQL Server related) was the reason for the delay in the process execution.
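It is also worth pulling the job's per-step durations out of msdb to see exactly which step blew up; something along these lines (the job name is a placeholder):

    -- Per-step run durations for a SQL Agent job over its recent runs.
    -- 'Nightly ETL' is a placeholder; substitute your job name.
    SELECT
        j.name          AS job_name,
        h.step_id,
        h.step_name,
        h.run_date,
        h.run_duration  -- encoded as HHMMSS, e.g. 13000 = 1h 30m 00s
    FROM msdb.dbo.sysjobs AS j
    JOIN msdb.dbo.sysjobhistory AS h
        ON h.job_id = j.job_id
    WHERE j.name = N'Nightly ETL'
      AND h.step_id > 0             -- step 0 is the overall job outcome row
    ORDER BY h.run_date DESC, h.step_id;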

What are the factors that affect the time taken to run a SQL on a database?

I have a query that runs on a data warehouse. I ran the report last month. It gave me some results in say x minutes. The same report when run on the same database without any modifications to the database returns the same results but in y minutes now.
y > x, and the difference between the times is very large.
The amount of data and the indexes are also the same. There is no difference in them.
Now clients ask me for a reason for this. What are the possible reasons?
You leave a lot of questions open:
Is the database running on a dedicated server?
Do you run the reports from clients or directly on the server?
Have there been changes to the physical network? Have some settings been changed?
Did they (by accident) change the protocol used to communicate with the server (tcp, named-pipes, ...)?
Have you tried defragmenting? (See the check sketched after this list.)
Have you rebooted the server?
Do you have an execution plan from before and after?
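On the defragmenting point, a quick way to see whether fragmentation is even in play (a sketch; the thresholds are rules of thumb):

    -- Indexes in the current database above ~30% fragmentation and big enough to matter.
    SELECT
        OBJECT_NAME(ips.object_id)  AS table_name,
        i.name                      AS index_name,
        ips.avg_fragmentation_in_percent,
        ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
      AND ips.page_count > 1000
    ORDER BY ips.avg_fragmentation_in_percent DESC;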
Most likely the query plan has changed. Some minor difference in data has pushed the query optimiser's calculations onto a new, less optimal plan.
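If this is SQL Server, one way to check is to look at the cached plan and its stats; a sketch (the LIKE filter is a placeholder for a distinctive fragment of your report's query text):

    -- Find the report's cached plan and its average cost per execution.
    SELECT TOP (10)
        qs.creation_time,                                          -- when this plan was compiled
        qs.execution_count,
        qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
        qs.total_logical_reads / qs.execution_count       AS avg_logical_reads,
        qp.query_plan                                              -- XML plan, viewable in SSMS
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE st.text LIKE '%FactSales%'                               -- placeholder filter
    ORDER BY qs.total_elapsed_time DESC;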
Here are a few:
The amount of data in the warehouse has changed.
Indexes might have been modified.
Your warehouse is split across different servers and there is connectivity lag between them...
Your database server is processing something else as well, leaving less memory and CPU for your reports to run.

SQL Server 2005 "Pin" data in Memory

We're running our application's database on dedicated box running only SQL Server 2005.
This DB server has 32 GB of RAM... and the database file itself is only 6 GB.
I'd like to force several of the heavily read/queried tables into the SQL Memory buffer to increase speed.
I understand that SQL server is really good about keeping necessary data cached in memory once it's read from disk... But our clients would probably prefer their query running quickly the FIRST time.
"Fastest Performance the Second Time" isn't exactly a product highlight.
Short of the old "Pin Table" DBCC command.. any thoughts?
I've written a "CacheTableToSQLMemory" proc which loops through all of a table's indexes (clustered & non-clustered), performing a SELECT * into a temp table. I've scheduled SQL Agent to run a "cache lots of tables" proc every 15 minutes in an attempt to keep pages in memory.
It works to a large extent, but even after I cache all of a query's relevant tables, running a query still increases the count of cached pages for that table. Then it's faster the 2nd time.
thoughts?
We're running PAE & AWE. SQL is set to use between 8 & 20 GB of RAM.
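
For reference, the shape of the proc is roughly this; a sketch of the idea rather than the exact code:

    -- Illustrative version of the warm-up proc: scan a table through each of its
    -- indexes so the pages land in the buffer pool. Not the exact proc from above.
    CREATE PROCEDURE dbo.CacheTableToSQLMemory
        @SchemaName sysname,
        @TableName  sysname
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @IndexName sysname, @sql nvarchar(max);

        DECLARE idx_cursor CURSOR LOCAL FAST_FORWARD FOR
            SELECT i.name
            FROM sys.indexes AS i
            WHERE i.object_id = OBJECT_ID(QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName))
              AND i.type IN (1, 2)       -- clustered and nonclustered
              AND i.name IS NOT NULL;

        OPEN idx_cursor;
        FETCH NEXT FROM idx_cursor INTO @IndexName;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Force a read through this index; dumping into a temp table keeps
            -- the rows off the wire while still touching the pages.
            SET @sql = N'SELECT * INTO #warm FROM '
                     + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName)
                     + N' WITH (INDEX(' + QUOTENAME(@IndexName) + N')); DROP TABLE #warm;';
            EXEC sys.sp_executesql @sql;

            FETCH NEXT FROM idx_cursor INTO @IndexName;
        END

        CLOSE idx_cursor;
        DEALLOCATE idx_cursor;
    END;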
The x86 bottleneck is your real issue. AWE can serve only data pages, as they can be mapped in and out of the AWE areas, but every other memory allocation has to cram into the 2 GB of the process virtual address space. That includes every thread stack, all the code, all the data currently mapped 'in use' from AWE and, most importantly, every single cached plan, execution plan, cached security token, cached metadata and so on and so forth. And I'm not even counting CLR; I hope you don't use it.
Given that the system has 32 GB of RAM, you can't even try /3GB to see if that helps, because in that case PAE is reduced to a total of 16 GB, which would make half your RAM invisible...
You really, really have to move to x64. AWE can only help so much. You could collect performance counters from the Buffer Manager and Memory Manager objects and monitor sys.dm_os_memory_clerks, so you can get a better picture of how the instance's memory is behaving (where the memory in use goes, who is consuming it, etc.). I don't expect that will solve the issue in itself, but I do expect it will give you enough information to make a case for the upgrade to x64.
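Something along these lines gives a first cut of where the memory is going (a sketch using the column names as they exist on SQL Server 2005):

    -- Top memory consumers by clerk type (2005-era column names).
    SELECT TOP (10)
        type,
        SUM(single_pages_kb + multi_pages_kb + awe_allocated_kb) AS total_kb
    FROM sys.dm_os_memory_clerks
    GROUP BY type
    ORDER BY total_kb DESC;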
There is no way to pin tables in memory in SQL Server 2005. If SQL Server is dropping the tables from memory, it's because there is memory pressure from other parts of the system. Since your database is only 6GB, the database should stay in memory... provided that there are no other databases on the server.
There are a few things you can do to try to keep data in memory, though. Depending on the patch level and edition of your SQL Server installation, you might be able to make use of the lock pages in memory functionality to ensure that SQL Server's memory never gets paged out.
You can also change the memory allocation on the server to be a fixed size. Unless there's something else on your database server, you can set SQL Server's min and max memory to the same value. This won't necessarily prevent this from happening in the future (it's a function of how SQL Server is supposed to work) but it certainly won't hurt to set your SQL Server to use a fixed amount of memory (if you have no other memory concerns).
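If you do go the fixed-size route, it's just the two sp_configure options; the values here are illustrative for a 32 GB box running AWE:

    -- Pin SQL Server's memory target to a constant size (values are examples only).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'min server memory (MB)', 20480;
    EXEC sp_configure 'max server memory (MB)', 20480;
    RECONFIGURE;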