I just moved a big SQL Server database (about 25 GB of data file and 20 GB of log file) from one computer to another. Suddenly, a query that returned in 1 second on the old machine now takes more than 1 minute on the newly built machine (which is much more powerful).
The old machine is a dual-core Intel i3 with 4 GB RAM. The new machine is a quad-core Intel i7 with 16 GB RAM.
I checked that the indexes are exactly the same.
What could be the reason?
Edits:
Haven't updated DB stats. Will do that (see the sketch after this list).
Haven't defragmented the indexes. Will do that as well.
OS: The old machine runs windows server 2008. The new one runs windows server 2012.
Hard drive: SSD RAID 1, a local physical drive, partitioned into two logical drives, one for DB storage and the other for log storage.
The new machine is running on full performance settings. It's a single machine; nothing is load-balanced to other machines.
It's dedicated for this DB task, nothing else is running on the machine.
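For reference, a minimal sketch of the maintenance I intend to run (run in the context of the moved database; nothing here is specific to my schema):

-- Check index fragmentation in the current database (LIMITED mode keeps the scan cheap)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Refresh statistics for all tables in the database
EXEC sp_updatestats;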
It could be a variety of reasons. Is that a local hard drive or a networked hard drive?
1. The newer hard disk is slow.
2. Ensure that the db file and transaction log are defragmented. You would need to stop SQL Server and perform the defrag. You can use something like Contig from Microsoft (http://technet.microsoft.com/en-in/sysinternals/bb897428.aspx).
3. Is the newer hard disk's filesystem encrypted?
4. Check for antivirus software. If realtime filesystem checking is enabled, it will slow things down by a significant factor with some antivirus brands.
The most probable reason would be 2 or 4 from the list above.
As general advice, for better performance, store the db file and log files on separate hard disks (not just different partitions).
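If you want to check where the files currently sit and relocate the log, a minimal sketch (the database name, logical file name, and target path are placeholders):

-- Where are the data and log files right now?
SELECT name, type_desc, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('MyDatabase');   -- 'MyDatabase' is a placeholder

-- Point the log file at a path on another physical disk
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILENAME = 'L:\Logs\MyDatabase_log.ldf');
-- Afterwards, take the database offline, move the .ldf to the new path, and bring it back online.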
I'm beginning to think this is not possible but I have to ask. I have a Windows Server 2012 R2 Datacenter server acting as a PDC, and I have the Hyper-V role installed. This server has 15 TB of disk space but not a lot of CPU or RAM. I want to use it as a disk storage server for my VM guest drives.
I also have a Server 2019 Core server that has more CPU power and 32 GB of RAM, but very little storage space. I want to use this server as my VM host machine and build all the storage on the 2012 R2 server, but everything I have tried has failed. As expected, I can create VM guests if both the machine and the disk are on this server, but if I try to create the disk on the other server it fails with an error similar to "Failed to create the virtual hard disk".
Is it just not possible to create the guest machine and disk on separate servers? Is this because of the 2012R2 and 2019 server differences? Is it possible and I just don't have the disk share setup properly?
Hyper-V is all new to me; it is a learning lab and I have a lot to learn. I've spent hours reading and going through articles, but I just haven't found what I'm looking for yet. I think it's time I reach out to the experts and see if it is even possible first.
Thanks,
Tom
I can successfully create a guest if both the guest machine and the disk are on the host. When I try to create the disk on a different host, I get the "Failed to create the virtual hard disk" error. I'm trying to maximize the use of the resources I have by splitting CPU/RAM on one host and disk on another, but I am beginning to think it is not possible.
I have a database on Azure that is around 500 MB, with ~50 tables and a few tables with 100k+ records, plus a single table with ~1,000k records. This is not a big database (around 20-50 DTUs). I have an ASP.NET MVC application that runs on top of this database.
When I run against this database, either locally or from my test/production environment, the database is extremely fast. My code is "solid production code" (indexes, paging), meaning I do not do anything crazy, and it works under pretty heavy production loads.
However, when I import this database locally, it is VERY, VERY slow (pages take 10-20 seconds to load, or simply fail). As a result, I basically cannot run my application locally. Here is an example of the error I get (the message is in Danish, but it is the typical error - see "The wait operation timed out. ASP"):
I experience the same problem when I run queries outside my main application. I am 99% sure this is a database problem, not an application problem because when I run scripts it's very slow too.
Any idea what the problem is? Why is my localhost so slow that it can barely run normal queries?
This is how I created my local database:
Save to local desktop.
Import into (localdb)\MSSQLLocalDB.
Wait for this to complete and everything is confirmed. I then change my connection string to point to this database.
I suspect you do not have enough RAM to host your own server. I had that problem, locally hosting MSSQL server 2012 with 8GB of RAM on a Windows 10 machine, with Core i7 10th Gen. Sometimes there were no problems, but mostly I was getting very slow query response times, and even timeouts. This was my development machine, and I noticed that all the RAM was being consumed, just from development alone, without any database connections. So I suspected that there was not enough memory for querying the database as well.
I increased the RAM to 32 GB (though 16 GB would have sufficed), and the problems were solved.
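If adding RAM is not immediately possible, one thing that may help is capping SQL Server's memory so the OS and your development tools keep some headroom; a minimal sketch (the 4096 MB value is just an example, tune it for your machine):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Limit the buffer pool so local development is not starved of memory
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;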
Enabling TCP/IP connections to your local server could resolve this issue.
Start -> Run -> mmc
File -> Add/Remove Snap-in...
SQL Server Configuration Manager -> [OK]
SQL Server Network Configuration -> Protocols for MSSQLSERVER -> TCP/IP=Enabled
Restart server.
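To verify which protocol a session actually uses after the restart, a quick check from the connection you are testing should be:

SELECT net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;   -- expect 'TCP' rather than 'Shared memory' or 'Named pipe'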
We are currently running two instances of SQL Server. For development purposes, we run a local DB on a desktop PC in our office.
The PC has following stats:
8 GB RAM
AMD Athlon 5350 APU with Radeon(tm) R3, 2.05 GHz
64 Bit Windows 8.1
Microsoft SQL Server 2014 - 12.0.2000.8 (X64) Express Edition (64-bit)
HDD Seagate ST1000DM003 1 TB
The server is an Azure Standard-Tier A3 VM running the pre-provided Windows Server 2012 R2 Datacenter image.
Now we are facing a problem where the exact same query runs 10 times faster locally on the desktop than on the server.
I connect to the PC with a locally installed Management Studio via TCP/IP over our local network. When I connect to the server, I use a Remote Desktop connection and start a local instance of Management Studio on the server.
I have already changed the connection mode from the default to TCP/IP on the server, which brings it to 10 times slower; with the default connection it is 20 times slower. Even with named pipes the performance is worse.
I have also tried rewriting the query and using different approaches; the Express version is always much faster than the server. We did not do any configuration or tuning on the Express installation, nor on the server side.
Any comments are very appreciated!
Best
Simon
You should add the following at the top of the query to see where the differences are:
SET STATISTICS TIME ON
SET STATISTICS IO ON
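For example, wrap the query you are comparing like this (the SELECT below is just a placeholder) and run the same batch on both machines:

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- placeholder: put the slow query here
SELECT COUNT(*) FROM dbo.MyTable;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

Then compare the logical reads and the CPU/elapsed times in the Messages tab; that usually shows whether the extra time goes into reading pages or into waiting on something else.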
Does your local machine have an SSD? If that's the case, it's normal.
Try rebuilding the indexes used (a sketch follows below).
Update the database/table statistics. The execution plan can be the same, but with bad stats I've often seen very low performance, especially if you do a lot of inserts/deletes.
You can see if something is wrong with SET STATISTICS IO ON. Look at the logical reads on tables, the order of worktables, etc. Check if it's different from the local server.
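A minimal sketch of the rebuild and statistics refresh, assuming a table named dbo.MyTable (substitute your own tables):

-- Rebuild all indexes on the table, then refresh its statistics with a full scan
ALTER INDEX ALL ON dbo.MyTable REBUILD;
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;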
I'm in need of some guidance. My company is running TFS 2010 and any time a user attempts to access Team Web Access it is blindingly slow. We also have Urban Turtle installed and when we try to bring up the planning board tab that takes forever, too (~15 - 20 seconds).
Here's the setup:
Application Tier (Virtual Machine)
Windows Server 2008R2
IIS 7
TFS 2010 SP1 & latest patches
SQL Server 2008 R2 Analysis and Reporting Services
2x 2.5GHz CPUs
8GB RAM
2x 80GB HDDs (C: & E:)
Data Tier (Virtual Machine)
Windows Server 2012R2
SQL Server 2008R2 SP2 CU4
4x 2.5GHz CPUs
16GB RAM
4x HDDs (C: 80GB, E: 40GB Temp, F: 40GB Log, G: 300GB Data)
The data tier was virtualized back in early February. It seems like performance has degraded over time as opposed to all at once. I would think if the issue was related to virtualization, we would have seen that right away or at least much sooner.
I have moved the TFS cache to a separate drive on the App Tier. That hasn't really improved anything.
I did some defrag analysis, and the C: drive on the app tier is at 24% fragmentation and E: is at 65% fragmentation. That's terrible! My next step is to defrag these two drives during a maintenance window.
Can you guys think of any other suggestions to help improve performance?
I have an Execute SQL Script package that contains a script to insert about 150K records.
The problem here is that when I execute the package on the virtual machine it takes approximately 25 minutes, while the same package on the physical machine takes 2 minutes.
Question 1: Why is it taking that much time to load the same data in the VM?
Question 2: How do I solve this performance issue?
The physical machine has 4 GB RAM and a 250 GB HDD, running Windows Server 2008 R2 and SQL Server 2008 R2 Standard Edition.
The virtual machine has the same configuration.
Update: The problem is with the SQL Server in the VM.
Question 1: Why is it taking that much time to run the same script in the VM?
Question 2: How do I solve this performance issue?
The database schemas on the physical machine and the VM are identical. Other databases are also the same. No indexing was applied to those tables on either machine. Data types are the same. The hard disk, as I said, has the same configuration.
No RAID is configured on either machine.
The physical machine has a 2.67 GHz quad-core CPU and the virtual machine has a 2.00 GHz quad-core CPU.
SQL version on the physical machine (PM):
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)
SQL version on the virtual machine (VM):
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1) (Hypervisor)
I compared the execution plans for the script on both machines; they are the same, with no difference in the plan.
The physical host is an HP ML350 machine.
There are almost 20 VMs on the same physical server, of which 7 are active.
There's an article about properly setting SQL's configuration for a VM implementation here: Best Practices for SQL Server. Below is an excerpt, though the article includes other tips and a good performance testing plan:
Storage configuration problems are the number one cause of SQL performance issues. Usually these problems arise because the DBA requests a virtual disk from the VI admin, and the VI admin places the VMDK on a LUN that may or may not meet the DBA's performance needs. For instance:
VMs' VMDK files placed on VMFS volumes without enough spindles.
Many VMDK files placed on a single VMFS volume which could use more spindles.
Database and log files placed on the same LUN which, you guessed it, could use more spindles.
This may be obvious to some, but this problem occurs again and again. The VI administrator should be aware of a few technical items that can help understand and avoid this problem:
Based on the IO demands of the DB files, a certain number of spindles should be guaranteed to this file. This means that its VMDK must be placed on a VMFS volume that can account for the SQL Server's demands and all of the other demands on that volume.
Mixing sequential activity (such as log file update) and random activity (such as database access) results in random behavior. This means that the LUN configuration in the pre-virtual physical environment may not be sufficient for the consolidated environment. This is discussed some in Storage Performance: VMFS and Protocols.
When storage isn't meeting the SQL Server's demands, the device latency or kernel latency (queueing time) will increase. Read up on these counters in Storage Performance Analysis and Monitoring.
The most common cause of this problem is lack of RAM. Having everything set up on a small 4 GB RAM machine is your problem.
When you try to load those 150k rows into memory (remember, everything that happens in SSIS is in memory), a lot of those rows are being handled by your pagefile.
The pagefile on your VM is a lot slower than the one on your physical machine.
To solve this, increase the amount of RAM on your virtual machine.
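To confirm whether the VM is under memory pressure while the package runs, a quick check against a standard DMV could be:

-- Physical memory on the box and how much of it is currently free
SELECT total_physical_memory_kb / 1024     AS total_mb,
       available_physical_memory_kb / 1024 AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;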
I have a similar problem.
Two client machines (one physical, one virtual) execute a batch using SQLCMD. This batch calls a stored procedure on a physical server (so it's not a memory problem, since the processing happens only on the server side).
The batch executed from the physical machine takes 20 minutes. The batch executed from the virtual machine takes 1 hour and 20 minutes.
Using SQL Profiler, I noted that in the slow execution there is an ASYNC_NETWORK_IO wait type.
Probably the virtualized network layer is not optimized.
Could you run SQL Profiler and check whether you see the ASYNC_NETWORK_IO wait type?
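If Profiler is not handy, the accumulated wait statistics give the same hint; a minimal sketch:

-- Waits accumulated since the last restart (or since the wait stats were cleared)
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';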