Insufficient system memory in resource pool 'default' to run this query

I have a server running Windows Server 2012 R2 and SQL Server 2014 with in-memory (memory-optimized) tables, and when I run multiple deletes and inserts against those tables I get the following error: "There is insufficient system memory in resource pool 'default' to run this query.". But my server has 32 GB of RAM and CPU usage doesn't even reach 70%. This server hosts only one database. Resource Governor is enabled with 70% max memory. I also have another server running Windows Server 2008 R2 and SQL Server 2014 with the same insert/delete workload, and it has no problem.
Any idea?
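Since memory-optimized tables take their memory from the resource pool the database is bound to (or from 'default' if it is not bound to one), it is worth checking how much memory those tables actually consume and whether binding the database to a dedicated pool with a higher cap helps. A minimal sketch, assuming a hypothetical database name MyDatabase and pool name InMemoryPool; the 80 is only an example value, and the binding takes effect after the database is taken offline and back online:

-- Run in the database that holds the memory-optimized tables:
-- how much memory are they currently using?
SELECT object_id,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats;

-- Create a dedicated pool and bind the database to it
CREATE RESOURCE POOL InMemoryPool WITH (MAX_MEMORY_PERCENT = 80);
ALTER RESOURCE GOVERNOR RECONFIGURE;
EXEC sys.sp_xtp_bind_db_resource_pool N'MyDatabase', N'InMemoryPool';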

Related

Recurring SQL Error 17189

Problem:
One of our clients has SQL Server 2005 running on a Windows 2008 R2 Standard machine. Every once in a while, the server fails with the following error:
SQL Server failed with error code 0xc0000000 to spawn a thread to process a new login or connection. Check the SQL Server error log and the Windows event logs for information about possible related problems. [CLIENT: <local machine>]
The error occurs at a rate of about once per second, with the value of CLIENT: being the only thing that changes (sometimes, instead of <local machine>, it shows the machine's IP or the IP of other machines belonging to the client), and until SQL Server is restarted no connections can be made to it. After the restart, it works fine.
The problem happens about once or twice per month. There are no Windows logs for the previous occurrence; I've since increased the max size for the Application log.
Machine configuration:
OS: Windows 2008 R2 Standard SP1 (x64)
SQL: Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86) Nov 24 2008 13:01:59 Copyright (c) 1988-2005 Microsoft Corporation Standard Edition on Windows NT 6.1 (Build 7601: Service Pack 1)
CPU: Intel Xeon E5430 @ 2.66 GHz
RAM: 32 GB
Paging file: 32 GB on drive E (System managed), None on all other drives (including drive C)
More info:
The server has 2 databases that are actively used:
One database is used for replication (1 Publication with about 450 subscribers, most of which synchronize daily, usually more than once per day). The same database is also used by a web application that has about 150 subscribers that use it actively during the day.
Both of the databases also have frequent jobs running that mainly do file imports and transfers from one db to the other.
Update:
While checking the logs once again, I've noticed that the AppDomain gets marked for unload due to memory pressure, then unloaded and recreated, at a rate of about once every 30 minutes. During the last 2 occurrences of the stated problem, the AppDomain number went up to 250 and 264, respectively. Could this be a related issue?
This error could be due to a max worker threads setting that is too low. You can set this as:
EXEC sp_configure 'max worker threads', 0   -- 0 = let SQL Server size the thread pool automatically
GO
RECONFIGURE WITH OVERRIDE
GO
to raise the limit.
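To see whether the instance is actually approaching the thread limit before changing anything, a quick sketch using the standard DMVs (available in SQL Server 2005 and later):

-- Maximum number of worker threads the instance will create
SELECT max_workers_count FROM sys.dm_os_sys_info;

-- Number of workers currently allocated
SELECT COUNT(*) AS current_workers FROM sys.dm_os_workers;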
It's entirely possible that you are getting the error because too many connections are open; in other words, the error is the symptom rather than the cause. You should review your application(s) for proper closing of connections.
You can inspect all open connections in SQL Server using sp_who:
Provides information about current users, sessions, and processes in an instance of the Microsoft SQL Server Database Engine. The information can be filtered to return only those processes that are not idle, that belong to a specific user, or that belong to a specific session.
For more information on how to inspect open connections, read this thread on SO.
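As a quick way to spot a connection leak, you can also group the open sessions by host and application. A sketch using sys.dm_exec_sessions (SQL Server 2005 and later):

EXEC sp_who;   -- one row per session/process

-- Which host/application/login holds the most connections?
SELECT host_name, program_name, login_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY host_name, program_name, login_name
ORDER BY session_count DESC;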

SQL query is running 10 times faster on local desktop SQL Server Express than on SQL Server in Azure

We are currently running two instances of SQL Server. For development purposes, we run a local DB on a desktop PC in our office.
The PC has following stats:
8 GB Ram
AMD Athlon 5350 APU with Radeon(tm) R3, 2.05 GHz
64 Bit Windows 8.1
Microsoft SQL Server 2014 - 12.0.2000.8 (X64) Express Edition (64-bit)
HDD Seagate ST1000DM003 1 TB
The server is located in Azure as a Standard-Tier A3 VM running the pre-provided Windows Server 2012 R2 Datacenter image.
Now we are facing a problem: the exact same query runs 10 times faster locally on the desktop than on the server.
I connect to the PC with a locally installed Management Studio via TCP/IP over our local network. When I connect to the server, I use a Remote Desktop connection and start a local instance of Management Studio on the server.
I have already changed the connection mode from the default to TCP/IP on the server, which brings it down to 10 times slower; with the default connection it is 20 times slower. With named pipes the performance is even worse.
I have also tried rewriting the query and using different approaches; the Express version is always much faster than the server. We did not do any configuration or tuning on the Express installation, nor on the server side.
Any comments are very appreciated!
Best
Simon
You should add the following at the top of the query to see where the differences are:
SET STATISTICS TIME ON   -- report parse/compile and execution CPU/elapsed times
SET STATISTICS IO ON     -- report logical/physical reads per table
Does your local machine have an SSD? If so, that would explain it.
Try rebuilding the indexes that are used.
Update the database/table statistics. The execution plan can be the same, but with bad statistics I have often seen very low performance, especially if you do a lot of inserts/deletes.
You can see if something is wrong with SET STATISTICS IO ON. Look at the logical reads on tables, worktables/workfiles, etc. Check if it is different from the local server.
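For the rebuild/statistics suggestions above, a minimal sketch (dbo.YourTable is a placeholder; run it for the tables involved in the query):

ALTER INDEX ALL ON dbo.YourTable REBUILD;       -- rebuild all indexes on the table
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;  -- refresh statistics from a full scan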

MAXDOP is set to 0, however, all transactions run in serial

The server setting for MAXDOP is 0 and the cost threshold for parallelism is at its default of 5.
All transactions run against the server execute serially rather than across multiple cores/CPUs (it is a virtual machine with 8 CPUs).
Server is Windows Server 2008 R2 Enterprise x64 and SQL Server 2008 R2 Enterprise x64.
There is also a "default" and an "internal" workload group on the machine but both resource governors are not enabled and even disabled, they have MAXDOP setting of 0.
I've confirmed that SQL Server has the CPU affinity across all cores/cpus.
I'm kind of lost as to why this server refuses to run transactions in parallel, as opposed to all the other machines at this location. I came in noticing that many of the servers here had long CXPACKET wait times and many processes were causing wasted CPU overhead. Then, after reading my evaluation, the DBA said: "There is nothing wrong with MAXDOP 0 and a threshold of 5, because XXXX server never runs in parallel like you say."
I believe my evaluation of the site is still correct, but this darn server is debunking my evaluation.
I am open to all suggestions as to why this server, which is configured to run queries in parallel, does not.
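To rule out the obvious causes, the following sketch (standard catalog views and DMVs) shows the instance-level settings, the workload-group DOP cap, and whether all schedulers are online:

-- Instance-level parallelism settings
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('max degree of parallelism', 'cost threshold for parallelism');

-- Resource Governor workload-group DOP cap (0 = no cap)
SELECT name, max_dop FROM sys.dm_resource_governor_workload_groups;

-- User schedulers; all should be VISIBLE ONLINE
SELECT scheduler_id, status FROM sys.dm_os_schedulers WHERE scheduler_id < 255;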

SQL Script in VM taking long time for execution

I have an Execute SQL Script package that contains a script to insert about 150K records.
The problem is that when I execute the package in the virtual machine it takes approximately 25 minutes, while the same package on the physical machine takes about 2 minutes.
Question 1: Why does it take that much time to load the same data in the VM?
Question 2: How can I solve this performance issue?
The physical machine configuration is 4 GB RAM and a 250 GB HDD + Windows Server 2008 R2 + SQL Server 2008 R2 Standard Edition.
The virtual machine has the same configuration.
Update: The problem is with the SQL Server in the VM.
Question 1: Why does it take that much time to run the same script in the VM?
Question 2: How can I solve this performance issue?
The database schemas on the physical machine and the VM are identical. The other databases are also the same. No indexing was applied to those tables on either machine. The data types are the same. The hard disk, as I said, has the same configuration.
No RAID is used on either machine.
The physical machine has a 2.67 GHz quad-core CPU and the virtual machine has a 2.00 GHz quad-core CPU.
Version of SQL PM:
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)
Version of SQL VM:
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1) (Hypervisor)
I executed the script; the execution plans for both are the same, so there is no difference in plan.
The hardware is an HP ML350 machine.
There are almost 20 VMs on the same physical server, of which 7 are active.
There's an article about properly setting SQL's configuration for a VM implementation here: Best Practices for SQL Server. Below is an excerpt, though the article includes other tips and a good performance testing plan:
Storage configuration problems are the number one cause of SQL performance issues. Usually these problems arise because the DBA requests a virtual disk from the VI admin, and the VI admin places the VMDK on a LUN that may or may not meet the DBA's performance needs. For instance:
VMs' VMDK files placed on VMFS volumes without enough spindles.
Many VMDK files placed on a single VMFS volume which could use more spindles.
Database and log files placed on the same LUN which, you guessed it, could use more spindles.
This may be obvious to some, but this problem occurs again and again. The VI administrator should be aware of a few technical items that can help understand and avoid this problem:
Based on the IO demands of the DB files, a certain number of spindles should be guaranteed to this file. This means that its VMDK must be placed on a VMFS volume that accounts for the SQL Server's demands and all of the other demands on that volume.
Mixing sequential activity (such as log file updates) and random activity (such as database access) results in random behavior. This means that the LUN configuration in the pre-virtual physical environment may not be sufficient for the consolidated environment. This is discussed some in Storage Performance: VMFS and Protocols.
When storage isn't meeting the SQL Server's demands, the device latency or kernel latency (queueing time) will increase. Read up on these counters in Storage Performance Analysis and Monitoring.
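From inside SQL Server you can check whether storage is the bottleneck by looking at per-file latency and comparing the VM against the physical machine. A sketch using sys.dm_io_virtual_file_stats (figures are cumulative since the last restart):

-- Average read/write latency (ms) per database file
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;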
The most common cause for this problem is a lack of RAM. Having everything set up on a small 4 GB RAM machine is your problem.
When you try to load those 150K rows into memory (remember, everything that happens in SSIS happens in memory), a lot of those rows end up being handled by your pagefile.
The pagefile on your VM is a lot slower than the one on your physical machine.
To solve this, increase the amount of RAM on your virtual machine.
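The SSIS process itself is easiest to watch in Task Manager or Performance Monitor, but on the SQL Server side you can confirm memory pressure before resizing the VM. A sketch using sys.dm_os_process_memory (SQL Server 2008 and later):

-- Is the SQL Server process short on memory or paging heavily?
SELECT physical_memory_in_use_kb,
       page_fault_count,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;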
I have a similar problem.
Two client machines (one physical, one virtual) execute a batch using SQLCMD. This batch calls a stored procedure on a physical server (so it is not a memory problem, since the processing happens only on the server side).
The batch executed from the physical machine takes 20 minutes. The batch executed from the virtual machine takes 1 hour and 20 minutes.
Using SQL Profiler I noticed that in the slow execution there is a wait of type ASYNC_NETWORK_IO.
Probably the virtualized network layer is not optimized.
Could you run SQL Profiler and check whether you see the ASYNC_NETWORK_IO wait type?
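If you don't want to run a Profiler trace, the cumulative wait statistics give the same hint. A sketch (the counters accumulate since the last instance restart or reset):

-- Time the instance has spent waiting on the client/network
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'ASYNC_NETWORK_IO';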

Microsoft SQL 2005 active cluster node has 100% CPU load after clustering

Before moving to the SQL Server 2005 cluster we had, on average, 60% CPU load. After moving to an active/passive cluster (with exactly the same hardware), the load on the active node's CPU reaches 100% and after a while time-outs start coming from our web application. Any ideas what could be the cause?
Additional info:
OS: Windows Server 2008 Enterprise;
SQL: SQL Server 2005 SP3 Enterprise;
Both nodes have exactly the same hardware
OK, it's SO STRANGE, but installing SQL Server 2005 SP3 Cumulative Update 6 helped!