SQL Server 2016 RESERVED_MEMORY_ALLOCATION_EXT

I have a SQL Server 2016 Standard edition instance and often see the
RESERVED_MEMORY_ALLOCATION_EXT
SOS_SCHEDULER_YIELD
MEMORY_ALLOCATION_EXT
wait types. However, my server's CPU doesn't exceed 20%, and there is always about 75 GB of freeable memory.
I set MAXDOP to 3, but there was no change. I didn't have any problems when I was using Enterprise edition with the same DB and the same queries.
Thanks for any help.
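A first step in a case like this is usually to check which waits actually dominate over time, rather than which ones happen to be visible. A sketch using the standard sys.dm_os_wait_stats DMV (available in SQL Server 2016); the exclusion list is just an example and is not exhaustive:

```sql
-- Top waits since the last restart (or since the stats were cleared).
-- RESERVED_MEMORY_ALLOCATION_EXT and MEMORY_ALLOCATION_EXT are usually
-- benign allocation waits; sustained SOS_SCHEDULER_YIELD can point to
-- CPU pressure inside or outside the VM.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'XE_TIMER_EVENT', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```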

We experienced the same thing today, and it was due to a virtualization issue.
Another VM on the same physical host was running at 100% CPU, and due to overcommitment and a missing CPU reservation, the database VM had to wait for CPU cycles. This is almost impossible to track down from within the database VM.
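From inside the guest you can at least see how much CPU time is going to SQL Server versus everything else, via the scheduler-monitor ring buffer. A sketch; the XML shape of the record is an implementation detail that has been stable across recent versions, but treat it as an assumption:

```sql
-- Recent per-minute CPU utilization snapshots recorded by SQL Server.
-- If both sql_cpu_pct and idle_pct are low, something else is eating
-- the CPU - either another process in the guest or, in an overcommitted
-- host, the hypervisor itself (which only shows up at the host level).
SELECT TOP (10)
       record.value('(./Record/@id)[1]', 'int') AS record_id,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS idle_pct
FROM (
    SELECT CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
) AS rb
ORDER BY record_id DESC;
```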

Related

Diagnosing SQL speeds

I just migrated a production DB onto some new hardware (in a sandbox setting) because they were suffering from poor IO performance.
Testing a simple select * from TABLE query on one of the large tables (123M rows, 24 columns) takes about 20 minutes. I can see that the query maxes out a single core on the SQL Server machine, but memory consumption and disk I/O are non-existent.
In the resource monitor there are 0 waits, other than Network I/O which is at 700-800.
The query is being run from a local install of MSSQL Mgmt studio.
Data file I/O is 0 in the activity monitor.
Wait time in the query is about double the active CPU time.
I am not sure if this is a problem that I need to solve, or that's just the way it works.
I was actually testing the speed of the query directly on the server vs. it being called by my users app - to diagnose if an ODBC driver might be holding things up, as reading from the database took 98% of the scripts time.
I ran a select * query and was expecting it to complete much faster than it did.
EDIT: It's SQL Server 2017.
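The symptoms described above (no disk or memory activity, Network I/O waits, wait time roughly double the CPU time) are consistent with the server waiting on the client: SSMS cannot render 123M rows as fast as the server produces them. Two quick checks, sketched against standard DMVs; the table name is a placeholder:

```sql
-- If ASYNC_NETWORK_IO dominates, SQL Server is waiting for the client
-- (here: SSMS consuming the result set), not on its own I/O.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'ASYNC_NETWORK_IO';

-- A rough server-side timing that takes the client out of the picture:
-- COUNT(*) still reads the table but returns a single row.
SELECT COUNT_BIG(*) FROM dbo.BigTable;  -- placeholder table name
```

If the COUNT finishes in a fraction of the 20 minutes, the bottleneck is result-set transfer and rendering, not the storage you migrated to.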

Analyzing the cause of an MSSQL error without real-time data

I am looking for a way to analyze what caused these errors on an MSSQL server:
A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 499352, committed (KB): 1024012, memory utilization: 48%.
2021-12-05 18:49:42.63 spid1504 Error: 8645, Severity: 17, State: 1.
2021-12-05 18:49:42.63 spid1504 A timeout occurred while waiting for memory resources to execute the query in resource pool 'internal' (1). Rerun the query.
This happens randomly (I haven't found any pattern for why, when, or what causes it) a few times a month. I have already set up some monitoring counters (https://learn.microsoft.com/en-us/sql/relational-databases/performance-monitor/monitor-memory-usage?view=sql-server-ver15) in perfmon, but I don't think that will solve my problem.
My problem is that I don't know what is causing this SQL Server memory consumption. I suspect it is some query from an IIS website hosted on the same server. It is even worse because it is a production MSSQL server sharing the machine with the IIS server, and whenever this happens, the admins immediately restart the whole server (because it is PROD and it needs to be working). So I don't have time to look around at what is happening on the server when it occurs. In reality, I don't even know (nobody knows) when it will happen again, so I have to rely on historical data only. Any thoughts on how I can find reliable information about what is causing this?
Server is hosted as VM with this setup:
Windows Server 2019 Datacenter
Microsoft SQL Server 2017 (Maximum server memory (in MB): 32768)
16 vcpus
64 GB RAM
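Since the server gets restarted before anyone can look at it, capturing state at the moment of failure is the practical option. A sketch of an Extended Events session that records all errors of severity 17 and above (which covers error 8645) together with the offending SQL text, written to a file target that survives restarts; the file name is a placeholder:

```sql
-- Restart-proof capture of high-severity errors and their statements.
CREATE EVENT SESSION [capture_memory_errors] ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.sql_text, sqlserver.client_app_name,
            sqlserver.client_hostname, sqlserver.database_name)
    WHERE severity >= 17
)
ADD TARGET package0.event_file (
    SET filename = N'capture_memory_errors.xel',  -- placeholder path
        max_file_size = 50, max_rollover_files = 4
)
WITH (STARTUP_STATE = ON);  -- session starts automatically with the instance

ALTER EVENT SESSION [capture_memory_errors] ON SERVER STATE = START;
```

After the next incident, the .xel files can be read offline (e.g. with sys.fn_xe_file_target_read_file or in SSMS), so the admins can restart the box without destroying the evidence.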

SQL Server 2005 "Pin" data in Memory

We're running our application's database on a dedicated box running only SQL Server 2005.
This DB server has 32 GB of RAM... and the database file itself is only 6 GB.
I'd like to force several of the heavily read/queried tables into the SQL Memory buffer to increase speed.
I understand that SQL server is really good about keeping necessary data cached in memory once it's read from disk... But our clients would probably prefer their query running quickly the FIRST time.
"Fastest Performance the Second Time" isn't exactly a product highlight.
Short of the old DBCC PINTABLE command... any thoughts?
I've written a "CacheTableToSQLMemory" proc which loops through all of a table's indexes (clustered & nonclustered), performing a "SELECT *" into a temp table. I've scheduled SQL Agent to run a "cache lots of tables" proc every 15 minutes in an attempt to keep pages in memory.
It works to a large extent, but even after I cache all of a query's relevant tables, running the query still increases the count of cached pages for that table; it's then faster the 2nd time.
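For reference, a minimal sketch of that kind of warm-up procedure, assuming the hypothetical name CacheTableToSQLMemory and a single-table signature; it simply scans every index so its pages land in the buffer pool, without materializing a temp table:

```sql
-- Hypothetical warm-up proc: touch each index of a table so its pages
-- are read into the buffer pool. COUNT_BIG(*) forces a full scan of the
-- hinted index without shipping any rows to the client.
CREATE PROCEDURE dbo.CacheTableToSQLMemory
    @SchemaName sysname,
    @TableName  sysname
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @IndexName sysname, @sql nvarchar(max);

    DECLARE idx CURSOR LOCAL FAST_FORWARD FOR
        SELECT i.name
        FROM sys.indexes AS i
        WHERE i.object_id = OBJECT_ID(QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName))
          AND i.type IN (1, 2);  -- 1 = clustered, 2 = nonclustered

    OPEN idx;
    FETCH NEXT FROM idx INTO @IndexName;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'SELECT COUNT_BIG(*) FROM '
                 + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName)
                 + N' WITH (INDEX(' + QUOTENAME(@IndexName) + N'))';
        EXEC sys.sp_executesql @sql;
        FETCH NEXT FROM idx INTO @IndexName;
    END
    CLOSE idx; DEALLOCATE idx;
END;
```

Note this only warms the index pages themselves, which may explain why a real query still pulls additional pages in afterwards (e.g. lookups into pages a narrow index scan never touched).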
thoughts?
We're running PAE & AWE. SQL Server is set to use between 8 and 20 GB of RAM.
The x86 bottleneck is your real issue. AWE can serve only data pages, as they can be mapped in and out of the AWE window, but every other memory allocation has to cram into the 2 GB of process virtual address space. That includes every thread stack, all the code, all the data currently mapped 'in use' from AWE and, most importantly, every single cached plan, execution plan, cached security token, cached metadata, and so on and so forth. And I'm not even counting CLR; I hope you don't use it.
Given that the system has 32 GB of RAM, you can't even try /3GB to see if that helps, because PAE is reduced to a 16 GB total in that case, which would make half your RAM invisible...
You really, really have to move to x64; AWE can only help so much. You could collect performance counters from the Buffer Manager and Memory Manager objects and monitor sys.dm_os_memory_clerks to get a better picture of how the instance's memory is behaving (where the memory in use goes, who is consuming it, etc.). I don't expect that will solve the issue, but I do expect it will give you enough information to make a case for the upgrade to x64.
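A sketch of the kind of memory-clerk query meant here. The column names follow the SQL Server 2005 shape of the DMV (single_pages_kb / multi_pages_kb); later versions consolidated these into pages_kb:

```sql
-- Top memory consumers by clerk type on SQL Server 2005.
-- multi_pages_kb is the interesting column on an x86/AWE box: those
-- allocations cannot live in AWE and must fit in the 2 GB address space.
SELECT TOP (10)
       type,
       SUM(single_pages_kb)  AS single_pages_kb,   -- buffer-pool pages
       SUM(multi_pages_kb)   AS multi_pages_kb,    -- stuck in the 2 GB VAS
       SUM(awe_allocated_kb) AS awe_allocated_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY SUM(multi_pages_kb) DESC;
```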
There is no way to pin tables in memory in SQL Server 2005. If SQL Server is dropping the tables from memory, it's because there is memory pressure from other parts of the system. Since your database is only 6 GB, it should stay in memory... provided that there are no other databases on the server.
There are a few things you can do to try to keep data in memory, though. Depending on the patch level and edition of your SQL Server installation, you might be able to make use of the lock pages in memory functionality to ensure that SQL Server's memory never gets paged out.
You can also change the memory allocation on the server to be a fixed size. Unless there's something else on your database server, you can set SQL Server's min and max memory to the same value. This won't necessarily prevent this from happening in the future (it's a function of how SQL Server is supposed to work) but it certainly won't hurt to set your SQL Server to use a fixed amount of memory (if you have no other memory concerns).
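A sketch of what that fixed-size configuration looks like with sp_configure; the 12288 MB figure is only an example and should be sized to leave room for the OS and everything outside the buffer pool:

```sql
-- Make the memory options visible.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Fix SQL Server's memory at 12 GB (example value) by making min = max.
EXEC sys.sp_configure 'min server memory (MB)', 12288;
EXEC sys.sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
```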

Server hardware requirements for SSRS

As the title says, I'm trying to figure out how much RAM is needed to generate and export to Excel a large report using SQL Server Reporting Services on Windows Server 2003.
It is not an option to upgrade it to SS2008 and also not an option to export to CSV.
Strictly from a hardware point of view what is a good configuration for a high load server?
(CPU's, RAM, Storage)
You've got problems - the maximum memory size that SSRS 2005 can handle is 2 GB. (There is a dodge to enable it to handle 3 GB, but it's not recommended for production servers.)
SSRS2008 has no such limitation, which is why the normal response in this situation is to recommend an upgrade to 2008.
If your large report won't run on a machine with 2GB available, it doesn't matter how much RAM (or other resources) you put on your server - the report still won't run.
Your only option (given the restrictions stated above) would be to break the report up into smaller pieces and run them one at a time.

Limiting the RAM consumption of MS SQL SERVER

I just rolled a small web application into production on a remote host. After it ran a while, I noticed that MS SQL Server started to consume most of the RAM on the server, starving out IIS and my application.
I have tried changing the "max server memory" setting, but eventually the memory usage begins to creep above it. Using the activity monitor I have determined that I am not leaving open connections or anything obvious, so I am guessing it's in cache, materialized views, and the like.
I am beginning to believe that this setting doesn't mean what I think it means. I do note that if I simply restart the server, the process of memory consumption starts over without any adverse impact on the application - pages load without incident, all retrievals work.
There has got to be a better way to control SQL Server, or to force it to release some memory back to the system?
From the MS knowledge base:
Note that the max server memory option only limits the size of the SQL Server buffer pool. The max server memory option does not limit a remaining unreserved memory area that SQL Server leaves for allocations of other components such as extended stored procedures, COM objects, non-shared DLLs, EXEs, and MAPI components. Because of the preceding allocations, it is normal for the SQL Server private bytes to exceed the max server memory configuration.
Are you using any extended stored procedures, .NET code, etc that could be leaking memory outside of SQL Server's control?
There is some more discussion on memory/address space usage here
Typically I recommend setting max server memory to a couple of GB below the physical memory installed.
If you need to run IIS and an application server installed on the server then you'll want to give SQL Server less memory.
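For example, on a box that also runs IIS and the application, the cap might look like this; the 4096 MB figure is only illustrative, and per the KB excerpt above it bounds the buffer pool rather than the whole process:

```sql
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer pool so IIS and the app keep enough RAM; thread stacks,
-- extended procs, COM objects, etc. still live outside this limit.
EXEC sys.sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```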