I will preface this by stating we are using MS SQL Server 2008 R2.
We're having an issue where, while our database backups are running, SQL Server takes all of the available memory and never releases it. Our normal high-water mark for memory usage is about 60%. When the backup job runs, usage climbs to 99% and never comes back down unless we restart the SQL Server service. This leads me to two questions:
Dealing with memory allocation: is there a way to accurately limit the memory usage of SQL Server? We are limiting the "Maximum server memory" value to 85%, but it consistently exceeds that value.
What is the best method of backing up the database? We currently rely on our provider to maintain the database backups, and it seems like the "home grown" method they use, via a stored proc and commands, is the cause of the memory issues, although it works for their other customers. Should we look at using Maintenance Plans as a replacement?
Any help with this would be great.
Is there a way to accurately limit memory usage of SQL Server?
Yes, there is. See How to: Set a Fixed Amount of Memory (SQL Server Management Studio):
Use the default settings to allow SQL Server to change its memory
requirements dynamically based on available system resources. The
default setting for min server memory is 0, and the default setting
for max server memory is 2147483647 megabytes (MB). The minimum amount
of memory you can specify for max server memory is 16 MB.
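Note that "max server memory" is set in megabytes, not as a percentage, so an 85% target has to be translated into an MB figure yourself. A minimal sketch via sp_configure (the 12288 MB value is only an example for illustration):

-- Cap the buffer pool at a fixed amount; the value below is only an example
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;

Also keep in mind that on 2008 R2 this setting governs the buffer pool rather than every allocation; the usual explanation for backups blowing past the cap is that backup buffers are allocated outside the buffer pool on that version, so total process memory can still exceed the figure you set.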
What is the best method of backing up the database?
You can get the answer here: Select the Most Optimal Backup Methods for Server
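For reference, a plain native full backup needs nothing more exotic than BACKUP DATABASE. A minimal sketch, with a hypothetical database name and path (COMPRESSION requires 2008 R2 Standard or higher):

-- Hypothetical database name and target path
BACKUP DATABASE [YourDb]
TO DISK = N'D:\Backups\YourDb_full.bak'
WITH CHECKSUM, COMPRESSION, STATS = 10;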
On a normal SQL Server we can tell it how to grow. The default is 10% each time, so the database grows by 10% of its current size. Do we have any insight into how an Azure SQL database grows, other than that it grows automatically?
Would Azure SQL Server allow us to configure the database to grow in fixed chunks, e.g. 20 MB?
thanks,
sakaldeep
You can use PowerShell, T-SQL, the CLI, or the portal to increase or decrease the maximum size of a database, but Azure SQL Database does not support setting autogrow. You can vote for this feature to be made available in the future on this URL.
If you run the following query on the database, you will see the growth is set to 2048 KB.
SELECT name, growth
FROM sys.database_files;
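If you do need to raise the cap, the T-SQL route looks roughly like this - a sketch, where the database name and size are placeholders and the valid MAXSIZE values depend on your service tier:

-- Placeholder database name and size
ALTER DATABASE [YourAzureDb] MODIFY (MAXSIZE = 250 GB);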
I have a few questions about SQL Server 2008.
How can I check whether the memory caching feature in SQL Server 2008 is enabled? Is there a variable to turn memory caching on or off? I just want to make sure it is on.
Also, when does SQL Server decide that this cached data is outdated, so that it dumps it and performs a hard disk read again?
Finally, assume I have this query: SELECT * FROM table1 WHERE id = 10. After the record is cached in memory and a process tries to read it, does SQL Server place a shared lock on that record in memory, or are there no locks in memory?
The short answer is no, you can't turn off memory caching at the server level. The engine takes care of memory caching for you, and it is very aggressive in how it caches; you basically want all queries answered from memory and not from disk. The buffer pool is orders of magnitude faster than disk access.
Check out these articles which explain how the caching works - known as buffer pool in SQL Server speak.
https://dba.stackexchange.com/questions/43572/how-to-see-what-is-cached-in-memory-in-sql-server-2008
Also anything by Paul Randal is pretty definitive - http://www.sqlskills.com/blogs/paul/inside-the-storage-engine-whats-in-the-buffer-pool/
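If you want to see what the buffer pool is holding, the sys.dm_os_buffer_descriptors DMV exposes it. A quick sketch that sums cached pages per database (each cached page is 8 KB):

-- Buffer pool usage per database; each cached page is 8 KB
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;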
In terms of locking, this is a commonly asked question on SO. In simple terms, the default behaviour is that SQL Server uses shared locks for readers, so multiple readers can access the data at the same time. SQL Server uses a dynamic locking strategy which escalates locks as needed, typically from row or page level to table level. However, this is done dynamically and is handled automatically by the SQL Server engine. Here is an article on this topic - https://technet.microsoft.com/en-us/library/ms189286(v=sql.105).aspx
Also, this question has been asked a lot on SO before, so check out this link: What are row, page and table locks? And when they are acquired?
and others
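If you want to see those shared locks for yourself, here is a rough sketch. The table name is hypothetical, and the HOLDLOCK hint just keeps the shared lock held long enough to inspect it with sys.dm_tran_locks:

BEGIN TRANSACTION;

-- HOLDLOCK keeps the shared lock until the transaction ends
SELECT * FROM dbo.table1 WITH (HOLDLOCK) WHERE id = 10;

-- Inspect the locks held by this session
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

ROLLBACK TRANSACTION;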
I've hit the 4gb limit in my database and I need a bit of breathing space before upgrading it to a SQL Server 2008 database and I just wondered if increasing auto growth would give me more space inside the database beyond the 4GB limit that SQL Server 2005 Express imposes.
I know the pitfalls of Auto Growth with regards to performance as data will be fragmented across the disk and thus make querying it slower but is there an advantage in granting it say 50/100mb of room for auto growth? Whilst the migration process is planned out or an alternative is sought.
Disk space is not an issue and it would only be a temporary measure anyway.
No. Express Edition will not grow a database over its size limit, nor attach or restore one, no matter what the errorlog viewer tells you. Not even temporarily.
But you have an easy solution: SQL Server 2008 R2 Express Edition has raised the limit to 10 GB.
No, it won't. The SQL Server Express edition is for creating database-oriented applications without the need to purchase an official SQL Server license. However, there are more reasons than just the file size limit not to use it in a production environment.
No, once you've reached 10,240 MB (10 × 1,024 MB) you're out of luck (technically not exactly 10 GiB).
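To see how close you actually are to the cap (which applies to the data file, not the log), here is a quick check - a sketch, run inside the database in question:

-- size is reported in 8 KB pages
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;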
I have a database for a piece of proprietary software that resides on a SQL Server 2005 instance shared with some databases for C# apps I developed. I'm having an issue with some of the proprietary software's stored procedures eating up resources. Is there a way for me to limit the CPU usage of a particular database? I've advocated moving the DBs to a different server / instance, but I need a solution that can tide me over until then.
You can use Resource Governor with a classifier function that routes sessions to a workload group based on the database name, and then limit CPU and memory for that workload group's resource pool.
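A rough sketch of what that looks like - all names and limits here are hypothetical, and note that Resource Governor requires SQL Server 2008 Enterprise or later, so it would only apply after the move off the 2005 instance. The classifier function has to live in master:

-- Resource pool and workload group with hypothetical names and limits
CREATE RESOURCE POOL LimitedPool WITH (MAX_CPU_PERCENT = 30, MAX_MEMORY_PERCENT = 30);
CREATE WORKLOAD GROUP LimitedGroup USING LimitedPool;
GO

USE master;
GO

-- Classifier routes sessions whose connection targets the proprietary database
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname = N'default';
    IF ORIGINAL_DB_NAME() = N'ProprietaryDb'
        SET @grp = N'LimitedGroup';
    RETURN @grp;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;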
First off, I'm not even sure this is possible. One of my co-workers is requesting that I help him retrieve performance metrics from a Microsoft SQL Server 2008 database using a remote connection and a SQL query.
Specifically, we are looking for stuff like database memory usage and CPU usage. Is this stored in a table somewhere that I can easily just SELECT it from?
I've tried googling this but mainly all I come up with are ads for products that do SQL performance monitoring. I realize we could use perfmon in Windows to get this data, but that's not what he's looking for. Also remote WMI gathering of perfmon metrics is out. It has to be a remote SQL query - some product limitation I won't get into in detail. :)
Even a definitive "This is not possible" is a valid answer.
Thanks in advance.
There is DBCC MEMORYSTATUS to get a ton of memory information.
DBCC statements in general are full of useful information about your SQL Server.
Here is an SO answer on how you can "build" your own taskmon for SQL Server.
Another great source of information about server state is the Dynamic Management Views.
Knock yourself out.
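For example, sys.dm_os_performance_counters exposes the SQL Server perfmon counters through a plain query, and sys.dm_os_process_memory (2008 and later) reports process memory. A sketch of both - the counter names below are just a few common ones, pick whatever your co-worker needs:

-- SQL Server's own perfmon counters, queryable over a normal connection
SELECT [object_name], [counter_name], [cntr_value]
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Total Server Memory (KB)',
                       N'Target Server Memory (KB)',
                       N'Page life expectancy',
                       N'Batch Requests/sec');

-- Memory used by the SQL Server process itself
SELECT physical_memory_in_use_kb / 1024 AS sql_memory_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;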
To get the sort of info you want, you'll need to use SQL Server's performance/system monitor counters. See the MSDN article Monitoring Resource Usage (System Monitor) for details:
If you are running Microsoft Windows server operating system, use
the System Monitor graphical tool to measure the performance of
SQL Server. You can view SQL Server objects, performance counters,
and the behavior of other objects, such as processors, memory, cache,
threads, and processes. Each of these objects has an associated
set of counters that measure device usage, queue lengths, delays,
and other indicators of throughput and internal congestion.
[And yes... you can access performance counters remotely (assuming you have the requisite permissions).]