We have 8 instances, of which 6 are SQL Server 2014 and 1 is SQL Server 2017, and all servers run Windows Server 2012 R2. Min and max server memory are set correctly for SQL Server 2014 and 2017. However, one of our DBAs enabled Lock Pages in Memory for the SQL Server 2017 instance; the rest do not have this setting enabled.
One instance on the same box sometimes runs into the error "insufficient memory to process the thread".
What is the recommendation for Lock Pages in Memory on a server with multiple instances? Should we enable it or not, even when min and max server memory are set correctly?
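For reference (not part of the original setup, just a way to verify), you can check whether Lock Pages in Memory is actually in effect on a given instance, rather than merely granted to the service account, by querying sys.dm_os_process_memory; a nonzero locked_page_allocations_kb means the instance is allocating locked pages:
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,
       large_page_allocations_kb
FROM sys.dm_os_process_memory;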
I use SQL Server 2016. I have a web server which runs IIS.
Sometimes during the day, IIS unexpectedly generates a very large number of SQL Server sessions. When I execute the sp_Who3 procedure, I see many more result rows than there should be. For example, my database transactions should be around 45-80, but they suddenly jump to above 400. What can cause this issue? How can I troubleshoot this problem?
When I reset the IIS server, the database returns to its normal level, but after roughly 10 or 15 minutes the problem happens again.
sp_Who3 is an open-source procedure.
There is no error message, just a high number of SQL Server processes.
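Since there is no error message, a reasonable first step (a sketch, not something from the original post) is to group the active sessions by host and program to see where the extra connections come from:
SELECT host_name, program_name, login_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY host_name, program_name, login_name
ORDER BY session_count DESC;
If most of the sessions come from the IIS host, that points at the web application (for example connection-pool exhaustion or blocking) rather than at SQL Server itself.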
We recently moved from Windows Server 2008 R2 + SQL Server 2008 R2 Standard to Windows Server 2012 R2 + SQL Server 2016 Standard. In terms of hardware, the old server was a 1271v3 with 24 GB of memory and the new server is a 1271v6 with 32 GB. The rest of the hardware is the same on both servers. The database was transferred using backup and restore.
Although everything is working on the new server with no errors, it is significantly slower than the original server and we are even seeing some deadlocks.
If everything is the same or newer/better, how can this be?
The problem was not SQL Server at all. The new server was using the default Windows power plan, "Balanced". Switching the plan to "High Performance" produced a performance increase of up to 300% on the more complex, long-running queries.
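As a sketch, you can even check the active power plan from T-SQL via the undocumented xp_regread procedure (treat the registry path as an assumption for this Windows version; 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c is the GUID of the built-in High Performance plan):
DECLARE @scheme nvarchar(64);
EXEC master.dbo.xp_regread
    @rootkey    = 'HKEY_LOCAL_MACHINE',
    @key        = 'SYSTEM\CurrentControlSet\Control\Power\User\PowerSchemes',
    @value_name = 'ActivePowerScheme',
    @value      = @scheme OUTPUT;
-- 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c = High Performance
SELECT @scheme AS active_power_scheme;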
I am working on a project that is migrating some legacy SQL Server 2000 instances to SQL Server 2012. As the word legacy suggests, these databases are used by some VB-based desktop applications. There are around 4,000+ users and the application is rated GOLD (meaning it has to be up 24x7).
Summary again
Current state: Installed VB desktop applications (exe) -> SQL Server 2000
Target state: Installed VB desktop applications (exe) -> SQL Server 2012
The application uses a config file that contains the details of the SQL Server it connects to. So once the data moves to the new SQL Server, this config file needs to be updated with the new server details.
I have been told that SQL Server 2000 can't be migrated directly; it should first go to SQL Server 2008 and then to SQL Server 2012. Please correct me if this understanding is not right.
My problem is around the implementation plan for this task in production. I can't move all users in one go, meaning I would migrate 100 users first, then a few hundred more, and finally the rest. That means some users might start using SQL Server 2012 while others are still working with SQL Server 2000. The reason I don't want to do everything in one go is that it's too risky in case of any glitch, and because the application has to be up 24x7 it's not possible to bring the applications down and update the config files on each user's desktop.
But if I let 2000 and 2012 run together (say for 1 week until all users move), these databases will go out of sync, and I don't think they can be merged later because both databases may have assigned the same primary keys to different data.
I can't bring the application down and take a 4-hour outage to move all users to the new databases in one shot, because the application has to be up 24x7.
Can anyone recommend an approach that companies generally take to migrate SQL Server without an outage, as described above, while keeping the data consistent?
The easiest way to handle this is to create a new 2012 instance and create the database from a restore of the 2000 database (note that 2012 cannot restore a 2000 backup directly, so the restore has to hop through a 2008/2008 R2 instance first, as you suspected). Then set up replication between the two databases so that changes in either database are published to the other; that way your primary keys stay in sync. You will have to be down for a short period while you do the backup and restore to move the data, but assuming the two servers are co-located it should only be a matter of minutes. Then, once all your users have been migrated, just turn off the 2000 server.
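A minimal sketch of the initial move (database, file, and logical names below are placeholders; note the intermediate hop described above):
-- On the SQL Server 2000 instance:
BACKUP DATABASE LegacyDB TO DISK = 'D:\Backups\LegacyDB.bak';

-- Restore on an intermediate 2008/2008 R2 instance, back it up again there,
-- then restore that second backup on the 2012 instance:
RESTORE DATABASE LegacyDB
FROM DISK = 'D:\Backups\LegacyDB.bak'
WITH MOVE 'LegacyDB_Data' TO 'E:\Data\LegacyDB.mdf',
     MOVE 'LegacyDB_Log'  TO 'F:\Log\LegacyDB.ldf';

-- Once on 2012, raise the compatibility level:
ALTER DATABASE LegacyDB SET COMPATIBILITY_LEVEL = 110;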
The observed problem and error message is:
The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions.
Environment: SQL Server 2005 Standard Edition on Windows Server 2003 Standard Edition, virtualized in a VM with 8 GB RAM.
Automated applications process the data, reading raw data and writing results to the database. These applications get the error message and crash on it. (There are also scheduled database backup and index maintenance jobs.)
The same error was never observed on a similar system with SQL Server 2005 Enterprise Edition and Windows 2003 Enterprise Edition.
I have already searched the web and found some answers, but e.g.
SQL Server cannot obtain a LOCK resource at this time - What to do?
was not helpful in my case.
One source suggested checking:
SELECT request_session_id, COUNT(*) AS num_locks
FROM sys.dm_tran_locks
GROUP BY request_session_id
ORDER BY COUNT(*) DESC;
One session came up with 10 locks.
The memory and lock settings are both at their default values.
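For reference, the lock configuration can be confirmed with sp_configure; a configured value of 0 for 'locks' means SQL Server manages lock memory dynamically, which is the default:
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'locks';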
My current idea is to purge most of the data that is old and can be removed.
Does anybody have any other ideas for dealing with the lock resource problem? What exactly is its cause? Does SQL Server Standard Edition allow fewer resources, i.e. is the problem related to the SQL Server edition? How can the issue be fixed?
The autoshrink function was holding multiple locks on the database catalog.
The autoshrink also fragmented the primary key of a table after the primary key had been rebuilt.
Switching off the autoshrink function solved the problem.
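For anyone hitting the same thing, autoshrink is a per-database setting; a quick sketch (the database name is a placeholder):
-- Find databases with autoshrink enabled:
SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1;

-- Disable it:
ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF;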
I have ACT! Professional for Windows v11.1, with the latest SQL Server service pack (SP3), and there is an apparent memory leak on the server.
After a restart, the ACT! SQL Server instance (sqlservr.exe) consumes almost all the available memory on the server. We have added more memory to the server (it is running under Hyper-V), but it continues to consume it all.
I have not been able to connect to the SQL Server instance using Management Studio in order to limit the amount of RAM it is allocated.
Are there any potential solutions for this? Or should I continue to restart the services?
Not a memory leak, but standard behavior.
How to configure SQL Server max memory usage
SQL SERVER 2008 - Memory Leak while storing Millions of records
+ this excellent blog entry
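Since the linked answers boil down to capping max server memory, here is a minimal sketch (the 6144 MB value is purely an example; size it to leave headroom for the OS). If Management Studio cannot get in because memory is exhausted, the dedicated administrator connection (sqlcmd -A) usually still can:
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;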
Other than that...
x86 or x64?
Server RAM?
PAE/AWE/3GB switch
DB size?
(left field) Trace flag 836 (makes 32-bit memory behave as it did in SQL Server 2000)