I am using SQL Server 2014 Developer edition.
SQL Server is running on a server and it occupied around 60 GB of memory during execution; after the execution completed, it did not release the memory.
Please advise on this.
I want to reduce it back to a normal level.
Why you might want to limit SQL Server memory
It's common for SQL Server to have several instances on a machine that is not a dedicated SQL Server machine, but where an embedded SQL Server or LocalDB is installed as part of an operating system component or as part of an application.
In these circumstances it is appropriate to allocate memory to each instance so they don't tread on each other's toes.
In addition, some applications make heavy use both of a database and of memory- and processor-intensive application processing. Where there is a lot of transfer to and from the database, it can make sense to locate the DB on the same machine as the application to reduce the IO cost, as they can then use a shared memory connection. In this case, again, you will have to decide how much memory SQL Server is allowed to use.
Development machines are a perfect example of this - typically you will have an installation of SQL Server development edition, and in addition to that any database projects will spin up an instance of SQL Server LocalDB.
How to do it
For your Dev machine you want to keep this value relatively low. I'd suggest 512 MB, but if you feel that's too low, 1024 MB shouldn't be a problem.
To reduce SQL Server memory usage you can run the following SQL:
EXEC sys.sp_configure N'show advanced options', N'1' ;
RECONFIGURE WITH OVERRIDE;
EXEC sys.sp_configure N'max server memory (MB)', N'512';
RECONFIGURE WITH OVERRIDE;
EXEC sys.sp_configure N'show advanced options', N'0';
RECONFIGURE WITH OVERRIDE;
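If you want to confirm the change actually took effect, you can check the configured and running values (standard sys.configurations view; the option name matches what was set above):
-- Verify that 'max server memory (MB)' is both configured and in use
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';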
Reducing usage of the LocalDB instances is similar; you simply connect to each LocalDB instance and run the same commands.
For example, to set all LocalDB instances to use at most 256 MB:
$MaxServerMemory = 256
# Enumerate every LocalDB instance and run the same sp_configure commands against each one
@( & sqllocaldb info ) | %{
    "Reconfiguring (LocalDB)\$_"
    & sqlcmd.exe -E -S "(LocalDB)\$_" -Q "EXEC sys.sp_configure N'show advanced options', N'1' ;RECONFIGURE WITH OVERRIDE;EXEC sys.sp_configure N'max server memory (MB)', N'$MaxServerMemory';RECONFIGURE WITH OVERRIDE;EXEC sys.sp_configure N'show advanced options', N'0'; RECONFIGURE WITH OVERRIDE;" *>&1
    ""
}
Related
I am trying to schedule a SQL query in SQL Server 2014 Management Studio.
For some reason I am unable to find SQL Server Agent (i.e. to expand Jobs to create Schedule).
Is there a new way to schedule SQL query on SQL Server 2014?
Thanks
You can use Windows Task Scheduler to run your SQL query.
Follow the link below for creating and deploying the task.
Click here to create a Windows task
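As a rough sketch of what such a task might run (the instance name, file paths, and table in the script are all hypothetical), the task's action is typically a sqlcmd command that executes a .sql file:
-- C:\Scripts\nightly.sql : the file the scheduled task executes
-- Task Scheduler action (for illustration only):
--   sqlcmd -S .\SQLEXPRESS -E -i "C:\Scripts\nightly.sql" -o "C:\Scripts\nightly.log"
INSERT INTO dbo.JobLog (RunDate)   -- hypothetical logging table
VALUES (SYSDATETIME());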
SQL Server Express does not include SQL Server Agent. Editions higher than that (such as Standard, Enterprise/Developer) do.
If your edition supports SQL Server Agent, make sure it is installed.
If it is installed, you may need to configure it before you see it show up as available in SSMS.
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Agent XPs', 1;
GO
RECONFIGURE
GO
source
This is for the most basic configuration, without regard to permissions and such. For that, it's probably worth reading this short overview.
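If you are not sure which case applies, a quick check of the edition and of the Agent XPs setting (standard SERVERPROPERTY and sys.configurations, nothing specific to any particular setup) will tell you:
-- Which edition is installed (Express has no SQL Server Agent)
SELECT SERVERPROPERTY('Edition') AS Edition;

-- Whether 'Agent XPs' is actually in effect (value_in_use = 1 after the RECONFIGURE above)
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'Agent XPs';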
We are currently running two instances of SQL Server. For development purposes, we run a local DB on a desktop PC in our office.
The PC has following stats:
8 GB Ram
AMD Athlon 5350 APU with Radeon(tm) R3, 2.05 GHz
64 Bit Windows 8.1
Microsoft SQL Server 2014 - 12.0.2000.8 (X64) Express Edition (64-bit)
HDD Seagate ST1000DM003 1 TB
The server is located in Azure as a Standard-Tier A3 VM running the pre-provided Windows Server 2012 R2 Datacenter image.
Now we are facing a problem: the exact same query runs 10 times faster locally on the desktop than on the server.
I connect to the PC with a locally installed Management Studio via TCP/IP over our local network. When I connect to the server, I use a Remote Desktop connection and start a local instance of Management Studio on the server.
I have already changed the connection mode from the default to TCP/IP on the server, which brings it down to 10 times slower; with the default connection it is 20 times slower. Even when changing to named pipes the performance is worse.
Even after rewriting the query and trying different approaches, the Express version is always much faster than the server. We did not do any configuration or tuning on either the Express installation or the one on the server side.
Any comments are very appreciated!
Best
Simon
You should add the following at the top of the query to see where the differences are:
SET STATISTICS TIME ON
SET STATISTICS IO ON
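For example, you could wrap the query under test like this (dbo.YourTable is just a placeholder for your own query) and then compare the CPU time, elapsed time, and logical reads reported in the Messages tab on both machines:
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

-- the query you are comparing goes here, e.g.
SELECT COUNT(*) FROM dbo.YourTable;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;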
Does your local machine have an SSD? If that's the case, it's normal.
Try rebuilding the indexes used.
Update the database/table statistics (a quick sketch follows below). The execution plan can be the same, but with bad stats I've often seen very low performance, especially if you do a lot of inserts/deletes.
You can see if something is wrong with SET STATISTICS IO ON. Look at the logical reads on the tables, the worktables/workfiles, etc. Check if it's different from the local server.
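A minimal sketch of the rebuild and statistics refresh suggested above, with dbo.YourTable standing in for the tables used by the slow query:
-- Rebuild all indexes on the table (placeholder name)
ALTER INDEX ALL ON dbo.YourTable REBUILD;

-- Refresh its statistics with a full scan
UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

-- Or refresh statistics for every table in the database
EXEC sp_updatestats;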
The server setting for MAXDOP is 0 and the cost threshold for parallelism is the default of 5.
All transactions run against the server execute serially, not across multiple cores/CPUs (it is a virtual machine and it has 8 CPUs).
Server is Windows Server 2008 R2 Enterprise x64 and SQL Server 2008 R2 Enterprise x64.
There are also a "default" and an "internal" workload group on the machine, but Resource Governor is not enabled, and even disabled, both groups have a MAXDOP setting of 0.
I've confirmed that SQL Server has the CPU affinity across all cores/cpus.
I'm kind of lost as to why this server refuses to run transactions in parallel as opposed to all the other machines at this location. I came in noticing that many of the servers here had long CXPACKET wait times and many processes were causing wasted CPU overhead. Then, after reading my evaluation, the DBA said: "There is nothing wrong with MAXDOP 0 and a threshold of 5 because the XXXX server never runs in parallel like you say."
I believe my evaluation of the site is still correct, but this darn server is debunking my evaluation.
I am open to all suggestions as to why this server, despite being configured in a way that should allow parallelism, doesn't run anything in parallel.
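As a diagnostic sketch using only standard DMVs (nothing specific to this environment), you could confirm the instance-level settings and check whether any running request is actually using more than one worker task:
-- Instance-level parallelism settings
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'max degree of parallelism', N'cost threshold for parallelism');

-- Requests currently using more than one task, i.e. actually running in parallel
SELECT r.session_id, r.request_id, COUNT(*) AS task_count
FROM sys.dm_exec_requests AS r
JOIN sys.dm_os_tasks AS t
    ON t.session_id = r.session_id AND t.request_id = r.request_id
GROUP BY r.session_id, r.request_id
HAVING COUNT(*) > 1;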
I have a virtual machine running Windows Server 2008 R2 Standard SP1 and Forefront TMG 2010. TMG 2010 also installs SQL Server 2008 Express for logging activities. According to perfmon, SQL Server Express is using about 1 GB of RAM. I have tried to decrease the buffer pool's max memory usage to 100 MB (code below) and restarted the SQL process, but some time later it goes back up to about 1 GB of memory usage. SQL Server is only used for logging TMG activities and is not critical, so I want to decrease the total memory footprint of SQL usage, if that is possible with Microsoft's implementation.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE WITH OVERRIDE
GO
EXEC sp_configure 'max server memory (MB)', 100
GO
EXEC sp_configure 'show advanced options', 0
RECONFIGURE WITH OVERRIDE
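One way to double-check what is happening, sketched here with only standard DMVs (sys.dm_os_process_memory is available from SQL Server 2008 onward), is to compare the configured value, the running value, and the process's actual memory use:
-- Confirm the configured value is also the running value after RECONFIGURE
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max server memory (MB)';

-- Physical memory the SQL Server process is currently using (KB)
SELECT physical_memory_in_use_kb
FROM sys.dm_os_process_memory;
Note that on SQL Server 2008, 'max server memory' only limits the buffer pool, so the overall process can still use somewhat more than the configured value.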
I have an Execute SQL Script package that contains a script to insert about 150K records.
The problem is that when I execute the package on the virtual machine it takes approximately 25 minutes, while the same package on the physical machine takes about 2 minutes.
Question 1: Why is it taking that much time to load the same data on the VM?
Question 2: How do I solve this performance issue?
The physical machine has 4 GB RAM and a 250 GB HDD, with Windows Server 2008 R2 and SQL Server 2008 R2 Standard Edition.
The virtual machine has the same configuration.
Update: The problem is with the SQL Server in the VM.
Question 1: Why is it taking that much time to run the same script on the VM?
Question 2: How do I solve this performance issue?
The database schemas on the physical machine and the VM are identical. The other databases are also the same. No indexes are applied to those tables on either machine. The data types are the same. The hard disk, as I said, has the same configuration.
No RAID is used on either machine.
The physical machine has a 2.67 GHz quad-core CPU and the virtual machine has a 2.00 GHz quad-core CPU.
Version of SQL PM:
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1)
Version of SQL VM:
Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 15:48:46 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1) (Hypervisor)
I executed the script; the execution plans for both are the same, as there is no difference in the plan.
The vendor machine is an HP ML350.
There are almost 20 VMs on the same physical server, of which 7 are active.
There's an article about properly setting SQL's configuration for a VM implementation here: Best Practices for SQL Server. Below is an excerpt, though the article includes other tips and a good performance testing plan:
Storage configuration problems are the number one cause of SQL performance issues. Usually these problems arise because the DBA requests a virtual disk from the VI admin, and the VI admin places the VMDK on a LUN that may or may not meet the DBA's performance needs. For instance:
VMs' VMDK files placed on VMFS volumes without enough spindles.
Many VMDK files placed on a single VMFS volume which could use more spindles.
Database and log files placed on the same LUN which, you guessed it, could use more spindles.
This may be obvious to some, but this problem occurs again and again. The VI administrator should be aware of a few technical items that can help understand and avoid this problem:
Based on the IO demands of the DB files, a certain number of spindles should be guaranteed to this file. This means that its VMDK must be placed on a VMFS volume that can account for the SQL Server's demands and all of the other demands on that volume.
Mixing sequential activity (such as log file updates) and random activity (such as database access) results in random behavior. This means that the LUN configuration in the pre-virtual physical environment may not be sufficient for the consolidated environment. This is discussed some in Storage Performance: VMFS and Protocols.
When storage isn't meeting the SQL Server's demands, the device latency or kernel latency (queueing time) will increase. Read up on these counters in Storage Performance Analysis and Monitoring.
The most common cause of this problem is a lack of RAM. Having everything set up on a small 4 GB RAM machine is your problem.
When you try to load those 150k rows into memory (remember, everything that happens in SSIS is in memory), a lot of those rows are being handled by your pagefile.
The pagefile on your VM is a lot slower than the one on your physical machine.
To solve this, increase the amount of RAM on your virtual machine.
I have a similar problem.
Two client machines (one physical, one virtual) execute a batch using SQLCMD. The batch calls a stored procedure on a physical server (so it's not a memory problem, since the processing happens only on the server side).
The batch executed from the physical machine takes 20 minutes. The batch executed from the virtual machine takes 1 hour and 20 minutes.
Using SQL Profiler, I noted that in the slow execution there is an ASYNC_NETWORK_IO wait type.
Probably the virtualized network layer is not optimized.
Could you run SQL Profiler and check whether you see the ASYNC_NETWORK_IO wait type?
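If Profiler isn't convenient, a rough alternative using only the standard wait-stats DMV is to snapshot the cumulative waits before and after running the batch from each client and compare the deltas:
-- Cumulative waits since the last restart; run before and after the batch and compare
SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'ASYNC_NETWORK_IO';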