Monitoring SQL Server Express limitations

Does anyone know how I can watch my SQL Server resource usage on my Windows server?
I'm using SQL Server 2016 Express, and in particular I want to watch my RAM usage if possible.
For example, SQL Server 2016 Express has a maximum of 1410 MB of buffer pool memory per instance. How can I tell whether I am close to my SQL Server's limits or not?
Thank you.

If you are running a Microsoft Windows Server operating system, use the System Monitor graphical tool to measure the performance of SQL Server. You can view SQL Server objects, performance counters, and the behavior of other objects, such as processors, memory, cache, threads, and processes. Each of these objects has an associated set of counters that measure device usage, queue lengths, delays, and other indicators of throughput and internal congestion.
Note: System Monitor replaced Performance Monitor after Windows NT 4.0.
Benefits of System Monitor
System Monitor can be useful to monitor Windows operating system and SQL Server counters at the same time to determine any correlation between the performance of SQL Server and Windows. For example, monitoring the Windows disk input/output (I/O) counters and the SQL Server Buffer Manager counters at the same time can reveal the behavior of the entire system.
System Monitor allows you to obtain statistics on current SQL Server activity and performance. Using System Monitor, you can:
View data simultaneously from any number of computers.
View and change charts to reflect current activity, and show counter values that are updated at a frequency that the user defines.
Export data from charts, logs, alert logs, and reports to spreadsheet or database applications for further manipulation and printing.
Add system alerts that list an event in the alert log and can notify you by issuing a network alert.
Run a predefined application the first time or every time a counter value goes over or under a user-defined value.
Create log files that contain data about various objects from different computers.
Append to one file selected sections from other existing log files to form a long-term archive.
View current-activity reports, or create reports from existing log files.
Save individual chart, alert, log, or report settings, or the entire workspace setup for reuse.
Note: You can use either System Monitor or Performance Monitor to do these tasks.
System Monitor Performance
When you monitor SQL Server and the Microsoft Windows operating system to investigate performance-related issues, concentrate your initial efforts in three main areas:
Disk activity
Processor utilization
Memory usage
Monitoring a computer on which System Monitor is running can affect computer performance slightly. Therefore, either log the System Monitor data to another disk (or computer) so that it reduces the effect on the computer being monitored, or run System Monitor from a remote computer. Monitor only the counters in which you are interested. If you monitor too many counters, resource usage overhead is added to the monitoring process and affects the performance of the computer that is being monitored.
To start System Monitor in Windows
On the Start menu, point to Run, type perfmon in the Run dialog box, and then click OK.
More information here and here and a detailed PDF here.
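If you would rather check the same memory counters from inside SQL Server instead of opening System Monitor, a minimal T-SQL sketch against the sys.dm_os_performance_counters DMV looks like the following. Total Server Memory should stay at or below the Express buffer pool cap; if it sits at the cap for long periods, you are probably hitting the limit.

-- Read SQL Server's own Memory Manager counters (the same ones System Monitor shows).
SELECT RTRIM(counter_name) AS counter_name,
       cntr_value AS value_kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
  AND counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');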

Here's what I was looking for:
SELECT COUNT(*) AS buffer_cache_pages, COUNT(*) * 8 AS buffer_cache_used_KB
FROM sys.dm_os_buffer_descriptors;
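To relate that figure directly to the limit mentioned in the question, the same query can be expressed as a percentage of the Express buffer pool cap; the 1410 below is simply that documented limit, hard-coded for illustration:

-- Buffer cache usage in MB and as a share of the 1410 MB Express buffer pool cap.
SELECT COUNT(*) * 8 / 1024.0 AS buffer_cache_used_MB,
       COUNT(*) * 8 / 1024.0 * 100.0 / 1410 AS pct_of_express_limit
FROM sys.dm_os_buffer_descriptors;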

Related

How can I enable the DevArt performance counters in Resource Monitor?

I'm using the DevArt dotConnect for Oracle performance counters in my dev/test environment. By adding Use Performance Monitor=True to my database connection string, I can capture useful information such as number of connections, etc.
However, on my production box, I cannot see the DevArt performance counters in Resource Monitor.
I believe that the performance counters are installed, since they appear in the registry, and I imagine that adding Use Performance Monitor=True to my database connection string would cause an error if the relevant DLLs etc. were not present.
What else do I need to do to make the performance counters appear in Resource Monitor?

Can VMs on Google Compute detect when they've been migrated?

Is it possible to notify an application running on a Google Compute VM when the VM migrates to different hardware?
I'm a developer for an application (HMMER) that makes heavy use of vector instructions (SSE/AVX/AVX-512). The version I'm working on probes its hardware at startup to determine which vector instructions are available and picks the best set.
We've been looking at running our program on Google Compute and other cloud engines, and one concern is that, if a VM migrates from one physical machine to another while running our program, the new machine might support different instructions, causing our program to either crash or execute more slowly than it could.
Is there a way to notify applications running on a Google Compute VM when the VM migrates? The only relevant information I've found is that you can set a VM to perform a shutdown/reboot sequence when it migrates, which would kill any currently-executing programs but would at least let the user know that they needed to restart the program.
We ensure that your VM instances never live migrate between physical machines in a way that would cause your programs to crash the way you describe.
However, for your use case you probably want to specify a minimum CPU platform version. You can use this to ensure that e.g. your instance has the new Skylake AVX instructions available. See the documentation on Specifying the Minimum CPU Platform for further details.
As per the Live Migration docs:
Live migration does not change any attributes or properties of the VM
itself. The live migration process just transfers a running VM from
one host machine to another. All VM properties and attributes remain
unchanged, including things like internal and external IP addresses,
instance metadata, block storage data and volumes, OS and application
state, network settings, network connections, and so on.
Google provides a few controls for setting instance availability policies, which also let you control aspects of live migration. The documentation also describes what to look for to determine when a live migration has taken place.
Live migrate
By default, standard instances are set to live migrate, where Google
Compute Engine automatically migrates your instance away from an
infrastructure maintenance event, and your instance remains running
during the migration. Your instance might experience a short period of
decreased performance, although generally most instances should not
notice any difference. This is ideal for instances that require
constant uptime, and can tolerate a short period of decreased
performance.
When Google Compute Engine migrates your instance, it reports a system
event that is published to the list of zone operations. You can review
this event by performing a gcloud compute operations list --zones ZONE
request or by viewing the list of operations in the Google Cloud
Platform Console, or through an API request. The event will appear
with the following text:
compute.instances.migrateOnHostMaintenance
In addition, you can detect directly on the VM when a maintenance event is about to happen.
Getting Live Migration Notices
The metadata server provides information about an instance's
scheduling options and settings, through the scheduling/
directory and the maintenance-event attribute. You can use these
attributes to learn about a virtual machine instance's scheduling
options, and use this metadata to notify you when a maintenance event
is about to happen through the maintenance-event attribute. By
default, all virtual machine instances are set to live migrate so the
metadata server will receive maintenance event notices before a VM
instance is live migrated. If you opted to have your VM instance
terminated during maintenance, then Compute Engine will automatically
terminate and optionally restart your VM instance if the
automaticRestart attribute is set. To learn more about maintenance
events and instance behavior during the events, read about scheduling
options and settings.
You can learn when a maintenance event will happen by querying the
maintenance-event attribute periodically. The value of this
attribute will change 60 seconds before a maintenance event starts,
giving your application code a way to trigger any tasks you want to
perform prior to a maintenance event, such as backing up data or
updating logs. Compute Engine also offers a sample Python script
to demonstrate how to check for maintenance event notices.
You can use the maintenance-event attribute with the waiting for
updates feature to notify your scripts and applications when a
maintenance event is about to start and end. This lets you automate
any actions that you might want to run before or after the event. The documentation includes a Python sample that shows how you might implement these two features together.
You can also choose to terminate and optionally restart your instance.
Terminate and (optionally) restart
If you do not want your instance to live migrate, you can choose to
terminate and optionally restart your instance. With this option,
Google Compute Engine will signal your instance to shut down, wait for
a short period of time for your instance to shut down cleanly,
terminate the instance, and restart it away from the maintenance
event. This option is ideal for instances that demand constant,
maximum performance, and your overall application is built to handle
instance failures or reboots.
Look at the Setting availability policies section for more details on how to configure this.
If you use an instance with a GPU or a preemptible instance, be aware that live migration is not supported:
Live migration and GPUs
Instances with GPUs attached cannot be live migrated. They must be set
to terminate and optionally restart. Compute Engine offers a 60 minute
notice before a VM instance with a GPU attached is terminated. To
learn more about these maintenance event notices, read Getting live
migration notices.
To learn more about handling host maintenance with GPUs, read
Handling host maintenance on the GPUs documentation.
Live migration for preemptible instances
You cannot configure a preemptible instance to live migrate. The
maintenance behavior for preemptible instances is always set to
TERMINATE by default, and you cannot change this option. It is also
not possible to set the automatic restart option for preemptible
instances.
As Ramesh mentioned, you can specify a minimum CPU platform to ensure your instance only runs on hosts that offer at least the CPU platform you specified. In summary, when you specify a minimum CPU platform:
Compute Engine always uses the minimum CPU platform where available.
If the minimum CPU platform is not available or is older than the zone default, and a newer CPU platform is available for the same price, Compute Engine uses the newer platform.
If the minimum CPU platform is not available in the specified zone and there are no newer platforms available without extra cost, the server returns a 400 error indicating that the CPU is unavailable.

How can I make SQL eligible for zIIP processing?

Is it possible to change SQL in a z/OS mainframe COBOL application so that it becomes eligible to be directed to the IBM System z Integrated Information Processor (zIIP)?
An important distinction to make is that, according to IBM, zIIP is only available for "eligible database workloads", and those "eligible" workloads are mostly large BI/ERP/CRM solutions that run on distributed servers and connect through DDF (the Distributed Data Facility) over TCP/IP.
IBM has a list of DB2 workloads that can utilize zIIP. These include:
DDF server threads that process SQL requests from applications that access DB2 by TCP/IP (up to 60%).
Parallel child processes. A portion of each child process executes under a dependent enclave SRB if it processes on behalf of an application that originated from an allied address space, or under an independent enclave SRB if the processing is performed on behalf of a remote application that accesses DB2 by TCP/IP. The enclave priority is inherited from the invoking allied address space for a dependent enclave, or from the main DDF server thread enclave classification for an independent enclave. (Versions up to 11 allowed 80% to run on zIIP; v12 upped this to 100% eligible.)
Utility index build and maintenance processes for the LOAD, REORG, and REBUILD INDEX utilities.
And if you're on DB2 v10, you can also use zIIP with:
Remote native SQL procedures.
XML Schema validation and non-validation parsing.
DB2 utility functions for maintaining index structures.
Certain portions of processing for the RUNSTATS utility.
Prefetch and deferred write processing for the DB2 buffer pool.
Version 11 added the following:
Asynchronous enclave SRBs (service request blocks) that execute in the
Db2 ssnmMSTR, ssnmDBM1 and ssnmDIST address spaces, with the exception
of p-lock negotiation processing. These processes include Db2 buffer pool
processing for prefetch, deferred write, page set castout, log read, and
log write processing. Additional eligible processes include index
pseudo-delete and XML multi version document cleanup processing.
Version 12 allowed parallel child tasks to go 100% to zIIP after a certain threshold of CPU usage.
So, if you're using COBOL programs, it would appear that IBM does not intend for you to use zIIP with those workloads. You can still take advantage of zIIP with utilities (LOAD, REORG) and some steps of the RUNSTATS utility, so it may still be worthwhile to have some zIIP capacity.

Server hardware requirements for SSRS

As the title says, I'm trying to figure out how much RAM is needed to generate a large report and export it to Excel using SQL Server Reporting Services on Windows Server 2003.
Upgrading to SSRS 2008 is not an option, and neither is exporting to CSV.
Strictly from a hardware point of view what is a good configuration for a high load server?
(CPU's, RAM, Storage)
You've got problems - the maximum memory size that SSRS2005 can handle is 2GB. (There is a dodge to enable it to handle 3GB, but it's not recommended for production servers.)
SSRS2008 has no such limitation, which is why the normal response in this situation is to recommend an upgrade to 2008.
If your large report won't run on a machine with 2GB available, it doesn't matter how much RAM (or other resources) you put on your server - the report still won't run.
Your only option (given the restrictions stated above) would be to break the report up into smaller pieces and run them one at a time.

Limiting the RAM consumption of MS SQL SERVER

I just rolled a small web application into production on a remote host. After it ran a while, I noticed that MS SQL Server started to consume most of the RAM on the server, starving out IIS and my application.
I have tried changing the "max server memory" setting, but eventually memory usage begins to creep above it. Using Activity Monitor, I have determined that I am not leaving connections open or doing anything else obvious, so I am guessing it's in the cache, materialized views, and the like.
I am beginning to believe that this setting doesn't mean what I think it means. I do note that if I simply restart the server, the memory consumption starts over without any adverse impact on the application - pages load without incident, all retrievals work.
There has to be a better way to control SQL Server, or to force it to release some memory back to the system?
From the MS knowledge base:
Note that the max server memory option only limits the size of the SQL Server buffer pool. The max server memory option does not limit a remaining unreserved memory area that SQL Server leaves for allocations of other components such as extended stored procedures, COM objects, non-shared DLLs, EXEs, and MAPI components. Because of the preceding allocations, it is normal for the SQL Server private bytes to exceed the max server memory configuration.
Are you using any extended stored procedures, .NET code, etc that could be leaking memory outside of SQL Server's control?
There is some more discussion on memory/address space usage here
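If you want to see where the memory inside SQL Server is actually going (buffer pool versus plan cache versus other consumers), a rough diagnostic sketch is to aggregate sys.dm_os_memory_clerks. Note that the pages_kb column assumes SQL Server 2012 or later; older versions expose single_pages_kb and multi_pages_kb instead:

-- Top memory consumers inside SQL Server, grouped by memory clerk type.
SELECT TOP (10)
       [type],
       SUM(pages_kb) / 1024 AS memory_used_mb  -- on pre-2012 versions, sum single_pages_kb + multi_pages_kb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY SUM(pages_kb) DESC;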
Typically I recommend setting max server memory to a couple of gigabytes below the physical memory installed.
If you also need to run IIS and an application server on the same machine, you'll want to give SQL Server even less memory.
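For reference, here is a minimal sketch of capping the buffer pool with sp_configure; the 2048 MB value is purely illustrative, so pick a cap that leaves enough headroom for the OS, IIS, and your application:

-- Cap the SQL Server buffer pool ("max server memory" is an advanced option).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2048;  -- illustrative value, not a recommendation
RECONFIGURE;

Keep in mind, as the KB excerpt above notes, that this caps only the buffer pool, so the total memory of the SQL Server process can still exceed it.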