We have an intermittent performance issue on one of our servers, reported by a couple of users of one particular XPages application. My first thought was that it is related to Java memory usage and poor recycling in the code, so I'm starting there. At the moment, according to Mark Leusink's fantastic Debug Toolbar, the usage data for the server (a 64-bit Windows machine with 32 GB of physical RAM) looks like this:
I'd like to confirm my understanding of the figures:
Maximum Heap Size - I'm okay with this and know how to change it (the recommended setting is a quarter of the available RAM, but given the low user population on this server I'm sure 2 GB is more than adequate)
Total Allocated - this seems low to me, but am I correct that this is set automatically by the server and that, if more Java memory is needed, it will allocate more (up to the amount specified in the maximum heap size)? Does this happen only if garbage collection cannot free enough space to load a new Java object?
Used - I believe this shows the memory being used across the server and not just in the application containing the debug toolbar itself. Will this only show the memory being used by the Domino HTTP task (so all XPages apps) or can it be affected by Java agents too?
Bonus questions:
How is the "total allocated" figure initially set? On a development server we have (with one current user - me) the figure is currently set to 256M but I can't relate this back to any Notes.ini parameters. (Also, is there a recommended value for this figure?)
If I'm correct that garbage collection runs when the "total allocated" figure is reached, then, presumably, a low figure will force it to run more often. Will this have an adverse effect on server performance?
Is the fact that we are also running Traveler on this server (albeit with only about 9 users) something we should take into consideration?
Thanks
The information shown in the toolbar is the standard set of numbers that the JVM provides: totalMemory(), maxMemory() and freeMemory(). See this question for a detailed explanation. The three values given are for the entire JVM, not a specific application.
In the Domino HTTP JVM you can set the maximum heap (maxMemory) with the HTTPJVMMaxHeapSize notes.ini parameter. You cannot set or change the total allocated (totalMemory) value, but that isn't needed either: the JVM simply allocates more memory when it needs it (up to the value of maxMemory). After garbage collection has run, it will eventually release some of this memory again.
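For example, to give the HTTP JVM a 2 GB maximum heap you would add a line like the following to the server's notes.ini and restart the HTTP task (the 2048M value is only an illustration; size it to your own hardware and user load):

HTTPJVMMaxHeapSize=2048M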
Java agents do not affect these numbers. The only exception would be a Java agent that runs in the HTTP process (e.g. called from a browser using the ?OpenAgent command).
On a server you can run into memory issues (OutOfMemory exceptions) if the JVM needs more memory than can be allocated. You can monitor these values by creating a simple XAgent that outputs the current figures for the JVM:
<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core" rendered="false" viewState="nostate">
    <xp:this.afterRenderResponse><![CDATA[#{javascript:
        // Send the response ourselves as JSON instead of rendering any markup
        var externalContext = facesContext.getExternalContext();
        var writer = facesContext.getResponseWriter();
        var response = externalContext.getResponse();
        response.setContentType("application/json");
        response.setHeader("Cache-Control", "no-cache");

        // Current memory figures from the JVM (in bytes)
        var max = java.lang.Runtime.getRuntime().maxMemory();
        var free = java.lang.Runtime.getRuntime().freeMemory();
        var total = java.lang.Runtime.getRuntime().totalMemory();
        var memory = {
            max : max,
            total : total,
            free : free,
            // memory that can still be used before maxMemory is reached
            available : (max - total) + free
        };

        writer.write( toJson(memory) );
        writer.endDocument();
    }]]></xp:this.afterRenderResponse>
</xp:view>
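To use it, save the above as an XPage in a database on the server (the name, e.g. memory.xsp, is up to you) and request it from a browser or a monitoring script. With the 2 GB maximum heap and 256 MB total allocated mentioned in the question, the JSON returned would look roughly like {"max":2147483648,"total":268435456,"free":134217728,"available":2013265920} (the free figure obviously varies from moment to moment), and polling it periodically lets you watch how total and free develop under load.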
Related
It starts w/the proverbial:
[Notes - F1 [107]] Error: An error occurred with the following error message: "System.OutOfMemoryException: Insufficient memory to continue the execution of the program. (SSIS Integration Toolkit for Microsoft Dynamics 365, v10.2.0.6982 - DtsDebugHost, v13.0.1601.5)".
But even in its own diagnostics, it shows that plenty of memory is available (yes, that's 32GB I have on my system):
Error: The system reports 47 percent memory load. There are 34270687232 bytes of physical memory with 18094620672 bytes free. There are 4294836224 bytes of virtual memory with 981348352 bytes free. The paging file has 34270687232 bytes with 12832284672 bytes free.
The info messages report memory pressure:
Information: The buffer manager failed a memory allocation call for 506870912 bytes, but was unable to swap out any buffers to relieve memory pressure. 2 buffers were considered and 2 were locked. Either not enough memory is available to the pipeline because not enough are installed, other processes were using it, or too many buffers are locked.
I currently have the max rows set at 500 w/the buffer size at 506,870,912 in this example. I've tried the maximum buffer size, but that fails instantly, and the minimum buffer size still throws errors. I've fiddled w/various sizes, but it never gets anywhere close to processing the whole data set. The error I get when I set the DefaultBufferSize lower is:
[Notes - F1 [107]] Error: An error occurred with the following error message: "KingswaySoft.IntegrationToolkit.DynamicsCrm.CrmServiceException: CRM service call returned an error: Failed to allocate a managed memory buffer of 536870912 bytes. The amount of available memory may be low. (SSIS Integration Toolkit for Microsoft Dynamics 365, v10.2.0.6982 - DtsDebugHost,
I've looked for resources on how to tune this, but cannot find anything relevant to a 64-bit Windows 10 machine (as opposed to a server) that has 32GB of RAM to play with.
For a bit more context, I'm migrating notes from one CRM D365 environment to another using Kingsway. The notes w/attachments are the ones causing the issue.
Properties (screenshots): Execution, Source, Destination.
I have had this problem before and it was not the physical memory (i.e., RAM), but the physical disk space where the database is stored. Check to see what the available hard drive space is on the drive that stores both the database and transaction log files - chances are that it is full and therefore unable to allocate any additional disk space.
In this context, the error message citing 'memory' is a bit misleading.
UPDATE
I think this is actually caused by having too much data in the pipeline buffer. You will either need to look at expanding the buffer's memory allocation (i.e., DefaultBufferSize) or take a look at what data is flowing through the pipeline. Typical causes are a lot of columns with large NVARCHAR() character counts. Copying the rows with Multicast will only compound the problem. With respect to the 3rd party component you are using, your guess is as good as mine because I have not used it.
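As a rough back-of-the-envelope check against the numbers in the question (an estimate, not a measurement): the failed allocation of 536,870,912 bytes is exactly 512 MB, and with DefaultBufferMaxRows at 500 that budgets only about 1 MB per row on average, so a batch of notes whose attachments run to several megabytes each can easily push a single pipeline buffer past what the buffer manager can allocate in one piece.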
For anyone coming along later:
The error says "CRM service call returned an error: Failed to allocate a managed memory buffer of 536870912 bytes". I understood it to be the CRM Server that had the memory issue.
Regardless, we saw this error when migrating email attachments via the ActivityMimeAttachment entity. The problem appeared to be related to running the insert to the target CRM with too large a batch size and/or multi-threaded.
We set the batch size to 1 and turned off the multi-threading and the issue went away. (We also set the batch size to 1 on the request from the source - we saw "service unavailable" errors from an on-premise CRM when the batch size was too high and the attachments were too large.)
I am trying to load a dataset into GraphDB 7.0. I wrote a Python script in Sublime Text 3 to transform and load the data. The program suddenly stopped working and closed, the computer threatened to restart but didn't, and I lost several hours' worth of computing, as GraphDB doesn't let me query the inserts. This is the error I get in GraphDB:
The currently selected repository cannot be used for queries due to an error:
org.openrdf.repository.RepositoryException: java.lang.RuntimeException: There is not enough memory for the entity pool to load: 65728645 bytes are required but there are 0 left. Maybe cache-memory/tuple-index-memory is too big.
I set the JVM as follows:
-Xms8g
-Xmx9g
I don't exactly remember what I set as the values for the cache and index memories. How do I resolve this issue?
For the record, the database I need to parse has about 300k records. The program shut shop at about 50k. What do I need to do to resolve this issue?
Open the workbench and check the amount of memory you have given to cache memory.
Xmx should be a value that is large enough for
cache-memory + memory-for-queries + entity-pool-hash-memory
Sadly, the last of these cannot be calculated easily because it depends on the number of entities in the repository. You will either have to:
Increase the Java heap with a bigger value for Xmx, or
Decrease the value for cache-memory
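As a rough worked example (the cache figure here is purely illustrative): with -Xmx9g and cache-memory set to, say, 6 GB, only about 3 GB is left over for queries and the entity pool; the error above shows the entity pool needing roughly another 63 MB (65,728,645 bytes) with 0 bytes left, so either raising -Xmx or trimming cache-memory by a few hundred megabytes should give it the headroom it needs.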
We recently migrated to Couchbase 3.1.0. The odd thing is: when performing a full backup of a bucket, the web UI alerts "Hard Out Of Memory Error. Bucket X on node Y is full. All memory allocated to this bucket is used for metadata". The RAM usage numbers in the web UI contradict that - about 75% is used, not 100%. I looked into the logs, but haven't found any similar errors there.
Is that even normal?
This is a known issue in the Couchbase Server 3.x releases.
To understand the problem, we first need to understand the Database Change Protocol (DCP), the protocol used to transfer data throughout the system. At a high level, the flow control for DCP is as follows:
1. The Consumer creates a connection with the Producer and sends an Open Connection message. The Consumer then sends a Control message to indicate per-stream flow control. This message will contain "stream_buffer_size" in the key section and the buffer size the Consumer would like each stream to have in the value section.
2. The Consumer will then start opening streams so that it can receive data from the server.
3. The Producer will then continue to send data for the stream that has buffer space available until it reaches the maximum send size.
Steps 1-3 continue until the connection is closed, as the Consumer continues to consume items from the stream.
The cbbackup utility does not implement any flow control (data buffer limits) however, and it will try to stream all vbuckets from all nodes at once, with no cap on the buffer size.
While this does not mean that it will use the same amount of memory as your overall data size (as the streams are being drained slowly by the cbbackup process), it does mean that a large memory overhead is required to be able to store the data streams.
When you are in a heavy DGM (disk greater than memory) scenario, the amount of memory required to store the streams is likely to grow more rapidly than cbbackup can drain them as it is streaming large quantities of data off of disk, leading to very large streams, which take up a lot of memory as previously mentioned.
The slightly misleading message about metadata taking up all of the memory is displayed as there is no memory left for the data, so all of the remaining memory is allocated to the metadata, which when using value eviction cannot be ejected from memory.
The reason that this only affects Couchbase Server versions prior to 4.0 is that in 4.0 a server-side improvement to DCP stream management was made that allows DCP streams to be paused to keep the memory footprint down; this is tracked as MB-12179.
As a result, you should not experience the same issue on Couchbase Server versions 4.x+, regardless of how DGM your bucket is.
Workaround
If you find yourself in a situation where this issue is occurring, then terminating the backup job should release all of the memory consumed by the streams immediately.
Unfortunately if you have already had most of your data evicted from memory as a result of the backup, then you will have to retrieve a large quantity of data off of disk instead of RAM for a small period of time, which is likely to increase your get latencies.
Over time 'hot' data will be brought back into memory as it is requested, so this will only be a problem for a short period of time; however, it is still a fairly undesirable situation to be in.
The workaround to avoid this issue completely is to only stream a small number of vbuckets at once when performing the backup, as opposed to all vbuckets which cbbackup does by default.
This can be achieved using cbbackupwrapper which comes bundled with all Couchbase Server releases 3.1.0 and later, details of using cbbackupwrapper can be found in the Couchbase Server documentation.
In particular the parameter to pay attention to is the -n flag, which specifies the number of vbuckets to be backed up in a batch at once.
As the name suggests, cbbackupwrapper is simply a wrapper script on top of cbbackup which partitions the vbuckets up and automatically handles all of the directory creation and backup generation, while still using cbbackup under the hood.
As an example, with a batch size of 50, cbbackupwrapper would backup vbuckets 0-49 first, followed by 50-99, then 100-149 etc.
It is suggested that you test with cbbackupwrapper in a testing environment which mirrors your production environment to find a suitable value for -n and -P (which controls how many backup processes run at once, the combination of these two controls the amount of memory pressure caused by backup as well as the overall speed).
You should not find that lowering the value of -n from its default 100 decreases the backup speed, in some cases you may find that the backup speed actually increases due to the fact that there is far less memory pressure on the server.
You may however wish to sensibly adjust the -P parameter if you wish to speed up the backup further.
Below is an example command:
cbbackupwrapper http://[host]:8091 [backup_dir] -u [user_name] -p [password] -n 50
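If you also want to control the number of parallel backup processes, the same command can be extended with the -P flag mentioned above, for example (the value of 2 is only an illustration to start testing from):
cbbackupwrapper http://[host]:8091 [backup_dir] -u [user_name] -p [password] -n 50 -P 2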
It should be noted that if you use cbbackupwrapper to perform your backup then you must also use cbrestorewrapper to restore the data, as cbrestorewrapper is automatically aware of the directory structures used by cbbackupwrapper.
When you run a full backup, by default the backup tool streams data from all nodes over the network. This is not the best way, because it causes a lot of extra load and increased memory usage, especially if you run cbbackup on one of the Couchbase nodes. I would use the data-copy mode of cbbackup, which copies data directly from the files on disk:
> sudo /opt/couchbase/bin/cbbackup couchstore-files:///opt/couchbase/var/lib/couchbase/data/ /tmp/backup
Of course, change the data path to wherever your Couchbase data is actually stored. (In my example it runs as sudo because only root has read access to /opt/couchbase/blabla..) Do this on every node, then collect all the backup folders and put them somewhere. Note that the backups are very compressible, so you might want to zip them before copying over the network.
We are running SQL Server 2005 on 64 bit. The PF Usage reaches close to 25 GB every week. Queries that normally take less than a second to run become very slow during this time. What could be causing this?
After running PerfMon, the two counters, Total Server Memory and Target Server Memory show 20 GB and 29 GB respectively. Processor Queue Length and Disk Queue Length are zero.
Sounds like you don't have enough memory. How much is on the server? More than your page file?
It could also mean SQL Server has been "paged out", meaning Windows decided to swap the data it had in memory out to disk.
Open Perfmon (go to a command prompt and type perfmon) and add these counters:
SQLServer:Buffer Manager - Buffer cache hit ratio
SQLServer:Buffer Manager - Page life expectancy
SQLServer:Memory Manager - Memory Grants Pending
If Buffer cache hit ratio is < 95%, it means SQL Server is going to disk instead of memory a lot; you need more memory.
If Page life expectancy is < 500, it means SQL Server is not able to keep pages cached in memory for long; you need more memory.
If you have a lot of Memory Grants Pending, you need more memory.
There are also two counters that let you know how much memory SQL Server wants and how much it is actually using: Target Server Memory and Total Server Memory (the two counters the question already mentions). If Target > Total, guess what, you need more memory.
There are also some DMVs you can use to determine if your queries are being held up while waiting for memory to free up; the relevant one is sys.dm_os_wait_stats.
I community wikied this so a real dba can come in here and clean this up. Don't know the stats off the top of my head.
This is a great read on how to use DMVs to determine memory consumption in SQL Server: http://technet.microsoft.com/en-us/library/cc966540.aspx - look for the 'memory bottlenecks' section.
One big thing is to determine if the memory pressure is internal to SQL (usually the case) or external (rare, but possible). For example, perhaps nothing is wrong with your server, but a driver installed by your antivirus program is leaking memory.
I just rolled a small web application into production on a remote host. After it ran a while, I noticed that MS SQL Server started to consume most of the RAM on the server, starving out IIS and my application.
I have tried changing the "max server memory" setting, but eventually the memory usage begins to creep above this setting. Using the Activity Monitor I have determined that I am not leaving connections open or anything obvious, so I am guessing it's in cache, materialized views and the like.
I am beginning to believe that this setting doesn't mean what I think it means. I do note that if I simply restart the server, the process of memory consumption starts over without any adverse impact on the application - pages load without incident, all retrievals work.
There has got to be a better way to control sql or force it to release some memory back to the system????
From the MS knowledge base:
Note that the max server memory option only limits the size of the SQL Server buffer pool. The max server memory option does not limit a remaining unreserved memory area that SQL Server leaves for allocations of other components such as extended stored procedures, COM objects, non-shared DLLs, EXEs, and MAPI components. Because of the preceding allocations, it is normal for the SQL Server private bytes to exceed the max server memory configuration.
Are you using any extended stored procedures, .NET code, etc that could be leaking memory outside of SQL Server's control?
There is some more discussion on memory/address space usage here
Typically I recommend setting the max server memory to a couple of Gigs below the physical memory installed.
If you also need to run IIS and an application server on the same box, you'll want to give SQL Server less memory still.