What causes physical memory and entry process resource allocation to be exhausted, resulting in an internal server error, and what is the best way to fix it?
To fix a high number of Entry Processes, you need to analyze the scripts and applications running on your websites to find out which ones are consuming too many resources.
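As a rough starting point on a typical Linux/cPanel host (the log path, domain and grep pattern below are assumptions about the setup, not details from the question), you can look at which PHP processes are running and which URLs are requested most often:

# List long-running PHP processes; each one usually occupies an entry-process slot
ps -eo pid,user,etime,%mem,cmd | grep '[p]hp'

# Count the most frequently requested URLs in a recent slice of the access log
# (/usr/local/apache/domlogs/<domain> is the usual cPanel location; adjust the path)
tail -n 2000 /usr/local/apache/domlogs/example.com | awk '{print $7}' | sort | uniq -c | sort -rn | head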
Related
We are currently upgrading a TYPO3 installation with about 60,000 pages to v9.
The upgrade wizard "Introduce URL parts ("slugs") to all existing pages" does not finish. In the browser (Install Tool) I get a time-out.
Calling it via
./vendor/bin/typo3cms upgrade:wizard pagesSlugs
results in the following error:
[ Symfony\Component\Process\Exception\ProcessSignaledException ]
The process has been signaled with signal "9".
After consulting my favourite internet search engine, I think this most likely means "out of memory".
Sadly, the database doesn't seem to be touched at all, so no pages got a slug after that, which means simply running the process several times will not help. Watching the run, the PHP process takes all the memory it can get and then starts filling the swap; when the swap is full, the process crashes.
Tested so far on a local Docker setup with a 16GB RAM host and on a server with 8 cores but 8GB RAM (the DB is on an external machine).
Any ideas to fix that?
After debugging I found out that the reason for this is messed-up relations in the database: there are non-deleted pages that point to non-existing parents. This was mainly caused by a heavy clean-up of the database beforehand. While the wizard does not check for this and could be improved to do so, the main problem in this case is my database. A quick check for such orphaned pages is sketched below.
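For reference, a rough way to spot those orphans (this assumes the standard TYPO3 pages schema with uid, pid and deleted columns; the database name and credentials are placeholders):

mysql -u typo3 -p typo3_db -e "
  SELECT p.uid, p.pid, p.title
  FROM pages p
  LEFT JOIN pages parent ON parent.uid = p.pid
  WHERE p.deleted = 0
    AND p.pid > 0
    AND parent.uid IS NULL;"

Re-parenting the returned rows or flagging them as deleted should let the wizard run through.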
It starts with the proverbial:
[Notes - F1 [107]] Error: An error occurred with the following error message: "System.OutOfMemoryException: Insufficient memory to continue the execution of the program. (SSIS Integration Toolkit for Microsoft Dynamics 365, v10.2.0.6982 - DtsDebugHost, v13.0.1601.5)".
But even its own diagnostics show that plenty of memory is available (yes, that's the 32GB I have on my system):
Error: The system reports 47 percent memory load. There are 34270687232 bytes of physical memory with 18094620672 bytes free. There are 4294836224 bytes of virtual memory with 981348352 bytes free. The paging file has 34270687232 bytes with 12832284672 bytes free.
The info messages report memory pressure:
Information: The buffer manager failed a memory allocation call for 506870912 bytes, but was unable to swap out any buffers to relieve memory pressure. 2 buffers were considered and 2 were locked. Either not enough memory is available to the pipeline because not enough are installed, other processes were using it, or too many buffers are locked.
I currently have the max rows set at 500 with the buffer size at 506,870,912 in this example. I've tried the maximum buffer size, but that fails instantly, and the minimum buffer size still throws errors. I've fiddled with various sizes, but it never gets anywhere close to processing the whole data set. The error I get when I set the DefaultBufferSize lower is:
[Notes - F1 [107]] Error: An error occurred with the following error message: "KingswaySoft.IntegrationToolkit.DynamicsCrm.CrmServiceException: CRM service call returned an error: Failed to allocate a managed memory buffer of 536870912 bytes. The amount of available memory may be low. (SSIS Integration Toolkit for Microsoft Dynamics 365, v10.2.0.6982 - DtsDebugHost,
I've looked for resources on how to tune this, but cannot find anything relevant to having a 64-bit Windows 10 machine (as opposed to a server) that has 32GB of RAM to play with.
For a bit more context, I'm migrating notes from one CRM D365 environment to another using Kingsway. The notes with attachments are the ones causing the issue.
Properties: Execution, Source, Destination
I have had this problem before and it was not the physical memory (i.e., RAM), but the physical disk space where the database is stored. Check to see what the available hard drive space is on the drive that stores both the database and transaction log files - chances are that it is full and therefore unable to allocate any additional disk space.
In this context, the error message citing 'memory' is a bit misleading.
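For what it's worth, a quick way to check that from a Windows command prompt (nothing here is specific to this environment):

wmic logicaldisk get Caption,FreeSpace,Size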
UPDATE
I think this is actually caused by having too much data in the pipeline buffer. You will either need to look at expanding the buffer's memory allocation (i.e., DefaultBufferSize) or take a look at what data is flowing through the pipeline. Typical causes are lots of columns with large NVARCHAR() character counts. Copying the rows with a Multicast will only compound the problem. With respect to the 3rd-party components you are using, your guess is as good as mine because I have not used them. A few illustrative buffer settings to experiment with are sketched below.
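For example (illustrative values only, not a recommendation for this specific package; AutoAdjustBufferSize is available from SSIS 2016 onward):

DefaultBufferMaxRows = 500
DefaultBufferSize    = 104857600   (roughly 100 MB, well below the ~512 MB allocation that failed)
AutoAdjustBufferSize = True        (the engine then sizes buffers from DefaultBufferMaxRows and ignores DefaultBufferSize)

These are Data Flow Task properties, so they are set per data flow rather than per package.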
For anyone coming along later:
The error says "CRM service call returned an error: Failed to allocate a managed memory buffer of 536870912 bytes". I understood it to be the CRM Server that had the memory issue.
Regardless, we saw this error when migrating email attachments via the ActivityMimeAttachment entity. The problem appeared to be related to running the insert into the target CRM with too large a batch size and/or with multi-threading enabled.
We set the batch size to 1 and turned off the multi-threading and the issue went away. (We also set the batch size to 1 on the request from the source - we saw "service unavailable" errors from an on-premise CRM when the batch size was too high and the attachments were too large.)
I created a TruClient Web (IE) protocol script in LR 12.55. When I try to run the script with 50 users, only some of them (between 25 and 37) go into the running state, and the rest are stuck in init forever.
I tried changing Controller -> Options -> Timeout and increased the Init timeout from the default 180 to 999, but it does not resolve the issue. Can anybody comment on how to resolve this?
TruClient runs a real browser for each vuser (virtual user), so system resource consumption is higher than with API-level testing.
It is possible that 50 vusers are too many for your load-generator machine.
I'd suggest checking CPU and memory levels during the run (a quick way to log them is sketched below). If either is over 80% utilization, you should split your load between multiple load-generator machines.
If resources are not fully utilized, the failures should be analyzed to determine the root cause.
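If the load generators are Windows machines (an assumption), one lightweight way to log both counters for the duration of the run is:

typeperf "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" -si 5 -o lg_health.csv

Reviewing lg_health.csv afterwards shows whether the generator was saturated while the vusers were stuck in init.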
To build on e-Dough's excellent response: you should plan not to execute these virtual users on the same hardware as the controller. You should expect at least three load generators to be involved, two as primary load and one as a control set, in addition to the controller.
Your issue does manifest as the classical "system out of resources" condition. Apply the same best practices for monitoring your load generator health as you would for monitoring your application-under-test infrastructure. You want monitors for the classical finite resources (CPU, disk, memory and network) plus their sub-components, such as a breakout of system versus application CPU, to understand where and how your generators are performing. The goal is to eliminate false negatives on scalability, where the load generators are so unhealthy that they distort your test results: virtual users report that the application is slow when in fact the virtual users themselves are slow because the machine they run on is resource constrained.
I'm running some queries over a 100 GB TPC-H dataset on Presto. I have 4 nodes: 1 master and 3 workers. When I try to run some queries (not all of them), I see on the Presto web interface that nodes die during execution, resulting in query failure. The error is the following:
.facebook.presto.operator.PageTransportTimeoutException: Encountered too many errors talking to a worker node. The node may have crashed or been under too much load. This is probably a transient issue, so please retry your query in a few minutes.
I rebooted all nodes and the Presto service, but the error remains. This problem doesn't occur if I run the same queries over a smaller dataset. Can someone provide some help with this problem?
Thanks
There are 3 possible causes for this kind of error. You may SSH into one of the workers to find out what the problem is while the query is running (illustrative settings for each cause are sketched after this list).
High CPU
Tune down the task.concurrency to, for example, 8
High memory
In jvm.config, -Xmx should be no more than 80% of total memory. In config.properties, query.max-memory-per-node should be no more than half of the Xmx value.
Low open file limit
Set a larger limit for the Presto process in /etc/security/limits.conf. The default is definitely way too low.
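As an illustration only (the numbers assume a worker with 64 GB of RAM and a presto service user; adjust them to your machines):

In jvm.config:
-Xmx51G

In config.properties:
task.concurrency=8
query.max-memory-per-node=25GB

In /etc/security/limits.conf:
presto soft nofile 131072
presto hard nofile 131072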
It might be a configuration issue. For example, if the local maximum memory is not set appropriately and the query uses too much heap memory, full GC might kick in and cause such errors. I would suggest asking in the Presto Google Group and describing a way to reproduce the issue :)
I was running Presto on a Mac with 16GB of RAM. Below is the configuration of my jvm.config file:
-server
-Xmx16G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:OnOutOfMemoryError=kill -9 %p
I was getting the following error even when running the query
Select now();
Query 20200817_134204_00005_ud7tk failed: Encountered too many errors talking to a worker node. The node may have crashed or be under too much load. This is probably a transient issue, so please retry your query in a few minutes.
I changed my -Xmx16G value to -Xmx10G and it works fine, presumably because giving the JVM heap all 16GB of physical RAM left nothing for the OS and the rest of the Presto process.
I used the following link to install Presto on my system:
Link for Presto Installation
I am trying to load a dataset into GraphDB 7.0. I wrote a Python script in Sublime Text 3 to transform and load the data. The program suddenly stopped working and closed, the computer threatened to restart but didn't, and I lost several hours' worth of computing because GraphDB won't let me query the inserts. This is the error I get in GraphDB:
The currently selected repository cannot be used for queries due to an error:
org.openrdf.repository.RepositoryException: java.lang.RuntimeException: There is not enough memory for the entity pool to load: 65728645 bytes are required but there are 0 left. Maybe cache-memory/tuple-index-memory is too big.
I set the JVM as follows:
-Xms8g
-Xmx9g
I don't exactly remember what I set as the values for the cache and index memories. How do I resolve this issue?
For the record, the database I need to parse has about 300k records. The program shut shop at about 50k. What do I need to do to resolve this issue?
Open the workbench and check the amount of memory you have given to cache memory.
Xmx should be set to a value that is enough for
cache-memory + memory-for-queries + entity-pool-hash-memory
Sadly, the latter cannot be calculated easily because it depends on the number of entities in the repository (a rough sizing illustration follows this list). You will either have to:
Increase the Java memory by setting a bigger value for Xmx
Decrease the value for cache-memory
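As a rough illustration with the numbers from this question (the amounts reserved for queries and for the entity pool are assumptions, not measured values):

Xmx = 9g (total Java heap given to GraphDB)
cache-memory = 4g (example value set in the Workbench repository configuration)
9g - 4g (cache) - ~2g (queries) leaves roughly 3g of heap for the entity pool

If the entity pool still cannot load, either raise Xmx or lower cache-memory further.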