I am currently using Laravel 8, and the service runs on AWS Lightsail.
The service includes a feature for uploading images of up to 50 MB, and the following error sometimes occurs when it is used.
PHP's memory_limit is currently set to 512M, and the Lightsail instance has 16 GB of RAM. Is there any workaround for this error?
error:
[2022-11-20 19:02:43] local.ERROR: Allowed memory size of 536870912
bytes exhausted (tried to allocate 36864 bytes)
{"userId":13111,"exception":"[object]
(Symfony\Component\ErrorHandler\Error\FatalError(code: 0): Allowed
memory size of 536870912 bytes exhausted (tried to allocate 36864
bytes) at
/var/www/project/vendor/intervention/image/src/Intervention/Image/Gd/Decoder.php:154)
[stacktrace]
Which method should I use?
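One hedged starting point, assuming the error is hitting the 512M memory_limit (536870912 bytes is exactly 512 MB) and that Intervention Image's GD driver is decoding the whole bitmap into RAM, where a 50 MB JPEG can expand to several hundred MB once decoded: raise the limit for the PHP process that handles the upload. The values below are illustrative, not a recommendation:

; php.ini (or the relevant php-fpm pool config), illustrative values only
memory_limit = 1024M
upload_max_filesize = 50M
post_max_size = 60M

Resizing the image on upload, or switching Intervention Image to the imagick driver, can also lower peak usage; how much depends on the image dimensions rather than the file size.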
Related
I have a system with 8 GB RAM and a 0.5 TB HDD, on which I'm trying to load a 1.2 GB CSV file in a Jupyter Notebook. I am getting the following error:
Unable to allocate 64.0 KiB for an array with shape (8192,) and data type int64
Is there any way to load the file without the notebook crashing?
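Without knowing how the CSV is being read, one common sketch is to load it in chunks with pandas and downcast the 64-bit columns, so the whole 1.2 GB file is never materialised as int64/float64 at once (the file name and chunk size below are assumptions):

import pandas as pd

chunks = []
for chunk in pd.read_csv("data.csv", chunksize=100_000):  # assumed file name and chunk size
    # downcast 64-bit columns to the smallest dtype that fits the values
    for col in chunk.select_dtypes("int64").columns:
        chunk[col] = pd.to_numeric(chunk[col], downcast="integer")
    for col in chunk.select_dtypes("float64").columns:
        chunk[col] = pd.to_numeric(chunk[col], downcast="float")
    chunks.append(chunk)

df = pd.concat(chunks, ignore_index=True)

If even the downcast frame does not fit, processing each chunk and discarding it, or reading only the needed columns with usecols, avoids holding everything in memory.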
I am experiencing an out-of-memory issue while joining two datasets; one contains 39M rows and the other contains 360K rows.
I have 2 worker nodes, and each worker node has a maximum of 125 GB of memory.
In YARN: Memory allocated for all YARN containers on a node = 96 GB
Minimum Container Size (Memory) = 3072 MB
In the Hive settings:
hive.tez.java.opts=-Xmx2728M -Xms2728M -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB
hive.tez.container.size=3410
What values should I set to get rid of the out-of-memory issue?
I solved it by increasing the YARN memory allocation:
Minimum Container Size (Memory): 3072 MB to 3840 MB
Memory allocated for all YARN containers on a node: 96 GB to 120 GB (each node had 120 GB)
Percentage of physical CPU allocated for all containers on a node: 80%
Number of virtual cores: 8
https://learn.microsoft.com/en-us/azure/hdinsight/hdinsight-hadoop-hive-out-of-memory-error-oom
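Following that article, hive.tez.container.size and the -Xmx in hive.tez.java.opts are usually raised together, with the heap kept at roughly 80% of the container; a sketch with the larger container size (the exact figures are assumptions, not measured values):

SET hive.tez.container.size=3840;
SET hive.tez.java.opts=-Xmx3072m -Xms3072m -Djava.net.preferIPv4Stack=true -XX:NewRatio=8 -XX:+UseNUMA -XX:+UseG1GC -XX:+ResizeTLAB;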
Loading 1500 images of size (1000, 1000, 3) breaks the code and throws a kill 9 without any further error. Memory used before this line of code is 16% of total system memory. The total size of the images directory is 7.1 GB.
X = np.asarray(images).astype('float64')
y = np.asarray(labels).astype('float64')
System spec:
OS: macOS Catalina
Processor: 2.2 GHz 6-Core Intel Core i7
Memory: 16 GB 2400 MHz DDR4
Update:
getting the below error while running the code on a machine with 32 vCPUs and 120 GB of memory.
MemoryError: Unable to allocate 14.1 GiB for an array with shape (1200, 1024, 1024, 3) and data type float32
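As a rough size check, using nothing beyond the shapes and dtypes already stated above: the original 1500-image array needs about 33.5 GiB as float64, and the 14.1 GiB in the update is exactly the float32 array from the error message:

# bytes = images * height * width * channels * bytes_per_element
print(1500 * 1000 * 1000 * 3 * 8 / 2**30)   # ~33.5 GiB as float64
print(1200 * 1024 * 1024 * 3 * 4 / 2**30)   # ~14.1 GiB as float32, matching the error

If even the 120 GB machine fails to allocate 14.1 GiB, something else (for example the Python list of decoded images kept alongside the new array) is likely consuming memory at the same time.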
You would have to provide some more info/details for an exact answer, but assuming this is a memory error (which is very likely): the size of the images on disk does not reflect the space they occupy in memory, so the 7.1 GB figure is not the relevant number. Once loaded, the images will always occupy considerably more space because of pointers, supporting objects, and so on. Intuitively I would say that 16 GB of RAM is nowhere near enough to load 7 GB of images. It's impossible to say exactly how much you would need, but from experience I would guess you'd have to bump it up to about 64 GB. If you are using Keras, I would suggest looking into the DirectoryIterator.
Edit:
As Cris Luengo pointed out, I missed the fact that you stated the size of the images.
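A minimal sketch of the DirectoryIterator route, assuming a tf.keras setup and an images/ folder with one sub-directory per class (the path, target size, and batch size are placeholders); flow_from_directory returns a DirectoryIterator that loads each batch from disk on demand instead of building one giant array:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255)
train_iter = datagen.flow_from_directory(
    "images/",                 # placeholder path: one sub-folder per class
    target_size=(1000, 1000),  # images are resized to this on load
    batch_size=16,
    class_mode="categorical",
)

# model.fit(train_iter, epochs=10)  # batches are consumed lazily during training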
When we use Spark on YARN for non-streaming apps, the allocated memory generally matches the number of executors times the memory per executor. With streaming apps, the allocated memory is immediately pushed to the limit (total memory), as shown in the YARN console.
With this set of parameters
--driver-memory 2g --num-executors 32 --executor-memory 500m
total memory 90G, memory used 85.88G
total vcores 64, vcores used 33
you would expect a baseline of 32 * 1 GB (500m plus overhead, rounded up) plus driver memory, or around 34 GB, and 33 vcores (32 executors + 1 driver).
Questions:
Is the 64 vcores figure due to the requirement of 2 core pairs for streaming connection and processing?
How did the estimated 34 GB get pushed to 85.88 GB? Is it always true that with streaming apps YARN gives them everything it has?
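On the first part of the estimate, a back-of-the-envelope that assumes the default executor memory overhead (the larger of 384 MB or 10% of executor memory) and a yarn.scheduler.minimum-allocation-mb of 1024, so containers are rounded up to 1 GB multiples; it lands close to the ~34 GB baseline but does not by itself explain the jump to 85.88 GB:

executor container = 500m + max(384m, 0.10 * 500m) = 884m  -> rounded up to 1 GB
32 executors       = 32 GB
driver container   = 2g + max(384m, 0.10 * 2g)     = 2432m -> rounded up to 3 GB
expected total     = about 35 GB and 33 vcores (32 executors + 1 driver)

If dynamic allocation is enabled for the streaming job, additional executors can be requested up to the cluster limit, which would account for the used memory climbing toward the total; that is an assumption to verify against spark.dynamicAllocation.enabled in the job's config.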
In the neo4j-wrapper.conf file I see this:
# Java Heap Size: by default the Java heap size is dynamically
# calculated based on available system resources.
# Uncomment these lines to set specific initial and maximum
# heap size in MB.
#wrapper.java.initmemory=512
#wrapper.java.maxmemory=512
Does that mean that I should not worry about -Xms and -Xmx?
I've read elsewhere that -XX:ParallelGCThreads=4 -XX:+UseNUMA -XX:+UseConcMarkSweepGC would be good.
Should I add that on my machine with an Intel® Core™ i7-4770 quad-core Haswell, 32 GB DDR3 RAM, and 2 x 240 GB 6 Gb/s SSDs (software RAID 1)?
I would still configure it manually.
Set both to 12 GB and use the remaining 16 GB for memory mapping in neo4j.properties. Try to match it to your store file sizes.
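A sketch of what that configuration could look like, assuming an older (2.0/2.1-style) install where the heap is set in conf/neo4j-wrapper.conf and the store mapped-memory settings live in conf/neo4j.properties; the per-store figures are placeholders that should be sized against the actual store files on disk:

# conf/neo4j-wrapper.conf
wrapper.java.initmemory=12288
wrapper.java.maxmemory=12288

# conf/neo4j.properties (placeholder split of the 16 GB, match it to your store file sizes)
neostore.nodestore.db.mapped_memory=2048M
neostore.relationshipstore.db.mapped_memory=8192M
neostore.propertystore.db.mapped_memory=4096M
neostore.propertystore.db.strings.mapped_memory=2048M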