Process out of memory in php 7 - php-7

I am trying to run a feed import process, but I keep getting this error. Should I increase some parameters in my php.ini? I tried, but no luck.
Out of memory: Kill process 1845 (php-fpm7.0) score 566 or sacrifice child
May 20 06:05:21 kernel: [1278231.829579] Killed process 1845 (php-fpm7.0) total-vm:521780kB, anon-rss:258000kB, file-rss:28036kB
thanks

Related

Training with Roboflow-Train-YOLOv5 stops with a '^C'

Running Roboflow's notebook, 'Roboflow-Train-YOLOv5', stops after completing the epochs loop.
Instead of reporting the results, I get the following lines, with a ^C at the end of the 3rd line from the end.
I would like to know the reason for the failure, and whether there is a way to fix it.
10 epochs completed in 0.191 hours.
Optimizer stripped from runs/train/yolov5s_results2/weights/last.pt, 14.9MB
Optimizer stripped from runs/train/yolov5s_results2/weights/best.pt, 14.9MB
Validating runs/train/yolov5s_results2/weights/best.pt...
Fusing layers...
my_YOLOv5s summary: 213 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Class Images Labels P R mAP@.5 mAP@.5:.95: 20% 1/5 [00:01<00:04, 1.03s/it]^C
CPU times: user 7.01 s, sys: 830 ms, total: 7.84 s
Wall time: 12min 31s
My Colab plan is Colab Pro, so I guess it is not a problem of resources.

I/O in Pytorch DataLoader with np.load extremely slow on SSD

I am trying to load a relatively large batch of float16 multispectral images (BxCxHxW=800x12x256x256) to train a deep learning model. The code for the DataLoader is extremely simple:
import os

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

# os.listdir returns bare file names, so join them with the data directory
data_dir = "/home/bla/data"
paths = [os.path.join(data_dir, name) for name in os.listdir(data_dir)]

class MultiSpectralImageDataset(Dataset):
    def __init__(self, paths):
        self.paths = np.array(paths)
        self.l = len(self.paths)

    def __len__(self):
        return self.l

    def __getitem__(self, idx):
        # Each file holds one 12 x 256 x 256 float16 multispectral image
        path = self.paths[idx]
        image = np.load(path)
        return image

dataset = MultiSpectralImageDataset(paths)
loader = DataLoader(dataset, batch_size=800, shuffle=True, pin_memory=True,
                    num_workers=16, drop_last=True)

for i, X in enumerate(loader):
    X = X.cuda(non_blocking=True).float()
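To see how much of the slowdown comes from the input pipeline alone, a rough timing loop like the sketch below (it reuses the loader defined above and assumes nothing else is competing for the disk) reports the effective MB/s the DataLoader delivers without any GPU work:
import time

# Pull a few batches and report the effective input throughput, with no GPU work.
bytes_per_sample = 12 * 256 * 256 * 2      # one float16 multispectral image, ~1.5 MB
start = time.time()
n_samples = 0
for i, X in enumerate(loader):
    n_samples += X.shape[0]
    if i == 9:                             # ten batches are enough for an estimate
        break
elapsed = time.time() - start
print(f"{n_samples * bytes_per_sample / elapsed / 1e6:.0f} MB/s from the DataLoader")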
The images are individual files on a very fast NVME SSD. I can verify the read speed of the SSD with sudo hdparm -tT /dev/nvme1n1. This gives me:
/dev/nvme1n1:
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
readonly = 0 (off)
readahead = 256 (on)
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
geometry = 1907729/64/32, sectors = 3907029168, start = 0
bla#bla:~/workspace$ sudo hdparm -tT /dev/nvme1n1
/dev/nvme1n1:
Timing cached reads: 59938 MB in 2.00 seconds = 30041.04 MB/sec
HDIO_DRIVE_CMD(identify) failed: Inappropriate ioctl for device
Timing buffered disk reads: 6308 MB in 3.00 seconds = 2102.35 MB/sec
This confirms the read speed of the SSD is over 2 GB/s. However, when using the PyTorch DataLoader, I am nowhere near able to match this I/O speed. During training, the GPU is idle (0% utilization) most of the time, and the CPU is hardly used (htop shows most cores at 0% usage, some cores at 0.5-1.5% usage). Running iotop shows:
The Total Disk Read speed never surpasses 300 MB/s. If I decrease num_workers (say by half), the Total Disk Read remains the same (~200 MB/s), and each individual thread doubles in read speed. In particular, I observe that every num_workers iterations, the iteration is extremely slow (takes ~1 minute). This apparently simply means that the loading from disk is too slow, as discussed in the PyTorch forum here.
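For scale, a quick back-of-the-envelope calculation with the batch shape from the question shows what those read speeds mean per batch:
# One batch: 800 samples of 12 x 256 x 256 float16 values
batch_bytes = 800 * 12 * 256 * 256 * 2
print(batch_bytes / 1e9)        # ~1.26 GB per batch
print(batch_bytes / 2.1e9)      # ~0.6 s per batch at the hdparm buffered-read rate
print(batch_bytes / 300e6)      # ~4.2 s per batch at the observed ~300 MB/s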
What's weird is that I am 99.9% confident it used to work. I remember consistently reaching almost 100% GPU utilization with the same data-loading procedure.
Things I've tried, but with no success:
Updating Ubuntu, updating everything with apt update & upgrade, rebooting, powering off and restarting
Updating the SSD firmware using fwupd (no updates available)
Giving higher priority to the process by running Python using sudo and using os.nice(-10)
Making space on the SSD (30% of the storage is empty; I have run fstrim -v)
Using memmap, i.e. np.load(path, mmap_mode='r') (see the sketch after this list)
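For reference, a minimal sketch of that memmap variant inside the dataset's __getitem__ from the snippet above (np.load's keyword is mmap_mode):
    def __getitem__(self, idx):
        # Memory-map the file instead of reading it eagerly; np.asarray then
        # copies the mapped data into a regular array so the default collate
        # function still receives a plain ndarray.
        image = np.load(self.paths[idx], mmap_mode='r')
        return np.asarray(image)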
I really appreciate any help, as I've been stuck with this problem for weeks now, and what used to take 13 minutes per epoch now takes approximately 1 h 45 min per epoch, making training infeasible.

WSL2 io speeds on Linux filesystem are slow

Trying out WSL2 for the first time. Running Ubuntu 18.04 on a Dell Latitude 9510 with an SSD. I noticed build speeds of a React project were brutally slow. Per all the articles on the web, I'm running the project out of ~ and not the Windows mount. I ran a benchmark using sysbench --test=fileio --file-test-mode=seqwr run in ~ and got:
File operations:
reads/s: 0.00
writes/s: 3009.34
fsyncs/s: 3841.15
Throughput:
read, MiB/s: 0.00
written, MiB/s: 47.02
General statistics:
total time: 10.0002s
total number of events: 68520
Latency (ms):
min: 0.01
avg: 0.14
max: 22.55
95th percentile: 0.31
sum: 9927.40
Threads fairness:
events (avg/stddev): 68520.0000/0.00
execution time (avg/stddev): 9.9274/0.00
If I'm reading this correctly, that wrote 47 MB/s. I ran the same test on my Mac mini and got 942 MB/s. Is this normal? It seems like the Linux I/O speeds on WSL are unusably slow. Any thoughts on ways to speed this up?
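For a rough cross-check that doesn't depend on sysbench, a small Python timing script like the sketch below (file name and size are arbitrary) can be run once in ~ and once under /mnt/c to compare the two filesystems:
import os, time

# Sequentially write ~512 MB in 4 MiB chunks, fsync, and report MB/s.
path = "wsl_io_test.bin"
chunk = b"\0" * (4 * 1024 * 1024)
n_chunks = 128
start = time.time()
with open(path, "wb") as f:
    for _ in range(n_chunks):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())
elapsed = time.time() - start
os.remove(path)
print(f"{len(chunk) * n_chunks / elapsed / 1e6:.0f} MB/s sequential write")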
---edit---
Not sure if this is a fair comparison, but here is the output of winsat disk -drive c on the same machine from the Windows side. Smoking fast:
> Dshow Video Encode Time 0.00000 s
> Dshow Video Decode Time 0.00000 s
> Media Foundation Decode Time 0.00000 s
> Disk Random 16.0 Read 719.55 MB/s 8.5
> Disk Sequential 64.0 Read 1940.39 MB/s 9.0
> Disk Sequential 64.0 Write 1239.84 MB/s 8.6
> Average Read Time with Sequential Writes 0.077 ms 8.8
> Latency: 95th Percentile 0.219 ms 8.9
> Latency: Maximum 2.561 ms 8.7
> Average Read Time with Random Writes 0.080 ms 8.9
> Total Run Time 00:00:07.55
---edit 2---
Windows version: Windows 10 Pro, Version 20H2 Build 19042
Late answer, but I had the same issue and wanted to post my solution for anyone who has the problem:
Windows Defender seems to destroy the read speeds in WSL. I added the entire rootfs folder as an exclusion. If you're comfortable turning off Windows Defender, I recommend that as well. Any antivirus probably has similar issues, so adding the WSL directories as an exclusion is probably your best bet.

Tensorflow strange CPU usage

My problem is with TensorFlow and CPU usage.
My System:
CPU => AMD FX 8320 (8 cores at 3.5 GHz) and 8 threads
Graphics => GTX 970
RAM => 16 GB, and I believe DDR3 2600
I want to run an A3C algorithm for StarCraft II (pysc2) on my PC, which works fine, but the CPU usage is somewhat strange.
If I start the algorithm with 4 workers, I get about 150k steps in 1 h,
and all CPUs are used at about 25-30%.
If I start the same algorithm with 8 workers, I get about 120k steps in 1 h, and all CPUs are used at about 25-30%.
If I now start the algorithm with 4 workers twice, I get 150k steps in 1 h from each, and the CPU usage is 60-70%.
Why can't I start the algorithm with 8 workers, get double the number of steps in 1 h, and have the CPU used at 70%?
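For context, the number of CPU threads a single TensorFlow session uses is configurable; a minimal sketch with the TF 1.x API (assuming each worker builds its own session, which may not match your setup) looks like this:
import tensorflow as tf

# Cap the threads one session may use so that several worker processes
# do not all contend for the same cores.
config = tf.ConfigProto(
    intra_op_parallelism_threads=2,   # threads used inside a single op
    inter_op_parallelism_threads=2,   # ops that may run concurrently
)
sess = tf.Session(config=config)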

Can't configure GPU bios properly

I have six RX 470 GPUs. Each should be mining 25-27 MH/s on average, but it's only 20 MH/s. Overall it's 120 instead of 150-170. I think the problem is the GPU BIOS configuration, but I can't figure out anything else. Any suggestions?
25 MH/s is what you would expect from a stock RX 480. To get the same hashrate from an RX 470, you'd be looking at overclocking the memory speed (+600). How to overclock depends on whether you're running Linux or Windows.