I wrote a multiprocessing script to speed up an image analysis. It seems to work well, but I ran several tests to determine the best number of processes.
The test varies the number of processes, and since there is some dispersion, I added a loop to repeat each test one hundred times.
But as the loop runs, the elapsed time increases significantly. What could be the origin of this problem? Do I have to flush memory? There seems to be no memory saturation.
A piece of my code:
from multiprocessing import Process, current_process
import multiprocessing
import glob as glob
import matplotlib.pyplot as plt
from skimage import io
import time
import sys
import numpy as np
import numpy.ma as ma
import gc
import os
from PIL import Image
from skimage import exposure
import cv2
Path_input = "E:\\test\\raw\\"
Path_output = "E:\\test\\"
Img_list = glob.glob((Path_input + 'Test_*.tif' ))[:]
size_y,size_x = io.imread(Img_list[0]).shape
# Function executed by each worker process
def Ajustement(x):
    # image reading
    img = plt.imread(Img_list[x])
    # img_rescale was not defined in the posted snippet; an intensity-rescaling
    # step like the following is assumed here
    img_rescale = exposure.rescale_intensity(img)
    # create a CLAHE object and apply it
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    cl1 = clahe.apply(img_rescale.astype(np.uint16))
    cv2.imwrite(Path_output + 'Ajusted' + "%05d" % x + '.tif', cl1)
    return 'Ajustement OK!'
# create a list of process names
cpu_max = 10
name_list = ['Process_'] * cpu_max
list_process = []
counter = 1
for u in name_list:
    list_process.append(u + str(counter))
    counter = counter + 1
get_timer = time.clock if sys.platform == "win32" else time.time
time_store = []
time_process = []
if __name__ == '__main__':
    range_adjusted = np.arange(0, len(Img_list), cpu_max)
    for m in range(0, 100, 1):  # loop to obtain a mean time for the process
        print(m)
        timer = get_timer()  # time measuring starts now
        for i in range_adjusted:
            o = 0
            for item in list_process[:cpu_max]:  # process creation
                globals()[item] = Process(name='worker1', target=Ajustement, args=(i + o,))
                o = o + 1
            for item in list_process[:cpu_max]:  # process start
                globals()[item].start()
            for item in list_process[:cpu_max]:  # process join
                globals()[item].join()
            if i == range_adjusted.max():
                print("Normalization and Equalization finished")
        timer = get_timer() - timer  # get delta time as soon as it finishes
        time_store.append(timer)
        time_process.append(timer / cpu_max)
        np.savetxt(Path_output + 'time_tot_normalization.txt', time_store)
        np.savetxt(Path_output + 'time_process_normalization.txt', time_process)
        print("\tTotal: {:.2f} seconds".format(timer))
        print("\tAvg. per process: {:.2f} seconds".format(timer / cpu_max))
I think it was due to a memory leak. Indeed, I added a gc.collect() call after each loop and the problem was solved.
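For reference, a minimal sketch of where the call could go (the placement after each timed repetition is my assumption; the names mirror the snippet above):
import gc
import time

time_store = []
for m in range(100):                   # repetition loop, as in the snippet above
    timer = time.time()
    # ... create, start and join the worker processes here ...
    time_store.append(time.time() - timer)
    gc.collect()                       # force a collection before the next repetition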
I'm now trying to convert the signal into a Fast Fourier transform in Python and draw a graph. I have a problem with len() here. How can I fix it? And does anyone have other ideas about computing the Fast Fourier transform?
Exception has occurred: TypeError
object of type 'method' has no len()
That is my problem.
from PyQt5.QtWidgets import*
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure
import matplotlib.pyplot as plt
import random
from PyQt5 import QtCore, QtGui, QtWidgets
import datetime
import serial
import time
import numpy as np
from matplotlib import animation
from collections import deque
import threading
x = 0
value = [0]
ser = serial.Serial('com5', 9600)
class scope:
    def data(self):
        if ser.readable():
            time.sleep(0.01)
            reciving = ser.readline(ser.inWaiting())
            str = reciving.decode()
            if len(str) > 0:
                if str[:1] == 'X':
                    value[0] = str[1:]
                    #print(float(value[5]))
                    time.sleep(0.5)
        x = float(value[0])
        return x
s = scope()
n = len(s.data)  # <-- this line raises the TypeError: s.data is a bound method, not a sequence
Ts = 0.01
Fs = 1/Ts
# length of the signal
k = np.arange(n)
T = n/Fs
freq = k/T # two sides frequency range
freq = freq[range(int(n/2))] # one side frequency range
Y = np.fft.fft(x)/n # fft computing and normalization
Y = Y[range(int(n/2))]
fig, ax = plt.subplots(2, 1)
ax.plot(freq, abs(Y), 'r', linestyle=' ', marker='^')
ax.set_xlabel('Freq (Hz)')
ax.set_ylabel('|Y(freq)|')
#3ax.vlines(freq, [0], abs(Y))
ax.grid(True)
t = threading.Thread(target= s.data)
t.daemon = True
t.start()
plt.show()
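No answer is recorded for this question in this collection; one possible direction, as a minimal sketch only, is to buffer a fixed number of readings from scope.data() into a list before transforming (the sample count N and the buffering loop are assumptions, not part of the original code):
import numpy as np

N = 512                                   # number of samples to buffer (assumed)
samples = [s.data() for _ in range(N)]    # call the method; len() now works on the list
sig = np.asarray(samples, dtype=float)

Ts = 0.01                                 # sampling interval from the question
Fs = 1.0 / Ts
k = np.arange(N)
freq = (k * Fs / N)[:N // 2]              # one-sided frequency axis
Y = (np.fft.fft(sig) / N)[:N // 2]        # FFT, normalized, one side kept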
I am using the following code to create an array and store the results sequentially in HDF5 format. I was checking the dask documentation, and it suggested using dask.store to store arrays generated by a function like mine. However, I receive an error: dask has no attribute store.
My code:
import os
import numpy as np
import time
import concurrent.futures
import multiprocessing
from itertools import product
import h5py
import dask as da
def mean_py(array):
    start_time = time.time()
    x = array.shape[1]
    y = array.shape[2]
    values = np.empty((x, y), type(array[0][0][0]))
    for i in range(x):
        for j in range(y):
            values[i][j] = np.mean(array[:, i, j])
    end_time = time.time()
    hours, rem = divmod(end_time - start_time, 3600)
    minutes, seconds = divmod(rem, 60)
    print("{:0>2}:{:0>2}:{:05.2f}".format(int(hours), int(minutes), int(seconds)))
    print(f"{'.'*80}")
    return values

def generate_random_array():
    a = np.random.randn(120560400).reshape(10980, 10980)
    return a

def generate_array(nums):
    for num in range(nums):
        a = generate_random_array()
        f = h5py.File('test_db.hdf5')
        d = f.require_dataset('/data', shape=a.shape, dtype=a.dtype)
        da.store(a, d)
start = time.time()
generate_array(8)
end = time.time()
print(f'\nTime complete: {end-start:.2f}s\n')
Should I use dask for such a task, or do you recommend storing the results using h5py directly?
Please ignore the mean_py(array) function. It's for something I want to try out once the data has been produced.
As suggested in the comments, you're currently doing this:
import dask as da
When you probably meant to do this:
import dask.array as da
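Note also that dask.array.store expects dask arrays as sources, so a plain numpy array would first be wrapped with from_array. A minimal sketch under that assumption (the chunk size and dataset name are arbitrary choices):
import dask.array as da
import h5py
import numpy as np

a = np.random.randn(10980, 10980)
x = da.from_array(a, chunks=(1000, 1000))       # wrap the numpy array as a dask array

with h5py.File('test_db.hdf5', 'a') as f:
    d = f.require_dataset('/data', shape=a.shape, dtype=a.dtype)
    da.store(x, d)                              # write the dask array into the HDF5 dataset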
I am trying to speed up the conversion of selected tfrecords to a series of Python dictionaries. Here's what I have. Initially the CPU utilization spikes, but then it drops to almost zero, suggesting my code is not working correctly.
My goal is to have 3 dictionaries saved and pickled. There are 14,000+ tfrecord files (approx. 2 GB). At the current rate, it would take about 84 hours to run on a single process.
Are there any problems with my use of managed dicts?
import glob
import tensorflow as tf
import cPickle
import numpy as np
from tqdm import tqdm
import collections
from multiprocessing import Process, Manager, Pool
def get_multihot_encoding(example_label):
    enc = np.zeros(10)
    for label in example_label:
        if label in lookup.values():
            index = lookup_inverted[label]
            enc[index] = 1
    return list(enc)
# Set-up MultiProcessing
manager = Manager()
audio_embeddings_dict = manager.dict()
audio_labels_dict = manager.dict()
audio_multihot_dict = manager.dict()
sess = tf.Session()
# The iterable which gets passed to the function
all_tfrecord_filenames = glob.glob('/Users/jeff/features/audioset_v1_embeddings/unbal_train/*.tfrecord')
def process_tfrecord(tfrecord):
    for idx, example in enumerate(tf.python_io.tf_record_iterator(tfrecord)):
        tf_example = tf.train.Example.FromString(example)
        vid_id = tf_example.features.feature['video_id'].bytes_list.value[0].decode(encoding='UTF-8')
        example_label = list(np.asarray(tf_example.features.feature['labels'].int64_list.value))
        # Non-zero intersection of the 2 sets is True - only create dict entries if this is true!
        if set(example_label) & label_filters:
            print(set(example_label) & label_filters, " Is the intersection of the two")
            tf_seq_example = tf.train.SequenceExample.FromString(example)
            n_frames = len(tf_seq_example.feature_lists.feature_list['audio_embedding'].feature)
            audio_frame = []
            for i in range(n_frames):
                audio_frame.append(tf.cast(tf.decode_raw(
                    tf_seq_example.feature_lists.feature_list['audio_embedding'].feature[i].bytes_list.value[0], tf.uint8),
                    tf.float32).eval(session=sess))
            audio_embeddings_dict[vid_id] = audio_frame
            audio_labels_dict[vid_id] = example_label
            audio_multihot_dict[vid_id] = get_multihot_encoding(example_label)
            #print(get_multihot_encoding(example_label), "Is the encoded label")
        if idx % 100 == 0:
            print("Saving dictionary at loop: {}".format(idx))
            cPickle.dump(audio_embeddings_dict, open('audio_embeddings_dict_unbal_train_multi_{}.pkl'.format(idx), 'wb'))
            cPickle.dump(audio_multihot_dict, open('audio_multihot_dict_bal_untrain_multi_{}.pkl'.format(idx), 'wb'))
            cPickle.dump(audio_multihot_dict, open('audio_labels_unbal_dict_multi_{}.pkl'.format(idx), 'wb'))
pool = Pool(50)
result = pool.map(process_tfrecord, all_tfrecord_filenames)
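No answer is included for this question here; as a general hardening step only (not a confirmed fix for the CPU-utilization drop), pool creation is usually wrapped in a __main__ guard and sized closer to the CPU count, e.g.:
from multiprocessing import Pool, cpu_count

if __name__ == '__main__':
    # create the pool only in the parent process; 50 workers likely oversubscribes the machine
    pool = Pool(cpu_count())
    result = pool.map(process_tfrecord, all_tfrecord_filenames)
    pool.close()
    pool.join()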
Does anyone know how one goes about enabling the REFS_OK flag in numpy? I cannot seem to find a clear explanation online.
My code is:
import sys
import string
import numpy as np
import pandas as pd
SNP_df = pd.read_csv('SNPs.txt',sep='\t',index_col = None ,header = None,nrows = 101)
output = open('100 SNPs.fa','a')
for i in SNP_df:
    data = SNP_df[i]
    data = np.array(data)
    for j in np.nditer(data):
        if j == 0:
            output.write(("\n>%s\n") % (str(data(j))))
        else:
            output.write(data(j))
I keep getting the error message: Iterator operand or requested dtype holds references, but the REFS_OK was not enabled.
I cannot work out how to enable the REFS_OK flag so the program can continue...
I have isolated the problem. There is no need to use np.nditer. The main problem was that I misinterpreted how Python reads iterator variables in a for loop. The corrected code is below.
import sys
import string
import fileinput
import numpy as np
import pandas as pd

SNP_df = pd.read_csv('datafile.txt', sep='\t', index_col=None, header=None, nrows=5000)
output = open('outputFile.fa', 'a')

for i in range(1, 51):
    data = SNP_df[i]
    data = np.array(data)
    for j in range(0, 1):
        output.write(("\n>%s\n") % (str(data[j])))
    for k in range(1, len(data)):
        output.write(str(data[k]))
If you really want to enable the flag, I have a working example.
Python 2.7, numpy 1.14.2, pandas 0.22.0
import pandas as pd
import numpy as np

# get all data as a pandas DataFrame
data = pd.read_csv("./monthdata.csv")
print(data)

# get values as a numpy array
data_ar = data.values  # numpy.ndarray, every element is a row

for row in data_ar:
    print(row)
    sum = 0
    count = 0
    for month in np.nditer(row, flags=["refs_ok"], op_flags=["readwrite"]):
        print month
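As a side note (my addition, not part of the original answer): refs_ok is only needed when the iterated array holds Python object references, which is typically what a DataFrame with mixed column types gives you via .values. A minimal illustration:
import numpy as np

obj_arr = np.array([1, "a", 3.5], dtype=object)      # object dtype stores references
for item in np.nditer(obj_arr, flags=["refs_ok"]):   # without refs_ok this raises TypeError
    print(item)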
I want to speed up matplotlib.savefig() for many figures with the multiprocessing module, and I am trying to benchmark the performance of parallel versus sequential execution.
Below is the code:
# -*- coding: utf-8 -*-
"""
Compare the time of matplotlib savefig() in parallel and sequence
"""
import numpy as np
import matplotlib.pyplot as plt
import multiprocessing
import time
def gen_fig_list(n):
    ''' generate a list containing n demo scatter figure objects '''
    plt.ioff()
    fig_list = []
    for i in range(n):
        plt.figure()
        dt = np.random.randn(5, 4)
        fig = plt.scatter(dt[:, 0], dt[:, 1], s=abs(dt[:, 2] * 1000), c=abs(dt[:, 3] * 100)).get_figure()
        fig.FM_figname = "img" + str(i)
        fig_list.append(fig)
    plt.ion()
    return fig_list

def savefig_worker(fig, img_type, folder):
    file_name = folder + "\\" + fig.FM_figname + "." + img_type
    fig.savefig(file_name, format=img_type, dpi=fig.dpi)
    return file_name

def parallel_savefig(fig_list, folder):
    proclist = []
    for fig in fig_list:
        print fig.FM_figname,
        p = multiprocessing.Process(target=savefig_worker, args=(fig, 'png', folder))  # causes the error
        proclist.append(p)
        p.start()
    for i in proclist:
        i.join()

if __name__ == '__main__':
    folder_1, folder_2 = 'Z:\\A1', 'Z:\\A2'
    fig_list = gen_fig_list(10)
    t1 = time.time()
    parallel_savefig(fig_list, folder_1)
    t2 = time.time()
    print '\nMultiprocessing time : %0.3f' % (t2 - t1)
    t3 = time.time()
    for fig in fig_list:
        savefig_worker(fig, 'png', folder_2)
    t4 = time.time()
    print 'Non_Multiprocessing time: %0.3f' % (t4 - t3)
And I get the error "This application has requested the Runtime to terminate it in an unusual way. Please contact the application's support team for more information.", caused by p = multiprocessing.Process(target=savefig_worker, args=(fig, 'png', folder)).
Why? And how do I solve it?
(Windows XP + Python 2.6.1 + Numpy 1.6.2 + Matplotlib 1.2.0)
EDIT (added error message on Python 2.7.3):
When run in IDLE with Python 2.7.3, it gives the error message below:
>>>
img0
Traceback (most recent call last):
File "C:\Documents and Settings\Administrator\desktop\mulsavefig_pilot.py", line 61, in <module>
proc.start()
File "d:\Python27\lib\multiprocessing\process.py", line 130, in start
File "d:\Python27\lib\pickle.py", line 286, in save
f(self, obj) # Call unbound method with explicit self
File "d:\Python27\lib\pickle.py", line 748, in save_global
(obj, module, name))
PicklingError: Can't pickle <function notify_axes_change at 0x029F5030>: it's not found as matplotlib.backends.backend_qt4.notify_axes_change
EDIT (my solution demo):
Inspired by Matplotlib: simultaneous plotting in multiple threads.
# -*- coding: utf-8 -*-
"""
Compare the time of matplotlib savefig() in parallel and sequence
"""
import numpy as np
import matplotlib.pyplot as plt
import multiprocessing
import time
def gen_data(fig_qty, bubble_qty):
    ''' generate data for fig drawing '''
    dt = np.random.randn(fig_qty, bubble_qty, 4)
    return dt

def parallel_savefig(draw_data, folder):
    ''' prepare data and pass it to the workers '''
    pool = multiprocessing.Pool()
    fig_qty = len(draw_data)
    fig_para = zip(range(fig_qty), draw_data, [folder] * fig_qty)
    pool.map(fig_draw_save_worker, fig_para)
    return None

def fig_draw_save_worker(args):
    seq, dt, folder = args
    plt.figure()
    fig = plt.scatter(dt[:, 0], dt[:, 1], s=abs(dt[:, 2] * 1000), c=abs(dt[:, 3] * 100), alpha=0.7).get_figure()
    plt.title('Plot of a scatter of %i' % seq)
    fig.savefig(folder + "\\" + 'fig_%02i.png' % seq)
    plt.close()
    return None

if __name__ == '__main__':
    folder_1, folder_2 = 'A1', 'A2'
    fig_qty, bubble_qty = 500, 100
    draw_data = gen_data(fig_qty, bubble_qty)
    print 'Multiprocessing ... ',
    t1 = time.time()
    parallel_savefig(draw_data, folder_1)
    t2 = time.time()
    print 'Time : %0.3f' % (t2 - t1)
    print 'Non_Multiprocessing .. ',
    t3 = time.time()
    for para in zip(range(fig_qty), draw_data, [folder_2] * fig_qty):
        fig_draw_save_worker(para)
    t4 = time.time()
    print 'Time : %0.3f' % (t4 - t3)
    print 'Speed Up: %0.1fx' % ((t4 - t3) / (t2 - t1))
You can try to move all of the matplotlib code (including the import) into a function.
Make sure you don't have an import matplotlib or import matplotlib.pyplot as plt at the top of your code.
Create a function that does all the matplotlib work, including the import.
Example:
import numpy as np
from multiprocessing import Pool

def graphing_function(graph_data):
    import matplotlib.pyplot as plt   # import inside the worker, not at module level
    plt.figure()
    plt.hist(graph_data.data)
    plt.savefig(graph_data.filename)
    plt.close()
    return

pool = Pool(4)
pool.map(graphing_function, data_list)   # data_list: your own list of data objects
It is not really a bug, per se, more of a limitation.
The explanation is in the last line of your error message:
PicklingError: Can't pickle <function notify_axes_change at 0x029F5030>: it's not found as matplotlib.backends.backend_qt4.notify_axes_change
It is telling you that elements of the figure objects cannot be pickled, which is how multiprocessing passes data between processes. The objects are pickled in the main process, shipped as pickles, and then reconstructed on the other side. Even if you fixed this exact issue (maybe by using a different backend, or stripping off the offending function, which might break things in other ways), I am pretty sure there are core parts of Figure, Axes, or Canvas objects that cannot be pickled.
As @bigbug points out, an example of how to get around this limitation is Matplotlib: simultaneous plotting in multiple threads. The basic idea is to push your entire plotting routine off to the sub-process, so you only pass numpy arrays and maybe some configuration information across the process boundary.