Concurrently read an HDF5 file in Pandas

I have a data.h5 file organised in multiple chunks, the entire file being several hundred gigabytes. I need to work with a filtered subset of the file in memory, in the form of a Pandas DataFrame.
The goal of the following routine is to distribute the filtering work across several processes, then concatenate the filtered results into the final DataFrame.
Since reading from the file takes a significant amount of time, I'm trying to make each process read its own chunk in a concurrent manner as well.
import multiprocessing as mp, pandas as pd

store = pd.HDFStore('data.h5')
min_dset, max_dset = 0, len(store.keys()) - 1
dset_list = list(range(min_dset, max_dset))
frames = []

def read_and_return_subset(dset):
    # each process is intended to read its own chunk in a concurrent manner
    chunk = store.select('batch_{:03}'.format(dset))
    # and then process the chunk, do the filtering, and return the result
    output = chunk[chunk.some_condition == True]
    return output

with mp.Pool(processes=32) as pool:
    for frame in pool.map(read_and_return_subset, dset_list):
        frames.append(frame)

df = pd.concat(frames)
However, the above code triggers this error:
HDF5ExtError Traceback (most recent call last)
<ipython-input-174-867671c5a58f> in <module>()
53
54 with mp.Pool(processes=32) as pool:
---> 55 for frame in pool.map(read_and_return_subset, dset_list):
56 frames.append(frame)
57
/usr/lib/python3.5/multiprocessing/pool.py in map(self, func, iterable, chunksize)
258 in a list that is returned.
259 '''
--> 260 return self._map_async(func, iterable, mapstar, chunksize).get()
261
262 def starmap(self, func, iterable, chunksize=None):
/usr/lib/python3.5/multiprocessing/pool.py in get(self, timeout)
606 return self._value
607 else:
--> 608 raise self._value
609
610 def _set(self, i, obj):
HDF5ExtError: HDF5 error back trace
File "H5Dio.c", line 173, in H5Dread
can't read data
File "H5Dio.c", line 554, in H5D__read
can't read data
File "H5Dchunk.c", line 1856, in H5D__chunk_read
error looking up chunk address
File "H5Dchunk.c", line 2441, in H5D__chunk_lookup
can't query chunk address
File "H5Dbtree.c", line 998, in H5D__btree_idx_get_addr
can't get chunk info
File "H5B.c", line 340, in H5B_find
unable to load B-tree node
File "H5AC.c", line 1262, in H5AC_protect
H5C_protect() failed.
File "H5C.c", line 3574, in H5C_protect
can't load entry
File "H5C.c", line 7954, in H5C_load_entry
unable to load entry
File "H5Bcache.c", line 143, in H5B__load
wrong B-tree signature
End of HDF5 error back trace
Problems reading the array data.
It seems that Pandas/PyTables has trouble accessing the same file concurrently, even if it's only for reading.
Is there a way to make each process read its own chunk concurrently?

IIUC, you can index the columns that are used for filtering the data (chunk.some_condition == True in your sample code) and then read only the subset of data that satisfies the needed conditions.
In order to do that you need to:
save the HDF5 file in table format - use the parameter format='table'
index the columns that will be used for filtering - use the parameter data_columns=['col_name1', 'col_name2', etc.]
After that you should be able to filter your data just by reading:
store = pd.HDFStore(filename)
df = store.select('key_name', where="col1 in [11,13] & col2 == 'AAA'")
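For completeness, here is a minimal, self-contained sketch of both sides of that workflow; the file name example.h5, the key batch_000 and the columns col1/col2 are hypothetical, chosen only to echo the select example above:
import pandas as pd

# Hypothetical data; col1/col2 stand in for whatever columns you filter on.
df = pd.DataFrame({'col1': [11, 12, 13], 'col2': ['AAA', 'BBB', 'AAA']})

# Write in table format and declare the filter columns as data_columns,
# so PyTables indexes them and can evaluate `where` clauses on disk.
with pd.HDFStore('example.h5') as store:
    store.put('batch_000', df, format='table', data_columns=['col1', 'col2'])

# Read back only the rows that satisfy the condition, instead of loading
# the whole chunk into memory and filtering afterwards.
with pd.HDFStore('example.h5') as store:
    subset = store.select('batch_000', where="col1 in [11, 13] & col2 == 'AAA'")
print(subset)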

Related

Python Script that used to work, is now getting automatically killed in Ubuntu

I was once able to run the Python script below on my Ubuntu machine without the memory errors I was getting on Windows.
import pandas as pd
import numpy as np

# create a pandas dataframe for each input file
dfs1 = pd.read_csv('s1.csv', encoding='utf-8', names=list(range(0, 107)), dtype='string', na_filter=False)
dfs2 = pd.read_csv('s2.csv', encoding='utf-8', names=list(range(0, 107)), dtype='string', na_filter=False)
dfr = pd.read_csv('r.csv', encoding='utf-8', names=list(range(0, 107)), dtype='string', na_filter=False)

# combine them into one dataframe
dfs12r = pd.concat([dfs1, dfs2, dfr], ignore_index=True)  # without ignore_index the line numbers are not adjusted

# bag of words (bow) is coming
wordlist = []
for line in range(8052):
    for row in range(106):
        # print(line, row, dfs12r[row][line])
        if dfs12r[row][line] not in wordlist:
            wordlist.append(dfs12r[row][line])

wordlist.sort()
# print(wordlist)
print(len(wordlist))  # 12350

dfBOW = pd.DataFrame(np.zeros((len(dfs12r.index), len(wordlist))), dtype='int')

# create the dictionary mapping each word to its column index
wordDict = dict.fromkeys(wordlist, 'default')
counter = 0
for word in wordlist:
    wordDict[word] = counter
    counter += 1
# print(wordDict)

# scan every word from dfs12r and +1 the respective cell in dfBOW
for line in range(8052):
    for row in range(107):
        dfBOW[wordDict[dfs12r[row][line]]][line] += 1
Unfortunately, probably after some automatic Ubuntu updates, I am now getting the simple message "Killed" when trying to run the script, without any further explanation.
Through simple print statements I know that the script is interrupted inside the final for loop.
I understand that I should be able to make the script more memory efficient, but I am also hoping for guidance on how to get Ubuntu to run the same script again like it used to. (Through the top command I can see that all of my memory, including swap, is being used while inside this loop.)
Could paging have been disabled somehow after the updates? Any advice is welcome.
I still have 16 GB of RAM and use Ubuntu 20.04 (the specs are the same before and after the script stopped working). I use dual boot on the same SSD.
Below is the error I am getting from the same script on Windows:
Traceback (most recent call last):
File "D:\sharedfiles\Organised\WorkSpace\ptixiaki\github\ptixiaki\code\makingthedata\2.1 Approach (Same as 2 but turning all words to lowercase)\2.1_CSVtoDataframe\CSVtoBOW.py", line 60, in <module>
dfBOW[wordDict[dfs12r[row][line]]][line]+=1
File "D:\wandowsoftware\anaconda\envs\ptixiaki\lib\site-packages\pandas\core\series.py", line 1143, in __setitem__
self._maybe_update_cacher()
File "D:\wandowsoftware\anaconda\envs\ptixiaki\lib\site-packages\pandas\core\series.py", line 1279, in _maybe_update_cacher
ref._maybe_cache_changed(cacher[0], self, inplace=inplace)
File "D:\wandowsoftware\anaconda\envs\ptixiaki\lib\site-packages\pandas\core\frame.py", line 3950, in _maybe_cache_changed
self._mgr.iset(loc, arraylike, inplace=inplace)
File "D:\wandowsoftware\anaconda\envs\ptixiaki\lib\site-packages\pandas\core\internals\managers.py", line 1141, in iset
blk.delete(blk_locs)
File "D:\wandowsoftware\anaconda\envs\ptixiaki\lib\site-packages\pandas\core\internals\blocks.py", line 388, in delete
self.values = np.delete(self.values, loc, 0) # type: ignore[arg-type]
File "<__array_function__ internals>", line 5, in delete
File "D:\wandowsoftware\anaconda\envs\ptixiaki\lib\site-packages\numpy\lib\function_base.py", line 4555, in delete
new = arr[tuple(slobj)]
MemoryError: Unable to allocate 501. MiB for an array with shape (12234, 10736) and data type int32
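As an aside, the Windows traceback above points at the chained assignment dfBOW[col][line] += 1, which makes pandas copy whole internal blocks on every update. A minimal sketch of a lighter counting loop, assuming the same dfs12r, wordlist and wordDict as in the question, is to accumulate counts in a plain NumPy array and only wrap it in a DataFrame at the end:
import numpy as np
import pandas as pd

# Accumulate counts in a NumPy int32 array; in-place increments on it
# never trigger pandas' internal block copying.
bow = np.zeros((len(dfs12r.index), len(wordlist)), dtype='int32')

for line in range(len(dfs12r.index)):
    for row in range(107):
        bow[line, wordDict[dfs12r[row][line]]] += 1

# Build the DataFrame only once, from the finished matrix.
dfBOW = pd.DataFrame(bow)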

geopandas cannot read a geojson properly

I am trying the following:
After downloading http://eric.clst.org/assets/wiki/uploads/Stuff/gz_2010_us_050_00_20m.json
In [2]: import geopandas
In [3]: geopandas.read_file('./gz_2010_us_050_00_20m.json')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-83a1d4a0fc1f> in <module>
----> 1 geopandas.read_file('./gz_2010_us_050_00_20m.json')
~/miniconda3/envs/ml3/lib/python3.6/site-packages/geopandas/io/file.py in read_file(filename, **kwargs)
24 else:
25 f_filt = f
---> 26 gdf = GeoDataFrame.from_features(f_filt, crs=crs)
27
28 # re-order with column order from metadata, with geometry last
~/miniconda3/envs/ml3/lib/python3.6/site-packages/geopandas/geodataframe.py in from_features(cls, features, crs)
207
208 rows = []
--> 209 for f in features_lst:
210 if hasattr(f, "__geo_interface__"):
211 f = f.__geo_interface__
fiona/ogrext.pyx in fiona.ogrext.Iterator.__next__()
fiona/ogrext.pyx in fiona.ogrext.FeatureBuilder.build()
TypeError: startswith first arg must be bytes or a tuple of bytes, not str
On the page http://eric.clst.org/tech/usgeojson/ with 4 GeoJSON files under the 20m column, the above file corresponds to the US Counties row, and it is the only one of the 4 that cannot be read. The error message isn't very informative; what could the reason be?
If your error message looks anything like "Polygons and MultiPolygons should follow the right-hand rule", it means the coordinate order of those geometries does not follow the right-hand rule: exterior rings must be counter-clockwise and holes clockwise.
Here's an online tool to "fix" your objects, with a short explanation:
https://mapster.me/right-hand-rule-geojson-fixer/
Possibly an answer for people arriving at this page: I received the same error, and it was thrown due to encoding issues.
Try re-encoding the initial file as UTF-8, or try opening the file with the encoding you think was applied to it. This fixed my error.
More info here
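A minimal sketch of that encoding workaround, re-saving the file as UTF-8 before reading it; the latin-1 guess below is only a placeholder for whatever encoding you believe the file was actually written with:
import geopandas

# Read with the suspected source encoding and rewrite as UTF-8.
with open('gz_2010_us_050_00_20m.json', encoding='latin-1') as src:
    content = src.read()

with open('gz_2010_us_050_00_20m_utf8.json', 'w', encoding='utf-8') as dst:
    dst.write(content)

# The re-encoded copy should now load cleanly.
gdf = geopandas.read_file('gz_2010_us_050_00_20m_utf8.json')
print(gdf.head())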

Convert Pandas DataFrame to bytes-like object

Hi, I am trying to convert my df to a bytes-like object and store it in a variable.
my_df:
df = pd.DataFrame({'A':[1,2,3],'B':[4,5,6]})
my code:
import io
towrite = io.BytesIO()
df.to_excel(towrite) # write to BytesIO buffer
towrite.seek(0) # reset pointer
I am getting AttributeError: '_io.BytesIO' object has no attribute 'write_cells'
Full Traceback:
AttributeError Traceback (most recent call last)
<ipython-input-25-be6ee9d9ede6> in <module>()
1 towrite = io.BytesIO()
----> 2 df.to_excel(towrite) # write to BytesIO buffer
3 towrite.seek(0) # reset pointer
4 encoded = base64.b64encode(towrite.read()) #
C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\frame.py in to_excel(self, excel_writer, sheet_name, na_rep, float_format, columns, header, index, index_label, startrow, startcol, engine, merge_cells, encoding, inf_rep, verbose, freeze_panes)
1422 formatter.write(excel_writer, sheet_name=sheet_name, startrow=startrow,
1423 startcol=startcol, freeze_panes=freeze_panes,
-> 1424 engine=engine)
1425
1426 def to_stata(self, fname, convert_dates=None, write_index=True,
C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\formats\excel.py in write(self, writer, sheet_name, startrow, startcol, freeze_panes, engine)
624
625 formatted_cells = self.get_formatted_cells()
--> 626 writer.write_cells(formatted_cells, sheet_name,
627 startrow=startrow, startcol=startcol,
628 freeze_panes=freeze_panes)
AttributeError: '_io.BytesIO' object has no attribute 'write_cells'
I solved the issue by upgrading pandas to a newer version.
import io
towrite = io.BytesIO()
df.to_excel(towrite) # write to BytesIO buffer
towrite.seek(0)
print(towrite)
b''
print(type(towrite))
_io.BytesIO
if you want to see the bytes-like object use getvalue,
print(towrite.getvalue())
b'PK\x03\x04\x14\x00\x00\x00\x08\x00\x00\x00!\x00<\xb
Pickle
Pickle is a reproducible format for a Pandas DataFrame, but it's only for internal use among trusted users; it's not for sharing with untrusted users, for security reasons.
import pickle
# Export:
my_bytes = pickle.dumps(df, protocol=4)
# Import:
df_restored = pickle.loads(my_bytes)
This was tested with Pandas 1.1.2. Unfortunately it failed for a very large dataframe; what worked then was pickling and parallel-compressing each column individually, followed by pickling that list of compressed columns. Alternatively, you can pickle chunks of the large dataframe.
CSV
If you must use a CSV representation:
df.to_csv(index=False).encode()
Note that various datatypes are lost when using CSV.
Parquet
See this answer. Note that various datatypes are converted when using parquet.
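For reference, a minimal sketch of the Parquet route, assuming a recent pandas (where to_parquet returns bytes when no path is given) and an installed Parquet engine such as pyarrow:
import io
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})

# With no path given, to_parquet returns the serialized bytes directly.
parquet_bytes = df.to_parquet(index=False)

# Round-trip back to a DataFrame from the bytes.
df_restored = pd.read_parquet(io.BytesIO(parquet_bytes))
print(df_restored)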
Excel
Avoid its use for the most part because it limits the max number of rows and columns.
I needed to upload the file object to S3 via boto3, which didn't accept the pandas bytes object. So, building on the answer from Asclepius, I wrapped the bytes in a BytesIO, e.g.:
from io import BytesIO
data = BytesIO(df.to_csv(index=False).encode('utf-8'))
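The subsequent upload then looks roughly like this (a sketch, assuming the same df as above; the bucket and key names are placeholders):
import boto3
from io import BytesIO

s3 = boto3.client('s3')
data = BytesIO(df.to_csv(index=False).encode('utf-8'))

# upload_fileobj expects a binary file-like object,
# which is exactly what the BytesIO wrapper provides.
s3.upload_fileobj(data, 'my-bucket', 'exports/df.csv')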

Calling scheduler.multiprocessing.get in a separate process in dask

I am training a neural network with a large text corpus. Each text generates quite a big matrix because I'm using a convolutional model. As my data won't fit in my (still large) memory, I try to stream it and use fit_generator from keras.models.
To feed Keras, I have a pipeline composed of different preprocessing steps, which I arrange in a dask bag with lots of partitions. The dask bag reads a file on disk.
Even though dask does not handle iteration in a smart way (it just calls compute() and iterates on the result, which in my case blows up memory), I wanted to use something like this:
def compute_partition_iter(collection, **kwargs):
    """A utility to compute a collection item after item, one partition at a time."""
    get = kwargs.pop("get", None) or _globals['get']
    if get is None:
        get = collection.__dask_scheduler__
    postcompute_func, postcompute_args = collection.__dask_postcompute__()
    dsk = collection.__dask_graph__()
    for key in collection.__dask_keys__():
        # compute one partition, then yield its items
        partition = get(dsk, key, **kwargs)
        yield from postcompute_func([partition], *postcompute_args)
This computes partitions one by one and returns their items, computing the next partition as we cross a partition border.
This approach has a problem: only when we hit the last item of a partition do we trigger the computation of the next one, leading to a lag before the next element is available. During this lag, Keras is stalled and we lose precious time!
So I imagined running the above compute_partition_iter in a separate process thanks to multiprocessing.Pool, feeding partitions into a Queue with, say, 2 slots, so that the generator always has one more partition ready.
But it seems that this is not supported by dask.bag. I didn't dive deeply enough into the code, but it seems like there are some async methods used, or I don't know what.
Here is reproducible code for the problem.
First, code that works, using a simple range:
import multiprocessing
import time

def put_q(n, q):
    for i in range(n):
        print(i, "<-")
        q.put(i)
    q.put(None)

q = multiprocessing.Queue(2)
with multiprocessing.Pool(1, put_q, (4, q)) as pool:
    i = True
    while i is not None:
        print("zzz")
        time.sleep(.5)
        i = q.get()
        if i is None:
            break
        print("-> ", i)
This outputs
0 <-
1 <-
2 <-
zzz
3 <-
-> 0
zzz
-> 1
zzz
-> 2
zzz
-> 3
zzz
You can see that, as expected, the elements were computed ahead of time and everything works fine.
Now let's replace the range with a dask.bag:
import multiprocessing
import time
import dask.bag

def put_q(n, q):
    for i in dask.bag.from_sequence(range(n), npartitions=2):
        print(i, "<-")
        q.put(i)
    q.put(None)

q = multiprocessing.Queue(5)
with multiprocessing.Pool(1, put_q, (4, q)) as pool:
    i = True
    while i is not None:
        print("zzz")
        time.sleep(.5)
        i = q.get()
        if i is None:
            break
        print("-> ", i)
In a Jupyter notebook, it raises the following indefinitely:
Process ForkPoolWorker-71:
Traceback (most recent call last):
File "/usr/lib/python3.5/multiprocessing/process.py", line 249, in _bootstrap
self.run()
File "/usr/lib/python3.5/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.5/multiprocessing/pool.py", line 103, in worker
initializer(*initargs)
File "<ipython-input-3-e1e9ef9354a0>", line 8, in put_q
for i in dask.bag.from_sequence(range(n), npartitions=2):
File "/usr/local/lib/python3.5/dist-packages/dask/bag/core.py", line 1190, in __iter__
return iter(self.compute())
File "/usr/local/lib/python3.5/dist-packages/dask/base.py", line 154, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/dask/base.py", line 407, in compute
results = get(dsk, keys, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/dask/multiprocessing.py", line 152, in get
initializer=initialize_worker_process)
File "/usr/lib/python3.5/multiprocessing/context.py", line 118, in Pool
context=self.get_context())
File "/usr/lib/python3.5/multiprocessing/pool.py", line 168, in __init__
self._repopulate_pool()
File "/usr/lib/python3.5/multiprocessing/pool.py", line 233, in _repopulate_pool
w.start()
File "/usr/lib/python3.5/multiprocessing/process.py", line 103, in start
'daemonic processes are not allowed to have children'
AssertionError: daemonic processes are not allowed to have children
while the main process is stalled, waiting for elements in the queue.
I also tried using an ipyparallel cluster, but in that case the main process is simply stalled (no trace of the exception).
Does anyone know the right way to do that?
Is there a way I can run scheduler.get in parallel to my main code?
In the end, I should have taken a closer look at the exception!
Stack Overflow gave me the solution: Python Process Pool non-daemonic?
In fact, as the bag scheduler uses Pool, it can't be called inside a process spawned by a Pool. The solution in my case was to simply use threads. (Note that the bug and its solution depend on the scheduler you use.)
So I substituted multiprocessing.pool.ThreadPool for multiprocessing.Pool and it works like a charm, either in a normal notebook or when using ipyparallel.
So it goes like this:
import queue
from multiprocessing.pool import ThreadPool
import time
import dask.bag

def put_q(n, q):
    b = dask.bag.from_sequence(range(n), npartitions=3)
    for i in b:
        print(i, "<-")
        q.put(i)
    q.put(None)

q = queue.Queue(2)
with ThreadPool(1, put_q, (6, q)) as pool:
    i = True
    while i is not None:
        print("zzz")
        time.sleep(.5)
        i = q.get()
        if i is None:
            break
        print("-> ", i)
Which outputs:
zzz
0 <-
1 <-
2 <-
-> 0
zzz
3 <-
-> 1
zzz
4 <-
-> 5 <-
2
zzz
-> 3
zzz
-> 4
zzz
-> 5
zzz

Dask array from_npy_stack misses info file

Action
Trying to create a Dask array from a stack of .npy files not written by Dask.
Problem
Dask's from_npy_stack() expects an info file, which is normally created by the to_npy_stack() function when the .npy stack is written with Dask.
Attempts
I found this PR (https://github.com/dask/dask/pull/686) with a description of how the info file is created:
def to_npy_info(dirname, dtype, chunks, axis):
    with open(os.path.join(dirname, 'info'), 'wb') as f:
        pickle.dump({'chunks': chunks, 'dtype': dtype, 'axis': axis}, f)
Question
How do I go about loading .npy stacks that are created outside of Dask?
Example
from pathlib import Path
import numpy as np
import dask.array as da

data_dir = Path('/home/tom/data/')
for i in range(3):
    data = np.zeros((2, 2))
    np.save(data_dir.joinpath('{}.npy'.format(i)), data)

data = da.from_npy_stack('/home/tom/data')
Resulting in the following error:
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
<ipython-input-94-54315c368240> in <module>()
9 np.save(data_dir.joinpath('{}.npy'.format(i)), data)
10
---> 11 data = da.from_npy_stack('/home/tom/data/')
/home/tom/vue/env/local/lib/python2.7/site-packages/dask/array/core.pyc in from_npy_stack(dirname, mmap_mode)
3722 Read data in memory map mode
3723 """
-> 3724 with open(os.path.join(dirname, 'info'), 'rb') as f:
3725 info = pickle.load(f)
3726
IOError: [Errno 2] No such file or directory: '/home/tom/data/info'
The function from_npy_stack is short and simple. I agree that it probably ought to take the metadata as an optional argument for cases such as yours, but you could simply reuse the lines of code that follow the loading of the "info" file, assuming you have the right values to hand. Some of these values, i.e., the dtype and the shape of each array for making chunks, could presumably be obtained by looking at the first of the data files:
name = 'from-npy-stack-%s' % dirname
keys = list(product([name], *[range(len(c)) for c in chunks]))
values = [(np.load, os.path.join(dirname, '%d.npy' % i), mmap_mode)
          for i in range(len(chunks[axis]))]
dsk = dict(zip(keys, values))
out = Array(dsk, name, chunks, dtype)
Also, note that we are constructing the names of the files here, but you might want to get those by doing a listdir or glob.
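Putting those pieces together, here is a minimal sketch of a helper that rebuilds the missing metadata by inspecting the files themselves; from_npy_stack_without_info is a hypothetical name, and it assumes the files are named 0.npy, 1.npy, ..., all share the same shape and dtype, and were stacked along axis 0:
import os
from glob import glob
from itertools import product

import numpy as np
import dask.array as da


def from_npy_stack_without_info(dirname, mmap_mode=None):
    """Load a directory of numbered .npy files as a dask array without an 'info' file."""
    filenames = sorted(
        glob(os.path.join(dirname, '*.npy')),
        key=lambda p: int(os.path.splitext(os.path.basename(p))[0]),
    )
    # Infer dtype and per-file shape from the first file; we assume all
    # files match it and that the stack axis is 0.
    first = np.load(filenames[0], mmap_mode=mmap_mode)
    dtype = first.dtype
    chunks = ((first.shape[0],) * len(filenames),) + tuple((s,) for s in first.shape[1:])

    # Same graph construction as dask.array.from_npy_stack, minus the info file.
    name = 'from-npy-stack-%s' % dirname
    keys = list(product([name], *[range(len(c)) for c in chunks]))
    values = [(np.load, fn, mmap_mode) for fn in filenames]
    dsk = dict(zip(keys, values))
    return da.Array(dsk, name, chunks, dtype)


arr = from_npy_stack_without_info('/home/tom/data')
print(arr.shape)  # (6, 2) for the three 2x2 arrays saved in the example above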