Groupby and multiindexes - how to organize data with irregular sizes?

I am trying to organize 3D data collected from several participants with a different number of samples for each participant. Each participant has a unique session and seat index in the experiment. For each participant i, I have a 3D array composed of Ni images (height*width).
I first tried creating a Dataset of participants, but I ended up with many NaNs because participants have different numbers of samples along the same dimension (the sample dim). I then switched to a single DataArray containing all my participants' data concatenated along one dimension I call depth. This dimension is then associated with a multi-index coordinate combining the session, seat and sample coordinates:
<xarray.DataArray (depth: 52, height: 4, width: 4)>
array([[[0.92337111, 0.86505447, 0.08541727, 0.74850848],
[0.02336959, 0.0495726 , 0.98745956, 0.58831929],
[0.62128185, 0.7732787 , 0.27716268, 0.83634779],
[0.08146719, 0.35851012, 0.44170263, 0.74338872]],
...
[[0.4365896 , 0.23527988, 0.86891853, 0.94486637],
[0.20884748, 0.81012315, 0.61542411, 0.76706922],
[0.33391262, 0.88955315, 0.25329999, 0.35803887],
[0.49586615, 0.94767265, 0.40868892, 0.42393425]]])
Coordinates:
* height (height) int64 0 1 2 3
* width (width) int64 0 1 2 3
* depth (depth) MultiIndex
- session (depth) int64 0 0 0 0 0 0 0 0 0 0 0 1 1 ... 3 3 3 3 3 3 3 3 3 3 3 3
- seat (depth) int64 0 0 0 0 0 1 1 1 1 1 1 0 0 ... 0 0 0 0 0 1 1 1 1 1 1 1
- sample (depth) int64 0 1 2 3 4 0 1 2 3 4 5 0 1 ... 1 2 3 4 5 0 1 2 3 4 5 6
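For reference, here is roughly how such an array can be built (a minimal sketch with made-up sizes, not the actual experiment code; recent xarray versions may prefer a dedicated constructor for MultiIndex coordinates, but passing a pandas MultiIndex as a coordinate works):
import numpy as np
import pandas as pd
import xarray as xr

# hypothetical number of samples per (session, seat) pair
counts = {(0, 0): 5, (0, 1): 6, (1, 0): 7}

blocks, labels = [], []
for (session, seat), n in counts.items():
    blocks.append(np.random.rand(n, 4, 4))                      # n images of height x width
    labels += [(session, seat, sample) for sample in range(n)]

depth = pd.MultiIndex.from_tuples(labels, names=['session', 'seat', 'sample'])
da = xr.DataArray(np.concatenate(blocks, axis=0),
                  dims=['depth', 'height', 'width'],
                  coords={'depth': depth,
                          'height': np.arange(4),
                          'width': np.arange(4)})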
However, I find this solution not really usable for several reasons:
each time I want to perform a groupby I have to reset the index and recreate one with the coordinates I want to group by, since xarray does not support grouping by multiple coordinates on the same dim:
da = da.reset_index('depth')
da = da.set_index(depth=['session', 'seat'])
da.groupby('depth').mean()
the result of the code above is not perfect, as it does not preserve the MultiIndex level names:
<xarray.DataArray (depth: 8, height: 4, width: 4)>
array([[[0.47795382, 0.67322777, 0.12946181, 0.48983815],
[0.33895882, 0.46772217, 0.62886196, 0.55970122],
[0.57370573, 0.47272117, 0.31529004, 0.63230245],
[0.63230284, 0.5352105 , 0.65805407, 0.65274841]],
...
[[0.55672404, 0.37963945, 0.57334768, 0.64853806],
[0.46608072, 0.39506509, 0.66339553, 0.71447367],
[0.58989461, 0.66066485, 0.53271228, 0.43036214],
[0.44163921, 0.54990042, 0.4229631 , 0.5941268 ]]])
Coordinates:
* height (height) int64 0 1 2 3
* width (width) int64 0 1 2 3
* depth (depth) MultiIndex
- depth_level_0 (depth) int64 0 0 1 1 2 2 3 3
- depth_level_1 (depth) int64 0 1 0 1 0 1 0 1
I can use sel only on fully indexed data (i.e. with session, seat and sample all in the depth index), so I end up re-indexing my data again and again (see the sketch below).
I find using hvplot on such a DataArray not really straightforward (I'm skipping the details here to keep this already long post readable).
Is there something I am missing? Is there a better way to organize my data? I tried to create multiple indexes on the same dim for convenience, but without success.
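For concreteness, this is the kind of re-indexing dance described above (a sketch continuing from the array built earlier; selecting on MultiIndex levels by name is assumed to behave as in recent xarray versions):
da = da.reset_index('depth')
da = da.set_index(depth=['session', 'seat', 'sample'])
one_participant = da.sel(session=0, seat=1)          # sel needs the full MultiIndex in place
da = da.reset_index('depth')
da = da.set_index(depth=['session', 'seat'])
per_participant_mean = da.groupby('depth').mean()    # grouping needs a different index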

Related

pandas: idxmax for k-th largest

Having a df of probability distributions, I get the max probability per row with df.idxmax(axis=1) like this:
df['1k-th'] = df.idxmax(axis=1)
and get the following result:
(scroll the tables to the right if you can not see all the columns)
0 1 2 3 4 5 6 1k-th
0 0.114869 0.020708 0.025587 0.028741 0.031257 0.031619 0.747219 6
1 0.020206 0.012710 0.010341 0.012196 0.812495 0.113863 0.018190 4
2 0.023585 0.735475 0.091795 0.021683 0.027581 0.054217 0.045664 1
3 0.009834 0.009175 0.013165 0.016014 0.015507 0.899115 0.037190 5
4 0.023357 0.736059 0.088721 0.021626 0.027341 0.056289 0.046607 1
The question is how to get the 2nd, 3rd, etc. largest probabilities, so that I get the following result:
0 1 2 3 4 5 6 1k-th 2-th
0 0.114869 0.020708 0.025587 0.028741 0.031257 0.031619 0.747219 6 0
1 0.020206 0.012710 0.010341 0.012196 0.812495 0.113863 0.018190 4 3
2 0.023585 0.735475 0.091795 0.021683 0.027581 0.054217 0.045664 1 4
3 0.009834 0.009175 0.013165 0.016014 0.015507 0.899115 0.037190 5 4
4 0.023357 0.736059 0.088721 0.021626 0.027341 0.056289 0.046607 1 2
Thank you!
My own solution is not the prettiest, but it does its job and works fast:
for i in range(7):
    p[f'{i}k'] = p[[0,1,2,3,4,5,6]].idxmax(axis=1)
    p[f'{i}k_v'] = p[[0,1,2,3,4,5,6]].max(axis=1)
    for x in range(7):
        p[x] = np.where(p[x]==p[f'{i}k_v'], np.nan, p[x])
The loop:
finds the largest value and its column index
drops the found value (sets it to NaN)
then again:
finds the 2nd largest value
drops the found value
etc ...
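A vectorized alternative (a sketch, not part of the original solution; it assumes the probability columns are labeled 0 through 6 as above) is to argsort each row in descending order, so column k of the result holds the index of the (k+1)-th largest value:
import numpy as np

# order[:, k] is the column label of the (k+1)-th largest probability in each row
order = np.argsort(-p[[0, 1, 2, 3, 4, 5, 6]].values, axis=1)
p['1k-th'] = order[:, 0]
p['2-th'] = order[:, 1]   # and order[:, 2] for the 3rd largest, and so on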

How to split a column in a data frame containing only numbers into multiple columns in pandas

I have a .dat file containing the following data:
0001100000101010100
110101000001111
101100011001110111
0111111010100
1010111111100011
I need to count the number of zeros and ones in each row.
I have tried this with Pandas.
Step-1: Read the data file
Step-2: Give the column a name
Step-3: Tried to split the values into multiple columns, but could not succeed
df1 = pd.read_csv('data.dat', header=None)
df1.head()
0 1100000101010100
1 110101000001111
2 101100011001110111
3 111111010100
4 1010111111100011
df1.columns=['kirti']
df1.head()
Kirti
_______________________
0 1100000101010100
1 110101000001111
2 101100011001110111
3 111111010100
4 1010111111100011
I need to split the data frame into multiple columns depending upon the 0s and 1s in each row.
The maximum number of columns will be equal to the maximum number of zeros and ones (i.e. digits) in any row of the data frame.
First create a one-column DataFrame, using the names parameter and dtype=str so the column is read as strings:
import pandas as pd
from io import StringIO
temp="""0001100000101010100
110101000001111
101100011001110111
0111111010100
1010111111100011"""
#after testing replace 'StringIO(temp)' with 'filename'
df = pd.read_csv(StringIO(temp), header=None, names=['kirti'], dtype=str)
print (df)
kirti
0 0001100000101010100
1 110101000001111
2 101100011001110111
3 0111111010100
4 1010111111100011
Then create a new DataFrame by converting the values to lists:
df = pd.DataFrame([list(x) for x in df['kirti']])
print (df)
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
0 0 0 0 1 1 0 0 0 0 0 1 0 1 0 1 0 1 0 0
1 1 1 0 1 0 1 0 0 0 0 0 1 1 1 1 None None None None
2 1 0 1 1 0 0 0 1 1 0 0 1 1 1 0 1 1 1 None
3 0 1 1 1 1 1 1 0 1 0 1 0 0 None None None None None None
4 1 0 1 0 1 1 1 1 1 1 1 0 0 0 1 1 None None None
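Once the values are split into one character per column, the counts the question asks for are simple row-wise aggregations (a sketch over the split frame above; the padding cells are None, so they match neither '0' nor '1'):
zeros = (df == '0').sum(axis=1)   # number of zeros per row
ones = (df == '1').sum(axis=1)    # number of ones per row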
If your data is in a list of strings, then use the count method:
>>> data = ["0001100000101010100", "110101000001111", "101100011001110111", "0111111010100", "1010111111100011"]
>>> for i in data:
...     print(i.count("0"))
13
7
7
5
5
If your data is in a .dat file with whitespace separation as you described, then I would recommend loading your data as follows:
data = pd.read_csv("data.dat", lineterminator=" ",dtype="str", header=None, names=["Kirti"])
Kirti
0 0001100000101010100
1 110101000001111
2 101100011001110111
3 0111111010100
4 1010111111100011
The lineterminator argument ensures that every entry is in a new row. The dtype argument ensures that it is read as strings; otherwise you will lose the leading zeros.
If your data is in a DataFrame, you can use the str.count method (inspired by this answer):
>> data["Kirti"].str.count("0")
0 13
1 7
2 7
3 5
4 5
Name: Kirti, dtype: int64
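Counting the ones works the same way, or can be derived from the string length (sketch):
data["Kirti"].str.count("1")
# or equivalently
data["Kirti"].str.len() - data["Kirti"].str.count("0")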

rolling sum of a column in pandas dataframe at variable intervals

I have a list of index numbers that represent index locations for a DF. list_index = [2,7,12]
I want to sum from a single column in the DF by rolling through each number in list_index and totaling the counts between the index points (and restart count at 0 at each index point). Here is a mini example.
The desired output is in the OUTPUT column, which increments every time there is another 1 in COL 1 and RESTARTS the count at 0 at the location after each number in list_index.
I was able to get it to work with a loop but there are millions of rows in the DF and it takes a while for the loop to run. It seems like I need a lambda function with a sum but I need to input start and end point in index.
Something like lambda x: x.rolling(start_index, end_index).sum()? Can anyone help me out with this?
You can use a cumulative sum and keep only the information related to the 1 values; a rolling sum with varying intervals is not possible directly:
a = df['col'].eq(1).cumsum()
df['output'] = a - a.mask(df['col'].eq(1)).ffill().fillna(0).astype(int)
Out:
col output
0 0 0
1 1 1
2 1 2
3 0 0
4 1 1
5 1 2
6 1 3
7 0 0
8 0 0
9 0 0
10 0 0
11 1 1
12 1 2
13 0 0
14 0 0
15 1 1
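To restart the count at the positions given in list_index instead (a sketch, not part of the answer above; it assumes df has a 0/1 column named 'col' and a default RangeIndex), build a segment id that increments on the row after each listed index and take the cumulative sum within each segment:
import pandas as pd

list_index = [2, 7, 12]
# True on the row *after* each listed index; cumsum turns that into segment ids
breaks = pd.Series(df.index.isin([i + 1 for i in list_index]), index=df.index)
segments = breaks.cumsum()
df['output'] = df['col'].groupby(segments).cumsum()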

Python particles simulator: out-of-core processing

Problem description
I'm writing a Monte Carlo particle simulator (Brownian motion and photon emission) in Python/numpy. I need to save the simulation output (>>10GB) to a file and process the data in a second step. Compatibility with both Windows and Linux is important.
The number of particles (n_particles) is 10-100. The number of time-steps (time_size) is ~10^9.
The simulation has 3 steps (the code below is for an all-in-RAM version):
Simulate (and store) an emission rate array (contains many almost-0 elements):
shape (n_particles x time_size), float32, size 80GB
Compute the counts array (random values from a Poisson process with the previously computed rates):
shape (n_particles x time_size), uint8, size 20GB
counts = np.random.poisson(lam=emission).astype(np.uint8)
Find timestamps (or index) of counts. Counts are almost always 0, so the timestamp arrays will fit in RAM.
# Loop across the particles
timestamps = [np.nonzero(c) for c in counts]
I do step 1 once, then repeat steps 2-3 many (~100) times. In the future I may need to pre-process emission (apply cumsum or other functions) before computing counts.
Question
I have a working in-memory implementation and I'm trying to understand what is the best approach to implement an out-of-core version that can scale to (much) longer simulations.
What I would like to exist
I need to save arrays to a file, and I would like to use a single file per simulation. I also need a "simple" way to store and recall a dictionary of simulation parameters (scalars).
Ideally I would like a file-backed numpy array that I can preallocate and fill in chunks. Then, I would like the numpy array methods (max, cumsum, ...) to work transparently, requiring only a chunksize keyword to specify how much of the array to load at each iteration.
Even better, I would like a Numexpr that operates not between cache and RAM but between RAM and hard drive.
What are the practical options
As a first option, I started experimenting with pyTables, but I'm not happy with its complexity and abstractions (so different from numpy). Moreover, my current solution (read below) is UGLY and not very efficient.
So my options for which I seek an answer are
implement a numpy array with required functionality (how?)
use pytable in a smarter way (different data-structures/methods)
use another library: h5py, blaze, pandas... (I haven't tried any of them so far).
Tentative solution (pyTables)
I save the simulation parameters in the '/parameters' group: each parameter is converted to a numpy array scalar. It is a verbose solution but it works.
I save emission as an extensible array (EArray), because I generate the data in chunks and I need to append each new chunk (I know the final size though). Saving counts is more problematic. If I save it as a pytables array it's difficult to perform queries like "counts >= 2". Therefore I saved counts as multiple tables (one per particle) [UGLY] and I query with .get_where_list('counts >= 2'). I'm not sure this is space-efficient, and generating all these tables instead of using a single array clutters the HDF5 file significantly. Moreover, strangely enough, creating those tables requires creating a custom dtype (even for standard numpy dtypes):
dt = np.dtype([('counts', 'u1')])
for ip in xrange(n_particles):
    name = "particle_%d" % ip
    data_file.create_table(
        group, name, description=dt, chunkshape=chunksize,
        expectedrows=time_size,
        title='Binned timetrace of emitted ph (bin = t_step)'
              ' - particle_%d' % ip)
Each particle-counts "table" has a different name (name = "particle_%d" % ip) and I need to put them in a python list for easy iteration.
EDIT: The result of this question is a Brownian Motion simulator called PyBroMo.
Dask.array can perform chunked operations like max, cumsum, etc. on an on-disk array like PyTables or h5py.
import h5py
d = h5py.File('myfile.hdf5')['/data']
import dask.array as da
x = da.from_array(d, chunks=(1000, 1000))
x looks and feels like a numpy array and copies much of its API. Operations on x create a DAG of in-memory operations which execute efficiently using multiple cores, streaming from disk as necessary:
da.exp(x).mean(axis=0).compute()
http://dask.pydata.org/en/latest/
conda install dask
or
pip install dask
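As a rough sketch (not from the original answer) of how step 2 of the question could be expressed on top of this, assuming x wraps the on-disk emission rates as above, map_blocks applies a plain NumPy function to each chunk as it streams through memory:
import numpy as np

# draw Poisson counts chunk-by-chunk from the on-disk emission rates
counts = x.map_blocks(lambda block: np.random.poisson(lam=block).astype(np.uint8),
                      dtype=np.uint8)
total_per_timestep = counts.sum(axis=0).compute()    # e.g. total counts per time step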
See here for how to store your parameters in the HDF5 file (it pickles, so you can store them as you have them; there is a 64kb limit on the size of the pickle).
import pandas as pd
import numpy as np
n_particles = 10
chunk_size = 1000
# 1) create a new emission file, compressing as we go
emission = pd.HDFStore('emission.hdf',mode='w',complib='blosc')
# generate simulated data
for i in range(10):
    df = pd.DataFrame(np.abs(np.random.randn(chunk_size,n_particles)),dtype='float32')
    # create a globally unique index (time)
    # http://stackoverflow.com/questions/16997048/how-does-one-append-large-amounts-of-data-to-a-pandas-hdfstore-and-get-a-natural/16999397#16999397
    try:
        nrows = emission.get_storer('df').nrows
    except:
        nrows = 0
    df.index = pd.Series(df.index) + nrows
    emission.append('df',df)
emission.close()
# 2) create counts
cs = pd.HDFStore('counts.hdf',mode='w',complib='blosc')
# this is an iterator, can be any size
for df in pd.read_hdf('emission.hdf','df',chunksize=200):
    counts = pd.DataFrame(np.random.poisson(lam=df).astype(np.uint8))
    # set the index as the same
    counts.index = df.index
    # store the sum across all particles (as most are zero this will be a
    # nice sub-selector)
    # better maybe to have multiple of these sums that divide the particle space
    # you don't have to do this but prob more efficient
    # you can do this in another file if you want/need
    counts['particles_0_4'] = counts.iloc[:,0:4].sum(1)
    counts['particles_5_9'] = counts.iloc[:,5:9].sum(1)
    # make the non_zero column indexable
    cs.append('df',counts,data_columns=['particles_0_4','particles_5_9'])
cs.close()
# 3) find interesting counts
print pd.read_hdf('counts.hdf','df',where='particles_0_4>0')
print pd.read_hdf('counts.hdf','df',where='particles_5_9>0')
Alternatively, you can make each particle a data_column and select on them individually.
and some output (pretty active emission in this case :)
0 1 2 3 4 5 6 7 8 9 particles_0_4 particles_5_9
0 2 2 2 3 2 1 0 2 1 0 9 4
1 1 0 0 0 1 0 1 0 3 0 1 4
2 0 2 0 0 2 0 0 1 2 0 2 3
3 0 0 0 1 1 0 0 2 0 3 1 2
4 3 1 0 2 1 0 0 1 0 0 6 1
5 1 0 0 1 0 0 0 3 0 0 2 3
6 0 0 0 1 1 0 1 0 0 0 1 1
7 0 2 0 2 0 0 0 0 2 0 4 2
8 0 0 0 1 3 0 0 0 0 1 1 0
10 1 0 0 0 0 0 0 0 0 1 1 0
11 0 0 1 1 0 2 0 1 2 1 2 5
12 0 2 2 4 0 0 1 1 0 1 8 2
13 0 2 1 0 0 0 0 1 1 0 3 2
14 1 0 0 0 0 3 0 0 0 0 1 3
15 0 0 0 1 1 0 0 0 0 0 1 0
16 0 0 0 4 3 0 3 0 1 0 4 4
17 0 2 2 3 0 0 2 2 0 2 7 4
18 0 1 2 1 0 0 3 2 1 2 4 6
19 1 1 0 0 0 0 1 2 1 1 2 4
20 0 0 2 1 2 2 1 0 0 1 3 3
22 0 1 2 2 0 0 0 0 1 0 5 1
23 0 2 4 1 0 1 2 0 0 2 7 3
24 1 1 1 0 1 0 0 1 2 0 3 3
26 1 3 0 4 1 0 0 0 2 1 8 2
27 0 1 1 4 0 1 2 0 0 0 6 3
28 0 1 0 0 0 0 0 0 0 0 1 0
29 0 2 0 0 1 0 1 0 0 0 2 1
30 0 1 0 2 1 2 0 2 1 1 3 5
31 0 0 1 1 1 1 1 0 1 1 2 3
32 3 0 2 1 0 0 1 0 1 0 6 2
33 1 3 1 0 4 1 1 0 1 4 5 3
34 1 1 0 0 0 0 0 3 0 1 2 3
35 0 1 0 0 1 1 2 0 1 0 1 4
36 1 0 1 0 1 2 1 2 0 1 2 5
37 0 0 0 1 0 0 0 0 3 0 1 3
38 2 5 0 0 0 3 0 1 0 0 7 4
39 1 0 0 2 1 1 3 0 0 1 3 4
40 0 1 0 0 1 0 0 4 2 2 1 6
41 0 3 3 1 1 2 0 0 2 0 7 4
42 0 1 0 2 0 0 0 0 0 1 3 0
43 0 0 2 0 5 0 3 2 1 1 2 6
44 0 2 0 1 0 0 1 0 0 0 3 1
45 1 0 0 2 0 0 0 1 4 0 3 5
46 0 2 0 0 0 0 0 1 1 0 2 2
48 3 0 0 0 0 1 1 0 0 0 3 2
50 0 1 0 1 0 1 0 0 2 1 2 3
51 0 0 2 0 0 0 2 3 1 1 2 6
52 0 0 2 3 2 3 1 0 1 5 5 5
53 0 0 0 2 1 1 0 0 1 1 2 2
54 0 1 2 2 2 0 1 0 2 0 5 3
55 0 2 1 0 0 0 0 0 3 2 3 3
56 0 1 0 0 0 2 2 0 1 1 1 5
57 0 0 0 1 1 0 0 1 0 0 1 1
58 6 1 2 0 2 2 0 0 0 0 9 2
59 0 1 1 0 0 0 0 0 2 0 2 2
60 2 0 0 0 1 0 0 1 0 1 2 1
61 0 0 3 1 1 2 0 0 1 0 4 3
62 2 0 1 0 0 0 0 1 2 1 3 3
63 2 0 1 0 1 0 1 0 0 0 3 1
65 0 0 1 0 0 0 1 5 0 1 1 6
.. .. .. .. .. .. .. .. .. .. ... ...
[9269 rows x 12 columns]
PyTable Solution
Since the extra functionality provided by pandas is not needed here, and its processing is much slower (see the notebook below), the best approach is to use PyTables or h5py directly. So far I've tried only the pytables approach.
All tests were performed in this notebook:
Python particles simulator: numpy out-of-core processing
Introduction to pytables data-structures
Reference: Official PyTables Docs
Pytables allows storing data in HDF5 files using 2 types of formats: arrays and tables.
Arrays
There are 3 types of arrays: Array, CArray and EArray. They all allow storing and retrieving (multidimensional) slices with a notation similar to numpy slicing.
# Write data to store (broadcasting works)
array1[:] = 3
# Read data from store
in_ram_array = array1[:]
For optimization in some use cases, a CArray is saved in "chunks", whose size can be chosen with chunkshape at creation time.
Array and CArray size is fixed at creation time. You can fill/write the array chunk-by-chunk after creation though. Conversely EArray can be extended with the .append() method.
Tables
The table is a quite different beast. It's basically a "table": you have only a 1D index and each element is a row. Inside each row are the "column" data types, and each column can have a different type. If you are familiar with numpy record arrays, a table is basically a 1D record array, with each element having as many fields as the columns.
1D or 2D numpy arrays can be stored in tables, but it's a bit more tricky: we need to create a row data type. For example, to store a 1D uint8 numpy array we need to do:
table_uint8 = np.dtype([('field1', 'u1')])
table_1d = data_file.create_table('/', 'array_1d', description=table_uint8)
So why use tables? Because, differently from arrays, tables can be efficiently queried. For example, if we want to search for elements > 3 in a huge disk-based table, we can do:
index = table_1d.get_where_list('field1 > 3')
Not only is it simple (compared with arrays, where we need to scan the whole file in chunks and build the index in a loop), but it is also extremely fast.
How to store simulation parameters
The best way to store the simulation parameters is to use a group (e.g. /parameters), convert each scalar to a numpy array and store it as a CArray.
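A sketch of that pattern (illustrative names and values; it assumes an open PyTables file handle data_file and a plain dict of scalar parameters):
import numpy as np

params = {'n_particles': 15, 't_step': 5e-8}          # illustrative values
pgroup = data_file.create_group('/', 'parameters')
for name, value in params.items():
    # CArray needs a non-empty shape, so wrap each scalar in a length-1 array
    data_file.create_carray(pgroup, name, obj=np.atleast_1d(np.asarray(value)))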
Array for "emission"
emission is the biggest array that is generated and read sequentially. For this usage pattern a good data structure is EArray. On "simulated" data with ~50% zero elements, blosc compression (level=5) achieves a 2.2x compression ratio. I found that a chunk size of 2^18 (256k) gives the minimum processing time.
Storing "counts"
Also storing "counts" will increase the file size by 10% and will take 40% more time to compute the timestamps. Having counts stored is not an advantage per se, because only the timestamps are needed in the end.
The advantage is that reconstructing the index (timestamps) is simpler, because we query the full time axis in a single command (.get_where_list('counts >= 1')). Conversely, with chunked processing, we need to perform some index arithmetic that is a bit tricky, and maybe a burden to maintain.
However, the code complexity may be small compared to all the other operations (sorting and merging) that are needed in both cases.
Storing "timestamps"
Timestamps can be accumulated in RAM. However, we don't know the array sizes before starting, and a final hstack() call is needed to "merge" the different chunks stored in a list. This doubles the memory requirements, so the RAM may be insufficient.
We can store the timestamps as-we-go to a table using .append(). At the end we can load the table in memory with .read(). This is only 10% slower than the all-in-memory computation, but it avoids the "double-RAM" requirement. Moreover, we can avoid the final full load entirely and have minimal RAM usage.
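A sketch of that append-as-you-go pattern (illustrative names; it assumes an open data_file and that chunk_timestamps holds the nonzero indices found in the current chunk):
import numpy as np

dt = np.dtype([('timestamp', 'i8')])
ts_table = data_file.create_table('/', 'timestamps_p0', description=dt)

# inside the chunked loop: append the timestamps found in this chunk
ts_table.append([(int(t),) for t in chunk_timestamps])
ts_table.flush()

# at the end, either load everything back in one go ...
timestamps = ts_table.read()['timestamp']
# ... or iterate over the table lazily to keep RAM usage minimal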
H5Py
H5py is a much simpler library than pytables. For this use case of (mainly) sequential processing it seems a better fit than pytables. The only missing feature is the lack of 'blosc' compression. Whether this results in a big performance penalty remains to be tested.
Use OpenMM to simulate particles (https://github.com/SimTk/openmm) and MDTraj (https://github.com/rmcgibbo/mdtraj) to handle trajectory IO.
The pytables vs pandas.HDFStore tests in the accepted answer are completely misleading:
The first critical error is that the timing did not apply os.fsync after flush, which makes the speed test unstable. So sometimes the pytables function is accidentally much faster.
The 2nd problem is that the pytables and pandas versions have completely different shapes, due to a misunderstanding of pytables.EArray's shape argument. The author appends columns in the pytables version but rows in the pandas version.
The 3rd problem is that the author used different chunkshapes during the comparison.
The author also forgot to disable the table index generation during store.append(), which is a time-consuming process.
The following table shows the performance results from his version and my fixes.
tbold is his pytables version, pdold is his pandas version. tbsync and pdsync are his versions with fsync() after flush() and with table index generation disabled during append. tbopt and pdopt are my optimized versions, with blosc:lz4 and complevel 9.
| name | dt [s] | data size [MB] | comp ratio % | chunkshape | shape | clib | indexed |
|:-------|-----:|-----------------:|---------------:|:-------------|:--------------|:----------------|:----------|
| tbold | 5.11 | 300.00 | 84.63 | (15, 262144) | (15, 5242880) | blosc[5][1] | False |
| pdold | 8.39 | 340.00 | 39.26 | (1927,) | (5242880,) | blosc[5][1] | True |
| tbsync | 7.47 | 300.00 | 84.63 | (15, 262144) | (15, 5242880) | blosc[5][1] | False |
| pdsync | 6.97 | 340.00 | 39.27 | (1927,) | (5242880,) | blosc[5][1] | False |
| tbopt | 4.78 | 300.00 | 43.07 | (4369, 15) | (5242880, 15) | blosc:lz4[9][1] | False |
| pdopt | 5.73 | 340.00 | 38.53 | (3855,) | (5242880,) | blosc:lz4[9][1] | False |
The pandas.HDFStore uses pytables under the hood. Thus, if we use them correctly, they should have no difference at all.
We can see that the pandas version has a larger data size. This is because pandas uses pytables.Table instead of EArray, and a pandas.DataFrame always has an index column. The first column of the Table object is this DataFrame index, which requires some extra space to save. This only affects IO performance a little but provides more features, such as out-of-core queries. So I still recommend pandas here. MRocklin also mentioned a nicer out-of-core package, dask, if most of what you need are just array operations instead of table-like queries. But the IO performance won't have a distinguishable difference.
h5f.root.emission._v_attrs
Out[82]:
/emission._v_attrs (AttributeSet), 15 attributes:
[CLASS := 'GROUP',
TITLE := '',
VERSION := '1.0',
data_columns := [],
encoding := 'UTF-8',
index_cols := [(0, 'index')],
info := {1: {'names': [None], 'type': 'RangeIndex'}, 'index': {}},
levels := 1,
metadata := [],
nan_rep := 'nan',
non_index_axes := [(1, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])],
pandas_type := 'frame_table',
pandas_version := '0.15.2',
table_type := 'appendable_frame',
values_cols := ['values_block_0']]
Here are the functions:
import os
from time import time
import numpy as np
import pandas as pd
import tables as pytb

def generate_emission(shape):
    """Generate fake emission."""
    emission = np.random.randn(*shape).astype('float32') - 1
    emission.clip(0, 1e6, out=emission)
    assert (emission >= 0).all()
    return emission
def test_puretb_earray(outpath,
                       n_particles=15,
                       time_chunk_size=2**18,
                       n_iter=20,
                       sync=True,
                       clib='blosc',
                       clevel=5,
                       ):
    time_size = n_iter * time_chunk_size
    data_file = pytb.open_file(outpath, mode="w")
    comp_filter = pytb.Filters(complib=clib, complevel=clevel)
    emission = data_file.create_earray('/', 'emission', atom=pytb.Float32Atom(),
                                       shape=(n_particles, 0),
                                       chunkshape=(n_particles, time_chunk_size),
                                       expectedrows=n_iter * time_chunk_size,
                                       filters=comp_filter)
    # generate simulated emission data
    t0 = time()
    for i in range(n_iter):
        emission_chunk = generate_emission((n_particles, time_chunk_size))
        emission.append(emission_chunk)
    emission.flush()
    if sync:
        os.fsync(data_file.fileno())
    data_file.close()
    t1 = time()
    return t1 - t0
def test_puretb_earray2(outpath,
                        n_particles=15,
                        time_chunk_size=2**18,
                        n_iter=20,
                        sync=True,
                        clib='blosc',
                        clevel=5,
                        ):
    time_size = n_iter * time_chunk_size
    data_file = pytb.open_file(outpath, mode="w")
    comp_filter = pytb.Filters(complib=clib, complevel=clevel)
    emission = data_file.create_earray('/', 'emission', atom=pytb.Float32Atom(),
                                       shape=(0, n_particles),
                                       expectedrows=time_size,
                                       filters=comp_filter)
    # generate simulated emission data
    t0 = time()
    for i in range(n_iter):
        emission_chunk = generate_emission((time_chunk_size, n_particles))
        emission.append(emission_chunk)
    emission.flush()
    if sync:
        os.fsync(data_file.fileno())
    data_file.close()
    t1 = time()
    return t1 - t0
def test_purepd_df(outpath,
                   n_particles=15,
                   time_chunk_size=2**18,
                   n_iter=20,
                   sync=True,
                   clib='blosc',
                   clevel=5,
                   autocshape=False,
                   oldversion=False,
                   ):
    time_size = n_iter * time_chunk_size
    emission = pd.HDFStore(outpath, mode='w', complib=clib, complevel=clevel)
    # generate simulated data
    t0 = time()
    for i in range(n_iter):
        # Generate fake emission
        emission_chunk = generate_emission((time_chunk_size, n_particles))
        df = pd.DataFrame(emission_chunk, dtype='float32')
        # create a globally unique index (time)
        # http://stackoverflow.com/questions/16997048/how-does-one-append-large-amounts-of-data-to-a-pandas-hdfstore-and-get-a-natural/16999397#16999397
        try:
            nrows = emission.get_storer('emission').nrows
        except:
            nrows = 0
        df.index = pd.Series(df.index) + nrows
        if autocshape:
            emission.append('emission', df, index=False,
                            expectedrows=time_size
                            )
        else:
            if oldversion:
                emission.append('emission', df)
            else:
                emission.append('emission', df, index=False)
    emission.flush(fsync=sync)
    emission.close()
    t1 = time()
    return t1 - t0
def _test_puretb_earray_nosync(outpath):
    return test_puretb_earray(outpath, sync=False)

def _test_purepd_df_nosync(outpath):
    return test_purepd_df(outpath, sync=False,
                          oldversion=True
                          )

def _test_puretb_earray_opt(outpath):
    return test_puretb_earray2(outpath,
                               sync=False,
                               clib='blosc:lz4',
                               clevel=9
                               )

def _test_purepd_df_opt(outpath):
    return test_purepd_df(outpath,
                          sync=False,
                          clib='blosc:lz4',
                          clevel=9,
                          autocshape=True
                          )
testfns = {
    'tbold': _test_puretb_earray_nosync,
    'pdold': _test_purepd_df_nosync,
    'tbsync': test_puretb_earray,
    'pdsync': test_purepd_df,
    'tbopt': _test_puretb_earray_opt,
    'pdopt': _test_purepd_df_opt,
}
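For completeness, a small driver that could run the benchmark dict above (a sketch with made-up output paths; it assumes the imports used by the functions are already in place):
import os
import tempfile

results = {}
for name, fn in sorted(testfns.items()):
    outpath = os.path.join(tempfile.mkdtemp(), name + '.h5')
    results[name] = fn(outpath)    # each test returns its elapsed time in seconds
    print('%-7s dt=%.2fs size=%.1fMB' % (name, results[name], os.path.getsize(outpath) / 1e6))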

Assigning one column to another column between pandas DataFrames (like vector to vector assignment)

I have a super strange problem which I spent the last hour trying to solve, but with no success. It is even more strange since I can't replicate it on a small scale.
I have a large DataFrame (150,000 entries). I took out a subset of it and did some manipulation. The subset was saved as a different variable, x.
x is smaller than the df, but its index is in the same range as the df. I'm now trying to assign x back to the DataFrame replacing values in the same column:
rep_Callers['true_vpID'] = x.true_vpID
This inserts all the different values in x to the right place in df, but instead of keeping the df.true_vpID values that are not in x, it is filling them with NaNs. So I tried a different approach:
df.ix[x.index,'true_vpID'] = x.true_vpID
But instead of filling x values in the right place in df, the df.true_vpID gets filled with the first value of x and only it! I changed the first value of x several times to make sure this is indeed what is happening, and it is. I tried to replicate it on a small scale but it didn't work:
df = DataFrame({'a':ones(5),'b':range(5)})
a b
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
z =Series([random() for i in range(5)],index = range(5))
0 0.812561
1 0.862109
2 0.031268
3 0.575634
4 0.760752
df.ix[z.index[[1,3]],'b'] = z[[1,3]]
a b
0 1 0.000000
1 1 0.812561
2 1 2.000000
3 1 0.575634
4 1 4.000000
5 1 5.000000
I really tried it all, need some new suggestions...
Try using df.update(updated_df_or_series)
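A minimal sketch of what that looks like on a toy frame (illustrative data; update aligns on the index and on the Series name, and only overwrites the matching cells):
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': np.ones(5), 'b': np.arange(5, dtype=float)})
x = pd.Series([0.81, 0.58], index=[1, 3], name='b')   # partial replacement values
df.update(x)    # rows 1 and 3 of column 'b' are replaced; everything else is kept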
Also using a simple example, you can modify a DataFrame by doing an index query and modifying the resulting object.
df_1
a b
0 1 0
1 1 1
2 1 2
3 1 3
4 1 4
df_2 = df_1.ix[3:5]
df_2.b = df_2.b + 2
df_2
a b
3 1 5
4 1 6
df_1
a b
0 1 0
1 1 1
2 1 2
3 1 5
4 1 6