Meshroom in Google Colab but no output file - google-colaboratory

I tried using Meshroom to make a 3D model in Google Colab. Everything seemed to run fine, but there was no output file. I tried mounting MEGA instead of Google Drive, but no luck. Instead, I got the following text (some of the output is cut due to the word limit). This is the Colab notebook that I used: https://colab.research.google.com/drive/10T2pDZGRUd5r1UiAvQUwJZTqE_tLydcu
[12:26:49.520867][info] Bundle Adjustment Statistics:
- local strategy enabled: no
- adjustment duration: 0.00275099 s
- poses:
- # refined: 2
- # constant: 0
- # ignored: 0
- landmarks:
- # refined: 88
- # constant: 0
- # ignored: 0
- intrinsics:
- # refined: 0
- # constant: 1
- # ignored: 0
- # residual blocks: 176
- # successful iterations: 12
- # unsuccessful iterations: 0
- initial RMSE: 0.386763
- final RMSE: 0.36501
[12:26:49.520935][info] Remove outliers:
- # outliers residual error: 0
- # outliers angular error: 0
[12:26:49.520954][info] Bundle adjustment iteration: 0 took 3 msec.
[12:26:49.520966][info] Bundle adjustment with 1 iterations took 3 msec.
[12:26:49.521135][info] Initial pair is: 738871193, 833403405
[12:26:49.521201][info] Begin Incremental Reconstruction:
- mode: SfM augmentation
- # images in input: 236
- # images in resection: 234
- # landmarks in input: 44
- # cameras already calibrated: 2
[12:26:49.521225][info] Incremental Reconstruction start iteration 0:
- # number of resection groups: 0
- # number of poses: 2
- # number of landmarks: 44
- # remaining images: 234
[12:26:49.522265][info] Update Reconstruction:
- resection id: 0
- # images in the resection group: 1
- # images remaining: 234
[12:26:49.522355][info] [3/236] Robust Resection of view: 18451152
[12:26:49.529856][info] Robust Resection information:
- resection status: true
- threshold (error max): 2.89881
- # points used for resection: 32
- # points validated by robust resection: 31
[12:26:49.532694][info] Bundle adjustment start.
[12:26:49.532746][info] Start bundle adjustment iteration: 0
block_sparse_matrix.cc:81 Allocating values array with 15600 bytes.
detect_structure.cc:95 Dynamic f block size because the block size changed from 6 to 4
detect_structure.cc:113 Schur complement static structure <2,3,-1>.
detect_structure.cc:95 Dynamic f block size because the block size changed from 6 to 4
detect_structure.cc:113 Schur complement static structure <2,3,-1>.
[12:26:49.552254][info] Bundle Adjustment Statistics:
- local strategy enabled: no
- adjustment duration: 0.0190014 s
- poses:
- # refined: 3
- # constant: 0
- # ignored: 0
- landmarks:
- # refined: 75
- # constant: 0
- # ignored: 0
- intrinsics:
- # refined: 1
- # constant: 0
- # ignored: 0
- # residual blocks: 150
- # successful iterations: 51
- # unsuccessful iterations: 0
- initial RMSE: 0.476511
- final RMSE: 0.313505
[12:26:49.552361][info] Remove outliers:
- # outliers residual error: 0
- # outliers angular error: 0
[12:26:49.552432][info] Bundle adjustment iteration: 0 took 19 msec.
[12:26:49.552454][info] Bundle adjustment with 1 iterations took 19 msec.
[12:26:49.625419][info] Incremental Reconstruction start iteration 1:
- # number of resection groups: 1
- # number of poses: 0
- # number of landmarks: 0
- # remaining images: 233
[12:26:49.625465][info] Incremental Reconstruction completed with 2 iterations:
- # number of resection groups: 1
- # number of poses: 0
- # number of landmarks: 0
- # remaining images: 233
[12:26:49.625540][info] Structure from Motion statistics:
- # input images: 236
- # cameras calibrated: 0
- # poses: 0
- # landmarks: 0
- elapsed time: 0.104
- residual RMSE: -nan
[12:26:49.625566][info] Histogram of residuals:
0 | 0
0.1 | 0
0.2 | 0
0.3 | 0
0.4 | 0
0.5 | 0
0.6 | 0
0.7 | 0
0.8 | 0
0.9 | 0
1
[12:26:49.625587][info] Histogram of observations length:
0 | 0
0.1 | 0
0.2 | 0
0.3 | 0
0.4 | 0
0.5 | 0
0.6 | 0
0.7 | 0
0.8 | 0
0.9 | 0
1
[12:26:49.625605][info] Histogram of nb landmarks per view:
0 | 0
0 | 0
0 | 0
0 | 0
0 | 0
0 | 0
0 | 0
0 | 0
0 | 0
0 | 0
1

Have you tried this part of the code?
# Choose format (tar.gz or zip)
!tar -czvf out.tar.gz ./out
from google.colab import files
files.download('out.tar.gz')
!zip -r out.zip ./out
files.download('out.zip')
For more information you can check this source:
https://colab.research.google.com/gist/natowi/3044484ad0c98877692c399297e3ab7e/meshroomcolab.ipynb#scrollTo=VQ8F_rxPw4dK
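If files.download fails or is slow for a large archive, one possible alternative (not from the linked notebook) is to mount Google Drive and copy the archive there. This assumes the standard google.colab drive.mount API and the default /content/drive/MyDrive mount path:
from google.colab import drive
drive.mount('/content/drive')
# copy the archive produced above into your Drive
!cp out.tar.gz /content/drive/MyDrive/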

Related

Negative binomial, Poisson-gamma mixture in WinBUGS

WinBUGS trap error
model
{
    for (i in 1:5323) {
        Y[i] ~ dpois(mu[i])                 # NB model as a Poisson-gamma mixture
        mu[i] ~ dgamma(b[i], a[i])          # NB model as a Poisson-gamma mixture
        a[i] <- b[i] / Emu[i]
        b[i] <- B * X[i]
        Emu[i] <- beta0 * pow(X[i], beta1)  # model equation
    }
    # Priors
    beta0 ~ dunif(0,10)  # parameter
    beta1 ~ dunif(0,10)  # parameter
    B ~ dunif(0,10)      # over-dispersion parameter
}
X[] Y[]
1.5 0
2.9 0
1.49 0
0.39 0
3.89 0
2.03 0
0.91 0
0.89 0
0.97 0
2.16 0
0.04 0
1.12 1
2.26 0
3.6 1
1.94 0
0.41 1
2 0
0.9 0
0.9 0
0.9 0
0.1 0
0.88 1
0.91 0
6.84 2
3.14 3
End
This is just a sample of the data. The model comes from Ezra Hauer, The Art of Regression Modeling in Road Safety, section 8.3.2, and WinBUGS stops with an **undefined real result** trap.
The aim is a fully Bayesian, one-step model that does not use empirical Bayes.
The results should be similar to the MLE estimates, where beta0 is 1.65, beta1 is 0.871 and the overdispersion is 0.531.
X is the only covariate and Y is the observed collision count, so X cannot be zero or negative, while Y cannot be lower than zero. If the model is fitted as a Poisson-gamma mixture using maximum likelihood, it works.
How can I make this model work in WinBUGS?
The data is in Excel; the model worked fine when I selected only the largest 1000 observations.

How can I pad a matrix in Python without using the np.pad() function?

I want to take Matrix 1 below and pad it with a padding of 1 so that it looks like Matrix 2, or pad it with a padding of 2 to make it look like Matrix 3. I want to do this without using np.pad() or any other NumPy function.
Matrix 1
| 4 4 |
| 7 2 |
Matrix 2 - with padding of 1
| 0 0 0 0 |
| 0 4 4 0 |
| 0 7 2 0 |
| 0 0 0 0 |
Matrix 3 - with padding of 2
| 0 0 0 0 0 0 |
| 0 0 0 0 0 0 |
| 0 0 4 4 0 0 |
| 0 0 7 2 0 0 |
| 0 0 0 0 0 0 |
| 0 0 0 0 0 0 |
You could create a custom pad function like so:
Very late edit: Do not use this function, use the one below it called pad2().
def pad(mat, padding):
    dim1 = len(mat)
    dim2 = len(mat[0])
    # new empty matrix of the required size
    new_mat = [
        [0 for i in range(dim1 + padding*2)]
        for j in range(dim2 + padding*2)
    ]
    # "insert" original matrix in the empty matrix
    for i in range(dim1):
        for j in range(dim2):
            new_mat[i+padding][j+padding] = mat[i][j]
    return new_mat
It might not be the optimal/fastest solution, but this should work fine for regular sized matrices.
Very late edit:
I tried to use this function on a non-square matrix and noticed it threw an IndexError. So, for future reference, here is the corrected version that works for N x M matrices (where N != M):
def pad2(mat, padding, pad_with=0):
    n_rows = len(mat)
    n_cols = len(mat[0])
    # new empty matrix of the required size
    new_mat = [
        [pad_with for col in range(n_cols + padding * 2)]
        for row in range(n_rows + padding * 2)
    ]
    # "insert" original matrix in the empty matrix
    for row in range(n_rows):
        for col in range(n_cols):
            new_mat[row + padding][col + padding] = mat[row][col]
    return new_mat
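For reference, a quick usage sketch of pad2() on the matrix from the question (the printing loop is just for display):
matrix = [[4, 4],
          [7, 2]]
for padding in (1, 2):
    for row in pad2(matrix, padding):
        print(row)
    print()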

How to use a list of categories that example belongs to as a feature solving classification problem?

One of the features looks like this:
1 170,169,205,174,173,246,247,249,380,377,383,38...
2 448,104,239,277,276,99,154,155,76,412,139,333,...
3 268,422,419,124,1,17,431,343,341,435,130,331,5...
4 50,53,449,106,279,420,161,74,123,364,231,18,23...
5 170,169,205,174,173,246,247,249,380,377,383,38...
It tells us which categories the example belongs to.
How should I use it when solving a classification problem?
I've tried to use dummy variables,
df = df.join(features['cat'].str.get_dummies(',').add_prefix('contains_'))
but we don't know whether there are other categories that were not mentioned in the training set, so I do not know how to preprocess all the objects.
That's interesting. I didn't know str.get_dummies, but maybe I can help you with the rest.
You basically have two problems:
The set of categories you get later contains categories that were unknown while training the model. You have to get rid of these later.
The set of categories you get later does not contain all categories. You have to make sure you generate dummies for them as well.
Problem 1: filtering out unknown/unwanted categories
The first problem is easy to solve:
# create a set of all categories you want to allow
# either define it as a fixed set, or extract it from your
# column like this (the output of the map is actually irrelevant)
# the result will be in valid_categories
valid_categories= set()
df['categories'].str.split(',').map(valid_categories.update)
# now if you want to normalize your data before you do the
# dummy encoding, you can cleanse the data by
# splitting it, creating an intersection and then joining
# it back again to get a string on which you can work with
# str.get_dummies
df['categories'].str.split(',').map(lambda l: valid_categories.intersection(l)).str.join(',')
Problem 2: generating dummies for all known categories
The second problem can be solved by just adding a dummy row, that
contains all categories e.g. with df.append just before you
call get_dummies and removing it right after get_dummies.
# e.g. you can do it like this:
# get a new index value to be able to remove the row later
# (this only works if you have a numeric index)
dummy_index = df.index.max() + 1
# assign the categories
df.loc[dummy_index] = {'id': 999, 'categories': ','.join(valid_categories)}
# now do the processing steps mentioned in the section above,
# then create the dummies; after that, remove the dummy line again
df.drop(labels=[dummy_index], inplace=True)
Example:
import io
import pandas as pd

raw = """id categories
1 170,169,205,174,173,246,247
2 448,104,239,277,276,99,154
3 268,422,419,124,1,17,431,343
4 50,53,449,106,279,420,161,74
5 170,169,205,174,173,246,247"""
df = pd.read_fwf(io.StringIO(raw))
valid_categories = set()
df['categories'].str.split(',').map(valid_categories.update)
# remove 154 and 170 for demonstration purposes
valid_categories.remove('170')
valid_categories.remove('154')
df['categories'].str.split(',').map(lambda l: valid_categories.intersection(l)).str.join(',').str.get_dummies(',')
Out[622]:
1 104 106 124 161 169 17 173 174 205 239 246 247 268 276 277 279 343 419 420 422 431 448 449 50 53 74 99
0 0 0 0 0 0 1 0 1 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 0 0 0 0 0 0 0 0 1 0 0 0 1 1 0 0 0 0 0 0 1 0 0 0 0 1
2 1 0 0 1 0 0 1 0 0 0 0 0 0 1 0 0 0 1 1 0 1 1 0 0 0 0 0 0
3 0 0 1 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 1 1 1 0
4 0 0 0 0 0 1 0 1 1 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
You can see that there are no columns for 154 and 170.
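For completeness, here is a hedged sketch of how the dummy-row trick from Problem 2 could be wired into the example above; cleaned and dummies are illustrative names that are not part of the original answer:
# continue from df and valid_categories defined above
cleaned = df['categories'].str.split(',').map(
    lambda l: ','.join(valid_categories.intersection(l)))
# append a dummy row that contains every allowed category ...
dummy_index = df.index.max() + 1
cleaned.loc[dummy_index] = ','.join(valid_categories)
# ... create the dummies, then drop the dummy row again
dummies = cleaned.str.get_dummies(',').add_prefix('contains_')
dummies = dummies.drop(labels=[dummy_index])
# dummies now has one 'contains_*' column for every allowed category,
# even for categories that never occur in the real rows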

Internal node predictions of xgboost model

Is it possible to calculate the internal node predictions of an xgboost model? The R package, gbm, provides a prediction for internal nodes of each tree.
The xgboost output, however, only shows predictions for the final leaves of the model.
xgboost output:
Notice that the Quality column has the final prediction for the leaf node in row 6. I would like that value for each of the internal nodes as well.
Tree Node ID Feature Split Yes No Missing Quality Cover
1: 0 0 0-0 Sex=female 0.50000 0-1 0-2 0-1 246.6042790 222.75
2: 0 1 0-1 Age 13.00000 0-3 0-4 0-4 22.3424225 144.25
3: 0 2 0-2 Pclass=3 0.50000 0-5 0-6 0-5 60.1275253 78.50
4: 0 3 0-3 SibSp 2.50000 0-7 0-8 0-7 23.6302433 9.25
5: 0 4 0-4 Fare 26.26875 0-9 0-10 0-9 21.4425507 135.00
6: 0 5 0-5 Leaf NA <NA> <NA> <NA> 0.1747126 42.50
R gbm output:
In the R gbm package output, the Prediction column contains values for both the leaf nodes (SplitVar == -1) and the internal nodes. I would like access to these values from the xgboost model.
SplitVar SplitCodePred LeftNode RightNode MissingNode ErrorReduction Weight Prediction
0 1 0.000000000 1 8 15 32.564591 445 0.001132514
1 2 9.500000000 2 3 7 3.844470 282 -0.085827382
2 -1 0.119585850 -1 -1 -1 0.000000 15 0.119585850
3 0 1.000000000 4 5 6 3.047926 207 -0.092846157
4 -1 -0.118731665 -1 -1 -1 0.000000 165 -0.118731665
5 -1 0.008846912 -1 -1 -1 0.000000 42 0.008846912
6 -1 -0.092846157 -1 -1 -1 0.000000 207 -0.092846157
Question:
How do I access or calculate predictions for the internal nodes of an xgboost model? I would like to use them for a greedy, poor man's version of SHAP scores.
The solution to this problem is to dump the xgboost model to JSON with all statistics included (with_stats=True). That adds the cover statistic to the output, which can be used to distribute the leaf values through the internal nodes:
def _calculate_contribution(node: AnyNode) -> float32:
    if isinstance(node, Leaf):
        return node.contrib
    else:
        return (
            node.left.cover * Node._calculate_contribution(node.left)
            + node.right.cover * Node._calculate_contribution(node.right)
        ) / node.cover
The internal contribution is the weighted average of the child contributions. Using this method, the generated results exactly match those returned when calling the predict method with pred_contribs=True and approx_contribs=True.
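For reference, here is a minimal sketch of the same cover-weighted recursion applied directly to xgboost's JSON dump; it assumes an already-trained Booster named booster, and relies on the 'leaf', 'cover' and 'children' keys produced by get_dump(with_stats=True, dump_format='json'):
import json

def internal_value(node):
    """Cover-weighted average of the leaf values below `node`."""
    if 'leaf' in node:          # leaf node: its value is the prediction
        return node['leaf']
    left, right = node['children']
    return (left['cover'] * internal_value(left)
            + right['cover'] * internal_value(right)) / node['cover']

# booster = ...  # an already-trained xgboost.Booster (assumption)
# trees = [json.loads(t) for t in booster.get_dump(with_stats=True, dump_format='json')]
# root_values = [internal_value(t) for t in trees]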

Python particles simulator: out-of-core processing

Problem description
I'm writing a Monte Carlo particle simulator (Brownian motion and photon emission) in Python/NumPy. I need to save the simulation output (>>10GB) to a file and process the data in a second step. Compatibility with both Windows and Linux is important.
The number of particles (n_particles) is 10-100. The number of time-steps (time_size) is ~10^9.
The simulation has 3 steps (the code below is for an all-in-RAM version):
Simulate (and store) an emission rate array (contains many almost-0 elements):
shape (n_particles x time_size), float32, size 80GB
Compute counts array, (random values from a Poisson process with previously computed rates):
shape (n_particles x time_size), uint8, size 20GB
counts = np.random.poisson(lam=emission).astype(np.uint8)
Find the timestamps (or indices) of the counts. Counts are almost always 0, so the timestamp arrays will fit in RAM.
# Loop across the particles
timestamps = [np.nonzero(c) for c in counts]
I do step 1 once, then repeat steps 2-3 many (~100) times. In the future I may need to pre-process emission (apply cumsum or other functions) before computing counts.
Question
I have a working in-memory implementation, and I'm trying to understand the best approach to implementing an out-of-core version that can scale to (much) longer simulations.
What I would like to exist
I need to save arrays to a file, and I would like to use a single file for a simulation. I also need a "simple" way to store and recall a dictionary of simulation parameters (scalars).
Ideally I would like a file-backed numpy array that I can preallocate and fill in chunks. Then, I would like the numpy array methods (max, cumsum, ...) to work transparently, requiring only a chunksize keyword to specify how much of the array to load at each iteration.
Even better, I would like a Numexpr that operates not between cache and RAM but between RAM and hard drive.
What are the practical options
As a first option, I started experimenting with pyTables, but I'm not happy with its complexity and abstractions (so different from numpy). Moreover, my current solution (read below) is UGLY and not very efficient.
So the options for which I seek an answer are:
implement a numpy array with the required functionality (how?)
use pytables in a smarter way (different data structures/methods)
use another library: h5py, blaze, pandas... (I haven't tried any of them so far)
Tentative solution (pyTables)
I save the simulation parameters in the '/parameters' group: each parameter is converted to a numpy array scalar. This is a verbose solution, but it works.
I save emission as an extensible array (EArray), because I generate the data in chunks and need to append each new chunk (I know the final size, though). Saving counts is more problematic. If I save it as a pytables array, it's difficult to perform queries like "counts >= 2". Therefore I saved counts as multiple tables (one per particle) [UGLY], and I query with .get_where_list('counts >= 2'). I'm not sure this is space-efficient, and
generating all these tables instead of using a single array clutters the HDF5 file significantly. Moreover, strangely enough, creating those tables requires creating a custom dtype (even for standard numpy dtypes):
dt = np.dtype([('counts', 'u1')])
for ip in xrange(n_particles):
    name = "particle_%d" % ip
    data_file.create_table(
        group, name, description=dt, chunkshape=chunksize,
        expectedrows=time_size,
        title='Binned timetrace of emitted ph (bin = t_step)'
              ' - particle_%d' % ip)
Each particle-counts "table" has a different name (name = "particle_%d" % ip), and I need to put them in a Python list for easy iteration.
EDIT: The result of this question is a Brownian Motion simulator called PyBroMo.
Dask.array can perform chunked operations like max, cumsum, etc. on an on-disk array like PyTables or h5py.
import h5py
d = h5py.File('myfile.hdf5')['/data']
import dask.array as da
x = da.from_array(d, chunks=(1000, 1000))
x looks and feels like a numpy array and copies much of the API. Operations on x create a DAG of in-memory operations which will execute efficiently, using multiple cores and streaming from disk as necessary:
da.exp(x).mean(axis=0).compute()
http://dask.pydata.org/en/latest/
conda install dask
or
pip install dask
See here for how to store your parameters in the HDF5 file (it pickles, so you can store them how you have them; there is a 64KB limit on the size of the pickle).
import pandas as pd
import numpy as np

n_particles = 10
chunk_size = 1000

# 1) create a new emission file, compressing as we go
emission = pd.HDFStore('emission.hdf', mode='w', complib='blosc')

# generate simulated data
for i in range(10):
    df = pd.DataFrame(np.abs(np.random.randn(chunk_size, n_particles)), dtype='float32')

    # create a globally unique index (time)
    # http://stackoverflow.com/questions/16997048/how-does-one-append-large-amounts-of-data-to-a-pandas-hdfstore-and-get-a-natural/16999397#16999397
    try:
        nrows = emission.get_storer('df').nrows
    except:
        nrows = 0

    df.index = pd.Series(df.index) + nrows
    emission.append('df', df)

emission.close()

# 2) create counts
cs = pd.HDFStore('counts.hdf', mode='w', complib='blosc')

# this is an iterator, can be any size
for df in pd.read_hdf('emission.hdf', 'df', chunksize=200):

    counts = pd.DataFrame(np.random.poisson(lam=df).astype(np.uint8))

    # set the index as the same
    counts.index = df.index

    # store the sum across all particles (as most are zero this will be a
    # nice sub-selector)
    # better maybe to have multiple of these sums that divide the particle space
    # you don't have to do this but prob more efficient
    # you can do this in another file if you want/need
    counts['particles_0_4'] = counts.iloc[:, 0:4].sum(1)
    counts['particles_5_9'] = counts.iloc[:, 5:9].sum(1)

    # make the non_zero columns indexable
    cs.append('df', counts, data_columns=['particles_0_4', 'particles_5_9'])

cs.close()

# 3) find interesting counts
print pd.read_hdf('counts.hdf', 'df', where='particles_0_4>0')
print pd.read_hdf('counts.hdf', 'df', where='particles_5_9>0')
Alternatively, you can make each particle a data_column and select on them individually.
And some output (pretty active emission in this case):
0 1 2 3 4 5 6 7 8 9 particles_0_4 particles_5_9
0 2 2 2 3 2 1 0 2 1 0 9 4
1 1 0 0 0 1 0 1 0 3 0 1 4
2 0 2 0 0 2 0 0 1 2 0 2 3
3 0 0 0 1 1 0 0 2 0 3 1 2
4 3 1 0 2 1 0 0 1 0 0 6 1
5 1 0 0 1 0 0 0 3 0 0 2 3
6 0 0 0 1 1 0 1 0 0 0 1 1
7 0 2 0 2 0 0 0 0 2 0 4 2
8 0 0 0 1 3 0 0 0 0 1 1 0
10 1 0 0 0 0 0 0 0 0 1 1 0
11 0 0 1 1 0 2 0 1 2 1 2 5
12 0 2 2 4 0 0 1 1 0 1 8 2
13 0 2 1 0 0 0 0 1 1 0 3 2
14 1 0 0 0 0 3 0 0 0 0 1 3
15 0 0 0 1 1 0 0 0 0 0 1 0
16 0 0 0 4 3 0 3 0 1 0 4 4
17 0 2 2 3 0 0 2 2 0 2 7 4
18 0 1 2 1 0 0 3 2 1 2 4 6
19 1 1 0 0 0 0 1 2 1 1 2 4
20 0 0 2 1 2 2 1 0 0 1 3 3
22 0 1 2 2 0 0 0 0 1 0 5 1
23 0 2 4 1 0 1 2 0 0 2 7 3
24 1 1 1 0 1 0 0 1 2 0 3 3
26 1 3 0 4 1 0 0 0 2 1 8 2
27 0 1 1 4 0 1 2 0 0 0 6 3
28 0 1 0 0 0 0 0 0 0 0 1 0
29 0 2 0 0 1 0 1 0 0 0 2 1
30 0 1 0 2 1 2 0 2 1 1 3 5
31 0 0 1 1 1 1 1 0 1 1 2 3
32 3 0 2 1 0 0 1 0 1 0 6 2
33 1 3 1 0 4 1 1 0 1 4 5 3
34 1 1 0 0 0 0 0 3 0 1 2 3
35 0 1 0 0 1 1 2 0 1 0 1 4
36 1 0 1 0 1 2 1 2 0 1 2 5
37 0 0 0 1 0 0 0 0 3 0 1 3
38 2 5 0 0 0 3 0 1 0 0 7 4
39 1 0 0 2 1 1 3 0 0 1 3 4
40 0 1 0 0 1 0 0 4 2 2 1 6
41 0 3 3 1 1 2 0 0 2 0 7 4
42 0 1 0 2 0 0 0 0 0 1 3 0
43 0 0 2 0 5 0 3 2 1 1 2 6
44 0 2 0 1 0 0 1 0 0 0 3 1
45 1 0 0 2 0 0 0 1 4 0 3 5
46 0 2 0 0 0 0 0 1 1 0 2 2
48 3 0 0 0 0 1 1 0 0 0 3 2
50 0 1 0 1 0 1 0 0 2 1 2 3
51 0 0 2 0 0 0 2 3 1 1 2 6
52 0 0 2 3 2 3 1 0 1 5 5 5
53 0 0 0 2 1 1 0 0 1 1 2 2
54 0 1 2 2 2 0 1 0 2 0 5 3
55 0 2 1 0 0 0 0 0 3 2 3 3
56 0 1 0 0 0 2 2 0 1 1 1 5
57 0 0 0 1 1 0 0 1 0 0 1 1
58 6 1 2 0 2 2 0 0 0 0 9 2
59 0 1 1 0 0 0 0 0 2 0 2 2
60 2 0 0 0 1 0 0 1 0 1 2 1
61 0 0 3 1 1 2 0 0 1 0 4 3
62 2 0 1 0 0 0 0 1 2 1 3 3
63 2 0 1 0 1 0 1 0 0 0 3 1
65 0 0 1 0 0 0 1 5 0 1 1 6
.. .. .. .. .. .. .. .. .. .. ... ...
[9269 rows x 12 columns]
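Regarding the earlier note about making each particle a data_column, here is a hedged, self-contained sketch (the p0..p9 column names are illustrative, chosen so they can be used in a where clause; this is not part of the original answer):
import numpy as np
import pandas as pd

# fake counts with one named column per particle
counts = pd.DataFrame(
    np.random.poisson(lam=0.1, size=(1000, 10)).astype(np.uint8),
    columns=['p%d' % i for i in range(10)])

with pd.HDFStore('counts_percol.hdf', mode='w', complib='blosc') as cs:
    # index every particle column so each can be queried on disk
    cs.append('df', counts, data_columns=True)

# select rows where a single particle fired, without loading the whole table
nonzero_p3 = pd.read_hdf('counts_percol.hdf', 'df', where='p3 > 0')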
PyTables Solution
Since the functionality provided by pandas is not needed, and the processing is much slower (see the notebook below), the best approach is using PyTables or h5py directly. I've tried only the pytables approach so far.
All tests were performed in this notebook:
Python particles simulator: numpy out-of-core processing
Introduction to pytables data-structures
Reference: Official PyTables Docs
PyTables allows storing data in HDF5 files using two types of formats: arrays and tables.
Arrays
There are 3 types of arrays: Array, CArray and EArray. They all allow storing and retrieving (multidimensional) slices with a notation similar to numpy slicing.
# Write data to store (broadcasting works)
array1[:] = 3
# Read data from store
in_ram_array = array1[:]
For optimization in some use cases, a CArray is saved in "chunks", whose size can be chosen with chunkshape at creation time.
Array and CArray sizes are fixed at creation time. You can fill/write the array chunk-by-chunk after creation, though. Conversely, an EArray can be extended with the .append() method.
Tables
The table is a quite different beast. It's basically a "table". You have only a 1D index and each element is a row. Inside each row there are the "column" data types; each column can have a different type. If you are familiar with numpy record-arrays, a table is basically a 1D record-array, with each element having as many fields as there are columns.
1D or 2D numpy arrays can be stored in tables, but it's a bit more tricky: we need to create a row data type. For example, to store a 1D uint8 numpy array we need to do:
table_uint8 = np.dtype([('field1', 'u1')])
table_1d = data_file.create_table('/', 'array_1d', description=table_uint8)
So why use tables? Because, unlike arrays, tables can be efficiently queried. For example, if we want to search for elements > 3 in a huge disk-based table, we can do:
index = table_1d.get_where_list('field1 > 3')
Not only is this simple (compared with arrays, where we need to scan the whole file in chunks and build the index in a loop), it is also extremely fast.
How to store simulation parameters
The best way to store simulation parameters is to use a group (i.e. /parameters), convert each scalar to a numpy array and store it as a CArray.
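A minimal sketch of that layout, with illustrative file and parameter names (each scalar is wrapped in a length-1 array so it can be stored as a chunked CArray):
import numpy as np
import tables as pytb

params = {'n_particles': 15, 't_step': 5e-8}   # example scalars

with pytb.open_file('simulation.h5', mode='w') as h5:
    group = h5.create_group('/', 'parameters')
    for name, value in params.items():
        h5.create_carray(group, name, obj=np.atleast_1d(value))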
Array for "emission"
emission is the biggest array, and it is generated and read sequentially. For this usage pattern a good data structure is EArray. On "simulated" data with ~50% zero elements, blosc compression (level=5) achieves a 2.2x compression ratio. I found that a chunk size of 2^18 (256k elements) gives the minimum processing time.
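A hedged sketch of that layout (illustrative names; blosc level 5 and a 2^18 chunk along the time axis, as described above):
import numpy as np
import tables as pytb

n_particles, chunk_size = 15, 2**18

with pytb.open_file('emission.h5', mode='w') as h5:
    comp_filter = pytb.Filters(complib='blosc', complevel=5)
    emission = h5.create_earray('/', 'emission', atom=pytb.Float32Atom(),
                                shape=(n_particles, 0),        # extendable along time
                                chunkshape=(n_particles, chunk_size),
                                filters=comp_filter)
    # append simulated chunks as they are generated
    emission.append(np.random.rand(n_particles, chunk_size).astype('float32'))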
Storing "counts"
Storing counts as well will increase the file size by 10% and take 40% more time to compute the timestamps. Having counts stored is not an advantage per se, because only the timestamps are needed in the end.
The advantage is that reconstructing the index (timestamps) is simpler, because we query the full time axis in a single command (.get_where_list('counts >= 1')). Conversely, with chunked processing, we need to perform some index arithmetic that is a bit tricky, and maybe a burden to maintain.
However, the code complexity may be small compared to all the other operations (sorting and merging) that are needed in both cases.
Storing "timestamps"
Timestamps can be accumulated in RAM. However, we don't know the array sizes before starting, and a final hstack() call is needed to "merge" the different chunks stored in a list. This doubles the memory requirements, so the RAM may be insufficient.
We can store as-we-go timestamps to a table using .append(). At the end we can load the table into memory with .read(). This is only 10% slower than the all-in-memory computation, but it avoids the "double-RAM" requirement. Moreover, we can avoid the final full load and have minimal RAM usage.
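A hedged sketch of the append-as-you-go approach (one single-column int64 table per particle; all names are illustrative):
import numpy as np
import tables as pytb

chunk_size = 2**18

with pytb.open_file('timestamps.h5', mode='w') as h5:
    dt = np.dtype([('timestamp', 'i8')])
    ts_table = h5.create_table('/', 'timestamps_p0', description=dt)

    for chunk_start in range(0, 10 * chunk_size, chunk_size):   # fake chunked pass
        counts_chunk = np.random.poisson(0.01, size=chunk_size).astype('u1')
        rows = np.empty(np.count_nonzero(counts_chunk), dtype=dt)
        rows['timestamp'] = chunk_start + np.nonzero(counts_chunk)[0]
        ts_table.append(rows)          # store timestamps as we go

    ts_table.flush()
    timestamps = ts_table.read()['timestamp']   # optional final in-RAM load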
H5Py
h5py is a much simpler library than pytables. For this use case of (mainly) sequential processing, it seems a better fit than pytables. The only missing feature is the lack of 'blosc' compression. Whether this results in a big performance penalty remains to be tested.
Use OpenMM to simulate particles (https://github.com/SimTk/openmm) and MDTraj (https://github.com/rmcgibbo/mdtraj) to handle trajectory IO.
The pytables vs pandas.HDFStore tests in the accepted answer are completely misleading:
The first critical error is that the timing did not apply os.fsync after flush, which makes the speed test unstable. So sometimes the pytables function is accidentally much faster.
The second problem is that the pytables and pandas versions have completely different shapes, due to a misunderstanding of the pytables.EArray shape argument. The author appends columns in the pytables version but appends rows in the pandas version.
The third problem is that the author used different chunkshapes during the comparison.
The author also forgot to disable the table index generation during store.append(), which is a time-consuming process.
The following table shows the performance results from his version and my fixes.
tbold is his pytables version and pdold is his pandas version. tbsync and pdsync are his versions with fsync() after flush() and with table index generation disabled during append. tbopt and pdopt are my optimized versions, with blosc:lz4 and complevel 9.
| name | dt | data size [MB] | comp ratio % | chunkshape | shape | clib | indexed |
|:-------|-----:|-----------------:|---------------:|:-------------|:--------------|:----------------|:----------|
| tbold | 5.11 | 300.00 | 84.63 | (15, 262144) | (15, 5242880) | blosc[5][1] | False |
| pdold | 8.39 | 340.00 | 39.26 | (1927,) | (5242880,) | blosc[5][1] | True |
| tbsync | 7.47 | 300.00 | 84.63 | (15, 262144) | (15, 5242880) | blosc[5][1] | False |
| pdsync | 6.97 | 340.00 | 39.27 | (1927,) | (5242880,) | blosc[5][1] | False |
| tbopt | 4.78 | 300.00 | 43.07 | (4369, 15) | (5242880, 15) | blosc:lz4[9][1] | False |
| pdopt | 5.73 | 340.00 | 38.53 | (3855,) | (5242880,) | blosc:lz4[9][1] | False |
pandas.HDFStore uses pytables under the hood. Thus, if we use them correctly, they should have no difference at all.
We can see that the pandas version has a larger data size. This is because pandas uses pytables.Table instead of EArray, and a pandas.DataFrame always has an index column. The first column of the Table object is this DataFrame index, which requires some extra space to save. This only affects IO performance a little, but it provides more features, such as out-of-core queries. So I still recommend pandas here. @MRocklin also mentioned a nicer out-of-core package, dask, if most of the features you use are just array operations instead of table-like queries. But the IO performance won't have a distinguishable difference.
h5f.root.emission._v_attrs
Out[82]:
/emission._v_attrs (AttributeSet), 15 attributes:
[CLASS := 'GROUP',
TITLE := '',
VERSION := '1.0',
data_columns := [],
encoding := 'UTF-8',
index_cols := [(0, 'index')],
info := {1: {'names': [None], 'type': 'RangeIndex'}, 'index': {}},
levels := 1,
metadata := [],
nan_rep := 'nan',
non_index_axes := [(1, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])],
pandas_type := 'frame_table',
pandas_version := '0.15.2',
table_type := 'appendable_frame',
values_cols := ['values_block_0']]
Here are the functions:
import os
from time import time

import numpy as np
import pandas as pd
import tables as pytb


def generate_emission(shape):
    """Generate fake emission."""
    emission = np.random.randn(*shape).astype('float32') - 1
    emission.clip(0, 1e6, out=emission)
    assert (emission >= 0).all()
    return emission
def test_puretb_earray(outpath,
                       n_particles=15,
                       time_chunk_size=2**18,
                       n_iter=20,
                       sync=True,
                       clib='blosc',
                       clevel=5,
                       ):
    time_size = n_iter * time_chunk_size
    data_file = pytb.open_file(outpath, mode="w")
    comp_filter = pytb.Filters(complib=clib, complevel=clevel)
    emission = data_file.create_earray('/', 'emission', atom=pytb.Float32Atom(),
                                       shape=(n_particles, 0),
                                       chunkshape=(n_particles, time_chunk_size),
                                       expectedrows=n_iter * time_chunk_size,
                                       filters=comp_filter)

    # generate simulated emission data
    t0 = time()
    for i in range(n_iter):
        emission_chunk = generate_emission((n_particles, time_chunk_size))
        emission.append(emission_chunk)
    emission.flush()
    if sync:
        os.fsync(data_file.fileno())
    data_file.close()
    t1 = time()
    return t1 - t0
def test_puretb_earray2(outpath,
                        n_particles=15,
                        time_chunk_size=2**18,
                        n_iter=20,
                        sync=True,
                        clib='blosc',
                        clevel=5,
                        ):
    time_size = n_iter * time_chunk_size
    data_file = pytb.open_file(outpath, mode="w")
    comp_filter = pytb.Filters(complib=clib, complevel=clevel)
    emission = data_file.create_earray('/', 'emission', atom=pytb.Float32Atom(),
                                       shape=(0, n_particles),
                                       expectedrows=time_size,
                                       filters=comp_filter)

    # generate simulated emission data
    t0 = time()
    for i in range(n_iter):
        emission_chunk = generate_emission((time_chunk_size, n_particles))
        emission.append(emission_chunk)
    emission.flush()
    if sync:
        os.fsync(data_file.fileno())
    data_file.close()
    t1 = time()
    return t1 - t0
def test_purepd_df(outpath,
                   n_particles=15,
                   time_chunk_size=2**18,
                   n_iter=20,
                   sync=True,
                   clib='blosc',
                   clevel=5,
                   autocshape=False,
                   oldversion=False,
                   ):
    time_size = n_iter * time_chunk_size
    emission = pd.HDFStore(outpath, mode='w', complib=clib, complevel=clevel)

    # generate simulated data
    t0 = time()
    for i in range(n_iter):
        # Generate fake emission
        emission_chunk = generate_emission((time_chunk_size, n_particles))
        df = pd.DataFrame(emission_chunk, dtype='float32')

        # create a globally unique index (time)
        # http://stackoverflow.com/questions/16997048/how-does-one-append-large-
        # amounts-of-data-to-a-pandas-hdfstore-and-get-a-natural/16999397#16999397
        try:
            nrows = emission.get_storer('emission').nrows
        except:
            nrows = 0
        df.index = pd.Series(df.index) + nrows

        if autocshape:
            emission.append('emission', df, index=False,
                            expectedrows=time_size)
        else:
            if oldversion:
                emission.append('emission', df)
            else:
                emission.append('emission', df, index=False)

    emission.flush(fsync=sync)
    emission.close()
    t1 = time()
    return t1 - t0
def _test_puretb_earray_nosync(outpath):
    return test_puretb_earray(outpath, sync=False)


def _test_purepd_df_nosync(outpath):
    return test_purepd_df(outpath, sync=False,
                          oldversion=True)


def _test_puretb_earray_opt(outpath):
    return test_puretb_earray2(outpath,
                               sync=False,
                               clib='blosc:lz4',
                               clevel=9)


def _test_purepd_df_opt(outpath):
    return test_purepd_df(outpath,
                          sync=False,
                          clib='blosc:lz4',
                          clevel=9,
                          autocshape=True)
testfns = {
    'tbold': _test_puretb_earray_nosync,
    'pdold': _test_purepd_df_nosync,
    'tbsync': test_puretb_earray,
    'pdsync': test_purepd_df,
    'tbopt': _test_puretb_earray_opt,
    'pdopt': _test_purepd_df_opt,
}
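A hypothetical driver (not part of the original benchmark) that runs each test function above and reports the elapsed time and resulting file size:
import os

results = {}
for name, fn in testfns.items():
    path = 'bench_%s.h5' % name      # illustrative output file name
    results[name] = fn(path)
    print('%-7s %6.2f s  %6.1f MB'
          % (name, results[name], os.path.getsize(path) / 1e6))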