I have a problem regarding a selective read-in routine while using h5py.
f = h5py.File('file.hdf5','r')
data = f['Data']
I have several positive values in the 'Data' dataset and also some placeholder values of -9999.
How can I get only the positive values for calculations like np.min?
np.ma.masked_array creates a full copy of the array, so all the benefits of using h5py are lost (regarding memory usage). The problem is that I get errors when I try to read datasets that exceed 100 million values per dataset using data = f['Data'][:,0].
Or, if that is not possible, is something like the following possible?
np.place(data[...], data[...] <= -9999, float('nan'))
Thanks in advance
You could use:
mask = f['Data'][...] >= 0   # the comparison needs the values in memory, so read them with [...]
data = f['Data'][mask]
although I am not sure how much memory the mask calculation itself uses.
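If even that mask is too expensive, you can stream the dataset in slices and reduce as you go. A minimal sketch, assuming the dataset is indexed along its first axis (the slice size of one million rows is an arbitrary choice):
import numpy as np
import h5py

with h5py.File('file.hdf5', 'r') as f:
    dset = f['Data']
    step = 1_000_000                       # rows per read; tune to your memory budget
    current_min = np.inf
    for start in range(0, dset.shape[0], step):
        chunk = dset[start:start + step]   # only this slice is held in memory
        valid = chunk[chunk >= 0]          # keep positives, drop the -9999 placeholders
        if valid.size:
            current_min = min(current_min, float(valid.min()))
print(current_min)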
I am trying to use numpy's SeedSequence to seed RNGs in different processes. However I am not sure whether I should use ss.generate_state or ss.spawn:
import concurrent.futures
import numpy as np

def worker(seed):
    rng = np.random.default_rng(seed)
    return rng.random(1)

num_repeats = 1000

ss = np.random.SeedSequence(243799254704924441050048792905230269161)
with concurrent.futures.ProcessPoolExecutor() as pool:
    result1 = np.hstack(list(pool.map(worker, ss.generate_state(num_repeats))))

ss = np.random.SeedSequence(243799254704924441050048792905230269161)
with concurrent.futures.ProcessPoolExecutor() as pool:
    result2 = np.hstack(list(pool.map(worker, ss.spawn(num_repeats))))
What are the differences between the two approaches and which should I use?
Using ss.generate_state is ~10% faster for the basic example above, likely because we are serializing plain integers instead of SeedSequence objects.
Well, SeedSequence is intended to generate good-quality seeds from not-so-good seeds.
Performance
generate_state is much faster than spawn.
The time spent transferring the object or the state should not be the main reason for the difference you noticed. You can test this without any multiprocessing at all:
%%timeit
ss.generate_state(num_repeats)

is about 100 times faster than

%%timeit
ss.spawn(num_repeats)
Seed size
In the code map(worker, ss.generate_state(num_repeats)) the RNGs are seeded with integers, while in map(worker, ss.spawn(num_repeats)) they are seeded with SeedSequence objects, which can potentially give a higher-quality initialization: the seed sequence can be used internally to generate a state vector with as many bits as required to completely initialize the RNG. To be honest, I expect an integer seed to be expanded in a similar way when initializing the RNG, not simply padded with zeros, for example.
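For a concrete look at what each approach hands to the workers, a quick check (the seed value here is arbitrary):
import numpy as np

ss = np.random.SeedSequence(12345)
print(ss.generate_state(3))   # an array of plain uint32 integers
print(ss.spawn(2))            # a list of child SeedSequence objects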
Repeated use
The most important difference is that generate_state gives the same result if called multiple times. On the other hand, spawn gives different results each call.
For illustration, check the following example:
ss = np.random.SeedSequence(243799254704924441050048792905230269161)
print('With generate_state')
print(np.hstack([worker(s) for s in ss.generate_state(5)]))
print(np.hstack([worker(s) for s in ss.generate_state(5)]))
print('With spawn')
print(np.hstack([worker(s) for s in ss.spawn(5)]))
print(np.hstack([worker(s) for s in ss.spawn(5)]))
With generate_state
[0.6625651 0.17654256 0.25323331 0.38250588 0.52670541]
[0.6625651 0.17654256 0.25323331 0.38250588 0.52670541]
With spawn
[0.06988312 0.40886412 0.55733136 0.43249601 0.53394111]
[0.64885573 0.16788206 0.12435154 0.14676836 0.51876499]
As you can see, the arrays generated by seeding different RNGs with generate_state are identical, not only immediately after construction but every time the method is called. spawn will also give you the same results from a newly constructed SeedSequence (I am using numpy 1.19.2); however, if you run the same code twice on the same instance, the second call will produce different seeds.
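If you want spawn-style seeding that is also reproducible across runs, one option (as far as I understand the SeedSequence API) is to rebuild the children explicitly from their spawn keys instead of spawning repeatedly from a shared instance:
entropy = 243799254704924441050048792905230269161
# The children of the first spawn(5) call carry spawn_key (0,), (1,), ..., (4,),
# so constructing them directly yields the same seeds every run.
children = [np.random.SeedSequence(entropy, spawn_key=(i,)) for i in range(5)]
print(np.hstack([worker(s) for s in children]))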
I have the function below, which iterates through every row of a data frame (using pandas apply) and determines which values from a prediction-probability matrix (L2) are valid by referencing another data frame (GST) for the valid values of a given row. The function just returns the row with the maximum valid probability assigned to the previously blank "Predicted Level 2" value of the data frame passed to the function (test_x2).
It is not a terribly complex function and it works fine on smaller datasets, but when I scale to 3-5 million records it starts to take far too long. I tried the multiprocessing module as well as dask/numba, but nothing improved the runtime (I'm not sure whether that is simply because the function is not vectorizable).
My question is twofold:
1) Is there a better way to write this? (I'm guessing there is.)
2) If not, what parallel computing strategies could work with this type of function? I've already tried a number of different Python options, and at this point I'm leaning towards running the larger datasets on totally separate machines. Feel free to suggest code to parallelize something like this. Thanks in advance for any guidance.
l2 = MNB.predict_proba(test_x)
l2_classes = MNB.classes_
L2 = pd.DataFrame(l2, columns=MNB.classes_)
test_x2["Predicted Level 2"] = ""

def predict_2(row):
    s = row["Predicted Level 1"]
    s = GST.loc[s, :]                      # all GST rows for this Level 1 label
    s.reset_index(inplace=True)
    Valid_Level2s = s["GST Level 2"].tolist()
    p2 = L2.loc[row.name, Valid_Level2s]   # .ix is deprecated; .loc does the same here
    max2 = p2.idxmax()                     # p2 is a Series, so no axis argument
    row["Predicted Level 2"] = max2
    return row

test_x2 = test_x2.apply(predict_2, axis=1)
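For what it's worth, here is a hypothetical vectorized rewrite. It assumes L2's rows align with test_x2's index and that GST is indexed by the Level 1 label (as the function above suggests); instead of one apply call per row, it builds one boolean mask of valid (row, Level 2) pairs and takes a single idxmax over the masked frame:
valid = pd.DataFrame(False, index=L2.index, columns=L2.columns)
for lvl1, rows in test_x2.groupby("Predicted Level 1").groups.items():
    cols = GST.loc[lvl1, :].reset_index()["GST Level 2"].tolist()  # valid Level 2s
    valid.loc[rows, cols] = True
# Invalid cells become NaN, which idxmax skips.
test_x2["Predicted Level 2"] = L2.where(valid).idxmax(axis=1)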
If I have a complex calculation of the form
tmp1 = tf.fun1(placeholder1,placeholder2)
tmp2 = tf.fun2(tmp1,placeholder3)
tmp3 = tf.fun3(tmp2)
ret = tf.fun4(tmp3)
and I calculate
ret_vals = sess.run(ret,feed_dict={placeholder1: vals1, placeholder2: vals2, placeholder3: vals3})
fun1, fun2, etc. are possibly costly operations on a lot of data.
If I run the above to get ret_vals, is it possible to access the intermediate values as well, later or at the same time, without re-running everything up to that value? For example, to get tmp2 I could re-run everything using
tmp2_vals = sess.run(tmp2,feed_dict={placeholder1: vals1, placeholder2: vals2, placeholder3: vals3})
But this looks like a complete waste of computation? Is there a way to access several of the intermediate results in a graph after performing one run?
The reason I want this is for debugging, testing, or logging progress while ret_vals gets calculated, e.g. in an optimization loop. Every step in which I run the ret_vals calculation is costly, but I want to see some of the intermediate results that were calculated.
If I do something like
tmp2_vals, ret_vals = sess.run([tmp2, ret], ...)
does this guarantee that the graph will only get run once (instead of one time for tmp2 and one time for ret) like I want it?
Have you looked at tf.Print? It is an identity op with a printing side effect. You can insert it into your graph right after tmp2 to see its value whenever the graph runs. Note that by default only the first few values of the tensor are printed; you can change how many with the summarize attribute (and first_n controls how many times the op logs at all).
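A minimal self-contained sketch under TF 1.x, with tf.square and tf.reduce_sum standing in for the question's fun* placeholders:
import tensorflow as tf  # TF 1.x graph mode

p1 = tf.placeholder(tf.float32, shape=[None])
tmp2 = tf.square(p1)                                            # stand-in for fun1/fun2
tmp2 = tf.Print(tmp2, [tmp2], message="tmp2 = ", summarize=10)  # logs when executed
ret = tf.reduce_sum(tmp2)                                       # stand-in for fun3/fun4

with tf.Session() as sess:
    print(sess.run(ret, feed_dict={p1: [1.0, 2.0, 3.0]}))       # prints tmp2 as a side effect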
I am a beginner in text mining. When I run LDA() over a huge dataset with 996165 observations, it displays the following error:
Error in LDA(dtm, k, method = "Gibbs", control = list(nstart = nstart, :
Each row of the input matrix needs to contain at least one non-zero entry.
I am pretty sure that there are no missing values in my corpus. Tabulating is.na() over the "DocumentTermMatrix" (a "simple_triplet_matrix") gives:
table(is.na(dtm[[1]]))
#FALSE
#57100956
table(is.na(dtm[[2]]))
#FALSE
#57100956
I am a little confused about where "57100956" comes from. Since my dataset is pretty large, I don't know how to check why this error occurs. My LDA command is:
ldaOut<-LDA(dtm,k, method="Gibbs", control=list(nstart=nstart, seed = seed, best=best, burnin = burnin, iter = iter, thin=thin))
Can anyone provide some insights? Thanks.
In my opinion the problem is not the presence of missing values, but the presence of all-zero rows.
To check it:
row.sum <- apply(dtm, 1, sum)   # sum over each row of the document-term matrix
(For a matrix this large, slam::row_sums(dtm) computes the same sums without densifying it.)
Then you can delete all rows which are all zero:
dtm <- dtm[row.sum != 0, ]
Now dtm should have only non-zero rows.
I had the same problem. The design matrix (dtm in your case) had rows with all zeroes because some documents did not contain any of the counted words (i.e. all their frequencies were zero). I suppose this somehow causes a singular-matrix problem somewhere along the line. I fixed it by adding a common word to each of the documents so that every row would have at least one non-zero entry. At the very least, the LDA ran successfully and classified each of the documents. Hope this helps!
Edit: I need to do this in numpy/scipy, sorry if that wasn't clear. Also, added requirements.
I need to hold a 50,000x50,000 sparse matrix/2d-array, with ~5% of the cells, uniformly distributed, being non-empty. I will need to:
Read the 5% non-empty data from a DB and assign it to matrix/2d-array cells, as quickly as possible.
Use as little memory as possible.
Use fancy indexing (take the indexes and values of all non-empty cells in a column, say). This is nice-to-have; memory use and construction time are more important.
Once constructed, the matrix will not change.
I will, however, want to take its transpose, preferably with O(1) memory and time.
What's the most efficient way of achieving this?
Can I hold NaNs instead of zeros to indicate "empty" cells (0 is a valid value for me), and can I efficiently run nansum and nanmean?
If not, can I efficiently take the indexes and values of all non-zeros in a given column/row?
http://en.wikipedia.org/wiki/Sparse_matrix has a good summary of a few different methods. If the data you retrieve from the DB is unordered, I'd recommend 'List of Lists' (or, more efficiently in this case, probably an array of lists of column/value pairs). If you can guarantee ordering, I'd recommend the 'Yale format'. Both solutions make storing NaNs unnecessary and make nansum/nanmean-style reductions fast.
Both offer slow insertions, though, and will use about 10% of the space of the full matrix.
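In scipy terms, a rough sketch (the 'List of Lists' format corresponds to scipy's lil_matrix and the Yale format to csr_matrix/csc_matrix; the sizes and values here are just for illustration):
import scipy.sparse as sp

m = sp.lil_matrix((50000, 50000))     # list-of-lists: cheap incremental inserts
m[123, 456] = 1.5                     # fill cell by cell as rows arrive from the DB
frozen = m.tocsc()                    # Yale-style compressed form, best once frozen
col_values = frozen.getcol(456).data  # stored (non-zero) values of one column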
Well, for my purposes it seems like CSC is the way to go. With a 5% "sparsity factor", the memory that the row indexes in CSC take is still worth it. Here's the code I used to test that the operations I need really are fast:
import random
import numpy as np
import scipy.sparse as sp

def build_csc(N, SPARSITY_FACTOR):
    data = []
    row_indexes = []
    column_indexes = [0] * (N + 1)
    current_index = 0
    for j in range(N):
        column_indexes[j] = current_index      # start of column j in the data array
        for i in range(N):
            if random.random() < SPARSITY_FACTOR:
                row_indexes.append(i)
                data.append(random.random())
                current_index += 1
    column_indexes[N] = current_index
    return sp.csc_matrix((data, row_indexes, column_indexes),
                         shape=(N, N), dtype=float)

def take_from_col(m, col_index):
    col = m[:, col_index]          # slicing a column out of a CSC matrix is cheap
    indexes = col.nonzero()[0]     # row indexes of the non-zero entries
    values = col.data              # the corresponding stored values
    return indexes, values
Running this in %timeit shows that this is indeed fast.
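For instance (hypothetical sizes), with the helpers above:
m = build_csc(1000, 0.05)                 # 1,000x1,000 at ~5% density
indexes, values = take_from_col(m, 123)   # row indexes + values of one column
mt = m.T                                  # essentially free: the same data viewed as CSR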