Obtaining total number of records from .tfrecords file in Tensorflow - tensorflow

Is it possible to obtain the total number of records from a .tfrecords file? Related to this, how does one generally keep track of the number of epochs that have elapsed while training models? While it is possible for us to specify the batch_size and num_of_epochs, I am not sure if it is straightforward to obtain values such as the current epoch, the number of batches per epoch, etc. - just so that I could have more control over how the training is progressing. Currently, I'm just using a dirty hack to compute this, as I know beforehand how many records there are in my .tfrecords file and the size of my minibatches. Appreciate any help.

To count the number of records, you should be able to use tf.python_io.tf_record_iterator.
c = 0
for fn in tf_records_filenames:
    for record in tf.python_io.tf_record_iterator(fn):
        c += 1
To just keep track of the model training, TensorBoard comes in handy.

No, it is not possible. TFRecord does not store any metadata about the data being stored inside. This file
represents a sequence of (binary) strings. The format is not random
access, so it is suitable for streaming large amounts of data but not
suitable if fast sharding or other non-sequential access is desired.
If you want, you can store this metadata manually, or use a record_iterator to get the number (you will need to iterate through all the records that you have):
sum(1 for _ in tf.python_io.tf_record_iterator(file_name))
If you want to know the current epoch, you can do this either from tensorboard or by printing the number from the loop.
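For example, once the record count is known, the number of batches per epoch and the current epoch can be derived directly in the training loop. A minimal sketch (num_records, batch_size and total_steps are placeholders, not values from the question):
num_batches_per_epoch = (num_records + batch_size - 1) // batch_size  # ceil division, counts the final partial batch
for step in range(total_steps):
    current_epoch = step // num_batches_per_epoch
    batch_in_epoch = step % num_batches_per_epoch
    # ... run one training step here and log current_epoch / batch_in_epoch ...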

As tf.io.tf_record_iterator is being deprecated, the great answer by Salvador Dali should now read:
tf.enable_eager_execution()
sum(1 for _ in tf.data.TFRecordDataset(file_name))

As per the deprecation warning on tf_record_iterator, we can also use eager execution to count records.
#!/usr/bin/env python
from __future__ import print_function
import tensorflow as tf
import sys
assert len(sys.argv) == 2, \
    "USAGE: {} <file_glob>".format(sys.argv[0])
tf.enable_eager_execution()
input_pattern = sys.argv[1]
# Expand glob if there is one
input_files = tf.io.gfile.glob(input_pattern)
# Create the dataset
data_set = tf.data.TFRecordDataset(input_files)
# Count the records
records_n = sum(1 for record in data_set)
print("records_n = {}".format(records_n))

As tf.enable_eager_execution() is no longer valid, use:
tf.compat.v1.enable_eager_execution()
sum(1 for _ in tf.data.TFRecordDataset(FILENAMES))
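In TF 2.x, eager execution is on by default, so the dataset can also be counted without a Python generator expression. A small sketch of the same idea, assuming FILENAMES is a list of .tfrecords paths:
import tensorflow as tf

# Fold over the dataset; record contents are never materialized in Python
n = tf.data.TFRecordDataset(FILENAMES).reduce(0, lambda count, _: count + 1)
print(int(n))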

Related

One Hot Encoding of large dataset

I want to build a recommendation system using association rules with the Apriori algorithm implemented in the mlxtend library. My sales data contains information about 36 million transactions and 50k unique products.
I tried to use sklearn's OneHotEncoder and pandas' get_dummies(), but both give an OOM error as they are not able to create a frame of shape (36 mil, 50k):
MemoryError: Unable to allocate 398. GiB for an array with shape (36113798, 50087) and data type uint8
Is there any other solution?
Like you, I too had an out-of-memory error with mlxtend at first, but the following small changes fixed the problem completely.
from mlxtend.preprocessing import TransactionEncoder
import pandas as pd

te = TransactionEncoder()
# te_ary = te.fit(itemSetList).transform(itemSetList)
# df = pd.DataFrame(te_ary, columns=te.columns_)
fitted = te.fit(itemSetList)
te_ary = fitted.transform(itemSetList, sparse=True)  # seemed to work well
df = pd.DataFrame.sparse.from_spmatrix(te_ary, columns=te.columns_)  # seemed to work well
# now you can call mlxtend's fpgrowth() followed by association_rules()
You should also use fpgrowth instead of apriori on big transaction datasets, because apriori scales poorly there. fpgrowth is a more efficient and modern algorithm than apriori, but it gives equivalent results. The mlxtend lib supports both apriori and fpgrowth.
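As a rough sketch of what that pipeline looks like with mlxtend (the min_support and min_threshold values are placeholders, not values from the question):
from mlxtend.frequent_patterns import fpgrowth, association_rules

# df is the sparse one-hot DataFrame built above
frequent_itemsets = fpgrowth(df, min_support=0.01, use_colnames=True)
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.5)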
I think a good solution would be to use embeddings instead of one-hot encoding for your problem. In addition, I recommend that you split your dataset into smaller subsets to further avoid the memory consumption problems.
You should also consult this thread : https://datascience.stackexchange.com/questions/29851/one-hot-encoding-vs-word-embeding-when-to-choose-one-or-another

Tensorboard extract scalar by a script

I want to extract my scalars by a script, because I have a lot of test runs.
Based on this answer I can get all tf summaries of one board. I can even separate the tag for the loss:
<class 'tensorflow.core.framework.summary_pb2.Value'>
tag: "training_loss"
simple_value: 0.0590251199901104
But it seems that every loss value is saved as its own summary_pb2.Value. I could extract every single loss value, but I can't find any information about the step number or time of these values, so that I can order them (they all have the same tag as well). Unfortunately this is not well documented; does someone know how I can get this information?
I would use the EventAccumulator:
You can pass the model directory to the _load_run() function.
from tensorboard.backend.event_processing import event_accumulator
import numpy as np

def _load_run(path):
    event_acc = event_accumulator.EventAccumulator(path)
    event_acc.Reload()
    data = {}
    for tag in sorted(event_acc.Tags()["scalars"]):
        x, y = [], []
        for scalar_event in event_acc.Scalars(tag):
            x.append(scalar_event.step)
            y.append(scalar_event.value)
        data[tag] = (np.asarray(x), np.asarray(y))
    return data

print(_load_run("/models/vae/run_1"))
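Each ScalarEvent also carries a wall_time field, so if you need the values ordered by time rather than by step you can collect that too. A small standalone sketch, assuming event_acc is an already-reloaded EventAccumulator and using the "training_loss" tag from the question:
for scalar_event in event_acc.Scalars("training_loss"):
    print(scalar_event.step, scalar_event.wall_time, scalar_event.value)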
Hope this helps!

How to merge very large numpy arrays?

I will have many NumPy arrays stored in npz files, which are being saved using the savez_compressed function.
I am splitting the information into many arrays because, otherwise, the functions I am using crash due to memory issues. The data is not sparse.
I will need to join all that info into one single array (to be able to process it with some routines) and store it on disk (to process it many times with different parameters).
The arrays won't fit into RAM+swap memory.
How can I merge them into a single array and save it to disk?
I suspect that I should use mmap_mode, but I don't know exactly how. Also, I imagine there can be some performance issues if I do not reserve contiguous disk space at first.
I have read this post but I still cannot figure out how to do it.
EDIT
Clarification: I have made many functions to process similar data, some of which require an array as an argument. In some cases I could pass them only part of this large array by using slicing. But it is still important to have all the info in such an array.
This is because of the following: the arrays contain information (from physical simulations) that is time ordered. Among the arguments of the functions, the user can set the initial and final time to process. Also, he/she can set the size of the processing chunk (which is important because this affects performance, but the allowed chunk size depends on the computational resources). Because of this, I cannot store the data as separated chunks.
The way in which this particular array (the one I am trying to create) is built is not important, as long as it works.
You should be able to load the data chunk by chunk into a np.memmap array:
import numpy as np

data_files = ['file1.npz', 'file2.npz', ...]

# If you do not know the final size beforehand you need to
# go through the chunks once first to check their sizes
rows = 0
cols = None
dtype = None
for data_file in data_files:
    with np.load(data_file) as data:
        chunk = data['array']
        rows += chunk.shape[0]
        cols = chunk.shape[1]
        dtype = chunk.dtype

# Once the size is known, create the memmap and write the chunks
merged = np.memmap('merged.buffer', dtype=dtype, mode='w+', shape=(rows, cols))
idx = 0
for data_file in data_files:
    with np.load(data_file) as data:
        chunk = data['array']
        merged[idx:idx + len(chunk)] = chunk
        idx += len(chunk)
However, as pointed out in the comments, working across a dimension which is not the fastest one will be very slow.
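Once the merged buffer exists, it can later be reopened lazily so that only the requested slices are read from disk. A small sketch, assuming you store (or otherwise know) the shape and dtype of the merged array:
# Reopen the merged buffer read-only; nothing is loaded until it is sliced
merged = np.memmap('merged.buffer', dtype=dtype, mode='r', shape=(rows, cols))
window = merged[100000:200000]  # hypothetical time window requested by the user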
This is an example of how to write 90 GB of easily compressible data to disk. The most important points are mentioned here: https://stackoverflow.com/a/48405220/4045774
The write/read speed should be in the range of 300-500 MB/s on a normal HDD.
Example
import numpy as np
import tables  # register blosc
import h5py as h5
import h5py_cache as h5c
import time

def read_the_arrays():
    # Easily compressible data
    # A lot smaller than your actual array, I do not have that much RAM
    return np.arange(10 * int(15E3)).reshape(10, int(15E3))

def writing(hdf5_path):
    # As we are writing whole chunks here this isn't really needed, but
    # if you forget to set a large enough chunk cache size when not writing or reading
    # whole chunks, the performance will be extremely bad (chunks can only be read or written as a whole)
    f = h5c.File(hdf5_path, 'w', chunk_cache_mem_size=1024**2*1000)  # 1000 MB cache size
    dset = f.create_dataset("your_data", shape=(int(15E5), int(15E3)), dtype=np.float32,
                            chunks=(10000, 100), compression=32001,
                            compression_opts=(0, 0, 0, 0, 9, 1, 1), shuffle=False)
    # Let's write to the dataset
    for i in range(0, int(15E5), 10):
        dset[i:i+10, :] = read_the_arrays()
    f.close()

def reading(hdf5_path):
    f = h5c.File(hdf5_path, 'r', chunk_cache_mem_size=1024**2*1000)  # 1000 MB cache size
    dset = f["your_data"]
    # Read chunks
    for i in range(0, int(15E3), 10):
        data = np.copy(dset[:, i:i+10])
    f.close()

hdf5_path = 'Test.h5'
t1 = time.time()
writing(hdf5_path)
print(time.time() - t1)
t1 = time.time()
reading(hdf5_path)
print(time.time() - t1)
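If you prefer not to depend on h5py_cache, recent h5py versions (2.9 and later, if I recall correctly) let you size the chunk cache directly through the rdcc_nbytes argument. A minimal sketch of the same idea (not part of the original benchmark, so timings may differ):
import h5py as h5

f = h5.File('Test.h5', 'w', rdcc_nbytes=1024**2*1000)  # ~1000 MB chunk cache
dset = f.create_dataset("your_data", shape=(int(15E5), int(15E3)), dtype='f4',
                        chunks=(10000, 100), compression='gzip', shuffle=False)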

Feeding .npy (numpy files) into tensorflow data pipeline

TensorFlow seems to lack a reader for ".npy" files.
How can I read my data files into the new tensorflow.data.Dataset pipeline?
My data doesn't fit in memory.
Each object is saved in a separate ".npy" file. Each file contains 2 different ndarrays as features and a scalar as their label.
It is actually possible to read NPY files directly with TensorFlow instead of TFRecords. The key pieces are tf.data.FixedLengthRecordDataset and tf.io.decode_raw, along with a look at the documentation of the NPY format. For simplicity, let's suppose that a float32 NPY file containing an array with shape (N, K) is given, and you know the number of features K beforehand, as well as the fact that it is a float32 array. An NPY file is just a binary file with a small header followed by the raw array data (object arrays are different, but we're considering numbers here). In short, you can find the size of this header with a function like this:
def npy_header_offset(npy_path):
    with open(str(npy_path), 'rb') as f:
        if f.read(6) != b'\x93NUMPY':
            raise ValueError('Invalid NPY file.')
        version_major, version_minor = f.read(2)
        if version_major == 1:
            header_len_size = 2
        elif version_major == 2:
            header_len_size = 4
        else:
            raise ValueError('Unknown NPY file version {}.{}.'.format(version_major, version_minor))
        header_len = sum(b << (8 * i) for i, b in enumerate(f.read(header_len_size)))
        header = f.read(header_len)
        if not header.endswith(b'\n'):
            raise ValueError('Invalid NPY file.')
        return f.tell()
With this you can create a dataset like this:
import tensorflow as tf
npy_file = 'my_file.npy'
num_features = ...
dtype = tf.float32
header_offset = npy_header_offset(npy_file)
dataset = tf.data.FixedLengthRecordDataset([npy_file], num_features * dtype.size, header_bytes=header_offset)
Each element of this dataset contains a long string of bytes representing a single example. You can now decode it to obtain an actual array:
dataset = dataset.map(lambda s: tf.io.decode_raw(s, dtype))
The elements will have indeterminate shape, though, because TensorFlow does not keep track of the length of the strings. You can just enforce the shape since you know the number of features:
dataset = dataset.map(lambda s: tf.reshape(tf.io.decode_raw(s, dtype), (num_features,)))
Similarly, you can choose to perform this step after batching, or combine it in whatever way you feel like.
The limitation is that you have to know the number of features in advance. It is possible to extract it from the NumPy header, though it is a bit of a pain, and hardly possible from within TensorFlow, so the file names would need to be known in advance. Another limitation is that, as it stands, the solution requires you to either use only one file per dataset or files that have the same header size, although if you know that all the arrays have the same size that should actually be the case.
Admittedly, if one considers this kind of approach it may just be better to have a pure binary file without headers, and either hard code the number of features or read them from a different source...
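That said, if you only need the shape and dtype once, outside the graph, NumPy's own header helpers will read them for you. A small sketch, assuming an uncompressed version 1.0 .npy file:
import numpy as np

with open('my_file.npy', 'rb') as f:
    version = np.lib.format.read_magic(f)
    shape, fortran_order, dtype = np.lib.format.read_array_header_1_0(f)
print(shape, dtype)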
You can do it with tf.py_func, see the example here.
The parse function would simply decode the filename from bytes to string and call np.load.
Update: something like this:
def read_npy_file(item):
    data = np.load(item.decode())
    return data.astype(np.float32)

file_list = ['/foo/bar.npy', '/foo/baz.npy']
dataset = tf.data.Dataset.from_tensor_slices(file_list)
dataset = dataset.map(
    lambda item: tuple(tf.py_func(read_npy_file, [item], [tf.float32,])))
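Because tf.py_func hides the output shape from TensorFlow, it can help to set it explicitly afterwards. A small sketch, where num_features is a placeholder you know in advance:
def set_known_shape(x):
    x.set_shape((num_features,))  # restore the static shape lost by tf.py_func
    return x

dataset = dataset.map(set_known_shape)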
Does your data fit into memory? If so, you can follow the instructions from the Consuming NumPy Arrays section of the docs:
Consuming NumPy arrays
If all of your input data fit in memory, the simplest way to create a Dataset from them is to convert them to tf.Tensor objects and use Dataset.from_tensor_slices().
# Load the training data into two NumPy arrays, for example using `np.load()`.
with np.load("/var/data/training_data.npy") as data:
    features = data["features"]
    labels = data["labels"]

# Assume that each row of `features` corresponds to the same row as `labels`.
assert features.shape[0] == labels.shape[0]

dataset = tf.data.Dataset.from_tensor_slices((features, labels))
In the case that the file doesn't fit into memory, it seems like the only recommended approach is to first convert the npy data into a TFRecord format, and then use the TFRecord data set format, which can be streamed without fully loading into memory.
Here is a post with some instructions.
FWIW, it seems crazy to me that TFRecord cannot be instantiated with a directory name or file name(s) of npy files directly, but it appears to be a limitation of plain Tensorflow.
If you can split the single large npy file into smaller files that each roughly represent one batch for training, then you could write a custom data generator in Keras that would yield only the data needed for the current batch.
In general, if your dataset cannot fit in memory, storing it as one single large npy file makes it very hard to work with, and preferably you should reformat the data first, either as TFRecord or as multiple npy files, and then use other methods.
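A minimal sketch of such a generator, assuming the large array has already been split into per-batch .npy files (the file layout and names are hypothetical):
from tensorflow import keras
import numpy as np

class NpyBatchSequence(keras.utils.Sequence):
    # Each __getitem__ call loads exactly one pre-split batch file from disk
    def __init__(self, feature_files, label_files):
        self.feature_files = feature_files
        self.label_files = label_files

    def __len__(self):
        return len(self.feature_files)

    def __getitem__(self, idx):
        x = np.load(self.feature_files[idx])
        y = np.load(self.label_files[idx])
        return x, y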
Problem setup
I had a folder with images that were being fed into an InceptionV3 model for extraction of features. This seemed to be a huge bottleneck for the entire process. As a workaround, I extracted features from each image and then stored them on disk in a .npy format.
Now I had two folders, one for the images and one for the corresponding .npy files. There was an evident problem with the loading of .npy files in the tf.data.Dataset pipeline.
Workaround
I came across TensorFlow's official tutorial on show attend and tell which had a great workaround for the problem this thread (and I) were having.
Load numpy files
First off we need to create a mapping function that accepts the .npy file name and returns the numpy array.
# Load the numpy files
def map_func(feature_path):
    feature = np.load(feature_path)
    return feature
Use the tf.numpy_function
With tf.numpy_function we can wrap any Python function and use it as a TensorFlow op. The function must accept a numpy object (which is exactly what we want).
We create a tf.data.Dataset with the list of all the .npy filenames.
dataset = tf.data.Dataset.from_tensor_slices(feature_paths)
We then use the map function of the tf.data.Dataset API to do the rest of our task.
# Use map to load the numpy files in parallel
dataset = dataset.map(lambda item: tf.numpy_function(
                          map_func, [item], tf.float16),
                      num_parallel_calls=tf.data.AUTOTUNE)
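From here the usual batching and prefetching can be chained on, for example (the batch size is just a placeholder):
dataset = dataset.batch(32).prefetch(tf.data.AUTOTUNE)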

Writing a netcdf4 file is 6-times slower than writing a netcdf3_classic file and the file is 8-times as big?

I am using the netCDF4 library in python and just came across the issue stated in the title. At first I was blaming groups for this, but it turns out that it is a difference between the NETCDF4 and NETCDF3_CLASSIC formats (edit: and it appears related to our Linux installation of the netcdf libraries).
In the program below, I am creating a simple time series netcdf file of the same data in 2 different ways: 1) as NETCDF3_CLASSIC file, 2) as NETCDF4 flat file (creating groups in the netcdf4 file doesn't make much of a difference). What I find with a simple timing and the ls command is:
1) NETCDF3 1.3483 seconds 1922704 bytes
2) NETCDF4 flat 8.5920 seconds 15178689 bytes
It's exactly the same routine which creates 1) and 2), the only difference is the format argument in the netCDF4.Dataset method. Is this a bug or a feature?
Thanks, Martin
Edit: I have now found that this must have something to do with our local installation of the netCDF library on a Linux computer. When I use the program version below (trimmed down to the essentials) on my Windows laptop, I get similar file sizes, and netcdf4 is actually almost twice as fast as netcdf3! When I run the same program on our Linux system, I can reproduce the old results. Thus, this question is apparently not related to Python.
Sorry for the confusion.
New code:
import datetime as dt
import numpy as np
import netCDF4 as nc

def write_to_netcdf_single(filename, data, series_info, format='NETCDF4'):
    vname = 'testvar'
    t0 = dt.datetime.now()
    with nc.Dataset(filename, "w", format=format) as f:
        # define dimensions and variables
        dim = f.createDimension('time', None)
        time = f.createVariable('time', 'f8', ('time',))
        time.units = "days since 1900-01-01 00:00:00"
        time.calendar = "gregorian"
        param = f.createVariable(vname, 'f4', ('time',))
        param.units = "kg"
        # define global attributes
        for k, v in sorted(series_info.items()):
            setattr(f, k, v)
        # store data values
        time[:] = nc.date2num(data.time, units=time.units, calendar=time.calendar)
        param[:] = data.value
    t1 = dt.datetime.now()
    print "Writing file %s took %10.4f seconds." % (filename, (t1-t0).total_seconds())

if __name__ == "__main__":
    # create an array with 1 mio values and datetime instances
    time = np.array([dt.datetime(2000,1,1)+dt.timedelta(hours=v) for v in range(1000000)])
    values = np.arange(0., 1000000.)
    data = np.array(zip(time, values), dtype=[('time', dt.datetime), ('value', 'f4')])
    data = data.view(np.recarray)
    series_info = {'attr1':'dummy', 'attr2':'dummy2'}
    filename = "testnc4.nc"
    write_to_netcdf_single(filename, data, series_info)
    filename = "testnc3.nc"
    write_to_netcdf_single(filename, data, series_info, format='NETCDF3_CLASSIC')
[old code deleted because it had too much unnecessary stuff]
The two file formats do have different characteristics. The classic file format was dead simple (well, simpler than the new format: http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/Classic-Format-Spec.html#Classic-Format-Spec ): a small header described all the data, and then (since you have 3 record variables) the 3 record variables get interleaved.
Nice and simple, but you only get one UNLIMITED dimension, there's no facility for parallel I/O, and no way to organize data into groups.
Enter the new HDF5-based back end, introduced in NetCDF-4.
In exchange for new features, more flexibility, and fewer restrictions on file and variable size, you have to pay a bit of a price. For large datasets the costs are amortized, but your variables are (relatively speaking) kind of small.
I think the file size discrepancy is exacerbated by your use of record variables. In order to support arrays that are growable in N dimensions, there is more metadata associated with each record entry in the NetCDF-4 format.
HDF5 uses the "reader makes right" convention, too. Classic NetCDF says "all data will be big-endian", but HDF5 encodes a bit of information about how the data was stored. If the reader process has the same architecture as the writer process (which is common, as it would be on your laptop or when restarting from a simulation checkpoint), then no conversion needs to be performed.
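If the per-record metadata of the record variable turns out to be the culprit, one thing worth trying with netCDF4-python is giving the variable an explicit chunk size and enabling zlib compression when it is created. A small sketch of the idea applied to the createVariable call in the question's script, with an illustrative (not benchmarked) chunk size and compression level:
param = f.createVariable(vname, 'f4', ('time',),
                         zlib=True, complevel=4,
                         chunksizes=(100000,))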
This question is unlikely to help others as it appears to be a site-specific problem related to the interplay between netcdf libraries and the python netCDF4 module.