TensorFlow - how to import data with multiple labels

I'm trying to create a model in TensorFlow which predicts the ideal item for a user by predicting a vector of numbers.
I have created a dataset in Spark and saved it as a TFRecord using the Spark TensorFlow connector.
In the dataset I have several hundred features and 20 labels in each row. For easier manipulation, I have given every column a prefix: 'feature_' or 'label_'.
Now I'm trying to write an input function for TensorFlow, but I can't figure out how to parse the data.
So far I have written this:
def dataset_input_fn():
    path = ['data.tfrecord']
    dataset = tf.data.TFRecordDataset(path)

    def parser(record):
        example = tf.train.Example()
        example.ParseFromString(record)
        # TODO: no idea what to do here
        # features = parsed["features"]
        # label = parsed["label"]
        # return features, label

    dataset = dataset.map(parser)
    dataset = dataset.shuffle(buffer_size=10000)
    dataset = dataset.batch(32)
    dataset = dataset.repeat(100)
    iterator = dataset.make_one_shot_iterator()
    features, labels = iterator.get_next()
    return features, labels
How can I split the Example into a feature set and a label set? I have tried to split the Example into two parts, but there is no way to even access it. The only way I have managed to access it is by printing it out, which gives me something like this:
features {
  ...
  feature {
    key: "feature_wishlist_hour"
    value {
      int64_list {
        value: 0
      }
    }
  }
  feature {
    key: "label_emb_1"
    value {
      float_list {
        value: 0.4
      }
    }
  }
  feature {
    key: "label_emb_2"
    value {
      float_list {
        value: 0.8
      }
    }
  }
  ...
}

Your parser function should mirror how you constructed the example proto. In your case it should be something like:
# example proto decode
def parser(example_proto):
    keys_to_features = {'feature_wishlist_hour': tf.FixedLenFeature((), tf.int64),
                        'label_emb_1': tf.FixedLenFeature((), tf.float32),
                        'label_emb_2': tf.FixedLenFeature((), tf.float32)}
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    return parsed_features['feature_wishlist_hour'], (parsed_features['label_emb_1'], parsed_features['label_emb_2'])
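Since you have several hundred prefixed columns, a minimal sketch (assuming you have the full column-name lists available, and that feature columns are int64 and label columns float32 as in your printed proto; the lists below are placeholders) would build the feature spec programmatically and split the parsed dict by prefix:
feature_cols = ['feature_wishlist_hour']        # every column prefixed 'feature_'
label_cols = ['label_emb_1', 'label_emb_2']     # every column prefixed 'label_'

def parser(example_proto):
    # build the parsing spec from the column lists instead of typing it out
    spec = {c: tf.FixedLenFeature((), tf.int64) for c in feature_cols}
    spec.update({c: tf.FixedLenFeature((), tf.float32) for c in label_cols})
    parsed = tf.parse_single_example(example_proto, spec)
    # split the parsed dict into a feature dict and a label dict by prefix
    features = {c: parsed[c] for c in feature_cols}
    labels = {c: parsed[c] for c in label_cols}
    return features, labels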
EDIT: From the comments it seems you are encoding each of the features as a key/value pair, which is not right. Check this answer: Numpy to TFrecords: Is there a more simple way to handle batch inputs from tfrecords? for how to write it properly.

Related

How to save large float into TFRecord format? float_list/float32 seems to truncate the values

We write processed data into TFRecords, and we are noticing data loss when it is read back. A reproducible example is below. The strange thing is that it doesn't just drop the decimals, but seems to randomly round values up or down. Since TFRecords only allow float32, int64 and string, we are not sure what other options to try.
We are writing these values
[20191221.1, 20191222.1, 20191223.1, 20191224.1, 20191225.1, 20191226.1, 20191227.1, 20191228.1, 20191229.1, 20191230.1]
But reading from tfrecords returns these values
tf.Tensor(
[20191222. 20191222. 20191224. 20191224. 20191226. 20191226. 20191228.
20191228. 20191230. 20191230.], shape=(10,), dtype=float32)
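The rounding is plain float32 precision loss, not a TFRecord bug: float32 carries only about 7 significant decimal digits, and these values need 9. A quick check (a minimal sketch, assuming NumPy is available) reproduces it before TFRecords are even involved:
import numpy as np

# 20191221.1 is above 2**24, where float32 spacing is 2.0, so the
# fraction is lost and the value rounds to the nearest even integer
print(np.float32(20191221.1))  # 20191222.0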
Reproducible Code
import tensorflow as tf

def write_date_tfrecord():
    # writes 10 dummy values to replicate the issue
    data = [20191221.1 + x for x in range(0, 10)]
    print("Writing data - ", data)
    example = tf.train.Example(
        features=tf.train.Features(
            feature={
                'data': tf.train.Feature(float_list=tf.train.FloatList(value=data))
            }
        ))
    writer = tf.io.TFRecordWriter("data.tf_record")
    writer.write(example.SerializeToString())
    writer.close()  # flush and close so the record is fully written

def parse_function(serialized_example):
    features = {
        'data': tf.io.FixedLenSequenceFeature([], tf.float32, allow_missing=True)
    }
    features = tf.io.parse_single_example(serialized=serialized_example, features=features)
    data = features['data']
    return data

def dataset_generator():
    trRecordDataset = tf.data.TFRecordDataset("data.tf_record")
    trRecordDataset = trRecordDataset.map(parse_function, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    return trRecordDataset

if __name__ == '__main__':
    write_date_tfrecord()
    generator = dataset_generator()
    for data in generator:
        print(data)
This solved my issue. I hit the same problem when writing audio files as a floating-point matrix using FloatList: FloatList stores float32, which carries only about 7 significant decimal digits, so these values lose their fractional part. When I instead used BytesList to store the raw float64 bytes in the tfrecords and decoded them on read, the issue was resolved. Note that decoding with tf.float32 will not solve it; the bytes must be decoded with tf.float64.
import os
import tensorflow as tf

def _bytes_feature2(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def serialize_example(sound):
    feature = {
        'snd': _bytes_feature2(sound.tobytes()),
    }
    example_proto = tf.train.Example(features=tf.train.Features(feature=feature))
    return example_proto.SerializeToString()

def write_tfrecords(rf, snd):
    nsamples = len(snd)
    with tf.io.TFRecordWriter(rf) as writer:
        for i in range(nsamples):
            SND = snd[i]
            tf_example = serialize_example(SND)
            writer.write(tf_example)

# writing records (train is a list of float64 numpy arrays, defined elsewhere)
write_tfrecords(os.getcwd()+'\\tfrec\\'+'train.tfrecords', train)

# loading records
raw_dataset = tf.data.TFRecordDataset(os.getcwd()+'\\tfrec\\'+'train.tfrecords')

def parse_record(record):
    name_to_features = {
        'snd': tf.io.FixedLenFeature([], tf.string),
    }
    return tf.io.parse_single_example(record, name_to_features)

def decode_record(record):
    # decode with tf.float64 to match the dtype of the original arrays
    aud = tf.io.decode_raw(
        record['snd'], out_type=tf.float64
    )
    return aud

for record in raw_dataset:
    parsed_record = parse_record(record)
    decoded_record = decode_record(parsed_record)
    aud = decoded_record
    print(aud.numpy()[0:10])
    print(train[0][0:10])
output:
[ 417.69951205 -231.58708746 -10.05624011 -146.10342256 -66.60317323
-159.91550792 -3.93602823 29.94517981 106.22196629 65.53008959]
[ 417.69951205 -231.58708746 -10.05624011 -146.10342256 -66.60317323
-159.91550792 -3.93602823 29.94517981 106.22196629 65.53008959]
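A simpler alternative sketch (assuming TF 2.x, where tf.io.serialize_tensor and tf.io.parse_tensor are available): serializing the tensor preserves its own dtype, so float64 survives the round trip without manual byte handling:
import tensorflow as tf

data = tf.constant([20191221.1 + x for x in range(10)], dtype=tf.float64)
# serialize_tensor keeps the float64 dtype in the serialized payload
payload = tf.io.serialize_tensor(data).numpy()   # bytes suitable for a BytesList feature
restored = tf.io.parse_tensor(payload, out_type=tf.float64)
print(restored)  # full float64 precision preserved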

Tensorflow parse_single_example returns all dataset

I'm creating a basic LinearClassifier in Tensorflow, but it seems that my input function returns the whole dataset at the first iteration, instead of just one example and its label.
My TFRecord has the following structure (obtained with print(tf.train.Example.FromString(example.SerializeToString()))):
features {
  feature {
    key: "attackType"
    value {
      int64_list {
        value: 0
        value: 0
        ...
  feature {
    key: "dst_ip_addr"
    value {
      bytes_list {
        value: "OPENSTACK_NET"
        value: "EXT_SERVER"
        ...
It seems the TFRecord file is well formatted. However, when I try to parse it with the following snippet:
def input_fn_train(repeat=10, batch_size=32):
    """
    Reads dataset from tfrecord, apply parser with map
    """
    dataset = tf.data.TFRecordDataset([processed_bucket+processed_key])
    # Map the parser over dataset, and batch results by up to batch_size
    dataset = dataset.map(_decode)
    dataset = dataset.repeat(repeat)
    dataset = dataset.batch(batch_size)
    return dataset

def _decode(serialized_ex):
    features = {
        'src_ip_addr': tf.FixedLenFeature(src_ip_size, tf.string),
        'src_pt': tf.FixedLenFeature(src_pt_size, tf.int64),
        'dst_ip_addr': tf.FixedLenFeature(dst_ip_size, tf.string),
        'dst_pt': tf.FixedLenFeature(dst_pt_size, tf.int64),
        'proto': tf.FixedLenFeature(proto_size, tf.string),
        'packets': tf.FixedLenFeature(packets_size, tf.int64),
        'subnet': tf.FixedLenFeature(subnet_size, tf.int64),
        'attackType': tf.FixedLenFeature(attack_type_size, tf.int64)
    }
    parsed_features = tf.parse_single_example(serialized_ex, features)
    label = parsed_features.pop('attackType')
    return parsed_features, label

sess = tf.Session()
it = input_fn_train().make_one_shot_iterator()
print(sess.run(it.get_next()))
It shows that it.get_next() returns
({'dst_ip_addr': array([[b'OPENSTACK_NET', b'EXT_SERVER',...
This is incorrect, since it yields an array of arrays! The result should be
array([b'OPENSTACK_NET',...
Any thoughts? I've been trying to change the shape parameter of FixedLenFeature, with no success.
OK, it seems it was the dataset.batch command that created this strange behavior; removing it made things work. That is actually expected: batch(batch_size) prepends a batch dimension to every component, so each get_next() call returns up to 32 examples at once, not the whole dataset.
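A minimal sketch of the effect (assuming the TF 1.x tf.data API used above): batching only adds a leading dimension to each component, which you can see without running a session:
dataset = tf.data.TFRecordDataset([processed_bucket+processed_key]).map(_decode)
print(dataset.output_shapes)    # per-example shapes
batched = dataset.batch(32)
print(batched.output_shapes)    # the same shapes with a leading None (batch) dimension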

How to perform data augmentation in Tensorflow Estimator's input_fn

Using Tensorflow's Estimator API, at what point in the pipeline should I perform the data augmentation?
According to this official Tensorflow guide, one place to perform the data augmentation is in the input_fn:
def parse_fn(example):
    "Parse TFExample records and perform simple data augmentation."
    example_fmt = {
        "image": tf.FixedLenFeature((), tf.string, ""),
        "label": tf.FixedLenFeature((), tf.int64, -1)
    }
    parsed = tf.parse_single_example(example, example_fmt)
    image = tf.image.decode_image(parsed["image"])
    # augments image using slice, reshape, resize_bilinear
    image = _augment_helper(image)
    return image, parsed["label"]

def input_fn():
    files = tf.data.Dataset.list_files("/path/to/dataset/train-*.tfrecord")
    dataset = files.interleave(tf.data.TFRecordDataset)
    dataset = dataset.map(map_func=parse_fn)
    # ...
    return dataset
My question
If I perform data augmentation inside input_fn, does parse_fn return a single example or a batch including the original input image plus all of its augmented variants? If it should only return a single [augmented] example, how do I ensure that all images in the dataset are used in their un-augmented form, as well as in all augmented variants?
If you use iterators on your dataset, your _augment_helper function will be called once per element as the dataset is iterated, since parse_fn is applied through dataset.map.
Change your code to
ds_iter = dataset.make_one_shot_iterator()
ds_iter = ds_iter.get_next()
return ds_iter
I've tested this with a simple augmentation function
def _augment_helper(image):
print(image.shape)
image = tf.image.random_brightness(image,255.0, 1)
image = tf.clip_by_value(image, 0.0, 255.0)
return image
Change 255.0 to whatever the maximum value is in your dataset; I used 255.0 because my example's dataset was in 8-bit pixel values.
parse_fn will return a single example per call; if you then apply the .batch() operation, it will return a batch of parsed images.
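As for keeping both the un-augmented images and all augmented variants, one hedged sketch is to map the same record dataset twice and concatenate the results, which doubles the dataset (parse_fn_no_augment is a hypothetical copy of parse_fn with the _augment_helper call removed; dataset here is the interleaved record dataset from input_fn before the map):
plain = dataset.map(parse_fn_no_augment)   # un-augmented copies
augmented = dataset.map(parse_fn)          # augmented copies
combined = plain.concatenate(augmented).shuffle(buffer_size=10000).batch(32)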

TFRecords: Write list of tensors to single Example

I'm extracting features from images using a convolutional neural network. The network in question has three outputs (three output tensors), which differ in size. I want to store the extracted features in TFRecords, one Example for each image:
Example:
    image_id: 1
    features/fc8: [output1.1, output1.2, output1.3]
Example:
    image_id: 2
    features/fc8: [output2.1, output2.2, output2.3]
....
How can I achieve this structure using TFRecords?
EDIT: An elegant way is to use tf.SequenceExample.
Convert the data using the tf.SequenceExample() format:
def make_example(features, image_id):
    ex = tf.train.SequenceExample()
    ex.context.feature['image_id'].int64_list.value.append(image_id)
    fl_features = ex.feature_lists.feature_list['features/fc8']
    for feature in features:
        fl_features.feature.add().bytes_list.value.append(feature.tostring())
    return ex
Writing to TFRecord
def _convert_to_tfrecord(output_file, feature_batch, ids_batch):
    writer = tf.python_io.TFRecordWriter(output_file)
    for features, id in zip(feature_batch, ids_batch):
        ex = make_example(features, id)
        writer.write(ex.SerializeToString())
    writer.close()
Parsing example
def parse_example_proto(example_serialized):
    context_features = {
        'image_id': tf.FixedLenFeature([], dtype=tf.int64)}
    sequence_features = {
        'features/fc8': tf.FixedLenSequenceFeature([], dtype=tf.string)}
    context_parsed, sequence_parsed = tf.parse_single_sequence_example(
        serialized=example_serialized,
        context_features=context_features,
        sequence_features=sequence_features)
    return context_parsed['image_id'], sequence_parsed['features/fc8']
Note: the features here are saved in a bytes_list; you can also save them in a float_list.
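For the float_list case, a hedged one-line variant of the loop body in make_example (assuming each feature is a numpy array) would be:
# store the flattened float values directly instead of raw bytes
fl_features.feature.add().float_list.value.extend(feature.ravel())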
Another way is to use tf.parse_single_example(), storing the examples as:
image_id: 1
features/fc8_1: output1.1
features/fc8_2: output1.2
features/fc8_3: output1.3
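A minimal sketch of that flat-key variant (the helper name and the assumption that each output is a numpy array are mine, not from the original answer):
def make_flat_example(image_id, out1, out2, out3):
    # one FloatList feature per output tensor, flattened for storage
    return tf.train.Example(features=tf.train.Features(feature={
        'image_id': tf.train.Feature(int64_list=tf.train.Int64List(value=[image_id])),
        'features/fc8_1': tf.train.Feature(float_list=tf.train.FloatList(value=out1.ravel())),
        'features/fc8_2': tf.train.Feature(float_list=tf.train.FloatList(value=out2.ravel())),
        'features/fc8_3': tf.train.Feature(float_list=tf.train.FloatList(value=out3.ravel())),
    }))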

tensorflow record with float numpy array

I want to create tensorflow records to feed my model. So far I use the following code to store a uint8 numpy array in TFRecord format:
def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _floats_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=[value]))

def convert_to_record(name, image, label, map):
    filename = os.path.join(params.TRAINING_RECORDS_DATA_DIR, name + '.' + params.DATA_EXT)
    writer = tf.python_io.TFRecordWriter(filename)
    image_raw = image.tostring()
    map_raw = map.tostring()
    label_raw = label.tostring()
    example = tf.train.Example(features=tf.train.Features(feature={
        'image_raw': _bytes_feature(image_raw),
        'map_raw': _bytes_feature(map_raw),
        'label_raw': _bytes_feature(label_raw)
    }))
    writer.write(example.SerializeToString())
    writer.close()
which I read with this example code
features = tf.parse_single_example(example, features={
    'image_raw': tf.FixedLenFeature([], tf.string),
    'map_raw': tf.FixedLenFeature([], tf.string),
    'label_raw': tf.FixedLenFeature([], tf.string),
})
image = tf.decode_raw(features['image_raw'], tf.uint8)
image.set_shape(params.IMAGE_HEIGHT*params.IMAGE_WIDTH*3)
image = tf.reshape(image, (params.IMAGE_HEIGHT, params.IMAGE_WIDTH, 3))
map = tf.decode_raw(features['map_raw'], tf.uint8)
map.set_shape(params.MAP_HEIGHT*params.MAP_WIDTH*params.MAP_DEPTH)
map = tf.reshape(map, (params.MAP_HEIGHT, params.MAP_WIDTH, params.MAP_DEPTH))
label = tf.decode_raw(features['label_raw'], tf.uint8)
label.set_shape(params.NUM_CLASSES)
and that's working fine. Now I want to do the same with my array "map" being a float numpy array instead of uint8, but I could not find examples of how to do it.
I tried the function _floats_feature, which works if I pass a scalar to it, but not with arrays.
With uint8, the serialization can be done with the method tostring().
How can I serialize a float numpy array, and how can I read it back?
FloatList and BytesList expect an iterable. So you need to pass it a list of floats. Remove the extra brackets in your _floats_feature, i.e.
def _floats_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=value))

numpy_arr = np.ones((3,)).astype(np.float)
example = tf.train.Example(features=tf.train.Features(feature={"bytes": _floats_feature(numpy_arr)}))
print(example)

features {
  feature {
    key: "bytes"
    value {
      float_list {
        value: 1.0
        value: 1.0
        value: 1.0
      }
    }
  }
}
I will expand on Yaroslav's answer.
Int64List, BytesList and FloatList expect an iterable of the underlying elements (a repeated field). In your case you can use a list as that iterable.
You mentioned: it works if I pass a scalar to it, but not with arrays. And this is expected, because when you pass a scalar, your _floats_feature creates an array with one float element in it (exactly as expected). But when you pass an array, you create a list of arrays and pass it to a function which expects a list of floats.
So just remove the construction of the array from your function: float_list=tf.train.FloatList(value=value)
I've stumbled across this while working on a similar problem. Since part of the original question was how to read the float32 features back from tfrecords, I'll leave this here in case it helps anyone:
If map.ravel() was used to input a map of dimensions [x, y, z] into _floats_feature:
features = {
    ...
    'map': tf.FixedLenFeature([x, y, z], dtype=tf.float32)
    ...
}
parsed_example = tf.parse_single_example(serialized=serialized, features=features)
map = parsed_example['map']
Yaroslav's example failed when an nd array was the input:
numpy_arr = np.ones((3,3)).astype(np.float)
I found that it worked when I used numpy_arr.ravel() as the input. But is there a better way to do it?
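A minimal round-trip sketch combining both points (the (4, 5, 3) shape is a placeholder): ravel the array on write, and let FixedLenFeature with an explicit shape restore it on read:
import numpy as np
import tensorflow as tf

arr = np.random.rand(4, 5, 3).astype(np.float32)
# write: flatten the nd array into the FloatList
ex = tf.train.Example(features=tf.train.Features(feature={
    'map': tf.train.Feature(float_list=tf.train.FloatList(value=arr.ravel()))
}))
# read: FixedLenFeature with a shape both parses and reshapes
parsed = tf.parse_single_example(
    ex.SerializeToString(),
    {'map': tf.FixedLenFeature([4, 5, 3], tf.float32)})
# parsed['map'] is a (4, 5, 3) float32 tensor again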
First of all, many thanks to Yaroslav and Salvador for their enlightening answers.
In my experience, their methods only work when the input is a 1-D NumPy array of shape (n,). When the input is a NumPy array with two or more dimensions, the following error appears:
def _float_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=value))

numpy_arr = np.arange(12).reshape(2, 2, 3).astype(np.float)
example = tf.train.Example(features=tf.train.Features(feature={"bytes": _float_feature(numpy_arr)}))
print(example)

TypeError: array([[0., 1., 2.],
       [3., 4., 5.]]) has type numpy.ndarray, but expected one of: int, long, float
So I'd like to expand on Tsuan's answer: flatten the input before feeding it into the TF example. The modified code is as follows:
def _floats_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=value))

numpy_arr = np.arange(12).reshape(2, 2, 3).astype(np.float).flatten()
example = tf.train.Example(features=tf.train.Features(feature={"bytes": _floats_feature(numpy_arr)}))
print(example)
In addition, ndarray.flatten() always returns a copy, which can make it safer here than ndarray.ravel(), which may return a view of the original array.
Use tfrmaker, a TFRecord utility package. You can install the package with pip:
pip install tfrmaker
Then you could create tfrecords like this:
from tfrmaker import images
# mapping label names to integer encodings
LABELS = {"bishop": 0, "knight": 1, "pawn": 2, "queen": 3, "rook": 4}

# specifying data and output directories
DATA_DIR = "datasets/chess/"
OUTPUT_DIR = "tfrecords/chess/"

# create tfrecords from the images present in the given data directory
info = images.create(DATA_DIR, LABELS, OUTPUT_DIR)

# info contains a list of information (path: relative path, size: number of images in the tfrecord) about the created tfrecords
print(info)
The package also has some cool features like:
dynamic resizing
splitting tfrecords into optimal shards
splitting tfrecords into training, validation and testing sets
counting the number of images in tfrecords
asynchronous tfrecord creation
NOTE: This package currently supports image datasets that are organised as directories with class names as subdirectory names.