What is the TensorFlow event file nomenclature?

I need to understand the TF event file nomenclature. My event files are formatted like this:
events.out.tfevents.[TIMESTAMP].[HOSTNAME].[4 to 5 digits from 0 to 9].0.v2
What is the significance of the 4-5 digit sequence, and of the trailing 0?
Thanks a lot for your answers.
N.B. I am using the Object Detection API.
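For reference, here is a minimal sketch of pulling those pieces apart. It assumes the filename follows the timestamp.hostname.pid.uid layout used by TensorBoard's event file writer; reading the digit run as the writer's process ID and the trailing 0 as a per-process counter is an assumption based on that convention, not something confirmed here:
import re

# Hypothetical filename; assumes the timestamp.hostname.pid.uid.v2 layout
# used by TensorBoard's event file writer (hostnames containing dots would
# need a smarter pattern).
name = "events.out.tfevents.1614690000.my-host.12345.0.v2"

m = re.match(
    r"events\.out\.tfevents\."
    r"(?P<timestamp>\d+)\.(?P<hostname>[^.]+)\.(?P<pid>\d+)\.(?P<uid>\d+)\.v2",
    name)
if m:
    print(m.groupdict())
    # {'timestamp': '1614690000', 'hostname': 'my-host', 'pid': '12345', 'uid': '0'}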

Related

DeepLabV3, segmentation and classification/detection on Coral

I am trying to use DeepLabV3 for image segmentation and object detection/classification on Coral.
I was able to successfully run the semantic_segmentation.py example using DeepLabV3 on the Coral, but that only shows an image with an object segmented.
I see that it assigns labels to colors - how do I associate the labels.txt file that I made, based off of the label info of the model, with these colors? (How do I know which color corresponds to which label?)
When I try to run
engine = DetectionEngine(args.model)
using the DeepLab model, I get the error
ValueError: Dectection model should have 4 output tensors!This model has 1.
I guess this is the wrong approach?
Thanks!
I believe you have reached out to us regarding the same query. I just wanted to paste the answer here for others to reference:
"The detection model usually have 4 output tensors to specifies the locations, classes, scores, and number and detections. You can read more about it here. In contrary, the segmentation model only have a single output tensor, so if you treat it the same way, you'll most likely segfault trying to access the wrong memory region. If you want to do all three tasks on the same image, my suggestion is to create 3 different engines and feed the image into each. The only problem with this is that each time you switch the model, there will likely be data transfer bottleneck for the model to get loaded onto the TPU. We have here an example on how you can run 2 models on a single TPU, you should be able to modify it to take 3 models."
On a final note, I just saw that you added:
how do I associate the labels.txt file that I made based off of the label info of the model with these colors
I just don't think this is something you can do for a segmentation model, but maybe I'm just confused about your query?
Take an object detection model for example: there are 4 output tensors, and the second tensor gives you an array of IDs, each associated with a certain class, that you can map to a label file. Segmentation models only give the pixels surrounding an object.
[EDIT]
Apologies, looks like I'm the one confused about segmentation models.
Quote from my colleague :)
"If you are interested in the name of the label, you can find the integer corresponding to that label in the result array in semantic_segmentation.py, where result is the classification data of each pixel.
For example, if you print the result array with bird.jpg as input, you will find a few pixels whose value is 3, which corresponds to the 4th label in pascal_voc_segmentation_labels.txt (as indexing starts at 0)."
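To make that mapping concrete, here is a minimal sketch. It assumes result is a 2-D array of per-pixel class indices as produced in semantic_segmentation.py, and that pascal_voc_segmentation_labels.txt holds one label name per line:
import numpy as np

# Label file: line i holds the name of class index i (assumed layout).
with open('pascal_voc_segmentation_labels.txt') as f:
    labels = [line.strip() for line in f]

# Toy stand-in for the per-pixel class-index array from
# semantic_segmentation.py, e.g. shape [height, width].
result = np.array([[0, 0, 3],
                   [0, 3, 3]])

for class_id in np.unique(result):
    print(class_id, '->', labels[class_id])
# With bird.jpg as input you would see 3 map to the 4th label in the file.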

How to see more evaluation steps in TensorBoard

I want to see more evaluation steps in TensorBoard while I'm training and evaluating my object detection model (standard code in TensorFlow object detection).
Here you can see what I mean by the number of evaluation steps. As you can see, it's fixed at 10 visualizations.
I can't find where to change and increase this parameter. Moreover, these visualizations are random, and not the last 10.
Is it possible to set a different number of visualizations?
And what can I do to see the last N evaluations instead of a random N evaluations?
Thank you in advance.
Added: [image from the link, showing the evaluation visualizations in TensorBoard]
I assume you're using this code:
https://github.com/tensorflow/models/tree/master/research/object_detection
(You should include that link in future questions to clarify; if that assumption is wrong, please edit your question to specify what code you're using.)
If you look at the trainer.py code, at the bottom they have:
slim.learning.train(
    train_tensor,
    logdir=train_dir,
    master=master,
    is_chief=is_chief,
    session_config=session_config,
    startup_delay_steps=train_config.startup_delay_steps,
    init_fn=init_fn,
    summary_op=summary_op,
    number_of_steps=(
        train_config.num_steps if train_config.num_steps else None),
    save_summaries_secs=120,
    sync_optimizer=sync_optimizer,
    saver=saver)
It looks like they've hard-coded save_summaries_secs=120 to save a summary every 120 seconds. That's the value you want to edit to change the TensorBoard summary update period.
Edit: I've added the image to the question to help clarify. I believe the answer is in tf.summary.image: it has a max_outputs parameter which controls the number of images emitted from the batch. To choose a subset of images specifically, you should simply write your own code to select them in whatever way you see fit, randomly or in some order, and then pass that new set of images to tf.summary.image.
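As a minimal sketch of that idea (TF1-style API; the eval_images tensor and its shape are hypothetical stand-ins for your evaluation batch):
import tensorflow as tf

# Hypothetical batch of evaluation images: [batch, height, width, channels].
eval_images = tf.placeholder(tf.float32, shape=[None, 300, 300, 3])

# Select the last 20 images yourself instead of letting the summary pick,
# then raise max_outputs (default 3) so all 20 are written to TensorBoard.
last_n = eval_images[-20:]
tf.summary.image('eval/detections', last_n, max_outputs=20)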
You may want to consider looking at the eval_config section of the model config file.
eval_config: {
  num_examples: 100
  num_visualizations: 50
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  # max_evals: 10
}
I'm guessing that num_visualizations is what you're looking for.

TensorFlow RNN input shape

Updated question: This is a good resource: http://machinelearningmastery.com/understanding-stateful-lstm-recurrent-neural-networks-python-keras/
See the section on "LSTM State Within A Batch".
If I interpret this correctly, the author did not need to reshape the data as x,y,z (as he did in the preceding example); he just increased the batch size. So an LSTM cell's hidden state (the one that gets passed from one time step to the next) starts at row 0 and keeps getting updated until all rows in the batch have finished? Is that right?
If that is correct, then why does one ever need a time step greater than 1? Could I not just stack all my time-series rows in order and feed them as a single batch?
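For context, a minimal sketch of the two framings being compared, using the Keras API (layer sizes and data are made up for illustration):
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Framing A: one sample with 1000 time steps of 5 features each.
# The LSTM carries its hidden state across all 1000 steps of the sample.
x_steps = np.random.rand(1, 1000, 5)   # (batch, time_steps, features)

# Framing B: 1000 samples of 1 time step each. Each row is processed
# independently: the hidden state does not flow from row to row within
# a batch, which is why time steps > 1 still matter.
x_rows = np.random.rand(1000, 1, 5)

model = Sequential([
    LSTM(32, input_shape=(None, 5)),   # accepts any number of time steps
    Dense(1),
])
model.compile(optimizer='adam', loss='mse')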
Original question:
I'm getting myself into an absolute muddle trying to understand the correct way to shape my data for TensorFlow, particularly around time_steps. Reading around has only confused me further, so I thought I'd cave in and ask.
I'm trying to model time-series data in which the data at time t is 5 columns wide (5 features, 1 label).
So then t-1 will also have another 5 features and 1 label.
Here is an example with 2 rows.
x=[1,1,1,1,1] y=[5]
x=[2,2,2,2,2] y=[15]
I've got an RNN model to work by feeding a 1x1x5 matrix into my x variable, which implies my 'time step' has a dimension of 1. However, as with the above example, the second line I feed in is correlated with the first (15 = 5 + (2+2+2+2+2), in case you haven't spotted it).
So is the way I'm currently entering it correct? How does the time-step dimension work?
Or should I be thinking of it as (batch size, rows, cols) in my head?
Either way, can someone tell me what dimensions I should be reshaping my input data to? For the sake of argument, assume I've split the data into batches of 1000. So within those 1000 rows I want a prediction for every row, but the RNN should look at the row above it in my batch to figure out the answer.
x1=[1,1,1,1,1] y=[5]
x2=[2,2,2,2,2] y=[15]
...
etc.
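As a sketch of the standard (batch, time_steps, features) layout for these rows (the window length of 2 simply mirrors the two-row example above and is an assumption):
import numpy as np

# Raw rows: 5 features per time step, with 1 label each, in time order.
x = np.array([[1, 1, 1, 1, 1],
              [2, 2, 2, 2, 2]], dtype=np.float32)
y = np.array([5, 15], dtype=np.float32)

# Reshape into (batch, time_steps, features). With time_steps=2 the RNN
# sees row t-1 before row t, so its prediction can use both rows.
time_steps, features = 2, 5
x_rnn = x.reshape(1, time_steps, features)   # shape (1, 2, 5)
y_rnn = y[-1:].reshape(1, 1)                 # label of the last step

print(x_rnn.shape, y_rnn.shape)              # (1, 2, 5) (1, 1)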

Keras/TensorFlow: How do I transform text to use as input?

I've been reading tutorials for the last few days, but they all seem to start at the step of "I have my data from this pre-prepared data set, let's go".
What I'm trying to do is take a set of emails I've tokenized, and figure out how to get them into a model as the training and evaluation data.
Example email:
0 0 0 0 0 0 0 0 0 0 0 0 32192 6675 16943 1380 433 8767 2254 8869 8155
I have a folder of emails (one file per email) for each spam and not spam:
/spam/
93451.txt
...
/not-spam/
112.txt
...
How can I get Keras to read this data?
Alternatively, how can I generate a CSV or some other format that it wants to use to input it?
There are many ways to do this, but I'd try them in this order (see the sketch after this list):
You need to create a dictionary of all the words in the dataset and then assign a token to each of them. When inputting text to the network, you can convert it into a one-hot encoded form.
You can convert the input text by feeding it to a pretrained word-embedding model like GloVe or word2vec and obtain an embedding vector.
You can use the one-hot vectors from 1 and train your own embeddings.
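A minimal sketch of option 1 using Keras's built-in tokenizer (the sample emails are toy data; texts_to_matrix with mode='binary' gives a multi-hot bag-of-words vector, a common stand-in for one-hot input):
from tensorflow.keras.preprocessing.text import Tokenizer

emails = ['win money now', 'meeting at noon', 'win a free meeting']  # toy data

tokenizer = Tokenizer(num_words=10000)   # cap the dictionary size
tokenizer.fit_on_texts(emails)           # build the word -> token dictionary

sequences = tokenizer.texts_to_sequences(emails)            # lists of token ids
one_hot = tokenizer.texts_to_matrix(emails, mode='binary')  # shape (3, 10000)

print(tokenizer.word_index)   # e.g. {'win': 1, 'meeting': 2, ...}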
As I understood from your task description (please correct me if I'm wrong), you need to classify texts into either the spam or not-spam category.
Basically, if you want to create a universal text-data classification input solution, your data input stage should contain 3 steps:
1. Read the list of folders ("spam", "not-spam" in your case) and iterate over each folder to get the list of files.
At the end you should have:
a) a dictionary mapping label_id -> label_name,
so in your case (0 -> spam, 1 -> not_spam);
b) a list of (file_content, label) pairs.
As you understand, this is out of scope for both Keras and TensorFlow; it is typical Python code (see the sketch below).
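A plain-Python sketch of step 1, assuming the /spam and /not-spam folder layout from the question:
import os

folders = {0: 'spam', 1: 'not-spam'}   # label_id -> label_name

data = []                              # list of (file_content, label) pairs
for label_id, folder in folders.items():
    for filename in os.listdir(folder):
        with open(os.path.join(folder, filename)) as f:
            data.append((f.read(), label_id))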
2. For each (file_content, label) pair you should process the first element; that's usually the most interesting part.
In your example I can see 0 0 0 0 0 0 0 0 0 0 0 0 32192 6675 16943 1380 433 8767 2254 8869 8155. So you already have the indexes of the words, but they are in text form. All you need is to transform the string into an array with 300 items (the words in your message).
For future text machine-learning projects, I suggest using raw text data as the source and transforming it into word indexes using tf.contrib.learn.preprocessing.VocabularyProcessor.
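A short sketch of that transformation (a TF1-era API, removed in TF2; max_document_length=300 matches the fixed length mentioned above):
import numpy as np
import tensorflow as tf

texts = ['free money now', 'lunch at noon']   # toy raw emails

# Build a vocabulary and map each document to a fixed-length array of
# word indexes, padded (or truncated) to 300 entries.
processor = tf.contrib.learn.preprocessing.VocabularyProcessor(
    max_document_length=300)
word_indexes = np.array(list(processor.fit_transform(texts)))

print(word_indexes.shape)   # (2, 300)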
3. Transform the labels (categories) into one-hot vectors.
At the end of this step you have (word_indexes_as_array, label_as_one_hot) pairs.
Then you can use these data as input for training.
Naturally, you would divide this dataset into two, treating the first 80% as the training set and the remaining 20% as the test set (please don't focus on the 80/20 split; it is just an example); see the sketch below.
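A minimal sketch of step 3 and the split, with toy stand-ins for the arrays produced in the earlier steps:
import numpy as np
from tensorflow.keras.utils import to_categorical

word_indexes = np.random.randint(0, 10000, size=(100, 300))   # toy data
labels = np.random.randint(0, 2, size=100)                    # 0 = spam, 1 = not_spam

labels_one_hot = to_categorical(labels, num_classes=2)

# 80/20 split by position (shuffle first in practice).
split = int(0.8 * len(word_indexes))
x_train, x_test = word_indexes[:split], word_indexes[split:]
y_train, y_test = labels_one_hot[:split], labels_one_hot[split:]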
You may look at the text classification with Keras examples. They are rather straightforward and may be helpful for you, as they start from the data input step.
Also, please look at the load_data_and_labels() method in the data input step example. It is a very similar case to yours (positive/negative).

Reading sequential data from TFRecords files within the TensorFlow graph?

I'm working with video data, but I believe this question should apply to any sequential data. I want to pass my RNN 10 sequential examples (video frames) from a TFRecords file. When I first start reading the file, I need to grab 10 examples and use them to create a sequence-example, which is then pushed onto the queue for the RNN to take when it's ready. However, once I have those 10 frames, the next time I read from the TFRecords file I only need to take 1 example and shift the other 9 over. But when I hit the end of the first TFRecords file, I need to restart the process on the second TFRecords file. It's my understanding that the cond op will process the ops required under each condition even if that condition is not the one taken. This would be a problem when using a condition to check whether to read 10 examples or only 1. Is there any way to resolve this problem and still get the desired result outlined above?
You can use the recently added Dataset.window() transformation in TensorFlow 1.12 to do this:
filenames = tf.data.Dataset.list_files(...)

# Define a function that will be applied to each filename, and return the
# sequences in that file.
def get_examples_from_file(filename):
  # Read and parse the examples from the file using the appropriate logic.
  examples = tf.data.TFRecordDataset(filename).map(...)

  # Selects a sliding window of 10 examples, shifting along 1 example at a time.
  sequences = examples.window(size=10, shift=1, drop_remainder=True)

  # Each element of `sequences` is a nested dataset containing 10 consecutive
  # examples. Use `Dataset.batch()` and get the resulting tensor to convert it
  # to a tensor value (or values, if there are multiple features in an example).
  return sequences.map(
      lambda d: tf.data.experimental.get_single_element(d.batch(10)))

# Alternatively, you can use `filenames.interleave()` to mix together sequences
# from different files.
sequences = filenames.flat_map(get_examples_from_file)
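To consume the resulting dataset in TF 1.12's graph mode, iteration would look something like this standard tf.data pattern (shown as a sketch; the RNN hookup is elided):
import tensorflow as tf

iterator = sequences.make_one_shot_iterator()
next_sequence = iterator.get_next()   # a tensor of 10 consecutive examples

with tf.Session() as sess:
  while True:
    try:
      window = sess.run(next_sequence)   # feed this window to the RNN
    except tf.errors.OutOfRangeError:
      break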