Is my model using future data? - TensorFlow

I am training a model in TensorFlow to predict a time series. The network gets a window of data of length L and tries to come up with the next value.
I do the training in batches of overlapping windows (that slide forward in time). To speed up the process, instead of feeding an array of windows, I feed a single larger one and I use tf.extract_image_patches to extract the windows.
My question is: is it the case that the model can "cheat" by looking at the next values in the larger window? Technically, the next value of each window (except for the last one) is in the initial large window that I feed at the beginning.
EDIT: my model is a custom recurrent neural network that is fed the windows (one per loop iteration) along with the previous prediction.

Unless you are using recurrent units, your model won't know what it is going to be fed next.
Also, it is generally not a good idea to keep such a structure (overlapping windows) in the input data. It's better to shuffle the data; beyond that, do whatever works in your particular case.
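For concreteness, here is a minimal sketch of the windowing setup the question describes, using the TF2 name tf.image.extract_patches (the successor of tf.extract_image_patches). The names series and window_len are illustrative, not from the original post; the point is that the "next value" label stays separate from the model input.

```python
import tensorflow as tf

# Toy series of length 20; in practice this would be your larger window.
series = tf.range(0.0, 20.0)
window_len = 5

# extract_patches works on 4-D "images": [batch, rows, cols, channels].
x = tf.reshape(series, [1, 1, -1, 1])
patches = tf.image.extract_patches(
    images=x,
    sizes=[1, 1, window_len + 1, 1],   # window plus the value to predict
    strides=[1, 1, 1, 1],
    rates=[1, 1, 1, 1],
    padding="VALID",
)
patches = tf.reshape(patches, [-1, window_len + 1])

inputs = patches[:, :window_len]   # the model only ever sees these L values
targets = patches[:, window_len]   # the "next value", kept out of the inputs

# As long as `targets` is only used in the loss (never concatenated into
# `inputs`), the network has no way to peek at future values, even though
# they were all present in the single larger window you fed in.
```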

Related

Issues when modelling LSTM for multi series

I'm a beginner at time series analysis with deep learning, and I have been searching for examples with LSTMs in which more than one series (for example, one for each city or place) is trained, to avoid fitting a separate model for each one. The main benefits, of course, are more training data and lower computational cost. I have found an interesting code base to help model this problem with conditional/temporally-static variables (it's called cond-rnn). But wherever I search, some issues regarding how to sort the inputs appropriately remain unclear to me.
The context is that I have a target and a set of autoregressive inputs (features, lags, timesteps, whatever you call them), in which data from different series are stacked together. RF and GB are outperforming the LSTM on this task (which overfits even when I use 100k+ samples, dropout, or regularization), and I'm not sure I'm using it appropriately.
Is it wrong to stack series together and have the input-target pairs randomly sorted (as in the figure)? Does the LSTM need to receive the inputs temporally sorted?
If so, do you have any advice on how to deal with the problem of providing new series (that start from the first time period) to the LSTM during training? This answer to a similar problem (from another perspective) suggests picking "places" as an input column, but I don't think it addresses the questions I posed here.
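To make the stacking concrete, here is a hedged sketch of one common setup: each sample is a window that is temporally ordered internally, and samples from different series are then pooled. All names (make_samples, timesteps, the per-city arrays) are illustrative, not from cond-rnn or the question.

```python
import numpy as np

def make_samples(series: np.ndarray, timesteps: int):
    """Turn one series into (windows, next-value targets), keeping time order inside each window."""
    x = np.stack([series[i:i + timesteps] for i in range(len(series) - timesteps)])
    y = series[timesteps:]
    return x, y

timesteps = 12
all_x, all_y = [], []
for city_series in [np.random.rand(500), np.random.rand(300)]:   # one array per place
    x, y = make_samples(city_series, timesteps)
    all_x.append(x)
    all_y.append(y)

x = np.concatenate(all_x)[..., None]   # shape: (samples, timesteps, 1 feature)
y = np.concatenate(all_y)

# For a stateless LSTM, it is the time axis *inside* each sample that must stay
# ordered; the samples themselves can then be shuffled across series.
perm = np.random.permutation(len(x))
x, y = x[perm], y[perm]
```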

Updating Tensorflow Object detection model with new images

I have trained a faster rcnn model with a custom dataset using Tensorflow's Object Detection Api. Over time I would like to continue to update the model with additional images (collected weekly). The goal is to optimize for accuracy and to weight newer images over time.
Here are a few alternatives:
1. Add images to the previous dataset and train a completely new model
2. Add images to the previous dataset and continue training the previous model
3. Create a new dataset with just the new images and continue training the previous model
Here are my thoughts:
Option 1: would be more time consuming, but all images would be treated "equally".
Option 2: would likely take less additional training time, but one concern is that the algorithm might weight the earlier images more.
Option 3: this seems like the best option. Take the original model and simply focus on training on the new data.
Is one of these clearly better? What would be the pros/cons of each?
In addition, I'd like to know whether it's better to keep one test set as a control for accuracy or to create a new one each time that includes newer images. Perhaps I should add some portion of the new images to the training set and another portion to the test set, and then feed the older test-set images back into training (or throw them out)?
Consider the case where your dataset is nearly perfect. If you ran the model on new images (collected weekly), then the results (i.e. boxes with scores) would be exactly what you want from the model, and it would be pointless to add these to the dataset because the model would not learn anything new.
For the imperfect dataset, results from new images will show (some) errors and these are appropriate for further training. But there may be "bad" images already in the dataset and it is desirable to remove these. This indicates that Option 1 must occur, on some schedule, to remove entirely the effect of "bad" images.
On a shorter schedule, Option 3 is appropriate if the new images are reasonably balanced across the domain categories (in some sense a representative subset of the previous dataset).
Option 2 seems pretty safe and is easier to understand. When you say "the algorithm might be weighting the earlier images more", I don't see why this is a problem if the earlier images are "good". However, I can see that the domain may change over time (evolution) in which case you may well wish to counter-weight older images. I understand that you can modify the training data to do just that as discussed in this question:
Class weights for balancing data in TensorFlow Object Detection API
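As a hedged, generic illustration of options 2/3 with "weight newer images more": the Object Detection API itself is driven by a pipeline.config (with a fine_tune_checkpoint for continuing from a previous model), so the Keras-level sketch below only shows the weighting idea, not the actual API workflow. All file names and the 2x factor are illustrative.

```python
import numpy as np
import tensorflow as tf

# Hypothetical previously trained model and datasets.
model = tf.keras.models.load_model("previous_model.keras")

old_x, old_y = np.load("old_images.npy"), np.load("old_labels.npy")
new_x, new_y = np.load("new_images.npy"), np.load("new_labels.npy")

x = np.concatenate([old_x, new_x])
y = np.concatenate([old_y, new_y])

# Give the recently collected images (say) twice the influence of older ones.
weights = np.concatenate([
    np.full(len(old_x), 1.0),
    np.full(len(new_x), 2.0),
])

# Continue training the previous model rather than starting from scratch.
model.fit(x, y, sample_weight=weights, epochs=5, batch_size=32)
```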

Cells detection using deep learning techniques

I have to analyse some images of drops, taken using a microscope, which may contain a cell. What would be the best approach to this?
Every acquisition of images returns around a thousand pictures: every picture contains a drop, and I have to determine whether the drop has a cell inside or not. Every acquisition has very different contrast and brightness, and the shape of the cells is slightly different in every setup due to small variations in the focus of the microscope.
I have tried to create a classification model following the guide "TensorFlow for Poets", defining two classes: empty drops and drops containing a cell. Unfortunately, the results weren't good.
I have also tried labelling the cells and feeding them to an object detection algorithm using DIGITS 5, but it does not detect anything.
I was wondering if these algorithms are designed to recognise more complex objects, or if I have done something wrong during setup. Any solution or hint would be helpful!
Thank you!
This is a collage of drops from different samples: the cells look a bit different in every acquisition, due to the different setups and ambient lighting.
This kind of problem should definitely be solvable. I would suggest starting with a CIFAR-10 convolutional neural network tutorial and customizing it for your problem.
In future posts you should tell us how your training is progressing. Make sure you're outputting the following information every few steps (maybe every 10-100 steps):
Loss/cost function output; you should see your loss decreasing over time.
Classification accuracy on the current batch of your training data
Classification accuracy on a held-out test set (if you haven't implemented test-set evaluation yet, this can come second)
There are many, many, many things that can go wrong, from bad learning rates to preprocessing steps that go awry. Neural networks are very hard to debug: they are very resilient to bugs, which makes it hard to even know whether you have a bug in your code. For that reason, make sure you're visualizing everything; a minimal logging setup is sketched below.
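This is a minimal sketch of that monitoring with a tiny CNN and dummy data: loss and accuracy are reported on the training data and on a held-out set each epoch, so you can see whether the loss is actually decreasing. All sizes and the architecture are illustrative stand-ins for your drop images.

```python
import numpy as np
import tensorflow as tf

# Stand-in data: replace with your real drop images and labels.
x_train = np.random.rand(200, 64, 64, 1).astype("float32")
y_train = np.random.randint(0, 2, 200)   # 0 = empty drop, 1 = drop with a cell
x_test = np.random.rand(50, 64, 64, 1).astype("float32")
y_test = np.random.randint(0, 2, 50)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(2),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

history = model.fit(
    x_train, y_train,
    validation_data=(x_test, y_test),   # held-out accuracy printed each epoch
    epochs=5, batch_size=32, verbose=2,
)
# history.history["loss"] and history.history["val_accuracy"] can be plotted
# to check that training is actually progressing.
```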
Another very important step is to save the images exactly as you are passing them to TensorFlow. You will have them in matrix form, and you can save that matrix as an image. Do that immediately before you pass the data to TensorFlow. Make sure you are giving the network what you expect it to receive. I can't tell you how many times I and others I know have unknowingly passed garbage into the network; assume the worst and prove yourself wrong!
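Here is a small sketch of that "save exactly what you feed the network" step. It assumes batch is the float array you are about to pass to TensorFlow, with shape (batch_size, height, width, channels) and values in [0, 1]; the helper name and file prefix are illustrative.

```python
import numpy as np
from PIL import Image

def dump_batch(batch: np.ndarray, prefix: str = "fed_to_network") -> None:
    """Write each image in the batch to disk so you can inspect it by eye."""
    for i, img in enumerate(batch):
        arr = np.clip(img * 255.0, 0, 255).astype(np.uint8)
        if arr.shape[-1] == 1:           # grayscale: drop the channel axis
            arr = arr[..., 0]
        Image.fromarray(arr).save(f"{prefix}_{i:03d}.png")

# Call this immediately before the data goes into the model, e.g.:
# dump_batch(batch)
```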
Your next post should look something like this:
I'm training a convolutional neural network in tensorflow
My loss function (sigmoid cross entropy) is decreasing consistently (show us a picture!)
My input images look like this (show us a picture of what you ACTUALLY FEED to the network)
My learning rate and other parameters are A, B, and C
I preprocessed the data by doing M and N
The accuracy the network achieves on training data (and/or test data) is Y
In answering those questions you're likely to solve 10 problems along the way, and we'll help you find the 11th and, with some luck, last one. :)
Good luck!

LSTM for Regression (in Tensorflow)

I want to implement an LSTM model in TensorFlow. I think I understood the tutorials fairly well. In those, the input data was given in the form of words, which were embedded into a continuous vector space (which has several advantages).
I now want to use an LSTM to predict a series of continuous numbers and do not know what the best approach to that is.
Should I discretize my input range, thus effectively getting a classification problem with a number of classes, and use the embedding described before, or should I stick with the continuous numbers and do regression? In the latter case, do I just pass one feature to the model at each time step, namely the continuous number?
Here are two examples you may find helpful.
https://github.com/MorvanZhou/tutorials/blob/master/tensorflowTUT/tf20_RNN2.2/full_code.py
http://mourafiq.com/2016/05/15/predicting-sequences-using-rnn-in-tensorflow.html
You can just use regression. However, if your input sequences are very long, you will need to use fixed-size sequences (e.g. truncate or window them).
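A minimal sketch of the regression option, assuming a Keras-style LSTM: the raw continuous value is fed as one feature per time step and trained with a mean-squared-error loss, instead of discretizing into classes. The shapes, layer sizes, and the sine-wave toy data are illustrative.

```python
import numpy as np
import tensorflow as tf

timesteps, features = 30, 1

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(timesteps, features)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),          # a single continuous output, no softmax
])
model.compile(optimizer="adam", loss="mse")

# Toy data: predict the value that follows each window of a sine wave.
series = np.sin(np.linspace(0, 100, 2000)).astype("float32")
x = np.stack([series[i:i + timesteps] for i in range(len(series) - timesteps)])
y = series[timesteps:]

model.fit(x[..., None], y, epochs=2, batch_size=32)
```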

Adjust existing Tensorflow Graphs (VGG)

I want to use the VGG model converted to TensorFlow by Ryan:
https://github.com/ry/tensorflow-vgg16
Now I want to adjust the layers: add another layer or change the fully connected layers. But I don't know how to get the individual layers/weights out of the GraphDef, or how to adjust the graph.
Short answer: you can't adjust a graph, but there are probably ways to get what you want accomplished.
Long answer: TensorFlow Graph objects are structurally immutable. You can modify some aspects of them (e.g., the shape of a tensor flowing into a node), but you can't remove a node or add a node between two existing nodes. However, there are a couple ways to get the same effect:
1. If your changes are limited to additions only, then there's no problem with doing this. For instance, if you wanted to add a layer on the end of a network, go for it. Likewise, you can "replace" the last layer by simply adding a new layer which takes the second-to-last layer as input and just ignoring the existing last layer. When you run the graph, if you never ask for the output of the original last layer, TensorFlow will never compute it.
2. If you need to do modifications, one way is to slowly build up a copy of the graph node by node. So read in the original graph definition, then build your own new graph by iterating over the original and adding similar nodes to your new copy. This is somewhat tedious and can be error-prone. Moreover...
3. ...You might not need to "adjust" the graph at all. If you want something similar to that VGG-16 implementation, you can just work off the python code directly. Don't like the width of fc6? Just edit the code that generates it.
This brings us to the real issue, though. If your goal is to modify the network and be able to re-use the weights, then 2. and 3. aren't going to work. Realistically, this isn't possible in a lot of cases. For instance, if I wanted to add or remove a layer in the middle of VGG-16 (say, adding another convolutional layer), the pre-trained weights are no longer valid. You might be able to salvage any pre-trained weights which are upstream of your changes, but everything downstream will basically be wrong. You'll need to retrain the network anyways. (Maybe you can use the pre-trained networks as initialization, but you'll still need to retrain.) Even if you're just adding to the network (as in 1.), you'll still need to train the network.
Thanks! I have recreated the graph and then loaded every single weight by getting its value from the graph definition.
This was done with graph.get_tensor_by_name('import/...'), where ... is the name of the weight.
https://www.tensorflow.org/versions/r0.9/how_tos/tool_developers/index.html
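For readers following the same route, here is a hedged sketch of that approach in TF1-style code (via tf.compat.v1 on a modern install): import the frozen VGG graph, pull a weight tensor out by its "import/..." name, and read its value so it can be reused when rebuilding the network. The file name and tensor name below are illustrative and depend on how the graph was exported.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Load the serialized GraphDef (path is illustrative).
with tf.gfile.GFile("vgg16.tfmodel", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="import")

# Inspect the available names first, e.g.:
# [op.name for op in graph.get_operations()]
weights = graph.get_tensor_by_name("import/fc6/weight:0")  # hypothetical name

with tf.Session(graph=graph) as sess:
    w_value = sess.run(weights)   # a NumPy array you can reuse in your own model

print(w_value.shape)
```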