I'd like to choose the path that inputs take through my TensorFlow model; it looks something like the diagram below.
I don't think it's necessary to specify an un-trainable 'decision forest' layer; there has to be a better way to do this.
Note: I'd like to train these separately and then combine them (easy)
I want to implement a tf model with a tweets-set as input and sentiment (or price movement prediction of the underlying asset) as output. Notice that my input is not a single tweet, but a set of tweets published over the same narrow time frame. The model architecture would look something like this:
I use the same trainable model ("Trainable Model" in the diagram) to predict the individual sentiments s_i. I then take the average over these sentiments to compute the overall tweets-set sentiment, which I consider my output.
Now my question is: Can I implement something like this in tensorflow?
One of the main difficulties I can think of is that the input shape is not fixed: it depends on the number of tweets n published in that time frame. I read about tf.placeholder, but it doesn't seem suitable here because it still requires a constant input dimension (How to feed input with changing size in Tensorflow).
Also, what possibilities does TensorFlow offer for defining such custom models (not fully connected, custom computations such as averaging the sentiments, etc.)?
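A minimal sketch of how this could be wired up with tf.keras, assuming tweets are already tokenized to integer ids; the layer sizes and the padding-based averaging are illustrative assumptions, not a full design:

```python
import tensorflow as tf

# Hypothetical per-tweet sentiment model ("Trainable Model"): embeds a
# token-id sequence and outputs one sentiment score in [-1, 1].
tweet_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="tanh"),
])

def set_sentiment(tweet_set):
    """tweet_set: a tf.RaggedTensor of token ids, one row per tweet, so the
    number of tweets n and their lengths can vary per time frame."""
    scores = tweet_model(tweet_set.to_tensor())  # pad to a dense batch
    return tf.reduce_mean(scores)                # custom step: average them

tweets = tf.ragged.constant([[1, 2, 3], [4, 5], [6]])  # n = 3 tweets
s = set_sentiment(tweets)                              # scalar sentiment
```

Because the per-set computation is plain eager TensorFlow, the variable number of tweets n never has to appear in a fixed input shape.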
I would like to change the input and output size of a convolutional model of tensorflow, which I am importing from the tensorflow hub.
I would like to know the best way to do this. I think it would be easier if I could convert the model to Keras format, but I'm not succeeding at that either.
This is the model https://tfhub.dev/intel/midas/v2_1_small/1
The format of the input is determined by the publisher of the model. Some models are flexible about the dimensions of the input, while others require very specific dimensions. In the latter case, the best approach is to resize the input as needed before feeding it to the model.
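For example, a model that expects a fixed spatial size can be fed arbitrary images by resizing first; the 256x256 target below is an assumption, so check the model's documentation for the real value:

```python
import tensorflow as tf

def prepare(image, target_hw=(256, 256)):
    """Resize an image to the size the hub model expects and add a batch
    dimension (bilinear resizing is tf.image.resize's default)."""
    image = tf.image.resize(image, target_hw)
    return tf.expand_dims(image, 0)

raw = tf.random.uniform((480, 640, 3))  # stand-in for a real photo
batch = prepare(raw)                    # shape (1, 256, 256, 3)
```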
I am using the tf.keras API and I want my model to take input with shape (None,), where None is the batch_size.
The shape of keras.layers.Input() doesn't include batch_size, so I think it can't be used.
Is there a way to achieve my goal? I'd prefer a solution without tf.placeholder, since it is deprecated.
By the way, my model is a sentence embedding model, so I want the input to be something like ['How are you.','Good morning.']
======================
Update:
Currently, I can create an input layer with layers.Input(dtype=tf.string, shape=1), but this needs my input to be something like [['How are you.'],['Good morning.']]. I want my input to have only one dimension.
Have you tried tf.keras.layers.Input(dtype=tf.string, shape=())?
If you wanted to set a specific batch size, tf.keras.Input() does actually include a batch_size parameter. But the batch size is presumed to be None by default, so you shouldn't even need to change anything.
Now, it seems like what you actually want is to be able to provide samples (sentences) of variable length. Good news! The tf.keras.layers.Embedding layer allows you to do this, although you'll have to generate an encoding for your sentences first. The Tensorflow website has a good tutorial on the process.
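A short sketch of the shape=() suggestion; the TextVectorization step is an illustrative stand-in for whatever preprocessing the sentence-embedding model actually uses:

```python
import tensorflow as tf

# shape=() means each sample is a scalar string, so a batch is a plain
# 1-D list like ['How are you.', 'Good morning.'].
inp = tf.keras.Input(shape=(), dtype=tf.string)
print(inp.shape)  # (None,)

# Stand-in encoding step (an assumption, not part of the question's model):
vectorize = tf.keras.layers.TextVectorization(output_sequence_length=4)
vectorize.adapt(['How are you.', 'Good morning.'])
ids = vectorize(tf.constant(['How are you.', 'Good morning.']))  # shape (2, 4)
```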
So I frequently run models with different architectures, but I have code, intended to apply to all of them, that runs inference from the saved models. Thus, I will be calling eval() on the last layer of each model, like this:
yhat = graph.get_tensor_by_name("name_of_my_last_layer:0")
decoded_image = yhat.eval(session=sess, feed_dict={x : X})
However, without arduous log parsing, I don't know exactly what the last layer is named, and I'm currently hand-coding it. I've considered creating a generic 'output' tensor in my graph but that seems wasteful/brittle. What is the better way?
The best way is either to make the layer you want to analyse a model output, or to fix its name to a known string (by passing the name= keyword argument to the layer function when creating the layer).
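A minimal sketch of the name= approach with tf.keras; the name "decoded" is an arbitrary choice here:

```python
import tensorflow as tf

# Fix the final layer's name so generic inference code can look it up
# regardless of the rest of the architecture.
inp = tf.keras.Input(shape=(4,))
out = tf.keras.layers.Dense(2, name="decoded")(inp)
model = tf.keras.Model(inp, out)

layer = model.get_layer("decoded")  # recovered by its known name
yhat = model(tf.zeros((1, 4)))      # shape (1, 2)
```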
I am using Tensorboard to visualize Tensorflow runs, and I would like to have a summary graph that only writes a value once per epoch.
I want to do something like this:
with graph.as_default():
    tf_ending = tf.placeholder(tf.bool)
    tf.scalar_summary('Loss', loss)  # Some summaries are written every time
    if tf_ending:
        # This summary should only get written sometimes.
        tf.scalar_summary('Total for Epoch', epoch_total)
I have the feeling that I need to do something other than tf.merge_all_summaries() and manage the sets of summaries separately, but I'm not sure how that would work.
One way to do this is to add a custom Summary protobuf to the SummaryWriter. At the end of each epoch (outside of session/graph), you can add something like:
summary = tf.Summary()
summary.value.add(tag='Total for Epoch', simple_value=epoch_total)
summary_writer.add_summary(summary, train_step)
This, however, requires the value (epoch_total) to be fetched from the TensorFlow graph (via sess.run). I'm not sure this is the best way to do it, but you do see it used in TF examples, e.g. here and here.
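On TF 2.x the same once-per-epoch write needs no Summary protobuf at all: tf.summary.scalar can be called directly from Python at the end of each epoch (the temporary logdir below is just for illustration):

```python
import os
import tempfile
import tensorflow as tf

logdir = tempfile.mkdtemp()
writer = tf.summary.create_file_writer(logdir)

epoch_total = 3.5  # stand-in for a value accumulated over the epoch
with writer.as_default():
    # Written once per epoch; per-step summaries would simply use a
    # different call site and step counter.
    tf.summary.scalar('Total for Epoch', epoch_total, step=1)
writer.flush()

files = os.listdir(logdir)  # now contains an events file for TensorBoard
```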