Question about parameter amount given by Keras summary() - tensorflow

I have a pre-trained model from TensorFlow. I checked its summary() output and have a question about the parameter count it displays: 48,190,600. Where does this number come from?
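For reference, here is a minimal sketch of how summary() arrives at its counts, using a small toy model (not the pre-trained model in question): each Dense layer contributes input_dim * units weights plus units biases, and the reported total is simply the sum over all layer weights.
# Toy model only; the pre-trained model in question will have different layers.
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(100,)),  # 100*64 + 64 = 6,464 params
    tf.keras.layers.Dense(10),                                         # 64*10 + 10 = 650 params
])
model.summary()  # Total params: 7,114
# The same total can be recomputed by hand from the layer weights:
print(sum(int(tf.size(w)) for w in model.weights))  # 7114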

Related

Is there a way to obtain the loss values for specific validation data for a Keras autoencoder?

I'm trying to obtain the loss of each individual image compared to the autoencoder's reconstruction. I can already get the mean loss per epoch, but I would like to see how the model performs on different types of images (slightly different from the training set).
Found it here: https://keras.io/api/losses/ under "Standalone usage of loss".
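To make that concrete, here is a minimal sketch of the standalone-usage idea, where autoencoder and images are placeholders for your own model and data: constructing the loss with reduction=NONE returns one value per sample instead of a single mean.
import tensorflow as tf
# Keep one loss value per sample instead of averaging over the batch.
mse = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE)
reconstructions = autoencoder.predict(images)   # `autoencoder` and `images` are assumed to exist
# Flatten so the loss averages over pixels but not over the batch dimension.
per_image_loss = mse(tf.reshape(images, (len(images), -1)),
                     tf.reshape(reconstructions, (len(images), -1)))
print(per_image_loss.numpy())  # one reconstruction loss per input image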

TensorFlow model with time series data, having different input shapes for training and prediction

I have a reasonably well-working neural net, built mostly from LSTM, Dropout and Dense layers. I usually use it for sales prediction only, but now my issue is that I'd like to train and predict with datasets of different shapes.
I have several columns showing marketing spending per channel, as well as sales for different products. Below you find an image illustrating the dataset. The orange data (marketing channels and product sales) are supposed to be the training data. When I do a many-to-many prediction, I could just forecast all the columns, as I do when I have a dataset containing only sales.
But I already know the marketing spending for the future, because it is planned ahead. For that I could just use pystats (OLS, for example), but LSTMs are really good at remembering past marketing spending and sales.
Actual question:
Is there a way to use a TensorFlow neural net with a different input shape for training and test data? Test data in this case would be either actual test data or the actual future.
Or is there any other comparable model? Unfortunately, I have not found any solution during my research.
Thanks for your time.

Why are my TensorFlow events files empty?

I am running the TensorFlow Object Detection API with the SSD_mobilenet model. I have the model.ckpt as well as the graph.pbtxt in my training dir, but I found that the events files in my training dir are empty. It seems that no data was written to them. Could anyone help me, please?
TensorFlow event files are generated from the summaries you add in your code.
For example, suppose you are training a convolutional neural network for recognizing MNIST digits. You'd like to record how the learning rate varies over time and how the objective function is changing. Collect these by attaching tf.summary.scalar ops to the nodes that output the learning rate and the loss, respectively. Then give each scalar summary a meaningful tag, such as 'learning rate' or 'loss function'.
For example, add a scalar summary for the snapshot loss:
tf.summary.scalar('loss', loss)
Please refer to the guide below:
https://www.tensorflow.org/guide/summaries_and_tensorboard
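Putting that together, a minimal sketch (TF1-style API via tf.compat.v1; the loss values fed here are placeholders for a real training loop) of the add_summary calls that actually fill the events file:
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
loss = tf.placeholder(tf.float32, name='loss')   # stand-in for your real loss tensor
tf.summary.scalar('loss', loss)                  # add a scalar summary for the loss
merged = tf.summary.merge_all()                  # merge all summaries into one op
with tf.Session() as sess:
    writer = tf.summary.FileWriter('training', sess.graph)  # events file goes to ./training
    for step in range(100):
        summary = sess.run(merged, feed_dict={loss: 1.0 / (step + 1)})
        writer.add_summary(summary, step)        # without this call, the events file stays empty
    writer.close()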

Tensorflow embeddings

I know what embeddings are and how they are trained. Specifically, while reading TensorFlow's documentation, I came across two different articles, and I wish to know what exactly the difference between them is.
link 1: Tensorflow | Vector Representations of words
In the first tutorial, the embeddings are explicitly trained on a specific dataset; there is a distinct session run to train them. I can then save the learnt embeddings as a NumPy object and use the tf.nn.embedding_lookup() function while training an LSTM network.
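Roughly, that workflow looks like the following sketch (the file name learned_embeddings.npy and the toy ids are assumptions for illustration, not from the tutorial):
import numpy as np
import tensorflow as tf
pretrained = np.load('learned_embeddings.npy')                 # shape: [vocabulary_size, embedding_size]
embedding_matrix = tf.constant(pretrained, dtype=tf.float32)   # frozen, not trained any further
word_ids = tf.constant([[3, 17, 42]])                          # a toy batch of token ids
embedded = tf.nn.embedding_lookup(embedding_matrix, word_ids)  # [1, 3, embedding_size], can feed an LSTM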
link 2: Tensorflow | Embeddings
In this second article however, I couldn't understand what is happening.
word_embeddings = tf.get_variable("word_embeddings",
    [vocabulary_size, embedding_size])
embedded_word_ids = tf.gather(word_embeddings, word_ids)
This is given under the training embeddings section. My question is: does the gather function train the embeddings automatically? I am not sure, since this op ran very fast on my PC.
More generally: what is the right way to convert words into vectors in TensorFlow (link 1 or link 2) for training a seq2seq model? And how do I train the embeddings for a seq2seq dataset, where my data comes as separate sequences, unlike the continuous sequence of words in the link-1 dataset?
Alright, anyway, I have found the answer to this question and I am posting it so that others might benefit from it.
The first link is more of a tutorial that steps you through the process of exactly how the embeddings are learnt.
In practical cases, such as training seq2seq models or any other encoder-decoder models, we use the second approach, where the embedding matrix gets tuned appropriately while the model is trained.
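As a rough illustration of that second approach (sizes and the toy loss below are assumptions, not code from either tutorial): tf.gather is a pure lookup, and the embedding matrix only changes because it is a trainable variable that the optimizer updates when gradients flow back through the lookup.
import tensorflow as tf
vocabulary_size, embedding_size = 1000, 64
word_embeddings = tf.Variable(
    tf.random.uniform([vocabulary_size, embedding_size]), name='word_embeddings')
word_ids = tf.constant([[3, 17, 42]])               # a toy batch of token ids
optimizer = tf.keras.optimizers.Adam()
with tf.GradientTape() as tape:
    embedded_word_ids = tf.gather(word_embeddings, word_ids)  # pure lookup, trains nothing by itself
    loss = tf.reduce_mean(tf.square(embedded_word_ids))       # stand-in for a real model loss
# Gradients flow back through tf.gather to the rows that were used,
# so the embeddings are tuned only as a side effect of minimizing the loss.
grads = tape.gradient(loss, [word_embeddings])
optimizer.apply_gradients(zip(grads, [word_embeddings]))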

How do I track validation loss in TensorBoard? [duplicate]

This question already has an answer here:
When using tensorboard, how to summarize a loss that is computed over several minibatches?
(1 answer)
Closed 6 years ago.
I am training a model in TensorFlow. Periodically during training, I evaluate the model on a validation set. I'd like to write a summary of the training procedure so that TensorBoard displays a plot of the validation set loss so that I can see it go down with more training iterations. (Or jump back up if I start to overfit.)
I already have a global iteration variable as part of my summary. I'm thinking of creating a scalar summary validation_loss variable in the model graph that isn't connected to anything, but to which I periodically assign a value from my training loop.
Is this a good strategy? Is there a more idiomatic way to do this in TensorFlow?
(The specific project I'm working on is the TensorFlow RNN Language Model, which is a generalization of the RNN tutorial in the TensorFlow documentation.)
As I understand it, the idiomatic solution is to merge all summaries (in case loss is not your only summary) and then create a tf.train.SummaryWriter separately for your training and validation sets. Then call add_summary on the validation SummaryWriter for each (periodic) validation iteration.
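For instance, a minimal sketch of that two-writer pattern (TF1-style API via tf.compat.v1, where tf.train.SummaryWriter was later renamed tf.summary.FileWriter; the loss values fed here are placeholders for real training and validation losses):
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
loss = tf.placeholder(tf.float32, name='loss')
tf.summary.scalar('loss', loss)
merged = tf.summary.merge_all()
with tf.Session() as sess:
    train_writer = tf.summary.FileWriter('logs/train', sess.graph)
    valid_writer = tf.summary.FileWriter('logs/validation')
    for step in range(1000):
        summary = sess.run(merged, feed_dict={loss: 1.0 / (step + 1)})
        train_writer.add_summary(summary, step)      # training curve
        if step % 100 == 0:                          # periodic validation pass
            summary = sess.run(merged, feed_dict={loss: 1.5 / (step + 1)})
            valid_writer.add_summary(summary, step)  # same tag, separate writer
    train_writer.close()
    valid_writer.close()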