I want to transpose the output shape of a graph in saved_model or tflite format.
The current form looks like this (.tflite):
[model screenshot omitted]
I want to change the output of "partitionedcall:1" to [1,32,160,160]; in this case, do I need to add a transpose layer after the current output layer?
If you don't mind, can you tell me how to do it?
Thank you.
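One possible approach (a sketch, not a tested recipe for your exact graph): wrap the model in a new Keras model that appends a transpose, then convert the wrapped model to TFLite. The `base_model` below is a dummy stand-in for your saved_model, built only so its output has a trailing channel axis to transpose.

```python
import tensorflow as tf

# Dummy stand-in for your saved_model: output shape (None, 160, 160, 32).
base_inputs = tf.keras.Input(shape=(160, 160, 3))
base_outputs = tf.keras.layers.Conv2D(32, 3, padding="same")(base_inputs)
base_model = tf.keras.Model(base_inputs, base_outputs)

# Append a transpose so the output becomes (None, 32, 160, 160) (NHWC -> NCHW).
transposed = tf.keras.layers.Lambda(
    lambda t: tf.transpose(t, perm=[0, 3, 1, 2])
)(base_model.output)
wrapped = tf.keras.Model(base_model.input, transposed)

# Convert the wrapped model instead of the original.
converter = tf.lite.TFLiteConverter.from_keras_model(wrapped)
tflite_bytes = converter.convert()
```

If your model lives only as a SavedModel directory, the same idea applies: load it, call it inside a new model or `tf.function` that transposes the output, and convert that.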
I'm building a model using Tensorflow where the input is a slice of the output. Think of the output layer as a 2D array. One row of that array is the input data. The neural network currently tries to connect the input to the output using a mean-squared error loss function. It's doing a fairly good job, but the accuracy needs to be improved a little.
To do that, I'm trying to add another physics-based loss function. If I can have the network place the input slice in its correct location in the output, that would greatly simplify the problem as each row in the output 2D array depends on the two rows above it.
I hope this makes sense.
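A composite loss like the one described can be sketched as MSE plus a weighted penalty term. The "physics" rule below (each row should match the mean of the two rows above it) is purely a placeholder assumption, standing in for whatever relation the real physics dictates:

```python
import tensorflow as tf

def physics_informed_loss(y_true, y_pred, weight=0.1):
    # Standard data-fitting term.
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    # Placeholder physics term: row i should be the mean of rows i-2 and i-1.
    predicted = y_pred[:, 2:, :]
    expected = 0.5 * (y_pred[:, :-2, :] + y_pred[:, 1:-1, :])
    physics = tf.reduce_mean(tf.square(predicted - expected))
    return mse + weight * physics

# Example on dummy (batch, rows, cols) tensors:
y_true = tf.random.normal((4, 10, 8))
y_pred = tf.random.normal((4, 10, 8))
loss = physics_informed_loss(y_true, y_pred)
```

The `weight` hyperparameter trades off data fit against the physics constraint and would need tuning.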
I am trying to make a model that is able to extract human speech from a recording. To do this I have loaded 1500 noisy files (some of these files are the exact same but with different speech to noise ratios (-1,1,3,5,7). I want my model to take in a wav file as a one dimensional array/tensor along the horizontal axis, and output a one dimensional array/tensor that I could then play.
Currently, this is how my data is set up:
This is how my model is set up:
An error I am having is that I am not able to make a prediction, and when I am, I get an array/tensor with only one element instead of one with 220500. The reason behind 220500 is that it is the length of the background noise that was overlapped onto the clean speech, so every file is this length.
I have been messing around with layers.Input because I want my model to take in every row as one "object"/audio clip. I don't know if that is what's happening, because the only "successful" prediction I get is an error.
The model you built expects data in the format (batch_size, 1, 220500), since in the input layer you declared an input_shape of (1, 220500).
For the data you are using you should just use an input_shape of (220500,).
Another problem you might encounter is that you are using a single unit in the last layer. This way the output of the model will be (batch_size, 1), but you need (batch_size, 220500) as an output.
For this last problem, I suggest using a generative recurrent neural network.
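The shape fixes above can be sketched as follows. The hidden layer size is illustrative, and a plain Dense head is used for simplicity rather than the recurrent network the answer suggests; only the input and output shapes are the point:

```python
import tensorflow as tf

# Input is a flat 1-D sample of length 220500, not (1, 220500).
inputs = tf.keras.Input(shape=(220500,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
# Output has 220500 units, not 1, so each prediction is a full clip.
outputs = tf.keras.layers.Dense(220500)(x)
model = tf.keras.Model(inputs, outputs)
```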
I would like to change the input and output size of a TensorFlow convolutional model that I am importing from TensorFlow Hub.
Does anyone know the best way to do this? If I could convert the model to Keras format I think it would be easier, but I'm not succeeding at that either.
This is the model https://tfhub.dev/intel/midas/v2_1_small/1
The format of the input is determined by the publisher of the model. Some models could be flexible on the dimensions of the input and some require input with very specific dimensions. In that case, the best way would be to resize the input as needed before feeding it to the model.
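Resizing before feeding can be as simple as a `tf.image.resize` call. The target size (256, 256) here is an assumption; check the model's page on TensorFlow Hub for the dimensions it actually requires:

```python
import tensorflow as tf

# An image of arbitrary size (batch, height, width, channels).
image = tf.random.uniform((1, 480, 640, 3))

# Resize to the dimensions the hub model expects (assumed 256x256 here).
resized = tf.image.resize(image, (256, 256))

# outputs = hub_model(resized)  # then feed the resized batch to the model
```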
This page (https://keras.io/examples/mnist_siamese/) highlights how we train a Siamese model. The model will output a score given two input images. What I want to do is that, during inference, given an image, I want it to return a 128-dimension vector that represents the image. How should I achieve that?
If you run model.summary() you will see a summary of all model layers. In your case 'model' appears to be the layer of interest. Then you can select the layer that contains the 128D output using the get_layer() method. Finally you can extract the output as below.
model.get_layer('model').output
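A minimal sketch of the idea: build a new Model from the shared branch's input to its 128-D output and call it as a feature extractor. `base_network` below is a toy stand-in for the branch named 'model' in the Siamese summary; on the real model you would use `siamese.get_layer('model')` instead:

```python
import tensorflow as tf

# Toy stand-in for the shared Siamese branch ending in a 128-D layer.
inputs = tf.keras.Input(shape=(28, 28))
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(128, activation="relu")(x)
base_network = tf.keras.Model(inputs, x, name="model")

# Feature extractor from the branch's input to its 128-D output.
extractor = tf.keras.Model(base_network.input, base_network.output)
vector = extractor(tf.zeros((1, 28, 28)))  # one 128-D vector per image
```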
I am using the tf.keras API and I want my Model to take input with shape (None,), None is batch_size.
The shape of keras.layers.Input() doesn't include batch_size, so I think it can't be used.
Is there a way to achieve my goal? I prefer a solution without tf.placeholder, since it is deprecated.
By the way, my model is a sentence embedding model, so I want the input to be something like ['How are you.','Good morning.'].
======================
Update:
Currently, I can create an input layer with layers.Input(dtype=tf.string, shape=1), but this needs my input to be something like [['How are you.'],['Good morning.']]. I want my input to have only one dimension.
Have you tried tf.keras.layers.Input(dtype=tf.string, shape=())?
If you wanted to set a specific batch size, tf.keras.Input() does actually include a batch_size parameter. But the batch size is presumed to be None by default, so you shouldn't even need to change anything.
Now, it seems like what you actually want is to be able to provide samples (sentences) of variable length. Good news! The tf.keras.layers.Embedding layer allows you to do this, although you'll have to generate an encoding for your sentences first. The Tensorflow website has a good tutorial on the process.
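The `shape=()` suggestion can be sketched as below: each sample is a single scalar string, so the batch is the 1-D list of sentences the asker wants. The Lambda body (string length) is only a placeholder computation to make the model runnable; a real sentence-embedding model would encode the strings instead:

```python
import tensorflow as tf

# shape=() means each sample is one scalar string; the full input
# tensor then has shape (batch_size,), i.e. a 1-D batch of sentences.
inp = tf.keras.Input(shape=(), dtype=tf.string)
out = tf.keras.layers.Lambda(tf.strings.length)(inp)  # placeholder op
model = tf.keras.Model(inp, out)

# Feed a plain 1-D list of sentences, no extra nesting needed.
lengths = model(tf.constant(['How are you.', 'Good morning.']))
```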