This seems so basic, but for some reason, I can't find any clear documentation on it.
So let's say I know my ONNX model wants an input of shape [245, 245, 3]. The second argument in the constructor Ort::Value::CreateTensor wants a linear array of the data to fill the tensor. What is the order of that linear array?
For example, are the first three values in the linear array the BGR values for the 0-th pixel in the image, or are they the B-channel values of the first three pixels in the image? And as for the ordering of pixels within the image: is it row-major?
The short answer is: ONNX only supports NCHW.
As a reference, please check the section "My converted TensorFlow model is slow - why?" on onnxruntime.ai. This is the only "official" material talking about the data format that I have found so far.
It's row-major. The format of inputs in ONNX is NCHW, where C is the number of channels; in this case C = 3. The ordering of the channels (BGR or RGB) depends on the model. For example, the YOLO model takes an image as 3 (RGB) x 416 px x 416 px.
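To make the layout concrete, here is a minimal NumPy sketch of the buffer ONNX Runtime expects, assuming the model actually wants NCHW (i.e. [1, 3, 245, 245]) and the image is loaded as HWC; the pixel values are random placeholders.
import numpy as np

img_hwc = np.random.randint(0, 256, size=(245, 245, 3), dtype=np.uint8)  # H x W x C, e.g. as loaded by OpenCV

# Reorder to CHW, add the batch dimension -> NCHW, and cast to float.
img_nchw = np.transpose(img_hwc, (2, 0, 1))[np.newaxis, ...].astype(np.float32)

# The flat buffer handed to Ort::Value::CreateTensor is this array in row-major
# (C-contiguous) order: all of channel 0 row by row, then channel 1, then channel 2.
flat = np.ascontiguousarray(img_nchw).ravel()
print(flat.shape)  # (180075,) == 1 * 3 * 245 * 245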
I am trying PixelCNN, which is an auto-regressive generative model. After training, the model receives an all-zero tensor and generates the next pixel starting from the top-left corner. Now that the model parameters are fixed, can the model only produce the same output starting from the same zero tensor? How can I produce different samples?
Yes, you always provide an all-zero tensor. However, for PixelCNN each pixel location is represented by a distribution. So when you do the forward pass, you sample from that distribution at the end. That is how the pixel values come out different on each run.
This is of course because PixelCNN is a probabilistic neural network. So the pixels, as mentioned before, are all represented by conditional probability distributions produced by the layers below, not just point estimates.
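To make that concrete, here is a minimal sketch of the sampling loop in NumPy. The pixelcnn_logits function below is a random stand-in for your trained network's forward pass, which would return per-pixel logits over the 256 intensity values conditioned on the pixels generated so far.
import numpy as np

rng = np.random.default_rng()
H, W = 28, 28

def pixelcnn_logits(image):
    # Hypothetical stand-in for the trained PixelCNN forward pass.
    # A real model returns logits conditioned on the already-generated pixels.
    return rng.normal(size=(H, W, 256))

image = np.zeros((H, W), dtype=np.int64)  # the all-zero starting canvas
for i in range(H):
    for j in range(W):
        logits = pixelcnn_logits(image)[i, j]   # distribution for pixel (i, j)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        image[i, j] = rng.choice(256, p=probs)  # sample, don't take the argmax

Because the last line draws from the distribution rather than taking the most likely value, the same fixed weights and the same all-zero start still produce a different image every run.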
When implementing reinforcement learning with TensorFlow, the inputs are black/white images. Each pixel can be represented as a single bit (1/0).
Can I give the data directly to TensorFlow, with each bit as a feature? Or do I have to expand the bits to bytes before sending them to TensorFlow? I'm new to TensorFlow, so some code example would be nice.
Thanks
You can directly load the image data as you normally would; the image being binary has no effect other than the input channel width becoming 1.
Whenever you put an image through a convnet, each output filter generally learns features across all the input channels, so there is a separate 2D kernel defined for each input-channel/output-channel combination; in the case of a binary image, the first layer has only one input channel.
Each layer is thus defined by its number of filters, with one 2D kernel per input channel inside each filter, so you will have weights/parameters equal to input_channels * number_of_filters * filter_dims; here, for the first layer, input_channels is 1.
Since you asked for some sample code:
Let your image be in a tensor X; then simply use
X_out = tf.layers.conv2d(X, filters=6, kernel_size=[height, width])  # tf.layers.conv2d takes filters/kernel_size; tf.nn.conv2d would need an explicit filter tensor
After that you can apply an activation; this will make your output image have 6 channels. If you face any problem or have some doubts, feel free to comment. For theoretical clarification, check out https://www.coursera.org/learn/convolutional-neural-networks/lecture/nsiuW/one-layer-of-a-convolutional-network
Edit
Since the question was about a simple neural net, not a convnet, here is the code for that.
X_train_orig is the variable in which the images are stored, each at (n_x, n_x) resolution; n_x is used later.
You will need to flatten the input.
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
This first flattens each image into a row and then transposes the result so that each example is arranged as a column.
Then you will create the placeholder tensor X as:
X = tf.placeholder(tf.float32, [n_x*n_x, None])  # feed the bits as 0./1. floats so they can be multiplied with the weights; the first dimension matches your input layer
Let W1 and b1 be the weight and bias respectively.
Z1 = tf.add(tf.matmul(W1, X), b1)  # linear transformation step
A1 = tf.nn.relu(Z1)  # activation step
And you keep on building your graph from there. I think that answers your question; if not, let me know.
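If it helps, here is a minimal end-to-end sketch putting the pieces above together (TF1-style, as in the snippets; n_x, the 25 hidden units and the random 0/1 batch are illustrative choices, not values from the question).
import numpy as np
import tensorflow as tf

n_x = 84                                      # illustrative side length of the binary image
X = tf.placeholder(tf.float32, [n_x * n_x, None])

W1 = tf.get_variable("W1", [25, n_x * n_x], initializer=tf.random_normal_initializer(stddev=0.01))
b1 = tf.get_variable("b1", [25, 1], initializer=tf.zeros_initializer())

Z1 = tf.add(tf.matmul(W1, X), b1)             # linear transformation step
A1 = tf.nn.relu(Z1)                           # activation step

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.randint(0, 2, size=(n_x * n_x, 32)).astype(np.float32)  # 32 images of 0/1 pixels
    print(sess.run(A1, feed_dict={X: batch}).shape)  # (25, 32)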
I'm working on a problem using Keras that has been presenting me with issues:
My X data is all of shape (num_samples, 8192, 8), but my Y data is of shape (num_samples, 4), where 4 is a one-hot encoded vector.
Both X and Y data will be run through LSTM layers, but the layers are rejecting the Y data because it doesn't match the shape of the X data.
Is padding the Y data with 0s so that it matches the dimensions of the X data unreasonable? What kind of effects would that have? Is there a better solution?
Edited for clarification:
As requested, here is more information:
My Y data represents the expected output of passing the X data through my model. This is my first time working with LSTMs, so I don't have an architecture in mind, but I'd like to use an architecture that works well with classifying long (8192-length) sequences of words into one of several categories. Additionally, the dataset that I have is of an immense size when fed through an LSTM, so I'm currently using batch-training.
Technologies being used:
Keras (Tensorflow Backend)
TL;DR Is padding one tensor with zeroes in all dimensions to match another tensor's shape a bad idea? What could be a better approach?
First of all, let's make sure your representation is actually what you think it is; the input to an LSTM (or any recurrent layer, for that matter) must have shape (timesteps, features) per sample, i.e. if you have 1000 training samples, each consisting of 100 timesteps with 10 values per timestep, each input sample has shape (100, 10). Therefore I assume from your question that each input sample in your X set has 8192 steps and 8 values per step. Great; a single LSTM layer can iterate over these and produce 4-dimensional representations with absolutely no problem, just like so:
myLongInput = Input(shape=(8192,8,))
myRecurrentFunction = LSTM(4)
myShortOutput = myRecurrentFunction(myLongInput)
myShortOutput.shape
TensorShape([Dimension(None), Dimension(4)])
I assume your problem stems from trying to apply yet another LSTM on top of the first one; the next LSTM expects a tensor that has a time dimension, but your output has none. If that is the case, you'll need to let your first LSTM also output the intermediate representations at each time step, like so:
myNewRecurrentFunction = LSTM(4, return_sequences=True)
myLongOutput = myNewRecurrentFunction(myLongInput)
myLongOutput.shape
TensorShape([Dimension(None), Dimension(None), Dimension(4)])
As you can see, the new output is now a third-order tensor, with the second dimension being the (yet unassigned) timesteps. You can repeat this process until your final output, where you usually don't need the intermediate representations but only the last one. (Side note: make sure to set the activation of your last layer to softmax if your output is in one-hot format.)
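To tie this together, here is a minimal sketch of such a stacked model in the Keras functional API; only the input shape (8192, 8) and the 4-way softmax output come from your question, while the hidden sizes are illustrative guesses.
from keras.layers import Input, LSTM, Dense
from keras.models import Model

inputs = Input(shape=(8192, 8))
x = LSTM(64, return_sequences=True)(inputs)   # keep the time dimension so another LSTM can follow
x = LSTM(32)(x)                               # the last LSTM returns only its final state
outputs = Dense(4, activation='softmax')(x)   # one-hot targets -> softmax output

model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(X, Y, batch_size=..., epochs=...)  # Y stays (num_samples, 4); no padding needed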
On to your original question, zero-padding has very little negative impact on your network. The network will strain itself a bit in the beginning trying to figure out the concept of the additional values you have just thrown at it, but will very soon be able to learn they're meaningless. This comes at a cost of a larger parameter space (therefore more time and memory complexity), but doesn't really affect predictive power most of the time.
I hope that was helpful.
I am trying to model a classifier whose input contains multi-dimensional features. Does anyone know of a dataset that contains multi-dimensional features?
Let's say, for example: in the MNIST data we have the pixel location as the feature, and the feature value is a one-dimensional greyscale value that varies from 0 to 255. But if we consider a colour image, a single greyscale value is not sufficient; we still take the pixel location as the feature, but the feature value is now three-dimensional (R (0-255) as one dimension, G (0-255) as the second and B (0-255) as the third). So in this case, how can one solve it using a feedforward neural network?
Small suggestions are also accepted.
The same way.
If you plug the pixels into your network directly, just reshape the tensor to have length H*W*3.
If you use convolutions, note that the last parameter is the number of input/output channels; just make sure the first convolution uses 3 as its input.
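As a small illustration of the first option, here is a sketch of the reshape in NumPy; the image size and batch size are made up.
import numpy as np

H, W = 32, 32
batch = np.random.randint(0, 256, size=(16, H, W, 3), dtype=np.uint8)  # 16 RGB images

# Flatten each image into a single feature vector of length H*W*3,
# so the R, G and B values are simply three input features per pixel location.
features = batch.reshape(batch.shape[0], H * W * 3).astype(np.float32) / 255.0
print(features.shape)  # (16, 3072)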
I have an image data tensor with shape B*H*W*C and a position tensor with shape B*H*W*2. The values in the position tensor are pixel coordinates, and I want to sample pixels from the image data tensor according to these coordinates. I have tried doing this by reshaping the tensor into a one-dimensional tensor, but I find that really inconvenient. I wonder whether I could implement it with a more convenient approach, like a matrix mapping (e.g. remap in OpenCV).
I would first ask whether you are sure the position matrix isn't redundant. If its entries simply correspond to the pixel locations in the image array, then however you would access the position matrix, you could apply the same indexing directly to the image data.
Perhaps as a starting point, running
sess = tf.Session()
np_img, np_pos = sess.run([tf_img, tf_pos], feed_dict={...})
will convert tensors to numpy arrays, which may make your operations easier.
Otherwise, a 1D tensor isn't that bad, and there are TF functions that make reshaping easy.
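If you do want to stay in TensorFlow, one option worth trying is tf.gather_nd; here is a sketch (TF1-style, matching the session usage above). It assumes the position tensor stores (row, col) coordinates, so swap the last axis first if yours stores (x, y).
import tensorflow as tf

B, H, W, C = 2, 4, 5, 3
tf_img = tf.placeholder(tf.float32, [B, H, W, C])
tf_pos = tf.placeholder(tf.int32, [B, H, W, 2])      # (row, col) for every output pixel

# Prepend the batch index so each lookup becomes (batch, row, col).
batch_idx = tf.reshape(tf.range(B), [B, 1, 1, 1])
batch_idx = tf.tile(batch_idx, [1, H, W, 1])          # shape (B, H, W, 1)
indices = tf.concat([batch_idx, tf_pos], axis=-1)     # shape (B, H, W, 3)

sampled = tf.gather_nd(tf_img, indices)               # shape (B, H, W, C), no reshaping to 1D needed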