I am trying to apply style transfer to a webcam capture. I am loading a frozen model I previously trained in Python and converted for TFJS. I log the output tensor's shape and rank in the function below, and I am having issues in its last line, when I try to apply tf.browser.toPixels:
function predictWebcam() {
  tf.tidy(() => {
    loadmodel().then(model => {
      // let tensor = model.predict(tf.expandDims(tf.browser.fromPixels(video)));
      let tensor = model.predict(tf.browser.fromPixels(video, 3).toFloat().div(tf.scalar(255)).expandDims());
      console.log('shape', tensor.shape);
      console.log('rank', tensor.rank);
      tf.browser.toPixels(tensor, resultImage);
    });
  });
}
I get this error. I cannot figure out how to reshape or modify the tensor to get an image out of it:
Uncaught (in promise) Error: toPixels only supports rank 2 or 3 tensors, got rank 4.
Maybe I have to replicate the tensor_to_image function from Python in JavaScript, as in the example on the website.
Thanks in advance!
Given your tensor is [1, 15, 20, 512], you can remove any dims with a value of 1 (the same dim you added by running expandDims) by running

const squeezed = tf.squeeze(tensor);

That will give you a shape of [15, 20, 512]. But that still doesn't make sense as an image: what are the width, height, and channels (e.g., RGB) here? I think that model output needs additional post-processing; it is not an image.
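If you just want to eyeball one channel of that activation, here is a minimal sketch (assuming `tensor` is the rank-4 model output and `resultImage` is your canvas; toPixels also accepts rank-2 tensors with float values in [0, 1]):

// Sketch only: visualize channel 0 of the [1, 15, 20, 512] output as grayscale
const squeezed = tf.squeeze(tensor);                              // [15, 20, 512]
const channel = squeezed.slice([0, 0, 0], [15, 20, 1]).squeeze(); // [15, 20]
const min = channel.min();
const normalized = channel.sub(min).div(channel.max().sub(min));  // scale to [0, 1]
tf.browser.toPixels(normalized, resultImage);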
I created my first model, but the predictions are not in the right format. How do I remove a dimension from my prediction output (or change my last layer to get the correct one)?
const actualYs = [1, 2, 3]          // the shape of my Y values
const predictions = [[1], [2], [3]] // the shape of my predictions
// My last layer looks like this:
model.add(tf.layers.dense({units: 1, useBias: true}))
From my limited understanding, I could maybe remove a dimension from the predictions, or change the last layer? But I already set units to 1, so I'm not sure what else I could set it to.
In case it helps, this is my actual console.log output:
MY Y VALUES
Tensor
[0.0862738, 0.0862553, 0.0861815, ..., 0.0054516, 0.0043004, 0.0037461]
PREDICTIONS
Tensor
[[0.1690691],
[0.1659686],
[0.1698797],
...,
[0.1118171],
[0.1092742],
[0.1096415]]
I want predictions to look like my actual Y values.
Thanks in advance.
Either reshape or squeeze can be used:
const x = tf.tensor([[1], [2], [3]]).reshape([-1]); // shape [3]
// or
const x = tf.tensor([[1], [2], [3]]).squeeze();     // shape [3]
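Applied to the question's setup, a minimal sketch (assuming `xs` is the input tensor you call predict on):

// Squeeze the [n, 1] prediction down to [n] so it lines up with actualYs
const preds = model.predict(xs).squeeze();
preds.print(); // Tensor [0.1690691, 0.1659686, ...]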
I am using p5 to return the vector path of a drawn line. All the vectors in the line are pushed into an array that holds them. I'm trying to use this array as a tensor, but I keep getting an error saying:
Error when checking model input: the Array of Tensors that you are passing to your model is not the size the model expected. Expected to see 1 Tensor(s), but instead got the following list of Tensor(s):
When I inspect the array in the dev tools, each vector is printed like this:
0: Vector {p5: p5, x: 0.5150300601202404, y: -0.25450901803607207, z: 0}
Could it be the p5 reference in the vector objects that's giving me the error? Here's my model and fit code:
let vectorpath = []; // vector path array

// model, setting layers till next '-----'
const model = tf.sequential();
model.add(tf.layers.dense({units: 4, inputShape: [2, 2], activation: 'sigmoid'}));
model.add(tf.layers.dense({units: 2, activation: 'sigmoid'}));
console.log(JSON.stringify(model.outputs[0].shape));
model.weights.forEach(w => {
  console.log(w.name, w.shape);
});
// -----

// this is under the draw function so it is continually updated
const labels = tf.randomUniform([0, 1]);

function onBatchEnd(batch, logs) {
  console.log('Accuracy', logs.acc);
}

model.fit(vectorpath, labels, {
  epochs: 5,
  batchSize: 32,
  callbacks: {onBatchEnd}
}).then(info => {
  console.log('Final accuracy', info.history.acc);
});
What could be causing the error, and how can I fix it? The question's pretty vague, but I'm really just not sure where to start.
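One illustration of what the error message means (a hypothetical sketch, not a tested fix): model.fit expects tensors, not an array of p5.Vector objects, so the path would first have to be converted to plain numbers, e.g.:

// Hypothetical sketch: strip the p5.Vector wrappers down to plain [x, y]
// pairs before handing the data to the model
const xs = tf.tensor2d(vectorpath.map(v => [v.x, v.y])); // shape [n, 2]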
I'm having trouble with the transition from TensorFlow in Python to TensorFlow.js with regard to image preprocessing.
In Python:
# assuming the Keras preprocessing helpers
from tensorflow.keras.preprocessing import image
import numpy as np

single_coin = r"C:\temp\coins\20Saint-03o.jpg"
img = image.load_img(single_coin, target_size=(100, 100))
array = image.img_to_array(img)
x = np.expand_dims(array, axis=0)
vimage = np.vstack([x])
prediction = model.predict(vimage)
print(prediction[0])
I get the correct result:
[2.8914417e-05 3.5085387e-03 1.9252902e-03 6.2635467e-05 3.7389682e-03
1.2983804e-03 7.4157811e-04 1.4608903e-04 2.7099697e-06 1.1844193e-02
1.3398369e-04 9.3798796e-03 9.7308388e-05 7.3931034e-05 1.9695959e-04
9.6496813e-05 4.2653349e-04 8.7305409e-05 8.1476872e-04 4.9094640e-04
1.3498703e-04 9.6476960e-01]
However, in TensorFlow.js, with the same image after the following preprocessing function:
function preprocess(img) {
  let tensor = tf.browser.fromPixels(img);
  const resized = tf.image.resizeBilinear(tensor, [100, 100]).toFloat();
  const offset = tf.scalar(255.0);
  const normalized = tf.scalar(1.0).sub(resized.div(offset));
  const batched = normalized.expandDims(0);
  return batched;
}
I get the following result:
[0.044167134910821915,
0.04726826772093773,
0.04546305909752846,
0.04596292972564697,
0.044733788818120956,
0.04367975518107414,
0.04373137652873993,
0.044592827558517456,
0.045657724142074585,
0.0449688546359539,
0.04648510739207268,
0.04426411911845207,
0.04494940862059593,
0.0457320399582386,
0.045905906707048416,
0.04473186656832695,
0.04691491648554802,
0.04441603645682335,
0.04782886058092117,
0.04696653410792351,
0.045027654618024826,
0.04655187949538231]
I'm obviously not translating the preprocessing appropriately. Does anyone see what I'm missing?
There is no normalization applied in the Python code, but there is normalization in the JS code. Either apply the same normalization in Python as well, or remove the normalization from the JS code.
A similar answer has been given here.
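A minimal sketch of the JS function with the extra inversion/normalization removed, so it matches the Python pipeline (which feeds raw 0-255 pixel values straight to model.predict):

function preprocess(img) {
  const tensor = tf.browser.fromPixels(img);
  const resized = tf.image.resizeBilinear(tensor, [100, 100]).toFloat();
  return resized.expandDims(0); // raw 0-255 values, as in the Python code
}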
Simple question, and I'm sure the answer is straightforward, but I'm really struggling to match the model's shape with the tensor I'm fitting into it.
This simple code:
let tf = require('@tensorflow/tfjs-node');

let features = {
  x: [1, 2, 3, 4, 5, 6, 7, 8, 9],
  y: [1, 2, 3, 4, 5, 6, 7, 8, 9]
};

let tensorfeature = tf.tensor2d(Object.values(features));
console.log(tensorfeature.shape);

const model = tf.sequential();
model.add(tf.layers.dense({
  inputShape: tensorfeature.shape,
  units: 1
}));

const optimizer = tf.train.sgd(0.005);
model.compile({optimizer: optimizer, loss: 'meanAbsoluteError'});

model.fit(tensorfeature, {epochs: 5});
Results in Error: Error when checking input: expected dense_Dense1_input to have 3 dimension(s). but got array with shape 2,9
I tried multiple things with reshape, slice, etc., with no luck. Can someone point out what exactly is wrong?
model.fit takes at least two parameters, x and y, which are either tensors or arrays of tensors. The config object is the third parameter.
Also, the feature tensor (tensorfeature) passed as an argument to model.fit should be one dimension higher than the inputShape of the model. Since tensorfeature.shape is used as the inputShape, if we want to train the model with tensorfeature, its dimension should be expanded. This can be done using reshape or expandDims:
model.fit(tensorfeature.expandDims(0), labels) // labels being the matching y tensor
// or possibly
model.fit(tensorfeature.reshape([1, ...tensorfeature.shape]), labels)
This shape mismatch between the model and the training data has been discussed here and there.
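Putting it together, a minimal corrected sketch (assuming each of the 9 values in features.x is one single-feature sample and features.y holds the matching labels):

let tf = require('@tensorflow/tfjs-node');

const xs = tf.tensor2d([1, 2, 3, 4, 5, 6, 7, 8, 9], [9, 1]); // 9 samples, 1 feature
const ys = tf.tensor2d([1, 2, 3, 4, 5, 6, 7, 8, 9], [9, 1]); // 9 labels

const model = tf.sequential();
model.add(tf.layers.dense({inputShape: [1], units: 1}));
model.compile({optimizer: tf.train.sgd(0.005), loss: 'meanAbsoluteError'});
model.fit(xs, ys, {epochs: 5}).then(info => console.log(info.history.loss));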
I'm writing a custom TensorFlow op following the tutorial, and I'm having trouble understanding how to read from and write to Tensors.
Let's say I have a Tensor in my OpKernel that I get from

const Tensor& values_tensor = context->input(0);

(where context is the OpKernelContext* passed to Compute). If that Tensor has shape, say, [2, 10, 20], how can I index into it (e.g. auto x = values_tensor[1, 4, 12], etc.)? Equivalently, if I have
Tensor* output_tensor = NULL;
OP_REQUIRES_OK(context, context->allocate_output(
    0,
    {batch_size, value_len - window_size, window_size},
    &output_tensor));
How can I assign to output_tensor, like output_tensor[1, 2, 3] = 11, etc.?
Sorry for the dumb question, but the docs are really tripping me up here, and the examples in the TensorFlow kernel code for built-in ops somehow obfuscate this to the point that I get very confused :)
Thank you!
The easiest way to read from and write to tensorflow::Tensor objects is to convert them to an Eigen tensor, using the tensorflow::Tensor::tensor<T, NDIMS>() method. Note that you have to specify the (C++) type of the elements in the tensor as the template parameter T.
For example, to read a particular value from a DT_FLOAT tensor:

const Tensor& values_tensor = context->input(0);
auto x = values_tensor.tensor<float, 3>()(1, 4, 12);
To write a particular value to a DT_FLOAT tensor:

Tensor* output_tensor = ...;
output_tensor->tensor<float, 3>()(1, 2, 3) = 11.0;
There are also convenience methods for accessing a scalar, vector, or matrix (scalar<T>(), vec<T>(), and matrix<T>()).