TensorFlow retrained graph in C# (TensorFlowSharp) - tensorflow

I am trying to use a retrained Inception model with TensorFlowSharp in Unity.
The retrained model was prepared with optimize_for_inference and works like a charm in Python.
But it is pretty inaccurate in C#.
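For reference, the graph was optimized roughly like this (just a sketch from memory; the file names and node names here are placeholders rather than the exact ones I used):
python -m tensorflow.python.tools.optimize_for_inference \
  --input=retrained_graph.pb \
  --output=optimized_graph.pb \
  --frozen_graph=True \
  --input_names=Mul \
  --output_names=final_result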
The code works like this:
First I get the picture:
// WebCamTexture encoded to a picture in JPG
var pic = _texture.EncodeToJpg();
// add the picture to the queue for the object detection thread
_detectedObjects.addTens(pic);
After that, a thread handles each collected picture:
public void HandlePicture(byte[] picture)
{
    var tensor = ImageUtil.CreateTensorFromImageFile(picture);

    var runner = session.GetRunner();
    runner.AddInput(g_input, tensor).Fetch(g_output);
    var output = runner.Run();

    // pick the label with the highest probability
    var bestIdx = 0;
    float best = 0;
    var result = output[0];
    var rshape = result.Shape;
    var probabilities = ((float[][])result.GetValue(jagged: true))[0];
    for (int r = 0; r < probabilities.Length; r++)
    {
        if (probabilities[r] > best)
        {
            bestIdx = r;
            best = probabilities[r];
        }
    }
    Debug.Log("Tensorflow thinks this is: " + labels[bestIdx] + " Prob : " + best * 100);
}
So my guesses are:
1. It has something to do with retrained graphs (because I can't find any application or test where one is used and working).
2. It has something to do with how I transform the picture into a tensor (if that is the problem, I could use some help there; the code is further down).
To transform the picture I am also using a graph, like in the TensorFlowSharp example:
public static class ImageUtil
{
    // Convert the image in filename to a Tensor suitable as input to the Inception model.
    public static TFTensor CreateTensorFromImageFile(byte[] contents, TFDataType destinationDataType = TFDataType.Float)
    {
        // DecodeJpeg uses a scalar String-valued tensor as input.
        var tensor = TFTensor.CreateString(contents);
        TFGraph graph;
        TFOutput input, output;
        // Construct a graph to normalize the image
        ConstructGraphToNormalizeImage(out graph, out input, out output, destinationDataType);
        // Execute that graph to normalize this one image
        using (var session = new TFSession(graph))
        {
            var normalized = session.Run(
                inputs: new[] { input },
                inputValues: new[] { tensor },
                outputs: new[] { output });
            return normalized[0];
        }
    }

    // The Inception model takes as input the image described by a Tensor in a very
    // specific normalized format (a particular image size, shape of the input tensor,
    // normalized pixel values etc.).
    //
    // This function constructs a graph of TensorFlow operations which takes as
    // input a JPEG-encoded string and returns a tensor suitable as input to the
    // Inception model.
    private static void ConstructGraphToNormalizeImage(out TFGraph graph, out TFOutput input, out TFOutput output, TFDataType destinationDataType = TFDataType.Float)
    {
        // Some constants specific to the pre-trained model at:
        // https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
        //
        // - The model was trained with images scaled to 224x224 pixels.
        // - The colors, represented as R, G, B in 1 byte each, were converted to
        //   float using (value - Mean) / Scale.
        const int W = 299;
        const int H = 299;
        const float Mean = 128;
        const float Scale = 1;
        graph = new TFGraph();
        input = graph.Placeholder(TFDataType.String);
        output = graph.Cast(graph.Div(
            x: graph.Sub(
                x: graph.ResizeBilinear(
                    images: graph.ExpandDims(
                        input: graph.Cast(
                            graph.DecodeJpeg(contents: input, channels: 3), DstT: TFDataType.Float),
                        dim: graph.Const(0, "make_batch")),
                    size: graph.Const(new int[] { W, H }, "size")),
                y: graph.Const(Mean, "mean")),
            y: graph.Const(Scale, "scale")), destinationDataType);
    }
}
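One thing I am not sure about (it might even be the cause of the inaccuracy, but I have not verified it): the constants above still mix the inception5h example values with the Inception v3 input size. If the graph was retrained with the stock retrain.py for Inception v3, I believe the expected preprocessing is (value - 128) / 128 at 299x299, i.e. something like:
// Assumed values for a retrain.py Inception v3 graph; adjust to whatever
// input_mean/input_std the model was actually retrained with.
const int W = 299;
const int H = 299;
const float Mean = 128;
const float Scale = 128; // instead of 1, so pixel values end up roughly in [-1, 1]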

Related

How do I access all the pixels for a Raster Source

I am attempting to calculate some statistics for pixel values using OpenLayers 6.3.1 and I am having an issue iterating over all pixels. I have read the docs for the pixels array that gets passed to the operation callback, and they state:
For pixel type operations, the function will be called with an array of pixels, where each pixel is an array of four numbers ([r, g, b, a]) in the range of 0 - 255. It should return a single pixel array.
I have taken this to mean that the array passed contains all the pixels, but everything I do seems to prove that I only get the current pixel to work on.
if (this.rasterSource == null) {
    this.rasterSource = new Raster({
        sources: [this.imageLayer],
        operation: function (pixels, data) {
            data['originalPixels'] = pixels;
            if (!isSetUp) {
                // originalPixels = pixels as number[][];
                // const originalPixels = Array.from(pixels as number[][]);
                // let originals = generateOriginalHistograms(pixels as number[][]);
                isSetUp = true;
            }
            // console.log(pixels[0]);
            let pixel = pixels[0];
            pixel[data['channel']] = data['value'];
            return pixel;
        },
        lib: {
            isSetUp: isSetUp,
            numBins: numBins,
            // originalPixels: originalPixels,
            // originalRed: originalRed,
            // originalGreen: originalGreen,
            // originalBlue: originalBlue,
            generateOriginalHistograms: generateOriginalHistograms,
        }
    });
    this.rasterSource.on('beforeoperations', function(event) {
        event.data.channel = 0;
        event.data.value = 255;
    });
    this.rasterSource.on('afteroperations', function(event) {
        console.debug("After Operations");
    });
I have realised that I cannot pass arrays through the lib object, so I have had to stop attempting that. These are the declarations I am currently using:
const numBins = 256;
var isSetUp: boolean = false;

function generateOriginalHistograms(pixels: number[][]) {
    let originalRed = new Array(numBins).fill(0);
    let originalGreen = new Array(numBins).fill(0);
    let originalBlue = new Array(numBins).fill(0);
    for (let i = 0; i < numBins; ++i) {
        originalRed[Math.floor(pixels[i][0])]++;
        originalGreen[Math.floor(pixels[i][1])]++;
        originalBlue[Math.floor(pixels[i][2])]++;
    }
    return {red: originalRed, blue: originalBlue, green: originalGreen};
}
They are declared outside of the current Angular component that I am writing this in. I did have another question on this, but I have since realised that I was way off in what I could and couldn't use here.
This now runs and, as it is currently commented, will tint the image red. But data['originalPixels'] = pixels; only ever captures one pixel. Can anyone tell me why this is, and what I need to do to access the whole pixel array? I have tried to slice and spread the array to no avail. If I uncomment the line // let originals = generateOriginalHistograms(pixels as number[][]); I get this error:
Uncaught TypeError: Cannot read properties of undefined (reading '0')
generateOriginalHistograms # blob:http://localhos…a7fa-b5a410582c06:6
(anonymous) # blob:http://localhos…7fa-b5a410582c06:76
(anonymous) # blob:http://localhos…7fa-b5a410582c06:62
(anonymous) # blob:http://localhos…7fa-b5a410582c06:83
And if I uncomment the line // console.log(pixels[0]); I get all the pixel values streaming in the console, but quite slowly.
The answer appears to be to change the operationType to 'image' and work with the ImageData object.
this.rasterSource = new Raster({
    sources: [this.imageLayer],
    operationType: "image",
    operation: function (pixels, data) {
        let imageData = pixels[0] as ImageData;
        ...
I now have no issues calculating the stats I need.
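In case it helps anyone else, here is roughly what my 'image' operation ended up looking like (a sketch only; the 'histograms' key on data is just a name I picked, and the 256 bins match the numBins constant above):
operationType: "image",
operation: function (pixels, data) {
    const imageData = pixels[0] as ImageData;
    // imageData.data is a flat Uint8ClampedArray laid out as [r, g, b, a, r, g, b, a, ...]
    const red = new Array(256).fill(0);
    const green = new Array(256).fill(0);
    const blue = new Array(256).fill(0);
    for (let i = 0; i < imageData.data.length; i += 4) {
        red[imageData.data[i]]++;
        green[imageData.data[i + 1]]++;
        blue[imageData.data[i + 2]]++;
    }
    // stash the histograms so they can be read in the 'afteroperations' handler
    data['histograms'] = { red: red, green: green, blue: blue };
    return imageData;
},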

Resume an ml-agents training after changing hyperparameters and adding new observation vectors

I am training an agent with ML-Agents in Unity. When I change the number of stacked vectors, the observation vectors or the hyperparameters, I cannot resume the training from the last run, because TensorFlow tells me there is a problem with the lhs/rhs shapes not being the same.
I would like to be able to change the agent scripts and config files and resume the training with the new parameters, so as not to lose the progress the agent has already made. For the moment I must either restart a new training or leave the number of observation vectors etc. unchanged.
How can I do this?
Thank you very much.
EDIT: Here is an example of what I want to test and the error I got, using the RollerBall ML-Agents tutorial. See https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Learning-Environment-Create-New.md
GOAL: I want to see the impact of the choice of observation vectors on the agent's training.
I ran a training with the basic agent script given in the tutorial. Here it is:
using System.Collections.Generic;
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class RollerAgent : Agent
{
Rigidbody rBody;
void Start()
{
rBody = GetComponent<Rigidbody>();
}
public Transform Target;
public override void OnEpisodeBegin()
{
if (this.transform.localPosition.y < 0)
{
// If the Agent fell, zero its momentum
this.rBody.angularVelocity = Vector3.zero;
this.rBody.velocity = Vector3.zero;
this.transform.localPosition = new Vector3(0, 0.5f, 0);
}
// Move the target to a new spot
Target.localPosition = new Vector3(Random.value * 8 - 4,
0.5f,
Random.value * 8 - 4);
}
public override void CollectObservations(VectorSensor sensor)
{
// Target and Agent positions
sensor.AddObservation(Target.localPosition);
sensor.AddObservation(this.transform.localPosition);
// Agent velocity
sensor.AddObservation(rBody.velocity.x);
sensor.AddObservation(rBody.velocity.z);
}
public float speed = 10;
public override void OnActionReceived(float[] vectorAction)
{
// Actions, size = 2
Vector3 controlSignal = Vector3.zero;
controlSignal.x = vectorAction[0];
controlSignal.z = vectorAction[1];
rBody.AddForce(controlSignal * speed);
// Rewards
float distanceToTarget = Vector3.Distance(this.transform.localPosition, Target.localPosition);
// Reached target
if (distanceToTarget < 1.42f)
{
SetReward(1.0f);
EndEpisode();
}
// Fell off platform
if (this.transform.localPosition.y < 0)
{
EndEpisode();
}
}
public override void Heuristic(float[] actionsOut)
{
actionsOut[0] = Input.GetAxis("Horizontal");
actionsOut[1] = Input.GetAxis("Vertical");
}
}
I stopped the training before the agent hit the benchmark.
I removed the observation vectors for the agent's velocity and adjusted the vector observation space size in Unity from 8 to 6. Here is the new code:
using System.Collections.Generic;
using UnityEngine;
using Unity.MLAgents;
using Unity.MLAgents.Sensors;
public class RollerAgent : Agent
{
Rigidbody rBody;
void Start()
{
rBody = GetComponent<Rigidbody>();
}
public Transform Target;
public override void OnEpisodeBegin()
{
if (this.transform.localPosition.y < 0)
{
// If the Agent fell, zero its momentum
this.rBody.angularVelocity = Vector3.zero;
this.rBody.velocity = Vector3.zero;
this.transform.localPosition = new Vector3(0, 0.5f, 0);
}
// Move the target to a new spot
Target.localPosition = new Vector3(Random.value * 8 - 4,
0.5f,
Random.value * 8 - 4);
}
public override void CollectObservations(VectorSensor sensor)
{
// Target and Agent positions
sensor.AddObservation(Target.localPosition);
sensor.AddObservation(this.transform.localPosition);
// Agent velocity
//sensor.AddObservation(rBody.velocity.x);
//sensor.AddObservation(rBody.velocity.z);
}
public float speed = 10;
public override void OnActionReceived(float[] vectorAction)
{
// Actions, size = 2
Vector3 controlSignal = Vector3.zero;
controlSignal.x = vectorAction[0];
controlSignal.z = vectorAction[1];
rBody.AddForce(controlSignal * speed);
// Rewards
float distanceToTarget = Vector3.Distance(this.transform.localPosition, Target.localPosition);
// Reached target
if (distanceToTarget < 1.42f)
{
SetReward(1.0f);
EndEpisode();
}
// Fell off platform
if (this.transform.localPosition.y < 0)
{
EndEpisode();
}
}
public override void Heuristic(float[] actionsOut)
{
actionsOut[0] = Input.GetAxis("Horizontal");
actionsOut[1] = Input.GetAxis("Vertical");
}
}
I ran the training again with the same run ID and RESUMED it, so as to keep the progress made during the previous training. But when I pressed the play button in the Unity editor I got this error:
tensorflow.python.framework.errors_impl.InvalidArgumentError:
Restoring from checkpoint failed. This is most likely due to a
mismatch between the current graph and the graph from the checkpoint.
Please ensure that you have not altered the graph expected based on
the checkpoint. Original error:
Assign requires shapes of both tensors to match. lhs shape= [6,128]
rhs shape= [8,128]
[[node save_1/Assign_26 (defined at c:\users\jeann\anaconda3\envs\ml-agents-1.0.2\lib\site-packages\mlagents\trainers\policy\tf_policy.py:115)
]]
Errors may have originated from an input operation.
I know it may seem like nonsense to reuse the progress of the previous training while using a new brain configuration for the agent, but in the project I am currently working on I need to keep the improvement the agent has already made even if we change the observation vectors. Is there a way to do so, or is it impossible?
Thank you :)

How to call a multidimensional prediction on a Keras model with a JavaScript API

I have trained a model based on the Keras lstm_text_generation example, and I would like to perform predictions on this model with front-end JavaScript.
First I tried keras.js; however, its prediction function only takes 1-dimensional Float32Array vectors, so I am unable to use it, since the lstm_text_generation example uses a multidimensional array of shape (1, maxlen, len(chars)).
Next I tried tensorflow.js, using this tutorial to port my Keras model to a model.json file. Everything seems to work fine up to the point where I perform the actual prediction, where it freezes and gives me the warning Orthogonal initializer is being called on a matrix with more than 2000 (65536) elements: Slowness may result.
I noticed that in many of the tensorflow.js examples people convert their arrays to tensor2d, but I did this and it had no effect on the performance of my code.
For anyone curious, here is the JavaScript code I wrote...
async function predict_from_model() {
    const model = await tf.loadModel('https://raw.githubusercontent.com/98mprice/death-grips-lyrics-generator/master/model.json');
    try {
        var seed = "test test test test test test test test"
        var maxlen = 40
        for (var i = 0; i < 1; i++) {
            var x_pred = nj.zeros([1, maxlen, 61]).tolist()
            for (var j = 0; j < seed.length; j++) {
                x_pred[0][j][char_indices[seed.charAt(j)]] = 1
            }
            console.log("about to predict")
            const preds = model.predict(x_pred) //gets stuck here
            console.log("prediction done")
        }
    } catch (err) {
        // handle error
    }
}
...to perform the same function as on_epoch_end() in the lstm_text_generation.py example. The output of x_pred is the same in both the Python and JavaScript code, so I don't think the issue lies there.
I think I need to make some optimisations in tensorflow.js, but I'm not sure what. Does anyone know how to fix any of the issues above, and/or know of another JavaScript library that would work for my purpose?
x_pred needs to be a tensor. The simplest way to create a tensor with custom values is tf.buffer, which can be initialized with a TypedArray, or modified using .set(), which is better for you because most of your values are 0 and buffers are filled with zeros by default. To create a tensor out of a buffer, just use .toTensor().
So it would be something like this:
var x_pred = tf.buffer([1, maxlen, 61]);
for (var j = 0; j < seed.length; j++) {
    x_pred.set(1, 0, j, char_indices[seed.charAt(j)]);
}
console.log("about to predict")
const preds = model.predict(x_pred.toTensor());
console.log("prediction done")

Object detection in TensorFlow Lite C++ with the MobileNet-SSD v1 model

According to this information link, TensorFlow Lite now supports object detection using the MobileNet-SSD v1 model. There is an example for Java in this link, but how can the output be parsed in C++? I cannot find any documentation about this. This code shows an example:
.......
(fill inputs)
.......
interpreter->Invoke();
const std::vector<int>& results = interpreter->outputs();
TfLiteTensor* outputLocations = interpreter->tensor(results[0]);
TfLiteTensor* outputClasses = interpreter->tensor(results[1]);
float* data = tflite::GetTensorData<float>(outputClasses);
for (int i = 0; i < NUM_RESULTS; i++)
{
    for (int j = 1; j < NUM_CLASSES; j++)
    {
        float score = expit(data[i * NUM_CLASSES + j]); // ¿? This does not seem to be correct.
    }
}
If you need to compute expit, you need to define a function to do that. Add at the top:
#include <cmath>
and then
interpreter->Invoke();
const std::vector<int>& results = interpreter->outputs();
TfLiteTensor* outputLocations = interpreter->tensor(results[0]);
TfLiteTensor* outputClasses = interpreter->tensor(results[1]);
float* data = tflite::GetTensorData<float>(outputClasses);
for (int i = 0; i < NUM_RESULTS; i++)
{
    for (int j = 1; j < NUM_CLASSES; j++)
    {
        // expit is the logistic function: 1 / (1 + e^-x)
        auto expit = [](float x) { return 1.f / (1.f + std::exp(-x)); };
        float score = expit(data[i * NUM_CLASSES + j]);
    }
}
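As a side note: if the .tflite model was exported with the TFLite detection post-processing op (which the stock MobileNet-SSD v1 conversions usually include), the outputs are already decoded and you do not need expit at all. Something like this (a sketch, not tested against your exact model; the output order assumed here is the usual boxes/classes/scores/count):
interpreter->Invoke();
const float* boxes   = interpreter->typed_output_tensor<float>(0); // [1, N, 4] as ymin, xmin, ymax, xmax
const float* classes = interpreter->typed_output_tensor<float>(1); // [1, N]
const float* scores  = interpreter->typed_output_tensor<float>(2); // [1, N]
const float* count   = interpreter->typed_output_tensor<float>(3); // [1]
for (int i = 0; i < static_cast<int>(count[0]); i++)
{
    if (scores[i] < 0.5f) continue;
    int detected_class = static_cast<int>(classes[i]);
    // boxes[4 * i + 0 .. 3] are normalized to [0, 1] relative to the input image
}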

Calculating forward kinematics using the D-H matrix

I have a 6-DOF robot arm model:
robot arm structure
I want to calculate the forward kinematics, so I use the D-H matrix. The D-H parameters are:
// theta
static const std::vector<float> theta = {
    0, 0, 90.0f, 0, -90.0f, 0};
// d
static const std::vector<float> d = {
    380.948f, 0, 0, -560.18f, 0, 0};
// a
static const std::vector<float> a = {
    -220.0f, 522.331f, 80.0f, 0, 0, 94.77f};
// alpha
static const std::vector<float> alpha = {
    90.0f, 0, 90.0f, -90.0f, -90.0f, 0};
and the calculation:
glm::mat4 Robothand::armForKinematics() noexcept
{
    glm::mat4 pose(1.0f);
    float cos_theta, sin_theta, cos_alpha, sin_alpha;
    for (auto i = 0; i < 6; i++)
    {
        cos_theta = cosf(glm::radians(theta[i]));
        sin_theta = sinf(glm::radians(theta[i]));
        cos_alpha = cosf(glm::radians(alpha[i]));
        sin_alpha = sinf(glm::radians(alpha[i]));

        glm::mat4 Ai = {
            cos_theta, -sin_theta * cos_alpha,  sin_theta * sin_alpha, a[i] * cos_theta,
            sin_theta,  cos_theta * cos_alpha, -cos_theta * sin_alpha, a[i] * sin_theta,
            0,          sin_alpha,              cos_alpha,             d[i],
            0,          0,                      0,                     1 };
        pose = pose * Ai;
    }
    return pose;
}
The problem I have is that I can't get the correct result. For example, I want to calculate the transformation matrix from the first joint to the 4th joint, so I change the for loop to i < 3; then I get the pose matrix and I can get the origin of the 4th coordinate system by pose * (0,0,0,1). But the result (380.948, 382.331, 0) does not seem correct, because it should move along the x-axis, not the y-axis. I have read many books and materials about the D-H matrix, but I can't figure out what's wrong with it.
I have figured it out by myself. The real problem is that glm::mat4 is column-major, which means the braced initializer fills columns before rows, so my row-major D-H matrix was effectively transposed. I changed the code and got the correct result:
for (int i = 0; i < joint_num; ++i)
{
    pose = glm::rotate(pose, glm::radians(degrees[i]), glm::vec3(0, 0, 1));
    pose = glm::translate(pose, glm::vec3(0, 0, d[i]));
    pose = glm::translate(pose, glm::vec3(a[i], 0, 0));
    pose = glm::rotate(pose, glm::radians(alpha[i]), glm::vec3(1, 0, 0));
}
Then I can get the position by:
auto pos = pose * glm::vec4(x,y,z,1);
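For completeness, an equivalent fix is to keep the explicit D-H matrix and simply transpose it before multiplying, since the braced initializer fills glm's columns with what I wrote as rows (again, just a sketch of the idea):
// Ai written row by row in the usual textbook D-H form ...
glm::mat4 Ai = {
    cos_theta, -sin_theta * cos_alpha,  sin_theta * sin_alpha, a[i] * cos_theta,
    sin_theta,  cos_theta * cos_alpha, -cos_theta * sin_alpha, a[i] * sin_theta,
    0,          sin_alpha,              cos_alpha,             d[i],
    0,          0,                      0,                     1 };
// ... then transposed once, because glm::mat4 is column-major
pose = pose * glm::transpose(Ai);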