My team is working on developing a new backend for TensorFlow. Generally, TensorFlow OpKernels receive "Tensor"-typed arguments whose memory is allocated by our architecture:
void ComputeAsync(OpKernelContext* context, DoneCallback done) override {
    // Grab the input tensors
    const Tensor& A = context->input(0);
    const Tensor& B = context->input(1);

    // ...input validation...

    const our::Memory_Type *bA = static_cast<const our::Memory_Type *>(DMAHelper::base(&A));
    const our::Memory_Type *bB = static_cast<const our::Memory_Type *>(DMAHelper::base(&B));

    // ...additional preconditioning...

    // Create an output tensor
    Tensor *C = NULL;
    OP_REQUIRES_OK(context, context->allocate_output(0, out_shape, &C));
    our::Memory_Type *bC = static_cast<our::Memory_Type *>(DMAHelper::base(C));

    // ...and run it
    our_impl(bA, bB, bC, done);
}
However, we are having more trouble porting the "CrossOp" kernel, because part of the preconditioning involves converting the data to Eigen types:
// in0, in1, and output are all tensorflow::Tensor types, but ConstTensor is an Eigen type
typename TTypes<Type, 2>::ConstTensor in0_data =
    in0.flat_inner_dims<Type>();
typename TTypes<Type, 2>::ConstTensor in1_data =
    in1.flat_inner_dims<Type>();
typename TTypes<Type, 2>::Tensor output_data =
    output->flat_inner_dims<Type>();
DMAHelper::base() assumes it is run on a Tensor and not a ConstTensor. Is it safe to follow the above operations with the ones below, or does the process of flat_inner_dims() change the contents of the underlying data such that the result would be invalid or not read by TensorFlow?
const our::Memory_Type *in0_arg = static_cast<const our::Memory_Type *>(DMAHelper::base(&in0));
const our::Memory_Type *in1_arg = static_cast<const our::Memory_Type *>(DMAHelper::base(&in1));
our::Memory_Type *output_arg = static_cast<our::Memory_Type *>(DMAHelper::base(output));
our_cross_impl(in0_arg, in1_arg, output_arg);
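My understanding is that flat_inner_dims<Type>() only builds an Eigen::TensorMap view over the tensor's existing buffer, without copying or mutating anything, so a sanity check like the following sketch (using the same types as above) should hold, but I would like confirmation:
// Sketch: check that the Eigen view aliases the very buffer that
// DMAHelper::base() reports. flat_inner_dims<Type>() returns a
// TTypes<Type, 2>::ConstTensor, i.e. a non-owning reshape of in0's storage.
typename TTypes<Type, 2>::ConstTensor in0_view = in0.flat_inner_dims<Type>();
CHECK_EQ(static_cast<const void *>(in0_view.data()), DMAHelper::base(&in0));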
I am attempting to calculate some statistics for pixel values using OpenLayers 6.3.1 and I am having an issue iterating over all pixels. I have read the docs for the pixels array that gets passed to the operation callback, and they state:
For pixel type operations, the function will be called with an array of pixels, where each pixel is an array of four numbers ([r, g, b, a]) in the range of 0 - 255. It should return a single pixel array.
I have taken this to mean that the array passed contains all the pixels, but everything I do suggests that I only get the current pixel to work on.
if (this.rasterSource == null) {
  this.rasterSource = new Raster({
    sources: [this.imageLayer],
    operation: function (pixels, data) {
      data['originalPixels'] = pixels;
      if (!isSetUp) {
        // originalPixels = pixels as number[][];
        // const originalPixels = Array.from(pixels as number[][]);
        // let originals = generateOriginalHistograms(pixels as number[][]);
        isSetUp = true;
      }
      // console.log(pixels[0]);
      let pixel = pixels[0];
      pixel[data['channel']] = data['value'];
      return pixel;
    },
    lib: {
      isSetUp: isSetUp,
      numBins: numBins,
      // originalPixels: originalPixels,
      // originalRed: originalRed,
      // originalGreen: originalGreen,
      // originalBlue: originalBlue,
      generateOriginalHistograms: generateOriginalHistograms,
    }
  });

  this.rasterSource.on('beforeoperations', function (event) {
    event.data.channel = 0;
    event.data.value = 255;
  });

  this.rasterSource.on('afteroperations', function (event) {
    console.debug("After Operations");
  });
}
I have realised that I cannot pass arrays through the lib object, so I have stopped attempting that. These are the declarations I am currently using:
const numBins = 256;
var isSetUp: boolean = false;

function generateOriginalHistograms(pixels: number[][]) {
  let originalRed = new Array(numBins).fill(0);
  let originalGreen = new Array(numBins).fill(0);
  let originalBlue = new Array(numBins).fill(0);
  for (let i = 0; i < numBins; ++i) {
    originalRed[Math.floor(pixels[i][0])]++;
    originalGreen[Math.floor(pixels[i][1])]++;
    originalBlue[Math.floor(pixels[i][2])]++;
  }
  return { red: originalRed, blue: originalBlue, green: originalGreen };
}
and they are declared outside of the Angular component I am writing this in. I did ask another question on this, but I have since realised that I was way off about what I could and couldn't use here.
This now runs and, as it is currently commented, will tint the image red. But data['originalPixels'] = pixels; only ever captures one pixel. Can anyone tell me why this is, and what I need to do to access the whole pixel array? I have tried to slice and spread the array, to no avail. If I uncomment the line // let originals = generateOriginalHistograms(pixels as number[][]); I get an error:
Uncaught TypeError: Cannot read properties of undefined (reading '0')
generateOriginalHistograms # blob:http://localhos…a7fa-b5a410582c06:6
(anonymous) # blob:http://localhos…7fa-b5a410582c06:76
(anonymous) # blob:http://localhos…7fa-b5a410582c06:62
(anonymous) # blob:http://localhos…7fa-b5a410582c06:83
and if I uncomment the line // console.log(pixels[0]); I get all the pixel values streaming into the console, but quite slowly.
The answer appears to be to change the operationType to 'image' and work with the ImageData object:
this.rasterSource = new Raster({
  sources: [this.imageLayer],
  operationType: "image",
  operation: function (pixels, data) {
    let imageData = pixels[0] as ImageData;
    ...
I now have no issues calculating the stats I need.
I am trying to build a data structure to represent an RGB image in xtensor (a 3D matrix with shape (WIDTH, HEIGHT, 3)).
Each "pixel" contains data collected by a function of the pixel coordinates. Basically, I want to replicate what this code does in Python:
image = [[cell_info(x, y) for x in range(WIDTH)]
for y in range(HEIGHT)]
where cell_info returns a three-element list representing the color channels.
I suppose the proper way to do this should be using an xgenerator, but to be honest I cannot understand how to use that class.
I found a solution:
I changed cell_info to accept an int channel parameter, so that it returns an integer instead of an array. Then I wrote this:
class img_generator_fn {
public:
    using value_type = int;

    img_generator_fn(const Map *map, const Position &center, shared_ptr<const Player> player,
                     const unsigned int field_radius)
        : m_map(map), m_center(center), m_player(player), m_translation(-(field_radius + 1)) {}

    ~img_generator_fn() { m_map = nullptr; }

    inline auto operator()(const unsigned int x, const unsigned int y, const unsigned int channel) const {
        return m_map->at(m_center + Position(x, y))->cell_info(m_player, channel);
    }

    template <class It> inline auto element(It, It end) const {
        return m_map->at(m_center + Position(*(end - 2) + m_translation, *(end - 3) + m_translation))
            ->cell_info(m_player, *(end - 1));
    }

private:
    const Map *m_map;
    const Position &m_center;
    shared_ptr<const Player> m_player;
    const unsigned int m_translation;
};

template <unsigned int field_side> auto field(const Position &center, shared_ptr<const Player> player) const {
    const array<unsigned int, 3> shape = {field_side, field_side, 3};
    auto gen = xt::detail::make_xgenerator(img_generator_fn(this, center, player, (field_side - 1) / 2), shape);
    return xt::xtensor_fixed<int, xt::xshape<field_side, field_side, 3>>(gen);
}
Here Map represents a 2D matrix; it is the structure which contains, together with player, the information I want to store in an image. The function at picks up the map cell at the specified position (the map cell will be converted into a pixel). The function field generates an image centered around center from a given Map, using an xgenerator.
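For context, a call site would look something like the sketch below (map, player_pos, and player are hypothetical names; field_side must be odd so that the image is centered on center):
// Hypothetical usage: build an 11x11 RGB image centered on player_pos.
auto img = map.field<11>(player_pos, player);
int red = img(0, 0, 0); // channel 0 of the top-left pixel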
I am looking for a way to use the optim_lbfgs function from RcppNumerical and RcppEigen together with RcppArmadillo. I followed the approach in Rcpp Integration for Numerical Computing Libraries, but it was not working, failing with the error cannot declare variable 'obj' to be of abstract type 'socreftn_mns'. But now I have fixed some of the code to make it work: I define beta as Eigen::VectorXd beta(p);, as in RcppNumerical, and convert it to arma::vec inside the class.
Here is the code that I am trying:
// [[Rcpp::depends(RcppEigen)]]
// [[Rcpp::depends(RcppNumerical)]]
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
#include <RcppNumerical.h>

using namespace Numer;
using namespace arma;
using namespace Rcpp;

// Eigen::Ref<Eigen::VectorXd>
// Eigen::Ref<const Eigen::VectorXd>

class socreftn_mns : public MFuncGrad
{
private:
    const arma::vec TIME;
    const arma::vec DELTA;
    const arma::mat COVARI;
    const arma::vec TARGETVEC;

public:
    socreftn_mns(const arma::vec Time, const arma::vec Delta, const arma::mat Covari,
                 const arma::vec targetvec)
        : TIME(Time), DELTA(Delta), COVARI(Covari), TARGETVEC(targetvec) {}

    double f_grad(Constvec& beta, Refvec grad) {
        arma::vec b_s = arma::vec(beta.data(), beta.size());

        int n = COVARI.n_rows;
        int p = COVARI.n_cols;

        arma::vec zero_vec_p = zeros(p);
        arma::mat zero_mat_np = zeros(n, p);
        arma::vec tempvec_p(p);
        arma::mat tempmat_np(n, p);

        arma::vec resid = log(TIME) + COVARI * b_s;
        arma::uvec index_resid = sort_index(resid);

        TIME(index_resid);
        DELTA(index_resid);
        COVARI.rows(index_resid);
        resid(index_resid);

        tempmat_np = zero_mat_np;
        arma::vec U_inf = zero_vec_p;
        for (int it = 0; it < n; it++) {
            tempmat_np = COVARI.row(it) - COVARI.each_row();
            U_inf += sum(tempmat_np.each_col() % conv_to<vec>::from((resid >= resid(it))), 0).t() * DELTA(it);
        }
        U_inf = U_inf / n - TARGETVEC;
        double objvalue = conv_to<double>::from(sum(pow(U_inf, 2)));

        double h = 1e-4;
        for (int itt = 0; itt < p; itt++) {
            tempvec_p = b_s;
            tempvec_p(itt) = tempvec_p(itt) + h;

            arma::vec resid_g = log(TIME) + COVARI * tempvec_p;
            arma::uvec index_resid_g = sort_index(resid_g);

            TIME(index_resid_g);
            DELTA(index_resid_g);
            COVARI.rows(index_resid_g);
            resid(index_resid_g);

            tempmat_np = zero_mat_np;
            arma::vec score_grad = zero_vec_p;
            for (int it = 0; it < n; it++) {
                tempmat_np = COVARI.row(it) - COVARI.each_row();
                score_grad += sum(tempmat_np.each_col() % conv_to<vec>::from((resid_g >= resid_g(it))), 0).t() * DELTA(it);
            }
            score_grad = score_grad / n - TARGETVEC;
            double score_objvalue = conv_to<double>::from(sum(pow(score_grad, 2)));

            grad(itt) = (score_objvalue - objvalue) / h;
        }

        // beta = Eigen::Ref(b_s);
        return objvalue;
    }
};

// [[Rcpp::export]]
Rcpp::NumericVector aftsrr_bfgs(arma::vec Time, arma::vec Delta, arma::mat Covari, arma::vec targetvec) {
    const arma::vec TIME = Time;
    const arma::vec DELTA = Delta;
    const arma::mat COVARI = Covari;
    const arma::vec TARGETVEC = targetvec;

    int p = COVARI.n_cols;

    // Score Function
    socreftn_mns obj(TIME, DELTA, COVARI, TARGETVEC);

    // Initial Guess
    Eigen::VectorXd beta(p);
    beta.setOnes();

    double fopt;
    int status = optim_lbfgs(obj, beta, fopt);
    if (status < 0)
        Rcpp::stop("fail to converge");

    return Rcpp::wrap(beta);
}
And this is the R code, which is working:
library(Rcpp)
library(RcppArmadillo)
library(RcppEigen)
library(RcppNumerical)
library(survival)
library(aftgee)
sourceCpp("C:/Users/mattw/Documents/paper_wj/exercise/aftsrr_wj_cpp/aftsrr_wj/optim_bfgs.cpp")
U_beta_r_non = function(beta, Time, Delta, Covari) {
  n = length(Time)
  p = ncol(Covari)

  e_i_beta = as.vector(log(Time) + Covari %*% beta)

  order_resid = order(e_i_beta)

  Time = Time[order_resid]
  Covari = matrix(Covari[order_resid,], nrow = n)
  Delta = Delta[order_resid]
  e_i_beta = e_i_beta[order_resid]

  U_beta = list(NA)
  for (i in 1:n) {
    U_beta[[i]] = colSums(Delta[i]*t(Covari[i,]-t(Covari))*(e_i_beta>=e_i_beta[i]))
  }
  U_beta = Reduce('+',U_beta)/n
  U_beta = sum(U_beta^2)

  # grad = c(); h = 1e-4;
  # for (it in 1:p) {
  #   beta_g = beta;
  #   beta_g[it] = beta[it] + h
  #
  #   e_i_beta = as.vector(log(Time) + Covari %*% beta_g)
  #
  #   order_resid = order(e_i_beta)
  #
  #   Time = Time[order_resid]
  #   Covari = matrix(Covari[order_resid,],nrow = n)
  #   Delta = Delta[order_resid]
  #   e_i_beta = e_i_beta[order_resid]
  #
  #   grad_beta = list(NA);
  #   for (i in 1:n) {
  #     grad_beta[[i]] = colSums(Delta[i]*t(Covari[i,]-t(Covari))*(e_i_beta>=e_i_beta[i]))
  #   }
  #   grad_beta = Reduce('+',grad_beta)/n
  #   grad_beta = sum(grad_beta^2)
  #
  #   grad[it] = (grad_beta-U_beta)/h
  # }

  # return(U_beta)
  return(sum(U_beta^2))
  # return(grad)
}
#------------------------DATA GENERATION----------------------
set.seed(1)
n=300
beta_0=1
gamma_0=0.5
Z1=matrix(rnorm(n,3,1),nrow=n)
Z2=matrix(rexp(n,5),nrow=n)
T_aft=as.vector(exp(-beta_0*Z1-gamma_0*Z2+rnorm(n,5,1)))
C_aft=as.vector(exp(-beta_0*Z1-gamma_0*Z2+rnorm(n,6,1)))
X_aft=C_aft*(T_aft>C_aft)+T_aft*(T_aft<=C_aft)
D_aft=0*(T_aft>C_aft)+1*(T_aft<=C_aft)
table(D_aft)
beta_aftsrr=-aftsrr(Surv(X_aft,D_aft)~Z1+Z2)$beta;beta_aftsrr
init_beta = rep(0,2)
cpp_bfgs_esti = aftsrr_bfgs(X_aft,D_aft,cbind(Z1,Z2),rep(0,2));cpp_bfgs_esti
optim_lbfgsb = optim(init_beta,function(x){U_beta_r_non(x,X_aft,D_aft,cbind(Z1,Z2))},method = "L-BFGS-B")$par;optim_lbfgsb
Now it runs; however, it seems that it just gives back the initial beta without any calculation in the R example. As some people say that it is not possible to use the above Armadillo code, I am studying the Eigen library. However, I am not familiar with the computations involved, so it is hard to convert the code. Hopefully, there is a way to use it with some modification. Thanks!
Any comments will be helpful.
The rest of the error message that I am getting when compiling your code is of interest:
optim_num.cpp: In function ‘Rcpp::NumericVector aftsrr_bfgs(arma::vec, arma::vec, arma::mat, arma::vec)’:
optim_num.cpp:86:16: error: cannot declare variable ‘obj’ to be of abstract type ‘socreftn_mns’
socreftn_mns obj(TIME, DELTA, COVARI, TARGETVEC);
^~~
optim_num.cpp:11:7: note: because the following virtual functions are pure within ‘socreftn_mns’:
class socreftn_mns: public MFuncGrad
^~~~~~~~~~~~
In file included from /usr/local/lib/R/site-library/RcppNumerical/include/integration/wrapper.h:13:0,
from /usr/local/lib/R/site-library/RcppNumerical/include/RcppNumerical.h:16,
from optim_num.cpp:6:
/usr/local/lib/R/site-library/RcppNumerical/include/integration/../Func.h:52:20: note: virtual double Numer::MFuncGrad::f_grad(Numer::Constvec&, Numer::Refvec)
virtual double f_grad(Constvec& x, Refvec grad) = 0;
^~~~~~
In essence that means that your class socreftn_mns is derived from the abstract class MFuncGrad. This class is abstract since it does not contain a definition for the method f_grad(Constvec& x, Refvec grad). You try to provide one by defining the method f_grad(arma::vec& b_s, arma::vec grad), but due to the different function signature, the virtual function is not overridden. Hence your class is also abstract.
If you use the same signature, things should work out. The required types are defined in terms of Eigen objects:
// Reference to a vector
typedef Eigen::Ref<Eigen::VectorXd> Refvec;
typedef const Eigen::Ref<const Eigen::VectorXd> Constvec;
So you will have to convert back and forth between Armadillo and Eigen constructs.
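For illustration, here is a minimal concrete subclass with the matching signature that round-trips between the two libraries (a sketch: the Quadratic class and its target are made up for this answer, not taken from your code):
// A minimal MFuncGrad subclass whose f_grad matches the pure virtual
// signature exactly, so the class is concrete and can be instantiated.
// [[Rcpp::depends(RcppEigen)]]
// [[Rcpp::depends(RcppNumerical)]]
// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>
#include <RcppNumerical.h>

class Quadratic : public Numer::MFuncGrad
{
private:
    const arma::vec target_;
public:
    explicit Quadratic(const arma::vec &target) : target_(target) {}

    double f_grad(Numer::Constvec &x, Numer::Refvec grad) override
    {
        // Eigen -> Armadillo: copy the doubles behind the Eigen reference.
        arma::vec xa(x.data(), x.size());
        arma::vec d = xa - target_;
        // Armadillo -> Eigen: write the gradient back element by element.
        for (arma::uword i = 0; i < d.n_elem; ++i)
            grad[i] = 2.0 * d[i];
        return arma::dot(d, d); // f(x) = ||x - target||^2
    }
};
An instance of this class can be passed to optim_lbfgs just like your socreftn_mns object; note that your line arma::vec b_s = arma::vec(beta.data(), beta.size()); already performs the Eigen-to-Armadillo half of the round trip.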
According to this information, TensorFlow Lite now supports object detection using the MobileNet-SSD v1 model. There is an example for Java in this link, but how can the output be parsed in C++? I cannot find any documentation about this. This code shows an example:
.......
(fill inputs)
.......

interpreter->Invoke();

const std::vector<int>& results = interpreter->outputs();
TfLiteTensor* outputLocations = interpreter->tensor(results[0]);
TfLiteTensor* outputClasses = interpreter->tensor(results[1]);
float *data = tflite::GetTensorData<float>(outputClasses);

for (int i = 0; i < NUM_RESULTS; i++)
{
    for (int j = 1; j < NUM_CLASSES; j++)
    {
        float score = expit(data[i * NUM_CLASSES + j]); // ¿? This does not seem to be correct.
    }
}
If you need to compute expit, you need to define a function to do that. Add at the top:
#include <cmath>
and then
interpreter->Invoke();

const std::vector<int>& results = interpreter->outputs();
TfLiteTensor* outputLocations = interpreter->tensor(results[0]);
TfLiteTensor* outputClasses = interpreter->tensor(results[1]);
float *data = tflite::GetTensorData<float>(outputClasses);

// expit(x) is the logistic sigmoid 1 / (1 + exp(-x))
auto expit = [](float x) { return 1.f / (1.f + std::exp(-x)); };

for (int i = 0; i < NUM_RESULTS; i++)
{
    for (int j = 1; j < NUM_CLASSES; j++)
    {
        float score = expit(data[i * NUM_CLASSES + j]);
    }
}
I am trying to use a retrained Inception model with TensorFlowSharp in Unity.
The retrained model was prepared with optimize_for_inference and works like a charm in Python.
But it is pretty inaccurate in C#.
The code works like this:
First I get the picture:
// webcamtexture transformed to a picture in JPG
var pic = _texture.EncodeToJpg();
// add the picture to the queue for the object detection thread
_detectedObjects.addTens(pic);
After that, a thread handles each collected picture:
public void HandlePicture(byte[] picture)
{
    var tensor = ImageUtil.CreateTensorFromImageFile(picture);

    var runner = session.GetRunner();
    runner.AddInput(g_input, tensor).Fetch(g_output);
    var output = runner.Run();

    var bestIdx = 0;
    float best = 0;
    var result = output[0];
    var rshape = result.Shape;
    var probabilities = ((float[][])result.GetValue(jagged: true))[0];
    for (int r = 0; r < probabilities.Length; r++)
    {
        if (probabilities[r] > best)
        {
            bestIdx = r;
            best = probabilities[r];
        }
    }
    Debug.Log("Tensorflow thinks this is: " + labels[bestIdx] + " Prob : " + best * 100);
}
So my guess is:
1. It has something to do with retrained graphs (because I can't find any application or test where one is used and working).
2. It has something to do with how I transform the picture into a tensor (if that is wrong, I could use some help there; the code is further down).
To transform the picture, I am also using a graph like the one used in the TensorFlowSharp example:
public static class ImageUtil
{
    // Convert the image in filename to a Tensor suitable as input to the Inception model.
    public static TFTensor CreateTensorFromImageFile(byte[] contents, TFDataType destinationDataType = TFDataType.Float)
    {
        // DecodeJpeg uses a scalar String-valued tensor as input.
        var tensor = TFTensor.CreateString(contents);

        TFGraph graph;
        TFOutput input, output;

        // Construct a graph to normalize the image
        ConstructGraphToNormalizeImage(out graph, out input, out output, destinationDataType);

        // Execute that graph to normalize this one image
        using (var session = new TFSession(graph))
        {
            var normalized = session.Run(
                inputs: new[] { input },
                inputValues: new[] { tensor },
                outputs: new[] { output });

            return normalized[0];
        }
    }

    // The Inception model takes as input the image described by a Tensor in a very
    // specific normalized format (a particular image size, shape of the input tensor,
    // normalized pixel values, etc.).
    //
    // This function constructs a graph of TensorFlow operations which takes as
    // input a JPEG-encoded string and returns a tensor suitable as input to the
    // Inception model.
    private static void ConstructGraphToNormalizeImage(out TFGraph graph, out TFOutput input, out TFOutput output, TFDataType destinationDataType = TFDataType.Float)
    {
        // Some constants specific to the pre-trained model at:
        // https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
        //
        // - The model was trained with images scaled to 224x224 pixels.
        // - The colors, represented as R, G, B in 1-byte each, were converted to
        //   float using (value - Mean) / Scale.
        const int W = 299;
        const int H = 299;
        const float Mean = 128;
        const float Scale = 1;

        graph = new TFGraph();
        input = graph.Placeholder(TFDataType.String);

        output = graph.Cast(graph.Div(
            x: graph.Sub(
                x: graph.ResizeBilinear(
                    images: graph.ExpandDims(
                        input: graph.Cast(
                            graph.DecodeJpeg(contents: input, channels: 3), DstT: TFDataType.Float),
                        dim: graph.Const(0, "make_batch")),
                    size: graph.Const(new int[] { W, H }, "size")),
                y: graph.Const(Mean, "mean")),
            y: graph.Const(Scale, "scale")), destinationDataType);
    }
}