I have built some Gaussian process models in GPflow and trained them successfully, but I cannot find APIs in GPflow that help me make inferences straightforwardly, such as separating the contributions of different kernels in a GPR model.
I know that I can do it manually, by calculating the covariance matrices, inverting, and multiplying, but that kind of work gets quite tedious as the model becomes more complex, e.g. a multi-output SVGP model. Any suggestions?
Thanks in advance!
If you want to e.g. decompose an additive kernel, I think the easiest way for vanilla GPR would be to just switch out the kernel for the part you're interested in, while keeping the learned hyperparameters.
I'm not totally sure about it, but I think it could also work for SVGP, since the approximation itself is just a standard GP using the same kernel, conditioned on the inducing points.
However, I'm not sure whether the decomposition of the variational approximation can be assumed to be close to the decomposition of the true posterior.
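For vanilla GPR, the additive decomposition the question asks about can also be written out directly. Here is a minimal NumPy sketch (hypothetical kernels and toy data, not GPflow API): the Gram matrix uses the full additive kernel plus noise, while each component's posterior mean uses only that component's cross-covariance, so the component means sum exactly to the full posterior mean.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def linear(A, B, variance=1.0):
    # Linear (dot-product) kernel matrix.
    return variance * A @ B.T

rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 1))
Y = np.sin(6 * X) + 0.1 * rng.standard_normal((20, 1))
Xs = np.linspace(0, 1, 50)[:, None]   # test inputs
noise = 0.1

# Gram matrix of the *full* additive kernel, plus observation noise.
K = rbf(X, X) + linear(X, X) + noise * np.eye(len(X))
alpha = np.linalg.solve(K, Y)

# Each component's posterior mean uses its own cross-covariance only.
mean_rbf = rbf(Xs, X) @ alpha
mean_lin = linear(Xs, X) @ alpha
mean_full = (rbf(Xs, X) + linear(Xs, X)) @ alpha
```

Since the full kernel is the sum of the components, `mean_full` equals `mean_rbf + mean_lin` exactly; in a real model the hyperparameters would come from the trained GPflow object rather than being fixed here.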
Related
I am trying to find the optimum of a data-driven function represented as a TensorFlow model.
That is, I trained a model to approximate a function and now want to find the optimum of this approximated function using an algorithm and software package/Python library like ipopt, ipyopt, casadi, .... Or is there a way to do this directly in TensorFlow? I also have to define constraints, so I can't just use simple autodiff to do gradient descent and optimize my input.
Does anyone have an idea how to realize this in an efficient way?
Maybe this image visualizes my problem to better understand what I'm looking for.
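One possible route (a sketch, not the only option): expose the model as a plain function plus gradient and hand both to a constrained solver such as SciPy's SLSQP. In this sketch a simple quadratic stands in for the trained network, and the constraint is a hypothetical unit-disc bound; in practice `f` and `grad_f` would call the TensorFlow model, with the gradient coming from `tf.GradientTape` rather than being written by hand.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for the trained network's prediction f(x).
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] + 0.5)])

# Hypothetical constraint: the optimum must lie inside the unit disc.
# SLSQP inequality constraints are expressed as fun(x) >= 0.
constraints = [{"type": "ineq",
                "fun": lambda x: 1.0 - x[0] ** 2 - x[1] ** 2}]

res = minimize(f, x0=np.zeros(2), jac=grad_f,
               method="SLSQP", constraints=constraints)
```

Here the unconstrained minimum at (1, -0.5) lies outside the disc, so the solver returns a point on the boundary instead; the same wrapper pattern works with ipopt/casadi backends that accept a callable and its Jacobian.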
Is there a way to find the inverse of a neural-network representation of a function in TensorFlow v1? I need this to find the optimal function in an optimization problem that I am solving.
To be precise, the optimal function is found by minimizing the error computed as L2 norm of difference between the approximated optimal function C* (coded as a neural network object), and inverse of a value function V* (coded as another neural network object).
My problem is that I do not know how to write inverse of V* in tensorflow, as I cannot find something like tf.inverse().
Any help is much appreciated. Thanks.
Unless I am misunderstanding the situation, I believe that it is impossible to do this in a generalized way. Many functions do not have a perfect inverse. For a simple example, imagine a square(x) function that computes x². You might think that the inverse is sqrt(y), but in reality the "correct" result could be either sqrt(y) or -sqrt(y), with no way of telling which is correct.
Similarly, with most neural networks I imagine it would be impossible to find the "true" mathematical inverse. There are architectures that attempt to train a neural net and its inverse simultaneously (autoencoders and BiGAN/ALI come to mind), and for some nets it might be possible to train an inverse empirically, but these can have extremely varying levels of accuracy that depend heavily on many factors.
Depending on how much control you have over V*, you might be able to design it in such a way that it is mathematically invertible (and then you would have to manually code the inverse), or you might be able to make it a simpler model that is not based on a neural net. However, if V* is an arbitrary preexisting net, then you're probably out of luck.
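To illustrate the "design it to be invertible" route, here is a toy one-layer sketch (a hypothetical stand-in for V*, not a real network): if every piece is bijective on the reals, e.g. a non-zero weight, a bias, and a strictly monotone activation such as leaky ReLU, the inverse can be coded manually by undoing each layer in reverse order.

```python
import numpy as np

def leaky_relu(x, slope=0.1):
    return np.where(x >= 0, x, slope * x)

def leaky_relu_inv(y, slope=0.1):
    # Leaky ReLU with slope > 0 is bijective on R, so it can be undone.
    return np.where(y >= 0, y, y / slope)

def forward(x, w=2.0, b=1.0):
    # One "layer": affine map followed by an invertible activation.
    return leaky_relu(w * x + b)

def inverse(y, w=2.0, b=1.0):
    # Undo each layer in reverse: activation first, then the affine map.
    return (leaky_relu_inv(y) - b) / w

x = np.linspace(-3.0, 3.0, 7)
recovered = inverse(forward(x))
```

Stacking more layers of this form stays invertible, which is essentially the idea behind normalizing-flow architectures; a standard net with ReLU or many-to-one layers breaks this property immediately.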
Further reading:
SO: local inverse of a neural network
AI.SE: Can we get the inverse of the function that a neural network represents?
I need to run the k-means algorithm from TensorFlow in Go, i.e. cluster a graph into subgraphs according to a node-similarity matrix.
I came across this article, which shows an example of how to run a Keras-trained model in Go. In that example the algorithm is of a supervised-learning type. However, with clustering algorithms, as I understand it, there is no model to save and export to a Go implementation.
The reason I am interested in TensorFlow is that I think its code is optimized and will run much faster than a k-means implementation in Go, even in the scenario I described above.
I need an opinion on whether:
1. It is indeed impossible to use a TensorFlow k-means algorithm from Go, and it is much better to just use a k-means implemented in Go for this case; or
2. It is possible to do this, in which case some sort of example or ideas on how to do it would be very much appreciated.
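For scale reference, the core of Lloyd's k-means algorithm is small enough that a native implementation (in Go or any other language) is straightforward; here is a sketch in NumPy with toy data, which maps almost line-for-line to Go slices and loops:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # Lloyd's algorithm: alternate nearest-centre assignment and
    # centroid update until the centres stop moving.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centre.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Move each centre to the mean of its assigned points;
        # keep an empty cluster's centre where it is.
        new = np.array([X[labels == j].mean(0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels

# Toy demo: two well-separated blobs.
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
centers, labels = kmeans(X, 2)
```

Whether this beats calling out to TensorFlow depends mostly on data size; for similarity matrices that fit in memory, the native loop is usually fast enough.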
Many times I have seen, in neural-network forward propagation, example vectors multiplied from the left (vector-matrix) and sometimes from the right (matrix-vector). Notation, some TensorFlow tutorials, and the datasets I have found seem to prefer the former over the latter, contrary to the way linear algebra tends to be taught (the matrix-vector way).
Moreover, the two imply opposite layouts for the parameters: enumerate problem variables along dimension 0, or enumerate neurons along dimension 0.
This confuses me and makes me wonder whether there is really a standard here or it has just been coincidence. If there is one, I would like to know whether it follows some deeper reason. I would feel much better having this question answered.
(By the way, I know that you will normally use example matrices instead of vectors [or more complex things in conv nets, etc.] because of the use of minibatches, but the point still holds.)
Not sure if this answer is what you are looking for, but in the context of Tensorflow, the standard is to use a dense layer (https://www.tensorflow.org/api_docs/python/tf/layers/dense) which is a higher level abstraction that wraps up the affine transformation logic you are referring to.
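Concretely, the convention a dense layer implements is the row-vector one; a NumPy sketch of the shapes involved (hypothetical sizes):

```python
import numpy as np

# Row-major ("examples-first") convention used by a dense layer:
# dimension 0 of X enumerates examples, so the affine map is X @ W + b,
# i.e. each example row is multiplied from the left.
batch, in_dim, units = 4, 3, 2
rng = np.random.default_rng(0)
X = rng.standard_normal((batch, in_dim))  # one example per row
W = rng.standard_normal((in_dim, units))  # dim 0 enumerates input features
b = np.zeros(units)

Y = X @ W + b  # shape (batch, units)
```

The batch dimension coming first is also what makes minibatching a no-op notationally: stacking more example rows in `X` changes nothing else.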
So TensorFlow is extremely useful for creating neural networks built from perceptron neurons. However, if one wanted to use a new type of neuron instead of the classic perceptron neuron, is this possible through augmenting TensorFlow code? I can't seem to find an answer. I understand this would change forward propagation and other mathematical calculations, and I am willing to change all the necessary areas.
I am also aware that I could code the layers and neurons I have in mind from scratch, but TensorFlow has GPU integration, so one can see it's more practical to modify its code than to create my own from scratch.
Has anyone experimented with this? My goal is to create neural-network structures that use a different type of neuron than the classic perceptron.
If someone knows where in TensorFlow I could look to see where the perceptron neurons are initialized, I would very much appreciate it!
Edit:
To be more specific, is it possible to alter code in TensorFlow to use a different neuron type rather than the perceptron when invoking the TensorFlow module tf.layers, for example? Or tf.nn? (conv2d, batch norm, max pool, etc.) I can figure out the details; I just need to know where (I'm sure there are a few locations) I would go about changing the code.
However, if one wanted to use a new type of neuron instead of the classic perceptron neuron, is this possible through augmenting tensorflow code?
Yes. TensorFlow lets you define a computational graph, and it can then automatically calculate the gradient for that graph; no need to derive it yourself. This is the reason why you define the computation symbolically. You might want to read the whitepaper or start with a tutorial.
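A toy illustration of that graph-plus-autodiff idea (a pedagogical sketch in plain Python, not TensorFlow's actual machinery): record each operation together with its local derivative, then backpropagate. Any custom neuron expressed through these primitives, like the hypothetical quadratic-preactivation neuron below, gets its gradients for free, which is why no per-neuron gradient code is needed.

```python
import math

class Var:
    """Minimal reverse-mode autodiff node: stores a value and, for each
    parent in the graph, the local derivative of this node w.r.t. it."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_node, local_gradient) pairs
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the chain-rule contribution and push it upstream.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

def tanh(v):
    t = math.tanh(v.value)
    return Var(t, [(v, 1.0 - t * t)])

# A "new type of neuron": quadratic pre-activation y = tanh(w * x^2 + b).
w, b, x = Var(0.5), Var(-0.2), Var(1.5)
y = tanh(w * x * x + b)
y.backward()   # dy/dw and dy/db are now in w.grad and b.grad
```

TensorFlow does the same bookkeeping at scale (with tensors, graph optimization, and GPU kernels), which is why defining a custom neuron out of existing ops is usually enough, with no need to touch the library's source.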