Why do we iterate when training a Hidden Markov Model?

I'm using a hidden Markov model for classification, specifically the jahmm implementation.
When training a model I use k-means clustering to get an initial model. Then I run an arbitrary number of iteration rounds to optimize the model. I was wondering what happens in these iterations.
My gut tells me that sequences are generated based on the initial model, which are in turn used to train the model again, and so on.
Is this true, or does something else happen?
Thank you!

BaumWelchLearner.java:
public <O extends Observation> Hmm<O>
        learn(Hmm<O> initialHmm, List<? extends List<? extends O>> sequences)
{
    Hmm<O> hmm = initialHmm;
    for (int i = 0; i < nbIterations; i++)
        hmm = iterate(hmm, sequences);
    return hmm;
}
Actually, no new sequences are generated: as you can see in learn() above, the learner feeds the same provided observation sequences to iterate() in every round. Each call to iterate() performs one Baum-Welch (expectation-maximization) re-estimation step: it runs the forward-backward computation on the training sequences under the current model and then updates the initial, transition and emission probabilities from the expected counts. Iterations are needed because models sometimes converge only slowly to a local maximum of the likelihood. Write a program like this to see the model after each iteration:
BaumWelchLearner bwl = new BaumWelchLearner();
for (int i = 0; i <= bwl.getNbIterations(); i++) {
    Hmm iteration = bwl.iterate(yourHmm, learningSequences);
    System.out.println("\nIteration " + i + ":\n" + iteration.toString());
    yourHmm = iteration;
}
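If you want to stop once the model has (locally) converged instead of running a fixed number of rounds, you can compare successive models. Here is a minimal sketch, assuming jahmm's toolbox class KullbackLeiblerDistanceCalculator and its distance(hmm1, hmm2) method are available in your version (check the exact API); the threshold and iteration cap are arbitrary:

BaumWelchLearner bwl = new BaumWelchLearner();
// Assumption: be.ac.ulg.montefiore.run.jahmm.toolbox.KullbackLeiblerDistanceCalculator
// estimates how far apart two HMMs are; verify the signature in your jahmm version.
KullbackLeiblerDistanceCalculator kl = new KullbackLeiblerDistanceCalculator();

double threshold = 1e-4;  // arbitrary convergence threshold
int maxIterations = 50;   // arbitrary safety cap

for (int i = 0; i < maxIterations; i++) {
    Hmm next = bwl.iterate(yourHmm, learningSequences);
    double change = kl.distance(next, yourHmm);  // how much this round changed the model
    yourHmm = next;
    if (change < threshold)
        break;  // further iterations barely change the model
}

The distance is only an estimate, but it is good enough to decide when extra Baum-Welch rounds stop paying off.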

Related

How to call a multidimensional prediction on a keras model with a javascript api

I have trained a model based on the Keras lstm_text_generation example, and I would like to perform predictions on this model with front-end JavaScript.
First I tried using keras.js; however, that only takes 1-dimensional Float32Array vectors in its prediction function, so I am unable to use it, since the lstm_text_generation example uses a multidimensional array of shape (1, maxlen, len(chars)).
Next I tried tensorflow.js, using this tutorial to port my Keras model to a model.json file. Everything seems to work fine up to the point where I perform the actual prediction, where it freezes and gives me the warning: Orthogonal initializer is being called on a matrix with more than 2000 (65536) elements: Slowness may result.
I noticed that in many of the tensorflow.js examples, people convert their arrays to tensor2d, but I did this and it had no effect on the performance of my code.
For anyone curious, here is the javascript code I wrote...
async function predict_from_model() {
    const model = await tf.loadModel('https://raw.githubusercontent.com/98mprice/death-grips-lyrics-generator/master/model.json');
    try {
        var seed = "test test test test test test test test"
        var maxlen = 40
        for (var i = 0; i < 1; i++) {
            var x_pred = nj.zeros([1, maxlen, 61]).tolist()
            for (var j = 0; j < seed.length; j++) {
                x_pred[0][j][char_indices[seed.charAt(j)]] = 1
            }
            console.log("about to predict")
            const preds = model.predict(x_pred) //gets stuck here
            console.log("prediction done")
        }
    } catch (err) {
        // handle error
    }
}
...to perform the same function as on_epoch_end() in the lstm_text_generation.py example. The output of x_pred is the same in the Python and JavaScript code, so I don't think the issue lies there.
I think I need to make some optimisations in tensorflow.js, but I'm not sure which. Does anyone know how to fix any of the issues above, or of another JavaScript library that would work for my purpose?
x_pred needs to be a tensor. The simplest way to create a tensor with custom values is tf.buffer(), which can either be initialized with a TypedArray or modified using .set(). The latter is better in your case, because most of your values are 0 and buffers are filled with zeros by default. To create a tensor out of a buffer, just use .toTensor().
So it would be something like this:
var x_pred = tf.buffer([1, maxlen, 61]);
for (var j = 0; j < seed.length; j++) {
    x_pred.set(1, 0, j, char_indices[seed.charAt(j)]);
}
console.log("about to predict")
const preds = model.predict(x_pred.toTensor());
console.log("prediction done")

Tensorflow retrained graph in C# (Tensorflowsharp)

I'm just trying to use a retrained Inception model with TensorFlowSharp in Unity.
The retrained model was prepared with optimize_for_inference and works like a charm in Python.
But it is pretty inaccurate in C#.
The code works like this:
First I get the picture:
//webcamtexture transformed to picture in jpg
var pic = _texture.EncodeToJpg();
//added Picture to queue for the object detection thread
_detectedObjects.addTens(pic);
After that, a thread handles each collected picture:
public void HandlePicture(byte[] picture)
{
    var tensor = ImageUtil.CreateTensorFromImageFile(picture);

    var runner = session.GetRunner();
    runner.AddInput(g_input, tensor).Fetch(g_output);
    var output = runner.Run();

    var bestIdx = 0;
    float best = 0;
    var result = output[0];
    var rshape = result.Shape;
    var probabilities = ((float[][])result.GetValue(jagged: true))[0];
    for (int r = 0; r < probabilities.Length; r++)
    {
        if (probabilities[r] > best)
        {
            bestIdx = r;
            best = probabilities[r];
        }
    }
    Debug.Log("Tensorflow thinks this is: " + labels[bestIdx] + " Prob : " + best * 100);
}
So my guess is:
1. It has something to do with retrained graphs (because I can't find any application/test where one is used and working).
2. It has something to do with how I transform the picture into a tensor (if that is the problem, I could use some help there; the code is further down).
To transform the picture I'm also using a graph, as in the TensorFlowSharp example:
public static class ImageUtil
{
    // Convert the image in filename to a Tensor suitable as input to the Inception model.
    public static TFTensor CreateTensorFromImageFile(byte[] contents, TFDataType destinationDataType = TFDataType.Float)
    {
        // DecodeJpeg uses a scalar String-valued tensor as input.
        var tensor = TFTensor.CreateString(contents);

        TFGraph graph;
        TFOutput input, output;

        // Construct a graph to normalize the image
        ConstructGraphToNormalizeImage(out graph, out input, out output, destinationDataType);

        // Execute that graph to normalize this one image
        using (var session = new TFSession(graph))
        {
            var normalized = session.Run(
                inputs: new[] { input },
                inputValues: new[] { tensor },
                outputs: new[] { output });

            return normalized[0];
        }
    }

    // The inception model takes as input the image described by a Tensor in a very
    // specific normalized format (a particular image size, shape of the input tensor,
    // normalized pixel values etc.).
    //
    // This function constructs a graph of TensorFlow operations which takes as
    // input a JPEG-encoded string and returns a tensor suitable as input to the
    // inception model.
    private static void ConstructGraphToNormalizeImage(out TFGraph graph, out TFOutput input, out TFOutput output, TFDataType destinationDataType = TFDataType.Float)
    {
        // Some constants specific to the pre-trained model at:
        // https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
        //
        // - The model was trained after with images scaled to 224x224 pixels.
        // - The colors, represented as R, G, B in 1-byte each were converted to
        //   float using (value - Mean)/Scale.
        const int W = 299;
        const int H = 299;
        const float Mean = 128;
        const float Scale = 1;

        graph = new TFGraph();
        input = graph.Placeholder(TFDataType.String);

        output = graph.Cast(graph.Div(
            x: graph.Sub(
                x: graph.ResizeBilinear(
                    images: graph.ExpandDims(
                        input: graph.Cast(
                            graph.DecodeJpeg(contents: input, channels: 3), DstT: TFDataType.Float),
                        dim: graph.Const(0, "make_batch")),
                    size: graph.Const(new int[] { W, H }, "size")),
                y: graph.Const(Mean, "mean")),
            y: graph.Const(Scale, "scale")), destinationDataType);
    }
}

How does one use Tensorflow's OpOutputList?

On GitHub, an OpOutputList is initialized like so:
OpOutputList outputs;
OP_REQUIRES_OK(context, context->output_list("output",&outputs));
And tensors are added like this:
Tensor* tensor0 = nullptr;
Tensor* tensor1 = nullptr;
long long int sz0 = 3;
long long int sz1 = 4;
...
OP_REQUIRES_OK(context, outputs.allocate(0, TensorShape({sz0}), &tensor0));
OP_REQUIRES_OK(context, outputs.allocate(1, TensorShape({sz1}), &tensor1));
I'm assuming that OpOutputList is like OpInputList in that jagged arrays are allowed.
My question is, how does OpOutputList work? Sometimes I get segfaults where I can't access the first index when I use Eigen::Tensor::flat(), but because I don't understand how allocation works, I can't pinpoint the error.
Many thanks.
An OpOutputList object itself is a very simple value object containing just two integers: the start and end indices of the op outputs that are contained in this list. Being a simple value object, you generally just create it on the stack; no "allocation" is required.
You allocate the tensors that logically belong to an OpOutputList just like any other output tensor, generally using allocate_output(). Here is the implementation of OpOutputList::allocate:
Status OpOutputList::allocate(int i, const TensorShape& shape,
                              Tensor** output) {
  DCHECK_GE(i, 0);
  DCHECK_LT(i, stop_ - start_);
  return ctx_->allocate_output(start_ + i, shape, output);
}
As you can see, it just checks that the index i is indeed within this OpOutputList and then calls allocate_output on the underlying context.

Looping with iterator vs temp object gives different result graphically (Libgdx/Java)

I've got a particle "engine" for which I've implemented a pool system, and I've tested two different ways of rendering every Particle in a list. Please note that the pooling really doesn't have anything to do with the problem. I just followed a tutorial and tried to use the second method, and that's when I noticed that the two behaved differently.
The first way:
for (int i = 0; i < particleList.size(); i++) {
    Iterator<Particle> it = particleList.iterator();
    while (it.hasNext()) {
        Particle p = it.next();
        if (p.isDead()){
            it.remove();
        }
        p.render(batch, delta);
    }
}
Which works just fine. My particles are sharp and they move with the correct speed.
The second way:
Particle p;
for (int i = 0; i < particleList.size(); i++) {
    p = particleList.get(i);
    p.render(batch, delta);
    if (p.isDead()) {
        particleList.remove(i);
        bulletPool.free(p);
    }
}
Which makes all my particles blurry and moving really slow!
The render method for my particles looks like this:
public void render(SpriteBatch batch, float delta) {
    sprite.setX(sprite.getX() + (dx * speed) * delta * Assets.FPS);
    sprite.setY(sprite.getY() + (dy * speed) * delta * Assets.FPS);
    ttl--;
    sprite.setScale(sprite.getScaleX() - 0.002f);
    if (ttl <= 0 || sprite.getScaleX() <= 0)
        isDead = true;
    sprite.draw(batch);
}
Why do the different rendering methods provide different results?
Thanks in advance
You are mutating (removing elements from) a list while iterating over it. This is a classic way to make a mess.
The Iterator has code to handle the removal case correctly, but your index-based for loop does not. Specifically, when you call particleList.remove(i), i is now "out of sync" with the content of the list. Consider what happens when you remove the element at index 3: i will increment to 4, but the old element 4 got shuffled down into index 3, so it will get skipped.
I assume you're avoiding the Iterator to avoid memory allocations. One way to side-step the issue is to reverse the loop (go from particleList.size() - 1 down to 0). Alternatively, only increment i for particles that were not removed. Both variants are sketched below.
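A minimal sketch of both fixes, reusing the names from your second snippet (particleList, bulletPool):

// Option 1: iterate backwards, so a removal never shifts elements
// that are still to be visited.
for (int i = particleList.size() - 1; i >= 0; i--) {
    Particle p = particleList.get(i);
    p.render(batch, delta);
    if (p.isDead()) {
        particleList.remove(i);
        bulletPool.free(p);
    }
}

// Option 2: iterate forwards, but only advance the index when
// nothing was removed at the current position.
for (int i = 0; i < particleList.size(); ) {
    Particle p = particleList.get(i);
    p.render(batch, delta);
    if (p.isDead()) {
        particleList.remove(i);   // the next element slides into slot i
        bulletPool.free(p);
    } else {
        i++;
    }
}

Note that the reversed loop visits (and draws) the particles in the opposite order, which only matters if overlapping particles make draw order visible.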

Output the nodes in a cycle existing in a directed graph

I understand that we can detect cycles with the DFS algorithm by detecting back edges (http://cs.wellesley.edu/~cs231/fall01/dfs.pdf), but I am not able to figure out how to output the nodes in the cycle in an efficient and "clean" manner while following that method.
I would be grateful for some help.
Thanks
This is how I did it in my own implementation. It deviates a little from the naming conventions used in your PDF, but it should be obvious what it does.
All m_* variables are vectors, except m_topoOrder and m_cycle, which are stacks.
The nodes of the cycle will be in m_cycle.
m_onStack keeps track of the nodes that are currently on the recursive call stack.
For a complete description I suggest the book Algorithms by Robert Sedgewick.
void QxDigraph::dfs(int v)
{
    m_marked[v] = true;
    m_onStack[v] = true;

    foreach(int w, m_adj[v]) {
        if(hasCycle()) return;
        else if(!m_marked[w])
        {
            m_edgeTo[w] = v;
            dfs(w);
        }
        else if(m_onStack[w])
        {
            m_cycle.clear();
            for(int x = v; x != w; x = m_edgeTo[x])
                m_cycle.push(x);
            m_cycle.push(w);
            m_cycle.push(v);
        }
    }

    m_onStack[v] = false;
    m_topoOrder.push(v);
}