K-means clustering algorithm in TensorFlow - tensorflow

I was looking for the k-means clustering algorithm in TensorFlow. Does anyone know if TensorFlow has support for it?

Here you have it: https://www.tensorflow.org/api_docs/python/tf/contrib/factorization/KMeansClustering.
This may also be helpful: https://www.tensorflow.org/api_docs/python/tf/contrib/factorization/KMeans.
Note that everything under tf.contrib is experimental and its API may change (tf.contrib was removed entirely in TensorFlow 2.x). Also note that you could easily have found both of these using the search box on the TensorFlow website.
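For intuition, both contrib classes essentially implement Lloyd's algorithm (alternating assignment and centroid-update steps). A minimal stdlib-only sketch of that algorithm, not the contrib API (the function name and toy points below are made up for illustration):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: repeatedly assign each point to its
    nearest centroid, then move each centroid to its cluster's mean."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the centroid with the smallest squared distance to p
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # move each centroid to the mean of its cluster (keep it if empty)
        centroids = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids

centers = kmeans([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)], k=2)
```

The contrib estimators add mini-batching, initialization strategies, and distance-metric options on top of this same loop.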

Related

Distributed Tensorflow Using fit_generator()

Is it possible to use Tensorflow in a distributed manner and still use fit_generator()? In my research so far I have not found anything on how to do this, or whether it is possible at all. If it is not possible, what are some solutions for using distributed Tensorflow when all the data will not fit in memory?
Using fit_generator() is not possible under a tensorflow distribution scope.
Have a look at tf.data. I rewrote all my Keras ImageDataGenerators as tf.data pipelines. It didn't take much time, is more transparent, and is remarkably faster.
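A minimal sketch of such a pipeline feeding model.fit directly (the toy in-memory arrays and tiny model here are made up for illustration; with real images you would start from file paths via tf.data.Dataset.list_files and decode inside .map(...)):

```python
import numpy as np
import tensorflow as tf

# Toy in-memory data standing in for images that would normally be
# streamed from disk batch by batch.
images = np.random.rand(256, 32, 32, 3).astype("float32")
labels = np.random.randint(0, 10, size=256)

# Shuffle, batch, and overlap preprocessing with training via prefetch.
ds = (tf.data.Dataset.from_tensor_slices((images, labels))
      .shuffle(256)
      .batch(32)
      .prefetch(tf.data.AUTOTUNE))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(ds, epochs=1, verbose=0)  # fit accepts the dataset directly
```

Unlike fit_generator(), passing a tf.data.Dataset to model.fit also works under a tf.distribute strategy scope.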

How to use TF 2.0 tf.recompute_grad?

I wanted to use memory-saving gradients (openai/gradient-checkpointing) to reduce the GPU memory cost of my neural network, but I found that this package doesn't work in TF 2.0. I also found that tf.recompute_grad exists for the same purpose, but I couldn't find any examples or tutorials on Google, so I'm asking here. Also, is it possible to use it with tf.keras?
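A minimal sketch of what appears to work in recent TF 2.x (not an official example; the shapes and variable names are made up, and behavior around captured variables has varied across 2.x releases). Wrapping a block of the forward pass means its intermediate activations are discarded and recomputed during the backward pass, trading compute for memory:

```python
import tensorflow as tf

w1 = tf.Variable(tf.random.normal([64, 64]))
w2 = tf.Variable(tf.random.normal([64, 64]))

# Activations inside `block` are not stored for backprop; they are
# recomputed from `x` during the backward pass.
@tf.recompute_grad
def block(x):
    h = tf.nn.relu(tf.matmul(x, w1))
    return tf.nn.relu(tf.matmul(h, w2))

x = tf.random.normal([8, 64])
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(block(x))
grads = tape.gradient(loss, [w1, w2])
```

For tf.keras, the same decorator can be applied to the body of a custom layer's call method, so each wrapped layer becomes a checkpointed segment.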

Numerically Stable Tensorflow

Tensorflow is typically used to compute on images, but I want to use it to compute biological models. However, my biological model requires dividing very large numbers, and this causes numerical instability. Are there any hacks to make Tensorflow more numerically stable? I will follow up with more code in the near future, but if there are any options, please tell me.
Please refer to the paper below on Tensorflow Distributions and see if it helps answer your question.
https://arxiv.org/pdf/1711.10604.pdf
If not, please elaborate on your issue: include a code snippet of what you have tried, and any limitations you hit while implementing your model.
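One common hack for unstable divisions of very large numbers, independent of that paper, is to work in log space: compute log(a/b) = log(a) - log(b) and exponentiate only at the end. A plain-Python sketch of the idea (the toy factors are made up; in TensorFlow the same pattern uses tf.math.log and tf.exp, and casting to tf.float64 is another option):

```python
import math

# A ratio of products whose numerator and denominator each overflow a float.
num_factors = [1e100] * 5
den_factors = [1e100] * 5

# Naive evaluation: both products overflow to inf, and inf / inf is nan.
naive = math.prod(num_factors) / math.prod(den_factors)

# Log-space: log(a/b) = sum(log a_i) - sum(log b_i); exponentiate once.
log_ratio = (sum(math.log(f) for f in num_factors)
             - sum(math.log(f) for f in den_factors))
stable = math.exp(log_ratio)  # the true ratio, 1.0
```

The same trick underlies log-likelihood computations in probabilistic models, where products of many small or large terms are the norm.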

How to run Tensorflow clustering algorithm model

I need to run the k-means algorithm from Tensorflow in Go, i.e. cluster a graph into subgraphs according to a node-similarity matrix.
I came across this article, which shows an example of how to run a Keras-trained model in Go. In that example the algorithm is supervised. However, with clustering algorithms, as I understand it, there is no trained model to save and export to a Go implementation.
The reason I am interested in Tensorflow is that I think its code is optimized and will run much faster than a k-means implementation in Go, even in the scenario I described above.
I would like an opinion on whether:
It is indeed impossible to use a Tensorflow k-means algorithm from Go, and it is better to just use a k-means implementation written in Go for this case.
It is possible to do this; some sort of example or ideas on how to do it would be very much appreciated.

tensorflow embedding projector t-sne algorithm difference from other implementation?

I've been playing with the Tensorflow standalone embedding projector (http://projector.tensorflow.org/) and found it a very helpful visualization tool. However, when I try to replicate the t-SNE results using other implementations (e.g., Rtsne, sklearn.manifold.TSNE), the low-dimensional projections look very different. In particular, the clusters are much more spread out in the embedding projector than in those learned with the R or Python packages.
I used the same perplexity, learning rate, and momentum parameters, and tried both spherizing and not spherizing the data, as offered in the projector.
Could anyone shed light on the differences between the projector's t-SNE implementation and others like Rtsne? For example, is there an 'exaggeration' parameter in the projector as in Rtsne? What optimization algorithm does it use? Or is there anything special about how the visualization is generated?
I believe the source code of the Tensorflow projector is the oss_demo_bin.js file in https://github.com/tensorflow/embedding-projector-standalone. Unfortunately I'm not familiar with JavaScript and found it hard to interpret.
Thanks!