Explanation of the parallel arguments of tf.while_loop in TensorFlow

I want to implement an algorithm that allows a parallel implementation in TensorFlow. My question is what the arguments parallel_iterations, swap_memory and maximum_iterations actually do, and what their appropriate values are depending on the situation. Specifically, the documentation on TensorFlow's site https://www.tensorflow.org/api_docs/python/tf/while_loop says that parallel_iterations is the number of iterations allowed to run in parallel. Is this the number of threads? When should one enable CPU-GPU memory swapping, and for what reason? What are the advantages and disadvantages of this choice? What is the purpose of maximum_iterations? Can it be combined with parallel_iterations?

swap_memory is used when you want to free up memory on the GPU device. Usually when you are training a model, some activations are kept in GPU memory for later use. With swap_memory you can store those activations in CPU memory instead and use the GPU memory to fit, e.g., larger batch sizes, which is the advantage. You would choose this if you need big batch sizes or have long sequences and want to avoid OOM exceptions. The disadvantage is computation time, since you need to transfer the data between CPU and GPU memory.
maximum_iterations works something like this:
while num_iter < 100 and <some condition>:
    do something
    num_iter += 1
So it is useful when you check a condition but also want an upper bound (one example is checking whether your model converges; if it doesn't, you still want to stop after k iterations).
As for parallel_iterations I am not sure, but it sounds like multiple threads, yes. You can try and see the effect in a sample script.
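As a rough sketch of how the three arguments fit together (the loop body here is an arbitrary toy computation, not anything from the question):

import tensorflow as tf

# Double x until it exceeds a threshold, but never run more than 100 iterations.
i0 = tf.constant(0)
x0 = tf.constant(1.0)

def cond(i, x):
    return x < 1e6                 # the "<some condition>" part

def body(i, x):
    return i + 1, x * 2.0          # the "do something" part

i_final, x_final = tf.while_loop(
    cond, body, loop_vars=[i0, x0],
    maximum_iterations=100,        # hard upper bound, combined with cond
    parallel_iterations=10,        # iterations whose ops may be dispatched concurrently
    swap_memory=True)              # allow loop tensors to be swapped to host memory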


What is actually meant by parallel_iterations in tfp.mcmc.sample_chain?

I am not able to understand what the parameter parallel_iterations stands for when sampling multiple chains during MCMC.
The documentation for mcmc.sample_chain() doesn't give many details; it just says that
The parallel iterations are the number of iterations allowed to run in parallel. It must be a positive integer.
I am running a NUTS sampler with multiple chains while specifying parallel_iterations=8.
Does it mean that the chains are strictly run in parallel? Is the parallel execution dependent on multi-core support? If so, what is a good value (based on the number of cores) to set parallel_iterations? Should I naively set it to some higher value?
TensorFlow can unroll iterations of a while loop and execute them in parallel when some parts of the dataflow (e.g. the iteration condition) can be computed faster than other parts. If you don't have a special requirement (e.g. reproducibility with legacy stateful samplers), leave it at the default.
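For illustration, here is a minimal sketch of where the argument goes (the target distribution, step size and chain count are made-up placeholders):

import tensorflow as tf
import tensorflow_probability as tfp

# Four chains sampling a standard normal with NUTS. The chains are batched
# through `current_state`; `parallel_iterations` only controls how much of the
# underlying tf.while_loop dataflow may be dispatched concurrently.
target = tfp.distributions.Normal(loc=0., scale=1.)
kernel = tfp.mcmc.NoUTurnSampler(target_log_prob_fn=target.log_prob,
                                 step_size=0.1)

samples = tfp.mcmc.sample_chain(
    num_results=1000,
    current_state=tf.zeros([4]),    # 4 chains
    kernel=kernel,
    num_burnin_steps=500,
    trace_fn=None,
    parallel_iterations=8)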

Kotlin's Array vs ArrayList vs List for storing large amounts of data

I'm building a Deep Neural Network in Kotlin (I know Python would be better, but I have to do it in Kotlin).
For training the net I need a huge amount of data from the MNIST database, this means I need to read about 60,000 images from a single file in IDX format and store them for simultaneous use.
Every image consists of 784 bytes, so the total size is:
784 * 60,000 = 47,040,000 bytes = ~47 MB of training data,
which isn't that much, since I'm running the JVM in an environment with 8 GB of RAM.
After reading an image I need to convert it to a KMatrix, a custom data structure for matrix math operations. Under the hood of a KMatrix there's an Array<Array<Double>>.
I need a structure to store all the images at once, so I'm currently using a List<KMatrix>, which basically translates to a List<Array<Array<Double>>>.
The problem is that while building the List<KMatrix> the JVM runs out of memory, throwing an OutOfMemoryError: GC overhead limit exceeded.
I wonder if the problem is the data structures I'm using (i.e. should I use an ArrayList instead of an Array?) or maybe how I'm building the whole thing up (i.e. there is some optimization work I need to do).
I'll put the code, if needed, as soon as I can.
Thanks for your help.
Self-answer with the summarized solution (Thanks to answers by #Tenfour04 and #gidds)
As #Tenfour04 stated, you have basically three alternatives to the Array<Array<Double>> for the KMatrix:
an Array<DoubleArray>, which keeps the same logic as the original but saves a lot of memory and improves performance;
a 1-dimensional DoubleArray, which saves a bit of extra memory and gains a bit more performance, but with increased complexity due to the index mapping (the [i; j] element of the matrix is the [i * w + j] element of the array); as #gidds pointed out, this probably isn't worth it;
a 1-D DoubleBuffer created with ByteBuffer.allocateDirect(8 * size).asDoubleBuffer(), which improves performance even further but exposes only get and put methods, so it is not a good fit if you need anything beyond simple get/put access.
Conclusion
I chose option 2, since in my case I'm performing very intensive operations, but in the common case option 1 is probably the best, as it balances complexity and performance.
If you need the highest-performance structure and get/put methods are enough, I'd say that option 3 is what you're looking for.
Hope this helps someone

Understanding tensorflow inter/intra parallelism threads

I would like to understand a little more about these two parameters, intra_op_parallelism_threads and inter_op_parallelism_threads:
session_conf = tf.ConfigProto(
    intra_op_parallelism_threads=1,
    inter_op_parallelism_threads=1)
I read this post which has a pretty good explanation: TensorFlow: inter- and intra-op parallelism configuration
But I am seeking confirmation and also asking new questions below. I am running my task with Keras 2.0.9 and TensorFlow 1.3.0:
1. When both are set to 1, does it mean that, on a computer with 4 cores for example, there will be only 1 thread shared by the four cores?
2. Why does using 1 thread not seem to affect my task very much in terms of speed? My network has the following structure: dropout, conv1d, maxpooling, lstm, globalmaxpooling, dropout, dense. The post cited above says that if there are a lot of matrix multiplication and subtraction operations, using a multi-thread setting can help. I do not know much about the math underneath, but I'd imagine there are quite a lot of such matrix operations in my model. However, changing both params from 0 to 1 only causes a one-minute slowdown over a ten-minute task.
3. Why can multi-threading be a source of non-reproducible results? See Results not reproducible with Keras and TensorFlow in Python. This is the main reason I need to use a single thread, as I am doing scientific experiments. Surely TensorFlow has been improving over time, so why has this not been addressed in a release?
Many thanks in advance
When both parameters are set to 1, there will be 1 thread running on 1 of the 4 cores. The core on which it runs might change but it will always be 1 at a time.
When running something in parallel there is always a trade-off between lost time on communication and gained time through parallelization. Depending on the used hardware and the specific task (like the size of the matrices) the speedup will change. Sometimes running something in parallel will be even slower than using one core.
For example, when using floats on a CPU, (a + b) + c will not be equal to a + (b + c) because of the finite floating-point precision. Using multiple parallel threads means that operations like a + b + c will not always be computed in the same order, leading to slightly different results on each run. However, those differences are extremely small and will not affect the overall result in most cases. Completely reproducible results are usually only needed for debugging. Enforcing complete reproducibility would slow down multi-threading a lot.
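You can see the non-associativity directly in plain Python:

a, b, c = 0.1, 0.2, 0.3
print((a + b) + c)   # 0.6000000000000001
print(a + (b + c))   # 0.6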
Answer to question 1 is "No".
Setting both parameters to 1 (intra_op_parallelism_threads=1, inter_op_parallelism_threads=1) will generate N threads, where N is the number of cores. I've tested it multiple times on different versions of TensorFlow, and this is true even for the latest version. There are multiple questions on how to reduce the number of threads to 1, but with no clear answer. Some examples are:
How to stop TensorFlow from multi-threading
https://github.com/usnistgov/frvt/issues/12
Changing the number of threads in TensorFlow on Cifar10
Importing TensorFlow spawns threads
https://github.com/tensorflow/tensorflow/issues/13853
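One way to observe this yourself is a quick sketch like the following (psutil is assumed to be installed and is only used to count OS-level threads; the exact count will vary by TensorFlow build):

import tensorflow as tf
import psutil

config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)
sess = tf.Session(config=config)
sess.run(tf.constant(1.0) + tf.constant(2.0))   # forces the thread pools to be created

print(psutil.Process().num_threads())           # typically still more than 1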

Word2Vec: Any way to train the model faster?

I use Gensim Word2Vec to train word sets in my database.
I have about 400,000 phrases (each phrase is short; about 700 MB in total) in my PostgreSQL database.
This is how I train on these data using the Django ORM:
post_vector_list = []
for post in Post.objects.all():
    post_vector = my_tokenizer(post.category.name)
    post_vector.extend(my_tokenizer(post.title))
    post_vector.extend(my_tokenizer(post.contents))
    post_vector_list.append(post_vector)
word2vec_model = gensim.models.Word2Vec(post_vector_list, window=10, min_count=2, size=300)
But this job takes a lot of time and doesn't feel efficient.
In particular, building post_vector_list took a lot of time and space.
I want to improve the training speed but have no idea how to do it.
I'd appreciate your advice. Thanks.
To optimize such code, you need to collect good information about where the time is spent.
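For example, a rough way to split the measurement into the two phases (build_post_vector_list is just a hypothetical wrapper around the loop from the question):

import time
import gensim

start = time.time()
post_vector_list = build_post_vector_list()   # hypothetical wrapper around the question's loop
print('data preparation took %.1f s' % (time.time() - start))

start = time.time()
word2vec_model = gensim.models.Word2Vec(post_vector_list, window=10, min_count=2, size=300)
print('training took %.1f s' % (time.time() - start))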
Is most of the time spent preparing post_vector_list?
If so, you will want to make sure my_tokenizer (whose code is not shown) is as efficient as possible. You may want to try to minimize the number of extend()s and append()s that are done on large lists. You might even have to take a look at your DB's configuration or options to speed up the DB-to-object mapping done inside Post.objects.all().
Is most of the time spent in the call to Word2Vec()?
If so, other steps may help:
- ensure you're using gensim's Cython-optimized routines – if not, you should be seeing a logged warning (and training will be up to 100X slower)
- consider using a workers=4 or workers=8 optional argument to use more threads, if your machine has at least 4 or 8 CPU cores
- consider using a larger min_count, which speeds training somewhat (and since vectors for words where there are only a few examples typically aren't very good anyway, doesn't lose much and can even improve the quality of the surviving words)
- consider using a smaller window, since training takes longer for larger windows
- consider using a smaller vector_size (previously called size), since training takes longer for larger-size vectors
- consider using a more-aggressive (smaller) value for the optional sample argument, which randomly skips more of the most-frequent words. The default is 1e-04, but values of 1e-05 or 1e-06 (especially on larger corpuses) can offer additional speedup, and even often improve the final vectors (by spending relatively less training time on words with an excess of usage examples)
- consider using a lower-than-default (5) value for the optional epochs parameter (previously called iter). (I wouldn't recommend this unless the corpus is very large – so it already has many redundant, equally-good examples of the same words throughout.)
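Putting several of these suggestions together, a sketch might look like this (the exact values are placeholders that depend on your data, and vector_size/epochs were called size/iter in older gensim releases):

import gensim

model = gensim.models.Word2Vec(
    post_vector_list,
    workers=8,          # more threads, if you have the CPU cores
    min_count=5,        # discard rarer words
    window=5,           # smaller context window
    vector_size=100,    # smaller vectors ("size" in older gensim)
    sample=1e-5,        # downsample frequent words more aggressively
    epochs=5)           # number of passes ("iter" in older gensim)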
You could use a Python generator instead of loading all the data into the list. Gensim works with Python generators too. The code will look something like this:
# An iterable that re-reads the posts from the database on every pass,
# so the full corpus never has to be held in memory at once.
class Post_Vectors(object):
    def __init__(self, Post):
        self.Post = Post

    def __iter__(self):
        for post in self.Post.objects.all():
            post_vector = my_tokenizer(post.category.name)
            post_vector.extend(my_tokenizer(post.title))
            post_vector.extend(my_tokenizer(post.contents))
            yield post_vector

post_vectors = Post_Vectors(Post)
word2vec_model = gensim.models.Word2Vec(post_vectors, window=10, min_count=2, size=300, workers=??)
For a gensim speedup, if you have a multi-core CPU, you can use the workers parameter (by default it is 3).

Is there a GPU accelerated numpy.max(X, axis=0) implementation in Theano?

Do we have a GPU-accelerated version of numpy.max(X, axis=None) in Theano?
I looked into the documentation and found theano.tensor.max(X, axis=None), but it is 4-5 times slower than the numpy implementation.
I can assure you, it is not slow because of some bad choice of matrix size. Same matrix under theano.tensor.exp is 40 times faster than its numpy counterpart.
Any suggestions?
The previous answer is only partial, and its suggested workaround should not make any difference, as that workaround is exactly what ends up in the final compiled code: there is an optimization that does this transformation automatically.
The title of the question isn't the same as the content. They differ by the axis argument. I'll answer both questions.
If the axis is 0 or None, we support this operation on the GPU for matrices. If the axis is None, we have a basic implementation that isn't well optimized, as it is harder to parallelize. If the axis is 0, we also have a basic implementation, but it is faster, as it is easier to parallelize.
Also, how did you do your timing? If you just make one function with only that operation and test it via the device=gpu flag for your comparison, this will include the transfer time between CPU and GPU. This is a memory-bound operation, so if you include the transfer in your timing, I personally don't expect any speed-up in that case. To see only the GPU operation, use the Theano profiler: run with the Theano flag profile=True.
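A profiling sketch might look something like this (shapes and values are arbitrary):

import numpy as np
import theano
import theano.tensor as T

X = T.matrix('X')
f = theano.function([X], T.max(X, axis=0), profile=True)

data = np.random.rand(4096, 4096).astype(theano.config.floatX)
f(data)
f.profile.summary()   # per-op timings, separating the reduction from the transfers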
The max and exp operations are fundamentally different; exp (and other operations like addition, sin, etc.) is an elementwise operation that is embarrassingly parallelizable, while max requires a parallel reduction algorithm that basically builds up a tree of pairwise comparisons over an array. It's not impossible to speed up max, but it's not as easy as exp.
Anyway, the theano implementation of max basically consists of the following lines (in theano/tensor/basic.py):
try:
    out = max_and_argmax(x, axis)[0]
except Exception:
    out = CAReduce(scal.maximum, axis)(x)
where max_and_argmax is a bunch of custom code that, to my eye, implements a max+argmax operation using numpy, and CAReduce is a generic GPU-accelerated scan operation used as a fallback (which, according to the comments, doesn't support grad etc.). You could try using the fallback directly and see whether that is faster, maybe something like this:
from theano.tensor.elemwise import CAReduce
from theano.scalar import maximum

def mymax(X, axis=None):
    # call the generic GPU-capable reduction directly
    return CAReduce(maximum, axis)(X)