From looking at the TensorFlow code, some MKL optimizations are done by a graph rewrite that replaces sets of nodes with fused functions that use MKL. I tried to look for the rewrites with tf.logging.set_verbosity(1) but never see any of the log messages I expect.
I have built TensorFlow from source on CPU with MKL and XLA enabled. I think the build is using MKL because I can use the 'NCHW' data format for tf.nn.conv2d and tf.nn.bias_add in the forward pass if they occur together. It also runs faster and fully utilises the CPU. The backward pass, though, errors with "CPU BiasGradOp only supports NHWC", even though MKL functions appear to exist to fuse Conv2D and BiasAdd both forwards and backwards with 'NCHW'. So I want to look directly for the rewrites.
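For reference, a minimal sketch of the forward pattern in question (TF 1.x API; the shapes are arbitrary placeholders):

import tensorflow as tf

# Arbitrary shapes, just to exercise the Conv2D + BiasAdd pattern with NCHW.
x = tf.placeholder(tf.float32, [1, 3, 8, 8])   # NCHW input
w = tf.get_variable('w', [3, 3, 3, 16])        # HWIO filter
b = tf.get_variable('b', [16])
y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME', data_format='NCHW')
y = tf.nn.bias_add(y, b, data_format='NCHW')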
How can I see if the graph rewrites are happening?
One way is to use the timeline/trace feature; you can follow this StackOverflow answer. If MKL is being used you will see nodes with names like _MklReshape or _MklConv2D.
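A minimal sketch of producing such a trace with the TF 1.x API (the toy graph is just an illustration; substitute your own model):

import tensorflow as tf
from tensorflow.python.client import timeline

# Toy graph; replace with your own model.
x = tf.random_normal([1, 224, 224, 3])
y = tf.layers.conv2d(x, 32, 3)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()
    sess.run(y, options=run_options, run_metadata=run_metadata)
    # Write a Chrome trace; open it in chrome://tracing and look for
    # node names such as _MklConv2D.
    tl = timeline.Timeline(run_metadata.step_stats)
    with open('timeline.json', 'w') as f:
        f.write(tl.generate_chrome_trace_format())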
This is not specifically testing for graph rewrites, but you can check whether MKL is enabled in TensorFlow by using:
tf.python.pywrap_tensorflow.IsMklEnabled()
From: https://github.com/tensorflow/tensorflow/issues/17176#issuecomment-371364155
Tensorflow has a debugger (tfdbg) with a tutorial here. The debugger prints a list of all graph nodes that will be visited by a session.run() before running it.
You can also explore the input tensors, output tensors, and the attributes of each node.
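For example, wrapping a session with the tfdbg CLI is a one-line change (TF 1.x):

import tensorflow as tf
from tensorflow.python import debug as tf_debug

sess = tf.Session()
# On the next sess.run() the tfdbg CLI opens and lists every node that
# will be executed, so rewritten _Mkl* ops should show up by name.
sess = tf_debug.LocalCLIDebugWrapperSession(sess)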
Ariel's answer also works to see the op types if you don't want to take the time to compile with tfdbg.
For v2.0.0+ the command is:
python -c "from tensorflow.python import pywrap_tensorflow; print(pywrap_tensorflow.IsMklEnabled())"
source: https://software.intel.com/en-us/forums/intel-optimized-ai-frameworks/topic/837000
I am trying to understand how the internal flow goes in MXNet when we call forward. Is there any way to get the source code of MXNet?
This really depends on what your symbolic graph looks like. I assume you use MXNet with Python (Python documentation). There you can choose to use the MXNet symbol library or the Gluon library.
Now, you were asking whether one can inspect the code, and, yes, you can find it on GitHub. The folder python contains the Python interface and src contains all MXNet sources. What happens on forward is ultimately defined by the MXNet execution engine, which tracks the input/output dependencies of operators and neural network layers and allocates memory on the different devices (CPU, GPUs). There is general architecture documentation for this.
I suppose you are interested in what each and every operation does, such as argmax (a reduction), tanh (a unary math operation) or convolution (a complex neural network operation). These you can find in the operator folder of MXNet. This would require documentation of its own, and there is a dedicated forum for MXNet specifics here, but I will give a short orientation:
Each operation in a (symbolic) execution graph needs a defined forward and backward operation. It also needs to define its output shape, so that it can be chained with other operations. If an operator needs weights, it must define the amount of memory it requires, so that MXNet can allocate it. (A minimal Python sketch follows this list.)
Each operation requires several implementations for a) CPU b) GPU (CUDA) c) wrapper around cuDNN
All unary math operations follow the same pattern, so they are all defined in a similar way in mshadow_op.h (e.g. relu).
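As a minimal sketch of those requirements, here is a hypothetical custom ReLU written against MXNet's Python CustomOp API (the name my_relu is made up for illustration; the real built-in operators live in C++ under src/operator):

import mxnet as mx

class MyRelu(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        # Forward pass: y = max(x, 0).
        self.assign(out_data[0], req[0], mx.nd.maximum(in_data[0], 0))

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        # Backward pass: dx = dy where x > 0, else 0.
        self.assign(in_grad[0], req[0], out_grad[0] * (in_data[0] > 0))

@mx.operator.register("my_relu")
class MyReluProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(MyReluProp, self).__init__(need_top_grad=True)

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        # Output shape equals input shape, so the op can be chained.
        return [in_shape[0]], [in_shape[0]], []

    def create_operator(self, ctx, shapes, dtypes):
        return MyRelu()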
This is all I can tell you based on your quite broad question.
The TensorFlow Lite binary size is about 900KB, which is still too large for me. How can I reduce the size so that only the operators needed to support my model are included?
Tensorflow Lite
If you are using TensorFlow Lite, the only solution I have found is to work at the level of the Interpreter and customize the kernel library (OpResolver). I don't think there is an automatic way of doing this, and the only available example (here the header) is not so easy to understand, IMHO. I think more improvements on this topic will be included in the next releases. Also, I'm not sure this will reduce the size of the final library. In the API notes this approach is considered equivalent to selective registration, which is explained in the next part of the answer for TensorFlow Mobile.
Tensorflow Mobile
In answer to the question "How can I enable only the ops used by my model?", see the TensorFlow Mobile documentation (the subsection Binary Size).
The usual size for TensorFlow Mobile seems to be 12MB, but it is possible to reduce it by including only the ops the model requires. Obviously this requires building the framework from source using Bazel.
You can create a header of required ops (ops_to_register.h) using the tool print_selective_registration_header.py, which is available here. The generated header should be placed in the root of the TensorFlow source directory.
You are now ready to compile the library, passing the SELECTIVE_REGISTRATION definition to the compiler (building with Bazel, you should add the option --copt="-DSELECTIVE_REGISTRATION").
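A sketch of the two steps (the graph path and the final build target are placeholders; run from the TensorFlow source root):

python tensorflow/python/tools/print_selective_registration_header.py \
    --graphs=path/to/frozen_graph.pb > ops_to_register.h
bazel build -c opt --copt="-DSELECTIVE_REGISTRATION" //your:target  # placeholder target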
I think this procedure will produce a library with only the minimal set of ops inside. Some other compiler optimization flags may help you with the size (sometimes at the cost of performance).
Compile options
I actually don't know how you are compiling your code (static lib or dynamic lib), what your needs are in terms of performance, or what the default options in the TensorFlow Bazel file are, but you may try the following (a sketch combining these flags follows the list):
reduce the optimization level to -O1 or -Os (this sometimes helps with binary size; I think the default for TensorFlow is -O2 for the framework and -O3 for the individual kernels, though I don't know about the Lite version).
use the flags -fdata-sections and --gc-sections (the latter is a linker option): quoting the gcc documentation: "[-fdata-sections] Together with a linker garbage collection (linker --gc-sections option) these options may lead to smaller statically-linked executables (after stripping)." (It seems that at least --gc-sections is used in the linker options for Raspberry Pi.)
-fvisibility-inlines-hidden may affect the performance of inline functions, but it decreases the size of the export table of the shared object. This option may break the library. Some explanations can be read here.
Even more dangerous is -fvisibility=hidden. Look at it here.
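For illustration, a hypothetical size-oriented build combining these flags (the file names are placeholders; measure the effect on your own binary):

g++ -Os -fdata-sections -ffunction-sections -c my_code.cc -o my_code.o
g++ my_code.o -Wl,--gc-sections -o my_app
strip my_app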
UPDATE: I have to re-write this question as after some investigation I realise that this is a different problem.
Context: running Keras in a grid-search setting using the KerasClassifier wrapper with scikit-learn. System: Ubuntu 16.04; libraries: Anaconda distribution 5.1, Keras 2.0.9, scikit-learn 0.19.1, TensorFlow 1.3.0 or Theano 0.9.0, using CPUs only.
Code:
I simply used the code here for testing: https://machinelearningmastery.com/use-keras-deep-learning-models-scikit-learn-python/, the second example 'Grid Search Deep Learning Model Parameters'. Pay attention to line 35, which reads:
grid = GridSearchCV(estimator=model, param_grid=param_grid)
Symptoms: when grid search uses more than 1 job (meaning CPUs?), e.g. setting 'n_jobs' in the line above to 2, as in the line below:
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=2)
the code hangs indefinitely, with either TensorFlow or Theano, and there is no CPU usage (see the attached screenshot, where 5 Python processes were created but none is using the CPU).
By debugging, the problem appears to be the following line in sklearn.model_selection._search:
line 648: for parameters, (train, test) in product(candidate_params,
                                                   cv.split(X, y, groups)))
The program hangs on this line and cannot continue.
I would really appreciate some insights as to what this means and why this could happen.
Thanks in advance
Are you using a GPU? If so, you can't have multiple threads each running a variation of the params, because they won't be able to share the GPU.
Here's a full example on how to use keras, sklearn wrappers in a Pipeline with GridsearchCV: Pipeline with a Keras Model
If you really want to have multiple jobs in the GridSearchCV, you can try to limit the GPU fraction used by each job (e.g. if each job only allocates 0.5 of the available GPU memory, you can run 2 jobs simultaneously)
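A minimal sketch with Keras on the TF 1.x backend (the 0.5 fraction is illustrative):

import tensorflow as tf
from keras import backend as K

# Each job grabs at most half the GPU memory, so two jobs can coexist.
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
K.set_session(tf.Session(config=config))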
See these issues:
Limit the resource usage for tensorflow backend
GPU memory fraction does not work in keras 2.0.9 but it works in 2.0.8
I dealt with this problem too, and it really slowed me down not being able to run what is essentially trivially-parallelizable code. The issue is indeed with the TensorFlow session: if a session is created in the parent process before GridSearchCV.fit(), it will hang!
The solution for me was to keep all session/graph creation code restricted to the KerasClassifier class and the model creation function I passed to it.
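A minimal sketch of that structure (the layer sizes are arbitrary; the point is that no TF graph or session exists in the parent before fit()):

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier

def create_model():
    # All graph/session state is created here, inside the worker,
    # not in the parent process before GridSearchCV.fit().
    model = Sequential()
    model.add(Dense(12, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model

model = KerasClassifier(build_fn=create_model, verbose=0)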
Also, what Felipe said about the memory is true: you will want to restrict the memory usage of TF in either the model creation function or a subclass of KerasClassifier.
Related info:
Session hang issue with python multiprocessing
Keras + Tensorflow and Multiprocessing in Python
TL;DR Answer: You can't because your Keras model can't be serialized, and serialization is needed for parallelizing in Python with joblib.
This problem is much detailed here: https://www.neuraxle.org/stable/scikit-learn_problems_solutions.html#problem-you-can-t-parallelize-nor-save-pipelines-using-steps-that-can-t-be-serialized-as-is-by-joblib
The solution to parallelize your code is to make your Keras estimator serializable. This can be done using savers as described at the link above.
If you're lucky enough to be using TensorFlow v2's prebuilt Keras module, the following practical code sample will prove useful, as you'd mostly just need to take the code and adapt it to yours:
https://github.com/guillaume-chevalier/seq2seq-signal-prediction
In this example, all the saving and loading code is pre-written for you using Neuraxle-TensorFlow, and this makes it parallelizable if you use Neuraxle's AutoML methods (e.g. Neuraxle's grid search and Neuraxle's own parallelism features).
I am trying to run a model that was written for GPU on a CPU, and have discovered that the tf.nn.bias_add function does not support a data_format attribute of "NCHW" when executing on the CPU; it only supports "NHWC".
Is there a list of which operations, like this one, are restricted to GPU? I haven't been able to find one yet.
No, there is no such list, and sadly the documentation does not provide these details either.
There was an attempt to ask for a documentation improvement here, but it does not look like it was implemented.
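One empirical workaround: run with device-placement logging enabled and see where each op lands (TF 1.x sketch; the toy ops are placeholders):

import tensorflow as tf

# log_device_placement prints the device chosen for every node, which is
# one way to discover CPU/GPU support op by op.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.constant([1.0, 2.0], name='a')
    b = tf.constant([3.0, 4.0], name='b')
    print(sess.run(a + b))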
I'm trying to coerce TensorFlow on OS X to read from HDFS. The documentation
https://www.tensorflow.org/deploy/hadoop
does not clearly specify whether this is possible, and the code refers only to "posix" operating systems. The error I'm seeing when trying to use HDFS is the following:
UnimplementedError (see above for traceback): File system scheme hdfs not implemented
[[Node: ReaderReadV2 = ReaderReadV2[_device="/job:localhost/replica:0/task:0/cpu:0"](TFRecordReaderV2, input_producer)]]
Here's what I've done up to this point:
brew installed Hadoop 2.7.2
separately compiled Hadoop 2.7.2 for the native libraries. Hadoop is installed on /usr/local/Cellar/hadoop/2.7.2/libexec on my system, and the native libraries (libhdfs.dylib) are in ~/Source/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-2.7.2/lib/native.
Edited the code at https://github.com/tensorflow/tensorflow/blob/v1.0.0/tensorflow/core/platform/hadoop/hadoop_file_system.cc#L113-L119 to read from libhdfs.dylib rather than libhdfs.so, recompiled, and reinstalled Tensorflow. (I have to admit this is pretty boneheaded, and I have no idea if it's all that's required to make this code work on Mac.)
Here is the code to reproduce.
test.sh:
set -x
export JAVA_HOME=$($(dirname $(which java | xargs readlink))/java_home)
export HADOOP_HOME=/usr/local/Cellar/hadoop/2.7.2/libexec
. $HADOOP_HOME/libexec/hadoop-config.sh
export HADOOP_HDFS_HOME=$(echo ~/Source/hadoop/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-2.7.2)
export CLASSPATH=$($HADOOP_HDFS_HOME/bin/hdfs classpath --glob)
# Virtual environment with Tensorflow and necessary dependencies
. venv/bin/activate
python ./test.py
test.py:
import tensorflow as tf
_, example_bytes = tf.TFRecordReader().read(
    tf.train.string_input_producer(
        [
            "hdfs://localhost:9000/user/foo/feature_output/part-r-00000",
            "hdfs://localhost:9000/user/foo/feature_output/part-r-00001",
            "hdfs://localhost:9000/user/foo/feature_output/part-r-00002",
            "hdfs://localhost:9000/user/foo/feature_output/part-r-00003",
        ]
    )
)

with tf.Session().as_default() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    print(len(sess.run(example_bytes)))
The code path I'm seeing in the TensorFlow source seems to indicate that I'd receive a different error than the one above if the issue were really Mac-specific, since some kind of handler is registered for the "hdfs" scheme regardless: https://github.com/tensorflow/tensorflow/blob/v1.0.0/tensorflow/core/platform/hadoop/hadoop_file_system.cc#L474 . Has anyone else succeeded in coercing TensorFlow to work with HDFS on a Mac? If it isn't supported, is there an easy place to patch it?
I'm also open to suggestions as to what might be a better approach. The high-level goal is to efficiently train a model in parallel, using shared parameter servers, considering that each worker will only read a subset of the data. This is readily accomplished using the local filesystem, but it's less clear how to scale beyond that. Even if I do succeed in making the code above work, the result could suffer from problems with data locality.
This thread https://github.com/tensorflow/tensorflow/issues/2218 suggests using pyspark.RDD.toLocalIterator to iterate over the data set with a placeholder in the graph. Aside from my concern about forcing each worker to iterate through the full dataset, I don't see a way to coerce Tensorflow's builtin Estimator class to accept a custom feed function along with a specified input_fn, and a custom input_fn appears necessary in order to take advantage of models like LinearClassifier (https://www.tensorflow.org/tutorials/linear) that are capable of learning from sparse, weighted features.
Any thoughts?
Did you enable HDFS support in ./configure when building? That's the error you would get if HDFS is disabled.
I think you made the correct change to make it work. Feel free to send a pull request to look for .dylib on macOS.