In practice, isn't running global_variables_initializer enough to initialize model variables?
local_variables_initializer seems to be unnecessary, and it is absent even from official and semi-official TensorFlow example code. See for example:
https://github.com/dandelionmane/tf-dev-summit-tensorboard-tutorial
https://www.tensorflow.org/get_started/mnist/pros
In both cases only global_variables_initializer is used.
Am I missing something here? Is there any case where I should call local_variables_initializer explicitly?
local_variables_initializer is useful in particular for streaming metrics (e.g. tf.contrib.metrics.streaming_auc). As the contrib.metrics documentation puts it:
Because the streaming metrics use local variables, the Initialization stage is performed by running the op returned by tf.local_variables_initializer().
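For example, here is a minimal TF 1.x sketch (assuming tf.contrib.metrics.streaming_auc's (predictions, labels) argument order); without the local initializer, running the metric fails with a FailedPreconditionError about uninitialized values:

```python
import tensorflow as tf  # TF 1.x

predictions = tf.placeholder(tf.float32, [None])
labels = tf.placeholder(tf.bool, [None])

# The metric's accumulators (true positives, etc.) are *local* variables.
auc, update_op = tf.contrib.metrics.streaming_auc(predictions, labels)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())  # required for the accumulators
    sess.run(update_op, feed_dict={predictions: [0.1, 0.8, 0.6],
                                   labels: [False, True, True]})
    print(sess.run(auc))
```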
Related: I have read a similar question (https://www.tensorflow.org/guide/variables), but it does not answer this.
I see that variables in TF can be created with three different raw operations. I understand that Variable is deprecated and VariableV2 should be used. However, I have read somewhere that Variable returns a variable by reference, which is considered outdated, while VarHandleOp returns a resource-typed variable, which is what should be used. VariableV2 confuses me: is it a new version of the old-style Variable, and hence still not up to date, or is it actually the modern approach that merely keeps the old Variable interface (probably for compatibility reasons)? Maybe it uses something like VarHandleOp under the hood?
A related question: what is the "container" used in all three of the aforementioned operations? All the documents I found only say that it defaults to "" and that this is fine. But what is it?
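For concreteness, this is roughly what I mean by the three raw operations (a sketch via tf.raw_ops; the shapes, dtypes, and empty container/shared_name values here are arbitrary):

```python
import tensorflow as tf

# Ref-typed ops are graph-only, so build an explicit graph.
with tf.Graph().as_default():
    v1 = tf.raw_ops.Variable(shape=[2, 2], dtype=tf.float32,
                             container="", shared_name="")    # deprecated, ref-typed
    v2 = tf.raw_ops.VariableV2(shape=[2, 2], dtype=tf.float32,
                               container="", shared_name="")  # same ref-style interface
    h = tf.raw_ops.VarHandleOp(dtype=tf.float32, shape=[2, 2],
                               container="", shared_name="")  # resource-typed handle
    print(v1.dtype, v2.dtype, h.dtype)  # float32_ref, float32_ref, resource
```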
I am trying to migrate my code from TF 1.x to TF 2, and the TF 2 docs say that tf.compat.v1.metrics.auc is deprecated because "The value of AUC returned by this may race with the update". This statement is vague to me. Does it mean the function can't be used in a multithreaded context? If not, in what situations can I use it?
The functionality of tf.compat.v1.metrics.auc has moved under tf.keras: as the deprecation message itself suggests, you can start using tf.keras.metrics.AUC in TF 2. (The "race" refers to the old API returning a separate value tensor and update op that read and write the same local variables; if you fetch both in the same Session.run call, the value you read may or may not reflect that run's update.) Some hyperparameters have changed in the updated version; details and an example can be found in the tf.keras.metrics.AUC documentation.
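A minimal TF 2 sketch of the replacement (the constructor arguments shown are the defaults):

```python
import tensorflow as tf

# Stateful Keras metric replacing tf.compat.v1.metrics.auc.
auc = tf.keras.metrics.AUC(num_thresholds=200, curve='ROC')

auc.update_state(y_true=[0, 0, 1, 1], y_pred=[0.1, 0.4, 0.35, 0.8])
print(auc.result().numpy())  # AUC over everything accumulated so far

auc.reset_state()  # clear the accumulators between evaluations
                   # (reset_states() in older TF 2 releases)
```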
In general, there are three ways I can think of for reading custom data in TF:
Native Implementation / Custom Data Reader
https://www.tensorflow.org/versions/r0.10/how_tos/new_data_formats/index.html
Python Function Wrapping
https://www.tensorflow.org/versions/r0.9/api_docs/python/script_ops.html
Placeholders
I have already implemented this successfully, but I want an in-graph solution like (1) or (2).
Can someone elaborate on the pros and cons of (1) versus (2), mainly from a performance/efficiency standpoint, so that I can use them with the queue runners? My feeling is that (1) should be the most efficient and robust way, but that solution is not portable unless I share or PR the code, and other users would have to compile it themselves. Whereas (2) and (3) are portable, right?
I have also opened an 'LMDB Reading Feature' feature-request issue on GitHub, but it was misinterpreted and closed as a question.
UPDATE
TensorFlow now has a native reader: https://github.com/tensorflow/tensorflow/pull/9950
Both (2) and (3) suffer from Python's GIL; eventually you'll probably lock up. Such an implementation is therefore also slower because it's not in-graph, and it will be quite hard to parallelize correctly. It's quick and easy, but also suboptimal. So, go for (1) to go pro.
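For illustration, here is a minimal TF 1.x sketch of option (2) using tf.py_func; read_record is a hypothetical parser for your custom format:

```python
import numpy as np
import tensorflow as tf  # TF 1.x

def read_record(path):
    # Hypothetical parser for a custom binary format: here, raw float32s.
    # .copy() because np.frombuffer returns a read-only view.
    with open(path, 'rb') as f:
        return np.frombuffer(f.read(), dtype=np.float32).copy()

path = tf.placeholder(tf.string)
# Wrap the Python reader as a graph op; it still runs under the GIL.
record = tf.py_func(read_record, [path], tf.float32)

with tf.Session() as sess:
    print(sess.run(record, feed_dict={path: '/tmp/example.bin'}))
```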
I have also found that (1) has two solutions:
(1A) Implement a custom op in the source.
This is the way to go if you want your op to end up in the TensorFlow source code at some point, quality permitting.
(1B) Implement a stand-alone custom op.
This turns out to be very easy and portable: you can just compile your own .cc code and register it through Python, with no rebuilding of the TensorFlow source required: https://www.tensorflow.org/extend/adding_an_op
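The Python side of (1B), as a sketch; the .so filename and op name here are hypothetical placeholders for whatever you register in your .cc file:

```python
import tensorflow as tf

# Load a stand-alone custom op compiled from your own .cc file
# (built with g++ against TF's headers; no source-tree rebuild needed).
reader_module = tf.load_op_library('./lmdb_reader_op.so')

# The op registered in the shared object is exposed as a Python function,
# named after its registration (hypothetical example):
# records = reader_module.lmdb_reader(filename='data.mdb')
```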
I use the keySet() API in production, but I know it's not recommended, so I'd like to switch to the new API introduced in version 7.x. It is described on the official blog:
http://blog.infinispan.org/2014/05/iterate-all-entries-in-cache.html
But I can't figure out how to use it with the Hot Rod RemoteCache.
Has anyone already tried this successfully?
Thanks a lot.
This was answered at https://developer.jboss.org/message/920029?et=watches.email.thread#920029
Radim Vansa said:
Regrettably, this feature is not available yet over Hot Rod. Remote clients have certain lag after embedded-mode features. Map/Reduce and DistributedExecutors over HR are quite close on the roadmap, distributed entry retrievers should follow.
William Burns said:
I also wanted to make sure you are aware that the keySet method is fine to use in the API as outlined [1]. The Cache Javadoc has some more specifics [2]. Basically the methods you should never use are the toArray methods on the collections returned from keySet, entrySet or values. The other methods are done lazily. Note this means the collection isn't a copy like before as well.
Also to note if you do end up using any of the iterators from these bulk methods, you need to make sure to close them properly.
However as Radim pointed out Hot Rod does not have this support yet (embedded only), but should be coming to a new version soon.
[1] http://blog.infinispan.org/2014/11/why-doesnt-mapsize-return-size-of.html
[2] https://docs.jboss.org/infinispan/7.1/apidocs/org/infinispan/Cache.html#entrySet%28%29
Currently I am trying to use MAGMA to do matrix operations on the GPU; however, I have found little documentation for it. The only things I can refer to are its test programs and the online generated documentation (here), which is not convenient to use, and the user guide seems outdated.
If you look here, getri and potri are supported (matrix inversion from an LU factorization and from a Cholesky factorization, respectively).