Equivalent of `ed.copy` in `tensorflow_probability.edward2` - tensorflow

I am trying to migrate my code from Edward to tensorflow_probability.edward2. The issue is that, whenever I define a posterior distribution, I use a_post = ed.copy(a, {u: qu}, scope='a_post'), but the .copy API no longer seems to be available:
module 'tensorflow_probability.python.edward2' has no attribute 'copy'
What's the tensorflow_probability way of doing the same operation?

In Edward, copy relied on unsupported TensorFlow graph-walking and graph-copying. In Edward2, the approach is based on tracing, using 'interceptors'. Check out https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/deep_exponential_family.py for an example of variational inference using the 'tape' interceptor.
Update: this one might be a simpler and/or more familiar (LDA) example: https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/latent_dirichlet_allocation_edward2.py
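For reference, here is a minimal, self-contained sketch of the value-setting interceptor pattern used in those examples; the toy model, the variable name "u", and qu below are placeholders for your own generative model and variational distribution:

import tensorflow as tf
from tensorflow_probability import edward2 as ed

def model():
  # Toy generative model: latent `u`, and `a` which depends on it.
  u = ed.Normal(loc=0., scale=1., name="u")
  a = ed.Normal(loc=u, scale=1., name="a")
  return a

# Variational posterior for `u` (trainable location, purely for illustration).
qu = ed.Normal(loc=tf.Variable(0.), scale=1., name="qu")

def make_value_setter(**model_kwargs):
  """Interceptor that pins named random variables to the given values."""
  def set_values(f, *args, **kwargs):
    name = kwargs.get("name")
    if name in model_kwargs:
      kwargs["value"] = model_kwargs[name]
    return ed.interceptable(f)(*args, **kwargs)
  return set_values

# Rough equivalent of `a_post = ed.copy(a, {u: qu})`: re-trace the model with
# the random variable named "u" pinned to the variational distribution `qu`.
with ed.tape() as model_tape:
  with ed.interception(make_value_setter(u=qu)):
    a_post = model()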

Related

Federated learning with tensorflow

test_data = tff.python.simulation.datasets.ClientData.from_clients_and_tf_fn(
    client_ids=test_client_ids,
    serializable_dataset_fn=create_tf_dataset_for_client_fn
)
print(test_data)
After running the above code, I got the error below:
module 'tensorflow_federated.python.simulation.datasets' has no attribute 'ClientData'
How I can solve it?
It depends on which version of TensorFlow Federated you are using. Based on your code, you may be using tensorflow_federated==0.20.0. In that case, instead of tff.python.simulation.datasets.ClientData, use tff.simulation.datasets.ClientData as follows:
test_data = tff.simulation.datasets.ClientData.from_clients_and_tf_fn(
    client_ids=test_client_ids,
    serializable_dataset_fn=create_tf_dataset_for_client_fn
)
This should resolve the error.
If you have already solved it with a different approach, kindly share it.

How to use tensorflow grappler?

I'm trying to optimize my TensorFlow model's serving performance by applying Grappler; I'm working on a C++ TensorFlow Serving service.
As far as I know, I should do the Grappler work after LoadSavedModel, but I'm not sure exactly what to do: should I write the op optimizations myself, or just call an API?
I've searched Google for quite a while and didn't find any problem-solving posts or code snippets.
Could you give me any advice or a code example for this?
I found an answer by searching the TensorFlow code base:
// Describe what to optimize: the loaded graph and its fetch (output) nodes.
tensorflow::grappler::GrapplerItem item;
item.fetch = std::vector<std::string>{output_node_};
item.graph = bundle_.meta_graph_def.graph_def();

// Choose the Grappler passes to run.
tensorflow::RewriterConfig rw_cfg;
rw_cfg.add_optimizers("constfold");
rw_cfg.add_optimizers("layout");

// Run the meta-optimizer and write the result back into the loaded bundle.
auto new_graph_def = bundle_.meta_graph_def.mutable_graph_def();
tensorflow::grappler::MetaOptimizer meta_opt(nullptr, rw_cfg);
meta_opt.Optimize(nullptr, item, new_graph_def);
By adding the code above, my serialized GraphDef shrank from 20 MB to 6 MB, so the pruning clearly happened. However, session.Run() then took more time than before.
Update: the usage above is incorrect. With the default settings, Grappler already optimizes the graph when the saved model is loaded. You can learn the correct usage by reviewing the LoadSavedModel-related code.
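To illustrate the point that Grappler is driven by the session/rewriter config rather than invoked by hand, here is a rough Python analogue (not the C++ serving path; the export directory and tag below are placeholders):

import tensorflow as tf
from tensorflow.core.protobuf import rewriter_config_pb2

# Pick which Grappler passes run; most of them are already ON by default.
config = tf.compat.v1.ConfigProto()
rewrite_options = config.graph_options.rewrite_options
rewrite_options.constant_folding = rewriter_config_pb2.RewriterConfig.ON
rewrite_options.layout_optimizer = rewriter_config_pb2.RewriterConfig.ON

with tf.compat.v1.Session(config=config) as sess:
  # Grappler is applied to the imported graph as part of the session's own
  # graph optimization; no manual MetaOptimizer call is needed.
  tf.compat.v1.saved_model.loader.load(sess, ["serve"], "/path/to/saved_model")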

What is the difference between SessionBundlePredict and SavedModelPredict in tensorflow serving?

As I read in the source code, SessionBundlePredict uses collection_def in MetaGraphDef and SavedModelPredict uses signature_def in MetaGraphDef, but I have no idea what the difference between collection_def and signature_def is.
My understanding is that when I use Exporter.export I should use SessionBundlePredict, and when I use SavedModelBuilder I should use SavedModelPredict. Is that right?
That's essentially correct. Specifically, the signature information for session bundles is stored in a special collection in collection_def.
However, SessionBundle and Exporter.export have been unsupported since 2017-06-30. So please use SavedModel going forward.
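For context, here is a minimal sketch of exporting with SavedModelBuilder so that the prediction signature ends up in signature_def (the toy graph, tensor names, and export path are placeholders):

import tensorflow as tf

export_dir = "/tmp/my_saved_model"

with tf.compat.v1.Session(graph=tf.Graph()) as sess:
  # Toy graph: one input placeholder and one output tensor.
  x = tf.compat.v1.placeholder(tf.float32, shape=[None, 1], name="x")
  y = tf.identity(2.0 * x, name="y")

  # The signature_def is what SavedModelPredict looks up at serving time.
  signature = tf.compat.v1.saved_model.signature_def_utils.predict_signature_def(
      inputs={"x": x}, outputs={"y": y})

  builder = tf.compat.v1.saved_model.builder.SavedModelBuilder(export_dir)
  builder.add_meta_graph_and_variables(
      sess,
      tags=[tf.compat.v1.saved_model.tag_constants.SERVING],
      signature_def_map={"predict": signature})
  builder.save()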

How to create an op like conv_ops in tensorflow?

What I'm trying to do
I'm new to C++ and Bazel, and I want to make some changes to the convolution operation in TensorFlow, so I decided that my first step is to create an op just like it.
What I have done
I copied conv_ops.cc from //tensorflow/core/kernels and changed the name of the op registered in my new_conv_ops.cc. I also changed some function names in the file to avoid duplication. Here is my BUILD file.
As you can see, I copied the deps attribute of conv_ops from //tensorflow/core/kernels/BUILD. Then I used "bazel build -c opt //tensorflow/core/user_ops:new_conv_ops.so" to build the new op.
What my problem is
Then I got this error.
I tried deleting bounds_check and got the same error for the next dependency. Then I realized that there is some problem with including header files from //tensorflow/core/kernels in //tensorflow/core/user_ops. So how can I properly create a new op exactly like conv_ops?
Adding a custom operation to TensorFlow is covered in the tutorial here. You can also look at actual code examples.
To address your specific problem, note that the tf_custom_op_library macro adds most of the necessary dependencies to your target. You can simply write the following:
tf_custom_op_library(
    name = "new_conv_ops.so",
    srcs = ["new_conv_ops.cc"],
)
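Once the .so builds, you can load it from Python with tf.load_op_library (the path below matches the Bazel target above, and the wrapper name depends on whatever op name you registered in new_conv_ops.cc, so treat it as hypothetical):

import tensorflow as tf

# Load the compiled custom-op library produced by the Bazel target above.
new_conv_module = tf.load_op_library(
    "bazel-bin/tensorflow/core/user_ops/new_conv_ops.so")

# The generated Python wrapper is the snake_case form of the registered op
# name, e.g. REGISTER_OP("NewConv2D") would be exposed as
# new_conv_module.new_conv2d(...).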

Tensorflow: checkpoints simple load

I have a checkpoint file:
checkpoint-20001 checkpoint-20001.meta
How do I extract variables from these files without having to load the previous model, start a session, etc.?
I want to do something like
cp = load(checkpoint-20001)
cp.var_a
It's not documented, but you can inspect the contents of a checkpoint from Python using the class tf.train.NewCheckpointReader.
Here's a test case that uses it, so you can see how the class works.
https://github.com/tensorflow/tensorflow/blob/861644c0bcae5d56f7b3f439696eefa6df8580ec/tensorflow/python/training/saver_test.py#L1203
Since it isn't a documented class, its API may change in the future.
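For example (here "var_a" stands in for whatever name the variable was saved under):

import tensorflow as tf

# Point the reader at the checkpoint prefix (no .meta or .index suffix).
# In TF 2.x the same class lives at tf.compat.v1.train.NewCheckpointReader.
reader = tf.train.NewCheckpointReader("checkpoint-20001")

# List every saved variable and its shape.
for name, shape in reader.get_variable_to_shape_map().items():
  print(name, shape)

# Pull out a single variable as a numpy array.
var_a = reader.get_tensor("var_a")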