test_data = tff.python.simulation.datasets.ClientData.from_clients_and_tf_fn(
    client_ids=test_client_ids,
    serializable_dataset_fn=create_tf_dataset_for_client_fn
)
print(test_data)
After running the above code, I got the following error:
module 'tensorflow_federated.python.simulation.datasets' has no attribute 'ClientData'
How can I solve it?
It depends on which version of TensorFlow Federated you are using. Based on your code, you may be using tensorflow_federated 0.20.0. For that version, instead of tff.python.simulation.datasets.ClientData, use tff.simulation.datasets.ClientData as follows:
test_data = tff.simulation.datasets.ClientData.from_clients_and_tf_fn(
    client_ids=test_client_ids,
    serializable_dataset_fn=create_tf_dataset_for_client_fn
)
This should resolve the error.
If you've already solved it with another approach, kindly share it.
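If you're not sure which version you have, it's worth checking before picking the namespace; a quick check (the __version__ attribute should be present on recent releases, and pip show works regardless):
import tensorflow_federated as tff
print(tff.__version__)  # e.g. 0.20.0
Alternatively, run pip show tensorflow-federated from the command line.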
I am trying to solve an optimization problem with PyPSA in Google Colab, using Gurobi as the solver. This normally works fine on my local computer (but takes too much time). When I try to run it in Google Colab, I always get the size-limited license error, although I have a valid non-size-limited academic license.
Before trying to run this in Google Colab I followed the steps indicated in the post "Google Colab: Installation and Licensing" (Gurobi) and created an environment to solve my model using the license:
e = gp.Env(empty=True)
e.setParam('WLSACCESSID', 'your wls accessid (string)')
e.setParam('WLSSECRET', 'your wls secret (string)')
e.setParam('LICENSEID', <your license id (integer)>)
e.start()
The problem is that the model to be optimized is created inside PyPSA, so I cannot create it myself with a line like:
model = gp.Model(env=e)
as indicated in the aforementioned post.
So what I need is to find out how to make the PyPSA model run in the correct environment.
I am using this dictionary to specify some parameters for the Gurobi Solver when running the optimization:
solver_options = {'Crossover': 0,
                  'Method': 2,
                  'BarHomogeneous': 1}

network.lopf(snapshots=network.snapshots, pyomo=False, solver_name='gurobi',
             solver_options=solver_options,
             extra_functionality=extra_functionalities)
How can I make the PyPSA optimization problem run in the correct environment?
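One workaround I've been wondering about (I'm not sure it's the intended approach) is to skip the explicit gp.Env and instead write the WLS credentials into a Gurobi license file, so that any gurobipy model, including the one PyPSA builds internally, would pick up the academic license. The path below is just an assumption for Colab:
import os

# Hypothetical location for the WLS client license file in Colab.
license_path = '/content/gurobi.lic'
with open(license_path, 'w') as f:
    f.write('WLSACCESSID=your wls accessid\n')
    f.write('WLSSECRET=your wls secret\n')
    f.write('LICENSEID=your license id\n')

# gurobipy reads this variable when it creates its default environment.
os.environ['GRB_LICENSE_FILE'] = license_path
Would that work, or is there a cleaner way to hand the environment to PyPSA?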
Thank you in advance for your help.
Regards,
Sebastian
I am following the TensorFlow specialization on Coursera, where a certain piece of code works absolutely fine in Google Colab, whereas when I try to run it locally in PyCharm, it gives the following error:
Failed to find data adapter that can handle input
Any suggestions?
Can you share the code where the error occurred?
It should be available in the logs in your PyCharm console.
Looking at your comments, it seems that the model is expecting an array while you provided a list.
I was facing the same issue. It turns out the data was in the form of a list. I had to convert the fields into NumPy arrays, like this:
training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
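For context, once everything is a NumPy array it can go straight into fit(); the model name and epoch count below are just placeholders, not from your code:
# hypothetical Keras model, for illustration only
model.fit(training_padded, training_labels,
          epochs=30,
          validation_data=(testing_padded, testing_labels))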
That's it!
Try it out and let me know if it works.
I'm trying to optimize the serving performance of my TensorFlow model by applying Grappler; I'm working on a C++ tensorflow-serving service.
As far as I know, I should do the Grappler work after LoadSavedModel, but I'm not sure what exactly I should do: should I write the op optimizations myself, or can I just call an API?
I've searched Google for quite a while and haven't found a post or code snippet that solves this.
Could you give me any advice or code example for this?
I found an answer by searching the TensorFlow code base.
// Describe the optimization problem: the fetch node(s) plus the graph
// taken from the loaded SavedModel bundle.
tensorflow::grappler::GrapplerItem item;
item.fetch = std::vector<std::string>{output_node_};
item.graph = bundle_.meta_graph_def.graph_def();

// Enable only the constant-folding and layout optimizers.
tensorflow::RewriterConfig rw_cfg;
rw_cfg.add_optimizers("constfold");
rw_cfg.add_optimizers("layout");

// Run the meta-optimizer and write the optimized graph back into the bundle.
auto new_graph_def = bundle_.meta_graph_def.mutable_graph_def();
tensorflow::grappler::MetaOptimizer meta_opt(nullptr, rw_cfg);
meta_opt.Optimize(nullptr, item, new_graph_def);
By adding the code above, my serialized GraphDef file size went from 20 MB down to 6 MB, so it certainly did the pruning. But I found that session.Run() took more time than before.
Update: the usage above is incorrect. With the default settings, the graph is already optimized with Grappler when the saved model is loaded. You can learn the correct usage by reviewing the LoadSavedModel-related code.
I am trying to migrate my code from edward to tensorflow_probability.edward2. The issue is that whenever I define a posterior distribution, I use a_post = ed.copy(a, {u: qu}, scope='a_post'), but the .copy API no longer seems to be available:
module 'tensorflow_probability.python.edward2' has no attribute 'copy'
What's the tensorflow_probability way of doing the same operation?
In edward, copy depended on unsupported TF graph-walking and copying. In edward2, the approach is based on tracing, using 'interceptors'. Check out https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/deep_exponential_family.py for an example of VI using the 'tape' interceptor.
Update: this one might be a simpler and/or more familiar (LDA) example: https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/latent_dirichlet_allocation_edward2.py
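For a rough idea of what replaces ed.copy, here is a minimal sketch of the value-setting interceptor pattern used in those examples; model, u, and qu below are stand-ins for your own model function and variational posterior:
from tensorflow_probability import edward2 as ed

def make_value_setter(**model_kwargs):
    # Interceptor that pins the named random variables to the given values.
    def set_values(f, *args, **kwargs):
        name = kwargs.get("name")
        if name in model_kwargs:
            kwargs["value"] = model_kwargs[name]
        return ed.interceptable(f)(*args, **kwargs)
    return set_values

# Roughly the equivalent of the old ed.copy(a, {u: qu}): re-run the model
# with the latent variable "u" pinned to the variational posterior qu.
with ed.interception(make_value_setter(u=qu)):
    a_post = model()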
I am trying to set up a cluster for load balancing. I am using the Java Graph API. In the documentation there is this code:
final OrientGraphFactory factory = new OrientGraphFactory("remote:localhost/demo");
factory.setConnectionStrategy(OStorageRemote.CONNECTION_STRATEGY.ROUND_ROBIN_CONNECT);
OrientGraphNoTx graph = factory.getNoTx();
I copied and pasted the code exactly as shown, and I get this compilation error:
incompatible types: CONNECTION_STRATEGY cannot be converted to String
The only relevant import I have is:
import com.orientechnologies.orient.client.remote.OStorageRemote;
Can you please help?
Has anyone tried this?
Thanks.
You could convert the enum value to a string, since setConnectionStrategy() expects a String:
factory.setConnectionStrategy(OStorageRemote.CONNECTION_STRATEGY.ROUND_ROBIN_CONNECT.toString());
Hope it helps.