I am trying to solve an optimization problem with PyPSA in Google Colab using Gurobi as the solver, which normally works fine on my local computer (but takes too much time). When I try to run it in Google Colab, I always get the size-limited-license error, although I have a valid non-size-limited academic license.
Before trying to run this in Google Colab, I followed the steps in the Gurobi post "Google Colab: Installation and Licensing" and created an environment to solve my model using the license:
import gurobipy as gp

e = gp.Env(empty=True)
e.setParam('WLSACCESSID', 'your wls accessid (string)')
e.setParam('WLSSECRET', 'your wls secret (string)')
e.setParam('LICENSEID', <your license id (integer)>)
e.start()
The problem is that the model to be optimized is built internally by PyPSA, so I cannot create it myself with a line like:
model = gp.Model(env=e)
as indicated in the aforementioned post.
So what I need is to find out how to make the PyPSA model run in the correct environment.
I am using this dictionary to specify some parameters for the Gurobi Solver when running the optimization:
solver_options = {'Crossover': 0,
                  'Method': 2,
                  'BarHomogeneous': 1}
network.lopf(snapshots=network.snapshots, pyomo=False, solver_name='gurobi',
             solver_options=solver_options,
             extra_functionality=extra_functionalities)
How can I make the PyPSA optimization problem run in the correct environment?
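One idea I have sketched below, though I have not been able to verify it: gurobipy exposes WLSACCESSID, WLSSECRET and LICENSEID as ordinary parameters, so maybe they can simply be passed through solver_options. I do not know whether PyPSA applies them before the model is created, which is when the license check seems to happen:
# Untested sketch: pass the WLS credentials as ordinary Gurobi parameters.
# It is unclear (to me) whether PyPSA forwards these early enough for the
# license check, which happens when the model is created.
solver_options = {'WLSACCESSID': 'your wls accessid (string)',
                  'WLSSECRET': 'your wls secret (string)',
                  'LICENSEID': 0,  # placeholder for your license id (integer)
                  'Crossover': 0,
                  'Method': 2,
                  'BarHomogeneous': 1}

network.lopf(snapshots=network.snapshots, pyomo=False, solver_name='gurobi',
             solver_options=solver_options,
             extra_functionality=extra_functionalities)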
Thank you in advance for your help.
Regards,
Sebastian
Related
The error occurs when running GridSearchCV with CatBoostClassifier and bootstrap_type='Bayesian'. Any idea what that error means?
I had a similar issue while training my data with CatBoostClassifier: I had set the "subsample" parameter to 0.5. After removing this parameter, it worked for me.
Hope this is useful.
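For reference, a minimal sketch of what the working setup might look like (the dataset and parameter grid are made up for illustration); with bootstrap_type='Bayesian', the subsample parameter is simply not passed:
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, random_state=0)

# Note: no 'subsample' here -- it conflicts with bootstrap_type='Bayesian'.
model = CatBoostClassifier(bootstrap_type='Bayesian', iterations=100, verbose=0)

grid = GridSearchCV(model, param_grid={'depth': [4, 6]}, cv=3)
grid.fit(X, y)
print(grid.best_params_)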
I am doing research in deep learning using the TensorFlow 2 Object Detection API, and I am getting this error while running model training. I followed the Gilbert Tanner and Edje Electronics tutorials for the basic installation and environment settings, and I am using the newest GitHub commit of the TensorFlow Object Detection API. I converted all the .protos files into .py files but am still facing this error. I am attaching a screenshot of the error; please check it and let me know if you can help.
Thanks in advance.
I had the same problem, and also with target_assigner.proto and center_net.proto; you have to add all three (.\object_detection\protos\fpn.proto, .\object_detection\protos\target_assigner.proto and .\object_detection\protos\center_net.proto) to the protoc command. So the whole command should be:
protoc --python_out=. .\object_detection\protos\anchor_generator.proto .\object_detection\protos\argmax_matcher.proto .\object_detection\protos\bipartite_matcher.proto .\object_detection\protos\box_coder.proto .\object_detection\protos\box_predictor.proto .\object_detection\protos\eval.proto .\object_detection\protos\faster_rcnn.proto .\object_detection\protos\faster_rcnn_box_coder.proto .\object_detection\protos\grid_anchor_generator.proto .\object_detection\protos\hyperparams.proto .\object_detection\protos\image_resizer.proto .\object_detection\protos\input_reader.proto .\object_detection\protos\losses.proto .\object_detection\protos\matcher.proto .\object_detection\protos\mean_stddev_box_coder.proto .\object_detection\protos\model.proto .\object_detection\protos\optimizer.proto .\object_detection\protos\pipeline.proto .\object_detection\protos\post_processing.proto .\object_detection\protos\preprocessor.proto .\object_detection\protos\region_similarity_calculator.proto .\object_detection\protos\square_box_coder.proto .\object_detection\protos\ssd.proto .\object_detection\protos\fpn.proto .\object_detection\protos\target_assigner.proto .\object_detection\protos\center_net.proto .\object_detection\protos\ssd_anchor_generator.proto .\object_detection\protos\string_int_label_map.proto .\object_detection\protos\train.proto .\object_detection\protos\keypoint_box_coder.proto .\object_detection\protos\multiscale_anchor_generator.proto .\object_detection\protos\graph_rewriter.proto .\object_detection\protos\calibration.proto .\object_detection\protos\flexible_grid_anchor_generator.proto
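As a shortcut: on a Unix-style shell (including Git Bash on Windows), the shell expands globs, so instead of listing every file by hand you can usually compile everything in one go with
protoc --python_out=. object_detection/protos/*.proto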
I am trying to create a TensorBoard callback in Jupyter (Anaconda) the following way. The error occurs when write_images = True; otherwise, this code works fine. Any reason why this happens?
log_dir="logs\\fit\\" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir,
histogram_freq=1,
write_graph = True,
write_images = False
update_freq = 'epoch'
profile_batch = 3,
embeddings_freq=1
)
And I get
UnknownError: Failed to rename: logs\20200219-202538\train\checkpoint.tmp67d5ca45d1404cc584a86cf42d2761d3 to: logs\20200219-202538\train\checkpoint : Access is denied.
; Input/output error
It seems to be random which epoch it occurs on.
I had something similar; it seems the path where you want to save the checkpoint, which TensorBoard refers to, is not available or access is denied. Do you know Colab? I would suggest you copy your code and run your training up there (only if your dataset isn't too large). You can copy your dataset into your Google Drive and access it from Colab. If it works in Colab, then you probably don't have a problem with your code, but rather with your Anaconda permissions.
See: Mount Google Drive (Colab), Colab basics.
I know I couldn't solve your problem directly, but perhaps this helps, and it can boost your training speed with a juicy free cloud GPU.
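For reference, mounting your Drive inside a Colab notebook takes only two lines:
# Mount Google Drive in Colab; your files then appear under /content/drive.
from google.colab import drive
drive.mount('/content/drive')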
I had the same issue (I was running on a Windows machine). I manually gave full permissions to the folder (right-click on the folder, edit permissions, give full access to the 'Everyone' user) and everything went fine.
If you are working on a Unix system, I think you can do the same with chmod 777 <dir_name>.
P.S. Be careful with 'full permission' and 'chmod 777': afterwards, anyone with access to the system can view/edit the contents of the folder.
I'm trying to optimize my TensorFlow model-serving performance by applying Grappler; I'm working on a C++ tensorflow-serving service.
AFAIK, I should do the Grappler step after LoadSavedModel. But I'm not sure what exactly I should do: should I write the op optimization myself, or can I just call an API?
I've searched Google for quite a while and haven't found a problem-solving post or code snippet.
Could you give me any advice or a code example for this?
I found an answer by searching the TensorFlow code base:
// bundle_ is the SavedModelBundle returned by LoadSavedModel and
// output_node_ is the name of the fetch node (both are members of my class).
tensorflow::grappler::GrapplerItem item;
item.fetch = std::vector<std::string>{output_node_};
item.graph = bundle_.meta_graph_def.graph_def();

tensorflow::RewriterConfig rw_cfg;
rw_cfg.add_optimizers("constfold");
rw_cfg.add_optimizers("layout");

// Run the meta-optimizer and write the rewritten graph back into the bundle.
auto new_graph_def = bundle_.meta_graph_def.mutable_graph_def();
tensorflow::grappler::MetaOptimizer meta_opt(nullptr, rw_cfg);
meta_opt.Optimize(nullptr, item, new_graph_def);
By adding the code above, my serialized GraphDef file size was reduced from 20 MB to 6 MB, so it clearly did the pruning. But I found that session.Run() costs more time than before.
============
Update:
The usage above is incorrect. By default, the graph is already optimized with Grappler when saved models are loaded. You can learn the correct usage by reviewing the LoadSavedModel-related code.
I am trying to run tests by adding a version of tornado, downloaded from github.com, to sys.path.
[tests]
recipe = zc.recipe.testrunner
extra-paths = ${buildout:directory}/parts/tornado/
defaults = ['--auto-color', '--auto-progress', '-v']
But when I run bin/tests I get the following error:
ImportError: No module named tornado
Am I not understanding how to use extra-paths ?
Martin
Have you tried looking into the generated bin/tests script to see whether it contains your path? That will tell you definitively whether your buildout.cfg is correct or not. Maybe the problem is elsewhere, because your configuration looks OK.
If you regularly include various branches from git/mercurial or elsewhere in a buildout, you might be interested in mr.developer. mr.developer can download packages and add them to develop =, so you won't need to set extra-paths in every section.
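For example, a minimal mr.developer sketch might look like this (the repository URL is just an illustration; point it at whatever fork/branch you actually need):
[buildout]
extensions = mr.developer
auto-checkout = tornado

[sources]
tornado = git https://github.com/tornadoweb/tornado.git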