CatBoost Error: bayesian bootstrap doesn't support taken fraction option

The error occurs when running GridSearchCV with CatBoostClassifier and bootstrap_type='Bayesian'. Any idea what this error means?

I had a similar issue while training my data with CatBoostClassifier. I had set the "subsample" parameter to 0.5. After removing this parameter, it worked for me.
Hope this is useful.
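Below is a minimal sketch (toy data, small iteration count chosen only for speed) of why the error appears: the Bayesian bootstrap has no "taken fraction", so subsample cannot be combined with bootstrap_type='Bayesian'; either drop subsample or switch to a bootstrap type that supports it, such as Bernoulli:
from catboost import CatBoostClassifier

X = [[0, 1], [1, 0], [1, 1], [0, 0]]
y = [0, 1, 1, 0]

# This combination raises "bayesian bootstrap doesn't support taken fraction option" at fit time:
# CatBoostClassifier(bootstrap_type='Bayesian', subsample=0.5, iterations=10).fit(X, y)

# Option 1: keep the Bayesian bootstrap and drop 'subsample'.
CatBoostClassifier(bootstrap_type='Bayesian', iterations=10, verbose=False).fit(X, y)

# Option 2: keep 'subsample' and use a bootstrap type that supports it.
CatBoostClassifier(bootstrap_type='Bernoulli', subsample=0.5, iterations=10, verbose=False).fit(X, y)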

Related

Getting ImportError "cannot import name fpn_pb2" when trying to run training using the TensorFlow 2 Object Detection API

I am doing research in deep learning using the TensorFlow 2 Object Detection API. I am getting this error while running the model training. I followed the Gilbert Tanner and Edje Electronics tutorials for the basic installation and environment setup. I am using the latest GitHub commit of the TensorFlow Object Detection API. I converted all the .proto files into .py files but am still facing this error. I am attaching a screenshot of the error; please check it and let me know if you can help.
Thanks in advance.
I had the same problem, and also with target_assigner.proto and center_net.proto. You have to add all three files (.\object_detection\protos\fpn.proto, .\object_detection\protos\target_assigner.proto, and .\object_detection\protos\center_net.proto) to the protoc command. So the whole command should be:
protoc --python_out=. .\object_detection\protos\anchor_generator.proto .\object_detection\protos\argmax_matcher.proto .\object_detection\protos\bipartite_matcher.proto .\object_detection\protos\box_coder.proto .\object_detection\protos\box_predictor.proto .\object_detection\protos\eval.proto .\object_detection\protos\faster_rcnn.proto .\object_detection\protos\faster_rcnn_box_coder.proto .\object_detection\protos\grid_anchor_generator.proto .\object_detection\protos\hyperparams.proto .\object_detection\protos\image_resizer.proto .\object_detection\protos\input_reader.proto .\object_detection\protos\losses.proto .\object_detection\protos\matcher.proto .\object_detection\protos\mean_stddev_box_coder.proto .\object_detection\protos\model.proto .\object_detection\protos\optimizer.proto .\object_detection\protos\pipeline.proto .\object_detection\protos\post_processing.proto .\object_detection\protos\preprocessor.proto .\object_detection\protos\region_similarity_calculator.proto .\object_detection\protos\square_box_coder.proto .\object_detection\protos\ssd.proto .\object_detection\protos\fpn.proto .\object_detection\protos\target_assigner.proto .\object_detection\protos\center_net.proto .\object_detection\protos\ssd_anchor_generator.proto .\object_detection\protos\string_int_label_map.proto .\object_detection\protos\train.proto .\object_detection\protos\keypoint_box_coder.proto .\object_detection\protos\multiscale_anchor_generator.proto .\object_detection\protos\graph_rewriter.proto .\object_detection\protos\calibration.proto .\object_detection\protos\flexible_grid_anchor_generator.proto
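As a quick sanity check after re-running protoc (assuming the models/research directory is on your PYTHONPATH), the freshly generated *_pb2 modules should import cleanly; the three imports below correspond to the three .proto files added above:
# Quick import check for the newly generated protobuf modules.
from object_detection.protos import fpn_pb2, target_assigner_pb2, center_net_pb2

print(fpn_pb2.__name__, target_assigner_pb2.__name__, center_net_pb2.__name__)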

TensorFlow in PyCharm ValueError: Failed to find data adapter that can handle input

I am following the TensorFlow Specialization from Coursera, where a certain piece of code works absolutely fine in Google Colab, whereas when I try to run it locally in PyCharm, it gives the following error:
Failed to find data adapter that can handle input
Any suggestions?
Can you tell me the code where the error occurred?
It should be available in the logs in your PyCharm console.
Looking at your comments, it seems that the model is expecting an array while you provided a list.
I was facing the same issue. It turns out the data was in the form of a Python list. I had to convert the fields into NumPy arrays, like:
import numpy as np

training_padded = np.array(training_padded)
training_labels = np.array(training_labels)
testing_padded = np.array(testing_padded)
testing_labels = np.array(testing_labels)
That's it! Try it out and let me know if it works.
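For context, here is a minimal self-contained sketch (toy data and a hypothetical model, not the course code) of the same fix; on some TensorFlow versions, passing plain Python lists to model.fit() triggers the "Failed to find data adapter" error, while NumPy arrays are handled fine:
import numpy as np
import tensorflow as tf

# Toy padded sequences and labels built as plain Python lists.
training_padded = [[1, 2, 3, 0], [4, 5, 0, 0]]
training_labels = [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10, output_dim=4),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Passing the raw lists can fail with "Failed to find data adapter that can
# handle input" on some versions; converting to NumPy arrays avoids it.
model.fit(np.array(training_padded), np.array(training_labels), epochs=1, verbose=0)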

How to use TensorFlow Grappler?

I'm trying to optimize my TensorFlow model serving performance by applying Grappler; I'm working on a C++ tensorflow-serving service.
AFAIK, I should do the Grappler work after LoadSavedModel, but I'm not sure exactly what I should do: should I write the op optimizations myself, or can I just call an API?
I've searched Google for quite a while and haven't found a post or code snippet that solves this.
Could you give me any advice or a code example for this?
I found an answer by searching the TensorFlow code base:
// Build a GrapplerItem from the graph of the loaded SavedModelBundle.
tensorflow::grappler::GrapplerItem item;
item.fetch = std::vector<std::string>{output_node_};
item.graph = bundle_.meta_graph_def.graph_def();

// Choose which Grappler passes to run.
tensorflow::RewriterConfig rw_cfg;
rw_cfg.add_optimizers("constfold");
rw_cfg.add_optimizers("layout");

// Run the MetaOptimizer and write the optimized graph back into the bundle.
auto new_graph_def = bundle_.meta_graph_def.mutable_graph_def();
tensorflow::grappler::MetaOptimizer meta_opt(nullptr, rw_cfg);
meta_opt.Optimize(nullptr, item, new_graph_def);
By adding the lines above, my serialized GraphDef file size was reduced from 20 MB to 6 MB, so it certainly did the pruning. But I found that session.Run() took more time than before.
============
Update:
The usage above is incorrect. By default, Grappler already optimizes the graph when the saved model is loaded. You can learn the right usage by reviewing the LoadSavedModel-related code.
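As a side note, here is a small sketch of the Python-side switch for the same passes (TF 2.x API; this is separate from the C++ serving path above, and whether toggling these passes helps your serving latency is something you would have to measure):
import tensorflow as tf

# Toggle individual Grappler passes for graphs built in this process.
tf.config.optimizer.set_experimental_options({
    "constant_folding": True,   # the "constfold" pass used above
    "layout_optimizer": True,   # the "layout" pass used above
})
print(tf.config.optimizer.get_experimental_options())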

Textacy - Vectorizer Weighting Error

I've recently found textacy, and as I go through the API reference guide I'm running into an error with the Vectorizer. If I add any options from the API reference, I get a TypeError: unexpected keyword argument. I get this error for other options in addition to weighting.
I installed textacy using pip and I'm using Python 3 on Ubuntu. Any help is appreciated. Thanks!
vectorizer = textacy.vsm.Vectorizer(weighting='tfidf')
TypeError: __init__() got an unexpected keyword argument 'weighting'
Ran into the same problem. The API documentation does not reflect the current Vectorizer keyword arguments. The Vectorizer now provides different keyword arguments to allow more control over how TF*IDF is applied.
vectorizer = textacy.Vectorizer(tf_type='linear', apply_idf=True, idf_type='smooth')
tf_type='linear' applies standard term frequency (TF), and apply_idf=True applies the inverse document frequency (IDF). According to the comments in the repo, idf_type='smooth' adds one to each document frequency in order to avoid zero divisions.
To see more information about the options, check the comment at line 182 in the repository here: https://github.com/chartbeat-labs/textacy/blob/master/textacy/vsm/vectorizers.py
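A minimal usage sketch with the newer keyword arguments on pre-tokenized toy documents (the exact import path can vary between textacy versions, so treat this as an illustration rather than the definitive API):
import textacy.vsm

tokenized_docs = [
    ["the", "quick", "brown", "fox"],
    ["the", "lazy", "dog"],
]

vectorizer = textacy.vsm.Vectorizer(
    tf_type="linear",    # plain term frequency
    apply_idf=True,      # multiply by inverse document frequency
    idf_type="smooth",   # add 1 to document frequencies to avoid division by zero
)
doc_term_matrix = vectorizer.fit_transform(tokenized_docs)
print(doc_term_matrix.shape)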

TensorFlow tf.batch_matrix_diag (AttributeError: no attribute)

On one machine, I have TensorFlow version 0.11.0rc0, and on another machine TensorFlow 0.10.0rc0. On the latter, tf.batch_matrix_diag works fine, but on the former, I get the error AttributeError: 'module' object has no attribute 'batch_matrix_diag'
---EDIT---
The same error occurs for batch_cholesky as well.
Could someone please explain how to fix this?
I believe you need to use matrix_diag instead of batch_matrix_diag due to this change.
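For illustration, a short sketch of the renamed ops using today's tf.linalg names (in the 0.11-era API these were tf.matrix_diag and tf.cholesky, which absorbed the removed batch_* variants and accept batched inputs directly):
import tensorflow as tf

diagonals = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # a batch of two diagonals

# Formerly tf.batch_matrix_diag(diagonals)
batched_diag = tf.linalg.diag(diagonals)           # shape (2, 2, 2)

# Formerly tf.batch_cholesky(matrices)
matrices = tf.eye(3, batch_shape=[2])              # a batch of 3x3 identity matrices
chol = tf.linalg.cholesky(matrices)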