tf.contrib.learn yields error message "module has no attribute 'learn' " - tensorflow

Here is a snippet of my code taken directly from the tf.contrib.learn tutorial on tensorflow.org:
# Load Data Sets
training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
    filename=IRIS_TRAINING,
    target_dtype=np.int,
    features_dtype=np.float32)
Here is the error message:
AttributeError Traceback (most recent call last)
<ipython-input-14-7122d1244c55> in <module>()
11
12 # Load Data Sets
---> 13 training_set = tf.contrib.learn.datasets.base.load_csv_with_header(
14 filename = IRIS_TRAINING,
15 target_dtype = np.int,
AttributeError: 'module' object has no attribute 'learn'
Clearly the module should have the attribute learn, since tensorflow.org has an entire tutorial section on tf.contrib.learn. What am I doing wrong? All guidance is appreciated.
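One thing worth ruling out (a guess, since the question doesn't say which TensorFlow is installed): tf.contrib.learn shipped with the TensorFlow 1.x line; very early 0.x releases predate it, and 2.x removed tf.contrib entirely. A quick check:
import tensorflow as tf

print(tf.__version__)          # tf.contrib.learn belongs to the 1.x line
print(hasattr(tf, 'contrib'))  # False on TensorFlow 2.x, where contrib was removed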

Related

Failing SSD model on Google Colab custom data config error

I'm getting the following error on training SSD on Google colab on a custom dataset:
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/mmcv/utils/registry.py", line 72, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
TypeError: SingleStageDetector: __init__() got an unexpected keyword argument 'roi_head'
# The new config inherits a base config to highlight the necessary modification
_base_ = '../ssd/ssd300_coco.py'

# We also need to change the num_classes in head to match the dataset's annotation
model = dict(
    roi_head=dict(
        bbox_head=dict(num_classes=4)))
        # mask_head=dict(num_classes=4)))

# Modify dataset related settings
dataset_type = 'COCODataset'
classes = ('blade', 'knife', 'gun', 'shuriken')
data = dict(
    train=dict(
        img_prefix='../drive/MyDrive/PS1/PS-1/GDXray_TOD/train',
        classes=classes,
        ann_file='../drive/MyDrive/PS1/PS-1/GDXray_TOD/GDXray_train.json'),
    # val=dict(
    #     img_prefix='balloon/val/',
    #     classes=classes,
    #     ann_file='balloon/val/annotation_coco.json'),
    test=dict(
        img_prefix='../drive/MyDrive/PS1/PS-1/GDXray_TOD/test',
        classes=classes,
        ann_file='../drive/MyDrive/PS1/PS-1/GDXray_TOD/GDXray_test.json'))
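A likely diagnosis (inferred from the error rather than confirmed in the thread): SSD is a single-stage detector, and SingleStageDetector accepts no roi_head argument; that key belongs to two-stage configs such as Faster R-CNN. Assuming the stock mmdetection ssd300_coco.py layout, num_classes would be overridden on bbox_head directly:
# The new config inherits the SSD base config
_base_ = '../ssd/ssd300_coco.py'

# SSD is single-stage: there is no roi_head, so override bbox_head directly
model = dict(
    bbox_head=dict(num_classes=4))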

ValueError: could not broadcast input array from shape (16,18,3) into shape (16)

I was trying to instance-segment my RGB images using the pixellib library. However, I ran into this problem in the segmentImage function. From the stack trace, I traced the issue to pixellib's instance/__init__.py, and I have no idea why it tries to broadcast 3D arrays into a 1D shape. 20 images from another folder I tried earlier didn't encounter any of these issues.
P.S. This was my first question on Stack Overflow. If I missed any necessary details, please let me know.
import os
import cv2

# segment_image, test_path and out_seg_path are defined earlier in the notebook
for file in os.listdir(test_path):
    abs_test_path = os.path.join(test_path, file)
    if file.endswith('.jpg'):
        filename = os.path.splitext(file)[0]
        if os.path.isfile(abs_test_path):
            out_path = out_seg_path + filename
            segment_image.segmentImage(abs_test_path, show_bboxes=True,
                                       save_extracted_objects=True,
                                       extract_segmented_objects=True)
            im_0 = cv2.imread('segmented_object_1.jpg')
            cv2.imwrite(out_path + '_1.jpg', im_0)
            im_1 = cv2.imread('segmented_object_2.jpg')
            cv2.imwrite(out_path + '_2.jpg', im_1)
This is my error:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-835299843033> in <module>
15
16 segment_image.segmentImage(abs_test_path, show_bboxes=True,
---> 17 save_extracted_objects=True, extract_segmented_objects=True)
18
19 # segment_image.segmentImage('segmented_object_1.jpg', show_bboxes=True, output_image_name=out_path + '_1.jpg',
~\anaconda3\envs\mask_rcnn\lib\site-packages\pixellib\instance\__init__.py in segmentImage(self, image_path, show_bboxes, extract_segmented_objects, save_extracted_objects, mask_points_values, output_image_name, text_thickness, text_size, box_thickness, verbose)
762 cv2.imwrite(save_path, extracted_objects)
763
--> 764 extracted_objects = np.array(ex, dtype=object)
765
766 if mask_points_values == True:
ValueError: could not broadcast input array from shape (16,18,3) into shape (16)
There isn't enough information to help you.
I don't know what segment_image.segmentImage is, or what it expects. And I don't have your jpg file to test.
I have an idea of why the problem line raises this error, but since it occurs in an unknown function I can't suggest any fixes.
extracted_objects = np.array(ex, dtype=object)
ex probably is a list of arrays, arrays that match in some dimensions but not others. It's trying to make an object dtype array of those arrays, but due to the mix of shapes it raises an error.
A simple example that raises the same error:
In [151]: ex = [np.ones((3, 4, 3)), np.ones((3, 5, 3))]
In [152]: np.array(ex, object)
Traceback (most recent call last):
Input In [152] in <module>
np.array(ex, object)
ValueError: could not broadcast input array from shape (3,4,3) into shape (3,)
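One possible workaround (a sketch of the numpy technique, not a fix for pixellib itself): pre-allocate the object array and assign the mismatched arrays element by element, which sidesteps the broadcasting step entirely:
In [153]: out = np.empty(len(ex), dtype=object)
In [154]: for i, a in enumerate(ex):
     ...:     out[i] = a
     ...:
In [155]: out.shape
Out[155]: (2,)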

How do I define the split_train_test in Python?

I am working on Python code to generate network traffic using GANs, and I'm getting an error that split_train_test is not defined. I have imported train_test_split from sklearn.model_selection, but it doesn't seem to work. What am I not doing right?
This is the error message:
NameError Traceback (most recent call last)
<ipython-input-153-a2836ba27bc4> in <module>
9 cross_validation_flg = False
10 benign_file = '../data/attack_normal_data/benign_data.csv'
---> 11 benign_model, benign_test_loader = run_main(benign_file, num_features=41)
12 # Save the model checkpoint
13 torch.save(benign_model.state_dict(), 'benign_model_epoches%d.ckpt' % num_epochs)
<ipython-input-147-e59bfccfe2c7> in run_main(input_file, num_features)
5 dataset = TrafficDataset(input_file, transform=None, normalization_flg=True)
6
----> 7 train_sampler, test_sampler = split_train_test(dataset, split_percent=0.7, shuffle=True)
8 cntr = Counter(dataset.y)
9 print('dataset: ', len(dataset), ' y:', sorted(cntr.items()))
NameError: name 'split_train_test' is not defined
The function is called train_test_split, not split_train_test; importing it doesn't create a name spelled the other way around.
Reference the docs.
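A minimal sketch of the call (the X and y arrays here are placeholders, not the asker's traffic data):
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features
y = np.arange(10) % 2             # dummy binary labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, shuffle=True)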

AttributeError: 'list' object has no attribute 'sents'

How can I resolve this attribute error in spaCy?
from __future__ import unicode_literals, print_function
from spacy.lang.en import English
nlp = English()
sentencizer = nlp.create_pipe("sentencizer")
nlp.add_pipe(sentencizer)
assert len(list(doc.sents)) == 2
This is the traceback:
AttributeError Traceback (most recent call last)
<ipython-input-81-0459326012bf> in <module>
5 sentencizer = nlp.create_pipe("sentencizer")
6 nlp.add_pipe(sentencizer)
----> 7 assert len(list(doc.sents)) == 2
AttributeError: 'list' object has no attribute 'sents'
If your goal is to tokenize (split) sentences, below is a code sample using spaCy.
import spacy
nlp = spacy.load('en_core_web_lg')
raw_text = 'Hello, world. Here are two sentences.'
doc = nlp(raw_text)
sentences = [sent.text.strip() for sent in doc.sents]  # Span.string was removed in spaCy 3; .text works in both v2 and v3
assert len(sentences) == 2
print(sentences)
Output:
['Hello, world.', 'Here are two sentences.']
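If you want to keep the blank-English + sentencizer pipeline from the question, the missing piece is that no Doc is ever created before doc.sents is accessed. A sketch, assuming spaCy 2.x as in the question (spaCy 3 would use nlp.add_pipe("sentencizer") with a string instead):
from spacy.lang.en import English

nlp = English()
nlp.add_pipe(nlp.create_pipe("sentencizer"))
doc = nlp("Hello, world. Here are two sentences.")  # the question never builds a Doc
assert len(list(doc.sents)) == 2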

How to initialize tf.metrics members in TensorFlow?

The below is a part of my project code.
with tf.name_scope("test_accuracy"):
    test_mean_abs_err, test_mean_abs_err_op = tf.metrics.mean_absolute_error(
        labels=label_pl, predictions=test_eval_predict)
    test_accuracy, test_accuracy_op = tf.metrics.accuracy(
        labels=label_pl, predictions=test_eval_predict)
    test_precision, test_precision_op = tf.metrics.precision(
        labels=label_pl, predictions=test_eval_predict)
    test_recall, test_recall_op = tf.metrics.recall(
        labels=label_pl, predictions=test_eval_predict)
    test_f1_measure = 2 * test_precision * test_recall / (test_precision + test_recall)

    tf.summary.scalar('test_mean_abs_err', test_mean_abs_err)
    tf.summary.scalar('test_accuracy', test_accuracy)
    tf.summary.scalar('test_precision', test_precision)
    tf.summary.scalar('test_recall', test_recall)
    tf.summary.scalar('test_f1_measure', test_f1_measure)

    # validation metric init op
    validation_metrics_init_op = tf.variables_initializer(
        var_list=[test_mean_abs_err_op, test_accuracy_op,
                  test_precision_op, test_recall_op],
        name='validation_metrics_init')
However, when I run it, errors occur like this:
Traceback (most recent call last):
  File "./run_dnn.py", line 285, in <module>
    train(wnd_conf)
  File "./run_dnn.py", line 89, in train
    name='validation_metrics_init')
  File "/export/local/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 1176, in variables_initializer
    return control_flow_ops.group(*[v.initializer for v in var_list], name=name)
AttributeError: 'Tensor' object has no attribute 'initializer'
I realize that I cannot create a validation initializer like that: I was passing the update-op tensors, not the underlying variables. I want to re-calculate the corresponding metrics each time I save a new checkpoint model and run a new round of validation, so I have to re-initialize the metrics to zero.
But how do I reset all these metrics to zero? Many thanks for your help!
I solved the problem in the following way, after referring to the blog post "Avoiding headaches with tf.metrics".
# validation metrics
validation_metrics_var_scope = "validation_metrics"
test_mean_abs_err, test_mean_abs_err_op = tf.metrics.mean_absolute_error(
    labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_accuracy, test_accuracy_op = tf.metrics.accuracy(
    labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_precision, test_precision_op = tf.metrics.precision(
    labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_recall, test_recall_op = tf.metrics.recall(
    labels=label_pl, predictions=test_eval_predict, name=validation_metrics_var_scope)
test_f1_measure = 2 * test_precision * test_recall / (test_precision + test_recall)

tf.summary.scalar('test_mean_abs_err', test_mean_abs_err)
tf.summary.scalar('test_accuracy', test_accuracy)
tf.summary.scalar('test_precision', test_precision)
tf.summary.scalar('test_recall', test_recall)
tf.summary.scalar('test_f1_measure', test_f1_measure)

# validation metric init op: collect the local variables created under the
# metric name scope, and build an initializer that resets exactly those
validation_metrics_vars = tf.get_collection(
    tf.GraphKeys.LOCAL_VARIABLES, scope=validation_metrics_var_scope)
validation_metrics_init_op = tf.variables_initializer(
    var_list=validation_metrics_vars, name='validation_metrics_init')
A minimal working example that can be run line by line in a Python terminal:
import tensorflow as tf

s = tf.Session()
# tf.metrics.accuracy returns (value, update_op); both depend on local
# variables (total, count) that must be initialized before the first run
acc = tf.metrics.accuracy([0, 1, 0], [0.1, 0.9, 0.8])
ini = tf.variables_initializer(tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES))
s.run([ini])
s.run([acc])
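To reset the metric between validation rounds, it is enough to re-run the initializer, which zeroes the accumulator variables. A sketch continuing the session above:
s.run(acc[1])  # the update op accumulates into the total/count local variables
s.run(acc[0])  # read the current metric value
s.run(ini)     # re-initializing the local variables resets the metric to zero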