TFX: Using Transformed data in the Evaluator

TL;DR:
I'm facing an issue with the Evaluator component. All examples of using the Evaluator component use the label from the original ExampleGen data as the source of labels, but I want to give it labels that I compute during the pipeline.
Is there a way that I can one-hot encode the labels on the fly before giving them to the Evaluator?
The alternative would be to one-hot encode the data in the Transform component and then load it again with the ImportExampleGen component, but that is very expensive in time and memory.
Long version:
I am running a language modeling pipeline, where I have a text corpus as input and I want to train an LSTM-based LM.
My steps so far are:
Ingest the text data using ImportExampleGen and tokenize it using a vocab file:
output = example_gen_pb2.Output(
    split_config=example_gen_pb2.SplitConfig(
        splits=[
            example_gen_pb2.SplitConfig.Split(name="train", hash_buckets=45),
            example_gen_pb2.SplitConfig.Split(name="eval", hash_buckets=5),
        ]
    )
)
# Load the data from our prepared TFDS folder
example_gen = ImportExampleGen(input_base=str(data_root), output_config=output)
context.run(example_gen)
Transform the text data into two tensors of shape MAX_LEN (padded if needed): one for the model input and one for the output (shifted by one token).
This is how it looks after transformation:
{'label_sentence': array([17843, 1863, 30003, 32, 4, 30003, 30003, 30003, 30003,
30003, 12551, 30003, 22696, 30003, 30003, 30003, 30003, 30003,
30003, 210, 29697, 30003, 3813, 2262, 30003, 313, 370,
667, 27087, 186, 182, 30003, 370, 10500, 186, 182,
30003, 370, 8366, 186, 182, 30003, 9949, 1789, 30003,
30003, 158, 1863, 30003, 8, 5169, 3, 67, 4229,
3, 239, 3843, 30003, 5, 682, 1887, 28241, 30003,
16798, 30003, 116, 4, 207, 1320, 1529, 30003, 2,]),
'training_sentence': array([ 1, 17843, 1863, 30003, 32, 4, 30003, 30003, 30003,
30003, 30003, 12551, 30003, 22696, 30003, 30003, 30003, 30003,
30003, 30003, 210, 29697, 30003, 3813, 2262, 30003, 313,
370, 667, 27087, 186, 182, 30003, 370, 10500, 186,
182, 30003, 370, 8366, 186, 182, 30003, 9949, 1789,
30003, 30003, 158, 1863, 30003, 8, 5169, 3, 67,
4229, 3, 239, 3843, 30003, 5, 682, 1887, 28241,
30003, 16798, 30003, 116, 4, 207, 1320, 1529, 30003])}
During the training process I one-hot encode the labels on the fly (with a vocab size of 30K) before the model ingests them (this saves space and time compared to doing it in the Transform component).
Here's that part of the training code:
train_dataset = train_dataset.map(lambda x, y: (x, tf.one_hot(y, depth=NUM_CLASSES)))
eval_dataset = eval_dataset.map(lambda x, y: (x, tf.one_hot(y, depth=NUM_CLASSES)))

mirrored_strategy = tf.distribute.MirroredStrategy()
with mirrored_strategy.scope():
    model = get_model()

tensorboard_callback = keras.callbacks.TensorBoard(
    log_dir=fn_args.model_run_dir, update_freq="batch"
)

model.fit(
    train_dataset,
    steps_per_epoch=fn_args.train_steps,
    validation_data=eval_dataset,
    validation_steps=fn_args.eval_steps,
    callbacks=[tensorboard_callback],
)
Evaluation is where I'm facing an issue. All examples of using the Evaluator component use the label from the original ExampleGen data as the source of labels.
Is there a way that I can one-hot encode the labels on the fly before giving them to the Evaluator?
The alternative would be to one-hot encode the data in the Transform component and then load it again with the ImportExampleGen component, but that is very expensive in time and memory.
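One direction that might work (a sketch only, assuming a TFMA release that supports preprocessing_function_names in its ModelSpec): have the Trainer export an extra signature that applies the Transform graph (called transform_features here, a name you would have to export yourself), and tell the Evaluator to run it before computing metrics, so label_key can point at the transformed label rather than at a raw ExampleGen feature:
import tensorflow_model_analysis as tfma
from tfx.components import Evaluator

eval_config = tfma.EvalConfig(
    model_specs=[
        tfma.ModelSpec(
            signature_name='serving_default',
            # Hypothetical signature exported by the Trainer that applies the
            # Transform graph to raw examples before evaluation.
            preprocessing_function_names=['transform_features'],
            # Label produced during the pipeline, not by ExampleGen.
            label_key='label_sentence',
        )
    ],
    slicing_specs=[tfma.SlicingSpec()],
)

evaluator = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],
    eval_config=eval_config,
)
Under this sketch the one-hot encoding would live inside the model's evaluation path (or inside the exported signature) rather than in the stored examples, which avoids materializing 30K-wide vectors.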

Related

Write to binary file with pickle in given format

I save my data to a dataValues.bin file with this command:
xyStep = 10
zStep = 5
xRange = 1000
yRange = 1000
zRange = 50
zBase = 700
pickle.dump(dataValues, open('dataValues.bin', 'wb'))
After that, I used data = pickle.load(open('dataValues.bin', 'rb')) and printed the data with print(data).
The contents of dataValues.bin look like this:
[[[25103. 22739. 25191. 25313. 22338. 22040. 24238. 25049. 25165. 0.]
[24551. 25130. 22559. 20837. 20452. 23132. 23490. 25049. 25129. 0.]
[25211. 25211. 25373. 24060. 22675. 25105. 23020. 22145. 20837. 0.]
[25009. 21020. 24766. 20574. 24118. 22930. 20332. 21789. 20655. 0.]
[25070. 24523. 22032. 21060. 24482. 24682. 21971. 23531. 21445. 0.]
[22194. 23308. 24746. 21404. 25292. 21080. 23915. 25252. 23500. 0.]
[21404. 21125. 25130. 22609. 25233. 23490. 25090. 25427. 21141. 0.]
[20856. 22546. 24077. 20509. 23378. 23652. 25252. 22882. 25313. 0.]
[24893. 21263. 22690. 22761. 23450. 25110. 24364. 20245. 25313. 0.]
[25070. 21642. 21465. 21954. 21080. 20535. 21716. 21384. 24889. 0.]]
[[25373. 20734. 23006. 25171. 20979. 21695. 24939. 25211. 23024. 0.]
[23060. 25394. 25191. 25171. 22335. 22877. 22396. 25110. 20756. 0.]
[25191. 21020. 20468. 25044. 24563. 25151. 20696. 24566. 21809. 0.]
[23348. 23753. 20520. 25066. 25353. 25151. 23531. 20756. 25151. 0.]
[20613. 22920. 21668. 24390. 24514. 23142. 25211. 22901. 23743. 0.]
[23682. 23348. 22214. 22476. 25212. 20513. 23520. 25110. 22920. 0.]
[23341. 21141. 24057. 21402. 24019. 20798. 20716. 25251. 21303. 0.]
[22740. 23612. 22923. 20777. 20472. 20898. 21566. 21116. 25252. 0.]
[24633. 21668. 22274. 21263. 21737. 21749. 23672. 20372. 25142. 0.]
[25211. 20540. 22550. 21222. 22784. 25049. 21627. 25272. 24327. 0.]]
[[21999. 23875. 25313. 21627. 25009. 24604. 25110. 25009. 24017. 0.]
[21242. 23207. 25130. 24615. 22310. 25191. 20655. 24227. 22563. 0.]
[21121. 22861. 25171. 20473. 21957. 25394. 25171. 22133. 25211. 0.]
[20696. 22603. 25373. 20700. 20916. 25393. 25203. 24300. 20736. 0.]
[20464. 23037. 24928. 21668. 23004. 21997. 23026. 25171. 22608. 0.]
[20372. 23105. 23046. 21931. 25434. 20696. 20320. 20999. 25414. 0.]
[20686. 22984. 24792. 23733. 25151. 22862. 23227. 22352. 23166. 0.]
[21574. 22857. 25239. 24381. 21384. 25171. 25313. 24989. 20655. 0.]
[23187. 22741. 24804. 25049. 24486. 25353. 25191. 25146. 23009. 0.]
[23531. 22625. 24256. 25353. 22197. 23510. 24639. 24893. 25373. 0.]]]
How can I save this data in a format that makes it look like this after printing:
{'xyStep': 25, 'xRange': 3000, 'yRange': 3000, 'zRange': 300, 'zBase': -7500, 'data': [[array([ 464, 403, 406, 421, 488, 464, 485, 507, 496,
451, 445, 450, 463, 414, 401, 473, 446, 420,
427, 479, 490, 486, 482, 490, 446, 412, 369,
432, 424, 431, 472, 478, 451, 466, 462, 460,
449, 393, 377, 361, 522, 1160, 1271, 8891, 9428,
4510, 5265, 4960, 4381, 4219, 4318, 3870, 3070, 3242,
1906, 990, 894, 890, 857, 725, 521, 410, 252,
193, 161, 170, 169, 168, 153, 167, 138, 106,
133, 118, 103, 137, 256, 436, 474, 477, 463],
dtype=int32), array([1062, 1045, 1012, 1006, 1063, 1049, 1026, 1027, 1112, 1013, 992,
1007, 1026, 949, 988, 1052, 1083, 1017, 1037, 1044, 1030, 921,
1010, 984, 930, 917, 1047, 1012, 976, 970, 1034, 1013, 993,
1001, 1044, 971, 919, 978, 925, 962, 998, 1045, 955, 981,
624, 577, 553, 587, 536, 552, 577, 654, 615, 607, 623,
604, 545, 572, 539, 512, 510, 561, 542, 539, 560, 568,
594, 632, 592, 548, 544, 508, 501, 499, 533, 553, 533,
548, 591, 607, 569, 541, 568, 514, 477, 465, 535, 544,
499, 495, 540, 545, 544, 496, 464, 463], dtype=int32), array([1062, 1036, 1042, 1092, 995, 981, 1011, 1115, 1022, 992, 1022,
1115, 1102, 1046, 1069, 1102, 987, 960, 1011, 975, 970, 998,
1093, 1008, 1007, 1051, 1045, 996, 989, 1063, 1055, 951, 999,
961, 1039, 1050, 1018, 1030, 1062, 1018, 971, 964, 1025, 1027,
578, 510, 491, 510, 553, 524, 535, 566, 550, 544, 546,
567, 565, 538, 536, 559, 476, 473, 498, 551, 524, 546,
580, 562, 512, 507, 511, 531, 477, 465, 528, 525, 466,
547, 509, 524, 528, 535, 505, 519, 537, 530, 441, 461,
514, 507], dtype=int32)]]}
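A minimal sketch of one way to get there, assuming the variables from the question: put the metadata and the array into a single dict and pickle that dict, since pickle.load returns whatever single object was dumped:
import pickle

# Bundle the metadata and the data array into one object.
payload = {
    'xyStep': xyStep,
    'zStep': zStep,
    'xRange': xRange,
    'yRange': yRange,
    'zRange': zRange,
    'zBase': zBase,
    'data': dataValues,
}
with open('dataValues.bin', 'wb') as f:
    pickle.dump(payload, f)

with open('dataValues.bin', 'rb') as f:
    data = pickle.load(f)
print(data)  # prints a dict shaped like the desired output above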

TFBertForSequenceClassification for multi label classification

I am trying to fine-tune a BERT model for multi-label classification.
Here is what my data looks like; I have put the entire code on this colab notebook.
({'input_ids': <tf.Tensor: shape=(128,), dtype=int32, numpy=
array([ 2, 8318, 1379, 7892, 2791, 20630, 1, 4, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0], dtype=int32)>,
'attention_mask': <tf.Tensor: shape=(128,), dtype=int32, numpy=
array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)>},
<tf.Tensor: shape=(7,), dtype=int64, numpy=array([1, 0, 0, 0, 0, 0, 0])>)
The first element contains the input ids, the second element corresponds to the attention masks, and the third one holds the labels; here I have 7 labels.
First effort:
MODEL_NAME_OR_PATH = 'HooshvareLab/bert-fa-base-uncased'
NUM_LABELS = 7

from transformers import TFBertForSequenceClassification, BertConfig

model = TFBertForSequenceClassification.from_pretrained(
    MODEL_NAME_OR_PATH,
    config=BertConfig.from_pretrained(MODEL_NAME_OR_PATH, num_labels=NUM_LABELS,
                                      problem_type="multi_label_classification")
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam', loss=loss, metrics=['accuracy'])
history = model.fit(train_dataset, epochs=1, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7)
which ends up with the following error
InvalidArgumentError Traceback (most recent call last)
<ipython-input-48-4408a1f17fbe> in <module>()
10 loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
11 model.compile(optimizer='adam', loss=loss, metrics=['accuracy'])
---> 12 history = model.fit(train_dataset, epochs=1, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7)
13
14
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
53 ctx.ensure_initialized()
54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 55 inputs, attrs, num_outputs)
56 except core._NotOkStatusException as e:
57 if name is not None:
InvalidArgumentError: Graph execution error:
Detected at node 'Equal' defined at (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance
app.start()
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start
self.io_loop.start()
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
self._run_once()
File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
handle._run()
File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events
handler_func(fileobj, events)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 577, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 606, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 556, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-48-4408a1f17fbe>", line 12, in <module>
history = model.fit(train_dataset, epochs=1, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1384, in fit
tmp_logs = self.train_function(iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 1156, in train_step
self.compiled_metrics.update_state(y, y_pred, sample_weight)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 459, in update_state
metric_obj.update_state(y_t, y_p, sample_weight=mask)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/metrics_utils.py", line 70, in decorated
update_op = update_state_fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/metrics.py", line 178, in update_state_fn
return ag_update_state(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/metrics.py", line 729, in update_state
matches = ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/metrics.py", line 4086, in sparse_categorical_accuracy
return tf.cast(tf.equal(y_true, y_pred), backend.floatx())
Node: 'Equal'
required broadcastable shapes
[[{{node Equal}}]] [Op:__inference_train_function_187978]
Second effort, inspired by this piece of code:
from transformers import TFBertPreTrainedModel
from transformers import TFBertMainLayer

class TFBertForMultilabelClassification(TFBertPreTrainedModel):
    def __init__(self, config, *inputs, **kwargs):
        super(TFBertForMultilabelClassification, self).__init__(config, *inputs, **kwargs)
        self.num_labels = config.num_labels
        self.bert = TFBertMainLayer(config, name='bert')
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
        self.classifier = tf.keras.layers.Dense(config.num_labels,
                                                kernel_initializer='random_normal',  # get_initializer(config.initializer_range)
                                                name='classifier',
                                                activation='sigmoid')

    def call(self, inputs, **kwargs):
        outputs = self.bert(inputs, **kwargs)
        pooled_output = outputs[1]
        pooled_output = self.dropout(pooled_output, training=kwargs.get('training', False))
        logits = self.classifier(pooled_output)
        outputs = (logits,) + outputs[2:]  # add hidden states and attentions if they are here
        return outputs  # logits, (hidden_states), (attentions)

MODEL_NAME_OR_PATH = 'HooshvareLab/bert-fa-base-uncased'
NUM_LABELS = len(y_train[0])

model = TFBertForMultilabelClassification.from_pretrained(MODEL_NAME_OR_PATH, num_labels=NUM_LABELS)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, epsilon=1e-08, clipnorm=1)
# the labels are multi-hot vectors, so we use binary cross-entropy with categorical accuracy
loss = tf.keras.losses.BinaryCrossentropy()
metric = tf.keras.metrics.CategoricalAccuracy()
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
history = model.fit(train_dataset, epochs=1, validation_data=valid_dataset)
returns the following error
InvalidArgumentError Traceback (most recent call last)
<ipython-input-49-8aa1173bef76> in <module>()
4 metric = tf.keras.metrics.CategoricalAccuracy()
5 model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
----> 6 history = model.fit(train_dataset, epochs=1, validation_data=valid_dataset)
InvalidArgumentError: Graph execution error:
Detected at node 'Equal' defined at (most recent call last):
[... stack identical to the traceback above ...]
File "/usr/local/lib/python3.7/dist-packages/keras/metrics.py", line 4086, in sparse_categorical_accuracy
return tf.cast(tf.equal(y_true, y_pred), backend.floatx())
Node: 'Equal'
required broadcastable shapes
[[{{node Equal}}]] [Op:__inference_train_function_214932]
I suspect these failures stem from major changes in both TF2 and the (TF-based) Hugging Face transformers library, which is why older examples no longer work as-is.
UPDATE
Here is the entire code with a dummy dataset; the whole thing is also available on this colab notebook
Load the libraries:
import os
import pandas as pd
import numpy as np
from transformers import TFBertPreTrainedModel
from transformers import TFBertMainLayer
from keras.preprocessing.sequence import pad_sequences
from tqdm import tqdm
from transformers import BertTokenizer
import tensorflow as tf
Make a dummy dataset:
x_train = ['هان از وقتی که زفتم مدرسه',
'معاویه برادر شمر',
'وقتی که از پنجره سرشرو میاره بیرون دالی میکنه',
'هر دو سحرند این کجا و آن کجا']
y_train = [[1, 0, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0]]
x_test, x_valid = x_train, x_train
y_test, y_valid = y_train, y_train
Add the configs:
# general config
MAX_LEN = 128
batch_size = 32
TRAIN_BATCH_SIZE = batch_size
VALID_BATCH_SIZE = batch_size
TEST_BATCH_SIZE = batch_size
EPOCHS = 3
EEVERY_EPOCH = 1000
LEARNING_RATE = 2e-5
CLIP = 0.0
Make the data Hugging Face friendly:
MODEL_NAME_OR_PATH = 'HooshvareLab/bert-fa-base-uncased'
tokenizer = BertTokenizer.from_pretrained(MODEL_NAME_OR_PATH)
MAX_LEN = 128
def tokenize_sentences(sentences, tokenizer, max_seq_len=128):
    tokenized_sentences = []
    for sentence in tqdm(sentences):
        tokenized_sentence = tokenizer.encode(
            sentence,                 # Sentence to encode.
            add_special_tokens=True,  # Add '[CLS]' and '[SEP]'
            max_length=max_seq_len,   # Truncate all sentences.
        )
        tokenized_sentences.append(tokenized_sentence)
    return tokenized_sentences

def create_attention_masks(tokenized_and_padded_sentences):
    attention_masks = []
    for sentence in tokenized_and_padded_sentences:
        att_mask = [int(token_id > 0) for token_id in sentence]
        attention_masks.append(att_mask)
    return np.asarray(attention_masks)

train_ids = tokenize_sentences(x_train, tokenizer, max_seq_len=128)
train_ids = pad_sequences(train_ids, maxlen=MAX_LEN, dtype="long", value=0, truncating="post", padding="post")
train_masks = create_attention_masks(train_ids)

valid_ids = tokenize_sentences(x_valid, tokenizer, max_seq_len=128)
valid_ids = pad_sequences(valid_ids, maxlen=MAX_LEN, dtype="long", value=0, truncating="post", padding="post")
valid_masks = create_attention_masks(valid_ids)

test_ids = tokenize_sentences(x_test, tokenizer, max_seq_len=128)
test_ids = pad_sequences(test_ids, maxlen=MAX_LEN, dtype="long", value=0, truncating="post", padding="post")
test_masks = create_attention_masks(test_ids)
Create the datasets:
def create_dataset(ids, masks, labels):
    def gen():
        for i in range(len(ids)):
            yield (
                {
                    "input_ids": ids[i],
                    "attention_mask": masks[i]
                },
                labels[i],
            )

    return tf.data.Dataset.from_generator(
        gen,
        ({"input_ids": tf.int32, "attention_mask": tf.int32}, tf.int64),
        (
            {
                "input_ids": tf.TensorShape([None]),
                "attention_mask": tf.TensorShape([None])
            },
            tf.TensorShape([None]),
        ),
    )
train_dataset = create_dataset(train_ids, train_masks, y_train)
valid_dataset = create_dataset(valid_ids, valid_masks, y_valid)
test_dataset = create_dataset(test_ids, test_masks, y_test)
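As an aside, since everything is already padded to MAX_LEN, the same datasets can be built without a generator; a sketch equivalent to create_dataset above, under the same assumptions:
train_dataset = tf.data.Dataset.from_tensor_slices(
    (
        {"input_ids": train_ids.astype(np.int32), "attention_mask": train_masks.astype(np.int32)},
        np.asarray(y_train, dtype=np.int64),
    )
)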
This is what the data looks like:
for item in train_dataset.take(1):
    print(item)
Approach 1
class TFBertForMultilabelClassification(TFBertPreTrainedModel):
    def __init__(self, config, *inputs, **kwargs):
        super(TFBertForMultilabelClassification, self).__init__(config, *inputs, **kwargs)
        self.num_labels = config.num_labels
        self.bert = TFBertMainLayer(config, name='bert')
        self.dropout = tf.keras.layers.Dropout(config.hidden_dropout_prob)
        self.classifier = tf.keras.layers.Dense(config.num_labels,
                                                kernel_initializer='random_normal',  # get_initializer(config.initializer_range)
                                                name='classifier',
                                                activation='sigmoid')

    def call(self, inputs, **kwargs):
        outputs = self.bert(inputs, **kwargs)
        pooled_output = outputs[1]
        pooled_output = self.dropout(pooled_output, training=kwargs.get('training', False))
        logits = self.classifier(pooled_output)
        outputs = (logits,) + outputs[2:]  # add hidden states and attentions if they are here
        return outputs  # logits, (hidden_states), (attentions)

NUM_LABELS = len(y_train[0])

model = TFBertForMultilabelClassification.from_pretrained(MODEL_NAME_OR_PATH, num_labels=NUM_LABELS)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, epsilon=1e-08, clipnorm=1)
# the labels are multi-hot vectors, so we use binary cross-entropy with categorical accuracy
loss = tf.keras.losses.BinaryCrossentropy()
metric = tf.keras.metrics.CategoricalAccuracy()
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
history = model.fit(train_dataset, epochs=1, validation_data=valid_dataset)
with an error
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-36-8aa1173bef76> in <module>()
4 metric = tf.keras.metrics.CategoricalAccuracy()
5 model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
----> 6 history = model.fit(train_dataset, epochs=1, validation_data=valid_dataset)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
AttributeError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 1145, in train_step
if list(y_pred.keys())[0] == "loss":
AttributeError: 'tuple' object has no attribute 'keys'
Approach 2:
MODEL_NAME_OR_PATH = 'HooshvareLab/bert-fa-base-uncased'
NUM_LABELS = 7

from transformers import TFBertForSequenceClassification, BertConfig

model = TFBertForSequenceClassification.from_pretrained(
    MODEL_NAME_OR_PATH,
    config=BertConfig.from_pretrained(MODEL_NAME_OR_PATH, num_labels=NUM_LABELS,
                                      problem_type="multi_label_classification")
)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam', loss=loss, metrics=['accuracy'])
history = model.fit(train_dataset, epochs=1, steps_per_epoch=115, validation_data=valid_dataset, validation_steps=7)
and the error
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 1151, in train_step
loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 1863, in sparse_categorical_crossentropy
y_true, y_pred, from_logits=from_logits, axis=axis)
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 5203, in sparse_categorical_crossentropy
labels=target, logits=output)
Node: 'sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits'
logits and labels must have the same first dimension, got logits shape [128,7] and labels shape [7]
[[{{node sparse_categorical_crossentropy/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]] [Op:__inference_train_function_67923]
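For reference, a minimal sketch that would address both error messages, assuming the dummy data and constants defined above (not tested against every transformers release): the [128, 7] vs [7] shape mismatch suggests the model received unbatched examples, so the datasets need batching, and multi-hot labels call for an element-wise sigmoid loss rather than (sparse) softmax cross-entropy:
# Batch the datasets and cast the multi-hot labels to float for the loss.
train_dataset = train_dataset.map(lambda x, y: (x, tf.cast(y, tf.float32))).batch(TRAIN_BATCH_SIZE)
valid_dataset = valid_dataset.map(lambda x, y: (x, tf.cast(y, tf.float32))).batch(VALID_BATCH_SIZE)

model = TFBertForSequenceClassification.from_pretrained(
    MODEL_NAME_OR_PATH,
    config=BertConfig.from_pretrained(MODEL_NAME_OR_PATH, num_labels=NUM_LABELS,
                                      problem_type="multi_label_classification")
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),  # sigmoid per label
    metrics=[tf.keras.metrics.BinaryAccuracy()],
)
history = model.fit(train_dataset, epochs=1, validation_data=valid_dataset)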

What is the difference between `scipy.stats.expon.rvs()` and `numpy.random.exponential()`?

Simulating exponential random variables with the same mean interval time using different methods gives rise to different x-axis scales.
How often do we get no-hitters?
The number of games played between each no-hitter in the modern era (1901-2015) of Major League Baseball is stored in the array nohitter_times.
If you assume that no-hitters are described as a Poisson process, then the time between no-hitters is Exponentially distributed. As you have seen, the Exponential distribution has a single parameter, which we will call $τ$, the typical interval time.
The value of the parameter $τ$ that makes the exponential distribution best match the data is the mean interval time (where time is in units of number of games) between no-hitters.
# Here you go with the data
nohitter_times = np.array([ 843, 1613, 1101, 215, 684, 814, 278, 324, 161, 219, 545,
715, 966, 624, 29, 450, 107, 20, 91, 1325, 124, 1468,
104, 1309, 429, 62, 1878, 1104, 123, 251, 93, 188, 983,
166, 96, 702, 23, 524, 26, 299, 59, 39, 12, 2,
308, 1114, 813, 887, 645, 2088, 42, 2090, 11, 886, 1665,
1084, 2900, 2432, 750, 4021, 1070, 1765, 1322, 26, 548, 1525,
77, 2181, 2752, 127, 2147, 211, 41, 1575, 151, 479, 697,
557, 2267, 542, 392, 73, 603, 233, 255, 528, 397, 1529,
1023, 1194, 462, 583, 37, 943, 996, 480, 1497, 717, 224,
219, 1531, 498, 44, 288, 267, 600, 52, 269, 1086, 386,
176, 2199, 216, 54, 675, 1243, 463, 650, 171, 327, 110,
774, 509, 8, 197, 136, 12, 1124, 64, 380, 811, 232,
192, 731, 715, 226, 605, 539, 1491, 323, 240, 179, 702,
156, 82, 1397, 354, 778, 603, 1001, 385, 986, 203, 149,
576, 445, 180, 1403, 252, 675, 1351, 2983, 1568, 45, 899,
3260, 1025, 31, 100, 2055, 4043, 79, 238, 3931, 2351, 595,
110, 215, 0, 563, 206, 660, 242, 577, 179, 157, 192,
192, 1848, 792, 1693, 55, 388, 225, 1134, 1172, 1555, 31,
1582, 1044, 378, 1687, 2915, 280, 765, 2819, 511, 1521, 745,
2491, 580, 2072, 6450, 578, 745, 1075, 1103, 1549, 1520, 138,
1202, 296, 277, 351, 391, 950, 459, 62, 1056, 1128, 139,
420, 87, 71, 814, 603, 1349, 162, 1027, 783, 326, 101,
876, 381, 905, 156, 419, 239, 119, 129, 467])
First Approach:
import scipy.stats as stats
# computing the distribution parameter
avg_interval = np.mean(nohitter_times)
# Set the seed
np.random.seed(42)
# Simulating the distribution
rvs = stats.expon.rvs(avg_interval, size=100000)
#Plotting the distribution
#sns.histplot(rvs, kde=True, bins=100, color='skyblue', stat='density');
_ = plt.hist(rvs, bins=50, density=True, histtype="step")
_ = plt.xlabel('Games between no-hitters')
_ = plt.ylabel('PDF');
Second Approach:
# Seed random number generator
np.random.seed(42)
# Compute mean no-hitter time: tau
tau = np.mean(nohitter_times)
# Draw out of an exponential distribution with parameter tau: inter_nohitter_time
inter_nohitter_time = np.random.exponential(tau, 100000)
# Plot the PDF and label axes
_ = plt.hist(inter_nohitter_time, bins=50, density=True, histtype="step")
_ = plt.xlabel('Games between no-hitters')
_ = plt.ylabel('PDF')
As you can see, the two plots are totally different in terms of the x-axis scale, and I don't know why.
I have just found out that I should have specified the scale named argument of the expon.rvs function:
# computing the distribution parameter
avg_interval = np.mean(nohitter_times)
# Set the seed
np.random.seed(42)
# Simulating the distribution
rvs = stats.expon.rvs(scale=avg_interval, size=100000)
#Plotting the distribution
#sns.histplot(rvs, kde=True, bins=100, color='skyblue', stat='density');
_ = plt.hist(rvs, bins=50, density=True, histtype="step")
_ = plt.xlabel('Games between no-hitters')
_ = plt.ylabel('PDF');
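The reason is that the first positional argument of scipy.stats.expon.rvs is loc, not scale: stats.expon.rvs(avg_interval, size=100000) draws from a unit-scale exponential shifted right by avg_interval, while np.random.exponential(tau, 100000) uses tau as the scale. A quick check of this reading (reusing avg_interval from above):
import numpy as np
import scipy.stats as stats

np.random.seed(42)
shifted = stats.expon.rvs(avg_interval, size=100000)       # loc=avg_interval, scale=1
scaled = stats.expon.rvs(scale=avg_interval, size=100000)  # loc=0, scale=avg_interval
print(shifted.min() >= avg_interval)                       # True: support starts at loc
print(np.isclose(scaled.mean(), avg_interval, rtol=0.05))  # True: mean is roughly tau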

Python image_list to np.array

I used a Python list to collect multiple numpy.array images read by OpenCV:
[array([[[167, 145, 121],
[164, 142, 118],
[167, 145, 121],
...,
[248, 243, 214],
[246, 242, 213],
[249, 245, 216]],
[[172, 150, 126],
[168, 146, 122],
[163, 141, 117],
...,
[249, 244, 214],
[246, 242, 213],
[248, 244, 215]],
...,]
I want to turn the outermost list into a numpy array, that is, a 4-axis tensor np.array:
array([[[[167, 145, 121],
[164, 142, 118],
[167, 145, 121],
...,
[248, 243, 214],
[246, 242, 213],
[249, 245, 216]],
[[168, 146, 122],
[164, 142, 118],
[164, 142, 118],
...,
[248, 243, 214],
[246, 242, 213],
[249, 245, 216]],
[[172, 150, 126],
[168, 146, 122],
[163, 141, 117],
...,
[249, 244, 214],
[246, 242, 213],
[248, 244, 215]],
...,]
However, if I use np.array(mylist) directly, it becomes:
array([array([[[167, 145, 121],
[164, 142, 118],
[167, 145, 121],
...,
[248, 243, 214],
[246, 242, 213],
[249, 245, 216]],
[[168, 146, 122],
[164, 142, 118],
[164, 142, 118],
...,
[248, 243, 214],
[246, 242, 213],
[249, 245, 216]],
...,
[249, 244, 214],
[246, 242, 213],
[248, 244, 215]],
....]
Is there a way to convert this?
Do all the images have the same shape (width, height, and number of channels)? If so, np.array(mylist) should work just fine. For example, here I created 10 random images:
my_list = [np.random.randint(0, 255, size=(1920, 1080, 3), dtype=np.uint8) for i in range(10)]
converted = np.array(my_list)
This results in what you expect:
array([[[[213, 60, 51],
[229, 125, 207],
[104, 139, 243],
...,
[166, 219, 32],
[116, 27, 108],
[ 99, 79, 21]],
[[176, 141, 170],
[107, 131, 83],
[ 23, 210, 126],
...,
[147, 41, 167],
[203, 118, 86],
[175, 5, 88]]]], dtype=uint8)
Now, if there are images with different shapes, you need to manually define the resulting shape; otherwise it will fail with a warning (VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated).
For instance, I created a random list of images of different sizes, took the largest extent along each dimension, and padded the results with zeros:
my_list = [np.random.randint(0, 255, size=(1920-i, 1080-i, 3), dtype=np.uint8) for i in range(10)]
# Largest extent along each axis across all images
largest_shape = np.max(np.array([m.shape for m in my_list]), axis=0)
# Allocate the padded batch and copy each image into its top-left corner
result = np.zeros([len(my_list)] + largest_shape.tolist())
for i, m in enumerate(my_list):
    result[i, :m.shape[0], :m.shape[1], :m.shape[2]] = m
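If the shapes do match, np.stack is a slightly more explicit spelling of the same conversion (using the equal-shape my_list from the first example):
batch = np.stack(my_list, axis=0)  # shape (10, 1920, 1080, 3), dtype uint8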

Tensorflow giving strange error in relation to variable reuse, stating a kernel already exists

Fantastic news: I figured this out and will keep the solution here for posterity.
I needed to begin my script with tf.reset_default_graph().
I am in the beginning stages of writing a GAN in TensorFlow, and I am getting a weird error message about whether or not I intend to reuse a variable. It is basically saying (I think) that I am trying to define a kernel twice for one of my convolutions. Code and error are attached. Thank you!
import tensorflow as tf
import numpy as np
import os
from definitions import *

"""
HYPERPARAMETERS
"""
BATCH_SIZE = 10  # number of slices in the batches fed to the discriminator
NUM_STEPS = 100  # number of iterations before we save
GEN_LR = 1e-5
DIS_LR = 1e-5
EPS = 1e-10
KERNEL = 3

x = tf.placeholder(tf.float32, shape=[BATCH_SIZE, 256, 256, 1], name='GenInput')
y = tf.placeholder(tf.float32, shape=[BATCH_SIZE, 256, 256, 1], name='GenOutput')
#label = tf.placeholder(tf.int32, name='IsReal')  # 1=real 0=generated
#whole_dataset = Dataset2D('/Users/Karl/Inputs/training set/DEC-MRI_training', '/Users/Karl/Inputs/training set/ROI_Liu_modified/')

def gen(x):
    with tf.variable_scope('GenBlk1'):
        with tf.variable_scope('conv1'):
            conv1 = tf.layers.conv2d(x, 32, (KERNEL, KERNEL), strides=(1, 1), padding="same")
            conv1 = tf.nn.relu(conv1)
        with tf.variable_scope('conv2'):
            conv2 = tf.layers.conv2d(conv1, 32, (KERNEL, KERNEL), strides=(1, 1), padding="same")
            conv2 = tf.nn.relu(conv2)
        with tf.variable_scope('conv3'):
            conv3 = tf.layers.conv2d(conv2, 5, (KERNEL, KERNEL), strides=(1, 1), padding="same")
            conv3 = tf.nn.relu(conv3)
        #xp = tf.layers.max_pooling2d(inputs, pool_size, strides, padding='valid')
        return conv3

def discriminator(y):
    with tf.variable_scope('DisBlk1'):
        y = tf.layers.conv2d(y, 32, (KERNEL, KERNEL), strides=(1, 1), padding="same")
        y = tf.nn.relu(y)
        y = tf.layers.conv2d(y, 32, (KERNEL, KERNEL), strides=(1, 1), padding="same")
        y = tf.nn.relu(y)
        y = tf.layers.conv2d(y, 32, (KERNEL, KERNEL), strides=(1, 1), padding="same")
        y = tf.nn.relu(y)
        y = tf.layers.dense(y, 2)
        #xp = tf.layers.max_pooling2d(inputs, pool_size, strides, padding='valid')
        return y

def main(x, whole_dataset):
    # ops
    pred = gen(x)
    discrim_fake = discriminator(pred)
    #discrim_real = discriminator(y)
    #gLoss =
    # summaries
    with tf.name_scope("generator_output"):
        tf.summary.image("outputs", pred)
    tf.summary.scalar("discriminator_loss", dLoss)
    tf.summary.scalar("generator_loss_GAN", gLoss)
    for var in tf.trainable_variables():
        tf.summary.histogram(var.op.name + "/values", var)
    saver = tf.train.Saver(max_to_keep=10)
    GLOBAL_STEP = 0
    #with tf.Session() as sess:
    #    while True:  # main loop

main(x, whole_dataset)
This is the error:
runfile('/Users/Karl/Research/NNStuff/GAN_breast/main.py', wdir='/Users/Karl/Research/NNStuff/GAN_breast')
Reloaded modules: definitions
Traceback (most recent call last):
File "<ipython-input-74-b7a187cb0f1a>", line 1, in <module>
runfile('/Users/Karl/Research/NNStuff/GAN_breast/main.py', wdir='/Users/Karl/Research/NNStuff/GAN_breast')
File "/Users/Karl/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 880, in runfile
execfile(filename, namespace)
File "/Users/Karl/anaconda/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "/Users/Karl/Research/NNStuff/GAN_breast/main.py", line 77, in <module>
main(x,whole_dataset)
File "/Users/Karl/Research/NNStuff/GAN_breast/main.py", line 63, in main
tf.summary.image("outputs", gen(x))
File "/Users/Karl/Research/NNStuff/GAN_breast/main.py", line 31, in gen
conv1=tf.layers.conv2d(x, 32, (KERNEL, KERNEL), strides=(1, 1), padding="same")
File "/Users/Karl/anaconda/lib/python3.6/site-packages/tensorflow/python/layers/convolutional.py", line 551, in conv2d
return layer.apply(inputs)
File "/Users/Karl/anaconda/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 503, in apply
return self.__call__(inputs, *args, **kwargs)
File "/Users/Karl/anaconda/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 443, in __call__
self.build(input_shapes[0])
File "/Users/Karl/anaconda/lib/python3.6/site-packages/tensorflow/python/layers/convolutional.py", line 137, in build
dtype=self.dtype)
File "/Users/Karl/anaconda/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 383, in add_variable
trainable=trainable and self.trainable)
File "/Users/Karl/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 1065, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/Users/Karl/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 962, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/Users/Karl/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 367, in get_variable
validate_shape=validate_shape, use_resource=use_resource)
File "/Users/Karl/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 352, in _true_getter
use_resource=use_resource)
File "/Users/Karl/anaconda/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 664, in _get_single_variable
name, "".join(traceback.format_list(tb))))
ValueError: Variable GenBlk1/conv1/conv2d/kernel already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
File "/Users/Karl/Research/NNStuff/GAN_breast/main.py", line 30, in generator
with tf.variable_scope('conv1'):
File "/Users/Karl/Research/NNStuff/GAN_breast/main.py", line 55, in main
#ops
File "/Users/Karl/Research/NNStuff/GAN_breast/main.py", line 77, in <module>
main(x,whole_dataset)
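For anyone hitting the same ValueError, two fixes are commonly suggested, and both are consistent with the solution above. In an interactive environment like Spyder, variables from a previous run of the script linger in the default graph, so clearing it with tf.reset_default_graph() helps; and if a network is deliberately built more than once (as a GAN discriminator usually is), its variable scope must allow reuse. A minimal, self-contained sketch of the second fix, assuming TF 1.x:
import tensorflow as tf

tf.reset_default_graph()  # drop variables left over from a previous run in this process

KERNEL = 3

def discriminator(y):
    # reuse=tf.AUTO_REUSE creates the variables on the first call and shares them afterwards
    with tf.variable_scope('DisBlk1', reuse=tf.AUTO_REUSE):
        y = tf.layers.conv2d(y, 32, (KERNEL, KERNEL), padding="same", name='conv1')
        y = tf.nn.relu(y)
        return tf.layers.dense(y, 2, name='out')

inp = tf.placeholder(tf.float32, shape=[10, 256, 256, 1])
fake_logits = discriminator(inp)  # first call defines DisBlk1/conv1/kernel etc.
real_logits = discriminator(inp)  # second call reuses them instead of raising ValueError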