AttributeError: 'Sequential' object has no attribute 'run_eagerly' - tensorflow

I'm trying to use this model to train on rock, paper, scissors pictures. However, it was trained on 1800 pictures and only reaches an accuracy of 30-40%. I then tried to use TensorBoard to see what's going on, but the error in the title appears.
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
model = Sequential()
model.add(Conv2D(256, kernel_size=(4, 4),
                 activation='relu',
                 input_shape=(64, 64, 3)))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Conv2D(196, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(Conv2D(128, (4, 4), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Conv2D(96, (4, 4), activation='relu'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(3, activation='softmax'))
''' here it instantiates the tensorboard '''
tensorboard = TensorBoard(log_dir="C:/Users/bamla/Desktop/RPS project/Logs")
model.compile(loss="sparse_categorical_crossentropy",
optimizer="SGD",
metrics=['accuracy'])
model.summary()
''' Here its fitting the model '''
model.fit(x_train, y_train, batch_size=50, epochs=3, callbacks=[tensorboard])
This outputs:
Traceback (most recent call last):
  File "c:/Users/bamla/Desktop/RPS project/Testing.py", line 82, in <module>
    model.fit(x_train, y_train, batch_size=50, epochs=3, callbacks=[tensorboard])
  File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training.py", line 1178, in fit
    validation_freq=validation_freq)
  File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training_arrays.py", line 125, in fit_loop
    callbacks.set_model(callback_model)
  File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\callbacks.py", line 68, in set_model
    callback.set_model(model)
  File "C:\Users\bamla\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\callbacks.py", line 1509, in set_model
    if not model.run_eagerly:
AttributeError: 'Sequential' object has no attribute 'run_eagerly'
Also, if you have any tips on how to improve the accuracy, that would be appreciated!

The problem is here:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from tensorflow.python.keras.callbacks import TensorBoard
Do not mix keras and tf.keras imports; they are not compatible with each other and produce weird errors such as the one you are seeing.
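For example, keeping every import from plain keras would look like the following (a sketch based on the other answers here; the tf.keras equivalents work just as well, as long as you do not mix the two styles):
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.callbacks import TensorBoard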

I changed
from tensorflow.python.keras.callbacks import TensorBoard
to
from keras.callbacks import TensorBoard
and it worked for me.

For me, this did the job:
from tensorflow.keras import datasets, layers, models
from tensorflow import keras

It seems that you are mixing imports from keras and tensorflow.keras (the latter is preferred).
https://www.pyimagesearch.com/2019/10/21/keras-vs-tf-keras-whats-the-difference-in-tensorflow-2-0/
And most importantly, going forward all deep learning practitioners
should switch their code to TensorFlow 2.0 and the tf.keras package.
The original keras package will still receive bug fixes, but moving
forward, you should be using tf.keras.
Try with:
import tensorflow
Conv2D = tensorflow.keras.layers.Conv2D
MaxPooling2D = tensorflow.keras.layers.MaxPooling2D
Dense = tensorflow.keras.layers.Dense
Flatten = tensorflow.keras.layers.Flatten
Dropout = tensorflow.keras.layers.Dropout
TensorBoard = tensorflow.keras.callbacks.TensorBoard
model = tensorflow.keras.Sequential()

Related

What is the classification algorithm used by Keras?

I've created a sound classifier built with Keras, following some tutorials from the internet. Here is my model code:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, InputLayer, Dropout, Conv1D, Conv2D, Flatten, Reshape, MaxPooling1D, MaxPooling2D, BatchNormalization, TimeDistributed
from tensorflow.keras.optimizers import Adam
model = Sequential()
model.add(Reshape((int(input_length / 40), 40), input_shape=(input_length, )))
model.add(Conv1D(8, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=2, strides=2, padding='same'))
model.add(Dropout(0.25))
model.add(Conv1D(16, kernel_size=3, activation='relu', padding='same'))
model.add(MaxPooling1D(pool_size=2, strides=2, padding='same'))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(classes, activation='softmax', name='y_pred'))
opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999)
# this controls the batch size, or you can manipulate the tf.data.Dataset objects yourself
BATCH_SIZE = 32
train_dataset = train_dataset.batch(BATCH_SIZE, drop_remainder=False)
validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=False)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(train_dataset, epochs=1000, validation_data=validation_dataset, verbose=2, callbacks=callbacks)
My teacher asked me what algorithm I am using for classification (he mentioned things like K-NN, Naive Bayes, or SVM), and I don't know what I'm using.
You're using a Convolutional Neural Network (CNN), which is itself the classifier here, rather than a classical algorithm such as K-NN, Naive Bayes, or SVM.

InvalidArgumentError: Incompatible shapes: [29] vs. [29,7,7,2]

I'm new here and to Python as well. I'm trying to make my own network. I found some 15x15 pictures of dogs and cats and unfortunately couldn't make this basic network work...
So, these are the libraries I'm using:
from tensorflow.keras.models import Sequential
from tensorflow.keras import utils
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import keras
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import GlobalMaxPooling2D
Body
train_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    'drive/MyDrive/cats vs dogs/cats vs dogs/training',
    color_mode="rgb",
    batch_size=32,
    image_size=(150, 150),
    shuffle=True,
    seed=42,
    validation_split=0.1,
    subset='training',
    interpolation="bilinear",
    follow_links=False,
)
validation_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    'drive/MyDrive/cats vs dogs/cats vs dogs/training',
    color_mode="rgb",
    batch_size=32,
    image_size=(150, 150),
    shuffle=True,
    seed=42,
    validation_split=0.1,
    subset='validation',
    interpolation="bilinear",
    follow_links=False,
)
test_dataset = tf.keras.preprocessing.image_dataset_from_directory(
    'drive/MyDrive/cats vs dogs/cats vs dogs/test',
    batch_size=32,
    image_size=(150, 150),
    interpolation="bilinear"
)
model = Sequential()
model.add(keras.Input(shape=(150, 150, 3)))
model.add(Conv2D(32, 5, strides=2, activation="relu"))
model.add(Conv2D(32, 3, activation="relu"))
model.add(MaxPooling2D(3))
model.add(Dense(250, activation='sigmoid'))
model.add(Dense(100))
model.add(MaxPooling2D(3))
model.add(Dense(2))
model.summary()
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
history = model.fit(train_dataset, validation_data=validation_dataset, epochs=5, verbose=2)
And I get this error
Incompatible shapes: [29] vs. [29,7,7,2]
[[node gradient_tape/binary_crossentropy/mul_1/BroadcastGradientArgs
(defined at /usr/local/lib/python3.7/dist-packages/keras/optimizer_v2/optimizer_v2.py:464)
]] [Op:__inference_train_function_4364]
Errors may have originated from an input operation.
Input Source operations connected to node
gradient_tape/binary_crossentropy/mul_1/BroadcastGradientArgs:
In[0] gradient_tape/binary_crossentropy/mul_1/Shape:
In[1] gradient_tape/binary_crossentropy/mul_1/Shape_1
I tried changing binary_crossentropy to categorical_crossentropy, but it didn't help. I suppose my mistake is in the datasets or the inputs, but I don't know how to solve it :(
Really hope to find help here!
My architecture: https://i.stack.imgur.com/w4Y9N.png
You need to flatten your prediction somewhere; otherwise you are outputting an image (29 samples of size 7x7 with 2 channels), while you simply want flat 2-dimensional logits (shape 29x2). The architecture you are using is somewhat odd: did you mean to have a flattening operation before the first Dense layer, and then no MaxPooling2D (which makes no sense for a flattened signal)? Mixing relu and sigmoid activations is also quite non-standard; I would encourage you to start with established architectures rather than composing your own, to build some intuition.
model = Sequential()
model.add(keras.Input(shape=(150, 150, 3)))
model.add(Conv2D(32, 5, strides=2, activation="relu"))
model.add(Conv2D(32, 3, activation="relu"))
model.add(MaxPooling2D(3))
model.add(Flatten())
model.add(Dense(250, activation="relu"))
model.add(Dense(100, activation="relu"))
model.add(Dense(2))
model.summary()
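One detail not covered above: with the final Dense(2) layer outputting raw logits and the integer labels that image_dataset_from_directory produces by default (label_mode='int'), a matching compile call could look roughly like this sketch:
import tensorflow as tf

model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer='adam',
              metrics=['accuracy'])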

Tensorflow: ValueError: Data cardinality is ambiguous:

I recently started learning TensorFlow and am following this guide:
https://pythonprogramming.net/convolutional-neural-network-deep-learning-python-tensorflow-keras/
I am attempting to use my own dataset, which also has two labels (car and not car).
This is my code:
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
import pickle
pickle_in = open("X.pickle","rb")
X = pickle.load(pickle_in)
pickle_in = open("y.pickle","rb")
y = pickle.load(pickle_in)
X = X/255.0
model = Sequential()
model.add(Conv2D(256, (3, 3), input_shape=X.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(256, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # this converts our 3D feature maps to 1D feature vectors
model.add(Dense(64))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
model.fit(X, y, batch_size=32, epochs=3, validation_split=0.3)
model.save('car.model')
However, I'm getting an error that I do not understand how to fix.
raise ValueError(msg)
ValueError: Data cardinality is ambiguous:
x sizes: 8406
y sizes: 0
Please provide data which shares the same first dimension.
Appreciate the help!
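One way to see what the error is complaining about is to compare the first dimensions of X and y before calling fit; a minimal sketch, using the variable names from the question:
import numpy as np

X = np.asarray(X)
y = np.asarray(y)
print(X.shape, y.shape)  # fit() requires X.shape[0] == y.shape[0]; the error reports y with size 0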

super(type, obj): obj must be an instance or subtype of type in Keras

I implemented the following to build tiny yolo v2 from scratch using Keras with the TensorFlow backend.
My code was working fine in Keras 2.1.5, but when I updated to Keras 2.1.6 I ran into this error:
kernel_constraint=None,
TypeError: super(type, obj): obj must be an instance or subtype of type
Please help me out. Thank you so much.
import tensorflow as tf
import keras
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import (Dense, Conv2D, MaxPooling2D, Dropout, Flatten,
                          Reshape, LeakyReLU, BatchNormalization)
def yolo():
    model = Sequential()
    model.add(Conv2D(16, (3, 3), padding='same', input_shape=(416, 416, 3), data_format='channels_last'))
    model.add(LeakyReLU(alpha=0.1))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(32, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=-1))
    model.add(LeakyReLU(alpha=0.1))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(64, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=-1))
    model.add(LeakyReLU(alpha=0.1))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(128, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=-1))
    model.add(LeakyReLU(alpha=0.1))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(128, (3, 3), padding='same'))
    model.add(BatchNormalization(axis=-1))
    model.add(LeakyReLU(alpha=0.1))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Conv2D(12, (1, 1), padding='same'))
    model.add(BatchNormalization(axis=-1))
    model.add(LeakyReLU(alpha=0.1))
    model.add(Reshape((13, 13, 2, 6)))
    return model
model = yolo()
model.summary()
This can be caused by continuing to work in the same Python kernel without restarting it after the update.
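As a quick sanity check after restarting, you can confirm which Keras version the fresh kernel actually loads; a minimal sketch:
import keras
print(keras.__version__)  # should report 2.1.6 in a freshly restarted kernel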

Flatten function in Keras

Problem defining the NN architecture
I'm trying to create a CNN with Keras for the CIFAR-10 image dataset (https://keras.io/datasets/), but I can't get the Flatten function to work even though it appears in the Keras library: https://keras.io/layers/core/#flatten
Here is the error message:
NameError Traceback (most recent call last)
<ipython-input-9-aabd6bce9082> in <module>()
12 nn.add(Conv2D(64, 3, 3, activation='relu'))
13 nn.add(MaxPooling2D(pool_size=(2, 2)))
---> 14 nn.add(Flatten())
15 nn.add(Dense(128, activation='relu'))
16 nn.add(Dense(10, activation='softmax'))
NameError: name 'Flatten' is not defined
I'm using Jupyter running Python 2.7 and Keras 1.1.1. Below is the code for the NN:
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.layers import Dense, Activation
nn = Sequential()
nn.add(Conv2D(32, 3, 3, activation='relu', input_shape=(32, 32, 3)))
# Max-pool reduces the size of inputs, by taking the largest pixel-value from a grid
nn.add(MaxPooling2D(pool_size=(2, 2)))
nn.add(Conv2D(64, 3, 3, activation='relu'))
nn.add(MaxPooling2D(pool_size=(2, 2)))
nn.add(Flatten())
nn.add(Dense(128, activation='relu'))
nn.add(Dense(10, activation='softmax'))
Thanks in advance,
-Johan B.
Try importing the layer first:
from keras.layers.convolutional import Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
nn = Sequential()
nn.add(Conv2D(32, 3, 3, activation='relu', input_shape=(32, 32, 3)))
# Max-pool reduces the size of inputs, by taking the largest pixel-value from a grid
nn.add(MaxPooling2D(pool_size=(2, 2)))
nn.add(Conv2D(64, 3, 3, activation='relu'))
nn.add(MaxPooling2D(pool_size=(2, 2)))
nn.add(Flatten())
nn.add(Dense(128, activation='relu'))
nn.add(Dense(10, activation='softmax'))