I am trying to import:
from sklearn.cluster import KMeans as sk_KMeans
but I get the following error:
AttributeError: module 'numpy' has no attribute 'float'
How do I fix this?
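The np.float alias was removed in NumPy 1.24, and older scikit-learn releases still reference it, which is the usual cause of this import error. A minimal sketch of the options, assuming that is what is happening here (upgrading scikit-learn or pinning numpy below 1.24 is the clean fix; the shim is only a stopgap):

# Option 1 (preferred): upgrade scikit-learn, or pin numpy < 1.24.
# Option 2 (stopgap sketch): restore the removed alias before the import.
import numpy as np

if not hasattr(np, "float"):   # np.float was removed in NumPy 1.24
    np.float = float           # temporary shim so older libraries keep working

from sklearn.cluster import KMeans as sk_KMeans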
Hello, sorry, I am very new to Python and cannot figure out what I am missing. I am trying to use plotfile; my code is
import matplotlib.pyplot as plt
plt.plotfile('somefile.dat', delimiter=' ', cols=(0, 1),
             names=('A', 'B'), marker='o')
and it gives me the following error:
AttributeError: module 'matplotlib.pyplot' has no attribute 'plotfile'
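plt.plotfile was deprecated and later removed from matplotlib, so recent versions no longer have the attribute. A minimal sketch of an equivalent plot, assuming somefile.dat holds two space-separated numeric columns:

import numpy as np
import matplotlib.pyplot as plt

# Load the two columns that plotfile used to read for us.
data = np.loadtxt('somefile.dat', delimiter=' ', usecols=(0, 1))
plt.plot(data[:, 0], data[:, 1], marker='o')
plt.xlabel('A')
plt.ylabel('B')
plt.show()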
I get the following error when I run import keras:
AttributeError: module 'tensorflow.python.util.dispatch' has no attribute 'add_fallback_dispatch_list'
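This usually means the standalone keras package in the environment is newer than the installed TensorFlow release, so Keras calls a TensorFlow dispatch helper that does not exist yet. A minimal sketch of the usual workaround, assuming that mismatch is the cause:

# Import Keras through TensorFlow so the two versions always agree,
# instead of importing the standalone `keras` package directly.
import tensorflow as tf
from tensorflow import keras

print(tf.__version__)      # TensorFlow release
print(keras.__version__)   # the bundled Keras, matched to TensorFlow

# Alternatively, reinstall the standalone package at a version that matches
# the installed TensorFlow release.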
I'm trying a small program to capture emotion from an image from here, and I get this error:
LOCAL.ALL_OBJECTS[generic_utils.to_snake_case(key)] = value
AttributeError: module 'keras.utils.generic_utils' has no attribute 'to_snake_case'
Code.py
from fer import FER
import matplotlib.pyplot as plt
from tensorflow.keras.utils import to_snake_case
def emotionCapture():
    img = plt.imread("happy.jpg")
    detector = FER(mtcnn=True)
    print(detector.detect_emotions(img))
    plt.imshow(img)
    predicted_emotion, score = detector.top_emotion(img)
    print(predicted_emotion)
    return predicted_emotion

emotionCapture()
Keras version 2.4.3
Tensorflow version 2.5.0
opencv-python version 4.5.2.52
After removing mtcnn=True from detector = FER(mtcnn=True), the function worked.
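For reference, a minimal sketch of the working call (MTCNN disabled, so FER falls back to its default OpenCV face detector and never hits the broken keras.utils path):

from fer import FER
import matplotlib.pyplot as plt

img = plt.imread("happy.jpg")
detector = FER(mtcnn=False)                      # default detector, no MTCNN
predicted_emotion, score = detector.top_emotion(img)
print(predicted_emotion, score)

Alternatively, aligning the standalone Keras version with TensorFlow (here Keras 2.4.3 is paired with TensorFlow 2.5.0) may let mtcnn=True work again, though I have not verified that combination.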
I want to wrap a tensorflow function in a Keras Lambda layer as per the docs. However, my inputs are complex64. Here is a more complete example of the code I am using to replicate this behavior:
import numpy as np
from keras.models import Model
from keras.layers import Input, Lambda
import tensorflow as tf
np.set_printoptions(precision=3, threshold=3, edgeitems=3)
def layer0(inp):
    z = inp[0] + inp[1]
    num = tf.cast(tf.real(z), tf.complex64)
    return z/num

if __name__ == "__main__":
    shape = (1, 10, 5)
    z1 = Input(shape=shape[1:], dtype=np.complex64)
    z2 = Input(shape=shape[1:], dtype=np.complex64)
    #s = Lambda(layer0, output_shape=shape)([z1, z2])
    s = Lambda(layer0)([z1, z2])
    model = Model(inputs=[z1, z2], outputs=s)

    z1_in = np.asarray(np.random.normal(size=shape) + np.random.normal(size=shape)*1j, 'complex64')
    z2_in = np.asarray(np.random.normal(size=shape) + np.random.normal(size=shape)*1j, 'complex64')
    s_out = model.predict([z1_in, z2_in])
    print(s_out)
which gives the following error:
Traceback (most recent call last):
File "complex_lambda.py", line 32, in <module>
s = Lambda(layer0)([z1, z2])
File "complex_lambda.py", line 18, in layer0
return z/num
TypeError: x and y must have the same dtype, got tf.float32 != tf.complex64
However, if I use the commented line instead:
s = Lambda(layer0, output_shape=shape)([z1, z2])
The code runs just fine. It seems that output_shape=(...) is necessary to make the division in the lambda function work. While this workaround solves the problem for a single output variable, it doesn't work when there are multiple outputs.
I cannot replicate your issue. Which version of tensorflow are you using? Are you using the keras package, or the tensorflow.keras submodule?
At any rate, I think you can fix your issue by specifying the dtype of the Lambda layer: s = Lambda(lambda x: tf.math.real(x[0] + x[1]), dtype='complex64')([z1, z2])
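For completeness, a sketch along the same lines that keeps your original layer0 and only adds the dtype argument, so the Lambda layer stops casting the complex inputs down to float32 (names mirror the question's code):

import numpy as np
import tensorflow as tf
from keras.models import Model
from keras.layers import Input, Lambda

def layer0(inp):
    z = inp[0] + inp[1]
    num = tf.cast(tf.math.real(z), tf.complex64)
    return z / num

shape = (1, 10, 5)
z1 = Input(shape=shape[1:], dtype=np.complex64)
z2 = Input(shape=shape[1:], dtype=np.complex64)
s = Lambda(layer0, dtype='complex64')([z1, z2])   # dtype keeps the layer in complex64
model = Model(inputs=[z1, z2], outputs=s)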