I'd like to normalize the inputs going into my neural network, but since I'm defining my model this way:
import pandas as pd
# imports below are assumed from the traceback (tf.keras); adjust if you import Keras differently
from tensorflow.keras import backend as K
from tensorflow.keras import regularizers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Lambda

df = pd.read_csv(r'C:\Users\Davide Mori\PycharmProjects\pythonProject\Dataset.csv')
print(df)

target_column = ['W_mag', 'W_phase']
predictors = list(set(list(df.columns)) - set(target_column))
X = df[predictors].values
Y = df[target_column].values

def get_model(n_inputs, n_outputs):
    model = Sequential()
    model.add(Dense(1000, input_dim=n_inputs, activation='relu'))
    #model.add(Lambda(lambda x: K.l2_normalize(x, axis=1)))
    model.add(Dense(1000, activation='linear', activity_regularizer=regularizers.l1(0.0001)))
    model.add(Activation('relu'))
    model.add(Dense(n_outputs, activation='linear'))
    model.compile(optimizer="adam", loss="mean_squared_error", metrics=["mean_squared_error"])
    model.summary()
    return model

n_inputs, n_outputs = X.shape[1], Y.shape[1]
model = get_model(n_inputs, n_outputs)
# fit the model on all data
model.fit(X, Y, epochs=100, batch_size=1)
how do I apply the Lambda layer to my inputs? Isn't the commented line in the wrong position? If I put the Lambda layer there, I'm normalizing what has already been "transformed" by the first hidden layer, right? How can I solve this problem?
This is the error I get when I put the Lambda layer before everything else:
2020-10-12 15:08:46.036872: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "C:\Program Files\JetBrains\PyCharm 2020.2.2\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "C:\Program Files\JetBrains\PyCharm 2020.2.2\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "C:/Users/Davide Mori/PycharmProjects/pythonProject/prova_rete_sfs.py", line 60, in <module>
    model = get_model(n_inputs, n_outputs)
  File "C:/Users/Davide Mori/PycharmProjects/pythonProject/prova_rete_sfs.py", line 52, in get_model
    model.summary()
  File "C:\Users\Davide Mori\Anaconda3\envs\pythonProject\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 1302, in summary
    raise ValueError('This model has not yet been built. '
ValueError: This model has not yet been built. Build the model first by calling `build()` or calling `fit()` with some data, or specify an `input_shape` argument in the first layer(s) for automatic build.
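For what it's worth, here is a minimal sketch of what the error message itself suggests (an assumption based on that message, not a confirmed fix): when the Lambda layer comes first, give it an explicit input_shape so Keras can build the model, and place it before the first Dense layer so the raw inputs are what get normalized:

def get_model(n_inputs, n_outputs):
    model = Sequential()
    # normalize the raw inputs first; input_shape lets Keras build the model
    model.add(Lambda(lambda x: K.l2_normalize(x, axis=1), input_shape=(n_inputs,)))
    model.add(Dense(1000, activation='relu'))
    model.add(Dense(1000, activation='linear', activity_regularizer=regularizers.l1(0.0001)))
    model.add(Activation('relu'))
    model.add(Dense(n_outputs, activation='linear'))
    model.compile(optimizer="adam", loss="mean_squared_error", metrics=["mean_squared_error"])
    model.summary()
    return model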
I am trying to use a pre-trained ResNet and fine-tune it using triplet loss. The code below is a combination of tutorials I found on the topic:
import pathlib
import tensorflow as tf
import tensorflow_addons as tfa
with tf.device('/cpu:0'):
    INPUT_SHAPE = (32, 32, 3)
    BATCH_SIZE = 16
    data_dir = pathlib.Path('/home/user/dataset/')

    base_model = tf.keras.applications.ResNet50V2(
        weights='imagenet',
        pooling='avg',
        include_top=False,
        input_shape=INPUT_SHAPE,
    )
    # following two lines are added after edit, originally it was model = base_model
    head_model = tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1))(base_model.output)
    model = tf.keras.Model(inputs=base_model.input, outputs=head_model)

    datagen = tf.keras.preprocessing.image.ImageDataGenerator(
        rotation_range=10,
        zoom_range=0.1,
    )
    generator = datagen.flow_from_directory(
        data_dir,
        target_size=INPUT_SHAPE[:2],
        batch_size=BATCH_SIZE,
        seed=42,
    )

    model.compile(
        optimizer=tf.keras.optimizers.Adam(0.001),
        loss=tfa.losses.TripletSemiHardLoss(),
    )
    model.fit(
        generator,
        epochs=5,
    )
Unfortunately after running the code I get the following error:
Found 4857 images belonging to 83 classes.
Epoch 1/5
Traceback (most recent call last):
File "ReID/external_process.py", line 35, in <module>
model.fit(
File "/home/user/videolytics/venv_python/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 108, in _method_wrapper
return method(self, *args, **kwargs)
File "/home/user/videolytics/venv_python/lib/python3.8/site-packages/tensorflow/python/keras/engine/training.py", line 1098, in fit
tmp_logs = train_function(iterator)
File "/home/user/videolytics/venv_python/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "/home/user/videolytics/venv_python/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py", line 840, in _call
return self._stateless_fn(*args, **kwds)
File "/home/user/videolytics/venv_python/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 2829, in __call__
return graph_function._filtered_call(args, kwargs) # pylint: disable=protected-access
File "/home/user/videolytics/venv_python/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1843, in _filtered_call
return self._call_flat(
File "/home/user/videolytics/venv_python/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 1923, in _call_flat
return self._build_call_outputs(self._inference_function.call(
File "/home/user/videolytics/venv_python/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 545, in call
outputs = execute.execute(
File "/home/user/videolytics/venv_python/lib/python3.8/site-packages/tensorflow/python/eager/execute.py", line 59, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 1328 values, but the requested shape has 16
[[{{node TripletSemiHardLoss/PartitionedCall/Reshape}}]] [Op:__inference_train_function_13749]
Function call stack:
train_function
2020-10-23 22:07:09.094736: W tensorflow/core/kernels/data/generator_dataset_op.cc:103] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated.
[[{{node PyFunc}}]]
The dataset directory has 83 subdirectories, one per class, and each of these subdirectories contains images of the given class. The dimension 1328 in the error output is the batch size (16) times the number of classes (83), and the dimension 16 is the batch size (both dimensions change accordingly if I change the BATCH_SIZE).
To be honest I do not really understand the error, so any solution, or even any insight into where the problem lies, is deeply appreciated.
The problem is that the TripletSemiHardLoss expects
labels y_true to be provided as 1-D integer Tensor with shape [batch_size] of multi-class integer labels
but flow_from_directory generates categorical (one-hot) labels by default; using class_mode="sparse" should fix the problem.
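A minimal sketch of that fix, reusing datagen, data_dir, INPUT_SHAPE and BATCH_SIZE from the question above:

generator = datagen.flow_from_directory(
    data_dir,
    target_size=INPUT_SHAPE[:2],
    batch_size=BATCH_SIZE,
    seed=42,
    class_mode="sparse",  # integer labels of shape (batch_size,) instead of one-hot vectors
)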
I am trying to compile a Keras model to train and test on a dataset, but the following error messages show up during compilation. Could anyone help me solve this? I have checked other pages and followed their suggestions, but none of them really helped.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),  # Rectified Linear Unit.
    tf.keras.layers.Dense(10, activation="softmax")

model.compile(optimizer="adam", loss="sparse_categorial_crossentropy", metrics=["accuracy"])
The lines below appear when I try to compile and run.
Traceback (most recent call last):
File "/home/eaindra/PycharmProjects/NeuralNetwork/Tensorflow1.py", line 42, in <module>
model.compile(optimizer="adam", loss="sparse_categorial_crossentropy", metrics=["accuracy"])
File "/home/eaindra/anaconda3/envs/tensor/lib/python3.6/site-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
result = method(self, *args, **kwargs)
File "/home/eaindra/anaconda3/envs/tensor/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 336, in compile
self.loss, self.output_names)
File "/home/eaindra/anaconda3/envs/tensor/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1351, in prepare_loss_functions
loss_functions = [get_loss_function(loss) for _ in output_names]
File "/home/eaindra/anaconda3/envs/tensor/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1351, in <listcomp>
loss_functions = [get_loss_function(loss) for _ in output_names]
File "/home/eaindra/anaconda3/envs/tensor/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_utils.py", line 1087, in get_loss_function
loss_fn = losses.get(loss)
File "/home/eaindra/anaconda3/envs/tensor/lib/python3.6/site-packages/tensorflow_core/python/keras/losses.py", line 1183, in get
return deserialize(identifier)
File "/home/eaindra/anaconda3/envs/tensor/lib/python3.6/site-packages/tensorflow_core/python/keras/losses.py", line 1174, in deserialize
printable_module_name='loss function')
File "/home/eaindra/anaconda3/envs/tensor/lib/python3.6/site-packages/tensorflow_core/python/keras/utils/generic_utils.py", line 210, in deserialize_keras_object
raise ValueError('Unknown ' + printable_module_name + ':' + object_name)
ValueError: Unknown loss function:sparse_categorial_crossentropy
It seems to be a typo in the loss function: you wrote categorial instead of categorical, and you also missed the closing square bracket in the model definition.
Fixed code segment attached below:
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),  # Rectified Linear Unit.
    tf.keras.layers.Dense(10, activation="softmax")])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
I am trying to train a model with new data samples in each iteration of a loop in Keras (using the TensorFlow backend). Because of a GPU memory error after some iterations, I added K.clear_session(). However, after one iteration, the code throws the error:
'Cannot interpret feed_dict key as Tensor: ' + e.args[0])
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(7, 7, 3, 64), dtype=float32) is not an element of this graph.
If I remove K.clear_session() at the end, there is no error. Can anyone explain why this error appears in the second iteration?
I tried other methods to release GPU memory, but none of them worked, so this is my last option, and it throws this error. I have pasted example code that reproduces the error. Please note that this is not the actual code; it is just an example I made to reproduce the error I am facing in the actual code.
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
import random
seed_value = 0
import os
import keras
os.environ['PYTHONHASHSEED'] = str(seed_value)
random.seed(0)
np.random.seed(0)
from keras import backend as K
from keras.datasets import cifar10

(x_train, y_train), (x_test, y_test) = cifar10.load_data()

for i in range(3):
    base_model = tf.keras.applications.resnet50.ResNet50(weights='imagenet', input_shape=(32, 32, 3),
                                                         include_top=False)
    x = base_model.output
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    output = tf.keras.layers.Dense(10, activation='softmax',
                                   kernel_initializer=tf.keras.initializers.RandomNormal(seed=4))(x)
    model = tf.keras.Model(inputs=base_model.input, outputs=output)

    y_train = keras.utils.to_categorical(y_train, 10)
    y_test = keras.utils.to_categorical(y_test, 10)

    for layer in base_model.layers:
        layer.trainable = False

    optimizer = tf.train.AdamOptimizer(learning_rate=0.0001)
    model.compile(optimizer=optimizer, loss='categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, batch_size=1024, epochs=1, verbose=1)

    K.clear_session()
Traceback (most recent call last):
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1092, in _run
subfeed, allow_tensor=True, allow_operation=False)
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3490, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _as_graph_element_locked
raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("Placeholder:0", shape=(7, 7, 3, 64), dtype=float32) is not an element of this graph.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:/codes/experiments-AL/breakhis/40X-M-B/codes-AL/error_debug.py", line 22, in <module>
include_top=False)
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\applications\__init__.py", line 70, in wrapper
return base_fun(*args, **kwargs)
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\applications\resnet50.py", line 32, in ResNet50
return resnet50.ResNet50(*args, **kwargs)
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\keras_applications\resnet50.py", line 291, in ResNet50
model.load_weights(weights_path)
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\network.py", line 1544, in load_weights
saving.load_weights_from_hdf5_group(f, self.layers)
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\engine\saving.py", line 806, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\keras\backend.py", line 2784, in batch_set_value
get_session().run(assign_ops, feed_dict=feed_dict)
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "C:\Users\sirshad\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1095, in _run
'Cannot interpret feed_dict key as Tensor: ' + e.args[0])
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(7, 7, 3, 64), dtype=float32) is not an element of this graph.
Process finished with exit code 1
I was able to overcome this issue by saving the ImageNet pre-trained model to disk and then loading it every time in the loop after calling tf.keras.backend.clear_session(). So saving the base model to a file and then loading it works. But I am still confused why it did not work before with
base_model = tf.keras.applications.resnet50.ResNet50
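A minimal sketch of that workaround, reusing the imports from the question's example (the file name and the simplified classification head are illustrative, not from the original code): save the ImageNet-weighted base model once, then reload it from disk inside the loop instead of calling tf.keras.applications again after clear_session():

# one-time export of the pre-trained feature extractor
base = tf.keras.applications.resnet50.ResNet50(weights='imagenet', input_shape=(32, 32, 3),
                                               include_top=False)
base.save('resnet50_base.h5')

for i in range(3):
    # rebuild the model from the saved file so its weights live in the current graph
    base_model = tf.keras.models.load_model('resnet50_base.h5')
    x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
    output = tf.keras.layers.Dense(10, activation='softmax')(x)
    model = tf.keras.Model(inputs=base_model.input, outputs=output)
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    # model.fit(...) as in the original loop
    tf.keras.backend.clear_session()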
I am trying to define a basic ConvNet using one Conv1D operation as follows:
# imports assumed from the traceback (standalone Keras)
from keras import activations
from keras.layers import Input, Conv1D, MaxPool1D, Flatten, Dense
from keras.models import Model

n_class = 10
# channels last (default)
input_shape = (100, 1)
inp = Input(shape=input_shape)
x = Conv1D(4, kernel_size=9, activation=activations.relu, padding='same')(inp)
x = MaxPool1D(pool_size=5)(x)
x = Flatten()(x)
out = Dense(n_class, activation='softmax')(x)
model = Model(inputs=inp, outputs=out)
and that works fine (it uses data_format='channels_last' by default). However, if I instead want to use data_format='channels_first':
# "channels_first" inputs with shape (batch, channels, length).
input_shape = (1, 100)
inp = Input(shape=input_shape)
x = Conv1D(4, kernel_size=9, activation=activations.relu, data_format='channels_first', padding='same')(inp)
x = MaxPool1D(pool_size=5, data_format='channels_first')(x)
x = Flatten()(x)
out = Dense(n_class, activation='softmax')(x)
model = Model(inputs=inp, outputs=out)
I get the following error when defining the Conv1D layer:
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 1596, in <module>
globals = debugger.run(setup['file'], None, None, is_module)
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevd.py", line 974, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm CE.app/Contents/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/edward/Desktop/baselines/cnn_mel.py", line 720, in <module>
model = get_1d_dummy_model(params_learn=params_learn, params_extract=params_extract)
File "/Users/edward/Desktop/baselines/architectures.py", line 71, in get_1d_dummy_model
x = Conv1D(4, kernel_size=9, activation=activations.relu, data_format='channels_first', padding='same')(inp)
File "/Users/edward/miniconda3/envs/baseline/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "/Users/edward/miniconda3/envs/baseline/lib/python3.6/site-packages/keras/layers/convolutional.py", line 337, in __init__
**kwargs)
TypeError: __init__() got multiple values for keyword argument 'data_format'
Any insight as to what is being done wrong? Thanks!
It seems that you are not using the most recent code (not just the most recent release). For the Conv1D layer, data_format='channels_first' is not supported even in the most recent release, 2.1.6. You will need to clone and use the code from the master branch; support was added by this commit on 5/7/2018. The documentation is always synced to master, which can be confusing. The idea (from the Keras creator François Chollet) is that
versions only exist to force PyPI users to upgrade. They are not meaningful. You should be constantly synced to master.
You can find some old Keras documentation here.
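If staying on a released version is preferable, one possible workaround (my own suggestion, not part of the answer above) is to keep Conv1D in its default channels_last mode and transpose the input with a Permute layer first:

from keras import activations
from keras.layers import Input, Permute, Conv1D, MaxPool1D, Flatten, Dense
from keras.models import Model

n_class = 10
inp = Input(shape=(1, 100))  # (channels, length)
x = Permute((2, 1))(inp)     # -> (length, channels), i.e. channels_last
x = Conv1D(4, kernel_size=9, activation=activations.relu, padding='same')(x)
x = MaxPool1D(pool_size=5)(x)
x = Flatten()(x)
out = Dense(n_class, activation='softmax')(x)
model = Model(inputs=inp, outputs=out)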
I am having difficulty implementing the pre-trained Xception model for binary classification over a new set of classes. The model is successfully returned from the following function:
# adapted from:
# https://github.com/fchollet/keras/issues/4465
from keras.applications.xception import Xception
from keras.layers import Input, Flatten, Dense
from keras.models import Model

def get_xception(in_shape, trn_conv):
    # Get back the convolutional part of Xception trained on ImageNet
    model = Xception(weights='imagenet', include_top=False)

    # Here the input images have been resized to 299x299x3, so this is the
    # same as Xception's native input
    input = Input(in_shape, name='image_input')

    # Use the generated model
    output = model(input)

    # Only train the top fully connected layers (keep pre-trained feature extractors)
    for layer in model.layers:
        layer.trainable = False

    # Add the fully-connected layers
    x = Flatten(name='flatten')(output)
    x = Dense(2048, activation='relu', name='fc1')(x)
    x = Dense(2048, activation='relu', name='fc2')(x)
    x = Dense(2, activation='softmax', name='predictions')(x)

    # Create your own model
    my_model = Model(input=input, output=x)
    my_model.compile(loss='binary_crossentropy', optimizer='SGD')
    return my_model
This returns fine; however, when I run this code:
model=get_xception(shp,trn_feat)
in_data=HDF5Matrix(str_trn,'/inputs')
labels=HDF5Matrix(str_trn,'/labels')
model.fit(in_data,labels,shuffle="batch")
I get the following error:
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/keras/engine/training.py", line 1576, in fit
self._make_train_function()
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/keras/engine/training.py", line 960, in _make_train_function
loss=self.total_loss)
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/keras/legacy/interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/keras/optimizers.py", line 169, in get_updates
v = self.momentum * m - lr * g # velocity
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 705, in _run_op
return getattr(ops.Tensor, operator)(a._AsTensor(), *args)
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 865, in binary_op_wrapper
return func(x, y, name=name)
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.py", line 1088, in _mul_dispatch
return gen_math_ops._mul(x, y, name=name)
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_math_ops.py", line 1449, in _mul
result = _op_def_lib.apply_op("Mul", x=x, y=y, name=name)
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/tsmith/.virtualenvs/keras/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[204800,2048]
[[Node: training/SGD/mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"](SGD/momentum/read, training/SGD/Variable/read)]]
I have been tracing the function calls for hours now and still can't figure out what is happening. The system should be far above and beyond the requirements. System specs:
Ubuntu Version: 14.04.5 LTS
Tensorflow Version: 1.3.0
Keras Version: 2.0.7
28x dual core Intel Xeon processor (1.2 GHz)
4x NVidia GeForce 1080 (8Gb memory each)
Any clues as to what is going wrong here?
Per Yu-Yang, the simplest solution was to reduce the batch size; everything ran fine after that!
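A minimal sketch of that fix (the value 8 is only an illustration, Keras defaults to a batch size of 32, so pick whatever fits in GPU memory; shp, trn_feat and str_trn come from the original setup):

from keras.utils.io_utils import HDF5Matrix

model = get_xception(shp, trn_feat)
in_data = HDF5Matrix(str_trn, '/inputs')
labels = HDF5Matrix(str_trn, '/labels')
model.fit(in_data, labels, shuffle="batch", batch_size=8)  # smaller batch to avoid the OOM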