Strange error while creating a convolutional neural network - TensorFlow

I want to create a convolutional neural network model in Keras. First of all, I imported all the necessary layers like this:
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
Then I tried the following model:
model = Sequential()
model.add(Conv2D(filters=32,kernel_size=(5,5),padding='valid',input_shape=(1,28,28),activation='relu',data_format='channels_first'))
model.add(MaxPooling2D(2,2, dim_ordering='tf'))
moodel.add(Dropout(0.2))
model.add(Flattenn())
model.add(Dense(128,activation='relu'))
model.add(Dense(num_classes,activation='softmax'))
model.compile(loss='categorical_crossentropy' , optimizer='adam' , metrics=['accuracy' ])
but I got the following error:
TypeError: The added layer must be an instance of class Layer. Found: <tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7faaefc57438>
Could you please help me fix this error?

moodel.add(Dropout(0.2))
Typo - moodel -> model
model.add(Flattenn())
Typo - Flattenn -> Flatten
Also, you should import and use Convolution2D instead of Conv2D:
from tensorflow.keras.layers import Convolution2D
model = Sequential()
model.add(Convolution2D(filters=32,...
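Putting the fixes together: the "must be an instance of class Layer" TypeError itself usually comes from mixing layers imported from tensorflow.keras with a Sequential imported from standalone keras, so the sketch below keeps every import in the tensorflow.keras namespace (where Convolution2D is an alias of Conv2D) and replaces the Keras 1 dim_ordering argument with data_format. num_classes = 10 is an assumption for MNIST-style labels:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

num_classes = 10  # assumption: 10 output classes (MNIST-style)

model = Sequential()
# All layers come from tensorflow.keras; mixing them with a Sequential
# imported from standalone keras triggers the TypeError from the question.
model.add(Conv2D(filters=32, kernel_size=(5, 5), padding='valid',
                 input_shape=(1, 28, 28), activation='relu',
                 data_format='channels_first'))
# dim_ordering='tf' is Keras 1 syntax; the current argument is data_format.
model.add(MaxPooling2D(pool_size=(2, 2), data_format='channels_first'))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])
```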

Related

I used the function models.Sequential() from Keras, but then it tells me that the module is not callable

from keras import models
from keras import layers
network = models.Sequential()
and then it tells me that the module is not callable, and I don't know what to do with the layers.
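The snippet above is correct as written; "'module' object is not callable" usually means the module object itself was called, e.g. keras.models(...), or a lowercase models.sequential() was used. A minimal sketch that runs (shown with the tensorflow.keras namespace, assuming TF 2.x is installed; the layer sizes are placeholders):

```python
from tensorflow.keras import models, layers

# Sequential (capital S) is a class inside the models module; calling the
# module itself, e.g. models(), raises "'module' object is not callable".
network = models.Sequential()
network.add(layers.Dense(16, activation='relu', input_shape=(4,)))
network.add(layers.Dense(1, activation='sigmoid'))
```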

Still the same error: module 'tensorflow' has no attribute 'get_default_graph'

Could you please help me with TensorFlow? I have been very frustrated with it for many months. I know it doesn't work well with Python 3.7. Now I am using Python 3.6 with TensorFlow 2.0, and it is still so frustrating to use.
Here is my code:
import keras
import keras.backend as K
from keras.layers.core import Activation
from keras.models import Sequential,load_model
from keras.layers import Dense, Dropout, LSTM
The error AttributeError: module 'tensorflow' has no attribute 'get_default_graph' is raised on the next line:
model = Sequential()
Thank you so much.
Try importing the libraries like this:
from tensorflow.keras.layers import Dense, Dropout, LSTM
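Concretely, keeping every import inside the tensorflow.keras namespace avoids the get_default_graph error under TF 2.0, because standalone keras (up to 2.2.x) still calls the TF 1.x graph API that was removed. A sketch with placeholder layer sizes:

```python
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense, Dropout, LSTM

model = Sequential()  # no get_default_graph error under TF 2.x
model.add(LSTM(32, input_shape=(10, 4)))  # placeholder sizes
model.add(Dropout(0.2))
model.add(Dense(1))
```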
I solved it. I downgraded tensorflow from 2.0 to 1.8, and then it runs fine.

How do I write a custom wavelet activation function for a Wavelet Neural Network using Keras or TensorFlow

Trying to build a Wavelet Neural Network using Keras/TensorFlow. For this neural network I am supposed to use a wavelet function as my activation function.
I have tried doing this by simply creating a custom activation function. However, there seems to be an issue with the backpropagation.
import numpy as np
import pandas as pd
import pywt
import matplotlib.pyplot as plt
import tensorflow as tf
from keras.models import Model
import keras.layers as kl
from keras.layers import Input, Dense
import keras as kr
from keras.layers import Activation
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects
def custom_activation(x):
    return pywt.dwt(x, 'db1') - 1
get_custom_objects().update({'custom_activation':Activation(custom_activation)})
model = Sequential()
model.add(Dense(12, input_dim=8, activation=custom_activation))
model.add(Dense(8, activation=custom_activation)
model.add(Dense(1, activation=custom_activation)
I get the following error when running the code in its entirety:
SyntaxError: invalid syntax
If I run
model = Sequential()
model.add(Dense(12, input_dim=8, activation=custom_activation))
model.add(Dense(8, activation=custom_activation)
I get the following error
SyntaxError: unexpected EOF while parsing
And if I run
model = Sequential()
model.add(Dense(12, input_dim=8, activation=custom_activation))
I get the following error
TypeError: Cannot convert DType to numpy.dtype
model.add() is a function call. You must close the parentheses; otherwise it is a syntax error.
These two lines in your code example will cause a syntax error.
model.add(Dense(8, activation=custom_activation)
model.add(Dense(1, activation=custom_activation)
Regarding the 2nd question:
I get the following error
TypeError: Cannot convert DType to numpy.dtype
This looks like a numpy function was invoked with incorrect arguments. Perhaps you can try to figure out which line in the script caused the error.
Also, an activation function must be written using Keras backend operations, or you need to compute the gradients for it manually. Training a neural network requires computing the gradient of every function on the backward pass in order to adjust the weights. As far as I understand it, you can't just call an arbitrary Python library (pywt operates on NumPy arrays, not tensors) as an activation function: you either re-implement its operations using tensor operations, or you use Python operations on eager tensors and compute the gradients yourself.
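To illustrate that last point: a wavelet-shaped activation can be written entirely in TensorFlow ops, so the gradients come for free. The sketch below uses the Mexican hat (Ricker) wavelet rather than the asker's db1 decomposition, which has no direct single-tensor-op equivalent; it is an illustration, not a drop-in replacement:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def mexican_hat(x):
    # Ricker ("Mexican hat") wavelet: psi(x) = (1 - x^2) * exp(-x^2 / 2).
    # Built from TensorFlow ops only, so autodiff supplies the gradients.
    return (1.0 - tf.square(x)) * tf.exp(-tf.square(x) / 2.0)

model = Sequential()
model.add(Dense(12, input_dim=8, activation=mexican_hat))
model.add(Dense(8, activation=mexican_hat))
model.add(Dense(1, activation=mexican_hat))  # parentheses closed on every line
model.compile(optimizer='adam', loss='mse')
```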

How to transform a Keras model into a TPU model

I am trying to transform my Keras model in the Google Cloud console into a TPU model. Unfortunately I am getting the error shown below. My minimal example is the following:
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
import tensorflow as tf
import os
model = Sequential()
model.add(Dense(32, input_dim=784))
model.add(Dense(32))
model.add(Activation('relu'))
model.compile(optimizer='rmsprop', loss='mse')
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
    model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))
My output is:
Using TensorFlow backend.
Traceback (most recent call last):
File "cloud_python4.py", line 11, in <module>
    tpu_model = tf.contrib.tpu.keras_to_tpu_model(
AttributeError: module 'tensorflow.contrib.tpu' has no attribute 'keras_to_tpu_model'
The keras_to_tpu_model method seems experimental as indicated on the tensorflow website. Has it recently been removed? If so, how can I proceed to make use of TPUs to estimate my Keras model? If the keras_to_tpu_model method would be still available, why can I not invoke it?
I am assuming you defined your TPU_WORKER as below:
import os
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
Instead of converting your model to a TPU model, build a distribution strategy. This is the mechanism by which each batch will be distributed to the eight TPU cores and by which the loss from each core will be calculated.
resolver = tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)
tf.contrib.distribute.initialize_tpu_system(resolver)
strategy = tf.contrib.distribute.TPUStrategy(resolver)
With the strategy in place, build and compile your model. This should work quite nicely for regression.
with strategy.scope():
    model = Sequential()
    model.add(Dense(32, input_dim=784))
    model.add(Dense(32))
    model.add(Activation('relu'))
    model.compile(optimizer='rmsprop', loss='mse')
Import keras from tensorflow.
This is because tf.contrib.tpu.keras_to_tpu_model() requires a tensorflow-version Model, not the keras version.
For example, use from tensorflow.keras.layers import Dense, Activation instead, and so on.
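For completeness: under TF 2.x the contrib namespace is gone entirely, and the distribution-strategy route above maps onto tf.distribute. The sketch below is a hedged translation (TPUClusterResolver picks up the Colab TPU address on its own, so no TPU_WORKER is needed) with a fallback to the default strategy so the same code also runs without a TPU:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation

# TF 2.x equivalents of the contrib calls:
#   tf.contrib.cluster_resolver.TPUClusterResolver -> tf.distribute.cluster_resolver.TPUClusterResolver
#   tf.contrib.distribute.TPUStrategy              -> tf.distribute.TPUStrategy
try:
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except Exception:  # no TPU reachable: fall back to the default strategy
    strategy = tf.distribute.get_strategy()

with strategy.scope():
    model = Sequential()
    model.add(Dense(32, input_dim=784))
    model.add(Dense(32))
    model.add(Activation('relu'))
    model.compile(optimizer='rmsprop', loss='mse')
```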

Getting Cuda code from Tensorflow or Keras

I have a model in Keras (or its TF version). I want to get CUDA code that is equivalent to it. Is there a way to do that?
I know that from Keras I can look at the basic graph topology using the following code:
# LSTM for sequence classification in the IMDB dataset
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras import backend as K
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
max_review_length = 500
# create the model
embedding_vecor_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
g = K.get_session().graph
# GIVES THE GRAPH TOPOLOGY!:
graph_def = g.as_graph_def()
Is there a way to get the .cc file that represents this code?
Thanks!
There is no functionality in TensorFlow to generate C++/CUDA source code from a graph. However, the XLA framework supports ahead-of-time compilation, which generates efficient native code from your TensorFlow graph that you can then execute on your CUDA-capable GPU.
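A related, currently supported route is XLA's just-in-time mode: you won't get a readable .cu file, but jit_compile=True (available since roughly TF 2.5, an assumption about your version) lowers the traced graph through XLA, which emits fused device code instead of dispatching one kernel per op:

```python
import tensorflow as tf

@tf.function(jit_compile=True)  # ask XLA to compile the traced graph
def dense_step(x, w, b):
    # One dense-layer step; XLA can fuse the matmul, add and sigmoid
    # into a single compiled kernel for the target device.
    return tf.nn.sigmoid(tf.matmul(x, w) + b)

x = tf.ones((2, 3))
w = tf.ones((3, 1))
b = tf.zeros((1,))
y = dense_step(x, w, b)  # sigmoid(3.0) for every element
```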