Error adding layers in neural network in TensorFlow [closed]

import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical
network=models.Sequential() # this initializes a sequential model that we will call network
network.add(layers.Dense(10, activation = 'relu') # this adds a dense hidden layer
network.add(layers.Dense(8, activation = 'softmax')) # this is the output layer
I am trying to create a 2-layer neural network model in TensorFlow and am getting this error:
File "<ipython-input-6-0dde2ff676f8>", line 7
network.add(layers.Dense(8, activation = 'softmax')) # this is the output layer
^
SyntaxError: invalid syntax
May I know why I'm getting this error for the output layer but not for the hidden layer? Thanks.

You are missing a closing parenthesis at the end of the hidden-layer line. Python only raises the SyntaxError on the next line because the interpreter keeps reading past the unclosed parenthesis, so the error is reported at the output layer even though the mistake is in the line above it.
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras.utils import to_categorical
network=models.Sequential() # this initializes a sequential model that we will call network
network.add(layers.Dense(10, activation = 'relu')) # this adds a dense hidden layer
network.add(layers.Dense(8, activation = 'softmax')) # this is the output layer

Related

Using Sparse Tensors as Input for Autoencoders

I have a one-hot-encoded sparse matrix which can't be transformed into a normal dense matrix due to its size.
I would like to reduce its dimensionality using an autoencoder. Currently I am trying to use TensorFlow and its Keras library for that.
The TensorFlow docs state that sparse tensors exist and that they can be used in Keras (see https://www.tensorflow.org/guide/sparse_tensor).
The problem is that none of the autoencoders I've found on the internet seem to work with sparse tensors.
I have prepared a small code example which stops after the first training epoch with the error message: "Failed to convert elements of SparseTensor to Tensor. Consider casting elements to a supported type.".
My questions are:
Do you have an idea how to improve the code, or ideally an example I can look at?
If not: do you have other ideas on how to do what I would like to do (e.g. another library, another method, etc.)?
Code Example:
# necessary imports (all taken from tensorflow.keras to avoid mixing
# the standalone keras package with tensorflow.keras)
import tensorflow as tf
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Input, Dense, ActivityRegularization
from tensorflow.keras import backend as K
from tensorflow.keras import regularizers
# example one-hot-encoded matrix: 10 records, each with one of 4 distinct categories
sparse_tensor = tf.sparse.SparseTensor(
    indices=[[0, 3], [1, 3], [2, 0], [3, 1], [4, 0], [5, 2], [6, 2], [7, 1], [8, 3], [9, 1]],
    values=[1 for i in range(10)],
    dense_shape=[10, 4])
encoder = Sequential([
    Input(shape=(4,), sparse=True),
    Dense(1, activation='relu'),
    ActivityRegularization(l1=1e-3)
])
decoder = Sequential([
    Dense(4, activation='sigmoid', input_shape=(1,)),
])
autoencoder = Sequential([encoder, decoder])
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x=sparse_tensor, y=sparse_tensor, epochs=5, batch_size=5, shuffle=True)
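One possible workaround, offered as a sketch rather than a verified fix: the failure often comes from the loss function trying to densify the sparse target y, so you can keep the input sparse and densify only the target, one batch at a time, through tf.data. Only batch_size x 4 values are materialized at once, so the full dense matrix is never built:
# hypothetical workaround: densify only the per-batch target
dataset = (
    tf.data.Dataset.from_tensor_slices(sparse_tensor)
    .shuffle(10)
    .batch(5)
    .map(lambda x: (x, tf.cast(tf.sparse.to_dense(x), tf.float32)))
)
autoencoder.fit(dataset, epochs=5)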

How to use Bahdanau attention for timeseries prediction? [closed]

Can we use Bahdanau attention for a multivariate time-series prediction problem? Using the Bahdanau implementation from here, I have come up with the following code for time-series prediction.
from tensorflow.keras.layers import Input, LSTM, Concatenate, Flatten, Dense
from attention_keras import AttentionLayer
from tensorflow.keras import Model
num_inputs = 5
seq_length = 10
inputs = Input(shape=(seq_length, num_inputs), name='inputs')
lstm_out = LSTM(64, return_sequences=True)(inputs)
lstm_out = LSTM(64, return_sequences=True)(lstm_out)
# Attention layer
attn_layer = AttentionLayer(name='attention_layer')
attn_out, attn_states = attn_layer([lstm_out, lstm_out])
# Concat attention output and LSTM output; in the original code this was the decoder LSTM
concat_out = Concatenate(axis=-1, name='concat_layer')([lstm_out, attn_out])
flat_out = Flatten()(concat_out)
# Dense layers
dense_out = Dense(seq_length, activation='relu')(flat_out)
predictions = Dense(1)(dense_out)  # prediction head (dense_time in the original snippet was undefined)
# Full model
full_model = Model(inputs=inputs, outputs=predictions)
full_model.compile(optimizer='adam', loss='mse')
For my data, the model does perform better than a vanilla LSTM without attention, but I am not sure whether this implementation makes sense or not.
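As an aside, if you would rather avoid the third-party layer, recent TensorFlow versions (2.1+, an assumption about your setup) ship a built-in Bahdanau-style layer, tf.keras.layers.AdditiveAttention. A minimal sketch of the same self-attention wiring using only built-ins:
from tensorflow.keras.layers import Input, LSTM, AdditiveAttention, Concatenate, Flatten, Dense
from tensorflow.keras import Model
num_inputs, seq_length = 5, 10
inputs = Input(shape=(seq_length, num_inputs), name='inputs')
lstm_out = LSTM(64, return_sequences=True)(inputs)
# additive (Bahdanau-style) self-attention: query = value = LSTM outputs
attn_out = AdditiveAttention(name='attention_layer')([lstm_out, lstm_out])
concat_out = Concatenate(axis=-1)([lstm_out, attn_out])
dense_out = Dense(seq_length, activation='relu')(Flatten()(concat_out))
predictions = Dense(1)(dense_out)
full_model = Model(inputs=inputs, outputs=predictions)
full_model.compile(optimizer='adam', loss='mse')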

Strange error while creating a convolutional neural network

I want to create a convolutional neural network model in Keras. First of all, I imported all the necessary libraries like this:
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
Then I tried the following model:
model =Sequential()
model.add(Conv2D(filters=32,kernel_size=(5,5),padding='valid',input_shape=(1,28,28),activation='relu',data_format='channels_first'))
model.add(MaxPooling2D(2,2, dim_ordering='tf'))
moodel.add(Dropout(0.2))
model.add(Flattenn())
model.add(Dense(128,activation='relu'))
model.add(Dense(num_classes,activation='softmax'))
model.compile(loss='categorical_crossentropy' , optimizer='adam' , metrics=['accuracy' ])
but I got the following error:
TypeError: The added layer must be an instance of class Layer. Found: <tensorflow.python.keras.layers.convolutional.Conv2D object at 0x7faaefc57438>
Could you please help me fix this error?
moodel.add(Dropout(0.2))
Typo - moodel -> model
model.add(Flattenn())
Typo - Flattenn -> Flatten
Also, Convolution2D is just an alias for Conv2D, so changing the import name alone won't fix anything. The "added layer must be an instance of class Layer" error usually means the model and the layers were imported from different Keras packages, e.g. Sequential from standalone keras while Conv2D comes from tensorflow.keras. Import everything from the same package:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
model = Sequential()
model.add(Conv2D(filters=32,...
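For completeness, here is a sketch of the full model with the typos fixed and all imports taken from tensorflow.keras. Note that dim_ordering was renamed data_format in Keras 2, so the MaxPooling2D call needs updating as well; num_classes is assumed to be defined elsewhere:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
num_classes = 10  # assumption: e.g. 10 digit classes for MNIST
model = Sequential()
model.add(Conv2D(filters=32, kernel_size=(5, 5), padding='valid',
                 input_shape=(1, 28, 28), activation='relu',
                 data_format='channels_first'))
model.add(MaxPooling2D(pool_size=(2, 2), data_format='channels_first'))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])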

How to Reproduce Same Result Using Conv2d in Tensorflow.Keras?

I read many posts on Stack Overflow as well as on GitHub about this topic, but I think my situation might be a little different.
My code starts like below, and I can consistently reproduce the result 100% of the time if I only use Dense layers.
import numpy as np
import random as rn
import tensorflow as tf
import os
os.environ['PYTHONHASHSEED'] = '0'
np.random.seed(1)
rn.seed(2)
session_conf = tf.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1)
from tensorflow.keras import backend as K
tf.set_random_seed(3)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
However, every time I run, I get different results if I insert the single line model.add(Conv2D(32, 3, activation='relu')) before model.add(Flatten()).
Input > flatten > dense produces a consistent result, but input > conv2d > flatten > dense produces a different result every time I run the code.
I'd appreciate any guidance.
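Beyond seeding, the usual culprit with Conv2D on a GPU is that cuDNN selects non-deterministic convolution kernels, which seeding cannot control. A hedged sketch of the determinism switches, which depend on your TensorFlow version (these flags are assumptions about your setup, not a verified fix):
import os
# TF 1.14 - 2.0: request deterministic cuDNN convolution kernels
os.environ['TF_CUDNN_DETERMINISTIC'] = '1'
# TF 2.1+: broader deterministic-ops switch
os.environ['TF_DETERMINISTIC_OPS'] = '1'
# TF 2.8+: the supported API instead of env vars
# tf.config.experimental.enable_op_determinism()
Set these before any TensorFlow op runs. If results still differ, running on CPU is a quick way to confirm the GPU kernels are the source of the non-determinism.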

Getting Cuda code from Tensorflow or Keras

I have code in Keras (or its TensorFlow version). I want CUDA code that is equivalent to it. Is there a way to get it?
I know that from Keras I can look at the basic graph topology using the following code:
# LSTM for sequence classification in the IMDB dataset
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers.embeddings import Embedding
from keras import backend as K
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
max_review_length = 500
# create the model
embedding_vecor_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vecor_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
g = K.get_session().graph
# GIVES THE GRAPH TOPOLOGY!:
graph_def = g.as_graph_def()
Is there a way to get the .cc file that represents this code?
Thanks!
There is no functionality in TensorFlow to generate C++ or CUDA source code from a graph, but the XLA framework supports ahead-of-time compilation (tfcompile), which generates efficient machine code (not readable source) from your TensorFlow graph; via its JIT path, XLA can also compile the graph for a CUDA-capable GPU.
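If the goal is to inspect the generated code, XLA can be asked to dump its artifacts, including the PTX that actually executes on a CUDA GPU. A sketch using the TF1-style session API from the question (the dump flag name varies across versions; --xla_dump_to works on reasonably recent TensorFlow, an assumption about your install):
import os
import tensorflow as tf
from keras import backend as K
# ask XLA to dump generated artifacts (HLO, LLVM IR, PTX on GPU);
# set this before TensorFlow initializes XLA, ideally at the top of the script
os.environ['XLA_FLAGS'] = '--xla_dump_to=/tmp/xla_dump'
# enable XLA JIT compilation for the session
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1
K.set_session(tf.Session(config=config))
# ...build and run the model as above, then inspect /tmp/xla_dump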