Using Sparse Tensors as Input for Autoencoders - tensorflow

I have a one-hot-encoded sparse matrix that can't be converted into a dense matrix due to its size.
I would like to reduce its dimensionality using an autoencoder. Currently I am trying to use Tensorflow and its Keras library for that.
The Tensorflow docs state that sparse tensors exist and that they can be used in Keras (see https://www.tensorflow.org/guide/sparse_tensor).
The problem is that none of the autoencoders I've found on the internet seem to work with sparse tensors.
I have prepared a small code example which stops after the first training epoch with the error message: "Failed to convert elements of SparseTensor to Tensor. Consider casting elements to a supported type.".
My questions are:
Do you have an idea how to improve the code, or ideally an example I can look up?
If not: do you have other ideas on how to do what I would like to do (e.g. another library, another method, etc.)?
Code Example:
# necessary imports (tensorflow.keras used consistently; mixing the standalone
# keras package with tf.keras is a common source of errors)
import tensorflow as tf
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Input, Dense, ActivityRegularization

# example one-hot-encoded matrix: 10 records, each with one of 4 distinct categories
sparse_tensor = tf.sparse.SparseTensor(
    indices=[[0, 3], [1, 3], [2, 0], [3, 1], [4, 0],
             [5, 2], [6, 2], [7, 1], [8, 3], [9, 1]],
    values=[1 for i in range(10)],
    dense_shape=[10, 4])

encoder = Sequential([
    Input(shape=(4,), sparse=True),
    Dense(1, activation='relu'),
    ActivityRegularization(l1=1e-3)
])
decoder = Sequential([
    Dense(4, activation='sigmoid', input_shape=(1,)),
])

autoencoder = Sequential([encoder, decoder])
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x=sparse_tensor, y=sparse_tensor, epochs=5, batch_size=5, shuffle=True)
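Edit: one possible workaround I am experimenting with (a sketch, assuming TF 2.x and the sparse_tensor/autoencoder defined above, not a verified fix): keep the matrix sparse in memory and densify only one batch at a time inside a tf.data pipeline, so a full dense copy is never materialized. Note that with this approach the Input layer no longer needs sparse=True:
# sketch: densify per batch inside a tf.data pipeline
dataset = (
    tf.data.Dataset.from_tensor_slices(sparse_tensor)  # slices stay sparse
    .shuffle(10)
    .batch(5)
    # cast to float and return identical (input, target) pairs for the autoencoder
    .map(lambda b: (tf.cast(tf.sparse.to_dense(b), tf.float32),) * 2)
)
autoencoder.fit(dataset, epochs=5)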

Related

How can I make my Neural Network predict new output values?

I'm working on a project where I have 3 inputs (v, f, n) and 1 output (delta(t)).
I'm trying to test the effect of the inputs on the output and to figure out which input is the most influential in different situations, so I would like to predict new output values that depend on new input values.
I have been testing this system and collected a data table of 1000 rows.
I'm new to this whole neural network thing, so I don't know what the activation function, the loss function, etc. should be.
I've been trying to use some Keras models, but I get wrong predictions when calling model.predict() on some input values.
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# simple feed-forward regression network: 3 inputs -> 1 output
model = Sequential()
model.add(Dense(16, activation='relu', input_shape=(3,)))
model.add(Dense(16, activation='relu'))
model.add(Dense(1))
model.compile(optimizer=Adam(), loss='mse')

# first 3 columns are the inputs (v, f, n), the 4th is the output delta(t)
data = np.array(pd.read_excel(r'Data.xlsx'))
x = data[:, :3]
y = data[:, 3]

target = model.fit(x, y, validation_split=0.2, epochs=15000,
                   batch_size=256)

# check some predictions:
print(model.predict(np.array([[0.9, 840370875, 240]])))
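One detail worth checking, given the raw magnitudes above (0.9 vs. 840370875 vs. 240): features on wildly different scales often make gradient descent struggle. Below is a minimal sketch, assuming the x, y and model from the snippet above, of standardizing the inputs with scikit-learn before fit() and predict():
from sklearn.preprocessing import StandardScaler

# standardize each input column to zero mean / unit variance
scaler = StandardScaler()
x_scaled = scaler.fit_transform(x)
model.fit(x_scaled, y, validation_split=0.2, epochs=15000, batch_size=256)

# any new input must pass through the same scaler before predicting
new_input = np.array([[0.9, 840370875, 240]])
print(model.predict(scaler.transform(new_input)))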

Trying to understand Tensorflow input_shape

I have some confusion regarding Tensorflow's input_shape.
Suppose there are 3 documents (each row) in "doc" defined below, and the vocabulary has 4 words (each sublist in each row).
Further suppose that each word is represented by 2 numbers via a word embedding.
The program only works when I specify input_shape=(3,4,2) on a Dense layer.
But when I use an LSTM layer, the program only works with input_shape=(4,2), not with input_shape=(3,4,2).
So how do I specify the input shape for such inputs? How do I make sense of it?
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.optimizers import Adam

# 3 documents (rows), 4 words each (sublists), each word embedded as 2 numbers
doc = [
    [[1, 0], [0, 0], [0, 0], [0, 0]],
    [[0, 0], [1, 0], [0, 0], [0, 0]],
    [[0, 0], [0, 0], [1, 0], [0, 0]]
]

model = Sequential()
model.add(Dense(2, input_shape=(3, 4, 2)))  # model.add(LSTM(2, input_shape=(4, 2)))
model.compile(optimizer=Adam(learning_rate=0.0001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

output = model.predict(doc)
print(model.weights)
print(output)
The input_shape argument of a keras.layers.LSTM layer expects a 2D shape of [timesteps, features]. Your doc has the shape [batch_size, timesteps, features] and therefore one dimension too many.
You can use the batch_input_shape argument instead if you want to specify the batch_size, too.
To do so, you just have to replace this line of your code:
model.add(LSTM(2,input_shape=(4,2)))
With this one:
model.add(LSTM(2,batch_input_shape=(3,4,2)))
If you set a specific batch_size in your model and then feed it a size other than 3 (in your case), you will get an error. With input_shape you keep the flexibility to feed any batch size to the network.
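To illustrate that last point, a small sketch (assuming TF 2.x): with input_shape=(4, 2) the same model accepts any batch size, while with batch_input_shape=(3, 4, 2) only batches of 3 pass:
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

flexible = Sequential([LSTM(2, input_shape=(4, 2))])
print(flexible.predict(np.zeros((3, 4, 2))).shape)  # (3, 2)
print(flexible.predict(np.zeros((7, 4, 2))).shape)  # (7, 2) -- any batch size works

fixed = Sequential([LSTM(2, batch_input_shape=(3, 4, 2))])
print(fixed.predict(np.zeros((3, 4, 2))).shape)     # (3, 2)
# fixed.predict(np.zeros((7, 4, 2)))                # would raise a shape error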

Why does Keras raise a shape error in the last dense layer?

I built a simple NN to distinguish integers from decimals; my input data is a 1-dimensional array, and the final output should be the probability that the number is an integer.
At first I succeeded when the last layer (name: output) had 1 unit. But it raised a ValueError when I changed the last dense layer to two units, because I wanted to output both the probability that a number x is an integer and the probability that it is a decimal.
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense
import numpy as np
from sklearn.utils import shuffle

def train():
    t = []
    a = []
    for i in range(0, 8000):  # generate some training data
        ran = np.random.randint(2)
        if ran == 0:
            y = np.random.uniform(-100, 100)  # a random decimal
            t.append(y)
            a.append(0)
        else:
            y = np.random.randint(1000)  # a random integer
            t.append(y)
            a.append(1)
    t = np.asarray(t)
    a = np.asarray(a)
    pt = t.reshape(-1, 1)  # reshape for fit()
    pa = a.reshape(-1, 1)
    pt, pa = shuffle(pt, pa)

    model = Sequential()
    dense = Dense(units=32, input_shape=(1,), activation='relu')
    dense2 = Dense(units=64, activation='relu')
    output = Dense(units=2, activation='softmax')  # HERE is the problem
    model.add(dense)
    model.add(dense2)
    model.add(output)
    model.summary()
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    model.fit(pt, pa, validation_split=0.02, batch_size=10, epochs=50, verbose=2)
    model.save('integer_predictor.h5')

train()
ValueError: Error when checking target: expected dense_2 to have shape (2,) but got array with shape (1,)
This should solve your problem:
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
Since you have 2 output units, you can't use binary_crossentropy; with two units this is a 2-class classification problem. When your targets are not one-hot encoded, you need sparse_categorical_crossentropy. If your targets are one-hot encoded, categorical_crossentropy works with more than one output unit.
Read this to get more insight.
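If you prefer categorical_crossentropy instead, a sketch of the one-hot alternative mentioned above (using tf.keras's to_categorical; this assumes the pt/pa arrays inside the train() function above):
from tensorflow.keras.utils import to_categorical

# one-hot encode the integer labels: 0 -> [1, 0], 1 -> [0, 1]
pa_onehot = to_categorical(pa, num_classes=2)  # shape (8000, 2)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(pt, pa_onehot, validation_split=0.02, batch_size=10, epochs=50, verbose=2)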

Tensorflow 2.0: save preprocessing tokenizer for NLP into Tensorflow Serving

I have trained a Tensorflow 2.0 Keras model to do some natural language processing.
What I am doing, basically, is take the titles of different news articles and predict which category they belong to. In order to do that I have to tokenize the sentences and then pad the sequences with 0 so the arrays all have the same length that I defined:
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

max_words = 1500
tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(x.values)
X = tokenizer.texts_to_sequences(x.values)
X = pad_sequences(X, maxlen=32)

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM, GRU, InputLayer

numero_clases = 5

modelo_sentimiento = Sequential()
modelo_sentimiento.add(InputLayer(input_tensor=tokenizer.texts_to_sequences,
                                  input_shape=(None, 32)))
modelo_sentimiento.add(Embedding(max_words, 128, input_length=X.shape[1]))
modelo_sentimiento.add(LSTM(256, dropout=0.2, recurrent_dropout=0.2, return_sequences=True))
modelo_sentimiento.add(LSTM(256, dropout=0.2, recurrent_dropout=0.2))
modelo_sentimiento.add(Dense(numero_clases, activation='softmax'))

modelo_sentimiento.compile(loss='categorical_crossentropy', optimizer='adam',
                           metrics=['acc', f1_m, precision_m, recall_m])  # f1_m etc. are custom metrics defined elsewhere
print(modelo_sentimiento.summary())
Now that it is trained, I want to deploy it, for example with Tensorflow Serving, but I don't know how to ship this preprocessing (the tokenizer) to the server, the way a scikit-learn pipeline would. Is it possible to do that here, or do I have to save the tokenizer, do the preprocessing myself, and then call the trained model to predict?
Unfortunately, you won't easily be able to do something as elegant as a sklearn Pipeline with Keras models (at least not that I'm aware of). Of course you could create your own Transformer to perform the preprocessing you need, but given my experience trying to incorporate custom objects into sklearn pipelines, I don't think it's worth the effort.
What you can do is save the tokenizer along with its metadata using:
import pickle

with open('tokenizer_data.pkl', 'wb') as handle:
    pickle.dump(
        {'tokenizer': tokenizer, 'num_words': num_words, 'maxlen': pad_len},
        handle)
And then load it when you want to use it:
with open('tokenizer_data.pkl', 'rb') as f:
    data = pickle.load(f)
tokenizer = data['tokenizer']
num_words = data['num_words']
maxlen = data['maxlen']
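At inference time, whichever process receives the request then has to reapply the same preprocessing before calling the served model. Roughly (a sketch, assuming the modelo_sentimiento model and the restored tokenizer and maxlen from above):
from tensorflow.keras.preprocessing.sequence import pad_sequences

# reproduce the training-time preprocessing for a new title
seqs = tokenizer.texts_to_sequences(["some news headline"])
batch = pad_sequences(seqs, maxlen=maxlen)
predictions = modelo_sentimiento.predict(batch)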

Tensorflow Keras Regression model - Biased Results

I am trying to implement a Keras regression model on a dataset for learning purposes. I have taken the data from the Kaggle Loan Default Prediction challenge, and I am trying to predict whether a person will default on a loan or not.
The target column is imbalanced: the majority of the observations have "0" as their value. I have tried the following approaches to overcome this imbalance: (a) downsampling the majority class, (b) upsampling the minority class, and (c) the SMOTE algorithm. None of these seem to help, and the model's predictions stay biased towards "0" because that class dominates the dataset. I used the resample method from sklearn for the downsampling and upsampling.
What other approaches can I try to overcome this problem, achieve good accuracy with my model on this data, and get realistic predictions from it? I am sharing my code:
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import L1L2
import pandas
import numpy as np
from sklearn.model_selection import train_test_split  # sklearn.cross_validation is deprecated
from sklearn.impute import SimpleImputer              # sklearn.preprocessing.Imputer is deprecated
from sklearn.metrics import roc_auc_score
import statsmodels.api as sm
from sklearn import preprocessing as pre

train = pandas.read_csv('/train_v2.csv/train_v2.csv')

# Defining the target column
train_loss = train.loss

# Defining the features for the model
train = train[['f527', 'f528', 'f271']]

# Imputing missing values and scaling the features
imp = SimpleImputer()
imp.fit(train)
train = imp.transform(train)
train = pre.StandardScaler().fit_transform(train)

# Splitting the data into training and testing samples
X_train, X_test, y_train, y_test = train_test_split(
    train, train_loss, test_size=0.3, random_state=42)

# network with L1 and L2 regularization
reg = L1L2(l1=0.01, l2=0.01)

model = Sequential()
model.add(Dense(13, kernel_initializer='normal', activation='relu',
                kernel_regularizer=reg, input_dim=X_train.shape[1]))
model.add(Dense(6, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))

# Compile and fit the model
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test))
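One approach not in the list above (a sketch, not a tested solution for this dataset): frame the task as classification with a sigmoid output and binary cross-entropy, and weight the loss by class frequency instead of resampling. This assumes the 'loss' target is binarized into y_binary (default / no default):
from sklearn.utils.class_weight import compute_class_weight

# assumed binarization of the 'loss' target: any positive loss counts as default
y_binary = (y_train > 0).astype(int)
weights = compute_class_weight(class_weight='balanced',
                               classes=np.unique(y_binary), y=y_binary)

clf = Sequential()
clf.add(Dense(13, activation='relu', input_dim=X_train.shape[1]))
clf.add(Dense(1, activation='sigmoid'))
clf.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
clf.fit(X_train, y_binary, epochs=10, class_weight=dict(enumerate(weights)))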