How can I see the values of the final features that are being fed to a TensorFlow model during training? For example, in the case below I am trying to multi-hot encode my column 'x', and I want to see how the features go into my model.
This is very easy to do in sklearn, but being new to TensorFlow I don't understand how it is possible.
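For context, the sklearn equivalent would be something like MultiLabelBinarizer (a sketch on the same toy data):
from sklearn.preprocessing import MultiLabelBinarizer

# Each row of 'x' is a space-separated set of tokens; multi-hot encode it.
mlb = MultiLabelBinarizer(classes=['a', 'b', 'c'])
print(mlb.fit_transform([s.split() for s in ['a c', 'a b', 'b c']]))
# [[1 0 1]
#  [1 1 0]
#  [0 1 1]]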
import tensorflow as tf
import pandas as pd
data = {'x':['a c', 'a b', 'b c'], 'y': [1, 1, 0]}
df = pd.DataFrame(data)
Y = df['y']
X = df.drop('y', axis=1)
indicator_features = [
    tf.feature_column.indicator_column(
        categorical_column=tf.feature_column.categorical_column_with_vocabulary_list(
            key='x', vocabulary_list=['a', 'b', 'c']))]
model = tf.estimator.LinearClassifier(feature_columns=indicator_features,
                                      model_dir="/tmp/samplemodel")
training_input_fn = tf.estimator.inputs.pandas_input_fn(x=X,
                                                        y=Y,
                                                        batch_size=64,
                                                        shuffle=True,
                                                        num_epochs=None)
model.train(input_fn=training_input_fn, steps=1000)
I have been able to print the values by enabling eager execution in TensorFlow.
I am posting my solution below; any other ideas are welcome as well.
import tensorflow as tf
import tensorflow.feature_column as fc
import pandas as pd

PATH = "/tmp/sample.csv"
tf.enable_eager_execution()
COLUMNS = ['education', 'label']
train_df = pd.read_csv(PATH, header=None, names=COLUMNS)
#train_df['education'] = train_df['education'].str.split(" ")

def easy_input_function(df, label_key, num_epochs, shuffle, batch_size):
    label = df[label_key]
    ed = tf.string_split(df['education'], " ")
    df['education'] = ed
    ds = tf.data.Dataset.from_tensor_slices((dict(df), label))
    if shuffle:
        ds = ds.shuffle(10000)
    ds = ds.batch(batch_size).repeat(num_epochs)
    return ds

ds = easy_input_function(train_df, label_key='label', num_epochs=5, shuffle=False, batch_size=5)

for feature_batch, label_batch in ds.take(1):
    print('Some feature keys:', list(feature_batch.keys())[:5])
    print()
    print('A batch of education:', feature_batch['education'])
    print()
    print('A batch of labels:', label_batch)
    print(feature_batch)

education_vocabulary_list = [
    'Bachelors', 'HS-grad', '11th', 'Masters', '9th', 'Some-college',
    'Assoc-acdm', 'Assoc-voc', '7th-8th', 'Doctorate', 'Prof-school',
    '5th-6th', '10th', '1st-4th', 'Preschool', '12th']
education = tf.feature_column.categorical_column_with_vocabulary_list('education', vocabulary_list=education_vocabulary_list)
fc.input_layer(feature_batch, [fc.indicator_column(education)])
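For reference, on recent TF 2.x versions (where feature columns are deprecated) the same multi-hot features can be inspected eagerly with a Keras preprocessing layer; this is only a sketch on the toy data from the question:
import tensorflow as tf

# Each row is a space-separated set of tokens, as in the question.
x = tf.constant(['a c', 'a b', 'b c'])
tokens = tf.strings.split(x)  # ragged tensor of tokens per row
lookup = tf.keras.layers.StringLookup(vocabulary=['a', 'b', 'c'],
                                      output_mode='multi_hot')
# Note: index 0 is reserved for out-of-vocabulary tokens, so the output
# has vocabulary_size + 1 columns.
print(lookup(tokens))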
I am currently receiving one of the following errors (depending on the sequence of data prep):
TypeError: Inputs to a layer should be tensors. Got: <tensorflow.python.data.ops.dataset_ops._NestedVariant object at 0x000001E02F62FB00>
TypeError: Inputs to a layer should be tensors. Got: <_VariantDataset shapes: OrderedDict
Background: I have some parquet files, where each file is a multivariate time series. Since I am using the files for a multivariate time-series classification problem, I am storing the labels in a single numpy array. I need to use tf.data.Dataset for reading the files, since I cannot fit them all in memory.
Here is a working example that reproduces my error:
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Input, Masking, LSTM, Dropout, Dense
#!pip install tensorflow-io
import tensorflow_io as tfio
num_files = 10
num_features = 3
num_timesteps = 50
num_classes = 2
batch_size = 2
for i in range(num_files):
    df = pd.DataFrame({"A": np.random.rand(num_timesteps),
                       "B": np.random.rand(num_timesteps),
                       "C": np.random.rand(num_timesteps)})
    df.to_parquet("file_{}.parquet".format(i))

columns_init = {"A": tf.TensorSpec(tf.TensorShape([]), tf.float32),
                "B": tf.TensorSpec(tf.TensorShape([]), tf.float32),
                "C": tf.TensorSpec(tf.TensorShape([]), tf.float32)}
labels = np.array([0, 1, 1, 1, 0, 1, 0, 0, 1, 0])
train_split_size = 0.8
num_train_files = int(train_split_size * num_files)
train_names = ["file_{}.parquet".format(i) for i in range(num_train_files)]
val_names = ["file_{}.parquet".format(i) for i in range(num_train_files, num_files)]
y_train = labels[ : num_train_files]
y_val = labels[num_train_files : num_files]
def map_fn(file_names, label_ds):
    return tfio.IODataset.from_parquet(file_names, columns=columns_init), label_ds
train_ds = tf.data.Dataset.from_tensor_slices((train_names, y_train))
train_ds = train_ds.shuffle(buffer_size = num_train_files)
train_ds = train_ds.map(map_fn)
train_ds = train_ds.batch(batch_size)
train_ds = train_ds.prefetch(batch_size)
val_ds = tf.data.Dataset.from_tensor_slices((val_names, y_val))
# No need for shuffling the validation set
val_ds = val_ds.map(map_fn)
val_ds = val_ds.batch(batch_size)
val_ds = val_ds.prefetch(batch_size)
ip = Input(shape=(num_timesteps, num_features))
x = Masking()(ip)
x = LSTM(8)(x)
x = Dropout(0.8)(x)
out = Dense(1, activation='sigmoid')(x)  # sigmoid: softmax over a single unit is always 1
model = Model(ip, out)
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=["accuracy"])
model.fit(train_ds, epochs=10, validation_data=val_ds)
How do I overcome this error? I would prefer to keep my files separate and shuffle only how they are batched, since I don't want to meddle with the time-series sequences within the files. Is there a similar solution for .csv files instead of .parquet? I prefer parquet files because they are lighter and easier to read, but I am happy to convert my files if there is no workaround.
For anyone experiencing a similar issue, I found a workaround, which was not straightforward. I defined a make_common_ds function for reading all the data from the files. I applied batching with a batch size equal to the time-series length, so that the observations are split back out exactly as they were stored. (Note: this assumes that the files are already preprocessed and that all files have an equal number of rows.) After combining the features with the labels, the data is shuffled and batched according to the desired batch size. The final step uses the pack_features_vector function to change the format into tensor shapes that can be fed to the model.
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Masking, LSTM, Dropout, Dense, Input
#!pip install tensorflow-io
import tensorflow_io as tfio
num_files = 10
num_features = 3
num_timesteps = 50
num_classes = 2
batch_size = 2
for i in range(num_files):
    df = pd.DataFrame({"A": np.random.rand(num_timesteps),
                       "B": np.random.rand(num_timesteps),
                       "C": np.random.rand(num_timesteps)})
    df.to_parquet("file_{}.parquet".format(i))

columns_init = {"A": tf.TensorSpec(tf.TensorShape([]), tf.float32),
                "B": tf.TensorSpec(tf.TensorShape([]), tf.float32),
                "C": tf.TensorSpec(tf.TensorShape([]), tf.float32)}
labels = np.array([0, 1, 1, 1, 0, 1, 0, 0, 1, 0])
train_split_size = 0.8
num_train_files = int(train_split_size * num_files)
train_names = ["file_{}.parquet".format(i) for i in range(num_train_files)]
val_names = ["file_{}.parquet".format(i) for i in range(num_train_files, num_files)]
y_train = labels[ : num_train_files]
y_val = labels[num_train_files : num_files]
def make_common_ds(files):
    common_ds = tfio.IODataset.from_parquet(files[0], columns=columns_init)
    for file_name in files[1:]:
        ds = tfio.IODataset.from_parquet(file_name, columns=columns_init)
        common_ds = common_ds.concatenate(ds)
    return common_ds

def pack_features_vector(features, labels):
    """Pack the per-column features into a single array."""
    features = tf.stack(list(features.values()), axis=2)
    return features, labels
train_names_ds = make_common_ds(train_names)
train_names_ds = train_names_ds.batch(num_timesteps)
train_label_ds = tf.data.Dataset.from_tensor_slices(y_train)
train_ds = tf.data.Dataset.zip((train_names_ds, train_label_ds))
train_ds = train_ds.shuffle(buffer_size = num_train_files)
train_ds = train_ds.batch(batch_size)
train_ds = train_ds.prefetch(batch_size)
train_ds = train_ds.map(pack_features_vector)
val_names_ds = make_common_ds(val_names)
val_names_ds = val_names_ds.batch(num_timesteps)
val_label_ds = tf.data.Dataset.from_tensor_slices(y_val)
val_ds = tf.data.Dataset.zip((val_names_ds, val_label_ds))
# No need to shuffle the validation set
val_ds = val_ds.batch(batch_size)
val_ds = val_ds.prefetch(batch_size)
val_ds = val_ds.map(pack_features_vector)
ip = Input(shape=(num_timesteps, num_features))
x = Masking()(ip)
x = LSTM(8)(x)
x = Dropout(0.8)(x)
out = Dense(1, activation='sigmoid')(x)  # sigmoid: softmax over a single unit is always 1
model = Model(ip, out)
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=["accuracy"])
model.fit(train_ds, epochs=10, validation_data=val_ds)
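As for the .csv question: a similar per-file reader can be sketched with tf.data.experimental.CsvDataset, assuming the same three float columns and no header row; the rest of the pipeline (batching by num_timesteps, zipping with labels) stays the same:
def make_common_csv_ds(files):
    # One row per timestep; three float32 columns, no header line (assumed format).
    defaults = [tf.float32, tf.float32, tf.float32]
    common_ds = tf.data.experimental.CsvDataset(files[0], record_defaults=defaults)
    for file_name in files[1:]:
        ds = tf.data.experimental.CsvDataset(file_name, record_defaults=defaults)
        common_ds = common_ds.concatenate(ds)
    return common_ds
Note that CsvDataset yields a tuple per row rather than a dict, so pack_features_vector would stack list(features) instead of features.values().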
I'm using the following simple architecture to train my model, but the model shows 2-3 hours of elapsed time per epoch when I use a mask together with my padded input. Why is that happening?
Please find the code for my model below:
class lstm_raw(tf.keras.Model):
    def __init__(self, name='spectrogram'):
        super().__init__(name=name)
        self.lstm = tf.keras.layers.LSTM(32,
                                         activation="relu",
                                         kernel_initializer=tf.keras.initializers.he_uniform(seed=45),
                                         kernel_regularizer=tf.keras.regularizers.l2())
        self.dense1 = tf.keras.layers.Dense(64, activation="relu", kernel_initializer=tf.keras.initializers.he_uniform(seed=45))
        self.dense2 = tf.keras.layers.Dense(10, kernel_initializer=tf.keras.initializers.he_uniform(seed=45))

    def call(self, X):
        lstm_output = self.lstm(X[0], mask=X[1])
        dense1 = self.dense1(lstm_output)
        dense2 = self.dense2(dense1)
        return dense2

with tf.device('/device:GPU:0'):
    model1.fit(x=[X_train_pad_seq_test, X_train_mask_test], y=y_train, epochs=20, batch_size=4,
               steps_per_epoch=len(X_train_pad_seq_test)//4)
My input shapes are as below:
((1400, 17640, 1), (1400, 17640, 1))
The culprit in your code is the activation="relu" in your LSTM layer.
TensorFlow accelerates LSTM cells using cuDNN if and only if the activation is set to tanh.
Replace relu with tanh, and see your model take off!
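For instance, applied to the layer from the question, the cuDNN-eligible version would look like the sketch below (the other cuDNN conditions, e.g. recurrent_activation='sigmoid', no recurrent_dropout, and unroll=False, are the defaults and must stay that way):
self.lstm = tf.keras.layers.LSTM(
    32,
    activation="tanh",  # anything other than tanh falls back to the slow generic kernel
    kernel_initializer=tf.keras.initializers.he_uniform(seed=45),
    kernel_regularizer=tf.keras.regularizers.l2())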
Here's a generic example, which shouldn't take more than a minute or two to run.
from pandas_datareader import data as wb
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pylab import rcParams
from sklearn.preprocessing import MinMaxScaler
start = '2019-06-30'
end = '2020-06-30'
tickers = ['GOOG']
thelen = len(tickers)
price_data = []
for ticker in tickers:
    prices = wb.DataReader(ticker, start=start, end=end, data_source='yahoo')[['Open', 'Adj Close']]
    price_data.append(prices.assign(ticker=ticker)[['ticker', 'Open', 'Adj Close']])
#names = np.reshape(price_data, (len(price_data), 1))
df = pd.concat(price_data)
df.reset_index(inplace=True)
for col in df.columns:
    print(col)
#used for setting the output figure size
rcParams['figure.figsize'] = 20,10
#to normalize the given input data
scaler = MinMaxScaler(feature_range=(0, 1))
#plot the adjusted closing price
df['Adj Close'].plot()
plt.legend(loc=2)
plt.xlabel('Date')
plt.ylabel('Price')
plt.show()
ntrain = 80
df_train = df.head(int(len(df)*(ntrain/100)))
ntest = 20
df_test = df.tail(int(len(df)*(ntest/100)))
#importing the packages
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM
#dataframe creation
seriesdata = df.sort_index(ascending=True, axis=0)
new_seriesdata = pd.DataFrame(index=range(0,len(df)),columns=['Date','Adj Close'])
length_of_data=len(seriesdata)
for i in range(0,length_of_data):
new_seriesdata['Date'][i] = seriesdata['Date'][i]
new_seriesdata['Adj Close'][i] = seriesdata['Adj Close'][i]
#setting the index again
new_seriesdata.index = new_seriesdata.Date
new_seriesdata.drop('Date', axis=1, inplace=True)
#creating train and test sets; this comprises the entire data present in the dataset
myseriesdataset = new_seriesdata.values
totrain = myseriesdataset[0:255,:]
tovalid = myseriesdataset[255:,:]
#converting dataset into x_train and y_train
scalerdata = MinMaxScaler(feature_range=(0, 1))
scale_data = scalerdata.fit_transform(myseriesdataset)
x_totrain, y_totrain = [], []
length_of_totrain=len(totrain)
for i in range(60, length_of_totrain):
    x_totrain.append(scale_data[i-60:i, 0])
    y_totrain.append(scale_data[i, 0])
x_totrain, y_totrain = np.array(x_totrain), np.array(y_totrain)
x_totrain = np.reshape(x_totrain, (x_totrain.shape[0],x_totrain.shape[1],1))
#LSTM neural network
lstm_model = Sequential()
lstm_model.add(LSTM(units=50, return_sequences=True, input_shape=(x_totrain.shape[1],1)))
lstm_model.add(LSTM(units=50))
lstm_model.add(Dense(1))
lstm_model.compile(loss='mean_squared_error', optimizer='adadelta')
lstm_model.fit(x_totrain, y_totrain, epochs=10, batch_size=1, verbose=2)
#predicting next data stock price
myinputs = new_seriesdata[len(new_seriesdata) - (len(tovalid)+1) - 60:].values
myinputs = myinputs.reshape(-1,1)
myinputs = scalerdata.transform(myinputs)
tostore_test_result = []
for i in range(60, myinputs.shape[0]):
    tostore_test_result.append(myinputs[i-60:i, 0])
tostore_test_result = np.array(tostore_test_result)
tostore_test_result = np.reshape(tostore_test_result,(tostore_test_result.shape[0],tostore_test_result.shape[1],1))
myclosing_priceresult = lstm_model.predict(tostore_test_result)
myclosing_priceresult = scalerdata.inverse_transform(myclosing_priceresult)
totrain = df_train
tovalid = df_test
#predicting next data stock price
myinputs = new_seriesdata[len(new_seriesdata) - (len(tovalid)+1) - 60:].values
# Printing the next day’s predicted stock price.
print(len(tostore_test_result))
print(myclosing_priceresult)
For reference:
https://github.com/ASH-WICUS/Notebooks/blob/master/Long%20Short%20Term%20Memory%20-%20Stock%20Price%20Prediction.ipynb
My question is: I want to calculate the derivative of the model output with respect to 'time_input' and 'dense_input'. Before asking this question, I searched for a solution for calculating a Jacobian matrix with Keras backend functions.
After running it, I got this error:
File "\keras\backend\tensorflow_backend.py", line 2614, in _call
dtype=tensor.dtype.base_dtype.name))
AttributeError: 'list' object has no attribute 'dtype'
Here is my simple version:
from keras.models import *
from keras.layers import *
import keras.backend as K
import numpy as np
import pandas as pd
from keras import optimizers
def get_model(timestamp, features):
    time_input = Input(shape=(timestamp, features,), name='time_input')
    lstm_out = LSTM(4)(time_input)
    dense_hidden_units = 2
    dense_input_layer = Input(shape=(dense_length,), name='dense_input_layer')
    final_input_layer = concatenate([lstm_out, dense_input_layer])
    # Disable biases in the hidden layer
    dense_1 = Dense(units=dense_hidden_units, use_bias=False, activation='sigmoid')(final_input_layer)
    # Disable bias in the output layer
    output_layer = Dense(units=1, use_bias=False, name='final_output')(dense_1)
    model = Model(
        inputs=[time_input, dense_input_layer],
        outputs=output_layer
    )
    print(model.summary())
    return model
if __name__ == '__main__':
    timestamp = 3
    features = 1
    dense_length = 3
    temp_data = pd.DataFrame([
        [1, 2, 3, 2, 3, 4],
    ])
    time_data = temp_data.values.reshape(-1, timestamp, features)
    dense_data = temp_data.values.reshape(-1, dense_length)
    target_data = np.array([1, 2])
    print(time_data.shape)
    print(dense_data.shape)
    print(target_data.shape)
    model = get_model(timestamp, features)
    Ada = optimizers.Adagrad(lr=0.09, epsilon=1e-04)
    model.compile(loss='mse', optimizer=Ada, metrics=['mse'])
    model.fit(
        {
            'time_input': time_data,
            'dense_input_layer': dense_data,
        },
        {
            'final_output': target_data
        },
        epochs=1, batch_size=1
    )
    time_input = model.get_layer('time_input').input
    GPP_input_layer = model.get_layer('dense_input_layer').input
    J = K.gradients(model.output, [time_input, GPP_input_layer])
    jacobianTime = K.function([[time_input, GPP_input_layer], K.learning_phase()], J)
    deriRes = jacobianTime([time_data, dense_data])  # this line throws the exception
    print(deriRes[0])
Thanks for help!
You have an extra set of brackets. Change
jacobianTime = K.function([[time_input, GPP_input_layer], K.learning_phase()], J)
to
jacobianTime = K.function([time_input, GPP_input_layer, K.learning_phase()], J)
I was able to run your code like this, at least.
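In case it is useful, on TF 2.x (where K.gradients and K.learning_phase are gone) the same input derivatives can be sketched with tf.GradientTape; this assumes the model, time_data, and dense_data from the question:
import tensorflow as tf

time_in = tf.convert_to_tensor(time_data, dtype=tf.float32)
dense_in = tf.convert_to_tensor(dense_data, dtype=tf.float32)
with tf.GradientTape() as tape:
    # Inputs are plain tensors, not variables, so they must be watched.
    tape.watch([time_in, dense_in])
    out = model([time_in, dense_in])
# d(output)/d(time_input) and d(output)/d(dense_input)
grads = tape.gradient(out, [time_in, dense_in])
print(grads[0].shape, grads[1].shape)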
I have some code like the following:
import tensorflow as tf
import numpy as np

sequences = np.array([[1,3,4],[5,6,7,8],[9,10,11,12,13],[14,15]])

def generator():
    for el in sequences:
        yield el, np.random.randn(3, 5).astype('float32')

def parser(dense_tensor, spectrogram):
    labels = tf.contrib.layers.dense_to_sparse(dense_tensor)
    return spectrogram, labels

dataset = tf.data.Dataset.from_generator(generator, output_types=(tf.int64, tf.float32), output_shapes=([None], [None, None]))
dataset = dataset.map(lambda den, spec: parser(den, spec)).batch(2)
iter = dataset.make_initializable_iterator()
spectrogram, labels = iter.get_next()
with tf.Session() as sess:
    sess.run(iter.initializer)
    while True:
        try:
            spar, spe = sess.run([labels, spectrogram])
            print(spar, spe.shape)
        except Exception as e:
            #print(e)
            break
Here I am using tf.data to get the labels and spectrograms for speech-to-text. The toy example above works when the signals in a batch have the same length, but for different-length signals in a batch I need padded_batch, and dense_to_sparse does not work with a padded batch. Is there a solution where I can use padded_batch together with sparse tensors?
You can pad the dense labels with an EOS token in padded_batch, then convert them to sparse after batching:
import tensorflow as tf
import numpy as np

sequences = np.array([[1,3,4],[5,6,7,8],[9,10,11,12,13],[14,15]])

def generator():
    for el in sequences:
        yield el, np.random.randn(np.random.randint(1, 4), 5).astype('float32')

def parser(dense_tensor, spectrogram):
    #labels = tf.contrib.layers.dense_to_sparse(dense_tensor, eos_token=100)
    labels = dense_tensor
    return spectrogram, labels

dataset = tf.data.Dataset.from_generator(generator, output_types=(tf.int64, tf.float32), output_shapes=([None], [None, None]))
dataset = dataset.map(lambda den, spec: parser(den, spec)).padded_batch(2, ([None, None], [None]), padding_values=(0., tf.constant(100, dtype=tf.int64)))
iter = dataset.make_initializable_iterator()
spectrogram, labels = iter.get_next()
res = tf.contrib.layers.dense_to_sparse(labels, eos_token=100)
print(res)
with tf.Session() as sess:
    sess.run(iter.initializer)
    while True:
        try:
            spar, spe, res1 = sess.run([labels, spectrogram, res])
            print(res1, spar, spe)
        except Exception as e:
            #print(e)
            break
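On newer TF versions, where tf.contrib is gone, the padded-to-sparse conversion can be sketched with standard ops (assuming the same pad value of 100):
# Convert the padded dense labels back to a SparseTensor, dropping the pads.
mask = tf.not_equal(labels, 100)
res = tf.sparse.SparseTensor(
    indices=tf.where(mask),
    values=tf.boolean_mask(labels, mask),
    dense_shape=tf.shape(labels, out_type=tf.int64))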
I tried to use an LSTM network with a reset callback for expected future predictions, as follows:
import numpy as np, pandas as pd, matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.callbacks import LambdaCallback
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
raw = np.sin(2*np.pi*np.arange(1024)/float(1024/2)).reshape(-1,1)
scaler = MinMaxScaler(feature_range=(-1, 1))
scaled = scaler.fit_transform(raw)
data = pd.DataFrame(scaled)
window_size = 3
data_s = data.copy()
for i in range(window_size):
    data = pd.concat([data, data_s.shift(-(i+1))], axis=1)
data.dropna(axis=0, inplace=True)
ds = data.values
n_rows = ds.shape[0]
ts = int(n_rows * 0.8)
train_data = ds[:ts,:]
test_data = ds[ts:,:]
train_X = train_data[:,:-1]
train_y = train_data[:,-1]
test_X = test_data[:,:-1]
test_y = test_data[:,-1]
print (train_X.shape)
print (train_y.shape)
print (test_X.shape)
print (test_y.shape)
batch_size = 3
n_feats = 1
train_X = train_X.reshape(train_X.shape[0], batch_size, n_feats)
test_X = test_X.reshape(test_X.shape[0], batch_size, n_feats)
print(train_X.shape, train_y.shape)
regressor = Sequential()
regressor.add(LSTM(units=64, batch_input_shape=(1, batch_size, n_feats),
                   activation='sigmoid',
                   stateful=True, return_sequences=False))
regressor.add(Dense(units=1))
regressor.compile(optimizer='adam', loss='mean_squared_error')
resetCallback = LambdaCallback(on_epoch_begin=lambda epoch, logs: regressor.reset_states())
regressor.fit(train_X, train_y, batch_size=1, epochs=1, callbacks=[resetCallback])
previous_inputs = test_X
regressor.reset_states()
previous_predictions = regressor.predict(previous_inputs, batch_size=1)
previous_predictions = scaler.inverse_transform(previous_predictions).reshape(-1)
test_y = scaler.inverse_transform(test_y.reshape(-1,1)).reshape(-1)
plt.plot(test_y, color = 'blue')
plt.plot(previous_predictions, color = 'red')
plt.show()
inputs = test_X
future_predicitons = regressor.predict(inputs, batch_size=1)
n_futures = 7
regressor.reset_states()
predictions = regressor.predict(previous_inputs, batch_size=1)
print (predictions)
future_predicts = []
currentStep = predictions[:,-1:,:]
for i in range(n_futures):
    currentStep = regressor.predict(currentStep, batch_size=1)
    future_predicts.append(currentStep)
regressor.reset_states()
future_predicts = np.array(future_predicts, batch_size=1).reshape(-1,1)
future_predicts = scaler.inverse_transform(future_predicts).reshape(-1)
all_predicts = np.concatenate([predicts, future_predicts])
plt.plot(all_predicts, color='red')
plt.show()
but I got the following error, and I could not figure out how to solve it to get the expected predictions:
currentStep = predictions[:,-1:,:]
IndexError: too many indices for array
PS this code has been adapted from https://github.com/danmoller/TestRepo/blob/master/testing%20the%20blog%20code%20-%20train%20and%20pred.ipynb
When you defined the regressor, you used return_sequences=False.
So the regressor returns 2D tensors of shape (batch, units), without the steps dimension, not 3D.
That is why you can't index predictions with three indices as you did.
Possibilities:
With return_sequences=False, every prediction is only the last step.
With return_sequences=True, every prediction contains all steps, even if there is only one step.
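A minimal shape check illustrating the difference (a toy model, not the regressor from the question):
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM

x = np.zeros((1, 3, 1))  # (batch, steps, features)
last_only = Sequential([LSTM(4, return_sequences=False, input_shape=(3, 1))])
all_steps = Sequential([LSTM(4, return_sequences=True, input_shape=(3, 1))])
print(last_only.predict(x).shape)  # (1, 4): 2D, no steps dimension
print(all_steps.predict(x).shape)  # (1, 3, 4): 3D, one vector per step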