CNTK complains about Feature Not Implemented

I have the following network in Brainscript.
BrainScriptNetworkBuilder = {
    inputDim = 4
    labelDim = 1
    embDim = 20
    hiddenDim = 40

    model = Sequential (
        EmbeddingLayer {embDim} :                           # embedding
        RecurrentLSTMLayer {hiddenDim, goBackwards=false} : # LSTM
        DenseLayer {labelDim}                               # output layer
    )

    # features
    t = DynamicAxis{}
    features = SparseInput {inputDim, tag="feature", dynamicAxis=t}
    anomaly  = Input {labelDim, tag="label"}

    # model application
    z = model (features)
    zp = ReconcileDynamicAxis(z, anomaly)

    # loss and metric
    ce   = CrossEntropyWithSoftmax (anomaly, zp)
    errs = ClassificationError (anomaly, zp)

    featureNodes    = (features)
    labelNodes      = (anomaly)
    criterionNodes  = (ce)
    evaluationNodes = (errs)
    outputNodes     = (z)
}
and my data looks like this:
2 |Features -0.08169 -0.07840 -0.09580 -0.08748
2 |Features 0.00354 -0.00089 0.02832 0.00364
2 |Features -0.18999 -0.12783 -0.02612 0.00474
2 |Features 0.16097 0.11350 -0.01656 -0.05995
2 |Features 0.09638 0.07632 -0.04359 0.02183
2 |Features -0.12585 -0.08926 0.02879 -0.00414
2 |Features -0.10224 -0.18541 -0.16963 -0.05655
2 |Features 0.08327 0.15853 0.02869 -0.17020
2 |Features -0.25388 -0.25438 -0.08348 0.13638
2 |Features 0.20168 0.19566 -0.11165 -0.40739 |IsAnomaly 0
When I run the cntk command to try and train a model, I get the following exception.
EXCEPTION occurred: Inside File: Matrix.cpp Line: 1323 Function: Microsoft::MSR::CNTK::Matrix::SetValue -> Feature Not Implemented.
What am I missing?

Here are some suggestions:
First, the input declarations should match the type of the data as described by the reader. Your data is dense, so features should be declared with Input rather than SparseInput.
Second, the LSTM outputs a sequence, one output per sample in the input sequence. Since you have a single label per sequence, you need to ignore all but the last output.
model = Sequential (
    DenseLayer {embDim} :                               # embedding
    RecurrentLSTMLayer {hiddenDim, goBackwards=false} : # LSTM
    BS.Sequences.Last :                                 # use only the last output of the LSTM sequence
    DenseLayer {labelDim, activation=Sigmoid}           # output layer
)
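For completeness, here is a rough sketch of the same model in CNTK's Python API (my own addition, assuming CNTK 2.x; not part of the original BrainScript answer):
import cntk as C

inputDim, labelDim, embDim, hiddenDim = 4, 1, 20, 40

features = C.sequence.input_variable(inputDim)  # dense input, matching the dense data above
model = C.layers.Sequential([
    C.layers.Embedding(embDim),                     # embedding
    C.layers.Recurrence(C.layers.LSTM(hiddenDim)),  # LSTM over the sequence
    C.sequence.last,                                # use only the last output of the LSTM sequence
    C.layers.Dense(labelDim, activation=C.sigmoid)  # output layer
])
z = model(features)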

Related

Doing a ML predict but prices appears as in CSV

I am learning ML and running my code for prediction. When I run the code, the predicted prices come out exactly the same as the prices in the CSV. What am I doing wrong?
----CODE---
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
melbourne_file_path = 'melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
melbourne_data = melbourne_data.dropna(axis=0)
y = melbourne_data.Price
melbourne_features = ['Rooms', 'Price', 'Bathroom', 'Landsize', 'Lattitude', 'Longtitude']
X = melbourne_data[melbourne_features]
print(X.describe())
print(X.head())
melbourne_model = DecisionTreeRegressor(random_state=1)
melbourne_model.fit(X, y)
print("Making predictions for the following 5 houses:")
print(X.head())
print("The predictions are")
print(melbourne_model.predict(X.head()))
-----OUTPUT----
Rooms Price ... Lattitude Longtitude
count 6196.000000 6.196000e+03 ... 6196.000000 6196.000000
mean 2.931407 1.068828e+06 ... -37.807904 144.990201
std 0.971079 6.751564e+05 ... 0.075850 0.099165
min 1.000000 1.310000e+05 ... -38.164920 144.542370
25% 2.000000 6.200000e+05 ... -37.855438 144.926198
50% 3.000000 8.800000e+05 ... -37.802250 144.995800
75% 4.000000 1.325000e+06 ... -37.758200 145.052700
max 8.000000 9.000000e+06 ... -37.457090 145.526350
[8 rows x 6 columns]
Rooms Price Bathroom Landsize Lattitude Longtitude
1 2 1035000.0 1.0 156.0 -37.8079 144.9934
2 3 1465000.0 2.0 134.0 -37.8093 144.9944
4 4 1600000.0 1.0 120.0 -37.8072 144.9941
6 3 1876000.0 2.0 245.0 -37.8024 144.9993
7 2 1636000.0 1.0 256.0 -37.8060 144.9954
Making predictions for the following 5 houses:
Rooms Price Bathroom Landsize Lattitude Longtitude
1 2 1035000.0 1.0 156.0 -37.8079 144.9934
2 3 1465000.0 2.0 134.0 -37.8093 144.9944
4 4 1600000.0 1.0 120.0 -37.8072 144.9941
6 3 1876000.0 2.0 245.0 -37.8024 144.9993
7 2 1636000.0 1.0 256.0 -37.8060 144.9954
The predictions are
[1035000. 1465000. 1600000. 1876000. 1636000.]
First, split your data into a train set and a test set.
Next, train the model by calling .fit() on your x_train and y_train data.
Then, call .predict() on x_test and assign the resulting values to the y_pred variable.
Finally, make sure not to include the column you are trying to predict (Price) in melbourne_features; with Price available as a feature, and with predictions made on the same rows the tree was trained on, the model can simply read the answer back out.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

melbourne_file_path = 'melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
melbourne_data = melbourne_data.dropna(axis=0)
y = melbourne_data.Price
# Make sure not to include the column that you are trying to predict.
melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'Lattitude', 'Longtitude']
X = melbourne_data[melbourne_features]
print(X.describe())
print(X.head())
# Use test_size=0.50 to hold out 50 percent of the data for testing and train on the other 50 percent.
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.50)
melbourne_model = DecisionTreeRegressor(random_state=1)
# Alternatively, use RandomForestRegressor to lower the mean absolute error
# compared to DecisionTreeRegressor.
# melbourne_model = RandomForestRegressor(n_estimators=1000)
# Fit on the training data only. In other words, train the model.
melbourne_model.fit(x_train, y_train)
# Finally, make predictions on the held-out test data.
y_pred = melbourne_model.predict(x_test)
print("Making predictions for the following 5 houses:")
print(x_test.head())
print("The predictions are")
print(pd.DataFrame({'Actual Price': y_test,
                    'Predicted Price': y_pred}))
# The mean absolute error is a single number: the average amount by which the
# predicted price differs from the actual price.
# Your goal is to make the mean absolute error as low as possible.
print(f'Mean Absolute Error : {mean_absolute_error(y_test, y_pred)}')
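If you want a more robust error estimate than a single 50/50 split, cross-validation is an option; a minimal sketch (my own addition, not part of the original answer):
from sklearn.model_selection import cross_val_score
# 5-fold cross-validation; scikit-learn returns negated MAE, so flip the sign
cv_mae = -cross_val_score(melbourne_model, X, y, cv=5,
                          scoring='neg_mean_absolute_error')
print(f'Cross-validated MAE: {cv_mae.mean():.0f}')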
Source:
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
https://www.geeksforgeeks.org/python-decision-tree-regression-using-sklearn/
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html
Additional Reference:
https://www.youtube.com/watch?v=PaFPbb66DxQ
https://www.youtube.com/watch?v=YSB7FtzeicA
https://www.youtube.com/watch?v=BFaadIqWlAg
https://www.youtube.com/watch?v=ENvSybznF_o
https://www.youtube.com/watch?v=yXoxdXMvD7c

Link Probability prediction between two nodes using Machine Learning or Deep Learning where node to node mapping is given

Can someone please direct me to a tutorial or provide a starting idea for the problem given below?
I have a mapping of authors to co-authors, given as follows:
mapping
>>
{0: [2860, 3117],
1: [318, 1610, 1776, 1865, 2283, 2507, 3076, 3108, 3182, 3357, 3675, 4040],
2: [164, 413, 1448, 1650, 3119, 3238],
} # this is just a sample
link_attributes.iloc[:5,:7]
>>
first id keyword_0 keyword_10 keyword_13 keyword_15 keyword_2
0 4 0 1.0 1.0 1.0 1.0
1 9 1 1.0 1.0 1.0 1.0
2 7 2 1.0 NaN 1.0 1.0
3 6 3 1.0 1.0 NaN 1.0
4 9 4 1.0 1.0 1.0 1.0
I have to predict the probability of having a link between a Source and Sink
For example if I am given a Source=13 and Sink=31 then I have to find the probability of having a link between 13 and 31. All the links are un-directed.
import json
import numpy
from keras import Sequential
from keras.layers import Dense

def get_keys(data, keys):  # get all keys from the json file
    if isinstance(data, list):
        for item in data:
            get_keys(item, keys)
    if isinstance(data, dict):
        sub_keys = data.keys()
        for sub_key in sub_keys:
            keys.append(sub_key)

# get all keys, each key is a feature of instances
json_data = open("nodes.json")  # read 4016 instances
jdata = json.load(json_data)
keys = []
get_keys(jdata, keys)
keys = set(keys)
print(set(keys))

def build_instance(json_object):  # used to build an instance from a json object, e.g. instance = [f0,f1,f2,f3,....f404]
    features = []
    features.append(json_object.get('id'))
    for key in keys:
        value = json_object.get(key)
        if value is None:
            value = 0
        elif key == 'id':
            continue
        features.append(value)
    return features

# read all instances and format them, each instance will be [f0,f1,f2,...];
# as read from the json file, each instance has 405 features
instances = []
num_of_instances = 0
for item in jdata:
    features = build_instance(item)
    instances.append(features)
    num_of_instances = num_of_instances + 1
print(num_of_instances)

# read the "author_id - co author ids" file
traintxt = open('train.txt', 'r')
lines = traintxt.readlines()
au_vs_co_auth_list = []
for line in lines:
    line = line.split('\t', 200)
    print(line)
    # convert values from string to int
    string = line[0]  # example line[0] = '14 445'
    id_vs_coauthor = string.split(" ", 200)
    id = id_vs_coauthor[0]
    co_author = id_vs_coauthor[1]
    line[0:1] = [int(id), int(co_author)]
    for i in range(2, len(line)):
        line[i] = int(line[i])
    au_vs_co_auth_list.append(line)
print(len(au_vs_co_auth_list))  # we have 4016 authors

X_train = []
Y_train = []
generated_train_pairs = []
train_num = 30000  # choose 30000 random training instances
for i in range(train_num):
    print(i)
    index1 = numpy.random.randint(0, len(au_vs_co_auth_list), 1)[0]
    co_authors_of_index1 = au_vs_co_auth_list[index1]
    author_id_of_index_1 = au_vs_co_auth_list[index1][0]
    if index1 % 2 == 0:  # try to create a sample where the two authors are not related
        index2 = numpy.random.randint(0, len(au_vs_co_auth_list), 1)[0]
        author_id_of_index_2 = au_vs_co_auth_list[index2][0]
        # make sure id1 != id2 and author 1 and author 2 are not related
        while (index1 == index2) or (author_id_of_index_2 in co_authors_of_index1):
            index2 = numpy.random.randint(0, len(au_vs_co_auth_list), 1)[0]
            author_id_of_index_2 = au_vs_co_auth_list[index2][0]
        y = [0, 1]  # [related=FALSE, non-related=TRUE]
    else:  # try to create a sample where the two authors are related
        author_id_of_index_2 = numpy.random.randint(1, len(co_authors_of_index1), size=1)[0]
        y = [1, 0]  # [related=TRUE, non-related=FALSE]
    # x = [feature1, feature2,...feature404, feature1', feature2',...feature404']
    x = instances[author_id_of_index_1][1:] + instances[author_id_of_index_2][1:]
    X_train.append(x)
    Y_train.append(y)
X_train = numpy.asarray(X_train)
Y_train = numpy.asarray(Y_train)
print(X_train.shape)
print(Y_train.shape)

# now we have x_train and y_train; build the model
model = Sequential()
model.add(Dense(512, input_shape=X_train[0].shape, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(2, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train, batch_size=512, epochs=3, verbose=2)
model.save("model.h5")

# now predict the probability of a link between two author ids
id1 = 11   # just random
id2 = 732  # just random
author1 = None
author2 = None
for item in jdata:
    if item.get('id') == id1:
        author1 = build_instance(item)
    if item.get('id') == id2:
        author2 = build_instance(item)
    if author1 is not None and author2 is not None:
        break
x_test = author1[1:] + author2[1:]
x_test = numpy.expand_dims(numpy.asarray(x_test), axis=0)
probability = model.predict(x_test)
print("author id ", id1, " and author id ", id2, end=" ")
if probability[0][1] > probability[0][0]:
    print("Not related")
else:
    print("Related")
print(probability)
Output:
author id 11 and author id 732 related
Before diving into a solution, I recommend understanding your data well and spending a good part of your time digesting the problem and preparing a dataset.
From the scenario you described, your problem is: given two nodes and their attributes, predict whether there is a link between them. This can be interpreted as a binary classification task. I will provide an initial, minimalistic solution.
What confused me is that you mentioned you only have link attributes (link_attributes.iloc[:5,:7]) but no node attributes. If you have node attributes it makes more sense, because then we can simply generate pairs of nodes and label the pairs that are not connected as 0 (not_connected) and the connected ones as 1 (connected).
So let's make a dataset. As I didn't exactly understand what the link attributes mean, let's generate some random data; we can adapt this to a better example if you edit your question with more details about your data.
About creating a dataset
For every node in the mapping we will create 10 dummy random columns, just for the sake of demonstration.
Then we will create a list of all authors and co-authors called list_of_all_authors and generate pairs out of it, the pairs being pair_of_authors.
For every pair of authors we will label them as linked or not linked using the mapping; for that I created a function called check_if_pair_is_linked.
After this I will show how to create a simple baseline solution for the task, using scikit-learn, which has a big list of easy-to-use models for classification.
Code
I folded the code and describe it in 3 major, simple steps:
prepare your inputs to create a dataset (using the mapping and attributes)
create the dataset (for every pair of authors, label them as linked or not and concatenate their attributes)
use scikit-learn to fit, predict and evaluate a model

import itertools
import pandas as pd
import numpy as np
from sklearn.metrics import classification_report
from sklearn import svm

######## first preparing data to create a dataset
(17 folded lines: ### we already have mappings)
This part creates:
mappings => {
    0: [coauthor12, coauthor17231, ...],
    1: [...],
    ...,
    732: [...]
}
author_attributes => {
    0: [a0_1, attr0_2, ..., attr0_10],
    1: [a1_1, attr1_2, ..., attr1_10],
    ...,
    732: [...]
}

#### Generating our dataset and preparing it for the scikit-learn (and most other) library format
### The idea is to generate pairs of authors regardless of whether they're linked, and label whether each pair is linked
(24 folded lines)
This part creates a list of pairs of authors, each entry containing (attributes_of_both_authors, is_linked_label):
dataset = [
    ([a0_1,...,a0_10,a1_1,...,a1_10], label_pair0_1),
    ([a0_1,...,a0_10,a2_1,...,a2_10], label_pair1_2),
    ...
    ([a142_1,...,a142_10,a37_1,...,a37_10], label_pair142_37),
]

#### Training and evaluating a simple machine learning solution
(12 folded lines)
This part outputs a report about the model after training it on a training dataset and evaluating it on a test dataset (I used the same data for train and test, but don't ever do that in a real scenario):
              precision    recall  f1-score   support
           0       0.93      1.00      0.96       466
           1       1.00      0.10      0.18        40
Solution:
import itertools
import pandas as pd
import numpy as np
from sklearn.metrics import classification_report
from sklearn import svm

######## first preparing data to create a dataset
#### we already have mappings
def generate_random_author_attributes(mapping):
    author_attributes = {}
    for author in mapping.keys():
        author_attributes[author] = np.random.random(10).tolist()
    for coauthors in mapping.values():
        for coauthor in coauthors:
            if author_attributes.get(coauthor, False):
                pass  # this coauthor was already added
            else:
                author_attributes[coauthor] = np.random.random(10).tolist()
    return author_attributes

mapping = {
    0: [2860, 3117],
    1: [318, 1610, 1776, 1865, 2283, 2507, 3076, 3108, 3182, 3357, 3675, 4040],
    2: [164, 413, 1448, 1650, 3119, 3238],
}
#### hopefully you have attributes like this: some attributes for each author; I created 10 random values per author for demonstration
author_attributes = generate_random_author_attributes(mapping)

#### Generating our dataset and preparing it for the scikit-learn (and most other) library format
### The idea is to generate pairs of authors regardless of whether they're linked, and label whether each pair is linked
def check_if_pair_is_linked(pair_of_authors, mapping):
    '''return 1 if linked, 0 if not linked'''
    coauthors_of_author0 = mapping.get(pair_of_authors[0], [])
    coauthors_of_author1 = mapping.get(pair_of_authors[1], [])
    if (pair_of_authors[1] in coauthors_of_author0) or (pair_of_authors[0] in coauthors_of_author1):
        return 1
    else:
        return 0

def create_dataset(author_attributes, mapping):
    list_of_all_authors = list(itertools.chain.from_iterable([coauthors for coauthors in mapping.values()]))
    list_of_all_authors.extend(mapping.keys())
    dataset = []
    for pair_of_authors in itertools.permutations(list_of_all_authors, 2):
        pair_label = check_if_pair_is_linked(pair_of_authors, mapping)
        pair_attributes = author_attributes[pair_of_authors[0]] + author_attributes[pair_of_authors[1]]
        dataset.append((pair_attributes, pair_label))
    return dataset

dataset = create_dataset(author_attributes, mapping)
X_train = [pair_attributes for pair_attributes, _ in dataset]
y_train = [pair_label for _, pair_label in dataset]

#### Training and evaluating a simple machine learning solution
binary_classifier = svm.SVC()
binary_classifier.fit(X_train, y_train)

#### Checking if the model is good
X_test = X_train  # never use your train data as test data
y_test = y_train
true_pairs_label = y_test
predicted_pairs_label = binary_classifier.predict(X_test)
print(classification_report(true_pairs_label, predicted_pairs_label))
OUTPUT
              precision    recall  f1-score   support
           0       0.93      1.00      0.96       466
           1       1.00      0.15      0.26        40
    accuracy                           0.93       506
   macro avg       0.97      0.57      0.61       506
weighted avg       0.94      0.93      0.91       506
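As the comment in the code warns, evaluating on the training data inflates these numbers. A minimal sketch of a proper held-out evaluation, reusing the dataset built above (my addition, not part of the original answer):
from sklearn.model_selection import train_test_split
# stratify keeps the linked/not-linked ratio similar in both splits
X_tr, X_te, y_tr, y_te = train_test_split(X_train, y_train, test_size=0.25,
                                          stratify=y_train, random_state=0)
binary_classifier.fit(X_tr, y_tr)
print(classification_report(y_te, binary_classifier.predict(X_te)))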

Finding loss mask of variable length in keras tensorflow

Trying to build a loss function that captures the behaviour below: it masks the output values once 'end of sequence' is encountered.
Given a tensor of shape [BatchSize, MaxSequenceLength, OutputNodes], consider the example below:
batch size = 3
Max Sequence Length=4
OutputNodes = 3
predicted = [[[0.1,0.3,0.2],[0.4,0.6,0.8],[0.5,0.2,0.3],[0.0,0.0,0.99]],
             [[0.1,0.3,0.2],[0.4,0.9,0.8],[0.5,0.2,0.9],[0.4,0.6,0.8]],
             [[0.1,0.3,0.2],[0.4,0.9,0.8],[0.5,0.2,0.1],[0.4,0.6,0.1]]]
I am dedicating the last output node to symbolise 'end of sequence' (EOS); here that is node 2. Nodes are labelled 0, 1 and 2.
Based on the predicted value, I have to return a mask which tries to find the first occurrence of EOS.
In the above example,
the first row has the following sequence (argmax) => 1,2,0,2
the second row has the following sequence => 1,1,2,2
the third row has the following sequence => 1,1,0,1
So my mask should be
[[1,0,0,0],
 [1,1,0,0],
 [1,1,1,1]]
The mask ensures that values after the EOS are ignored, i.e. not considered in calculating the loss.
Below is the code snippet I tried:
sequence_cluster_asign = keras.backend.argmax(sequence_values, axis=-1)
loss_mask = []
for seq in K.tf.unstack(sequence_cluster_asign):
    # append EOS to make sure tf.where is not empty
    seq = tf.concat([seq, endOfSequenceTensor], axis=0)
    endOfSequenceLocation = K.tf.where(K.tf.equal(seq, endOfSequence))[0][0]
    loss_mask.append(tf.sequence_mask(endOfSequenceLocation, max_decoder_seq_length, dtype=tf.float32))
final_mask = K.stack(loss_mask)
Error encountered : ValueError: Cannot infer num from shape (?,?)
If you want to get the mask described in your question, you can use the following method.
import tensorflow as tf
import keras
from keras import backend as K

sequence_values = K.placeholder(shape=(None, 4, 3))
sequence_cluster_asign = keras.backend.argmax(sequence_values, axis=-1)

# keras version
result = K.cast(K.less(sequence_cluster_asign, sequence_values.get_shape().as_list()[-1] - 1), dtype='int32')
result = K.cumprod(result, axis=-1)

# tensorflow version
# result = tf.cast(tf.less(sequence_cluster_asign, sequence_values.get_shape().as_list()[-1] - 1), dtype=tf.int32)
# result = tf.cumprod(result, axis=-1)

predicted = [[[0.1,0.3,0.2],[0.4,0.6,0.8],[0.5,0.2,0.3],[0.0,0.0,0.99]],
             [[0.1,0.3,0.2],[0.4,0.9,0.8],[0.5,0.2,0.9],[0.4,0.6,0.8]],
             [[0.1,0.3,0.2],[0.4,0.9,0.8],[0.5,0.2,0.1],[0.4,0.6,0.1]]]

with tf.Session() as sess:
    print(result.eval(feed_dict={sequence_values: predicted}))
[[1 0 0 0]
 [1 1 0 0]
 [1 1 1 1]]
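To see why this works, here is the same computation in plain NumPy (my own illustration, not part of the original answer): 'less than EOS' yields 1 until the first step where the EOS node wins the argmax, and the running product then zeroes out everything from that position onward.
import numpy as np

predicted = np.array([[[0.1,0.3,0.2],[0.4,0.6,0.8],[0.5,0.2,0.3],[0.0,0.0,0.99]],
                      [[0.1,0.3,0.2],[0.4,0.9,0.8],[0.5,0.2,0.9],[0.4,0.6,0.8]],
                      [[0.1,0.3,0.2],[0.4,0.9,0.8],[0.5,0.2,0.1],[0.4,0.6,0.1]]])
cluster = predicted.argmax(axis=-1)                        # winning node per time step
not_eos = (cluster < predicted.shape[-1] - 1).astype(int)  # 1 until EOS wins the argmax
mask = np.cumprod(not_eos, axis=-1)                        # zeros propagate after the first EOS
print(mask)  # [[1 0 0 0], [1 1 0 0], [1 1 1 1]]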

Multi-Target and Multi-Class prediction

I am relatively new to machine learning as well as tensorflow. I would like to train the data so that predictions with 2 targets and multiple classes could be made. Is this something that can be done? I was able to implement the algorithm for 1 target but don't know how I need to do it for a second target as well.
An example dataset:
DayOfYear  Temperature  Flow  Visibility
      316            8     1           4
      285           -1     1           4
      326            8     2           5
      323           -1     0           3
       10            7     3           6
       62            8     0           3
       56            8     1           4
      347            7     2           5
      363            7     0           3
       77            7     3           6
        1            7     1           4
      308           -1     2           5
      364            7     3           6
If I train (DayOfYear Temperature Flow) I can predict the Visibility quite well. But I need to predict Flow as well somehow. I am pretty sure that Flow will influence Visibility so I am not sure how to go with that.
This is the implementation that I have
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import urllib

import numpy as np
import tensorflow as tf

# Data sets
TRAINING = "/ml_baetterich_learn.csv"
TEST = "/ml_baetterich_test.csv"
VALIDATION = "/ml_baetterich_validation.csv"

def main():
    # Load datasets.
    training_set = tf.contrib.learn.datasets.base.load_csv_without_header(
        filename=TRAINING,
        target_dtype=np.int,
        features_dtype=np.int,
        target_column=-1)
    test_set = tf.contrib.learn.datasets.base.load_csv_without_header(
        filename=TEST,
        target_dtype=np.int,
        features_dtype=np.int,
        target_column=-1)
    validation_set = tf.contrib.learn.datasets.base.load_csv_without_header(
        filename=VALIDATION,
        target_dtype=np.int,
        features_dtype=np.int,
        target_column=-1)

    # Specify that all features have real-value data
    feature_columns = [tf.contrib.layers.real_valued_column("", dimension=3)]

    # Build 3 layer DNN with 10, 20, 10 units respectively.
    classifier = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns,
                                                hidden_units=[10, 20, 10],
                                                n_classes=9,
                                                model_dir="/tmp/iris_model")

    # Define the training inputs
    def get_train_inputs():
        x = tf.constant(training_set.data)
        y = tf.constant(training_set.target)
        return x, y

    # Fit model.
    classifier.fit(input_fn=get_train_inputs, steps=4000)

    # Define the test inputs
    def get_test_inputs():
        x = tf.constant(test_set.data)
        y = tf.constant(test_set.target)
        return x, y

    # Define the validation inputs
    def get_validation_inputs():
        x = tf.constant(validation_set.data)
        y = tf.constant(validation_set.target)
        return x, y

    # Evaluate accuracy.
    accuracy_test_score = classifier.evaluate(input_fn=get_test_inputs,
                                              steps=1)["accuracy"]
    accuracy_validation_score = classifier.evaluate(input_fn=get_validation_inputs,
                                                    steps=1)["accuracy"]
    print("\nValidation Accuracy: {0:0.2f}\nTest Accuracy: {1:0.2f}\n".format(
        accuracy_validation_score, accuracy_test_score))

    # Classify two new samples.
    def new_samples():
        return np.array(
            [[327, 8, 3],
             [47, 8, 0]], dtype=np.float32)

    predictions = list(classifier.predict_classes(input_fn=new_samples))
    print("New Samples, Class Predictions: {}\n".format(predictions))

if __name__ == "__main__":
    main()
Option 1: multi-headed model
You could use a multi-headed DNNEstimator model. This treats Flow and Visibility as two separate softmax classification targets, each with their own set of classes. I had to modify the load_csv_without_header helper function to support multiple targets (which could be cleaner, but is not the point here - feel free to ignore its details).
import numpy as np
import tensorflow as tf
from tensorflow.python.platform import gfile
import csv
import collections

num_flow_classes = 4
num_visib_classes = 7

Dataset = collections.namedtuple('Dataset', ['data', 'target'])

def load_csv_without_header(fn, target_dtype, features_dtype, target_columns):
    with gfile.Open(fn) as csv_file:
        data_file = csv.reader(csv_file)
        data = []
        targets = {
            target_cols: []
            for target_cols in target_columns.keys()
        }
        for row in data_file:
            cols = sorted(target_columns.items(), key=lambda tup: tup[1], reverse=True)
            for target_col_name, target_col_i in cols:
                targets[target_col_name].append(row.pop(target_col_i))
            data.append(np.asarray(row, dtype=features_dtype))
        targets = {
            target_col_name: np.array(val, dtype=target_dtype)
            for target_col_name, val in targets.items()
        }
        data = np.array(data)
        return Dataset(data=data, target=targets)

feature_columns = [
    tf.contrib.layers.real_valued_column("", dimension=1),
    tf.contrib.layers.real_valued_column("", dimension=2),
]
head = tf.contrib.learn.multi_head([
    tf.contrib.learn.multi_class_head(
        num_flow_classes, label_name="Flow", head_name="Flow"),
    tf.contrib.learn.multi_class_head(
        num_visib_classes, label_name="Visibility", head_name="Visibility"),
])
classifier = tf.contrib.learn.DNNEstimator(
    feature_columns=feature_columns,
    hidden_units=[10, 20, 10],
    model_dir="iris_model",
    head=head,
)

def get_input_fn(filename):
    def input_fn():
        dataset = load_csv_without_header(
            fn=filename,
            target_dtype=np.int,
            features_dtype=np.int,
            target_columns={"Flow": 2, "Visibility": 3}
        )
        x = tf.constant(dataset.data)
        y = {k: tf.constant(v) for k, v in dataset.target.items()}
        return x, y
    return input_fn

classifier.fit(input_fn=get_input_fn("tmp_train.csv"), steps=4000)
res = classifier.evaluate(input_fn=get_input_fn("tmp_test.csv"), steps=1)
print("Validation:", res)
Option 2: multi-labeled head
If you keep your CSV data separated by commas, and keep the last column for all the classes a row might have (separated by some token such as space), you can use the following code:
import numpy as np
import tensorflow as tf

all_classes = ["0", "1", "2", "3", "4", "5", "6"]

def k_hot(classes_col, all_classes, delimiter=' '):
    table = tf.contrib.lookup.index_table_from_tensor(
        mapping=tf.constant(all_classes)
    )
    classes = tf.string_split(classes_col, delimiter)
    ids = table.lookup(classes)
    num_items = tf.cast(tf.shape(ids)[0], tf.int64)
    num_entries = tf.shape(ids.indices)[0]
    y = tf.SparseTensor(
        indices=tf.stack([ids.indices[:, 0], ids.values], axis=1),
        values=tf.ones(shape=(num_entries,), dtype=tf.int32),
        dense_shape=(num_items, len(all_classes)),
    )
    y = tf.sparse_tensor_to_dense(y, validate_indices=False)
    return y

def feature_engineering_fn(features, labels):
    labels = k_hot(labels, all_classes)
    return features, labels

feature_columns = [
    tf.contrib.layers.real_valued_column("", dimension=1),  # DayOfYear
    tf.contrib.layers.real_valued_column("", dimension=2),  # Temperature
]
classifier = tf.contrib.learn.DNNEstimator(
    feature_columns=feature_columns,
    hidden_units=[10, 20, 10],
    model_dir="iris_model",
    head=tf.contrib.learn.multi_label_head(n_classes=len(all_classes)),
    feature_engineering_fn=feature_engineering_fn,
)

def get_input_fn(filename):
    def input_fn():
        dataset = tf.contrib.learn.datasets.base.load_csv_without_header(
            filename=filename,
            target_dtype="S100",  # strings of length up to 100 characters
            features_dtype=np.int,
            target_column=-1
        )
        x = tf.constant(dataset.data)
        y = tf.constant(dataset.target)
        return x, y
    return input_fn

classifier.fit(input_fn=get_input_fn("tmp_train.csv"), steps=4000)
res = classifier.evaluate(input_fn=get_input_fn("tmp_test.csv"), steps=1)
print("Validation:", res)
We are using DNNEstimator with a multi_label_head, which uses sigmoid cross-entropy rather than softmax cross-entropy as the loss function. This means that each of the output units/logits is passed through the sigmoid function, which gives the likelihood of the data point belonging to that class; the classes are computed independently and are not mutually exclusive, as they are with softmax cross-entropy. This means that each row in the training set, and each final prediction, can have anywhere between 0 and len(all_classes) classes set.
Also notice that the classes are represented as strings (and k_hot makes the conversion to token indices), so you could use arbitrary class identifiers such as category UUIDs in e-commerce settings. If the categories in the 3rd and 4th column are different (Flow ID 1 != Visibility ID 1), you could prepend the column name to each class ID, e.g.
316,8,flow1 visibility4
285,-1,flow1 visibility4
326,8,flow2 visibility5
For a description of how k_hot works, see my other SO answer. I decided to use k_hot as a separate function (rather than defining it directly in feature_engineering_fn) because it's a distinct piece of functionality, and TensorFlow will probably soon have a similar utility function.
Note that if you now use the first two columns to predict the last two, your accuracy will certainly go down, as the last two columns are highly correlated and using one of them gives you a lot of information about the other. Actually, your code was using only the 3rd column, which was kind of a cheat anyway if the goal is to predict the 3rd and 4th columns.
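For intuition, this is roughly what the k-hot encoding of such a row produces; a hand-worked sketch in plain Python with a hypothetical class list (the TF code above does the same thing with a lookup table and a SparseTensor):
all_classes = ["flow0", "flow1", "flow2", "flow3",
               "visibility3", "visibility4", "visibility5", "visibility6"]

def k_hot_row(label_str, all_classes, delimiter=' '):
    # 1 for every class token present in the label string, 0 elsewhere
    tokens = label_str.split(delimiter)
    return [1 if c in tokens else 0 for c in all_classes]

print(k_hot_row("flow1 visibility4", all_classes))
# [0, 1, 0, 0, 0, 1, 0, 0]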

How to keep calculated values in a Tensorflow graph (on the GPU)?

How can we make sure that a calculated value will not be copied back to CPU/python memory, but is still available for calculations in the next step?
The following code obviously doesn't do it:
import tensorflow as tf

a = tf.Variable(tf.constant(1.), name="a")
b = tf.Variable(tf.constant(2.), name="b")
result = a + b
stored = result

with tf.Session() as s:
    val = s.run([result, stored], {a: 1., b: 2.})
    print(val)            # 3
    val = s.run([result], {a: 4., b: 5.})
    print(val)            # 9
    print(stored.eval())  # 3 NOPE:
Error: Attempting to use uninitialized value _recv_b_0
The answer is to store the value in a tf.Variable, writing to it with an assign operation.
Working code:
import tensorflow as tf

with tf.Session() as s:
    a = tf.Variable(tf.constant(1.), name="a")
    b = tf.Variable(tf.constant(2.), name="b")
    result = a + b
    stored = tf.Variable(tf.constant(0.), name="stored_sum")
    assign_op = stored.assign(result)
    val, _ = s.run([result, assign_op], {a: 1., b: 2.})
    print(val)            # 3
    val = s.run(result, {a: 4., b: 5.})
    print(val)            # 9
    print(stored.eval())  # ok, still 3
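For what it's worth, in TensorFlow 2.x (eager execution) the same idea reduces to assigning into a tf.Variable, which lives in device memory; a minimal sketch assuming TF 2.x (the answer above targets the 1.x graph API):
import tensorflow as tf

a = tf.Variable(1.0)
b = tf.Variable(2.0)
stored = tf.Variable(0.0)  # stays on the device (GPU if one is available)

stored.assign(a + b)       # keep the computed value for later steps
a.assign(4.0)
b.assign(5.0)
print(float(a + b))   # 9.0
print(float(stored))  # 3.0, still available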