Predict a 2-dimensional output from 10 numbers with TensorFlow

I want to predict one number from 10 numbers.
What I want to do is predict t from mat: each mat[i] corresponds to t[i].
Of course I have more than 5 rows in mat and t; I've just simplified the problem here.
I have written the code below.
# There is target data `t` and train data `mat[0]`, `mat[1]`, `mat[2]`, ...
import numpy as np
import tensorflow as tf
from pprint import pprint

t = [0,1,0,1,0]  # answer, 2 dimension
limit = 10  # number of degrees
mat = [[2,-2,3,-4,2,2,3,5,3,6],     # the 10 numbers of mat[0] lead to t[0]
       [1,3,-3,2,2,5,1,3,2,3],      # the 10 numbers of mat[1] lead to t[1]
       [-2,3,2,-2,2,-2,1,3,4,5],    # the 10 numbers of mat[2] lead to t[2]
       [-2,2,-1,-2,2,-2,7,3,9,2],   # the 10 numbers of mat[3] lead to t[3]
       [-2,-3,2,-2,2,-4,1,-4,4,5],  # the 10 numbers of mat[4] lead to t[4]
      ]
x = tf.placeholder(tf.float32, [None, 10])
w = tf.Variable(tf.zeros([10, 5]))
y = tf.matmul(x, w)
t = tf.placeholder(tf.float32, [None, 1])
loss = tf.reduce_sum(tf.square(y - t))
train_step = tf.train.AdamOptimizer().minimize(loss)
sess = tf.Session()
sess.run(tf.initialize_all_variables())

train_t = np.array(mat)
train_t = train_t.reshape([limit, 5])
train_x = np.zeros([limit, 5])
# initialize
for row, num in enumerate(range(1, limit + 1)):
    for col, n in enumerate(range(0, 5)):
        train_x[row][col] = num**n

i = 0
for _ in range(100000):
    i += 1
    sess.run(train_step, feed_dict={x: train_x, t: train_t})
    if i % 10000 == 0:
        loss_val = sess.run(loss, feed_dict={x: train_x, t: train_t})
        print('step : %d, Loss: %f' % (i, loss_val))

w_val = sess.run(w)
pprint("w_val")
pprint(w_val)
However, this shows an error like this:
Traceback (most recent call last):
  File "wisdom2.py", line 60, in <module>
    sess.run(train_step,feed_dict={x:train_x,t:train_t})
  File "/Users/whitebear/tensorflow/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 789, in run
    run_metadata_ptr)
  File "/Users/whitebear/tensorflow/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 975, in _run
    % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (10, 5) for Tensor 'Placeholder:0', which has shape '(?, 10)'

The problem is that the shape of your placeholder and the shape of your input do not match. The placeholder x expects a value with N rows and 10 columns, but train_x has 10 rows and 5 columns. Likewise, t should have N rows and 1 column, but the passed value train_t has 10 rows and 5 columns. You should either change the shape of your placeholders or the shape of your inputs.
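For example, here is a minimal sketch (assuming the intent is to map each 10-number row of mat to the matching scalar in t) in which the fed arrays and the placeholders agree:

import numpy as np
import tensorflow as tf

# Sketch only: assumes 5 samples of 10 features each, predicting 1 number per sample.
train_x = np.array(mat, dtype=np.float32)                              # shape (5, 10)
train_t = np.array([0, 1, 0, 1, 0], dtype=np.float32).reshape(-1, 1)  # shape (5, 1)

x = tf.placeholder(tf.float32, [None, 10])   # 10 columns, matching train_x
t = tf.placeholder(tf.float32, [None, 1])    # 1 column, matching train_t
w = tf.Variable(tf.zeros([10, 1]))           # 10 inputs -> 1 output
y = tf.matmul(x, w)
loss = tf.reduce_sum(tf.square(y - t))
# Both feeds now match their placeholder shapes:
# sess.run(loss, feed_dict={x: train_x, t: train_t})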

Related

How to display the convolution filters used on a CNN with Tensorflow?

I would like to produce figures similar to this one:
To do that, with Tensorflow I load my model, and then, using the code below, I select the variable holding the filters from one layer:
# search for the name of the specific layer with the filters I want to display
for v in tf.trainable_variables():
    print(v.name)
# store the filters into a variable
var = [v for v in tf.trainable_variables() if v.name == "model/center/kernel:0"][0]
By calling var.eval() I am able to store var in a NumPy array.
This NumPy array has the shape (3, 3, 512, 512), which corresponds to the kernel size (3x3) and the number of filters (512).
My problem is the following: how can I extract one filter from this (3, 3, 512, 512) array to display it? If I understand how to do that, I will work out how to display all 512 filters.
Since you are using Tensorflow, you might be using tf.keras.Sequential to build the CNN model; model.summary() gives the names of all the layers, along with their shapes.
Once you have the layer name, you can visualize the convolutional filters of that layer as shown in the code below:
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import backend as K

#-------------------------------------------------
# Utility function for displaying filters as images
#-------------------------------------------------
def deprocess_image(x):
    x -= x.mean()
    x /= (x.std() + 1e-5)
    x *= 0.1
    x += 0.5
    x = np.clip(x, 0, 1)
    x *= 255
    x = np.clip(x, 0, 255).astype('uint8')
    return x

#---------------------------------------------------------------------------------------------------
# Utility function for generating patterns for a given layer, starting from an empty input image and
# applying stochastic gradient ascent to maximize the response of a particular filter in that layer
#---------------------------------------------------------------------------------------------------
def generate_pattern(layer_name, filter_index, size=150):
    layer_output = model.get_layer(layer_name).output
    loss = K.mean(layer_output[:, :, :, filter_index])
    grads = K.gradients(loss, model.input)[0]
    grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
    iterate = K.function([model.input], [loss, grads])
    input_img_data = np.random.random((1, size, size, 3)) * 20 + 128.
    step = 1.
    for i in range(80):
        loss_value, grads_value = iterate([input_img_data])
        input_img_data += grads_value * step
    img = input_img_data[0]
    return deprocess_image(img)

#------------------------------------------------------------------------------------------
# Generating convolution layer filters for intermediate layers using the utility functions
#------------------------------------------------------------------------------------------
layer_name = 'conv2d_4'
size = 299
margin = 5
results = np.zeros((8 * size + 7 * margin, 8 * size + 7 * margin, 3))
for i in range(8):
    for j in range(8):
        filter_img = generate_pattern(layer_name, i + (j * 8), size=size)
        horizontal_start = i * size + i * margin
        horizontal_end = horizontal_start + size
        vertical_start = j * size + j * margin
        vertical_end = vertical_start + size
        results[horizontal_start: horizontal_end, vertical_start: vertical_end, :] = filter_img

plt.figure(figsize=(20, 20))
plt.imshow(results.astype('uint8'))  # display the 8x8 grid of filter patterns
plt.savefig('filters.png')           # save the grid to an image file
The above code visualizes only 64 filters of a layer; you can change that accordingly.
For more information, you can refer to this article.
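If instead you just want to display the raw kernel weights from the (3, 3, 512, 512) array, here is a minimal sketch (assuming weights is the NumPy array you obtained from var.eval()):

import matplotlib.pyplot as plt

# Sketch: `weights` is assumed to be the (3, 3, 512, 512) array from var.eval().
filter_index = 0   # which of the 512 filters to show
in_channel = 0     # which of the 512 input channels to slice
kernel = weights[:, :, in_channel, filter_index]  # a single 3x3 kernel
plt.imshow(kernel, cmap='gray')
plt.colorbar()
plt.show()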

indices = 2 is not in [0, 1)

I'm working on a seq2sql project and I have successfully built a model, but I get an error during training. I'm not using any Keras embedding layer.
import tensorflow as tf
from keras.layers import Input, LSTM, Bidirectional, Dense, Lambda, Dropout, Add, Flatten
from keras.models import Model
from keras.callbacks import EarlyStopping

M = 13          # question length
d = 40          # dimension of the LSTM
C = 12          # number of table columns
batch_size = 9

inputs1 = Input(shape=(M, 100), name='question_token')
Hq = Bidirectional(LSTM(d, return_sequences=True), name='QuestionENC')(inputs1)  # this is HQ, shape (num_samples, 13, 80)
inputs2 = Input(shape=(C, 3, 100), name='col_token')
col_lstm_layer = Bidirectional(LSTM(d, return_sequences=False), name='ColENC')

def hidd(te):
    t = tf.Variable(initial_value=1, dtype=tf.int32)
    for i in range(batch_size):
        t = tf.assign(t, i)
        Z = tf.nn.embedding_lookup(te, t)
        print(col_lstm_layer(Z))
        h = tf.reshape(col_lstm_layer(Z), [1, C, d*2])
        if i == 0:
            # cols_last_hidden = tf.Variable(initial_value=h)
            cols_last_hidden = tf.stack(h)  # because tf.Variable gives an error here
        else:
            cols_last_hidden = tf.concat([cols_last_hidden, h], 0)  # shape (num_samples, num_col, 80); 80 is the last encoding of each column
    return cols_last_hidden

cols_last_hidden = Lambda(hidd)(inputs2)
Hq = Dense(d*2, name='QuestionLastEncode')(Hq)

I = tf.Variable(initial_value=1, dtype=tf.int32)
J = tf.Variable(initial_value=1, dtype=tf.int32)
K = 1

def get_col_att(tensors):
    global K, all_col_attention
    if K:
        t = tf.Variable(initial_value=1, dtype=tf.int32)
        for i in range(batch_size):
            t = tf.assign(t, i)
            x = tf.nn.embedding_lookup(tensors[0], t)
            # print("tensors[1]:", tensors[1])
            y = tf.nn.embedding_lookup(tensors[1], t)
            # print("x shape", x.shape, "y shape", y.shape)
            y = tf.transpose(y)
            # print("x shape", x.shape, "y", y.shape)
            Ecol = tf.reshape(tf.transpose(tf.tensordot(x, y, axes=1)), [1, C, M])
            if i == 0:
                # all_col_attention = tf.Variable(initial_value=Ecol, name=""+i)
                all_col_attention = tf.stack(Ecol)
            else:
                all_col_attention = tf.concat([all_col_attention, Ecol], 0)
        K = 0
    print("all_col_attention", all_col_attention)
    return all_col_attention

total_alpha_sel_lambda = Lambda(get_col_att, name="Alpha")([Hq, cols_last_hidden])
total_alpha_sel = Dense(13, activation="softmax")(total_alpha_sel_lambda)
# print("Hq", Hq, "total_alpha_sel_lambda shape", total_alpha_sel_lambda, "total_alpha_sel shape", total_alpha_sel.shape)

def get_EQcol(tensors):
    global K
    if K:
        t = tf.Variable(initial_value=1, dtype=tf.int32)
        global all_Eqcol
        for i in range(batch_size):
            t = tf.assign(t, i)
            x = tf.nn.embedding_lookup(tensors[0], t)
            y = tf.nn.embedding_lookup(tensors[1], t)
            Eqcol = tf.reshape(tf.tensordot(x, y, axes=1), [1, C, d*2])
            if i == 0:
                # all_Eqcol = tf.Variable(initial_value=Eqcol, name=""+i)
                all_Eqcol = tf.stack(Eqcol)
            else:
                all_Eqcol = tf.concat([all_Eqcol, Eqcol], 0)
        K = 0
    print("all_Eqcol", all_Eqcol)
    return all_Eqcol

K = 1
EQcol = Lambda(get_EQcol, name='EQcol')([total_alpha_sel, Hq])  # total_alpha_sel (12x13), Hq (13 x d*2)
EQcol = Dropout(.2)(EQcol)
L1 = Dense(d*2, name='L1')(cols_last_hidden)
L2 = Dense(d*2, name='L2')(EQcol)
L1_plus_L2 = Add()([L1, L2])
pre = Flatten()(L1_plus_L2)
Psel = Dense(12, activation="softmax")(pre)

model = Model(inputs=[inputs1, inputs2], outputs=Psel)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

earlyStopping = EarlyStopping(monitor='val_loss', patience=7, verbose=0, mode='auto')
history = model.fit([Equestion, Col_Embeddings], y_train, epochs=50, validation_split=.1,
                    shuffle=False, callbacks=[earlyStopping], batch_size=batch_size)
The shapes of Equestion, Col_Embeddings, and y_train are (10, 12, 3, 100), (10, 13, 100), and (10, 12).
I searched for this error, but in every case I found, people had used an embedding layer incorrectly; here I get this error even though I'm not using one.
indices = 2 is not in [0, 1)
[[{{node lambda_3/embedding_lookup_2}} = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:#col_token_2"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_col_token_2_0_1, lambda_3/Assign_2, lambda_3/embedding_lookup_2/axis)]]
The problem here was that the batch size is defined at the graph level. I used batch_size = 9 for the graph, and with a validation split of .1 on the full set of 10 samples, training does get a batch of 9, but only one sample is left for validation, because 10 * .1 is one.
A batch size of 1 cannot be passed to a graph that expects a batch size of 9; that is why this error comes up.
As for the solution, I set batch_size = 1 and then it works fine; I also got good accuracy using batch_size = 1.
Hope this will help someone.
Cheers ..
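In code, the fix is just the batch size used when building and fitting the model; a minimal sketch:

# Sketch of the fix described above: build the graph-level loops and fit
# with the same batch_size, so the single-sample validation batch also fits.
batch_size = 1
history = model.fit([Equestion, Col_Embeddings], y_train,
                    epochs=50, validation_split=.1, shuffle=False,
                    callbacks=[earlyStopping], batch_size=batch_size)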
For me this error was due to badly formed input data. Double-check that the input data you feed the model matches what the model expects.

while_loop error in Tensorflow

I tried to use while_loop in Tensorflow, but when I try to return the target output from the callable in the while loop, it gives me an error because the shape grows on every iteration.
The output should contain 0/1 values based on the data values (the input array): if a data value is larger than 5, return 1, else return 0. Each returned value must be appended to the output.
This is the code:
import numpy as np
import tensorflow as tf

data = np.random.randint(10, size=(30))
data = tf.constant(data, dtype=tf.float32)

global output
output = tf.constant([], dtype=tf.float32)

i = tf.constant(0)
c = lambda i: tf.less(i, 30)

def b(i):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)),
                   lambda: tf.constant(1.0),
                   lambda: tf.constant([0.0]))
    output = tf.expand_dims(cond, axis=i-1)
    return i, output

r, out = tf.while_loop(c, b, [i])
print(out)

sess = tf.Session()
sess.run(out)
The error:
r, out = tf.while_loop(c, b, [i])
ValueError: The two structures don't have the same number of elements.
First structure (1 elements): [<tf.Tensor 'while/Identity:0' shape=() dtype=int32>]
Second structure (2 elements): [<tf.Tensor 'while/Add:0' shape=() dtype=int32>, <tf.Tensor 'while/ExpandDims:0' shape=unknown dtype=float32>]
I am using tensorflow-1.1.3 and python-3.5.
How can I change my code so that it gives me the target result?
EDIT:
I edited the code based on @mrry's answer, but I still have an issue: the output is incorrect.
The output should be the sum of the numbers.
a = tf.ones([10, 4])
print(a)
a = tf.reduce_sum(a, axis=1)
i = tf.constant(0)
c = lambda i, _: tf.less(i, 10)

def Smooth(x):
    return tf.add(x, 2)

summ = tf.constant(0.)

def b(i, _):
    global summ
    summ = tf.add(summ, tf.cast(Smooth(a[i]), tf.float32))
    i = tf.add(i, 1)
    return i, summ

r, smooth_l1 = tf.while_loop(c, b, [i, smooth_l1])
print(smooth_l1)
sess = tf.Session()
print(sess.run(smooth_l1))
The output is 6.0 (wrong).
The tf.while_loop() function requires that the following four lists have the same length, and the same type for each element:
The list of arguments to the cond function (c in this case).
The list of arguments to the body function (b in this case).
The list of return values from the body function.
The list of loop_vars representing the loop variables.
Therefore, if your loop body has two outputs, you must add a corresponding argument to b and c, and a corresponding element to loop_vars:
c = lambda i, _: tf.less(i, 30)

def b(i, _):
    i = tf.add(i, 1)
    cond = tf.cond(tf.greater(data[i-1], tf.constant(5.)),
                   lambda: tf.constant(1.0),
                   lambda: tf.constant([0.0]))
    # NOTE: This line fails with a shape error, because the output of `cond` has
    # a rank of either 0 or 1, but axis may be as large as 28.
    output = tf.expand_dims(cond, axis=i-1)
    return i, output

# NOTE: Use a shapeless `tf.placeholder_with_default()` because the shape
# of the output will vary from one iteration to the next.
r, out = tf.while_loop(c, b, [i, tf.placeholder_with_default(0., None)])
As noted in the comments, the body of the loop (specifically the call to tf.expand_dims()) seems to be incorrect and this program won't work as-is, but hopefully this is enough to get you started.
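One way past the shape problem itself (a sketch, not part of the answer above) is to accumulate the 0/1 results in a tf.TensorArray, which is designed for outputs that grow from one iteration to the next:

import numpy as np
import tensorflow as tf

data = tf.constant(np.random.randint(10, size=(30)), dtype=tf.float32)

i = tf.constant(0)
ta = tf.TensorArray(dtype=tf.float32, size=30)  # holds one 0/1 value per element

c = lambda i, ta: tf.less(i, 30)

def b(i, ta):
    val = tf.cond(tf.greater(data[i], 5.),
                  lambda: tf.constant(1.0),
                  lambda: tf.constant(0.0))
    ta = ta.write(i, val)   # write into the array instead of reshaping a growing tensor
    return i + 1, ta

_, ta_final = tf.while_loop(c, b, [i, ta])
out = ta_final.stack()      # fixed shape (30,) after the loop

with tf.Session() as sess:
    print(sess.run(out))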
If you see this error:
ValueError: The two structures don't have the same number of elements.
in a while_loop, it means the inputs to and the outputs from the loop body have different structures.
I solved it by making sure the body function returns the same structure as loop_vars; the condition function must also accept the same loop vars.
Here is some example code:

loop_vars = [i, loss, batch_size, smaller_str_lens]

def condition(*loop_vars):
    i = loop_vars[0]
    batch_size = loop_vars[2]
    return tf.less(i, batch_size)

def body(*loop_vars):
    i, loss, batch_size, smaller_str_lens = loop_vars
    tf.print("The loop passed here")
    ## logic here
    i = tf.add(i, 1)
    return i, loss, batch_size, smaller_str_lens

loss = tf.while_loop(condition, body, loop_vars)[1]
The body function must return the loop vars, and the condition function must accept the same loop vars.

ValueError: setting an array element with a sequence (LogisticRegression with Array based feature)

Thanks in advance for any guidance. I'm attempting to do classification via logistic regression with scikit-learn, where X consists of an Intercept column and one field, heartrate, that is an array of heart-rate data. Based on researching others who have faced this error, I've made sure the heartrate arrays are all the same shape/size.
The ValueError is raised in sklearn/utils/validation.py, line 382, in check_array, on the line where a copy of the dataframe is made via array = np.array(array, dtype=dtype, order=order, copy=copy). I suspect that my arrays aren't contiguous in memory and that's what's causing the problem, but I'm not sure...
Here are some code snippets to help sleuth out the problem:
def get_training_set(self):
    training_set = []
    after_date = datetime.utcnow() - timedelta(weeks=8)
    before_date = datetime.utcnow() - timedelta(hours=12)
    activities = self.strava_client.get_activities(after=after_date, before=before_date)
    for act in activities:
        if act.has_heartrate:
            streams = self.strava_client.get_activity_streams(activity_id=act.id, types=['heartrate'])
            heartrate = np.array(list(filter(lambda x: x is not None, streams['heartrate'].data)))
            fixed_heartrate = np.pad(heartrate, (0, 15000 - len(heartrate)), 'constant')
            item = {'activity_type': self.classes.index(act.type), 'heartrate': fixed_heartrate}
            training_set.append(item)
    return pd.DataFrame(training_set)

def train(self):
    df = self.get_training_set()
    df['Intercept'] = np.ones((len(df),))
    y = df[['activity_type']]
    X = df[['Intercept', 'heartrate']]
    y = np.ravel(y)
    #
    model = LogisticRegression()
    self.debug('y={}'.format(y))
    model = model.fit(X, y)
The exception occurs in fit...
Thanks in advance for any guidance.
Respect,
Mike
copied from comment for improved formatting:
/python3.5/site-packages/sklearn/linear_model/logistic.py", line 1173, in fit
    order="C")
  File "/python3.5/site-packages/sklearn/utils/validation.py", line 521, in check_X_y
    ensure_min_features, warn_on_dtype, estimator)
  File "/lib/python3.5/site-packages/sklearn/utils/validation.py", line 382, in check_array
    array = np.array(array, dtype=dtype, order=order, copy=copy)
ValueError: setting an array element with a sequence
and the other comment:
X and y look like this:
X.shape=(29, 2)
y.shape=(29,)
X=[[1 array([74, 74, 77, ..., 0, 0, 0])]
[1 array([66, 67, 69, ..., 0, 0, 0])]
...
[1 array([92, 92, 91, ..., 0, 0, 0])]
[1 array([79, 79, 79, ..., 0, 0, 0])]]
y=[ 0 11 11 0 1 0 11 0 11 1 0 11 0 0 11 0 0 0 0 0 11 0 11 0 0 0 11 0 0]
Do things work better if you change train() to look like this?
def train(self):
    df = self.get_training_set()
    df['Intercept'] = 1  # (a)
    y = df['activity_type'].values  # (b)
    X = [np.concatenate((np.array([col1]), col2))
         for col1, col2 in df[['Intercept', 'heartrate']].values]
    model = LogisticRegression()
    model.fit(X, y)  # (c)
(a) A sequence of the correct length will be generated.
(b) Use values to get a NumPy array instead of another dataframe.
(c) fit is done in place, so there is no need to reassign the model.
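If scikit-learn still rejects the object-dtype column, another option (a sketch along the same lines) is to expand each padded heartrate array into a row of a plain 2-D float matrix, which is the layout scikit-learn expects:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch: `df` is assumed to be the frame returned by get_training_set().
hr = np.stack(df['heartrate'].values).astype(float)  # shape (n_samples, 15000)
X = np.hstack([np.ones((len(df), 1)), hr])           # prepend the Intercept column
y = df['activity_type'].values

model = LogisticRegression()
model.fit(X, y)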

word2vec_basic not working (Tensorflow)

I am new to word embeddings and Tensorflow. I am working on a project where I need to apply word2vec to health data.
I used the code from the Tensorflow website (word2vec_basic.py). I modified this code a little to make it read my data instead of "text8.zip", and it runs normally until the last step:
num_steps = 100001

with tf.Session(graph=graph) as session:
    # We must initialize all variables before we use them.
    tf.initialize_all_variables().run()
    print('Initialized')
    average_loss = 0
    for step in range(num_steps):
        batch_data, batch_labels = generate_batch(
            batch_size, num_skips, skip_window)
        feed_dict = {train_dataset: batch_data, train_labels: batch_labels}
        _, l = session.run([optimizer, loss], feed_dict=feed_dict)
        average_loss += l
        if step % 2000 == 0:
            if step > 0:
                average_loss = average_loss / 2000
            # The average loss is an estimate of the loss over the last 2000 batches.
            print('Average loss at step %d: %f' % (step, average_loss))
            average_loss = 0
        # note that this is expensive (~20% slowdown if computed every 500 steps)
        if step % 10000 == 0:
            sim = similarity.eval()
            for i in range(valid_size):
                valid_word = reverse_dictionary[valid_examples[i]]
                top_k = 8  # number of nearest neighbors
                nearest = (-sim[i, :]).argsort()[1:top_k+1]
                log = 'Nearest to %s:' % valid_word
                for k in range(top_k):
                    close_word = reverse_dictionary[nearest[k]]
                    log = '%s %s,' % (log, close_word)
                print(log)
    final_embeddings = normalized_embeddings.eval()
This code is exactly the same as the example, so I don't think it is wrong. The error it gave is:
KeyError Traceback (most recent call last)
<ipython-input-20-fc4c5c915fc6> in <module>()
34 for k in xrange(top_k):
35 print(nearest[k])
---> 36 close_word = reverse_dictionary[nearest[k]]
37 log_str = "%s %s," % (log_str, close_word)
38 print(log_str)
KeyError: 2868
I changed the size of the input data but it still gives the same error.
I would really appreciate it if someone could give me some advice on how to fix this problem.
If the vocabulary size is less than the default maximum (50000), you should modify the number.
At the end of step 2, modify vocabulary_size to the actual dictionary size:
data, count, dictionary, reverse_dictionary = build_dataset(words)
del words  # Hint to reduce memory.
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10], [reverse_dictionary[i] for i in data[:10]])
# add this line to modify vocabulary_size
vocabulary_size = len(dictionary)
print('Dictionary size', len(dictionary))
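Alternatively, a defensive sketch (not part of the answer above): guard the lookup in the training loop so an out-of-dictionary index cannot raise a KeyError:

# Sketch: fall back to 'UNK' when a nearest-neighbor index is missing
# from reverse_dictionary.
for k in range(top_k):
    close_word = reverse_dictionary.get(nearest[k], 'UNK')
    log = '%s %s,' % (log, close_word)
print(log)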