I'm working through an exercise. I searched for this problem beforehand, but couldn't find a question that matches my case.
This code works with trainX, but not with trainY.
trainY holds 1672 samples as a 1-D array, for a single output neuron.
batch_dim = trainX.shape[0]
input_dim = windowSize
hidden_dim = 6
output_dim = 1
X = trainX[index:index+batch_dim,:]
Y = trainY[index:index+batch_dim,:]
index = index+batch_dim
The problem seems to be the dimensions, so I tried reshaping:
Y = np.reshape(trainY[index:index+batch_dim, :], (-1, 1))
but that doesn't change anything; the error is still raised. I just want the error to go away.
The variable explorer output:
batch_dim : size 1 (value = 1672)
index : size 1 (value = 0)
X : (1672, 3)
Y : (1672,)
The failing line and the resulting error:
Y = trainY[index:index+batch_dim,:]
IndexError: too many indices for array
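Since trainY is 1-D, indexing it with two indices is exactly what raises the IndexError. A minimal sketch of a fix (assuming the 1-D trainY described above):

# trainY is 1-D, so slice it with a single index
Y = trainY[index:index + batch_dim]   # shape (batch_dim,)
# if a 2-D column target is needed downstream, reshape explicitly
Y = Y.reshape(-1, 1)                  # shape (batch_dim, 1)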
I am a beginner at plotting graphs in Bokeh, so please forgive me if this is a stupid question.
I am trying to plot a line graph where my data is in a dataframe, and I have provided the x and y axes as lists.
But some of my data in the y axis contains NoneType objects.
When the "datapoints" column is NoneType, the corresponding "datapoint_count" holds a list like [1]. Otherwise the "datapoints" column should hold a list of 20 floats, and the corresponding "datapoint_count" column a list of the digits 1-20.
So basically I want the x axis of the graph to show a range of 1-20, and the y axis to plot the datapoints, which range between 90.0 and 180.0.
When I run the code there is no Python error, but if I open the browser's developer tools it says that Bokeh could not set initial ranges.
from bokeh.plotting import figure
from bokeh.palettes import Turbo256
import logging
import random

# df and random_hover are defined earlier in the application
data = df
random_figure = figure(title='random', x_axis_label="Index", y_axis_label="random [ms]",
                       plot_width=800, plot_height=400, output_backend="webgl")
random_figure.add_tools(random_hover)

id_values = data['testcase_id'].drop_duplicates()
data_temp = data[['id', 'datapoints']].copy()
data_temp['datapoint_count'] = None
data_temp['datapoint_count'] = data_temp['datapoint_count'].astype(object)

for indexes, item in data_temp.iterrows():
    if item['datapoints'] is None or str(item['datapoints']) == '[]':  # NoneType or empty list string
        item['datapoints'] = [0]
    else:
        item['datapoints'] = [float(x) for x in item['datapoints'].strip('[').strip(']').split(',')]
    iter_nr = 0
    datapoint_count = []
    for each in item['datapoints']:
        iter_nr += 1
        datapoint_count.append(iter_nr)
    data_temp.at[indexes, 'datapoint_count'] = datapoint_count
name_dict_random = {'name': [], 'legend': [], 'label': []}
logging.info('START OF DRAWINGS')
for ind, id in enumerate(id_values):
    it_color = Turbo256[random.randint(0, 255)]
    name_glyph_random = random_figure.line(x='datapoint_count',
                                           y='datapoints',
                                           line_width=2,
                                           legend_label=str(id),
                                           source=data_temp.where(
                                               data_temp['id'] == id).dropna(),
                                           color=it_color)
    name_dict_random['name'].append(name_glyph_random)
    name_dict_random['label'].append(str(id))
logging.info('AFTER DRAWINGS LOOP')

for label in range(len(data.id.unique())):
    name_dict_random['legend'].append(random_figure.legend.items[label])

initial_value = []
options = list(data.id.unique())
for i, name in enumerate(options):
    options[i] = str(name)
for i in range(len(options)):
    if name_dict_random['label'][i] in initial_value:
        name_dict_random['name'][i].visible = True
        name_dict_random['legend'][i].visible = True
    else:
        name_dict_random['name'][i].visible = False
        name_dict_random['legend'][i].visible = False
I have solved it now.
Although the dataframe showed that the rows contained arrays, they were actually stored as objects, and Bokeh could not figure out what to do with objects on an axis.
So now I refer to them with iloc:
x = data[data['id'] == id]['datapoint_count'].iloc[0]
y = data[data['id'] == id]['datapoints'].iloc[0]
name_glyph_handover = handover_figure.line(x=x, y=y, line_width=2,
                                           legend_label=str(id), color=it_color)
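For reference, a quick way to see the dtype issue the answer describes (a hypothetical check using the dataframe from the question): a column holding Python lists reports dtype object, which Bokeh's range logic cannot infer bounds from.

print(data_temp['datapoints'].dtype)           # -> object
print(type(data_temp['datapoints'].iloc[0]))   # -> <class 'list'>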
I'm trying to have a layer in Keras that takes a flat tensor x (with no zero values in it, shape = (batch_size, units)) multiplied by a mask (of the same shape), and sorts it so that the masked values are placed first in the output (the order among those values doesn't matter). For clarity, here is an example (batch_size = 1, units = 8):
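(Illustrative values, since the original example was lost: with x = [5., 4., 1., 3., 6., 2., 7., 8.] and mask = [0, 1, 0, 1, 0, 1, 0, 0], the masked product is [0, 4, 0, 3, 0, 2, 0, 0] and the desired output is [4., 3., 2., 0., 0., 0., 0., 0.].)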
It seems simple, but the problem is that I can't find a good solution. Any code or idea is appreciated.
My current code is below; if you know a more efficient way, please let me know.
import numpy as np
import tensorflow as tf
from tensorflow import keras

class Sort(keras.layers.Layer):
    # note: uses .numpy(), so the model must run eagerly (run_eagerly=True)
    def call(self, inputs):
        x = inputs.numpy()
        nonx, nony = x.nonzero()  # indices of nonzero elements
        zero = [np.where(x == 0)[0][0], np.where(x == 0)[1][0]]  # index of the first zero
        x_shape = tf.shape(inputs)
        result = np.zeros((x_shape[0], x_shape[1], 2), dtype='int')  # mapping matrix
        # default every output position to the first zero's index
        result[:, :, 0] += zero[0]
        result[:, :, 1] += zero[1]
        p = np.zeros((x_shape[0],), dtype='int')  # write pointer per batch row
        for i, j in zip(nonx, nony):
            result[i, p[i]] = [i, j]
            p[i] += 1
        y = tf.gather_nd(inputs, result)
        return y
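A possibly more efficient, graph-friendly alternative (a sketch, not tested against the asker's setup): argsort a 0/1 "is nonzero" indicator per row in descending order, then gather, so the nonzero entries end up first.

import tensorflow as tf

class SortNonzeroFirst(tf.keras.layers.Layer):
    def call(self, inputs):
        # 1 where the element survived the mask, 0 where it was zeroed out
        nonzero = tf.cast(tf.not_equal(inputs, 0), tf.int32)
        # stable descending argsort puts nonzero positions first in each row
        order = tf.argsort(nonzero, axis=-1, direction='DESCENDING', stable=True)
        return tf.gather(inputs, order, batch_dims=1)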
In a problem I want to solve using TensorFlow, I want to build a rank-n tensor that is 'diagonal' by blocks. That is, I want to generate a tensor object from a concatenation of lower-order tensors.
I have tried defining the whole tf.Variable tensor and then imposing the value 0 on some entries, but TensorFlow does not allow assignments when working with variable tensors.
Moreover, I would like to create 'diagonal' tensors that share the same independent variables, for example, using a stacked 2D representation with A a 2-dimensional tensor:
T = [A, 0; 0, A]
My current source code:
import tensorflow as tf

shape1 = [3, 3, 10, 10]
shape2 = [3, 3]
i1 = tf.truncated_normal(shape1, stddev=1.0, dtype=tf.float32)
i2 = tf.truncated_normal(shape2, stddev=1.0, dtype=tf.float32)
A = tf.Variable(i1)
V = tf.Variable(i2)
for i in range(10):
    for j in range(10):
        if i != j:
            A[:, :, i, j] = tf.zeros((3, 3))
        else:
            A[:, :, i, j] = V
Of course, this code returns the error 'Variable' object does not support item assignment.
What I want, at the end of the day, is to define a variable tensor such that:
T[:,:,i,j] = tf.zeros([D0,D1]), if i != j
and
T[:,:,i,j] = A, if i == j
with A = tf.Variable([D0,D1]).
Thank you very much in advance!
One way would be to use tf.stack, which converts a list of tensors of dimension n to a tensor of dimension n+1.
l = []
for i in range(10):
    li = [V * 0.0 if i != j else V for j in range(10)]
    Ai = tf.stack(li)
    l.append(Ai)
A = tf.stack(l)
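Note that stacking this way yields a tensor of shape (10, 10, 3, 3); if the (3, 3, 10, 10) layout from the question is needed, a transpose would restore it (a small assumption about the intended axis order):

A = tf.transpose(A, perm=[2, 3, 0, 1])  # (10, 10, 3, 3) -> (3, 3, 10, 10)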
I am building machine learning models for a certain data set. Then, based on the constraints and bounds for the outputs and inputs, I am trying to find the input parameters that give the most minimized answer.
When the model is a linear regression model, or something like Lasso, the minimization works perfectly fine.
However, when the model is a decision tree, it constantly returns the very initial value given to it, so it effectively does not enforce the constraints.
import numpy as np
import pandas as pd
from scipy.optimize import minimize
I am using the very first sample from the input data set for the optimization. As it is only one sample, I need to reshape it to (1,-1) as well.
x = df_in.iloc[0,:]
x = np.array(x)
x = x.reshape(1,-1)
This is my Objective function:
def objective(x):
    x = np.array(x)
    x = x.reshape(1, -1)
    y = 0
    for n in range(df_out.shape[1]):
        y = Model[n].predict(x)  # note: y is overwritten, so only the last model's prediction is returned
    Y = y[0]
    return Y
Here I am defining the bounds of inputs:
range_max = pd.DataFrame(range_max)
range_min = pd.DataFrame(range_min)
B_max = []
B_min = []
for i in range(range_max.shape[0]):
    b_max = range_max.iloc[i]
    b_min = range_min.iloc[i]
    B_max.append(b_max)
    B_min.append(b_min)
B_max = pd.DataFrame(B_max)
B_min = pd.DataFrame(B_min)
bnds = pd.concat([B_min, B_max], axis=1)
These are my constraints:
con_min = pd.DataFrame(c_min)
con_max = pd.DataFrame(c_max)
Here I am defining the constraint function:
def const(x):
    x = np.array(x)
    x = x.reshape(1, -1)
    Y = []
    for n in range(df_out.shape[1]):
        y = Model[n].predict(x)[0]
        Y.append(y)
    Y = pd.DataFrame(Y)
    a4 = []
    for k in range(Y.shape[0]):
        a1 = Y.iloc[k, 0] - con_min.iloc[k, 0]   # >= 0 enforces y >= c_min
        a2 = con_max.iloc[k, 0] - Y.iloc[k, 0]   # >= 0 enforces y <= c_max
        a3 = [a2, a1]
        a4 = np.concatenate([a4, a3])
    return a4
c = const(x)
con = {'type': 'ineq', 'fun': const}
This is where I try to minimize. I do not pick a method, as the automatically chosen method has worked so far.
sol = minimize(fun=objective, x0=x, constraints=con, bounds=bnds)
So the actual constraints are:
c_min = [0.20,1000]
c_max = [0.3,1600]
and the max and min range for the boundaries are:
range_max = [285,200,8,85,0.04,1.6,10,3.5,20,-5]
range_min = [215,170,-1,60,0,1,6,2.5,16,-18]
I think you should check the output of sol. At times the algorithm is not able to complete the line search, and in such a case the optimizer returns the initial parameters themselves. To check for this, look at the message associated with sol. There may be various reasons for this behavior; in a nutshell, inspect sol and act accordingly.
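For example, a minimal check on the OptimizeResult returned above:

print(sol.success)   # False if the optimizer failed
print(sol.status)    # numeric termination status
print(sol.message)   # human-readable reason, e.g. a failed line search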
Arad,
If you have not yet resolved your issue, try using scipy.optimize.differential_evolution instead of scipy.optimize.minimize. I ran into similar issues, particularly with decision trees, because of their step-like behavior resulting in infinite gradients.
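A minimal sketch of that switch (assuming the objective, const, range_min, and range_max defined in the question, and SciPy >= 1.4 for the constraints argument; seed is illustrative):

import numpy as np
from scipy.optimize import NonlinearConstraint, differential_evolution

# differential_evolution is gradient-free, so the tree's step-like
# response surface does not break it
nlc = NonlinearConstraint(const, 0, np.inf)   # same 'ineq' meaning: const(x) >= 0
de_bounds = list(zip(range_min, range_max))   # one (min, max) pair per input
sol = differential_evolution(objective, de_bounds, constraints=(nlc,), seed=0)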
Here is what my code looks like:
N = 16
num_ckfs = 5
import tensorflow as tf

init_variances = tf.placeholder(tf.float64, shape=[num_ckfs, N], name='initial_variances')
init_states = tf.placeholder(tf.float64, shape=[num_ckfs, N], name='init_states')
# some more code
predicted_state = prior_state_expanded + kalman_gain * diff_expanded
error_covariance = sum_cov_cholesky + tf.batch_matmul(kg, kalman_gain, adj_x=True)
projected_output = tf.batch_matmul(predicted_state, input_vectors_extra, adj_y=True)

session = tf.Session()
init_var = [10 for i in range(N)]
init_var_ckfs = [init_var for i in range(num_ckfs)]
init_state = [0 for i in range(N)]
init_state_ckfs = [init_state for i in range(num_ckfs)]

for timestep in range(10):
    out = session.run([projected_output, predicted_state, error_covariance],
                      {init_variances: init_var_ckfs, init_states: init_state_ckfs})
    # for the next timestep, I want to initialize init_state_ckfs with predicted_state,
    # and init_var_ckfs with error_covariance.
    # predicted_state is a tensor with shape (num_ckfs, 1, N)
    # error_covariance is a tensor with shape (num_ckfs, N, N); I just need the
    # diagonal elements from each of the N x N matrices
Although I have mentioned this in the code as a comment, I will mention it here again: I want to know how to take the updated tensors from the previous time step, convert them into lists, and feed them as inputs for the next time step. Can someone please help me?
Use tf.assign to assign to the placeholder the last value of the variable. As long as the Session is active, the state is preserved.
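Alternatively, a plain feed-back approach matching the comments in the question (a sketch under the assumption that the fetched NumPy arrays can simply be re-fed each step; all names come from the question):

import numpy as np

cur_states = init_state_ckfs
cur_vars = init_var_ckfs
for timestep in range(10):
    out, pred, err_cov = session.run(
        [projected_output, predicted_state, error_covariance],
        {init_states: cur_states, init_variances: cur_vars})
    # predicted_state comes back as (num_ckfs, 1, N): drop the middle axis
    cur_states = pred.reshape(num_ckfs, N)
    # keep only the diagonal of each N x N covariance block
    cur_vars = np.array([np.diag(m) for m in err_cov])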