I have computed gradients (using compute_gradient()) grads_and_vars1 and grads_and_vars2. Now I need to add these two gradients, store the result in grads_and_vars3, and use grads_and_vars3 to apply the gradients.
But grads_and_vars is a tuple, so how can I do this operation?
In Python, the tuple data type is immutable. Therefore, if you have to "update" a field of a tuple, you have to create a new tuple and overwrite the old one.
Also, if you want to add two tuples element-wise, you can't use the + operator, because it creates a new tuple by concatenating the two operands.
To create a new tuple that's the element-wise sum of two tuples, you can convert them to numpy arrays, sum the arrays, and convert the result back to a tuple.
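A minimal sketch of that round trip, using plain numbers rather than TensorFlow objects:
import numpy as np

t1 = (1, 2, 3)
t2 = (10, 20, 30)
t3 = tuple((np.array(t1) + np.array(t2)).tolist())  # element-wise sum, back to a tuple
print(t3)  # (11, 22, 33)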
Since grads_and_vars is a list of (gradient, variable) tuples and you want to add only the gradient parts, you can loop over these lists (which I assume have the same length) and create a new list of (gradient, variable) tuples. I also assume that each variable is the same and in the same position in both grads_and_vars1 and grads_and_vars2.
For example, if we have:
grads_and_vars1 = [ (1,2), (0,1) , (-1, 1) ]
grads_and_vars2 = [ (1,2), (0,1) , (-1, 1) ]
we can get:
grads_and_vars3 = [(grads_and_vars1[idx][0] + grads_and_vars2[idx][0], grads_and_vars1[idx][1]) for idx in range(len(grads_and_vars1))]
that's:
[(2, 2), (0, 1), (-2, 1)]
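The same sum reads more cleanly with zip, under the same assumption that the variables line up:
grads_and_vars3 = [(g1 + g2, v1)
                   for (g1, v1), (g2, v2) in zip(grads_and_vars1, grads_and_vars2)]
The resulting list can then be passed to your optimizer's apply_gradients.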
For example, given a tensor m whose shape is [28, 28], I want to randomly select five regions within the tensor, each with shape [3, 3], and then modify the values of these regions.
One solution would be random extraction inside a loop:
import random
import tensorflow as tf

tensor = tf.ones(shape=(28, 28))
desired_shape = (3, 3)
dim1 = random.randint(0, tensor.shape[0] - desired_shape[0])
dim2 = random.randint(0, tensor.shape[1] - desired_shape[1])
extracted_tensor = tensor[dim1:dim1+desired_shape[0], dim2:dim2+desired_shape[1]]
First import the random module and create a (or use your) tensor, and set your desired_shape. Then create two random variables, one per dimension, and extract the region via slicing.
But keep in mind that you cannot assign values to a tensor in TensorFlow, as this thread says. To work around this, first convert the tensor to a numpy array, change the values there, and convert it back to a tensor, which solves your issue:
np_arr = tensor.numpy()
for i in range(5):
    dim1 = random.randint(0, tensor.shape[0] - desired_shape[0])
    dim2 = random.randint(0, tensor.shape[1] - desired_shape[1])
    np_arr[dim1:dim1+desired_shape[0], dim2:dim2+desired_shape[1]] = [1, 2, 3]  # any value
new_tens = tf.convert_to_tensor(np_arr)
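If you need to stay inside the TensorFlow graph (for example inside a @tf.function), here is a sketch of the same update using tf.tensor_scatter_nd_update; the region origin (dim1, dim2) is fixed just for illustration:
import tensorflow as tf

tensor = tf.ones(shape=(28, 28))
dim1, dim2 = 5, 7  # assumed region origin

# build the (row, col) pairs covering the 3x3 region
rows, cols = tf.meshgrid(tf.range(dim1, dim1 + 3),
                         tf.range(dim2, dim2 + 3), indexing="ij")
indices = tf.stack([tf.reshape(rows, [-1]), tf.reshape(cols, [-1])], axis=1)  # (9, 2)
updates = tf.zeros(9)  # any 9 replacement values
new_tens = tf.tensor_scatter_nd_update(tensor, indices, updates)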
I'm trying to slice a Tensor of shape (?, 32, 32) along the first dimension. I have to select two rows with indexes stored in another Tensor of shape (1, 2). I want something like array[list of indexes, :, :] in numpy.
How can I do it? I need this operation to compute a loss inside the model_fn function, passed to my custom Tensorflow Estimator.
I solved it using tf.gather_nd. I reshaped the tensor containing the indexes with:
ids = tf.reshape(tensor_with_indexes, shape=(-1, 1))
and then I applied:
new_tensor = tf.gather_nd(original_tensor, ids)
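A minimal, self-contained sketch of this (the concrete shapes stand in for the question's (?, 32, 32) tensor):
import tensorflow as tf

original_tensor = tf.random.normal((10, 32, 32))
tensor_with_indexes = tf.constant([[3, 7]])  # shape (1, 2)

ids = tf.reshape(tensor_with_indexes, shape=(-1, 1))  # shape (2, 1)
new_tensor = tf.gather_nd(original_tensor, ids)
print(new_tensor.shape)  # (2, 32, 32)
Since the indices address only the first dimension, tf.gather(original_tensor, [3, 7], axis=0) would give the same result.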
Does sklearn PCA treat the columns of the dataframe as the vectors to reduce, or the rows?
Because when doing this:
import pandas as pd
from sklearn.decomposition import PCA

df = pd.DataFrame([[1, -21, 45, 3, 4], [4, 5, 89, -5, 6], [7, -4, 58, 1, 19],
                   [10, 11, 74, 20, 12], [13, 14, 15, 45, 78]])  # 5 rows, 5 columns
pca = PCA(n_components=3)
pca.fit(df)
df_pcs = pd.DataFrame(data=pca.components_, index=df.index)
I get the following error:
ValueError: Shape of passed values is (5, 3), indices imply (5, 5)
Rows represent samples and columns represent features. PCA reduces the dimensionality of the data, i.e. the features, so it reduces the columns.
So if you are talking about vectors, it treats each row as a single feature vector and reduces its size.
If you have a dataframe of shape, say, [100, 6] and PCA's n_components is set to 3, your output will be [100, 3].
# You need this
df_pcs = pca.transform(df)

# This produces the error, because the shapes don't match.
df_pcs = pd.DataFrame(data=pca.components_, index=df.index)
pca.components_ is an array of shape [3, 5], while your index parameter is df.index, which has shape [5,]. Hence the error. pca.components_ represents a completely different thing.
According to the documentation:
components_ : array, [n_components, n_features]
Principal axes in feature space, representing the
directions of maximum variance in the data.
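To see both shapes side by side, continuing the df and the fitted pca from above:
df_pcs = pd.DataFrame(pca.transform(df), index=df.index)
print(df_pcs.shape)           # (5, 3): 5 samples reduced to 3 components
print(pca.components_.shape)  # (3, 5): 3 principal axes in the 5-feature space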
I am trying to build a Logistic Regression model; data.Exam1 is the first column.
reg = linear_model.LogisticRegression()
X = list(data.Exam1.values.reshape(-1, 1))    # (1)
I have performed this operation, and type(X[0]) returns numpy.ndarray.
reg.fit expects a parameter whose items are all floats, so because of the exception ValueError: Unknown label type: 'continuous' I did this:
newX = []
for item in X:
    type(float(item))
    newX.append(float(item))
So when I tried to do
reg.fit(newX, newY, A)
it throws this exception:
Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
which I already did in (1), and when I try to reshape again it returns an ndarray again. How can I reshape and convert the items to float at the same time?
Adapting our solution from chat
You are trying to understand Admission (type: bool) as a function of Exam scores (Exam1: float, Exam2: float). The crux of your issue is that sklearn.linear_model.LogisticRegression expects two inputs:
X: a vector/matrix of training data with the shape (number of observations, number of predictors) with type float
Y: a vector of categorical outcomes (in this case binary) with the shape (number of observations, 1) with type bool or int
The way you are calling it is trying to fit Exam2 (float) as a function of Exam1 (float). This is the fundamental issue. Further complicating matters is the way you are recasting your reshaped numpy array as a list. Assuming data is a pandas.DataFrame, you want something like:
import numpy as np

X = np.vstack((data.Exam1, data.Exam2)).T
print(X.shape)  # should be (100, 2)
reg.fit(X, data.Admitted)
Here, both data.Exam1 and data.Exam2 are vectors of length 100. Using np.vstack combines them into the shape (2, 100), so we take the transpose so that we have it oriented properly with observations along the first dimension (100, 2). No need to recast as list or even take data.Exam1.values as the pd.Series gets recast as np.array during np.vstack. Similarly, data.Admitted (with shape (100,)) plays nicely with reg.fit.
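A runnable end-to-end sketch with synthetic data (the column names Exam1, Exam2, Admitted and the 100-row size are assumptions taken from the question):
import numpy as np
import pandas as pd
from sklearn import linear_model

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "Exam1": rng.uniform(0, 100, size=100),
    "Exam2": rng.uniform(0, 100, size=100),
})
# a made-up admission rule, just to produce binary labels
data["Admitted"] = (data.Exam1 + data.Exam2 > 100).astype(int)

reg = linear_model.LogisticRegression()
X = np.vstack((data.Exam1, data.Exam2)).T  # shape (100, 2)
reg.fit(X, data.Admitted)
print(reg.score(X, data.Admitted))  # training accuracy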
I need to compare a bunch of numpy arrays with different dimensions, say:
a = np.array([1,2,3])
b = np.array([1,2,3],[4,5,6])
assert(a == b[0])
How can I do this if I know neither the shape of a nor the shape of b, beyond the fact that
len(a.shape) == len(b.shape) - 1
and neither do I know which dimension to skip from b. I'd like to use np.index_exp, but that does not seem to help me ...
def compare_arrays(a, b, skip_row):
    u = np.index_exp[ ... ]
    assert (a[:] == b[u]).all()
Edit
Or to put it otherwise: I want to construct a slicing expression given the shape of the array and the dimension I want to skip. How do I dynamically create the np.index_exp if I know the number of dimensions and the positions where to put ":" and where to put "0"?
I was just looking at the code for np.apply_along_axis and np.apply_over_axes, studying how they construct indexing objects.
Let's make a 4d array:
In [355]: b=np.ones((2,3,4,3),int)
Make a list of slices (using list * replication); note that recent numpy versions require converting the list to a tuple before indexing:
In [356]: ind=[slice(None)]*b.ndim
In [357]: b[tuple(ind)].shape   # same as b[:,:,:,:]
Out[357]: (2, 3, 4, 3)
In [358]: ind[2]=2              # replace one slice with an index
In [359]: b[tuple(ind)].shape   # indexing on the third dim
Out[359]: (2, 3, 3)
Or with your example
In [361]: b = np.array([1,2,3],[4,5,6]) # missing []
...
TypeError: data type not understood
In [362]: b = np.array([[1,2,3],[4,5,6]])
In [366]: ind=[slice(None)]*b.ndim
In [367]: ind[0]=0
In [368]: a==b[tuple(ind)]
Out[368]: array([ True,  True,  True], dtype=bool)
This indexing is basically the same as np.take, but the same idea can be extended to other cases.
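For reference, the same selection with np.take, continuing the session above:
In [369]: np.take(b, 0, axis=0)
Out[369]: array([1, 2, 3])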
I don't quite follow your questions about the use of :. Note that when building an indexing list I use slice(None). The interpreter translates every indexing : into a slice object: [start:stop:step] => slice(start, stop, step).
Usually you don't need a[:] == b[0]; a == b[0] is sufficient. With lists, alist[:] makes a copy; with arrays it does nothing (unless used on the LHS: a[:] = ...).
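Putting this together for the question's compare_arrays, here is a sketch; the skip_dim/index arguments and the exact comparison semantics are my assumptions about the intent:
import numpy as np

def compare_arrays(a, b, skip_dim, index=0):
    # ':' everywhere except skip_dim, where a concrete index removes that axis
    ind = [slice(None)] * b.ndim
    ind[skip_dim] = index
    return np.array_equal(a, b[tuple(ind)])

a = np.array([1, 2, 3])
b = np.array([[1, 2, 3], [4, 5, 6]])
assert compare_arrays(a, b, skip_dim=0)  # compares a with b[0, :]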