I have a list of coefficients of degree-1 polynomials, where row i represents a[i][0]*x + a[i][1]:
a = np.array([[ 1. , 77.48514702],
[ 1. , 0. ],
[ 1. , 2.4239275 ],
[ 1. , 1.21848739],
[ 1. , 0. ],
[ 1. , 1.18181818],
[ 1. , 1.375 ],
[ 1. , 2. ],
[ 1. , 2. ],
[ 1. , 2. ]])
I'm running into issues with the following operation:
np.polydiv(reduce(np.polymul, a), a[0])[0] != reduce(np.polymul, a[1:])
where
In [185]: reduce(np.polymul, a[1:])
Out[185]:
array([ 1. , 12.19923307, 63.08691612, 179.21045388,
301.91486027, 301.5756213 , 165.35814595, 38.39582615,
0. , 0. ])
and
In [186]: np.polydiv(reduce(np.polymul, a), a[0])[0]
Out[186]:
array([ 1.00000000e+00, 1.21992331e+01, 6.30869161e+01, 1.79210454e+02,
3.01914860e+02, 3.01575621e+02, 1.65358169e+02, 3.83940472e+01,
1.37845155e-01, -1.06809521e+01])
First of all, the remainder of np.polydiv(reduce(np.polymul, a), a[0]) is far from 0 (827.61514239, to be exact), and secondly, the last two terms of the quotient should be 0 but are far from it: 1.37845155e-01 and -1.06809521e+01.
I'm wondering what my options are for improving the accuracy.
There is a slightly more involved way to keep the product-first, then-divide structure.
First, choose n sample points and evaluate the product of the polynomials in a at those points (you need at least deg + 1 points; here the quotient has degree 9, so the 10 points below suffice):
xs = np.linspace(0, 1., 10)
ys = np.array([np.prod(list(map(lambda r: np.polyval(r, x), a))) for x in xs])
Then do the division on ys instead of on the coefficients:
ys = ys/np.array([np.polyval(a[0], x) for x in xs])
Finally, recover the coefficients using polynomial interpolation on xs and ys:
from scipy.interpolate import lagrange
lagrange(xs, ys)
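Putting the three steps together, a minimal sketch might look like the following (assuming a is the coefficient array from the question; in exact arithmetic the interpolated coefficients equal the exact quotient, so any remaining discrepancy is interpolation round-off):
import numpy as np
from functools import reduce
from scipy.interpolate import lagrange

# 10 sample points are enough because the quotient prod(a) / a[0] has degree 9
xs = np.linspace(0., 1., 10)

# evaluate the full product at the sample points, then divide pointwise by a[0]
ys = np.array([np.prod([np.polyval(r, x) for r in a]) for x in xs])
ys = ys / np.array([np.polyval(a[0], x) for x in xs])

# interpolate to recover the quotient's coefficients (highest degree first)
quotient = lagrange(xs, ys)
print(quotient.coef)
print(reduce(np.polymul, a[1:]))  # reference: the direct product of the remaining factors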
How to construct an equivalent multivariate normal distribution in tensorflow-probability, using TransformedDistribution and tfb.ScaleMatvecLinearOperator?
I'm reading a tutorial about a bijector in tensorflow_probability: tfp.bijectors.ScaleMatvecLinearOperator.
An example was provided.
n = 10000
loc = 0
scale = 0.5
normal = tfd.Normal(loc=loc, scale=scale)
The above code creates a univariate normal distribution.
tril = tf.random.normal((2, 4, 4))
scale_low_tri = tf.linalg.LinearOperatorLowerTriangular(tril)
scale_low_tri.to_dense()
The above code creates a tensor consisting of 2 lower-triangular matrices:
<tf.Tensor: shape=(2, 4, 4), dtype=float32, numpy=
array([[[-0.56953585, 0. , 0. , 0. ],
[ 1.1368589 , 0.32028311, 0. , 0. ],
[-0.8328388 , -1.9963025 , -0.6005632 , 0. ],
[ 0.596155 , -0.214932 , 1.0988408 , -0.41731614]],
[[ 2.0778096 , 0. , 0. , 0. ],
[-1.1863967 , 2.4897904 , 0. , 0. ],
[ 0.38001925, 1.4962028 , 1.7609248 , 0. ],
[ 2.9253726 , 0.7047957 , 0.050508 , 0.58643174]]],
dtype=float32)>
Then a matrix-vector multiplication bijector is created:
scale_lin_op = tfb.ScaleMatvecLinearOperator(scale_low_tri)
After that, a TransformedDistribution is constructed as follows:
mvn = tfd.TransformedDistribution(normal, scale_lin_op, batch_shape=[2], event_shape=[4])
This should have worked in older versions of tensorflow_probability. However, the constructor of TransformedDistribution has since changed and no longer accepts the batch_shape and event_shape parameters. Therefore I tried the following to achieve the same thing:
mvn2 = tfd.TransformedDistribution(
    distribution=tfd.Sample(
        normal,
        sample_shape=[4]  # base_dist.event_shape == [4]
    ),
    bijector=scale_lin_op,
)  # batch_shape=[2], event_shape=[4]
mvn2
And the result seems to have the correct batch_shape and event_shape
<tfp.distributions.TransformedDistribution 'scale_matvec_linear_operatorSampleNormal' batch_shape=[2] event_shape=[4] dtype=float32>
Then, another distribution for comparison is created:
mvn3 = tfd.MultivariateNormalLinearOperator(loc=loc, scale=scale_low_tri)
mvn3
According to the tutorial, the TransformedDistribution mvn2 should be equivalent to the MultivariateNormalLinearOperator mvn3.
# Check
xn = normal.sample((n, 2, 4)) # sample_shape = (n, 2, 4)
tf.norm(mvn2.log_prob(xn) - mvn3.log_prob(xn)) / tf.norm(mvn2.log_prob(xn))
<tf.Tensor: shape=(), dtype=float32, numpy=0.7498207>
But in my result they are not equivalent. (If they were, the above tensor should be 0.)
What have I done wrong?
Why do the following two lines of code, which seem to compute the same thing, give me different results?
kernel1 = np.diag(np.exp(-scale*eigen_values))
kernel2 = np.exp(-scale*np.diag(eigen_values))
The check
np.all(kernel1 == kernel2)
outputs
False
Look at the values! Then you'll see the problem: when given a 1-d array, numpy.diag creates a 2-d array with zeros in the off-diagonal positions. In kernel1, you do diag last, so the off-diagonal values are 0. In kernel2, you apply exp after diag, and exp(0) is 1, so in kernel2, the off-diagonal terms are all 1. (Remember that numpy.exp is applied element-wise; it is not the matrix exponential.)
In [19]: eigen_values = np.array([1, 0.5, 0.1])
In [20]: scale = 1.0
In [21]: np.diag(np.exp(-scale*eigen_values))
Out[21]:
array([[0.36787944, 0. , 0. ],
[0. , 0.60653066, 0. ],
[0. , 0. , 0.90483742]])
In [22]: np.exp(-scale*np.diag(eigen_values))
Out[22]:
array([[0.36787944, 1. , 1. ],
[1. , 0.60653066, 1. ],
[1. , 1. , 0.90483742]])
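As an aside, if the matrix exponential of the diagonal matrix is what was actually wanted, scipy.linalg.expm computes it, and for a diagonal argument it agrees with kernel1 (a small illustration, assuming scipy is available):
from scipy.linalg import expm

# for a diagonal matrix, the matrix exponential is just the element-wise exp of the diagonal
expm(np.diag(-scale * eigen_values))  # equals np.diag(np.exp(-scale * eigen_values))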
I have a numpy array of shape (N, M) where some of the columns should be one-hot encoded. Please help me do the one-hot encoding using numpy and/or tensorflow.
Example:
[
[ 0.993, 0, 0.88 ]
[ 0.234, 1, 1.00 ]
[ 0.235, 2, 1.01 ]
.....
]
The 2nd column here (with values 0, 1, and 2) should be one-hot encoded; I know that there are only 3 distinct values (0, 1, 2).
The resulting array should look like:
[
[ 0.993, 0.88, 0, 0, 0 ]
[ 0.234, 1.00, 0, 1, 0 ]
[ 0.235, 1.01, 1, 0, 0 ]
.....
]
That way I would be able to feed this array into tensorflow.
Please note that the 2nd column was removed and its one-hot version was appended to the end of each sub-array.
Any help would be highly appreciated.
Thanks in advance.
Update:
Here is what I have right now:
Well, not exactly...
1. I have more than 3 columns in the array, but I still want to encode only the 2nd one.
2. The array is structured, i.e. its shape is (N,).
Here is what I have:
def one_hot(value, max_value):
    value = int(value)
    a = np.zeros(max_value, 'uint8')
    if value != 0:
        a[value] = 1
    return a
# data is a structured array with shape (N,)
# it has strings, ints, and floats inside
# it was obtained via np.genfromtxt(dtype=None)
unique_values = dict()
unique_values['categorical1'] = 1
unique_values['categorical2'] = 2
for row in data:
    row[col] = unique_values[row[col]]
codes = np.zeros((data.shape[0], len(unique_values)))
idx = 0
for row in data:
    codes[idx] = one_hot(row[col], len(unique_values))  # could be optimised by not creating a new array every time
    idx += 1
data = np.c_[data[:, [range(0, col), range(col + 1, 32)]], codes[data[:, col].astype(int)]]
I'm also trying to concatenate via:
print(data.shape)   # shape (5000,)
print(codes.shape)  # shape (5000, 3)
data = np.concatenate((data, codes), axis=1)
Here's one approach -
In [384]: a # input array
Out[384]:
array([[ 0.993, 0. , 0.88 ],
[ 0.234, 1. , 1. ],
[ 0.235, 2. , 1.01 ]])
In [385]: codes = np.array([[0,0,0],[0,1,0],[1,0,0]]) # define codes here
In [387]: codes
Out[387]:
array([[0, 0, 0], # encoding for 0
[0, 1, 0], # encoding for 1
[1, 0, 0]]) # encoding for 2
# Slice out the second column and append one-hot encoded array
In [386]: np.c_[a[:,[0,2]], codes[a[:,1].astype(int)]]
Out[386]:
array([[ 0.993, 0.88 , 0. , 0. , 0. ],
[ 0.234, 1. , 0. , 1. , 0. ],
[ 0.235, 1.01 , 1. , 0. , 0. ]])
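As a side note, if a conventional one-hot encoding (value k mapped to row k of the identity matrix) is acceptable instead of the exact mapping shown above, the lookup table doesn't need to be hard-coded; a rough sketch:
codes = np.eye(3, dtype=int)  # row k is the one-hot vector for value k
np.c_[a[:, [0, 2]], codes[a[:, 1].astype(int)]]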
I have this code:
import math
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

mm = np.array([[1, 4, 7, 8], [2, 2, 8, 4], [1, 13, 1, 5]])
mm = np.column_stack(mm)
mmCov = np.cov(mm, rowvar=0)
print("covariance\n", mmCov)

# my code to get correlations
mmResCor = np.zeros(shape=(3, 3))
for i in range(len(mmCov)):
    for j in range(len(mmCov[i])):
        mmResCor[i][j] = mmCov[i][j] / (math.sqrt(mmCov[i][i] * mmCov[j][j]))
print("correlaciones a mano\n", mmResCor)

mmCor = np.corrcoef(mmCov, rowvar=0)
print("correlations\n", mmCor)

X = csr_matrix(mmCor)
XX = minimum_spanning_tree(X)
print("minimun spanning tree\n", XX)
First: each column represents a variable, with observations in the rows.
numpy's corrcoef uses this relation with the covariance matrix:
R_{ij} = \frac{C_{ij}}{\sqrt{C_{ii} C_{jj}}}
when I use numpy corrcoef I get this matrix
correlations
[[ 1. 0.8660254 -0.82603319]
[ 0.8660254 1. -0.99717646]
[-0.82603319 -0.99717646 1. ]]
but when I apply "my code" to get the same result...
mmResCor = np.zeros(shape=(3, 3))
for i in range(len(mmCov)):
    for j in range(len(mmCov[i])):
        mmResCor[i][j] = mmCov[i][j] / (math.sqrt(mmCov[i][i] * mmCov[j][j]))
I get this matrix
correlaciones a mano
[[ 1. 0.67082039 0. ]
[ 0.67082039 1. -0.5 ]
[ 0. -0.5 1. ]]
Why do I get different results if I'm supposedly doing the same thing?
One more question:
When I apply minimum_spanning_tree I get this:
minimun spanning tree
(0, 2) -0.826033187631
(1, 2) -0.997176464953
Is there any way to represent these or can I save this result in some variables?
np.corrcoef should take the data as its input; you're passing the covariance matrix instead. If you pass the data, you get the same result as your manual computation:
>>> np.corrcoef(mm, rowvar=0)
array([[ 1. , 0.67082039, 0. ],
[ 0.67082039, 1. , -0.5 ],
[ 0. , -0.5 , 1. ]])
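Incidentally, if you do want correlations from an already computed covariance matrix, the elementwise relation above can be written in vectorized form, which should match the double loop; a small sketch:
d = np.sqrt(np.diag(mmCov))        # standard deviations
mmResCor = mmCov / np.outer(d, d)  # R_ij = C_ij / sqrt(C_ii * C_jj)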
Regarding the minimum spanning tree, I'm not sure what your question is, but the output XX is a sparse matrix which stores a matrix representation of the tree.
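If the goal is to get the edges into ordinary variables, one possible way (a sketch using scipy's sparse-matrix API) is to convert XX to COO format and read off the row, column, and weight arrays, or to convert it to a dense array:
coo = XX.tocoo()
edges = list(zip(coo.row, coo.col, coo.data))  # e.g. [(0, 2, -0.826...), (1, 2, -0.997...)]
dense = XX.toarray()                           # full matrix representation of the tree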
If you want to evaluate a 1-d array-valued function for multiple arguments efficiently, i.e. without a for-loop, you can do this:
from numpy import array

x = array([1, 2, 3])

def gen_1d_arr(x):
    arr = array([2 + x, 2 - x])
    return arr

gen_1d_arr(x).T
and you get:
array([[ 3, 1],
[ 4, 0],
[ 5, -1]])
Okay, but how do you do this for a 2-d array like the one below:
def gen_2d_arr(x):
    arr = array([[2 + x, 2 - x],
                 [2 * x, 2 / x]])
    return arr
and obtain this?:
array([[[ 3. , 1. ],
[ 2. , 2. ]],
[[ 4. , 0. ],
[ 4. , 1. ]],
[[ 5. , -1. ],
[ 6. , 0.66666667]]])
Also, is this generally possible for n-d arrays?
Look at what you get with your function
In [274]: arr = np.array([[2 + x, 2 - x,],
[2 * x, 2 / x]])
In [275]: arr
Out[275]:
array([[[ 3. , 4. , 5. ],
[ 1. , 0. , -1. ]],
[[ 2. , 4. , 6. ],
[ 2. , 1. , 0.66666667]]])
In [276]: arr.shape
Out[276]: (2, 2, 3)
The 3 comes from x. The middle 2 comes from the [2 + x, 2 - x] pairs, and the first 2 from the outer list.
It looks like what you want is a (3, 2, 2) array. One option is to apply a transpose or axis swap to arr:
arr.transpose([2,0,1])
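For example, a quick check (assuming arr is the (2, 2, 3) array built above) that the transpose gives the layout you asked for:
out = arr.transpose([2, 0, 1])
out.shape   # (3, 2, 2)
out[0]      # array([[3., 1.], [2., 2.]])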
The basic operation of np.array([arr1, arr2]) is to construct a new array with a new dimension in front, i.e. with shape (2, *arr1.shape).
There are other operations that combine arrays: np.concatenate and its variants hstack, vstack, dstack, and column_stack join arrays; .reshape(), indexing with [None, ...], the atleast_*d functions, etc. add dimensions. Look at the code of the stack functions to get some ideas on how to combine arrays using these tools.
On the question of efficiency, my time tests show that concatenate operations are generally faster than np.array. Often np.array converts its inputs to lists and reparses the values. This gives it more power in coercing arrays to specific dtypes, but at the expense of time. I'd only worry about this with large arrays where construction time is significant.
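For instance, here is a rough sketch (the function name gen_2d_arr_stacked is just illustrative) of building the (3, 2, 2) result directly with np.stack, so no transpose is needed afterwards:
import numpy as np

def gen_2d_arr_stacked(x):
    x = np.asarray(x, dtype=float)
    row1 = np.stack([2 + x, 2 - x], axis=-1)  # shape (len(x), 2)
    row2 = np.stack([2 * x, 2 / x], axis=-1)  # shape (len(x), 2)
    return np.stack([row1, row2], axis=-2)    # shape (len(x), 2, 2)

gen_2d_arr_stacked(np.array([1, 2, 3]))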