TensorFlow Probability (tfp) equivalent of np.quantile() - numpy

I am trying to find a TensorFlow equivalent of np.quantile(). I have found tfp.stats.quantiles() (tfp stands for TensorFlow Probability). However, its behavior is a bit different from that of np.quantile().
Consider the following example:
import tensorflow_probability as tfp
import tensorflow as tf
import numpy as np
inputs = tf.random.normal((1, 4096, 4))
print("NumPy")
print(np.quantile(inputs.numpy(), q=0.9, axis=1, keepdims=False))
I am not sure from the TFP docs how I could write the above using tfp.stats.quantiles(). I tried checking out the source code of both methods, but it didn't help.

Let me try to be more helpful here than I was on GitHub.
There is a difference in behavior between np.quantile and tfp.stats.quantiles. The key difference here is that numpy.quantile will
Compute the q-th quantile of the data along the specified axis.
where q is the
Quantile or sequence of quantiles to compute, which must be between 0 and 1 inclusive.
and tfp.stats.quantiles
Given a vector x of samples, this function estimates the cut points by returning num_quantiles + 1 cut points
So you need to tell tfp.stats.quantiles how many quantiles you want and then select out the q-th one. If it isn't clear how to do this just from the API, looking at the source for tfp.stats.quantiles (for v0.19.0) shows how to get a return structure similar to NumPy's.
For completeness, setting up a virtual environment with
$ cat requirements.txt
numpy==1.24.2
tensorflow==2.11.0
tensorflow-probability==0.19.0
allows us to run
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
inputs = tf.random.normal((1, 4096, 4), dtype=tf.float64)
q = 0.9
numpy_quantiles = np.quantile(inputs.numpy(), q=q, axis=1, keepdims=False)
tfp_quantiles = tfp.stats.quantiles(
    inputs, num_quantiles=100, axis=1, interpolation="linear"
)[int(q * 100)]
assert np.allclose(numpy_quantiles, tfp_quantiles.numpy())
print(f"{numpy_quantiles=}")
# numpy_quantiles=array([[1.31727661, 1.2699167 , 1.28735237, 1.27137588]])
print(f"{tfp_quantiles=}")
# tfp_quantiles=<tf.Tensor: shape=(1, 4), dtype=float64, numpy=array([[1.31727661, 1.2699167 , 1.28735237, 1.27137588]])>

You could also use tfp.stats.percentile(inputs, 90., axis=1, keepdims=False) -- the only difference from np.quantile is the 90. replacing 0.90.
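For example, a quick sketch checking that variant against np.quantile; interpolation="linear" is passed explicitly here to line up with np.quantile's default, which is an assumption on my part rather than something the API forces:
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
inputs = tf.random.normal((1, 4096, 4), dtype=tf.float64)
q = 0.9
# percentile takes the probability on a 0-100 scale
tfp_percentile = tfp.stats.percentile(
    inputs, 100.0 * q, axis=1, interpolation="linear", keepdims=False
)
numpy_quantiles = np.quantile(inputs.numpy(), q=q, axis=1, keepdims=False)
assert np.allclose(numpy_quantiles, tfp_percentile.numpy())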

Related

How does numpy's Gaussian distribution work?

I am new to numpy. There is a part of numpy's Gaussian distribution that is not so clear to me.
from numpy import random
x = random.normal(size=(2, 3))
print(x)
This generates x as:
[[ 1.15917179 -1.29036526 0.88074813]
[-1.07674284 2.35437249 -0.54746161]]
But I don't know what these numbers mean, how they get created, or how we know they represent a Gaussian distribution.
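For reference, random.normal called with only size draws from the standard normal distribution (loc=0.0, scale=1.0 by default), so each entry is an independent draw with mean 0 and standard deviation 1. A minimal sketch that makes the bell shape visible:
from numpy import random
import matplotlib.pyplot as plt
# Each entry of x is an independent draw from N(0, 1)
x = random.normal(size=(2, 3))
print(x)
# With many draws, the histogram approaches the Gaussian bell curve
samples = random.normal(size=100000)
plt.hist(samples, bins=100, density=True)
plt.show()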

predicting p of binomial with beta prior in edward2 & tensorflow2

The following code predicts the p of the binomial distribution by using beta as prior. Somehow, sometimes, I get meaningless results (acceptance rate = 0). When I write the same logic with pymc3, I have no issue.
I can't see what I am missing here.
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
import edward2 as ed
from pymc3.stats import hpd
import seaborn
import matplotlib.pyplot as plt
p_true = .15
N = [10, 100, 1000]
successN = np.random.binomial(p=p_true, n=N)
print(N)
print(successN)
def beta_binomial(N):
    p = ed.Beta(
        concentration1=tf.ones(len(N)),
        concentration0=tf.ones(len(N)),
        name='p'
    )
    return ed.Binomial(total_count=N, probs=p, name='obs')
log_joint = ed.make_log_joint_fn(beta_binomial)
def target_log_prob_fn(p):
    return log_joint(N=N, p=p, obs=successN)
#kernel = tfp.mcmc.HamiltonianMonteCarlo(
#    target_log_prob_fn=target_log_prob_fn,
#    step_size=0.01,
#    num_leapfrog_steps=5)
kernel = tfp.mcmc.NoUTurnSampler(
    target_log_prob_fn=target_log_prob_fn,
    step_size=.01
)
trace, kernel_results = tfp.mcmc.sample_chain(
    num_results=1000,
    kernel=kernel,
    num_burnin_steps=500,
    current_state=[
        tf.random.uniform((len(N),))
    ],
    trace_fn=(lambda current_state, kernel_results: kernel_results),
    return_final_kernel_results=False)
p, = trace
p = p.numpy()
print(p.shape)
print('acceptance rate ', np.mean(kernel_results.is_accepted))
def printSummary(name, v):
    print(name, v.shape)
    print(np.mean(v, axis=0))
    print(hpd(v))
printSummary('p', p)
for data in p.T:
    print(data.shape)
    seaborn.distplot(data, kde=False)
plt.savefig('p.png')
Libraries:
pip install -U pip
pip install -e git+https://github.com/google/edward2.git#4a8ed9f5b1dac0190867c48e816168f9f28b5129#egg=edward2
pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow-2.0.0-cp37-cp37m-manylinux2010_x86_64.whl#egg=tensorflow
pip install tensorflow-probability
Sometimes I see the following (when acceptance rate=0):
And, sometimes I see the following (when acceptance rate>.9):
When I get unstable results in Bayesian inference (I use mc-stan, but it's also using NUTS), it's usually because either the priors and likelihood are mis-specified, or the hyperparameters are not good for the problem.
That first graph shows that the sampler never moved away from the initial guess at the answers (hence the 0 acceptance rate). It also worries me that the green distribution seems to sit right on 0. The Beta(1,1) has positive probability at 0, but p=0 might be an unstable solution here (as in, the sampler may not be able to calculate the derivative at that point, gets a NaN, and so doesn't know where to sample next; a complete guess on my part).
Can you force the initial condition to be 0 and see if that always creates a failed sampling?
Other than that, I would try tweaking the hyperparameters, such as step size, number of iterations, etc...
Also, you may want to simplify the example by only using one N. Might help you diagnose. Good luck!
tf.random.uniform's maxval default value is None. I changed it to 1, and the result became stable:
tf.random.uniform((len(N),), minval=0, maxval=1)
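Following the suggestion to simplify, here is a minimal self-contained sketch of the same Beta-Binomial model written with tfp.distributions directly instead of edward2 (a deliberate substitution, not the asker's setup), with the initial state drawn from [0, 1) as in the fix above:
import numpy as np
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
p_true = 0.15
N = np.array([10, 100, 1000], dtype=np.float32)
successN = np.random.binomial(p=p_true, n=N.astype(int)).astype(np.float32)
def target_log_prob_fn(p):
    # Beta(1, 1) prior on each p, Binomial likelihood for the observed counts
    prior = tfd.Beta(concentration1=tf.ones(len(N)), concentration0=tf.ones(len(N)))
    likelihood = tfd.Binomial(total_count=N, probs=p)
    return tf.reduce_sum(prior.log_prob(p)) + tf.reduce_sum(likelihood.log_prob(successN))
kernel = tfp.mcmc.NoUTurnSampler(target_log_prob_fn=target_log_prob_fn, step_size=0.01)
trace, kernel_results = tfp.mcmc.sample_chain(
    num_results=1000,
    num_burnin_steps=500,
    current_state=[tf.random.uniform((len(N),), minval=0, maxval=1)],
    kernel=kernel,
    trace_fn=lambda current_state, kr: kr)
print('acceptance rate', np.mean(kernel_results.is_accepted.numpy()))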

Using SMOTE with NaN values

Is there a way one can use SMOTE with NaNs?
Here is a dummy program to try using SMOTE in the presence of NaN values:
# Imports
from collections import Counter
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import Imputer
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline
from imblearn.combine import SMOTEENN
# Load data
bc = load_breast_cancer()
X, y = bc.data, bc.target
# Initial number of samples per class
print('Number of samples for both classes: {} and {}.'.format(*Counter(y).values()))
# SMOTEd class distribution
print('Dataset has %s missing values.' % np.isnan(X).sum())
_, y_resampled = SMOTE().fit_sample(X, y)
print('Number of samples for both classes: {} and {}.'.format(*Counter(y_resampled).values()))
# Generate artificial missing values
X[X > 1.0] = np.nan
print('Dataset has %s missing values.' % np.isnan(X).sum())
#_, y_resampled = make_pipeline(Imputer(), SMOTE()).fit_sample(X, y)
sm = SMOTE(ratio = 'auto',k_neighbors = 5, n_jobs = -1)
smote_enn = SMOTEENN(smote = sm)
x_train_res, y_train_res = smote_enn.fit_sample(X, y)
print('Number of samples for both classes: {} and {}.'.format(*Counter(y_train_res).values()))
I get the following output/error:
Number of samples for both classes: 212 and 357.
Dataset has 0 missing values.
Number of samples for both classes: 357 and 357.
Dataset has 6051 missing values.
ValueError: Input contains NaN, infinity or a value too large for dtype('float64').
You already include the answer in your commented-out line. Notice that fit_resample is used instead of fit_sample. You should use make_pipeline as follows:
# Imports
import numpy as np
from collections import Counter
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline
from imblearn.combine import SMOTEENN
# Load data
bc = load_breast_cancer()
X, y = bc.data, bc.target
X[X > 1.0] = np.nan
# Over-sampling
smote = SMOTE(ratio='auto',k_neighbors=5, n_jobs=-1)
smote_enn = make_pipeline(SimpleImputer(), SMOTEENN(smote=smote))
_, y_res = smote_enn.fit_resample(X, y)
# Class distribution
print('Number of samples for both classes: {} and {}.'.format(*Counter(y_res).values()))
Check also your imbalanced-learn version.
Generally no, SMOTE is preparing a data set for further model fitting.
Usual models (like random forest, etc.) do not work with NA in the label variable, because what are you actually predicting here? The same goes for NA in the predictor variables, where most algorithms either do not work or simply ignore cases with NA.
So the error is pretty much by design: you cannot and should not have missing values in your training data set for the algorithm, and logically you do not want to "balance" cases with missing values; you only want to SMOTE cases with valid labels.
If you feel that missing labels still represent valid information that should be balanced (e.g. you actually want to oversample the NA class because you think it is underrepresented), then it should not be a missing value but rather a defined value called "Unknown" or something similar, indicating a known class with the characteristic of "NA". But I do not really see any research questions where this makes sense.
Update 1:
Another way to go is to impute the missing values first, so that you actually have three steps in fitting your model (sketched after the list):
Imputing missing values (using MICE or similar)
SMOTE to balance training set
Fit algorithm/model
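A minimal sketch of those three steps using imbalanced-learn's pipeline, assuming a recent imbalanced-learn release (SimpleImputer stands in for MICE here, and the random forest is only an example estimator):
import numpy as np
from collections import Counter
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline
# Load data and punch artificial holes into it, as in the question
bc = load_breast_cancer()
X, y = bc.data, bc.target
X[X > 1.0] = np.nan
print('Class distribution: {} and {}.'.format(*Counter(y).values()))
# 1. impute, 2. SMOTE, 3. fit -- all inside one imblearn pipeline
model = make_pipeline(
    SimpleImputer(strategy='mean'),
    SMOTE(k_neighbors=5),
    RandomForestClassifier()
)
model.fit(X, y)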

Evaluation of log density for various values of `mean`

I can evaluate the log probability density of a multivariate normal by doing
import numpy as np
import scipy.stats
scipy.stats.multivariate_normal.logpdf([0,0], mean = np.zeros(2), cov = np.eye(2))
Now, I'm interested in evaluating the log density of the point [0,0] over a variety of values of mean. Here is what I have tried
import numpy as np
import scipy.stats
grid = np.linspace(-2,2,51)
x,y = np.meshgrid(grid,grid)
scipy.stats.multivariate_normal.logpdf([0,0], mean = np.stack([x,y], axis = -1), cov = np.eye(2))
This results in an error: ValueError: Array 'mean' must be a vector of length 5202.
How can I evaluate the log density of a multivariate normal over a variety of values of mean?
As your error suggests, logpdf expects a 1D array for the mean argument.
Since your covariance matrix is 2x2, you should pass a length-2 vector as mean.
If you want to evaluate the density for multiple mean values, you can use a for loop after flattening x and y as follows:
import numpy as np
import scipy.stats
grid = np.linspace(-2,2,51)
x,y = np.meshgrid(grid,grid)
x,y = x.flatten(), y.flatten()
res = []
for i in range(len(x)):
    x_i, y_i = x[i], y[i]
    res.append(scipy.stats.multivariate_normal.logpdf([0, 0], mean=[x_i, y_i], cov=np.eye(2)))
You can also use a list comprehension in place of the for loop:
res = [scipy.stats.multivariate_normal.logpdf([0, 0], mean=[x_i, y_i], cov=np.eye(2)) for x_i, y_i in zip(x, y)]
To visualize the result you can use matplotlib.pyplot:
import matplotlib.pyplot as plt
plt.figure()
plt.scatter(x,y,c=res)
plt.show()
But I don't see the point of trying to evaluate the multivariate Gaussian logpdf over several mean values.
In the case of a multivariate normal distribution, the argument x and the mean m play symmetric roles, as you can see in the exponential term: (x - m)^T Sigma^(-1) (x - m).
So what you are doing is equivalent to evaluating the logpdf of a multivariate Gaussian with mean [0,0] and covariance eye(2) at each of those points.
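Because of that symmetry, the whole grid can also be evaluated in one vectorized call by treating the candidate means as the points; a minimal sketch, assuming the same grid as above:
import numpy as np
import scipy.stats
grid = np.linspace(-2, 2, 51)
x, y = np.meshgrid(grid, grid)
# Swap the roles of point and mean: evaluate the standard bivariate normal
# centred at [0, 0] at every grid point in a single vectorized call.
points = np.stack([x, y], axis=-1)  # shape (51, 51, 2)
res = scipy.stats.multivariate_normal.logpdf(points, mean=[0, 0], cov=np.eye(2))
print(res.shape)  # (51, 51)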

Locally weighted smoothing for binary valued random variable

I have a random variable as follows:
f(x) = 1 with probability g(x)
f(x) = 0 with probability 1-g(x)
where 0 < g(x) < 1.
Assume g(x) = x. Let's say I am observing this variable without knowing the function g and obtained 200 samples as follows:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binned_statistic
list = np.ndarray(shape=(200,2))
g = np.random.rand(200)
for i in range(len(g)):
    list[i] = (g[i], np.random.choice([0, 1], p=[1-g[i], g[i]]))
print(list)
plt.plot(list[:,0], list[:,1], 'o')
Plot of 0s and 1s
Now, I would like to retrieve the function g from these points. The best I could think of was to draw a histogram and use the mean statistic:
bin_means, bin_edges, bin_number = binned_statistic(list[:,0], list[:,1], statistic='mean', bins=10)
plt.hlines(bin_means, bin_edges[:-1], bin_edges[1:], lw=2)
Histogram mean statistics
Instead, I would like to have a continuous estimation of the generating function.
I guess it is about kernel density estimation but I could not find the appropriate pointer.
This is straightforward with seaborn, without explicitly fitting an estimator:
import seaborn as sns
g = sns.lmplot(x= , y= , y_jitter=.02 , logistic=True)
Plug in x = your exogenous variable and, analogously, y = your dependent variable. y_jitter jitters the points for better visibility if you have a lot of data points. logistic=True is the main point here: it will give you the logistic regression line of the data.
Seaborn is built on top of matplotlib and works great with pandas, in case you want to extend your data to a DataFrame.
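A minimal sketch with simulated data like the question's, assuming pandas and statsmodels are installed (the 'x' and 'y' column names are only illustrative):
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# Simulate 200 observations with g(x) = x, as in the question
g = np.random.rand(200)
outcomes = np.random.binomial(1, g)
df = pd.DataFrame({'x': g, 'y': outcomes})
# logistic=True fits and draws the logistic regression curve (needs statsmodels)
sns.lmplot(data=df, x='x', y='y', logistic=True, y_jitter=.02)
plt.show()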