I want to get a probability per class for my output in TensorFlow.
Using softmax yields the following:
A: 0.7
B: 0.2
C: 0.1
But what I want is per-class probabilities that are independent of each other (so they don't have to sum to 1), like:
A probability: 0.8
B probability: 0.6
C probability: 0.7
Instead of using softmax, use tf.nn.sigmoid, as in:
tf.nn.sigmoid(<output-tensor>)
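For instance, here is a minimal sketch contrasting the two activations (assuming TensorFlow 2.x eager execution and a hypothetical logits tensor):

import tensorflow as tf

# Hypothetical unnormalized scores (logits) for classes A, B, C
logits = tf.constant([[2.0, -0.5, 1.0]])

# softmax: classes compete, outputs sum to 1
print(tf.nn.softmax(logits).numpy())

# sigmoid: each class gets an independent probability in (0, 1),
# so the outputs do not need to sum to 1
print(tf.nn.sigmoid(logits).numpy())

For training, the loss would then typically be per-class binary cross-entropy (e.g. tf.nn.sigmoid_cross_entropy_with_logits) rather than categorical cross-entropy.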
I am new to scipy minimize. I want to minimize a function. There are 2 vectors in play:
x: 4-element vector of spending
y: 4-element vector of cost per customer
Each element of y is a piecewise function of the corresponding spend: roughly 50 for spend from 0 to 100000, and 0.0005 * x for spend from 100000 to infinity.
The objective function is to minimize the total spend:
def objective(x):
    x1 = x[0]
    x2 = x[1]
    x3 = x[2]
    x4 = x[3]
    return x1 + x2 + x3 + x4
As the constraint I have the number of users I have to sign up, like this:
def constraint1(x, y):
    # For an 'ineq' constraint, SLSQP expects a value that is >= 0 when satisfied
    return x[0]/y[0] + x[1]/y[1] + x[2]/y[2] + x[3]/y[3] - 5035
Bounds and solver setup look like this:
b = (0, 1000000)
bnds = (b, b, b, b)
con1 = {'type': 'ineq', 'fun': constraint1}
x0 = [20000, 20000, 20000, 20000]
sol = minimize(objective, x0, method="SLSQP", bounds=bnds, constraints=con1)
I simply do not know how to define the y vector properly. Any feedback or help would be very much appreciated; I can't find any example anywhere on the internet.
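(For what it's worth, here is a minimal sketch of one way the piecewise y could be encoded, assuming each channel's cost per customer depends only on the spend in that same channel, so y is computed inside the constraint instead of being passed separately. Note the kink at 100000 makes the problem non-smooth, which gradient-based SLSQP may handle poorly.)

from scipy.optimize import minimize

def cost_per_customer(spend):
    # Piecewise definition from the question: 50 up to 100000, 0.0005 * spend above
    return 50.0 if spend <= 100000 else 0.0005 * spend

def objective(x):
    return sum(x)

def constraint1(x):
    # Users signed up across all four channels must exceed 5035 (>= 0 when satisfied)
    return sum(xi / cost_per_customer(xi) for xi in x) - 5035

b = (0, 1000000)
bnds = (b, b, b, b)
con1 = {'type': 'ineq', 'fun': constraint1}
x0 = [20000, 20000, 20000, 20000]
sol = minimize(objective, x0, method="SLSQP", bounds=bnds, constraints=con1)
print(sol.x, sol.fun)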
I would like to learn how to use the exponential distribution to calculate a probability.
This is my exponential rate lambda: 0.0035
What is the probability that my object becomes defective before 100 hours of work? P(X < 100)
How could I write this with NumPy or SciPy? Thanks!
Edit: this is the math:
P(X < 100) = 1 - e^(-0.0035 * 100) ≈ 0.295 ≈ 30%
Edit 2:
I may have found something here:
http://web.stanford.edu/class/archive/cs/cs109/cs109.1192/handouts/pythonForProbability.html
Edit 3:
This is my attempt with scipy:
from scipy import stats
B = stats.expon(0.0035)  # Declare B to be an exponential random variable
print(B.pdf(1))    # f(1), the probability density at 1
print(B.cdf(100))  # F(100), which is also P(B < 100)
print(B.rvs())     # Get a random sample from B
but B.cdf is wrong: it prints 1, while it should print 0.30. Please help!
B.pdf prints 0.369: what is this?
Edit 4: I've done it with the Python math lib like this:
import math

lambdaCalcul = -0.0035 * 100
MyExponentialProbability = 1 - math.exp(lambdaCalcul)
print("My probability is", MyExponentialProbability * 100, "%")
Any other solution with NumPy or SciPy is appreciated, thank you.
The expon(..) distribution takes loc and scale parameters, where scale is the inverse of the rate λ (for an exponential distribution, the scale equals both the mean and the standard deviation). In your attempt, stats.expon(0.0035) sets loc, not the rate, which is why B.cdf(100) is (almost) 1. Since your rate is λ = 0.0035, we can construct the distribution with:
B = stats.expon(scale=1/0.0035)
Then the cumulative distribution function gives, for P(X < 100):
>>> print(B.cdf(100))
0.2953119102812866
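As a sanity check, the same value falls out of the closed-form CDF with plain NumPy:

import numpy as np

lam = 0.0035
# P(X < 100) = 1 - exp(-lam * 100) for an exponential distribution
print(1 - np.exp(-lam * 100))  # 0.2953119102812866, matching B.cdf(100)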
I have a dataframe (or a series) of measured voltage (in V) indexed by timestamps (in seconds). I want to know the duration of the longest segment (i.e. run of consecutive values) of voltage greater than a threshold.
Example:
time voltage
0.0 1.2
0.1 1.8
0.2 2.2
0.3 2.3
0.4 1.9
0.5 1.5
0.6 2.1
0.7 2.3
0.8 2.2
0.9 1.9
1.0 1.6
In this example, the threshold is 2.0 V, and the desired answer is 0.3 seconds.
Real data is made of 10k or more samples, and the number of segments of values above the threshold is completely random; there is even the possibility of having only one segment with all values above the threshold.
I think the first step is to identify these segments and separate them, then compute the duration of each.
You can create a True/False sequence with a boolean comparison. Then use value_counts and max to get the length of the longest run:
s = df.voltage > 2
# Label each run of True values by the count of Falses before it,
# then count the size of each run and take the largest
(~s).cumsum()[s].value_counts().max()
Output
3
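Note that this gives the run length in samples, not seconds. A small end-to-end sketch (assuming the uniform 0.1 s sampling period from the example) to convert it to a duration:

import pandas as pd

df = pd.DataFrame({
    "time":    [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0],
    "voltage": [1.2, 1.8, 2.2, 2.3, 1.9, 1.5, 2.1, 2.3, 2.2, 1.9, 1.6],
})

s = df.voltage > 2
longest_run = (~s).cumsum()[s].value_counts().max()  # 3 samples (0.6 to 0.8)
print((longest_run - 1) * 0.1)  # 0.2 seconds from first to last sample of the run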
IIUC (if I understand correctly):
n = 2
s = df.voltage.gt(n)
# Sum the time differences within all above-threshold segments
df.time[s].groupby((~s).cumsum()).diff().sum()
Out[1218]: 0.30000000000000004
And if you need the longest single duration: notice that here it is from 0.6 to 0.8, which should be 0.2 seconds:
df.time[s].groupby((~s).cumsum()).apply(lambda x : x.diff().sum()).max()
Out[1221]: 0.20000000000000007
Ok, so I have a neural network that classifies fire size into 3 groups: 0-1, 1-100, and over 100 acres. I need a loss function that weights the loss as double when the classifier guesses a class that is off by 2 (e.g. actual = 0, predicted = 2).
Double of what?
A) Is it double the loss value when the classifier guesses correctly?
B) Or double the loss value when the classifier is off by 1?
C) Can we relax this 'double' constraint, and can we assume that any suitable higher power would suffice?
Let us assume A).
Let f(x) denote the probability that your input belongs to a particular class, where x is the absolute difference between the true and predicted categorical values.
Then we will see that f(0) = 0.5 is a solution under assumption A. This means that f(1) = 0.25 and f(2) = 0.25. By the way, the fact that f(1) == f(2) doesn't look natural.
Assume that your classifier calculates a function f(x), and uses it as follows.
def classifier_output(firesize):
    if firesize >= 0 and firesize < 1.0:
        return [f(0), f(1), f(2)]
    elif firesize >= 1.0 and firesize < 100.0:
        return [f(1), f(0), f(1)]
    else:
        assert firesize >= 100.0
        return [f(2), f(1), f(0)]
The constraints are:
C1) f(x) >= 0
C2) The components of the output vector should always sum to 1.0, i.e. the sum of all three components of the return value should always be 1.
C3) When the true class and predicted class differ by 2, the one-hot encoding loss will be -log(f(2)). According to assumption A, this should equal -2*log(f(0)), i.e.:
log(f(2)) = 2 * log(f(0))
This translates to:
f(2) = f(0) * f(0)
Let us put z = f(0). Now f(2) = z*z. We don't know f(1); let us call it y.
From constraint C2 we get one equation per distinct output vector (the first from the extreme classes, the second from the middle class):
z + z*z + y = 1
z + 2*y = 1
A solution to the above is z = 0.5, y = 0.25.
If you assume B), you won't be able to find such a function.
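A quick numeric check (just re-verifying the algebra above) that z = 0.5, y = 0.25 satisfies both equations:

z, y = 0.5, 0.25
assert abs(z + z * z + y - 1.0) < 1e-12  # extreme classes: f(0) + f(1) + f(2) = 1
assert abs(z + 2 * y - 1.0) < 1e-12      # middle class: f(1) + f(0) + f(1) = 1
print(z, y, z * z)  # f(0) = 0.5, f(1) = 0.25, f(2) = 0.25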
(See the conditional independence example photo; it's on page 8 of the full PDF lesson.)
I've been looking at this for a long time now; can anyone explain how for P13 we end up with <0.31, 0.69>? I'm not sure how the α gets distributed here. When I calculate 0.2 * (0.04 + 0.16 + 0.16) for the x column I get 0.072, so how do we end up with 0.31?
Thank you.
The α is a normalization constant that is supposed to ensure that you have proper probabilities, i.e. values in [0, 1] that sum to 1. As such, it has to be 1 over the sum of all possible values. For your example, we calculate it as follows.
Let's first evaluate the single expressions in the tuple:
0.2 * (0.04 + 0.16 + 0.16) = 0.072
0.8 * (0.04 + 0.16) = 0.16
Notice that these two values do not specify a probability distribution (they don't sum to 1).
Therefore, we calculate the normalization constant α as 1 over the sum of these values:
α = 1 / (0.072 + 0.16) = 4.310345
With this, we normalize the original values as follows:
0.072 * α = 0.310345
0.16 * α = 0.689655
Notice how these values do indeed specify a probability distribution now. (They are in [0, 1] and sum to 1).
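The same normalization, spelled out in a few lines of Python:

vals = [0.2 * (0.04 + 0.16 + 0.16), 0.8 * (0.04 + 0.16)]  # [0.072, 0.16]
alpha = 1 / sum(vals)                                      # 1 / 0.232 ≈ 4.310345
print([v * alpha for v in vals])  # [0.3103..., 0.6896...], sums to 1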
I hope this helps :)