I'm fairly new to Python, so bear with me. I have plotted a histogram from some generated data stored in the variable vals, which contains a very large number of points. I have restricted the histogram to values between 104 and 155, as follows:
import numpy as np
import matplotlib.pyplot as plt

bin_heights, bin_edges = np.histogram(vals, range=[104, 155], bins=30)
bin_centres = (bin_edges[:-1] + bin_edges[1:]) / 2
plt.errorbar(bin_centres, bin_heights, np.sqrt(bin_heights), fmt=',', capsize=2)
plt.xlabel(r"$m_{\gamma\gamma}$ (GeV)")
plt.ylabel("Number of entries")
plt.show()
This gives the following plot:
My next step is to take into account only the values from vals which are at most 120. I have done this as follows:
background_data = [j for j in vals if j <= 120]  # to avoid taking the signal bump, upper limit of 120 GeV set
I need to plot a curve on the same plot as the histogram, which follows the form $B(x) = A e^{-x/\lambda}$.
I then estimated a value of λ using the maximum likelihood estimator, which for an exponential distribution is the sample mean, $\hat{\lambda} = \frac{1}{N}\sum_i x_i$:
background_data = [j for j in vals if j <= 120]  # to avoid taking the signal bump, upper limit of 120 GeV set
#print(background_data)
N_background=len(background_data)
print(N_background)
sigma_background_data=sum(background_data)
print(sigma_background_data)
lamb = (sigma_background_data)/(N_background) #maximum likelihood estimator for lambda
print('lambda estimate is', lamb)
where lamb = λ. I got a value of roughly lamb = 27.75, which I know is correct. I now need to get an estimate for A.
I have been advised to do this as follows:
Given a value of λ, find A by scaling the PDF to the data such that the area beneath the scaled PDF has equal area to the data.
I'm not quite sure what this means or how to go about it (PDF meaning probability density function). I assume an integration has to take place, so to get the area under the data (vals), I have done this:
from scipy import integrate

data_area = integrate.cumtrapz(background_data, x=None, dx=1.0)
print(data_area)
plt.plot(background_data, data_area)
However, this gives me an error
ValueError: x and y must have same first dimension, but have shapes (981555,) and (981554,)
I'm not sure how to fix it. The end result should be something like:
See the cumtrapz docs:
Returns: ... If initial is None, the shape is such that the axis of integration has one less value than y. If initial is given, the shape is equal to that of y.
So you either pass an initial value, like
data_area = integrate.cumtrapz(background_data, x=None, dx=1.0, initial=0.0)
or discard the first value of background_data:
plt.plot(background_data[1:], data_area)
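As for the advice about scaling the PDF: one way to read it is that A should be chosen so that the area under $A e^{-x/\lambda}$ over the background window equals the area under the histogram there. A minimal sketch, assuming vals and the lamb estimate from the question (the variable names below are mine, not from the original code):

import numpy as np
import matplotlib.pyplot as plt

lamb = 27.75                    # MLE estimate from the question
lo, hi = 104.0, 120.0           # background-only window
bin_width = (155 - 104) / 30    # bin width used for the histogram

# area under the histogram over [lo, hi]: entry count times bin width
n_in_window = sum(1 for j in vals if lo <= j <= hi)
data_area = n_in_window * bin_width

# analytic area under exp(-x/lamb) over [lo, hi]; scaled by A it must match data_area
curve_area_unit_A = lamb * (np.exp(-lo / lamb) - np.exp(-hi / lamb))
A = data_area / curve_area_unit_A

x = np.linspace(104, 155, 500)
plt.plot(x, A * np.exp(-x / lamb))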
I'm trying to implement the gradient descent method for solving $Ax = b$ for a positive definite symmetric matrix $A$ of size about $9600 \times 9600$. I thought my code was relatively simple:
# Solves the problem Ax = b for x within epsilon tolerance or until MAX_ITERATION is reached
def GradientDescent(Amat, target, epsilon=.01, MAX_ITERATION=100, x=np.zeros(9604)):
    CurrentRes = target - np.matmul(Amat, x)
    count = 0
    while np.linalg.norm(CurrentRes) > epsilon and count < MAX_ITERATION:
        Ar = np.matmul(Amat, CurrentRes)
        alpha = CurrentRes.T.dot(CurrentRes) / CurrentRes.T.dot(Ar)
        x = x + alpha * CurrentRes
        Ax = np.matmul(Amat, x)
        CurrentRes = target - Ax
        count = count + 1
    return (x, count, np.linalg.norm(CurrentRes))
#A is square matrix about 9600x9600 and b is about 9600x1
GDSum = GradientDescent(A,b)
but the above takes almost 3 minutes to run a single iteration of the main while loop.
I didn't think that $9600 \times 9600$ was too big for NumPy to handle effectively, but even the step of computing alpha which is just the quotient of two dot products is taking over 30 seconds.
I tried timing each action in the while loop, and all of them run much slower than expected: a single matrix multiplication takes almost a minute, though the vector addition and subtraction steps at least seem to run quickly.
Perhaps the most relevant bit of information is missing.
Your function is fast when A and b are Numpy arrays, but it's terribly slow when they are lists.
Is that your case?
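If it is, converting once up front should fix it. A quick sketch, assuming A and b are the objects being passed in:

import numpy as np

A = np.asarray(A, dtype=float)  # no-op if A is already an ndarray
b = np.asarray(b, dtype=float)

GDSum = GradientDescent(A, b)

With plain lists, np.matmul has to convert its inputs to arrays on every call, and for a 9600x9600 matrix that conversion dominates the runtime.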
I am trying to place some reasonable axis increments on my VB.NET graph. I have used:
Chart1.Series(0).Points.DataBindXY(Wavelength, Normalised)
Chart1.ChartAreas(0).AxisX.RoundAxisValues()
Chart1.ChartAreas(0).AxisX.Minimum = 0
Chart1.ChartAreas(0).AxisX.Maximum = 2048
Chart1.ChartAreas(0).AxisX.Interval = 100
This plots the graph corresponding to indices 0 to 2048, in intervals of 100. However, as the x-axis array starts at 341.1049 and has non-integral spacing, the x axis gets ugly labels with many decimal places.
Is there a way of displaying from, say, 300 to 10000 in increments of 100?
Here's my chart; see how the increments have many decimals and aren't nicely spaced:
This is because AxisX.Maximum and .Interval use the index spacing of the X-axis series rather than the actual values, though I can't seem to find any reference to alternatives.
Now I understand what you are asking. Interval is not what you are looking for; you need AxisX.MajorUnit. Interval says after how many points a label should be rendered. For example, if you set it to 1, you will get a label on the axis for every point in the series.
With MajorUnit and MinorUnit you control what you mean by an interval.
Here is something that could help you: you can use LabelStyle.Format like this to format your axis labels:
With Chart1.ChartAreas(0).AxisX
    .Minimum = 300
    .Maximum = 10000
    .MajorUnit = 100
    ' Here you can format the axis labels
    .LabelStyle.Format = "0.###"
    .Title = "TestTitle"
    .TitleFont = New Font(New FontFamily("Arial"), 9, FontStyle.Bold)
End With
This is for the X axis; to change the Y axis, just use Chart1.ChartAreas(0).AxisY.
I am trying to understand how I can find or calculate the intersection points of a straight line and a diagonal scatter plot. To give a better idea: on an X,Y plot, if I have a straight horizontal line at y = # (any number) that crosses an array of scattered points (which form a diagonal line), how can I calculate the points of intersection of the two lines?
The problem I am having is that the scattered array has multiple points around my horizontal line; what I would like to do is find the point that hits the horizontal line first and the point that hits it last.
Please refer to the image for a better understanding. The two annotated points are the ones I am trying to extract with VBA. Is this possible? The image shows two sets of scattered arrays, but I am only interested in figuring out the method for one of them; if I can extract this for one scattered array, I can replicate the method for the next one.
http://imgur.com/9YTNeco
It's hard to give you any specifics without knowing the structure of your data, but this is the approach I'd use.
I'll assume your data looks like this (for both of the plots):
A B
x1 y1
x2 y2
x3 y3
Loop through the axis like so:
'the y-values need to be as high as the black axis you've got there;
'I'll assume that's zero
Dim i As Long, k As Double
With Worksheets("Sheet name")
    k = .Cells(1, 1).Value
    'we begin at the first x-value in your column and look for the lowest
    'x whose y-value sits on the "black" axis; k will hold that value
    For i = 1 To .UsedRange.Rows.Count
        If .Cells(i, 1).Value < k Then
            If .Cells(i, 2).Value = 0 Then  '0 = y-value of the "black" axis
                k = .Cells(i, 1).Value      'every time we find a lower value, keep it in k
            End If
        End If
    Next i
End With
The lowest value will be your "low limit"-point.
You can use the same kind of algorithm for the highest value of the same scatter plot (just change the "<" to ">"), and to find the lowest and highest values for the other array, just change the column ID.
HTH
I'm overplotting multicolored lines on an image; the color of the lines is supposed to represent a given parameter that varies between roughly -1 and 3.
The following portion of code builds these lines:
x = self._tprun.r[0,p,::100] # x coordinate
y = self._tprun.r[1,p,::100] # y coordinate
points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
# 'color' is the parameter that will color the line
vmin = self._color[p,:].min()
vmax = self._color[p,:].max()
lc = LineCollection(segments,
cmap=plt.get_cmap('jet'),
norm=plt.Normalize(vmin=vmin,vmax=vmax))
lc.set_array(self._color[p,:])
lc.set_linewidth(1)
self._ax.add_collection(lc)
This code is inside a loop over 'p', so it creates several lines at the locations given by the arrays 'x' and 'y', each colored by the values of 'self._color[p,:]'.
As I said, '_color[p,:]' roughly varies between -1 and 3. Here is an example of what '_color[p,:]' may be :
My problem is that the lines that are created show hardly any color variation: they all look like monochrome dark blue, even though _color[p,:] varies a lot and I ask the normalization to use its min/max values.
Here is an example of such a line (look at the oscillating dark blue line; the other black lines are a contour of another value):
Is there something I'm missing in the way these functions work?
Got it!
The answer to the question is here:
x = self._tprun.r[0,p,::100] # re-sample every 100 values !!
y = self._tprun.r[1,p,::100] #
# [...]
#lc.set_array(self._color[p,:]) # self._color[p,:] is not resampled
lc.set_array(self._color[p,::100]) # this works because resampled
meaning that the 'color' array was actually much larger than the arrays used for the positions of the line segments: only the first values of '_color' were used, and those values do not vary much.
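For anyone hitting the same thing, here is a minimal standalone sketch (with made-up data, not from my code) showing the constraint: the array passed to set_array needs one value per segment, sampled the same way as x and y:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

t = np.linspace(0, 4 * np.pi, 1000)
x, y = t, np.sin(t)
color = np.cos(t)                 # parameter to map onto the line

points = np.array([x, y]).T.reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)

lc = LineCollection(segments, cmap=plt.get_cmap('jet'),
                    norm=plt.Normalize(vmin=color.min(), vmax=color.max()))
lc.set_array(color[:-1])          # one value per segment, same sampling as x and y
lc.set_linewidth(1)

fig, ax = plt.subplots()
ax.add_collection(lc)
ax.autoscale()
plt.show()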
I've created a codebook using k-means of size 4000x300 (4000 centroids, each with 300 features). Using the codebook, I then want to label an input vector (for purposes of binning later on). The input vector is of size Nx300, where N is the total number of input instances I receive.
To compute the labels, I calculate the closest centroid for each of the input vectors. To do so, I compare each input vector against all centroids and pick the centroid with the minimum distance. The label is then just the index of that centroid.
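In symbols, $\mathrm{label}_i = \arg\min_j \lVert x_i - c_j \rVert^2$, where $c_j$ is the $j$-th centroid (the squared distance suffices, since sqrt is monotonic).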
My current Matlab code looks like:
function labels = assign_labels(centroids, X)
    labels = zeros(size(X, 1), 1);
    % for each X, calculate the distance from each centroid
    for i = 1:size(X, 1)
        % distance of X_i from all j centroids is: sum((X_i - centroid_j)^2)
        % note: we leave off the sqrt as an optimization
        distances = sum(bsxfun(@minus, centroids, X(i, :)) .^ 2, 2);
        [value, label] = min(distances);
        labels(i) = label;
    end
end
However, this code is still fairly slow (for my purposes), and I was hoping there might be a way to optimize the code further.
One obvious issue is that there is a for-loop, which is the bane of good performance in Matlab. I've been trying to come up with a way to get rid of it, but with no luck (I looked into using arrayfun in conjunction with bsxfun, but haven't gotten that to work). Alternatively, if someone knows of any other way to speed this up, I would greatly appreciate it.
Update
After doing some searching, I couldn't find a great solution using Matlab, so I decided to look at what is used in Python's scikits.learn package for 'euclidean_distance' (shortened):
XX = sum(X * X, axis=1)[:, newaxis]
YY = Y.copy()
YY **= 2
YY = sum(YY, axis=1)[newaxis, :]
distances = XX + YY
distances -= 2 * dot(X, Y.T)
distances = maximum(distances, 0)
which uses the binomial expansion of the squared Euclidean distance, $(x-y)^2 = x^2 + y^2 - 2xy$, which from what I've read usually runs faster. My completely untested Matlab translation is:
XX = sum(data .* data, 2);
YY = sum(center .^ 2, 2);
distances = bsxfun(@plus, XX, YY') - 2 * data * center';
[~, labels] = min(distances, [], 2);
Use the following function to calculate your distances; you should see an order of magnitude speed-up.
The two matrices A and B have the columns as the dimensions and the rows as each point. A is your matrix of centroids, B is your matrix of data points.
function D = getSim(A, B)
    Qa = repmat(dot(A, A, 2), 1, size(B, 1));
    Qb = repmat(dot(B, B, 2), 1, size(A, 1));
    D = Qa + Qb' - 2 * A * B';
end
You can vectorize it by converting to cells and using cellfun:
[nRows, nCols] = size(X);
XCell = num2cell(X, 2);
dist = reshape(cell2mat(cellfun(@(x) sum(bsxfun(@minus, centroids, x).^2, 2), XCell, 'UniformOutput', false)), size(centroids, 1), nRows);
[~, labels] = min(dist);
Explanation:
We assign each row of X to its own cell in the second line
This piece, @(x)(sum(bsxfun(@minus,centroids,x).^2,2)), is an anonymous function which is the same as your distances=... line; using cell2mat, we apply it to each row of X.
The labels are then the indices of the minimum row along each column.
For a true matrix implementation, you may consider trying something along the lines of:
P2 = kron(centroids, ones(size(X,1),1));
Q2 = kron(ones(size(centroids,1),1), X);
distances = reshape(sum((Q2-P2).^2,2), size(X,1), size(centroids,1));
Note
This assumes the data is organized as [x1 y1 ...; x2 y2 ...;...]
You can use a more efficient algorithm for nearest-neighbor search than brute force.
The most popular approach is the Kd-tree, with O(log(n)) average query time instead of the O(n) brute-force complexity.
Regarding a Matlab implementation of Kd-trees, you can have a look here
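For illustration, here is a sketch of the same assignment using a Kd-tree in Python/SciPy rather than Matlab, assuming centroids (4000x300) and X (Nx300) as in the question:

import numpy as np
from scipy.spatial import cKDTree

tree = cKDTree(centroids)    # build the tree over the 4000 centroids once
_, labels = tree.query(X)    # index of the nearest centroid per row of X (0-based)

One caveat: with 300 features, the O(log(n)) advantage tends to erode (Kd-trees degrade in high dimensions), so it is worth benchmarking against the vectorized brute-force versions above.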