How do I estimate the point of fracture of a curve in gnuplot - iteration

I want to estimate the point of fracture x_F (red circle in the example plot) via a ternary operator, so that I can restrict the range of my plot to it.
Example plot
To restrict the plot to the x-value at the maximum of y (X(Y_max)), the stats command in combination with the ternary operator seems sufficient:
stats 'foo.csv' u 5 nooutput name 'Y_'
stats 'foo.csv' u 4 every ::Y_index_max::Y_index_max nooutput
X_max = STATS_max
plot 'foo.csv' u 4:(($4 <= X_max) ? $5 : 1/0) w l notitle
I cannot use the X_max variable, because there are several points beyond the point of fracture (x_n > x_F) due to measurement errors. My idea was to compare successive x-entries in column 4 to one another, find the first point which satisfies x_prev > x_curr, and save it as x_F = x_prev.

A simple delta function seems to do the trick: delta(x)=(D=x-old_D, old_D=x, D) with old_D=NaN, in combination with the ternary operator (delta($4)>0 ? $5 : 1/0), where $5 is the y-value, which will be plotted as long as the difference of two consecutive x-values is positive.

You want to discard any data point after dx has become negative for the first time, right? You'll need a flag variable, I called it broken, which is set after the first occurrence of dx < 0:
broken = 0
thisx = -1e300   # smaller than any real x, so the first row is always kept
plot dataf us (lastx=thisx, thisx=$4): \
    (broken ? 1/0 : ($4 < lastx) ? (broken=1, 1/0) : $5)
This uses the comma as "serial evaluator", same as in C, etc.
(Not tested now, as I don't have a suitable data set at hand and was too lazy to create one.)
Update: You can put the assignment broken=0 into the plot command itself,
plot broken=0, dataf us ....
so that the flag is reset when you replot, zoom the plot, etc.
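If you would rather pre-filter the data outside gnuplot, the same flag logic is a few lines of script. A minimal sketch in Python, assuming whitespace-separated columns with x in column 4 and y in column 5 (1-based, as in the gnuplot commands above):
# Keep rows only until the x column decreases for the first time.
kept = []
last_x = float('-inf')
with open('foo.csv') as f:
    for line in f:
        cols = line.split()
        try:
            x, y = float(cols[3]), float(cols[4])
        except (IndexError, ValueError):
            continue          # skip headers, comments, short rows
        if x < last_x:
            break             # first backward step in x: the fracture point
        kept.append((x, y))
        last_x = x

for x, y in kept:             # write the truncated data for plotting
    print(x, y)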

Related

Plotting an exponential function given one parameter

I'm fairly new to Python, so bear with me. I have plotted a histogram using some generated data. This data has many, many points; I have stored it in the variable vals. I have then plotted a histogram of these values, though I have limited it so that only values between 104 and 155 are taken into account. This has been done as follows:
bin_heights, bin_edges = np.histogram(vals, range=[104, 155], bins=30)
bin_centres = (bin_edges[:-1] + bin_edges[1:])/2.
plt.errorbar(bin_centres, bin_heights, np.sqrt(bin_heights), fmt=',', capsize=2)
plt.xlabel("$m_{\gamma\gamma} (GeV)$")
plt.ylabel("Number of entries")
plt.show()
This gives the histogram plot.
My next step is to take into account values from vals which are less than 120. I have done this as follows:
background_data = [j for j in vals if j <= 120] # to avoid taking the signal bump, upper limit of 120 GeV set
I need to plot a curve on the same plot as the histogram, which follows the form B(x) = Ae^(-x/λ)
I then estimated a value of λ using the maximum likelihood estimator formula:
background_data = [j for j in vals if j <= 120] # to avoid taking the signal bump, upper limit of 120 GeV set
#print(background_data)
N_background=len(background_data)
print(N_background)
sigma_background_data=sum(background_data)
print(sigma_background_data)
lamb = (sigma_background_data)/(N_background) #maximum likelihood estimator for lambda
print('lambda estimate is', lamb)
where lamb = λ. I got a value of roughly lamb = 27.75, which I know is correct. I now need to get an estimate for A.
I have been advised to do this as follows:
Given a value of λ, find A by scaling the PDF to the data such that the area beneath the scaled PDF has equal area to the data.
I'm not quite sure what this means, or how I'd go about trying to do this (PDF means probability density function). I assume an integration will have to take place, so to get the area under the data (vals), I have done this:
from scipy import integrate

data_area = integrate.cumtrapz(background_data, x=None, dx=1.0)
print(data_area)
plt.plot(background_data, data_area)
However, this gives me an error
ValueError: x and y must have same first dimension, but have shapes (981555,) and (981554,)
I'm not sure how to fix it. The end result should be the fitted exponential curve drawn on top of the histogram.
See the cumtrapz docs:
Returns: ... If initial is None, the shape is such that the axis of integration has one less value than y. If initial is given, the shape is equal to that of y.
So you either pass an initial value, like
data_area = integrate.cumtrapz(background_data, x=None, dx=1.0, initial = 0.0)
or discard the first value of the background_data:
plt.plot(background_data[1:], data_area)
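As for estimating A: one way to read that advice is that A should scale B(x) = A*exp(-x/λ) so that its integral over the histogram range equals the area under the histogram (entries per bin times bin width). For an exponential the integral has a closed form, so no numerical integration is needed. A minimal sketch, assuming the names vals and lamb from your code and the [104, 155] range (and ignoring that the signal bump adds a little extra area):
import numpy as np
import matplotlib.pyplot as plt

bin_heights, bin_edges = np.histogram(vals, range=[104, 155], bins=30)
bin_width = bin_edges[1] - bin_edges[0]

# Area under the histogram: total entries in range times bin width.
hist_area = bin_heights.sum() * bin_width

# Closed-form integral of exp(-x/lamb) from a to b.
a, b = bin_edges[0], bin_edges[-1]
unit_area = lamb * (np.exp(-a / lamb) - np.exp(-b / lamb))

A = hist_area / unit_area  # scale so the two areas match

x = np.linspace(a, b, 200)
plt.plot(x, A * np.exp(-x / lamb))  # overlay on the histogram/errorbar plot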

How to calculate slope of the line

I am trying to calculate the slope of the line for a 50 day EMA I created from the adjusted closing price on a few stocks I downloaded using the getSymbols function.
My EMA looks like this:
getSymbols("COLUM.CO")
COLUM.CO$EMA <- EMA(COLUM.CO[,6],n=50)
This gives me an extra column that contains the 50 day EMA on the adjusted closing price. Now I would like to include an additional column that contains the slope of this line. I'm sure it's a fairly easy answer, but I would really appreciate some help on this. Thank you in advance.
A good way to do this is with rolling least-squares regression. rollSFM does a fast and efficient job of computing the slope of a series. It usually makes sense to look at the slope in relation to units of price activity in time (bars), so x can simply be equally spaced points.
The only tricky part is working out an effective value of n, the length of the window over which you fit the slope.
library(quantmod)
getSymbols("AAPL")
AAPL$EMA <- EMA(Ad(AAPL),n=50)
# Compute slope over 50 bar lookback:
AAPL <- merge(AAPL, rollSFM(Ra = AAPL[, "EMA"],
                            Rb = 1:nrow(AAPL), n = 50))
The column labeled beta contains the rolling-window value of the slope (alpha contains the intercept, r.squared contains the R^2 value).
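For what it's worth, the rolling slope is nothing more than an ordinary least-squares fit of the EMA against the bar index over each trailing window, so the same idea is easy to sketch outside R as well. A rough Python version (hypothetical ema array; NaN until the window is full):
import numpy as np

def rolling_slope(y, n=50):
    """OLS slope of y against the bar index over each trailing window of n bars."""
    y = np.asarray(y, dtype=float)
    x = np.arange(n) - (n - 1) / 2.0   # centered index, so x.mean() == 0
    denom = (x ** 2).sum()
    out = np.full(len(y), np.nan)
    for i in range(n - 1, len(y)):
        w = y[i - n + 1:i + 1]
        out[i] = (x * w).sum() / denom # slope = sum(x*y)/sum(x^2) for centered x
    return out

# Example: the slope of a noisy line is recovered as roughly 0.5
ema = 0.5 * np.arange(200) + np.random.normal(scale=0.1, size=200)
print(rolling_slope(ema, n=50)[-1])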

Constraining on a Minimum of Distinct Values with PuLP

I've been using the PuLP library for a side project (daily fantasy sports) where I optimize the projected value of a lineup based on a series of constraints.
I've implemented most of them, but one constraint is that players must come from at least three separate teams.
This paper has an implementation (page 18, section 4.2).
It seems that they somehow derive an indicator variable for each team that's one if a given team has at least one player in the lineup, and then it constrains the sum of those indicators to be greater than or equal to 3.
Does anybody know how this would be implemented in PuLP?
Similar examples would also be helpful.
Any assistance would be super appreciated!
In this case you would define a binary indicator variable t that can only be 1 if at least one of the corresponding x variables is nonzero. In Python I don't like to name variables with a single letter, but as I have nothing else to go on, here is how I would do it in PuLP.
Assume that the variables lineups, players, players_by_team and teams are set somewhere else:
x_index = [(i, p) for i in lineups for p in players]
t_index = [(i, l) for i in lineups for l in teams]
x = LpVariable.dicts("x", x_index, lowBound=0)
t = LpVariable.dicts("t", t_index, cat=LpBinary)
for i in lineups:
    for l in teams:
        prob += t[i, l] <= lpSum([x[i, k] for k in players_by_team[l]])
    prob += lpSum([t[i, l] for l in teams]) >= 3
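To make that concrete, here is a self-contained toy version for a single lineup (all names and numbers made up: four teams of four players each, a six-player lineup, hypothetical projections):
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum

# Made-up data: 4 teams ("A".."D") with 4 players each and fake projections.
players_by_team = {tm: [tm + str(j) for j in range(4)] for tm in "ABCD"}
teams = list(players_by_team)
players = [p for ps in players_by_team.values() for p in ps]
proj = {p: 10 + i % 7 for i, p in enumerate(players)}

prob = LpProblem("lineup", LpMaximize)
x = LpVariable.dicts("x", players, cat=LpBinary)  # 1 if player is in the lineup
t = LpVariable.dicts("t", teams, cat=LpBinary)    # 1 if team is represented

prob += lpSum(proj[p] * x[p] for p in players)    # maximize projected value
prob += lpSum(x[p] for p in players) == 6         # lineup size

for l in teams:
    # t[l] may only be 1 if at least one player from team l is selected
    prob += t[l] <= lpSum(x[p] for p in players_by_team[l])
prob += lpSum(t[l] for l in teams) >= 3           # at least 3 distinct teams

prob.solve()
print(sorted(p for p in players if x[p].value() == 1))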

Find or calculate intersection points of a straight line with a diagonal scatter plot using VBA

I am trying to understand how I can go about finding or calculating the intersection points of a straight line and a diagonal scatter plot. To give a better idea: on an X,Y plot, if I have a straight horizontal line at y = # (any number) that crosses an array of scattered points (which form a diagonal line), how can I calculate the points of intersection of the two lines?
The problem I am having is that the scattered array has multiple points around my horizontal line; what I would like to do is find the point that hits the horizontal line first, and the point that hits it last.
Please refer to the image for a better understanding. The two annotated points are the ones I am trying to extract with VBA. Is this possible? The image shows two sets of scattered arrays; I am only interested in figuring out the method for one of the arrays. If I can extract this for one scattered array, I can replicate the method for the other.
http://imgur.com/9YTNeco
It's hard to give you any specifics without knowing the structure of your Data. But this is the approach I'd use.
I'll assume your data looks like this (for both of the plots)
A B
x1 y1
x2 y2
x3 y3
Loop through the axis like so:
'the y values need to be as high as the black axis you've got there
'I'll assume that's zero
i = 0
k = .Cells(1,1)
'we begin at the first x-value in your column
for i = 0 to Worksheets("Sheet name").UsedRange.Rows.Count
'now we are looking for the lowest value of x, k will be this value
if .Cells(i,1) < k Then
if .cells(i,2) = 0 Then '0 = y-value of the "black" axis
k = .Cells(i,1)
End If
End If
'every time we find a lower value than our existing k
'we will assign it to k
Next
The lowest such value will be your "low limit" point.
You can use the same kind of algorithm for the highest value of the same scatter plot (just change the "<" to ">" and start k at a very low value); for the other scattered array, just change the column IDs.
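If it helps to see the idea in a more compact form, here is the same first/last-intersection search as a minimal Python sketch, assuming the points are already in two lists xs and ys (hypothetical names) and the horizontal line sits at y = 0:
line_y = 0.0
tol = 1e-9  # measured data rarely hits the line exactly, so use a tolerance
hits = [x for x, y in zip(xs, ys) if abs(y - line_y) <= tol]
print(min(hits), max(hits))  # first and last intersection point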
HTH

find ranges to create Uniform histogram

I need to find the ranges in order to create a uniform histogram, e.g. ages split into 4 ranges:
data_set = [18,21,22,24,27,27,28,29,30,32,33,33,42,42,45,46]
is there a function that gives me the ranges so the histogram is uniform?
in this case
ranges = [(18,24), (27,29), (30,33), (42,46)]
This example is easy; I'd like to know if there is an algorithm that deals with complex data sets as well.
thanks
You are looking for the quantiles that split up your data equally. This, combined with cut, should work. So, suppose you want n groups.
set.seed(1)
x <- rnorm(1000) # Generate some toy data
n <- 10
uniform <- cut(x, c(-Inf, quantile(x, prob = (1:(n-1))/n), Inf)) # Determine the groups
plot(uniform)
Edit: now corrected to yield the correct cuts at the ends.
Edit2: I don't quite understand the downvote. But this also works in your example:
data_set = c(18,21,22,24,27,27,28,29,30,32,33,33,42,42,45,46)
n <- 4
groups <- cut(data_set, breaks = c(-Inf, quantile(data_set, prob = 1:(n-1)/n), Inf))
levels(groups)
Some minor renaming is then necessary. For slightly better level names, you could also put in the minimum and maximum of the data instead of -Inf and Inf.
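Since your example data is written as a Python list, here is the same quantile idea sketched in Python with numpy (np.quantile for the break points, np.digitize for the grouping); for your data it reproduces the four ranges above:
import numpy as np

data_set = [18, 21, 22, 24, 27, 27, 28, 29, 30, 32, 33, 33, 42, 42, 45, 46]
n = 4

# Interior quantile breaks split the sorted data into n equally filled groups.
breaks = np.quantile(data_set, [i / n for i in range(1, n)])
groups = np.digitize(data_set, breaks)  # group index 0..n-1 per value

for g in range(n):
    members = [v for v, k in zip(data_set, groups) if k == g]
    print(g, (min(members), max(members)))
# prints (18, 24), (27, 29), (30, 33), (42, 46)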