How to draw the second curve based on the Y2Axis while the first curve is based on the YAxis? - ZedGraph

I'm using ZedGraph.
I have 2 curves to draw: the first curve should use the scale of the YAxis, while the second one should use the Y2Axis, because the values in the first curve are far larger than those in the second.
At the moment both curves are drawn against the YAxis, which makes the graph ugly.
Does anyone have experience drawing the second curve against the Y2Axis?
Here is my code: (What should I change?)
PointPairList p1 = new PointPairList(),
              p2 = new PointPairList();
// code to add data into p1 and p2
GraphPane gp = new GraphPane();
gp.AddCurve("", p1, Color.Black);
gp.AddCurve("", p2, Color.Blue);
gp.XAxis.Scale.Min = v1;   // v1 and v2 are computed elsewhere
gp.Y2Axis.Scale.Max = v2;
gp.AxisChange();
gp.XAxis.Scale.IsUseTenPower = false;
gp.Y2Axis.Scale.IsUseTenPower = false;
Thank you.
Follow-up: if I want the Y2Axis to share the same grid as the YAxis, after:
LineItem curveY2 = gp.AddCurve("", p2, Color.Blue);
...
curveY2.IsY2Axis = true;
i.e., the grid is based on the YAxis, and the Y2Axis should share the same grid lines but with different labels.
For example, the YAxis runs from 1 to 300 with 7 major divisions, while the Y2Axis runs from 1 to 20. I want the Y2Axis to also have 7 divisions (the same as the YAxis). Which property should I use?

LineItem curveY2 = gp.AddCurve("", p2, Color.Blue);
...
curveY2.IsY2Axis = true;
// If you have more than one axis on the related side, you also have to assign the index of the axis
curveY2.YAxisIndex = 0;
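Putting the pieces together, a minimal sketch (assuming the p1/p2 lists from the question; the Y2Axis is hidden by default, so it also has to be made visible, and the explicit MajorStep at the end is just one way to get the 7 matching divisions asked about above):
GraphPane gp = new GraphPane();
// first curve stays on the (left) YAxis
LineItem curveY1 = gp.AddCurve("", p1, Color.Black);
// second curve goes on the (right) Y2Axis
LineItem curveY2 = gp.AddCurve("", p2, Color.Blue);
curveY2.IsY2Axis = true;
// the Y2Axis is hidden by default
gp.Y2Axis.IsVisible = true;
gp.Y2Axis.Scale.IsUseTenPower = false;
// give the Y2Axis the same number of major divisions (here 7) as the YAxis
gp.Y2Axis.Scale.Min = 1;
gp.Y2Axis.Scale.Max = 20;
gp.Y2Axis.Scale.MajorStepAuto = false;
gp.Y2Axis.Scale.MajorStep = (gp.Y2Axis.Scale.Max - gp.Y2Axis.Scale.Min) / 7.0;
gp.AxisChange();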

Related

Downsample array based on sliding window indices

I am trying to downsample or reduce the resolution of a 3D array only on the first two axes. For example, if the array size is 40x50x300, downsampling it with a degree of 2 will make it 20x25x300.
For this purpose, I have found a function in scikit-image:
import numpy as np

def create_img(nX, nY, nMZ):
    """This function creates a demo array."""
    img = np.zeros((nX, nY, nMZ))
    img_sp = np.arange(nX * nY).reshape(nX, nY) + 1
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            img[r, c, :] = img_sp[r, c]
    return img

image = create_img(7, 5, 10)
Now every pixel in the image (2D) has corresponding values along the z axis.
from skimage.measure import block_reduce
img_block = block_reduce(image, block_size=(2, 2, 1), cval=0, func=np.min)
Now the block_reduce function takes the minimum value in each 2x2 block and downsamples the image on the x and y axes.
If the func argument is changed to np.max, it takes the maximum value in each 2x2 block instead. Other supported func values are np.mean, np.median and so on.
But I want to select the XY values based on location/indices, for example the 0th element of each 2x2 block, or the element at the index of the maximum.
How to achieve that?
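One possible sketch (assuming non-overlapping 2x2 blocks, as block_reduce uses): picking the 0th element of each block is plain strided slicing, and picking by the index of a per-block maximum can be done with argmax on a 2D criterion followed by fancy indexing. The criterion used here (the first z-slice) is just an example.
import numpy as np

# Take the 0th (top-left) element of every 2x2 block: strided slicing on the first two axes.
img_topleft = image[::2, ::2, :]

# Take the pixel whose criterion value is the maximum within each 2x2 block,
# then gather its full z-vector.
nX, nY, nMZ = image.shape
bx, by = nX // 2, nY // 2                      # number of complete 2x2 blocks
crit = image[:bx * 2, :by * 2, 0]              # 2D criterion, here the first z-slice
blocks = crit.reshape(bx, 2, by, 2).transpose(0, 2, 1, 3).reshape(bx, by, 4)
flat_idx = blocks.argmax(axis=2)               # index 0..3 inside each block
r_off, c_off = np.unravel_index(flat_idx, (2, 2))
rows = np.arange(bx)[:, None] * 2 + r_off
cols = np.arange(by)[None, :] * 2 + c_off
img_blockmax = image[rows, cols, :]            # shape (bx, by, nMZ)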

How do I create a 10x10 grid for polygons < 1 kilometer in turf.js?

let gridOptions = {units: 'kilometers'};
let grid = turf.squareGrid([ 176.4218616, -37.8028137, 176.4288378, -37.7992033 ], 100, gridOptions);
This produces an empty result, I presume because the polygon is too small.
That's right: your area is less than 1 km²:
const area_in_sqkm = turf.convertArea(
  turf.area(
    turf.bboxPolygon([176.4218616, -37.8028137, 176.4288378, -37.7992033])
  ),
  'meters', 'kilometers'
);
// area_in_sqkm = 0.2466
The units option is applied to the cellSide argument of squareGrid. In your example the cellSide value is 100, meaning a grid whose cells are 100 km on a side. Change this value to resize the cells:
let grid = turf.squareGrid(bbox, 0.1, { units: 'kilometers'})
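Putting it together as a sketch (the bbox variable here is the same bounding box as above; a 0.1 km cellSide gives 100 m squares):
const bbox = [176.4218616, -37.8028137, 176.4288378, -37.7992033];
const grid = turf.squareGrid(bbox, 0.1, { units: 'kilometers' });
// grid is a FeatureCollection of square polygons, each 100 m on a side, covering the bbox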

computing cumulative distribution of a conditional probability distribution

I have a conditional probability of z given m, p(z|m), whose coefficients are chosen so that the integral over z in the range [0, 1.5] and over m in the range [18, 28] equals one.
import numpy as np

def p(z, m):
    if m < 21.25:
        E = {'ft': 0.55, 'alpha': 2.99, 'z0': 0.191, 'km': 0.089, 'kt': 0.25}
        S = {'ft': 0.39, 'alpha': 2.15, 'z0': 0.121, 'km': 0.093, 'kt': -0.175}
        I = {'ft': 0.06, 'alpha': 1.77, 'z0': 0.045, 'km': 0.096, 'kt': -0.9196}
    else:  # m >= 21.25
        E = {'ft': 0.25, 'alpha': 1.957, 'z0': 0.321, 'km': 0.196, 'kt': 0.565}
        S = {'ft': 0.61, 'alpha': 1.598, 'z0': 0.291, 'km': 0.167, 'kt': 0.155}
        I = {'ft': 0.14, 'alpha': 0.964, 'z0': 0.170, 'km': 0.129, 'kt': 0.1759}
    # sum the E, S and I components, each of the form
    # ft * exp(-kt*(m-18)) * z**alpha * exp(-(z / (z0 + km*(m-18)))**alpha)
    value = 0.0
    for c in (E, S, I):
        value += (c['ft'] * np.exp(-c['kt'] * (m - 18)) * z ** c['alpha']
                  * np.exp(-(z / (c['z0'] + c['km'] * (m - 18))) ** c['alpha']))
    return value
I would like to draw samples from this distribution, so I built a grid of points in the (z, m) plane to estimate the cumulative distribution. The cumulative integral over m reaches one, but the cumulative integral over z does not reach one at the upper edge. Why does it not converge to one?
from scipy import integrate

grid_m = np.linspace(18, 28, 1000)
grid_z = np.linspace(0, 1.5, 1000)
dz = np.diff(grid_z[:2])

# get cdf on grid, use cumtrapz
prob_zgm = np.empty((grid_z.shape[0], grid_m.shape[0]), float)
for i in range(grid_z.shape[0]):
    for j in range(grid_m.shape[0]):
        prob_zgm[i, j] = p(grid_z[i], grid_m[j])

pr = np.column_stack((np.zeros(prob_zgm.shape[0]), prob_zgm))
dm = np.diff(grid_m[:2])
cdf_zgm = integrate.cumtrapz(pr, dx=dm, axis=1)
cdf = integrate.cumtrapz(pr, dx=dz, axis=0)
Which assumption might cause this inconsistency, or am I computing something wrongly?
Update: the cumulative distribution cdf_zgm is shown in a plot (figure omitted here).
To get the inverse of the probability, this is the approach I have used:
from scipy import interpolate

# fix bounds of cdf_zgm
cdf_zgm[:, 0] = 0
cdf_zgm[:, -1] = 1

# interpolate the data using a linear spline to "grid_q" samples
grid_q = np.linspace(0, 1, 200)
grid_qm = np.empty((len(grid_m), len(grid_q)), float)
for i in range(len(grid_m)):
    grid_qm[i] = interpolate.interp1d(cdf_zgm[i], grid_z)(grid_q)

# build 2d interpolation for z as a function of (q, m)
z_interp = interpolate.interp2d(grid_q, grid_m, grid_qm)

# sample magnitude (dist_m is defined elsewhere)
ng = 20000
r = dist_m.rvs(ng)
rvs_u = np.random.rand(ng)
rvs_z = np.asarray([z_interp(rvs_u[i], r[i]) for i in range(len(rvs_u))]).ravel()
Is it a correct approach to fix the boundaries of the CDF to one?
I don't know what's wrong with that code. But here are a couple of different ideas to try:
(1) Just sum the array elements instead of trying to compute the numerical integrals. It is simpler that way. (Summing the array elements is essentially computing a rectangle-rule approximation, which, as it turns out, is actually more accurate than the trapezoidal rule.)
(2) Instead of trying to create a whole 2-d array at once, write a function which creates just a 1-d slice of p(z | m) for a given value of m. Then just sum those elements to get the cumulative probability.
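A minimal sketch of suggestion (2), assuming the p(z, m) function defined above: build the conditional CDF over z for one fixed m by a cumulative sum, normalize it, and the last value is then exactly one.
import numpy as np

def conditional_cdf_z(m, grid_z):
    """Cumulative distribution of z for a fixed m, via a simple rectangle-rule sum."""
    dz = grid_z[1] - grid_z[0]
    pz = np.array([p(z, m) for z in grid_z])
    cdf = np.cumsum(pz) * dz
    return cdf / cdf[-1]          # normalize so the last value is exactly 1

grid_z = np.linspace(0, 1.5, 1000)
cdf_z = conditional_cdf_z(22.0, grid_z)
# cdf_z[-1] == 1.0 by construction; inverse-CDF sampling for this m can then use
# np.interp(np.random.rand(n), cdf_z, grid_z)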

ZedGraph: common major ticks for all y axes

I'm using ZedGraph to display multiple y axes (both YAxis and Y2Axis).
With multiple y axes it becomes rather hard to compare curves, because each axis gets its own major ticks. In the picture below each curve has its own major ticks:
https://dl.dropbox.com/u/70476173/problem.png
I would like all the axes in the graph to share the same major ticks so that it is easy to compare the curves. I have tried the following code:
//majorTickCount = 12.0
var min = Math.Floor(yAxis.Scale.Min);
var max = Math.Ceiling(yAxis.Scale.Max);
var step = (max - min) / majorTickCount;
var wholeStep = step;
max = min + wholeStep * majorTickCount;
//yAxis.Scale.MajorStepAuto = true;
//yAxis.Scale.MajorStepAuto = false;
//yAxis.Scale.MinGrace = 0;
//yAxis.Scale.MaxGrace = 0;
yAxis.Scale.Min = min;
yAxis.Scale.Max = max;
yAxis.Scale.MajorStep = wholeStep;
yAxis.Scale.BaseTic = min;
This seems to create the desired effect, but with a problem:
https://dl.dropbox.com/u/70476173/problem2.png
The red curve's 2nd and 3rd points have the value 6, but as you can see in the picture, those points lie below the major grid line for 6. I believe the problem is that the major step is calculated as 2.5, so the y-axis label displaying 6 should really be 6.1 or something like that.
TL;DR: how do I make all my y axes share the same major steps?
Any idea how I can scale the y axes so that they share the same major grid?
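As a sketch (assuming gp is the GraphPane and ZedGraph's YAxisList/Y2AxisList collections): give every y axis the same fixed number of major steps and round each step up to a whole number, so the tick labels stay exact and the grid lines coincide across axes.
// give every y axis the same number of major divisions so the grid lines coincide,
// rounding each step up to a whole number so the tick labels stay exact (6 rather than 6.1)
const double majorTickCount = 12.0;

void AlignAxis(Axis yAxis)
{
    double min = Math.Floor(yAxis.Scale.Min);
    double step = Math.Ceiling((yAxis.Scale.Max - min) / majorTickCount);
    yAxis.Scale.MinAuto = false;
    yAxis.Scale.MaxAuto = false;
    yAxis.Scale.MajorStepAuto = false;
    yAxis.Scale.Min = min;
    yAxis.Scale.MajorStep = step;
    yAxis.Scale.Max = min + step * majorTickCount;   // a whole multiple of the step
    yAxis.Scale.BaseTic = min;
}

foreach (YAxis a in gp.YAxisList) AlignAxis(a);
foreach (Y2Axis a in gp.Y2AxisList) AlignAxis(a);
gp.AxisChange();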

Storing plot objects in a list

I asked this question yesterday about storing a plot within an object. I tried implementing the first approach (I am aware that I did not specify that I was using qplot() in my original question) and noticed that it did not work as expected.
library(ggplot2) # add ggplot2

string = "C:/example.pdf" # Setup pdf
pdf(string, height = 6, width = 9)

x_range <- range(1, 50) # Specify Range

# Create a list to hold the plot objects.
pltList <- list()
pltList[]

for (i in 1:16) {
  # Organise data
  y = (1:50) * i * 1000 # Get y col
  x = (1:50)            # get x col
  y = log(y)            # Use natural log

  # Regression
  lm.0 = lm(formula = y ~ x)               # make linear model
  inter = summary(lm.0)$coefficients[1, 1] # Get intercept
  slop = summary(lm.0)$coefficients[2, 1]  # Get slope

  # Make plot name
  pltName <- paste('a', i, sep = '')

  # make plot object
  p <- qplot(
    x, y,
    xlab = "Radius [km]",
    ylab = "Services [log]",
    xlim = x_range,
    main = paste("Sample", i)
  ) + geom_abline(intercept = inter, slope = slop, colour = "red", size = 1)
  print(p)
  pltList[[pltName]] = p
}

# close the PDF file
dev.off()
I have used sample numbers in this case so the code runs if it is just copied. I spent a few hours puzzling over this, but I cannot figure out what is going wrong. The first PDF is written without problems, so I get all 16 plots drawn correctly.
Then when I use this piece of code:
string = "C:/test_tabloid.pdf"
pdf(string, height = 11, width = 17)
grid.newpage()
pushViewport( viewport( layout = grid.layout(3, 3) ) )
vplayout <- function(x, y){viewport(layout.pos.row = x, layout.pos.col = y)}
counter = 1
# Page 1
for (i in 1:3){
for (j in 1:3){
pltName <- paste( 'a', counter, sep = '' )
print( pltList[[pltName]], vp = vplayout(i,j) )
counter = counter + 1
}
}
dev.off()
the result I get is the last linear model line (abline) on every graph, but the data does not change. When I check my list of plots, it seems that all of them become overwritten by the most recent plot (with the exception of the abline object).
A less important secondary question was how to generate a multi-page PDF with several plots on each page, but the main goal of my code was to store the plots in a list that I could access at a later date.
Ok, so if your plot command is changed to
p <- qplot(data = data.frame(x = x, y = y),
           x, y,
           xlab = "Radius [km]",
           ylab = "Services [log]",
           xlim = x_range,
           ylim = c(0, 10),
           main = paste("Sample", i)
     ) + geom_abline(intercept = inter, slope = slop, colour = "red", size = 1)
then everything works as expected. Here's what I suspect is happening (although Hadley could probably clarify things). When ggplot2 "saves" the data, what it actually does is save a data frame, and the names of the parameters. So for the command as I have given it, you get
> summary(pltList[["a1"]])
data: x, y [50x2]
mapping: x = x, y = y
scales: x, y
faceting: facet_grid(. ~ ., FALSE)
-----------------------------------
geom_point:
stat_identity:
position_identity: (width = NULL, height = NULL)
mapping: group = 1
geom_abline: colour = red, size = 1
stat_abline: intercept = 2.55595281266726, slope = 0.05543539319091
position_identity: (width = NULL, height = NULL)
However, if you don't specify a data parameter in qplot, all the variables get evaluated in the current scope, because there is no attached (read: saved) data frame.
data: [0x0]
mapping: x = x, y = y
scales: x, y
faceting: facet_grid(. ~ ., FALSE)
-----------------------------------
geom_point:
stat_identity:
position_identity: (width = NULL, height = NULL)
mapping: group = 1
geom_abline: colour = red, size = 1
stat_abline: intercept = 2.55595281266726, slope = 0.05543539319091
position_identity: (width = NULL, height = NULL)
So when the plot is generated the second time around, rather than using the original values, it uses the current values of x and y.
I think you should use the data argument in qplot, i.e., store your vectors in a data frame.
See Hadley's book, Section 4.4:
The restriction on the data is simple: it must be a data frame. This is restrictive, and unlike other graphics packages in R. Lattice functions can take an optional data frame or use vectors directly from the global environment. ...
The data is stored in the plot object as a copy, not a reference. This has two important consequences: if your data changes, the plot will not; and ggplot2 objects are entirely self-contained so that they can be save()d to disk and later load()ed and plotted without needing anything else from that session.
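A tiny illustration of that copy semantics (using ggplot() directly with a made-up data frame):
library(ggplot2)
df <- data.frame(x = 1:3, y = c(2, 4, 6))
p <- ggplot(df, aes(x, y)) + geom_point()
df$y <- c(0, 0, 0) # change the data afterwards...
print(p)           # ...the stored plot is unaffected: it still shows y = 2, 4, 6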
There is a bug in your code concerning list subscripting. It should be
pltList[[pltName]]
not
pltList[pltName]
Note:
class(pltList[1])
[1] "list"
pltList[1] is a list containing the first element of pltList.
class(pltList[[1]])
[1] "ggplot"
pltList[[1]] is the first element of pltList.
For your second question: Multi-page pdfs are easy -- see help(pdf):
onefile: logical: if true (the default) allow multiple figures in one
file. If false, generate a file with name containing the
page number for each page. Defaults to ‘TRUE’.
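A minimal sketch of that route (the file name is just an example; pltList is the list built above): with onefile = TRUE, which is the default, each print() starts a new page, so iterating over the list gives one multi-page PDF.
pdf("C:/all_plots.pdf", onefile = TRUE, height = 6, width = 9)
for (nm in names(pltList)) {
  print(pltList[[nm]]) # each print() starts a new page
}
dev.off()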
For your main question, I don't understand if you want to store the plot inputs in a list for later processing, or the plot outputs. If it is the latter, I am not sure that plot() returns an object you can store and retrieve.
Another suggestion regarding your second question would be to use either Sweave or Brew as they will give you complete control over how you display your multi-page pdf.
Have a look at this related question.