I am a complete newbie at programming with R, especially with the spatstat library, so I hope someone can help me.
I want to simulate a Boolean model: in my case, a Poisson point process with closed discs of radius r around its points.
With X = rpoispp(100) I can already simulate the point process, but I have no idea how to generate the circles around the points.
Unfortunately, my Google searches were not successful.
Thanks for any help,
Perry
Googling 'Boolean model spatstat' gave me this helpfile as one of the first hits. In the examples section there is an example of a Boolean model in the unit square with disc radius 0.2:
library(spatstat)
X <- discs(runifpoint(15) %mark% 0.2, npoly=16)
plot(X, main = "", col = "gray")
Is this what you need?
Above, X is a polygonal approximation of the union of discs. Alternatively, the Boolean model is often represented simply as a marked point pattern with the disc radius as the mark. Which is more convenient depends on the context.
FYI: there are several equivalent ways of attaching a vector of marks m to a point pattern X: marks(X) <- m and X %mark% m.
Simply make a marked point pattern and use the correct value of markscale
when plotting with plot.ppp:
library(spatstat)
X <- ppp(0.5, 0.5, window = owin()) # Single point in middle of unit square
marks(X) <- 0.5
plot.ppp(X, markscale = 1, legend = FALSE) # Mark interpreted directly as diameter of circle
plot.ppp(X, markscale = 2, legend = FALSE) # 2*mark interpreted as diameter of circle (i.e. mark = radius)
Now, after a little self-study, I came to the following results:
> X=rpoispp(8)
> m=1
> marks(X)=m
> plot(X)
I get this picture: [plot of the marked point pattern omitted]
Now I want to scale the radii of the marks, but I do not understand how R sets the radii of the circles. I found this on a tutorial site: "If markscale is given, then a mark value of m is plotted as a circle of radius m * markscale (if m is positive)". I tested it and was surprised by the result. If I understood correctly, the radius of the markers should stay the same with
> plot(X, markscale = 1)
and should double with
> plot(X, markscale = 2)
but it does not.
PS: Sorry for my bad English.
Perry
I am trying to understand the fisheye model by Scaramuzza, which is implemented in Matlab, see https://de.mathworks.com/help/vision/ug/fisheye-calibration-basics.html#mw_8aca38cc-44de-4a26-a5bc-10fb312ae3c5
The backprojection (uv to xyz) seems fairly straightforward according to the following equation:
lambda * [u, v, a0 + a2*rho^2 + a3*rho^3 + a4*rho^4]^T = [x, y, z]^T, where rho = sqrt(u^2 + v^2)
However, how does the projection (from xyz to uv) work? In my understanding we get a rather complex set of equations. Unfortunately, I can't find any details on that.
Okay, I believe I now understand it fully after analyzing the functions of the (Windows) calibration toolbox by Scaramuzza, see https://sites.google.com/site/scarabotix/ocamcalib-toolbox/ocamcalib-toolbox-download-page
Method 1 found in file "world2cam.m"
For the projection, use the same equation as above. In the projection case, the equation has three knowns (x, y, z) and three unknowns (u, v and lambda). We first substitute lambda with rho by realizing that
u = x/lambda
v = y/lambda
rho=sqrt(u^2+v^2) = 1/lambda * sqrt(x^2+y^2) --> lambda = sqrt(x^2+y^2) / rho
After that, we are left with the unknowns (u, v and rho):
u = x/lambda = x / sqrt(x^2+y^2) * rho
v = y/lambda = y / sqrt(x^2+y^2) * rho
z / lambda = z /sqrt(x^2+y^2) * rho = a0 + a2*rho^2 + a3*rho^3 + a4*rho^4
As you can see, the last equation now has only one unknown, namely rho. Thus, we can solve it easily, e.g. using the roots function in Matlab. However, a solution does not always exist, nor is it necessarily unique. After solving for the unknown rho, calculating uv is very simple using the equations above.
This procedure needs to be performed for each point (x,y,z) separately and is thus rather computationally expensive for an image.
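To make Method 1 concrete, here is a minimal Python/NumPy sketch of the same idea (the toolbox itself is Matlab, so this is only an illustration; the calibration coefficients a0, a2, a3, a4 are assumed to be known, and the root selection is deliberately simplistic):
import numpy as np

# Project one 3D point with the polynomial model above: solve
#   a4*rho^4 + a3*rho^3 + a2*rho^2 - m*rho + a0 = 0,  with m = z / sqrt(x^2 + y^2),
# then scale back to (u, v).
def project_point(x, y, z, a0, a2, a3, a4):
    norm = np.hypot(x, y)
    m = z / norm
    roots = np.roots([a4, a3, a2, -m, a0])   # coefficients: highest degree first
    # keep real, positive roots; as noted above there may be none or several
    rho = roots[(np.abs(roots.imag) < 1e-9) & (roots.real > 0)].real
    if rho.size == 0:
        return None                          # no valid projection for this point
    r = rho.min()                            # which root is valid depends on the lens
    return x / norm * r, y / norm * r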
Method 2 found in file "world2cam_fast.m"
The last equation has the form rho(x,y,z). However, if we define m = z / sqrt(x^2+y^2) = tan(90°-theta), it only depends on one variable, namely rho(m).
Instead of solving this equation rho(m) for every new m, the authors evaluate the function for several values of m and fit an 8th-order polynomial to these points. Using this polynomial, they can then compute an approximate value of rho(m) much more quickly.
This becomes clear, because "world2cam_fast.m" makes use of ocam_model.pol, which is calculated in "undistort.m". "undistort.m" in turn makes use of "findinvpoly.m".
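For illustration, a rough Python sketch of the same precomputation trick (this is the principle only, not a port of findinvpoly.m; rho_max would be the largest image radius you care about):
import numpy as np

def fit_inverse_poly(a0, a2, a3, a4, rho_max, degree=8, n=1000):
    # Sample the forward model m*rho = a0 + a2*rho^2 + a3*rho^3 + a4*rho^4,
    # i.e. m = f(rho)/rho, then fit rho as a polynomial in m.
    rho = np.linspace(1e-6, rho_max, n)
    m = (a0 + a2 * rho**2 + a3 * rho**3 + a4 * rho**4) / rho
    return np.polyfit(m, rho, degree)

# Each projection then reduces to one polynomial evaluation:
#   m = z / np.hypot(x, y)
#   rho = np.polyval(inv_coeffs, m)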
Edit: I've now fixed the problem I asked about. The spheres were leaving the box at the corners, where the if statements (in the while loop shown below) got confused. The parts of the code that reverse the individual components of velocity on contact with the walls used some elif statements. With elif (as far as I can tell), if the sphere exceeds more than one position limit at a time, the program only reverses the velocity component for one of them. This is rectified by replacing each elif with a plain if. I'm not sure I quite understand the reason behind this, so hopefully someone cleverer than I will comment with that information, but for now, if anyone has the same problem, I hope my limited input helps!
Some context first:
I'm trying to build a model of the kinetic theory of gases in VPython, as a revision exercise for my (Physics) degree. This involves me building a hollow box and putting a bunch of spheres in it, randomly positioned throughout the box. I then need to assign each of the spheres its own random velocity and then use a loop to adjust the position of each sphere with reference to its velocity vector and a time step.
The spheres should also undergo elastic collisions with each wall and all other spheres.
When a sphere meets a wall in the x-direction, its x-velocity component is reversed and similarly in the y and z directions.
When a sphere meets another sphere, they swap velocities.
Currently, my code works as far as creating the right number of spheres, distributing them randomly and giving each sphere its own random velocity. The spheres also move as they should, except for collisions. The spheres should all stay inside the box, since they should bounce off all the walls. They appear to bounce off each other; however, occasionally a sphere or two will go straight through the box.
I am extremely new to programming and I don't quite understand what's going on here or why it's happening but I'd be very grateful if someone could help me.
Below is the code I have so far (I've tried to comment what I'm doing at each step):
##########################################################
# This code is meant to create an empty box and then create
# a certain number of spheres (num_spheres) that will sit inside
# the box. Each sphere will then be assigned a random velocity vector.
# A loop will then adjust the position of each sphere to make them
# move. The spheres will undergo elastic collisions with the box walls
# and also with the other spheres in the box.
##########################################################
from visual import *
import random
import numpy as np
num_spheres = 15
fps = 24 #fps of while loop (later)
dt = 1.0/fps #time step
l = 40 #length of box
w = 2 #width of box
radius = 0.5 #radius of spheres
##########################################################
# Creating an empty box with sides length/height l, width w
wallR = box(pos = (l/2.0,0,0), size=(w,l,l), color=color.white, opacity=0.25)
wallL = box(pos = (-l/2.0,0,0), size=(w,l,l), color=color.white, opacity=0.25)
wallU = box(pos = (0,l/2.0,0), size=(l,w,l), color=color.white, opacity=0.25)
wallD = box(pos = (0,-l/2.0,0), size=(l,w,l), color=color.white, opacity=0.25)
wallF = box(pos = (0,0,l/2.0), size=(l,l,w), color=color.white, opacity=0.25)
wallB = box(pos = (0,0,-l/2.0), size=(l,l,w), color=color.white, opacity=0.25)
#defining a function that creates a list of 'num_spheres' randomly positioned spheres
def create_spheres(num):
global l, radius
particles = [] # Create an empty list
for i in range(0,num): # Loop i from 0 to num-1
v = np.random.rand(3)
particles.append(sphere(pos= (3.0/4.0*l) * (v - 0.5), #pos such that spheres are inside box
radius = radius, color=color.red, index=i))
# each sphere is given an index for ease of referral later
return particles
#defining a global variable = the array of velocities for the spheres
velarray = []
#defining a function that gives each sphere a random velocity
def velocity_spheres(sphere_list):
global velarray
for sphere in sphere_list:
#making the sign of each velocity component random
rand = random.randint(0,1)
if rand == 1:
sign = 1
else:
sign = -1
mu = 10 #defining an average for normal distribution
sigma = 0.1 #defining standard deviation of normal distribution
# 3 random numbers form the velocity vector
vel = vector(sign*random.normalvariate(mu, sigma),sign*random.normalvariate(mu, sigma),
sign*random.normalvariate(mu, sigma))
velarray.append(vel)
spheres = create_spheres(num_spheres) #creating some spheres
velocity_spheres(spheres) # invoking the velocity function
while True:
rate(fps)
for sphere in spheres:
sphere.pos += velarray[sphere.index]*dt
#incrementing sphere position by reference to its own velocity vector
if abs(sphere.pos.x) > (l/2.0)-w-radius:
(velarray[sphere.index])[0] = -(velarray[sphere.index])[0]
#reversing x-velocity on contact with a side wall
elif abs(sphere.pos.y) > (l/2.0)-w-radius:
(velarray[sphere.index])[1] = -(velarray[sphere.index])[1]
#reversing y-velocity on contact with a side wall
elif abs(sphere.pos.z) > (l/2.0)-w-radius:
(velarray[sphere.index])[2] = -(velarray[sphere.index])[2]
#reversing z-velocity on contact with a side wall
for sphere2 in spheres: #checking other spheres
if sphere2 != sphere:
#making sure we aren't checking the sphere against itself
if abs(sphere2.pos-sphere.pos) < (sphere.radius+sphere2.radius):
#if the other spheres are touching the sphere we are looking at
v1 = velarray[sphere.index]
#noting the velocity of the first sphere before the collision
velarray[sphere.index] = velarray[sphere2.index]
#giving the first sphere the velocity of the second before the collision
velarray[sphere2.index] = v1
#giving the second sphere the velocity of the first before the collision
Thanks again for any help!
The elif statements within the while loop in the code given in the original question were the cause of the problem. An elif branch only runs if the preceding if condition was not satisfied. The circumstance wherein a sphere meets a corner of the box satisfies at least two of the conditions for reversing velocity components. This means that, while one would expect (at least) two velocity components to be reversed, only one is: the component in the if statement is reversed, whereas the component(s) in the elif statement(s) are not, because the first condition has been satisfied and the elif branches are therefore skipped.
If each elif is changed to be a separate if statement, the code works as intended.
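A stripped-down illustration of the difference in plain Python (no VPython needed); the numbers just put a "sphere" past both the x and y limits at once, as happens in a corner:
# both |x| and |y| exceed the limit on the same frame
x, y, limit = 10.0, 10.0, 9.0

vx, vy = 1.0, 1.0
if abs(x) > limit:
    vx = -vx
elif abs(y) > limit:   # never reached: the first branch already matched
    vy = -vy
print(vx, vy)          # -1.0 1.0 -> y-component not reversed, sphere escapes

vx, vy = 1.0, 1.0
if abs(x) > limit:
    vx = -vx
if abs(y) > limit:     # checked independently
    vy = -vy
print(vx, vy)          # -1.0 -1.0 -> both components reversed, as intended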
First off, this is a homework question. The problem is ex. 2.6 from p. 26 of An Introduction to Applied Multivariate Analysis. It's laid out as:
Construct a bubble plot of the earthquake data using latitude and longitude as the scatterplot and depth as the circles, with greater depths giving smaller circles. In addition, divide the magnitudes into three equal ranges and label the points in your bubble plot with a different symbol depending on the magnitude group into which the point falls.
I have figured out that symbols(), which is in base graphics, does not work well with lattice. Also, I haven't figured out whether lattice has the functionality to change symbol size (i.e. bubble size). I bought the lattice book in a fit of desperation last night, and as I see in some of the examples, it is possible to set symbol colour and shape for each "cut" or panel. I am therefore working under the assumption that symbol size could also be manipulated, but I haven't been able to figure out how.
My code looks like:
plot(xyplot(lat ~ long | cut(mag, 3), data=quakes,
layout=c(3,1), xlab="Longitude", ylab="Latitude",
panel = function(x,y){
grid.circle(x,y,r=sqrt(quakes$depth),draw=TRUE)
}
))
Where I attempt to use the grid package to draw the circles, but when this executes, I just get a blank plot. Could anyone please point me in the right direction? I would be very grateful!
Here is some code for creating the plot that you need without using the lattice package. I obviously had to generate my own fake data, so you can disregard all of that stuff and go straight to the plotting commands if you want.
####################################################################
#Pseudo Data
n = 20
latitude = sample(1:100,n)
longitude = sample(1:100,n)
depth = runif(n,0,.5)
magnitude = sample(1:100,n)
groups = rep(NA,n)
for(i in 1:n){
if(magnitude[i] <= 33){
groups[i] = 1
}else if (magnitude[i] > 33 & magnitude[i] <=66){
groups[i] = 2
}else{
groups[i] = 3
}
}
####################################################################
#The actual code for generating the plot
plot(latitude[groups==1],longitude[groups==1],col="blue",pch=19,ylim=c(0,100),xlim=c(0,100),
xlab="Latitude",ylab="Longitude")
points(latitude[groups==2],longitude[groups==2],col="red",pch=15)
points(latitude[groups==3],longitude[groups==3],col="green",pch=17)
points(latitude[groups==1],longitude[groups==1],col="blue",cex=1/depth[groups==1])
points(latitude[groups==2],longitude[groups==2],col="red",cex=1/depth[groups==2])
points(latitude[groups==3],longitude[groups==3],col="green",cex=1/depth[groups==3])
You just need to add default.units = "native" to grid.circle()
plot(xyplot(lat ~ long | cut(mag, 3), data=quakes,
layout=c(3,1), xlab="Longitude", ylab="Latitude",
panel = function(x,y){
grid.circle(x,y,r=sqrt(quakes$depth),draw=TRUE, default.units = "native")
}
))
Obviously you need to tinker with some of the settings to get what you want.
I have written a package called tactile that adds a function for producing bubbleplots using lattice.
tactile::bubbleplot(depth ~ lat*long | cut(mag, 3), data=quakes,
layout=c(3,1), xlab="Longitude", ylab="Latitude")
Does anyone know the method or code to add a second x axis to a TGraph in CERN's ROOT program? I've been searching the ROOT website, and its documentation almost always confuses me. What I need is just one plot of data, but with a second x axis on top whose values are a function of the bottom x axis' values. It's basically so lazy people don't have to convert from the numbers on the bottom x axis to those on the top.
For a simple example (if I wasn't clear):
Say you have a sine curve which is some function of theta. On the top x axis we could have degrees, whereas on the bottom we could have radians, with 360 deg corresponding to 2 pi rad...
Any help would be appreciated!
TGaxis is the class you are looking for to draw extra axes wherever you desire. Grabbing the world coordinates of your pad, you can then superimpose an axis like so (replace low and high with the appropriate limits):
// your graph code here...
graph->Draw("AP"); // "graph" being your TGraph
TGaxis *axis = new TGaxis(gPad->GetUxmin(),gPad->GetUymax(),gPad->GetUxmax(),gPad->GetUymax(),low,high,510,"+L");
axis->Draw();
Check out TGaxis documentation for more examples.
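For the sine-curve example from the question, here is a sketch using ROOT's Python bindings (assuming PyROOT is available; the same classes exist in C++). The option string and alignment details may need tinkering:
import math
import ROOT

c = ROOT.TCanvas("c", "two x axes")

# sine of theta, bottom axis in radians from 0 to 2*pi
g = ROOT.TGraph(100)
for i in range(100):
    theta = 2 * math.pi * i / 99.0
    g.SetPoint(i, theta, math.sin(theta))
g.GetXaxis().SetLimits(0, 2 * math.pi)  # pin the range so the two axes line up
g.Draw("AL")
c.Update()                              # make sure gPad's user coordinates exist

# second x axis on top, labelled in degrees (0..360 over the same span)
axis = ROOT.TGaxis(ROOT.gPad.GetUxmin(), ROOT.gPad.GetUymax(),
                   ROOT.gPad.GetUxmax(), ROOT.gPad.GetUymax(),
                   0, 360, 510, "-")
axis.Draw()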
I think this might do what you want.
void axis2() {
TH1F *h = new TH1F("h","test",30,-3,3);
h->FillRandom("gaus",10000);
h->Draw();
TText t;
t.SetTextSize(0.02);
t.SetTextAlign(22);
Double_t yt = - h->GetMaximum()/15.;
for (Int_t i=1;i<=30;i++) t.DrawText(h->GetBinCenter(i),yt,Form("%d",i%10));
}
It doesn't create another TAxis, but it shows you how to draw text at the location of an axis. The answer comes from Rene Brun himself (one of the main authors of ROOT), so I don't think you can have two x axes.
Source:
http://root.cern.ch/phpBB3/viewtopic.php?f=3&t=7110
Here is an example showing how to proceed.
https://root.cern/doc/master/twoscales_8C.html
I have 2 consecutive images from a camera and I want to estimate the change in camera pose. [the two frames are omitted here]
I calculate the optical flow:
Const MAXFEATURES As Integer = 100
imgA = New Image(Of [Structure].Bgr, Byte)("pic1.bmp")
imgB = New Image(Of [Structure].Bgr, Byte)("pic2.bmp")
grayA = imgA.Convert(Of Gray, Byte)()
grayB = imgB.Convert(Of Gray, Byte)()
imagesize = cvGetSize(grayA)
pyrBufferA = New Emgu.CV.Image(Of Emgu.CV.Structure.Gray, Byte) _
(imagesize.Width + 8, imagesize.Height / 3)
pyrBufferB = New Emgu.CV.Image(Of Emgu.CV.Structure.Gray, Byte) _
(imagesize.Width + 8, imagesize.Height / 3)
features = MAXFEATURES
featuresA = grayA.GoodFeaturesToTrack(features, 0.01, 25, 3)
grayA.FindCornerSubPix(featuresA, New System.Drawing.Size(10, 10),
New System.Drawing.Size(-1, -1),
New Emgu.CV.Structure.MCvTermCriteria(20, 0.03))
features = featuresA(0).Length
Emgu.CV.OpticalFlow.PyrLK(grayA, grayB, pyrBufferA, pyrBufferB, _
featuresA(0), New Size(25, 25), 3, _
New Emgu.CV.Structure.MCvTermCriteria(20, 0.03D),
flags, featuresB(0), status, errors)
pointsA = New Matrix(Of Single)(features, 2)
pointsB = New Matrix(Of Single)(features, 2)
For i As Integer = 0 To features - 1
pointsA(i, 0) = featuresA(0)(i).X
pointsA(i, 1) = featuresA(0)(i).Y
pointsB(i, 0) = featuresB(0)(i).X
pointsB(i, 1) = featuresB(0)(i).Y
Next
Dim Homography As New Matrix(Of Double)(3, 3)
cvFindHomography(pointsA.Ptr, pointsB.Ptr, Homography, HOMOGRAPHY_METHOD.RANSAC, 1, 0)
and it looks right: the camera moved leftwards and upwards. [flow visualization image omitted]
Now I want to find out how much the camera moved and rotated. If I declare my camera position and what it's looking at:
' Create camera location at origin and lookat (straight ahead, 1 in the Z axis)
Location = New Matrix(Of Double)(2, 3)
location(0, 0) = 0 ' X location
location(0, 1) = 0 ' Y location
location(0, 2) = 0 ' Z location
location(1, 0) = 0 ' X lookat
location(1, 1) = 0 ' Y lookat
location(1, 2) = 1 ' Z lookat
How do I calculate the new position and lookat?
If I'm doing this all wrong or if there's a better method, any suggestions would be very welcome, thanks!
For pure camera rotation, R = A^-1 * H * A. To prove this, consider the image-to-plane homographies H1 = A and H2 = A*R, where A is the camera intrinsic matrix. Then H12 = H2 * H1^-1 = A*R*A^-1, from which you can obtain R.
Camera translation is harder to estimate. If the camera translates, you have to find a fundamental matrix first (not a homography): x'^T * F * x = 0, and then convert it into an essential matrix E = A^T * F * A. Then you can decompose E into rotation and translation, E = [t]x * R, where [t]x is the skew-symmetric (vector product) matrix of the translation. The decomposition is not obvious, see this.
The rotation you get will be exact, while the translation vector can be found only up to scale. Intuitively, this scaling means that from the two images alone you cannot really say whether the objects are close and small or far away and large. To disambiguate we may use objects of familiar size, a known distance between two points, etc.
Finally, note that the human visual system has a similar problem: though we "know" the distance between our eyes, when they are converged on an object the disparity is always zero, and from disparity alone we cannot say what the distance is. Human vision relies on triangulation using the eyes' vergence signal to figure out absolute distance.
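For what it's worth, OpenCV's Python bindings wrap this whole route (RANSAC, essential matrix, decomposition with the correct-solution disambiguation) in two calls. The thread's code is Emgu CV / VB.NET, so treat this purely as an illustrative sketch; K is the 3x3 intrinsic matrix A from above:
import cv2
import numpy as np

def relative_pose(pts_a, pts_b, K):
    # pts_a, pts_b: matched 2D points from the two frames, shape (N, 2), float32
    E, inliers = cv2.findEssentialMat(pts_a, pts_b, K,
                                      method=cv2.RANSAC, threshold=1.0)
    # decompose E into rotation R and translation t; t is only known up to
    # scale, exactly as described above
    _, R, t, inliers = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
    return R, t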
Well, what you're looking at is, in simple terms, a Pythagorean theorem problem, a^2 + b^2 = c^2. However, when it comes to camera-based applications, things are not very easy to determine accurately. You have found half of the detail you need for "a"; however, finding "b" or "c" is much harder.
The Short Answer
Basically it can't be done with a single camera. But it can be with done with two cameras.
The Long Winded Answer (Thought I'd explain in more depth, no pun intended)
I'll try to explain. Say we select two points within our image and move the camera left. We know the distance from the camera of each point: B1 is 20mm and B2 is 40mm. Now let's assume that we process the image and our measurements are A1 = (0,2) and A2 = (0,4), related to B1 and B2 respectively. Now, A1 and A2 are not measurements; they are pixels of movement.
What we now have to do is multiply the change in A1 and A2 by a calculated constant, which will be the real-world distance at B1 and B2. NOTE: each of these constants is different according to the distance B*. This all relates to the angle of view, more commonly called the field of view in photography, at different distances. You can accurately calculate the constant if you know the size of each pixel on the camera's CCD and the f-number of the lens inside the camera.
I expect this isn't the case, so at different distances you have to place an object whose length you know and see how many pixels it takes up. Close up, you can use a ruler to make things easier. With these measurements, you take the data and fit a curve with a line of best fit, where the x axis is the distance of the object and the y axis is the pixel-to-distance ratio that you must multiply your movement by.
So how do we apply this curve? Well, it's guesswork. In theory, the larger the measured movement A*, the closer the object is to the camera. In our example, the ratios for A1 and A2 are, say, 5mm and 3mm respectively, and we would then know that point B1 has moved 10mm (2 x 5mm) and B2 has moved 12mm (4 x 3mm). But let's face it, we will never know B, and we will never be able to tell whether a movement of 20 pixels is an object close up not moving far or an object far away moving a much greater distance. This is why things like the Xbox Kinect use additional sensors to get depth information that can be tied to the objects within the image.
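As a toy illustration of the calibration-curve idea in Python (every number below is invented purely for the example):
import numpy as np

# measured mm-per-pixel ratios at several known object distances
distance_mm = np.array([200.0, 400.0, 600.0, 800.0, 1000.0])
mm_per_px = np.array([0.8, 1.6, 2.4, 3.2, 4.0])

# line of best fit: ratio as a function of distance
coeffs = np.polyfit(distance_mm, mm_per_px, deg=1)

def movement_mm(pixels_moved, distance):
    return pixels_moved * np.polyval(coeffs, distance)

print(movement_mm(20, 500))  # 20 px of image movement at 500 mm -> ~40 mm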
What you're attempting could be done with two cameras: since the distance between the cameras is known, the movement can be calculated much more accurately (effectively without using a depth sensor). The maths behind this is extremely complex, and I would suggest looking up some journal papers on the subject. If you would like me to explain the theory, I can attempt to.
All my experience comes from designing high-speed video acquisition and image processing for my PhD, so trust me, it can't be done with one camera, sorry. I hope some of this helps.
Cheers
Chris
[EDIT]
I was going to add a comment but this is easier due to the bulk of information:
Since it is the Kinect, I will assume you have some relevant depth information associated with each point; if not, you will need to figure out how to get this.
The equation you will need to start off with is the one for the field of view (FOV):
o/d = i/f
Where:
f is the focal length of the lens, usually given in mm (e.g. 18, 28, 30 and 50 are standard examples)
d is the object's distance from the lens, gathered from the Kinect data
o is the object dimension (or "field of view" perpendicular to and bisected by the optical axis)
i is the image dimension (or "field stop" perpendicular to and bisected by the optical axis)
We need to calculate i, where o is our unknown. For i (which is a diagonal measurement), we will need the size of a pixel on the CCD. This will be in micrometres (µm), and you will need to find this information out; for now we will take it as 14um, which is standard for a midrange area-scan camera.
First we need to work out the horizontal dimension of i (ih), which is the number of pixels across the width of the camera multiplied by the size of a CCD pixel (we will use 640 x 320):
so: ih = 640 * 14um = 8960um = 8960/1000 = 8.96mm
Now we need the vertical dimension of i (iv), the same process but with the height:
so: iv = (320 * 14um) / 1000 = 4.48mm
Now i is found by the Pythagorean theorem, a^2 + b^2 = c^2:
so: i = sqrt(ih^2 + iv^2) = 10.02mm
Now we will assume we have a 28mm lens. Again, this exact value will have to be found out. Rearranging our equation to give o:
o = (i * d) / f
Remember o will be diagonal (we will assume our object or point is 50mm away):
o = (10.02mm * 50mm) / 28mm = 17.89mm
Now we need to work out the horizontal dimension of o (oh) and the vertical dimension of o (ov), as these will give us the distance per pixel that the object has moved. Now, as the FOV is directly proportional to the CCD (i.e. i is directly proportional to o), we work out a ratio k:
k = i/o = 10.02 / 17.89 = 0.56
so:
o horizontal dimension (oh): oh = ih / k = 8.96mm / 0.56 = 16mm per pixel
o vertical dimension (ov): ov = iv / k = 4.48mm / 0.56 = 8mm per pixel
Now we have the constants we require, so let's use them in an example. If our object at 50mm moves from position (0,0) to (2,4), then the measurements in real life are:
(2 * 16mm, 4 * 8mm) = (32mm, 32mm)
Again by the Pythagorean theorem, a^2 + b^2 = c^2:
total distance = sqrt(32^2 + 32^2) = 45.25mm
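Here is the worked example above wrapped up as a Python function. This just reproduces the answer's arithmetic as written, with the assumed values from the text (14um pixels, a 640 x 320 sensor, a 28mm lens, and a point 50mm away moving 2 px horizontally and 4 px vertically):
import math

def total_distance_mm(px_w, px_h, pixel_um, f_mm, d_mm, dx_px, dy_px):
    ih = px_w * pixel_um / 1000.0   # sensor width in mm    (8.96)
    iv = px_h * pixel_um / 1000.0   # sensor height in mm   (4.48)
    i = math.hypot(ih, iv)          # sensor diagonal       (10.02)
    o = i * d_mm / f_mm             # object-plane diagonal (17.89)
    k = i / o                       # ratio i/o             (0.56)
    oh, ov = ih / k, iv / k         # per-pixel constants   (16, 8)
    return math.hypot(dx_px * oh, dy_px * ov)

print(total_distance_mm(640, 320, 14, 28, 50, 2, 4))  # ~45.25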
Complicated, I know, but once you have this in a program it's easier. For every point you will have to repeat at least half of this process, as d, and therefore o, changes for every point you're examining.
Hope this gets you on your way,
Cheers
Chris