What's the fastest way to find if a point is in one of many rectangles? - minecraft

So basically I'm doing this for my Minecraft Spigot plugin (Java). I know there are already some land claim plugins, but I would like to make my own.
For this claim plugin I'd like to know how to tell whether a point (a Minecraft block) is inside a region (rectangle). I know how to check if a point is inside a single rectangle; the main problem is how to check as quickly as possible when there are, let's say, 10,000 rectangles.
What would be the most efficient way to check 10,000 or even 100,000 rectangles without having to loop through and test every single one?

Is there a way to add a logical test when the rectangles are generated that checks whether they contain that point? In that case you could set a boolean to true at generation time for each rectangle that contains the point, and then when checking that Minecraft block the region (rectangle) simply replies true or false.
This way you run the loops and checks when generating the rectangles, but at run time the replies should be very fast: just read the boolean, e.g. containsPoint.

If your rectangles are uniformly placed as neighbors of each other inside one big rectangle, then finding which rectangle contains a point is easy:
// Uniform grid: the containing cell can be computed directly, no search needed.
double width = (maxX - minX) / numRectanglesX;
double height = (maxY - minY) / numRectanglesY;
int idx = (int) Math.floor((x - minX) / width);
int idy = (int) Math.floor((y - minY) / height);
int idSquare = idx + idy * numRectanglesX;
If your rectangles are placed arbitrarily, then you should use a spatial acceleration structure such as a quadtree (or an octree in 3D). Check whether the point is in the root, then check which of its child nodes contains the point, and repeat until you find a leaf that includes the point. 10,000 tests per 10 ms should be reachable on a CPU; 1 million tests per 10 ms should be fine on a GPU. But to reach those performance levels you may need to implement a sparse version of the tree and order the leaf nodes along a space-filling curve for better caching.
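To make that concrete, here is a minimal sketch of an even simpler structure, a grid-bucket spatial hash: each claim is registered in every coarse cell it overlaps, so a lookup only scans the handful of claims sharing one cell. All names are illustrative, not from any existing plugin, and it assumes 2D axis-aligned claims on the x/z plane and a recent Java version:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Grid-bucket spatial hash: every claim is registered in each coarse cell
// it overlaps, so a point lookup only scans the claims sharing one cell.
public final class ClaimIndex {
    record Claim(int minX, int minZ, int maxX, int maxZ) {
        boolean contains(int x, int z) {
            return x >= minX && x <= maxX && z >= minZ && z <= maxZ;
        }
    }

    private static final int CELL = 64; // bucket size in blocks
    private final Map<Long, List<Claim>> cells = new HashMap<>();

    private static long key(int cx, int cz) {
        return ((long) cx << 32) ^ (cz & 0xFFFFFFFFL); // pack cell coords into one key
    }

    public void add(Claim c) {
        for (int cx = Math.floorDiv(c.minX(), CELL); cx <= Math.floorDiv(c.maxX(), CELL); cx++) {
            for (int cz = Math.floorDiv(c.minZ(), CELL); cz <= Math.floorDiv(c.maxZ(), CELL); cz++) {
                cells.computeIfAbsent(key(cx, cz), k -> new ArrayList<>()).add(c);
            }
        }
    }

    // Returns the claim containing the block, or null if unclaimed.
    public Claim findClaimAt(int x, int z) {
        List<Claim> bucket = cells.get(key(Math.floorDiv(x, CELL), Math.floorDiv(z, CELL)));
        if (bucket == null) return null;
        for (Claim c : bucket) {
            if (c.contains(x, z)) return c;
        }
        return null;
    }
}

With 64-block cells, a point query touches one HashMap bucket and typically only a few claims, regardless of whether there are 10,000 or 100,000 rectangles total.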

Related

How to access net displacements in pyiron

Using pyiron, I want to calculate the mean square displacement of the ions in my system. How do I see the total displacement (i.e. not folded back by periodic boundary conditions) without dumping very frequently and checking when an atom passes over the boundary and gets wrapped?
Try comparing job['output/generic/unwrapped_positions'][-1] and job.structure.positions+job.output.total_displacements[-1]. If they deliver the same values, it's definitely fine either way. If not, you can post the relevant lines of your notebook here.
I'd like to add a few comments to Jan's answer:
While job['output/generic/unwrapped_positions'] returns the unwrapped positions parsed from the output files, job.output.total_displacements returns the displacement of atoms calculated from each pair of consecutive snapshots. So if an atom moves more than half the box length in any direction between two snapshots, job.output.total_displacements will give wrong coordinates. Therefore, job['output/generic/unwrapped_positions'] is generally more trustworthy, but it is not available for all codes (some codes simply do not provide an output for unwrapped positions).
Moreover, if an interactive job is used, it is possible that job.structure.positions does not return the initial positions, i.e. job.structure.positions+job.output.total_displacements won't be initial positions + displacements.
So, in short, my answer to your question would rather be: use job['output/generic/unwrapped_positions'], and if it's not available, use job.structure.positions+job.output.total_displacements, but be aware of the potential problems you might run into.

Making cylindrical space in Repast Simphony?

I am trying to model the interior of an epithelial space and am stuck on movement around the interior edges of a cylindrical space. Basically, I'm trying to implement StickyBorders and keep agents on those borders in a cylindrical space that I am creating.
Is there a way to use cylindrical coordinates in Repast Simphony? I found this example (https://www.researchgate.net/publication/259695792_An_Agent-Based_Model_of_Vascular_Disease_Remodeling_in_Pulmonary_Arterial_Hypertension) where they seem to have done something similar, but the paper doesn't explain the methods in much depth, and I don't believe it is one of the Repast Simphony example models.
Currently, I have a class of epithelial cells that are set up to form a cylinder, and other agents start just inside that cylinder. To move, an agent chooses its most desired spot (similar to the Zombies demo code), then points to a new location in the direction of that desired location within one grid square of the original location. It checks that new point before moving to it and makes sure there are at least two other epithelial cells in the immediate Moore neighborhood, to ensure it stays against the wall.
GridPoint intendedPt = new GridPoint((int) Math.rint(alongX),
                                     (int) Math.rint(alongY),
                                     (int) Math.rint(alongZ));
GridCellNgh<EpithelialCell> nearEpithelium =
        new GridCellNgh<EpithelialCell>(mac_grid, intendedPt, EpithelialCell.class, 1, 1, 1);
List<GridCell<EpithelialCell>> epiCells = nearEpithelium.getNeighborhood(false);

// Count the epithelial cells actually present in the Moore neighborhood;
// getNeighborhood returns every grid cell, so sum each cell's occupants.
int nearbyEpiCellsCount = 0;
for (GridCell<EpithelialCell> cell : epiCells) {
    nearbyEpiCellsCount += cell.size();
}
if (nearbyEpiCellsCount < 2) {
    System.out.println(this + " leaving epithelial wall");
    RunEnvironment.getInstance().pauseRun();
    // TODO: where to go if false
}
I am wondering if there is a way to either set the boundaries of the space to be a cylinder or to check which side of the agent is against the wall and restrict its movement in that direction.
The sticky border code (StickyBorders.java) essentially just checks whether the point an agent moves to is beyond any of the space's dimensions, and if so clamps the point to that dimension. So, for example, if the space is 3x4 and an agent's movement would take it to (4, 2), then that point becomes (3, 2) and the agent is placed there. Can you do something like that in this case? If not, can you edit your question to explain why not, and maybe that will help us understand better.
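For illustration, the clamping itself is just a pair of min/max calls; this sketch uses made-up names, not the actual StickyBorders source:

// Illustrative clamping logic, not the real StickyBorders implementation.
public final class StickyClamp {
    static double clamp(double v, double min, double max) {
        return Math.max(min, Math.min(max, v));
    }

    public static void main(String[] args) {
        // In a 3x4 space, a move to (4, 2) is clamped to (3, 2).
        double x = clamp(4, 0, 3);
        double y = clamp(2, 0, 4);
        System.out.println(x + ", " + y); // prints 3.0, 2.0
    }
}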
The approach we took in that model was to use a 3D grid space with custom borders and query methods. The space itself was still Cartesian; we just visualized it as a cylinder using custom display code. Using the Cartesian grid was a reasonable approximation for this application since the cell dimensions were significantly smaller than the vessel radius, so curvature effects could be neglected. The boundary conditions on the vessel space were wrap-around in the angular dimension, so that cells could move continuously around the circumference of the vessel, and the axial boundary conditions were also wrapped, as we assumed the vessel was long enough for this to be reasonable. The wall thickness dimension had hard boundaries at the basement membrane (y = 0) and at the fluid interface (y = wall thickness).
Depending on which type of space you are using, you will need to implement a PointTranslator or GridPointTranslator that performs the border functions. If you want specific examples of the code, I suggest you reach out to the authors directly.

CorePlot - dynamic x-axis data using two arrays

This is more of an open discussion topic than anything else. Currently I'm storing 50 Float32 values in my NSMutableArray *voltageArray before I refresh my CPTPlot *plot. Every time I obtain 50 values, I remove the previous 50 from voltageArray and repeat the process, always displaying the latest 50 values in "real time" on my plot.
However, the data I'm receiving (which is voltage coming from a Cypress BLE module equipped with a pressure transducer) is so quick that any variation (0.4 V to 4.0 V; no pressure to lots of pressure) cannot be seen on my graph. It just shows up as a straight line, varying up and down without showing increased or decreased slopes.
To show overall change, I wanted to take those 50 values, store them in the first index of another NSMutableArray *stampArray and use the index of stampArray to display information. Meanwhile, the numberOfRecordsForPlot: method would look like this:
- (NSUInteger)numberOfRecordsForPlot:(CPTPlot *)plot {
    return DATA_PER_STAMP * _stampCount;
}
This would initially be 50; then after 50 pieces of data are captured from the BLE module, _stampCount would increase by one, and the number of records for the plot would increase by 50 (until about the 2,500-10,000 range, at which point I'd refresh the whole thing and restart the process).
Is this the right approach? How would I be able to make the first 50 points stay on the graph while building the next 50, and so on? Imagine a y = x^2 graph, and what it looks like when applying integration (breaking the area under the curve into rectangles).
Look at the "Real Time Plot" demo in the Plot Gallery example app included with Core Plot. It starts off with an empty plot, adding a new point each cycle until reaching the maximum number of points. After that, one old point is removed for each new one added so the total number stays constant. The demo uses a timer to pass random data to the plot, but your app can of course collect data from anywhere. Be sure to always interact with the graph from the main thread.
I doubt you'll be able to display 10,000 data points on one plot (does your display have enough pixels to resolve that many points?). If not, you'll get much better drawing performance if you filter and/or smooth the data to remove some of the points before sending them to the plot.
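For what it's worth, the demo's rolling-window bookkeeping amounts to a fixed-capacity queue: once full, one old sample is dropped per new sample. Here is a sketch of that logic, in Java rather than your app's Objective-C, with illustrative names:

import java.util.ArrayDeque;
import java.util.Deque;

// Rolling window: once the buffer holds maxPoints samples, drop the oldest
// sample for each new one, so the plot always shows a constant-width window.
public final class RollingWindow {
    private final Deque<Float> samples = new ArrayDeque<>();
    private final int maxPoints;

    public RollingWindow(int maxPoints) {
        this.maxPoints = maxPoints;
    }

    public void add(float v) {
        if (samples.size() == maxPoints) {
            samples.removeFirst(); // oldest point scrolls off the plot
        }
        samples.addLast(v);
    }

    public int numberOfRecords() {
        return samples.size(); // what numberOfRecordsForPlot: would return
    }
}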

How can I compare two NSImages for differences?

I'm attempting to gauge the percentage difference between two images.
Having done a lot of reading, I seem to have a number of options, but I'm not sure which method is best for:
Ease of coding
Performance.
The methods I've seen are:
Non-language-specific (academic): "Image comparison - fast algorithm", and Mac-specific: direct pixel access (http://www.markj.net/iphone-uiimage-pixel-color/)
Does anyone have any advice about which solutions make the most sense for the above two cases, and any code samples showing how to apply them?
I've had success calculating the difference between two images using the histogram technique mentioned here. redmoskito's answer in the SO question you linked to was actually my inspiration!
The following is an overview of the algorithm I used:
Convert the images to grayscale, so you compare one channel instead of three.
Divide each image into an n x n grid of "subimages". Then, for each subimage pair:
Calculate their colour composition histograms.
Calculate the absolute difference between the two histograms.
The maximum difference found between two subimages is a measure of the two images' overall difference. Other metrics could also be used (e.g. the average difference between subimages).
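Here is a rough sketch of the histogram part of that algorithm in Java; it compares whole images rather than the n x n grid of subimages, and all names are mine, not from the blog post:

import java.awt.image.BufferedImage;

// Grayscale-histogram comparison: build a 256-bucket histogram per image,
// then sum the absolute bucket differences (0 = identical histograms).
public final class HistogramDiff {
    static int[] grayHistogram(BufferedImage img) {
        int[] hist = new int[256];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                hist[(r + g + b) / 3]++; // naive grayscale conversion
            }
        }
        return hist;
    }

    static long difference(BufferedImage a, BufferedImage b) {
        int[] ha = grayHistogram(a), hb = grayHistogram(b);
        long diff = 0;
        for (int i = 0; i < 256; i++) {
            diff += Math.abs(ha[i] - hb[i]);
        }
        return diff;
    }
}

Applying the same routine per grid cell and taking the maximum gives the metric described above.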
As tskuzzy noted in his answer, if your ultimate goal is a binary "yes, these two images are (roughly) the same" or "no, they're not", you need some meaningful threshold value. You could produce such a value by passing images into the algorithm and tweaking the threshold based on its output and how similar you think the images are. A form of machine learning, I suppose.
I recently wrote a blog post on this very topic, albeit as part of a larger goal. I also created a simple iPhone app to demonstrate the algorithm. You can find the source on GitHub; perhaps it will help?
It is really difficult to suggest something when you don't tell us more about the images or the variations. Are they shapes? Are they different objects, and you want to know what class of object they are? Are they the same object, and you want to distinguish the object instances? Are they faces? Are they fingerprints? Are the objects in the same pose? Under the same illumination?
When you say performance, what exactly do you mean? How large are the images? All in all, it really depends. Given what you've said, if it is only about ease of coding and performance, I would suggest just taking the absolute value of the difference of pixels. That is super easy to code and about as fast as it gets, but it is really unlikely to work for anything other than the most synthetic examples.
That being said I would like to point you to: DHOG, GLOH, SURF and SIFT.
You can use the fairly basic subtraction technique that the lads above suggested. @carlosdc has hit the nail on the head with regard to the type of image this basic technique can be used for. I have attached an example so you can see the results for yourself.
The first shows an image from a simulation at some time t. The second image, taken some (simulation) time later at t + dt, was subtracted from the first. The subtracted image (shown in black and white for clarity) then shows how the simulation has changed over that time. This was done as described above, and it is very powerful and easy to code.
Hope this aids you in some way
This is some old, nasty FORTRAN, but it should give you the basic approach. It is not that difficult at all. Since I am working with a two-colour palette here, for a colour image you would do this operation for R, G and B separately. That is, compute the intensity or value in each cell/pixel and store it in an array; do the same for the other image, and subtract one array from the other, which will leave you with a colourful subtraction image. My advice would be to do as the lads suggest above: compute the magnitude of the sum of the R, G and B components so you get just one value per pixel. Write that to an array, do the same for the other image, then subtract. Then create a new range for either R, G or B and map the resulting subtracted array to it; this will give a much clearer picture as a result.
* =============================================================
      SUBROUTINE SUBTRACT(FNAME1,FNAME2,IOS)
* This routine subtracts one image array from another
* =============================================================
* Common :
      INCLUDE 'CONST.CMN'
      INCLUDE 'IO.CMN'
      INCLUDE 'SYNCH.CMN'
      INCLUDE 'PGP.CMN'
* Input :
      CHARACTER fname1*(sznam),fname2*(sznam)
* Output :
      integer IOS
* Variables:
      logical glue
      character fullname*(szlin)
      character dir*(szlin),ftype*(3)
      integer i,j,nxy1,nxy2
      real si1(2*maxc,2*maxc),si2(2*maxc,2*maxc)
* =================================================================
      IOS = 1
      nomap=.true.
      ftype='map'
      dir='./pictures'
! reading first image
      if(.not.glue(dir,fname2,ftype,fullname))then
        write(*,31) fullname
        return
      endif
      OPEN(unit2,status='old',name=fullname,form='unformatted',
     &     err=10,iostat=ios)
      read(unit2,err=11)nxy2
      read(unit2,err=11)rad,dxy
      do i=1,nxy2
        do j=1,nxy2
          read(unit2,err=11)si2(i,j)
        enddo
      enddo
      CLOSE(unit2)
! reading second image
      if(.not.glue(dir,fname1,ftype,fullname))then
        write(*,31) fullname
        return
      endif
      OPEN(unit2,status='old',name=fullname,form='unformatted',
     &     err=10,iostat=ios)
      read(unit2,err=11)nxy1
      read(unit2,err=11)rad,dxy
      do i=1,nxy1
        do j=1,nxy1
          read(unit2,err=11)si1(i,j)
        enddo
      enddo
      CLOSE(unit2)
! subtracting images
      if(nxy1.eq.nxy2)then
        nxy=nxy1
        do i=1,nxy1
          do j=1,nxy1
            si(i,j)=si2(i,j)-si1(i,j)
          enddo
        enddo
      else
        print *,'SUBTRACT: different sizes of image arrays'
        IOS=0
        return
      endif
* normal finishing
      IOS=0
      nomap=.false.
      return
* exceptional finishing
   10 write (*,30) fullname
      return
   11 write (*,32) fullname
      return
   30 format('Cannot open file ',72A)
   31 format('Improper filename ',72A)
   32 format('Error reading from file ',72A)
      end
! =============================================================
Hope this is of some use. All the best.
Of the methods described in your first link, the histogram comparison method is by far the simplest to code and the fastest. However, key-point matching will provide far more accurate results, since you want a precise number describing the difference between two images.
To implement the histogram method, I would do the following:
Compute the red, green, and blue histograms of each image.
Add up the differences between corresponding buckets.
If the total difference is above a certain threshold, the images are too dissimilar and the similarity is 0%.
Otherwise, the colors found in the images are similar, so do a pixel-by-pixel comparison and convert the difference into a percentage (sketched below).
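A rough Java sketch of that final pixel-by-pixel step; the class and method names are mine, and it assumes both images have identical dimensions:

import java.awt.image.BufferedImage;

// Pixel-by-pixel step: average per-channel difference, converted to a
// percentage (0% = identical, 100% = maximally different).
public final class PixelDiff {
    static double percentDifference(BufferedImage a, BufferedImage b) {
        long total = 0;
        for (int y = 0; y < a.getHeight(); y++) {
            for (int x = 0; x < a.getWidth(); x++) {
                int p = a.getRGB(x, y), q = b.getRGB(x, y);
                total += Math.abs(((p >> 16) & 0xFF) - ((q >> 16) & 0xFF)); // red
                total += Math.abs(((p >> 8) & 0xFF) - ((q >> 8) & 0xFF));   // green
                total += Math.abs((p & 0xFF) - (q & 0xFF));                 // blue
            }
        }
        long maxTotal = 255L * 3 * a.getWidth() * a.getHeight();
        return 100.0 * total / maxTotal;
    }
}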
I don't know any precise algorithms for finding the key points of an image. However, once you find them for each image, you can do a pixel-by-pixel comparison around each of the key points.

Projectile hit coordinates at the apex of its path

I have a projectile that I would like to pass through specific coordinates at the apex of its path. I have been using a superb equation that giogadi outlined here, by plugging in the velocity values it produces into chipmunk's cpBodyApplyImpulse function.
The equation has one drawback that I haven't been able to work around: it only works when the coordinates I want to hit have a y value higher than the cannon (where my projectile starts). This means I can't shoot at a downward angle.
Can anybody help me find a suitable equation that works no matter where the target is in relation to the cannon?
As pointed out above, there isn't any way to make the apex be lower than the height of the cannon (without making gravity work backwards). However, it is possible to make the projectile pass through a point below the cannon; the equations are all here. The equation you need to solve is:
angle = arctan( (v^2 ± sqrt(v^4 - g*(g*x^2 + 2*y*v^2))) / (g*x) )
where you choose a velocity v and plug in the x and y positions of the target, assuming the cannon is at (0, 0). The ± means you can choose either root. If the argument to the square root is negative (an imaginary root), you need a larger velocity. So, if you are "in range", you have two possible angles for any particular velocity (except at maximum range, the 45 degree case, where the two roots give the same answer).
I suspect one trajectory will tend to 'look' much more sensible than the other, but that's something to play around with once you have something working. You may want to stick with the apex grazing code for the cases where the target is above the cannon.
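If it helps, here is a small Java sketch that solves that equation for the two candidate angles, with the cannon at the origin and g positive; all names are illustrative:

// Solve for the two candidate launch angles at a fixed speed v,
// cannon at the origin, target at (x, y), gravity g > 0.
public final class LaunchAngle {
    // Returns {high, low} angles in radians, or null if out of range.
    static double[] angles(double v, double x, double y, double g) {
        double disc = v * v * v * v - g * (g * x * x + 2 * y * v * v);
        if (disc < 0) {
            return null; // imaginary root: need a larger velocity
        }
        double root = Math.sqrt(disc);
        return new double[] {
            Math.atan2(v * v + root, g * x), // high, lofted trajectory
            Math.atan2(v * v - root, g * x)  // low, flat trajectory
        };
    }

    public static void main(String[] args) {
        double[] a = angles(30, 40, -5, 9.81); // target 5 units below the cannon
        if (a != null) {
            System.out.printf("high: %.1f deg, low: %.1f deg%n",
                    Math.toDegrees(a[0]), Math.toDegrees(a[1]));
        }
    }
}

Note that for a target below the cannon (negative y) the discriminant only grows, so a downward shot is never harder to reach than the mirrored upward one at the same speed.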