Patches occupied by turtle shape in NetLogo - size

I would like a little help with understanding turtle shape and size versus the origin patch. I am trying to mark the patches that are exactly under a specific turtle shape. For example, if the turtle is a rectangle of w x h, I would like to change the color or properties of all patches under that shape, not only the origin patch. Of course, with a rectangle I could perhaps color the patches underneath manually, but is there any option to modify the patches under a more complicated turtle shape? Thank you.

Well, there is a kludgey way to do this. It has aliasing artifacts and other minor issues, such as transferring all visible objects (turtles, links, labels, the drawing layer, etc.) into the patch colors, but at least it's possible. It takes advantage of the bundled bitmap extension. The main idea is in paint-patches below.
extensions [ bitmap ]

to setup
  clear-all
  resize-world 0 199 0 199
  set-patch-size 1
  ask n-of 30 patches [ sprout 1 [ set size 15 ] ]
end

to paint-patches
  let bmap bitmap:from-view
  bitmap:copy-to-pcolors bmap true
  ask turtles [ ht ] ; to show that the turtle shape is now painted to pcolors
end

That's impossible in NetLogo. Turtle shapes are purely visual. There's no way to access the exact contours of a turtle shape and then somehow use the contour as the basis for a computation.
If you're working with a small set of known shapes, like maybe square/triangle/circle, then you could handle each of the cases individually and write your own code to color patches corresponding to the shape. But if you need this capability in general, you're stuck.
You could write an extension to do it, but the extension would have to contain entirely original code to do the actual work of computing the overlap between the shape and the patch grid. There is no existing code inside NetLogo that does the computation you want.
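For illustration, here is a minimal sketch (in Python; the triangle outline and the 10 x 10 world are hypothetical) of the kind of overlap computation such an extension would have to implement: place the shape's outline in patch coordinates and test which patch centers fall inside it.
import numpy as np
from matplotlib.path import Path

# Hypothetical shape outline in patch coordinates, already rotated,
# scaled, and translated to the turtle's position and heading.
contour = Path([(2.0, 1.0), (6.0, 1.5), (4.0, 5.0)])

# Patch centers of a hypothetical 10 x 10 world.
xs, ys = np.meshgrid(np.arange(10), np.arange(10))
centers = np.column_stack([xs.ravel(), ys.ravel()])

covered = contour.contains_points(centers).reshape(10, 10)
print(np.argwhere(covered))  # (y, x) coordinates of patches under the shape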


How can I plot a portion of a surface in a specified region?

I have a parametric surface in 3D. I would like to observe parts of this surface, specifically the part with z > 0 and the part with x^2 + y^2 + z^2 < c.
A few methods that I tried:
Naïvely throwing away the rest of the data, for instance setting X[Z < 0] = nan, etc. Since the cut does not line up with the parametrization I chose, this creates ragged edges (a sketch of this attempt appears below, after my questions). Is there some sort of "antialiasing" interpolation option I can choose? I would be grateful for a pointer to the docs for numpy or plotly.
Trying to set the alpha of the color scale. This sort of works, but it seems to introduce some incorrect rendering: in the picture below, the dark green lump should be in front of the light green disk. Is there something I did wrong?
On the other hand, I couldn't locate in the manual a way to set "two-dimensional" color scales, so that I can simultaneously set the opacity according to the z value and the hue according to some other quantity of interest. Is this possible?
Is there a convenient method to achieve my goal? Or can I improve on my attempts above? Any help is appreciated!
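For concreteness, a minimal sketch of the first attempt, under assumptions of my own (plotly + NumPy; the sphere parametrization and grid are hypothetical):
import numpy as np
import plotly.graph_objects as go

# Hypothetical parametric surface: a unit sphere on a (u, v) grid.
u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 200), np.linspace(0, np.pi, 200))
X = np.cos(u) * np.sin(v)
Y = np.sin(u) * np.sin(v)
Z = np.cos(v)

# Mask out the z < 0 part; plotly skips NaN cells, but the hole's edge
# follows the (u, v) grid, hence the ragged boundary described above.
mask = Z < 0
for A in (X, Y, Z):
    A[mask] = np.nan

go.Figure(go.Surface(x=X, y=Y, z=Z)).show()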

NetLogo GIS extension: how to set world size to give a specific scale to patches

I'm developing an epidemiological model using GIS data for a small town. One of the submodels concerns infections: I need an infected agent to have a certain probability of infecting other agents that are on the same patch.
To model this properly, I need my patches to have a specific area, for example 100 square meters. Is there a way to set the world size so that I know the exact area a single patch represents?
Thank you very much.
First of all, you might check the Stack Overflow guide to asking questions. Having a minimal reproducible example also helps. Following the Stack Overflow guidelines helps our community ;)
The way you define the patch scale with the GIS extension is indeed not very clear. A good tutorial is available in Chapter 6 of this book.
First, you need a raster file (e.g. .asc) with a defined resolution (e.g. 10 x 10 m). You can look at how to do this in QGIS and other GIS software. Make sure to export it to .asc and reproject it to your target CRS, otherwise you might run into this problem.
Here's a simple piece of code for you (note that the gis extension must be loaded and rastermap declared as a global before it can be set):
extensions [ gis ]
globals [ rastermap ]
patches-own [ infectability ]

to setup-patches
  ; load the raster file:
  set rastermap gis:load-dataset "C:/folder/yourfile.asc"
  ; define the coordinate system:
  gis:load-coordinate-system "C:/folder/yourfile.prj"
  ; make each raster cell correspond to one NetLogo patch:
  let width floor (gis:width-of rastermap / 2)
  let height floor (gis:height-of rastermap / 2)
  resize-world (-1 * width) width (-1 * height) height
  ; define your patch size in pixels (makes the world bigger/smaller in the Interface):
  set-patch-size 1
  ; define world boundaries:
  gis:set-world-envelope gis:envelope-of rastermap
  ; apply the raster data to your patches:
  gis:apply-raster rastermap infectability
  ; make your patches look dangerous:
  ask patches with [ infectability > 0.8 ] [ set pcolor red ]
end
After that, you will need procedures in which turtles read the patch variable infectability, e.g. via [ infectability ] of patch-here. Good luck! ;)

NetLogo - I'd like to set the size of a turtle to the size of the patch it's standing on

I'd like to set the size of a turtle to the size of the patch it's standing on.
Even better, I need turtles that are as big as 4 or 16 patches.
If, for example, I have a square world of 16x16 patches, I'd like turtles that can be 1x1 or 2x2 or 4x4 etc.,
and the turtle should overlap the patches perfectly: it might cover 1 patch (the 1x1 case), 4 patches (the 2x2 case), etc.
About setting the size of the turtle equal to the size of the patch for perfect overlapping, I'm trying this code:
hatch-turtle 1 [set size [size] of patch-here ]
but it gives me the error:
A patch can't access a turtle variable without specifying which turtle.
Maybe try some variation of:
ask turtles [ set size patch-size ]
perhaps scaling by a multiplier as needed. Note that size is a per-turtle variable measured in patches, while patch-size is a global reporter measured in pixels; it is global because all patches are always the same size in pixels.
I really don't understand at all what you're trying to do here, but the above is legal NetLogo code, anyway.
A turtle's size is measured in units of patches, so if you want your turtles to be the same size as the patches they are standing on, that's:
ask turtles [ set size 1 ]
but 1 is the default size, so in order to get this behavior, you actually don't need to do anything at all.
This answer comes years after the question was asked, but I leave it here hoping that it helps others who may encounter the same problem (as I did). Below I first clarify the problem and then offer a solution.
Clarification: It is implied by the problem that OP has defined a square shape for the turtles. The default size of square turtles in NetLogo is 1, which means that by default a square turtle should completely fill a patch. However, OP still observed blank space between square turtles that are placed next to each other. The aim of this answer is to remove that blank space for square turtles of size 1.
Solution: To solve this problem, note that the default square shape of turtles in NetLogo is made up of a colored inner area and a thick colorless border. The blank space that the OP observed between the turtles was in fact composed of the colorless borders of square shapes. In order to produce a figure with colored squares placed immediately adjacent to each other (that is, without any apparent space between them), it suffices to define a new square shape with no border. This new square shape should be defined such that the inner area of the square fills the entire patch. This can be done using the Turtle Shapes Editor from the Tools menu: find the square shape, create a duplicate of it, and modify the new shape in the graphical editor. To modify the shape, click on its top-left corner and drag that corner to the top-left corner of the graphical editor window. Then do the same with the bottom-right corner.

Contours based on a "label mask"

I have images that have had features extracted with a contouring algorithm (I'm doing astrophysical source extraction). This approach yields a "feature map" that has each pixel "labeled" with an integer (usually ~1000 unique features per map).
I would like to show each individual feature as its own contour.
One way I could accomplish this is:
for ii in range(labelmask.max()):
    contour(labelmask, levels=[ii - 0.5])
However, this is very slow, particularly for large images. Is there a better (faster) way?
P.S.
A little testing showed that skimage's find_contours is no faster.
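(For reference, the skimage variant would look something like this; a sketch assuming scikit-image and the labelmask array from above:)
import numpy as np
from skimage import measure

contours = [measure.find_contours(labelmask == ii, 0.5)
            for ii in np.unique(labelmask) if ii != 0]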
As per @tcaswell's comment, I need to explain why contour(labels, levels=np.unique(labels) + 0.5) or something similar doesn't work:
1. Matplotlib spaces each subsequent contour "inward" by a linewidth to avoid overlapping contour lines. This is not the behavior desired for a labelmask.
2. The lowest-level contours encompass the highest-level contours.
3. As a result of the above, the highest-level contours will be surrounded by a miniature version of whatever colormap you're using and will have extra-thick contours compared to the lowest-level contours.
Sorry for answering my own... impatience (and good luck) got the better of me.
The key is to use matplotlib's low-level C routines:
import numpy as np
import matplotlib._cntr  # private C contouring module (removed in newer matplotlib)
from matplotlib.pyplot import imshow, plot

I = imshow(data)
E = I.get_extent()
# grid of pixel-center coordinates matching the displayed extent:
x, y = np.meshgrid(np.linspace(E[0], E[1], labels.shape[1]),
                   np.linspace(E[2], E[3], labels.shape[0]))
for ii in np.unique(labels):
    if ii == 0:
        continue  # skip the background label
    tracer = matplotlib._cntr.Cntr(x, y, labels * (labels == ii))
    T = tracer.trace(0.5)
    contour_xcoords, contour_ycoords = T[0].T
    # to plot them:
    plot(contour_xcoords, contour_ycoords)
Note that labels*(labels==ii) will put each label's contour at a slightly different location; change it to just labels==ii if you want overlapping contours between adjacent labels.

Face alignment algorithm on images

How can I do basic face alignment on a 2-dimensional image, assuming I have the positions/coordinates of the mouth and eyes?
Is there any algorithm that I could implement to correct the face alignment on images?
Face (or image) alignment refers to aligning one image (or face in your case) with respect to another (or a reference image/face). It is also referred to as image registration. You can do that using either appearance (intensity-based registration) or key-point locations (feature-based registration). The second category stems from image motion models where one image is considered a displaced version of the other.
In your case the landmark locations (3 points: eyes and mouth?) provide a good reference set for straightforward feature-based registration. Assuming you have the locations of a set of corresponding points x_1 and x_2 in the two 2D images, you can estimate a similarity transform (rotation, translation, scaling), i.e. a planar 2D transform S relating the two point sets. You could additionally add reflection to that, though for faces this will most likely be unnecessary.
Estimation can be done by forming the normal equations and solving a linear least-squares (LS) problem for the x_1 = S x_2 system, i.e. by linear regression. The similarity transform has 4 unknown parameters (rotation and scaling combine into the 2 linear coefficients a = s cos θ and b = s sin θ, plus 2 for translation), so each point pair contributes 2 equations and 2 points are enough. The solution to the above LS problem can be obtained through the Direct Linear Transform (e.g. by applying an SVD or a matrix pseudo-inverse). For a sufficiently large number of reference points (e.g. automatically detected ones), a RANSAC-type method can be used for point filtering and outlier removal (though this is not your case here).
After estimating S, apply image warping on the second image to get the transformed grid (pixel) coordinates of the entire image 2. The transform will change pixel locations but not their appearance. Unavoidably some of the transformed regions of image 2 will lie outside the grid of image 1, and you can decide on the values for those null locations (e.g. 0, NaN etc.).
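A minimal sketch of this estimate-then-warp pipeline, under assumptions of my own (NumPy + OpenCV; img1, img2 and the (N, 2) landmark arrays x1, x2 are hypothetical):
import numpy as np
import cv2

def estimate_similarity(x1, x2):
    # Solve x1 ~ S x2 for S = [[a, -b, tx], [b, a, ty]] by linear least squares.
    A, rhs = [], []
    for (u, v), (x, y) in zip(x1, x2):
        A.append([x, -y, 1, 0]); rhs.append(u)
        A.append([y,  x, 0, 1]); rhs.append(v)
    p, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(rhs, float), rcond=None)
    a, b, tx, ty = p
    return np.array([[a, -b, tx], [b, a, ty]])

S = estimate_similarity(x1, x2)            # least-squares similarity from x2 to x1
h, w = img1.shape[:2]
aligned = cv2.warpAffine(img2, S, (w, h))  # resample image 2 on image 1's grid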
For more details: R. Szeliski, "Image Alignment and Stitching: A Tutorial" (Section 4.3 "Geometric Registration")
In OpenCV see: Geometric Image Transformations, e.g. cv::getRotationMatrix2D, cv::getAffineTransform and cv::warpAffine. Note though that you should estimate and apply a similarity transform (a special case of an affine transform) in order to preserve angles and shapes.
For faces there is a lot of variability in feature points, so it won't be possible to fit all feature points perfectly with just an affine transform. The only way to align all the points perfectly is to warp the image given the points: triangulate the image over the points and apply an affine warp to each triangle to get a warped image in which all the points are aligned.
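A small sketch of that piecewise warp, under assumptions of my own (scikit-image; image, src_pts and ref_pts are hypothetical, with landmarks as (N, 2) arrays in (x, y) order):
from skimage.transform import PiecewiseAffineTransform, warp

# Estimate a mesh of affines over Delaunay triangles. warp() treats the
# transform as the inverse (output -> input) map, so estimate the mapping
# from reference coordinates to source-image coordinates.
tform = PiecewiseAffineTransform()
tform.estimate(ref_pts, src_pts)
aligned = warp(image, tform)  # each triangle gets its own affine warp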
Face alignment can be handled based on just the eye positions.
Here, OpenCV, Dlib and MTCNN offer face and eye detection. Besides, deepface is a Python-based framework that wraps those methods and offers an out-of-the-box detection and alignment function.
The detectFace function applies detection and then alignment in the background.
#!pip install deepface
from deepface import DeepFace

backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
DeepFace.detectFace("img.jpg", detector_backend = backends[0])
Besides, you can apply detection and alignment manually.
import matplotlib.pyplot as plt
from deepface.commons import functions

img = functions.load_image("img.jpg")
backends = ['opencv', 'ssd', 'dlib', 'mtcnn']

detected_face = functions.detect_face(img = img, detector_backend = backends[3])
plt.imshow(detected_face)

aligned_face = functions.align_face(img = img, detector_backend = backends[3])
plt.imshow(aligned_face)

processed_img = functions.detect_face(img = aligned_face, detector_backend = backends[3])
plt.imshow(processed_img)
There's a section Aligning Face Images in OpenCV's Face Recognition guide:
http://docs.opencv.org/trunk/modules/contrib/doc/facerec/facerec_tutorial.html#aligning-face-images
The script aligns given images at the eyes. It's written in Python, but should be easy to translate to other languages. I know of a C# implementation by Sorin Miron:
http://code.google.com/p/stereo-face-recognition/