Place sprites at certain positions using a shader - fragment-shader

What I achieved:
I created a shader (based on this code with this plant asset) that takes a black-and-white reference image (like a height map) and places plants on a grid determined by the darkness/lightness value of the respective pixel.
Where I am stuck:
I am trying to get a more organic look (instead of looking like a tileset), so I used Poisson-disc sampling to get random-looking coordinates for where I want to place the plants. In "normal" code I'd just loop through the array of generated coordinates and draw a sprite ... but I have no clue how to do that with a shader.
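For context, a minimal dart-throwing variant of Poisson-disc sampling might look like the following (a hypothetical C++ sketch, simpler but slower than Bridson's algorithm; the resulting point array is what would normally be looped over to draw one sprite per entry):

```cpp
// Hypothetical C++ sketch: dart-throwing Poisson-disc sampling (simpler but
// slower than Bridson's algorithm). Every accepted point keeps at least
// minDist to all previously accepted points.
#include <cstdlib>
#include <vector>

struct P { float x, y; };

std::vector<P> poissonDiscDartThrowing(float width, float height, float minDist,
                                       int maxAttempts = 5000) {
    std::vector<P> points;
    for (int attempt = 0; attempt < maxAttempts; ++attempt) {
        P c { width  * (std::rand() / float(RAND_MAX)),
              height * (std::rand() / float(RAND_MAX)) };
        bool farEnough = true;
        for (const P& p : points) {                       // reject if too close
            float dx = p.x - c.x, dy = p.y - c.y;
            if (dx * dx + dy * dy < minDist * minDist) { farEnough = false; break; }
        }
        if (farEnough) points.push_back(c);
    }
    return points;  // one plant sprite per point, scaled/culled by the height map
}
```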
Question:
Is it even possible to use a shader to place images at certain positions?
(I could obviously generate a static image but I am planning to adjust the shader to simulate wind going over the vegetation like this)
Thanks already in advance for any tip or help!

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating the point position on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the pixel positions trending towards infinity (see the sketch after this list). I understand it has something to do with tan, but frankly, it is too late for my brain to function anymore.
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get it in a usable format I would need to project the result into 2D coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
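For what it's worth, here is a minimal sketch of how the first attempt could be made robust, written as generic C++ against a column-major view-projection matrix rather than Godot's API (all names are hypothetical): clip each of the 12 box edges against the camera plane before the perspective divide, so corners behind the camera never produce coordinates that run off towards infinity.

```cpp
// Hypothetical C++ sketch (not Godot API): conservative screen-space AABB of a
// world-space AABB. Each of the 12 box edges is clipped against the camera
// plane (w > eps in clip space) before the perspective divide.
#include <algorithm>
#include <array>
#include <cfloat>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Rect2 { float minX, minY, maxX, maxY; };

// clip = viewProj * (p, 1); viewProj is a column-major 4x4 matrix.
static Vec4 toClip(const float vp[16], const Vec3& p) {
    return { vp[0]*p.x + vp[4]*p.y + vp[8]*p.z  + vp[12],
             vp[1]*p.x + vp[5]*p.y + vp[9]*p.z  + vp[13],
             vp[2]*p.x + vp[6]*p.y + vp[10]*p.z + vp[14],
             vp[3]*p.x + vp[7]*p.y + vp[11]*p.z + vp[15] };
}

// Clip the edge a-b against w > eps, then fold its projected endpoints into r.
static void accumulateEdge(Vec4 a, Vec4 b, Rect2& r, bool& any) {
    const float eps = 1e-5f;
    bool aIn = a.w > eps, bIn = b.w > eps;
    if (!aIn && !bIn) return;                  // whole edge behind the camera
    if (aIn != bIn) {                          // clip at w == eps
        float t = (eps - a.w) / (b.w - a.w);
        Vec4 c = { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y),
                   a.z + t*(b.z - a.z), eps };
        if (aIn) b = c; else a = c;
    }
    const Vec4 pts[2] = { a, b };
    for (const Vec4& p : pts) {
        float nx = p.x / p.w, ny = p.y / p.w;  // NDC, in [-1, 1] when on screen
        r.minX = std::min(r.minX, nx); r.maxX = std::max(r.maxX, nx);
        r.minY = std::min(r.minY, ny); r.maxY = std::max(r.maxY, ny);
        any = true;
    }
}

// Returns false if the box is entirely behind the camera or off screen.
bool screenAABB(const float viewProj[16], Vec3 bmin, Vec3 bmax, Rect2& out) {
    std::array<Vec4, 8> c;
    for (int i = 0; i < 8; ++i)
        c[i] = toClip(viewProj, { (i & 1) ? bmax.x : bmin.x,
                                  (i & 2) ? bmax.y : bmin.y,
                                  (i & 4) ? bmax.z : bmin.z });
    static const int edges[12][2] = { {0,1},{2,3},{4,5},{6,7},   // x edges
                                      {0,2},{1,3},{4,6},{5,7},   // y edges
                                      {0,4},{1,5},{2,6},{3,7} }; // z edges
    out = { FLT_MAX, FLT_MAX, -FLT_MAX, -FLT_MAX };
    bool any = false;
    for (const auto& e : edges) accumulateEdge(c[e[0]], c[e[1]], out, any);
    if (!any) return false;
    // Clamp to visible NDC so points far off screen don't blow up the box.
    out.minX = std::max(out.minX, -1.f); out.maxX = std::min(out.maxX, 1.f);
    out.minY = std::max(out.minY, -1.f); out.maxY = std::min(out.maxY, 1.f);
    return out.minX <= out.maxX && out.minY <= out.maxY;
}
```

The result is in normalized device coordinates; scaling by half the viewport size and offsetting gives pixels, and the final clamp keeps the box valid even when the camera sits inside or very close to the model.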
What I am looking for:
An output that can be passed to a shader telling it where to perform certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to also use a texture. It could also be rewritten to use polygons, but I am trying to keep calculations in the shader to a minimum.
Certain solutions I have tried before have worked, slightly, but this must be robust. The camera interfacing with the 3D object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3D model, with points both in front and behind.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.

How to detect an image between shapes from camera

I've been searching around the web about how to do this and I know that it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are for detecting separate shapes or for template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've searched, it's not too difficult to find the circles with the camera using contours but, how do I extract what is between them? The circles work like a pattern on the image to grab what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is a bit rotated/distorted on the camera?
I'm using Xamarin.iOS for this but, from what I've already seen, I believe I need to go native for this, and any Objective-C example is welcome too.
EDIT
Imagining that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, i.e. perform a similarity transform. To define such a transform, three points are more than needed; two suffice. The construction of the affine matrix is a little tricky.
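As an illustration of that construction, here is a hedged OpenCV (C++) sketch that builds the 2x3 similarity matrix from two point correspondences (for example two of the detected circle centres) and lets warpAffine do the crop/rotate/scale in one pass; the output size and target positions are assumptions you would replace with your own layout:

```cpp
// Hypothetical sketch: build a 2x3 similarity (rotation + uniform scale +
// translation) from two point correspondences, then crop with cv::warpAffine.
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Returns the matrix M that maps src0->dst0 and src1->dst1 as a similarity.
cv::Mat similarityFromTwoPoints(cv::Point2f src0, cv::Point2f src1,
                                cv::Point2f dst0, cv::Point2f dst1) {
    // Treat the point deltas as complex numbers: (a + bi) = (dst1-dst0)/(src1-src0).
    cv::Point2f s = src1 - src0, d = dst1 - dst0;
    float denom = s.x * s.x + s.y * s.y;
    float a = (d.x * s.x + d.y * s.y) / denom;   // scale * cos(theta)
    float b = (d.y * s.x - d.x * s.y) / denom;   // scale * sin(theta)
    // Translation chosen so that M * src0 == dst0.
    cv::Mat M = (cv::Mat_<double>(2, 3) <<
        a, -b, dst0.x - (a * src0.x - b * src0.y),
        b,  a, dst0.y - (b * src0.x + a * src0.y));
    return M;
}

// Usage: map two reference circle centres onto fixed positions in a fixed-size
// output image, which crops, rotates and scales in a single call.
void extractRegion(const cv::Mat& camera, cv::Mat& out,
                   cv::Point2f circleA, cv::Point2f circleB) {
    cv::Size outSize(400, 300);                        // assumed output size
    cv::Point2f targetA(50, 150), targetB(350, 150);   // assumed target layout
    cv::Mat M = similarityFromTwoPoints(circleA, circleB, targetA, targetB);
    cv::warpAffine(camera, out, M, outSize);
}
```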

Calculating the area and position of dynamically formed polygons

Hi Stack Overflow community,
This is a continuation of a question I asked 6 months ago regarding calculating the area and position of dynamically formed rectangles. The solution provided for that worked a treat, but now I want to take this a step further.
Some background - I'm working on a puzzle game using Cocos2D/Box2D where the player draws lines on the screen. Depending on where the player draws, I want to then work out the area and position of the polygons that appear as a result of the drawn lines.
In the following image, the black border represents a playing area, this will always be the same shape. The grey lines are player drawn and will always be straight. The green square is an obstacle. The obstacle objects will be convex shapes. The formed polygons (3 in this case) are the blue areas and are the shapes I'm trying to get the coordinates and area for.
I think I'll be fine with working out the area of a polygon using determinants but before that, I need to work out the coordinates of the blue polygons and I'm not sure how to do this.
I've got the lines (x,y) coordinates for both ends, the coordinates for the obstacle and the corner coordinates for the black border. Using those, is it possible to work out the coordinates of the blue polygons or am I approaching this the wrong way?
UPDATE - response to duffymo
Thanks for your answer. To explain further, each object mentioned is defined and encapsulated in a class, i.e. I've got a Line/Obstacle/PlayingArea object. My polygon object is encapsulated in a 'Rectangle' object. Each of these objects has its own properties associated with it, such as its coordinates/area/ID/state etc...
In order to keep track of all the objects, I've got an overseeing singleton object which holds all of the Line objects / Obstacle objects etc. in their own respective arrays. This way, I can loop through, say, all Lines and know where each one has been drawn by the player.
The game is a bit like classic JezzBall, so I need to be able to create these polygon shapes when a user draws a line, because the polygon shape will be used as my way of detecting if that particular area contains a ball. If not, the area needs to be filled.
Since you already have the nodes and edges for your polygons, I'd recommend that you calculate the centroids, perimeters, and areas using contour integration. You can express the centroids and areas as contour integrals using Green's theorem.
You can use Gaussian quadrature to do piecewise integration along each edge.
It'll be fast and accurate; it'll work on polygons of arbitrary complexity.
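For straight polygon edges those edge integrals can even be evaluated in closed form, which is the familiar shoelace formula; a minimal, hedged C++ sketch of that special case (names are hypothetical):

```cpp
// Hypothetical sketch: area and centroid of a simple polygon via the shoelace
// formula, the closed-form result of the contour integrals from Green's theorem
// when all edges are straight segments.
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Vertices must be ordered around the boundary (CW or CCW, no self-crossings).
bool polygonAreaCentroid(const std::vector<Pt>& v, double& area, Pt& centroid) {
    if (v.size() < 3) return false;
    double a = 0.0, cx = 0.0, cy = 0.0;
    for (size_t i = 0, n = v.size(); i < n; ++i) {
        const Pt& p = v[i];
        const Pt& q = v[(i + 1) % n];          // wrap around to close the contour
        double cross = p.x * q.y - q.x * p.y;  // contribution of edge p->q
        a  += cross;
        cx += (p.x + q.x) * cross;
        cy += (p.y + q.y) * cross;
    }
    a *= 0.5;                                  // signed area
    if (std::abs(a) < 1e-12) return false;     // degenerate polygon
    centroid = { cx / (6.0 * a), cy / (6.0 * a) };
    area = std::abs(a);
    return true;
}
```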
UPDATE: Objective-C is an object-oriented language. I don't know it myself, but I believe it's based on ideas from C and C++. Since that's the case, I'd recommend that you start writing more in terms of objects. Arrays of coordinates? They need to be encapsulated together. I'd suggest a Point abstraction that encapsulates a point (id, x, y) together. Make a Grid that has a List of Points.
It sounds like users supply the relationship between Points to form Polygons. That's not clear from your description, so it's not a surprise that you're having trouble implementing it.

Which pixels did that drawmesh operation just draw to?

OK, it's a relatively simple problem: I want to know where, in screen space, a particular mesh was just drawn. I plan on then storing that information in a data store of some kind so that when I interact with something in screen space, I can look it up in the register and find the object, e.g. click on the spaceship drawn on the screen and then select it as a target, etc.
I can't find any way of finding out which pixels the mesh was drawn to though...
Alternatively, if I'm missing something obvious regarding what it is that I want to do, please let me know!
There is no easy way to do that. But you can use another texture as a render target and render those meshes with unique colors.
So, for example, you assign #FF0000 to your mesh A and also draw it to your second render target with that color. Now, when you read a pixel from the second render target and look at its color, if it is #FF0000 you know that pixel is part of mesh A. Thus you can easily pick the mesh drawn at a certain pixel when you click one of those pixels.
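A minimal, API-agnostic sketch of the ID/color mapping (hypothetical names; the render-target setup, the unlit ID pass and the pixel read-back are specific to your graphics API, and the ID pass should be drawn without blending, MSAA or lighting so the colors stay exact):

```cpp
// Hypothetical sketch: pack mesh IDs into unique RGB colours for an ID render
// target, and decode the ID back from a pixel read out of that target.
#include <cstdint>

// 24-bit ID packed into an opaque colour; enough for ~16 million meshes.
struct Color8 { uint8_t r, g, b, a; };

Color8 idToColor(uint32_t id) {
    return { uint8_t(id & 0xFF), uint8_t((id >> 8) & 0xFF),
             uint8_t((id >> 16) & 0xFF), 255 };
}

uint32_t colorToId(Color8 c) {
    return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
}

// On click: read the pixel under the cursor from the ID target (API specific),
// then: uint32_t pickedMesh = colorToId(pixel);
```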
Why don't you unproject your screen-space coords into 3D space? The only complication I had was the fact that I'd be left with a plane; I could check if a mesh intersected with that plane, but I often had multiple candidates for 'picking'.
Search Google for DirectX Unproject and there are various articles discussing it. It's sometimes complicated to implement, but done well it's actually pretty nifty; don't be put off by the people online who say it doesn't work, it does work!
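A hedged sketch of the usual form of this, unprojecting the near- and far-plane points under the cursor into a world-space picking ray (plain C++ against an inverted view-projection matrix; clip-space conventions here are Direct3D-style and the names are hypothetical):

```cpp
// Hypothetical sketch: unproject a screen pixel into a world-space picking ray.
// Assumes you can invert the combined view-projection matrix with your math
// library; matrices are stored column-major, clip-space z is in [0, 1] (D3D).
struct Vec3 { float x, y, z; };

// world = invViewProj * (ndc, 1), followed by the perspective divide.
Vec3 unproject(const float invViewProj[16], float ndcX, float ndcY, float ndcZ) {
    float v[4] = { ndcX, ndcY, ndcZ, 1.0f };
    float r[4] = { 0, 0, 0, 0 };
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += invViewProj[col * 4 + row] * v[col];   // column-major
    return { r[0] / r[3], r[1] / r[3], r[2] / r[3] };
}

// Build a ray through pixel (px, py) on a w x h viewport.
void pickingRay(const float invViewProj[16], float px, float py, float w, float h,
                Vec3& origin, Vec3& dir) {
    float ndcX = 2.0f * px / w - 1.0f;
    float ndcY = 1.0f - 2.0f * py / h;                    // screen y points down
    origin    = unproject(invViewProj, ndcX, ndcY, 0.0f); // point on near plane
    Vec3 far  = unproject(invViewProj, ndcX, ndcY, 1.0f); // point on far plane
    dir = { far.x - origin.x, far.y - origin.y, far.z - origin.z };
    // Normalise dir if your intersection tests expect a unit direction; testing
    // this ray against meshes gives ordered hit candidates instead of a plane.
}
```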

Simple algorithm for tracking a rectangular blob

I have created an experimental fast rectangular object tracking system; it will be used for head tracking and controlling objects in a 3D engine (Ogre3D).
For now I am able to show the webcam any kind of brightly colored rectangle (text markers are good objects), and the system registers basic properties of this object (hue/value/lightness and initial width and height at 0 degrees rotation).
After I have registered the trackable object, I do some simple frame processing to create a grayscale probability map.
So now I have 2 known things:
1) 4 corners for the last object position (it's always a rectangle but it may be rotated)
2) a pretty rectangular (but still far from perfect) blob which is the brightest in the frame. I can get the coordinates of any point of the blob without problems; point detection is stable enough.
I can find a bounding rectangle of the object without problems, but I have a problem with detecting the object corners themselves.
I need the simplest possible (quick & dirty would be great) algorithm to scan the image starting with some known coordinates (a point inside the blob) and detect the 4 new (x, y) coordinates of the "blobish" rectangle's corners (not the corners of a bounding box, but the corners of the rectangular blob itself).
A ready-to-use C++ function would be awesome, but somehow Google doesn't like me today :(
I think that it would be overkill to use some complicated function from the OpenCV library just to extract 4 points of a single rectangular blob. But if you know a quick and efficient way to do it using OpenCV (it must be real-time and light on the CPU because I'll be running the 3D engine at the same time) then I would be really grateful.
You can apply a Hough transform to the segmented image to detect lines. Using the detected lines, you can calculate their intersections to find the corner coordinates of the blob.
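A short OpenCV (C++) sketch of that idea, assuming the probability map is an 8-bit grayscale image (the thresholds and the Hough vote count are guesses to tune, and the returned candidates still need clustering/filtering down to the 4 strongest corners):

```cpp
// Hypothetical sketch: detect the blob's edges with a Hough transform and
// intersect the resulting lines to get corner candidates.
#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Intersection of two lines in (rho, theta) form; false if (nearly) parallel.
static bool intersect(const cv::Vec2f& l1, const cv::Vec2f& l2, cv::Point2f& p) {
    float r1 = l1[0], t1 = l1[1], r2 = l2[0], t2 = l2[1];
    double det = std::cos(t1) * std::sin(t2) - std::sin(t1) * std::cos(t2);
    if (std::abs(det) < 1e-3) return false;
    p.x = float((std::sin(t2) * r1 - std::sin(t1) * r2) / det);
    p.y = float((std::cos(t1) * r2 - std::cos(t2) * r1) / det);
    return true;
}

std::vector<cv::Point2f> blobCorners(const cv::Mat& probabilityMap) {
    cv::Mat binary, edges;
    // Assumes an 8-bit single-channel probability map.
    cv::threshold(probabilityMap, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    cv::Canny(binary, edges, 50, 150);

    std::vector<cv::Vec2f> lines;
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 60);   // tune the vote threshold

    std::vector<cv::Point2f> corners;
    for (size_t i = 0; i < lines.size(); ++i)
        for (size_t j = i + 1; j < lines.size(); ++j) {
            cv::Point2f p;
            if (intersect(lines[i], lines[j], p) &&
                p.x >= 0 && p.y >= 0 && p.x < edges.cols && p.y < edges.rows)
                corners.push_back(p);                    // keep on-image corners only
        }
    return corners;   // cluster/filter these candidates down to 4 points
}
```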