Calculating the area and position of dynamically formed rectangles - objective-c

Hello stackoverflow community,
I'm working on a puzzle game using Cocos2D/Box2D where the player draws lines on the screen. Depending on where the player draws, I want to then work out the area and position of the rectangles that appear as a result of the drawn lines.
I've currently got an array of all lines in the game so I know their (x, y) positions and sizes, but I'm at a loss as to how to calculate the area and Cartesian coordinates of the rectangles that are dynamically formed. To help illustrate the problem, please see the following:
In the image, you can see a black border. Contained within this are 4 grey lines which have been drawn by the player. From this, 5 blue rectangles have been formed. Any guidance or advice on how I can calculate the area and Cartesian coordinates of the rectangles would be a great help.

I wonder if it would be easier to convert the lines into a set of rectangles?
Start with a list of rectangles which only contains the main big rectangle. For each line, see which rectangle in the list contains it. Remove that rectangle from the list of rectangles and replace it with 2 smaller rectangles defined by the line.
Once you have the list of rectangles, you can easily calculate their area by just doing (width * height).
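Here's a minimal sketch of that splitting step, assuming axis-aligned lines that fully span the rectangle they are drawn in (as in JezzBall); the Line struct and the splitRectsWithLine name are illustrative, not from any library:

```objc
#import <UIKit/UIKit.h> // CGRect plus NSValue's CGRect wrappers

// Hypothetical line type: two endpoints of a player-drawn segment.
typedef struct { CGPoint start; CGPoint end; } Line;

// Find the rectangle containing the line and replace it with the
// two smaller rectangles the line defines.
static void splitRectsWithLine(NSMutableArray *rects, Line line) {
    BOOL vertical = (line.start.x == line.end.x);
    CGPoint mid = CGPointMake((line.start.x + line.end.x) / 2,
                              (line.start.y + line.end.y) / 2);
    for (NSUInteger i = 0; i < rects.count; i++) {
        CGRect r = [rects[i] CGRectValue];
        if (!CGRectContainsPoint(r, mid)) continue; // line isn't in this rect

        CGRect a, b;
        if (vertical) {   // split into left and right parts at the line's x
            CGFloat x = line.start.x;
            a = CGRectMake(r.origin.x, r.origin.y, x - r.origin.x, r.size.height);
            b = CGRectMake(x, r.origin.y, CGRectGetMaxX(r) - x, r.size.height);
        } else {          // split into bottom and top parts at the line's y
            CGFloat y = line.start.y;
            a = CGRectMake(r.origin.x, r.origin.y, r.size.width, y - r.origin.y);
            b = CGRectMake(r.origin.x, y, r.size.width, CGRectGetMaxY(r) - y);
        }
        rects[i] = [NSValue valueWithCGRect:a];
        [rects insertObject:[NSValue valueWithCGRect:b] atIndex:i + 1];
        break; // each line splits exactly one rectangle
    }
}
```

Seed the array with the playing area's bounding rectangle and run this for each drawn line; afterwards every rectangle's position is its origin and its area is just r.size.width * r.size.height.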

Related

How can I create this combination/Venn diagram using Python? Or any other easy tool recommendation?

The rectangle frame is an area that needs to be split into 4 parts (not equal at all); each part should be sized according to the population in that area. Inside the rectangle I need to divide between two categories, which can be part of any of the quadrants (the splits of the rectangle). The circle could be an oval, or the whole rectangle could be circular. But I need it easy and quick! This was an easy design on paper.
Thanks.

UIImageView half moon slice

I'm trying to create an app with groups you can switch between. My idea was to pick the first 3 photos of the members in the group and lay the images over each other. Adding three images over each other is not really difficult; the difficult part for me is to make the other two images show up like a "half moon" beneath the other images. See the attached image for an example.
It isn't really a half moon. It's more like a crescent moon or lunate shape.
The principle is not a difficult one. Proceed as follows:
Start with an image, roughly a square.
Make an image context the same size as the image.
Fill a circle the size of the image, roughly offset about a third of its width to the left.
Fill another circle the size of the image, roughly offset about two thirds of its width to the left, using Clear blend mode.
Extract the resulting image from the image context.
You now have the desired lunate shape.
Now use that lunate shape as a mask or clipping area for the original image.
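Here is a sketch of those steps in Core Graphics, assuming an iOS image context; the function name and exact offsets are illustrative:

```objc
#import <UIKit/UIKit.h>

// Build the lunate shape: fill one circle, then erase a second,
// offset circle with the Clear blend mode.
UIImage *lunateMaskWithSize(CGSize size) {
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // First circle, the size of the image, offset about a third
    // of its width to the left.
    CGRect circle = CGRectMake(-size.width / 3, 0, size.width, size.height);
    CGContextSetFillColorWithColor(ctx, [UIColor blackColor].CGColor);
    CGContextFillEllipseInRect(ctx, circle);

    // Second circle, offset about two thirds to the left; the Clear
    // blend mode erases rather than paints, leaving the crescent.
    CGRect punch = CGRectOffset(circle, -size.width / 3, 0);
    CGContextSetBlendMode(ctx, kCGBlendModeClear);
    CGContextFillEllipseInRect(ctx, punch);

    UIImage *mask = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return mask;
}
```

The returned image can then serve as the contents of a CALayer set as the image view's layer mask, so only the crescent-shaped region of the member photo shows through.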

Calculating the area and position of dynamically formed polygons

Hi stackoverflow community,
This is a continuation of a question I asked 6 months ago regarding calculating the area and position of dynamically formed rectangles. The solution provided for that worked a treat, but now I want to take this a step further.
Some background - I'm working on a puzzle game using Cocos2D/Box2D where the player draws lines on the screen. Depending on where the player draws, I want to then work out the area and position of polygons that appear as a result of the drawn lines.
In the following image, the black border represents the playing area, which will always be the same shape. The grey lines are player-drawn and will always be straight. The green square is an obstacle. The obstacle objects will be convex shapes. The formed polygons (3 in this case) are the blue areas and are the shapes I'm trying to get the coordinates and area for.
I think I'll be fine with working out the area of a polygon using determinants, but before that I need to work out the coordinates of the blue polygons, and I'm not sure how to do this.
I've got the lines (x,y) coordinates for both ends, the coordinates for the obstacle and the corner coordinates for the black border. Using those, is it possible to work out the coordinates of the blue polygons or am I approaching this the wrong way?
UPDATE - response to duffymo
Thanks for your answer. To explain further, each object mentioned is defined and encapsulated in a class, i.e. I've got a Line/Obstacle/PlayingArea object. My polygon object is encapsulated in a 'Rectangle' object. Each one of these objects has its own properties associated with it, such as its coordinates/area/ID/state etc.
In order to keep track of all the objects, I've got an overseeing singleton object which holds all of the Line objects / Obstacle objects etc. in their own respective arrays. This way, I can loop through, say, all Lines and know where each one has been drawn by the player.
The game is a bit like classic JezzBall, so I need to be able to create these polygon shapes when a user draws a line, because the polygon shape will be used as my way of detecting if that particular area contains a ball. If not, the area needs to be filled.
Since you already have the nodes and edges for your polygons, I'd recommend that you calculate the centroids, perimeters, and areas using contour integration. You can express the centroids and areas as contour integrals using Green's theorem.
You can use Gaussian quadrature to do piecewise integration along each edge.
It'll be fast and accurate; it'll work on polygons of arbitrary complexity.
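For polygons with straight edges, the contour integrals from Green's theorem actually have a closed form, the shoelace formula, so the quadrature can be skipped. A sketch, assuming vertices ordered counter-clockwise (a clockwise ordering just flips the sign of the area):

```objc
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

// Signed area via the shoelace formula (Green's theorem on a polygon).
static CGFloat polygonArea(const CGPoint *v, NSUInteger n) {
    CGFloat sum = 0;
    for (NSUInteger i = 0; i < n; i++) {
        NSUInteger j = (i + 1) % n;               // next vertex, wrapping around
        sum += v[i].x * v[j].y - v[j].x * v[i].y; // cross term for this edge
    }
    return sum / 2;
}

// Centroid from the same edge sums; assumes a non-degenerate polygon.
static CGPoint polygonCentroid(const CGPoint *v, NSUInteger n) {
    CGFloat a = polygonArea(v, n);
    CGFloat cx = 0, cy = 0;
    for (NSUInteger i = 0; i < n; i++) {
        NSUInteger j = (i + 1) % n;
        CGFloat cross = v[i].x * v[j].y - v[j].x * v[i].y;
        cx += (v[i].x + v[j].x) * cross;
        cy += (v[i].y + v[j].y) * cross;
    }
    return CGPointMake(cx / (6 * a), cy / (6 * a));
}
```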
UPDATE: Objective-C is an object-oriented language. I don't know it myself, but I believe it's based on ideas from C and C++. Since that's the case, I'd recommend that you start writing more in terms of objects. Arrays of coordinates? They need to be encapsulated together. I'd suggest a Point abstraction that encapsulates an id and (x, y) coordinates together. Make a Grid that has a List of Points.
It sounds like users supply the relationship between Points to form Polygons. That's not clear from your description, so it's not a surprise that you're having trouble implementing it.
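A minimal sketch of that encapsulation, assuming ARC; the GridPoint and TracedPolygon names are made up for illustration:

```objc
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

// A point that carries its identifier with its coordinates.
@interface GridPoint : NSObject
@property (nonatomic, assign) NSInteger pointId;
@property (nonatomic, assign) CGFloat x, y;
@end
@implementation GridPoint
@end

// A polygon is just an ordered list of GridPoints; it can answer
// questions about itself, such as its own area.
@interface TracedPolygon : NSObject
@property (nonatomic, strong) NSMutableArray *points; // of GridPoint
- (CGFloat)area;
@end

@implementation TracedPolygon
- (instancetype)init {
    if ((self = [super init])) _points = [NSMutableArray array];
    return self;
}
- (CGFloat)area { // shoelace formula over the ordered vertices
    CGFloat sum = 0;
    NSUInteger n = self.points.count;
    for (NSUInteger i = 0; i < n; i++) {
        GridPoint *p = self.points[i], *q = self.points[(i + 1) % n];
        sum += p.x * q.y - q.x * p.y;
    }
    return sum / 2;
}
@end
```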

Trace a CCSprite in cocos2d-iphone

I have a layer with a sprite of a simple black donut. I want the user to be able to draw on the sprite in a different color (which I've managed to do without any problem using CCRenderTexture).
My question is how I can calculate whether the image has been traced at least 95% (meaning, find out when 95% of the black pixels are now the new color). I've tried methods like taking a screenshot of the layer and counting the number of black pixels, but it hasn't worked that well (using this solution: https://stackoverflow.com/a/1262893/1577738).
It would be even better if I could just change the color of each pixel as it's touched (to avoid issues with coloring out of the lines). I could theoretically just split the donut into like 10 sprites and change that section's color if the user touches it, but that seems ridiculous if I give the user options to use a bunch of different colors.
Am I going about this the wrong way? Your suggestions are much appreciated!
Reading pixel colors will be rather inaccurate and slow. I suggest dividing the area into smaller rectangles (i.e. 8x8 or 4x4) and then flagging each as "visited" when the user draws on it. If most rectangle areas are flagged, the user has drawn on most parts of the texture.
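A minimal sketch of that bookkeeping, assuming an 8x8 grid stretched over the sprite's content size; the class name is illustrative:

```objc
#import <Foundation/Foundation.h>
#import <CoreGraphics/CoreGraphics.h>

static const int kGridSize = 8; // 8x8 = 64 cells over the texture

@interface TraceProgress : NSObject {
    BOOL _visited[kGridSize][kGridSize];
}
- (void)touchAtPoint:(CGPoint)p inBounds:(CGSize)bounds;
- (float)fractionVisited;
@end

@implementation TraceProgress
// Map a touch in the sprite's local space to a grid cell and flag it.
- (void)touchAtPoint:(CGPoint)p inBounds:(CGSize)bounds {
    int col = (int)(p.x / bounds.width  * kGridSize);
    int row = (int)(p.y / bounds.height * kGridSize);
    if (col >= 0 && col < kGridSize && row >= 0 && row < kGridSize)
        _visited[row][col] = YES;
}
// Fraction of cells the user has drawn on at least once.
- (float)fractionVisited {
    int count = 0;
    for (int r = 0; r < kGridSize; r++)
        for (int c = 0; c < kGridSize; c++)
            if (_visited[r][c]) count++;
    return (float)count / (kGridSize * kGridSize);
}
@end
```

In practice you would pre-mark which cells actually contain donut pixels and compute the fraction over only those, so empty corners of the texture don't drag the percentage down.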

How to recognize the touch of a non regular sprite image?

I have a sprite, and if it is touched, the touch should be recognized. I used the coordinates to do so: I took the coordinates (min x, min y, max x, max y) of the sprite image. But the sprite image is not a rectangular shape, so even if I touch the coordinates outside the sprite but inside the rectangular bounds, the sprite is recognized.
But for my application I need only the sprite to be recognized, so I have to take only the coordinates of the sprite, but it is not a regular shape. I am using CCSprite in my program.
So, what can I do so that only the sprite is selected? Which classes should I use for this?
Thank You.
You could try one of the following...
Create a bounding box smaller than the absolute extents of the sprite image. Yes, it will be smaller than the sprite. This will eliminate the dead-space click detection of the sprite, the trade-off being that parts of your sprite which look selectable won't be.
Use a circular bounding area to detect whether the user has clicked on your sprite. Again you will have the dead-space problem from my first suggestion, but the circle may give you a better coverage area over the sprite, giving you better results on touch detection.
This is a standard problem in physics collision detection systems, which often end up using circles or rectangles as their collision bodies. I would go with either a circle or a rectangle smaller than the size of your sprite as your bounding area. Going into finer detail than that, you could generate bounding-area polygons; this would, however, introduce a whole bunch of new issues and concerns.
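For the circle option, the hit test is one distance comparison. A minimal sketch, assuming the circle is centered on the sprite with a radius tuned a bit smaller than the image:

```objc
#import <CoreGraphics/CoreGraphics.h>

// YES if the touch falls inside the circular bounding area.
// Compares squared distances to avoid a sqrt call.
static BOOL circleContainsPoint(CGPoint center, CGFloat radius, CGPoint touch) {
    CGFloat dx = touch.x - center.x;
    CGFloat dy = touch.y - center.y;
    return (dx * dx + dy * dy) <= radius * radius;
}
```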
I am building a Cocos2D game right now, and what I do first is step through my sprites and see which ones the touch hit (they overlap in my app).
Then, for each sprite hit, I use [sprite convertTouchToNodeSpace:touch] to get an (x, y) coordinate inside the sprite, which I can use (although the Y axis is flipped) to reference the CGImage I created the sprite with.
If the pixel at the touch point is 'clear', i.e. alpha 0, then the sprite was not really touched, and I check the next sprite in the z-order to see if it has color where it was touched.
Sometimes I think I should be using a two-color mask image to go along with each sprite, not the sprite image. But I am Mr. Make It Work, Then Make It Fast.
I realise this is not super efficient, but I do not have very many sprites and I do this only for touches.
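The per-pixel alpha check can be done by drawing just the touched pixel of the CGImage into a 1x1 bitmap. A sketch of that test, with the y-flip between node space and image space assumed to be handled by the caller:

```objc
#import <UIKit/UIKit.h>

// Render the single pixel at p into a 1x1 RGBA buffer and inspect
// its alpha channel; non-zero alpha means the sprite was really hit.
static BOOL imageIsOpaqueAtPoint(CGImageRef image, CGPoint p) {
    unsigned char pixel[4] = {0, 0, 0, 0};
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixel, 1, 1, 8, 4, cs,
                           (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(cs);
    // Shift the image so the pixel of interest lands at (0, 0).
    CGContextTranslateCTM(ctx, -p.x, -p.y);
    CGContextDrawImage(ctx, CGRectMake(0, 0, CGImageGetWidth(image),
                                       CGImageGetHeight(image)), image);
    CGContextRelease(ctx);
    return pixel[3] > 0;
}
```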