Detecting what percent of a drawn image has been "erased" - objective-c

Let's say I have a solid, irregular (but enclosed) shape on screen in iOS, in a single colour. I then want to "erase" portions of that shape by dragging my finger around, as you would in a typical kids' colouring app, erasing with a fixed brush size wherever I touch the screen.
I could easily accomplish all this with something like an image mask and touch detection; however, as a requirement, I also need to determine the rough percentage of the shape that remains.
For example I need to know when 50% of the random enclosed shape has been "erased".
What's the best way of approaching this problem? Are there any existing iOS-compatible libraries that can handle it? I'm thinking I would need to keep track of a ton of polygons and calculate all their overlaps, but it seems like there must be a simpler solution.
EDIT: I have researched this problem, but tracking all the polygons manually and calculating all their positions and area overlaps seems overly complicated. I was simply wondering whether anyone else has run into a similar issue and found a better solution.

You will first need to know the fixed pixel dimensions of the image view. Then, when the new image is loaded, work out what fraction of those pixels is non-blank:
double percentageFilledIn = (double)nonBlankPixelCount / totalPixels;
Once you have that value, use it as the baseline fill percentage. Your new calculation will then look like this:
double percentageOfImageLeft = ((double)nonBlankPixelCount / totalPixels) / percentageFilledIn;
This calculation will likely be processor-intensive, so I would only run it sparingly.
Since this post is more about logic than code, I will let you work out your own logic for detecting non-blank pixels.
Here is how to find a pixel's color:
How to get Coordinates and PixelColor of TouchPoint in iOS/ObjectiveC
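To flesh that out, here is a rough sketch of one way to count non-blank pixels, assuming the shape lives in a UIImage (or a mask image) and "blank" means fully transparent; the names here are illustrative, not from the question:

    #import <UIKit/UIKit.h>

    // Draws the image into a bitmap with a known RGBA byte layout and counts
    // the pixels whose alpha is non-zero ("blank" = fully transparent).
    static NSUInteger nonBlankPixelCount(UIImage *image) {
        CGImageRef cgImage = image.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);
        uint8_t *pixels = calloc(width * height * 4, 1);
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8,
                                                 width * 4, space,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
        CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
        NSUInteger count = 0;
        for (size_t i = 0; i < width * height; i++) {
            if (pixels[i * 4 + 3] > 0) count++;   // alpha byte of pixel i
        }
        CGContextRelease(ctx);
        CGColorSpaceRelease(space);
        free(pixels);
        return count;
    }

Erasing then amounts to drawing transparent pixels into that same image (or its mask), and rather than recounting on every touchesMoved: call, you might recount every dozen touch points or on touchesEnded:.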
Good luck.

Related

GODOT: What is an efficient calculation for the AABB of a simple 3D model from a camera's view

I am attempting to come up with a quick and efficient means of translating a 3D mesh into a projected AABB. In the end, I would like to accomplish something similar to figure 1, wherein only the area of the screen covered by the cube is located inside the bounding box highlighted in red. (If it is at all possible, getting the area as small as possible, highlighted in blue, would increase efficiency down the road.)
Figure 1. https://i.imgur.com/pd0E20C.png
Currently, I have tried:
Calculating the point positions on the screen using camera.unproject_position(). This failed largely due to my inability to wrap my head around the projected pixel positions trending towards infinity once points pass behind the camera. I understand it has something to do with tan, but frankly, it is too late for my brain to function anymore. (I've sketched this corner-projection approach just after this list.)
Getting the area of collision between the view frustum and the AABB of the mesh instance. This method seems convoluted, and to get the result into a usable format I would need to project it into 2D coordinates again.
Using the MeshInstance VisualInstance to create a texture wherein a pixel is white if it contains the mesh instance, and black otherwise. Visual instances in general just baffle me, and I did not think it would be efficient to have another viewport just to output this texture.
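To make attempt 1 concrete, here is roughly what I tried, sketched in C-style pseudocode (Vec3 and the project callback are stand-ins for Godot's Vector3 and camera.unproject_position(); I assume project() also reports depth so points behind the camera can be detected):

    #include <float.h>
    #include <math.h>

    typedef struct { double x, y, z; } Vec3;   /* z = depth after projection */
    typedef struct { double minX, minY, maxX, maxY; } Rect2D;

    /* Project the 8 corners of the mesh's AABB and take the min/max. */
    Rect2D screenAABB(const Vec3 corners[8], Vec3 (*project)(Vec3),
                      double screenW, double screenH) {
        Rect2D r = { DBL_MAX, DBL_MAX, -DBL_MAX, -DBL_MAX };
        for (int i = 0; i < 8; i++) {
            Vec3 p = project(corners[i]);
            /* This is where it falls apart for me: once a corner passes
               behind the camera (p.z <= 0), the projected x/y run off
               toward infinity. Falling back to the full screen "works"
               but is nowhere near tight. */
            if (p.z <= 0.0) { Rect2D full = { 0, 0, screenW, screenH }; return full; }
            if (p.x < r.minX) r.minX = p.x;
            if (p.y < r.minY) r.minY = p.y;
            if (p.x > r.maxX) r.maxX = p.x;
            if (p.y > r.maxY) r.maxY = p.y;
        }
        /* Clamp to the viewport. */
        if (r.minX < 0) r.minX = 0;
        if (r.minY < 0) r.minY = 0;
        if (r.maxX > screenW) r.maxX = screenW;
        if (r.maxY > screenH) r.maxY = screenH;
        return r;
    }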
What I am looking for:
An output that can be passed to a shader informing where to complete certain calculations. Right now this is set up to use a bounding box, but it could easily be rewritten to also use a texture. It also could be rewritten to use polygons, but I am trying to keep calculations to a minimum in the shader.
Certain solutions I have tried before have worked, slightly, but this must be robust. The camera interfacing with the 3d object will be able to move completely around and through it, meaning at times the view will be completely surrounded by the 3d model with points both in front, and behind.
Thank you for any help you can provide.
I will try my best to update this post with information if needed.

Special Kind of ScrollView

So I have my game, made with SpriteKit and Obj-C. I want to know a couple things.
1) What is the best way to make scroll-views in SpriteKit?
2) How do I get this special kind of scroll-view to work?
The kind of scroll-view I'd like to use is one that, at first glance, seems like it could be pretty complicated. You scroll through the objects in it, and as they get close to the center of the screen, they get larger. As they're scrolled away from the center of the screen, they get smaller and smaller until they hit a lower limit and stop shrinking. A similar upper limit applies as they grow toward the center of the screen.
Also, I should probably note that I have tried a few cheap remakes of scroll views, like simply adding the objects to an SKNode and moving the SKNode's position relative to the finger's movement... but that is not what I want. So, if there is no real way to add a scroll-view to my game, this is what I'm asking: will I simply have to use some sort of formula? Make the images bigger as they get closer to a certain spot, and run that formula each time -touchesMoved is called? If so, what sort of formula would that be? Something that subtracts the node's position from the center of the screen and sizes it accordingly? If that's the case, could you please give me such a formula, in code (possibly a full function)?
If ALL else fails, and there is no good way to do this, what would some other way be?
It is possible to use UIScrollViews with your SpriteKit scenes, but there's a bit of a workaround involved. My recommendation is to take a look at this github project, which is what I based my UIScrollView on in my own projects. From the looks of it, most of the stuff you'd want has now been converted to Swift, rather than the Objective-C it used when I first found the project, so I don't know how that will fare with you.
The project linked above would result in your SKScene being larger than the screen (I assume that is why it needs to be scrolled), so determining what is and is not close to the center of the scene won't be difficult. One thing you can do is use the update loop in SpriteKit to constantly update the size of sprites (perhaps just those on-screen) based on their distance from a fixed, known center point. For instance, if you have a screen of width and height 10, the midpoint would be (x, y) = (5, 5). You could then say that size = 1.0 - (2 * distance_from_midpoint). At the midpoint the scale is 1.0 (that is, 1.0 - (2 * 0)), and the farther away you get, the smaller the scale. This is a crude example that does not account for a fixed maximum or minimum size, so you will need to work with it.
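As a very rough sketch of that idea (scrollNode is an assumed SKNode property holding the scrolled items, and this is the crude, unclamped version for a vertically scrolling list):

    // Inside your SKScene subclass.
    - (void)update:(NSTimeInterval)currentTime {
        CGPoint mid = CGPointMake(self.size.width / 2.0, self.size.height / 2.0);
        for (SKNode *node in self.scrollNode.children) {
            // Convert into scene space, since the container itself moves.
            CGPoint pos = [self convertPoint:node.position fromNode:self.scrollNode];
            // Normalize so that 0 = midpoint and ~0.5 = screen edge.
            CGFloat distance = fabs(pos.y - mid.y) / self.size.height;
            [node setScale:1.0 - (2.0 * distance)];  // size = 1.0 - (2 * distance)
        }
    }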
Good luck with your project.
Edit:
Alright, I'll go a bit out of my way here and help you out with the equation, although mine still isn't perfect.
Now, this doesn't really give you a minimum scale, but it does give you a maximum one (basically, at the midpoint). The equation does have some flaws, though. For one, you might use it to compute separate x and y scales for your objects based on their distance from the midpoint. However, you don't really want two different components to your scale. What if your sprite is right next to the x midpoint and the x scale comes out at 0.95? That's almost full-sized. But if it is far from the midpoint on the y axis and the y scale comes out at, say, 0.20, then you have a problem.
To solve that, I just take the magnitude (hypotenuse) of the vector between the midpoint and the current sprite's coordinate. That hypotenuse gives me a single number representing the true distance, which eliminates the problem of clashing scale values.
I've made an example of how to calculate this in the Go Playground, so you can run the code and see what scales you get for whatever coordinates you plug in. The equation used there is slightly modified: it's basically the same as above, but without the "maxscale -" part at the front.
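And in case the playground link ever dies, here is the same idea translated into Objective-C terms (a sketch; the parameter names and the clamping are mine):

    #include <math.h>

    // Scale from the true (hypotenuse) distance to the midpoint, clamped
    // between minScale and maxScale so the x and y components can't clash.
    static CGFloat scaleForPosition(CGPoint p, CGPoint mid, CGSize screen,
                                    CGFloat minScale, CGFloat maxScale) {
        CGFloat dx = (p.x - mid.x) / screen.width;    // normalized per axis
        CGFloat dy = (p.y - mid.y) / screen.height;
        CGFloat distance = hypot(dx, dy);             // magnitude of the vector
        CGFloat scale = maxScale - (2.0 * distance);  // linear falloff
        return fmax(minScale, fmin(scale, maxScale));
    }

You would call this in place of the one-liner in the earlier update: sketch, e.g. [node setScale:scaleForPosition(pos, mid, self.size, 0.3, 1.0)].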
Hope this helps out!
See this code on play.golang.org.

Trace a CCSprite in cocos2d-iphone

I have a layer with a sprite of a simple black donut. I want the user to be able to draw on the sprite in a different color (which I've managed to do without any problem using CCRenderTexture).
My question is how I can calculate whether the image has been traced at least 95% (meaning, find out when 95% of the black pixels are now the new color). I've tried methods like taking a screenshot of the layer and counting the number of black pixels, but it hasn't worked that well (using this solution: https://stackoverflow.com/a/1262893/1577738).
It would be even better if I could just change the color of each pixel as it's touched (to avoid issues with coloring out of the lines). I could theoretically just split the donut into like 10 sprites and change that section's color if the user touches it, but that seems ridiculous if I give the user options to use a bunch of different colors.
Am I going about this the wrong way? Your suggestions are much appreciated!
Reading pixel colors will be rather inaccurate and slow. I suggest dividing the area into a grid of smaller rectangles (e.g., 8x8 or 4x4) and then flagging each one as "visited" when the user draws on it. If most of the rectangles are flagged, the user has drawn on most parts of the texture.
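A minimal sketch of that idea, assuming touch points arrive in the sprite's local space with the origin at its lower-left corner (the names and the 8x8 size are illustrative):

    enum { kGridSize = 8 };
    static BOOL visited[kGridSize][kGridSize];

    // Call from the touch handler as the user draws on the sprite.
    static void markVisited(CGPoint p, CGSize spriteSize) {
        int col = (int)(p.x / spriteSize.width  * kGridSize);
        int row = (int)(p.y / spriteSize.height * kGridSize);
        if (col < 0 || col >= kGridSize || row < 0 || row >= kGridSize) return;
        visited[row][col] = YES;
    }

    // Fraction of grid cells touched so far.
    static double visitedFraction(void) {
        int count = 0;
        for (int r = 0; r < kGridSize; r++)
            for (int c = 0; c < kGridSize; c++)
                if (visited[r][c]) count++;
        return (double)count / (double)(kGridSize * kGridSize);
    }

For a donut you would also want to pre-exclude cells that contain no black pixels to begin with (the hole and the corners), so the 95% check only counts cells that are actually part of the shape.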

On-the-fly Terrain Generation Based on An Existing Terrain

This question is very similar to that posed here.
My problem is that I have a map, something like this:
This map is made using 2D Perlin noise, then running through the resulting heightmap and assigning a type and color value to each element of the terrain based on the height or slope of the corresponding element, so pretty standard. The map array is two-dimensional and matches the exact dimensions of the screen (pixel-per-pixel), so at 1200 by 800, generation takes about 2 seconds on my rig.
Now zooming in on the highlighted rectangle:
Obviously with increased size comes lost detail. And herein lies the problem. I want to create additional detail on the fly, and then write it to disk as the player moves around (the player would simply be a dot restricted to movement along the grid). I see two approaches for doing this, and the first one that came to mind I quickly implemented:
This is a zoomed-in view of a new, biased local terrain created from a sampled element of the old terrain, which is highlighted by the yellow grid space (to the left of center) in the previous image. However, this system would require a great deal of modification because, for example, if you move one unit up and to the left of the yellow grid space, onto the beach tile, the terrain changes completely:
So for that to work properly you'd need an excessive amount of, I guess the word would be interpolation, to create a smooth transition as the player moves the 40 or so grid-spaces in the local world required to reach the next tile over in the overworld. That seems complicated and very inelegant.
The second approach would be to break up the grid of the original map into smaller bits, maybe dividing each square by 4? I haven't implemented this and I'm not sure how I would in a way that would actually increase detail, but I think that would probably end up being the best solution.
Any ideas on how I could approach this? Keep in mind it has to be local and on-the-fly. Just increasing the resolution of the map is something I want to avoid at all costs.
Rewrite your Perlin noise to be a function of position. Then you can increase the octaves (and thus the detail level) and resample the area at a higher resolution.
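To sketch what "a function of position" means in practice (perlin2 here is a stand-in for whatever single-octave 2D noise function you already have, assumed to return values in [-1, 1]):

    double perlin2(double x, double y);  // your existing single-octave noise

    // Fractal (octave-summed) noise as a pure function of world position.
    // Because the result depends only on (x, y), any sub-rectangle can be
    // resampled at any resolution: the coarse features stay put, and extra
    // octaves add finer detail on top.
    double fractalNoise(double x, double y, int octaves) {
        double sum = 0.0, amplitude = 1.0, frequency = 1.0, max = 0.0;
        for (int i = 0; i < octaves; i++) {
            sum += amplitude * perlin2(x * frequency, y * frequency);
            max += amplitude;
            amplitude *= 0.5;   // each octave contributes half as much...
            frequency *= 2.0;   // ...at twice the spatial frequency
        }
        return sum / max;       // normalize back to [-1, 1]
    }

The overview map might sample this once per pixel with, say, 4 octaves; when the player zooms into a tile, you sample the same world coordinates at 40x the density with more octaves, and the new detail automatically lines up across tile boundaries.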

Get size of an expanding circle in a CABasicAnimation at any point in time

I would like to know how I can get the diameter (or radius) of an expanding circle animation at any point in time during the animation. I will end up stopping the animation right after I get the size as well, but I figure I can't stop and remove it from the layer until I get the size of the circle.
For an example of how the expanding circle animation is implemented, it is a variation on the implementation shown in the addGrowingCircleAtPoint:(CGPoint)point method in the answer in the iPhone Quartz2D render expanding circle question.
I have tried checking various values on the layers, the animation, etc., but can't seem to find anything. I figure worst case I can make a best guess by taking the current time within the animation and using it to calculate where the circle "should" be based on its to and from size states. That seems like overkill for what I would assume is a value incrementing someplace I can just read easily.
Update:
I have tried several properties on the presentation layer, including the transform, which never seems to change; the values are always the same regardless of what size the circle is when I check.
Okay, here is how you get the current state of an animation while it is animating.
While Rob was close, he left out two key pieces of information.
First, from layer.presentationLayer.sublayers you have to get the layer you are animating on, which for me is the only sublayer available.
Second, from this sublayer you cannot just access the transform directly; you have to use valueForKeyPath: to get transform.scale.x. I used x because it's a circle, so x and y are the same.
I then use this to calculate the size of the circle at that moment, based on the values used to create the arc.
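In code, that looks roughly like this (circleLayer and kBaseRadius are my placeholder names for the layer the circle was added to and the radius used when building the arc):

    // Read the in-flight scale from the presentation layer's only sublayer.
    CALayer *presentation = (CALayer *)self.circleLayer.presentationLayer;
    CALayer *animating = [presentation.sublayers firstObject];
    CGFloat scaleX = [[animating valueForKeyPath:@"transform.scale.x"] doubleValue];
    // x and y are identical for a circle, so one component is enough.
    CGFloat currentRadius = kBaseRadius * scaleX;
    CGFloat currentDiameter = 2.0 * currentRadius;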
I assume what you're trying to get to is the current CATransform3D, and that from that, you can get to your circle size.
What you want is the layer.presentationLayer.transform. See the CALayer docs for details on the presentationLayer. Also see the Core Animation Rendering Architecture.