Render horizon silhouette using Metal in SceneKit

I'm working on a simple side project, and a small part of it is rendering terrain. I render the terrain from height-map data. Here is my problem:
I would like to render just a silhouette/outline of the terrain/horizon. Here is a screenshot from my app rendering the height map:
And here is a screenshot from peakfinder.org, similar to the desired result:
I would like to draw just the lines representing the silhouette of the terrain, with the rest transparent or a solid colour. How can I solve this? By calculating a local maximum somehow?
I created a sample project here, in case you want to help.
Thanks!
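
Not part of the original post, but one way to read the "calculate a local maximum" idea is: for each azimuth (roughly, each screen column), sample the height map along the view ray and keep the greatest elevation angle seen; those per-azimuth maxima form the horizon polyline, which can then be drawn as a line strip over a transparent or solid background. A rough Swift sketch under that assumption (the function name, `heightAt` and the sampling parameters are all illustrative placeholders):

import Foundation
import simd

// Rough sketch, not from the original post: compute the per-azimuth maximum
// elevation angle seen from the eye point by marching across the height field.
func horizonSilhouette(eye: SIMD3<Float>,
                       azimuthSamples: Int,
                       maxDistance: Float,
                       step: Float,
                       heightAt: (Float, Float) -> Float) -> [Float] {
    var maxElevation = [Float](repeating: -.greatestFiniteMagnitude, count: azimuthSamples)
    for i in 0..<azimuthSamples {
        let azimuth = Float(i) / Float(azimuthSamples) * 2 * .pi
        let dir = SIMD2<Float>(cos(azimuth), sin(azimuth))      // horizontal ray direction
        var distance = step
        while distance < maxDistance {
            let x = eye.x + dir.x * distance
            let z = eye.z + dir.y * distance
            // Elevation angle of this terrain sample above the horizontal plane through the eye.
            let elevation = atan2(heightAt(x, z) - eye.y, distance)
            maxElevation[i] = max(maxElevation[i], elevation)
            distance += step
        }
    }
    return maxElevation  // connect these per-azimuth maxima to get the horizon outline
}

How the resulting polyline is then turned into geometry (or the same test done per fragment in a Metal shader) is a separate step and not covered by this sketch.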

Related

How to detect an image between shapes from camera

I've been searching around the web for how to do this, and I know it needs to be done with OpenCV. The problem is that all the tutorials and examples I find cover detecting separate shapes or template matching.
What I need is a way to detect the content between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image marking what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my content? If so, what if the image is a bit rotated/distorted on the camera?
I'm using Xamarin.iOS for this, but from what I've seen I believe I need to go native, and any Objective-C example is welcome too.
EDIT
Imagine that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as the result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, hence perform a similarity transform. To define such a transform, three points are too many; two suffice. The construction of the affine matrix is a little tricky; a sketch follows.
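
Not from the original answer, but here is one way to construct that matrix from two point correspondences, treating the points as complex numbers. A rough Swift sketch; the names are illustrative, and the resulting 2x3 matrix [a -b tx; b a ty] is what you would hand to a warp routine such as OpenCV's warpAffine:

import CoreGraphics

// Similarity transform (rotation + uniform scale + translation) mapping
// src1 -> dst1 and src2 -> dst2. Assumes src1 != src2.
struct Affine2x3 { var a, b, tx, ty: CGFloat }  // maps (x, y) to (a*x - b*y + tx, b*x + a*y + ty)

func similarityTransform(src1: CGPoint, src2: CGPoint,
                         dst1: CGPoint, dst2: CGPoint) -> Affine2x3 {
    let dpx = src2.x - src1.x, dpy = src2.y - src1.y    // source segment
    let dqx = dst2.x - dst1.x, dqy = dst2.y - dst1.y    // destination segment
    let lenSq = dpx * dpx + dpy * dpy
    // Treat the points as complex numbers: (a + ib) = (dst2 - dst1) / (src2 - src1),
    // which encodes the rotation and the uniform scale in one step.
    let a = (dqx * dpx + dqy * dpy) / lenSq
    let b = (dqy * dpx - dqx * dpy) / lenSq
    // Choose the translation so that src1 lands exactly on dst1.
    let tx = dst1.x - (a * src1.x - b * src1.y)
    let ty = dst1.y - (b * src1.x + a * src1.y)
    return Affine2x3(a: a, b: b, tx: tx, ty: ty)
}

With more than two correspondences, OpenCV's estimateAffinePartial2D computes the same kind of 4-degree-of-freedom transform in a least-squares sense, if you prefer to let the library do it.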

Simulating a map with DataVisualization.Chart

VB2010: I am using the DataVisualization.Charting.Chart control as a very crude map of several geographic points. I have that working and it looks good, but I am trying to add an image to the chart so that it simulates the land masses. The problem is that I zoom around the chart programmatically, so I am not sure how I can anchor the image to certain x,y coordinates. I don't want to go for a full GIS component, as those controls are just too much for this fairly simple app.
Update: Well, I sort of resolved it. I generated a full world map based on the WGS84 (some would call it geographic) projection, which splits the world into perfect 20° x 20° squares. I added this graphic to the BackImage property of my main Chart, add my points, and force the extents of the graph to x: -180 to 180 and y: -90 to 90. It's crude, but it's actually spot on for my purposes. The only thing I can't do is zoom in to my points of interest, because the background image is static. I wish there were a way to zoom the background image so that parts of it stay anchored to coordinates on the graph.

Create an irregularly shaped frame

I've created a canvas within which I display an image that is clipped when it goes over the edges. I can do this fine with a square-shaped frame; however, the frame I want to use is the one below. Is there any way I can clip the image inside the frame without having to add a non-transparent square border around the image, i.e. just using the black line that I've already drawn? (on iPad)
You'll need to use Core Graphics and Quartz to handle this sort of clipping/graphics manipulation.
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html#//apple_ref/doc/uid/TP30001066
If you're using UIBezierPath, you may be able to achieve the clipping you're after using the following process (a short sketch follows the steps):
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_paths/dq_paths.html#//apple_ref/doc/uid/TP30001066-CH211-TPXREF101
Convert your UIBezierPath to a CGPath
Get your image into a CGContext
Add your CGPath to the context via CGContextAddPath
Clip your context using CGContextClip
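
Not from the original answer, but a compact Swift equivalent of those steps, using the modern method names rather than the C functions (the function name is a placeholder):

import UIKit

// Clips `image` to the region enclosed by `path`; pixels outside come out transparent.
func clip(_ image: UIImage, to path: UIBezierPath) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { ctx in
        ctx.cgContext.addPath(path.cgPath)   // UIBezierPath -> CGPath, added to the context
        ctx.cgContext.clip()                 // the modern equivalent of CGContextClip
        image.draw(at: .zero)                // only pixels inside the path are drawn
    }
}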
Alternatively, if you don't want to be messing with paths (and depending on whether this technique is suitable for your situation; your description of the issue makes it hard to tell), it might be worth using image masking to achieve the effect you're after. See the first link and look under "Bitmap Images and Image Masks".

Refraction for object { mesh {...}} surface shows artifacts

We want to render a parametrized surface in front of a grid plane and observe how the grid is transformed by refraction at the surface. In this simple example the surface is a 2D normal distribution viewed directly from above, with the grid plane placed below:
The surface is given as many triangle directives which we put together in a mesh and use with:
object {
  fovea                                           // the previously declared triangle mesh
  scale <1,1,3>                                   // only here to amplify the artifacts
  texture { pigment { color rgbt <0,0,1,0.5> } }  // semi-transparent blue
  interior { ior 1.4 }                            // index of refraction
}
The scale here is not necessary and is used only to amplify the artifacts. What you see in the image below is that the refraction does not happen smoothly but creates sharp artifacts in the underlying grid pattern.
This image was created with POV-Ray 3.6.1 under Mac OS X 10.5.6 with the settings +Q9, +A and -J. Can anyone offer a hint? Thanks.
This was a stupid mistake. Since the surface looked really smooth in Mathematica, I assumed that it consisted of a large number of triangle faces. This assumption was wrong. The rendering engine Mathematica uses seems to interpolate the normals given for each vertex, and therefore the surface only looks as if it has a high resolution.
A check of the underlying polygons reveals the truth:
Therefore, what looks like refraction artifacts in the rendered image above is actually correct behavior, because the face-normals of neighboring triangles really change that much.
Increasing the resolution of the surface grid solves the problem.

Flipboard-style page turn animation

I'm trying to write a fairly simple animation using Core Animation to simulate a book cover being opened. The book cover is an image, and I'm applying the animations to its layer object.
As well as animating a rotation around the y axis (with the anchorPoint property set to the left edge of the screen), I need to scale up the right-hand edge of the image so it appears to "get closer" to the user and creates the illusion of depth. Flipboard, for example, does this scaling really well.
I can't find any way of scaling an image like this, so that only one edge is scaled and the image ends up non-rectangular.
All help appreciated with this one!
Core Animation, by default, "flattens" its 3D hierarchy into a 2D world at z = 0. This causes perspective and the like not to work properly. You need to host your layer in a CATransformLayer, which renders its sublayers as a true 3D layer hierarchy (a sketch of this setup follows).
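
Not from the original answer, but a minimal Swift sketch of that setup, assuming UIKit; the function name, the 500-point eye distance and the cover image are placeholders:

import UIKit

// Host the book cover in a CATransformLayer so the 3D rotation is not flattened to z = 0.
// Perspective comes from the m34 component of the transform, which makes the right-hand
// edge of the cover appear to come toward the viewer as it rotates.
func makeBookCoverLayer(coverImage: UIImage, size: CGSize) -> CATransformLayer {
    let host = CATransformLayer()
    host.frame = CGRect(origin: .zero, size: size)

    let cover = CALayer()
    cover.contents = coverImage.cgImage
    cover.bounds = CGRect(origin: .zero, size: size)
    cover.anchorPoint = CGPoint(x: 0, y: 0.5)       // hinge on the left edge
    cover.position = CGPoint(x: 0, y: size.height / 2)
    host.addSublayer(cover)

    var perspective = CATransform3DIdentity
    perspective.m34 = -1.0 / 500.0                  // small negative m34 adds perspective
    let opened = CATransform3DRotate(perspective, -.pi / 2, 0, 1, 0)

    let open = CABasicAnimation(keyPath: "transform")
    open.fromValue = NSValue(caTransform3D: perspective)
    open.toValue = NSValue(caTransform3D: opened)
    open.duration = 1.0
    cover.transform = opened                        // leave the cover open after animating
    cover.add(open, forKey: "open")
    return host
}

// Usage (illustrative): someView.layer.addSublayer(makeBookCoverLayer(coverImage: image, size: someView.bounds.size))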