How do artists create non-linear, abstract, interpolated gradient images? - objective-c

I've seen many versions of multicolored, gradient-like images that are both non-linear and heavily stylized, usually in the form of layered, blob-like shapes.
My guess as to how they achieve this effect is:
drawing intersecting blob-like shapes
masking gradients onto the shapes
interpolating the colors across the image.
However, as you'll notice from the distinct lines in the image, the interpolated effect only appears in certain regions. This is the effect I would like to achieve in Metal.

One approach is to draw your solid colors and then apply a zoom or motion blur Core Image filter to achieve the effect of a gradient, leaving some detail depending on where you place the center (for zoom blur) or the angle you set (for motion blur).
Here's an example of a before and a couple of afters. The original image in this case is drawn with 2D function plotting, but you could just as easily use a static input image or video frame, draw an image with filled bezier paths, etc.
The second image uses CIZoomBlur with the input center point just off the image center at (240, 220) and the amount set to 134.9.
The CIMotionBlur filter also produces some interesting gradient effects. Here's the same input image with an inputRadius of 57.6 and an inputAngle of -0.415.
I think this could achieve what you're after, provided you set up the original solid-color image as you like and can work out the optimal settings for the filters (angle, center point, etc.).
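For reference, here's a minimal Objective-C sketch of applying those two filters with Core Image. The two functions are alternatives, not a chain; the filter names and input keys are standard Core Image, and the numeric values are simply the ones quoted above:

    #import <CoreImage/CoreImage.h>

    // Zoom-blur variant: center just off image center, amount 134.9.
    CIImage *zoomVariant(CIImage *input) {
        CIFilter *f = [CIFilter filterWithName:@"CIZoomBlur"];
        [f setValue:input forKey:kCIInputImageKey];
        [f setValue:[CIVector vectorWithX:240 Y:220] forKey:kCIInputCenterKey];
        [f setValue:@134.9 forKey:@"inputAmount"];
        return f.outputImage;
    }

    // Motion-blur variant: radius 57.6, angle -0.415 radians.
    CIImage *motionVariant(CIImage *input) {
        CIFilter *f = [CIFilter filterWithName:@"CIMotionBlur"];
        [f setValue:input forKey:kCIInputImageKey];
        [f setValue:@57.6 forKey:kCIInputRadiusKey];
        [f setValue:@(-0.415) forKey:kCIInputAngleKey];
        return f.outputImage;
    }

Render the returned CIImage with a CIContext (or a Metal-backed CIContext, since you're targeting Metal) to get the final bitmap.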

Related

How to detect an image between shapes from camera

I've been searching around the web for how to do this, and I know it needs to be done with OpenCV. The problem is that all the tutorials and examples I find are for detecting separate shapes or for template matching.
What I need is a way to detect the contents between 3 circles (which can be a photo or something else). From what I've read, it's not too difficult to find the circles with the camera using contours, but how do I extract what is between them? The circles work like a pattern on the image to grab what is "inside the pattern".
Do I need to use the contours of each circle and measure the distance between them to grab my contents? If so, what if the image is a bit rotated or distorted on the camera?
I'm using Xamarin.iOS for this, but from what I've already seen I believe I need to go native for it, so any Objective-C example is welcome too.
EDIT
Imagining that the image captured by the camera is this:
What I want is to match the 3 circles and get the following part of the image as the result:
Since the images come from the camera, they can be rotated or scaled up/down.
The warpAffine function will let you map the desired area of the source image to a destination image, performing cropping, rotation and scaling in a single go.
Talking about rotation and scaling seems to indicate that you want to extract a rectangle of a given aspect ratio, i.e. perform a similarity transform. To define such a transform, three points are too many; two suffice. The construction of the affine matrix is a little tricky; see the sketch below.
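A sketch of that construction, as Objective-C++ in a .mm file using OpenCV's C++ API. The function name and the choice of which two circle centers to use are my own; the third circle could serve as a consistency check:

    #import <opencv2/opencv.hpp>
    #include <cmath>

    // Build a similarity transform (uniform scale + rotation + translation)
    // that maps two source points onto two destination points, then warp.
    // src0/src1: two detected circle centers in the camera frame;
    // dst0/dst1: where those centers should land in the dstSize output.
    static cv::Mat ExtractBetweenCircles(const cv::Mat &camera,
                                         cv::Point2f src0, cv::Point2f src1,
                                         cv::Point2f dst0, cv::Point2f dst1,
                                         cv::Size dstSize)
    {
        cv::Point2f s = src1 - src0, d = dst1 - dst0;
        double scale = std::hypot(d.x, d.y) / std::hypot(s.x, s.y);
        double angle = std::atan2(d.y, d.x) - std::atan2(s.y, s.x);
        double a = scale * std::cos(angle), b = scale * std::sin(angle);

        // [ a -b  tx ]  with (tx, ty) chosen so that src0 maps exactly to dst0
        // [ b  a  ty ]
        cv::Mat M = (cv::Mat_<double>(2, 3) <<
                     a, -b, dst0.x - (a * src0.x - b * src0.y),
                     b,  a, dst0.y - (b * src0.x + a * src0.y));

        cv::Mat out;
        cv::warpAffine(camera, out, M, dstSize);
        return out;   // cropped, rotated and scaled in a single go
    }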

UIImageView half moon slice

I'm trying to create an app with groups you can switch between. My idea was to pick the first 3 photos of the members in the group and lay the images over each other. Adding three images over each other is not really difficult; the difficult part for me is making the other two images show up like a "half moon" beneath the top image. See the attached image for an example.
It isn't really a half moon. It's more like a crescent moon or lunate shape.
The principle is not a difficult one. Practice as follows:
Start with an image, roughly a square.
Make an image context the same size as the image.
Fill a circle the size of the image, roughly offset about a third of its width to the left.
Fill another circle the size of the image, roughly offset about two thirds of its width to the left, using Clear blend mode.
Extract the resulting image from the image context.
You now have the desired lunate shape.
Now use that lunate shape as a mask or clipping area for the original image, as in the sketch below.
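A sketch of those steps in Objective-C with Core Graphics (the offsets follow the fractions above; `photo` stands for whichever member image you are clipping):

    #import <UIKit/UIKit.h>

    // Build the lunate mask: one filled circle, with a second circle
    // punched out of it using the Clear blend mode.
    static UIImage *LunateImage(CGSize sz)
    {
        UIGraphicsBeginImageContextWithOptions(sz, NO, 0);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [[UIColor blackColor] setFill];
        // Circle the size of the image, offset a third of its width to the left
        CGContextFillEllipseInRect(ctx, CGRectMake(-sz.width / 3.0, 0, sz.width, sz.height));
        // Second circle, two thirds to the left, cleared out of the first
        CGContextSetBlendMode(ctx, kCGBlendModeClear);
        CGContextFillEllipseInRect(ctx, CGRectMake(-2.0 * sz.width / 3.0, 0, sz.width, sz.height));
        UIImage *lunate = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return lunate;
    }

    // Usage: clip a photo with the lunate shape via a layer mask.
    UIImageView *photoView = [[UIImageView alloc] initWithImage:photo];
    CALayer *mask = [CALayer layer];
    mask.frame = photoView.bounds;
    mask.contents = (__bridge id)LunateImage(photoView.bounds.size).CGImage;
    photoView.layer.mask = mask;

A layer mask uses the alpha channel, so the opaque lunate region shows the photo and the cleared region hides it.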

Detection of chessboard-like pattern in OpenCV

I have a problem with the detection of a chessboard-like pattern. The image is very noisy because it was registered with a laser scanner.
The only thing I have managed to achieve is detection of the big rectangle:
Now I have no idea how to detect those small squares. I tried all sorts of different algorithms, but the contrast in the squares seems too low. Does anybody have any ideas?
Other pattern images: https://dl.dropboxusercontent.com/u/3681534/kalibrator/6.png https://dl.dropboxusercontent.com/u/3681534/kalibrator/8.png
A way to make progress would be to determine the gray-value level at the inner border of the rectangle, then (a rough sketch in code follows this list):
With that knowledge, adjust the average brightness inside the rectangle to one value (the small squares will still be a bit lighter than the rest)
Increase the contrast a lot
Find the lines that run along the edges of the squares
Either access the line crossings directly or repaint the squares pure white and black
Calculate your calibration data
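A rough sketch of the middle steps with OpenCV, written as Objective-C++ in a .mm file. CLAHE stands in here for the brightness/contrast adjustment, and all thresholds are placeholders you would have to tune on the real scans:

    #import <opencv2/opencv.hpp>

    // gray: the full scan as an 8-bit grayscale image;
    // rectROI: the big rectangle already detected.
    static std::vector<cv::Vec2f> FindSquareEdges(const cv::Mat &gray, cv::Rect rectROI)
    {
        cv::Mat roi = gray(rectROI).clone();

        // Flatten the brightness inside the rectangle and boost contrast:
        // CLAHE equalizes the histogram locally, tile by tile.
        cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(4.0, cv::Size(8, 8));
        clahe->apply(roi, roi);

        // Find the lines that run along the edges of the squares.
        cv::Mat edges;
        cv::Canny(roi, edges, 50, 150);
        std::vector<cv::Vec2f> lines;
        cv::HoughLines(edges, lines, 1, CV_PI / 180, 80);
        return lines;   // intersect these to get the calibration corners
    }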

How to use a toon shader to convert 3D models to patent drawings

The USPTO requires patent drawings to be black-and-white line images.
I'm using Blender to make 3D models. At first I got this:
The problem is that it's grayscale with no black lines. There's an answer suggesting a toon shader: Convert 3D models to patent diagrams
I checked "Edge" and set "Threshold" to the maximum of 255 in the "Render" tab, and I got:
It's getting better, but more edges need to be drawn. I searched and found a tutorial, http://www.minimaexpresion.es/?p=1070&lang=en , and then I got:
It's too complicated for me and I don't know how to use render layers. So I tried another tutorial http://download.blender.org/documentation/oldsite/oldsite.blender3d.org/80_Blender%20tutorial%20Toon%20Shading.html , which says I should assign different materials with different colors to different objects, so I tried and got this:
That leaves only one option to try: render layers. Is there any simple method to make this work? I only want this, converted to indexed colors with a black-and-white palette:
And the "Freestyle" only has one option about line thickness:
You were close in the second image. Instead of using the Edge postprocessor, look in the Render panel and check the box labelled "Freestyle".
Then in the Render Layers panel there will be a list of configurable options for Freestyle, including how thick you want the lines and the minimum angle required to render an edge.
You get the best results if you use mostly shadeless materials with simple textures (just solid colour).

World space to screen space (perspective projection)

I'm using a 3D engine and need to translate between 3D world space and 2D screen space using perspective projection, so I can place 2D text labels on items in 3D space.
I've seen a few posts with various answers to this problem, but they seem to use components I don't have.
I have a Camera object and can only set its current position and look-at position; it cannot roll. The camera is moving along a path, and a target object may appear in its view and then disappear.
I have only the following values:
lookat position
position
vertical FOV
Z far
Z near
and obviously the position of the target object.
Can anyone please give me an algorithm that will do this using just these components?
Many thanks.
All graphics engines use matrices to transform between different coordinate systems; indeed, OpenGL and DirectX use them, because they are the standard way.
Cameras usually construct the matrices from the parameters you have:
view matrix (transforms the world so that you are looking at it from the camera position); it uses the look-at position and the camera position (plus the up vector, which is usually (0, 1, 0))
projection matrix (transforms from 3D coordinates to 2D coordinates); it uses the FOV, near, far and aspect ratio.
You can find information on how to construct these matrices by searching the internet for the OpenGL functions that create them:
gluLookAt: creates the view matrix
gluPerspective: creates the projection matrix
But I can't imagine an engine that doesn't let you get these matrices; I can assure you they are in there somewhere, because the engine is using them.
Once you have those matrices, you multiply them to get the view-projection matrix. This matrix transforms from world coordinates to screen coordinates, so just multiply it with the position you want to project (in vector-4 format, with the 4th component set to 1.0).
But wait: the result will be in homogeneous coordinates, so you need to divide the X, Y and Z of the resulting vector by W; then you have the position in normalized device coordinates (0 means the center, 1 means right, -1 means left, etc.).
From here it is easy to transform to pixels by multiplying by the width and height.
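Here is a minimal sketch of that whole pipeline in Objective-C using GLKit's matrix helpers (the function name and the top-left pixel origin are my own choices; the inputs are exactly the values you listed):

    #import <UIKit/UIKit.h>
    #import <GLKit/GLKit.h>

    // Projects a world-space point to screen pixels.
    // Returns NO if the point is behind the camera.
    static BOOL WorldToScreen(GLKVector3 target, GLKVector3 camPos, GLKVector3 lookAt,
                              float fovYRadians, float zNear, float zFar,
                              float screenW, float screenH, CGPoint *outPoint)
    {
        // View matrix from position + lookat (up vector fixed at (0, 1, 0)).
        GLKMatrix4 view = GLKMatrix4MakeLookAt(camPos.x, camPos.y, camPos.z,
                                               lookAt.x, lookAt.y, lookAt.z,
                                               0, 1, 0);
        // Projection matrix from vertical FOV, aspect, near and far.
        GLKMatrix4 proj = GLKMatrix4MakePerspective(fovYRadians, screenW / screenH,
                                                    zNear, zFar);
        // View-projection matrix, applied to the point with w = 1.0.
        GLKMatrix4 viewProj = GLKMatrix4Multiply(proj, view);
        GLKVector4 clip = GLKMatrix4MultiplyVector4(viewProj,
                            GLKVector4Make(target.x, target.y, target.z, 1.0f));
        if (clip.w <= 0.0f) return NO;   // behind the camera
        // Homogeneous divide -> normalized device coordinates in [-1, 1].
        float ndcX = clip.x / clip.w;
        float ndcY = clip.y / clip.w;
        // Viewport transform -> pixels (origin top-left, so y is flipped).
        outPoint->x = (ndcX * 0.5f + 0.5f) * screenW;
        outPoint->y = (1.0f - (ndcY * 0.5f + 0.5f)) * screenH;
        return YES;
    }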
I have some slides explaining all this here: https://docs.google.com/presentation/d/13crrSCPonJcxAjGaS5HJOat3MpE0lmEtqxeVr4tVLDs/present?slide=id.i0
Good luck :)
P.S: when you work with 3D it is really important to understand the three matrices (model, view and projection), otherwise you will stumble every time.
"so I can place 2D text labels on items in 3D space"
Have you looked up "billboard" techniques? Sometimes just knowing the right term to search under is all you need. This refers to polygons (typically rectangles) that always face the camera, regardless of camera position or orientation.