iOS Image Manipulation (Distortion) - objective-c

I initially approached this issue with CoreImage in mind (because I also need to do facial recognition), but realized that, unfortunately, the CI Distortion filters are not yet included on the iPhone.
I attempted to dive into GLImageProcessing, CImg, and ImageMagick, though I've had a lot of trouble finding a starting point for learning any of these.
Given the number of apps out there that do image distortion, I know this can't be incredibly difficult.
I don't know C or C++, and I don't have time to learn those languages unless absolutely necessary; it would become necessary if one of those libraries turns out to be the definitive way to handle this task.
Does anyone have experience with any of these libraries?
Are there any books out there that cover this for iOS 5 specifically?
Resources I've found:
GLImageProcessing sample project
https://developer.apple.com/library/ios/#samplecode/GLImageProcessing/Introduction/Intro.html
ImageMagick & MagickWand
http://www.imagemagick.org/script/magick-wand.php
CImg
http://cimg.sourceforge.net/
Simple iPhone image processing
http://code.google.com/p/simple-iphone-image-processing/

As you say, the current capabilities of Core Image are a little limited on iOS. In particular, the lack of custom kernels like you find on the desktop is disappointing. The other alternatives you list (with the exception of GLImageProcessing, which wouldn't be able to do this kind of filtering) are all CPU-bound libraries and would be much too slow for doing live filtering on a mobile device.
However, I can point you to an open source framework called GPUImage that I just rolled out because I couldn't find something that let you pull off custom effects. As its name indicates, GPUImage does GPU-accelerated processing of still images and video using OpenGL ES 2.0 shaders. You can write your own custom effects using these, so you should be able to do just about anything you can think of. The framework itself is Objective-C, and has a fairly simple interface.
As an example of a distortion filter, the following shader (based on the code in Danny Pflughoeft's answer) does a sort of a fisheye effect:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

const mediump float bulgeFactor = 0.5;

void main()
{
    // Recenter coordinates so (0, 0) is the middle of the image
    mediump vec2 processedTextureCoordinate = textureCoordinate - vec2(0.5);
    // Squared distance from the center
    mediump float radius = processedTextureCoordinate.x * processedTextureCoordinate.x + processedTextureCoordinate.y * processedTextureCoordinate.y;
    // Scale coordinates nonlinearly with that distance, then shift back
    mediump vec2 distortedCoordinate = vec2(pow(radius, bulgeFactor)) * processedTextureCoordinate + vec2(0.5);
    gl_FragColor = texture2D(inputImageTexture, distortedCoordinate);
}
Applied to a live video stream, this produces a bulging, fisheye-style distortion.
In my benchmarks, GPUImage processes still images 4X faster than Core Image on an iPhone 4 (6X faster than CPU-bound processing) and video 25X faster than Core Image (70X faster than the CPU). Even in the worst case I could throw at it, it matches Core Image for processing speed.
The framework is still fairly new, so the number of stock filters I have in there right now is low, but I'll be adding a bunch more soon. In the meantime, you can write your own custom distortion shaders to process your images, and the source code for everything is available for you to tweak as needed. My introductory post about it has a little more detail on how to use this in your applications.
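As a rough sketch of how a shader like the one above gets wired into the framework (kFisheyeFragmentShader is a placeholder for the GLSL source shown earlier, and the exact image-capture methods have varied across versions of GPUImage):

UIImage *inputImage = [UIImage imageNamed:@"face.jpg"]; // placeholder asset

// Feed a still image through a filter built from the custom fragment shader.
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
GPUImageFilter *fisheyeFilter = [[GPUImageFilter alloc] initWithFragmentShaderFromString:kFisheyeFragmentShader];

[stillImageSource addTarget:fisheyeFilter];
[fisheyeFilter useNextFrameForImageCapture];
[stillImageSource processImage];

UIImage *distortedImage = [fisheyeFilter imageFromCurrentFramebuffer];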

Related

How do I convert OpenGLES shaders to Metal compatible ones?

I have a project which uses about 2 dozen .vsh and .fsh files to draw 2D tiles using OpenGLES. Since that is deprecated, I want to convert my project to Metal. My head is now swimming with vocabulary and techniques involved in both systems - graphics is not my forte.
Can I use OpenGLES to compile the .vsh/.fsh files, and then save them in a Metal-compatible format? The goal would be to use the saved information in a Metal-centric world and remove all the OpenGLES code from the project. I've spent a few days on this already, and I still don't understand the process well enough to attempt the transition to Metal. Any/all help is appreciated.
I saw this in "OpenGL ES deprecated in iOS 12 and SKShader": "On devices that support it, the GLSL code you provide to SKShader is automatically converted to Metal shading language and run on a Metal renderer." That leads me to believe there is a way to get this done; I just don't know where to begin.
I have also seen Convert OpenGL shader to Metal (Swift) to be used in CIFilter, and if it answers my question, I don't understand how.
I don't think this answers it either: OpenGL ES and OpenGL compatible shaders
Answers/techniques can use either Objective-C or Swift - the existing code is Objective-C, the rest of the project has been converted to Swift 5.
There are many ways to do what you want:
1) You can use MoltenGL to seamlessly convert your GLSL shaders to MSL.
2) You can use open-source shader cross-compilers such as krafix, pmfx-shader, etc.
That said, in my experience you will usually get better performance if you rewrite the shaders in MSL yourself.
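Hand conversion is usually mechanical. As an illustration only (the names here are made up, not from your project), a GLSL fragment shader like the fisheye example earlier on this page might translate to a Metal fragment function along these lines:

#include <metal_stdlib>
using namespace metal;

// Interpolated values arriving from the vertex stage.
struct VertexOut {
    float4 position [[position]];
    float2 textureCoordinate;
};

constant float bulgeFactor = 0.5;

fragment float4 fisheyeFragment(VertexOut in [[stage_in]],
                                texture2d<float> inputTexture [[texture(0)]],
                                sampler textureSampler [[sampler(0)]])
{
    // Same math as the GLSL version: varyings become stage_in fields,
    // texture2D() becomes texture.sample(), gl_FragColor becomes the return value.
    float2 centered = in.textureCoordinate - float2(0.5);
    float radius = dot(centered, centered);
    float2 distorted = pow(radius, bulgeFactor) * centered + float2(0.5);
    return inputTexture.sample(textureSampler, distorted);
}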

Pixel Shader performance in Directx9 not equivalent to Opengl ES2.0

I am using a drop effect implemented as a pixel shader in Direct3D 9 (specifically SlimDX.Direct3D9), written in HLSL and used for a transition between two images. I have written the same pixel shader in GLSL for an Android project using Java 6.
The issue is the difference in output between the two platforms: the Android devices show a smooth transition, but on Windows machines there is visible pixelation during the transition. The DirectX project uses pixel shader model 2.0.
I think a couple of pictures would help immensely.
It could be a difference in sampling coordinates. Make sure you are getting a 1:1 texture-to-pixel mapping.
Another possibility is that the texture filtering is set to point instead of linear.
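Both are straightforward to rule out on the D3D9 side. A sketch using the native D3D9 API (SlimDX's Device.SetSamplerState mirrors these calls one-to-one; the quad variable is hypothetical):

// Force bilinear filtering on the sampler the transition shader reads from.
device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);

// Unlike OpenGL ES, D3D9 places texel centers half a pixel away from pixel
// centers. For an exact 1:1 mapping on a full-screen quad, shift the
// pre-transformed vertex positions by half a pixel:
for (int i = 0; i < 4; ++i) {
    quad[i].x -= 0.5f;
    quad[i].y -= 0.5f;
}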

Check Device Type For GPUImage

GPUImage requires, for iPhone 4 and below, images smaller than 2048 pixels. The 4S and above can handle much larger. How can I check to see which device my app is currently running on? I haven't found anything in UIDevice that does what I'm looking for. Any suggestions/workarounds?
For this, you don't need to check device type, you simply need to read the maximum texture size supported by the device. Luckily, there is a built-in method within GPUImage that does this for you:
GLint maxTextureSize = [GPUImageContext maximumTextureSizeForThisDevice];
The above will give you the maximum texture size for the device you're running this on. That will determine the largest image size that GPUImage can work with on that device, and should be future-proof against whatever iOS devices come next.
If you're curious, this method works by caching the result of this OpenGL ES query:
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
I should also note that you can provide images larger than the max texture size to the framework, but they get scaled down to the largest size supported by the GPU before processing. At some point, I may complete my plan for tiling subsections of these images in processing so that larger images can be supported natively. That's a ways off, though.
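For example, if you want to know ahead of time whether a particular image will be downsampled, a quick check might look like this (the image name is a placeholder; the scale factor converts points to pixels):

GLint maxTextureSize = [GPUImageContext maximumTextureSizeForThisDevice];

UIImage *photo = [UIImage imageNamed:@"photo.jpg"]; // placeholder asset
// UIImage sizes are in points; multiply by scale to get pixel dimensions.
CGFloat largestDimension = MAX(photo.size.width, photo.size.height) * photo.scale;

if (largestDimension > maxTextureSize) {
    NSLog(@"Image exceeds the GPU limit of %d px and will be downsampled.", maxTextureSize);
}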
This is among the best device-detection libraries I've come across: https://github.com/erica/uidevice-extension
EDIT: The readme seems to suggest that the more up-to-date versions are in her "Cookbook" sources. Perhaps this one is more current.
Here is a useful class that I have used several times in the past; it is very simple and easy to implement:
https://gist.github.com/Jaybles/1323251
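Classes like these generally work by reading the raw hardware model string from the kernel and mapping it to a friendly device name. A minimal sketch of that underlying technique (the mapping to marketing names is up to you):

#import <sys/utsname.h>

// Returns the raw hardware identifier, e.g. "iPhone4,1" for an iPhone 4S.
NSString *rawDeviceModel(void)
{
    struct utsname systemInfo;
    uname(&systemInfo);
    return [NSString stringWithCString:systemInfo.machine
                              encoding:NSUTF8StringEncoding];
}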

Best approach for music visualization/interaction app

I am an experienced Flash developer who's been learning Objective-C for the last 5 months.
I am beginning development of an app previously prototyped in Flash, and I'm trying to work out the best approach for porting it to iOS.
My app is kind of a music game. It consists of some dynamic graphics (circles growing and rotating), with typography also changing and rotating. Everything moves in sync with music. And at the same time the user can interact with the app (moving and rotating things) and some sounds will change depending on his actions.
Graphics can't be bitmaps because they get redrawn every frame.
This was easy to develop in Flash thanks to its handling of vector graphics, but I'm not sure what the best way to do it in Objective-C would be.
My options, I guess, are things like Core Graphics, OpenGL, and maybe Cocos2D (not sure whether that would be killing a flea with a sledgehammer). Or even things like openFrameworks or Cinder, but I'd rather use Objective-C than C++.
Any hint on where to look will be appreciated.
EDIT:
I can't really post a screenshot due to confidentiality issues, but it is something similar to this, except that it will be interactive, and sections will change size and disappear depending on the music and user interaction.
Which graphics library should you use? The answer is going to depend a lot on what you know or could learn. OpenGL will use hardware acceleration, so it's probably fastest. But OpenGL doesn't have built-in functions for drawing arc segments or any curves or text at all, so you'd probably have to do it yourself. Also, OpenGL is notoriously difficult to learn.
Core Graphics has many cool methods for drawing vector graphics (rectangles, arcs, general paths, etc.), but might be slower than you want, depending on what you're trying to do. Without having code to actually run it's hard to say.
It looks like Cocos2D is built on OpenGL and is made to be simple. I see lots of mention of sprites on their website, but nothing about vector graphics. (I've never used it, so it could be there and I'm just not seeing it.)
If I were in your position, I'd look into Cocos2D and see if it supports vector graphics at all. If not, I might give Core Graphics a try and see what the performance is like. I know OpenGL can do what you want, but it can be difficult to learn, so I'd probably try it last.
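To give a feel for the Core Graphics route, here is a minimal sketch of a growing arc redrawn every frame in a UIView subclass (the progress property is hypothetical, standing in for whatever value your music analysis produces):

// Assumes a UIView subclass with a 'progress' property running 0.0 to 1.0;
// call [self setNeedsDisplay] whenever progress changes.
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGPoint center = CGPointMake(CGRectGetMidX(rect), CGRectGetMidY(rect));
    CGFloat radius = MIN(rect.size.width, rect.size.height) * 0.4f;

    CGContextSetLineWidth(ctx, 6.0f);
    CGContextSetStrokeColorWithColor(ctx, [UIColor whiteColor].CGColor);

    // Sweep an arc whose end angle grows with progress.
    CGContextAddArc(ctx, center.x, center.y, radius,
                    -M_PI_2, -M_PI_2 + self.progress * 2.0f * M_PI, 0);
    CGContextStrokePath(ctx);
}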

iphone - digital image processing

I want to build an app similar to Fat Booth, Aging Booth, etc. I am a total noob at digital image processing. Where should I start? Any hints?
Processing images on the iPhone with any kind of speed is going to require OpenGL ES. That would be the place to start. (If this is your first iOS project, though, I wouldn’t recommend starting off with GL.)
Apple has an image processing example available here: http://developer.apple.com/iphone/library/samplecode/GLImageProcessing/Introduction/Intro.html.
I imagine the apps you refer to use GL too. Fat Booth, for example, might texture a mesh with your photo, then distort the mesh to make the photo bulge out in the middle. It could also be done purely with fragment shaders.