iPhone - digital image processing - Objective-C

I want to build an app similar to Fat Booth, Aging Booth, etc. I am a total noob at digital image processing. Where should I start? Any hints?

Processing images on the iPhone with any kind of speed is going to require OpenGL ES. That would be the place to start. (If this is your first iOS project, though, I wouldn’t recommend starting off with GL.)
Apple has an image processing example available here: http://developer.apple.com/iphone/library/samplecode/GLImageProcessing/Introduction/Intro.html.
I imagine the apps you refer to use GL too. Fat Booth, for example, might texture a mesh with your photo, then distort the mesh to make the photo bulge out in the middle. It could also be done purely with fragment shaders.
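To make that idea concrete, here is a rough sketch of the warp math behind a bulge effect, written as plain C (callable from Objective-C) so it reads without any GL setup. The center, radius, and strength parameters are made up for illustration; in a real app the same math would typically live in a fragment shader or drive the vertex positions of the textured mesh.

```objc
// Rough sketch of a "bulge" warp, as plain C callable from Objective-C.
// Given a texture coordinate in [0,1], it returns the coordinate to sample
// from: points inside the radius are sampled closer to the center, which
// magnifies (bulges) that region. All parameter values are illustrative.
#include <math.h>

typedef struct { float x, y; } Vec2;

static Vec2 BulgeSample(Vec2 coord, Vec2 center, float radius, float strength)
{
    float dx = coord.x - center.x;
    float dy = coord.y - center.y;
    float dist = sqrtf(dx * dx + dy * dy);
    if (dist >= radius || dist == 0.0f) {
        return coord;                       // outside the effect, untouched
    }
    float percent = dist / radius;          // 0 at the center, 1 at the edge
    float scale = powf(percent, strength);  // strength > 1 exaggerates the bulge
    Vec2 warped = { center.x + dx * scale, center.y + dy * scale };
    return warped;
}
```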

Related

HoloLens external rendering

Does anyone have a good solution for external rendering for Microsoft HoloLens apps? Specifically: is it possible to have my laptop render a number of 3D objects that is too much for the HoloLens GPU, and then display them on the HoloLens over Wi-Fi, including spatial mapping and interaction?
It's possible to render remotely, both directly from the Unity editor and from a built application.
While neither may quite be the "good solution" you're after, both allow very intensive applications to at least run.
This walks you through how to add it to an app you're building.
https://learn.microsoft.com/en-us/windows/mixed-reality/add-holographic-remoting
This is for running directly from the editor:
https://blogs.unity3d.com/2018/05/30/create-enhanced-3d-visuals-with-holographic-emulation-in-uwp/
I don't think this is possible, since you can't really access the OS or the processor at all on the HoloLens. Even if you do manage to send the data to a third party to process, the data will still need to be run back through the HoloLens, which is really just the same as before.
You may find a way to perhaps hook up a VR backpack to it but even then, I highly doubt it would be possible.
If you are having trouble rendering 3D objects, then you should reduce the number of triangles, use a lower-resolution shader, or reduce the size of the object. The biggest factor in processing 3D objects on the HoloLens is how much of the lens is being drawn on. If your object takes up 25% of the view instead of 100%, it will be much easier for the HoloLens to process.
Also, if you can't avoid having a lot of objects in the scene, check out LOD (level of detail), which renders objects at lower resolution the farther they are from the camera, and vice versa.

Getting started with image processing on Mac OS X

I recently moved from a PC to a MacBook Pro. I'm starting to go through tutorials on Objective-C and developing in Cocoa. I do a lot of image processing algorithm development work (pixel-by-pixel manipulation) in my day job, so I'd like to create a test image processing app or two for OS X. I'm struggling to figure out where to start - let's say I want to create a simple application (that I could reuse) that does the following:
Load an image from an Open File option within a File menu
Display it within the GUI
Click a button to apply pixel-by-pixel processing
Update the displayed image
Save the processed image via the Save option within the File menu
Any pointers or links would be most appreciated.
Thanks
Other info:
I'm pretty familiar with OpenCV on Linux - I haven't looked at using it within the Objective-C/Cocoa/Xcode environment yet, though - not even sure if that would be a good idea?
I guess it would be nice to use GPU acceleration as well, but I'm not familiar with OpenGL/OpenCL - so I might have to put that one on the long finger for the moment.
As you are looking at the Apple platform, you should look into the Core Image framework - it will provide you with most of the pre-baked cookies ready to be consumed in your application.
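For instance, here is a minimal Core Image sketch assuming a Cocoa app, using the built-in CISepiaTone filter as a stand-in for whatever processing you actually want; the function name and the URL handling are just for illustration.

```objc
// Minimal Core Image sketch (OS X): load an image, run it through a built-in
// filter, and wrap the result in an NSImage for display. The file URL and the
// choice of CISepiaTone are placeholders - swap in whatever filter you need.
#import <Cocoa/Cocoa.h>
#import <QuartzCore/QuartzCore.h>

NSImage *ProcessImageAtURL(NSURL *url)
{
    CIImage *input = [CIImage imageWithContentsOfURL:url];
    if (!input) return nil;

    CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
    [filter setValue:input forKey:kCIInputImageKey];
    [filter setValue:@0.8 forKey:kCIInputIntensityKey];

    CIImage *output = [filter valueForKey:kCIOutputImageKey];
    NSCIImageRep *rep = [NSCIImageRep imageRepWithCIImage:output];
    NSImage *result = [[NSImage alloc] initWithSize:rep.size];
    [result addRepresentation:rep];
    return result;      // hand this to an NSImageView to display it
}
```

For genuinely per-pixel algorithms you can instead pull the raw bytes out of an NSBitmapImageRep (via its bitmapData method), loop over them yourself, and wrap the result back into an NSImage for display and saving.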
For more advanced purposes, you can start off with OpenCV.
Best of luck!!
As samfisher suggests, OpenCV is not that hard to get working on the Mac, and Core Image is a great Cocoa framework for doing GPU-accelerated image processing. I'm working on porting my GPUImage framework from iOS to the Mac; it's entirely geared around making accelerated image processing easy to work with, but unfortunately that port isn't working just yet.
If you're just getting started on the Mac, one tool that I can point out which you might overlook is Quartz Composer. You have to download the separate Graphics Tools package from Apple's developer site to install Quartz Composer, because it's no longer shipped with Xcode.
Quartz Composer is a graphical development tool that lets you drag and drop modules, connect inputs and outputs, and do rapid development of some fairly interesting things. One task it's great for is doing rapid prototyping of image processing, either using Core Image or OpenGL shaders. I've even heard of people using OpenCV with this using custom patches. You can easily connect an image or camera source into a filter chain, then edit the filters and see live updates as you work on them, without requiring a compile-run cycle.
If you want some sample QC projects to play with, I have a couple of them linked from this article I wrote a couple of years ago. They both do the same color-based object tracking, with one using Core Image and the other OpenGL shaders. You can dig into that and play around to see how that works, without having to get too far into writing any code.

Caching images in Xcode? How?

I have started working on a new Xcode project, a game to be exact. I will be adding what you might call sprites to the screen quite frequently, and the image that represents each of them will be one of a total of 3. When I start adding these images programmatically to the view controller's view, the app starts lagging once I reach a still fairly low number compared to many other games out there (maybe 5-10). I was wondering if this has to do with caching? I see you can cache images in Cocos2D, which I just started learning, to reduce the processing time of rendering the images on-screen. How do I go about this in Xcode?
IN SHORT: How do I "cache" or allow Xcode to rapidly draw images, to prevent lag when drawing multiple images?
Thanks in advance.
JBJ
Xcode is the IDE and development environment; it's not the operating system, which is where any caching would really be happening.
UIImage does do some kind of caching (here is a related question that talks about this), but if you're going to be using Cocos2D, you should rely more upon whatever your game framework provides than on what the OS provides.
You should rely on a proper API (like Cocos2D, since you are talking about it) to develop games, not on UIKit classes, which are not meant to be used in this way. Why should caching be supported in something that is built for layouts and interfaces rather than realtime rendering?
I agree with Jack that you should probably just use Cocos2D. But if you want to do it yourself, you should use the imageNamed: method of UIImage to load the images, because it takes care of caching automatically, and you should use UIImageView to display the images, because Apple has put a lot of effort into optimizing UIImageView.
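As a minimal sketch of that approach (this assumes it runs inside a view controller; the image name and frame values are placeholders):

```objc
// imageNamed: caches the decoded image for you, and UIImageView is the
// cheapest way to get it on screen. Reusing the same UIImage for every
// sprite view avoids decoding the file more than once.
UIImage *sprite = [UIImage imageNamed:@"sprite"];   // cached after first load

UIImageView *spriteView = [[UIImageView alloc] initWithImage:sprite];
spriteView.frame = CGRectMake(40.0f, 80.0f, 32.0f, 32.0f);
[self.view addSubview:spriteView];

// A second sprite sharing the same cached UIImage; only the lightweight
// UIImageView object is duplicated.
UIImageView *anotherView = [[UIImageView alloc] initWithImage:sprite];
anotherView.frame = CGRectMake(120.0f, 80.0f, 32.0f, 32.0f);
[self.view addSubview:anotherView];
```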

Best approach for music visualization/interaction app

I am an experienced Flash developer who's been learning Objective-C for the last 5 months.
I am beginning development of an app previously prototyped in Flash, and I'm trying to work out the best approach for porting it to iOS.
My app is kind of a music game. It consists of some dynamic graphics (circles growing and rotating), with typography also changing and rotating. Everything moves in sync with music. And at the same time the user can interact with the app (moving and rotating things) and some sounds will change depending on his actions.
Graphics can't be bitmaps because they get redrawn every frame.
This was easy to develop with Flash due to its management of vector graphics. But I'm not sure what would be the best way to do it in objective-c.
My options, I guess, are things like Core Graphics, OpenGL, and maybe Cocos2D (not sure if that would be killing a flea with a sledgehammer). Or even things like OpenFrameworks or Cinder, but I'd rather use Objective-C than C++.
Any hint on where to look at will be appreciated.
EDIT:
I can't really put a real screenshot due to confidentiality issues. But it is something similar to this
But it will be interactive and sections would change size and disappear depending on the music and user interaction.
Which graphics library should you use? The answer is going to depend a lot on what you know or could learn. OpenGL will use hardware acceleration, so it's probably fastest. But OpenGL doesn't have built-in functions for drawing arc segments or any curves or text at all, so you'd probably have to do it yourself. Also, OpenGL is notoriously difficult to learn.
Core Graphics has many cool methods for drawing vector graphics (rectangles, arcs, general paths, etc.), but might be slower than you want, depending on what you're trying to do. Without having code to actually run it's hard to say.
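As a rough idea of what the Core Graphics route looks like, here is a minimal custom UIView that draws a filled arc segment in drawRect:. The color, radius, and angles are arbitrary placeholders; in the real app they would be driven by the music and the user's input.

```objc
// A small Core Graphics sketch: a custom UIView that draws a filled arc
// segment each time it is redrawn.
#import <UIKit/UIKit.h>

@interface ArcView : UIView
@end

@implementation ArcView
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGPoint center = CGPointMake(CGRectGetMidX(rect), CGRectGetMidY(rect));

    CGContextSetFillColorWithColor(ctx, [UIColor orangeColor].CGColor);
    CGContextMoveToPoint(ctx, center.x, center.y);
    CGContextAddArc(ctx, center.x, center.y, 80.0f,
                    0.0f, M_PI_2, 0);          // sweep from 0 to 90 degrees
    CGContextClosePath(ctx);
    CGContextFillPath(ctx);
}
@end
```

You would then call setNeedsDisplay on the view whenever the music or the user's interaction changes what should be drawn, for example from a CADisplayLink callback.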
It looks like Cocos2D is built on OpenGL and is made to be simple. I see lots of mention of sprites on their website, but nothing about vector graphics. (I've never used it, so it could be there and I'm just not seeing it.)
If I were in your position, I'd look into cocos2d and see if it does vector graphics at all. If not, I might give Core Graphics a try and see what performance was like. I know OpenGL can do what you want, but it can be difficult to learn, so I'd probably do that last.

OCR (reading text from photos) in Cocoa?

Is there any code out there, that I can use in Cocoa, to recognize text from photos? Let's say I snap a photo with my iPhone of a page of a book. I'd like to capture the text in it.
There is the Tesseract OCR toolkit, an open source OCR engine currently maintained by Google. "Olipion" created a cross-compilation tutorial for getting it running on the iPhone. I would say that this is a good place to start.
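For orientation only, this is roughly what calling the Tesseract 3.x API looks like from an Objective-C++ (.mm) file once the library has been cross-compiled. The header path, the tessdata path, and the way you produce a grayscale pixel buffer all depend on your setup, so treat this as a hedged sketch rather than drop-in code.

```objc
// Rough Objective-C++ sketch of driving Tesseract after cross-compiling it.
// Assumes an 8-bit grayscale buffer and an on-disk tessdata folder containing
// eng.traineddata; both are things your own setup has to provide.
#import <Foundation/Foundation.h>
#include "baseapi.h"   // from the Tesseract sources; include path varies by setup

NSString *RecognizeText(const unsigned char *grayPixels,
                        int width, int height, NSString *tessdataPath)
{
    tesseract::TessBaseAPI api;
    // Init returns 0 on success; tessdataPath points at the tessdata folder.
    if (api.Init([tessdataPath UTF8String], "eng") != 0) {
        return nil;
    }
    // 1 byte per pixel (grayscale), one row = width bytes.
    api.SetImage(grayPixels, width, height, 1, width);

    char *utf8 = api.GetUTF8Text();
    NSString *text = utf8 ? [NSString stringWithUTF8String:utf8] : nil;
    delete [] utf8;    // GetUTF8Text allocates with new[]
    api.End();
    return text;
}
```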
However, there are reasons why you might not want to do OCR on the phone even if you could. Some of these include:
Even the new iPhone 4's processor is not that fast, and since your app can't really run in the background doing the processing, the user experience might not be optimal.
Running OCR on a mobile device would probably be a killer for battery life.
Every time you would want to update the OCR engine everybody who installed your app would have to upgrade.
For an always-connected mobile device, running the OCR on a server somewhere would probably be better. You could upgrade your OCR software easily, you could run much more powerful algorithms than a mobile device could handle, and so on.
I am not so sure that you would be able to get good results from photos taken using a mobile camera -- accuracy of OCR systems goes way down with the kind of poorly lit, noisy, distorted images likely to be captured using a phone camera.
As far as commercial products go, there is Evernote, which gives you OCR capability if you buy their premium service.
As an alternative to machine OCR, there is always Mechanical Turk, where you could pay people a small amount to do the OCR for you. That would probably do better at transcription given the image source.