Getting started with image processing on Mac OS X - objective-c

I recently moved from a PC to a MacBook Pro. I'm starting to go through tutorials on Objective-C and developing in Cocoa. I do a lot of image processing algorithm development work (pixel by pixel manipulation) in my day job, so I'd like to create a test image processing app or two for OS X. I'm struggling to figure out where to start - let's say I want to create a simple application (that I could reuse) that does the following:
Load an image via an Open File option in the File menu.
Display it within the GUI.
Click a button to apply pixel-by-pixel processing.
Update the displayed image.
Save the processed image via the Save option in the File menu.
Any pointers or links would be most appreciated.
Thanks
Other info:
I'm pretty familiar with OpenCV on Linux - I haven't looked at using it within the Objective-C/Cocoa/Xcode environment yet, though, and I'm not even sure whether that would be a good idea.
I guess it would be nice to use GPU acceleration as well, but I'm not familiar with OpenGL/OpenCL - so I might have to put that one on the long finger for the moment.

As you are looking at the Apple platform, you should look into the Core Image framework - it provides plenty of pre-baked filters ready to be consumed in your application.
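For instance, applying one of the built-in filters takes only a few lines. A minimal sketch (the file path is a placeholder; error handling omitted):

#import <QuartzCore/QuartzCore.h>

// Load an image, run the built-in sepia filter over it, and grab the result.
CIImage  *input  = [CIImage imageWithContentsOfURL:
                       [NSURL fileURLWithPath:@"/tmp/input.png"]];
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
[filter setValue:input forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithDouble:0.8] forKey:@"inputIntensity"];
CIImage  *output = [filter valueForKey:kCIOutputImageKey];
// Render 'output' with a CIContext, or wrap it for display in a view.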
For more advanced purposes, you can move on to OpenCV.
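If you'd rather poke at pixels directly in Cocoa before bringing in OpenCV, NSBitmapImageRep exposes the raw buffer. A minimal sketch, assuming an 8-bit-per-sample, non-planar bitmap (paths are placeholders; production code should check the format):

#import <Cocoa/Cocoa.h>

// Load a file, invert every sample in place, and write the result out as PNG.
NSBitmapImageRep *rep =
    [NSBitmapImageRep imageRepWithContentsOfFile:@"/tmp/in.png"];
unsigned char *pixels = [rep bitmapData];
NSInteger bytesPerRow = [rep bytesPerRow];    // rows may be padded
NSInteger samples     = [rep samplesPerPixel];
for (NSInteger y = 0; y < [rep pixelsHigh]; y++) {
    unsigned char *row = pixels + y * bytesPerRow;
    for (NSInteger x = 0; x < [rep pixelsWide] * samples; x++)
        row[x] = 255 - row[x];                // invert each sample
}
NSData *png = [rep representationUsingType:NSPNGFileType properties:nil];
[png writeToFile:@"/tmp/out.png" atomically:YES];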
Best of luck!!

As samfisher suggests, OpenCV is not that hard to get working on the Mac, and Core Image is a great Cocoa framework for doing GPU-accelerated image processing. I'm working on porting my GPUImage framework from iOS to the Mac; it's entirely geared around making accelerated image processing easy to work with, but unfortunately the Mac port isn't in working shape just yet.
If you're just getting started on the Mac, one tool that I can point out which you might overlook is Quartz Composer. You have to download the separate Graphics Tools package from Apple's developer site to install Quartz Composer, because it's no longer shipped with Xcode.
Quartz Composer is a graphical development tool that lets you drag and drop modules, connect inputs and outputs, and rapidly develop some fairly interesting things. One task it's great for is rapid prototyping of image processing, using either Core Image or OpenGL shaders. I've even heard of people using OpenCV with it via custom patches. You can easily connect an image or camera source into a filter chain, then edit the filters and see live updates as you work on them, without requiring a compile-run cycle.
If you want some sample QC projects to play with, I have a couple of them linked from this article I wrote a couple of years ago. They both do the same color-based object tracking, with one using Core Image and the other OpenGL shaders. You can dig into that and play around to see how that works, without having to get too far into writing any code.


Glitches when using simple screen recorder on Arch Linux box

So I had been making video tutorials for my friends on how to program. On my old computer I had always run SimpleScreenRecorder and it recorded fine, but recently I got a new computer with a fresh install of Arch Linux on the box. I set up the environment with everything I needed to make another video, downloaded SimpleScreenRecorder using yaourt, and started recording. I had recorded up to a two-hour session without knowing that it was glitching out. When I look at my screen while recording I do not see the same issue as in the final rendered product, so I think it might be a rendering error, or I do not have the right codecs. After an hour or two of searching the web I could find no forum posts on the codec. I considered multiple things that could be wrong. FPS was my first suspect, but even when I recorded at 25 and 50 FPS it was still glitching out. My next idea was that I had the wrong codec (H.264), but searching turned up no solution to that one. Then I thought I might have been encoding at too high a speed (23), but that proved me wrong too. So now I am confused about how to get my answer.
Video link: https://www.youtube.com/watch?v=zfyIZiJCDa4
The glitches are often related to the rendering backend of the window compositor you are using.
Solution 1 - Change the rendering backend of the window compositor
@thouliha reported having issues with compton. In my case I had glitches with OpenGL (2.0 & 3.1) and resolved the issue by switching to XRender for recording.
On KDE you can easily change the rendering backend of the window compositor in System Settings.
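If you prefer a config file, the same switch lives in kwinrc. A sketch, assuming a KWin setup that reads ~/.config/kwinrc (key names can differ between KDE versions):

[Compositing]
# Use XRender instead of OpenGL; restart KWin for it to take effect
Backend=XRender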
Solution 2 - Change the Tearing Prevention method
To keep using OpenGL, for example for better performance, you can instead tweak the tearing prevention method.
In my case, switching from Automatic to Never allowed me to record video with the OpenGL compositor without glitches.
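On the KWin versions I've seen (an assumption for yours), that GUI setting maps to the GLPreferBufferSwap key in the same kwinrc section:

[Compositing]
# 'a' = Automatic, 'n' = Never (tearing prevention / buffer swap strategy)
GLPreferBufferSwap=n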
Solution 3 - Intel iGPU specific issues
Intel iGPUs (Intel graphics) have rendering issues with some CPUs.
See the Troubleshooting section of the Arch Linux wiki for those.
Examples of features that can create tearing- or flickering-related issues:
SNA
VSYNC
Panel Self Refresh (PSR)
Also check /etc/X11/xorg.conf.d/20-intel.conf in case your system has tweaks in there.
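For reference, the tweaks in question typically look something like this (a hypothetical 20-intel.conf; your options, if any, will differ):

Section "Device"
    Identifier "Intel Graphics"
    Driver     "intel"
    # Common tearing/flicker-related knobs; try toggling one at a time
    Option "AccelMethod" "sna"
    Option "TearFree"    "true"
EndSection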
I'm not exactly sure what you mean by glitching out, especially since the video is down now, but I've found that the video is choppy when using compton, so I had to turn that off.

Cocos2dx performance issue on Windows Phone 8

I'm trying to port an Android/iOS game to Windows Phone 8 (cocos2d-x v2.2). I'm using the exact same code base that I used for Android and iOS. The game functions just fine, but I'm facing a major FPS drop. The game runs flawlessly at 60 FPS on Android and iOS, but I'm getting roughly 35 FPS on WP8. Does this have anything to do with differences between OpenGL and DirectX?
I doubt it's down to the game's logic and calculations, because when the game starts on Windows Phone, it runs at 60 FPS on the main menu, which has about 5 sprites. But as I add more sprites to the screen, say about 30 of them (the average number of sprites when I'm IN the game), the FPS rapidly drops to the 35-40 range. Note that there are no schedulers or update functions running at this point. I did the same test on Android, but the FPS didn't drop. Does the WP8 port of cocos2d-x suck?
Any help, comments, or redirection to useful articles would be appreciated.
Thank you.
In case anyone runs into a similar issue: I reduced the number of children in the scene and deployed the build in release mode, which gave a major boost to the FPS. Also, I had a bunch of float-to-string and int-to-string conversions happening every frame inside the update function; that was eating away at the processing speed too.
Actually, the cocos2d-x port for WP8 is OK, but outdated. Cocos2d-x is now at 3.0 beta, but the WP8 port was left at 2.0 alpha.
Anyway... in Cocos there are some recursive drawing functions which are very heavy on the CPU. Also, keep in mind that even though WP8 is supposed to support arrays, lists, maps, etc., they are very slow on WP8.
And since you've come to this subject, please let me know if you managed to successfully put cocos2d-x into an XAML+D3D interop project. I am getting tons of crashes.
EDIT: Indeed, the recursive calls which process (draw or update) child CCNodes are very heavy on the device. However, after putting cocos2d-x v2.0 alpha for WP8 into a XAML+D3D interop project, I found a whole lot of memory-related issues. Apparently, after doing this (or just because I don't know how to properly configure my VS project and allow loose addressing), a lot of uninitialized pointers and data cause memory overlaps, leading to major crashes.
This only proves that it was truly an alpha release :) Too bad no newer version of cocos2d-x for WP8 is available.

Need to add an interactive 3D model to my otherwise non-3D app

As briefly as I can: are there any frameworks available that I can drop into an iPad app I'm working on, along with a 3D model, that will let me add a view presenting the model in an interactive format?
Model needs to be rotatable, and ideally I would like to be able to add interactive points on to the model that pop up modal views when tapped.
I have never worked with 3D before in any respect so I'm coming at that part as a complete novice. The 3D model is being supplied to me and will be available in "various formats". The rest of the app is pure Objective-C in which I'm proficient enough.
I have Googled and Googled and have come up with nothing so far.
Failing there being any drop-in frameworks, how much of a challenge is it likely to be to get myself up to speed with what I would need to know? Are there any good starting points to expand my knowledge here?
3D is a complex matter; if you don't see your future dealing with it on a regular basis, I wouldn't recommend writing your own solutions for it.
The closest thing you'll find to a drag-and-drop framework is the SDK from the manufacturer of the iPhone/iPad GPU. It's pretty easy to use.
PowerVR SDK Download
After a free registration on their website, you can download the SDK that contains lots of samples with source code. Their framework displays 3D models in their own POD format, which is of course heavily optimized for the iOS devices. Ask your 3D model provider to give you the models in POD format (you can find POD converters / exporters for Maya etc. on PowerVR's website as well).

Is QML worth using for embedded systems which run at around 600MHz with no GPU?

I'm planning to build an embedded system which is almost like an organizer, i.e. it handles contacts, games, applications and WiFi/2G/3G for internet. I planned to build the UI with QML because of its ease of use and quick application-building nature, on top of a Linux kernel.
But after reading these articles:
http://qt-project.org/forums/viewthread/5820 &
http://en.roolz.org/Blog/Entries/2010/10/29_Qt_QML_on_embedded_devices.html
I am depressed and reconsidering my idea of using QML!
My hardware will be with these configurations : Processor around 600MHz, RAM 128MB and no GPU.
Please comment on this and suggest some alternatives.
Thanks in advance.
inblueswithu
I have created a QML application for the Nokia E63, which has a 369MHz processor and 128MB RAM; I don't think it has a GPU. The application is a stopwatch. I animated button-click events, like jumping balls, and the animations are really smooth even when two buttons jump at the same time. A 600MHz processor should handle QML easily.
Here is the link to the .sis file: http://store.ovi.com/content/184985. If you have a Nokia mobile you can test it.
Maybe you should consider building QML elements by hand instead of exporting them from Photoshop or GIMP; for example, using Item in place of Rectangle will be more optimal. So give it a try: create a rough sketch with a good amount of animation to check whether your processor can handle it. If it doesn't work as expected, then consider plain Qt to build your UI.
QML applications work fine on low-spec Nokia devices. I have made one for the 5800 XpressMusic smartphone without any problems.

Mac OS X equivalent for DirectShow, GraphEdit

New to Mac OS X, familiar with Windows. Windows has DirectShow, a good number of built-in filters, COM programming, and GraphEdit for very fast prototyping and snooping on the graphs you've constructed in code.
I'm now about to go to the Mac to work with cameras, webcams, microphones, color spaces, files, splitting, synchronization, rendering, file reading, file saving, and many of the things I've come to take for granted with DirectShow when putting together applications for live performance. On the Mac side, so far I've found ... nothing! Either I don't know where to look or I'm having the toughest time tying the Mac's reputation for ease of handling media to a coherent programmatic ability to get in there and start messin' with media-manipulatin' building blocks.
I've seen some weak suggestions to use GStreamer or some library for QT, but I can't bring myself to believe that this is the Apple way to go. And I've come across some QuickTime documentation, but I'm not looking to do transitions, sprites, broadcasting, ...
Having a brain trained on DirectShow means I don't even know how Apple thinks about providing DirectShow-like functionality. That means I don't know the right keywords and don't even know where to look. Books? Bought a few. Now I might be able to write some code that can edit your sister's wedding video (if I can't make decent headway on this topic I may next be asking what that'd be worth to you), but for identifying what filters are available and how to string them together ... nothing. Suggestions?
Video handling is going through a huge transition on the Mac at the moment. QuickTime is very old, but also big and powerful, so it's been undergoing an incremental replacement process for the past 5 years or so.
That said, QTKit is the QuickTime subset (capture, playback, format conversion and basic video editing) which is supported going forward. The legacy QuickTime APIs are still there for the moment, and will probably remain at least until their major features are available elsewhere, but they are 32-bit only. For some involved video stuff you may end up needing to use them in places.
At the moment, iOS is ahead of the Mac because it could start from scratch with AV Foundation. The future of the Mac media frameworks will probably either be AV Foundation directly (with QTKit being a lightweight shim over the top) or an extension of QTKit that looks very similar.
For audio there's Core Audio which is on Mac and iOS and isn't going away any time soon. It's quite powerful but somewhat obtuse in places. Luckily online support is very good; the mailing list is an essential resource.
For filters and frame-level processing you've got Core Video as someone else mentioned, as well as Core Image. For motion graphics there's Quartz Composer which includes a graphical editor and a plugin architecture to add your own patches. For programmatic procedural animation and easily mixing rendering models (OpenGL, Quartz, video, etc.) there's Core Animation.
In addition to all of these, of course there's no reason you can't use open source libraries where the built-in stuff doesn't do what you want.
To address your comment below:
In QuickTime (and QTKit), individual data types like audio and video are represented as tracks. It may not be immediately clear that QuickTime can open audio as well as video file formats. A common way to combine audio and video would be:
Create a QTMovie with your video file.
Create a QTMovie with your audio file.
Take the QTTrack object representing the audio and add it to the QTMovie with the video in it.
Flatten the movie, so it doesn't simply contain a reference to the other movie but actually contains the audio data.
Write the movie to disk.
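In code, those steps come out roughly like this. A sketch only (paths are placeholders, error handling omitted), using insertSegmentOfMovie: to pull the audio movie's sound track in rather than manipulating the QTTrack directly:

#import <QTKit/QTKit.h>

// Open the video as editable, open the audio, splice the audio in,
// then flatten everything into a single self-contained file.
NSDictionary *attrs = [NSDictionary dictionaryWithObjectsAndKeys:
    @"/tmp/video.mov",             QTMovieFileNameAttribute,
    [NSNumber numberWithBool:YES], QTMovieEditableAttribute, nil];
QTMovie *video = [QTMovie movieWithAttributes:attrs error:NULL];
QTMovie *audio = [QTMovie movieWithFile:@"/tmp/audio.m4a" error:NULL];

QTTimeRange range = QTMakeTimeRange(QTZeroTime, [audio duration]);
[video insertSegmentOfMovie:audio timeRange:range atTime:QTZeroTime];

// QTMovieFlatten makes writeToFile: copy the audio data into the new
// file rather than just referencing the other movie.
[video writeToFile:@"/tmp/combined.mov"
    withAttributes:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                               forKey:QTMovieFlatten]];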
Here's an example from Blender. You'll see how the A/V muxing is done in the end_qt function. There's also some use of Core Audio in there (AudioConverter*). (There's some classic QuickTime export code in quicktime_export.c but it doesn't seem to do audio.)