Unit testing a custom UIView - Objective-C

I have a graphics application using the CoreGraphics framework.
While I have unit test suites for the model files, I can't seem to understand how to create a unit test for my custom UIView.
My goal is to set basic properties of the view and check the result of the draw functions. Although my setters call:
[self setNeedsDisplay];
my drawRect: method is not being called from the unit test, even though it is called from the real application.
Is there a way to draw in a unit test project?
What are the best practices/tools for testing UI projects?
Thanks

A simple way to test drawing is to first make the drawing look the way you want (so no TDD). Then write a test that renders the drawing into a PNG. Use an #if conditional to switch the test code between capturing a baseline PNG and comparing against that PNG.
Rendering can change slightly in new OS versions. So stick with a single OS for baseline image testing. Grab a new baseline when you need to.
With this kind of testing in place, you can then refactor your drawing code. If it renders the same image, your latest changes are good. If there's a difference, you have to make an eyeball decision about whether or not to accept the change. (And if you do, capture a new baseline.)
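Here's roughly what that looks like as a sketch. MyCustomView, the RECORD_BASELINE flag, and the baseline path are placeholders to adapt to your project; the key point is that rendering the layer into a bitmap context triggers drawRect: even though no window is involved, which also answers the "can I draw in a unit test" question:

#import <XCTest/XCTest.h>
#import <UIKit/UIKit.h>
#import "MyCustomView.h"   // hypothetical custom view under test

// Flip this to 1 once to capture a new baseline, then back to 0 to compare.
#define RECORD_BASELINE 0

@interface MyCustomViewDrawingTests : XCTestCase
@end

@implementation MyCustomViewDrawingTests

// Renders any view into PNG data by drawing its layer into a bitmap context.
// renderInContext: goes through drawLayer:inContext:, which calls drawRect:.
- (NSData *)pngDataForView:(UIView *)view {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 1.0); // fixed scale keeps baselines device-independent
    [view.layer layoutIfNeeded];
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return UIImagePNGRepresentation(image);
}

- (void)testDrawing {
    MyCustomView *view = [[MyCustomView alloc] initWithFrame:CGRectMake(0, 0, 320, 200)];
    // Set the basic properties whose effect on drawRect: you want to verify here.
    NSData *rendered = [self pngDataForView:view];
    NSString *baselinePath = @"/tmp/MyCustomView_baseline.png"; // placeholder location
#if RECORD_BASELINE
    [rendered writeToFile:baselinePath atomically:YES];
#else
    NSData *baseline = [NSData dataWithContentsOfFile:baselinePath];
    XCTAssertEqualObjects(rendered, baseline, @"Rendering differs from the baseline image");
#endif
}

@end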
Edit: These days, instead of doing all that by hand, I'd use iOSSnapshotTestCase. When there is a test failure, instead of just giving a "something went wrong" result, it saves the image. This way, you can do a side-by-side comparison without having to manually launch your app and navigate to the view in question. It also simulates rendering to different devices in different orientations, which is really great for checking autolayout.
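With that library, a test ends up looking roughly like this (a sketch assuming the FBSnapshotTestCase base class the library provides and the same hypothetical MyCustomView):

#import <FBSnapshotTestCase/FBSnapshotTestCase.h>
#import "MyCustomView.h"   // hypothetical view under test

@interface MyCustomViewSnapshotTests : FBSnapshotTestCase
@end

@implementation MyCustomViewSnapshotTests

- (void)testCustomViewAppearance {
    // self.recordMode = YES;  // uncomment once to record a new reference image
    MyCustomView *view = [[MyCustomView alloc] initWithFrame:CGRectMake(0, 0, 320, 200)];
    // Configure the properties that drive the drawing here.
    FBSnapshotVerifyView(view, nil);  // fails and saves the images if rendering changed
}

@end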

In general, unit testing views isn't the right thing to do. Unit testing is meant to validate discrete atoms of logic, and if you're factoring your code properly, there should be very little logic in your views.
A more successful approach might be to use the UIAutomation framework (or your automation tool of choice). This allows you to automate simulated user interaction while the app is actually running (either in the simulator or on device). UIAutomation has functions (the various view methods, and captureRectWithName()) that allow you to find and screenshot specific views. You can then hook it up to, for example, ImageMagick's command line compare tool to validate that you're drawing the correct thing.

I personally think you should test any rendering your view does using XCUITest. However, any business logic that drives the custom rendering, and any maths or other logic you have added, should be tested in isolation from any other class.
Your UI automation tests will check that your view is being rendered, while your unit tests validate whatever maths or business logic exists in it.
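As a rough sketch of the UI-test half, assuming the view's accessibilityIdentifier is set to "CustomDrawingView" (a placeholder name), a test might just confirm the view appears and attach a screenshot for inspection:

#import <XCTest/XCTest.h>

@interface CustomViewUITests : XCTestCase
@end

@implementation CustomViewUITests

- (void)testCustomViewIsRendered {
    XCUIApplication *app = [[XCUIApplication alloc] init];
    [app launch];

    // Assumes the view's accessibilityIdentifier is set to @"CustomDrawingView".
    XCUIElement *customView = app.otherElements[@"CustomDrawingView"];
    XCTAssertTrue([customView waitForExistenceWithTimeout:5.0]);

    // Attach a screenshot so a human (or an image-diff step) can inspect the drawing.
    XCTAttachment *attachment = [XCTAttachment attachmentWithScreenshot:[customView screenshot]];
    attachment.lifetime = XCTAttachmentLifetimeKeepAlways;
    [self addAttachment:attachment];
}

@end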

Related

Identify the monitor with the browser window in FireBreath

I am using FireBreath to create a cross-browser plugin which makes use of some native libraries for the respective platform (some .NET-based DLLs for Windows and Objective-C-based dylibs/frameworks for Mac). The native libraries display UI screens. In order to improve usability, if the user has a multi/extended monitor setup, I would like the native UIs to appear on the same screen the browser window is currently on.
If an identifier for the monitor containing the browser window can be retrieved, it can be passed down to the native components, which can be configured to display their UIs on that monitor. I have used FireBreath's getWindowPosition() method to get the rect coordinates of the plugin and used that info to identify the correct monitor on the Windows platform.
However, the coordinates returned on Mac always seem to be 0 (or 1), irrespective of the monitor on which the browser window currently resides. I understand that we have to configure an event model and a drawing model in order for this to work on Mac. I have tried the following event/drawing model combinations without much success.
1) Cocoa/CoreGraphics
2) Carbon/CoreGraphics
Any help in this regard is much appreciated. Also, please do share if there are other approaches to achieve the same. What I want is to identify the monitor on which the currently active browser window resides on Mac. I am unsure at this point, but it may be possible to achieve this at the Objective-C level (without any changes at the FireBreath level). Also, please note that I want to support the Safari, Firefox and Chrome browsers.
You won't like this answer, but simply put, you can't do that on Mac. The problem is that with CoreGraphics you are only given a CGContextRef to work with, and it doesn't know where it will be drawn. It was technically possible in older browsers to get an NSWindow by exploiting some internal implementation details, but in many browsers that's no longer possible, and it was never supported.
Other drawing models are the same; with CoreAnimation you have a CALayer, but it doesn't know which screen or monitor it is drawn to. I personally find it a bit annoying as well, but I do not know of any way to find out which monitor your plugin is rendered to, particularly since most browsers actually copy the buffer to something else and render it in a different process.
I did manage to come up with a workaround, and I am just replying here for the completeness of the thread. As @taxilian explained, it is not possible to retrieve the plugin coordinates using the window reference. As an alternative approach, the JavaScript window object has two properties, screenX and screenY, that return the X and Y coordinates of the browser window relative to the screen. If the user has an extended monitor setup, these are absolute coordinates with respect to the full extended screen. We can use these values to determine the monitor containing the browser window (if the X coordinate is outside the bounds of the primary monitor's width, the browser should essentially be on the extended monitor). We can retrieve DOM properties from FireBreath as explained in the following link:
http://www.firebreath.org/display/documentation/Invoking+methods+on+the+DOM
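On the native Objective-C side, a minimal sketch of the lookup could look like this, assuming the screenX value from JavaScript is forwarded down to the native component. Only the X coordinate is compared, since AppKit's Y axis is flipped relative to the browser's:

#import <AppKit/AppKit.h>

// Given the browser window's screenX (passed down from JavaScript), find the
// NSScreen whose frame spans that horizontal position.
static NSScreen *ScreenContainingBrowserX(CGFloat screenX) {
    for (NSScreen *screen in [NSScreen screens]) {
        NSRect frame = [screen frame];
        if (screenX >= NSMinX(frame) && screenX < NSMaxX(frame)) {
            return screen;
        }
    }
    return [NSScreen mainScreen];  // fall back to the primary display
}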

How to create a bitmap cropping control in WinJS

I want to create a control that lets the user choose the crop area of a bitmap in the common way, with four corner handles on the image. I saw that there is a sample C# app for this on the Microsoft site - http://www.microsoft.com/en-us/showcase/details.aspx?uuid=bef08d57-fa4d-4d9c-9080-6ee55b8623c0
But I cannot figure out how to do this in WinJS alone. Do I need to create custom controls - if so, how? Any sample code would help a great deal.
I have an example in my codeSHOW project. It's the demo called Rx Crop. It uses Reactive Extensions (which are awesome by the way), but if you didn't want that dependency you could probably just use the example to figure out how to do it without.
BTW, the codeSHOW project has a bug and a usability issue currently. I have an update in certification. For now, just make sure you select the Rx Crop demo on the home screen and then click See the Code. If you hit See the Code with no demo selected it will crash.
Do me a favor and rate the app. Thanks.

Retrieving an app's DockTile (view)

I'm getting my hands dirty with a bit of Objective-C by trying to write something Dock-like, with slightly fewer visual bells and whistles. It's going pretty well. However, I've stumbled over a problem which I can't quite solve:
Retrieving an app's icon via NSRunningApplication is easy. However, some apps don't use their icon as their DockTile; they use a custom view because their DockTiles are dynamic (e.g. most torrent apps display their current up/down speeds in the dock).
Is there any way to get this DockTile and display it in my own app?
Thanks
No, there is not. The methods which set a custom dock tile end up communicating the contents of the view directly to the Dock; it is not made available to other processes.
For what it's worth, writing a replacement for the Dock is going to be a kind of hopeless task -- Apple's Dock.app uses numerous private, undocumented APIs to interact with the WindowServer, some of which simply cannot be used by any process which is not the Dock.

Getting the screen position of a non-view based NSStatusItem

My application makes use of an NSStatusItem. I need to grab the screen coordinates for the status item, but since I have no need for the functionality offered by setting a custom view for the item I am using a standard icon-based one instead.
Is there a way to get the status item's position without having to resort to setting a custom view to it?
Unless you find a better way, you could use some undocumented API: - (id)_window of NSStatusItem probably returns the window enclosing the item itself. Maybe you can get some interesting information out of that?
Beware: Some fundamentalists might actually try to break your neck for using undocumented API. Just make sure you check regularly if this portion of your code works with newer OS versions (either by hand or using some unit tests).
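A minimal sketch of that idea, calling the private _window selector through performSelector: so nothing private has to be declared; this is undocumented behaviour and may break in any OS release:

#import <AppKit/AppKit.h>

// Sketch: grab the private window behind an icon-based NSStatusItem and read its
// frame (which is already in screen coordinates). Undocumented; verify on each OS.
static NSRect StatusItemScreenFrame(NSStatusItem *statusItem) {
    NSWindow *window = nil;
    SEL windowSelector = NSSelectorFromString(@"_window");
    if ([statusItem respondsToSelector:windowSelector]) {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Warc-performSelector-leaks"
        window = [statusItem performSelector:windowSelector];
#pragma clang diagnostic pop
    }
    return window ? [window frame] : NSZeroRect;
}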

Very confused by a binding issue between a Cocoa app and a Movie Loader patch in Quartz Composer

I've been programming for a while, but just recently decided to start developing for Mac OS X. I feel like I've come to grips with the basics of Objective-C and Cocoa development over the past week. I'm planning on making graphics apps, and as such am currently in the process of learning how to control Quartz compositions through a Cocoa app. I went through the tutorial that apple offers (with the Mac Engravings composition), and was able to create that just fine. In order to make sure that I truly understood what I learned, I decided to create my own composition and link it to a slightly more complicated Cocoa application.
Essentially, I have a composition that loads a movie or image through a Movie Loader patch, at which point it applies various filters to the frames before outputting them. In my Cocoa app, I've written code (or rather copied and pasted from other Apple examples) that lets the user pick a file using an NSOpenPanel object. The file path of the file they pick gets placed in a text box that I placed in the app's window using Interface Builder. I bound the value of said text box to the "Movie_Location" key in my composition, which is a published input on the Movie Loader patch. However, no matter what I try, movies and images aren't loaded into the composition. The only thing that gets displayed is the default image that I have saved in that input from Quartz Composer (or nothing if I leave it blank before publishing).
I've added a Clear Color patch to the composition and bound that to a color well in my UI, and that successfully changes the color in my display, so I know that the composition and my Cocoa app are communicating. I've spent numerous hours at this point trying to figure out what's going on, and I've just about given up. Does the Movie Loader have any weird behaviors that I'm not aware of, or is there something obvious that I seem to be missing? I'd really appreciate any help or advice.
Thanks for reading through this...
Best,
Sami
There are two things I can think of as reasons why it is doing this:
1) The file path is formatted incorrectly. Try checking backslashes, colons, etc.
2) The box isn't updating the value. Try literally clicking in the text field and hitting enter.
That's all I can think of without seeing your Quartz composition and/or code.
EDIT:
Check the other continuous box, in the general properties.
I figured this out yesterday. spudwaffle's second idea is what was going on. If I typed a file path in and hit enter, it would work just fine. I got it to work properly by removing the binding and instead using the setValue:keyInPath: method that a patch controller offers. That said, is there some way to force a text box to update? I remember seeing a "continuously update" or similar checkbox within the bind sub-menu in the inspector, but my code didn't work with that checked either.
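For completeness, here is roughly what the programmatic route looks like. The names reflect my setup (a QCPatchController outlet called patchController and a published input named Movie_Location), and I believe the actual selector is setValue:forKeyPath: with the binding-style key path, so treat this as a sketch to adapt rather than exact code:

#import <Quartz/Quartz.h>   // QCPatchController, QCView

// Called after the user picks a file in the NSOpenPanel. patchController is an
// IBOutlet to the QCPatchController in the nib; Movie_Location is the published input.
- (void)loadMovieAtPath:(NSString *)path {
    [self.patchController setValue:path
                        forKeyPath:@"patch.Movie_Location.value"];
}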
Thanks to those of you that tried to help me! I really appreciate it.
Best,
Sami