I have a very simple Mac application; the best way to think of it is a master/detail view. Once an item is chosen on the left-hand side, information appears in the right pane, along with an image (which can be of various sizes/formats/resolutions depending on the source). The user can very quickly select items on the left, and while the detail text in the right-hand pane appears quickly, we notice an immediate lag or outright failure to display the image. Even the largest images are only a few MB, and typically less than 1024x1024 pixels. If the user moves slowly, everything is kosher... but if you go quickly, it will hiccup: lagging a few selections behind, or not displaying at all.
I've attempted solutions using NSView's setNeedsDisplay:, and alternatively using Core Animation layers, but nothing seems to do the trick. Is there a guide or set of tutorials on performant image display routines?
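For reference, the setNeedsDisplay-based attempt boils down to roughly this (simplified; the item class and image view property are hypothetical names):

    // Simplified sketch of my current (laggy) approach (hypothetical names).
    - (void)selectionDidChangeToItem:(MyItem *)item
    {
        // The load and decode happen synchronously on the main thread, which I
        // suspect is why rapid selection changes stall the image pane.
        NSImage *image = [[NSImage alloc] initWithContentsOfURL:item.imageURL];
        self.detailImageView.image = image;
        [self.detailImageView setNeedsDisplay:YES];
    }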
Targeting 10.6+
Thanks to Apple's PictureSwiper sample code and the very nice NSPageController tutorial from juniperi here on stackoverflow, it's pretty easy to get tantalizingly close to the image-viewing capabilities of Preview. Specifically, I want to replicate the ability to swipe forwards/backwards between images/pages, use pinch-to-zoom to resize the images, use a rotate gesture on the images/pages, and support two-page mode.
But there are some hurdles that make me wonder if NSPageController is the right approach or if it is too limiting and a custom view controller is needed.
1) Images of varying sizes are simply displayed stacked, and if the top/upper image is smaller, the underlying image(s) show through. With the same images, Preview hides the larger "underlying" images/pages and fades the underlying image in/out with the swipe transition. I could hide underlying images by linking the page controller to the view rather than the image cell (like PictureSwiper), but that causes the entire view to scale on pinch-to-zoom and overall looks clunky.
2) Is it possible to use NSPageController with more than one image cell, e.g. two-page mode?
3) Is page/image rotation possible with NSPageController?
4) Is it possible to lock the zoom level for all the images, so they are displayed at a uniform zoom level as the user navigates?
My apologies if this is too general a question, but the gist is whether NSPageController is too limited and problematic to extend, which would necessitate building a custom controller from scratch.
Thanks.
I am creating a simple photo filter app for OS X and I am displaying a photo in an NSImageView (actually two photos on top of each other with two NSImageViews, but the question still applies to a single view too). Everything works fine, but when I try to resize the window that contains the NSImageViews, the window (which also resizes the NSImageViews) resizes very slowly, at less than 1 fps, which really hurts the user experience. I want window resizing to be as smooth as possible. When I disable resizing of the image views, the window resizes smoothly, so the cause of the slowdown is those NSImageViews.
I'm loading 20-megapixel images from my DSLR. When I scale them down to a reasonable size for the screen (e.g. 1024x768), they scale smoothly, so the problem is the way NSImageView renders the images. Judging by this behavior, I assume it tries to re-render the 20 MP image every time it needs to redraw it into whatever the target frame of the view is.
How can I make NSImageView rescale more smoothly? Should I feed it a scaled-down version of my images? I'd rather not, since it's a photo editing app that also targets Retina displays and the viewport can be quite large; I can do it, but it's my last resort. Other than scaling down, how can I make NSImageView resize faster?
I believe part of the solution you are looking for is in NSImage's representations. You can add many representations to an image with addRepresentation:, and I believe some intelligent selection is done when drawing. In your case, I think you would need to add both representations (the scaled-down and the full-resolution bitmap) to the NSImage. I strongly suspect the drawing code would then pick the low-resolution version. I would make sure "scale up or down" is selected in the NSImageView, because the default is scale down only, which may force your full-resolution image to be used most of the time. There is some discussion of "matching" under "Setting the Image Representation Selection Criteria" in Apple's NSImage documentation, although at first sight this may not be sufficient.
Then, whenever you need to do something with the full image, you would request the full-resolution version by going through the representations ([NSImage representations] returns an array of NSImageRep objects).
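An untested sketch of what I mean (the file URL and the on-screen size are placeholders):

    // One NSImage carrying two representations of the same photo: a screen-sized
    // bitmap for cheap drawing, plus the full-resolution original for editing/export.
    #import <Cocoa/Cocoa.h>

    static NSImage *ImageWithScreenAndFullReps(NSURL *fileURL, NSSize screenSize)
    {
        NSBitmapImageRep *fullRep =
            [[NSBitmapImageRep alloc] initWithData:[NSData dataWithContentsOfURL:fileURL]];
        if (!fullRep) return nil;

        // Render a scaled-down copy into a second bitmap representation.
        NSImage *scratch = [[NSImage alloc] initWithSize:screenSize];
        [scratch lockFocus];
        [fullRep drawInRect:NSMakeRect(0, 0, screenSize.width, screenSize.height)];
        NSBitmapImageRep *smallRep = [[NSBitmapImageRep alloc]
            initWithFocusedViewRect:NSMakeRect(0, 0, screenSize.width, screenSize.height)];
        [scratch unlockFocus];

        // Give both reps the same logical size so they behave like two densities of
        // one image, and let NSImage pick whichever suits the destination rect.
        smallRep.size = fullRep.size;

        NSImage *image = [[NSImage alloc] initWithSize:fullRep.size];
        [image addRepresentation:smallRep];   // cheap rep for small on-screen draws
        [image addRepresentation:fullRep];    // full-resolution rep for editing/export
        return image;
    }

When you need the original pixels again, walk [image representations] and take the representation with the largest pixelsWide/pixelsHigh.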
For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x/y coords to MIDI values)
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of the screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
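Concretely, I'm picturing something like this for each control (hypothetical names; Objective-C and CGRect are used purely for illustration):

    #import <Cocoa/Cocoa.h>

    // One object per soft-synth control: on-screen bounds, a MIDI output channel,
    // and a scaler that maps a point inside the bounds to a 0-127 MIDI value.
    @interface SynthControl : NSObject
    @property (nonatomic) CGRect  bounds;        // x position, y position, width, height
    @property (nonatomic) uint8_t midiChannel;   // MIDI output channel
    - (uint8_t)midiValueForPoint:(CGPoint)p;     // MIDI data scaler
    @end

    @implementation SynthControl

    - (uint8_t)midiValueForPoint:(CGPoint)p
    {
        // Vertical position within the control mapped onto the 7-bit MIDI range.
        CGFloat t = (p.y - CGRectGetMinY(self.bounds)) / CGRectGetHeight(self.bounds);
        t = MAX(0.0, MIN(1.0, t));
        return (uint8_t)(t * 127.0 + 0.5);
    }

    @end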
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds that object becomes active, and it then becomes inactive if they fall outside some other, smaller bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
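A rough sketch of that focus test (hypothetical names; whether the release rectangle should be smaller or larger than the control's bounds, and how long the dwell should be, are exactly the open questions):

    #import <Cocoa/Cocoa.h>

    typedef struct {
        BOOL           focused;
        NSTimeInterval outsideSince;   // 0 while the hand is inside the release rect
    } FocusState;

    // Entering the control's bounds takes focus; focus is dropped only after the
    // hand has stayed outside a second "release" rectangle for releaseDelay seconds,
    // so brief excursions don't make focus flicker.
    static void UpdateFocus(FocusState *state, CGRect bounds, CGRect releaseRect,
                            CGPoint hand, NSTimeInterval now, NSTimeInterval releaseDelay)
    {
        if (CGRectContainsPoint(bounds, hand)) {
            state->focused = YES;             // hand is on the control: take focus
            state->outsideSince = 0;
        } else if (state->focused && !CGRectContainsPoint(releaseRect, hand)) {
            if (state->outsideSince == 0) {
                state->outsideSince = now;    // just left: start the release timer
            } else if (now - state->outsideSince >= releaseDelay) {
                state->focused = NO;          // stayed outside long enough: drop focus
            }
        } else {
            state->outsideSince = 0;          // back inside the release rect: reset
        }
    }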
So, to summarize, the problem is:
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Whoa, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control finicky & lagging virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. The user strikes and holds a pose to invoke some application-wide action or command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. The user waves their hand in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and another hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click"
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
PS... I'd also like to give a shout-out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/
Background: I recently got two monitors and want a way to move the focused window to the other screen and vice versa. I've achieved this by using the Accessibility API. (Specifically, I get an AXUIElementRef that holds the AXUIElement associated with the focused window, then I set the NSAccessibilityPositionAttribute value to move the window.)
I have this working almost exactly the way I want it to, except I want to animate the movement of windows. I thought that if I could get the NSWindow somehow, I could get its layer and use CoreAnimation to animate the window movement.
Unfortunately, I found out that this isn't possible. (Correct me if I'm wrong though -- if there's a way to do it this way, it'd be great!) So I'm asking you all for help. How should I go about animating the movement of the focused window, if I have access to the AXUIElementRef?
-R
--EDIT
I was able to get a crude animation going by running a while loop that moves the window by a small amount on each iteration. However, the results are pretty sub-par. As you can guess, it takes a lot of unnecessary processing power and is still very choppy. There must be a better way.
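For reference, the crude version boils down to roughly this (simplified; hypothetical names):

    #import <Cocoa/Cocoa.h>
    #import <ApplicationServices/ApplicationServices.h>

    // Block the thread and nudge kAXPositionAttribute a little on every iteration.
    static void CrudeAnimateWindowMove(AXUIElementRef window, CGPoint from, CGPoint to, int steps)
    {
        for (int i = 1; i <= steps; i++) {
            CGPoint p = CGPointMake(from.x + (to.x - from.x) * i / steps,
                                    from.y + (to.y - from.y) * i / steps);
            AXValueRef value = AXValueCreate(kAXValueCGPointType, &p);
            AXUIElementSetAttributeValue(window, kAXPositionAttribute, value);
            CFRelease(value);
        }
    }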
The best possible way I can imagine would be to perform some hacky property comparison between the AXUIElement info values for the window and the info returned from the CGWindow API. Once you're able to ascertain which windows in the CGWindow API match which AXUIElementRefs, you could grab bitmaps of the current window contents, overlay the screen with your own custom animated drawing of the faux windows, then as you drop the overlay set the real AXUIElementRefs to the desired end-of-animation positions.
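In rough code, the matching step might look something like this (untested sketch; tolerances and error handling omitted):

    #import <Cocoa/Cocoa.h>
    #import <ApplicationServices/ApplicationServices.h>

    // Find the on-screen CGWindow owned by the same process whose bounds match the
    // AX window's position and size. Returns kCGNullWindowID if nothing matches.
    static CGWindowID WindowIDForAXWindow(AXUIElementRef axWindow)
    {
        pid_t pid = 0;
        AXUIElementGetPid(axWindow, &pid);

        // Read the AX window's frame (both APIs use top-left-origin global coordinates).
        CGPoint origin = CGPointZero;
        CGSize size = CGSizeZero;
        AXValueRef value = NULL;
        if (AXUIElementCopyAttributeValue(axWindow, kAXPositionAttribute, (CFTypeRef *)&value) == kAXErrorSuccess) {
            AXValueGetValue(value, kAXValueCGPointType, &origin);
            CFRelease(value);
        }
        if (AXUIElementCopyAttributeValue(axWindow, kAXSizeAttribute, (CFTypeRef *)&value) == kAXErrorSuccess) {
            AXValueGetValue(value, kAXValueCGSizeType, &size);
            CFRelease(value);
        }
        CGRect axFrame = CGRectMake(origin.x, origin.y, size.width, size.height);

        CGWindowID match = kCGNullWindowID;
        CFArrayRef windows = CGWindowListCopyWindowInfo(kCGWindowListOptionOnScreenOnly, kCGNullWindowID);
        for (CFIndex i = 0; i < CFArrayGetCount(windows); i++) {
            CFDictionaryRef info = CFArrayGetValueAtIndex(windows, i);

            int ownerPID = 0;
            CFNumberGetValue(CFDictionaryGetValue(info, kCGWindowOwnerPID), kCFNumberIntType, &ownerPID);
            if (ownerPID != pid) continue;

            CGRect cgFrame;
            CGRectMakeWithDictionaryRepresentation(CFDictionaryGetValue(info, kCGWindowBounds), &cgFrame);
            if (CGRectEqualToRect(CGRectIntegral(cgFrame), CGRectIntegral(axFrame))) {
                int windowNumber = 0;
                CFNumberGetValue(CFDictionaryGetValue(info, kCGWindowNumber), kCFNumberIntType, &windowNumber);
                match = (CGWindowID)windowNumber;
                break;
            }
        }
        CFRelease(windows);
        return match;
    }

From there you could feed the matched window ID to CGWindowListCreateImage to grab the bitmap for the faux-window overlay.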
Hacky, tho.
I've been trying to render the entire canvas of an IWebBrowser2 control to a bitmap. IViewObject::Draw seems to be the most promising approach, but I can't get it to render anything that would require a scroll to show. While I could automate the scrolling and stitch the images together, this would look weird with any fixed-position elements. Is this even doable?
Additionally, I've tried setting the control's size to one that would allow the entire contents to display without needing to scroll, but Windows caps the max size at the current screen resolution, so that only gets me partway there.
Any help would be much appreciated. I'm currently doing this in the context of Win7 and IE8, but I don't think that should matter much.
Sorry it took so long for me to follow up with the answer to this.
I wrote up an article detailing how to trick Windows into allowing you to resize a window larger than the virtual screen resolution, allowing functions like PrintWindow or IViewObject::Draw to capture the entire client area (i.e., the browser canvas).
http://nirvdrum.com/2010/03/25/how-to-take-full-page-or-full-canvas-screenshots-in-windows.html
An actual implementation of the technique can be found in my SnapsIE repository on GitHub (username: nirvdrum). Unfortunately I don't have enough karma to post two hyperlinks. The repository is linked from the article though.
It is very likely an IE optimisation that avoids drawing more than required. Might you be able to scroll the window and call IViewObject::Draw in a loop without any animation occurring?
I'm surprised that Windows caps the max size at the current screen resolution. Are you sure about that?