Window Size for Mac application - objective-c

I am new to Mac application development using Cocoa, and I am confused about what the window/view size should be. In an iOS app we have well-defined dimensions for everything from the smaller iPods to the latest devices, but how do you set the size in a Cocoa app?
Also I want to set the deployment target to OS X 10.6, which does not support AutoLayout.
So what would be a good way to resize windows so that they fit every desktop?

There are a couple of things you need to consider when sizing windows for a display under OS X.
First, there's the size of the display area. You could use
NSRect frame = [[NSScreen mainScreen] frame];
but that's a bit simplistic because the user may be displaying the Dock, and there's almost always the menu bar displayed as well. So a better way of determining the maximum display area for the Desktop would be:
NSRect frame = [[NSScreen mainScreen] visibleFrame];
which respects the menu bar and Dock.
As others have pointed out, this rectangle is seldom the most ideal size for a window. Understand too that this rectangle is only a starting point, because your user may have multiple displays, and they contribute to the total area in which a window may be displayed. But when displaying a new window, you always start within this rectangle. Look to NSScreen's documentation to determine this; the methods above will provide a springboard to your understanding.
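For example, here is a minimal sketch (the myWindow variable and the 900 x 600 size are placeholders for your own values) that clamps a preferred size to the usable Desktop area and centers the window in it:
NSRect visible = [[NSScreen mainScreen] visibleFrame];
NSSize desired = NSMakeSize(900, 600);   // your window's preferred size
NSSize size = NSMakeSize(MIN(desired.width, NSWidth(visible)),
                         MIN(desired.height, NSHeight(visible)));
NSRect frame = NSMakeRect(NSMidX(visible) - size.width / 2.0,
                          NSMidY(visible) - size.height / 2.0,
                          size.width, size.height);
[myWindow setFrame:frame display:YES];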
I don't know how you plan to create and use your window, but for all but the most simple applications, you'll probably use a subclass of NSWindowController with it. If so, it will be your window's delegate. And so there's an important window delegate method that you'll need to implement in it, and it's this:
- (NSRect)windowWillUseStandardFrame:(NSWindow *)window defaultFrame:(NSRect)newFrame
This is where you determine the 'standard' location and size of your window's frame. It's called by the window when the window is zoomed 'out' to what is called the "standard state" (versus the size the user makes it, the "user state"). In other words, it's the rectangle that best suits the content of your window, yet keeps in mind the rectangle describing the 'safe' area in which you can display it. Unfortunately, I can't tell you exactly how to code it, because it depends entirely on what you're displaying inside your window.
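As a rough sketch only (the 1000 x 700 "ideal" size is purely illustrative), an implementation might clamp your content's preferred size to the safe rectangle you are handed:
- (NSRect)windowWillUseStandardFrame:(NSWindow *)window defaultFrame:(NSRect)newFrame
{
    NSSize ideal = NSMakeSize(1000, 700);   // whatever best suits your content
    NSRect standard = newFrame;
    standard.size.width = MIN(ideal.width, NSWidth(newFrame));
    standard.size.height = MIN(ideal.height, NSHeight(newFrame));
    // Keep the window's top-left corner where the user left it, but stay inside the safe area.
    standard.origin.x = MAX(MIN(NSMinX([window frame]), NSMaxX(newFrame) - standard.size.width), NSMinX(newFrame));
    standard.origin.y = MAX(NSMaxY([window frame]) - standard.size.height, NSMinY(newFrame));
    return standard;
}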
You can see, then, that the definition of 'proper' is something entirely different from that in iOS. Best wishes to you in your endeavors.

Don't think too much in terms of programming for an iOS device. On OS X the user can display multiple windows on the screen next to each other, and may want to do so depending on the task your app handles.
You're going to have to design your window so that all objects fit inside. Based on that, you can set a minimum size as well as a maximum size. Consider that the smallest screen resolutions are around 1200 x 700, so your minimum size shouldn't exceed that.
Before Auto Layout there was the "springs & struts" model for defining how objects resize or reposition as the window frame changes.
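Springs & struts can be configured in Interface Builder or in code via autoresizing masks; here is a hedged sketch (the views, sizes and the 600 x 400 minimum are illustrative):
// A scroll view pinned to all edges, resizing with the window.
NSScrollView *scrollView = [[NSScrollView alloc] initWithFrame:[window.contentView bounds]];
scrollView.autoresizingMask = NSViewWidthSizable | NSViewHeightSizable;
[window.contentView addSubview:scrollView];
// A button that stays anchored to the bottom-left corner as the window resizes.
NSButton *button = [[NSButton alloc] initWithFrame:NSMakeRect(20, 20, 100, 24)];
button.autoresizingMask = NSViewMaxXMargin | NSViewMaxYMargin;
[window.contentView addSubview:button];
// Enforce a minimum size so the layout never collapses below its design.
window.contentMinSize = NSMakeSize(600, 400);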
I recommend you start laying out your app on paper or with a graphics tool and then see how much space is necessary. If more space than the minimum resolution is required, you will have to use scroll views, split views or similar to make the interface work at different window sizes.
A lot of useful information can be found in the OS X Human Interface Guidelines (HIG).

Related

Can a VkSurfaceKHR represent only a whole window? Or also a portion of a window (ie some rectangular widget)? [duplicate]

We have an application which has a window with a horizontal toolbar at the top. The Windows-level handle we pass to Vulkan to create the surface ends up including the area behind the toolbar, i.e. Vulkan is completely unaware of the toolbar and the surface includes the space "behind" it.
My question is, can a surface represent only a portion of this window? We obviously need not process data for the pixels that lie behind the toolbar, and so want to avoid creating a frame buffer, depth buffer etc. bigger than necessary.
I fully understand that I can accomplish this visually using a viewport which e.g. has an origin offset and height compensation, however to my understanding the frame buffer actually still contains information for pixels the full size of the surface (e.g. 800x600 for an 800x600 client-area window) even if I am only rendering to a portion of that window. The frame buffer then gets "mapped" and therefore squished to the viewport area.
All of this has sort of left me wondering what the purpose of a viewport is. If it simply defines a mapping from your image buffer to an area in the surface, is that not highly inefficient if your framebuffer contains considerably more pixels than the area it is being mapped to? Would it not make sense to rather section off portions of your application using e.g. different window HWNDs first, and then create separate surfaces from there onwards?
How can I avoid rendering to an area bigger than necessary?
The way this gets handled for pretty much every application is that the client area of a window (ie: the stuff that isn't toolbars and the like) is a child window of the main frame window. When the frame is resized, you resize the client window to match the new client area (taking into account the new sizes of the toolbars/etc).
It is this client window which should have a Vulkan surface created for it.
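A rough Win32 sketch of that setup (frameWnd, hInstance, instance, toolbarHeight and the client dimensions are placeholders, and the "VulkanClient" window class is assumed to have been registered elsewhere):
#define VK_USE_PLATFORM_WIN32_KHR
#include <windows.h>
#include <vulkan/vulkan.h>

// Create a child window covering only the area below the toolbar; resize it
// again from the frame window's WM_SIZE handler when the frame changes.
HWND clientWnd = CreateWindowEx(0, TEXT("VulkanClient"), NULL,
                                WS_CHILD | WS_VISIBLE,
                                0, toolbarHeight,
                                clientWidth, clientHeight - toolbarHeight,
                                frameWnd, NULL, hInstance, NULL);

// Create the Vulkan surface for the child window, not the frame window.
VkWin32SurfaceCreateInfoKHR surfaceInfo = { 0 };
surfaceInfo.sType = VK_STRUCTURE_TYPE_WIN32_SURFACE_CREATE_INFO_KHR;
surfaceInfo.hinstance = hInstance;
surfaceInfo.hwnd = clientWnd;
VkSurfaceKHR surface;
vkCreateWin32SurfaceKHR(instance, &surfaceInfo, NULL, &surface);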

How to track the location of a window belonging to another app

When screen sharing a specific window on macOS with Zoom or Skype/Teams, they draw a red or green highlight border around that window (which belongs to a different application) to indicate it is being shared. The border is following the target window in real time, with resizing, z-order changes etc.
What macOS APIs and techniques might be used to achieve this effect?
You can find the location of windows using CGWindowListCopyWindowInfo and related APIs, which are available to sandboxed apps.
This is a very fast and efficient API, fast enough to be polled. The SonOfGrab sample code is a great platform to try out this stuff.
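As a small illustration (the function name and the assumption that you already know the target's kCGWindowNumber are mine, not from SonOfGrab), polling for a window's bounds might look like this:
#import <Cocoa/Cocoa.h>

// Return the on-screen bounds of the window with the given window ID,
// or CGRectNull if it can no longer be found.
CGRect boundsForWindowID(CGWindowID targetWindowID) {
    CGRect bounds = CGRectNull;
    CFArrayRef info = CGWindowListCopyWindowInfo(kCGWindowListOptionIncludingWindow,
                                                 targetWindowID);
    if (info != NULL && CFArrayGetCount(info) > 0) {
        NSDictionary *entry = (__bridge NSDictionary *)CFArrayGetValueAtIndex(info, 0);
        CFDictionaryRef boundsDict = (__bridge CFDictionaryRef)entry[(id)kCGWindowBounds];
        CGRectMakeWithDictionaryRepresentation(boundsDict, &bounds);
    }
    if (info != NULL) CFRelease(info);
    return bounds;   // note: CGWindow coordinates are top-left based
}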
You can also install a global event tap using +[NSEvent addGlobalMonitorForEventsMatchingMask:handler:] (available in sandbox) to track mouse down, drag and mouse up events and then you can respond immediately whenever the user starts or releases a drag. This way your response will be snappy.
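A hedged sketch of that monitor (refreshHighlightWindowPosition is a hypothetical method of your own):
id monitor = [NSEvent addGlobalMonitorForEventsMatchingMask:
                  (NSEventMaskLeftMouseDown | NSEventMaskLeftMouseDragged | NSEventMaskLeftMouseUp)
                                                    handler:^(NSEvent *event) {
    // Re-read the tracked window's bounds and move the highlight window to match.
    [self refreshHighlightWindowPosition];
}];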
(Drawing a border would be done by creating your own transparent window, slightly larger than, and at the same window layer as, the window you are tracking. And then simply draw a pretty green box into it. I'm not exactly sure about setting the z-order. The details of this part would be best as a separate question.)

Is it possible to control the amount of vibrancy of the translucent/blurry background in Yosemite?

I would like to know whether it is possible to control the amount of translucency in the so-called vibrancy effect introduced recently by Yosemite, which can be implemented in an Objective-C app by employing the NSVisualEffectView class.
Here is an example to be more specific. Consider the translucent effect shown by OS X Yosemite when the volume level is changed:
The vibrancy is much stronger than what one obtains by using a simple NSVisualEffectView (shown in the following image).
If we compare the two images (please ignore the different speaker icons and focus on the background), we see that the amount of vibrancy (the strength of the Gaussian blur) is much stronger in the OS X Yosemite volume window than in my app using NSVisualEffectView. How can one obtain that?
In OS X El Capitan Apple introduced additional materials that can be applied to NSVisualEffectView.
From the AppKit Release Notes for OS X v10.11:
NSVisualEffectView has additional materials available, and they are now organized in two types of categories. First, there are abstract system defined materials defined by how they should be used: NSVisualEffectMaterialAppearanceBased, NSVisualEffectMaterialTitlebar, NSVisualEffectMaterialMenu (new in 10.11), NSVisualEffectMaterialPopover (new in 10.11), and NSVisualEffectMaterialSidebar (new in 10.11). Use these materials when you are attempting to create a design that mimics these standard UI pieces. Next, there are specific palette materials that can be used more directly to create a specific design or look. These are: NSVisualEffectMaterialLight, NSVisualEffectMaterialDark, NSVisualEffectMaterialMediumLight (new to 10.11), and NSVisualEffectMaterialUltraDark (new to 10.11). These colors may vary slightly depending on the blendingMode set on the NSVisualEffectView; in some cases, they may be the same as another material.
Even though this only applies to OS X El Capitan, you can now create a blur effect that is closer to the original. I assume Apple uses the NSVisualEffectMaterialMediumLight material for its volume view.
I achieve this effect as follows (a rough sketch follows the steps):
have an NSVisualEffectView to get vibrancy
have a custom view of the same size on top of the visual effect view
set the background color of the custom view to white and alpha of 0 (completely transparent)
increase the alpha of the custom view to make it less translucent (less blurry)
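Here is a hedged sketch of those steps (the view frames and the 0.3 alpha are illustrative; start at 0 and raise it to taste):
NSVisualEffectView *effectView = [[NSVisualEffectView alloc] initWithFrame:[window.contentView bounds]];
effectView.blendingMode = NSVisualEffectBlendingModeBehindWindow;
effectView.material = NSVisualEffectMaterialLight;   // or MediumLight on 10.11+
[window.contentView addSubview:effectView];

// A plain white view on top; increasing its alpha washes out the blur,
// keeping it at 0 leaves the full vibrancy visible.
NSView *overlay = [[NSView alloc] initWithFrame:effectView.bounds];
overlay.wantsLayer = YES;
overlay.layer.backgroundColor = [[NSColor colorWithWhite:1.0 alpha:0.3] CGColor];
[effectView addSubview:overlay];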

NSImageView with high-resolution image causes extreme slowdown when resizing the window

I am creating a simple photo filter app for OS X and I am displaying a photo on an NSImageView (actually two photos on top of each other with two NSImageViews, but the question still applies for a single view too). Everything works great, but when I try to resize the window that contains the NSImageViews, the window (which also resizes the NSImageViews) resizes very slowly, at less than 1 fps, which hurts the user experience. I want window resizing to be as smooth as possible. When I disable resizing of the image views, the window resizes smoothly, so the cause of the slowdown is those NSImageViews.
I'm loading 20-megapixel images from my DSLR. When I scale them down to a reasonable size for the screen (e.g. 1024x768), they resize smoothly, so the problem is the way NSImageView renders the images: I assume it re-renders the 20-megapixel image every time it needs to redraw it into whatever the target frame of the view is.
How can I make NSImageView rescale more smoothly? Should I feed it with a scaled-down version of my images? I don't want to do that as it's a photo editing app that also targets retina display screens and the viewport would actually be quite large. I can do it, but it's my final option. Other than scaling down, how can I make NSImageView resize faster?
I believe part of the solution you are looking for is in NSImage's representations. You can add many representations to an image with addRepresentation:, and I believe there is some intelligent selection done when drawing. In your case, I think you would need to add both representations (the scaled-down and the full-resolution bitmap) to the NSImage. I strongly suspect drawRect: would then pick the low-resolution version. I would make sure "scale up or down" is selected in NSImageView, because the default is scale down only, which may force your full-resolution image to be used most of the time. There is some discussion in Apple's documentation regarding "matching" under "Setting the Image Representation Selection Criteria" in NSImage, although at first sight this may not be sufficient.
Then, whenever you need to do something with the full image, you would request the full resolution image by going through the representations ([NSImage representations] returns an array of NSImageRep).
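A rough sketch of that idea (photoURL, imageView and the 1024-pixel target width are placeholders):
NSImage *fullImage = [[NSImage alloc] initWithContentsOfURL:photoURL];

// Render a scaled-down bitmap representation of the same photo.
CGFloat scale = 1024.0 / fullImage.size.width;
NSSize smallSize = NSMakeSize(1024.0, round(fullImage.size.height * scale));
NSImage *smallImage = [[NSImage alloc] initWithSize:smallSize];
[smallImage lockFocus];
[fullImage drawInRect:NSMakeRect(0, 0, smallSize.width, smallSize.height)
             fromRect:NSZeroRect
            operation:NSCompositeSourceOver
             fraction:1.0];
[smallImage unlockFocus];
NSBitmapImageRep *smallRep = [NSBitmapImageRep imageRepWithData:[smallImage TIFFRepresentation]];

// The image now carries both the full-resolution and the screen-sized representation.
[fullImage addRepresentation:smallRep];
imageView.image = fullImage;
imageView.imageScaling = NSImageScaleProportionallyUpOrDown;   // "scale up or down"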

I want to animate the movement of a foreign OS X app's window

Background: I recently got two monitors and want a way to move the focused window to the other screen and vice versa. I've achieved this by using the Accessibility API. (Specifically, I get an AXUIElementRef that holds the AXUIElement associated with the focused window, then I set the NSAccessibilityPositionAttribute value to move the window.)
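For reference, the call looks roughly like this (focusedWindow is the AXUIElementRef obtained elsewhere, and the helper name is illustrative):
#import <ApplicationServices/ApplicationServices.h>

// Move the window represented by focusedWindow to newOrigin (screen coordinates).
void moveWindowTo(AXUIElementRef focusedWindow, CGPoint newOrigin) {
    AXValueRef positionValue = AXValueCreate(kAXValueCGPointType, &newOrigin);
    AXUIElementSetAttributeValue(focusedWindow,
                                 kAXPositionAttribute,   // the CF name for NSAccessibilityPositionAttribute
                                 positionValue);
    CFRelease(positionValue);
}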
I have this working almost exactly the way I want it to, except I want to animate the movement of windows. I thought that if I could get the NSWindow somehow, I could get its layer and use CoreAnimation to animate the window movement.
Unfortunately, I found out that this isn't possible. (Correct me if I'm wrong though -- if there's a way to do it this way it'd be great!) So I'm asking you all for help. How should I go about animating the movement of the focused window, if I have access to the AXUIElementRef?
EDIT:
I was able to get a crude animation going by creating a while loop that moves the window's position by a small amount on each iteration. However, the results are pretty sub-par. As you can guess, it takes a lot of unnecessary processing power, and it's still very choppy. There must be a better way.
The best way I can imagine would be to perform some hacky property comparison between the AXUIElement info values for the window and the info returned from the CGWindow API. Once you're able to work out which windows in the CGWindow API match your AXUIElementRefs, you could grab bitmaps of the current window contents, overlay the screen with your own custom animated drawing of the faux windows, and then, as you drop the overlay, set the real AXUIElementRefs to the desired end-of-animation positions.
Hacky, tho.