When we build a Universal app for Windows 10, we can use adaptive triggers to support multiple resolutions.
In this case a separate layout is used for each visual state. If we have 3 sizes to adapt to, a layout must be created for each of them, and as a result most controls end up with multiple duplicates, which stay hidden and only become visible in the appropriate visual state.
All of these controls are loaded into memory and waste RAM, which can be dangerous on low-memory mobile phones (like the Lumia 620).
Is using a separate view the right solution for this case?
Update
If anyone needs them, here are good and very simple articles about element layout reordering from Wintellect (AdaptiveTrigger, changing element positions in a grid) and GalaSoft (AdaptiveTrigger, RelativePanel).
Windows 10 XAML introduces the x:DeferLoadingStrategy attribute to mark controls that should be loaded only when needed. This lets you include all of the controls in the XAML without loading them into memory unless and until they are actually used. On mobile, where the device is likely to use only one size (or two for portrait/landscape), the layouts for the other sizes will never load.
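For example, something along these lines (a minimal sketch; the element name, page name, and 720px threshold are all invented) keeps a wide layout out of memory until code-behind asks for it with FindName:

```csharp
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

// Code-behind sketch. Assumes markup along these (hypothetical) lines:
//   <Grid x:Name="WideLayout" x:DeferLoadingStrategy="Lazy"
//         Visibility="Collapsed"> ... </Grid>
public sealed partial class MainPage : Page
{
    public MainPage()
    {
        InitializeComponent();
        SizeChanged += OnPageSizeChanged;
    }

    private void OnPageSizeChanged(object sender, SizeChangedEventArgs e)
    {
        // The 720px threshold is invented for the example.
        if (e.NewSize.Width >= 720)
        {
            // FindName realizes the deferred element the first time it is
            // called; until then the element costs no memory at all.
            var wideLayout = (Grid)FindName("WideLayout");
            wideLayout.Visibility = Visibility.Visible;
        }
    }
}
```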
For the case where you are using the same controls but just need slightly different positioning, I would look at moving them (possibly with a RelativePanel), as Jon Stødle suggests in the comments.
If there are bigger changes, then I'd look into separate layouts (like you're doing) within the same file or in separate XAML, but for simple positional changes that's probably overkill.
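If it helps to see the repositioning idea concretely, here is a rough code-behind sketch; in real markup you would more likely express the same thing with an AdaptiveTrigger and VisualState setters, as in the articles linked above, and the control names and width threshold here are made up:

```csharp
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

// Hypothetical code-behind: DetailPane and MasterList are x:Name'd children
// of a RelativePanel in the page's XAML. The same single set of controls is
// repositioned instead of being duplicated per visual state.
public sealed partial class MainPage : Page
{
    public MainPage()
    {
        InitializeComponent();
        SizeChanged += (s, e) =>
        {
            if (e.NewSize.Width >= 720)
            {
                // Wide: detail pane sits to the right of the list.
                RelativePanel.SetBelow(DetailPane, null);
                RelativePanel.SetRightOf(DetailPane, MasterList);
            }
            else
            {
                // Narrow: detail pane drops below the list.
                RelativePanel.SetRightOf(DetailPane, null);
                RelativePanel.SetBelow(DetailPane, MasterList);
            }
        };
    }
}
```

The point is that DetailPane and MasterList exist exactly once, so nothing is duplicated or kept hidden in memory.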
Related
Using C++/CLI in Visual Studio, I want to create a 2D simulation, with options to change the user's inputs on one side of the screen and the simulation on the other side. (The user's inputs would be used to calculate what to draw for the simulation.)
I would like to be able to do this within a panel/fixed region, keeping the drawing separate from UI elements (buttons etc.). Essentially, I would like to draw multiple dots on the screen, with the positions of those dots changing every second. The trouble is that all the drawing examples I've seen take up the whole form.
Which libraries can I use, and how, to create 2D drawings inside a fixed region like a panel, either by controlling the colours of individual pixels or in some other way?
This really depends on what GUI toolkit you're using.
If you're using WinForms, create a control and override the OnPaint method.
If you're using WPF, I'd use WriteableBitmap.
There are other methods for both toolkits, of course, but those are the ones I would use. There are also things like DirectX and OpenGL, but it sounds like you want something simple, so those would probably be overkill.
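For the WinForms route, a rough sketch looks like this (C# for brevity; the same OnPaint override works from C++/CLI, and the dot count, colour, and timer interval are arbitrary):

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

// A panel-like control that owns its own drawing surface and repaints a set
// of dots once per second, independently of the rest of the form.
public class SimulationPanel : Panel
{
    private readonly List<PointF> dots = new List<PointF>();
    private readonly Random random = new Random();
    private readonly Timer timer = new Timer { Interval = 1000 };

    public SimulationPanel()
    {
        // Reduce flicker while repainting every tick.
        DoubleBuffered = true;
        timer.Tick += (s, e) => { UpdateDots(); Invalidate(); };
        timer.Start();
    }

    private void UpdateDots()
    {
        // Place the dots at new random positions; a real simulation step
        // would compute these from the user's inputs instead.
        dots.Clear();
        for (int i = 0; i < 50; i++)
            dots.Add(new PointF(random.Next(Width), random.Next(Height)));
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        foreach (var dot in dots)
            e.Graphics.FillEllipse(Brushes.OrangeRed, dot.X, dot.Y, 4, 4);
    }
}
```

Drop the control onto the form next to your buttons and it only ever paints inside its own bounds.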
1. Cocoa uses a drawing system (user coordinate space) measured in "points", which are resolution independent... sounds great.
2. While we need to be concerned with our app running at many resolutions, Cocoa is going to take care of that for us per (1) above... sounds too good to be true!
3. It does scale our controls as the resolution changes... this is good.
4. BUT the screen size increases as my resolution increases... this is not good; I thought we had a drawing canvas that was independent of the resolution!
What if the controls shrink to silly small levels as the resolution increases - should I be concerned about this?
To summarize: is there a "standard" resolution I should design for, so that all the automatic scaling by Apple will look fine?
[Confused while reading the Apple Programmer Guide on the topic of Drawing]
You do not need to be concerned about this. The user is only allowed to select resolutions that make sense given the physical size of the display, so the standard controls will always be "large enough". You just need to test your app on Retina and non-Retina displays (and ideally both at the same time, with an external 1x monitor plugged into a 2x machine; move your windows between the two screens and check that your images update accordingly).
I have already sort of asked this question here (Previous Question), but it only got a handful of views and zero answers/comments, so I thought I'd give it another go with some more info that I've found.
I basically have a Windows Store DirectX + XAML app that I'm developing. I currently have the problem that the Rendering event of the SwapChainBackgroundPanel that I use for DirectX rendering (as per the Windows 8 example on MSDN) sometimes isn't called when the user is interacting with the app.
It will continue to update if I am doing something with the camera, such as changing what it's looking at based on touch/mouse position, but it won't be called while I am picking, and I don't know why.
I use the standard GPU picking method (where I render the scene with a unique color for each object and then read back a 1x1 texture at the press position to find the selected object), but when I use this picking technique to select multiple objects (the user drags their finger/mouse over many objects), Rendering isn't called. So in effect lots of objects get selected, but the user only sees this when they lift their finger/release the mouse button.
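To be clear about what I mean by a unique colour per object, the mapping is essentially the following (a simplified C# illustration only; my real code does the encode on the DirectX side and the decode on the pixel read back from the 1x1 staging texture, and the names here are made up):

```csharp
// Each pickable object index is encoded into a 24-bit RGB value, and the
// pixel sampled from the 1x1 picking texture is decoded back to the index.
public static class PickingIds
{
    // Encode a 0-based object index as an RGB colour (0 is reserved for
    // "nothing hit", so indices are shifted by one).
    public static void Encode(int index, out byte r, out byte g, out byte b)
    {
        int id = index + 1;
        r = (byte)(id >> 16);
        g = (byte)(id >> 8);
        b = (byte)id;
    }

    // Decode a sampled pixel back to the object index, or -1 for background.
    public static int Decode(byte r, byte g, byte b)
    {
        int id = (r << 16) | (g << 8) | b;
        return id - 1;
    }
}
```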
Is there any reason why this is happening? Is it because of the GPU picking method? And if so is there a way around it rather than using the ray-trace picking method (which considerably slows down picking for a large number of objects)?
Has anyone else had this problem? Is there an explanation from Microsoft anywhere that it is deliberate that rendering doesn't get called while this is happening?
Thanks for your time.
I'm developing a Windows 8 Metro-style application and I want to use vector images. As there seems to be no direct support for SVG images, I am trying to use a XAML fragment consisting of multiple shapes (a path and some lines) as an image. I would like to have a resource dictionary entry with the composite shape and be able to include it in different pages. Ideally, I would also like to be able to resolve a specific composite shape from a data-bound property.
From what I've read, the WPF approach was to have a VisualBrush or DrawingBrush consisting of the shapes, but there are no such classes in Windows 8 (and it seems like it's not even possible to derive from Brush).
How am I supposed to do this using WinRT UI?
No, you cannot use a DrawingBrush as the value of a background property in WinRT XAML. It's too bad, huh? It seems like a very powerful feature to set up the fill of an object with vector content. In fact, DrawingBrush is not even part of Windows 8 yet. It is what it is. For now, images are a fine solution. But we feel your pain.
I might as well toss in that VisualBrush is not part of WinRT-XAML either.
Each XAML fragment is really a UI element. I think the simplest approach would be to put the XAML into its own user control and then add the user control to each page you want to display the "drawing" on. If you want the user control to display different shapes, you can expose a property on the control, data bind to that property, and when the property value changes read the value and toggle the visibility of the various XAML shapes to show or hide whatever parts of the composite image you want. It is a bit brute force, but it will do what you want.
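A rough sketch of that user control (hypothetical names throughout; the control's XAML is assumed to contain shapes named StarShape and CircleShape) might look like this, using a dependency property with a change callback so that data binding actually triggers the toggle:

```csharp
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

// Hypothetical user control hosting a composite vector drawing. The Kind
// property selects which of the shapes defined in its XAML is visible.
public sealed partial class VectorIcon : UserControl
{
    public static readonly DependencyProperty KindProperty =
        DependencyProperty.Register(
            "Kind", typeof(string), typeof(VectorIcon),
            new PropertyMetadata("Star", OnKindChanged));

    public string Kind
    {
        get { return (string)GetValue(KindProperty); }
        set { SetValue(KindProperty, value); }
    }

    public VectorIcon()
    {
        InitializeComponent();
    }

    private static void OnKindChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var icon = (VectorIcon)d;
        var kind = (string)e.NewValue;
        // Toggle visibility of the shapes defined in the control's XAML.
        icon.StarShape.Visibility =
            kind == "Star" ? Visibility.Visible : Visibility.Collapsed;
        icon.CircleShape.Visibility =
            kind == "Circle" ? Visibility.Visible : Visibility.Collapsed;
    }
}
```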
The Windows 8 way to get vector graphics is just to use Viewbox + Canvas and Path elements. It works well in my opinion, though I do miss VisualBrush.
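For illustration, the same Viewbox + Canvas + Path structure can also be created at runtime with XamlReader (a self-contained sketch; normally the markup would just live in a page or resource, and RootGrid in the usage comment is a made-up container):

```csharp
using Windows.UI.Xaml.Controls;
using Windows.UI.Xaml.Markup;

// Builds a scalable vector icon from a Viewbox + Canvas + Path fragment.
public static class VectorArt
{
    public static Viewbox CreateTriangleIcon()
    {
        const string xaml =
            "<Viewbox xmlns='http://schemas.microsoft.com/winfx/2006/xaml/presentation' " +
            "Stretch='Uniform'>" +
            "  <Canvas Width='100' Height='100'>" +
            "    <Path Fill='SteelBlue' Data='M 10,90 L 50,10 90,90 Z'/>" +
            "    <Line X1='10' Y1='95' X2='90' Y2='95' Stroke='SteelBlue' StrokeThickness='2'/>" +
            "  </Canvas>" +
            "</Viewbox>";
        return (Viewbox)XamlReader.Load(xaml);
    }
}

// Usage (inside a page): RootGrid.Children.Add(VectorArt.CreateTriangleIcon());
```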
I am learning Cocoa, and I am creating an application that will require a layout similar to the screenshot below (this seems like a very common layout approach).
What kind of controls/architecture would this type of Cocoa application use?
I'm still in the early stages of learning/reading, and so far I only know of document-based applications, but this type of layout doesn't look like a document-based app since it doesn't really require multiple open windows.
If it isn't document-based, is there a name for this kind of design pattern or layout?
From what I know so far, I would describe it like this:
I would be grateful if someone could give me a detailed overview of the high-level design for an app like this, i.e. things like the number of panels, the views used, controls, controllers, etc.
Also, a few quick sub-questions:
What kind of menu controls are those in the left pane that expand and display sub-elements?
When the preferences window is displayed, what is the effect called that makes it appear in an animated way (like Address Book does), where a small window expands to its correct size in an animated fashion?
You are right that this is probably not a document-based application, as those open each document in a new window by default.
To lay out the window like that, there’d be an NSSplitView that contains the 3 panes. Each pane may optionally contain a view loaded from an NSViewController, which can help keep the code modularised, but whether that is appropriate depends on what you’re trying to do.
The left pane would be an NSOutlineView (an NSTableView subclass) and the middle an NSTableView, but I’m not sure exactly how the right-hand view would be created (lots of custom NSViews and other things, possibly a WebView).
That popover options window is possibly an NSPopover (which contains an NSViewController), but that’s only available on OS X 10.7 and later, so it may also be totally custom for backwards compatibility and easier customisation.
Also note this is a fairly complicated example you’ve given, with lots of custom controls that are probably harder to create than they look:
Getting the outline view on the left to show unread counts and icons is (from memory) not built into AppKit, so that was all custom-created. To do things like that, you’ll need a solid understanding of NSCell vs NSView, and ideally you should also know about Core Animation layer-backed views and what to use for different aspects.
The window has a taller-than-usual title bar. This means the developer probably had to do some crazy stuff to get it to work, if not create the whole window from scratch.
That’s just the start. There’s lots of really nice design in there that’s custom and done from scratch.
Designing Mac apps can be hard sometimes. AppKit is pretty old (going back to the NeXT days) and has lots of legacy stuck in it. UIKit on iOS, on the other hand, is quite nice – Apple clearly learned from their past and made things much better.
I’ve hardly touched on the controllers and model behind all that. There are lots of different ways you could do it. For persistence, you could use Core Data, SQLite, or NSKeyedArchiver, just to name a few. Brent Simmons (past developer of another RSS reader, NetNewsWire) wrote some interesting blog posts about that:
http://inessential.com/2010/02/26/on_switching_away_from_core_data
http://inessential.com/2011/09/22/core_data_revisited
The way you design your model & controllers really depends on the specific problem. Cocoa really forces you to stick to MVC though – if you don’t, things are guaranteed to end up messy.
I hope that all helps! I’m really only just learning myself too.
Apple refers to this type of application design as Single-window, library- (or “shoebox”) style and gives a number of recommendations for this design choice in the docs.
(see Mac App Programming Guide)