If an application that uses Direct3D to draw graphics loads a DLL that wants to do the same (independently, in a different window), is that safe?
Also, can two IDirect3DDevice9 instances render to the same window?
The answer is yes, a DLL can use DirectX independently of its loading application. I still don't know whether two apps can draw to the same window.
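As a rough sketch of what "independently" means in practice (using the SharpDX Direct3D 9 wrapper purely so the example stays in C#; the same idea applies in native C++), the DLL would create its own Direct3D object and device against a window handle it owns, never touching the host application's device:

```csharp
using System;
using SharpDX.Direct3D9;

// Hypothetical helper inside the plug-in DLL: it creates its own Direct3D
// object and device, targeting a window (HWND) that the DLL owns, so it
// shares no state with the host application's device.
public static class PluginRenderer
{
    public static Device CreateOwnDevice(IntPtr ownWindowHandle)
    {
        var d3d = new Direct3D();

        var presentParams = new PresentParameters
        {
            Windowed = true,
            SwapEffect = SwapEffect.Discard,
            BackBufferFormat = Format.Unknown,     // use the current display format
            DeviceWindowHandle = ownWindowHandle   // the DLL's window, not the host's
        };

        // A second device in the same process is fine; each device keeps its
        // own resources, render state and swap chain.
        return new Device(d3d, 0, DeviceType.Hardware, ownWindowHandle,
                          CreateFlags.HardwareVertexProcessing, presentParams);
    }
}
```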
I am new to UWP app development and want to understand how a XAML control is rendered in a UWP application. Which class implements the rendering logic, and is that code open source? Also, when I create a new custom control, do I need to take care of rendering myself?
How is a XAML control rendered in a UWP application?
This is a big topic. To understand it, you need to know the UI framework layering: Windows.UI.XAML -> Windows.UI.Composition -> DirectX family.
As the official documentation puts it, the Visual layer provides a high-performance, retained-mode API for graphics, effects, and animations, and is the foundation for all UI across Windows devices. You define your UI in a declarative manner, and the Visual layer relies on graphics hardware acceleration to ensure your content, effects, and animations are rendered in a smooth, glitch-free manner, independent of the app's UI thread.
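As a rough illustration of that layering, every XAML element is backed by a composition Visual that you can reach from C#. A minimal sketch (the host element is just whichever element you pass in):

```csharp
using System.Numerics;
using Windows.UI;
using Windows.UI.Composition;
using Windows.UI.Xaml;
using Windows.UI.Xaml.Hosting;

static class VisualLayerDemo
{
    // Attach a plain composition SpriteVisual underneath an existing XAML element.
    // 'host' is any element already in your page's visual tree.
    public static void AddSprite(UIElement host)
    {
        // Every XAML element is backed by a Visual from Windows.UI.Composition;
        // ElementCompositionPreview lets you drop down one layer to it.
        Visual hostVisual = ElementCompositionPreview.GetElementVisual(host);
        Compositor compositor = hostVisual.Compositor;

        // The composition engine renders this off the UI thread via DirectX.
        SpriteVisual sprite = compositor.CreateSpriteVisual();
        sprite.Size = new Vector2(100, 100);
        sprite.Brush = compositor.CreateColorBrush(Colors.CornflowerBlue);
        ElementCompositionPreview.SetElementChildVisual(host, sprite);
    }
}
```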
And is that code open source?
Currently, the open-source UWP controls are in WinUI, where you can find the logic for building the base custom controls.
Do I need to take care of rendering myself?
The rendering happens at a very low level. If you create a custom control that inherits from the Control class, you do not need to care about the rendering process; you just need to build the interface in the templated style and add the specific dependency properties in the code-behind. For more detail, please refer to the Templated Control documentation.
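A minimal sketch of that pattern (the control and property names are made up for illustration): the class only declares behavior and dependency properties, while the visuals live in the default style in Themes/Generic.xaml.

```csharp
using Windows.UI.Xaml;
using Windows.UI.Xaml.Controls;

// A templated control: the class carries behavior and dependency properties,
// while its appearance comes from the default style with
// TargetType="MyBadgeControl" defined in Themes/Generic.xaml.
public sealed class MyBadgeControl : Control
{
    public MyBadgeControl()
    {
        // Point the framework at the style in Generic.xaml.
        this.DefaultStyleKey = typeof(MyBadgeControl);
    }

    // A dependency property the template can bind to, e.g. {TemplateBinding Label}.
    public static readonly DependencyProperty LabelProperty =
        DependencyProperty.Register(
            nameof(Label),
            typeof(string),
            typeof(MyBadgeControl),
            new PropertyMetadata(string.Empty));

    public string Label
    {
        get => (string)GetValue(LabelProperty);
        set => SetValue(LabelProperty, value);
    }
}
```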
Is it possible to inspect GUI elements of OpenGL Windows applications, as you can with native Windows apps or apps written in WPF or Windows Forms? I would like to be able to read the text of labels and textboxes in the OpenGL application; I didn't manage to do it with UIAutomation in C#, nor with Selenium.
Unfortunately, no. OpenGL is a rasterisation API: by the time the UI element data reaches OpenGL, it is already in a format describing how to draw rectangles and the pixels of the font, not abstract UI element descriptions.
Your best bet is OCR.
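If you do go the OCR route, a rough C# sketch might look like the following (this assumes the System.Drawing screen-capture API and the Tesseract NuGet wrapper; the window bounds and tessdata path are placeholders you would supply):

```csharp
using System.Drawing;
using Tesseract;

class OpenGlTextReader
{
    // Capture the region of the screen occupied by the OpenGL window and run
    // OCR over it, since no UI element tree is exposed for automation.
    public static string ReadText(Rectangle windowBounds)
    {
        using (var bitmap = new Bitmap(windowBounds.Width, windowBounds.Height))
        {
            using (var g = Graphics.FromImage(bitmap))
            {
                g.CopyFromScreen(windowBounds.Location, Point.Empty, windowBounds.Size);
            }
            bitmap.Save("capture.png");

            // "./tessdata" must contain the language data files, e.g. eng.traineddata.
            using (var engine = new TesseractEngine("./tessdata", "eng", EngineMode.Default))
            using (var img = Pix.LoadFromFile("capture.png"))
            using (var page = engine.Process(img))
            {
                return page.GetText();
            }
        }
    }
}
```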
I am trying to develop a user interface that involves creating train sets on a canvas. I am planning to create this as a Windows Store app. I have some experience with Silverlight and XAML, and I plan to write the app in C# and XAML.
I have done some research on the web and could not find any decent framework that supports the following animations and UI activities:
Drag and drop controls
Snapping of controls
Reordering snapped controls
Drop-shadow effects for controls
I know how to do all of these in the Silverlight world, but Windows Store XAML imposes some limitations. Could anyone suggest a framework, or perhaps code samples, that could be useful for me?
The GridView control offers drag and drop, snapping and reordering of items right out of the box.
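For example, enabling all of that is just a matter of flipping a few properties (a minimal sketch; the item source is a placeholder list, and the same properties can be set directly in XAML):

```csharp
using Windows.UI.Xaml.Controls;

// Inside a Page: a GridView with drag-and-drop reordering enabled.
var trainSetView = new GridView
{
    CanDragItems = true,    // items can be picked up with a drag gesture
    CanReorderItems = true, // dropping an item back into the view reorders it
    AllowDrop = true,       // required for the drop half of the gesture
    ItemsSource = new[] { "Locomotive", "Coal car", "Passenger car" }
};
```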
Does the Metro-style app UI only support a fullscreen or tile-based environment?
Are there any other window styles?
"Metro" or Windows Store applications support several orientations and layout states. Depending on the resolution of your device, Filled and Snapped may or may not be available (1366 x 768 or greater resolution is required).
Within an application you can also use flyouts (such as those provided by the Callisto library for XAML applications, and included 'natively' for JavaScript).
Tiles are not really an application 'style.' Every Windows Store application can have a tile on the Start Screen, and it's part of your application's manifest to determine the appearance (though the user has ultimate control over the size and whether he/she wants the tile on the Start Screen at all). Additionally, through the use of notifications you can reflect additional information via the tile, but again you can't rely on the tile actually being there even if your application is installed.
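Updating the tile through a notification looks roughly like this (a sketch using one of the built-in square text templates; the message text is a placeholder):

```csharp
using Windows.UI.Notifications;

// Take a predefined tile template, fill in its text node, and hand it to the
// tile updater. If the user removed the tile from Start, this is simply ignored.
var tileXml = TileUpdateManager.GetTemplateContent(TileTemplateType.TileSquareText01);
var textNodes = tileXml.GetElementsByTagName("text");
textNodes[0].InnerText = "3 new messages";

var notification = new TileNotification(tileXml);
TileUpdateManager.CreateTileUpdaterForApplication().Update(notification);
```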
Developing for the iPhone has been my first experience with Objective-C and my first in-depth experience with Xcode. How difficult would it be to port an OpenGL ES iPhone app to the OS X desktop using OpenGL? I am not asking about the user interface; obviously there is no equivalent to the Cocoa Touch UI on the desktop. I am asking specifically about the app delegate and the OpenGL ES layers. Are there any major hurdles? Is it as straightforward as simply creating a new app delegate in a Cocoa project?
I've started looking into just the same thing, and it appears that OpenGL-heavy applications would be among the easiest to backport to the Mac. Pretty much everything in OpenGL ES is present in OpenGL on the desktop (with the exception of some of the fixed-point stuff), so that code can stay the same.
The way that OpenGL is handled on the iPhone is via a Core Animation layer (CAEAGLLayer), rather than a specific view. Therefore, you should be able to transfer that across to a Leopard-based desktop application, although you'll need to convert all references to EAGL classes to their OpenGL equivalent (EAGLContext to NSOpenGLContext, for example). You could render into a CAOpenGLLayer that's displayed by itself, or use that layer to back a custom NSView.
The fundamental structure of a desktop Cocoa application will be different than a Cocoa Touch one, but you should be able to start from one of the Xcode templates and add back in your components from the Cocoa Touch application.
Again, I haven't yet done this for my application, but it looks reasonably straightforward.