Will the Vulkan API handle window creation?

I have heard that Vulkan will unify initialisation across different operating systems. Does that mean Vulkan creates the window and handles mouse/keyboard events, so I can avoid OS-specific programming?

It won't. Window creation will be platform-specific, and a WSI extension will let you link the window to a renderable image that you can push to the screen.
From information gleaned from the presentations given so far, I expect you will use a platform-specific WSI extension to create a swapchain for your window.
Then, each time you wish to push a frame to the screen, you need to acquire a presentable image from the swapchain, render to it, and then present it.
See this slide pack from slide 109 onward.
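To make that flow concrete, here is a rough per-frame sketch against the Vulkan WSI functions (assuming the device, swapchain, queue, semaphores, and pre-recorded command buffers were already created during the platform-specific initialisation; all names are illustrative):

    #include <vulkan/vulkan.h>
    #include <vector>

    // Per-frame acquire/render/present loop, as described above.
    void drawFrame(VkDevice device, VkSwapchainKHR swapchain, VkQueue queue,
                   VkSemaphore imageAvailable, VkSemaphore renderFinished,
                   const std::vector<VkCommandBuffer>& commandBuffers)
    {
        // 1. Acquire a presentable image from the swapchain.
        uint32_t imageIndex = 0;
        vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                              imageAvailable, VK_NULL_HANDLE, &imageIndex);

        // 2. Submit the command buffer that renders into that image.
        VkPipelineStageFlags waitStage =
            VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
        VkSubmitInfo submit{};
        submit.sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO;
        submit.waitSemaphoreCount   = 1;
        submit.pWaitSemaphores      = &imageAvailable;
        submit.pWaitDstStageMask    = &waitStage;
        submit.commandBufferCount   = 1;
        submit.pCommandBuffers      = &commandBuffers[imageIndex];
        submit.signalSemaphoreCount = 1;
        submit.pSignalSemaphores    = &renderFinished;
        vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);

        // 3. Present the rendered image to the window.
        VkPresentInfoKHR present{};
        present.sType              = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
        present.waitSemaphoreCount = 1;
        present.pWaitSemaphores    = &renderFinished;
        present.swapchainCount     = 1;
        present.pSwapchains        = &swapchain;
        present.pImageIndices      = &imageIndex;
        vkQueuePresentKHR(queue, &present);
    }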

No. Vulkan is a low-level API for accessing GPUs; it does not deal with windows or input. In fact, it can easily be used in a "headless" manner, with no visual output at all.
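For example, nothing in basic initialisation touches a window at all; a minimal headless sketch might look like this (no WSI extensions requested):

    #include <vulkan/vulkan.h>

    // Minimal "headless" initialisation: no window, no WSI extensions.
    int main()
    {
        VkApplicationInfo app{};
        app.sType      = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.apiVersion = VK_API_VERSION_1_0;

        VkInstanceCreateInfo info{};
        info.sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        info.pApplicationInfo = &app;

        VkInstance instance = VK_NULL_HANDLE;
        if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS)
            return 1;

        uint32_t gpuCount = 0;
        vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
        // ... create a logical device and do compute or offscreen rendering;
        // no window or surface is ever involved ...
        vkDestroyInstance(instance, nullptr);
        return 0;
    }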

Probably not. The Vulkan API is a graphics library, much like OpenGL.
On Linux, for example, Ubuntu uses OpenGL for the desktop animation effects in Unity, and it could be replaced with Vulkan for better performance.
But I don't think Windows will change, as Microsoft has its own DirectX graphics library, and it would be odd for them to use something other than their own software.
The applications that will benefit most from Vulkan are games and other software that does 2D or 3D rendering.
It's very likely that most games will move to Vulkan, because it is cross-platform and will therefore reach more users, which equals more profit.
Khronos (the developers of the Vulkan API) are also bringing out tools that will largely port your application from OpenGL or DX12 to Vulkan, requiring less development/porting work on the software developer's side.
So...
Window creation: likely. (Although the code behind the window runs on the CPU, the library that draws the window on screen might be using Vulkan.) This differs greatly depending on which OS, distribution, and version you are working with.
Mouse/keyboard events: no, as this requires no graphical calculations, only CPU work.

Window frames are general desktop manager controls; you can display the Vulkan app's content in the client area, since otherwise Vulkan would have to provide interfaces to the desktop manager for window creation (i.e., a GUI library). You could instead simply create a device context (a DC on Windows; similar on the X server) and then manage the "Vulkan app" manually, like a windowed game with no chrome (frameless), but this would be a great deal of work right now.
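As a rough illustration of that manual route on Windows (a sketch only: no message loop, no error handling, and the class name is made up):

    #include <windows.h>

    // Create a frameless ("no chrome") window and grab its device context,
    // the way a windowed game might manage its own client area.
    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int)
    {
        WNDCLASSA wc = {};
        wc.lpfnWndProc   = DefWindowProcA;  // a real app supplies its own proc
        wc.hInstance     = hInst;
        wc.lpszClassName = "RenderHost";    // illustrative class name
        RegisterClassA(&wc);

        // WS_POPUP yields a bare client area: no frame, title bar, or chrome.
        HWND hwnd = CreateWindowA("RenderHost", "", WS_POPUP | WS_VISIBLE,
                                  100, 100, 800, 600,
                                  NULL, NULL, hInst, NULL);

        HDC dc = GetDC(hwnd);  // device context for the client area
        // ... hand the HWND/DC to the rendering layer and run a message
        // loop here; the real work is only sketched ...
        ReleaseDC(hwnd, dc);
        return 0;
    }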
An old-hand Windows developer's bible addressing device contexts and rendering, among many other things, is Programming Windows®, Fifth Edition (Developer Reference). It is a very good read and, from an agnostic point of view, provides a great deal of carry-over knowledge loosely applicable to most systems.

HoloLens external rendering

Does anyone have a good solution for external rendering for Microsoft HoloLens apps? Specifically: is it possible to let my laptop render a quantity of 3D objects that is too much for the HoloLens GPU, and then display them on the HoloLens over Wi-Fi, including the spatial mapping and interaction?
It's possible to render remotely, both directly from the Unity editor and from a built application.
While neither achieves your goal of a "good solution", both allow very intensive applications to at least run at all.
This walks you through how to add it to an app you're building:
https://learn.microsoft.com/en-us/windows/mixed-reality/add-holographic-remoting
This is for running directly from the editor:
https://blogs.unity3d.com/2018/05/30/create-enhanced-3d-visuals-with-holographic-emulation-in-uwp/
I don't think this is possible, since you can't really access the OS or the processor at all on the HoloLens. Even if you do manage to send the data to a third party to process, the data will still need to be run back through the HoloLens, which is really just the same as before.
You may find a way to hook up a VR backpack to it, but even then I highly doubt it would be possible.
If you are having trouble rendering 3D objects, you should reduce the number of triangles, use a lower-resolution shader, or reduce the size of the object. The biggest factor in processing 3D objects on the HoloLens is how much of the lens is being drawn on; if your object takes up 25% of the view instead of 100%, it will be easier to process.
Also, if you can't avoid having a lot of objects in the scene, check out LOD (level of detail), which reduces the resolution of objects based on your distance from them.

Is QML worth using for embedded systems that run at around 600 MHz with no GPU?

I'm planning to build an embedded system which is almost like an organizer, i.e. it handles contacts, games, applications, and Wi-Fi/2G/3G for internet access. I plan to build the UI with QML because of its ease of use and quick application building, running on a Linux kernel.
But after reading these articles:
http://qt-project.org/forums/viewthread/5820 &
http://en.roolz.org/Blog/Entries/2010/10/29_Qt_QML_on_embedded_devices.html
I am depressed and reconsidering my idea of using QML!
My hardware will have this configuration: a processor at around 600 MHz, 128 MB of RAM, and no GPU.
Please give me your comments on this and suggest some alternatives.
Thanks in advance.
inblueswithu
I have created a QML application for the Nokia E63, which has a 369 MHz processor and 128 MB of RAM; I don't think it has a GPU. It is a stopwatch application, and I animated button-click events (jumping balls). The animations are really smooth, even when two buttons jump at the same time. A 600 MHz processor can be expected to handle QML easily.
This is the link for the .sis file: http://store.ovi.com/content/184985. If you have a Nokia mobile, you can test it.
Maybe you should consider building QML elements by hand instead of generating them from Photoshop or GIMP. For example, using Item in place of Rectangle is more optimal. So give it a try, perhaps by creating a rough sketch with a good amount of animations, to check whether your processor can handle it. If it doesn't work as expected, then consider plain Qt widgets for your UI.
QML applications work fine on low-spec Nokia devices. I have made one for the 5800 XpressMusic smartphone without any problems.

Windows Low-Level Graphics [closed]

I'm new to programming. I know C/C++ and the basics of Win32. I am now trying to do graphics, but I want the fastest connection to the screen. I realize most people go with OpenGL or DirectX. But I don't want the overhead; I want to start from scratch and control the pixel data. I know about GDI bitmaps, but I'm not sure whether they give the best access to the data. I know that I have to talk through Windows, which is the trouble. Do OpenGL and DirectX compile down to the level of GDI, or is there a special way they do it? Do they bypass it or use similar code? Please don't ask why I want to do this. Maybe an explanation of how this is done would help, for example how Windows combines all windows to create the final image.
The most direct access to pixel data is via shaders, which are supported by both OpenGL and Direct3D. They are cross-compiled and run directly on the video card, so they carry no OpenGL overhead; OpenGL is just used to get them to the graphics card's own processor in the first place.
Anything you do on the CPU has to first be copied across the bus (typically PCI-express) to the video card. GDI is actually many levels removed from the graphics memory.
OpenGL, Direct3D, Direct2D, GDI, and GDI+ are all abstraction layers. The GPU vendor writes a driver that accepts these standard command functions, re-encodes the data in the card-specific format, then sends it to the card. Typically OpenGL and Direct3D are the most heavily optimized and also require the least amount of re-encoding.
How Windows combines the various on-screen windows to create the full-screen image depends heavily on what version of Windows you are talking about. DWM changed everything. Since DWM was introduced in Vista, programs render to their personal areas of GPU memory, then the window manager uses the texture lookup units of the video card to efficiently layer each of the programs' individual areas onto the screen primary buffer. When a program (usually a game) requests full-screen exclusive access, this step is skipped and the driver causes rendering commands from that application to affect the primary screen buffer directly.
Assuming that the CPU is generating the data which needs to be displayed, the fastest and most efficient approach is likely to be block-copying that data into a vertex buffer object and using OpenGL commands to rasterize it as lines or polygons or whatever (or the Direct3D equivalent). If you previously thought that GDI was the low-level interface, you've got some reading ahead of you to make this work. But it will run several orders of magnitude faster than pure GDI. So much faster, in fact, that the new architecture is that GDI (and WPF) is built on top of Direct2D and/or Direct3D.
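A sketch of that approach, assuming a current OpenGL 3.x context, an already-initialised loader such as GLAD, and a bound shader program with a vec2 attribute at location 0 (the function and variable names are illustrative):

    #include <glad/glad.h>  // assumes a loader such as GLAD is used

    // Block-copy CPU-generated vertex data into a VBO and let the GPU
    // rasterize it as lines.
    void drawCpuGeneratedLines(const float* cpuVertexData, int vertexCount)
    {
        static GLuint vao = 0, vbo = 0;
        if (vao == 0) {  // one-time setup
            glGenVertexArrays(1, &vao);
            glGenBuffers(1, &vbo);
        }
        glBindVertexArray(vao);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);

        // One block copy across the bus per update, rather than many small
        // CPU-side drawing calls.
        glBufferData(GL_ARRAY_BUFFER, vertexCount * 2 * sizeof(float),
                     cpuVertexData, GL_DYNAMIC_DRAW);

        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);

        // The driver re-encodes this into card-specific commands; GDI is
        // never involved.
        glDrawArrays(GL_LINES, 0, vertexCount);
    }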
but I want the fastest connection to the screen
I want to start from scratch and control the pixel data
You're asking for the impossible. You get the best performance when you use GPU-accelerated functions. However, in this case you don't get direct access to pixel data, and trying to access it (reading it back or writing it) will hurt performance, because you'll have to transfer data between system memory and video memory. As a result, anything that is streamed from system memory to video memory should be handled with care. Plus, you'll have to study the API.
If you "start from scratch" and do rendering on the CPU, you'll get easy access to pixel data and full control over the rendering, but performance will be inferior to the GPU (the CPU is less suited to parallel processing, and system memory can be an order of magnitude slower than video memory), and you'll spend a significant amount of time reinventing the wheel.
Do OpenGL and DirectX compile down to the level of GDI, or is there a special way they do it? Do they bypass it or use similar code?
No. They communicate with the graphics hardware nearly directly, using drivers provided by the hardware manufacturer. And those "direct hardware access" interfaces used by DirectX/OpenGL won't be available to you: they're hardware- and manufacturer-specific, may be internal, and are possibly even protected by patents.
There are, of course, a few legacy hardware interfaces which ARE available to you (namely VESA, or VGA mode 13h); however, their direct use is normally forbidden by the operating system (you can't easily access VESA on Windows), so to use them you'll have to boot MS-DOS, use a custom operating system, or use helper libraries (such as SVGAlib on Linux), which might only function with root privileges. And even if you do use VESA/VGA to render something yourself, on any hardware newer than a RivaTNT 2 Pro the performance will be horrible compared to hardware-accelerated rendering done through OpenGL/DirectX. Have you ever seen how Windows XP behaves when it doesn't have a proper GPU driver (it takes a second to redraw a window)? That's how fast it is going to run with direct VESA/VGA access.
Please don't ask why I want to do this.
It makes sense to ask why you would want to do that. Your "I want direct low-level access" approach was suitable maybe 15-20 years ago, in the DOS era. Right now the reasonable solution is to use an existing API (one that is maintained by somebody who isn't you) and find a way to fully utilize it. Of course, if you wanted to develop drivers, that would be another story.
Do OpenGL and DirectX compile down to the level of GDI, or is there a special way they do it
and
I realize most people go with OpenGL or DirectX. But I don't want the overhead
So what you're saying is that you have absolutely no clue what OpenGL or DirectX actually do, and yet you've decided that they are not efficient enough for your needs.
I'm sorry, but this is nonsense. It is impossible to answer a question like that.
In the real world, you have a small supercomputer dedicated to doing graphics. And you get access to it through OpenGL and DirectX.
And the reason they are fast is that they do NOT just "start from scratch and control the pixel data".
So please, if you want serious answers, let those with the knowledge to answer your questions decide which question is the right one to ask.
The correct answer, if you want efficient graphics, is to use DirectX or OpenGL.

How to estimate size of Windows CE run-time image

I am developing an application and need to estimate how many resources (RAM and ROM) it will need to run on a device. I have been looking online, but couldn't find any good tips on how to do this.
The system in question is an industrial system. The application itself will need to have a .NET Compact framework, and following components besides Windows CE Core: SYSGEN_HTTPD (Web Server), SYSGEN_ATL (Active Template Libraries), SYSGEN_SOAPTK_SERVER (SOAP Server), SYSGEN_MSXML_HTTP (XML/HTTP), SYSGEN_CPP_EH_AND_RTTI (Exception Handling and Runtime Type Information).
Tx
There really is no way to estimate this, because application behavior and code can have wildly different requirements. Something that does image manipulation is likely to require more RAM than a simple HMI, but even two graphical apps that do the same thing could be vastly different based on how their image algorithms and buffer sizes are set up.
The only way to really get an idea is to actually run the application and see what the footprint looks like. I would guess that you're going to want a BOM that includes at least 64 MB of RAM and 32 MB of flash at a bare minimum. Depending on the app, I'd probably ask for 128 MB of RAM. Flash would depend greatly on what the app needs to do.
Since you are specifying core OS components, and since I assume you can estimate your own application's resources, I take it you are asking for an estimate of the OS as a whole.
The simplest way to get an approximation is to build an emulator image (CE6 has an ARM one), which should give you a sense of the size. The difference from the final image will be in the size of the drivers for the actual platform you use.

How to determine if an application is using the GPU

I'm looking for a way to determine, using Objective-C, whether an application is using the GPU. I want to be able to determine whether any applications currently running on the system have work going on on the GPU (i.e., a reason why the latest MacBook Pros would switch to the discrete graphics over the Intel HD graphics).
I've tried getting the information by crossing the list of active windows with the list of windows whose backing location is stored in video memory, using Quartz Window Services, but all that returns is the Dock application, while I have other applications open that I know are using the GPU (Photoshop CS5, Interface Builder); besides, the Dock doesn't require the 330M.
The source code of the gfxCardStatus utility might help.
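For reference, the kind of Quartz Window Services query described in the question looks roughly like this (a sketch; note that, as the question found, the video-memory backing flag alone is not a reliable indicator of GPU work):

    #include <ApplicationServices/ApplicationServices.h>
    #include <cstdio>

    // List on-screen windows whose backing store lives in video memory,
    // via Quartz Window Services.
    // Build (macOS): clang++ gpuwindows.cpp -framework ApplicationServices
    int main()
    {
        CFArrayRef windows = CGWindowListCopyWindowInfo(
            kCGWindowListOptionOnScreenOnly, kCGNullWindowID);

        for (CFIndex i = 0; i < CFArrayGetCount(windows); ++i) {
            CFDictionaryRef info =
                (CFDictionaryRef)CFArrayGetValueAtIndex(windows, i);

            // Optional per-window key: present and true if the window's
            // backing store is in video memory.
            CFBooleanRef inVram = (CFBooleanRef)CFDictionaryGetValue(
                info, kCGWindowBackingLocationVideoMemory);
            if (inVram && CFBooleanGetValue(inVram)) {
                CFStringRef owner = (CFStringRef)CFDictionaryGetValue(
                    info, kCGWindowOwnerName);
                char name[256] = "?";
                if (owner)
                    CFStringGetCString(owner, name, sizeof(name),
                                       kCFStringEncodingUTF8);
                std::printf("video-memory-backed window: %s\n", name);
            }
        }
        CFRelease(windows);
        return 0;
    }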