According to this video, OneNote uses VirtualSurfaceImageSource for its rendering, which allows it to render tiles on demand.
However, the video also indicates that continuous input such as inking is best served by a SwapChainPanel with independent input, which allows input events to be handled on a non-UI render thread.
Since OneNote also implements high-performance inking, how does it do so? Does it use a different technique for inking than it does for the rest of the canvas?
To acquire EELS spectra, I used the commands below,
img:=camera.cm_acquire(procType,exp,binX, binY,tp,lf,bt,rt)
imgSP:=img.verticalSum() //this is a custom function to do vertical sum
and this,
imgSP:=EELSAcquireSpectrum(exp, nFrames, binX, binY, processing)
When using either one in my customized 2D mapping, they are much slower than the "Spectrum Imaging" from Gatan (the first one is faster than the second). Is the lack of speed a natural limitation of scripting, or are there better function calls?
Yes, the lack of speed is a limitation of scripting, which gives you access to the camera only in single-read mode, i.e. one command initializes the camera, exposes it, reads it out, and returns the image.
In Spectrum Imaging the camera is run in continuous mode, i.e. the same as when the live view is running: the camera is constantly exposed and read out (with or without shutter, depending on the type of camera). This mode of camera acquisition is available as a camera script command from GMS 3.4.0 onward.
I'm trying to use offline OpenStreetMap in a React Native application. For that reason, and according to react-native-maps, I need to store the tiles in a specific format:
The path template of the locally stored tiles. The patterns {x} {y} {z} will be replaced at runtime
For example, /storage/emulated/0/mytiles/{z}/{x}/{y}.png
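To make the template concrete, here is a small Python sketch using the standard OSM "slippy map" formula to compute tile numbers from a latitude/longitude, then filling in the `{z}/{x}/{y}` placeholders. The coordinates and local path are just examples.

```python
import math

def deg2tile(lat_deg, lon_deg, zoom):
    """Convert WGS84 coordinates to slippy-map tile numbers at a zoom level."""
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def tile_path(template, z, x, y):
    """Fill the {z}/{x}/{y} placeholders used by react-native-maps."""
    return (template.replace("{z}", str(z))
                    .replace("{x}", str(x))
                    .replace("{y}", str(y)))

x, y = deg2tile(48.8584, 2.2945, 13)   # example coordinates, zoom 13
print(tile_path("/storage/emulated/0/mytiles/{z}/{x}/{y}.png", 13, x, y))
```

Whatever tool produces your tiles, as long as each PNG ends up at the path this template yields, react-native-maps can serve it offline.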
I tried to download the tiles from tile servers, but found out that it would take a lot of time (it is almost impossible). I also looked at the proposed ways to download tiles, but I don't know the file extensions or whether I could convert any of them to PNG. So I wonder if there is an open-source/free way to do this.
I also found this software, but it is only free up to zoom = 13.
Bulk downloads are usually forbidden. See the tile usage policy. Quoting the important parts:
OpenStreetMap’s own servers are run entirely on donated resources.
OpenStreetMap data is free for everyone to use. Our tile servers are not.
Bulk downloading is strongly discouraged. Do not download tiles unnecessarily.
In particular, downloading significant areas of tiles at zoom levels 17 and higher for offline or later usage is forbidden [...]
You can render your own raster tiles by installing rendering software such as TileMill, or by running your own tile server. Alternatively, take a look at commercial OSM software and services.
Or switch to vector tiles. Obtaining raw OSM data is rather easy, and vector tiles allow you to render tiles on your device on the fly.
Does someone have a good solution for external rendering for Microsoft HoloLens apps? Specifically: is it possible to have my laptop render a set of 3D objects that is too much for the HoloLens GPU, and then display them on the HoloLens over Wi-Fi, including spatial mapping and interaction?
It's possible to render remotely, both directly from the Unity editor and from a built application.
While neither fully achieves your goal of a "good solution", both allow very intensive applications to at least run at all.
This walks you through how to add it to an app you're building.
https://learn.microsoft.com/en-us/windows/mixed-reality/add-holographic-remoting
This is for running directly from the editor:
https://blogs.unity3d.com/2018/05/30/create-enhanced-3d-visuals-with-holographic-emulation-in-uwp/
I don't think this is possible, since you can't really access the OS or the processor on the HoloLens. Even if you do manage to send the data to a third party to process, the result still has to be run back through the HoloLens, which is really just the same as before.
You might find a way to hook up a VR backpack to it, but even then I highly doubt it would be possible.
If you are having trouble rendering 3D objects, then you should reduce the number of triangles, use a lower-resolution shader, or reduce the size of the object. The biggest factor in processing 3D objects on the HoloLens is how much space is being drawn on the lens: if your object takes up 25% of the view instead of 100%, it will be easier to process.
Also, if you can't avoid having many objects in the scene, check out LOD (level of detail), which reduces the resolution of objects based on their distance from the camera, and vice versa.
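The LOD idea above can be sketched in a few lines. This is a minimal illustration only: the distance thresholds and mesh names are made up, and in Unity you would normally use the built-in LODGroup component rather than rolling your own.

```python
# Hypothetical distance bands: (max distance in meters, mesh to draw).
LOD_LEVELS = [
    (2.0,  "statue_high.mesh"),   # < 2 m: full-resolution mesh
    (8.0,  "statue_med.mesh"),    # 2-8 m: reduced triangle count
    (20.0, "statue_low.mesh"),    # 8-20 m: coarse proxy
]

def select_lod(distance):
    """Pick the mesh whose distance band contains the viewer, or cull."""
    for max_dist, mesh in LOD_LEVELS:
        if distance < max_dist:
            return mesh
    return None  # beyond the last band: cull the object entirely

print(select_lod(1.0))    # full-resolution mesh up close
print(select_lod(12.5))   # coarse proxy at a distance
print(select_lod(50.0))   # culled entirely
```

The payoff on the HoloLens is exactly the point made above: distant objects cover little of the lens, so drawing a coarse proxy there costs almost nothing visually while cutting the triangle budget sharply.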
I have heard that Vulkan will unify initialization across operating systems. Does that mean Vulkan creates the window and handles mouse/keyboard events, so I can avoid OS-specific programming?
It won't. Window creation will remain platform-specific, and a WSI extension will let you link the window to a renderable image that you can push to the screen.
From information gleaned from the presentations that have been given, I expect that you will use a platform-specific WSI extension to create a swapchain for your window.
Then, each time you wish to push a frame to the screen, you acquire a presentable image from the swapchain, render to it, and then present it.
See this slide pack from slide 109 onward.
No. Vulkan is a low-level API for accessing GPUs; it does not deal with windows and input. In fact, it can easily be used in a "headless" manner with no visual output at all.
Probably not. The Vulkan API is a graphics library, much like OpenGL.
On Ubuntu Linux, for example, OpenGL is used for the desktop animation effects in Unity and could be replaced with Vulkan for better performance.
But I don't think Windows will change, as they have their own DirectX graphics library, and it would be odd for them to use something other than their own software.
The applications most likely to benefit from Vulkan are games and other software that uses 2D or 3D rendering.
It's very likely that many games will move to Vulkan because it is cross-platform; they will therefore reach more users, which means more profit.
Khronos (the developers of the Vulkan API) are also bringing out tools that will largely port your application from OpenGL or DX12 to Vulkan, requiring less development/porting effort on the software developer's side.
So...
Window creation: possibly. (Although the code behind the window runs on the CPU, the library that draws the window on screen might use Vulkan.) This differs greatly depending on which OS, distribution, and version you are working on.
Mouse/keyboard events: no, as these require no graphical calculations, only CPU work.
Window frames are generally desktop-manager controls; you could display the Vulkan app's content in the client area, but otherwise Vulkan would have to provide interfaces to the desktop manager for window creation (a GUI library). Someone could instead create a device context (a DC on Windows, similar for the X server) and then manage the "Vulkan app" manually, like a frameless windowed game with no chrome, but right now that would be a great deal of work.
An old-hand Windows developer's bible addressing device contexts and rendering, among many other things: Programming Windows®, Fifth Edition (Developer Reference). A very good read that, from an agnostic point of view, provides a great deal of carry-over knowledge loosely applicable to most systems.
Suppose I have a 3D (but not stereoscopic) DirectX game or program. Is there a way for a second program (or a driver) to change the camera position in the game?
I'm trying to build a head-tracking plugin or driver that I can use for my DirectX games/programs. An inertial motion sensor will give me the position of my head but my problem is using that position data to change the camera position, not with the hardware/math concerns of head tracking.
I haven't been able to find anything on how to do this so far, but iZ3D was able to create two cameras near the original camera and use them for stereoscopic rendering, so I know there exists some hook/link/connection into DirectX that makes camera manipulation by a second program possible.
If I am able to get this to work I'll release the code.
Hooking Direct3D calls is, in its nature, just hooking DLL calls, i.e. it's not something special to D3D but a generic technique. Try googling for "hook dll" or start from here: [C++] Direct3D hooking sample. As always happens with hooks, there are many caveats, and you'll have to write a pretty large amount of boilerplate to satisfy all the needs of the hooked application.
However, manipulating the camera in games usually gives poor results. There are at least two key features of modern PC games that will severely limit your idea:
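The hooking pattern itself is simple: save the original entry point, install a wrapper that alters the arguments, then forward to the original. Real Direct3D hooking does this in C++ by patching the DLL's import table or the device's vtable; the Python sketch below (with a made-up `Renderer` class and a hypothetical head-offset source) only shows the shape of the technique.

```python
class Renderer:
    """Stand-in for the hooked component; real hooking targets D3D itself."""
    def set_view(self, view_matrix):
        self.view = view_matrix
        return view_matrix

def install_head_tracking_hook(renderer, get_head_offset):
    original = renderer.set_view          # keep the original entry point
    def hooked(view_matrix):
        dx, dy, dz = get_head_offset()    # e.g. from an inertial sensor
        # Translate the camera by the head offset (4x4 row-major matrix).
        adjusted = [row[:] for row in view_matrix]
        adjusted[3][0] += dx
        adjusted[3][1] += dy
        adjusted[3][2] += dz
        return original(adjusted)         # forward to the real call
    renderer.set_view = hooked            # splice the wrapper in

r = Renderer()
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
install_head_tracking_hook(r, lambda: (0.1, 0.0, -0.05))
r.set_view(identity)
print(r.view[3])   # the translation row now carries the head offset
```

The boilerplate mentioned above comes from doing this interception for every relevant D3D call, across device resets, multiple devices, and so on.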
Pre-clipping. Almost any game engine filters out objects that are behind the viewing plane. So when you rotate the camera to the side, you won't see the objects you'd expect to see in the real world: they were simply never sent to D3D, since the game doesn't know the viewing plane has changed.
Multiple rendering passes. Many popular post-processing effects are done in extra passes (either over the whole scene or just part of it). Mirrors and in-game "screens" are the best-known examples. Without knowing which camera you're manipulating, you'll most likely just break the scene.
By the way, #2 is the reason stereoscopic mode is not 100% compatible with all games. For example, in the Source engine, HDR scenes are rendered in three passes, and if you don't know how to distinguish them, you'll do nothing but break the game. Take a look at how NVIDIA implements their stereoscopic mode: they write a separate hook for every popular game, and even with this approach it's not always possible to get the expected result.
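The pre-clipping problem (#1 above) can be illustrated with a toy culling pass. The engine culls against the original camera direction before issuing draw calls, so rotating the view afterwards, as an external hook would, cannot bring the culled objects back. All names and numbers below are made up for illustration; real engines cull against a full view frustum, not a single half-space.

```python
def visible(objects, cam_dir):
    """Keep only objects in front of the camera (positive dot product)."""
    return [name for name, pos in objects
            if sum(p * d for p, d in zip(pos, cam_dir)) > 0]

objects = [("tree", (0.0, 0.0, 5.0)),     # straight ahead of the camera
           ("house", (5.0, 0.0, -0.1))]   # off to the side, slightly behind

# The engine culls while the camera still faces +Z; only "tree" is submitted.
submitted = visible(objects, (0.0, 0.0, 1.0))
print(submitted)

# A head-tracking hook now rotates the view 90 degrees toward +X. The
# "house" would be visible from this direction, but it was never sent to
# D3D, so no amount of post-hoc camera manipulation can draw it:
after_rotation = visible(objects, (1.0, 0.0, 0.0))
print(after_rotation)
```

This is why a hook that only rewrites the view transform shows empty space at the edges of the rotated view: the geometry that should fill it was discarded upstream.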