How do we get Qt to render to memory rather than a device? - embedded

I have an application that uses Qt 5.6 for various purposes and that runs on an embedded device. Currently I have it rendering via eglfs to a Linux frame buffer on an attached display, but I also want to be able to grab the data and send it to a single-color LED display unit (a device will have either that unit or a full video device, never both at the same time).
Based on what I've found on the net so far, the best approach is to:
turn off anti-aliasing;
set Qt up for a 1 bit/pixel display device;
select a 1bpp font, no grey-scale allowed; and
somehow capture the graphics scene that Qt produces so I can transfer it to the display unit.
It's just that last one I'm having issues with. I suspect I need to create a surface of some description and inject that into the Qt display "stack", but I cannot find any good examples on how to do this.
How does one do this and, assuming I have it right, is there a synchronisation method used to ensure I'm only getting complete buffers from the surface (i.e., no tearing)?
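One way to approach the capture step (a minimal sketch, assuming a QGraphicsScene named scene; sendToLedPanel() is a hypothetical stand-in for your LED transfer code): render the scene into a QImage you own, rather than injecting a surface into the platform plugin. Because the paint happens synchronously into your buffer, the frame is complete once render() returns, so there is no tearing as long as you only hand the buffer over after that point.

#include <QGraphicsScene>
#include <QImage>
#include <QPainter>

// Hypothetical transfer function for the 1bpp LED unit.
void sendToLedPanel(const uchar *bits, int bytesPerLine, int height);

void renderSceneToLed(QGraphicsScene &scene)
{
    // Format_Mono is a packed 1 bit/pixel format, matching the LED unit.
    QImage frame(scene.sceneRect().size().toSize(), QImage::Format_Mono);
    frame.fill(0);

    QPainter painter(&frame);
    painter.setRenderHint(QPainter::Antialiasing, false); // no grey-scale
    scene.render(&painter);
    painter.end(); // the frame is complete here; no tearing past this point

    // constBits()/bytesPerLine() expose the packed 1bpp scanlines.
    sendToLedPanel(frame.constBits(), frame.bytesPerLine(), frame.height());
}

If the device has no display at all, running with -platform offscreen instead of eglfs should let the application start without one.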

Related

Nsight Graphics and RenderDoc cannot trace application

I am stuck writing a Vulkan renderer. The final output I see on the screen is only the clear color, animated over time, but no geometry. Even with all possible validation turned on I don't get any errors, warnings, best-practices or performance hints, except for the best-practices warning "You are using VK_PIPELINE_STAGE_ALL_COMMANDS_BIT when vkQueueSubmit is called". I'm not actually sure I use all possible validation, but I have the Vulkan Configurator running and ticked all checkboxes under "VK_LAYER_KHRONOS_validation", after which the vkQueueSubmit hint showed up.
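(Aside: the validation layer can also be requested in code, so it does not depend on the Configurator running; a minimal sketch, not my actual init code:)

#include <vulkan/vulkan.h>

// Sketch: enable VK_LAYER_KHRONOS_validation at instance creation.
VkInstance createInstanceWithValidation()
{
    const char *layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkApplicationInfo appInfo{};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.apiVersion = VK_API_VERSION_1_2;

    VkInstanceCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;
    createInfo.enabledLayerCount = 1;
    createInfo.ppEnabledLayerNames = layers;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&createInfo, nullptr, &instance); // check the VkResult in real code
    return instance;
}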
After poking around for some hours I decided to look into using RenderDoc. I can start the application just fine, however RenderDoc says "Connection status: Established, Api: none" and I cannot capture a frame.
Quite confused, I thought I would look into Nsight Graphics, only to find the same problem: I can run the application, but it says "Attachable process detected. Status: No Graphics API". I read somewhere that I could start the process first and then use the attach functionality to attach to the running process, which I did, unfortunately with the same outcome.
I read that there can be problems when an application does not properly present every frame; that is why I change the clear color over time, to make sure I actually present every frame, which I can confirm is the case.
I am quite lost at this point. Has anyone had similar experiences? Any ideas as to what I could do to get RenderDoc / Nsight Graphics working properly? Neither shows anything in its logs; I guess they just assume the process does not use any graphics API and thus won't trace it.
I am also thankful for ideas about why I cannot see my geometry, though I understand this is even harder to guess from your side. Still, some notes: I have forced depth and stencil tests off; although the vertices should be COUNTER_CLOCKWISE I have also checked CLOCKWISE just to make sure; I have set the face cull mode off; I have checked the color write mask and rasterizerDiscard; I have even set gl_Position to ignore the vertex positions and transform matrices completely and use random values in the range -1 to 1 instead. Basically everything that came to my mind when I hear "only clear color, but no errors", but all to no avail. (A sketch of this "everything off" state follows below.)
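For concreteness, the state described above looks roughly like this (a sketch of the elimination setup, not my exact pipeline code):

#include <vulkan/vulkan.h>

// Sketch of the permissive fixed-function state used to rule things out:
// no culling, no rasterizer discard, no depth/stencil test, full write mask.
void describePermissiveState()
{
    VkPipelineRasterizationStateCreateInfo raster{};
    raster.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
    raster.polygonMode = VK_POLYGON_MODE_FILL;
    raster.cullMode = VK_CULL_MODE_NONE;          // face culling off
    raster.frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE;
    raster.rasterizerDiscardEnable = VK_FALSE;    // must stay FALSE or nothing is drawn
    raster.lineWidth = 1.0f;

    VkPipelineDepthStencilStateCreateInfo depthStencil{};
    depthStencil.sType = VK_STRUCTURE_TYPE_PIPELINE_DEPTH_STENCIL_STATE_CREATE_INFO;
    depthStencil.depthTestEnable = VK_FALSE;      // depth test off
    depthStencil.stencilTestEnable = VK_FALSE;    // stencil test off

    VkPipelineColorBlendAttachmentState blendAttachment{};
    blendAttachment.blendEnable = VK_FALSE;
    blendAttachment.colorWriteMask =
        VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
        VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;

    // These structs would be plugged into VkGraphicsPipelineCreateInfo.
    (void)raster; (void)depthStencil; (void)blendAttachment;
}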
In case it helps with anything: I am on Windows 11 using either an RTX 3070 or an Intel UHD 770, both with the same outcome.
Small Update:
Using the Vulkan Configurator I could force the VK_LAYER_RENDERDOC_capture layer on. When running the application I can now see the overlay, and after pressing F12 it reads that it captured a frame. However, RenderDoc still cannot find a graphics API for this process and I have no idea how to access that capture.
I then forced VK_LAYER_LUNARG_api_dump on and dumped it into an HTML file, which I inspected, and I still cannot see anything wrong. I looked especially closely at the pipeline and render-pass creation calls.
This left me thinking it must be some uniform / vertex buffer content / offsets or whatever, so I removed all of that and used hardcoded vertex positions and fragment outputs, and still I can only see the clear color in the final image on the screen.
Thanks
Maybe confused me should start converting the relative viewport that I expose to absolute values using my current camera's width and height, i.e. giving (0, 0, 1920, 1080) to Vulkan instead of (0, 0, 1, 1).
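That was indeed it: VkViewport takes absolute framebuffer pixel coordinates, so a (0, 0, 1, 1) viewport squeezes all geometry into a single pixel and only the clear color remains visible. A sketch of the fix (swapchainExtent and commandBuffer stand for the usual objects in the surrounding code):

#include <vulkan/vulkan.h>

void setAbsoluteViewport(VkCommandBuffer commandBuffer, VkExtent2D swapchainExtent)
{
    // Viewport dimensions are in pixels, not normalized 0..1 values.
    VkViewport viewport{};
    viewport.x = 0.0f;
    viewport.y = 0.0f;
    viewport.width  = static_cast<float>(swapchainExtent.width);   // e.g. 1920
    viewport.height = static_cast<float>(swapchainExtent.height);  // e.g. 1080
    viewport.minDepth = 0.0f;
    viewport.maxDepth = 1.0f;
    vkCmdSetViewport(commandBuffer, 0, 1, &viewport);
}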
Holy moly, what a ride.

How to send a texture with Agora Video SDK for Unity

I'm using the package Agora Video SDK for Unity and I have followed these two tutorials:
https://www.agora.io/en/blog/agora-video-sdk-for-unity-quick-start-programming-guide/
https://docs.agora.io/en/Video/screensharing_unity?platform=Unity
Up to here, it is working fine. The problem is that instead of sharing my screen, I want to send a texture. To do so, I'm loading a PNG picture and trying to set it to the mTexture you find in the second link. It seems to be working on my computer, but it's as if it never arrives at the target computer.
How can I send a texture properly?
Thanks
Did you copy every line of the code from the example as is? You may not want to do the ReadPixels part, since that reads the screen. You can instead read the raw data from your input texture and send it with PushVideoFrame every update.

OS Data Hub not working on iPad or iPhone

I look after a website which displays UK Ordnance Survey maps for people going on walks in SW England. It relies on displaying a marker on an OS map in response either to an alphanumeric OSGB grid reference, appended as a query string to the name of the HTML file which displays the map, or, in the absence of a query string, to a click on the OS map produced by the HTML file. This has all worked well for several years, on both Windows PCs and mobile devices, using OS OpenSpace, which is based on OpenLayers 2. Now, however, Ordnance Survey have decided to enforce a new system based on OpenLayers 6, requiring a complete re-code of the JavaScript. OS provide example code for various scenarios, from which I have re-written my code to run perfectly on a Windows PC with a mouse, but it fails on an iPad or an iPhone, particularly if the iPad is several years old.
Using a current Apple device, I cannot display a scaled vector graphic on an OS map, either at hard-coded co-ordinates or in response to a tap on the map. A tap will, however, bring up a pop-up at the tapped point, and a swipe on the map will pan it. Using an iPad several years old, in addition to the above problems, a two-finger pinch will not zoom the map and a swipe will not pan it. The only things that work are the + and - buttons on the map.
I have read that I may need to include a Controls section within my Map command, along the lines of:
controls: [
    new OpenLayers.Control.TouchNavigation({
        dragPanOptions: {
            enableKinetic: true
        }
    }),
    new OpenLayers.Control.Zoom()
],
but if I include this code, my JavaScript crashes, and the JavaScript error console within MS Edge gives the error message 'OpenLayers is not defined'.
Ordnance Survey tell me that my problem 'is an issue relating to the mapping library, OpenLayers, rather than our APIs', so my question is: how do I get code based on the OS examples to run on mobile devices of any age, in the same way my existing code based on OpenLayers 2 has run for several years?
I will gladly supply, on request, a copy of my code which works on a Windows PC but not on an Apple device.
OpenLayers needs polyfills on older devices/browsers. Try adding
<script src="https://unpkg.com/elm-pep"></script>
<script src="https://cdn.polyfill.io/v2/polyfill.min.js?features=fetch,requestAnimationFrame,Element.prototype.classList,URL"></script>
before the ol.js script. The default controls should be sufficient (the code you have used is OpenLayers 2 format and will not work with OpenLayers 6).

Codename One camera app

I'm trying to build a sort of camera app using Codename One. Taking a picture is no problem, but I also want to stream the camera feed to the background, like a regular camera app on a mobile phone does, so you can actually see what you're about to film or photograph.
We don't currently support some of the more elaborate AR APIs introduced by Google/Apple, but we do support placing a camera view finder right into your app with a new cn1lib: https://github.com/codenameone/CameraKitCodenameOne
Since this is implemented in a library you can effectively edit the native code and add functionality as needed.
The original answer is out of date by now; I'm keeping it below for reference:
You can record video or take a photo with Codename One.
However, augmented reality type applications where you can place elements on top of the camera viewfinder are currently not supported by Codename One. This functionality is somewhat platform specific and hard to implement in a portable manner.

Identify the monitor with the browser window in FireBreath

I am using FireBreath to create a cross-browser plugin which makes use of some native libraries for the respective platform (some .NET-based DLLs for Windows and Objective-C-based dylibs/frameworks for Mac). The native libraries display UI screens. In order to improve usability, if the user has a multi-monitor or extended-monitor setup, I would like the native UIs to appear on the same screen the browser window is currently on.
If an identifier for the monitor with the browser window can be retrieved, it can be passed down to the native components, which can be configured to display their UIs on that monitor. I have used FireBreath's getWindowPosition() method to get the rect coordinates of the plugin and used that info to identify the correct monitor on Windows.
However, the coordinates returned on Mac always seem to be 0 (or 1), irrespective of the monitor on which the browser window currently resides. I understand that we have to configure an event model and a drawing model in order for this to work on Mac. I have tried the following event/drawing model combinations without much success:
1) Cocoa/CoreGraphics
2) Carbon/CoreGraphics
Any help in this regard is much appreciated. Also, please do share if there are other approaches to achieve the same. What I want to achieve is to identify the monitor on which the currently active browser window resides on Mac. I am unsure at this point, but it may be possible to achieve this at the Objective-C level (without any changes at the FireBreath level). Also, please note that I want to support the Safari, Firefox and Chrome browsers.
You won't like this answer, but simply put, you can't do that on Mac. The problem is that with CoreGraphics you are only given a CGContextRef to work with, and it doesn't know where it will be drawn. It was technically possible in older browsers to get an NSWindow by exploiting some internal implementation details, but in many browsers that's no longer possible, and it was never supported.
The other drawing models are the same; with CoreAnimation you have a CALayer, but it doesn't know which screen or monitor it is drawn to. I personally find it a bit annoying as well, but I do not know of any way to find out which monitor your plugin is rendered to, particularly since most browsers actually copy the buffer to something else and render it in a different process.
I did manage to come up with a workaround and I am just replying here for the completeness of the thread. As @taxilian explained, it is not possible to retrieve the plugin coordinates using the window reference. As an alternative approach, the JavaScript window object has two properties, screenX and screenY, that return the X and Y coordinates of the browser window relative to the screen. If the user has an extended-monitor setup, these are the absolute coordinates with respect to the full extended screen. We can use these values to determine the monitor with the browser window (if the X coordinate is outside the bounds of the primary monitor's width, the browser should essentially be on the extended monitor). We can retrieve DOM properties from FireBreath as explained in the following link:
http://www.firebreath.org/display/documentation/Invoking+methods+on+the+DOM
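For completeness, a minimal sketch of the plugin-side code (following the pattern in the linked page; the exact headers and the primaryMonitorWidth parameter are assumptions for illustration):

#include "BrowserHost.h"
#include "DOM/Window.h"

// Reads window.screenX from the page and decides whether the browser
// window sits on an extended monitor. The host pointer is the
// FB::BrowserHostPtr every FireBreath plugin instance already holds.
bool isBrowserOnExtendedMonitor(const FB::BrowserHostPtr &host,
                                int primaryMonitorWidth)
{
    FB::DOM::WindowPtr window = host->getDOMWindow();
    int screenX = window->getProperty<int>("screenX");

    // Negative X (a monitor left of the primary) or X beyond the primary
    // monitor's width means the window is on an extended monitor.
    return screenX < 0 || screenX >= primaryMonitorWidth;
}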