I am trying to convert vk_raytrace to run headless so that I can run it from the command line and dump the rendered image. I am new to Vulkan and saw that it supports headless surfaces. My first approach was to replace the surface created from the GLFW window with a headless surface. However, I get VK_ERROR_EXTENSION_NOT_PRESENT for VK_EXT_headless_surface. Next I tried removing the surface- and swapchain-related logic and creating framebuffers with an image view as the attachment. I haven't had any luck with that either.
Any pointers on this would be very helpful.
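For reference, the offscreen path I am attempting looks roughly like this (a minimal sketch, not actual vk_raytrace code; variable names are illustrative, and memory allocation, layout transitions and error handling are omitted):

// Offscreen color image used in place of a swapchain image.
VkImageCreateInfo imageInfo{VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO};
imageInfo.imageType   = VK_IMAGE_TYPE_2D;
imageInfo.format      = VK_FORMAT_R8G8B8A8_UNORM;
imageInfo.extent      = {width, height, 1};
imageInfo.mipLevels   = 1;
imageInfo.arrayLayers = 1;
imageInfo.samples     = VK_SAMPLE_COUNT_1_BIT;
imageInfo.usage       = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT | VK_IMAGE_USAGE_TRANSFER_SRC_BIT;
vkCreateImage(device, &imageInfo, nullptr, &offscreenImage);
// ...bind memory, create a VkImageView, and use that view as the color
// attachment of a framebuffer whose render pass ends with
// finalLayout = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL.

// After rendering: copy the image into a host-visible buffer, map it,
// and dump the pixels as PNG.
VkBufferImageCopy region{};
region.imageSubresource = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 0, 1};
region.imageExtent      = {width, height, 1};
vkCmdCopyImageToBuffer(cmd, offscreenImage, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
                       readbackBuffer, 1, &region);

Is this the general shape, or am I missing a step that would explain the failures?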
I have Selenium Hub running as a service on a Kubernetes cluster, and I start tests remotely using selenium-side-runner. Unfortunately, when I try to run a test from the terminal (on Ubuntu), I get the following error:
UnsupportedOperationError: pointer movements relative to viewport are not supported in bridge mode
at executeLegacy (../../../../../usr/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/input.js:1129:17)
at Actions.perform (../../../../../usr/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/input.js:971:16
If I start the tests from Selenium IDE, everything works fine. What could this error depend on?
Can you help me? Thank you in advance.
This error message...
UnsupportedOperationError: pointer movements relative to viewport are not supported in bridge mode
at executeLegacy (../../../../../usr/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/input.js:1129:17)
at Actions.perform (../../../../../usr/lib/node_modules/selenium-side-runner/node_modules/selenium-webdriver/lib/input.js:971:16
...implies that pointer movements relative to the viewport are not supported in bridge mode.
As per the documentation in Clause 6 of Class Actions:
For W3C actions, move offsets relative to a WebElement are interpreted relative to the center of an element's first client rect in the viewport. For legacy actions, element offsets are relative to the top-left corner of the element's bounding client rect. When translating actions to the legacy protocol in bridge mode, an extra command must be inserted to translate move offsets from one frame of reference to the other. This extra command contributes to the overall latency issue.
You can find a detailed discussion in Is it possible to programmatically determine whether W3C action commands are used?
This is a known issue with Selenium and is being tracked through Not correctly move pointer to the position inside of element in bridge mode.
ChromeDriver - Implement Actions API
The currently released ChromeDriver 76.0.3809.12 contains the implementation of Actions API.
----------ChromeDriver 76.0.3809.12 (2019-06-07)----------
Supports Chrome version 76
Resolved issue 1897: Implement Actions API [Pri-1]
Link to Issue 1897: Implement Actions API
Switching to ChromeDriver 76.0 will solve your issue.
It could be due to the Selenium version. I faced the same issue and fixed it by upgrading Selenium to 4.0.x.
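If the runner was installed globally through npm (as the paths in the stack trace suggest), the upgrade is just a reinstall, e.g.:
npm install -g selenium-side-runner@latest
which also pulls in a newer selenium-webdriver underneath.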
I have an application that uses Qt 5.6 for various purposes and runs on an embedded device. Currently I have it rendering via eglfs to a Linux frame buffer on an attached display, but I also want to be able to grab the data and send it to a single-color LED display unit (a device will have either that unit or a full video device, never both at the same time).
Based on what I've found on the net so far, the best approach is to:
turn off anti-aliasing;
set Qt up for a 1-bit/pixel display device;
select a 1bpp font, no grey-scale allowed; and
somehow capture the graphics scene that Qt produces so I can transfer it to the display unit.
It's just that last one I'm having issues with. I suspect I need to create a surface of some description and inject that into the Qt display "stack", but I cannot find any good examples on how to do this.
How does one do this and, assuming I have it right, is there a synchronisation method used to ensure I'm only getting complete buffers from the surface (i.e., no tearing)?
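For what it's worth, the capture step I have in mind is something like this (a sketch assuming a widget-based UI with topLevelWidget as the root; for Qt Quick the equivalent would be QQuickWindow::grabWindow()):

// Render the top-level widget into an offscreen QImage. QWidget::render()
// is synchronous, so the buffer is complete (no tearing) when it returns.
QImage frame(topLevelWidget->size(), QImage::Format_ARGB32);
frame.fill(Qt::white);
QPainter painter(&frame);
topLevelWidget->render(&painter);
painter.end();

// Reduce to 1 bit/pixel for the single-colour LED unit.
QImage mono = frame.convertToFormat(QImage::Format_Mono,
                                    Qt::MonoOnly | Qt::ThresholdDither);
// mono.constBits() / mono.bytesPerLine() can then be pushed to the device.

Is that the right layer to tap into, or is there a better hook lower down in the eglfs stack?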
I am using FireBreath to create a cross-browser plugin which makes use of some native libraries for the respective platform (some .NET-based DLLs for Windows and Objective-C-based dylibs/frameworks for Mac). The native libraries display UI screens. In order to improve usability, if the user has a multi-/extended-monitor setup, I would like the native UIs to appear on the same screen the browser window is currently on.
If an identifier for the monitor containing the browser window can be retrieved, it can be passed down to the native components, which can be configured to display their UIs on that monitor. I have used FireBreath's getWindowPosition() method to get the rect coordinates of the plugin and used that info to identify the correct monitor on Windows.
However, on Mac the returned coordinates always seem to be 0 (or 1), irrespective of the monitor on which the browser window resides. I understand that we have to configure an event model and a drawing model for this to work on Mac. I have tried the following event/drawing model combinations without much success.
1) Cocoa/CoreGraphics
2) Carbon/CoreGraphics
Any help in this regard is much appreciated. Please also share if there are other approaches to achieve the same thing. What I want is to identify the monitor on which the currently active browser window resides on Mac. I am unsure at this point, but it may be possible to achieve this at the Objective-C level (without any changes at the FireBreath level). Also please note that I want to support the Safari, Firefox and Chrome browsers.
You won't like this answer, but simply put, you can't do that on Mac. The problem is that with CoreGraphics you are only given a CGContextRef to work with, and it doesn't know where it will be drawn. It was technically possible in older browsers to get an NSWindow by exploiting some internal implementation details, but in many browsers that's no longer possible, and it was never supported.
Other drawing models are the same; with CoreAnimation you have a CALayer, but it doesn't know which screen or monitor it is drawn to. I personally find it a bit annoying as well, but I do not know of any way to find out which monitor your plugin is rendered to, particularly since most browsers actually copy the buffer to something else and render it in a different process.
I did manage to come up with a workaround, and I am just replying here for the completeness of the thread. As @taxilian explained, it is not possible to retrieve the plugin coordinates using the window reference. As an alternative approach, the JavaScript window object has two properties, screenX and screenY, that return the X and Y coordinates of the browser window relative to the screen. If the user has an extended-monitor setup, these are absolute coordinates with respect to the full extended screen, so we can use them to determine which monitor the browser window is on (if the X coordinate is outside the bounds of the primary monitor's width, the browser must be on the extended monitor). DOM properties can be retrieved from FireBreath as explained in the following link:
http://www.firebreath.org/display/documentation/Invoking+methods+on+the+DOM
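In code, the retrieval and the monitor lookup might look roughly like this (a sketch: the DOM property names are standard, the CoreGraphics calls are one possible way to map the point to a display, and variable names are illustrative):

// FireBreath side: read the browser window's screen position from the DOM.
FB::DOM::WindowPtr domWindow = m_host->getDOMWindow();
int screenX = domWindow->getProperty<int>("screenX");
int screenY = domWindow->getProperty<int>("screenY");

// CoreGraphics side: find the display whose bounds contain that point.
CGDirectDisplayID displays[8];
uint32_t displayCount = 0;
CGGetActiveDisplayList(8, displays, &displayCount);
for (uint32_t i = 0; i < displayCount; ++i) {
    if (CGRectContainsPoint(CGDisplayBounds(displays[i]),
                            CGPointMake(screenX, screenY))) {
        // displays[i] is the monitor hosting the browser window.
    }
}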
I was able to successfully port, cross-compile and run the cairo gears application with the GLES backend on my embedded target.
http://people.linaro.org/~afrantzis/cairogears-0~git20100719.2b01100+gles2.tar.gz
The ported samples trap, comp, text and shadow run well in cairo 1.12.3 and 1.12.4, but I have problems running the same samples in 1.12.14: the texture-related samples (comp, text, shadow) do not run, and while trap plays well, the gradient is not displayed in the gradient sample.
I use the GLES backend and convert all the image surfaces I load from PNG files to GL surfaces.
Let me know if there is something that should be done for the texture+gradient samples to work in 1.12.14.
Thanks,
Sundara Raghavan
The problem was the need to convert GL_BGRA, the internal image format of cairo, to GL_RGBA for loading into GL textures (which were GL_RGBA by default). I solved it by applying an existing patch that uses BGRA GL textures and hence avoids the conversion. This was possible because my hardware is capable of both reading and creating BGRA textures.
The patch can be found here:
http://lists.freedesktop.org/archives/cairo/2013-February/024038.html
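For hardware that cannot sample BGRA textures, the fallback is a manual swizzle before upload. Roughly (an illustrative sketch, not part of the patch; assumes little-endian CAIRO_FORMAT_ARGB32 data, i.e. B,G,R,A byte order in memory):

// Convert cairo's BGRA byte order to RGBA in place so the data can be
// uploaded as a plain GL_RGBA texture (only needed when the
// GL_EXT_texture_format_BGRA8888 extension is unavailable).
static void bgra_to_rgba(unsigned char *pixels, int width, int height)
{
    for (size_t i = 0; i < (size_t)width * height; ++i) {
        unsigned char *px = pixels + 4 * i;
        unsigned char tmp = px[0];
        px[0] = px[2];   // swap the blue and red bytes
        px[2] = tmp;
    }
}

// ...then:
// bgra_to_rgba(pixels, width, height);
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
//              GL_RGBA, GL_UNSIGNED_BYTE, pixels);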
I have a project in which I would like to programmatically create and render a 3D animation based upon input. I originally asked here on Stack Overflow whether Blender was right for the job, and the response was yes, but upon looking at the API, it says this:
Python was embedded in Blender, so to access BPython modules you need to run scripts from the program itself: you can't import the Blender module into an external Python interpreter.
I want to be able to create and render this scene without having to ever open another program like Blender. Is this possible, and is Blender still the right choice?
Thanks in advance!
At work, a colleague and I worked on a project that rendered 3D scenes that were altered externally. We used Python to modify/create scenes and did the rendering on a server through the command-line interface (no GUI).
You can pass a Python script as an argument to Blender on the command line to generate your scene objects and do the rendering.
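For example (script name is illustrative; --background runs Blender without the GUI):
blender --background --python build_scene.py
Inside build_scene.py you use the bpy module to create the objects, set up the camera and lights, and call the render operator (bpy.ops.render.render with an output path), so the whole pipeline runs without ever opening the Blender UI.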
I don't see how you can render in Blender without using Blender.
You can use Blender if you want; obviously it is not your only option.
If you need to "create and render a 3d animation based upon input", you can go as simple or as complex as you'd like.
You can use OpenGL in your language of choice (C++, Java, Python, etc.) and display the animation, with or without fancy renderings. It comes down to what "render" means in your context.
If you need some nice shading (lights, soft shadows, reflections, etc. - basically, ray tracers), you can still show an interactive preview to your users and generate the scene for a third-party renderer (like Yafaray, Sunflow, or LuxRender - I've put together a short list of free renderers), then show the progress to the users after they've chosen the external render option.
On a similar note, have a look at joons.
HTH
Example gallery renders: Cart by Suomi (Yafaray), Julia quaternion fractal (Sunflow), Klein Bottle (LuxRender).