What does VK_ERROR_NATIVE_WINDOW_IN_USE_KHR mean - vulkan

I am following the Vulkan tutorial from vulkan-tutorial.com, and when I try to recreate the swapchain, Vulkan gives me this error. I was not able to find out on the internet what causes it. The question is: what causes the VK_ERROR_NATIVE_WINDOW_IN_USE_KHR error?

VK_ERROR_NATIVE_WINDOW_IN_USE_KHR means that the window already has a swapchain. A window can only have one swapchain at a time (this even counts swapchains from other APIs such as OpenGL and DXGI).
When you recreate the swapchain, you must either destroy the old swapchain first, or provide it in the oldSwapchain member of VkSwapchainCreateInfoKHR.
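For example, a minimal sketch of the second approach might look like the following (most create-info fields are omitted, and the device, surface, and swapChain handles are assumed to exist from earlier setup):

    // Hand the existing swapchain to oldSwapchain so the new one can be
    // created while the window is still "in use" by the old one.
    VkSwapchainKHR oldSwapchain = swapChain;   // assumed member holding the current swapchain

    VkSwapchainCreateInfoKHR createInfo{};
    createInfo.sType        = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
    createInfo.surface      = surface;
    // ... imageFormat, imageExtent, presentMode, etc. ...
    createInfo.oldSwapchain = oldSwapchain;

    VkSwapchainKHR newSwapchain;
    if (vkCreateSwapchainKHR(device, &createInfo, nullptr, &newSwapchain) != VK_SUCCESS) {
        throw std::runtime_error("failed to recreate swapchain");
    }

    // The old swapchain is retired but must still be destroyed explicitly.
    vkDestroySwapchainKHR(device, oldSwapchain, nullptr);
    swapChain = newSwapchain;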

Related

Renderdoc statistics overlay appearing when running executable without renderdoc

When using Renderdoc to debug my Vulkan applications, I have noticed that when I execute the application normally, Renderdoc is somehow still running and producing the statistics overlay on the window.
For example, when executing ./executable from the command line, I get:
(The information in the top-left corner is the Renderdoc statistics.)
Note that this still happens after deleting and recompiling the executable.
I have tried uninstalling Renderdoc, but then I get an error that Vulkan fails to load librenderdoc.so. I have even tried reinstalling the Vulkan libraries, but have had no luck.
Any help as to how I could run my program normally without displaying the statistics overlay would be greatly appreciated.
The terminal output (from the validation layers) displays:
Renderdoc uses Vulkan's layer mechanism to inject itself into the executable.
They have a lengthy blog post explaining the gory details, but basically what happens is that at runtime all your Vulkan API calls pass through Renderdoc first, which is what causes the statistics to appear. If you don't want that, you simply need to not include the Renderdoc layer when initializing Vulkan.
Either you requested Vulkan to pull in the Renderdoc layer via the ppEnabledLayerNames field in the VkInstanceCreateInfo struct passed to vkCreateInstance. In that case, you need to change your code to no longer do that and recompile the executable. Or you may have Renderdoc loaded as an implicit layer. In that case, the executable does not need to be changed at all (and running the executable on a different machine that does not have Renderdoc installed will just work without showing the overlay). Instead, it is the Vulkan loader that pulls in the Renderdoc layer automatically.
So in that latter case, you need to change the loader configuration for your system. Since this works differently for every system and you did not mention which OS you are using, I will just leave you with a link to the official documentation that contains detailed info for all major operating systems. Simply remove Renderdoc from the list of implicit layers and the overlay should disappear.
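For the explicit case, a minimal sketch of instance creation that does not request the Renderdoc layer could look like this (extension setup is omitted, and "VK_LAYER_KHRONOS_validation" stands in for whatever layers you actually want; if Renderdoc is loaded as an implicit layer, changing this alone will not remove the overlay):

    // Only request the layers you actually want; simply do not list
    // "VK_LAYER_RENDERDOC_Capture" in ppEnabledLayerNames.
    const char* enabledLayers[] = { "VK_LAYER_KHRONOS_validation" };

    VkInstanceCreateInfo createInfo{};
    createInfo.sType               = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.enabledLayerCount   = 1;
    createInfo.ppEnabledLayerNames = enabledLayers;

    VkInstance instance;
    if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS) {
        throw std::runtime_error("failed to create Vulkan instance");
    }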

Anylogic: truncated class error during optimization

Let me preface this by saying that I am a beginner; here is my problem.
My model works perfectly when I run it as a normal simulation. Now I'm trying to optimize some parameters using the optimization experiment. I've followed all the steps of the official tutorial, but it doesn't work because I get "Exception during discrete event execution: Truncated class file". The strange thing is that, looking at the console displaying the error, I see that some lines refer to an old version of my model, for example:
java.lang.ClassFormatError: Truncated class file
at coffe_maker.Main._m1_1_delayTime_xjal(Main.java:14070)
The current model's name is coffee_maker_v2_6, so I don't understand why I get this kind of error. Do you know if this is normal? What am I doing wrong?
The most likely cause is that you have Java code left in an 'unused' configuration of a Delay block's "Delay time" expression (e.g., it now has a static value but you had Java code in the now-switched-out dynamic value).
Unfortunately, AnyLogic sometimes still includes the switched-out code in the compiled class, and this can cause strange runtime errors such as this one.
If this does look to be the case, temporarily switch to the offending switched-out configuration and delete it before switching back to the correct one.
I have resolved it: the problem was that, in every Delay block of my model, the delay time was linked to a database reference (of type Code). I am now typing the probability distributions directly into the Delay blocks, and the optimization works.

Credential Provider V2 Combobox unexpected behavior

I've been developing our company's credential provider for Windows 10 for almost a year now.
Now I have encountered a problem. I don't usually ask questions on forums or blogs, because in most cases I find the solution myself, but this time I've been struggling with an issue for a month, and I have only just found the root of the problem.
Brief description of the problem itself: the credential provider uses a combobox, which worked without a problem before. Then I rewrote the whole codebase to handle a big update, and a strange bug got into the system. The bug only occurs in a specific scenario. I'm developing and testing the code on my personal laptop.
The scenario:
1) The laptop is plugged into my monitor, power, etc.
2) I put it to sleep.
3) I unplug all cables (including power).
4) I wake it up from sleep.
Then the combobox doesn't show a default selected item; it's empty. When I drop it down, it shows all the necessary items. Then the credential provider crashes and restarts, and after that everything is fine.
I know that in similar "strange" scenarios, in most cases a memory leak or something related causes the problem. When I check the Event Viewer, it shows me c0000005, which is an access violation. I started to debug where the violation happens. I found that the program accesses the combobox item list array (actually a vector in my case) at a very, very high index (out of range, which could be the reason for the violation). The index is stored in the "selectedComboItemIndex" variable (a DWORD).
I was curious when it changed to this strange number, and then I found some unexpected behavior.
The SetComboBoxSelectedValue method randomly gets called once (when the bug happens) with an insanely high index value. I don't even call this method in my own code, so I have no idea why it gets called. The call happens even when I don't drop down the combobox.
I think there is a chance that it's a bug in the credential provider framework itself. What do you think? Have you seen this problem before?
Thank you in advance!
I solved the problem by no longer using the "SetSelected" and "SetDeselected" methods. Furthermore, I filtered the "SetComboBoxSelectedValue" input parameters to accept only valid indices. When the index parameter is invalid, I call the same method again with the index replaced by the first item (0).
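A minimal sketch of that kind of guard inside the credential's SetComboBoxSelectedValue implementation might look like this (SFI_COMBOBOX, _comboItems, and _selectedComboItemIndex are assumed names from the surrounding credential class):

    HRESULT MyCredential::SetComboBoxSelectedValue(DWORD dwFieldID, DWORD dwSelectedItem)
    {
        if (dwFieldID != SFI_COMBOBOX)              // assumed field-ID constant
            return E_INVALIDARG;

        // Guard against a spurious out-of-range index: fall back to item 0
        // instead of indexing the item vector with a garbage value.
        if (dwSelectedItem >= _comboItems.size())   // _comboItems: std::vector of items
            dwSelectedItem = 0;

        _selectedComboItemIndex = dwSelectedItem;
        return S_OK;
    }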

How do I emulate WebGL stuff with OSMesa on AWS Lambda?

I would like to take a screenshot of a web site that uses WebGL. I don't need a GPU to open the site; software emulation is enough for me.
I already tried headless Chrome to do this. It can take screenshots of ordinary web sites, but it does not work for WebGL canvases.
I think one possibility is using OSMesa or something similar to emulate OpenGL.
I have run out of ideas for overcoming this. Is this actually possible to do?
If yes, please tell me how to do it. If no, I would like to know why.
Thanks.
Yes it is possible!
You need the right combination of:
headless-chromium binary
libosmesa.so binary (in the same directory)
launching Chrome headless with the right flags, such as (see the link below for full details): ['--use-gl=osmesa', '--enable-webgl', '--ignore-gpu-blacklist', '--homedir=/tmp', '--single-process', '--data-path=/tmp/data-path', '--disk-cache-dir=/tmp/cache-dir']
This thread on the serverless-chrome github project discusses the issue and provides some binaries which I have used to capture screenshots of WebGL content on AWS Lambda using Page.captureScreenshot().
https://github.com/adieuadieu/serverless-chrome/issues/108#issuecomment-416494572
(See comment by #apalchys on 28th August)
This particular example uses SwiftShader which seems to be the preferred option going forward.
Note, however, that I was unable to create PDFs with Page.printToPDF() using this version: WebGL content just appears blank/white. I was able to get Page.printToPDF() working with an earlier version that uses OSMesa; see https://github.com/adieuadieu/serverless-chrome/issues/108#issuecomment-371199530

Detect limit switch reached

I'm having a bit of a problem with LabVIEW 6.x concerning the motion control functionality: how do I detect whether a limit switch has been reached? So far I haven't found a way to do that.
What I'm trying to do is detect the minimum and maximum position of the attached device. To do that, I initiate a movement using "Load Target Position" and "Start Movement" (I don't recall the exact name of the latter). Now, if the position reaches a limit, the movement stops. But how do I detect that it stopped because the physical limit was reached? I tried using the error output, but it just tells me that there is no error, although a message box tells me that the limit in the direction of travel has been reached. It also tells me that this is warning 70026, but that number doesn't appear anywhere else, particularly not in the error code where I expected it. I hope I have made clear what I'm trying to accomplish, and I am grateful for any help. Thanks in advance.
For completeness' sake:
You're looking at the wrong property: 'enabled' means that the device is equipped with a limit switch and is 'listening' to it.
'Active' means that the limit switch has actually been triggered.