Nsight Graphics and RenderDoc cannot trace application

I am stuck writing a Vulkan renderer. The final output I see on the screen is only the clear color, animated over time, but no geometry. Even with all possible validation turned on I don't get any errors, warnings, best-practices or performance hints, except for the best-practices warning "You are using VK_PIPELINE_STAGE_ALL_COMMANDS_BIT when vkQueueSubmit is called". I'm not actually sure I have every possible check enabled, but I have Vulkan Configurator running and ticked all checkboxes under "VK_LAYER_KHRONOS_validation", after which the vkQueueSubmit hint showed up.
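(For completeness: I believe the programmatic equivalent of those Configurator checkboxes is enabling the validation layer and chaining VkValidationFeaturesEXT into instance creation, roughly as sketched below; I have only used the GUI so far, so take this as a sketch rather than my actual code.)

    // Rough equivalent of the Configurator checkboxes at instance creation:
    // enable the Khronos validation layer and its best-practices checks.
    const char* layers[] = { "VK_LAYER_KHRONOS_validation" };

    VkValidationFeatureEnableEXT enables[] = {
        VK_VALIDATION_FEATURE_ENABLE_BEST_PRACTICES_EXT,
        VK_VALIDATION_FEATURE_ENABLE_SYNCHRONIZATION_VALIDATION_EXT
    };

    VkValidationFeaturesEXT validationFeatures{};
    validationFeatures.sType = VK_STRUCTURE_TYPE_VALIDATION_FEATURES_EXT;
    validationFeatures.enabledValidationFeatureCount = 2;
    validationFeatures.pEnabledValidationFeatures = enables;

    VkInstanceCreateInfo instanceInfo{};
    instanceInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    instanceInfo.pNext = &validationFeatures;   // chain the validation features
    instanceInfo.enabledLayerCount = 1;
    instanceInfo.ppEnabledLayerNames = layers;
    // ... pApplicationInfo, extensions, then vkCreateInstance(&instanceInfo, nullptr, &instance)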
After poking around for some hours I decided to try RenderDoc. I can start the application just fine, but RenderDoc says "Connection status: Established, Api: none" and I cannot capture a frame.
Quite confused, I thought I would try Nsight Graphics, only to hit the same problem: I can run the application, but it says "Attachable process detected. Status: No Graphics API". I read somewhere that I can start the process first and then use the attach functionality to attach to the running process, which I did, unfortunately with the same outcome.
I read there can be problems when not properly presenting every frame, which is why I change the clear color over time: to make sure I actually present every frame, which I can confirm is the case.
I am quite lost at this point. Has anyone had a similar experience? Any ideas as to what I could do to get RenderDoc / Nsight Graphics working properly? Neither shows anything in its logs; I guess they just assume the process does not use any graphics API and therefore won't be traced.
I am also thankful for ideas about why I cannot see my geometry, but I understand this is even harder to guess from your side. Still, some notes: I have forced depth and stencil tests off; although the vertices should be COUNTER_CLOCKWISE I have also tried CLOCKWISE just to make sure; I set the face cull mode off, checked the color write mask and rasterizerDiscard, and even set gl_Position to ignore the vertex positions and transform matrices completely and use random values in the range -1 to 1 instead. Basically everything that came to my mind when I hear "only clear color, but no errors", all to no avail.
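To be concrete, the relevant pipeline state during these experiments looks roughly like this (simplified, only meant to show which switches I am talking about):

    // Rasterization state while debugging: culling off, no rasterizer discard.
    VkPipelineRasterizationStateCreateInfo raster{};
    raster.sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO;
    raster.depthClampEnable        = VK_FALSE;
    raster.rasterizerDiscardEnable = VK_FALSE;   // nothing is rasterized if this is VK_TRUE
    raster.polygonMode             = VK_POLYGON_MODE_FILL;
    raster.cullMode                = VK_CULL_MODE_NONE;
    raster.frontFace               = VK_FRONT_FACE_COUNTER_CLOCKWISE;
    raster.lineWidth               = 1.0f;

    // Color blend attachment: blending off, all channels writable.
    VkPipelineColorBlendAttachmentState blendAttachment{};
    blendAttachment.blendEnable    = VK_FALSE;
    blendAttachment.colorWriteMask = VK_COLOR_COMPONENT_R_BIT | VK_COLOR_COMPONENT_G_BIT |
                                     VK_COLOR_COMPONENT_B_BIT | VK_COLOR_COMPONENT_A_BIT;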
In case it helps with anything: I am on Win11 using either an RTX 3070 or an Intel UHD 770, both with the same outcome.
Small Update:
Using the Vulkan Configurator I could force the VK_LAYER_RENDERDOC_capture layer on. When running the application I now see the overlay, and after pressing F12 it reports that it captured a frame. However, RenderDoc still cannot find a graphics API for this process, and I have no idea how to access that capture.
I then forced VK_LAYER_LUNARG_api_dump on and dumped the output to an HTML file, which I inspected, and I still cannot see anything wrong. I looked especially closely at the pipeline and render pass creation calls.
This left me thinking it might be the uniform / vertex buffer contents, offsets or the like, so I removed all of that and used hardcoded vertex positions and fragment outputs, and still I can only see the clear color in the final image on the screen.
Thanks

Maybe confused me should start converting the relative viewport that I expose into absolute values using my current camera's width and height, i.e. giving (0, 0, 1920, 1080) to Vulkan instead of (0, 0, 1, 1).
Holymoly what a ride
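For anyone finding this later: VkViewport expects absolute pixel values, so the fix boiled down to converting the engine's relative viewport before handing it to Vulkan, something like the sketch below (the rel* names and swapchainExtent are my own placeholders, just to illustrate):

    // Convert the engine's relative viewport (0..1) into the absolute pixel
    // values Vulkan expects, using the current swapchain extent.
    VkViewport viewport{};
    viewport.x        = relX * swapchainExtent.width;    // 0 * 1920 = 0
    viewport.y        = relY * swapchainExtent.height;   // 0 * 1080 = 0
    viewport.width    = relW * swapchainExtent.width;    // 1 * 1920 = 1920
    viewport.height   = relH * swapchainExtent.height;   // 1 * 1080 = 1080
    viewport.minDepth = 0.0f;
    viewport.maxDepth = 1.0f;
    vkCmdSetViewport(commandBuffer, 0, 1, &viewport);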

Related

Blender texture doesn't show up correctly after repeatedly baking to it, even though the UV mapping fits perfectly

The problem occurs when I repeatedly bake lighting & reflections via the Principled BSDF (Cycles) onto an "Image Texture" node. The first few times I get the expected results, then suddenly the mesh seems to be broken and keeps showing subsequent bakes incorrectly (image below).
Also, when I move an island in the UV map, nothing seems to change on the mesh in the 3D viewport. The UV texture looks unchanged no matter what I do, like it has frozen or something.
My Blender version is 2.92. I'm getting the same problem with 2.83.
I keep getting this problem over and over and I just can't find a solution. Even if I export the mesh into another project, it just "infects" the other project and I get the same problem there.
I can only repair it by completely starting over.
Please help me. I'm really frustrated with this. This has defeated my blender project now for like the 4th time... :/
> Screenshot example here <
It appears as if the generated texture coordinates are being used for some reason instead of the UV map coordinates. If the vector socket of the Image Texture node is unconnected, it should use the currently selected UV map.
This may actually be a bug if it's happening after multiple uses of the baking tool.
You should first try connecting the image texture's vector input to the UV output of a Texture Coordinate node to see if it has any effect. Alternatively, try connecting a UV Map node.

Acquiring a series of STEM images via an event listener

Recently I have been trying to attach an event listener to a live display so as to acquire a series of images automatically; the event map used is "data_value_changed". In TEM mode everything is fine and the 3D stack can be properly obtained. Unfortunately, when applying this to a live STEM image from the DigiScan, the script failed completely. Later on I realized that in this mode the image is updated pixel by pixel as the beam scans, rather than frame by frame. Another event map, "data_changed", was also tested but still ended in failure.
With DM 2.0 or a later version it seems to be much easier to acquire a series of customized STEM images, since DigiScan control is conveniently accessible via scripting. Unfortunately, our microscope is quite old and only has DM 1.5 installed.
Is there any event map specific to this purpose, or is the event-handler approach not suitable at all?
Thanks in advance
The data-changed event handler is suitable, but there is no event specific to a completed frame.
Instead, your event-handling code needs to be creative and deal with getting more events than you want. You are really only interested in the events which (also) change the last pixel of the image (as the frame is filled sequentially), but you get events whenever sub-parts of the image change.
So you need to "filter" those events out, as quickly and with as little CPU cost as possible.
The easiest way is to grab the last pixel's value at each event and compare it to a stored value. If the value changed, then this pixel has been updated, indicating the frame is "complete" and you want to use the event. Otherwise, just return without further action.
There is a very slim chance (for scanned images) that a "new" frame has a numerically identical value to the frame before, so in most cases this is all you need to do.
If this isn't enough for you, you could look at longer, but also more CPU-cycle-consuming, checks, such as computing a Boolean change map between "now" and "buffered" on each event and keeping track of the "last" changed index. Then, if there is a jump back to an earlier index, you know that your "buffered" last image actually was a full frame.
(Note that you will always see the data update once at the end of a frame, hence this will work.)
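In other words, conceptually (a sketch in C++-style code rather than DM script, with made-up names for the image access and the processing callback):

    #include <vector>
    #include <functional>

    // Conceptual sketch: filter "data changed" events so that only the event
    // which also updated the LAST pixel of the frame triggers the real work.
    struct FrameCompleteFilter {
        double lastPixelSeen = 0.0;   // buffered value of the final pixel

        // Called on every data-changed event; 'image' stands for the live frame buffer.
        void onDataChanged(const std::vector<double>& image,
                           const std::function<void()>& processFrame) {
            const double lastPixel = image.back();
            if (lastPixel != lastPixelSeen) {   // final pixel changed -> frame is complete
                lastPixelSeen = lastPixel;
                processFrame();                 // e.g. append the frame to the 3D stack
            }
            // otherwise return as cheaply as possible
        }
    };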
There is an example of this type of script in this answer here. If it isn't working for you, please comment or rephrase your question with more details on where you run into issues.

How do we get Qt to render to memory rather than a device?

I have an application that uses Qt 5.6 for various purposes and runs on an embedded device. Currently it renders via eglfs to a Linux framebuffer on an attached display, but I also want to be able to grab the data and send it to a single-color LED display unit (a device will have either that unit or a full video device, never both at the same time).
Based on what I've found on the net so far, the best approach is to:
turn off anti-aliasing;
set Qt up for 1 bit/pixel display device;
select a 1bpp font, no grey-scale allowed; and
somehow capture the graphics scene that Qt produces so I can transfer it to the display unit.
It's just that last one I'm having issues with. I suspect I need to create a surface of some description and inject that into the Qt display "stack", but I cannot find any good examples on how to do this.
How does one do this and, assuming I have it right, is there a synchronisation method used to ensure I'm only getting complete buffers from the surface (i.e., no tearing)?
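For what it's worth, the direction I've been imagining is rendering into a QImage in memory, roughly like the sketch below, assuming the UI is (or can be put into) a QGraphicsScene; the scene object and sizes are placeholders, and I don't know whether this is the right way to hook into the Qt display stack:

    #include <QGraphicsScene>
    #include <QImage>
    #include <QPainter>

    // Sketch: render a QGraphicsScene into a 1 bit-per-pixel image in memory.
    QImage grabSceneAs1bpp(QGraphicsScene& scene, int width, int height)
    {
        QImage image(width, height, QImage::Format_Mono);       // 1 bpp target
        image.fill(0);

        QPainter painter(&image);
        painter.setRenderHint(QPainter::Antialiasing, false);   // no anti-aliasing
        scene.render(&painter);
        painter.end();

        // image.constBits() now holds the packed 1 bpp pixels to push
        // to the LED display unit.
        return image;
    }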

Vulkan and transparent windows

I'm currently adapting my personal engine to Vulkan and I want to reimplement transparent windows, which I already had with OpenGL.
I thought that all I needed to do was select a color format with an alpha channel and set the compositeAlpha property of VkSwapchainCreateInfoKHR to VK_COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR.
However, clearing the window with a fully transparent color doesn't produce the expected result: the window is fully opaque.
Of course my window system, which hasn't changed since I used OpenGL, supports it, and when I simply disable the rendering I also can't click through at the supposed position of the window, which tells me it's there.
Are there any other required changes to make this work?
Some info:
The image format is VK_FORMAT_B8G8R8A8_UNORM and I based the Vulkan setup on Sascha Willems' examples.
That capability (like most others) has to be queried before use to see whether it is supported; otherwise it is invalid to use it.
This particular feature is queried via vkGetPhysicalDeviceSurfaceCapabilitiesKHR as pSurfaceCapabilities->supportedCompositeAlpha. It is a bitfield/flag set, so more than one mode, or none at all, can be supported.
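Roughly, the check could look like this (a sketch; error handling omitted, falling back to opaque is just one possible choice):

    // Query which composite-alpha modes the surface supports and fall back
    // to opaque if post-multiplied alpha is not among them.
    VkSurfaceCapabilitiesKHR caps{};
    vkGetPhysicalDeviceSurfaceCapabilitiesKHR(physicalDevice, surface, &caps);

    VkCompositeAlphaFlagBitsKHR compositeAlpha =
        (caps.supportedCompositeAlpha & VK_COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR)
            ? VK_COMPOSITE_ALPHA_POST_MULTIPLIED_BIT_KHR
            : VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;

    VkSwapchainCreateInfoKHR swapchainInfo{};
    swapchainInfo.sType          = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
    swapchainInfo.compositeAlpha = compositeAlpha;
    // ... rest of the swapchain setup as before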
I think the result/feature support may be influenced by the VkSurface, that is, by how the platform window was created. Or maybe the driver maker simply has not implemented it yet (despite the feature being supportable).
Since it worked for you before in OpenGL, the latter is more likely. But it couldn't hurt to play with the platform window creation parameters...
Dunno if this is still relevant, but I got it working with transparent windows through GLFW. (If you are not using GLFW you may dismiss this answer!)
As stated here, there are two ways of obtaining window transparency: framebuffer transparency (alpha bit), and window transparency.
For window transparency it is sufficient to call glfwSetWindowOpacity(GLFWwindow*, float), where the opacity value should be in the range (0, 1].
NOTE: Since GLFW does not support using both transparency methods at the same time, we must still use VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR in the compositeAlpha field of the VkSwapchainCreateInfoKHR object.
Window transparency may not be supported on all systems, which is why GLFW provides the function glfwGetWindowOpacity(GLFWwindow*) to check whether calling the first method was successful.
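Putting that together, a minimal sketch with GLFW 3.3+ (the window size and opacity value are just example numbers):

    #define GLFW_INCLUDE_VULKAN
    #include <GLFW/glfw3.h>

    int main()
    {
        glfwInit();
        glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);   // Vulkan window, no GL context

        GLFWwindow* window = glfwCreateWindow(1280, 720, "Vulkan", nullptr, nullptr);

        // Whole-window transparency; the swapchain's compositeAlpha can stay
        // VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR as noted above.
        glfwSetWindowOpacity(window, 0.5f);             // 50% opaque
        float applied = glfwGetWindowOpacity(window);   // check whether it took effect
        (void)applied;

        // ... create the VkSurfaceKHR via glfwCreateWindowSurface and continue as usual
        glfwDestroyWindow(window);
        glfwTerminate();
    }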

I cannot get a QTCaptureSession to Capture when in a Terminal Application

I've got a terminal application that needs to take a webcam picture and then perform some processing on it. I'm having trouble getting it to initialize. There's a fairly complete demo in the Apple docs, an app called MyRecorder that uses QTKit, which I was able to get working fine. I was also able to modify it to grab a single frame instead of a stream.
When I move this to a terminal application, startRunning on the QTCaptureSession simply does nothing. There are no errors, and everything reports success, but my webcam doesn't light up and no frames are captured.
Any idea what's going on here? Are there any kind of security restrictions, or other kinds of restrictions that would prevent the QTCaptureSession from working?
So switching to AVFoundation solved my problem. I'm still not certain what the issue was, but for now AVFoundation seems like the way to go, since it was designed to replace QTKit anyway.