Input attachments and multisampling - Vulkan

I know how to use input attachments and multisampling separately, but I don't understand how these two features can be used together.
I have a render pass with 2 subpasses and 4 attachments:
an image, which is presented;
a multisampled image with samples=N, which is rendered and resolved (into image #1) in the second subpass;
a multisampled depth image with samples=N, which is rendered in both subpasses;
a multisampled image with samples=N, which is used as color attachment in the first subpass and input attachment in the second subpass.
If N equals VK_SAMPLE_COUNT_1_BIT, everything works fine. But if N equals VK_SAMPLE_COUNT_4_BIT, I get a lot of errors:
vkCreateRenderPass returns an error code on the Mi A1;
vkCreateRenderPass returns success on the Mi A2 Lite, but I get a lot of warnings from the validation layers:
Descriptor set 0x28107 encountered the following validation error at vkCmdDraw() time: Descriptor in binding #0 at global descriptor index 0 requires bound image to have VK_SAMPLE_COUNT_1_BIT but got VK_SAMPLE_COUNT_4_BIT.
Questions:
Is it prohibited to use multisampled input attachments? I can't find anything about this in the spec.
I could resolve image #4 in the first subpass into some other image (#5) and use that as the input attachment in the second subpass. Is that the only way to fix this problem?

"I don't change shaders."
Then that's your problem. The multisampling status of a texture is part of its GLSL type. You cannot fetch from a multisample input texture through a subpassInput; you have to use a subpassInputMS.
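For reference, a minimal GLSL sketch of the change (the input_attachment_index, set, and binding values are placeholders):

// Before - only valid when the bound image has a single sample:
layout (input_attachment_index = 0, set = 0, binding = 0) uniform subpassInput inColor;
// vec4 c = subpassLoad(inColor);

// After - for a multisampled image, declare subpassInputMS and pass a sample
// index to subpassLoad (e.g. gl_SampleID when shading per sample, which
// requires sample-rate shading, or a loop over all N samples):
layout (input_attachment_index = 0, set = 0, binding = 0) uniform subpassInputMS inColorMS;
// vec4 c = subpassLoad(inColorMS, gl_SampleID);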

Related

How to display more than one image dynamically with Python (PyQt5)?

I'm using Python to build an application with PyQt5. The main idea is to display images from a directory, and for each image show some masks next to it so one of them can be selected (i.e. each image has a different number of masks). So I want to display these masks in a "container" that can handle this dynamically.
I tried a QListWidget in a scroll area, and the masks are displayed as icons in it, but I want to use something else that makes them easier to handle, and lets me retrieve any mask as a QPixmap or QImage for use in other operations without converting it multiple times (which slows down the program).
This is the part that displays the masks (the only way that worked for me):
annIndex = 0
for ann in annotations:
    mask = self.coco.annToMask(ann)                    # binary mask as a numpy array
    mask_img = Image.fromarray((mask * 255).astype(np.uint8))
    qt_img = ImageQt.ImageQt(mask_img)                 # convert PIL Image to a QImage
    icon = QtGui.QIcon(QtGui.QPixmap.fromImage(qt_img))
    item = QtWidgets.QListWidgetItem(icon, f"{annIndex}")
    self.listWidget.addItem(item)
    annIndex += 1
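If you also want each mask back later as a QPixmap without converting it again, one option (a sketch; it assumes QtCore is imported from PyQt5) is to store the pixmap on the item itself with setData:

pixmap = QtGui.QPixmap.fromImage(qt_img)
item = QtWidgets.QListWidgetItem(QtGui.QIcon(pixmap), f"{annIndex}")
item.setData(QtCore.Qt.UserRole, pixmap)   # keep the QPixmap with the item
self.listWidget.addItem(item)

# later, e.g. in an itemClicked handler, no reconversion needed:
pixmap = item.data(QtCore.Qt.UserRole)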

Vulkan: (MSAA) Resolve only some color attachments of a subpass

I think I'm setting things up correctly, but I'm still getting validation errors.
I'm trying to define a subpass with 2 color attachments: the first is the swapchain surface, and the second is a regular color attachment.
I want to perform multi-sampling on the second color attachment, but not the first.
Since there are 2 color attachments, I provide 2 entries in pResolveAttachments in the VkSubpassDescription. The first entry, corresponding to the swapchain color attachment, has .attachment set to VK_ATTACHMENT_UNUSED. That is:
pResolveAttachments[0].attachment = VK_ATTACHMENT_UNUSED
From my understanding, this should prevent Vulkan from performing a resolve operation on pColorAttachments[0].
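In code, the setup described above looks roughly like this (a sketch; attachment indices 0/1/2 are placeholders for the swapchain image, the multisampled color attachment, and its resolve target):

VkAttachmentReference colorRefs[2] = {
    { 0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL },
    { 1, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL },
};
VkAttachmentReference resolveRefs[2] = {
    { VK_ATTACHMENT_UNUSED, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL }, // no resolve for attachment 0
    { 2, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL },                    // resolve attachment 1 into 2
};

VkSubpassDescription subpass = {};
subpass.pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpass.colorAttachmentCount = 2;
subpass.pColorAttachments    = colorRefs;
subpass.pResolveAttachments  = resolveRefs;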
Per spec:
if pResolveAttachments is not NULL, each of its elements corresponds to a color attachment (the element in pColorAttachments at the same index), and a multisample resolve operation is defined for each attachment. At the end of each subpass, multisample resolve operations read the subpass’s color attachments, and resolve the samples for each pixel within the render area to the same pixel location in the corresponding resolve attachments, unless the resolve attachment index is VK_ATTACHMENT_UNUSED.
However, I get validation errors, both in Nvidia Nsight and in my log file. What am I doing wrong?
A Vulkan limitation that subsumes this error is that all color attachments of a subpass need to have the same sample count. So, regardless of the resolve setup, this particular use case is not valid: the first color attachment is the swapchain surface, and the second is a multisampled color attachment, but swapchain images can only have a sample count of 1.
The overarching error is about color attachments not having the same sample count.

GIMP Script-fu changing default scripts

I am having issues rewriting one of the default logo scripts in GIMP (using Script-fu, which is based on Scheme). For one thing, the alpha channel is not shown in the layer browser after the image is displayed. I am rewriting the Create Neon Logo script (neon-logo.scm) and I want it to do the following before it displays the new image:
add an alpha channel
change black (the background color) to transparent via color-to-alpha
return the generated image as an object to be used in another Python script (which uses for loops to generate 49 images)
I have tried adding the following code to the default script:
(gimp-image-undo-disable img)
(apply-neon-logo-effect img tube-layer size bg-color glow-color shadow) ; generates the neon logo
(set! end-layer (car (gimp-image-flatten img)))                         ; flattens the image
(gimp-layer-add-alpha end-layer)                                        ; adds an alpha channel to the flattened layer
(plug-in-colortoalpha img 0 '(255 255 255))                             ; color to alpha - NOT WORKING
(gimp-image-undo-enable img)                                            ; re-enables undo
(gimp-display-new img)                                                  ; displays the new image
For number 3, my Python code is this:
for str1 in list1:
    for color1 in list3:
        img = pdb.script_fu_neon_logo(str1, 50, "Swis721 BdOul BT", (0, 0, 0), color1, 0)
But img is a "NoneType" object. I would like it so that instead of displaying the generated image in a new window, the script just returns the generated image for use in my Python script.
Can anyone help?
Maybe to keep everything more manageable and readable, you should translate the original script into Python - that way you will have no surprises on otherwise trivial things such as variable assignment, picking elements from sequences, and so on.
1 and 2) Your calls to flatten the image and add an alpha channel (not "alpha layer", as you write) are apparently correct - but you are calling color-to-alpha to make white (255 255 255) transparent, not black. Try changing that to (0 0 0) - if it does not work, make each of the calls individually, either in the Script-fu console or in the Python console, and check what is wrong.
3) Script-fu can't return values to the caller (as can be seen from the register call not having a "return value type" parameter). That means that scripts written in Scheme in GIMP can only render things on their own and cannot be used to compose more complex chains.
That leaves you with 2 options: port the original script to Python-fu (and then just register it to return a PF-IMAGE) - or hack around the call like this, in Python: create a set with all images opened, call your Script-fu, then check which of the currently open images is not in the set of images previously opened - that will be your new image.
The tricky part is that there is no unique identifier for an image when you see it from Python-fu - so you'd have to compose a value like (name, number_of_layers, size) to go in those comparison sets, and even that might not suffice - or you could juggle with "parasites" (arbitrary data that can be attached to an image). As you can see, having the original Script-fu rewritten in Python is preferable, since all the work is done by PDB calls, and these translate 1:1.
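A rough sketch of that work-around in Python-fu (the key tuple is only an approximation, as noted above, and may collide):

import gimp
pdb = gimp.pdb

def image_key(im):
    # No stable unique id is exposed, so approximate one:
    return (im.name, len(im.layers), im.width, im.height)

before = set(image_key(im) for im in gimp.image_list())
pdb.script_fu_neon_logo(str1, 50, "Swis721 BdOul BT", (0, 0, 0), color1, 0)
new_images = [im for im in gimp.image_list() if image_key(im) not in before]
img = new_images[0] if new_images else None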

GDAL appears to ignore NoDataValue

I'm trying to build a mosaic, and I rely on the NoDataValue feature to treat some parts of the image as transparent.
However, it appears that GDAL doesn't work as expected.
I also created a very simple test case using a VRT dataset and gdal_translate - and I get the same result (that is, the second image draws over the first, ignoring the "transparent areas").
I have two 100x100 image files with a white marking (different in each file) over a black background (black being exactly equal to 0).
I built a simple VRT file:
<VRTDataset rasterXSize="100" rasterYSize="100">
  <VRTRasterBand dataType="Byte" band="1">
    <ColorInterp>Gray</ColorInterp>
    <SimpleSource>
      <SourceFilename relativeToVRT="1">a1.tif</SourceFilename>
      <SourceBand>1</SourceBand>
      <SrcRect xOff="0" yOff="0" xSize="100" ySize="100"/>
      <DstRect xOff="0" yOff="0" xSize="100" ySize="100"/>
      <HideNoDataValue>1</HideNoDataValue>
      <NoDataValue>0</NoDataValue>
    </SimpleSource>
    <SimpleSource>
      <SourceFilename relativeToVRT="1">a2.tif</SourceFilename>
      <SourceBand>1</SourceBand>
      <SrcRect xOff="0" yOff="0" xSize="100" ySize="100"/>
      <DstRect xOff="0" yOff="0" xSize="100" ySize="100"/>
      <HideNoDataValue>1</HideNoDataValue>
      <NoDataValue>0</NoDataValue>
    </SimpleSource>
  </VRTRasterBand>
</VRTDataset>
and I run the command:
gdal_translate mosaic.vrt mosaic.tif
The result is identical to image a2.tif, instead of being a combination of a1.tif and a2.tif.
I get the same behavior with GDAL 1.8 and 1.9.
Any ideas?
I got an answer on the gdal-dev list from Even Rouault:
Several errors:
The NoDataValue and HideNoDataValue elements are only valid under the VRTRasterBand element, not under SimpleSource.
You want to change SimpleSource to ComplexSource and add a <NODATA>0</NODATA> element to it (basically, rename your current NoDataValue to NODATA).
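With that fix applied, each source in the band would look like this (shown for a1.tif; the a2.tif source is identical apart from the filename):

<ComplexSource>
  <SourceFilename relativeToVRT="1">a1.tif</SourceFilename>
  <SourceBand>1</SourceBand>
  <SrcRect xOff="0" yOff="0" xSize="100" ySize="100"/>
  <DstRect xOff="0" yOff="0" xSize="100" ySize="100"/>
  <NODATA>0</NODATA>
</ComplexSource>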

DirectShow "Color Space Converter" filter configuration problem (VMR windowless renderer)

I'm using VMR to mix a bitmap with a video stream. I run the renderer in windowless mode.
Since I need to have more than 1 stream on the renderer, I add the renderer to the graph first and then use IFilterGraph2::RenderEx with AM_RENDEREX_RENDERTOEXISTINGRENDERERS.
Everything works fine most of the time, but I have one .avi file that renders fine with RenderFile, but ends up displaying all black when rendered in my graph. I compared the two graphs in GraphEdit, and they're the same:
capture.avi -> AVI Splitter -> Color Space Converter -> Video Renderer
The only difference between the graphs is that the Color Space Converter is set up differently: GraphEdit shows the following settings in the graph that works:
Input:
Major Type: Video
Sub Type: ARGB32
...
XForm Out:
Major Type: Video
Sub Type: RGB32
Whereas in my graph it shows:
Input: (same)
XForm Out:
Major Type: Video
Sub Type: ARGB32
So it looks like the converter is basically doing nothing. I have looked around and was not able to find any configuration interface for the Color Space Converter filter. I've also tried different things with IPin::QueryAccept and IFilterGraph2::ReconnectEx on the VMR input pin and the Color Space Converter output pin to try to force the output of the converter filter to RGB32, but I haven't had much luck. Hopefully somebody here can point me in the right direction!
As far as I know the Color Space Converter filter does not have a configuration interface, but you don't need one either. You can force the Color Space Converter filter to convert to RGB32 by inserting a filter that only accepts RGB32. The TransNull32 filter from the RGBFilters example does exactly this. Your graph will look like this:
capture.avi -> AVI Splitter -> Color Space Converter -> TransNull32 -> Video Renderer
See also Regarding the scope of Sample Grabber in DirectShow, where I explained how to use the TransNull24 filter.
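A minimal sketch of the graph-building side, under some assumptions: pGraph is your IFilterGraph2* already holding the VMR (pVmr), pSplitterOut is the AVI Splitter's video output pin, pNull32 is an instance of the TransNull32 sample filter, and GetPin is a hypothetical helper returning the filter's first pin of the given direction:

HRESULT hr = pGraph->AddFilter(pNull32, L"TransNull32");
if (SUCCEEDED(hr))
    // Intelligent Connect inserts the Color Space Converter here; since
    // TransNull32 only accepts RGB32, the converter has to output RGB32.
    hr = pGraph->Connect(pSplitterOut, GetPin(pNull32, PINDIR_INPUT));
if (SUCCEEDED(hr))
    hr = pGraph->Connect(GetPin(pNull32, PINDIR_OUTPUT), GetPin(pVmr, PINDIR_INPUT));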