Generated SPIR-V with -fvk-use-scalar-layout flag using DirectxShaderCompiler causes validation layer error - vulkan

I have a fairly simple HLSL shader that is being compiled into SPIR-V using DirectXShaderCompiler. However, using scalar layout causes a validation layer error. I have enabled the VK_EXT_SCALAR_BLOCK_LAYOUT_EXTENSION_NAME extension when creating the VkDevice. Is this a validation layer bug, a dxc bug, or do I need an additional flag when generating the SPIR-V?
Command to generate SPIR-V:
COMMAND $ENV{VULKAN_SDK}/bin/dxc -spirv -fvk-use-scalar-layout -fvk-invert-y -T vs_6_0 -E ${vertexEntry} ${file} -Fo ${CMAKE_SOURCE_DIR}/Assets/${vertexEntry}.spv
Validation layer error:
Validation Error: [ UNASSIGNED-CoreValidation-Shader-InconsistentSpirv ] Object 0: handle = 0x1e320175dc0, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0x6bbb14 | SPIR-V module not valid: Structure id 7 decorated as BufferBlock for variable in Uniform storage class must follow relaxed storage buffer layout rules: member 1 is an improperly straddling vector at offset 12
%Vertex = OpTypeStruct %v3float %v3float %v2float
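For context on what the validator is objecting to: with scalar layout, dxc packs members back-to-back at 4-byte (scalar) alignment, so the second float3 starts at offset 12 and occupies bytes 12..24, crossing a 16-byte boundary. The relaxed storage-buffer rules the validator is applying (rather than scalar rules) forbid a vector from crossing such a boundary, which is what "improperly straddling vector at offset 12" refers to. A small sketch of the arithmetic, assuming the struct is the float3/float3/float2 Vertex shown above:

```python
# Scalar layout: members packed back-to-back at 4-byte (scalar) alignment.
# Vertex = { float3 position; float3 normal; float2 uv; }
sizes = {"position": 12, "normal": 12, "uv": 8}

offsets, cursor = {}, 0
for name, size in sizes.items():
    offsets[name] = cursor
    cursor += size

# Member 1 (normal) starts at offset 12...
assert offsets["normal"] == 12

# ...and spans bytes [12, 24), crossing the 16-byte boundary at byte 16.
# Under the relaxed rules (but not scalar rules) that is an
# "improperly straddling" vector, matching the validator message.
start = offsets["normal"]
end = start + sizes["normal"]
straddles_16_byte_boundary = (start // 16) != ((end - 1) // 16)
assert straddles_16_byte_boundary
```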

Why isn't my command buffer in the right state?

I have made a command pool that does not have the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT flag set. I submit command buffers, and when those command buffers have completed I reset the pool. I keep the command buffer around, reuse it, and begin recording with VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT. The validation layer gives me this error:
[VULKAN][INFO] Vulkan validation layer callback: Validation Error: [ VUID-vkBeginCommandBuffer-commandBuffer-00050 ] Object 0: handle = 0x20db70f20c0, type = VK_OBJECT_TYPE_COMMAND_BUFFER; Object 1: handle = 0xcb1c7c000000001b, type = VK_OBJECT_TYPE_COMMAND_POOL; | MessageID = 0xb24f00f5 | Call to vkBeginCommandBuffer() on VkCommandBuffer 0x20db70f20c0[] attempts to implicitly reset cmdBuffer created from VkCommandPool 0xcb1c7c000000001b[] that does NOT have the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT bit set. The Vulkan spec states: If commandBuffer was allocated from a VkCommandPool which did not have the VK_COMMAND_POOL_CREATE_RESET_COMMAND_BUFFER_BIT flag set, commandBuffer must be in the initial state
Now, isn't it in the initial state after I reset the pool?

How to run query with lists and sets in cuDF

I am using cudf (dask-cudf) to handle tens of billions of rows of social media data. I'm trying to use query to extract only the relevant users from the full data set.
However, unlike pandas, cudf's query errors if I pass it a list or set.
The environment is Anaconda with RAPIDS 22.12 and CUDA 11.4.
The error is as follows:
TypingError: Failed in cuda mode pipeline (step: nopython frontend)
Internal error at <numba.core.typeinfer.CallConstraint object at 0x7f381a6097f0>.
Failed in cuda mode pipeline (step: native lowering)
Failed in nopython mode pipeline (step: native lowering)
NRT required but not enabled
During: lowering "$6for_iter.1 = iternext(value=$phi6.0)" at /home/user/.pyenv/versions/anaconda3-2020.11/envs/rapids-22.12/lib/python3.8/site-packages/numba/cpython/listobj.py (664)
During: lowering "$6compare_op.2 = src in __CUDF_ENVREF__test" at <string> (2)
During: resolving callee type: type(CUDADispatcher(<function queryexpr_5ee033e5bcab9f09 at 0x7f381b909ee0>))
During: typing of call at <string> (6)
Enable logging at debug level for details.
File "<string>", line 6:
<source missing, REPL/exec in use?>
The test code is as follows. df is a cudf.DataFrame holding an edge list with "src" and "dst" columns:
test = list(test_userid)[0:2]
df.query("(src==#test)or(dst==#test)") # ok if test is a single value, not a list
df.query("src.isin(#test)") # fails
df.query("src in #test") # fails
df.query("src==#test") # fails
It is not essential to use query, so if there is a way to extract the rows other than query, I would like to know that as well.
I have confirmed that the same code extracts successfully in pandas. Also, cudf's query works correctly with a single value, just not with a list.
I believe it should work even when passing lists to cudf.
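Since query is not essential here, one alternative that sidesteps the numba typing problem entirely is boolean masking with Series.isin, which cudf supports with the same API as pandas. The sketch below uses pandas purely as a stand-in, with hypothetical sample data; swapping the import for cudf should behave the same:

```python
import pandas as pd  # stand-in: cudf.DataFrame exposes the same isin / boolean-mask API

# Hypothetical edge list with "src" and "dst" columns, like the df in the question
df = pd.DataFrame({"src": [1, 2, 3, 4], "dst": [2, 3, 1, 5]})
test = [1, 5]  # user IDs to extract

# Boolean masking with Series.isin avoids query() and its list-typing error
mask = df["src"].isin(test) | df["dst"].isin(test)
result = df[mask]  # keeps only edges touching a user in `test`
```

With dask-cudf the same expression should work as well, evaluated lazily per partition, since this path does not go through the numba-compiled query kernel where the NRT error above appears to originate.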

How to fix VK_KHR_portability_subset error on mac m1 while following vulkan tutorial

Hi, I'm getting a validation error when running the program; apparently I'm missing an extension:
validation layer: Validation Error: [ VUID-VkDeviceCreateInfo-pProperties-04451 ] Object 0: handle = 0x1055040c0, type = VK_OBJECT_TYPE_PHYSICAL_DEVICE; | MessageID = 0x3a3b6ca0 | vkCreateDevice: VK_KHR_portability_subset must be enabled because physical device VkPhysicalDevice 0x1055040c0[] supports it The Vulkan spec states: If the [VK_KHR_portability_subset] extension is included in pProperties of vkEnumerateDeviceExtensionProperties, ppEnabledExtensions must include "VK_KHR_portability_subset". (https://vulkan.lunarg.com/doc/view/1.2.176.1/mac/1.2-extensions/vkspec.html#VUID-VkDeviceCreateInfo-pProperties-04451)
I naively added "VK_KHR_portability_subset" to the deviceExtension vector and then got a second error that seems similar to the previous one.
validation layer: Validation Error: [ VUID-vkCreateDevice-ppEnabledExtensionNames-01387 ] Object 0: VK_NULL_HANDLE, type = VK_OBJECT_TYPE_INSTANCE; | MessageID = 0x12537a2c | Missing extension required by the device extension VK_KHR_portability_subset: VK_KHR_get_physical_device_properties2. The Vulkan spec states: All required extensions for each extension in the VkDeviceCreateInfo::ppEnabledExtensionNames list must also be present in that list (https://vulkan.lunarg.com/doc/view/1.2.176.1/mac/1.2-extensions/vkspec.html#VUID-vkCreateDevice-ppEnabledExtensionNames-01387)
I added "VK_KHR_get_physical_device_properties2" to the deviceExtension vector and then got a third error:
libc++abi: terminating with uncaught exception of type std::runtime_error: failed to find a suitable GPU!
The thing is, it previously recognized that I was using an M1 chip, but now no information about the device shows up :(
I added to the deviceExtension vector "VK_KHR_get_physical_device_properties2"
VK_KHR_get_physical_device_properties2 is an instance extension, and as such it belongs in the extension list passed to vkCreateInstance, not vkCreateDevice.
Reportedly, that fixed the issue.
I have encountered the same message on the same device, and after some research understood that it's not an error, just a warning. If you don't really need to handle this special case, you can just ignore it.

vkMapMemory validation error on vkQueuePresentKHR, but never called the function directly

When calling vkQueuePresentKHR I get the following validation error:
Validation Error: [ VUID-vkMapMemory-size-00680 ] Object 0: handle = 0x8483000000000025, type = VK_OBJECT_TYPE_DEVICE_MEMORY; | MessageID = 0xff4787ab | VkMapMemory: Attempting to map memory range of size zero The Vulkan spec states: If size is not equal to VK_WHOLE_SIZE, size must be greater than 0 (https://vulkan.lunarg.com/doc/view/1.2.148.0/windows/1.2-extensions/vkspec.html#VUID-vkMapMemory-size-00680)
I never called vkMapMemory() directly.
Here is an excerpt of my code: https://gist.github.com/alexandru-cazacu/7847161564daa5f93d1bada39280faa8
Closing RivaTuner Statistics Server fixed the issue.
I had 3 other validation errors caused by it.
As stated in https://vulkan-tutorial.com/FAQ, under "I get an access violation error in the core validation layer": make sure that MSI Afterburner / RivaTuner Statistics Server is not running, because it has some compatibility problems with Vulkan.

NuPIC OPF Runtime error getOutputData unknown output categoriesOut

I'm trying to run a TemporalClassification model using the OPF to recognize patterns from a stream. I've adjusted the model params so it has two sensor inputs: a ScalarEncoder and an SDRCategoryEncoder. The latter is marked as classifierOnly and is also set as the predictedField in the inference settings.
When trying to feed the model input data I get:
RuntimeError: getOutputData unknown output 'categoriesOut' on region Classifier.
A NontemporalClassification model (only inferenceType changed) runs without this error.
I've found 6 occurrences of categoriesOut in the nupic code: https://github.com/numenta/nupic/search?utf8=%E2%9C%93&q=categoriesOut
The error arises in nupic/frameworks/opf/clamodel.py at line 558:
classificationDist = classifier.getOutputData('categoriesOut')
It seems that the classifier region in the network is not configured to produce this output.
Can anyone explain why the classifier region doesn't have 'categoriesOut'? I guess there's a misconfiguration in my model params, but there were no errors or warnings during model initialization. Are there any mandatory parameters or assignments (besides those noted in the NuPIC documentation) necessary for a TemporalClassification model to run?
There are several types of classifier regions in NuPIC; you can find them in the nupic/regions folder. I've checked the sources and found that 'categoriesOut' is in the outputs dict of KNNClassifierRegion:
https://github.com/numenta/nupic/blob/469f6372082e95dd5d2a96181b745ba36d2e7a8a/nupic/regions/KNNClassifierRegion.py
outputs=dict(
    categoriesOut=dict(
        description='A vector representing, for each category '
            'index, the likelihood that the input to the node belongs '
            'to that category based on the number of neighbors of '
            'that category that are among the nearest K.',
        dataType='Real32',
        count=0,
        regionLevel=True,
        isDefaultOutput=True),
Ensure you use KNNClassifierRegion when configuring your TemporalClassification model. The samples for NontemporalClassification use CLAClassifier, but CLAClassifierRegion has no categoriesOut in its outputs, and the error described in your question will arise if you keep
'regionName' : 'CLAClassifierRegion'
for a TemporalClassification model.
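Concretely, the change amounts to pointing clParams at the kNN region. The fragment below is only a sketch: the surrounding keys follow the usual OPF model-params layout, and the kNN parameter values (k, distanceMethod) are illustrative placeholders, not a tested configuration:

```python
# Sketch of the clParams change, assuming the usual OPF model-params layout.
# The kNN parameter values here are illustrative placeholders.
MODEL_PARAMS = {
    "model": "CLA",
    "modelParams": {
        "inferenceType": "TemporalClassification",
        "clParams": {
            # KNNClassifierRegion exposes 'categoriesOut'; CLAClassifierRegion does not.
            "regionName": "KNNClassifierRegion",
            "k": 1,
            "distanceMethod": "rawOverlap",
        },
    },
}
```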