Why is VK_SAMPLE_COUNT_1_BIT an invalid choice for multisampling in Vulkan? - vulkan

Hello people of StackOverflow,
I am currently working on a game engine using the Vulkan graphics API. In the past I simply set anti-aliasing to the maximum the hardware supported, but today I tried to turn it off (to improve performance on weaker systems). To do this I set the MSAA sample count in my engine to VK_SAMPLE_COUNT_1_BIT, which produced this validation error:
Validation Error: [ VUID-VkSubpassDescription-pResolveAttachments-00848 ] Object 0: handle = 0x55aaa6e32828, type = VK_OBJECT_TYPE_DEVICE; | MessageID = 0xfad6c3cb | ValidateCreateRenderPass(): Subpass 0 requests multisample resolve from attachment 0 which has VK_SAMPLE_COUNT_1_BIT. The Vulkan spec states: If pResolveAttachments is not NULL, for each resolve attachment that is not VK_ATTACHMENT_UNUSED, the corresponding color attachment must not have a sample count of VK_SAMPLE_COUNT_1_BIT (https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#VUID-VkSubpassDescription-pResolveAttachments-00848)
I can work around this problem relatively easily, so it isn't really blocking me, but I was wondering why exactly this limit is in place. If I want to set the MSAA sample count to 1, why can't I?
Thanks,
sckzor

A sample count of 1 means "not a multisampled image". If you're doing a multisample resolve, resolving from a non-multisampled image doesn't make sense. This is also why you can't use such images for anything else that expects a multisampled image (you can't use an MS-style sampler or texture function on them, for example).
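In practice the workaround is to make the resolve attachment conditional on the sample count. A minimal C sketch of the idea, where msaaSamples, colorRef, and resolveRef are assumed names from a typical single-subpass setup rather than code from the question:

VkSubpassDescription subpass = {0};
subpass.pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpass.colorAttachmentCount = 1;
subpass.pColorAttachments    = &colorRef;
/* Only request a resolve when the color attachment is actually
 * multisampled; resolving from VK_SAMPLE_COUNT_1_BIT is invalid. */
subpass.pResolveAttachments  =
    (msaaSamples == VK_SAMPLE_COUNT_1_BIT) ? NULL : &resolveRef;

The same branch also has to drop the separate multisampled color image from the render pass and framebuffer when MSAA is off, since the swapchain image then becomes the only color target.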

Related

GNURadio Companion and OFDM TX and RX in single Graph

I am following this GitHub example for understanding OFDM in gnuradio-companion. I can execute ofdm_tx individually (64 and 512 FFT points) without any issues, but when I connect the two in a single graph, I only get a spectrum from ofdm_tx (no output from ofdm_rx, just a straight line).
My question: each time I close my output spectrum, the tool hangs, and in the background (inside gnuradio-companion) I observe the following message train (screenshot attached). I observe the same thing when I run ofdm_rx individually.
Error message in Console :
packet_headerparser_b :info: Detected an invalid packet at item 1448.
header_payload_demux :info: parser returned #f
Please guide me in this regard,
By selecting "No" for the vector source's "Repeat" variable, the hang was sorted out, but I am no longer able to see the spectrum.

vkMapMemory validation error on vkQueuePresentKHR, but never called the function directly

When calling vkQueuePresentKHR I get the following validation error:
Validation Error: [ VUID-vkMapMemory-size-00680 ] Object 0: handle = 0x8483000000000025, type = VK_OBJECT_TYPE_DEVICE_MEMORY; | MessageID = 0xff4787ab | VkMapMemory: Attempting to map memory range of size zero The Vulkan spec states: If size is not equal to VK_WHOLE_SIZE, size must be greater than 0 (https://vulkan.lunarg.com/doc/view/1.2.148.0/windows/1.2-extensions/vkspec.html#VUID-vkMapMemory-size-00680)
I never called vkMapMemory() directly.
Here is an excerpt of my code: https://gist.github.com/alexandru-cazacu/7847161564daa5f93d1bada39280faa8
Closing RivaTuner Statistics Server fixed the issue. I had three other validation errors caused by it as well.
As stated in the https://vulkan-tutorial.com/FAQ:
"I get an access violation error in the core validation layer: Make sure that MSI Afterburner / RivaTuner Statistics Server is not running, because it has some compatibility problems with Vulkan."
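For reference, the rule the message cites is only about the parameters of the map call itself; a well-formed call (the device and memory handles here are assumed) passes either a positive size or VK_WHOLE_SIZE:

void* data = NULL;
/* Mapping the whole allocation; VK_WHOLE_SIZE can never describe the
 * zero-size range the validation layer complained about. */
vkMapMemory(device, memory, 0, VK_WHOLE_SIZE, 0, &data);
/* ... use data ... */
vkUnmapMemory(device, memory);

In this case the zero-size map came from RivaTuner's overlay hook rather than from the application's own code.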

Log messages classification/grouping and finding human readable pattern for each group

As someone new to data science and machine learning, I would like to ask the following questions about the problem explained below:
Is machine learning a good fit for such a problem, or is it overkill?
Could this problem be related to another classical problem that already has published papers, so that I can choose the right solution?
The problem:
I've been researching a pretty interesting problem that I believe many analytics systems solve with an automated process.
We collect many JavaScript error messages that occur in all kinds of browsers and custom-built web applications. Our goal is to group similar messages and label each group with the common pattern the grouped messages share.
Example:
+---------------------------------------------------------------+
|Label: "Forbidden: User session {{placeholder1}} has expired." |
+---------------------------------------------------------------+
|Message: "Forbidden: User session aad3-1v299-4400 has expired."|
|Message: "Forbidden: User session jj41-1d333-bbaa has expired."|
|Message: "Forbidden: User session aab3-bn12n-1111 has expired."|
+---------------------------------------------------------------+
So far we have a semi-automated process that solves the problem, but from time to time new user-generated JavaScript error messages slip through our filters.
I've been thinking about a naive two-step approach that uses existing libraries/tools/algorithms:
For a batch of error lines, run an algorithm (e.g. Levenshtein distance) that finds similar strings, and group the similar errors.
Within each group of similar strings, run a diff and check it to find the dynamic parts.
For reference, here are error messages that were collected over a period of one minute:
Message: 3312445,Error: Unknown page "retina_list"
Message: 9931234,Error: Unknown page "widget_summary"
Message: ReferenceError: 'alg,TypeError: g' is undefined
Message: 522574,Error: Unknown page "page_options"
Message: ReferenceError: '297756| Zly / Error in handler for event:,[object Object],ApiListenerError: TypeError: a' is undefined
Message: [Euv warn]: style="width: {{item.evaluation}}em": interpolation in 'style' attribute will cause the attribute to be discarded in Internet Explorer. Use krt-bind:style instead. (found in component: <default-componentfalse2320383>)
Message: [Euv warn]: src="//www.example.com/image/{{item._id}}-1.jpg?w=220&h=165&mode=crop": interpolation in 'src' attribute will cause a 404 request. Use krt-bind:src instead. (found in component: <default-componentfalse8372912>)
Message: [Euv warn]: src="//www.example.com/image/{{item._id}}?car=recommend_sp312": interpolation in 'src' attribute will cause a 404 request. Use krt-bind:src instead. (found in component: <default-componentfalse3330736>)
Message: [Euv warn]: src="//www.example.com/image/{{item._id}}-1.jpg?w=220&h=165&mode=crop": interpolation in 'src' attribute will cause a 404 request. Use krt-bind:src instead. (found in component: <default-componentfalse4893336>)
Message: ReferenceError: 'alg,TypeError: g' is undefined
Message: 73276| Zly / Error in handler for event:,[object Object],ApiListenerError: TypeError: Cannot read property 'campaignName' of undefined
Message: ReferenceError: 'alg,TypeError: g' is undefined
Message: ReferenceError: 'bend,TypeError: f' is undefined
I've been playing lately with TensorFlow.js, where I am a complete beginner, but I may try to train something that could help me classify strings and label them.
I also think that generating the group label is a more serious problem than grouping the strings, because sometimes a pair of similar strings has very different lengths, and the placeholders are long sentences with special characters like \,".^%#&*!?<>|][{}.
As you have pointed out, it sounds like we can separate this problem into two distinct steps:
Group together similar messages, and
Label each group.
Step 1:
While I am not too familiar with Tensorflow JS, I do not believe it is overkill to use Machine Learning (ML) to tackle this problem, especially for step 1.
In fact, this type of problem is a great candidate for a specific form of ML known as Unsupervised Learning, and more specifically, Clustering. In Unsupervised Learning, we look to find “previously unknown patterns in our data without pre-existing labels”.
See: https://en.wikipedia.org/wiki/Unsupervised_learning
In this context, that means that we do not know if “Error Message 1” and “Error Message 2” will belong to the same group before we apply our Clustering algorithm. Using your example, we can reasonably suspect that the messages:
“Forbidden: User session aad3-1v299-4400 has expired"
“Forbidden: User session jj41-1d333-bbaa has expired"
will belong to the same group, but the Clustering algorithm does not know this when it starts.
We can contrast this with a form of Supervised Learning known as Classification, where we know beforehand that we expect a group to have the form
“Forbidden: User session {{placeholder1}} has expired".
Then the pre-existing labels in the data are that messages such as
“Forbidden: User session aad3-1v299-4400 has expired"
“Forbidden: User session jj41-1d333-bbaa has expired"
belong to the expected group just above. We essentially give the ML model a bunch of examples of what this group looks like, and then incoming messages that appear to be similar will be classified to this group.
It sounds from your description like, for Step 1, you want to perform a string comparison (such as Levenshtein distance) across all of the example messages, and then apply a clustering algorithm to those results. Then, after you have groups (clusters) of messages, Step 2 involves finding an appropriate label for each group.
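As a rough illustration of Step 1, here is a minimal Python sketch that greedily groups messages by a normalized similarity score. difflib's ratio() stands in for a Levenshtein-based measure, and the 0.6 threshold is an arbitrary value you would tune:

from difflib import SequenceMatcher

def similarity(a, b):
    # Normalized similarity in [0, 1]; a stand-in for Levenshtein distance.
    return SequenceMatcher(None, a, b).ratio()

def cluster(messages, threshold=0.6):
    groups = []
    for msg in messages:
        for group in groups:
            # Compare against the group's first member as a cheap representative.
            if similarity(msg, group[0]) >= threshold:
                group.append(msg)
                break
        else:
            groups.append([msg])  # no similar group found: start a new one
    return groups

messages = [
    "Forbidden: User session aad3-1v299-4400 has expired.",
    "Forbidden: User session jj41-1d333-bbaa has expired.",
    "Forbidden: User session aab3-bn12n-1111 has expired.",
    "ReferenceError: 'alg,TypeError: g' is undefined",
]
for group in cluster(messages):
    print(group)

A real clustering algorithm (for example, hierarchical clustering over the pairwise distance matrix) would be more robust than this greedy single pass, but the overall shape of Step 1 is the same.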
Step 2:
Agreed that finding an appropriate label for each group is likely the harder problem here. One approach that could be useful is to count how many times a word or phrase appears within a group or cluster, and if it does not meet some pre-defined threshold, to use a placeholder as you have in your example label. For example, the words "Forbidden", "User", "session", and "expired" will be common to the group, whereas the alphanumeric IDs listed are unique to the individual messages. If the threshold is that a word or phrase must show up in at least two messages, only the IDs will be replaced by the placeholder.
In this approach, you are essentially looking to find words or phrases that are uncommon to the group, and do not provide useful information in forming an appropriate label. In a way, this is the opposite of a concept used in many search engines that aims to find how common or important a word or phrase is to a document (see https://en.wikipedia.org/wiki/Tf%E2%80%93idf).
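A minimal Python sketch of that counting idea, assuming the messages in a group line up token-for-token (real messages of very different lengths, as you noted, would need a proper diff or alignment first):

from collections import Counter

def label_group(group, threshold=2):
    # Count how often each (position, token) pair occurs across the group.
    counts = Counter()
    for msg in group:
        for pos, token in enumerate(msg.split()):
            counts[(pos, token)] += 1
    # Keep tokens that clear the threshold; replace the rest with a placeholder.
    labeled = []
    for pos, token in enumerate(group[0].split()):
        labeled.append(token if counts[(pos, token)] >= threshold
                       else "{{placeholder}}")
    return " ".join(labeled)

group = [
    "Forbidden: User session aad3-1v299-4400 has expired.",
    "Forbidden: User session jj41-1d333-bbaa has expired.",
    "Forbidden: User session aab3-bn12n-1111 has expired.",
]
print(label_group(group))
# -> Forbidden: User session {{placeholder}} has expired.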

Corrupted MD-SAL queries when trying to read flows on tables: missing flow rules

I am experiencing a glitch in OpenDaylight (using Mininet).
Essentially, I am querying flow rules on specific nodes and on specific tables. The relevant code is the following, and is run by one separate thread per node that I am polling:
public static final InstanceIdentifier<Nodes> NODES_IID = InstanceIdentifier
        .builder(Nodes.class).build();

public static InstanceIdentifier<Table> makeTableIId(NodeId nodeId, Short tableId) {
    // Path: /nodes/node[nodeId] -> FlowCapableNode augmentation -> table[tableId]
    return NODES_IID.child(Node.class, new NodeKey(nodeId))
            .augmentation(FlowCapableNode.class)
            .child(Table.class, new TableKey(tableId));
}
and
InstanceIdentifier<Table> tableIId = makeTableIId(nodeId, tableId);
Optional<Table> tableOptional = dataBroker.newReadOnlyTransaction()
        .read(LogicalDatastoreType.OPERATIONAL, tableIId).get();
if (!tableOptional.isPresent()) {
    continue;
}
List<Flow> flows = tableOptional.get().getFlow();
The behavior: tableOptional is present, and getFlow() returns an empty list.
The observation: there ARE flow rules installed on ALL nodes in the tables I am querying, but for some reason some of these nodes show none of these flows in any of the tables (here, tables 3, 4, 5, and 6).
The weirdness: on one of the problematic nodes, I have four rules, installed on tables 9, 13, 17 and 22 respectively. They time out simultaneously after 150 seconds. After they disappear, the query suddenly begins to "see" the flows installed on tables 3, 4, 5, and 6, returning them for each table.
Question: How is this even possible?
EDIT: I just realized that the rules whose timeout "suddenly fixed everything" were also rules that generated warnings in ODL's log (in OpenFlowPlugin, to be more specific). I had not observed any obvious issue, so I'd brushed it aside.
Here is the code relevant to the error:
https://pastebin.com/yJDZesXU
Here are the errors I get every time I install a rule that walks through these lines:
https://pastebin.com/c9HYLBt6
I must stress that these rules work as intended, and that printing them out reveals no evident formatting issue; they appear fine when dumped.
My hypothesis is that this warning is a symptom of ODL "messing up" while trying to store the rules in MD-SAL, which ends up breaking a lot of rule-reading queries. Once the resulting garbage is uninstalled, rule-reading queries become functional again.
This makes sense to me, but then... I haven't understood how to fix these warnings, or what they were about in the first place.
EDIT 2: By commenting out the lines suspected of causing the warnings in the above pastebin:
//ipv4MatchBuilder.setIpv4SourceAddressNoMask(...);
//ipv4MatchBuilder.setIpv4SourceArbitraryBitmask(...);
The warnings disappear, AND the flows appear correctly in all tables when pinged. This confirms my hypothesis that somewhere, something goes wrong in the data store.
EDIT 3: I have found that setting any non-trivial arbitrary bitmask makes this error go away. That is, I tried setting an arbitrary bitmask that was neither null nor "255.255.255.255", and the error disappeared. The problem is that I might want a bitmask on the source but an exact match on the destination. Even setting the bitmask to "127.255.255.255" (as I tried) is still unnerving. It really feels to me like this is an OpenFlowPlugin glitch, though.
EDIT 4: Steps to reproduce the bug
Install a rule with an IPv4 arbitrary bitmask match, with the destination IP set and the destination arbitrary bitmask either null or set to 255.255.255.255:
Ipv4MatchArbitraryBitMaskBuilder ipv4MatchBuilder = new Ipv4MatchArbitraryBitMaskBuilder();
ipv4MatchBuilder.setIpv4DestinationAddressNoMask(new Ipv4Address("10.0.0.1"));
ipv4MatchBuilder.setIpv4DestinationArbitraryBitmask(new DottedQuad("255.255.255.255"));
matchBuilder = new MatchBuilder()
        .setEthernetMatch(ethernetMatchBuilder.build())
        .setLayer3Match(ipv4MatchBuilder.build());
... and so on ...
Optional extra steps: install one such rule for the destination, one such rule for the source, and install equivalent rules where the bitmask is set to something else, like 127.255.255.255.
Make a query to MD-SAL to fetch flow information from the node on which you installed the flow rule.
Now run "log:display" inside your ODL controller. You should see a warning about a malformed destination address. Additionally, the Table object you queried should contain no flows, so tableObject.getFlow() should return an empty list.
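Given EDIT 3, one way to keep an exact destination match while sidestepping the arbitrary-bitmask code path is to express it as a /32 prefix on the plain Ipv4Match instead; a sketch of that idea (whether this fits the rest of your match pipeline is an assumption on my part):

// Exact destination match as a /32 prefix, avoiding the arbitrary-bitmask
// builder that triggers the malformed-address warning in OpenFlowPlugin.
Ipv4MatchBuilder ipv4MatchBuilder = new Ipv4MatchBuilder()
        .setIpv4Destination(new Ipv4Prefix("10.0.0.1/32"));
matchBuilder = new MatchBuilder()
        .setEthernetMatch(ethernetMatchBuilder.build())
        .setLayer3Match(ipv4MatchBuilder.build());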

NuPIC OPF Runtime error getOutputData unknown output categoriesOut

I'm trying to run a TemporalClassification model using OPF to recognize patterns in a stream. I've adjusted the model params so it has two sensor inputs: a ScalarEncoder and an SDRCategoryEncoder. The latter is marked as classifierOnly, and is also set as the predictedField in the inferences.
When trying to feed the model with input data, I get:
RuntimeError: getOutputData unknown output 'categoriesOut' on region Classifier.
A NontemporalClassification model (only the inferenceType changed) runs without this error.
I've found 6 occurrences of categoriesOut in the nupic code: https://github.com/numenta/nupic/search?utf8=%E2%9C%93&q=categoriesOut
The error arises in nupic/frameworks/opf/clamodel.py at line 558:
classificationDist = classifier.getOutputData('categoriesOut')
It seems that the classifier region in the network is not prepared properly to output data.
Can anyone explain why the classifier region doesn't have 'categoriesOut'? I guess there's a misconfiguration in my model params, but there were no errors or warnings during model initialization. Are there any mandatory parameters or assignments (beyond those noted in the NuPIC documentation) necessary for a TemporalClassification model to run?
There are several types of classifier regions in NuPIC; you can find them in the nupic/regions folder. I've checked the sources and found that 'categoriesOut' is in the outputs dict of KNNClassifierRegion:
https://github.com/numenta/nupic/blob/469f6372082e95dd5d2a96181b745ba36d2e7a8a/nupic/regions/KNNClassifierRegion.py
outputs=dict(
    categoriesOut=dict(
        description='A vector representing, for each category '
                    'index, the likelihood that the input to the node belongs '
                    'to that category based on the number of neighbors of '
                    'that category that are among the nearest K.',
        dataType='Real32',
        count=0,
        regionLevel=True,
        isDefaultOutput=True),
Ensure you use KNNClassifierRegion when configuring your TemporalClassification model. The samples for NontemporalClassification use CLAClassifier, but CLAClassifierRegion has no categoriesOut in its outputs, and the error described in your question will arise if you keep
'regionName' : 'CLAClassifierRegion'
for a TemporalClassification model.
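For illustration, the classifier section of the model params would then look something like this (a minimal sketch following the usual OPF clParams layout; any other keys your particular setup needs are assumptions on my side):

modelParams = {
    # ... sensorParams, spParams, tpParams unchanged ...
    'clParams': {
        # KNNClassifierRegion exposes the 'categoriesOut' output that
        # clamodel.py queries at line 558; CLAClassifierRegion does not.
        'regionName': 'KNNClassifierRegion',
        'clVerbosity': 0,
    },
}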