How does the pipeline know which descriptor layout to use when binding descriptors? - vulkan

When calling vkCmdBindDescriptorSets, I have to pass the number of the first set and an array of descriptor sets that I would like bound. I then get to use whichever set I like in my shader using layout(set = X, binding = 0).
My question is the following: the descriptor set layout for the set was only specified at descriptor set creation. Yet when I bind, I can bind any descriptor set to any set number using the above function. Is it up to me to keep my shader layout and bindings consistent with the layout specified during pipeline creation? Otherwise, how does the pipeline/shader "know" which layout my specific set is using?

In Vulkan, unless otherwise noted, it's always "up to you". This is no exception.
If you attempt to render/dispatch with a pipeline and bound descriptor sets that do not have matching layouts, undefined behavior results.
The pipeline "knows" which layout you're using by fiat. The whole point of a layout is that it "lays out" the arrangement of the internal data representing how those descriptors are organized. So where "binding 2" is within whatever internal data structure the implementation uses for defining that is determined solely by the layout.
A layout is therefore kind of like a struct in C or C++. You can't pass a pointer to a struct of type B to a function that expects a pointer to a struct of type A. Well, you can if you do a bunch of casts, but when the function accesses that pointer, undefined behavior results.
The same goes for pipelines and bound descriptor sets: they must use compatible layouts, or undefined behavior results.
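To make "compatible layouts" concrete, here is a minimal sketch using LWJGL's Java bindings for Vulkan (the handles cmd, pipelineLayout and descriptorSet are assumptions, not taken from the question). The only thing that makes this bind valid is that descriptorSet was allocated from the same VkDescriptorSetLayout that was baked into pipelineLayout for set 0; the driver never verifies that for you.

import static org.lwjgl.vulkan.VK10.*;

import java.nio.LongBuffer;
import org.lwjgl.system.MemoryStack;
import org.lwjgl.vulkan.VkCommandBuffer;

final class DescriptorBindingSketch {
    // Hypothetical helper: bind 'descriptorSet' as set 0 of 'pipelineLayout'.
    // The set must have been allocated from the very same VkDescriptorSetLayout
    // that was used for set 0 when 'pipelineLayout' was created; otherwise the
    // bind "succeeds" but rendering is undefined behavior.
    static void bindSetZero(VkCommandBuffer cmd, long pipelineLayout, long descriptorSet) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            LongBuffer sets = stack.longs(descriptorSet);
            vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS,
                    pipelineLayout, /* firstSet */ 0, sets, /* pDynamicOffsets */ null);
        }
    }
}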

Related

How can I invoke a virtual method handle using ByteBuddy's InvokeDynamic class?

I've found the InvokeDynamic class and have made it work with a static method handle acquired via MethodHandles.Lookup.findStatic().
Now I am trying to do the same thing, but with a virtual method handle acquired via MethodHandles.Lookup.findVirtual().
I can cause my bootstrap method to run, and I make sure in my bootstrap method that I'm returning a ConstantCallSite(mh), where mh is the result of calling MethodHandles.Lookup.findVirtual(). (This part all works fine, i.e. I understand how "indy" works.)
However, when I use the resulting Implementation as the argument to an intercept() call, I cannot pass the actual object on which the method represented by the method handle is to be invoked. This is due to the withArgument() method being used for two contradictory purposes.
Here is my recipe:
Implementation impl =
    InvokeDynamic.bootstrap(myBootstrapDescription, someOtherConstantArgumentsHere)
        .invoke(theMethodName, theMethodReturnType)
        // 0 is the object on which I want to invoke my virtual-method-represented-by-a-method-handle;
        // 1 is the sole argument that the method actually takes.
        .withArgument(0, 1);
There are some problems here.
Specifically, it seems that withArgument() is used by ByteBuddy for two things, not just one:
Specifying the parameter types that will be used to build a MethodType that will be supplied to the bootstrap method. Let's say my virtual method takes one argument.
Specifying how the instrumented method's arguments are passed to the actual method handle execution.
If I have supplied only one argument, the receiver type is left unbound and execution of the resulting MethodHandle cannot happen, because I haven't passed an argument that will be used for the receiver type "slot". If I accordingly supply two arguments to (1) above (as I do in my recipe), then the method handle is not found by my bootstrap method, because the supplied MethodType indicates that the method I am searching for requires two arguments, and my actual method that I'm finding only takes one.
Finally, I can work around this (and validate my hypothesis) by doing some fairly ugly stuff in my bootstrap method:
First, I deliberately continue to pass two arguments, not one, even though my method only takes one argument: withArgument(0, 1)
In my bootstrap method, I now know that the MethodType it will receive will be "incorrect" (it will have two parameter types, not one, where the first parameter type will represent the receiver type). I drop the first parameter using MethodType#dropParameterTypes(int, int).
I call findVirtual() with the new MethodType. It returns a MethodHandle whose type has two parameter types: the receiver type that findVirtual() adds automatically, and the remaining, non-dropped parameter type.
(More simply I can just pass a MethodType as a constant to my bootstrap method via, for example, JavaConstant.MethodType.of(myMethodDescription) or built however I like, and ignore the one that ByteBuddy synthesizes. It would still be nice if there were instead a way to control the MethodType that ByteBuddy supplies (is obligated to supply) to the bootstrap method.)
When I do things like this in my bootstrap method, my recipe works. I'd prefer not to tailor my bootstrap method to ByteBuddy, but will here if I have to.
Is it a bug that ByteBuddy does not seem to allow InvokeDynamic to specify the ingredients for a MethodType directly, without also specifying the receiver?
What you describe is entirely independent of Byte Buddy. It's just the way invokedynamic works.
JVMS, §5.4.3.6
5.4.3.6. Dynamically-Computed Constant and Call Site Resolution
To resolve an unresolved symbolic reference R to a dynamically-computed constant or call site, there are three tasks. First, R is examined to determine which code will serve as its bootstrap method, and which arguments will be passed to that code. Second, the arguments are packaged into an array and the bootstrap method is invoked. Third, the result of the bootstrap method is validated, and used as the result of resolution.
…
The second task, to invoke the bootstrap method handle, involves the following steps:
An array is allocated with component type Object and length n+3, where n is the number of static arguments given by R (n ≥ 0).
The zeroth component of the array is set to a reference to an instance of java.lang.invoke.MethodHandles.Lookup for the class in which R occurs, produced as if by invocation of the lookup method of java.lang.invoke.MethodHandles.
The first component of the array is set to a reference to an instance of String that denotes N, the unqualified name given by R.
The second component of the array is set to the reference to an instance of Class or java.lang.invoke.MethodType that was obtained earlier for the field descriptor or method descriptor given by R.
Subsequent components of the array are set to the references that were obtained earlier from resolving R's static arguments, if any. The references appear in the array in the same order as the corresponding static arguments are given by R.
A Java Virtual Machine implementation may be able to skip allocation of the array and, without any change in observable behavior, pass the arguments directly to the bootstrap method.
So the first three arguments to the bootstrap method are provided by the JVM according to the rules cited above. Only the other arguments are under the full control of the programmer.
The method type provided as 3rd argument always matches the type of the invokedynamic instruction describing the element types to pop from the stack and the type to push afterwards, if not void. Since this happens automatically, there’s not even a possibility to create contradicting, invalid bytecode in that regard; there is just a single method type stored in the class file.
If you want to bind the invokedynamic instruction to an invokevirtual operation using a receiver from the operand stack, you have exactly the choices already mentioned in your question. You may derive the method from other bootstrap arguments or drop the first parameter type of the instruction’s type. You can also use that first parameter type to determine the target of the method lookup. There’s nothing ugly in this approach; it’s the purpose of bootstrap methods to perform adaptations.
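For illustration, here is a minimal sketch of such a bootstrap method using only the standard java.lang.invoke API (the class and method names are assumptions, not part of Byte Buddy). It applies both suggestions above: it takes the receiver class from the first parameter of the invokedynamic type and drops that parameter before the findVirtual() lookup; findVirtual() then re-adds the receiver, so the resulting handle matches the instruction's type.

import java.lang.invoke.CallSite;
import java.lang.invoke.ConstantCallSite;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public final class VirtualBootstrap {
    public static CallSite bootstrap(MethodHandles.Lookup lookup,
                                     String name,
                                     MethodType invokedType) throws Throwable {
        // Parameter 0 of invokedType is the receiver popped from the operand stack;
        // use it as the lookup target class.
        Class<?> owner = invokedType.parameterType(0);
        // Drop the receiver before the lookup; findVirtual() adds it back, so the
        // resulting handle's type matches invokedType again.
        MethodHandle target =
                lookup.findVirtual(owner, name, invokedType.dropParameterTypes(0, 1));
        return new ConstantCallSite(target.asType(invokedType));
    }
}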

Modify Scilab/Xcos Block in Scilab 6 Gateway Function

I would like to modify an Xcos block from within a gateway function using the new (non-legacy) Scilab API, for example, replace the block's model property by a new model structure. In other words, do the same as the Scilab command(s):
m = scicos_model()
block.model = m
However, I did not manage to achieve this behavior with the functions from the Scilab 6 API: a block created by standard_define() is correctly passed to my gateway function, where this argument is available as a scilabVar of type 128. On the other hand, the Scilab help claims that a block is a "scilab tlist of type "Block" with fields : graphics, model, gui and doc".
Attempts
Assume scilabVar block taken from gateway function argument, string constants of type wchar_t[], scilabVar model holding the result of scicos_model():
Application of function scilab_setTListField (env, block, "model", model) returns error status (as its equivalents for MList and List do)
Knowing that property .model is at index 3, a setfield (3, model, block) called through scilab_call ("setfield", ...) also fails.
This is not surprising: when called directly from the Scilab command line, it ends up with
setfield: Wrong type for input argument #3: List expected. .
However, a getfield (3, block) works, so that at least read access to the block's data fields is possible.
An external helper function
function block = blockSetModel (block, model)
    block.model = model
endfunction
also called through scilab_call("blockSetModel", ...) actually returns a block with changed model,
but the original block passed to this function remains unchanged.
Although ugly, this gives at least a way to construct an individual block structure
which needs to be returned as a copy.
Summary
So, is there simply a function missing in the API, which returns the TList (or whatever) behind a type 128 pointer variable?
Or is there any other approach to this problem I was unable to discover?
Background
The goal behind this is to move the block definition task from the usual interfacing "gui" function (e.g. a Scilab script MyBlock.sci) into my own C code. For this purpose, the interfacing function is reduced to a wrapper around a C gateway, which, for example, uses scilab_call ("standard_define",...) to create a new block when being called with parameter job=="define".
Modification of the contained model and graphics objects through the Scilab API works fine since these are standard list types. However, getting or setting these objects as attributes .model and .graphics of the
original block fails as described above.
Starting from Scilab/Xcos 6.0.0, the data structure behind a block is no longer an MList (or TList), so you cannot upgrade the model to your own MList. All the underlying data are stored using a classical MVC pattern within a C++-coded Block.hxx.
On each try you made, a serialization/deserialization happens to reconstruct the block model field as a Scilab value.
Could you describe what kind of field you want to append/edit on the block structure? Some of the predefined fields might be enough to pass extra information.

AnyLogic Fluid dynamically assign storage tank (visual representation)

I have a custom FuelTank object: an AnyLogic tank but with some extra logic. It has a parameter SimTank of type "storageTank". The AnyLogic "tank" block (inside the FuelTank object) has its "StorageTank" property set to this parameter.
I now have a TankFarm object which in turn incorporates a number of these FuelTank objects, plus a collection, FuelTanks, to reference each of the tanks. It has a parameter SimTanks, which is a one-dimensional array of type Other, storageTank[].
So now.
If I configure the TankFarm object, select each tank in the object and one by one set each tank's SimTank parameter to "SimTanks[0]", "SimTanks[1]", etc., then populate SimTanks with the list of storageTanks I want to use in my visual representation, everything works fine, EXCEPT if I have fewer "storageTanks" in my SimTanks array than there are tanks in my TankFarm object. (This is understandable: if I only have 4 storageTanks but 5 tanks in my tank farm, then tank 5 has its "SimTank" parameter set to "SimTanks[4]", which of course does not exist in the "SimTanks" array and correspondingly gives an error.)
To get around this problem, I use a function and run it when the simulation starts:
for (int s = 0; s < TankFarm.SimTanks.length; s++)
    TankFarm.FuelTanks.get(s).SimTank = SimTanks[s];
So now if the user only added 4 "storageTank objects" to his visual simulation, only the first four tanks in the "TankFarm" are assigned a storageTank; the last one is null.
The code works (does not give an error), but when you run the model there is zero simulation: nothing happens with the StorageTank objects in the visual representation, they don't show anything. It is as if the AnyLogic tanks (inside the individual FuelTank objects) are not linked with the StorageTanks.
How do I fix this, please? How do I dynamically assign the StorageTank objects dropped on the main window and added to the SimTanks array to the tanks in my FuelTank object?
(To clarify: if I do it manually, one by one, it works, but then if I have fewer storageTanks than tanks in my farm it gives an error. If I do exactly the same thing dynamically, through code, it does not give an error, but the simulation does not work and the storageTanks do not show anything.)
This issue has been resolved, thank you. I created a function inside the library object that returns a StorageTank, and added a parameter that the user configures to either use the object's built-in animation or supply his own. Each tank object in the library then calls the function, which returns a StorageTank object: either the one built into the library object or the one the user added manually. The method works perfectly: the user may drop the library object (which includes its own animation) but then configure it not to use that animation, in which case he has to build and assign his own animation.
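For readers hitting the same problem, the fix described above boils down to something like the following plain-Java sketch inside the FuelTank library object (every name here except StorageTank is an assumption based on the description, not actual AnyLogic API):

// Hypothetical resolver function defined inside the FuelTank library object.
// 'useBuiltInAnimation' is an assumed boolean parameter of the object,
// 'builtInTank' is the StorageTank shape shipped inside the library object,
// 'userTank' is the StorageTank the model builder dropped on Main (may be null).
StorageTank resolveStorageTank(boolean useBuiltInAnimation,
                               StorageTank builtInTank,
                               StorageTank userTank) {
    // The inner Tank block's "Storage tank" property points at the result of this
    // call instead of at a raw SimTanks[i] entry, so it always gets a valid tank.
    return useBuiltInAnimation ? builtInTank : userTank;
}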

What are the use cases of sparse VkDescriptorSetLayoutBinding?

I have trouble figuring out any use case for VkDescriptorSetLayoutBinding::binding. Here is the struct:
struct VkDescriptorSetLayoutBinding
{
    uint32_t              binding;
    VkDescriptorType      descriptorType;
    uint32_t              descriptorCount;
    VkShaderStageFlags    stageFlags;
    const VkSampler*      pImmutableSamplers;
};
used here to create a VkDescriptorSetLayout:
struct VkDescriptorSetLayoutCreateInfo
{
    VkStructureType                        sType;
    const void*                            pNext;
    VkDescriptorSetLayoutCreateFlags       flags;
    uint32_t                               bindingCount;
    const VkDescriptorSetLayoutBinding*    pBindings;
};
I was wondering why the "binding" variable is not deduced from the index in the pBindings array.
After some research I found that the Vulkan spec says:
The above layout definition allows the descriptor bindings to be specified sparsely such that not all binding numbers between 0 and the maximum binding number need to be specified in the pBindings array. Bindings that are not specified have a descriptorCount and stageFlags of zero, and the value of descriptorType is undefined. However, all binding numbers between 0 and the maximum binding number in the VkDescriptorSetLayoutCreateInfo::pBindings array may consume memory in the descriptor set layout even if not all descriptor bindings are used, though it should not consume additional memory from the descriptor pool.
I can't see in which case you would use those sparse bindings. Why would you leave an empty, unused slot?
Binding indices are hard-coded into shaders (you can define binding indices via specialization constants, but otherwise, they're part of the shader code). So let's imagine that you have the code for a shader stage, and you want to use it in two different pipelines (A and B). And let's say that the descriptor set layouts for these pipelines are not meant to be compatible; we just want to reuse the shader.
Well, the binding indices in your shader didn't change; they can't change. So if this shader has a UBO in binding 3 of set 0, then any descriptor set layout it gets used with must have a UBO in binding 3 of set 0.
Maybe in pipeline A, some shader other than the one we reuse might use bindings 0, 1, and 2 from set 0. But what if none of the other shaders for pipeline B need binding index 2? Maybe the fragment shader in pipeline A used 3 descriptor resources, but the one in pipeline B only needs 2.
Having sparse descriptor bindings allows you to reuse compiled shader modules without having to reassign the binding indices within a shader. Yes, you have to make sure that all such shaders are compatible with each other (that they don't use the same set+binding index in different ways), but other than that, you can mix and match freely.
And it should be noted that contiguous bindings have almost never been a requirement of any API. In OpenGL, your shader pipeline could use texture units 2, 40, and 32, and that's 100% fine.
Why should it be different for Vulkan, just because its resource binding model is more abstract?
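As a concrete illustration, here is a minimal sketch using LWJGL's Java bindings for Vulkan (the device handle and the particular descriptor types are assumptions) that builds a layout whose bindings are 0 and 3, leaving binding numbers 1 and 2 unused, e.g. because the reused fragment shader only references binding 3:

import static org.lwjgl.vulkan.VK10.*;

import java.nio.LongBuffer;
import org.lwjgl.system.MemoryStack;
import org.lwjgl.vulkan.*;

final class SparseLayoutSketch {
    // Hypothetical helper: 'device' is an already-created VkDevice.
    static long createSparseLayout(VkDevice device) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            VkDescriptorSetLayoutBinding.Buffer bindings =
                    VkDescriptorSetLayoutBinding.calloc(2, stack);
            bindings.get(0)                  // binding 0: UBO used by the vertex shader
                    .binding(0)
                    .descriptorType(VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER)
                    .descriptorCount(1)
                    .stageFlags(VK_SHADER_STAGE_VERTEX_BIT);
            bindings.get(1)                  // binding 3: sampler used by the reused fragment shader;
                    .binding(3)              // bindings 1 and 2 are simply not listed
                    .descriptorType(VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER)
                    .descriptorCount(1)
                    .stageFlags(VK_SHADER_STAGE_FRAGMENT_BIT);

            VkDescriptorSetLayoutCreateInfo info = VkDescriptorSetLayoutCreateInfo.calloc(stack)
                    .sType(VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO)
                    .pBindings(bindings);

            LongBuffer pLayout = stack.mallocLong(1);
            if (vkCreateDescriptorSetLayout(device, info, null, pLayout) != VK_SUCCESS) {
                throw new IllegalStateException("vkCreateDescriptorSetLayout failed");
            }
            return pLayout.get(0);
        }
    }
}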

Can I set/unset a default Coder in Scio?

I would like to consistently apply a custom RicherIndicatorCoder for my case class RicherIndicator. Moreover, if I fail to provide a new Coder for Tuples or KVs containing RicherIndicator, then I would like to get a compile-time or runtime error rather than fall back on a suboptimal coder.
However, Scio does not seem to honor the @DefaultCoder annotation:
@DefaultCoder(classOf[RicherIndicatorCoder]) // Ignored
case class RicherIndicator(
  policy: Policy,
  indicator: Indicator
)
Nor does Scio give priority to custom coders registered with the CoderRegistry, instead falling back on its own default coder:
val registry = sc.pipeline.getCoderRegistry
registry.registerCoderForClass(classOf[RicherIndicator], RicherIndicatorCoder.of) // Not used
Therefore I must use setCoder(RicherIndicatorCoder.of) wherever an SCollection of this type appears, and carefully comb through the pipeline in case there are composite types which include a RicherIndicator.
Is there a way to set my custom coder as the default, or to disable falling back on the default Magnolia or Kryo based coder?
The Java annotation does not work in Scala. You can wrap your Beam Coder as an implicit Scio Coder like this:
implicit val richerIndicatorCoder: Coder[RicherIndicator] = Coder.beam(RicherIndicatorCoder.of)
It should be picked up as long as the implicit is in scope.