The example here constructs an envelope and uses it to check whether a point is inside it.
Would it be possible to get the mesh from the envelope?
The mesh is not explicitly stored; internally, only an AABB tree of faces is constructed.
Very simple question...
I have some example code:
technique Draw
{
pass
{
vertex_shader = VertexShaderName(vec_in);
pixel_shader = PixelShaderName(vec_in);
}
}
Where can I find documentation on usage of the technique keyword? There is no link with a description provided for such a statement...
Techniques are used by the (now deprecated) effects system.
It wraps a lot of the low-level API (originally Direct3D 9 worked with techniques, so this was created to ease the transition from Direct3D 9 to Direct3D 10/11).
It provides helpers to manage constant buffers and a reflection/variable API to assign data to the shaders.
So while with the low-level pipeline you would compile your vertex and pixel shaders independently and create your constant buffers and the structures that go with them, the effects system lets you build everything in a single block, breaks constant buffers down into a variable system, and has a pass API that performs all the binding in one go.
Both have their pros and cons: fx is really nice for authoring and prototyping (it's really easy to use the variable system to create an automatic GUI, for example), but since it manages a lot of boilerplate for you, it gets a bit awkward when it comes to efficient resource reuse or complex shader permutations.
One thing I particularly miss from effects is variable semantics and annotations; you could set things like:
float2 tex : TARGETSIZE;
and, using the variable system, detect that tex has a TARGETSIZE semantic, hide it from the UI, and auto-attach the render target size, for example.
A common old usage for annotations was to attach some metadata to values, like:
float4 color <bool color=true;> = 1.0f;
Reflecting over the annotations lets us see that this variable is considered a color (and display a color picker in the editor instead of 4 channels).
While the fx_5_0 profile is deprecated in d3dcompiler_47, it is still possible to use it, and the wrapper is open source:
https://github.com/microsoft/FX11
I read the portion relevant to "Render Pass Compatibility" in the Vulkan specification. I'm not sure if my understanding is right.
Recording commands inside a render pass involves a VkFramebuffer and a VkPipeline, and a VkFramebuffer or VkPipeline is strongly related to a VkRenderPass: they must only be used with that render pass object, or one compatible with it. Can I reuse a VkFramebuffer or VkPipeline with a compatible render pass? Tell me more about this topic, please.
Not sure what to do with your question except answer "yes".
VkRenderPassBeginInfo VU:
renderPass must be compatible with the renderPass member of the VkFramebufferCreateInfo structure specified when creating framebuffer.
e.g. vkCmdDraw VU:
The current render pass must be compatible with the renderPass member of the VkGraphicsPipelineCreateInfo structure specified when creating the VkPipeline bound to VK_PIPELINE_BIND_POINT_GRAPHICS.
I.e. the VkFramebuffer (resp. VkPipeline) has to be used with a render pass that is "only" compatible, not necessarily the same object handle.
VkGraphicsPipelineCreateInfo expects a VkRenderPass to be assigned to its .renderPass member. I don't really understand why a pipeline must be coupled with a render pass. I mean, VkGraphicsPipelineCreateInfo doesn't directly "talk" to render-pass-related content, like framebuffers and their attachments. I may want to use the same pipeline with more than one render pass, e.g. when I want to render the same set of objects in different scenes, so do I have to create another one with exactly the same setup?
Just to add that creating a VkPipeline with .renderPass = nullptr fails with a validation error:
vkCreateGraphicsPipelines: required parameter
pCreateInfos[0].renderPass specified as VK_NULL_HANDLE.Invalid
VkRenderPass Object 0x0. The Vulkan spec states: renderPass must be a
valid VkRenderPass handle
(https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-VkGraphicsPipelineCreateInfo-renderPass-parameter)
I mean VkGraphicsPipelineCreateInfo doesn't directly "talk" to render-pass-related content, like framebuffers and their attachments.
Of course it does. What do you think a fragment shader is doing when it writes into a render pass' attachments?
Do I have to create another one with exactly the same setup?
No. As per the specification:
"renderPass is a handle to a render pass object describing the environment in which the pipeline will be used; the pipeline must only be used with an instance of any render pass compatible with the one provided. See Render Pass Compatibility for more information".
... so a pipeline can be used with any render pass that is compatible with the one used to create it.
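For illustration, here is a minimal sketch of that reuse using the LWJGL Vulkan bindings in Java. It assumes the pipeline was created with some renderPassA in VkGraphicsPipelineCreateInfo.renderPass, and that renderPassB and the framebuffer are compatible with it; the handles and the helper signature are illustrative:

import org.lwjgl.system.MemoryStack;
import org.lwjgl.vulkan.VkCommandBuffer;
import org.lwjgl.vulkan.VkRenderPassBeginInfo;

import static org.lwjgl.vulkan.VK10.*;

public class CompatiblePassReuse {
    // pipeline was created with VkGraphicsPipelineCreateInfo.renderPass = renderPassA.
    // Binding it inside renderPassB is valid as long as renderPassB is compatible
    // with renderPassA (matching attachment formats, sample counts and subpass layout).
    static void record(VkCommandBuffer cmd, long renderPassB, long framebuffer,
                       long pipeline, int width, int height) {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            VkRenderPassBeginInfo begin = VkRenderPassBeginInfo.calloc(stack)
                    .sType(VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO)
                    .renderPass(renderPassB)   // not the handle the pipeline was created with
                    .framebuffer(framebuffer); // must itself be compatible with renderPassB
            begin.renderArea().extent().width(width).height(height);

            vkCmdBeginRenderPass(cmd, begin, VK_SUBPASS_CONTENTS_INLINE);
            // Reusing the pipeline across a compatible render pass:
            vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
            vkCmdDraw(cmd, 3, 1, 0, 0);
            vkCmdEndRenderPass(cmd);
        }
    }
}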
I have two components that deal with n-dimensional arrays. One component is written in Python; it processes the data and saves the processed ndarray via tobytes(). The other component is written in Java, and it needs to read the serialized ndarray produced by the first component.
I am curious whether there are any existing Java libraries that can read a serialized numpy array, or whether there is a better way to communicate ndarrays between Java and Python.
Any advice is appreciated!
Thank you!
ND4J supports reading from and writing to NumPy arrays. Look at the ND4J javadocs for the xxxNpyYYYArray methods.
It can read and write from/to files, byte arrays and even raw pointers to a numpy array.
The pointer methods allow for using the arrays without copying or serialization. We use the pointer methods inside jumpy (which runs Java via pyjnius) and when using javacpp's cpython/numpy preset to run a cpython interpreter inside a Java process.
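For example, a minimal sketch of the file-based route (assuming the Python side writes a full .npy file via np.save, since a raw tobytes() dump carries no header or shape information; exact method names may vary across ND4J versions, and the file names are illustrative):

import java.io.File;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class NpyExample {
    public static void main(String[] args) throws Exception {
        // Read an array saved from Python with np.save("data.npy", arr)
        INDArray arr = Nd4j.createFromNpyFile(new File("data.npy"));
        System.out.println(arr);

        // Write it back out in .npy format for the Python side to np.load()
        Nd4j.writeAsNumpy(arr, new File("out.npy"));
    }
}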
I have used Apache Arrow to solve this.
First, the pyarrow package has a numpy ndarray API to serialize the array into bytes; essentially, the ndarray becomes a batch of Arrow-formatted bytes.
Then the Java API provides a VectorSchemaRoot to read it back from those bytes, and you can get the values in the Arrow array. You can use this array to create an ND4J array (if you need one), or operate on it directly.
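For example, the Java read side might look like the following minimal sketch (assuming the Python side wrote an Arrow IPC stream containing a record batch of doubles; the file name "data.arrow" and column name "v" are illustrative):

import java.io.FileInputStream;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.Float8Vector;
import org.apache.arrow.vector.VectorSchemaRoot;
import org.apache.arrow.vector.ipc.ArrowStreamReader;

public class ArrowRead {
    public static void main(String[] args) throws Exception {
        try (RootAllocator allocator = new RootAllocator();
             FileInputStream in = new FileInputStream("data.arrow");
             ArrowStreamReader reader = new ArrowStreamReader(in, allocator)) {
            while (reader.loadNextBatch()) {
                VectorSchemaRoot root = reader.getVectorSchemaRoot();
                // One column of doubles, e.g. written from a 1-D float64 ndarray
                Float8Vector v = (Float8Vector) root.getVector("v");
                for (int i = 0; i < v.getValueCount(); i++) {
                    System.out.println(v.get(i));
                }
            }
        }
    }
}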
For detailed operations you can refer to the Apache Arrow docs, and if you hit any obstacles we can discuss them here.
Also, Arrow uses native memory to store the buffer, so the data lives off the Java heap. This may be an issue at some point.
Feel free to share any other solutions with me. :)
I have a question relating to the DataMapper component and extending its behaviour. I have a scenario where I'm converting one payload to another using the DataMapper. Some of the elements in my source request are strings (i.e. Male, Female) and these values need to be mapped to ID elements, known as enums, in the target system. A DBLookup would suffice, but because of the structure of the enums (a.k.a. lookup tables) in the target system I'd need to define multiple DBLookups for the values that need to be changed. So I'm looking to develop a more generic way of performing the mapping. I have two proposals, which I'm currently exploring:
1) Use the invokeTransformer default function to call a custom transformer, i.e.
output.gender = invokeTransformer("EnumTransformer",input.gender);
However, even though my transformer is defined in my flow
<custom-transformer name="EnumTransformer" class="com.abc.mule.EnumTransformer" />
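(For reference, a minimal sketch of what such a transformer might look like against the Mule 3 AbstractTransformer API; the actual mapping logic is assumed:)

package com.abc.mule;

import java.util.HashMap;
import java.util.Map;
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractTransformer;

public class EnumTransformer extends AbstractTransformer {

    // Illustrative lookup; in practice this would mirror the target system's enums.
    private static final Map<String, Integer> GENDER_IDS = new HashMap<String, Integer>();
    static {
        GENDER_IDS.put("Male", 1);
        GENDER_IDS.put("Female", 2);
    }

    @Override
    protected Object doTransform(Object src, String enc) throws TransformerException {
        return GENDER_IDS.get(String.valueOf(src));
    }
}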
Running a Preview in the DataMapper fails with the following error (in the Studio error log):
Caused by: java.lang.IllegalArgumentException: Invalid transformer name 'EnumTransformer'
at com.mulesoft.datamapper.transform.function.InvokeTransformerFunction.call(InvokeTransformerFunction.java:35)
at org.mule.el.mvel.MVELFunctionAdaptor.call(MVELFunctionAdaptor.java:38)
at org.mvel2.optimizers.impl.refl.ReflectiveAccessorOptimizer.getMethod(ReflectiveAccessorOptimizer.java:1011)
at org.mvel2.optimizers.impl.refl.ReflectiveAccessorOptimizer.getMethod(ReflectiveAccessorOptimizer.java:987)
at org.mvel2.optimizers.impl.refl.ReflectiveAccessorOptimizer.compileGetChain(ReflectiveAccessorOptimizer.java:377)
... 18 more
As the transformer is scoped to my flow and the DataMapper is outside this scope, should I assume it is not possible to invoke a custom transformer in a DataMapper? Or do I require additional setup?
2) The alternative approach would be to use a "global function". I've found the documentation in this area to be quite weak. The functionality is referenced in the cheat sheet and there is a [jira](https://www.mulesoft.org/jira/browse/MULE-6438) to improve the documentation.
Again, perhaps this functionality suffers from a scope issue. My question on this approach: can anyone provide a HOWTO on calling some Java code via MEL from a DataMapper script? This blog suggests DataMapper MEL can call Java but limits its example to string functions. Is there any example of calling a custom Java class / static method?
In general, I'm questioning whether I am approaching this wrong. Should I use a flow ref and call a Java component?
Update
It is perfectly acceptable to use a custom transformer from the DataMapper component. The issue I was encountering was a Mule Studio issue: previewing a data mapping which contains a transformer does not work, because the Mule registry is not populated on the Mule context, as Mule is not running.
In terms of the general approach, now that I have realized the DBLookup can accept multiple input parameters, I can use this to address my mapping scenario.
Thanks
Rich
Try providing the complete class name.