VkViewport negative height?

The validation layer complains:
vkCreateGraphicsPipelines: pCreateInfos[0].pViewportState->pViewports[0].height is not greater than 0.0. The Vulkan spec states: height must be greater than 0.0
https://www.khronos.org/registry/vulkan/specs/1.1-extensions/html/vkspec.html#VUID-VkViewport-height-01772
and yet the Vulkan spec says, in the documentation of VkViewport:
The application can specify a negative term for height
https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VkViewport.html
What am I missing? These two statements seem to contradict each other.

In order to use a negative height, you need to either enable the VK_KHR_maintenance1 extension or use Vulkan 1.1+, where that extension was promoted to core.
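For illustration, here is a minimal sketch of the usual y-flip (plain Python standing in for the C struct; the helper name is made up). The trick is to move the viewport origin to the bottom edge and negate the height:

def flipped_viewport(width, height):
    # Requires VK_KHR_maintenance1 or Vulkan 1.1+; otherwise the
    # validation layer rejects the negative height.
    return {
        "x": 0.0,
        "y": float(height),        # origin moves to the bottom edge
        "width": float(width),
        "height": -float(height),  # negative height flips the y axis
        "minDepth": 0.0,
        "maxDepth": 1.0,
    }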

remeshing a vtk file with mmg (mmgs) file using a size map

I’m using mmgtools's mmgs to remesh some polydata (vtp files). I need to control the cell size according to a metric, so I provide a size map. However, I can’t get mmgs to take this size map into account. For now, I'm trying with just a constant size.
If I provide a constant size at the command line (mmgs_O3 test.vtp -hsiz .001), it works as expected.
However, if I save this same size in the point data, suffixed with :metric (as explained in the prerequisite section):
> mesh.point_data["size:metric"]
pyvista_ndarray([0.001, 0.001, 0.001, ..., 0.001, 0.001, 0.001])
Then mmgs (mmgs_O3 test.vtp) just remeshes while ignoring the size map.
I note, however, that mmgs does read this field: if I create a second field suffixed with :metric, it fails with the error ## Error:MMG5_count_vtkEntities: 2 metric fields detected (labelled with a string containing the 'metric' keyword).
So I must be missing something, but I can't find what. Does anyone have experience with this tool? What am I missing for mmgs to take this size into account?
Thank you in advance!
To answer my own question: it's because the mesh contained other data arrays. In that case mmgs doesn't fail, but remeshes while ignoring the supplied size metric.
For it to work, the mesh's cells and points must be stripped of any other data and contain only the :metric field.
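As a minimal sketch of that clean-up (assuming a recent pyvista; the file names are made up):

import numpy as np
import pyvista as pv

mesh = pv.read("test.vtp")

# mmgs silently ignores the metric if any other point/cell arrays
# are present, so remove all data arrays first.
mesh.clear_data()

# Re-attach only the size map, suffixed with ":metric".
mesh.point_data["size:metric"] = np.full(mesh.n_points, 0.001)

mesh.save("test_metric_only.vtp")
# then: mmgs_O3 test_metric_only.vtp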

CNTK Asymmetric padding warning

When creating a model in CNTK with a convolutional layer, I get the following warning:
WARNING: Detected asymmetric padding issue with even kernel size and lowerPad (9) < higherPad (10) (i=2), cuDNN will not be able to produce correct result. Switch to reference engine (VERY SLOW).
I have tried increasing the kernel size from 4x4 to 5x5, so that the kernel size is not even, without result.
I have also tried adjusting lowerPad, upperPad (the parameter named in the docs), and higherPad (the parameter listed in the message).
Setting autoPadding=false does not affect this message.
Is it just a warning that I should ignore? The VERY SLOW part concerns me, as my models are already quite slow.
I figured this out, in case anyone else is interested in the answer.
I stated in the question that I tried setting "autopadding=false". This is the incorrect format for the autopadding parameter; it must actually be a set of boolean values, with the value corresponding to the InputChannels dimension being false.
So the correct form of the parameter would be "autopadding=(true:true:false)", and everything works correctly.
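If you are using the Python API instead of BrainScript, the low-level convolution op exposes the same per-axis flags. A rough sketch (my assumption: the Python API's axis order is channels-first, so the leading False disables padding on the InputChannels axis):

import cntk as C

x = C.input_variable((3, 32, 32))                        # (channels, height, width)
W = C.parameter((16, 3, 4, 4), init=C.glorot_uniform())  # (out_ch, in_ch, kh, kw)
# Equivalent of autopadding=(true:true:false): pad the spatial
# axes but never the channel axis.
y = C.convolution(W, x, auto_padding=[False, True, True])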
You have a layer that has lower pad 9 and upper pad 10 in the depth direction. Are you doing 3D convolution?

Explain how premultiplied alpha works

Can somebody please explain why rendering with premultiplied alpha (and the corrected blending function) looks different from "normal" alpha when, mathematically speaking, the two are the same?
I've looked at this post to understand premultiplied alpha:
http://blogs.msdn.com/b/shawnhar/archive/2009/11/06/premultiplied-alpha.aspx
The author also said that the end computation is the same:
"Look at the blend equations for conventional vs. premultiplied alpha. If you substitute this color format conversion into the premultiplied blend function, you get the conventional blend function, so either way produces the same end result. The difference is that premultiplied alpha applies the (source.rgb * source.a) computation as a preprocess rather than inside the blending hardware."
Am I missing something? Why is the result different then?
The difference is in filtering.
Imagine that you have a texture with just two pixels and you are sampling it exactly in the middle between the two pixels. Also assume linear filtering.
Schematically (here "+" stands for the 50/50 average taken by the filter):
R|G|B|A + R|G|B|A = R|G|B|A
non-premultiplied:
1|0|0|1 + 0|1|0|0 = 0.5|0.5|0|0.5
premultiplied:
1|0|0|1 + 0|0|0|0 = 0.5|0|0|0.5
Notice the difference in green channel.
Filtering premultiplied alpha produces correct results.
Note that all this has nothing to do with blending.
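The same example as a quick numeric sketch (plain numpy, values from the table above):

import numpy as np

red   = np.array([1.0, 0.0, 0.0, 1.0])  # opaque red texel
green = np.array([0.0, 1.0, 0.0, 0.0])  # fully transparent texel whose color happens to be green

# Linear filtering halfway between the two texels is a 50/50 average.
print(0.5 * red + 0.5 * green)  # [0.5 0.5 0.  0.5] -- green bleeds in

# Premultiplied: RGB is multiplied by alpha before filtering,
# so the transparent texel contributes no color at all.
red_pm, green_pm = red.copy(), green.copy()
red_pm[:3] *= red[3]
green_pm[:3] *= green[3]
print(0.5 * red_pm + 0.5 * green_pm)  # [0.5 0.  0.  0.5] -- correct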
This is a guess, because there is not enough information yet to figure it out.
It should be the same. One common way of getting a different value is to use a different Gamma correction method between the premultiply and the rendering step.
I am going to guess that one of your stages, either the blending, or the premultiplying stage is being done with a different gamma value. If you generate your premultiplied textures with a tool like DirectXTex texconv and use the default srgb option for premultiplying alpha, then your sampler needs to be an _SRGB format and your render target should be _SRGB as well. If you are treating them linearly then you may not be able to render to an _SRGB target or sample the texture with gamma correction, even if you are doing the premultiply in the same shader that samples (depending on 3D API and render target setup differences). Doing so will cause the alpha to be significantly different between the two methods in the midtones.
See: The Importance of Being Linear.
If you are generating the alpha in Photoshop then you should know a couple things. Photoshop does not save alpha in linear OR sRGB format. It saves it as a Gamma value about half way between linear and sRGB. If you premultiply in Photoshop it will compute the premultiply correctly but save the result with the wrong ramp. If you generate a normal alpha then sample it as sRGB or LINEAR in your 3d API it will be close but will not match the values Photoshop shows in either case.
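As a rough numeric illustration of that midtone difference (standard sRGB transfer function; the sample values are made up):

import numpy as np

def srgb_to_linear(c):
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

color, alpha = 0.5, 0.5  # a midtone texel

# Premultiplying the sRGB-encoded value directly:
premult_in_srgb = srgb_to_linear(color * alpha)    # ~0.051

# Premultiplying in linear space, as the blending math expects:
premult_in_linear = srgb_to_linear(color) * alpha  # ~0.107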
For a more in-depth reply, the information we would need is:
What 3D API you are using.
How your textures are generated and sampled.
When and how you are premultiplying the alpha.
And preferably a code or shader example that shows the error.
I was researching why one would use Pre vs non-Pre and found this interesting info from Nvidia
https://developer.nvidia.com/content/alpha-blending-pre-or-not-pre
It seems that their specific case has more precision when using premultiplied (Pre) rather than straight alpha.
I also read (I believe on here, but I cannot find it) that doing pre-alpha (multiplying the alpha into each RGB value) saves time. I still need to find out whether that's true, but there seems to be a reason why pre-alpha is preferred.

Three.js camera tilt up or down and keep horizon level

camera.rotation.y pans left or right in a predictable manner.
camera.rotation.x looks up or down predictably when camera.rotation.y is at 180 degrees.
But when I change the value of camera.rotation.y to some new value, and then I change the value of camera.rotation.x, the horizon rotates.
I've looked for an algorithm to adjust for the horizon rotation after camera.rotation.x is changed, but haven't found one.
In three.js, an object's orientation can be specified by its Euler rotation vector object.rotation. The three components of the rotation vector represent the rotation in radians around the object's internal x-axis, y-axis, and z-axis respectively.
The order in which the rotations are performed is specified by object.rotation.order. The default order is "XYZ" -- rotation around the x-axis occurs first, then the y-axis, then the z-axis.
Rotations are performed with respect to the object's internal coordinate system -- not the world coordinate system. This is important. So, for example, after the x-rotation occurs, the object's y- and z- axes will generally no longer be aligned with the world axes. Rotations specified in this way are not unique.
So, for example, if in code you specify,
camera.rotation.y = y_radians; // Y first
camera.rotation.x = x_radians; // X second
camera.rotation.z = 0;
the rotations are applied in the object's rotation.order, not in the order you specified them.
In your case, you may find it more intuitive to set rotation.order to "YXZ", which is equivalent to "heading, pitch, and roll".
For more information about Euler angles, see the Wikipedia article. Three.js follows the Tait–Bryan convention, as explained in the article.
three.js r.61
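To see why "YXZ" keeps the horizon level, here is a small numeric sketch (plain numpy, not three.js code) comparing the camera's right vector under the two orders; a level horizon means its world y-component stays zero:

import numpy as np

def rot_x(a):  # pitch
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # heading
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

yaw, pitch = np.radians(40), np.radians(25)
right = np.array([1.0, 0.0, 0.0])  # the camera's local +x axis

# "YXZ" (heading, then pitch): the right vector stays horizontal.
print((rot_y(yaw) @ rot_x(pitch) @ right)[1])  # 0.0

# "XYZ" (the default): the right vector picks up a y-component,
# which is exactly the horizon tilt described in the question.
print((rot_x(pitch) @ rot_y(yaw) @ right)[1])  # ~0.27, nonzero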
I've been looking for the same info for a few days now; the trick is: use the regular rotateX to look up/down, but use rotateOnWorldAxis(new THREE.Vector3(0.0, 1.0, 0.0), angle) for the horizontal turn (https://discourse.threejs.org/t/vertical-camera-rotation/15334).

Colorbar for imshow, centered on 0 and with symlog scale

I want to generate a grid of plots, of several arrays, with positive and negative values, with log scale, sharing the same colorbar.
I've achieved the sharing part of the colorbar (using ImageGrid and common max and min values), and I know that I could get a logarithmic scale using LogNorm() on the imshow call in the case of only positive values. But given the presence of negative values, I would need a colorbar on symmetric logarithmic scale.
I found what should be the solution at https://stackoverflow.com/a/7741317/1101750, but running the sample code Yann provides gives me very different results that are clearly wrong.
Reviewing the code, I'm not able to grasp what's going on.
In addition, I've discovered that in Matplotlib 1.2, scale.SymmetricalLogScale.SymmetricalLogTransform takes a new argument not explained in the documentation (linscale; looking at the code of other transforms, I assume leaving it at 1 is a safe value).
Is the easiest solution subclassing LogNorm?
I've used a pretty simple recipe in the past to do exactly this, without the need to do any subclassing. matplotlib.colors.SymLogNorm provides most of the functionality you need, except that I've found it necessary to generate the tick marks by hand. Note that this solution uses matplotlib 1.3.0, and I may be using features that weren't available with 1.2.
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

def imshow_symlog(my_matrix, vmin, vmax, logthresh=5):
    img = plt.imshow(my_matrix,
                     vmin=float(vmin), vmax=float(vmax),
                     norm=matplotlib.colors.SymLogNorm(10**-logthresh))
    maxlog = int(np.ceil(np.log10(vmax)))
    minlog = int(np.ceil(np.log10(-vmin)))
    # Generate the logarithmic tick locations by hand:
    # negative decades, zero, then positive decades.
    tick_locations = ([-(10**x) for x in range(minlog, -logthresh - 1, -1)]
                      + [0.0]
                      + [10**x for x in range(-logthresh, maxlog + 1)])
    cb = plt.colorbar(ticks=tick_locations)
    return img, cb
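For example, with synthetic data spanning both signs (sample values made up):

data = np.linspace(-1e4, 1e4, 10_000).reshape(100, 100)
img, cb = imshow_symlog(data, vmin=-1e4, vmax=1e4)
plt.show()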
Since 1.3, matplotlib has shipped SymLogNorm: http://matplotlib.org/api/colors_api.html#matplotlib.colors.SymLogNorm
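A minimal sketch of using it directly (parameter values made up; newer matplotlib versions also take a base argument, which defaults to 10):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import SymLogNorm

data = np.random.uniform(-1e3, 1e3, (50, 50))
norm = SymLogNorm(linthresh=1e-2, vmin=-1e3, vmax=1e3)
im = plt.imshow(data, norm=norm, cmap="RdBu_r")
plt.colorbar(im)  # symmetric log colorbar centered on 0
plt.show()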