Qt3D Billboards: how to make a plane always face the camera

I'm trying to use the camera's viewVector to make a plane face the camera, but the plane turns clockwise or counterclockwise when the camera rotates left or right.
Can I make the plane always face the camera without it turning clockwise or counterclockwise?
I think camera->upVector() might help, but I don't know how to use it.
My code:
class planeTransformClass : public Qt3DCore::QTransform {
    Q_OBJECT
public:
    planeTransformClass(Qt3DCore::QNode *entity = nullptr)
        : Qt3DCore::QTransform(entity) {}

public slots:
    // Rotate the plane's +Y normal to point back along the view vector.
    void faceTo(const QVector3D &v) {
        setRotation(QQuaternion::rotationTo(QVector3D(0, 1, 0), -v));
    }
};
// Background
Qt3DCore::QEntity *planeEntity = new Qt3DCore::QEntity(rootEntity);
Qt3DExtras::QPlaneMesh *planeMesh = new Qt3DExtras::QPlaneMesh(planeEntity);
planeMesh->setHeight(2);
planeMesh->setWidth(2);
Qt3DExtras::QTextureMaterial *planeMaterial = new Qt3DExtras::QTextureMaterial(planeEntity);
Qt3DRender::QTexture2D *planeTexture = new Qt3DRender::QTexture2D(planeMaterial);
FlippedTextureImage *planeTextureImage = new FlippedTextureImage(planeTexture);
planeTextureImage->setSize(QSize(3000, 3000));
planeTexture->addTextureImage(planeTextureImage);
planeMaterial->setTexture(planeTexture);
planeMaterial->setAlphaBlendingEnabled(true);
// Transform
planeTransformClass *planeTransform = new planeTransformClass(planeEntity);
planeTransform->setRotationX(90);
planeTransform->setTranslation(QVector3D(2, 0, 0));
planeEntity->addComponent(planeMesh);
planeEntity->addComponent(planeMaterial);
planeEntity->addComponent(planeTransform);
// connect camera's viewVectorChanged
QObject::connect(camera, &Qt3DRender::QCamera::viewVectorChanged,
                 planeTransform, &planeTransformClass::faceTo);
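One way to bring camera->upVector() into this (a sketch, not the original code): QQuaternion::fromDirection() builds an orientation from a forward direction and an up vector, which pins down exactly the roll that rotationTo() leaves undetermined. The m_camera member is assumed here; the transform would need to store a pointer to the camera (or the slot could take both vectors).

// Sketch: orient the plane from both the view vector and the up vector so
// it cannot roll. m_camera (Qt3DRender::QCamera*) is an assumed member.
void planeTransformClass::faceTo(const QVector3D &viewVector)
{
    // QPlaneMesh's normal is local +Y; tilt it onto +Z first, then aim the
    // local +Z back at the camera while keeping local +Y close to the
    // camera's up vector. Flip signs if the texture ends up upside down.
    static const QQuaternion planeToZ =
        QQuaternion::rotationTo(QVector3D(0, 1, 0), QVector3D(0, 0, 1));
    setRotation(QQuaternion::fromDirection(-viewVector,
                                           m_camera->upVector()) * planeToZ);
}

With this version you would also connect Qt3DRender::QCamera::upVectorChanged to the same slot, since the orientation now depends on both vectors.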

Solution using geometry shader
There is already a C++ translation of the QML code that I tried to translate (using a geometry shader):
https://github.com/ismailsunni/qt3d-custom-shader
My own translation can be found here. It works now after solving the issues mentioned in this question.
Solution without geometry shader
I got a different version running based on this bug report. You can find my implementation on GitHub. The billboards don't change their size when the camera moves, i.e. they don't become smaller and larger on screen like every other object does, but I hope you can work with that.
On the master branch the images also move around for some reason (shown in a screenshot in the original post).
The no_instanced_rendering branch doesn't use instanced rendering and displays everything correctly.
Edit
I'm sorry I'm not using your example, but I don't have time anymore to adjust it. Check out the patch attached to the bug report I mentioned. Its author implemented a material with this effect; you should be able to extract it and get it running.

Related

How to draw a removed shape back into the scene with Konva?

I used the remove() method to delete a rectangle from the scene; how can I draw it back?
The documentation says: "remove a node from parent, but don't destroy. You can reuse the node later."
Link to documentation
I couldn't find any clue.
Thanks.
Thank you for the responses; I actually found a simple workaround.
I use the .hide() and .show() methods because I want to keep the object intact for later use, and when I don't need it anymore I'll just .destroy() the shapes.
The downside is that you need more memory to keep everything, but with few shapes on the scene that is negligible.
Just keep a reference to the node via a variable. For example, in the code below I add a node to layer1, remove it, and add it to layer2.
var layer1 = new Konva.Layer();
stage.add(layer1);
var node = new Konva.WhateverShape({....});
layer1.add(node);
layer1.draw();
...
...
var layer2 = new Konva.Layer();
stage.add(layer2);
node.remove(); // at this time the node exists but is not on the stage
layer2.add(node);
layer2.draw(); // now the node is visible again.

Blender: How to export an animation as a glTF file

I have made a very basic animation in Blender (2.79) and I am trying to export it as a glTF or GLB file. I have successfully installed the glTF exporter and am able to export both glTFs and GLBs of unanimated models, no problem.
As soon as I add animation and try to export, however, I get an error message (screenshot in the original post).
The animation is about as simple as it could be; I'm just experimenting. It's just the default box, changing location and rotation across three or four keyframes.
I am new to Blender, so perhaps I am missing a step, but my process is as follows: add a box to the scene, add keyframes (LocRot), then export as I would with a static object, using the default settings.
I have tried clicking Push Down on the action sheet, as I have seen suggested somewhere, but it makes no difference.
Am I missing something? Please let me know if you need any more info in order to advise; I'm happy to share the file or whatever might help.
NB: this will eventually be used in A-Frame, so any specific tips on exporting for that would be much appreciated.
So after seeing someone else's export settings (thanks to the A-Frame Slack channel), I have got this working. I needed to have "Force sample animations" selected in the animation export settings. Then it exports no problem and works as expected. Answering my own question in case anyone else has this problem.
Thanks.
To export multiple animated objects with armatures, the vertex groups should have unique names. Here's a little script to rename bones and vertex groups based on the parent armature's name:
import bpy

# Prefix bone and vertex-group names with the owning armature's name so that
# every exported object has uniquely named vertex groups.
for obj in bpy.context.selected_objects:
    if obj.type == "ARMATURE":
        prefix = obj.name + "_"
        for bone in obj.data.bones:
            bone.name = prefix + bone.name
    if obj.type == "MESH":
        prefix = obj.parent.name + "_"
        for vg in obj.vertex_groups:
            if not vg.name.startswith(prefix):
                vg.name = prefix + vg.name

What is the field of view (FOV) of the color camera in Kinect for Windows V2?

I can't find it anywhere. The kinectforwindows site has the FOV for the depth camera, but I can't find it in the box either.
@user1809923 is correct. I contacted the developer behind this link:
http://smeenk.com/kinect-field-of-view-comparison/
And he responded with this information:
If I remember correctly, these FOV values are part of the frame
descriptions that you can retrieve with the help of the Kinect SDK.
Since other people have asked for the same info, I will update my blog.
I confirmed his findings by retrieving the frame's FrameDescription in Kinect SDK code and printing the values to the screen.
According to this post
http://smeenk.com/kinect-field-of-view-comparison/
the FOV of the color camera is 84.1 x 53.8 degrees. I am not sure where the author found this information, though.
I just wanted to add some more information to this, because the official Kinect Fusion documentation is really bad, and because the answer here is correct but the numbers are rounded:
Assuming you already have an initialized IColorFrameSource* (e.g. with the name pColorFrameSource), you should retrieve the information after you have opened the reader via pColorFrameSource->OpenReader(&m_pColorFrameReader);, with m_pColorFrameReader being an IColorFrameReader*.
The code to retrieve the FOV then looks like this:
IFrameDescription *f = nullptr;
float fovDiago = 0;
float fovHori = 0;
float fovVerti = 0;
HRESULT hh = pColorFrameSource
    ->CreateFrameDescription(ColorImageFormat::ColorImageFormat_Rgba, &f);
if (hh == S_OK) {
    f->get_DiagonalFieldOfView(&fovDiago);
    f->get_HorizontalFieldOfView(&fovHori);
    f->get_VerticalFieldOfView(&fovVerti);
    f->Release();
}
The unrounded FOV values are:
fovDiago = 91.9000015
fovHori  = 84.0999985
fovVerti = 53.7999992
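As a worked example of what these values are good for (illustrative only; the 1920-pixel width is the standard Kinect v2 color resolution, everything else follows from the numbers above), the horizontal FOV gives you an approximate pinhole focal length in pixels:

#include <cmath>
#include <cstdio>

int main() {
    const double pi      = 3.14159265358979323846;
    const double fovHori = 84.0999985;  // degrees, from the values above
    const double width   = 1920.0;      // Kinect v2 color stream width in pixels
    // Pinhole model: fx = (width / 2) / tan(FOV_h / 2)
    const double fx = (width / 2.0) / std::tan(fovHori * pi / 180.0 / 2.0);
    std::printf("fx ~= %.0f px\n", fx); // ~1064 px
    return 0;
}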

Dojo dnd (drag and drop) 1.7.2 - How to maintain a separate (non-dojo-dnd) list?

I'm using Dojo dnd version 1.7.2 and it's generally working really well. I'm happy.
My app maintains many arrays of items, and as the user drags and drops items around, I need to ensure that my arrays are updated to reflect the contents the user is seeing.
In order to accomplish this, I think I need to run some code around the time Source.onDndDrop fires.
If I use dojo.connect to set up a handler on my Source for onDndDrop or onDrop, my code seems to get called too late. That is, the source that's passed to the handler doesn't actually have the item in it any more.
This is a problem because I want to call source.getItem(nodes[0].id) to get at the actual data that's being dragged around so I can find it in my arrays and update those arrays to reflect the change the user is making.
Perhaps I'm going about this wrong and there's a better way?
Ok, I found a good way to do this. A hint was found in this answer to a different question:
https://stackoverflow.com/a/1635554/573110
My successful sequence of calls is basically:
var source = new dojo.dnd.Source(element, creationParams);
var dropHandler = function(source, nodes, copy) {
    var o = source.getItem(nodes[0].id); // 0 is cool here because singular: true.
    // party on o.data ...
    this.oldDrop(source, nodes, copy);
};
source.oldDrop = source.onDrop;
source.onDrop = dropHandler;
This ensures that the new implementation of onDrop (dropHandler) is called right before the previously installed one.
I'm kind of shooting blanks here, I guess; there are a few different implementations of the dndSource. But there are some things one needs to know about the events / check functions that are called during mouseover / dnddrop.
One approach would be to set up checkAcceptance(source, nodes) for any target you may have, then keep a reference to the nodes currently being dragged. It gets tricky, though, with multiple containers that have dynamic contents.
Set up your Source while overriding checkAcceptance, and use a known (perhaps global) variable to keep track:
var lastReference = null;
var target = new dojo.dnd.Source(node, {
    checkAcceptance: function(source, nodes) {
        // this is called when 'nodes' are attempted dropped - on mouseover
        lastReference = source.getItem(nodes[0].id);
        // returning a boolean here will either green-light or deny your drop;
        // to use the fallback (default) behavior:
        return this.inherited(arguments);
    }
});
The best approach might just be the one below - you get both target and source plus the nodes at hand, though you need to figure out which is the right stack to look for the node in. I believe it is published at the same time as the event (onDrop) you're already using:
dojo.subscribe("/dnd/drop", function(source, nodes, copy, target) {
// figure out your source container id and target dropzone id
// do stuff with nodes
var itemId = nodes[0].id
}
The available mechanics/topics for dojo.subscribe and the related events are listed here:
http://dojotoolkit.org/reference-guide/1.7/dojo/dnd.html#manager

directshow "Color Space Converter" filter configuration problem (VMR windowless renderer)

I'm using VMR to mix a bitmap with a video stream. I run the renderer in windowless mode.
Since I need to have more than one stream on the renderer, I add the renderer to the graph first and then use IFilterGraph2::RenderEx with the AM_RENDEREX_RENDERTOEXISTINGRENDERERS flag.
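In graph-building code that sequence looks roughly like this (a sketch with assumed names - pGraph and pSourceOutPin are not from the original post, and the VMR-9 CLSID is a guess since the post just says "VMR"):

// Add the renderer to the graph first, then render to it with RenderEx.
IBaseFilter *pVmr = nullptr;
CoCreateInstance(CLSID_VideoMixingRenderer9, nullptr, CLSCTX_INPROC_SERVER,
                 IID_IBaseFilter, (void**)&pVmr);
pGraph->AddFilter(pVmr, L"VMR");

IFilterGraph2 *pGraph2 = nullptr;
pGraph->QueryInterface(IID_IFilterGraph2, (void**)&pGraph2);
// Reuse the renderer already in the graph instead of adding a new one.
pGraph2->RenderEx(pSourceOutPin, AM_RENDEREX_RENDERTOEXISTINGRENDERERS, nullptr);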
Everything works fine most of the time, but I have one .avi file that renders fine with RenderFile, yet ends up displaying all black when rendered in my graph. I compared the two graphs in GraphEdit, and they're the same:
capture.avi -> AVI Splitter -> Color Space Converter -> Video Renderer
The only difference between the graphs is that the Color Space Converter is set up differently: GraphEdit shows the following settings in the graph that works:
Input:
Major Type: Video
Sub Type: ARGB32
...
XForm Out:
Major Type: Video
Sub Type: RGB32
Whereas in my graph it shows:
Input: (same)
XForm Out:
Major Type: Video
Sub Type: ARGB32
So it looks like the converter is basically doing nothing. I have looked around and was not able to find any configuration interface for the Color Space Converter filter. I've also tried different things with IPin::QueryAccept and IFilterGraph2::ReconnectEx on the VMR input pin and the Color Space Converter output pin to try to force the output of the converter filter to RGB32, but I haven't had much luck. Hopefully somebody here can point me in the right direction!
As far as I know the Color Space Converter filter does not expose a configuration interface, but you don't need one either. You can force the Color Space Converter filter to convert to RGB32 by inserting a downstream filter which only accepts RGB32. The TransNull32 filter from the RGBFilters example does exactly this. Your graph will then look like this:
capture.avi -> AVI Splitter -> Color Space Converter -> TransNull32 -> Video Renderer
See also Regarding the scope of Sample Grabber in DirectShow, where I explained how to use the TransNull24 filter.
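For reference, the core of such a pass-through filter is tiny. A minimal sketch of the mechanism (this is not the actual RGBFilters source; it assumes the DirectShow base classes are available, and a real filter still needs its own CLSID and registration):

#include <streams.h>   // DirectShow base classes (CTransInPlaceFilter etc.)

// A do-nothing trans-in-place filter that only accepts RGB32 video, so
// intelligent connect forces the upstream Color Space Converter to emit RGB32.
class CTransNull32 : public CTransInPlaceFilter
{
public:
    CTransNull32(LPUNKNOWN pUnk, HRESULT *phr)
        : CTransInPlaceFilter(NAME("TransNull32"), pUnk, GUID_NULL, phr) {}
        // GUID_NULL stands in for the filter's real CLSID in this sketch.

    // Reject everything except MEDIATYPE_Video / MEDIASUBTYPE_RGB32.
    HRESULT CheckInputType(const CMediaType *mtIn) override
    {
        if (*mtIn->Type() == MEDIATYPE_Video &&
            *mtIn->Subtype() == MEDIASUBTYPE_RGB32)
            return S_OK;
        return VFW_E_TYPE_NOT_ACCEPTED;
    }

    // Samples pass through untouched.
    HRESULT Transform(IMediaSample *) override { return S_OK; }
};

Because CheckInputType only ever accepts RGB32, the upstream negotiation has no choice but to deliver it, which is exactly why dropping TransNull32 between the converter and the renderer fixes the ARGB32 output shown above.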