BabylonJS: How to move the camera in front of a mesh

This is my first post on Stack Overflow.
I'm asking how I can put the camera in front of a mesh.
Context:
My project is a museum, and when I click on a picture (a mesh), I need the camera to move in front of that mesh so that I can look at the picture.
I tried:
camera.position = mesh.position;
Problem:
The camera takes the position of the mesh, so I cannot see the picture; I'm inside the picture!
Thank you for your help!

You can try to move the camera a bit away from the mesh position.
Something like this:
camera.position = mesh.position.add(new BABYLON.Vector3(0, 0, 5));
camera.target = mesh.position;

I found the right solution!
Thank you for your answer! I did try that before, but it didn't work properly.
Solution:
Picture.metadata = {};
Picture.metadata.visitorPosition = new BABYLON.Vector3(x, y, z);

if (pickResult.hit) {
    if (!pickResult.pickedMesh.metadata) { return; }
    camera.position = pickResult.pickedMesh.metadata.visitorPosition;
    camera.setTarget(pickResult.pickedMesh.position);
}
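For context, here is a minimal sketch of where the pickResult above typically comes from; the scene.onPointerDown callback is standard Babylon.js, but the scene and camera variable names are assumptions, not from the original post:
// Sketch: run the picking logic above whenever the user clicks in the scene.
scene.onPointerDown = function (evt, pickResult) {
    if (pickResult.hit) {
        if (!pickResult.pickedMesh.metadata) { return; }
        camera.position = pickResult.pickedMesh.metadata.visitorPosition;
        camera.setTarget(pickResult.pickedMesh.position);
    }
};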


Mineflayer bot.look at a direction

OK, so I'm making a Minecraft bot app using Mineflayer, but I can't figure out how to make the bot look in a direction such as north or south. I tried bot.lookAt(north), but that didn't work, and I searched the API documentation but couldn't find any answers. Any help would be appreciated.
bot.lookAt needs a Vec3 point as input, so you can't just say "north".
You can give it something like (0, 1, 0).
https://github.com/PrismarineJS/mineflayer/blob/master/docs/api.md#botlookatpoint-force
In JavaScript it looks like this:
var Vec3 = require('vec3').Vec3;
var v1 = new Vec3(1, 2, 3);
bot.lookAt(v1);
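To actually face a compass direction, one option is to look at a point offset from the bot's own position. This is a sketch, assuming bot.entity.position is available and that north is the negative Z direction (as in modern Minecraft versions):
// Look one block north of the bot, at roughly eye height.
var target = bot.entity.position.offset(0, 1.6, -1);
bot.lookAt(target);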

Qt3D Billboards: how to make a plane always face the camera

I am trying to use the camera's viewVector to make a plane face the camera, but the plane rolls clockwise or counterclockwise when the camera rotates to the left or right.
Can I make the plane always face the camera without this clockwise/counterclockwise roll?
I think camera->upVector() might help me, but I don't know how to use it.
My code:
class planeTransformClass : public Qt3DCore::QTransform {
public:
    planeTransformClass(Qt3DCore::QNode *entity = nullptr) : Qt3DCore::QTransform(entity) {}

    // Used as the slot in the connect() below: rotate the plane's +Y normal back along the camera's view vector.
    void faceTo(const QVector3D &v) {
        setRotation(QQuaternion::rotationTo(QVector3D(0, 1, 0), -v));
    }
};
// Background
Qt3DCore::QEntity *planeEntity = new Qt3DCore::QEntity(rootEntity);
Qt3DExtras::QPlaneMesh *planeMesh = new Qt3DExtras::QPlaneMesh(planeEntity);
planeMesh->setHeight(2);
planeMesh->setWidth(2);
Qt3DExtras::QTextureMaterial *planeMaterial = new Qt3DExtras::QTextureMaterial(planeEntity);
Qt3DRender::QTexture2D *planeTexture = new Qt3DRender::QTexture2D(planeMaterial);
FlippedTextureImage *planeTextureImage = new FlippedTextureImage(planeTexture);
planeTextureImage->setSize(QSize(3000, 3000));
planeTexture->addTextureImage(planeTextureImage);
planeMaterial->setTexture(planeTexture);
planeMaterial->setAlphaBlendingEnabled(true);
// Transform
planeTransformClass *planeTransform = new planeTransformClass(planeEntity);
planeTransform->setRotationX(90);
planeTransform->setTranslation(QVector3D(2, 0, 0));
planeEntity->addComponent(planeMesh);
planeEntity->addComponent(planeMaterial);
planeEntity->addComponent(planeTransform);
// connect camera's viewVectorChanged
camera->connect( camera, &Qt3DRender::QCamera::viewVectorChanged,
planeTransform, &planeTransformClass::faceTo);
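For reference, here is a minimal sketch of the upVector() idea mentioned in the question. It assumes the transform class stores a pointer to the camera (called m_camera here, which is not in the original code). QQuaternion::fromDirection() builds an orientation from a forward direction plus an up vector, which suppresses the roll that rotationTo() alone introduces:
void faceTo(const QVector3D &viewVector) {
    // QPlaneMesh's front face points along +Y, but fromDirection() aligns the local
    // +Z axis with the given direction, so first map +Y onto +Z, then orient +Z
    // towards the camera, constrained by the camera's up vector.
    const QQuaternion yToZ = QQuaternion::rotationTo(QVector3D(0, 1, 0), QVector3D(0, 0, 1));
    setRotation(QQuaternion::fromDirection(-viewVector.normalized(), m_camera->upVector()) * yToZ);
}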
Solution using geometry shader
There is already a C++ translation of this QML code that I tried to translate (using a geometry shader):
https://github.com/ismailsunni/qt3d-custom-shader
My own translation can be found here. It works now after solving the issues mentioned in this question.
Solution without geometry shader
I got a different version running based on this bug report; you can find my implementation on GitHub. The billboards don't change their size when the camera moves, i.e. they don't become smaller and larger on screen like every other object, but I hope you can work with that.
On the master branch the images also move for some reason, as can be seen in this image:
The no_instanced_rendering branch doesn't use instanced rendering and displays everything correctly.
Edit
I'm sorry I'm not using your example, but I don't have time anymore to adjust it. Check out the patch related to the bug report I mentioned: its author implemented a material with this effect, and you should be able to extract it and get it running.

Blender: How to export an animation as a glTF file

I have made a very basic animation in Blender (2.79) and I am trying to export it as a glTF or GLB. I have successfully installed the glTF exporter and am able to export both glTF and GLB files of unanimated models, no problem.
As soon as I add animation and try to export, however, I get the following error message.
The animation is about as simple as it could be; I'm just experimenting. It's just the default box, changing location and rotation across 3/4 keyframes.
I am new to Blender, so perhaps I am missing a step, but my process is as follows: add the box to the scene, add keyframes (LocRot), then export as I would with a static object, using the following (default) settings.
I have tried clicking Push Down on the action sheet, as I have seen suggested somewhere, but it makes no difference.
Am I missing something? Please let me know if you need any more info in order to advise; I'm happy to share the file or whatever else might help.
NB: this will eventually be used in A-Frame, so any specific tips on exporting for that would be much appreciated.
So, after seeing someone else's export settings (thanks to the A-Frame Slack channel), I have got this working. I needed to have Force sample animations selected in the animation export settings. Then it exports with no problem and works as expected. Answering my own question in case anyone else has this problem.
Thanks.
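If you drive the export from a script, the same option can be set from Python. The sketch below uses the operator and parameter names of the current glTF add-on (Blender 2.80+), which differ from the 2.79 exporter used in the question, so treat it as an assumption rather than an exact recipe:
import bpy

# Export the scene as a .glb, sampling animations at every frame
# (the scripted equivalent of the "Force sample animations" checkbox).
bpy.ops.export_scene.gltf(
    filepath="animated_box.glb",
    export_format='GLB',
    export_force_sampling=True,
)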
To export multiple animated objects with armatures, vertex groups should have unique names. Here's a little script to rename bones and vertex groups based on parent armature name:
import bpy

for obj in bpy.context.selected_objects:
    if obj.type == "ARMATURE":
        prefix = obj.name + "_"
        for bone in obj.data.bones:
            bone.name = prefix + bone.name
    if obj.type == "MESH":
        prefix = obj.parent.name + "_"
        for vg in obj.vertex_groups:
            if vg.name[0:len(prefix)] != prefix:
                vg.name = prefix + vg.name

autoenablesDefaultLighting is too bright in iOS 12 and SCNView.pointOfView is not effective

I am using SceneKit's autoenablesDefaultLighting and allowsCameraControl features on my SCNView to light an OBJ 3D model in the app and rotate around this object, in Objective-C. Since upgrading to iOS 12, the default light intensity of autoenablesDefaultLighting has become higher and the 3D model looks far too bright!
Has anyone faced the same issue? If yes, is there a way to control the light intensity of autoenablesDefaultLighting when its value is YES? Since it does not seem editable, I tried to attach/constrain an omni light or a directional light to the camera by creating a node, assigning a light to this node, and adding it as a child of SCNView.pointOfView, but no light illuminates the scene.
Example:
3D object displayed before iOS 12
3D object displayed in iOS 12
It would be great if anyone could help me with this.
Many thanks!
Edit: how I solved this issue
I created a new SCNCamera, added it to a node, and set that node as the pointOfView of my scnView.
I activated HDR on this camera with scnView.pointOfView.camera.wantsHDR = YES;
but then I had a grey background.
To get rid of the grey background I cleared the background colour with scnView.backgroundColor = [UIColor clearColor];
and set the exposure of the camera to -1 with:
self.scnView.pointOfView.camera.minimumExposure = -1;
self.scnView.pointOfView.camera.maximumExposure = -1;
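Putting those steps together, here is a minimal sketch in Objective-C (the view property and node names are assumptions, not the original project code):
SCNCamera *camera = [SCNCamera camera];
camera.wantsHDR = YES;                 // balances the exposure of the default lighting
camera.minimumExposure = -1.0;
camera.maximumExposure = -1.0;

SCNNode *cameraNode = [SCNNode node];
cameraNode.camera = camera;
cameraNode.position = SCNVector3Make(0, 0, 10);       // placeholder position

self.scnView.pointOfView = cameraNode;
self.scnView.backgroundColor = [UIColor clearColor];  // avoids the grey background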
Thanks
You can try enabling HDR. It should result in a balanced exposure
scnView?.pointOfView?.camera?.wantsHDR = true
With HDR enabled, you can even control exposure compensation with
scnView?.pointOfView?.camera?.exposureOffset
.camera?.wantsHDR = true
.camera?.wantsExposureAdaptation = false
Should solve the problem!

AutoIt script to automate the Audacity track drop-down menu with a dynamic mouse click

I am trying to automate some 600 MP3 files for church, combining instrument and voice into one track so I can use the mixer for the left or right channel. When I combine two sound files, I need to select the left channel for the first track and the right channel for the second track.
The problem is that as long as Audacity stays at a fixed screen position, the mouse click works fine; but when I move the script to another laptop/desktop it doesn't work, since the mouse coordinates are not the same.
Is there a way to select the track drop-down menu in Audacity to choose the left or right channel? There are no shortcuts for it in the current version. If someone can help, I appreciate your opinion and time.
Br, Andrew
This code worked great for me. Thanks, guys.
Opt("MouseCoordMode", 0) ;1=absolute, 0=relative, 2=client
;set left channel
ControlClick($FileNameL, "","","Left",1,90,13)
Send("{SHIFTDOWN}l{SHIFTUP}")
Sleep(1000)
;set right channel
ControlClick($FileNameL, "","","Left",1,60,162)
Send("{SHIFTDOWN}r{SHIFTUP}")
Sleep(1000)
Send("{CTRLDOWN}a{CTRLUP}")
;MsgBox(0,"File Size","waiting")
Send("{ALTDOWN}tx{ALTUP}")
You just need:
Opt("MouseCoordMode", 0) ;1=absolute, 0=relative, 2=client
This will make your clicks relative to the window you are working with.
Use the Window Info tool to find those coordinates: go to Options and set the Coord Mode to Window.
You can also use:
ControlClick ( "title", "text", controlID [, button [, clicks [, x [, y]]]] )
Neither of these two methods depends on your screen resolution.
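For example, here is a minimal sketch of the first approach; the window title match and the coordinates below are placeholders read off the Window Info tool, not values from the question:
Opt("MouseCoordMode", 0)      ; 0 = coordinates relative to the active window
WinActivate("Audacity")       ; match the Audacity window by title (assumption)
WinWaitActive("Audacity")
MouseClick("left", 90, 13, 1) ; coordinates taken from the Window Info tool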