Convert from Blender to Ogre3D

I just finished setting up Blender so it can export to Ogre. When I export, I get a bunch of mesh files and a scene file.
I am loading the model that the Ogre SDK provides, and it works, like so:
mSceneMgr->setAmbientLight(Ogre::ColourValue(0.5f, 0.5f, 0.5f));
// Create an Entity
Ogre::Entity* ogreHead = mSceneMgr->createEntity("Head", "ogrehead.mesh");
// Create a SceneNode and attach the Entity to it
Ogre::SceneNode* headNode = mSceneMgr->getRootSceneNode()->createChildSceneNode("HeadNode");
headNode->attachObject(ogreHead);
// Create a Light and set its position
Ogre::Light* light = mSceneMgr->createLight("MainLight");
light->setPosition(20.0f, 80.0f, 50.0f);
What's happening is that this loads a single mesh file and that's it.
The Blender export produced a set of .mesh files plus a BlackHawk.scene file.
What do I need to do from here in order to load my model?

It depends a bit on what you want to achieve.
Currently you have created a scene in Blender containing multiple parts that together make up your BlackHawk helicopter. If you just need a single object in Ogre, you can join the elements inside Blender into one object, export that, and use the same loading code as before (with the new .mesh file name, of course).
If you want the individual parts to stay independent, you will have to load them into Ogre one by one (a sketch of this follows) or use one of the many DotScene loaders and point it at your "BlackHawk.scene" file (which should reference all helicopter parts).
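A minimal sketch of the one-by-one approach; the part file names here are hypothetical, so substitute whatever .mesh files your export actually produced:
Ogre::SceneNode* heliNode =
    mSceneMgr->getRootSceneNode()->createChildSceneNode("BlackHawkNode");
// Hypothetical names; list the .mesh files from your export here.
const std::vector<Ogre::String> partFiles = {
    "Body.mesh", "MainRotor.mesh", "TailRotor.mesh"
};
for (const Ogre::String& partFile : partFiles)
{
    // Each part gets its own Entity and child node, so parts can still be
    // transformed independently (e.g. spinning the rotors).
    Ogre::Entity* part = mSceneMgr->createEntity(partFile);
    heliNode->createChildSceneNode()->attachObject(part);
}
Since all parts hang off one parent node, you can still move or rotate the whole helicopter by transforming heliNode.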

Impossible to get post transform statistics by split

I'm running a simple penguin pipeline in interactive mode with a train/eval split. The Transform step runs, but I can't get the post_transform_statistics artifacts.
Inside the dedicated artifacts folder /tmp/tfx-penguin_custom_INTERACTIVE-nq5dn56x/Transform/post_transform_stats/5, I have just one FeaturesStats.pb, but no Split-train and Split-eval subfolders with a FeaturesStats.pb inside each.
However, I do have those subfolders inside the artifacts dedicated to the transformed examples (/tmp/tfx-penguin_custom_INTERACTIVE-nq5dn56x/Transform/transformed_examples/5/).
Here is how I define the Transform component, explicitly providing the splits and also disable_statistics=False:
transform = tfx.components.Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    disable_statistics=False,
    splits_config=transform_pb2.SplitsConfig(
        analyze=['train'], transform=['train', 'eval']),
    module_file=_transformer_module_file)
I went through the docstring and even the __init__ of the component (https://github.com/tensorflow/tfx/blob/master/tfx/components/transform/component.py); it seems there is nothing I have forgotten or mistaken, but I was puzzled to read the following comment, which points to a location for the stats that I cannot trace:
disable_statistics: If True, do not invoke TFDV to compute pre-transform
and post-transform statistics. When statistics are computed, they will
be stored in the `pre_transform_feature_stats/` and
`post_transform_feature_stats/` subfolders of the `transform_graph`
export.
For now, the workaround is to explicitly disable statistics in the Transform component and to define a dedicated StatisticsGen component next to it that works on the transformed example splits, but it would have been great to have the per-split statistics inside the Transform component directly.
Thanks for any help!
This is expected, as the statistics generation inside Transform currently works on the entire transformed dataset regardless of split/span.
To generate separate statistics for different splits, please use the StatisticsGen component, as sketched below.
Thank you!
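A minimal sketch of that setup, reusing the Transform definition from the question (StatisticsGen writes one FeaturesStats.pb per split by default):
# Disable the built-in, unsplit statistics in Transform ...
transform = tfx.components.Transform(
    examples=example_gen.outputs['examples'],
    schema=schema_gen.outputs['schema'],
    disable_statistics=True,
    splits_config=transform_pb2.SplitsConfig(
        analyze=['train'], transform=['train', 'eval']),
    module_file=_transformer_module_file)

# ... and compute per-split statistics on the transformed examples instead,
# which yields the Split-train and Split-eval subfolders you expected.
transformed_stats = tfx.components.StatisticsGen(
    examples=transform.outputs['transformed_examples'])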

How to properly use QSkyBoxEntity?

I looked everywhere, but there aren't any guides or explanations of how to use QSkyBoxEntity.
I created the entity and gave it a transform (set the translation and 3D scale). I also set the base name and extension.
When I try to run the program it says
"Qt3D.Renderer.OpenGL.Backend: Unable to find suitable Texture Unit for "skyboxTexture""
I checked several times and tried different PNG files, but no luck.
My image (I know it's fake transparency, but it shouldn't change anything, right?)
And here's part of the code:
Qt3DCore::QEntity *resultEntity = new Qt3DCore::QEntity;
Qt3DExtras::QSkyboxEntity *skyboxEntity = new Qt3DExtras::QSkyboxEntity(resultEntity);
skyboxEntity->setBaseName("skybox"); //I tried using path as well
skyboxEntity->setExtension("png");
Qt3DCore::QTransform *skyTransform = new Qt3DCore::QTransform(skyboxEntity);
skyTransform->setTranslation(QVector3D(0.0f,0.0f,0.0f));
skyTransform->setScale3D(QVector3D(0.1f,0.1f,0.1f));
skyboxEntity->addComponent(skyTransform);
Looks like it's not finding the skybox texture. Did you use an absolute path when you say "I tried using path as well"? The path you set is relative to the build directory, i.e. it is not relative to where your C++ file lies.
Alternatively, you could use a resource file and then load the image using
"qrc:/[prefix]/[filename without extension]"
You can also check out the Qt3D manual SkyBox test here:
https://github.com/qt/qt3d/tree/dev/tests/manual/skybox
It's important to name the files properly for the skybox to work, and to store them in a resource file.
I recommend .tga, but other formats should work as well.
You can read about it here:
https://doc.qt.io/qt-6/qml-qt3d-extras-skyboxentity.html
And here's an example of how the six files should be named for base name "skybox" and the .tga extension: skybox_posx.tga, skybox_negx.tga, skybox_posy.tga, skybox_negy.tga, skybox_posz.tga, skybox_negz.tga.

Web Audio: How to set different destination than speakers?

I struggle to understand how – using one AudioContext – I would achieve the following:
I use createMediaStreamSource to create the source of the context – works.
I then connect a volume node to the source – works.
I then want to create TWO outputs: one is the "standard" output (the speakers) and the second would be used to feed into a MediaRecorder.
I struggle with the third point. How do I specify a different output than the speakers? Is that output still a stream I can feed into a MediaRecorder?
From what you describe, I assume that you have some code that looks like this:
mediaStreamAudioSourceNode
.connect(gainNode) // your volume node
.connect(audioContext.destination);
You just need to add a MediaStreamAudioDestinationNode and use that as an additional output.
// A second destination that renders into a MediaStream instead of the speakers.
const mediaStreamAudioDestinationNode = new MediaStreamAudioDestinationNode(audioContext);
gainNode.connect(mediaStreamAudioDestinationNode);
// The node's stream is a regular MediaStream, so it can feed a MediaRecorder.
const mediaRecorder = new MediaRecorder(mediaStreamAudioDestinationNode.stream);
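From there, recording works with the standard MediaRecorder API; a minimal usage sketch (how you collect and use the chunks is up to you):
const chunks = [];
mediaRecorder.addEventListener('dataavailable', ({ data }) => chunks.push(data));
mediaRecorder.addEventListener('stop', () => {
  // Assemble the finished recording into a single Blob.
  const recording = new Blob(chunks, { type: mediaRecorder.mimeType });
});
mediaRecorder.start();
// ... later, when you are done:
mediaRecorder.stop();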

Accessing resources of a dynamically loaded module

I can't find a way to correctly get access to the resources of an installed distribution. For example, when a module is loaded dynamically:
require ::($module);
One way to get hold of its %?RESOURCES is to have the module provide a sub which returns this hash:
sub resources { %?RESOURCES }
But that adds extra boilerplate code.
Another way is to deep-scan $*REPO and fetch the module's distribution meta.
Are there any better options to achieve this?
One way is to use $*REPO (as you already mention) along with the Distribution object that CompUnit::Repository provides as an interface to the META6 data and its mapping to a given data store / file system.
my $spec = CompUnit::DependencySpecification.new(:short-name<Zef>);
my $dist = $*REPO.resolve($spec).distribution;
say $dist.content("resources/$_").open.slurp for $dist.meta<resources>.list;
Note that this only works for installed distributions at the moment, but it would work for not-yet-installed distributions (like -Ilib) with https://github.com/rakudo/rakudo/pull/1812

Export faces from Picasa

Is there any way to export cropped detected-face images from normal images in Picasa?
Is there any way to export similar persons without naming them (like person1 and person2 etc., maybe with probabilities)?
Is there any way to detect only one person per folder?
Once you have labeled each face within Picasa, there will be a file called .picasa.ini in the same folder as the photos. Its contents look like this:
[Contacts2]
a5719c14e1f43ecd=Bob;;
3df0fc0982a61960=Tom;;
[188698.jpg]
faces=rect64(4787040aa5044c1d),a5719c14e1f43ecd
backuphash=47212
[188766.jpg]
faces=rect64(49243a1b62da69d0),a5719c14e1f43ecd
backuphash=47212
[188804.jpg]
faces=rect64(283512ee998ed795),3df0fc0982a61960
backuphash=36479
[188803.jpg]
faces=rect64(778799bdf0c8f784),3df0fc0982a61960
backuphash=36479
[188812.jpg]
faces=rect64(28350000ae21dc8b),3df0fc0982a61960
backuphash=36479
[188806.jpg]
faces=rect64(44643314e5f5afd9),3df0fc0982a61960
backuphash=36479
You can use this data to calculate the coordinates of each rectangle you want to crop.
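The rect64() value is commonly decoded as a 64-bit hexadecimal number holding four 16-bit words (left, top, right, bottom), each a fraction of the image width or height scaled by 65535. A minimal Python sketch under that assumption, cropping one face from the data above with Pillow:
from PIL import Image

def decode_rect64(value, width, height):
    # Split the 64-bit value into four 16-bit fractions of the image size.
    bits = int(value, 16)
    left, top, right, bottom = (
        ((bits >> shift) & 0xFFFF) / 65535 for shift in (48, 32, 16, 0))
    return (int(left * width), int(top * height),
            int(right * width), int(bottom * height))

image = Image.open("188698.jpg")
box = decode_rect64("4787040aa5044c1d", *image.size)
image.crop(box).save("188698_face.jpg")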
For more details on the .picasa.ini format, see this question:
Automatic face detection using Picasa API to extract individual images