How do you import an FBX/OBJ model into a Babylon scene?

I am in the process of learning BabylonJS.
How do you add a 3D model to an already existing BabylonJS scene? I have a scene of a building, and I want to add a grand piano to the building's interior. The piano is a 3D model in OBJ and FBX form.
Thanks!

You've got to use the Assets Manager:

const assetsManager = new BABYLON.AssetsManager(scene);
const meshTask = assetsManager.addMeshTask('piano task', '', './assets/', 'piano.obj');

meshTask.onSuccess = (task) => {
    const pianoMesh = task.loadedMeshes[0];
    // Position, scale, or parent the mesh here
};

assetsManager.load();

If your mesh is in the .OBJ format, you'll also need babylonjs-loaders, since the OBJ loader isn't part of the core library.
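Once the task succeeds, you can place the piano inside the building. A minimal sketch of that step (the coordinates and scale are made-up values for illustration):

meshTask.onSuccess = (task) => {
    const pianoMesh = task.loadedMeshes[0];
    // hypothetical coordinates for the building's interior
    pianoMesh.position = new BABYLON.Vector3(2, 0, -3);
    pianoMesh.scaling = new BABYLON.Vector3(0.5, 0.5, 0.5);
};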

Hello, you have several options:
Import it into Blender, 3ds Max, or Unity and export it using one of the supported exporters: https://github.com/BabylonJS/Babylon.js/tree/master/Exporters
Use the FBX exporter: https://github.com/BabylonJS/Babylon.js/tree/master/Exporters/FBX
Use the OBJ loader: https://github.com/BabylonJS/Babylon.js/blob/master/dist/preview%20release/loaders/babylon.objFileLoader.js
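If you go the OBJ loader route, usage looks roughly like this (a sketch based on the standard SceneLoader API; the paths are placeholders):

// once babylon.objFileLoader.js is included, the .obj extension is registered with SceneLoader
BABYLON.SceneLoader.ImportMesh('', './assets/', 'piano.obj', scene, function (meshes) {
    // meshes holds the piano's sub-meshes
    console.log('Loaded', meshes.length, 'meshes');
});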

I don't know whether this question still matters to you, but you could also try the very good FBX2glTF converter:
https://github.com/facebookincubator/FBX2glTF
It can even handle blend shapes, materials, Draco compression, etc. It's easy to build and runs very fast. Maybe give it a try.
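Once converted, the glTF can be dropped into Babylon with the standard loader. A sketch, assuming the glTF loader plugin is registered and using a placeholder file name:

// e.g. running FBX2glTF on piano.fbx produces piano.gltf (plus its buffers/textures)
BABYLON.SceneLoader.Append('./assets/', 'piano.gltf', scene, function (scene) {
    // the piano's meshes are now part of the scene
});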


Is it possible to control which layers are in front on deck.gl?

In Leaflet, when I create a layer from a GeoJSON, I can control which layer is shown in front of the map by looping over the layer's features and calling a function like feature.bringToFront().
On deck.gl, however, after creating my layers from a GeoJSON, it's hard to tell which layer will be drawn in front of the map.
So, is there any way of controlling which feature is shown in front with deck.gl? I know I could change the elevation when the map is viewed from above, but that isn't a good solution when I'm navigating through the 3D map. Can I force a specific feature to appear in front of others? Is there a parameter or function that controls that?
I've been working on this for a few hours, coming from a LeafletJS background.
The closest I've gotten to a feature.bringToFront() equivalent is ordering the layers array that you pass to the DeckGL component; the deck.gl documentation describes this here:
https://deck.gl/docs/developer-guide/using-layers#rendering-layers
The code below works for me; however, it seems to apply only to layers of the same type. If I add a LineLayer at index 0 of the layers array, it will still render in front of the two SolidPolygonLayers, even though index 0 should put it behind them.
const layers = [
  new SolidPolygonLayer({
    id: 'at array index 0 and appears in the background',
    data: helper.getPolygonData(),
    getPolygon: d => d.coordinates,
    getFillColor: d => [255, 0, 0],
    filled: true,
  }),
  new SolidPolygonLayer({
    id: 'at array index 1 and appears in the foreground',
    data: helper.getPolygonData(),
    getPolygon: d => d.coordinates,
    getFillColor: d => d.color,
    filled: true,
  }),
];

return (
  <DeckGL
    layers={layers}
    viewState={getViewState()}
    onViewStateChange={e => setViewState(e.viewState)}
    initialViewState={INITIAL_VIEW_STATE}
    controller={true}
    getTooltip={getTooltip}
  >
    ...
  </DeckGL>
);
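If ordering alone doesn't cooperate across layer types, another knob that may be worth trying (an assumption on my part, not something I've verified for this case) is deck.gl's parameters prop, which passes GL state down to luma.gl; disabling the depth test makes a layer draw on top regardless of geometry depth:

const topLayer = new SolidPolygonLayer({
  id: 'forced-to-front',
  data: helper.getPolygonData(),
  getPolygon: d => d.coordinates,
  getFillColor: d => d.color,
  filled: true,
  // skip depth testing so this layer is drawn over 3D geometry from other layers
  parameters: { depthTest: false },
});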

How to use an inner camera from a glTF in A-Frame?

I have a glTF model which contains cameras, meshes, and camera animations. After loading, they become one glTF entity. I can't use secondCameraEl.setAttribute('camera', 'active', true) to change the camera. How can I use the cameras inside it?
In three.js this can be done like so:

this.content.traverse((node) => {
    if (node.isCamera && node.name === name) {
        this.activeCamera = node;
    }
});
this.renderer.render(this.scene, this.activeCamera);

But how do I use the inner camera from the glTF in A-Frame?
You could keep a reference to the camera:

var model_camera = null;

// wait until the model is loaded
entity.addEventListener("model-loaded", e => {
    const mesh = entity.getObject3D("mesh");
    mesh.traverse(node => {
        // assuming there is one camera
        if (node.isCamera) model_camera = node;
    });
});

and when you need to use it, just swap it into the scene's camera property:

entity.sceneEl.camera = model_camera;

Check it out in this example with a model containing two different cameras.
AFAIK there is no cleanup involved; the look-controls or wasd-controls will keep working as long as you don't manually disable them when switching cameras.
The camera system has a function for swapping active cameras, but you would need to add <a-entity> wrappers for the camera references (as the system tries to call getObject3D('camera') on the provided entity).
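A rough sketch of that wrapper approach (setActiveCamera is the camera system's internal method, so treat the exact call as an assumption that may differ between A-Frame versions):

// wrap the glTF camera in an entity so the camera system can find it
const camWrapper = document.createElement('a-entity');
entity.sceneEl.appendChild(camWrapper);
camWrapper.setObject3D('camera', model_camera);

// ask the camera system to make it the active camera
entity.sceneEl.systems.camera.setActiveCamera(camWrapper);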

How to use the three.js TextureLoader() in React Native?

I am trying to create a 3D model in React Native using three.js. For that I need to use an image as a texture, but the TextureLoader() function relies on 'document' and 'canvas' object-creation logic, which isn't available in React Native.
So how can I use an image as a texture in React Native with three.js?
Short summary: the TextureLoader didn't work because it requires an ImageLoader, which requires the inaccessible DOM.
Apparently there's a THREE.Texture constructor, which I used like this:

var img = new Image(128, 128);
img.src = url; // url = a base64 JPEG string in this case

let texture = new THREE.Texture(img); // was: new THREE.Texture(url) plus texture.normal = img, which three.js ignores; .image is the texture's pixel source
texture.needsUpdate = true; // tell three.js to upload the image to the GPU

As you can see, I used an Image constructor; this comes from React Native and is imported like this:

import { Image } from 'react-native';

The React Native documentation explains how it can be used; it supports base64-encoded JPEGs.
And finally, map the texture to the material:

material.map = texture;

Note: I haven't tried to show the end result in my WebGL view yet, but in the element inspector it seems to show proper three.js objects.
(Note: this is not a working answer yet.)
EDIT: I made the code a bit shorter and used DOMParser; it still doesn't work, because:
"image.addEventListener is not a function"
This implies the image object created this way can't be used by three.js.

var DOMParser = require('react-native-html-parser').DOMParser;
var myDomParser = new DOMParser();

window.document = {};
window.document.createElementNS = (x, y) => {
    if (y === "img") {
        let doc = myDomParser.parseFromString("<html><head></head><body><img class='image' src=''></img></body></html>");
        return doc.getElementsByAttribute("class", "image")[0];
    }
};

Edit: be careful with this, because it also overrides the other createElementNS variants, which create canvas or other elements.
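For what it's worth, if the project runs on Expo, the expo-three package sidesteps the DOM entirely. A sketch from memory of its README (double-check the API; material is whatever three.js material you're texturing):

import { loadAsync } from 'expo-three';

// resolves to a THREE.Texture without touching document or canvas
loadAsync(require('./assets/texture.jpg')).then(texture => {
    material.map = texture;
    material.needsUpdate = true;
});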

Is there a way to implement some sort of auto-translation in a React Native app?

I know this isn't Google, but I wasn't able to find anything useful, and maybe you can give me some advice.
What I am looking for is some way to add auto-translation for the strings in my React Native application.
Right now I am using a workaround in which I translate some of the most common words manually; since that doesn't cover the whole language, the outcome looks pretty unsatisfying :)
You could use react-native-i18n.
var I18n = require('react-native-i18n');

var Demo = React.createClass({
    render: function() {
        return (
            <Text>{I18n.t('greeting')}</Text>
        );
    }
});

// Enable fallbacks if you want `en-US` and `en-GB` to fall back to `en`
I18n.fallbacks = true;

I18n.translations = {
    en: {
        greeting: 'Hi!'
    },
    fr: {
        greeting: 'Bonjour!'
    }
};
Take the user's phone OS language using react-native-device-info:
https://www.npmjs.com/package/react-native-device-info#getdevicelocale
or using:

I18n = require('react-native-i18n');
locale = I18n.currentLocale();

Then use react-native-power-translator:
https://www.npmjs.com/package/react-native-power-translator

// Set your device language as the Target_Language on app start,
// filling in your own details
TranslatorConfiguration.setConfig('Provider_Type', 'Your_API_Key', 'Target_Language', 'Source_Language');

// For example: Google as the provider, translating into French
TranslatorConfiguration.setConfig(ProviderTypes.Google, 'xxxx', 'fr');

Use it as a component:

<PowerTranslator text={'Engineering physics or engineering science refers to the study of the combined disciplines of physics'} />

Add-on: use a Redux store or async storage to store all your strings on first app start, then use the translated text from the store or storage. This will save on your API bill, since you have a fixed set of strings.
For auto-translation, you can create one component that all the strings (text) in your app pass through, and use '@aws-sdk/client-translate' for the translation; it's very fast and also works on dynamic data.
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-translate/index.html
https://www.npmjs.com/package/@aws-sdk/client-translate
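A minimal sketch of such a helper, using the client's standard TranslateTextCommand (the region is a placeholder; credentials come from your own setup):

import { TranslateClient, TranslateTextCommand } from '@aws-sdk/client-translate';

const client = new TranslateClient({ region: 'us-east-1' }); // hypothetical region

// translate one string; 'auto' lets the service detect the source language
async function translate(text, targetLanguage) {
    const { TranslatedText } = await client.send(new TranslateTextCommand({
        Text: text,
        SourceLanguageCode: 'auto',
        TargetLanguageCode: targetLanguage,
    }));
    return TranslatedText;
}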

How to animate Google Maps API KML Ground Overlay

Two questions:
What is the best way to create a smooth animation through 12 KML files with the Google Maps API V2?
How can I integrate a fadeIn()/fadeOut() to smoothly transition between these KML files?
I have experimented with setTimeout() and 2 of my KML files, but haven't figured out a smooth or consistent way to animate between them. Code is below.
function animate() {
    function series_1() {
        geoXml = new GGeoXml("lake/colors_test.kml");
        map.addOverlay(geoXml);
        setTimeout(function () { map.removeOverlay(geoXml); }, 5000);
    }
    function series_2() {
        geoXml1 = new GGeoXml("lake/colors_test_1.kml");
        map.addOverlay(geoXml1);
        setTimeout(function () { map.removeOverlay(geoXml1); }, 5000);
    }
    series_1();
    series_2();
}
animate();
I think you'll need to apply fading to the underlying images:
$("#mapContent").find("img[src*=\"lyrs=kml\"]").fadeOut();
This is for the V3 API, you might need a different selector for the V2 API.
I'm assuming the map is created with
map = new google.maps.Map(document.getElementById("mapContent"));
jQuery also has a fadeIn() method; however, it gets tricky because the images are probably recreated when you add a new KML layer, so you'll need to find a way to set their visibility to zero when they are created.
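Putting those pieces together, a rotation loop might look roughly like this (untested; it reuses the img-selector guess above and the V2 GGeoXml API from the question):

var kmlFiles = ["lake/colors_test.kml", "lake/colors_test_1.kml"]; // extend to all 12 files
var current = 0;

function showNext() {
    var overlay = new GGeoXml(kmlFiles[current]);
    map.addOverlay(overlay);
    setTimeout(function () {
        // fade the overlay's tile images, then swap in the next KML file
        $("#mapContent").find("img[src*=\"lyrs=kml\"]").fadeOut(500);
        setTimeout(function () {
            map.removeOverlay(overlay);
            current = (current + 1) % kmlFiles.length;
            showNext();
        }, 500);
    }, 5000);
}
showNext();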