Import image with threejs in VueJS project

I followed the three.js documentation in a Vue.js project to import an image using:
texture.load( "./clouds" )
This code is not working; I have to import the image using require:
texture.load( require( "./clouds.png" ) )
Now I want to use callbacks for success, progress and error, so thanks to the internet I found this:
texture.load( require( "./clouds.png" ), this.onSuccess, this.onProgress, this.onError )
The problem is in the success function: I want to create a cube with the texture, but nothing happens. I also tried adding a color to the material in the success function, but that didn't work either.
onSuccess( image ) {
  this.material = new THREE.MeshLambertMaterial( {
    color: 0xf3ffe2,
    map: image
  } );
  this.generateCube()
}
generateCube() {
  let geometry = new THREE.BoxGeometry( 100, 100, 100 );
  this.forme = new THREE.Mesh( geometry, this.material );
  this.forme.position.z = -200
  this.forme.position.x = -100
  this.scene.add( this.forme );
},

Your problem is not related to VueJS / ThreeJS (again ^^); you should learn how this works inside a callback. Here is an ES6 fix:
texture.load( require( "./clouds.png" ), t => this.onSuccess(t), e => this.onProgress(e), e => this.onError(e) )
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions
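As a minimal sketch (assuming texture is a THREE.TextureLoader and the loading happens inside a Vue method; the loadClouds name is made up here), the arrow functions make sure this inside the callbacks still refers to the component instance:
methods: {
  loadClouds() {
    const texture = new THREE.TextureLoader();
    texture.load(
      require( "./clouds.png" ),
      // arrow functions capture the surrounding this, i.e. the Vue component
      t => this.onSuccess( t ),
      e => this.onProgress( e ),
      e => this.onError( e )
    );
  },
  onSuccess( image ) { /* build the material and cube here */ },
  onProgress( event ) {},
  onError( error ) {}
}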

You can also put your images in the "public" folder; files there are copied as-is by the Vue CLI and served from the web root, so you can load the texture directly with:
texture.load( "/clouds.png" )

Related

D3 linkHorizontal() update on mouse position

I’m trying to implement drag-and-drop functionality with a connection line. The connection line has a starting point ([x, y]), which is captured when the mouse is clicked, and a target point ([x, y]) which follows the mouse position and is continuously updated while dragging the element.
The project uses Vue.js with a Vuex store, and D3.js for the connection line (linkHorizontal method, https://bl.ocks.org/shivasj/b3fb218a556bc15e36ae3152d1c7ec25).
In the main component I have a div where the SVG is inserted:
<div id="svg_lines"></div>
In the main.js file I watch the current mouse position (targetPos), get the start position from the Vuex store (sourcePos) and pass both to connectTheDots(sourcePos, targetPos).
new Vue({
  router,
  store,
  render: (h) => h(App),
  async created() {
    window.document.addEventListener('dragover', (e) => {
      e = e || window.event;
      var dragX = e.pageX, dragY = e.pageY;
      store.commit('setConnMousePos', {"x": dragX, "y": dragY});
      let sourcePos = this.$store.getters.getConnStartPos;
      let targetPos = this.$store.getters.getConnMousePos;
      // draw the SVG line
      connectTheDots(sourcePos, targetPos);
    }, false)
  },
}).$mount('#app');
The connectTheDots() function receives sourcePos and targetPos as arguments and should update the target position of D3 linkHorizontal. Here is what I have:
function connectTheDots(sourcePos, targetPos) {
  const offset = 2;
  const shapeCoords = [{
    source: [sourcePos.y + offset, sourcePos.x + offset],
    target: [targetPos.y + offset, targetPos.x + offset],
  }];
  let svg = d3.select('#svg_lines')
    .append('svg')
    .attr('class', 'test_svgs');
  let link = d3.linkHorizontal()
    .source((d) => [d.source[1], d.source[0]])
    .target((d) => [d.target[1], d.target[0]]);
  function render() {
    let path = svg.selectAll('path').data(shapeCoords)
    path.attr('d', function (d) {
      return link(d) + 'Z'
    })
    path.enter().append('svg:path').attr('d', function (d) {
      return link(d) + 'Z'
    })
    path.exit().remove()
  }
  render();
}
I stole/modified the code from this post How to update an svg path with d3.js, but can’t get it to work properly.
Instead of updating the path, the function just keeps adding SVGs. See attached images:
(Screenshots: the web app draws multiple SVGs; the console shows multiple SVG elements being appended.)
What am I missing here?
@BKalra helped me solve this.
This line keeps appending new SVGs:
let svg = d3.select('#svg_lines')
  .append('svg')
  .attr('class', 'test_svgs');
So I removed it from the connectTheDots() function.
Here is my solution:
In the main component I added an SVG with a path:
<div id="svg_line_wrapper">
<svg class="svg_line_style">
<path
d="M574,520C574,520,574,520,574,520Z"
></path>
</svg>
</div>
In the connectTheDots() function I no longer need to append anything; I just grab the SVG and update its path:
function connectTheDots(sourcePos, targetPos) {
  const offset = 2;
  const data = [{
    source: [sourcePos.y + offset, sourcePos.x + offset],
    target: [targetPos.y + offset, targetPos.x + offset],
  }];
  let link = d3.linkHorizontal()
    .source((d) => [d.source[1], d.source[0]])
    .target((d) => [d.target[1], d.target[0]]);
  d3.select('#svg_line_wrapper')
    .selectAll('path')
    .data(data)
    .join('path')
    .attr('d', link)
    .classed('link', true)
}
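For what it's worth, .join('path') is essentially shorthand for the enter/update/exit dance the original code attempted; a rough equivalent with explicit selections (a sketch, assuming D3 v5-style selection methods) would be:
const paths = d3.select('#svg_line_wrapper')
  .selectAll('path')
  .data(data);

paths.enter()
  .append('path')        // create a path for each new datum
  .merge(paths)          // then update new and existing paths together
  .attr('d', link)
  .classed('link', true);

paths.exit().remove();   // drop paths whose datum is gone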

Forge Data Visualization not working on Revit rooms [ITA]

I followed the tutorials from the Forge Data Visualization extension documentation (https://forge.autodesk.com/en/docs/dataviz/v1/developers_guide/quickstart/) on a Revit file. I used the generateMasterViews option to translate the model and I can see the Rooms in the viewer; however, I have problems coloring the surfaces of the floors: it seems that the ModelStructureInfo has no rooms.
The result of constructing the ModelStructureInfo on viewer.model is:
t {model: d, rooms: null}
Here is my code; I added the Italian (ITA) localized version of "Rooms" as the 3rd parameter ("Locali"):
const dataVizExtn = await this.viewer.loadExtension("Autodesk.DataVisualization");

// Model Structure Info
let viewerDocument = this.viewer.model.getDocumentNode().getDocument();
const aecModelData = await viewerDocument.downloadAecModelData();
let levelsExt;
if (aecModelData) {
  levelsExt = await this.viewer.loadExtension("Autodesk.AEC.LevelsExtension", {
    doNotCreateUI: true
  });
}

// get FloorInfo
const floorData = levelsExt.floorSelector.floorData;
const floor = floorData[2];
levelsExt.floorSelector.selectFloor(floor.index, true);

const model = this.viewer.model;
const structureInfo = new Autodesk.DataVisualization.Core.ModelStructureInfo(model);
let levelRoomsMap = await structureInfo.getLevelRoomsMap();
let rooms = levelRoomsMap.getRoomsOnLevel("2 - P2", false);

// Generates `SurfaceShadingData` after assigning each device to a room (Rooms --> Locali).
const shadingData = await structureInfo.generateSurfaceShadingData(devices, undefined, "Locali");

// Use the resulting shading data to generate the heatmap from.
await dataVizExtn.setupSurfaceShading(model, shadingData, {
  type: "PlanarHeatmap",
  placePosition: "min",
  usingSlicing: true,
});

// Register a few color stops for sensor values in range [0.0, 1.0]
const sensorType = "Temperature";
const sensorColors = [0x0000ff, 0x00ff00, 0xffff00, 0xff0000];
dataVizExtn.registerSurfaceShadingColors(sensorType, sensorColors);

// Function that provides a [0, 1] value for the planar heatmap
function getSensorValue(surfaceShadingPoint, sensorType, pointData) {
  const { x, y } = pointData;
  const sensorValue = computeSensorValue(x, y);
  return clamp(sensorValue, 0.0, 1.0);
}

dataVizExtn.renderSurfaceShading(floor.name, sensorType, getSensorValue);
How can I solve this issue? Is there something else to do when using a different localization?
Here is a snapshot of what I get from the console:
Which viewer version are you using? There was an issue causing ModelStructureInfo not to produce the correct LevelRoomsMap, but it has been fixed now. Please use v7.43.0 and try again. Here is a snapshot of my test:
BTW, if you see t {model: d, rooms: null} while constructing the ModelStructureInfo, that's alright, since the room data is only produced after you call ModelStructureInfo#getLevelRoomsMap or ModelStructureInfo#getRoomList.
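In other words (a small sketch reusing the calls from the question), the rooms only show up after one of those methods has resolved:
const structureInfo = new Autodesk.DataVisualization.Core.ModelStructureInfo(this.viewer.model);
console.log(structureInfo.rooms);        // null right after construction - this is expected
const levelRoomsMap = await structureInfo.getLevelRoomsMap();
const rooms = levelRoomsMap.getRoomsOnLevel("2 - P2", false);
console.log(rooms);                      // populated on viewer v7.43.0+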

reading ID3 tag in NativeScript + Vue

I am trying to read the ID3 tags of MP3 files in my project, but it seems all the Node plugins have a dependency on fs,
since I get this error: TypeError: fs.exists is not a function
So how can I read ID3 tags in NativeScript?
{N} !== Node. You will have to fetch the metadata the native iOS / Android way. Use the nativescript-media-metadata-retriever plugin.
tns plugin add nativescript-media-metadata-retriever
You could try nativescript-nodeify, but I remember having problems with bundling afterwards.
Also, I used this back in NativeScript 4; I don't know if it still works in NS 6.
I'm sharing my usage of this module for the sake of newbies like me, who might waste their time over a simple misunderstanding :))
The imports:
import { MediaMetadataRetriever } from "nativescript-media-metadata-retriever";
const imageSourceModule = require( "tns-core-modules/image-source" );
const fs = require("tns-core-modules/file-system");
and the code:
// ------- init MediaMetadataRetriever
let mmr = new MediaMetadataRetriever();
mmr.setDataSource( newValue );
mmr.getEmbeddedPicture()
  .then( ( args ) => {
    // ------- show the albumArt on bgIMG
    var albumArt = this.$refs.bgIMG.nativeView;
    var img = new imageSourceModule.ImageSource();
    // ------- setNativeSource is a **method** called on an ImageSource instance
    img.setNativeSource( args );
    albumArt.set( "src", img );
    console.log("ImageSource set...");
    // ------- save the albumArt in the root of the SD card
    // ------- fromNativeSource is a **function** of imageSourceModule
    let imageSource = imageSourceModule.fromNativeSource( args );
    console.log("Here");
    let fileName = "test.png";
    const root = android.os.Environment.getExternalStorageDirectory().getAbsolutePath().toString();
    let path = fs.path.join( root, fileName );
    let saved = imageSource.saveToFile( path, "png" );
    if (saved) {
      console.log("Image saved successfully!");
    }
  } );

stats.js shows FPS 0~2, render movement too slow

I'm a beginner with three.js, using it for a BIM project.
When I load a glTF file of ~25 MB I can barely move the whole object, and the stats.js monitor shows an FPS of 0~2 at most.
gltf file : https://github.com/xeolabs/xeogl/tree/master/examples/models/gltf/schependomlaan
I'm using three.js with Vue.js.
// package.json
"stats.js": "^0.17.0",
"three": "^0.109.0",

import * as THREE from 'three';
import Stats from 'stats.js';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';

this.scene = new THREE.Scene();

this.stats = new Stats();
this.stats.showPanel( 0, 1, 2 ); // 0: fps, 1: ms, 2: mb, 3+: custom
let div = document.createElement('div')
div.appendChild(this.stats.dom)
div.style.position = 'absolute';
div.style.top = 0;
div.style.left = 0;
document.getElementsByClassName('gltfViewer')[0].appendChild( div );

// Camera
this.camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 1500 );
this.camera.position.set( this.pos, this.pos, this.pos );

// Renderer
this.raycaster = new THREE.Raycaster();
this.renderer = new THREE.WebGLRenderer({ canvas: document.getElementById('gltfViewerCanvas'), alpha: false });
this.renderer.setClearColor( 0xefefef );
this.renderer.setPixelRatio( window.devicePixelRatio );
this.renderer.setSize(window.innerWidth, window.innerHeight);

// Adding controls
this.controls = new OrbitControls( this.camera, this.renderer.domElement );
this.controls.dampingFactor = 0.1;
this.controls.rotateSpeed = 0.12;
this.controls.enableDamping = true;
this.controls.update();
window.addEventListener('resize', _ => this.render());
this.controls.addEventListener('change', _ => this.render());

// Light
var ambientLight = new THREE.AmbientLight( 0xcccccc );
this.scene.add( ambientLight );
var directionalLight = new THREE.DirectionalLight( 0xffffff );
directionalLight.position.set( 0, 1, 1 ).normalize();
this.scene.add( directionalLight );

// Loading the glTF file
// Instantiate a loader
this.gltfLoader = new GLTFLoader();
// Optional: Provide a DRACOLoader instance to decode compressed mesh data
this.dracoLoader = new DRACOLoader();
this.dracoLoader.setDecoderPath( 'three/examples/js/libs/draco' );
this.gltfLoader.setDRACOLoader( this.dracoLoader );
// Load a glTF resource
this.gltfLoader.load( this.src, this.onGLTFLoaded, this.onGLTFLoading, this.onGLTFLoadingError );

// onGLTFLoaded()
this.scene.add( optimizedGltf.scene );
// gltf.scene.getObjectById(404).visible = false;
this.listGltfObjects(gltf);
this.render();

// render()
this.renderer.render( this.scene, this.camera );
this.stats.update();

// on mounted component:
animate()

// animate()
this.stats.begin()
this.render();
this.stats.end();
Even after applying Draco compression using https://github.com/AnalyticalGraphicsInc/gltf-pipeline, nothing changes.
Thanks
On filesize —
Draco compression reduces network size, but not the final amount of uncompressed data that must be sent to your GPU and rendered. If your original mesh was 100 MB and you compress it to 25 MB, you will still get the framerate of the original 100 MB mesh. Aside: using the -b option of glTF-Pipeline will reduce the size by another 50%, to 13 MB, but again doesn't affect FPS.
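For reference, that would be something like the following glTF-Pipeline invocation (a sketch; the file names are placeholders and the flags are from gltf-pipeline 2.x as I recall them, so check --help):
gltf-pipeline -i model.gltf -o model.glb -b -d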
On framerate —
This model contains 4280 meshes [1], each requiring a GPU draw call. That is the source of your low FPS, and unfortunately it's a common problem in BIM models. You'll need to merge these meshes (in a program like Blender, or after loading in three.js; see the sketch below) to as few as possible. A model like this should require < 100 draw calls, or even as few as 1.
[1] To see this, try opening the model on https://gltf-viewer.donmccurdy.com/ and opening the JavaScript console. You should see a printout of the scene graph, which will contain many different meshes.
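As an illustration only (not a drop-in fix for this exact model), a rough merge-after-load sketch in three.js could look like this; it assumes the r109-style example imports from the question, that meshes sharing a material also have compatible geometry attributes, and the mergeByMaterial name is just made up here:
import * as THREE from 'three';
import { BufferGeometryUtils } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// Collapse meshes that share a material into a single Mesh each,
// so thousands of draw calls become a handful.
function mergeByMaterial( gltfScene ) {
  const geometriesByMaterial = new Map();
  gltfScene.updateMatrixWorld( true );
  gltfScene.traverse( ( node ) => {
    if ( !node.isMesh ) return;
    // Bake the node's world transform into a clone of its geometry.
    const geometry = node.geometry.clone();
    geometry.applyMatrix( node.matrixWorld ); // r109 API; newer releases use applyMatrix4
    const list = geometriesByMaterial.get( node.material ) || [];
    list.push( geometry );
    geometriesByMaterial.set( node.material, list );
  } );
  const merged = new THREE.Group();
  geometriesByMaterial.forEach( ( geometries, material ) => {
    const geometry = BufferGeometryUtils.mergeBufferGeometries( geometries, false );
    if ( geometry ) merged.add( new THREE.Mesh( geometry, material ) );
  } );
  return merged;
}

// Usage inside onGLTFLoaded(), instead of adding gltf.scene directly:
// this.scene.add( mergeByMaterial( gltf.scene ) );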

Is the geofencing module broken on Appcelerator? Is there an alternative?

I just upgraded to one of the premium memberships so that I could use the geofencing module. I'm having no luck at all, and there is no help from Appcelerator.
Here is my simple, simple code:
var geoFences = e.geo_fences;
var regionList = [ ];
var region = Geofence.createRegion( {
  center: {
    latitude: 26,
    longitude: -98
  },
  radius: 400,
  identifier: 'test'
} );
regionList.push( region );

// There are multiple events tested here, because the sample code
// in Appcelerator's docs says "enterregion", but the list
// of events for the object says "enterregions"
Geofence.addEventListener( "enterregions", function( e ) {
  Ti.API.info('enter regions fired');
});
Geofence.addEventListener( "enterregion", function( e ) {
  Ti.API.info('enter region fired');
});
Geofence.addEventListener( "exitregion", function( e ) {
  Ti.API.info('exit region fired');
});
Geofence.addEventListener( "exitregions", function( e ) {
  Ti.API.info('exit regions fired');
});
Geofence.addEventListener( "error", function( e ) {
  Ti.API.info('error fired');
});
Geofence.addEventListener( "monitorregions", function( e ) {
  Ti.API.info('monitor regions fired');
});

// Start monitoring for region entrances/exits:
Geofence.startMonitoringForRegions( regionList );
I open the app in the iOS Simulator, and I see:
[INFO] : monitor regions fired
So it's monitoring the region. But when I go to Debug > Location > Custom Location and set it to 25, -98 (out of the region), then to 26, -98 (in the center of the region), then back to 25, -98, none of the enter or exit events fire!
I also tried this with a more precise location on an iPhone 6+ with live cell and GPS enabled. I drove from 1 mile away to the exact location, and it did not fire any of the events (400 m is approx. 1/4 mile).
I can't test it on Android because there is a build error that I'm currently working on with Appcelerator in JIRA.
Is the module broken? Is there an alternative module?
I use the module myself, very successfully. So it does work.