Convert Feature Layer to an array of Polygons in ArcGIS JS?

Pretty straightforward: I am simply trying to get a feature layer
var floodLayer = new FeatureLayer("URL");
and convert it to a polygon array, similar to:
var polygons = [];
for (var i = 0; i < floodLayer.graphics.length; i++) {
    polygons[i] = new Polygon({ "rings": floodLayer.graphics[i].rings, "spatialReference": floodLayer.graphics[i].spatialReference });
}
However, feature layers don't appear to have the appropriate properties to create polygons, unless I am missing something?

rings and spatialReference are properties of geometry, which is itself a property of each graphic. So you need to use floodLayer.graphics[i].geometry.rings instead of floodLayer.graphics[i].rings, for example, and likewise for spatialReference.
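A minimal sketch of the corrected loop (assuming Polygon comes from esri/geometry/Polygon and the layer's graphics have already loaded):

var polygons = [];
for (var i = 0; i < floodLayer.graphics.length; i++) {
    var geom = floodLayer.graphics[i].geometry; // rings and spatialReference live on the geometry
    polygons[i] = new Polygon({
        "rings": geom.rings,
        "spatialReference": geom.spatialReference
    });
}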

Related

Alternative solution to dynamic require in React-Native

I have a vector of strings, and I want to use each element of that vector with require, like this:
for (var i = 0; i < happy_songs.melodies.length; ++i) {
    var track = '../resources/';
    track += happy_songs.melodies[i];
    console.log(track);
    var song = require(`../resources/${happy_songs.melodies[i]}`);
}
I have tried several syntax approaches, but none of them worked; there is always a syntax error at the require.
Is there an alternative to require which can do the job I want?
PS: I don't know if this matters, but those lines of code are used in an async function.
You can use an auxiliary variable, e.g. var songSource = [path1, path2, ...], and use the index as an iterator to pick entries dynamically. require in React Native's Metro bundler only accepts static string literals, so each path has to be written out once; the resulting array can then be indexed at runtime.
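A minimal sketch, assuming the melody files are known ahead of time (the file names below are placeholders):

// Metro resolves require paths at build time, so each one must be a static literal
const songSource = [
    require('../resources/melody1.mp3'),
    require('../resources/melody2.mp3'),
    require('../resources/melody3.mp3')
];

// later, e.g. inside the async function, pick entries dynamically by index
for (var i = 0; i < songSource.length; ++i) {
    var song = songSource[i];
    console.log(song);
}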

Best practice for removing Mesh objects in ThreeJS?

What are some common ways in ThreeJS to work with creating Meshes (consisting of Geometry and Materials) and then later deleting those objects?
My use case is to display lat/long points on a rotating 3D globe. Each point should be clickable and show further information. The set of points displayed should also be able to change based on data that is bound with Vue.
I currently use something like this, but am running into memory-leak issues:
var material = new THREE.MeshPhongMaterial({
    color: 0xdc143c,
});
var cone = new THREE.Mesh(
    new THREE.ConeBufferGeometry(radius, height, 8, 1, true),
    material
);
cone.position.y = height * 0.5;
cone.rotation.x = Math.PI;
var sphere = new THREE.Mesh(
    new THREE.SphereBufferGeometry(sphereRadius, 16, 8),
    material
);
export default class Marker extends THREE.Object3D {
    constructor() {
        super();
        this.name = "Marker";
        this.add(cone, sphere);
    }
    destroy() {
        this.remove(cone, sphere);
    }
}
Are there libraries on top of Three that make managing Meshes/Materials/Geometry simpler to work with?
What I do is create a function in some utils file, and any time I need to dispose of a mesh, I pass the mesh in as the function parameter:
const removeMesh = (meshToRemove) => {
    meshToRemove.visible = false;
    scene.remove(meshToRemove);
    meshToRemove.geometry.dispose(); // release the GPU-side geometry buffers
    meshToRemove.material.dispose(); // release the material and its shader program
    meshToRemove = undefined;        // note: this only clears the local parameter, not the caller's reference
};
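Applied to the Marker class above, a hedged sketch of a destroy() that also frees GPU resources (keeping the question's module-level cone, sphere, and material):

destroy() {
    this.remove(cone, sphere);
    cone.geometry.dispose();
    sphere.geometry.dispose();
    material.dispose(); // shared by cone and sphere, so dispose it only once,
                        // and only if no other Marker still uses it
}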

How to use THREE js TextureLoader() in React native?

I am trying to create a 3D model in React Native using Three.js. For that I have to use an image for the texture, but the TextureLoader() function relies on 'document' and 'canvas' object creation logic, which can't be used in React Native.
So, how can I use an image as a texture in React Native using Three.js?
Short summary: the TextureLoader didn't work because it requires an ImageLoader, which requires the inaccessible DOM.
Apparently there's a THREE.Texture constructor, which I used like this:
let texture = new THREE.Texture(
    url // url = a base64 JPEG string in this case
);
var img = new Image(128, 128);
img.src = url;
texture.normal = img;
As you can see, I used an Image constructor; this one comes from React Native and is imported like this:
import { Image } from 'react-native';
The React Native documentation explains how it can be used; it supports base64-encoded JPEGs.
and finally map the texture to the material:
material.map = texture;
Note: I haven't tried to show the end result in my WebGL view yet, but in the element inspector it seems to show proper THREE.js objects.
(Note: this is not the answer)
EDIT: made the code a bit shorter and used DOMParser; it still doesn't work because:
"image.addEventListener is not a function"
This implies the image object created in this method can't be used by THREE.js...
var DOMParser = require('react-native-html-parser').DOMParser;
var myDomParser = new DOMParser();

window.document = {};
window.document.createElementNS = (x, y) => {
    if (y === "img") {
        let doc = myDomParser.parseFromString("<html><head></head><body><img class='image' src=''></img></body></html>");
        return doc.getElementsByAttribute("class", "image")[0];
    }
};
Edit: be careful with this, because it also overrides the other createElementNS variants, in which they create canvas or other elements.

Updating texture and geometry in three.js

I'm trying to figure out whether there is a way of getting a callback once a texture has been updated. I'm swapping the geometry in the scene but sharing the same changing canvas-based texture underneath, and I'm noticing random corruption that eventually disappears as the program runs. I think this is a result of thrashing that causes the geometry changes to become unsynchronized with the texture updates.
So I realize the established mechanism is to set needsUpdate = true on the texture and material, but can I know when that update has been applied, so that I can add the new geometry to the scene? I think I am adding the geometry before the update actually reaches the GPU, so random frames use the old texture. Is this possible, or does the texture update definitely get processed before the scene is drawn?
If you can point me in the right direction it would be much appreciated.
Additionally, I've added an update event listener to the material, which gets called frequently, but I still end up with the same issue: the texture update happens after the geometry has been updated.
Here is an example of what I mean, with some code removed. The general idea is to share the same texture/material among many generated objects: update the canvas, then add the new object into the scene. This mostly works under Chrome, but occasionally glitches, where a frame has the previous texture applied.
var material, texture, texture_canvas, context;
var frames = [];
var total_frames = 24;
var last_frame = undefined;
var current_frame = 0;
var scene;

function init() {
    // initialize scene, etc.
    scene = new THREE.Scene();
    // Set up renderer etc., omitted
    texture_canvas = document.createElement('canvas');
    texture_canvas.height = 1024;
    texture_canvas.width = 1024;
    context = texture_canvas.getContext('2d');
    context.fillStyle = '#000000';
    context.fillRect(0, 0, texture_canvas.width, texture_canvas.height);
    texture = new THREE.Texture(texture_canvas);
    texture.minFilter = THREE.LinearFilter;
    texture.magFilter = THREE.LinearFilter;
    texture.generateMipmaps = false;
    material = new THREE.MeshLambertMaterial({ map: texture });
    for (var i = 0; i < total_frames; i++) {
        var mesh = new THREE.Mesh(generate_geometry(), material);
        frames.push(mesh);
    }
}

function animate() {
    if (last_frame !== undefined) { // last_frame can legitimately be 0
        scene.remove(frames[last_frame % total_frames]);
    }
    // Update / generate new canvas
    context.drawImage(generate_image(), 0, 0);
    texture.needsUpdate = true;
    material.needsUpdate = true;
    frames[current_frame % total_frames].geometry.uvsNeedUpdate = true; // uvsNeedUpdate lives on the geometry, not the mesh
    scene.add(frames[current_frame % total_frames]);
    last_frame = current_frame;
    current_frame++;
    requestAnimationFrame(animate);
}
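Worth noting: THREE.Texture exposes an onUpdate callback that the renderer invokes after it uploads the texture to the GPU, which looks like a natural hook for deferring the geometry swap. A minimal sketch against the setup above:

texture.onUpdate = function () {
    // invoked by the renderer once the canvas has been uploaded,
    // so this is a safe point to add the next frame's mesh
    scene.add(frames[current_frame % total_frames]);
};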

Have Google Maps center around geo-locations and zoom in appropriately

I would like to pass an arbitrary number of geo-locations to the Google Maps API and have it center around these locations and set the appropriate zoom level so that all locations are visible on the map, i.e. show all the markers that are currently on the map.
Is this possible with what the Google Maps API offers by default, or do I need to resort to building this myself?
I used fitBounds (API V3) for each point:
Declare the variable:
var bounds = new google.maps.LatLngBounds();
Go through each marker with a for loop:
for (var i = 0; i < markers.length; i++) {
    var latlng = new google.maps.LatLng(markers[i].lat, markers[i].lng);
    bounds.extend(latlng);
}
Finally, call:
map.fitBounds(bounds);
There are a number of ways of doing this but it is pretty easy if you are using the V2 API. See this post for an example, this group post or this post.
For V3 there's zoomToMarkers
Gray's idea is great, but it does not work as-is. I had to work out a hack for a zoomToMarkers in API V3:
function zoomToMarkers(map, markers) {
    if (markers[0]) { // make sure at least one marker is there
        // Get the LatLng of the first marker
        var tempmark = markers[0].getPosition();
        // LatLngBounds needs two LatLng objects to be constructed
        var bounds = new google.maps.LatLngBounds(tempmark, tempmark);
        // loop through all markers and extend the LatLngBounds object
        for (var i = 0; i < markers.length; i++) {
            bounds.extend(markers[i].getPosition());
        }
        // Set the map viewport
        map.fitBounds(bounds);
    }
}
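A short usage sketch, assuming markers is an array of google.maps.Marker objects already placed on the map (the locations array is a hypothetical input of {lat, lng} literals):

var markers = locations.map(function (loc) {
    return new google.maps.Marker({ position: loc, map: map });
});
zoomToMarkers(map, markers);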