How to zoom to bounds of arcs with deck.gl? - deck.gl

I've created some arcs using deck.gl. When you click on different points/polygons, different arcs appear between countries. When doing this, I want the map to zoom to the bounds of those arcs.
For clarity, here is an example: when clicking on Glasgow, I'd want to zoom to the arc shown, as tightly as possible.
It appears that with WebMercatorViewport, you can call fitBounds
(see: https://deck.gl/docs/api-reference/core/web-mercator-viewport#webmercatorviewport)
It's not clear to me how this gets used, though. I've tried to find examples, but have come up short. How can I add this to what I have?
Here is the code for the arcs:
fetch('countries.json')
  .then(res => res.json())
  .then(data => {
    console.log('data', data);
    const inFlowColors = [
      [0, 55, 255]
    ];
    const outFlowColors = [
      [255, 200, 0]
    ];
    const countyLayer = new deck.GeoJsonLayer({
      id: 'geojson',
      data: data,
      stroked: true,
      filled: true,
      autoHighlight: true,
      lineWidthScale: 20,
      lineWidthMinPixels: 1,
      pointRadiusMinPixels: 10,
      opacity: 0.5,
      getFillColor: () => [0, 0, 0],
      getLineColor: () => [0, 0, 0],
      getLineWidth: 1,
      onClick: info => updateLayers(info.object),
      pickable: true
    });
    const deckgl = new deck.DeckGL({
      mapboxApiAccessToken: 'pk.eyJ1IjoidWJlcmRhdGEiLCJhIjoiY2pudzRtaWloMDAzcTN2bzN1aXdxZHB5bSJ9.2bkj3IiRC8wj3jLThvDGdA',
      mapStyle: 'mapbox://styles/mapbox/light-v9',
      initialViewState: {
        longitude: -19.903283,
        latitude: 36.371449,
        zoom: 1.5,
        maxZoom: 15,
        pitch: 0,
        bearing: 0
      },
      controller: true,
      layers: []
    });
    updateLayers(
      data.features.find(f => f.properties.name == 'United States')
    );
    function updateLayers(selectedFeature) {
      const {exports, centroid, top_exports, export_value} = selectedFeature.properties;
      const arcs = Object.keys(exports).map(toId => {
        const f = data.features[toId];
        return {
          source: centroid,
          target: f.properties.centroid,
          value: exports[toId],
          top_exports: top_exports[toId],
          export_value: export_value[toId]
        };
      });
      arcs.forEach(a => {
        a.vol = a.value;
      });
      const arcLayer = new deck.ArcLayer({
        id: 'arc',
        data: arcs,
        getSourcePosition: d => d.source,
        getTargetPosition: d => d.target,
        getSourceColor: d => [0, 55, 255],
        getTargetColor: d => [255, 200, 0],
        getHeight: 0,
        getWidth: d => d.vol
      });
      deckgl.setProps({
        layers: [countyLayer, arcLayer]
      });
    }
  });
Here it is as a Plunker:
https://plnkr.co/edit/4L7HUYuQFM19m9rI

I'll try to keep it simple, starting from a React implementation that you can then translate into vanilla JS.
In React I would do something like this.
Import LinearInterpolator and WebMercatorViewport from react-map-gl:
import {LinearInterpolator, WebMercatorViewport} from 'react-map-gl';
Then I define a piece of state for the viewport:
const [viewport, setViewport] = useState({
  latitude: 37.7577,
  longitude: -122.4376,
  zoom: 11,
  bearing: 0,
  pitch: 0
});
Then I will define a layer to show:
const layerGeoJson = new GeoJsonLayer({
  id: 'geojson',
  data: someData,
  ...
  pickable: true,
  onClick: onClickGeoJson,
});
Now we need to define onClickGeoJson:
const onClickGeoJson = useCallback((event) => {
  const feature = event.features[0];
  const [minLng, minLat, maxLng, maxLat] = bbox(feature); // Turf.js
  const viewportWebMercator = new WebMercatorViewport(viewport);
  const {longitude, latitude, zoom} = viewportWebMercator.fitBounds(
    [[minLng, minLat], [maxLng, maxLat]],
    {padding: 20}
  );
  setViewport({
    ...viewport,
    longitude,
    latitude,
    zoom,
    transitionInterpolator: new LinearInterpolator({
      around: [event.offsetCenter.x, event.offsetCenter.y]
    }),
    transitionDuration: 1500
  });
}, [viewport]);
First issue: this way we are fitting to the point or polygon that was clicked, but what you want is to fit the arcs. I think the way to overcome this is to give each clickable feature a reference to the bounds of its arcs. You can precompute the bounds for each feature and store them inside your GeoJSON (on the elements being clicked), or you can store just a reference in feature.properties that points to another object holding the bounds (you can also compute them on the fly; see the sketch after the snippets below).
const dataWithComputedBounds = {
  'firstPoint': bounds_arc_computed,
  'secondPoint': bounds_arc_computed,
  ...
};
where bounds_arc_computed needs to be an object:
bounds_arc_computed = {
  minLng, minLat, maxLng, maxLat,
};
Then, in the onClick function, just look up the reference:
const {minLng, minLat, maxLng, maxLat} = dataWithComputedBounds[event.features[0].properties.reference];
const viewportWebMercator = new WebMercatorViewport(viewport);
...
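If you'd rather compute them on the fly, here is a minimal sketch of a helper that derives that bounds object from an arcs array like the one you already build in updateLayers (the helper name is mine, and it assumes every arc stores [lng, lat] pairs in source and target):
// Hypothetical helper: derive {minLng, minLat, maxLng, maxLat} from the arcs array
function getArcBounds(arcs) {
  // Collect every longitude and latitude touched by an arc endpoint
  const lngs = arcs.flatMap(a => [a.source[0], a.target[0]]);
  const lats = arcs.flatMap(a => [a.source[1], a.target[1]]);
  return {
    minLng: Math.min(...lngs),
    minLat: Math.min(...lats),
    maxLng: Math.max(...lngs),
    maxLat: Math.max(...lats)
  };
}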
Now just define our main element:
return (
  <DeckGL
    layers={[layerGeoJson]}
    initialViewState={INITIAL_VIEW_STATE}
    controller
  >
    <StaticMap
      reuseMaps
      mapStyle={mapStyle}
      preventStyleDiffing
      mapboxApiAccessToken={YOUR_TOKEN}
    />
  </DeckGL>
);
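One detail to watch: because the viewport lives in React state here, the DeckGL component usually takes a controlled viewState rather than initialViewState, otherwise setViewport won't actually move the camera. A minimal sketch of that wiring, using the same viewport state as above:
return (
  <DeckGL
    layers={[layerGeoJson]}
    viewState={viewport}
    // Keep the state in sync when the user pans/zooms with the controller
    onViewStateChange={({viewState}) => setViewport(viewState)}
    controller
  >
    <StaticMap
      reuseMaps
      mapStyle={mapStyle}
      preventStyleDiffing
      mapboxApiAccessToken={YOUR_TOKEN}
    />
  </DeckGL>
);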
At this point we are pretty close to the example you already linked (https://codepen.io/vis-gl/pen/pKvrGP), but in your vanilla setup you need to call deckgl.setProps() in the onClick handler instead of setViewport to change the viewport.
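For reference, a rough, untested sketch of how that could look dropped straight into your existing updateLayers; it assumes the scripting bundle exposes deck.WebMercatorViewport and deck.FlyToInterpolator, and that passing a fresh initialViewState object through setProps resets the camera:
// Inside updateLayers, after building the arcs array:
const lngs = arcs.flatMap(a => [a.source[0], a.target[0]]);
const lats = arcs.flatMap(a => [a.source[1], a.target[1]]);
// Viewport sized to the canvas, used only for the fitBounds math
const viewport = new deck.WebMercatorViewport({
  width: window.innerWidth,
  height: window.innerHeight
});
const {longitude, latitude, zoom} = viewport.fitBounds(
  [[Math.min(...lngs), Math.min(...lats)], [Math.max(...lngs), Math.max(...lats)]],
  {padding: 60}
);
deckgl.setProps({
  layers: [countyLayer, arcLayer],
  initialViewState: {
    longitude,
    latitude,
    zoom,
    pitch: 0,
    bearing: 0,
    transitionDuration: 1500,
    transitionInterpolator: new deck.FlyToInterpolator()
  }
});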
Does it make sense to you?

Related

Add more features to the feature layer

Here is a piece of code to add a polygon, its 'name' (PERMIT_AREA) and description to the feature layer. My problem is that the drawn polygon is not being applied to the layer. How can we add more details such as name and description to the layer? Can you please correct my code?
view.when(() => {
const polygonSymbol = {
type: "simple-fill", // autocasts as new SimpleFillSymbol()
color: [207, 34, 171, 0.5],
outline: {
// autocasts as new SimpleLineSymbol()
color: [247, 34, 101, 0.9],
}
};
const sketchViewModel = new SketchViewModel({
view: view,
layer: graphicsLayer,
polygonSymbol: polygonSymbol,
});
sketchViewModel.create("polygon", { mode: "hybrid" });
// Once user is done drawing a rectangle on the map
// use the rectangle to select features on the map and table
sketchViewModel.on("create", async (event) => {
if (event.state === "complete") {
// this polygon will be used to query features that intersect it
const geometries = graphicsLayer.graphics.map(function (graphic) {
return graphic.geometry
});
const queryGeometry = await geometryEngineAsync.union(geometries.toArray());
const query = {
geometry: queryGeometry,
outFields: ["*"],
attributes: {
PERMIT_AREA:'sample PERMIT_AREA'
}
};
console.log(query);
lyrpermitAreaUrl.queryFeatures(query).then(function (results) {
var lyr = results.features;
selectFeatures(lyr);
});
}
});
});
// This function is called when user completes drawing a rectangle
// on the map. Use the rectangle to select features in the layer and table
function selectFeatures(geometryabc) {
console.log(geometryabc);
// create a query and set its geometry parameter to the
// rectangle that was drawn on the view
lyrpermitAreaUrl.applyEdits({ addFeatures: [geometryabc] }).then((editsResult) => {
console.log(editsResult);
console.log(editsResult.addFeatureResults[0].objectId);
});
}
I have edited the code as shown below. Now I can apply the user-entered polygon to the feature layer.
require(["esri/config",
"esri/Map",
"esri/views/MapView",
"esri/layers/FeatureLayer",
"esri/layers/TileLayer",
"esri/layers/VectorTileLayer",
"esri/layers/GraphicsLayer",
"esri/widgets/Search",
"esri/widgets/Sketch/SketchViewModel",
"esri/geometry/geometryEngineAsync",
"esri/Graphic",
],
function (esriConfig, Map, MapView, FeatureLayer, TileLayer, VectorTileLayer, GraphicsLayer, Search, SketchViewModel, geometryEngineAsync, Graphic) {
esriConfig.apiKey = "AAPK3f43082c24ae493196786c8b424e9f43HJcMvP1NYaqIN4p63qJnCswIPsyHq8TQHlNtMRLWokqJIWYIJjga9wIEzpy49c9v";
const graphicsLayer = new GraphicsLayer();
const streetmapTMLayer = new TileLayer({
url: streetmapURL
});
const streetmapLTMLayer = new VectorTileLayer({
url: streetmapLebelsURL
});
const lyrwholeMeters = new FeatureLayer({
url: MetersWholeURL,
outFields: ["*"],
});
const lyrMeters = new FeatureLayer({
url: MetersURL,
outFields: ["*"],
});
const lyrpermitAreaUrl = new FeatureLayer({
url: PermitAreaURL,
outFields: ["*"],
});
console.log(lyrMeters);
const map = new Map({
basemap: "arcgis-topographic", // Basemap layer service
layers: [streetmapTMLayer, streetmapLTMLayer, lyrMeters, lyrwholeMeters, graphicsLayer]
});
const view = new MapView({
map: map,
center: [-95.9406, 41.26],
// center: [-118.80500, 34.02700], //Longitude, latitude
zoom: 16,
maxZoom: 21,
minZoom: 13,
container: "viewDiv", // Div element
});
view.when(() => {
const polygonSymbol = {
type: "simple-fill", // autocasts as new SimpleFillSymbol()
color: [207, 34, 171, 0.5],
outline: {
// autocasts as new SimpleLineSymbol()
color: [247, 34, 101, 0.9],
}
};
const sketchViewModel = new SketchViewModel({
view: view,
layer: graphicsLayer,
polygonSymbol: polygonSymbol,
});
sketchViewModel.create("polygon", { mode: "hybrid" });
// Once user is done drawing a rectangle on the map
// use the rectangle to select features on the map and table
sketchViewModel.on("create", async (event) => {
if (event.state === "complete") {
// this polygon will be used to query features that intersect it
const geometries = graphicsLayer.graphics.map(function (graphic) {
return graphic.geometry
});
const queryGeometry = await geometryEngineAsync.union(geometries.toArray());
var rawrings = queryGeometry.rings;
var rings = [];
console.log('rings here');
for (var i = 0; i < rawrings.length; i++) {
rings.push(rawrings[i]);
// graphics.push(graphic);
console.log(rawrings[i]);
}
console.log(rings[0]);
// Create a polygon geometry
const polygon = {
type: "polygon",
spatialReference: {
wkid: 102704, //102704,
},
rings: [rings[0]]
};
const simpleFillSymbol = {
type: "simple-fill",
color: [227, 139, 79, 0.8], // Orange, opacity 80%
outline: {
color: [255, 255, 255],
width: 1
}
};
const polygonGraphic = new Graphic({
geometry: polygon,
symbol: simpleFillSymbol,
});
graphicsLayer.add(polygonGraphic);
// const addEdits = {
// addFeatures: polygonGraphic
// };
console.log('here');
// console.log(addEdits);
// apply the edits to the layer
selectFeatures(polygonGraphic);
}
});
});
// This function is called when user completes drawing a rectangle
// on the map. Use the rectangle to select features in the layer and table
function selectFeatures(geometryabc) {
console.log(geometryabc);
// create a query and set its geometry parameter to the
// rectangle that was drawn on the view
lyrpermitAreaUrl.applyEdits({ addFeatures: [geometryabc] }).then((editsResult) => {
console.log(editsResult);
console.log(editsResult.addFeatureResults[0].objectId);
});
}
// search widget
const searchWidget = new Search({
view: view,
});
view.ui.add(searchWidget, {
position: "top-left",
index: 2
});
});
I have added the attributes as below:
var attr = {"NETSUITE_USERID":"test user","PERMIT_AREA":"test permit area","COMMENTS":"test comments"};
const polygonGraphic = new Graphic({
geometry: polygon,
symbol: simpleFillSymbol,
attributes: attr
});
If I understand correctly, you need a new Graphic with the sketched geometry and the attributes you query. Something like this might work:
sketchViewModel.on("create", async (event) => {
if (event.state === "complete") {
// remove the graphic from the layer. Sketch adds
// the completed graphic to the layer by default.
graphicsLayer.remove(event.graphic);
const newGeometry = event.graphic.geometry;
const query = {
geometry: newGeometry,
outFields: ["NAME", "DESCRIPTION"],
returnGeometry: false
};
lyrpermitAreaUrl.queryFeatures(query).then(function (results) {
if (!results.features) {
return;
}
const data = results.features[0].attributes;
const newGraphic = new Graphic({
geometry: newGeometry,
attributes: {
"NAME": data["NAME"],
"DESCRIPTION": data["DESCRIPTION"]
}
});
lyrpermitAreaUrl.applyEdits({ addFeatures: [newGraphic] }).then((editsResult) => {
console.log(editsResult);
});
});
}
});
NAME and DESCRIPTION are sample attribute names; you should use the ones you need.
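As a side note, when a feature still does not show up on the layer, it can help to inspect the applyEdits result rather than only logging it; a minimal sketch, assuming the standard 4.x FeatureEditResult shape (objectId, plus an error field when the add fails):
lyrpermitAreaUrl.applyEdits({ addFeatures: [newGraphic] }).then((editsResult) => {
  // Each entry in addFeatureResults corresponds to one added feature
  const result = editsResult.addFeatureResults[0];
  if (result.error) {
    console.error("Add failed:", result.error);
  } else {
    console.log("Added feature with objectId", result.objectId);
  }
});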

How can I get onHover to work for deck.gl MVTLayer?

The deck.gl MVTLayer inherits from Layer, which has onHover enabled and, together with pickable, should give interactivity. I am trying to get interactivity working so I can show a popup with the data I hover over. But in the code below I can get the onClick event to fire, not the onHover event. What am I doing wrong?
Thanks :)
import React from "react";
import DeckGL from "@deck.gl/react";
import { MVTLayer } from "@deck.gl/geo-layers";
const INITIAL_VIEW_STATE = {
longitude: -122.41669,
latitude: 37.7853,
zoom: 13,
pitch: 0,
bearing: 0
};
const layer = new MVTLayer({
id: "MVTLayer",
data: [
"https://tiles-a.basemaps.cartocdn.com/vectortiles/carto.streets/v1/{z}/{x}/{y}.mvt"
],
stroked: false,
getLineColor: [255, 0, 0],
getFillColor: (f) => {
switch (f.properties.layerName) {
case "poi":
return [0, 0, 0];
case "water":
return [120, 150, 180];
case "building":
return [255, 0, 0];
default:
return [240, 240, 240];
}
},
getPointRadius: 2,
pointRadiusUnits: "pixels",
getLineWidth: (f) => {
switch (f.properties.class) {
case "street":
return 6;
case "motorway":
return 10;
default:
return 1;
}
},
maxZoom: 14,
minZoom: 0,
onHover: (info) => console.log("Hover", info.object),
onClick: (info) => console.log("Click", info.object),
pickable: true
});
export default function App() {
return (
<DeckGL
initialViewState={INITIAL_VIEW_STATE}
controller={true}
layers={[layer]}
></DeckGL>
);
}
It works now ... I think I had a dodgy Chrome instance running.
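For the popup itself, once onHover fires you can also let deck.gl render a basic tooltip through the DeckGL getTooltip prop; a minimal sketch reusing the same properties the layer already reads (layerName and class):
<DeckGL
  initialViewState={INITIAL_VIEW_STATE}
  controller={true}
  layers={[layer]}
  // Built-in tooltip: return a falsy value to hide it, or {text}/{html} to show it
  getTooltip={({object}) =>
    object && {
      text: `layer: ${object.properties.layerName}\nclass: ${object.properties.class}`
    }
  }
/>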

How to use diffClamp in reanimated 2?

I am trying to hide and show the header based on the scroll event from reanimated 2's useAnimatedScrollHandler. I need diffClamp so that whenever the user scrolls up, the header reappears quickly rather than only after the whole contentOffset.y value has come back down. The problem is that diffClamp is, I think, from reanimated v1, while in reanimated v2 I need the useAnimatedStyle hook to animate styles, and in the end it gives an error.
Can someone please help with it?
const clamp = (value, lowerBound, upperBound) => {
"worklet";
return Math.min(Math.max(lowerBound, value), upperBound);
};
const scrollClamp = useSharedValue(0);
const scrollHandler = useAnimatedScrollHandler({
onScroll: (event, ctx) => {
const diff = event.contentOffset.y - ctx.prevY;
scrollClamp.value = clamp(scrollClamp.value + diff, 0, 200);
},
onBeginDrag: (event, ctx) => {
ctx.prevY = event.contentOffset.y;
}
});
const RStyle = useAnimatedStyle(() => {
const interpolateY = interpolate(
scrollClamp.value,
[0, 200],
[0, -200],
Extrapolate.CLAMP
)
return {
transform: [
{ translateY: interpolateY }
]
}
})
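For context, here is a minimal structural sketch of how the clamp worklet and animated style above are typically wired to a screen; the 200 px header height and the component layout are assumptions, and note that ctx.prevY also needs to be refreshed on every scroll event so that diff stays a per-event delta (which is what diffClamp provided in v1):
// Hypothetical component pulling the pieces above together
import React from "react";
import Animated from "react-native-reanimated";

const CollapsibleHeaderScreen = () => {
  // ...useSharedValue, useAnimatedScrollHandler and useAnimatedStyle from the
  // snippet above go here (and set ctx.prevY = event.contentOffset.y in onScroll)...
  return (
    <>
      <Animated.View style={[{ height: 200 }, RStyle]}>
        {/* header content */}
      </Animated.View>
      <Animated.ScrollView onScroll={scrollHandler} scrollEventThrottle={16}>
        {/* scrollable content */}
      </Animated.ScrollView>
    </>
  );
};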

jsPlumb + Panzoom infinite droppable canvas

I have created a codepen that uses jQuery UI droppable (for drag/drop), jsPlumb (for flowcharting) and Panzoom (panning and zooming) to create a flowchart builder. You can drag the list items from the draggable container (1st column) to the flowchart (2nd column) and then connect the items using the dots to create a flowchart. The #flowchart is a Panzoom target with both pan and zoom enabled. This all works fine.
However, I would like to have the #flowchart div always span the whole area of the flowchart-wrapper i.e. the #flowchart should be an infinite canvas that supports panning, zooming and is a droppable container.
It should have the same effect as flowchart-builder-demo. The canvas there is infinite where you can drag and drop items (Questions, Actions, Outputs) from the right column.
Any pointers on how to achieve this (like the relevant events or multiple panzoom elements and/or css changes) would be greatly appreciated.
const BG_SRC_TGT = "#2C7BE5";
const HEX_SRC_ENDPOINT = BG_SRC_TGT;
const HEX_TGT_ENDPOINT = BG_SRC_TGT;
const HEX_ENDPOINT_HOVER = "#fd7e14";
const HEX_CONNECTOR = "#39afd1";
const HEX_CONNECTOR_HOVER = "#fd7e14";
const connectorPaintStyle = {
strokeWidth: 2,
stroke: HEX_CONNECTOR,
joinstyle: "round",
outlineStroke: "white",
outlineWidth: 1
},
connectorHoverStyle = {
strokeWidth: 3,
stroke: HEX_CONNECTOR_HOVER,
outlineWidth: 2,
outlineStroke: "white"
},
endpointHoverStyle = {
fill: HEX_ENDPOINT_HOVER,
stroke: HEX_ENDPOINT_HOVER
},
sourceEndpoint = {
endpoint: "Dot",
paintStyle: {
stroke: HEX_SRC_ENDPOINT,
fill: "transparent",
radius: 4,
strokeWidth: 3
},
isSource: true,
connector: ["Flowchart", { stub: [40, 60], gap: 8, cornerRadius: 5, alwaysRespectStubs: true }],
connectorStyle: connectorPaintStyle,
hoverPaintStyle: endpointHoverStyle,
connectorHoverStyle: connectorHoverStyle,
dragOptions: {},
overlays: [
["Label", {
location: [0.5, 1.5],
label: "Drag",
cssClass: "endpointSourceLabel",
visible: false
}]
]
},
targetEndpoint = {
endpoint: "Dot",
paintStyle: {
fill: HEX_TGT_ENDPOINT,
radius: 5
},
hoverPaintStyle: endpointHoverStyle,
maxConnections: -1,
dropOptions: { hoverClass: "hover", activeClass: "active" },
isTarget: true,
overlays: [
["Label", { location: [0.5, -0.5], label: "Drop", cssClass: "endpointTargetLabel", visible: false }]
]
};
const getUniqueId = () => Math.random().toString(36).substring(2, 8);
// Setup jquery ui draggable, droppable
$("li.list-group-item").draggable({
helper: "clone",
zIndex: 100,
scroll: false,
start: function (event, ui) {
var width = event.target.getBoundingClientRect().width;
$(ui.helper).css({
'width': Math.ceil(width)
});
}
});
$('#flowchart').droppable({
hoverClass: "drop-hover",
tolerance: "pointer",
drop: function (event, ui) {
var helper = $(ui.helper);
var fieldId = getUniqueId();
var offset = $(this).offset(),
x = event.pageX - offset.left,
y = event.pageY - offset.top;
helper.find('div.field').clone(false)
.animate({ 'min-height': '40px', width: '180px' })
.css({ position: 'absolute', left: x, top: y })
.attr('id', fieldId)
.appendTo($(this)).fadeIn('fast', function () {
var field = $("#" + fieldId);
jsPlumbInstance.draggable(field, {
containment: "parent",
scroll: true,
grid: [5, 5],
stop: function (event, ui) {
}
});
field.addClass('panzoom-exclude');
var bottomEndpoints = ["BottomCenter"];
var topEndPoints = ["TopCenter"];
addEndpoints(fieldId, bottomEndpoints, topEndPoints);
jsPlumbInstance.revalidate(fieldId);
});
}
});
const addEndpoints = (toId, sourceAnchors, targetAnchors) => {
for (var i = 0; i < sourceAnchors.length; i++) {
var sourceUUID = toId + sourceAnchors[i];
jsPlumbInstance.addEndpoint(toId, sourceEndpoint, { anchor: sourceAnchors[i], uuid: sourceUUID });
}
for (var j = 0; j < targetAnchors.length; j++) {
var targetUUID = toId + targetAnchors[j];
jsPlumbInstance.addEndpoint(toId, targetEndpoint, { anchor: targetAnchors[j], uuid: targetUUID });
}
$('.jtk-endpoint').addClass('panzoom-exclude');
}
// Setup jsPlumbInstance
var jsPlumbInstance = jsPlumb.getInstance({
DragOptions: { cursor: 'pointer', zIndex: 12000 },
ConnectionOverlays: [
["Arrow", { location: 1 }],
["Label", {
location: 0.1,
id: "label",
cssClass: "aLabel"
}]
],
Container: 'flowchart'
});
// Setup Panzoom
const elem = document.getElementById('flowchart');
const panzoom = Panzoom(elem, {
excludeClass: 'panzoom-exclude',
canvas: true
});
const parent = elem.parentElement;
parent.addEventListener('wheel', panzoom.zoomWithWheel);
I've just been working on the exact same issue and came across this as the only answer: Implementing pan and zoom in jsPlumb.
The Panzoom used there looks to be quite old, but the idea is the same: use the jQuery UI draggable plugin for the movable elements instead of the built-in jsPlumb one. This allows the elements to move out of bounds.
The draggable function below fixed it for me using the Panzoom library.
var that = this;
var currentScale = 1;
var element = $('.element');
element.draggable({
start: function (e) {
//we need current scale factor to adjust coordinates of dragging element
currentScale = that.panzoom.getScale();
$(this).css("cursor", "move");
that.panzoom.setOptions({ disablePan: true });
},
drag: function (e, ui) {
ui.position.left = ui.position.left / currentScale;
ui.position.top = ui.position.top / currentScale;
if ($(this).hasClass("jtk-connected")) {
that.jsPlumbInstance.repaintEverything();
}
},
stop: function (e, ui) {
var nodeId = $(this).attr('id');
that.jsPlumbInstance.repaintEverything();
$(this).css("cursor", "");
that.panzoom.setOptions({ disablePan: false });
}
});
I'm not sure redrawing everything on drag is that efficient, so maybe just redraw the element being dragged and its connections, as in the sketch below.
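For example, a variant of the drag handler that only revalidates the node being dragged (revalidate is the same call the droppable handler above already uses); whether this is noticeably faster will depend on how many connections are on screen:
drag: function (e, ui) {
  ui.position.left = ui.position.left / currentScale;
  ui.position.top = ui.position.top / currentScale;
  // Recompute endpoint positions and repaint only the connections
  // attached to the element currently being dragged
  that.jsPlumbInstance.revalidate(this);
},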

Issue with WebdriverIO V6 dragAndDrop method

I've tried drag and drop using dragAndDrop and performActions, but neither of them works for a Vue.Draggable web app (e.g. vuedraggable: Two Lists).
Can anyone share a solution if possible?
Sample Code:
it('should demonstrate the dragAndDrop command', () => {
browser.url('https://sortablejs.github.io/Vue.Draggable/#/two-lists')
const elem = $('#two-lists .row div:nth-child(1) div.list-group div.list-group-item:nth-child(1)')
const target = $('#two-lists .row div:nth-child(2) div.list-group')
elem.dragAndDrop(target)
browser.pause(5000)
})
The built-in dragAndDrop function won't work because of the very custom drag-and-drop implementation in vuedraggable.
The only option is to fully or partially replace it with JavaScript.
Here you go:
it('should demonstrate the dragAndDrop command', () => {
browser.url('https://sortablejs.github.io/Vue.Draggable/#/two-lists')
const listFrom = $('#two-lists .col-3:nth-child(1) .list-group')
const listTo = $('#two-lists .col-3:nth-child(2) .list-group')
const draggable = listFrom.$('.list-group-item')
const target = listTo.$('.list-group-item')
target.waitForClickable()
// before 4 in left and 3 in right list
expect(listFrom).toHaveChildren(4)
expect(listTo).toHaveChildren(3)
// start dragging
browser.performActions([
{
type: 'pointer',
id: 'finger1',
parameters: { pointerType: 'mouse' },
actions: [
{ type: 'pointerMove', origin: draggable, x: 0, y: 0 },
{ type: 'pointerDown', button: 0 },
{ type: 'pointerMove', origin: 'pointer', x: 5, y: 5, duration: 5 },
],
},
])
// emulate drop with js
browser.execute(
function (elemDrag, elemDrop) {
const pos = elemDrop.getBoundingClientRect()
const center2X = Math.floor((pos.left + pos.right) / 2)
const center2Y = Math.floor((pos.top + pos.bottom) / 2)
function fireMouseEvent(type, relatedTarget, clientX, clientY) {
const evt = new MouseEvent(type, { clientX, clientY, relatedTarget, bubbles: true })
relatedTarget.dispatchEvent(evt)
}
fireMouseEvent('dragover', elemDrop, center2X, center2Y)
fireMouseEvent('dragend', elemDrag, center2X, center2Y)
fireMouseEvent('mouseup', elemDrag, center2X, center2Y)
},
draggable,
target
)
// after 3 in left and 4 in right list
expect(listFrom).toHaveChildren(3)
expect(listTo).toHaveChildren(4)
browser.pause(2000) // demo
})
See also https://ghostinspector.com/blog/simulate-drag-and-drop-javascript-casperjs/