I'm trying to display a large number of images using Leaflet and I'm having problems making the tile layers display properly.
I'm extending the TileLayer class and overriding the createTile function like this:
let imageLayer = TileLayer.extend({
  createTile: function (coords, done) {
    // Every tile currently gets the same placeholder image, regardless of `coords`
    const image = window.L.DomUtil.create("img");
    image.src = `http://placekitten.com/g/200/300`;
    image.crossOrigin = "";
    image.onload = function () {
      this.onload = function () {};
      // Signal to Leaflet that this tile has finished loading
      done(null, image);
    };
    return image;
  }
});
I want to change the position of the images and only display them in areas I have already defined (for example, offsetting the image at 0,0 so that it appears at 1,1, or using latLng bounds ([8273.300000000001, 10618.02], [9232.653535353536, 9658.666464646465])).
I've tried a bunch of different methods and almost always came back to bounds, but setting the bounds doesn't change anything.
I don't want to use ImageOverlays because the images I'm trying to display are sliced and named with the Leaflet naming scheme (0_0_1.png).
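For reference, a minimal sketch of how Leaflet's built-in bounds option is normally wired up for a tile layer; the coordinates are the ones from the question, and it assumes they are valid LatLngs in the map's CRS (e.g. L.CRS.Simple for pixel-style values) and that a map instance already exists:
// Hypothetical usage of the extended layer above; tiles falling entirely
// outside `bounds` should not be requested by Leaflet.
const bounds = window.L.latLngBounds(
  [8273.300000000001, 10618.02],
  [9232.653535353536, 9658.666464646465]
);
// The URL template is unused because createTile is overridden above.
const layer = new imageLayer("", { bounds: bounds });
layer.addTo(map);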
Does pdf.js allow rendering a PDF page only partially? More specifically, is it possible to tell pdf.js to render a selected "rectangle of pixels" out of an entire PDF page?
Assuming a resolution of 144 dpi, a typical page (DIN A4) would have approx. 1190 (width) by 1684 (height) pixels. I would like to render (for example) a rectangle like [100, 100] (top left coordinate in pixels) and [400, 400] (bottom right coordinate in pixels).
A typical use case could be a scanned document with several handwritten notes that I would like to display and further process individually.
I do understand that a "workaround" could be to save the entire page as a JPEG (or any other suitable bitmap format) and apply some clipping function. But this would surely be less performant than selective rendering.
pdf.js uses a viewport object (presumably containing rendering parameters). This object contains:
height
width
offsetX
offsetY
rotation
scale
transform
viewBox (by default [0, 0, width / scale, height / scale])
One might think that manipulating the viewBox would lead to the desired outcome, but I have found that changing the viewBox parameters does not do anything at all. The entire page is rendered every time I call the render method.
What might I have done wrong? Does pdf.js offer the desired functionality? And if so, how can I get it to work? Thank you very much!
Here is a very simple React component demonstrating my approach (that does not work):
import React, { useRef } from 'react';
import { pdfjs } from 'react-pdf';
pdfjs.GlobalWorkerOptions.workerSrc = 'pdf.worker.js';
function PdfTest() {
// useRef hooks
const myCanvas: React.RefObject<HTMLCanvasElement> = useRef(null);
const test = () => {
const loadDocument = pdfjs.getDocument('...');
loadDocument.promise
.then((pdf) => {
return pdf.getPage(1);
})
.then((page) => {
const viewport = page.getViewport({ scale: 2 });
// Here I modify the viewport object on purpose
viewport.viewBox = [100, 100, 400, 400];
if (myCanvas.current) {
const context = myCanvas.current.getContext('2d');
if (context) {
// Size the canvas before rendering into it; resizing it afterwards clears the drawing
myCanvas.current.height = viewport.height;
myCanvas.current.width = viewport.width;
page.render({ canvasContext: context, viewport: viewport });
}
}
});
};
// Render function
return (
<div>
<button onClick={test}>Test!</button>
<canvas ref={myCanvas} />
</div>
);
}
export default PdfTest;
My initial thought was also to modify the viewBox of the page's viewport. That turned out to be the wrong guess (as I hope you have already figured out).
What you really need to do to project only part of a page onto the canvas is to set up the viewport transformation correctly.
So it will look more or less like the following:
const scale = 2
const viewport = page.getViewport({
scale,
offsetX: -100 * scale,
offsetY: -100 * scale
})
This will move your box section to the origin of the canvas coordinates.
What you would probably want to do next is make the canvas equal to the selected rectangle size (in your case 300x300, scaled by your scale factor); this solved the issue in my case.
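Putting both parts together, a minimal sketch using the same page object and canvas ref as in the question (the 100/300 numbers come from the example rectangle):
const scale = 2;
// Shift the page so the rectangle's top-left corner (100, 100) lands at the canvas origin
const viewport = page.getViewport({
  scale,
  offsetX: -100 * scale,
  offsetY: -100 * scale,
});
const canvas = myCanvas.current;
// Size the canvas to the clipped region: 300x300 page units, scaled
canvas.width = 300 * scale;
canvas.height = 300 * scale;
page.render({ canvasContext: canvas.getContext('2d'), viewport });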
How would I adapt #ghettovoice's JSFiddle that saves a map to PDF so that it saves the map to a JPEG or PNG instead? I have no idea how to approach this problem, so ideally if you know how to do it you could explain the logic behind it.
exportMap: function () {
var map = this.$refs.map
map.once('rendercomplete', function () {
var mapCanvas = document.createElement('canvas');
var size = map.getSize();
mapCanvas.width = size[0];
mapCanvas.height = size[1];
var mapContext = mapCanvas.getContext('2d');
Array.prototype.forEach.call(
document.querySelectorAll('.ol-layer canvas'),
function (canvas) {
if (canvas.width > 0) {
var opacity = canvas.parentNode.style.opacity;
mapContext.globalAlpha = opacity === '' ? 1 : Number(opacity);
var transform = canvas.style.transform;
// Get the transform parameters from the style's transform matrix
var matrix = transform
.match(/^matrix\(([^(]*)\)$/)[1]
.split(',')
.map(Number);
// Apply the transform to the export map context
CanvasRenderingContext2D.prototype.setTransform.apply(
mapContext,
matrix
);
mapContext.drawImage(canvas, 0, 0);
}
}
);
if (navigator.msSaveBlob) {
// link download attribute does not work on MS browsers
navigator.msSaveBlob(mapCanvas.msToBlob(), 'map.png');
} else {
var link = document.getElementById('image-download');
link.href = mapCanvas.toDataURL();
link.click();
}
});
map.renderSync();
}
The problem was a combination of missing dependencies (namely FileSaver.js and fakerator.js) and a cross-origin (CORS) block: browsers automatically prevent HTTP requests to a different domain unless the server allows them. The first was fixed by installing the packages; the second was resolved by setting the crossOrigin attribute of the ImageWMS layer to null in my case, but possibly to 'Anonymous' for other sources. Hope this helped someone else :)
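For anyone adapting this, a hedged sketch of the two relevant pieces: the crossOrigin option on the WMS source (the URL and layer name below are placeholders) and exporting the canvas as JPEG rather than PNG:
// The source must allow CORS, otherwise drawImage taints the canvas and toDataURL throws
var wmsSource = new ol.source.ImageWMS({
  url: 'https://example.com/wms',        // placeholder
  params: { LAYERS: 'workspace:layer' }, // placeholder
  crossOrigin: 'anonymous'
});
// Inside exportMap(), after the layers have been drawn onto mapCanvas:
link.href = mapCanvas.toDataURL('image/jpeg', 0.92); // omit the arguments for PNG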
My component is a modal which displays an <img> with some tags on top.
I get the image dimensions using const size = this.$refs.media.getBoundingClientRect().
However, the height and width are 0, so the tags end up positioned wrong.
So it seems the image hasn't loaded yet? If I delay getting the height and width with setTimeout, I do get them. But that seems like an unreliable workaround.
I have "v-cloak" on the modal, but that doesn't work.
I can get the html element with this.$refs.media, but it doesn't mean that the image is loaded, apparently.
So, is there a way to check if the image is loaded? Is there some way to delay the component until the image is loaded?
One more thing: if I add a "width" and "height" in inline style on the img, then those values are available directly. But I don't want to set the size that way because it ruins the responsiveness.
The code doesn't say much more, but it is basically:
<modal>
<vue-draggable-resizable // <-- each tag
v-for="( tag, index ) in tags"
:x="convertX(tag.position.x)"
:y="convertY(tag.position.y)"
(other parameters)
convertX (x) {
const size = this.$refs.media.getBoundingClientRect()
return x * (size.width - 100) // <--- 0 width
async mounted () {
this.renderTags(this.media.tags) // <-- needs img width and height
Update: solution
As the answer below says, I can use the onload event. I also came across the "#load" event, which I tried and it worked. I can't find it in the documentation though (I searched for #load and v-on:load). Using "onload" directly on the image element didn't work, so #load seems to be the way.
If I have a data property:
data () {
return {
imgLoaded: false
I can set it to true with a method using #load on the image:
<img
#load="whenImgLoaded"
<template v-if="imgLoaded">
<vue-draggable-resizable
:x="convertX(tag.position.x)"
:y="convertY(tag.position.y)"
(other parameters)
methods: {
whenImgLoaded(){
this.imgLoaded = true;
this.renderTags(this.media.tags)
}
You could load the image first via javascript, and after it's loaded run the renderTags method.
E.g., do something like this:
async mounted() {
let image = new Image();
// Use an arrow function so `this` still refers to the Vue component
image.onload = () => {
this.renderTags(this.media.tags)
};
image.src = 'https://some-image.png'
}
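An alternative sketch of the same idea when the <img> is already in the template, using the $refs.media name from the question (whether this fits the modal's timing is an assumption):
mounted () {
  const img = this.$refs.media;
  if (img.complete && img.naturalWidth > 0) {
    // The image was already loaded (e.g. from cache) and has real dimensions
    this.renderTags(this.media.tags);
  } else {
    // Otherwise wait for the browser to finish loading it before measuring
    img.addEventListener('load', () => this.renderTags(this.media.tags));
  }
}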
I am trying to create a 3D model in React Native using Three.js. For that I have to use an image as a texture, but the TextureLoader() function relies on 'document' and 'canvas' object creation logic, which can't be used in React Native.
So, how can I use an image as a texture in React Native using Three.js?
Short summary: the TextureLoader didn't work because it requires an ImageLoader, which requires the inaccessible DOM.
Apparently there's a THREE.Texture constructor, which I used like this:
let texture = new THREE.Texture(
url //URL = a base 64 JPEG string in this case
);
var img = new Image(128, 128);
img.src = url;
texture.normal = img;
As you can see, I used an Image constructor; this is from React Native and is imported like this:
import { Image } from 'react-native';
The React Native documentation explains how it can be used; it supports base64-encoded JPEGs.
and finally map the texture to the material:
material.map = texture;
Note: I haven't tried to show the end result in my WebGL view yet, but in the element inspector it seems to produce proper Three.js objects.
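For comparison, the usual browser-side Three.js pattern attaches the image to the texture's image property and sets needsUpdate; whether the React Native Image object is accepted here is an assumption, just like the snippet above:
const texture = new THREE.Texture();
const img = new Image(128, 128);
img.src = url;               // base64 JPEG string, as above
texture.image = img;         // the property Three.js actually samples from
texture.needsUpdate = true;  // ask Three.js to (re)upload the image to the GPU
material.map = texture;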
(Note: this is not the answer)
EDIT: made the code a bit shorter and used DOMParser; it still doesn't work because:
"image.addEventListener is not a function"
This implies that the image object created by this method can't be used by Three.js...
var DOMParser = require('react-native-html-parser').DOMParser;
var myDomParser = new DOMParser();
window.document = {};
window.document.createElementNS = (x,y) => {
if (y === "img") {
let doc = myDomParser.parseFromString("<html><head></head><body><img class='image' src=''></img></body></html>");
return doc.getElementsByAttribute("class", "image")[0];
}
}
Edit: be careful with this, because it also overrides the other createElementNS variants that create canvas or other elements.
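One way to soften that caveat is to intercept only img and fall back to whatever implementation already exists for everything else; a rough, untested sketch under the same react-native-html-parser assumption:
const originalCreateElementNS = window.document.createElementNS;
window.document.createElementNS = (ns, tag) => {
  if (tag === 'img') {
    const doc = myDomParser.parseFromString("<html><head></head><body><img class='image' src=''></img></body></html>");
    return doc.getElementsByAttribute('class', 'image')[0];
  }
  // Leave canvas and anything else to the previous implementation, if there was one
  return originalCreateElementNS ? originalCreateElementNS(ns, tag) : undefined;
};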
I want to limit the map extent to the initial extent of the map and prevent the user from panning beyond a certain extent.
I tried the following but nothing changed:
map = new Map( "map" , {
basemap: "gray",
center: [-85.416, 49.000],
zoom : 6,
logo: false,
sliderStyle: "small"
});
dojo.connect(map, "onExtentChange", function (){
var initExtent = map.extent;
var extent = map.extent.getCenter();
if(initExtent.contains(extent)){}
else{map.setExtent(initExtent)}
});
Just to flesh out Simon's answer somewhat, and give an example. Ideally you need two variables at the same scope as map:
initExtent to store the boundary of your valid extent, and
validExtent to store the last valid extent found while panning, so that you can bounce back to it.
I've used the newer dojo.on event syntax for this example as well; it's probably a good idea to move to this as per the documentation's recommendation - I assume ESRI will discontinue the older style at some point.
var map;
var validExtent;
var initExtent;
[...]
require(['dojo/on'], function(on) {
on(map, 'pan', function(evt) {
if ( !initExtent.contains(evt.extent) ) {
console.log('Outside bounds!');
} else {
console.log('Updated extent');
validExtent = evt.extent;
}
});
on(map, 'pan-end', function(evt) {
if ( !initExtent.contains(evt.extent) ) {
map.setExtent(validExtent);
}
});
});
You can do the same with the zoom events, or use extent-change if you want to trap everything. Up to you.
It looks like your extent-change function is setting the initial extent variable to the map's current extent and then checking whether that extent contains the current extent's centre point, which of course it always will.
Instead, declare initExtent at the same scope as the map variable. Then, change the onLoad event to set this wider-scoped variable rather than a local one. In the extent-change function, don't update the value of initExtent; simply check that initExtent contains the whole of the current extent.
Alternatively, you could compare each bound of the current extent to the corresponding bound of initExtent (e.g. is map.extent.xmin < initExtent.xmin?) and, if any bound is exceeded, create a new extent that clamps the exceeded bounds to the initExtent values.
The only problem is that these techniques will allow initExtent to be exceeded briefly, but will then snap the extent back once the extent-change function fires and catches up.
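A minimal sketch of that first fix, keeping the legacy dojo.connect style from the question (onLoad fires once the map is ready; variable names are illustrative):
var initExtent; // declared at the same scope as `map`
dojo.connect(map, "onLoad", function () {
  initExtent = map.extent; // capture the starting extent exactly once
});
dojo.connect(map, "onExtentChange", function (newExtent) {
  if (!initExtent.contains(newExtent)) {
    map.setExtent(initExtent); // snap back when the view leaves the allowed area
  }
});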
I originally posted this solution on gis.stackexchange in answer to this question: https://gis.stackexchange.com/a/199366
Here's a code sample from that post:
//This function limits the extent of the map to prevent users from scrolling
//far away from the initial extent.
function limitMapExtent(map) {
var initialExtent = map.extent;
map.on('extent-change', function(event) {
//If the map has moved to the point where it's center is
//outside the initial boundaries, then move it back to the
//edge where it moved out
var currentCenter = map.extent.getCenter();
if (!initialExtent.contains(currentCenter) &&
event.delta.x !== 0 && event.delta.y !== 0) {
var newCenter = map.extent.getCenter();
//check each side of the initial extent and if the
//current center is outside that extent,
//set the new center to be on the edge that it went out on
if (currentCenter.x < initialExtent.xmin) {
newCenter.x = initialExtent.xmin;
}
if (currentCenter.x > initialExtent.xmax) {
newCenter.x = initialExtent.xmax;
}
if (currentCenter.y < initialExtent.ymin) {
newCenter.y = initialExtent.ymin;
}
if (currentCenter.y > initialExtent.ymax) {
newCenter.y = initialExtent.ymax;
}
map.centerAt(newCenter);
}
});
}
And here's a working jsFiddle example: http://jsfiddle.net/sirhcybe/aL1p24xy/