GL React Expo rescaling captured image - react-native

I have an issue where the output image from a Surface gets rescaled and does not keep the same resolution as the original.
Here are some examples of input and output, along with the render size of the surface.
Example 1:
Original Image = {
  height: 2560,
  width: 1440,
}
Final Image = {
  height: 1518,
  width: 854,
}
Surface Size = {
  height: 284.625,
  width: 506
}
Example 2:
Original Image = {
  height: 357,
  width: 412,
}
Final Image = {
  height: 936,
  width: 1080,
}
Surface Size = {
  height: 360,
  width: 311.9417475728156
}
To capture the image, I use the following code:
getEditedImage = async () => {
  return await this.image.glView.capture({ quality: 1 });
};
where image represents the surface from which I'm capturing the image.
I want the output image resolution to be exactly the same as the input. Does anyone have an idea how I can achieve this?

You can see how to use it here:
https://docs.expo.io/versions/v32.0.0/sdk/gl-view/
The format (string) and compress (number) options seem to be the part you need.
source: https://docs.expo.io/versions/latest/sdk/take-snapshot-async/
This link describes the quality option:
takeSnapshotAsync(view, options)
quality: number -- a number between 0 and 1, where 0 is worst quality and 1 is best; defaults to 1
const targetPixelCount = 1080; // If you want full HD pictures
const pixelRatio = PixelRatio.get(); // The pixel ratio of the device
// pixels * pixelRatio = targetPixelCount, so pixels = targetPixelCount / pixelRatio
const pixels = targetPixelCount / pixelRatio;
const result = await takeSnapshotAsync(this.imageContainer, {
  result: 'file',
  height: pixels,
  width: pixels,
  quality: 1,
  format: 'png',
});
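Since the snapshot width/height options are given in logical points while the image is in physical pixels, dividing the original dimensions by the device pixel ratio should yield a snapshot that matches the input resolution. A minimal sketch of that calculation (the helper name is mine, not part of any API; in React Native the ratio would come from PixelRatio.get()):

```javascript
// Sketch: compute the width/height options to pass to the snapshot call so
// that the physical output matches the original image resolution.
// size-in-points * pixelRatio = size-in-pixels, so points = pixels / pixelRatio.
function snapshotSizeFor(originalWidth, originalHeight, pixelRatio) {
  return {
    width: originalWidth / pixelRatio,
    height: originalHeight / pixelRatio,
  };
}
```

With a pixel ratio of 3 and the 1440x2560 image from Example 1, you would request a snapshot of roughly 480 x 853.33 points.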

How to get original pixel color before shader

I have an ImageLayer and a RasterSource which uses a shader to manipulate the colors of the image.
While listening to the map's pointermove I get the color of the pixel under the pointer, which is the color manipulated by the shader.
How can I get the original color before it was manipulated by the shader?
const extent = [0, 0, 1024, 968]; // the image extent in pixels
const projection = new Projection({
  code: 'xkcd-image',
  units: 'pixels',
  extent: extent,
});
let staticImage = new Static({ // from 'ol/source/ImageStatic'
  attributions: '© xkcd',
  url: './assets/map.png',
  projection: projection,
  imageExtent: extent,
});
let imageLayer = new ImageLayer({ // from 'ol/layer/Image'
  source: new RasterSource({
    sources: [staticImage],
    operation: function (pixels, data) {
      let p = pixels[0];
      let grayscale = (p[0] + p[1] + p[2]) / 3;
      p[0] = grayscale;
      p[1] = grayscale;
      p[2] = grayscale;
      return p;
    },
  }),
});
let map; // Map from 'ol'
map.on('pointermove', (evt) => {
  let pixel = evt.pixel;
  let color = imageLayer.getRenderer().getDataAtPixel(pixel, evt.frameState, 1);
  // color is the one manipulated by the shader
});
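The raster operation above just averages the three channels. As a plain function outside OpenLayers (a sketch for illustration only), it looks like this:

```javascript
// Sketch: the same per-pixel grayscale operation as a standalone function.
// Takes an RGBA array and returns a copy with R, G and B replaced by their average.
function grayscaleOp(p) {
  const grayscale = (p[0] + p[1] + p[2]) / 3;
  return [grayscale, grayscale, grayscale, p[3]];
}
```

This also makes clear why the original color cannot be read back from the rendered layer alone: averaging the channels is not invertible.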
More code here:
https://codesandbox.io/s/raster-original-pixel-3mejh9?file=/main.js
Note: because of security restrictions I've had to comment out the shader code which turns the colors gray
Which was adapted from this example:
https://openlayers.org/en/latest/examples/static-image.html
I found a workaround by adding a duplicate RasterLayer with a no-op operation. The trick is to add this layer just after the color-manipulated layer, but with an opacity of 0.005 (apparently the lowest possible value), so it is rendered but not visible. Then combine the grey pixel's alpha with the no-op pixel's RGB values:
let noopLayer = new ImageLayer({
  opacity: 0.005,
  source: new RasterSource({
    sources: [staticImage],
    operation: function (pixels, data) {
      let p = pixels[0];
      return p;
    },
  }),
});
const map = new Map({
  layers: [rasterLayer, noopLayer],
  target: 'map',
  view: new View({
    projection: projection,
    center: getCenter(extent),
    zoom: 2,
    maxZoom: 8,
  }),
});
map.on('pointermove', (evt) => {
  const pixel = evt.pixel;
  let grey = rasterLayer.getRenderer().getDataAtPixel(pixel, evt.frameState, 1);
  let noop = noopLayer.getRenderer().getDataAtPixel(pixel, evt.frameState, 1);
  if (grey != null) {
    // [228, 228, 228, 255]
    console.log('grey:', grey[0], grey[1], grey[2], grey[3]);
    // [255, 255, 170, 3]
    console.log('noop:', noop[0], noop[1], noop[2], noop[3]);
    // [255, 255, 170, 255]
    console.log('original:', noop[0], noop[1], noop[2], grey[3]);
  }
});
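The recombination at the end can be factored into a small helper (the function name is mine): RGB comes from the nearly invisible no-op layer, the alpha from the grayscale layer.

```javascript
// Sketch: rebuild the original pixel from the two layer reads.
// noop carries the untouched RGB (with a tiny alpha, because of the 0.005
// layer opacity); grey carries the real alpha.
function combinePixels(noop, grey) {
  if (noop == null || grey == null) return null;
  return [noop[0], noop[1], noop[2], grey[3]];
}
```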

React Native problem with Image.getSize function

I am new to React Native, so be patient my friends. I have been struggling to get the image size out of the Image.getSize callback.
First problem: the console prints the log statement that sits outside the Image.getSize call first, even though Image.getSize comes before it in the code. I tried reordering them with no luck, and it logs an image size of 0x0.
I want to use that width and height for drawing the image, but I do not know how to "get them out" of the Image.getSize callback. How can I do it?
The point of this function is to get the image size so I can calculate the right aspect ratio for resizing.
kuvanpiirto = () => {
  pdfKuvaKorkeus = 100;
  let imgWidth = 0;
  let imgHeight = 0;
  Image.getSize(arr[i].path, (width, height) => {
    console.log(`Image width during PDF creation is ${width}`); // logs the image width
    console.log(`Image height during PDF creation is ${height}`); // logs the image height
    imgWidth = width;
    imgHeight = height;
  });
  console.log(`Outside the Image.getSize function the size is ${imgWidth} x ${imgHeight}`); // logs the image size
  if (imgHeight > imgWidth) { // VERTICAL
    page.drawImage(arr[i].path.substring(7), 'jpg', {
      x: imgX,
      y: imgY,
      width: pdfKuvaKorkeus * 1.33,
      height: pdfKuvaKorkeus,
    });
  }
  if (imgHeight < imgWidth) { // HORIZONTAL
    page.drawImage(arr[i].path.substring(7), 'jpg', {
      x: imgX,
      y: imgY,
      width: pdfKuvaKorkeus * 0.75,
      height: pdfKuvaKorkeus,
    });
  }
};
kuvanpiirto();
I also tried putting all the code inside the Image.getSize callback, like this:
kuvanpiirto = () => {
  Image.getSize(arr[i].path, (width, height) => {
    imgWidth = width;
    imgHeight = height;
    if (imgHeight > imgWidth) { // VERTICAL
      console.log(`Vertical - IMAGE SIZE ${imgWidth}X${imgHeight}`);
      console.log(`IMAGE PATH ${arr[i].path}`);
      page.drawImage(arr[i].path.substring(7), 'jpg', {
        x: imgX,
        y: imgY,
        width: 100 * 1.33,
        height: 100,
      });
    }
    if (imgHeight < imgWidth) { // HORIZONTAL
      console.log(`Horizontal - IMAGE SIZE ${imgWidth}X${imgHeight}`);
      console.log(`IMAGE PATH ${arr[i].path}`);
      page.drawImage(arr[i].path.substring(7), 'jpg', {
        x: imgX,
        y: imgY,
        width: 100 * 0.75,
        height: 100,
      });
    }
  });
};
kuvanpiirto();
This works better, but it does not draw the image and throws an error (screenshot below). I don't have any clue why it cannot draw the image.
I haven't tested it, but this should work:
kuvanpiirto = async () => {
  // your code
  const { imgWidth, imgHeight } = await new Promise((resolve) => {
    Image.getSize(arr[i].path, (width, height) => {
      resolve({ imgWidth: width, imgHeight: height });
    });
  });
  // your code
};
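The idea generalizes: any (uri, success, failure) callback API like Image.getSize can be wrapped in a Promise so the result is usable with await. A sketch, with the getSize function passed in as a parameter so it can be exercised outside React Native:

```javascript
// Sketch: wrap a callback-style size lookup in a Promise.
// getSize is injected (e.g. Image.getSize in React Native).
function getSizeAsync(getSize, uri) {
  return new Promise((resolve, reject) => {
    getSize(uri, (width, height) => resolve({ width, height }), reject);
  });
}
```

Usage would then be `const { width, height } = await getSizeAsync(Image.getSize, arr[i].path);`, after which the drawImage branches can run with the real dimensions.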
useEffect(() => {
  Image.getSize(uri, (width, height) => {
    setWidth(width);
    setHeight(height);
  });
}, [uri]);

move snapshot to images folder

I am building an iOS app with React Native, and my code captures a screenshot successfully. The problem is that I want to move the snapshot to the Images folder so it is easily accessible to the user.
I use the following code:
snapshot = async () => {
  const targetPixelCount = 1080; // If you want full HD pictures
  const pixelRatio = PixelRatio.get(); // The pixel ratio of the device
  // pixels * pixelRatio = targetPixelCount, so pixels = targetPixelCount / pixelRatio
  const pixels = targetPixelCount / pixelRatio;
  const result = await Expo.takeSnapshotAsync(this, {
    result: 'file',
    height: pixels,
    width: pixels,
    quality: 1,
    format: 'png',
  });
  if (result) {
    //RNFS.moveFile(result, 'Documents/snapshot.png');
    Alert.alert('Snapshot', 'Snapshot saved to ' + result);
  } else {
    Alert.alert('Snapshot', 'Failed to save snapshot');
  }
};
Does anybody know how to move the image to the Images Folder?
Thank you
Use the RN CameraRoll Module: https://facebook.github.io/react-native/docs/cameraroll
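A sketch of saving the snapshot file with that module, assuming the CameraRoll.saveToCameraRoll(tag, type) API from the linked docs (the wrapper name is mine, and the module is injected so the helper can be exercised without a device):

```javascript
// Sketch: save a local file URI to the device's photo library.
// cameraRoll is passed in (the real CameraRoll module in the app).
async function saveSnapshot(cameraRoll, fileUri) {
  // 'photo' is the media type; the promise resolves with the saved asset's URI.
  return cameraRoll.saveToCameraRoll(fileUri, 'photo');
}
```

In the snapshot() above you would call it with the result path, e.g. `await saveSnapshot(CameraRoll, result)`.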

In Vue, how can I batch a bunch of changes to a DOM element?

I'm doing an animation where a thumbnail blows up to take up the whole screen. The first step is to get the element, change its position to absolute, and explicitly assign the dimensions and position it currently occupies, using a style object. Then I do my animation.
Most of the time it works nicely, but I seem to be getting DOM renders that catch my element in an intermediate state: its position changed to absolute but the dimensions not yet set. In Vue, is there a way of grouping a bunch of updates to an element into a single batch?
var me = this;
var outer = this.$refs.outer;
if (!this.vueStore.previewing) {
  var rect = outer.getBoundingClientRect();
  this.elStyle.height = rect.bottom - rect.top;
  this.elStyle.width = rect.right - rect.left;
  this.elStyle.top = rect.top;
  this.elStyle.left = rect.left;
  this.elStyle.zIndex = 100;
  this.elStyle.position = 'absolute';
  this.elStyle.transition = 'all 0.5s ease-in-out';
  setTimeout(function () {
    this.elStyle.top = this.vueStore.contentRect.top;
    this.elStyle.left = this.vueStore.contentRect.left;
    this.elStyle.height = this.vueStore.contentRect.bottom;
    this.elStyle.width = this.vueStore.contentRect.right;
  }, 300);
} else {...
You should assign elStyle in one go:
var me = this;
var outer = this.$refs.outer;
if (!this.vueStore.previewing) {
  var rect = outer.getBoundingClientRect();
  me.elStyle = Object.assign({}, me.elStyle, {
    height: rect.bottom - rect.top,
    width: rect.right - rect.left,
    top: rect.top,
    left: rect.left,
    zIndex: 100,
    position: 'absolute',
  });
  setTimeout(function () {
    me.elStyle = Object.assign({}, me.elStyle, {
      height: me.vueStore.contentRect.bottom - 50,
      width: me.vueStore.contentRect.right,
      top: me.vueStore.contentRect.top,
      left: me.vueStore.contentRect.left,
    });
  }, 300);
}
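The same idea as a plain function: build the complete style object first, then assign it to the reactive property in a single step, so the DOM never renders a half-applied intermediate state (the helper name is mine; the unit-less numeric values mirror the question's code):

```javascript
// Sketch: compute the full "expanded" style from a bounding rect in one object.
// Assigning the returned object to the component's reactive elStyle is a
// single update, rather than seven separate mutations.
function expandedStyle(rect) {
  return {
    height: rect.bottom - rect.top,
    width: rect.right - rect.left,
    top: rect.top,
    left: rect.left,
    zIndex: 100,
    position: 'absolute',
    transition: 'all 0.5s ease-in-out',
  };
}
```

Usage would be `me.elStyle = expandedStyle(outer.getBoundingClientRect());`. If the transition still starts from a half-styled frame, deferring the second assignment with this.$nextTick or requestAnimationFrame instead of a fixed timeout is worth trying.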

Strange page width in PhantomJS

Here is the script code:
var page = require('webpage').create();
page.paperSize = {
  format: 'A4',
  orientation: 'landscape',
};
page.open('http://www.google.com', function () {
  var arr = page.evaluate(function () {
    var pageWidth = document.body.clientWidth;
    var pageHeight = document.body.clientHeight;
    return [pageWidth, pageHeight];
  });
  console.log('pageWidth: ' + arr[0] + '; pageHeight: ' + arr[1]);
  page.render('google.pdf');
  phantom.exit();
});
I'm trying to get the clientWidth and clientHeight of the page's document.body. When I run this script I get the following values:
pageWidth: 400; pageHeight: 484
Why is the width 400px? I think it should be wider.
Thank you for the reply. But then I don't understand the following. When I use viewportSize = {width: 1024, height: 800}:
var page = require('webpage').create();
page.paperSize = {
  format: 'A4',
  orientation: 'landscape',
};
page.viewportSize = { width: 1024, height: 800 };
page.open('http://www.google.com', function () {
  page.render('google.pdf');
  phantom.exit();
});
I get the following file:
But if I use viewportSize = {width: 400, height: 400}
var page = require('webpage').create();
page.paperSize = {
  format: 'A4',
  orientation: 'landscape',
};
page.viewportSize = { width: 400, height: 400 };
page.open('http://www.google.com', function () {
  page.render('google.pdf');
  phantom.exit();
});
I get the same result. So I don't understand how viewportSize affects the rendering.
The document layout is affected by the viewport size, not by the paper size. Think of it this way: how a web page looks in your browser has nothing to do with your current printer settings.
Set viewportSize if you want to influence the page layout:
page.viewportSize = { width: 1024, height: 800 };
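If you want the layout to roughly fill the A4 landscape page, one option is to derive the viewport from the paper dimensions at a chosen DPI. A sketch of that calculation (the helper name and the 96 DPI assumption are mine; A4 landscape is 297 x 210 mm):

```javascript
// Sketch: convert a paper size in millimetres to a viewport in pixels at a
// given DPI (1 inch = 25.4 mm), so the browser layout matches the PDF page.
function viewportForPaper(widthMm, heightMm, dpi) {
  const mmToPx = (mm) => Math.round((mm / 25.4) * dpi);
  return { width: mmToPx(widthMm), height: mmToPx(heightMm) };
}
```

At 96 DPI this gives roughly 1123 x 794 for A4 landscape, close to the 1024 x 800 viewport used above.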