Getting wrong landmarks from MediaPipe Handpose - react-native

I am creating a handpose recognition app using MediaPipe Handpose, but the landmarks I get from the estimateHands function keep growing, and after a certain point I receive an array of NaN as output. The image passed to the function comes from the tensor camera. Please do help. Thank you.
// run for every image in a video
let frame = 0;
const computeRecognitionEveryNFrames = 1;

const handleCameraStream = (images) => {
  let outer = [];
  let points = [];
  const loop = async () => {
    if (net) {
      if (frame % computeRecognitionEveryNFrames === 0) {
        const nextImageTensor = images.next().value;
        if (nextImageTensor) {
          if (outer.length > 90) outer = [];
          if (points.length > 11340) points = [];
          const objects = await net.estimateHands(nextImageTensor);
          console.log(objects);
          objects.length > 0
            ? objects.map((object) => {
                object.landmarks.map((arr) => {
                  points.push(...arr);
                });
              })
            : null;
          // tf.dispose([nextImageTensor]);
          nextImageTensor.dispose();
          if (points.length >= 126) outer.push(points.slice(-126));
          // if (outer.length >= 30) {
          //   (async () => {
          //     const model = await tf.loadLayersModel(
          //       bundleResourceIO(modelJSON, modelWeights)
          //     );
          //     // console.log(outer);
          //     const ypreds = Array.from(
          //       model
          //         .predict(tf.tensor(outer.slice(-30)).expandDims(0))
          //         .dataSync()
          //     );
          //     // console.log(ypreds.indexOf(Math.max(ypreds)));
          //     console.log(ypreds);
          //   })();
          // }
          console.log(outer.length);
        }
      }
      frame += 1;
      frame = frame % computeRecognitionEveryNFrames;
    }
    requestAnimationFrame(loop);
  };
  loop();
};
=========================================================
<TensorCamera
  style={{
    width: 256,
    height: height * 0.8,
    marginRight: "auto",
    marginLeft: "auto",
  }}
  resizeHeight={height * 0.8}
  resizeDepth={3}
  resizeWidth={256}
  type={Camera.Constants.Type.front}
  autorender={true}
  onReady={(images) => handleCameraStream(images)}
/>
</View>
[![landmarks][1]][1]
[1]: https://i.stack.imgur.com/pR0JW.png
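If the growth comes from tensors or landmark values accumulating between frames, one thing to check — a sketch, not a confirmed fix, reusing net, points, and tf from the snippet above — is that the frame tensor is released even when estimateHands throws, and that points is trimmed every frame instead of only being reset at a threshold:

const loop = async () => {
  const nextImageTensor = images.next().value;
  if (net && nextImageTensor) {
    try {
      const objects = await net.estimateHands(nextImageTensor);
      objects.forEach((object) => {
        object.landmarks.forEach((arr) => points.push(...arr));
      });
      // keep the buffer bounded: only the latest 126 values survive
      points = points.slice(-126);
    } finally {
      // release the frame tensor even if estimateHands rejects
      tf.dispose(nextImageTensor);
    }
  }
  requestAnimationFrame(loop);
};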

Related

html canvas - multiple svg images which all have different fillStyle colors

For a digital artwork I'm generating a canvas element in Vue which draws from an array of multiple images.
The images can be split into two categories:
SVG (comes with a fill color)
PNG (just needs to be drawn as a regular image)
I came up with this:
const depict = (options) => {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  const myOptions = Object.assign({}, options);
  if (myOptions.ext == "svg") {
    return loadImage(myOptions.uri).then((img) => {
      ctx.drawImage(img, 0, 0, 100, 100);
      ctx.globalCompositeOperation = "source-in";
      ctx.fillStyle = myOptions.clr;
      ctx.fillRect(0, 0, 100, 100);
      ctx.globalCompositeOperation = "source-over";
    });
  } else {
    return loadImage(myOptions.uri).then((img) => {
      ctx.fillStyle = myOptions.clr;
      ctx.drawImage(img, 0, 0, 100, 100);
    });
  }
};
this.inputs.forEach(depict);
for context:
myOptions.clr = the color
myOptions.uri = the url of the image
myOptions.ext = the extension of the image
While all images are drawn correctly, I can't figure out why the last fillStyle overlays the whole image. I just want each SVG to have the fillStyle that is attached to it.
I tried multiple globalCompositeOperation values in different orders. I also tried drawing the SVG between ctx.save and ctx.restore. No success… I might be missing some logic here.
So! I figured it out myself in the meantime :)
I created an async loop with a promise. Inside it I created a temporary canvas per image, which I then drew onto one parent canvas. Since "source-in" composites against everything already on a shared canvas, the last fill color was bleeding over all previously drawn images; an offscreen canvas per image confines the fill to that image. I took inspiration from this solution: https://stackoverflow.com/a/6687218/15289586
Here is the final code:
// create the parent canvas
let parentCanv = document.createElement("canvas");
const getContext = () => parentCanv.getContext("2d");
const parentCtx = getContext();
parentCanv.classList.add("grid");

// get the wrapper from the DOM
let wrapper = document.getElementById("wrapper");

// this function loops through the array
async function drawShapes(files) {
  for (const file of files) {
    await depict(file);
  }
  // when the loop is done, append the parent canvas to the wrapper
  wrapper.appendChild(parentCanv);
}

// async image loading worked best
const loadImage = (url) => {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = () => reject(new Error(`load ${url} fail`));
    img.src = url;
  });
};

// depict the file
const depict = (options) => {
  // make a promise
  return new Promise((accept, reject) => {
    const myOptions = Object.assign({}, options);
    var childCanv = document.createElement("canvas");
    const getContext = () => childCanv.getContext("2d");
    const childCtx = getContext();
    if (myOptions.ext == "svg") {
      loadImage(myOptions.uri).then((img) => {
        childCtx.drawImage(img, 0, 0, 100, parentCanv.height);
        childCtx.globalCompositeOperation = "source-in";
        childCtx.fillStyle = myOptions.clr;
        childCtx.fillRect(0, 0, parentCanv.width, parentCanv.height);
        parentCtx.drawImage(childCanv, 0, 0);
        accept();
      });
    } else {
      loadImage(myOptions.uri).then((img) => {
        // ctx.fillStyle = myOptions.clr;
        childCtx.drawImage(img, 0, 0, 100, parentCanv.height);
        parentCtx.drawImage(childCanv, 0, 0);
        accept();
      });
    }
  });
};
drawShapes(this.inputs);
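The core of the per-image compositing can be distilled into a small helper — a sketch, assuming the same loadImage promise as above:

// Tint an SVG on its own offscreen canvas so "source-in" only
// affects that one image, then return the canvas for compositing.
const tintSvg = async (uri, color, w, h) => {
  const canv = document.createElement("canvas");
  canv.width = w;
  canv.height = h;
  const ctx = canv.getContext("2d");
  const img = await loadImage(uri);
  ctx.drawImage(img, 0, 0, w, h);
  ctx.globalCompositeOperation = "source-in"; // keep fill only where the image has pixels
  ctx.fillStyle = color;
  ctx.fillRect(0, 0, w, h);
  return canv; // caller draws this onto the parent canvas
};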

Loading an .obj file with Expo-three?

I am trying to insert a .obj file into a React Native app built using Expo.
From the examples I've found that work successfully, most seem to rely on building spheres or cubes within the rendering. I haven't found a good example of a successful render of a local file, specifically a .obj.
I'm using the expo-three documentation, which describes rendering .obj files but has no working examples.
This is what I have so far, which is not producing any rendered object, but I want to know if I am on the right track and what I am missing to get the object to render.
Below is the current file code.
import { GLView } from 'expo-gl'; // assumed: GLView is used below but was missing from the snippet
import { Renderer, TextureLoader } from 'expo-three';
import * as React from 'react';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader';
import {
  AmbientLight,
  Fog,
  GridHelper,
  PerspectiveCamera,
  PointLight,
  Scene,
  SpotLight,
} from 'three';
import { Asset } from 'expo-asset';
import { MTLLoader } from 'three/examples/jsm/loaders/MTLLoader';

export default function ThreeDPhytosaur() {
  return (
    <GLView
      style={{ flex: 1 }}
      onContextCreate={async (gl) => {
        const { drawingBufferWidth: width, drawingBufferHeight: height } = gl;
        const sceneColor = 0x6ad6f0;
        const renderer = new Renderer({ gl });
        renderer.setSize(width, height);
        renderer.setClearColor(sceneColor);
        const camera = new PerspectiveCamera(70, width / height, 0.01, 1000);
        camera.position.set(2, 5, 5);
        const scene = new Scene();
        scene.fog = new Fog(sceneColor, 1, 10000);
        scene.add(new GridHelper(10, 10));
        const ambientLight = new AmbientLight(0x101010);
        scene.add(ambientLight);
        const pointLight = new PointLight(0xffffff, 2, 1000, 1);
        pointLight.position.set(0, 200, 200);
        scene.add(pointLight);
        const spotLight = new SpotLight(0xffffff, 0.5);
        spotLight.position.set(0, 500, 100);
        spotLight.lookAt(scene.position);
        scene.add(spotLight);
        const asset = Asset.fromModule(model['phytosaur']);
        await asset.downloadAsync();
        const objectLoader = new OBJLoader();
        const object = await objectLoader.loadAsync(asset.uri);
        object.scale.set(0.025, 0.025, 0.025);
        scene.add(object);
        camera.lookAt(object.position);
        const render = () => {
          timeout = requestAnimationFrame(render);
          renderer.render(scene, camera);
          gl.endFrameEXP();
        };
        render();
      }}
    />
  );
}

const model = {
  'phytosaur': require('../assets/phytosaur.obj'),
};
Thanks very much!
This is the code that I got to render the obj file. I changed the structure of the original file based on some other examples I found.
Hopefully this helps someone else!
import { GLView } from 'expo-gl'; // assumed: GLView is used below but was missing from the snippet
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';
import { Asset } from 'expo-asset';
import { Renderer } from 'expo-three';
import * as React from 'react';
import * as THREE from 'three'; // assumed: needed for THREE.Math.degToRad below
import {
  AmbientLight,
  Fog,
  PerspectiveCamera,
  PointLight,
  Scene,
  SpotLight,
} from 'three';

export default function ThreeDTwo() {
  let timeout;
  React.useEffect(() => {
    // Clear the animation loop when the component unmounts
    return () => clearTimeout(timeout);
  }, []);
  return (
    <GLView
      style={{ flex: 1 }}
      onContextCreate={async (gl) => {
        const { drawingBufferWidth: width, drawingBufferHeight: height } = gl;
        const sceneColor = 0x668096;
        // Create a WebGLRenderer without a DOM element
        const renderer = new Renderer({ gl });
        renderer.setSize(width, height);
        renderer.setClearColor(0x668096);
        const camera = new PerspectiveCamera(70, width / height, 0.01, 1000);
        camera.position.set(2, 5, 5);
        const scene = new Scene();
        scene.fog = new Fog(sceneColor, 1, 10000);
        const ambientLight = new AmbientLight(0x101010);
        scene.add(ambientLight);
        const pointLight = new PointLight(0xffffff, 2, 1000, 1);
        pointLight.position.set(0, 200, 200);
        scene.add(pointLight);
        const spotLight = new SpotLight(0xffffff, 0.5);
        spotLight.position.set(0, 500, 100);
        spotLight.lookAt(scene.position);
        scene.add(spotLight);
        const asset = Asset.fromModule(require("../assets/phytosaur_without_mtl.obj"));
        await asset.downloadAsync();
        // instantiate a loader
        const loader = new OBJLoader();
        // load a resource
        loader.load(
          // resource URL
          asset.localUri,
          // called when the resource is loaded
          function (object) {
            object.scale.set(0.065, 0.065, 0.065);
            scene.add(object);
            camera.lookAt(object.position);
            // rotate my obj file
            function rotateObject(object, degreeX = 0, degreeY = 0, degreeZ = 0) {
              object.rotateX(THREE.Math.degToRad(degreeX));
              object.rotateY(THREE.Math.degToRad(degreeY));
              object.rotateZ(THREE.Math.degToRad(degreeZ));
            }
            // usage:
            rotateObject(object, 0, 0, 70);
            // animate rotation
            function update() {
              object.rotation.x += 0.015;
            }
            const render = () => {
              timeout = requestAnimationFrame(render);
              update();
              renderer.render(scene, camera);
              gl.endFrameEXP();
            };
            render();
          },
          // called while loading is in progress
          function (xhr) {
            console.log((xhr.loaded / xhr.total * 100) + '% loaded');
          },
          // called when loading has errors
          function (error) {
            console.log(error);
          }
        );
      }}
    />
  );
}
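One detail worth noting: the working version downloads the asset first and hands the loader asset.localUri (a file path on the device) rather than the remote asset.uri. A condensed sketch of the same flow using the promise-based loadAsync, assuming the three version in use exposes it (as the question's own code suggests):

const asset = Asset.fromModule(require('../assets/phytosaur_without_mtl.obj'));
await asset.downloadAsync(); // ensures localUri points at a downloaded copy
const loader = new OBJLoader();
const object = await loader.loadAsync(asset.localUri);
object.scale.set(0.065, 0.065, 0.065);
scene.add(object);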

Image ratio when drawing images in react-native

I have an app where I take a picture and draw it to a PDF file.
I have been trying to get the pictures in the right ratio. If I take all pictures horizontal, it is fine. If I take all pictures vertical, it is fine. The problem is, when I take both vertical and horizontal, the last picture's ratio is applied to all pictures.
I save the image ratio to AsyncStorage, where I read it in another component.
Question is: how can I keep the original ratios when I take pictures both ways?
Camera component -> takePicture function:
takePicture = async () => {
  if (this.camera) {
    const options = { quality: 0.5, base64: true, fixOrientation: true };
    const data = await this.camera.takePictureAsync(options);
    console.log(data.uri);
    Image.getSize(data.uri, (width, height) => { // Get image height and width
      let imageWidth = width;
      let imageHeight = height;
      let stringImageWidth = '' + imageWidth; // Convert to string for AsyncStorage saving
      let stringImageHeight = '' + imageHeight;
      const horizontalRatioCalc = () => { // return ratio for own variable
        return (imageWidth / imageHeight);
      };
      const verticalRatioCalc = () => {
        return (imageWidth / imageHeight);
      };
      horizontalImageRatio = horizontalRatioCalc();
      verticalImageRatio = verticalRatioCalc();
      stringHorizontalImageRatio = '' + horizontalImageRatio;
      stringVerticalImageRatio = '' + verticalImageRatio;
      console.log(`Size of the picture ${imageWidth}x${imageHeight}`);
      horizontalRatio = async () => {
        if (imageHeight > imageWidth) {
          verticalRatioCalc();
          try {
            AsyncStorage.setItem("imageVerticalRatio", stringVerticalImageRatio),
            AsyncStorage.setItem("asyncimageWidth", stringImageWidth),
            AsyncStorage.setItem("asyncimageHeight", stringImageHeight),
            console.log(`Vertical ratio saved! It's ${stringVerticalImageRatio}`),
            console.log(`Image width saved! It's ${stringImageWidth}`),
            console.log(`Image height saved! It's ${stringImageHeight}`)
          } catch (e) {
            console.log(`AsyncStorage saving of image vertical ratio cannot be done.`)
          }
        }
        if (imageHeight < imageWidth) {
          horizontalRatioCalc();
          try {
            AsyncStorage.setItem("imageHorizontalRatio", stringHorizontalImageRatio),
            AsyncStorage.setItem("asyncimageWidth", stringImageWidth),
            AsyncStorage.setItem("asyncimageHeight", stringImageHeight),
            console.log(`Horizontal ratio saved! It's ${stringHorizontalImageRatio}`),
            console.log(`Image width saved! It's ${stringImageWidth}`),
            console.log(`Image height saved! It's ${stringImageHeight}`)
          } catch (e) {
            console.log(`AsyncStorage saving of image horizontal ratio cannot be done.`)
          }
        }
      };
      horizontalRatio();
    }, (error) => {
      console.error(`Cannot get size of the image: ${error.message}`);
    });
    back(data.uri);
    // CameraRoll.saveToCameraRoll(data.uri).then((data) => {
    //   console.log(data)
    //   back(data);
    // }).catch((error) => {
    //   console.log(error)
    // })
  }
};
CreatePdf component -> PDF drawing function:
readData = async () => {
  try {
    kuvakorkeus = await AsyncStorage.getItem('asyncimageHeight')
    kuvaleveys = await AsyncStorage.getItem('asyncimageWidth')
    vertikaaliratio = await AsyncStorage.getItem('imageVerticalRatio')
    horisontaaliratio = await AsyncStorage.getItem('imageHorizontalRatio')
    if (kuvakorkeus !== null) {
      // note: only the first return ever runs; the values are read
      // from the globals assigned above anyway
      return kuvakorkeus;
      return kuvaleveys;
      return vertikaaliratio;
      return horisontaaliratio;
    }
  } catch (e) {
    alert('Failed to fetch the data from storage')
  }
}
readData();

kuvanpiirto = () => {
  pdfKuvaKorkeus = 100;
  if (kuvakorkeus > kuvaleveys) { // VERTICAL / PYSTYKUVA
    page.drawImage(arr[i].path.substring(7), 'jpg', {
      x: imgX,
      y: imgY,
      width: pdfKuvaKorkeus * vertikaaliratio,
      height: pdfKuvaKorkeus,
    })
  }
  if (kuvakorkeus < kuvaleveys) { // Horizontal / Vaakakuva
    page.drawImage(arr[i].path.substring(7), 'jpg', {
      x: imgX,
      y: imgY,
      width: pdfKuvaKorkeus * horisontaaliratio,
      height: pdfKuvaKorkeus,
    })
  }
}
kuvanpiirto();
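The root cause is that a single ratio in AsyncStorage is overwritten by every new picture, so the last one wins for all images. A sketch of an alternative, assuming the pictures end up in an array like arr above: store each image's dimensions next to its path and derive the ratio per entry at draw time.

// Hypothetical shape: [{ path, width, height }] kept per picture
// instead of one global ratio in AsyncStorage.
const pictures = [];

const onPictureTaken = (uri) =>
  Image.getSize(uri, (width, height) => {
    pictures.push({ path: uri, width, height });
  });

// At PDF time the ratio comes from the entry itself:
const pdfKuvaKorkeus = 100;
pictures.forEach((pic) => {
  page.drawImage(pic.path.substring(7), 'jpg', {
    x: imgX,
    y: imgY,
    width: pdfKuvaKorkeus * (pic.width / pic.height), // per-image ratio
    height: pdfKuvaKorkeus,
  });
});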

Problem with script to change number of FlatList columns depending on rotation/size

I'm working on some code to calculate numColumns for a FlatList. The intention is 3 columns on a landscape tablet, 2 on a portrait tablet, and 1 on a portrait phone.
Here's my code:
const [width, setWidth] = useState(Dimensions.get('window').width);
const [imageWidth, setImageWidth] = useState(100);
const [imageHeight, setImageHeight] = useState(100);
const [columns, setColumns] = useState(3);

useEffect(() => {
  function handleChange() {
    setWidth(Dimensions.get('window').width);
  }
  Dimensions.addEventListener("change", handleChange);
  return () => Dimensions.removeEventListener("change", handleChange);
}, [width]);

useEffect(() => {
  if (width > 1100) {
    setColumns(3);
  } else if (width <= 1099 && width > 600) {
    setColumns(2);
  } else {
    setColumns(1);
  }
  setImageWidth((width - (64 * columns) + 15) / columns);
  setImageHeight(((width - (64 * columns) + 15) / columns) * .6);
}, [width]);
imageWidth and imageHeight are passed to the render component of the FlatList to size an image.
It seems to work fine when I load it in landscape mode, but if I rotate to portrait, the columns come out wrong.
Then, if I go back to landscape, it stays at 2 columns?
Any idea how I can fix this?
You're not tracking the height of the device; you need it to calculate the orientation of the device (isLandscape).
Logically it flows as the following:
is the device landscape? (setColumns 3)
is the device wide and portrait? (setColumns 2)
otherwise (setColumns 1)
From there you can pass that into the second useEffect (say, a useColumnsHook). This should be able to set the height/width based on the orientation of the device.
I also recommend setting the height/width based on percentages rather than exact pixels for devices (100%, 50%, 33.3%).
// width/height come from the useScreenData hook below, so there is
// no separate width state (declaring both would redeclare `width`)
const [imageWidth, setImageWidth] = useState(100);
const [imageHeight, setImageHeight] = useState(100);
const [columns, setColumns] = useState(3);

/**
 * orientation
 *
 * return {
 *   width,
 *   height
 * }
 */
const useScreenData = () => {
  const [screenData, setScreenData] = useState(Dimensions.get("screen"))
  useEffect(() => {
    const onChange = (result) => {
      setScreenData(result.screen)
    }
    Dimensions.addEventListener("change", onChange)
    return () => Dimensions.removeEventListener("change", onChange)
  })
  return {
    ...screenData,
  }
}

const { width, height } = useScreenData()
const isLandscape = width > height

useEffect(() => {
  if (isLandscape && width > 1100) {
    // handle landscape
    setColumns(3)
  } else if (!isLandscape && (width <= 1099 && width > 600)) {
    setColumns(2)
  } else {
    setColumns(1)
  }
  setImageWidth((width - (64 * columns) + 15) / columns);
  setImageHeight(((width - (64 * columns) + 15) / columns) * .6);
}, [width, isLandscape]);
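A caveat on the Dimensions listener used in this thread: in React Native 0.65+, Dimensions.removeEventListener is deprecated and addEventListener returns a subscription object instead. A sketch of the cleanup under that assumption, slotting into the useScreenData hook above:

useEffect(() => {
  // addEventListener returns an EmitterSubscription in RN 0.65+
  const subscription = Dimensions.addEventListener("change", ({ screen }) => {
    setScreenData(screen);
  });
  return () => subscription.remove();
}, []);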
1: Tablet or Phone
You can use the react-native-device-info package along with the Dimensions API. Check the isTablet() method and apply different styles according to the result.
If you are an Expo user, you have to use the expo-constants userInterfaceIdiom value to detect a tablet (see the expo-constants reference).
import DeviceInfo from 'react-native-device-info';
let isTablet = DeviceInfo.isTablet();
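For the Expo case, a sketch assuming Constants.platform.ios.userInterfaceIdiom (available on iOS only):

import Constants from 'expo-constants';
// 'handset' or 'tablet'; undefined on Android
const isTablet = Constants.platform?.ios?.userInterfaceIdiom === 'tablet';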
2: Portrait mode or Landscape mode
Then you can find out whether you are in portrait or landscape mode with:
const isPortrait = () => {
  const dim = Dimensions.get('screen');
  return dim.height >= dim.width;
};
3: Code:
import React, { useState, useEffect } from "react";
import { View, Text, Dimensions } from "react-native";
import DeviceInfo from "react-native-device-info";

const App = () => {
  const isLandscapeFunction = () => {
    const dim = Dimensions.get("screen");
    return dim.width >= dim.height;
  };
  const [width, setWidth] = useState(Dimensions.get("window").width);
  const [imageWidth, setImageWidth] = useState(100);
  const [imageHeight, setImageHeight] = useState(100);
  const [columns, setColumns] = useState(3);
  let isTabletPhone = DeviceInfo.isTablet();
  useEffect(() => {
    function handleChange(result) {
      setWidth(result.screen.width);
      const isLandscape = isLandscapeFunction();
      if (isLandscape && isTabletPhone) {
        setColumns(3);
      } else if (!isLandscape && isTabletPhone) {
        setColumns(2);
      } else {
        setColumns(1);
      }
    }
    Dimensions.addEventListener("change", handleChange);
    return () => Dimensions.removeEventListener("change", handleChange);
  });
  useEffect(() => {
    console.log(columns);
    setImageWidth((width - 64 * columns + 15) / columns);
    setImageHeight(((width - 64 * columns + 15) / columns) * 0.6);
  }, [columns, width]);
  return (
    <View style={{ justifyContent: "center", alignItems: "center", flex: 1 }}>
      <Text>imageWidth {imageWidth}</Text>
      <Text>imageHeight {imageHeight}</Text>
    </View>
  );
};
export default App;

Expo-pixi: save stage children to App's higher state and retrieve them

I'm trying another solution to my problem:
The thing is: I'm rendering a Sketch component with a background image and sketching over it.
onReady = async () => {
  const { layoutWidth, layoutHeight, points } = this.state;
  this.sketch.graphics = new PIXI.Graphics();
  const linesStored = this.props.screenProps.getSketchLines();
  if (this.sketch.stage) {
    if (layoutWidth && layoutHeight) {
      const background = await PIXI.Sprite.fromExpoAsync(this.props.image);
      background.width = layoutWidth * scaleR;
      background.height = layoutHeight * scaleR;
      this.sketch.stage.addChild(background);
      this.sketch.renderer._update();
    }
    if (linesStored) {
      for (let i = 0; i < linesStored.length; i++) {
        this.sketch.stage.addChild(linesStored[i])
        this.sketch.renderer._update();
      }
    }
  }
};
This linesStored variable returns data I've saved onChange:
onChangeAsync = async (param) => {
  const { uri } = await this.sketch.takeSnapshotAsync();
  this.setState({
    image: { uri },
    showSketch: false,
  });
  if (this.sketch.stage.children.length > 0) {
    this.props.screenProps.storeSketchOnState(this.sketch.stage.children);
  }
  this.props.callBackOnChange({
    image: uri,
    changeAction: this.state.onChangeAction,
    startSketch: this.startSketch,
    undoSketch: this.undoSketch,
  });
};
storeSketchOnState saves this.sketch.stage.children, so when I change screens and come back to the screen where my Sketch component is rendered, I can retrieve the sketch.stage.children from App.js state and apply them to the Sketch component to persist the sketching I was doing before.
I'm trying to apply the retrieved data like this:
if (linesStored) {
  for (let i = 0; i < linesStored.length; i++) {
    this.sketch.stage.addChild(linesStored[i])
    this.sketch.renderer._update();
  }
}
but it is not working =(
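A hedged observation, not a verified fix: stage children are live PIXI display objects bound to the renderer and GL context that created them, so re-adding them after that context is torn down may silently fail. Under that assumption, a sketch that persists plain point data instead (storeSketchPoints / getSketchPoints are hypothetical helpers mirroring storeSketchOnState):

onChangeAsync = async () => {
  const { uri } = await this.sketch.takeSnapshotAsync();
  // persist serializable data, not display objects
  this.props.screenProps.storeSketchPoints(this.state.points);
  this.setState({ image: { uri }, showSketch: false });
};

onReady = () => {
  const stored = this.props.screenProps.getSketchPoints();
  if (stored && stored.length > 0) {
    // rebuild the strokes in the new stage from raw points
    const graphics = new PIXI.Graphics();
    graphics.lineStyle(4, 0x000000);
    stored.forEach(({ x, y }, i) =>
      i === 0 ? graphics.moveTo(x, y) : graphics.lineTo(x, y)
    );
    this.sketch.stage.addChild(graphics);
    this.sketch.renderer._update();
  }
};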