I'm converting a tensor returned from a TensorFlow camera stream into a JPEG. I've been able to do so, but the JPEG image quality is terrible. This is for a mobile React Native app developed with Expo. The library I'm using to convert the tensor to an image is jpeg-js.
This is the code I'm using to transform the tensor into an image:
const handleCameraStream = async (imageAsTensors) => {
  const loop = async () => {
    const tensor = await imageAsTensors.next().value;
    console.log(tensor);
    const [height, width] = tensor.shape;
    const data = Buffer.from(
      // concat with an extra alpha channel and slice up to 4 channels to handle 3- and 4-channel tensors
      tf.concat([tensor, tf.ones([height, width, 1]).mul(255)], -1)
        .slice([0], [height, width, 4])
        .dataSync(),
    );
    const rawImageData = { data, width, height };
    const jpegImageData = jpeg.encode(rawImageData, 100); // jpeg-js clamps quality to the 1-100 range
    const imgBase64 = tf.util.decodeString(jpegImageData.data, "base64");
    const salt = `${Date.now()}-${Math.floor(Math.random() * 10000)}`;
    const uri = FileSystem.documentDirectory + `tensor-${salt}.jpg`;
    await FileSystem.writeAsStringAsync(uri, imgBase64, {
      encoding: FileSystem.EncodingType.Base64,
    });
    setUri(uri);
    // return {uri, width, height}
  };
  // requestAnimationFrameId = requestAnimationFrame(loop);
  if (!uri) setTimeout(loop, 2000);
};
The picture in the top half of the image below is the camera stream; the picture in the bottom half is the transformed tensor.
I tried your code with increased resize height and width and was able to get an image with good quality. If that doesn't work, maybe you can post the rest of your code and I'll try to help.
[Example image]
<TensorCamera
  type={Camera.Constants.Type.front}
  resizeHeight={cameraHeight}
  resizeWidth={cameraWidth}
  resizeDepth={3}
  onReady={handleCameraStream}
  autorender={true}
/>
Related
I am using a custom Google Teachable Machine model in my React Native app and getting this error:
[TypeError: gl.texStorage2D is not a function. (In 'gl.texStorage2D(tex2d, 1, internalFormat, width, height)', 'gl.texStorage2D' is undefined)]
Code:
const fileUri = ImageDetail.uri;
setImage(fileUri);
const imgB64 = await FileSystem.readAsStringAsync(fileUri, {
  encoding: FileSystem.EncodingType.Base64,
});
const imgBuffer = tf.util.encodeString(imgB64, 'base64').buffer;
const raw = new Uint8Array(imgBuffer);
const IMGSIZE = 224;
const imageTensor = decodeJpeg(raw)
  .expandDims()                        // add a batch dimension
  .resizeBilinear([IMGSIZE, IMGSIZE])  // scale to the model's input size
  .div(tf.scalar(255))                 // normalize to [0, 1]
  .reshape([1, IMGSIZE, IMGSIZE, 3]);
const prediction = await model.predict(imageTensor).data();
setClassification(prediction);
When I change imageTensor to decodeJpeg(raw), without the resizing and reshaping, I get this error:
[Error: Error when checking : expected input_1 to have 4 dimension(s), but got array with shape [1280,960,3]]
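That second error just means the model wants a batch dimension: decodeJpeg alone returns a rank-3 [height, width, 3] tensor, while input_1 expects rank-4. A minimal preprocessing sketch, reusing the 224×224 size from the snippet above:

// raw is the Uint8Array of JPEG bytes from the snippet above
const IMGSIZE = 224;
const input = decodeJpeg(raw)          // int32 tensor of shape [height, width, 3]
  .resizeBilinear([IMGSIZE, IMGSIZE])  // scale to the model's expected input size
  .div(255)                            // normalize to [0, 1]
  .expandDims(0);                      // add the batch dimension -> [1, 224, 224, 3]
const prediction = await model.predict(input).data();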
I'm trying to integrate an ML model into a React Native application. The first step was to transform my image into a tensor. I followed this tutorial, but I'm getting the following error:
Argument 'images' passed to 'resizeNearestNeighbor' must be a Tensor or TensorLike, but got 'Tensor'
import * as tf from '@tensorflow/tfjs'
import { bundleResourceIO, decodeJpeg, fetch } from '@tensorflow/tfjs-react-native'
import * as fs from 'expo-file-system'

const App = () => {
  const transformImageToTensor = async (uri) => {
    const img64 = await fs.readAsStringAsync(uri, { encoding: fs.EncodingType.Base64 })
    const imgBuffer = tf.util.encodeString(img64, 'base64').buffer
    const raw = new Uint8Array(imgBuffer)
    let imgTensor = decodeJpeg(raw)
    const scalar = tf.scalar(255)
    // resize the image
    imgTensor = tf.image.resizeNearestNeighbor(imgTensor, [300, 300])
    // normalize; if a normalization layer is in the model, this step can be skipped
    const tensorScaled = imgTensor.div(scalar)
    // final shape of the tensor
    const img = tf.reshape(tensorScaled, [1, 300, 300, 3])
    return img
  }
  useEffect(() => {
    transformImageToTensor('/Users/roxana/Desktop/rndigits/project/model/example.jpeg')
  })
}
I found a mention that @tensorflow/tfjs and @tensorflow/tfjs-react-native may have different implementations of the Tensor object, but I'm not sure how to code around that. No other tutorial out there worked either.
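For what it's worth, that error usually means two copies of @tensorflow/tfjs-core ended up in the bundle, so the tensor created by decodeJpeg fails the instanceof Tensor check inside the other copy's tf.image.resizeNearestNeighbor. Two things worth trying, sketched under that assumption (the pinned version below is only an example):

// 1) Use the chained op on the tensor itself instead of the tf.image.* namespace,
//    so the resize runs in the same tfjs copy that created the tensor:
imgTensor = imgTensor.resizeNearestNeighbor([300, 300])

// 2) Force a single @tensorflow/tfjs-core copy via package.json (yarn resolutions):
// "resolutions": {
//   "@tensorflow/tfjs-core": "3.11.0"
// }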
There is decodeJpeg in @tensorflow/tfjs-react-native but no encodeJpeg. How do you write a tensor to a local JPEG file, then?
I tried to look at the code and "invert" the function, and ended up writing:
import * as tf from '@tensorflow/tfjs';
import * as FileSystem from 'expo-file-system';
import * as jpeg from 'jpeg-js';

export const encoderJpeg = async (tensor, name) => {
  // add alpha channel if missing
  const shape = [...tensor.shape]
  shape.pop()
  shape.push(4)
  const tensorWithAlpha = tf.concat([tensor, tensor], [-1]).slice([0], shape)
  const array = new Uint8Array(tensorWithAlpha.dataSync())
  const rawImageData = {
    data: array.buffer,
    width: shape[1],
    height: shape[0],
  };
  const jpegImageData = jpeg.encode(rawImageData, 50);
  const imgBase64 = tf.util.decodeString(jpegImageData.data, "base64")
  const uri = FileSystem.documentDirectory + name;
  await FileSystem.writeAsStringAsync(uri, imgBase64, {
    encoding: FileSystem.EncodingType.Base64,
  });
  return uri
}
But when I display the images with an <Image /> component, they are all plain green.
This is my final util for doing this:
import * as tf from '@tensorflow/tfjs';
import * as FileSystem from 'expo-file-system';
import * as jpeg from 'jpeg-js';

export const encodeJpeg = async (tensor) => {
  const height = tensor.shape[0]
  const width = tensor.shape[1]
  const data = Buffer.from(
    // concat with an extra alpha channel and slice up to 4 channels to handle 3- and 4-channel tensors
    tf.concat([tensor, tf.ones([height, width, 1]).mul(255)], -1)
      .slice([0], [height, width, 4])
      .dataSync(),
  )
  const rawImageData = {data, width, height};
  const jpegImageData = jpeg.encode(rawImageData, 100);
  const imgBase64 = tf.util.decodeString(jpegImageData.data, "base64")
  const salt = `${Date.now()}-${Math.floor(Math.random() * 10000)}`
  const uri = FileSystem.documentDirectory + `tensor-${salt}.jpg`;
  await FileSystem.writeAsStringAsync(uri, imgBase64, {
    encoding: FileSystem.EncodingType.Base64,
  });
  return {uri, width, height}
}
You can use imgBase64 directly in your Image component as follows:
<Image source={{uri: 'data:image/jpeg;base64,' + imgBase64}} />
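For completeness, a quick usage sketch of the util above (imageTensor is assumed here to be an int32 [height, width, 3] tensor with 0-255 values, e.g. straight from decodeJpeg):

const { uri, width, height } = await encodeJpeg(imageTensor);
// render the written file:
// <Image source={{ uri }} style={{ width, height }} />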
I'm trying to place a sketch over a building on the map with Leaflet. I've succeeded so far: I can place and scale an image in SVG format on the map. But when I add a new sketch to the map, I want the old sketch to be removed from the map.
map.removeLayer(sketchLayer)
sketchLayer.removeFrom(map)
map.removeLayer(L.sketchLayer)
I tried these, without success.
My code:
splitCoordinate (value) {
  const topLeftCoordinates = value.top_left.split(',')
  const topRightCoordinates = value.top_right.split(',')
  const bottomLeftCoordinates = value.bottom_left.split(',')
  this.initMap(topLeftCoordinates, topRightCoordinates, bottomLeftCoordinates)
},
initMap (topLeftCoordinates, topRightCoordinates, bottomLeftCoordinates) {
  let map = this.$refs.map.mapObject
  map.flyTo(new L.LatLng(topLeftCoordinates[0], topLeftCoordinates[1]), 20, { animation: true })
  const vm = this
  var topleft = L.latLng(topLeftCoordinates[0], topLeftCoordinates[1])
  var topright = L.latLng(topRightCoordinates[0], topRightCoordinates[1])
  var bottomleft = L.latLng(bottomLeftCoordinates[0], bottomLeftCoordinates[1])
  // L.imageOverlay.rotated comes from the Leaflet.ImageOverlay.Rotated plugin
  var sketchLayer = L.imageOverlay.rotated(vm.imgUrl, topleft, topright, bottomleft, {
    opacity: 1,
    interactive: true
  })
  map.addLayer(sketchLayer)
}
Fix:
The error was related to JavaScript scope: sketchLayer was a local variable inside initMap, so it couldn't be referenced later to remove the layer. I defined sketchLayer in the component's data and my problem was solved.
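A minimal sketch of that fix (the structure is mine; only sketchLayer and imgUrl come from the code above):

data () {
  return {
    sketchLayer: null // keep the overlay on the component, not in a local variable
  }
},
methods: {
  initMap (topLeftCoordinates, topRightCoordinates, bottomLeftCoordinates) {
    const map = this.$refs.map.mapObject
    // remove the previous sketch before adding a new one
    if (this.sketchLayer) {
      map.removeLayer(this.sketchLayer)
    }
    const topleft = L.latLng(topLeftCoordinates[0], topLeftCoordinates[1])
    const topright = L.latLng(topRightCoordinates[0], topRightCoordinates[1])
    const bottomleft = L.latLng(bottomLeftCoordinates[0], bottomLeftCoordinates[1])
    this.sketchLayer = L.imageOverlay.rotated(this.imgUrl, topleft, topright, bottomleft, {
      opacity: 1,
      interactive: true
    })
    map.addLayer(this.sketchLayer)
  }
}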
I am building an iOS app with React Native, and my code captures a screenshot successfully. The problem is that I want to move the snapshot to the Images folder so it is easily accessible by the user.
I use the following code:
snapshot = async () => {
  const targetPixelCount = 1080; // if you want full-HD pictures
  const pixelRatio = PixelRatio.get(); // the pixel ratio of the device
  // pixels * pixelRatio = targetPixelCount, so pixels = targetPixelCount / pixelRatio
  const pixels = targetPixelCount / pixelRatio;
  const result = await Expo.takeSnapshotAsync(this, {
    result: 'file',
    height: pixels,
    width: pixels,
    quality: 1,
    format: 'png',
  });
  if (result) {
    //RNFS.moveFile(result, 'Documents/snapshot.png');
    Alert.alert('Snapshot', 'Snapshot saved to ' + result);
  } else {
    Alert.alert('Snapshot', 'Failed to save snapshot');
  }
}
Does anybody know how to move the image to the Images folder?
Thank you
Use the RN CameraRoll Module: https://facebook.github.io/react-native/docs/cameraroll
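A minimal sketch with that module, assuming result is the file URI returned by takeSnapshotAsync above (CameraRoll was exported from react-native at the time; it later moved to @react-native-community/cameraroll):

import { CameraRoll, Alert } from 'react-native';

// after takeSnapshotAsync resolves with a file URI:
const savedUri = await CameraRoll.saveToCameraRoll(result, 'photo');
Alert.alert('Snapshot', 'Saved to camera roll: ' + savedUri);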