I'm working on a React Native iOS app where I want to take certain images from a user's Camera Roll and save them in cloud storage (right now I'm using Firebase).
I'm currently getting the images off the Camera Roll, and to save each image to the cloud I'm converting each image URI to base64 and then to a blob using the react-native-fetch-blob library. While this works, the base64 conversion for each image is taking a very long time.
What would be the most efficient/quickest way to take the image URI for each image from the Camera Roll, convert it, and store it in cloud storage?
Is there a better way I can be handling this? Would using Web Workers speed up the base64 conversion process?
My current image conversion process:
import RNFetchBlob from 'react-native-fetch-blob';
const Blob = RNFetchBlob.polyfill.Blob;
const fs = RNFetchBlob.fs
window.XMLHttpRequest = RNFetchBlob.polyfill.XMLHttpRequest
window.Blob = Blob
async function saveImages(images) {
  // Convert each Camera Roll asset to a Blob before uploading
  let blobs = await Promise.all(images.map(async asset => {
    let response = await convertImageToBlob(asset.node.image.uri);
    return response;
  }));
  // I will then send the array of blobs to Firebase storage
}

async function convertImageToBlob(uri, mime = 'image/jpg') {
  // Strip the file:// prefix so react-native-fetch-blob can read the path
  const uploadUri = uri.replace('file://', '');
  // Read the file as a base64 string, then build a Blob from it
  let data = await readStream(uploadUri);
  let blob = await Blob.build(data, { type: `${mime};BASE64` });
  return blob;
}

function readStream(uri) {
  return fs.readFile(uri, 'base64');
}
I found the solution below to be extremely helpful in speeding up the process. The base64 conversion now takes place on the native side rather than through JS.
React Native: Creating a custom module to upload camera roll images.
It's also worth noting this will convert the image to thumbnail resolution.
To convert an image to full resolution follow guillaumepiot's solution here:
https://github.com/scottdixon/react-native-upload-from-camera-roll/issues/1
I would follow the example here from the react-native-fetch-blob docs. It looks like you're trying to add an extra step when they take care of that for you.
https://github.com/wkh237/react-native-fetch-blob#upload-a-file-from-storage
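For reference, a minimal sketch of that direct-from-storage upload (the upload URL and headers below are placeholders, not from the question): RNFetchBlob.wrap(path) tells the library to stream the file itself instead of a base64 string.
// Sketch only: upload straight from the Camera Roll file path, skipping the base64 step.
// 'https://example.com/upload' and the headers are placeholders.
RNFetchBlob.fetch('POST', 'https://example.com/upload', {
  'Content-Type': 'application/octet-stream',
}, RNFetchBlob.wrap(uploadUri))
  .then((res) => console.log('upload finished', res.info().status))
  .catch((err) => console.log(err));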
I'm writing a Vue app which uses the Microsoft Graph API and SDK for initial authentication on the front end and then uses different aspects of the API throughout the app, like displaying emails, OneDrive files, etc.
I'm using the profile photo from a user's Microsoft account to display an avatar to other users. My issue is that when I call {graphApi}/me/photo/$value, the result returned is a Blob. This is the endpoint provided in MS Graph.
I've read the MS Graph docs thoroughly, combed MDN and other sources, and have not found a way to turn this result into a simple image in my markup.
Template markup:
<template>
<img :src="userPhoto" :alt="user.displayName" />
</template>
Setup function logic:
<script setup>
import { ref } from "vue"
import { client } from "./foobar"

const userPhoto = ref();

async function getPhoto() {
  const photo = await client.api("/me/photo/$value").get()
  console.log(photo.value)
  userPhoto.value = photo
}
</script>
Returned result:
{Blob, image:{id: default, size:48x48}}
So how do I decode or download the Blob properly to display an image in my Vue markup? I've tried createObjectURL and FileReader() without any luck. I'm sure there is a simple solution but I am not finding it. Thanks for the help.
Explanation:
In the snippet below, as you can see, I am passing the objectId of the employee fetched from Graph previously.
Then I make a call to get that employee's avatar/display picture.
The Graph profile photo endpoint returns the binary data of the photo.
Convert that binary data into a data:image/png;base64,<readAsDataURL> URL, e.g. data:image/png;base64,iVBORw0KGgoAAAANSU...
Use it in <img src="dataUrl" />.
// request.get is an axios-style call; responseType 'arraybuffer' gives us the raw photo bytes
let response = await request.get(GRAPH_CONFIG.GRAPH_DP_ENDPT + objectId + "/photos/48x48/$value", { responseType: 'arraybuffer', validateStatus: (status) => status === 200 || status === 404 })
if (response.status === 200) {
  // Wrap the bytes in a Blob and read it back as a data: URL
  let blob = new Blob([response.data], { type: 'image/jpeg' })
  let dataUrl = await new Promise((resolve) => {
    let reader = new FileReader()
    reader.onload = (event) => resolve(event.target.result)
    reader.readAsDataURL(blob)
  })
  return dataUrl // bind this to <img src="...">
}
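For the Vue setup in the question above, a simpler sketch (assuming the SDK call really resolves to a Blob, as the console output suggests) is to skip FileReader entirely and hand the Blob to URL.createObjectURL:
<script setup>
import { ref } from "vue"
import { client } from "./foobar"

const userPhoto = ref("")

async function getPhoto() {
  // The SDK resolves with a Blob for /me/photo/$value in the browser
  const photoBlob = await client.api("/me/photo/$value").get()
  // createObjectURL produces a blob: URL that <img :src="userPhoto"> can display directly
  userPhoto.value = URL.createObjectURL(photoBlob)
}
getPhoto()
</script>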
Using Adobe PDF Embed API, you can register a callback:
this.adobeDCView = new window.AdobeDC.View(config);
this.adobeDCView.registerCallback(
  window.AdobeDC.View.Enum.CallbackType.SAVE_API,
  (metaData, content, options) => {
  }
)
According to the docs here (https://www.adobe.io/apis/documentcloud/dcsdk/docs.html?view=view), content is:
content: The ArrayBuffer of file content
When I debug this content using the Chrome inspector, it shows me that content is an Int8Array.
Normally when we upload a PDF file, the user selects a file, we read it as a data URI to get base64, and push that to AWS. So I need to convert this PDF's data (an Int8Array) to base64 so I can also push it to AWS.
Everything I have found online converts a Uint8Array to base64, and I don't understand how to go from Int8Array to Uint8Array. I would think you could just add 128 to each signed int to get a value in the 0-255 range, but this doesn't seem to work.
I have tried using this:
let decoder = new TextDecoder('utf8');
let b64 = btoa(decoder.decode(content));
console.log(b64);
But I get this error:
ERROR DOMException: Failed to execute 'btoa' on 'Window': The string to be encoded contains characters outside of the Latin1 range.
Please help me figure out how to go from Int8Array to Base64.
I use the function in this answer.
For Embed API, use the "content" parameter from the save callback as the input to the function.
You can see a working example at this CodePen. The functional part is below.
adobeDCView.registerCallback(
  AdobeDC.View.Enum.CallbackType.SAVE_API,
  function (metaData, content, options) {
    /* Add your custom save implementation here...and based on that resolve or reject response in given format */
    var base64PDF = arrayBufferToBase64(content);
    var fileURL = "data:application/pdf;base64," + base64PDF;
    $("#submitButton").attr("href", fileURL);
    /* End save code */
    return new Promise((resolve, reject) => {
      resolve({
        code: AdobeDC.View.Enum.ApiResponseCode.SUCCESS,
        data: {
          /* Updated file metadata after successful save operation */
          metaData: { fileName: urlToPDF.split("/").slice(-1)[0] }
        }
      });
    });
  },
  saveOptions
);
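The arrayBufferToBase64 helper from the linked answer isn't shown here; a common version looks roughly like the sketch below (not the CodePen's exact code). The key point for the question: an Int8Array and a Uint8Array can view the same underlying buffer, so no +128 shifting is needed.
function arrayBufferToBase64(input) {
  // Accept either an ArrayBuffer or a typed array (the debugger showed an Int8Array):
  // a Uint8Array view reinterprets the same bytes without copying or shifting.
  const bytes = input instanceof ArrayBuffer
    ? new Uint8Array(input)
    : new Uint8Array(input.buffer, input.byteOffset, input.byteLength);
  let binary = "";
  for (let i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  return btoa(binary);
}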
I might be overthinking this, but I've got an avatar profile component that lets a user update their avatar image. I am using Google Cloud Storage as the image hosting and an image CDN (ImageKit) to handle the image optimization/caching. It all works well; however, I do have a minor annoyance that I was wondering if someone could help me with:
First, here's my code (will help explain what I am having issue with):
async avatarChangeHandler(e) {
  this.overlayShow = true //<-- show a loading indicator
  try {
    if (e.target.files.length) {
      this.image = await new Promise((resolve) => {
        const reader = new FileReader()
        reader.onload = (e) => {
          resolve(e.target.result)
        }
        reader.readAsDataURL(e.target.files[0])
      })
      await this.photoUpload() //<-- upload image to Google storage and return new image URL
      await this.updateAvatar() //<-- update user's profile in the database with new URL
      await this.$store.dispatch('getUserProfile', this.currentUser) //<-- grab updated profile
      console.log('updating...done')
      this.overlayShow = false //<-- terminate the loading indicator
    }
  } catch (error) {
    console.log(error)
    this.overlayShow = false //<-- also hide the loading indicator on failure
  }
}
The problem is that the user profile is updated with the new image URL and re-fetched faster than the image CDN can serve the new image, so overlayShow is already set to false while the old image is still showing for a second or so (depending on how much optimization is needed).
What can I do to make sure overlayShow is not set to false until the image CDN is done with the new image? I realize this may not be feasible, but I'm looking for advice or suggestions on the approach. Thanks!
In my React Native 0.63.2 app, after the user uploads images of artwork, the app does two things:
1. save the artwork record and image records on the backend server
2. save the images into cloud storage
Those two things are related and both have to succeed together. Here is the code:
const clickSave = async () => {
  console.log("save art work");
  try {
    // save artwork to backend server
    let art_obj = {
      _device_id,
      name,
      description,
      tag: (tagSelected.map((it) => it.name)),
      note: '',
    };
    let img_array = [], oneImg;
    imgs.forEach(ele => {
      oneImg = {
        fileName: "f" + helper.genRandomstring(8) + "_" + ele.fileName,
        path: ele.path,
        width: ele.width,
        height: ele.height,
        size_kb: Math.ceil(ele.size / 1024),
        image_data: ele.image_data,
      };
      img_array.push(oneImg);
    });
    art_obj.img_array = [...img_array];
    art_obj = JSON.stringify(art_obj);
    // assemble images
    let url = `${GLOBAL.BASE_URL}/api/artworks/new`;
    await helper.getAPI(url, _result, "POST", art_obj); //<<==#1. send artwork and image record to backend server
    // save image to cloud storage
    var storageAccessInfo = await helper.getStorageAccessInfo(stateVal.storageAccessInfo);
    if (storageAccessInfo && storageAccessInfo !== "upToDate")
      // update the context value
      stateVal.updateStorageAccessInfo(storageAccessInfo);

    let bucket_name = "oss-hz-1"; //<<<
    const configuration = {
      maxRetryCount: 3,
      timeoutIntervalForRequest: 30,
      timeoutIntervalForResource: 24 * 60 * 60
    };
    const STSConfig = {
      AccessKeyId: accessInfo.accessKeyId,
      SecretKeyId: accessInfo.accessKeySecret,
      SecurityToken: accessInfo.securityToken
    }
    const endPoint = 'oss-cn-hangzhou.aliyuncs.com'; //<<<
    const last_5_cell_number = _myself.cell.substring(_myself.cell.length - 5);
    let filePath, objkey;
    img_array.forEach(item => {
      console.log("init sts");
      AliyunOSS.initWithSecurityToken(STSConfig.SecurityToken, STSConfig.AccessKeyId, STSConfig.SecretKeyId, endPoint, configuration)
      //console.log("before upload", AliyunOSS);
      objkey = `${last_5_cell_number}/${item.fileName}`; // virtual subdir and file name
      filePath = item.path;
      AliyunOSS.asyncUpload(bucket_name, objkey, filePath).then((res) => { //<<==#2 send images to cloud storage with callback. But no action required after success.
        console.log("Success : ", res) //<<==not really necessary to have console output
      }).catch((error) => {
        console.log(error)
      })
    })
  } catch (err) {
    console.log(err);
    return false;
  };
};
The concern with the code above is that those two async calls may take a long time to finish, leaving the user waiting. After clicking the save button, the user may just want to move on to the next page and leave all of this running in the background. Is there a way to do that? Would removing the await (#1) and the callback (#2) accomplish it?
If you want to do both tasks in the background, then you can't use await. I see that you are using await when sending the artwork and image records to the backend (#1), so remove that and use .then().catch() instead; you don't need to remove the callback on #2.
If you need to make sure #1 finishes before doing #2, then you will need to move the code for #2 into #1's promise-resolution code (inside the .then()).
Now, for catching errors: you will need some sort of error handling that alerts the user that an error has occurred and that they should trigger another upload. One thing you can do is a red banner; I'm sure there are packages out there that can do that for you.
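Put together, the suggestion looks roughly like this sketch (it reuses the names from the question's code and assumes helper.getAPI returns a Promise, which it must since it was awaited above):
// #1 without await: the UI is free to navigate away while this runs
helper.getAPI(url, _result, "POST", art_obj)
  .then(() => {
    // #2 only starts after #1 succeeded
    img_array.forEach(item => {
      AliyunOSS.initWithSecurityToken(STSConfig.SecurityToken, STSConfig.AccessKeyId, STSConfig.SecretKeyId, endPoint, configuration);
      AliyunOSS.asyncUpload(bucket_name, `${last_5_cell_number}/${item.fileName}`, item.path)
        .catch((error) => console.log(error));
    });
  })
  .catch((err) => {
    // surface the failure to the user, e.g. with a red error banner
    console.log(err);
  });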
My intent:
I want my app to upload images to S3. If an image already exists, the server should record a reference to the existing image rather than asking the app to upload another copy.
How I imagine that works:
Hash image data
Send hash to server with request for signed url (to upload to AWS S3)
If hash matches something already stored, reference it and tell app
Initial thoughts:
Use ImageEditor.cropImage to get the image into the ImageStore, which will give me an appropriate URI. Then use getBase64ForTag(uri, success, failure) to retrieve base64 data for a hash calculation.
The problem:
According to the answer on this question, this process is not efficient in the least. The usual solution would be to use native methods, as described in the answer to this question, however I do not want to eject my Expo app for this feature.
My Question:
Is there a better way to hash image data? Or more fundamentally, is there a better way of ensuring that identical images are not duplicated in S3 storage?
EDIT 2020-10-21 :
The library has been updated, and you should now call:
_hashImage = async (imageUri) => {
  return await FileSystem.getInfoAsync(imageUri, { md5: true });
}
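A usage sketch, where imageUri is whatever local URI you already have (e.g. from the image picker):
const fsInfo = await _hashImage(imageUri);
console.log(fsInfo.md5); // hex-encoded MD5 of the file contents, ready to compare server-side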
ORIGINAL:
It turns out that Expo provides this out of the box.
Expo.FileSystem.getInfoAsync
myImageHashFunction = async (imageUri) => {
  let fsInfo = await Expo.FileSystem.getInfoAsync(imageUri, [{ md5: true }])
  console.log(fsInfo.md5)
}
If you are still looking for a solution:
This is how I got it working - create a base64 of the image and then create a hash of it.
import * as FileSystem from 'expo-file-system';
import * as Crypto from 'expo-crypto';
let info = await FileSystem.readAsStringAsync(imageUri, {
  encoding: FileSystem.EncodingType.Base64,
});

const hashData = await Crypto.digestStringAsync(
  Crypto.CryptoDigestAlgorithm.MD5,
  info
)