I am trying to show a message to the user after they capture an image and the image is saved to the gallery. I have searched the net but cannot find any solution. So far I have tried the following code from here for capturing an image:
takePicture = async function() {
  if (this.camera) {
    this.camera.takePicture().then(data => {
      FileSystem.moveAsync({
        from: data,
        to: `${FileSystem.documentDirectory}photos/Photo_${this.state.photoId}.jpg`,
      }).then(() => {
        this.setState({
          photoId: this.state.photoId + 1,
        });
        Vibration.vibrate();
      });
    });
  }
};
Now I want to know what I should do to get the completion event. Any help is highly appreciated.
I am not sure about everything in that code, but in native Android you can show a message this way:
Toast.makeText(getApplicationContext(),"Picture taken!",Toast.LENGTH_SHORT).show();
Instead of a Toast you can use a cross-platform library: react-native-dropdown-alert.
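If you want to stay in JavaScript rather than native Android code, a minimal sketch (reusing the takePicture code from the question, assuming expo-file-system, and using React Native's built-in ToastAndroid/Alert) shows the message once FileSystem.moveAsync has resolved, which is the completion point:

import * as FileSystem from 'expo-file-system';
import { Alert, Platform, ToastAndroid, Vibration } from 'react-native';

takePicture = async () => {
  if (!this.camera) return;
  const data = await this.camera.takePicture(); // same call as in the question
  await FileSystem.moveAsync({
    from: data,
    to: `${FileSystem.documentDirectory}photos/Photo_${this.state.photoId}.jpg`,
  });
  // moveAsync has resolved here, so the photo is saved -- this is the completion point
  this.setState({ photoId: this.state.photoId + 1 });
  Vibration.vibrate();
  if (Platform.OS === 'android') {
    ToastAndroid.show('Picture saved!', ToastAndroid.SHORT); // Android toast
  } else {
    Alert.alert('Picture saved!'); // cross-platform fallback
  }
};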
From some research, I've figured out that Expo APIs like takePictureAsync() can take a picture and save it to the app's cache. However, by default these APIs save the whole image. Is there any way for me to save only a specific part of the image (e.g. the 2500 pixels at the center of the screen)?
Thanks!
You can use the onPictureSaved event to grab & manipulate the image.
takePicture = () => {
  if (this.camera) {
    this.camera.takePictureAsync({ onPictureSaved: this.onPictureSaved });
  }
};

onPictureSaved = photo => {
  console.log(photo);
};
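To save only a part of the image (e.g. the 2500 pixels at the center, i.e. a 50x50 region), one option is to crop the saved photo inside onPictureSaved. A rough sketch, assuming the expo-image-manipulator package; adjust the crop region to whatever you actually need:

import * as ImageManipulator from 'expo-image-manipulator';

onPictureSaved = async photo => {
  // crop a 50x50 region (2500 pixels) from the center of the captured photo
  const cropped = await ImageManipulator.manipulateAsync(
    photo.uri,
    [{
      crop: {
        originX: Math.floor(photo.width / 2) - 25,
        originY: Math.floor(photo.height / 2) - 25,
        width: 50,
        height: 50,
      },
    }],
    { compress: 1, format: ImageManipulator.SaveFormat.JPEG }
  );
  console.log(cropped.uri); // URI of the cropped image in the app's cache
};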
I have a Parse Cloud afterSave trigger from which I can access the object, and inside the object there is a field that stores a Parse file image.
I want to use sharp to resize it and save it in another field, but I'm struggling and getting an error when I use sharp. Here is a summary of the code I already have inside the cloud trigger:
let file = obj.get("photo");
sharp(file)
  .resize(250, 250)
  .then((data) => {
    console.log("img-----", data);
  })
  .catch((err) => {
    console.log("--Error--", err);
  });
After some research, I managed to figure out how to create a Parse Cloud afterSave trigger which resizes and then saves the image. I couldn't find much information on it, so I'll post my solution here in case it's helpful to others.
const sharp = require("sharp"); // sharp must be required in the Cloud Code file

Parse.Cloud.afterSave("Landmarks", async (req) => {
  const obj = req.object;
  const objOriginal = req.original;
  const file = obj.get("photo");
  // only run when the photo field actually changed; otherwise saving the
  // thumbnail below would retrigger this hook and loop forever
  const condition = file && !file.equals(objOriginal.get("photo"));
  if (condition) {
    Parse.Cloud.httpRequest({ url: file.url() })
      .then((res) => {
        sharp(res.buffer)
          .resize(250, 250, {
            fit: "fill",
          })
          .toBuffer()
          .then(async (dataBuffer) => {
            const data = { base64: dataBuffer.toString("base64") };
            const parseFile = new Parse.File("photo_thumbnail", data);
            await parseFile.save();
            await obj.save({ photo_thumb: parseFile });
          })
          .catch((err) => {
            console.log("--Sharp-Error--", err);
          });
      })
      .catch((err) => {
        console.log("--HTTP-Request-Error--", err);
      });
  } else {
    console.log("--Photo was deleted or did not change--");
  }
});
To break this down a bit: first I get the obj and the objOriginal so I can compare them and check for a change in a specific field. This condition is necessary because in my case I save the resized image back to Parse, which would otherwise cause an infinite loop.
After that I call Parse.Cloud.httpRequest({ url: file.url() }).then(), which is the way I found to get the buffer for the photo. The buffer is stored inside res.buffer, and we need it for sharp.
Next I call sharp(res.buffer), since sharp also accepts buffers, and resize the image to the desired dimensions (I used the fit option for that). Then I turn the resulting image into another buffer using .toBuffer(). I use .then()/.catch() blocks, and if sharp is successful I turn the output buffer into base64 and pass it to Parse.File(); note that the specific syntax { base64: 'insert buffer here' } is important.
And finally I just save the file and the obj. Is this the best way to do it? Absolutely not, but it's the one I found that works. Another possible solution, instead of using buffers and base64, is to create a temporary directory, save the images there, use them, and then delete the directory. I tried this as well but had issues making it work.
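For completeness, the temporary-directory alternative mentioned above could look roughly like the sketch below. This is only a sketch: it assumes Node's fs/os/path modules are available in your Cloud Code environment, reuses res.buffer, obj, and Parse from the trigger above, and I have not verified it end to end.

const fs = require("fs");
const os = require("os");
const path = require("path");

// write the resized image to a temp file instead of keeping it in memory
const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), "thumbs-"));
const tmpFile = path.join(tmpDir, "photo_thumbnail.jpg");

await sharp(res.buffer).resize(250, 250, { fit: "fill" }).toFile(tmpFile);

// read it back and hand it to Parse.File as base64
const base64 = fs.readFileSync(tmpFile).toString("base64");
const parseFile = new Parse.File("photo_thumbnail", { base64 });
await parseFile.save();
await obj.save({ photo_thumb: parseFile });

// clean up the temp directory
fs.rmSync(tmpDir, { recursive: true, force: true });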
In my React Native 0.63.2 app, after the user uploads images of artwork, the app does 2 things:
1. save artwork record and image records on backend server
2. save the images into cloud storage
Those 2 things are related and both have to succeed together. Here is the code:
const clickSave = async () => {
  console.log("save art work");
  try {
    // save artwork to backend server
    let art_obj = {
      _device_id,
      name,
      description,
      tag: (tagSelected.map((it) => it.name)),
      note: '',
    };
    let img_array = [], oneImg;
    imgs.forEach(ele => {
      oneImg = {
        fileName: "f" + helper.genRandomstring(8) + "_" + ele.fileName,
        path: ele.path,
        width: ele.width,
        height: ele.height,
        size_kb: Math.ceil(ele.size / 1024),
        image_data: ele.image_data,
      };
      img_array.push(oneImg);
    });
    art_obj.img_array = [...img_array];
    art_obj = JSON.stringify(art_obj);
    // assemble images
    let url = `${GLOBAL.BASE_URL}/api/artworks/new`;
    await helper.getAPI(url, _result, "POST", art_obj); // <<== #1. send artwork and image record to backend server
    // save image to cloud storage
    var storageAccessInfo = await helper.getStorageAccessInfo(stateVal.storageAccessInfo);
    if (storageAccessInfo && storageAccessInfo !== "upToDate") {
      // update the context value
      stateVal.updateStorageAccessInfo(storageAccessInfo);
    }
    let bucket_name = "oss-hz-1"; // <<<
    const configuration = {
      maxRetryCount: 3,
      timeoutIntervalForRequest: 30,
      timeoutIntervalForResource: 24 * 60 * 60
    };
    const STSConfig = {
      AccessKeyId: accessInfo.accessKeyId,
      SecretKeyId: accessInfo.accessKeySecret,
      SecurityToken: accessInfo.securityToken
    }
    const endPoint = 'oss-cn-hangzhou.aliyuncs.com'; // <<<
    const last_5_cell_number = _myself.cell.substring(myself.cell.length - 5);
    let filePath, objkey;
    img_array.forEach(item => {
      console.log("init sts");
      AliyunOSS.initWithSecurityToken(STSConfig.SecurityToken, STSConfig.AccessKeyId, STSConfig.SecretKeyId, endPoint, configuration)
      // console.log("before upload", AliyunOSS);
      objkey = `${last_5_cell_number}/${item.fileName}`; // virtual subdir and file name
      filePath = item.path;
      AliyunOSS.asyncUpload(bucket_name, objkey, filePath).then((res) => { // <<== #2 send images to cloud storage with callback. But no action required after success.
        console.log("Success : ", res) // <<== not really necessary to have console output
      }).catch((error) => {
        console.log(error)
      })
    })
  } catch (err) {
    console.log(err);
    return false;
  };
};
The concern with the code above is that those 2 async calls may take a long time to finish, and the user may end up waiting too long. After tapping the save button, the user may just want to move to the next page of the UI and leave everything else running behind the scenes. Is there a way to do that? Is removing the await (#1) and the callback (#2) enough to achieve it?
If you want to do both tasks in the background, then you can't use await. I see that you are using await when sending the images to the backend, so remove that and use .then().catch(); you don't need to remove the callback on #2.
If you need to make sure #1 finishes before doing #2, then you will need to move the code for #2 into #1's promise-resolving code (inside the .then()).
Now, for catching errors: you will need some sort of error handling that alerts the user that an error occurred and that they should trigger another upload. One thing you can do is a red banner; I'm sure there are packages out there that can do that for you.
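A rough sketch of that restructuring, reusing names from the question (the setup of art_obj, img_array, url, etc. is omitted, and showErrorBanner is a hypothetical stand-in for whatever banner/alert component you use):

const clickSave = () => {
  // fire-and-forget: no await here, so the user can navigate away immediately
  helper.getAPI(url, _result, "POST", art_obj)                 // #1 backend record
    .then(() => {
      // start the uploads only after the backend save succeeded
      img_array.forEach(item => {
        const objkey = `${last_5_cell_number}/${item.fileName}`;
        AliyunOSS.asyncUpload(bucket_name, objkey, item.path)  // #2 cloud storage
          .catch(err => showErrorBanner("Upload failed, please retry", err));
      });
    })
    .catch(err => showErrorBanner("Saving the artwork failed, please retry", err));
};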
I am trying to share multiple photos with other apps (Telegram, Instagram, ...) in React Native, but I don't know how to share more than one image in a single call. Any suggestions would be helpful, thank you.
I use this library to share more than one image with other apps: react-native-share.
This is a simple way to do it in React Native:
shareImage(images) {
  const shareOptions = {
    title: 'Share file',
    urls: images,
  };
  return Share.open(shareOptions);
}

_renderShareIt(base64Items) {
  let selectedImages = [];
  base64Items.map((images) => {
    // push selected images onto an array
  });
  if (selectedImages.length > 0) {
    this.shareImage(selectedImages);
  } else {
    // show an alert, no image selected
  }
}
Hopefully this was useful :)
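For reference, the urls option of Share.open takes an array, and react-native-share accepts both file URIs and base64 data URIs there, so the final call ends up looking roughly like this (the URIs below are placeholders):

import Share from 'react-native-share';

const shareOptions = {
  title: 'Share file',
  urls: [
    'file:///storage/emulated/0/DCIM/first.jpg',  // placeholder file URI
    'data:image/jpeg;base64,...',                 // placeholder base64 data URI
  ],
};

Share.open(shareOptions)
  .then(res => console.log(res))
  .catch(err => console.log(err));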
I have posted about this previously but am still struggling to get a working version.
I want to create a shareable link from my app to a screen within my app and be able to pass through an ID of sorts.
I have a link on my home screen opening a link to my Expo app with 2 parameters passed through as a query string:
const linkingUrl = 'exp://192.168.0.21:19000';
...
_handleNewGroup = async () => {
  try {
    const group_id = await this.createGroupId();
    Linking.openURL(`${linkingUrl}?screen=camera&group_id=${group_id}`);
  } catch (err) {
    console.log(`Unable to create group ${err}`);
  }
};
Also in my home screen I have a handler that gets the current URL, extracts the query string from it, and navigates to the camera screen with a group_id set:
async handleLinkToCameraGroup() {
  Linking.getInitialURL().then((url) => {
    let queryString = url.replace(linkingUrl, '');
    if (queryString) {
      const data = qs.parse(queryString);
      if (data.group_id) {
        this.props.navigation.navigate('Camera', { group_id: data.group_id });
      }
    }
  }).catch(err => console.error('An error occurred', err));
}
Several issues with this:
Once the app has been opened via the link with the query string set, the values don't get reset, so they are always set and handleLinkToCameraGroup keeps running and redirecting.
Because the URL is not an http-formatted URL, it is hard to extract the query string. Parsing the query string returns this (a possible workaround is sketched below):
{
  "?screen": "camera",
  "group_id": "test",
}
It doesn't seem right having this logic in the home screen; surely it should go in the app.js file. But that causes complications with using Linking there, because the RootStackNavigator is a child of app.js, so I do not believe I can navigate from that file.
Any help clarifying the best approach to deep linking would be greatly appreciated.
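For what it's worth, a rough sketch of working around the "?screen" key (the second issue above), assuming the same qs package used in the handler: split off everything up to and including the '?' before parsing.

Linking.getInitialURL().then((url) => {
  // keep only the part after '?', so qs never sees the exp:// scheme and host
  const queryString = url.split('?')[1] || '';
  const data = qs.parse(queryString); // { screen: 'camera', group_id: 'test' }
  if (data.group_id) {
    this.props.navigation.navigate('Camera', { group_id: data.group_id });
  }
});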