Does react-native support Pintura Editor? - react-native

I want to use Pintura Editor in my react-native project, but it's not working, and I'm not getting any error either.
I am not able to understand where I am going wrong. Does react-native support Pintura?
Is there any way I can solve this issue?
Any lead would be highly appreciated. Thank you in advance.

React Native doesn't support Pintura Editor natively yet, but we can use Pintura with a WebView: since Pintura is built on HTML technologies, it has to run in a WebView to work. Here is an example.
Pintura_Editor.js
import { WebView } from 'react-native-webview';
import { View } from 'react-native';
import React, { forwardRef, useRef, useEffect, useState } from 'react';

const noop = () => {};

// For making the first letter uppercase, e.g. 'load' -> 'Load'.
const upperCaseFirstLetter = (str) => str.charAt(0).toUpperCase() + str.slice(1);

// PATH_TO_PINTURA.HTML file (you need to download the paid version)
const PinturaEditorProxy = require('./pintura.html');

const Editor = forwardRef((props, ref) => {
  const { style, ...options } = props;
  const webViewRef = useRef(null);
  const [isReady, setReady] = useState(false);

  // Give the WebView a few seconds to load, then flip isReady so the
  // effect below re-runs and posts the options again.
  useEffect(() => {
    const timer = setTimeout(() => setReady(true), 3000);
    return () => clearTimeout(timer);
  }, []);

  // This passes options to the editor.
  useEffect(() => {
    // Set up a proxy so calls such as editor.close() or
    // editor.history.undo() are forwarded into the WebView.
    const handler = {
      get: (target, prop) => {
        if (prop === 'history') return new Proxy({ group: 'history' }, handler);
        return (...args) => {
          const name = [target.group, prop].filter(Boolean).join('.');
          webViewRef.current.postMessage(
            JSON.stringify({
              fn: [name, ...args],
            })
          );
        };
      },
    };
    webViewRef.current.editor = new Proxy({}, handler);

    // Post the new options.
    webViewRef.current.postMessage(JSON.stringify({ options }));
  }, [isReady]);
  return (
    <View ref={ref} style={style}>
      <WebView
        ref={webViewRef}
        javaScriptEnabled={true}
        scrollEnabled={false}
        domStorageEnabled={true}
        allowFileAccess={true}
        allowUniversalAccessFromFileURLs={true}
        allowsLinkPreview={false}
        automaticallyAdjustContentInsets={false}
        originWhitelist={['*']}
        textZoom={100}
        source={PinturaEditorProxy}
        onMessage={(event) => {
          // Message from the WebView, shaped as { type, detail }.
          const { type, detail } = JSON.parse(event.nativeEvent.data);
          // Map the event type to a prop, e.g. 'load' -> onLoad.
          const handler = options[`on${upperCaseFirstLetter(type)}`] || noop;
          // Call the handler.
          handler(detail);
        }}
      />
    </View>
  );
});

export default Editor;
Now, import this editor into your app.js:
import PinturaEditor from 'PATH_TO_PINTURA_EDITOR'

const editorRef = React.createRef();
....
....
render() {
  return (
    <PinturaEditor
      ref={editorRef}
      style={{ margin: 40, width: '80%', height: '80%', borderWidth: 2, borderColor: '#eee' }}
      imageCropAspectRatio={1}
      // Image should be base64 or blob type
      src={base64Image}
      onLoad={(img) => {
        // do something
      }}
      onProcess={async (image) => {
        // do something with the edited image
      }}
    />
  )
}
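Note that pintura.html has to implement the other side of this bridge itself. Below is a rough, untested sketch of what that script could look like; appendDefaultEditor, the '#editor' mount point, and the event list are assumptions you would need to adapt to your Pintura bundle:
<script>
  // Assumed factory and mount point; adjust to your Pintura version.
  const editor = appendDefaultEditor('#editor', {});

  // Forward editor events to React Native as { type, detail };
  // onMessage above maps these to props such as onLoad / onProcess.
  ['load', 'process', 'loaderror'].forEach((type) => {
    editor.on(type, (detail) => {
      window.ReactNativeWebView.postMessage(JSON.stringify({ type, detail }));
    });
  });

  // Handle the { options } and { fn } messages posted from React Native.
  const onMessage = (event) => {
    const { options, fn } = JSON.parse(event.data);
    if (options) Object.assign(editor, options);
    if (fn) {
      const [name, ...args] = fn;
      const path = name.split('.');     // e.g. 'history.undo' or 'close'
      const method = path.pop();
      const target = path.reduce((obj, key) => obj[key], editor);
      target[method](...args);
    }
  };

  // react-native-webview delivers postMessage as a 'message' event
  // on document (Android) and on window (iOS), so listen to both.
  document.addEventListener('message', onMessage);
  window.addEventListener('message', onMessage);
</script>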

Related

Issue sending & receiving streams between two clients in LiveKit's React Native SDK

I'm trying to build on the example app provided by LiveKit. So far I've implemented everything like the example app, and I've been successful connecting to a room on the example website: I receive audio from the website, but I don't get the video stream, and I also can't send audio or video at all.
Steps to reproduce the behavior:
1. Add the following to index.js:
import { registerRootComponent } from "expo";
import { registerGlobals } from "livekit-react-native";
import App from "./App";
registerRootComponent(App);
registerGlobals();
2. Render the following component in App.tsx:
import { Participant, Room, Track } from "livekit-client";
import {
  useRoom,
  useParticipant,
  AudioSession,
  VideoView,
} from "livekit-react-native";
import { useEffect, useState } from "react";
import { Text, ListRenderItem, StyleSheet, FlatList, View } from "react-native";
import { ParticipantView } from "./ParticipantView";
import { RoomControls } from "./RoomControls";
import type { TrackPublication } from "livekit-client";

const App = () => {
  // Create a room state
  const [, setIsConnected] = useState(false);
  const [room] = useState(
    () =>
      new Room({
        publishDefaults: { simulcast: false },
        adaptiveStream: true,
      })
  );

  // Get the participants from the room
  const { participants } = useRoom(room);
  const url = "[hard-coded-url]";
  const token = "[hard-coded-token]";

  useEffect(() => {
    let connect = async () => {
      // If you wish to configure audio, uncomment the following:
      await AudioSession.configureAudio({
        android: {
          preferredOutputList: ["speaker"],
        },
        ios: {
          defaultOutput: "speaker",
        },
      });
      await AudioSession.startAudioSession();
      await room.connect(url, token, {});
      await room.localParticipant.setCameraEnabled(true);
      await room.localParticipant.setMicrophoneEnabled(true);
      await room.localParticipant.enableCameraAndMicrophone();
      console.log("connected to ", url);
      setIsConnected(true);
    };
    connect();
    return () => {
      room.disconnect();
      AudioSession.stopAudioSession();
    };
  }, [url, token, room]);

  // Setup views.
  const stageView = participants.length > 0 && (
    <ParticipantView participant={participants[0]} style={styles.stage} />
  );

  const renderParticipant: ListRenderItem<Participant> = ({ item }) => {
    return (
      <ParticipantView participant={item} style={styles.otherParticipantView} />
    );
  };

  const otherParticipantsView = participants.length > 0 && (
    <FlatList
      data={participants}
      renderItem={renderParticipant}
      keyExtractor={(item) => item.sid}
      horizontal={true}
      style={styles.otherParticipantsList}
    />
  );

  const { cameraPublication, microphonePublication } = useParticipant(
    room.localParticipant
  );

  return (
    <View style={styles.container}>
      {stageView}
      {otherParticipantsView}
      <RoomControls
        micEnabled={isTrackEnabled(microphonePublication)}
        setMicEnabled={(enabled: boolean) => {
          room.localParticipant.setMicrophoneEnabled(enabled);
        }}
        cameraEnabled={isTrackEnabled(cameraPublication)}
        setCameraEnabled={(enabled: boolean) => {
          room.localParticipant.setCameraEnabled(enabled);
        }}
        onDisconnectClick={() => {
          // navigation.pop();
          console.log("disconnected");
        }}
      />
    </View>
  );
};

function isTrackEnabled(pub?: TrackPublication): boolean {
  return !(pub?.isMuted ?? true);
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    alignItems: "center",
    justifyContent: "center",
  },
  stage: {
    flex: 1,
    width: "100%",
  },
  otherParticipantsList: {
    width: "100%",
    height: 150,
    flexGrow: 0,
  },
  otherParticipantView: {
    width: 150,
    height: 150,
  },
});

export default App;
The components used here are mostly the same as what's in the example; I've removed the screen-sharing logic and the messages.
3. I run the app using an Expo development build.
4. It logs that it's connected, and you can hear sound from the remote participant, but you cannot see any video or send any sound.
5. If I try to add
await room.localParticipant.enableCameraAndMicrophone();
in the useEffect, I get the following error:
Possible Unhandled Promise Rejection (id: 0):
Error: Not implemented.
getSettings#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:103733:24
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:120307:109
generatorResume#[native code]
asyncGeneratorStep#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21908:26
_next#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21927:29
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21932:14
tryCallTwo#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:26656:9
doResolve#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:26788:25
Promise#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:26675:14
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21924:25
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:120173:52
generatorResume#[native code]
asyncGeneratorStep#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21908:26
_next#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21927:29
tryCallOne#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:26648:16
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:26729:27
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:27687:26
_callTimer#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:27602:17
_callReactNativeMicrotasksPass#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:27635:17
callReactNativeMicrotasks#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:27799:44
__callReactNativeMicrotasks#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:21006:46
#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:20806:45
__guard#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:20986:15
flushedQueue#http://192.168.1.150:8081/index.bundle?platform=ios&dev=true&hot=false:20805:21
flushedQueue#[native code]
Expected behavior
Both clients should send and receive video and audio streams.

Testing a react-native app with Jest: problem accessing context and Provider

I know the title is very vague, but I hope someone may have an idea.
I want to perform a simple snapshot test on one of my screens with Jest, but I keep getting errors like this:
Warning: React.jsx: type is invalid -- expected a string (for built-in components) or a class/function (for composite components) but got: undefined. You likely forgot to export your component from the file it's defined in, or you might have mixed up default and named imports.
14 | test('renders correctly', async () => {
15 | const tree = renderer.create(
> 16 | <AuthContext.AuthProvider>
| ^
17 | <AuthContext.Consumer>
18 | <ValidateScreenPhrase ref={(navigator)=>{ setNavigator(navigator) }}/>
19 | </AuthContext.Consumer>
The problem is probably that I use a Context builder that looks as follows:
import React, { useReducer } from 'react'

export default (reducer, actions, defaultValue) => {
  const Context = React.createContext();
  const Provider = ({ children }) => {
    const [state, dispatch] = useReducer(reducer, defaultValue)
    const boundActions = {}
    for (let key in actions) {
      boundActions[key] = actions[key](dispatch);
    }
    return (
      <Context.Provider value={{ state, ...boundActions }}>{children}</Context.Provider>
    )
  }
  return { Context, Provider }
}
From here I then build different Contexts that contain functions and state, e.g.:
import createDataContext from "./createDataContext"
import { navigate } from '../navigationRef'

const authReducer = (state, action) => {
  switch (action.type) {
    case 'clear_error_message':
      return { ...state, errorMessage: '' }
    default:
      return state
  }
}

const validateInput = (dispatch) => {
  return (userInput, expected) => {
    if (userInput === expected) {
      navigate('done')
    } else {
      dispatch({ type: 'error_message', payload: 'your seed phrase was not typed correctly' })
    }
  }
}

export const { Provider, Context } = createDataContext(
  authReducer,
  { clearErrorMessage },
  { errorMessage: '' }
)
Now the screen that I want to test is this:
import React, { useState, useContext, useEffect } from 'react'
import { StyleSheet, View, TextInput, SafeAreaView } from 'react-native'
import { Text, Button } from 'react-native-elements'
import { NavigationEvents } from 'react-navigation'
import { Context as AuthContext } from '../context/AuthContext'
import BackButton from '../components/BackButton'

const ValidateSeedPhraseScreen = ({ navigation }) => {
  const { validateInput, clearErrorMessage, state } = useContext(AuthContext)
  const [seedPhrase, setSeedPhrase] = useState('')
  const testPhrase = 'blouse'
  const checkSeedPhrase = () => {
    validateInput(seedPhrase, testPhrase)
  }
  return (
    <SafeAreaView style={styles.container}>
      <NavigationEvents
        onWillFocus={clearErrorMessage}
      />
      <NavigationEvents />
      <BackButton routeName='walletInformation' />
      <View style={styles.seedPhraseContainer}>
        <Text h3>Validate Your Seed Phrase</Text>
        <TextInput
          style={styles.input}
          editable
          multiline
          onChangeText={(text) => setSeedPhrase(text)}
          value={seedPhrase}
          placeholder="Your Validation Seed Phrase"
          autoCorrect={false}
          autoCapitalize='none'
          maxLength={200}
        />
        <Button
          title="Validate"
          onPress={() => checkSeedPhrase(seedPhrase, testPhrase)}
          style={styles.validateButton}
        />
        {state.errorMessage ? (<Text style={styles.errorMessage}>{state.errorMessage}</Text>) : null}
      </View>
    </SafeAreaView>
  )
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    marginLeft: 25,
    marginRight: 25
  },
  seedPhraseContainer: {
    marginTop: '40%'
  },
  input: {
    height: 200,
    margin: 12,
    borderWidth: 1,
    padding: 10,
    fontSize: 20,
    borderRadius: 10
  },
  validateButton: {
    paddingBottom: 15
  }
})

export default ValidateSeedPhraseScreen
Here I import the AuthContext and make use of the validateInput function and the state from the Context. I also don't know how to bring these into the testing file.
My test so far looks like this:
import React, { useContext } from "react";
import renderer from 'react-test-renderer';
import { setNavigator } from '../../src/navigationRef';
import ValidateScreenPhrase from '../../src/screens/ValidateSeedPhraseScreen'
import { Provider as AuthProvider, Context as AuthContext } from '../../src/context/AuthContext';

jest.mock('react-navigation', () => ({
  withNavigation: ValidateScreenPhrase => props => (
    <ValidateScreenPhrase navigation={{ navigate: jest.fn() }} {...props} />
  ),
  NavigationEvents: 'mockNavigationEvents'
}));

test('renders correctly', async () => {
  const tree = renderer.create(
    <AuthProvider>
      <AuthContext.Consumer>
        <ValidateScreenPhrase ref={(navigator) => { setNavigator(navigator) }} />
      </AuthContext.Consumer>
    </AuthProvider>, {}).toJSON();
  expect(tree).toMatchSnapshot();
});
I have already tried out a lot of changes to the context and provider structure. I then always get errors like "AuthContext is undefined" or "render is not a function".
Does anyone have an idea of how to approach this?
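One thing worth checking, since it commonly produces the "render is not a function" error: a Context.Consumer expects a single function as its child, but the test above passes it a plain element. And because ValidateSeedPhraseScreen already reads the context via useContext, the Provider alone should be enough. A minimal sketch of the test without the Consumer (same imports and mock as above, not verified against this exact setup):
test('renders correctly', () => {
  const tree = renderer.create(
    // The Provider supplies the value that useContext reads inside the screen;
    // a Consumer would need a function child: {value => <Component />}
    <AuthProvider>
      <ValidateScreenPhrase ref={(navigator) => { setNavigator(navigator) }} />
    </AuthProvider>
  ).toJSON();
  expect(tree).toMatchSnapshot();
});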

Geolocation clearWatch(watchId) does not stop location tracking (React Native)

I'm trying to create a simple example of a location tracker and I'm stuck with the following case. My basic goal is to toggle the location watch by pressing a start/stop button. I'm separating concerns by implementing a custom React hook, which is then used in the App component:
useWatchLocation.js
import { useEffect, useRef, useState } from "react"
import { PermissionsAndroid } from "react-native"
import Geolocation from "react-native-geolocation-service"

const watchCurrentLocation = async (successCallback, errorCallback) => {
  if (!(await PermissionsAndroid.check(PermissionsAndroid.PERMISSIONS.ACCESS_FINE_LOCATION))) {
    errorCallback("Permissions for location are not granted!")
  }
  return Geolocation.watchPosition(successCallback, errorCallback, {
    timeout: 3000,
    maximumAge: 500,
    enableHighAccuracy: true,
    distanceFilter: 0,
    useSignificantChanges: false,
  })
}

const stopWatchingLocation = (watchId) => {
  Geolocation.clearWatch(watchId)
  // Geolocation.stopObserving()
}
export default useWatchLocation = () => {
  const [location, setLocation] = useState()
  const [lastError, setLastError] = useState()
  const [locationToggle, setLocationToggle] = useState(false)
  const watchId = useRef(null)

  const startLocationWatch = () => {
    watchId.current = watchCurrentLocation(
      (position) => {
        setLocation(position)
      },
      (error) => {
        setLastError(error)
      }
    )
  }

  const cancelLocationWatch = () => {
    stopWatchingLocation(watchId.current)
    setLocation(null)
    setLastError(null)
  }

  const setLocationWatch = (flag) => {
    setLocationToggle(flag)
  }

  // execution after render when locationToggle is changed
  useEffect(() => {
    if (locationToggle) {
      startLocationWatch()
    } else cancelLocationWatch()
    return cancelLocationWatch()
  }, [locationToggle])

  // mount / unmount
  useEffect(() => {
    cancelLocationWatch()
  }, [])

  return { location, lastError, setLocationWatch }
}
App.js
import React from "react"
import { Button, Text, View } from "react-native"
import useWatchLocation from "./hooks/useWatchLocation"

export default App = () => {
  const { location, lastError, setLocationWatch } = useWatchLocation()
  return (
    <View style={{ margin: 20 }}>
      <View style={{ margin: 20, alignItems: "center" }}>
        <Text>{location && `Time: ${new Date(location.timestamp).toLocaleTimeString()}`}</Text>
        <Text>{location && `Latitude: ${location.coords.latitude}`}</Text>
        <Text>{location && `Longitude: ${location.coords.longitude}`}</Text>
        <Text>{lastError && `Error: ${lastError}`}</Text>
      </View>
      <View style={{ marginTop: 20, width: "100%", flexDirection: "row", justifyContent: "space-evenly" }}>
        <Button onPress={() => { setLocationWatch(true) }} title="START" />
        <Button onPress={() => { setLocationWatch(false) }} title="STOP" />
      </View>
    </View>
  )
}
I have searched multiple examples online, and the code above should work. But the problem is that when the stop button is pressed, the location still keeps getting updated, even though I invoke Geolocation.clearWatch(watchId).
I wrapped the Geolocation calls to handle the location permission and other possible debug stuff. It seems like the watchId value saved with the useRef hook inside useWatchLocation is invalid. My guess is based on attempting to call Geolocation.stopObserving() right after Geolocation.clearWatch(watchId). The subscription stops, but I get the warning:
Called stopObserving with existing subscriptions.
So I assume the original subscription was not cleared.
What am I missing/doing wrong?
EDIT: I figured out a solution. But since the isMounted pattern is generally considered an antipattern: does anyone have a better solution?
OK, problem solved with the isMounted pattern. isMounted.current is set to true in the locationToggle effect and to false inside cancelLocationWatch:
const isMounted = useRef(null)
...
useEffect(() => {
  if (locationToggle) {
    isMounted.current = true // <--
    startLocationWatch()
  } else cancelLocationWatch()
  return () => cancelLocationWatch()
}, [locationToggle])
...
const cancelLocationWatch = () => {
  stopWatchingLocation(watchId.current)
  setLocation(null)
  setLastError(null)
  isMounted.current = false // <--
}
And it is checked in the mount/unmount effect and in the success and error callbacks:
const startLocationWatch = () => {
  watchId.current = watchCurrentLocation(
    (position) => {
      if (isMounted.current) { // <--
        setLocation(position)
      }
    },
    (error) => {
      if (isMounted.current) { // <--
        setLastError(error)
      }
    }
  )
}
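For what it's worth, a likely root cause can be inferred from the original hook (an educated guess, not verified): watchCurrentLocation is declared async, so startLocationWatch stores a pending Promise in watchId.current rather than the numeric watch id that Geolocation.clearWatch expects, which would explain why the watch is never cleared. Awaiting it may remove the need for the isMounted workaround entirely. A minimal sketch:
const startLocationWatch = async () => {
  // watchCurrentLocation is async; without await, watchId.current
  // would hold a Promise instead of the numeric watch id.
  watchId.current = await watchCurrentLocation(
    (position) => setLocation(position),
    (error) => setLastError(error)
  )
}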

Lodash debounce not working all of a sudden?

I'm using a component I wrote for one app in a newer app. The code is about 99% identical between the first app, which works, and the second app. Everything is fine except that debounce is not firing in the new app. What am I doing wrong?
// @flow
import type { Location } from "../redux/reducers/locationReducer";
import * as React from "react";
import { Text, TextInput, View, TouchableOpacity } from "react-native";
import { Input } from "react-native-elements";
import { GoogleMapsApiKey } from "../../.secrets";
import _, { debounce } from "lodash";
import { connect } from "react-redux";
import { setCurrentRegion } from "../redux/actions/locationActions";

export class AutoFillMapSearch extends React.Component<Props, State> {
  textInput: ?TextInput;
  state: State = {
    address: "",
    addressPredictions: [],
    showPredictions: false
  };

  async handleAddressChange() {
    console.log("handleAddressChange");
    const url = `https://maps.googleapis.com/maps/api/place/autocomplete/json?key=${GoogleMapsApiKey}&input=${this.state.address}`;
    try {
      const result = await fetch(url);
      const json = await result.json();
      if (json.error_message) throw Error(json.error_message);
      this.setState({
        addressPredictions: json.predictions,
        showPredictions: true
      });
      // debugger;
    } catch (err) {
      console.warn(err);
    }
  }

  onChangeText = async (address: string) => {
    await this.setState({ address });
    console.log("onChangeText");
    debounce(this.handleAddressChange.bind(this), 800); // console.log(debounce) confirms that the function is importing correctly.
  };

  render() {
    const predictions = this.state.addressPredictions.map(prediction => (
      <TouchableOpacity
        style={styles.prediction}
        key={prediction.id}
        onPress={() => {
          this.props.beforeOnPress();
          this.onPredictionSelect(prediction);
        }}
      >
        <Text style={text.prediction}>{prediction.description}</Text>
      </TouchableOpacity>
    ));
    return (
      <View>
        <TextInput
          ref={ref => (this.textInput = ref)}
          onChangeText={this.onChangeText}
          value={this.state.address}
          style={[styles.input, this.props.style]}
          placeholder={"Search"}
          autoCorrect={false}
          clearButtonMode={"while-editing"}
          onBlur={() => {
            this.setState({ showPredictions: false });
          }}
        />
        {this.state.showPredictions && (
          <View style={styles.predictionsContainer}>{predictions}</View>
        )}
      </View>
    );
  }
}

export default connect(
  null,
  { setCurrentRegion }
)(AutoFillMapSearch);
I noticed that the difference in the code was that the older app passed handleAddressChange as the second argument to setState. Flow was complaining about this in the new app, so I thought async/awaiting setState would work the same way.
Changing it to this works fine (with no Flow complaints, for some reason; maybe because I've since installed flow-typed lodash. God, I love flow-typed!):
onChangeText = async (address: string) => {
  this.setState(
    { address },
    _.debounce(this.handleAddressChange.bind(this), 800)
  );
};
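A side note on why this can still misbehave: lodash's debounce only coalesces calls that go through the same wrapper function, and both snippets above create a fresh wrapper on every keystroke, so each keystroke gets its own 800 ms timer. A more conventional pattern (a sketch against the same component, untested here) creates the debounced function once as a class property:
// Created once per instance, so successive keystrokes share one timer
// and handleAddressChange fires only after typing pauses for 800 ms.
handleAddressChangeDebounced = _.debounce(this.handleAddressChange.bind(this), 800);

onChangeText = (address: string) => {
  this.setState({ address }, this.handleAddressChangeDebounced);
};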

How do I take a screenshot of View in React Native?

I want to share a screenshot of a particular View component instead of the whole screen.
Can anyone help me out with this?
Take a look at the picture: I want a screenshot of the red-marked area, which is inside the View component.
You can use the library react-native-view-shot.
You just have to wrap your View inside ViewShot, take a ref to it, and call capture().
Here is an example taken from that library:
import React, { Component } from "react";
import { Text } from "react-native";
import ViewShot from "react-native-view-shot";

class ExampleCaptureOnMountManually extends Component {
  componentDidMount() {
    this.refs.viewShot.capture().then(uri => {
      console.log("do something with ", uri);
    });
  }
  render() {
    return (
      <ViewShot ref="viewShot" options={{ format: "jpg", quality: 0.9 }}>
        <Text>...Something to rasterize...</Text>
      </ViewShot>
    );
  }
}
Here is a working example using react-native-view-shot with hooks:
import React, { useState, useRef, useEffect } from "react";
import { View, Image, ScrollView, TouchableOpacity } from "react-native";
import ViewShot from "react-native-view-shot";
var RNFS = require("react-native-fs");
import Share from "react-native-share";

const TransactionReceipt = () => {
  const viewShotRef = useRef(null);
  const [isSharingView, setSharingView] = useState(false);

  useEffect(() => {
    if (isSharingView) {
      const shareScreenshot = async () => {
        try {
          const uri = await viewShotRef.current.capture();
          const res = await RNFS.readFile(uri, "base64");
          const urlString = `data:image/jpeg;base64,${res}`;
          const info = '...';
          const filename = '...';
          const options = {
            title: info,
            message: info,
            url: urlString,
            type: "image/jpeg",
            filename: filename,
            subject: info,
          };
          await Share.open(options);
          setSharingView(false);
        } catch (error) {
          setSharingView(false);
          console.log("shareScreenshot error:", error);
        }
      };
      shareScreenshot();
    }
  }, [isSharingView]);

  return (
    <ViewShot ref={viewShotRef} options={{ format: "jpg", quality: 0.9 }}>
      <View>
        {!isSharingView && (
          <TouchableOpacity onPress={() => setSharingView(true)}>
            <Image source={Images.shareIcon} />
          </TouchableOpacity>
        )}
        <ScrollView />
      </View>
    </ViewShot>
  );
};