Odd behavior in RN trying to center an image - react-native

I have an image I wish to center in the display using React Native. I have to use 'absolute' positioning for this particular image. First, to get the image dimensions, I am using the following code:
import React, { Component } from "react";
import { Button, Dimensions, Image, Platform, SafeAreaView, ScrollView, StyleSheet, Text, TouchableOpacity, View } from "react-native";
const halfWidth = Number((Dimensions.get("window").width) / 2); //half the device Width
const halfHeight = Number((Dimensions.get("window").height) / 2); //half the device Height
var logoImage = require('./images/logo_white_128.png');
var logoWidth = 0, logoHeight = 0, logo_X = 0, logo_Y = 0;
export default class Initial extends Component {
constructor(props) {
super(props);
this.state = {
...
};
}
getDims = async() => {
var source = Image.resolveAssetSource(logoImage);
//source.width, source.height
logoWidth = source.width;
logoHeight = source.height;
logo_X = halfWidth - (logoWidth / 2);
logo_Y = halfHeight - (logoHeight / 2);
console.log("logoWidth is: " + source.width);
console.log("logoHeight is: " + source.height);
console.log("logo_X is: " + logo_X);
console.log("logo_Y is: " + logo_Y);
} //getDims
componentDidMount() {
this.getDims(); //call to determine dimension of logo image...
} //componentDidMount
As can be seen, I am using 'componentDidMount' to call a function which uses 'Image.resolveAssetSource' to determine the dimensions of the image. I am also setting the X and Y coordinate values in global variables so as to properly 'center' the image in the screen display.
In the 'rendering' part:
render() {
return (
<>
<SafeAreaView style={styles.container}>
<ScrollView style={styles.scrollView}>
...
<Image style={styles.img} source={require('./images/logo_white_128.png')} />
...
</ScrollView>
</SafeAreaView>
</>
)
} //render
} //class 'Initial' Component
const styles = StyleSheet.create({
img: {
position: "absolute",
top: logo_X,
left: logo_Y
}
});
For whatever reason, the image is rendered in the top-left corner of the screen. It is not centered as it should be; it is evidently being placed at the '0' position on both the X and Y axes. So for some reason the global variables I set in the 'getDims' function did not take effect. All of the math is correct: the console.log statements show that both dimensions of the image are determined correctly, and the positioning variables I set ('logo_X' and 'logo_Y') also look correct. If anybody has an idea where the error may be, a suggestion would be greatly appreciated!
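A likely cause, for anyone reading along: StyleSheet.create runs once when the module is first evaluated, before componentDidMount (and therefore getDims) ever executes, so logo_X and logo_Y are still 0 when the style object is built, and later writes to the globals never flow back into it. (Note also that top/left appear swapped in the style: top is the vertical offset, so it should use logo_Y.) A minimal sketch of a fix, holding the offsets in state and applying them as an inline style, reusing the imports and globals from the question:
export default class Initial extends Component {
  state = { logoX: 0, logoY: 0 };
  componentDidMount() {
    const source = Image.resolveAssetSource(logoImage);
    this.setState({
      logoX: halfWidth - source.width / 2,
      logoY: halfHeight - source.height / 2,
    });
  }
  render() {
    const { logoX, logoY } = this.state;
    // inline style object: re-evaluated on every render, unlike StyleSheet.create
    return (
      <Image source={logoImage} style={{ position: "absolute", left: logoX, top: logoY }} />
    );
  }
}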

Related

React Native / Expo custom font: German umlauts out of bounds

I've added a custom font (FS Me as an OTF file) to a React Native / Expo app using useFonts from expo-font. Unfortunately, I'm running into an issue where German umlauts (ä, ö, ü) are not visible due to the viewbox of the Text element being too small. For example, take this text: Über uns.
In the app, this simply renders as Uber uns because the two dots above the "U" are outside the Text boundary. When I increase the line height to 1.4x the font size, the umlauts show. However, this a) adds some sort of padding at the bottom of the text, and b) increasing the line height everywhere is obviously not a good solution. Increasing it on a case-by-case basis isn't viable either.
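For reference, the partial workaround described above looks roughly like this (the font size is an assumption for illustration):
import { StyleSheet } from 'react-native';
const styles = StyleSheet.create({
  heading: {
    fontFamily: 'FS_Me_400_Regular',
    fontSize: 16,
    // ~1.4x the font size keeps the diacritics visible, but adds bottom padding
    lineHeight: 16 * 1.4,
  },
});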
I'm using a custom Text component which sets a default font like this:
import React from 'react';
import {
TextProps,
Text as RNText, // eslint-disable-line no-restricted-imports
StyleSheet,
StyleProp,
} from 'react-native';
// Animated.createAnimatedComponent requires a class component
// eslint-disable-next-line react/prefer-stateless-function
class Text extends React.Component<TextProps> {
render() {
const { children, style } = this.props;
let styles = style;
let objStyle: StyleProp<any> = {};
if (Array.isArray(style)) objStyle = StyleSheet.flatten(style);
else if (style) objStyle = { ...(style as object) };
if (!objStyle.fontFamily || objStyle.fontFamily === 'FS_Me') {
objStyle.fontFamily = 'FS_Me_400_Regular';
if (objStyle.fontWeight === 'bold' || objStyle.fontWeight === '700') {
objStyle.fontFamily = 'FS_Me_700_Bold';
objStyle.fontWeight = undefined;
}
if (objStyle.fontWeight === 'light' || objStyle.fontWeight === '300') {
objStyle.fontFamily = 'FS_Me_300_Light';
if (objStyle.fontStyle === 'italic') {
objStyle.fontFamily = 'FS_Me_300_Light_Italic';
}
objStyle.fontWeight = undefined;
}
objStyle.backgroundColor = 'red';
styles = objStyle;
}
return (
<RNText allowFontScaling={false} {...this.props} style={styles}>
{children}
</RNText>
);
}
}
export default Text;
However, this also happens when I'm using the core RN Text component and only change the font family.
Does anyone have an idea why this happens?

React Native & Expo - onContextCreate function not called when application runs

I don't know why, and there is no error shown in debugger-ui. I only see a white screen on my iPhone with no errors. I also added a console.log inside the onContextCreate function and there is no message, so it means the onContextCreate function is not triggered. Here is my code. Any help is appreciated.
import { View as GraphicsView } from 'expo-graphics';
import ExpoTHREE, { THREE } from 'expo-three';
import React from 'react';
export default class App extends React.Component {
UNSAFE_componentWillMount() {
THREE.suppressExpoWarnings();
}
render() {
// Create an `ExpoGraphics.View` covering the whole screen, tell it to call our
// `onContextCreate` function once it's initialized.
return (
<GraphicsView
style={{backgroundColor: 'yellow'}}
onContextCreate={this.onContextCreate}
onRender={this.onRender}
/>
);
}
// This is called by the `ExpoGraphics.View` once it's initialized
onContextCreate = async ({
gl,
canvas,
width,
height,
scale: pixelRatio,
}) => {
console.log('onContextCreate ran...');
this.renderer = new ExpoTHREE.Renderer({ gl, pixelRatio, width, height });
this.renderer.setClearColor(0xffffff)
this.scene = new THREE.Scene();
this.camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 1000);
this.camera.position.z = 5;
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshPhongMaterial({
color: 0xff0000,
});
this.cube = new THREE.Mesh(geometry, material);
this.scene.add(this.cube);
this.scene.add(new THREE.AmbientLight(0x404040));
const light = new THREE.DirectionalLight(0xffffff, 0.5);
light.position.set(3, 3, 3);
this.scene.add(light);
};
onRender = delta => {
this.cube.rotation.x += 3.5 * delta;
this.cube.rotation.y += 2 * delta;
this.renderer.render(this.scene, this.camera);
};
}
I realized that when I close the remote debugger in Expo, my code works. I don't know why this happens; it would be good if someone could explain it, but it works once I close remote debugging in Expo... (A plausible explanation: with remote debugging enabled, the JS runs in Chrome rather than on the device, and the synchronous native GL bindings that expo-graphics needs are not available there, so onContextCreate never fires.)

How to get the current number of lines in String of TextInput?

After entering text in a TextInput, I want to know the current number of lines in the TextInput. (The current number of line strings would also work.)
I've tried string.split('\n').length, but this does not detect lines that wrap automatically when the text is wider than the screen.
How do I get the number of lines? When I implemented this feature in Swift, I used the string.enumerateLines function. Is there a similar function in React Native?
As far as I know, there is no official way to get the currently used number of lines, but I have a workaround for you:
We make use of the onLayout method of our TextInput and we keep track of the currently used height. We need two state variables:
this.state = {
height: 0, // here we track the currently used height
usedLines: 0,// here we calculate the currently used lines
}
onLayout:
_onLayout(e) {
console.log('e', e.nativeEvent.layout.height);
// the height increased therefore we also increase the usedLine counter
if (this.state.height < e.nativeEvent.layout.height) {
this.setState({usedLines: this.state.usedLines+1});
}
// the height decreased, we subtract a line from the line counter
if (this.state.height > e.nativeEvent.layout.height){
this.setState({usedLines: this.state.usedLines-1});
}
// update height if necessary
if (this.state.height != e.nativeEvent.layout.height){
this.setState({height: e.nativeEvent.layout.height});
}
}
Render Method
render() {
console.log('usedLines', this.state.usedLines);
return (
<View style={styles.container}>
<TextInput multiline style={{backgroundColor: 'green'}} onLayout={(e)=> this._onLayout(e)} />
</View>
);
}
Working Example:
https://snack.expo.io/B1vC8dvJH
For react-native version >= 0.46.1
You can use onContentSizeChange for more accurate line tracking, for example with React hooks as follows:
import { useEffect, useRef, useState } from 'react';
import { NativeSyntheticEvent, TextInputContentSizeChangeEventData } from 'react-native';
/**
 * used to track content height and current lines
 */
const [contentHeightTracker, setContentHeightTracker] = useState<{
height: number,
usedLines: number;
}>({
height: 0,
usedLines: 0
});
// Assumed but not shown in the original snippet: a ref remembering the height
// at which the extreme line count was first reached.
const extremeHeight = useRef<number>();
useEffect(() => {
// console.log('used line change : ' + lineAndHeightTracker.usedLines);
// console.log('props.extremeLines : ' + props.extremeLines);
if (contentHeightTracker.usedLines === props.extremeLines) {
if (extremeHeight.current === undefined) {
extremeHeight.current = contentHeightTracker.height;
}
}
//callback height change
if (contentHeightTracker.height !== 0) {
props.heightChange && props.heightChange(contentHeightTracker.height,
contentHeightTracker.usedLines >= props.extremeLines,
extremeHeight.current);
}
}, [contentHeightTracker]);
const _onContentHeightChange = (event: NativeSyntheticEvent<TextInputContentSizeChangeEventData>) => {
// console.log('event height : ', event.nativeEvent.contentSize.height);
// console.log('tracker height : ', lineAndHeightTracker.height);
// the height increased therefore we also increase the usedLine counter
if (contentHeightTracker.height < event.nativeEvent.contentSize.height) {
setContentHeightTracker({
height: event.nativeEvent.contentSize.height,
usedLines: contentHeightTracker.usedLines + 1
});
} else {
// the height decreased, we subtract a line from the line counter
if (contentHeightTracker.height > event.nativeEvent.contentSize.height) {
setContentHeightTracker({
height: event.nativeEvent.contentSize.height,
usedLines: contentHeightTracker.usedLines - 1
});
}
}
};
// inside the same function component as the hooks above:
console.log('usedLines', contentHeightTracker.usedLines);
return (
<View style={styles.container}>
<TextInput
multiline
style={{backgroundColor: 'green'}}
onContentSizeChange={_onContentHeightChange}
/>
</View>
);
The other solutions might fail if the user:
Highlights a lot of lines and deletes them all at once
Pastes a lot of lines at once
One way to solve this issue is to set a lineHeight on the <TextInput> and use the onContentSizeChange prop:
<TextInput
style={{
lineHeight: 20,
}}
onContentSizeChange={e =>
console.log(e.nativeEvent.contentSize.height / 20) // prints number of lines
}
/>
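A slightly fuller sketch of the same idea (hooks; the fixed lineHeight of 20 is carried over from the snippet above and must match the style for the math to work):
import React, { useState } from 'react';
import { TextInput } from 'react-native';
const LINE_HEIGHT = 20;
function LineCountingInput() {
  const [lines, setLines] = useState(1);
  return (
    <TextInput
      multiline
      style={{ lineHeight: LINE_HEIGHT }}
      onContentSizeChange={e =>
        // round, since contentSize.height can include fractional padding
        setLines(Math.round(e.nativeEvent.contentSize.height / LINE_HEIGHT))
      }
    />
  );
}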

Trying to load obj & mtl file with Three.js in React Native

Main objective : Load animated models exported from Maya into React Native app
Exported files : obj, mtl & png file
I have setup https://github.com/react-community/react-native-webgl in my React Native project and it is working properly.
Now, when I am trying to load the MTL file using the MTLLoader, I am getting following error:
Can't find variable: document
Apparently, the MTLLoader calls TextureLoader, which internally calls a load function that references 'document'. So what could be the solution to this?
Here are the two files that I am using:
three.js
const THREE = require("three");
global.THREE = THREE;
if (!window.addEventListener)
window.addEventListener = () => { };
// require("three/examples/js/renderers/Projector");
require("three/examples/js/loaders/MTLLoader");
require("three/examples/js/loaders/OBJLoader");
export default THREE;
ThreeView.js
import React, { Component } from "react";
import { StyleSheet, View } from "react-native";
import { WebGLView } from "react-native-webgl";
import THREE from "./three";
import { image } from "src/res/image";
export default class ThreeView extends Component {
requestId: *;
componentWillUnmount() {
cancelAnimationFrame(this.requestId);
}
onContextCreate = (gl: WebGLRenderingContext) => {
const rngl = gl.getExtension("RN");
const { drawingBufferWidth: width, drawingBufferHeight: height } = gl;
const renderer = new THREE.WebGLRenderer({
canvas: {
width,
height,
style: {},
addEventListener: () => { },
removeEventListener: () => { },
clientHeight: height
},
context: gl
});
renderer.setSize(width, height);
renderer.setClearColor(0xffffff, 1);
let camera, scene;
let cube;
function init() {
camera = new THREE.PerspectiveCamera(75, width / height, 1, 1100);
camera.position.y = 150;
camera.position.z = 500;
scene = new THREE.Scene();
var mtlLoader = new THREE.MTLLoader();
mtlLoader.load('female-croupier-2013-03-26.mtl', function (materials) {
materials.preload();
var objLoader = new THREE.OBJLoader();
objLoader.setMaterials(materials);
objLoader.load('female-croupier-2013-03-26.obj', function (object) {
scene.add(object);
}, onLoading, onErrorLoading);
}, onLoading, onErrorLoading);
}
const onLoading = (xhr) => {
console.log((xhr.loaded / xhr.total * 100) + '% loaded');
};
const onErrorLoading = (error) => {
console.log('An error happened', error);
};
const animate = () => {
this.requestId = requestAnimationFrame(animate);
renderer.render(scene, camera);
// cube.rotation.y += 0.05;
gl.flush();
rngl.endFrame();
};
init();
animate();
};
render() {
return (
<View style={styles.container}>
<WebGLView
style={styles.webglView}
onContextCreate={this.onContextCreate}
/>
</View>
);
}
}
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: "#fff",
alignItems: "center",
justifyContent: "center"
},
webglView: {
width: 300,
height: 300
}
});
This error is, as others have said, caused by three.js trying to use browser features that react-native does not have.
I've gotten as far as being able to load the textures (which is the stage you're getting the error from) by monkey-patching the texture loader to use the loader in react-native-webgl. Add this in your init function (right near the top, preferably).
//make sure you have defined renderer and rngl
/*
const renderer = new THREE.WebGLRenderer(...)
const rngl = gl.getExtension("RN");
*/
const loadTexture = async function(url, onLoad, onProgress, onError) {
let textureObject = new THREE.Texture();
console.log("loading",url,'with fancy texture loader');
let properties = renderer.properties.get(textureObject);
var { texture } = await rngl.loadTexture({yflip: false, image: url}); // destructure the result, as in the promise version below
/*
rngl.loadTexture({ image: url })
.then(({ texture }) => {
*/
console.log("Texture [" + url + "] Loaded!")
texture.needsUpdate = true;
properties.__webglTexture = texture;
properties.__webglInit = true;
console.log(texture);
if (onLoad !== undefined) {
//console.warn('loaded tex', texture);
onLoad(textureObject);
}
//});
return textureObject;
}
THREE.TextureLoader.prototype.load = loadTexture;
This solves the problem of loading the textures, and I can see them load in Charles, but they still don't render on the model, so I'm stuck past that point. Technically a correct answer, but you'll be stuck as soon as you've implemented it. I'm hoping you can comment back and tell me you've gotten further.
I had a similar setup and encountered the same issue. My option was to switch to JSONLoader, which doesn't need document to render in react-native. So I just loaded my model into Blender with the three.js add-on, then exported it as JSON. Just check out this process of adding the three.js add-on to Blender:
https://www.youtube.com/watch?v=mqjwgTAGQRY
All the best
this might get you closer:
The GLTF format supports embedding texture images (as base64). If your asset pipeline allows it, you could convert to GLTF and then load into three/react-native.
I had to provide some "window" polyfills for "decodeURIComponent" and "atob" because GLTFLoader uses FileLoader to parse the base64:
I've successfully loaded embedded buffers, but you'll need more polyfills to load textures. TextureLoader uses ImageLoader, which uses document.createElementNS
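A sketch of what such a polyfill might look like (this assumes the base-64 npm package; the details are illustrative, not from the original answer):
import { decode as base64Decode } from 'base-64'; // assumed helper package
// GLTFLoader's FileLoader decodes embedded data: URIs with atob, which may be
// missing from the global scope depending on the RN JS engine.
if (typeof global.atob === 'undefined') {
  global.atob = base64Decode;
}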
You are using the MTLLoader, which uses TextureLoader, and the TextureLoader uses the ImageLoader.
The ImageLoader uses the document.createElementNS() function.
What I did to solve this was to construct the three.js Texture directly:
let texture = new THREE.Texture(
url //URL = a base 64 JPEG string in this case
);
(for the use of Texture check the Texture documentation)
Then I used the Image class from React Native (instead of the three.js Image, which requires the DOM to be constructed) and gave that to the Texture as a property:
import { Image } from 'react-native';
var img = new Image(128, 128);
img.src = url;
texture.image = img; // note: .image is the property the renderer uploads; 'normal' is not a Texture property
And then finally map the texture over the target material:
const mat = new THREE.MeshPhongMaterial();
mat.map = texture;
The React Native documentation explains how the Image element can be used; it supports base64-encoded JPEGs.
Maybe there's a way for you to single out the part where it calls the TextureLoader and replace that part with this answer. Let me know how it works out.
Side note: I haven't tried to display this in my WebGLView yet, but in the logs it looked like normal three.js objects, so it's worth a try.
Use TextureLoader from expo-three
import { NearestFilter } from "three";
import { TextureLoader } from "expo-three";
const textureCache = {}; // module-level cache so repeated loads reuse textures
export function loadTexture(resource) {
  if (textureCache[resource]) {
    return textureCache[resource].clone();
  }
  const texture = new TextureLoader().load(resource);
  texture.magFilter = NearestFilter;
  texture.minFilter = NearestFilter;
  textureCache[resource] = texture;
  return texture;
}
Source: https://github.com/EvanBacon/Expo-Crossy-Road/blob/master/src/Node/Generic.js
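Usage might look like this (the asset path is illustrative):
// expo-three's TextureLoader accepts a require'd Expo asset
const grassTexture = loadTexture(require('./assets/grass.png'));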

How detect tablet landscape on React Native

I want the border margin of screen S to be different on phone and tablet, with variants for tablet landscape and portrait mode.
How do I create different margin dimensions for the phone, tablet portrait, and tablet landscape variants?
For those curious how to do this natively on Android, we just create resource files in the right folders:
values for default
values-sw600dp for tablet default
values-sw600dp-land for tablet landscape
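On the React Native side, a rough equivalent of the sw600dp check can be sketched like this (the 600dp threshold mirrors the resource qualifier above and is an assumption, not an official API):
import { Dimensions } from 'react-native';
const { width, height } = Dimensions.get('window');
const smallestWidth = Math.min(width, height); // analogous to Android's swNNNdp
const isTablet = smallestWidth >= 600;
const isLandscape = width > height;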
The other answers have already addressed the screen detection task. However, there is still the issue of detecting if the code is running on a Tablet device. You can detect that using the react-native-device-info package, in particular its isTablet method. So, as an example, in your component:
// assumes: import DeviceInfo from 'react-native-device-info';
// and: import Orientation from 'react-native-orientation';
constructor(){
super();
this.state = {orientation: 'UNKNOWN'};
this._onOrientationChanged = this._onOrientationChanged.bind(this);
}
_onOrientationChanged(orientation){
this.setState({orientation});
}
componentDidMount(){
Orientation.addOrientationListener(this._onOrientationChanged);
}
componentWillUnmount(){
// remove the same handler that was added in componentDidMount
Orientation.removeOrientationListener(this._onOrientationChanged);
}
render(){
// landscapeTabletStyle, portraitTabletStyle, etc. are assumed to be defined elsewhere
let layoutStyles;
if(DeviceInfo.isTablet()){
layoutStyles = this.state.orientation == 'LANDSCAPE' ? landscapeTabletStyle : portraitTabletStyle; // Basic example, this might get more complex if you account for UNKNOWN or PORTRAITUPSIDEDOWN
}else{
layoutStyles = this.state.orientation == 'LANDSCAPE' ? landscapeStyle : portraitStyle;
}
return (
<View style={[styles.container, layoutStyles]}>
{/* And so on... */}
</View>
);
}
Note that the state holds the UNKNOWN value at the beginning. Have a look at the package's getInitialOrientation() function. I am intentionally leaving that bit out because it simply reads a property that is set when the JS code loads, and I am not sure that satisfies your use case (i.e. if this is not your first screen). What I usually like to do is store the rotation value in a redux store (where I initialize the orientation value to that of getInitialOrientation() and then subscribe only once to the orientation listener).
I think this library will be helpful for you: https://github.com/yamill/react-native-orientation
You can do something like that with it:
Orientation.getOrientation((err,orientation)=> {
console.log("Current Device Orientation: ", orientation);
if(orientation === 'LANDSCAPE') {
//do stuff
} else {
//do other stuff
}
});
// Extract from the root element in our app's index.js
class App extends Component {
_onLayout = event => this.props.appLayout(event.nativeEvent.layout);
render() {
return (
<View onLayout={this._onLayout}>
{/* Subviews... */}
</View>
);
}
}
export const SET_ORIENTATION = 'deviceStatus/SET_ORIENTATION';
export function appLayout(event: {width:number, height:number}):StoreAction {
const { width, height } = event;
const orientation = (width > height) ? 'LANDSCAPE' : 'PORTRAIT';
return { type: SET_ORIENTATION, payload: orientation };
}
Code Copied from Here
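To round out the excerpt, a reducer consuming that action might look like this (a sketch; the reducer is not shown in the original source, so the names are assumptions):
const initialState = { orientation: 'PORTRAIT' };
export function deviceStatusReducer(state = initialState, action) {
  switch (action.type) {
    case SET_ORIENTATION:
      return { ...state, orientation: action.payload };
    default:
      return state;
  }
}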