Trying to load OBJ & MTL files with Three.js in React Native

Main objective: load animated models exported from Maya into a React Native app.
Exported files: .obj, .mtl & .png.
I have set up https://github.com/react-community/react-native-webgl in my React Native project and it is working properly.
Now, when I try to load the MTL file using the MTLLoader, I get the following error:
Can't find variable: document
Apparently, the MTLLoader calls TextureLoader, which internally calls a load function that references document. So what could be the solution to this?
Here are the two files that I am using:
three.js
const THREE = require("three");
global.THREE = THREE;
if (!window.addEventListener)
window.addEventListener = () => { };
// require("three/examples/js/renderers/Projector");
require("three/examples/js/loaders/MTLLoader");
require("three/examples/js/loaders/OBJLoader");
export default THREE;
ThreeView.js
import React, { Component } from "react";
import { StyleSheet, View } from "react-native";
import { WebGLView } from "react-native-webgl";
import THREE from "./three";
import { image } from "src/res/image";
export default class ThreeView extends Component {
requestId: *;
componentWillUnmount() {
cancelAnimationFrame(this.requestId);
}
onContextCreate = (gl: WebGLRenderingContext) => {
const rngl = gl.getExtension("RN");
const { drawingBufferWidth: width, drawingBufferHeight: height } = gl;
const renderer = new THREE.WebGLRenderer({
canvas: {
width,
height,
style: {},
addEventListener: () => { },
removeEventListener: () => { },
clientHeight: height
},
context: gl
});
renderer.setSize(width, height);
renderer.setClearColor(0xffffff, 1);
let camera, scene;
let cube;
function init() {
camera = new THREE.PerspectiveCamera(75, width / height, 1, 1100);
camera.position.y = 150;
camera.position.z = 500;
scene = new THREE.Scene();
var mtlLoader = new THREE.MTLLoader();
mtlLoader.load('female-croupier-2013-03-26.mtl', function (materials) {
materials.preload();
var objLoader = new THREE.OBJLoader();
objLoader.setMaterials(materials);
objLoader.load('female-croupier-2013-03-26.obj', function (object) {
scene.add(object);
}, onLoading, onErrorLoading);
}, onLoading, onErrorLoading);
}
const onLoading = (xhr) => {
console.log((xhr.loaded / xhr.total * 100) + '% loaded');
};
const onErrorLoading = (error) => {
console.log('An error happened', error);
};
const animate = () => {
this.requestId = requestAnimationFrame(animate);
renderer.render(scene, camera);
// cube.rotation.y += 0.05;
gl.flush();
rngl.endFrame();
};
init();
animate();
};
render() {
return (
<View style={styles.container}>
<WebGLView
style={styles.webglView}
onContextCreate={this.onContextCreate}
/>
</View>
);
}
}
const styles = StyleSheet.create({
container: {
flex: 1,
backgroundColor: "#fff",
alignItems: "center",
justifyContent: "center"
},
webglView: {
width: 300,
height: 300
}
});

This error is, as others have said, caused by three.js trying to use browser features that React Native does not have.
I've gotten as far as being able to load the textures (which is the stage you're getting the error from) by monkey-patching the texture loader to use the loader in react-native-webgl. Add this in your init function (preferably right near the top).
//make sure you have defined renderer and rngl
/*
const renderer = new THREE.WebGLRenderer(...)
const rngl = gl.getExtension("RN");
*/
const loadTexture = async function(url, onLoad, onProgress, onError) {
let textureObject = new THREE.Texture();
console.log("loading",url,'with fancy texture loader');
let properties = renderer.properties.get(textureObject);
var texture = await rngl.loadTexture({yflip: false, image: url});
/*
rngl.loadTexture({ image: url })
.then(({ texture }) => {
*/
console.log("Texture [" + url + "] Loaded!")
texture.needsUpdate = true;
properties.__webglTexture = texture;
properties.__webglInit = true;
console.log(texture);
if (onLoad !== undefined) {
//console.warn('loaded tex', texture);
onLoad(textureObject);
}
//});
return textureObject;
}
THREE.TextureLoader.prototype.load = loadTexture;
This solves the problem of loading the textures, and I can see them load in Charles, but they still don't render on the model, so I'm stuck past that point. Technically this is a correct answer, but you'll be stuck again as soon as you've implemented it. I'm hoping you can comment back and tell me you've gotten further.

I had a similar setup and encountered the same issue. My option was to switch to the JSONLoader, which doesn't need document to render in React Native. So I just loaded my model in Blender with the three.js add-on, then exported it as JSON. Just check out this process of adding the three.js add-on to Blender:
https://www.youtube.com/watch?v=mqjwgTAGQRY
All the best
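For reference, loading that exported JSON would look roughly like the sketch below. This assumes an older three.js release that still ships THREE.JSONLoader (it was removed in later versions), and the file name model.json is just an illustrative placeholder:
const loader = new THREE.JSONLoader();
loader.load('model.json', function (geometry, materials) {
  // materials is the array parsed from the Blender JSON export
  const mesh = new THREE.Mesh(geometry, materials);
  scene.add(mesh);
});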

This might get you closer:
The GLTF format supports embedding texture images (as base64). If your asset pipeline allows it, you could convert to GLTF and then load it into three.js in React Native.
I had to provide some window polyfills for decodeURIComponent and atob, because GLTFLoader uses FileLoader to parse the base64.
I've successfully loaded embedded buffers, but you'll need more polyfills to load textures. TextureLoader uses ImageLoader, which uses document.createElementNS.
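A rough sketch of those polyfills plus a GLTFLoader call is below. It assumes the base-64 npm package (mentioned in another answer here) and the same THREE/scene setup as in the question; the model path is illustrative:
import { decode } from 'base-64';

// FileLoader decodes embedded base64 buffers with atob
if (!global.atob) {
  global.atob = decode;
}
// GLTFLoader expects these to exist in a browser-like global scope
global.window = global.window || global;
if (!global.window.decodeURIComponent) {
  global.window.decodeURIComponent = decodeURIComponent;
}

require('three/examples/js/loaders/GLTFLoader');

const gltfLoader = new THREE.GLTFLoader();
gltfLoader.load('model.gltf', gltf => scene.add(gltf.scene), undefined, console.warn);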

You are using the MTLLoader, which uses the TextureLoader, and the TextureLoader uses the ImageLoader.
The ImageLoader uses the document.createElementNS() function.
What I did to solve this was to construct the three.js Texture directly:
let texture = new THREE.Texture(
url //URL = a base 64 JPEG string in this case
);
(for the use of Texture check the Texture documentation)
Then I used the Image class from React Native (instead of the DOM Image, which requires the DOM to be constructed) and handed that to the Texture as a property:
import { Image } from 'react-native';
var img = new Image(128, 128);
img.src = url;
texture.image = img;
And then finally map the texture over the target material:
const mat = new THREE.MeshPhongMaterial();
mat.map = texture;
The React Native documentation explains how the React Native Image element can be used; it supports base64-encoded JPEGs.
Maybe there's a way for you to single out the part where it calls the TextureLoader and replace that part with this answer. Let me know how it works out.
Side note: I haven't tried to display this yet in my WebGLView, but in the logs it looked like normal three.js objects, so it's worth a try.
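Putting those snippets together, one way to wire this in (untested, as noted above) is to monkey-patch TextureLoader.prototype.load so that MTLLoader's texture requests go through a React Native Image instead of a DOM element; this is only a sketch based on the pieces above:
import { Image } from 'react-native';

THREE.TextureLoader.prototype.load = function (url, onLoad) {
  const texture = new THREE.Texture();
  const img = new Image(128, 128); // React Native Image, no DOM required
  img.src = url;                   // e.g. a base64 JPEG string
  texture.image = img;
  texture.needsUpdate = true;
  if (onLoad) onLoad(texture);
  return texture;
};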

Use TextureLoader from expo-three
import { TextureLoader } from "expo-three";
import { NearestFilter } from "three";

const textureCache = {};

export function loadTexture(resource) {
if (textureCache[resource]) {
return textureCache[resource].clone();
}
const texture = new TextureLoader().load(resource);
texture.magFilter = NearestFilter;
texture.minFilter = NearestFilter;
textureCache[resource] = texture;
return texture;
}
Source: https://github.com/EvanBacon/Expo-Crossy-Road/blob/master/src/Node/Generic.js
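A hypothetical call site for that helper, assuming THREE is in scope and the asset path is illustrative:
const material = new THREE.MeshBasicMaterial({
  // first call loads and caches the texture; later calls return a clone from the cache
  map: loadTexture(require('./assets/crate.png')),
});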

Related

undefined is not an object (evaluating '_expoThree.AR.TrackingConfiguration')

I am trying to build an AR app with React Native and Expo, and I followed this tutorial (https://blog.expo.dev/arkit-in-react-native-tutorial-the-basics-9f839539f0b9). Everything worked great until I got to the AR camera section. When I added this line of code - arTrackingConfiguration={AR.TrackingConfigurations.World} -
my app crashed and I got this error:
TypeError: undefined is not an object (evaluating '_expoThree.AR.TrackingConfiguration')
this is my code:
import ExpoTHREE, { THREE, AR as ThreeAR} from 'expo-three';
import { View as GraphicsView } from 'expo-graphics';
onContextCreate = async ({gl, scale: pixelRatio, width, height, }) => {
// Insert 3D universe
this.renderer = new ExpoTHREE.Renderer({
gl,
pixelRatio,
width,
height,
});
this.scene = new THREE.Scene();
this.camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 1000);
const geometry = new THREE.BoxGeometry(0.1, 0.1, 0.1);
const material = new THREE.MeshPhongMaterial({
color: 0xff00ff,
});
this.cube = new THREE.Mesh(geometry, material);
this.cube.position.z = -0.4
this.scene.add(this.cube);
this.scene.add(new THREE.AmbientLight(0xffffff));
}
onRender = () => {
this.renderer.render(this.scene, this.camera);
};
return (
<GraphicsView
style={{ flex: 1 }}
onContextCreate={this.onContextCreate}
onRender={this.onRender}
isArEnabled
arTrackingConfiguration={AR.TrackingConfiguration.World}
/>
);
}
I saw some errors that are the same, but the solutions did not match my situation. I think the AR variable from expo-three is not included, or the AR functions are not updated in the package; I didn't find a suitable solution for my error. If someone has a solution to my problem, please reach out to me. Thanks to everyone :)

React Native & Expo - onContextCreate function not calling when application ran

I don't know why, and there is no error shown in debugger-ui. I only see a white screen on my iPhone with no errors. I also added a console.log inside the onContextCreate function and there is no message, so the onContextCreate function is not triggered. Here is my code. Any help is appreciated.
import { View as GraphicsView } from 'expo-graphics';
import ExpoTHREE, { THREE } from 'expo-three';
import React from 'react';
export default class App extends React.Component {
UNSAFE_componentWillMount() {
THREE.suppressExpoWarnings();
}
render() {
// Create an `ExpoGraphics.View` covering the whole screen, tell it to call our
// `onContextCreate` function once it's initialized.
return (
<GraphicsView
style={{backgroundColor: 'yellow'}}
onContextCreate={this.onContextCreate}
onRender={this.onRender}
/>
);
}
// This is called by the `ExpoGraphics.View` once it's initialized
onContextCreate = async ({
gl,
canvas,
width,
height,
scale: pixelRatio,
}) => {
console.log('onContextCreate ran...');
this.renderer = new ExpoTHREE.Renderer({ gl, pixelRatio, width, height });
this.renderer.setClearColor(0xffffff)
this.scene = new THREE.Scene();
this.camera = new THREE.PerspectiveCamera(75, width / height, 0.1, 1000);
this.camera.position.z = 5;
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshPhongMaterial({
color: 0xff0000,
});
this.cube = new THREE.Mesh(geometry, material);
this.scene.add(this.cube);
this.scene.add(new THREE.AmbientLight(0x404040));
const light = new THREE.DirectionalLight(0xffffff, 0.5);
light.position.set(3, 3, 3);
this.scene.add(light);
};
onRender = delta => {
this.cube.rotation.x += 3.5 * delta;
this.cube.rotation.y += 2 * delta;
this.renderer.render(this.scene, this.camera);
};
}
I realized that when I close the remote debugger in Expo, my code works. Why this happens I don't know. It would be good if someone else could explain it, but it works when I close remote debugging in Expo...

How to create a PDF and preview it in Flutter

I'm trying to create a PDF and preview it.
I used the pdf plugin for creating the PDF, the path_provider plugin for saving the PDF to the device's storage, and
the flutter_full_pdf_viewer plugin for viewing the PDF.
I have followed create-a-pdf-in-flutter.
But I'm getting errors in the code: if I import 'package:pdf/widgets.dart';, the material elements from import 'package:flutter/material.dart'; stop working.
What am I doing wrong?
Code:
import 'dart:io';
import 'package:flutter/material.dart';
import 'package:pdf/pdf.dart';
import 'package:path_provider/path_provider.dart';
import 'package:pdfdemo/pages/pdf_viewer.dart';
//import 'package:pdf/widgets.dart'
Variable:
final pdf = Document();
Creating pdf file page:
return Scaffold(
appBar: AppBar(title: Text("PDF CREATE"),
actions: <Widget>[
IconButton(
icon: Icon(Icons.save),
onPressed: () => savePdfFile(),
)
],),
body: pdf.addPage(Page(
pageFormat: PdfPageFormat.a4,
build: (BuildContext context) {
return Center(
child: Text("Hello Flutter"),
);
})),
);
Saving pdf file to the device's location:
savePdfFile()async{
final dir = await getExternalStorageDirectory();
print("Directoryyyyyyyyy:${dir.path}");
final String path = "${dir.path}/example.pdf";
final file = File(path);
await file.writeAsBytes(pdf.save());
Navigator.of(context).push(
MaterialPageRoute(builder: (_) => PgfViewerPage(path: path))
);
}
The problem in your code is that you are using the material library and the PDF library at the same time. The widgets provided by the pdf plugin don't work in the regular Scaffold from Flutter; you build your PDF with them as shown in the package's example. To get the PDF file you need to generate it first and then pass it to the screen where you want to display it.
Try it like this, it worked for me
// Imports needed for this snippet (note: no 'package:flutter/material.dart' here - see below)
import 'dart:io';
import 'package:path_provider/path_provider.dart';
import 'package:pdf/pdf.dart';
import 'package:pdf/widgets.dart';

Future<File> createPDF() async {
final Document pdf = Document();
pdf.addPage(
//Your PDF design here with the widget system of the plugin
MultiPage(
pageFormat:
PdfPageFormat.letter.copyWith(marginBottom: 1.5 * PdfPageFormat.cm),
crossAxisAlignment: CrossAxisAlignment.start,
theme: Theme(
tableHeader: TextStyle(fontSize: 8.0),
tableCell: TextStyle(fontSize: 8.0),
),
header: (Context context) {
if (context.pageNumber == 1) {
return null;
}
return Container(
alignment: Alignment.centerRight,
margin: const EdgeInsets.only(bottom: 3.0 * PdfPageFormat.mm),
padding: const EdgeInsets.only(bottom: 3.0 * PdfPageFormat.mm),
decoration: const BoxDecoration(
border:
BoxBorder(bottom: true, width: 0.5, color: PdfColors.grey)),
child: Text('VCR',
style: Theme.of(context)
.defaultTextStyle
.copyWith(color: PdfColors.grey)));
},
));
final output = await getTemporaryDirectory();
final file = File('${output.path}/example.pdf');
file.writeAsBytesSync(pdf.save());
return file;
}
After you have created the PDF, display it in a Scaffold like this:
import 'package:flutter/material.dart';
import 'package:flutter_full_pdf_viewer/full_pdf_viewer_scaffold.dart';
class PDFScreen extends StatelessWidget {
final String pathPDF;
PDFScreen({this.pathPDF});
@override
Widget build(BuildContext context) {
return PDFViewerScaffold(
appBar: AppBar(
title: Text("Document"),
actions: <Widget>[
IconButton(
icon: Icon(Icons.share),
onPressed: () {},
),
],
),
path: pathPDF);
}
}
You get pathPDF from the first function by calling file.absolute.path.
IMPORTANT: the function and the PDFScreen must be in separate files! In the file where you implement the function for generating the PDF, you MUST NOT import 'package:flutter/material.dart';
Hope this helps.
import 'dart:io';
import 'package:image_gallery_saver/image_gallery_saver.dart';
import 'package:intl/intl.dart' as intl;
import 'package:permission_handler/permission_handler.dart';
import 'package:screenshot/screenshot.dart';
import 'dart:typed_data';
import 'package:syncfusion_flutter_pdf/pdf.dart';
import 'package:path_provider/path_provider.dart';
import 'package:open_file/open_file.dart';
// Takes a screenshot of the widget, saves it in a Uint8List, and creates a PDF from that Uint8List
// Call this function where needed:
openPDFofSS();
// Add a controller
ScreenshotController screenshotController = ScreenshotController();
// Wrap the widget you want to capture with Screenshot and pass it the controller:
Screenshot(
controller: screenshotController,
child: Text("replace child with the widget you want to convert in pdf"),
),
// Paste these functions:
Future<void> openPDFofSS() async {
await screenshotController.capture().then((Uint8List image) {
//Capture Done
setState(() {
pdfLoading = true;
//save screenshot into Uint8List image
_imageFile = image;
//convert Unit8List image into PDF
_convertImageToPDF();
saveImage(_imageFile);
});
}).catchError((onError) {
print(onError);
});
}
Future<void> _convertImageToPDF() async {
//Create the PDF document
PdfDocument document = PdfDocument();
//Add the page
PdfPage page = document.pages.add();
//Load the image.
final PdfImage image = PdfBitmap(_imageFile);
//draw image to the first page
page.graphics.drawImage(
image, Rect.fromLTWH(-20, -20, page.size.width - 50, page.size.height));
//Save the document
List<int> bytes = document.save();
//Dispose the document.
document.dispose();
//Get external storage directory
Directory directory = await getApplicationDocumentsDirectory();
//Get directory path
String path = directory.path;
//Create an empty file to write PDF data
File file = File('$path/Output.pdf');
//Write PDF data
await file.writeAsBytes(bytes, flush: true);
print(path);
//Open the PDF document in mobile
OpenFile.open('$path/Output.pdf');
setState(() {
pdfLoading = false;
});
}
Future<String> saveImage(Uint8List image) async {
await [Permission.storage].request();
final result = await ImageGallerySaver.saveImage(image, name: 'autosmart');
return result['filePath'];
}

Save Sketched lines in LocalStorage

Sorry I'm posting this here.
I want to save the info of each line I drew on the canvas (the save action would be called onChange),
so I can retrieve this data and draw it on the canvas again in case the user changes screens or something.
I'm using expo-pixi to draw an image and sketch over it.
onChangeAsync = async (param) => {
// here, on change, I want to get the line information and store it
}
onLayout = async ({
nativeEvent: {
layout: { width, height },
},
}) => {
this.setState({
layoutWidth: width,
layoutHeight: height,
})
this.onReady();
}
onReady = async () => {
const { layoutWidth, layoutHeight, points } = this.state;
this.sketch.graphics = new PIXI.Graphics();
if (this.sketch.stage) {
if (layoutWidth && layoutHeight) {
const background = await PIXI.Sprite.fromExpoAsync(this.props.image);
background.width = layoutWidth * scaleR;
background.height = layoutHeight * scaleR;
this.sketch.stage.addChild(background);
this.sketch.renderer._update();
}
}
};
// The Sketch component is pretty much the same as the example that comes with the lib
<Sketch
ref={ref => (this.sketch = ref)}
style={styles.sketch}
strokeColor={this.state.strokeColor}
strokeWidth={this.state.strokeWidth}
strokeAlpha={1}
onChange={this.onChangeAsync}
onReady={this.onReady}
/>
Does anyone have any clue? I'm kind of desperate.
Thanks

React Native atob() / btoa() not working without remote JS debugging

I have a test app in React Native, and everything works fine when I have remote JS debugging enabled. It works fine on the device (from Xcode) and in the simulator, after running:
react-native run-ios
The problem is that if I stop remote JS debugging, the login test no longer works. The login logic is very simple: I'm making a fetch to an API to test a login, and the API endpoint is over HTTPS.
What do I need to change?
Update: this code works perfectly with remote JS debugging enabled; if I disable it, it no longer works.
/**
* Sample React Native App
* https://github.com/facebook/react-native
* @flow
*/
import React, { Component } from 'react'
import {
AppRegistry,
StyleSheet,
View,
Button,
Alert
} from 'react-native'
export default class MyClass extends Component {
constructor (props) {
super(props)
this.testFetch = this.testFetch.bind(this)
}
async testFetch () {
const email = 'email@example.com'
const password = '123456'
try {
const response = await fetch('https://www.example.com/api/auth/login', {
/* eslint no-undef: 0 */
method: 'POST',
headers: {
'Accept': 'application/json' /* eslint quote-props: 0 */,
'Content-Type': 'application/json',
'Authorization': 'Basic ' + btoa(email + ':' + password)
}
})
Alert.alert('Error fail!', 'Fail')
console.log(response)
} catch (error) {
Alert.alert('Error response!', 'Ok')
}
}
render () {
return (
<View style={styles.container}>
<Button
onPress={this.testFetch}
title="Test me!"
/>
</View>
)
}
}
const styles = StyleSheet.create({
container: {
flex: 1,
justifyContent: 'center',
alignItems: 'center',
backgroundColor: '#F5FCFF'
},
welcome: {
fontSize: 20,
textAlign: 'center',
margin: 10
},
instructions: {
textAlign: 'center',
color: '#333333',
marginBottom: 5
}
})
AppRegistry.registerComponent('testingReactNative', () => MyClass)
Thanks.
That's the way I fixed it. As @chemitaxis suggests, add the base-64 module from npm:
npm i -S base-64
Based on that, I propose a couple of ways to use it:
Importing it in the files where you need it
Then you can import the encode and decode methods using aliases, this way:
import {decode as atob, encode as btoa} from 'base-64'
Of course, using aliases is optional.
Polyfill way
You can set atob and btoa as global variables in React Native. Then you won't need to import them in each file where you need them. You have to add this code:
import {decode, encode} from 'base-64'
if (!global.btoa) {
global.btoa = encode;
}
if (!global.atob) {
global.atob = decode;
}
You need to place it at the beginning of your index.js so that it is loaded before any other file uses atob or btoa.
I suggest you copy it into a separate file (let's say base64Polyfill.js) and then import it in index.js.
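For example, with that snippet saved as base64Polyfill.js next to your entry point, index.js could start like this (the registered component name is taken from the question's code; the App import is illustrative):
// index.js - load the polyfill before anything that calls atob/btoa
import './base64Polyfill';
import { AppRegistry } from 'react-native';
import App from './App'; // illustrative app root

AppRegistry.registerComponent('testingReactNative', () => App);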
Here you go (https://sketch.expo.io/BktW0xdje). Create a separate component (e.g. Base64.js), import it, and it's ready to use. For instance: Base64.btoa('123');
// @flow
// Inspired by: https://github.com/davidchambers/Base64.js/blob/master/base64.js
const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=';
const Base64 = {
btoa: (input:string = '') => {
let str = input;
let output = '';
for (let block = 0, charCode, i = 0, map = chars;
str.charAt(i | 0) || (map = '=', i % 1);
output += map.charAt(63 & block >> 8 - i % 1 * 8)) {
charCode = str.charCodeAt(i += 3/4);
if (charCode > 0xFF) {
throw new Error("'btoa' failed: The string to be encoded contains characters outside of the Latin1 range.");
}
block = block << 8 | charCode;
}
return output;
},
atob: (input:string = '') => {
let str = input.replace(/=+$/, '');
let output = '';
if (str.length % 4 == 1) {
throw new Error("'atob' failed: The string to be decoded is not correctly encoded.");
}
for (let bc = 0, bs = 0, buffer, i = 0;
buffer = str.charAt(i++);
~buffer && (bs = bc % 4 ? bs * 64 + buffer : buffer,
bc++ % 4) ? output += String.fromCharCode(255 & bs >> (-2 * bc & 6)) : 0
) {
buffer = chars.indexOf(buffer);
}
return output;
}
};
export default Base64;
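Usage, as mentioned above (the file path is illustrative):
import Base64 from './Base64';

const encoded = Base64.btoa('123');   // "MTIz"
const decoded = Base64.atob(encoded); // "123"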
The main part of your question has been answered, but I see there's still some uncertainty about why it works with remote debugging enabled.
When debugging remotely, the JavaScript code is actually running in Chrome, and the diffs to the virtual DOM are communicated to the native app over a WebSocket.
http://facebook.github.io/react-native/docs/javascript-environment
atob and btoa are available in the context of the browser, and that's why it works there.
When you stop debugging, though, the JavaScript is again interpreted in a process on your device or simulator, which doesn't have access to functions that are provided by the browser.
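If you want to see this for yourself, a quick startup check makes the difference visible (purely illustrative):
// With remote debugging on (Chrome) these log "function"; on the device's JS engine,
// without a polyfill, they typically log "undefined".
console.log('atob:', typeof global.atob, 'btoa:', typeof global.btoa);
console.log('document:', typeof global.document);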