"Coroutine parameter must be passed by value for safe access" how can I do it? - c++-winrt

I made a C++/WinRT module that exposes Windows.Storage.Pickers.FileOpenPicker and FileSavePicker to React Native for Windows. This is my FilePicker.h:
#pragma once
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Storage.h>
#include <winrt/Windows.Storage.Pickers.h>
#include "pch.h"
#include "JSValue.h"
#include "NativeModules.h"

using namespace std;
using namespace winrt;
using namespace winrt::Microsoft::ReactNative;
using namespace Windows::Foundation;
using namespace Windows::Storage;
using namespace Windows::Storage::Pickers;
using namespace Windows::UI::Xaml;

namespace FilePicker
{
    REACT_MODULE(Panel);
    struct Panel
    {
        REACT_METHOD(Open, L"open");
        fire_and_forget Open(wchar_t ext[], React::ReactPromise<string> promise) noexcept
        {
            wchar_t str[32] = L".";
            wcscat_s(str, 32, ext);
            FileOpenPicker openPicker;
            openPicker.ViewMode(PickerViewMode::List);
            openPicker.SuggestedStartLocation(PickerLocationId::DocumentsLibrary);
            openPicker.FileTypeFilter().ReplaceAll({ str });
            StorageFile file = co_await openPicker.PickSingleFileAsync();
            if (file == nullptr) {
                promise.Reject("No file selected.");
            } else {
                promise.Resolve(to_string(file.Path()));
            }
        }

        REACT_METHOD(Save, L"save");
        fire_and_forget Save(wchar_t ext[], wchar_t content[]) noexcept
        {
            wchar_t str[32] = L".";
            wcscat_s(str, 32, ext);
            FileSavePicker savePicker;
            savePicker.SuggestedStartLocation(PickerLocationId::DocumentsLibrary);
            savePicker.FileTypeChoices().Insert(L"", single_threaded_vector<hstring>({ str }));
            StorageFile file = co_await savePicker.PickSaveFileAsync();
            co_await FileIO::WriteTextAsync(file, content);
        }
    };
}
I linked it to a test RNW project whose App.tsx is this:
import React from 'react';
import { Button, Text, View } from 'react-native';
import { open, save } from 'react-native-file-panel';

export default class App extends React.Component<undefined, { text: string }> {
  constructor(props: undefined) {
    super(props);
    this.state = { text: '' };
  }

  render() {
    return (
      <View>
        <Button title='SAVE' onPress={ () => save('txt', 'The quick fox jumps over the lazy dog.') } />
        <Button title='OPEN' onPress={ () => open('txt').then((content: string) => this.setState({ text: content })) } />
        <Text>{ this.state.text }</Text>
      </View>
    );
  }
}
But when I built it, this error occurred:
error C2338: Coroutine parameter must be passed by value for safe access: void __cdecl winrt::Microsoft::ReactNative::ValidateCoroutineArg<struct winrt::fire_and_forget,wchar_t*>(void) noexcept
I tried modifying the code and searching online, but nothing helped.
Does anyone know the solution?
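For reference, the assertion points at the wchar_t[] parameters: they decay to raw pointers, which the coroutine cannot safely copy into its frame. A minimal sketch of Open with owning, by-value string parameters, assuming the REACT_METHOD serializer accepts std::wstring (not verified against this project):

REACT_METHOD(Open, L"open");
fire_and_forget Open(std::wstring ext, React::ReactPromise<std::string> promise) noexcept
{
    // std::wstring is copied into the coroutine frame, so the static_assert is satisfied.
    std::wstring str = L"." + ext;  // e.g. ".txt"
    FileOpenPicker openPicker;
    openPicker.ViewMode(PickerViewMode::List);
    openPicker.SuggestedStartLocation(PickerLocationId::DocumentsLibrary);
    openPicker.FileTypeFilter().ReplaceAll({ hstring{ str } });
    StorageFile file = co_await openPicker.PickSingleFileAsync();
    if (file == nullptr) {
        promise.Reject("No file selected.");
    } else {
        promise.Resolve(to_string(file.Path()));
    }
}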

Related

How to leave existing class attribute on image element - now it is being moved to a generated enclosing span

Background: Trying to use ckeditor5 as a replacement for my homegrown editor in a non-invasive way - meaning without changing my edited content or its class definitions. Would like to have WYSIWYG in the editor. Using django_ckeditor_5 as a base with my own ckeditor5 build that includes ckeditor5-inspector and my extraPlugins and custom CSS. This works nicely.
Problem: When I load the following HTML into ClassicEditor (edited textarea.value):
<p>Text with inline image: <img class="someclass" src="/media/uploads/some.jpeg"></p>
in the editor view area, browser-inspection of the DOM shows:
...
<p>Text with an inline image:
<span class="image-inline ck-widget someclass ck-widget_with-resizer" contenteditable="false">
<img src="/media/uploads/some.jpeg">
<div class="ck ck-reset_all ck-widget__resizer ck-hidden">
<div ...></div></span></p>
...
Because the "someclass" class has been removed from the <img> and moved to the enclosing <span>'s class attribute, my stylesheets are not able to size this image element as they did before editing.
If, within the ckeditor5 view, I edit the element using the browser inspector 'by hand' and add class="someclass" back to the image, ckeditor5 displays my page as I'd expect, with "someclass" applied and with the editing frame/tools also there. Switching to source editing and back shows the class="someclass" on the <img> and keeps it there after switching back to document editing mode.
(To get all this, I enabled the GeneralHtmlSupport plugin in the editor config with all allowed per instructions, and that seems to work fine.) I also added the following simple plugin:
export default class Extend extends Plugin {
    static get pluginName() {
        return 'Extend';
    }

    #updateSchema() {
        const schema = this.editor.model.schema;
        schema.extend('imageInline', {
            allowAttributes: ['class']
        });
    }

    init() {
        const editor = this.editor;
        this.#updateSchema();
    }
}
to extend the imageInline model hoping that would make the Image plugin keep this class attribute.
This is the part where I need some direction on how to proceed - what should be added/modified in the Image plugin or in my Extend plugin to keep the class attribute on the <img> element while editing - basically, to fulfill the WYSIWYG goal?
The following version does not rely on GeneralHtmlSupport but creates an imageClassAttribute model element and uses that to convert only the image class attribute and place it on the imageInline model view widget element.
import Plugin from '@ckeditor/ckeditor5-core/src/plugin';

export default class Extend extends Plugin {
    static get pluginName() {
        return 'Extend';
    }

    #updateSchema() {
        const schema = this.editor.model.schema;
        schema.register( 'imageClassAttribute', {
            isBlock: false,
            isInline: false,
            isObject: true,
            isSelectable: false,
            isContent: true,
            allowWhere: 'imageInline',
        });
        schema.extend('imageInline', {
            allowAttributes: ['imageClassAttribute']
        });
    }

    init() {
        const editor = this.editor;
        this.#updateSchema();
        this.#setupConversion();
    }

    #setupConversion() {
        const editor = this.editor;
        const t = editor.t;
        const conversion = editor.conversion;
        conversion.for( 'upcast' )
            .attributeToAttribute({
                view: 'class',
                model: 'imageClassAttribute'
            });
        conversion.for( 'dataDowncast' )
            .attributeToAttribute({
                model: 'imageClassAttribute',
                view: 'class'
            });
        conversion.for( 'editingDowncast' ).add( // Custom conversion helper
            dispatcher =>
                dispatcher.on( 'attribute:imageClassAttribute:imageInline', (evt, data, { writer, consumable, mapper }) => {
                    if ( !consumable.consume(data.item, evt.name) ) {
                        return;
                    }
                    const imageContainer = mapper.toViewElement(data.item);
                    const imageElement = imageContainer.getChild(0);
                    if ( data.attributeNewValue !== null ) {
                        writer.setAttribute('class', data.attributeNewValue, imageElement);
                    } else {
                        writer.removeAttribute('class', imageElement);
                    }
                })
        );
    }
}
Well, Mr. Nose Tothegrind found two solutions after digging through the ckeditor5 code; here's the first one. This extension Plugin restores all image attributes that are collected by GeneralHtmlSupport. It can be imported and added to a custom ckeditor5 build's app.js file by adding config.extraPlugins = [ Extend ]; before the editor.create(...) statement (see the sketch after the code below).
import Plugin from '@ckeditor/ckeditor5-core/src/plugin';
import GeneralHtmlSupport from '@ckeditor/ckeditor5-html-support/src/generalhtmlsupport';

export default class Extend extends Plugin {
    static get pluginName() {
        return 'Extend';
    }

    static get requires() {
        return [ GeneralHtmlSupport ];
    }

    init() {
        const editor = this.editor;
        this.#setupConversion();
    }

    #setupConversion() {
        const editor = this.editor;
        const t = editor.t;
        const conversion = editor.conversion;
        conversion.for( 'editingDowncast' ).add( // Custom conversion helper
            dispatcher =>
                dispatcher.on( 'attribute:htmlAttributes:imageInline', (evt, data, { writer, mapper }) => {
                    const imageContainer = mapper.toViewElement(data.item);
                    const imageElement = imageContainer.getChild(0);
                    if ( data.attributeNewValue !== null ) {
                        const newValue = data.attributeNewValue;
                        if ( newValue.classes ) {
                            writer.setAttribute('class', newValue.classes.join(' '), imageElement);
                        }
                        if ( newValue.attributes ) {
                            for (const name of Object.keys(newValue.attributes)) {
                                writer.setAttribute( name, newValue.attributes[name], imageElement);
                            }
                        }
                    } else {
                        writer.removeAttribute('class', imageElement);
                    }
                })
        );
    }
}
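For completeness, a minimal sketch of that wiring in a custom build's app.js (import paths and the element selector are assumptions based on a typical ckeditor5 source build):

import ClassicEditor from '@ckeditor/ckeditor5-editor-classic/src/classiceditor';
import Extend from './extend'; // the plugin above; the path is an assumption

const config = {
    // ...your existing editor config...
};
config.extraPlugins = [ Extend ]; // register Extend before creating the editor

ClassicEditor
    .create( document.querySelector( '#editor' ), config )
    .catch( error => console.error( error ) );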

How do we use Flutterwave payment in react native?

I want to integrate Flutterwave in my React Native application. I downloaded their npm package, flutterwave-react-native, and followed their tutorial, but still can't get it to work. I'm using their sample snippet from GitHub and I'm getting an error that says:
this.usePaymentLink is not a function
I searched everywhere but couldn't find where this.usePaymentLink was defined. You can check out my snippet and tell me what I missed and what this.usePaymentLink might look like.
import React from 'react';
import {View, TouchableOpacity} from 'react-native';
import {FlutterwaveInit} from 'flutterwave-react-native';

class MyCart extends React.Component {
  abortController = null;

  componentWillUnmount() {
    if (this.abortController) {
      this.abortController.abort();
    }
  }

  handlePaymentInitialization = () => {
    this.setState({
      isPending: true,
    }, async () => {
      // set abort controller
      this.abortController = new AbortController();
      try {
        // initialize payment
        const paymentLink = await FlutterwaveInit(
          {
            tx_ref: generateTransactionRef(),
            authorization: '[merchant public key]',
            amount: 100,
            currency: 'USD',
            customer: {
              email: 'customer-email@example.com',
            },
            payment_options: 'card',
          },
          this.abortController
        );
        // use payment link
        return this.usePaymentLink(paymentLink);
      } catch (error) {
        // do nothing if our payment initialization was aborted
        if (error.code === 'ABORTERROR') {
          return;
        }
        // handle other errors
        this.displayErrorMessage(error.message);
      }
    });
  }

  render() {
    const {isPending} = this.state;
    return (
      <View>
        ...
        <TouchableOpacity
          style={[
            styles.paymentbutton,
            isPending ? styles.paymentButtonBusy : {}
          ]}
          disabled={isPending}
          onPress={this.handlePaymentInitialization}
        >
          Pay $100
        </TouchableOpacity>
      </View>
    )
  }
}
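(For what it's worth, usePaymentLink is a helper the sample never defines; a minimal sketch of what it might do, assuming you simply hand the returned hosted payment page to React Native's Linking API:)

import { Linking } from 'react-native'; // add to the imports at the top of the file

// Hypothetical helper, defined as a class property inside MyCart:
usePaymentLink = async (paymentLink) => {
  this.setState({ isPending: false });
  if (await Linking.canOpenURL(paymentLink)) {
    await Linking.openURL(paymentLink); // open the hosted payment page in the browser
  } else {
    this.displayErrorMessage('Cannot open payment link: ' + paymentLink);
  }
};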
I had been trying to get this working on Expo, and finally got a breakthrough.
// I made some small corrections before I could get it running.
// This is the code directly from their npm/GitHub readme:
import {PayWithFlutterwave} from 'flutterwave-react-native';

<PayWithFlutterwave
  ...
  onRedirect={handleOnRedirect}
  options={{
    tx_ref: transactionReference,
    authorization: '[merchant public key]',
    customer: {
      email: 'customer-email@example.com'
    },
    amount: 2000,
    currency: 'NGN',
    payment_options: 'card'
  }}
/>
My corrections:
First of all, handleOnRedirect must be a defined function.
Secondly, I removed the three dots (...) before the handleOnRedirect prop.
Then I created a function to generate a randomized reference number.
Then I pasted my public Flutterwave account key in place of '[merchant public key]'.
I also pasted my Flutterwave account email in place of 'customer-email@example.com'.
Here is the working version:
import {PayWithFlutterwave} from 'flutterwave-react-native';

const handleOnRedirect = () => {
  console.log('sadi')
}

const generateRef = (length) => {
  var a = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890".split("");
  var b = [];
  for (var i = 0; i < length; i++) {
    var j = (Math.random() * (a.length - 1)).toFixed(0);
    b[i] = a[j];
  }
  return b.join("");
}

<PayWithFlutterwave
  onRedirect={handleOnRedirect}
  options={{
    tx_ref: generateRef(11),
    authorization: 'MY_PUBLIC_KEY',
    customer: {
      email: 'user@gmail.com'
    },
    amount: 2000,
    currency: 'NGN',
    payment_options: 'card'
  }}
/>

React navigation prevent double push()

I'm building an app with react-navigation-4.2.1. The app has multiple stack navigators, so there are lots of navigation.push('Routename') calls.
The trouble is that when the control surface (e.g. a TouchableOpacity) is tapped rapidly multiple times (the first one, and the rest during the screen transition), I end up pushing multiple screens onto the stack. Is there a way to restrict the surface to the first tap/call of push()?
The component below is what I use to make things touchable; it handles multiple touches within a small period of time.
Use it instead of TouchableOpacity: wrap anything you want with this component and it will be touchable.
<SafeTouch
onPress={...}
>
<Text> hey! im a touchable text now</Text>
</SafeTouch>
The component below is written in TypeScript.
Every touch within 300 ms of the first touch will be ignored (that is what helps with your problem).
import * as React from 'react'
import { TouchableOpacity } from 'react-native'

interface ISafeTouchProps {
  onPress: () => void
  onLongPress?: () => void
  onPressIn?: () => void
  onPressOut?: () => void,
  activeOpacity?: number,
  disabled?: boolean,
  style: any
}

export class SafeTouch extends React.PureComponent<ISafeTouchProps> {
  public static defaultProps: ISafeTouchProps = {
    onPress: () => { },
    onLongPress: () => { },
    onPressIn: () => { },
    onPressOut: () => { },
    disabled: false,
    style: null
  }

  private isTouchValid: boolean = true
  private touchTimeout: any = null

  public constructor(props: ISafeTouchProps) {
    super(props)
    { // Binding methods
      this.onPressEvent = this.onPressEvent.bind(this)
    }
  }

  public render(): JSX.Element {
    return (
      <TouchableOpacity
        onPress={this.onPressEvent}
        onLongPress={this.props.onLongPress}
        onPressIn={this.props.onPressIn}
        onPressOut={this.props.onPressOut}
        activeOpacity={this.props.activeOpacity}
        disabled={this.props.disabled}
        style={[{minWidth: 24, minHeight: 24}, this.props.style]}
      >
        {
          this.props.children
        }
      </TouchableOpacity>
    )
  }

  public componentWillUnmount() {
    this.clearTimeoutIfExists()
  }

  private onPressEvent(): void {
    requestAnimationFrame(() => {
      if (this.isTouchValid === false) {
        return
      }
      this.isTouchValid = false
      this.clearTimeoutIfExists()
      this.touchTimeout = setTimeout(() => {
        this.isTouchValid = true
      }, 300)
      if (typeof this.props.onPress === 'function') {
        this.props.onPress()
      }
    })
  }

  private clearTimeoutIfExists(): void {
    if (this.touchTimeout != null) {
      clearTimeout(this.touchTimeout)
      this.touchTimeout = null
    }
  }
}
This is the proper behavior for push() and it is not a bug. If you want to avoid the duplicate screen on a double tap, you can just use navigation.navigate.
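For example (route name is illustrative):

// push() always adds a new entry, even if 'Routename' is already focused:
navigation.push('Routename');

// navigate() does nothing if the focused route is already 'Routename',
// so a rapid double tap does not stack duplicates:
navigation.navigate('Routename');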
To avoid pushing the screen more than once when tapping the same button within a short span of time, I created a generic hook that avoids running a function more than once (accepting an interval after which it is allowed to run again):
import React from 'react';

export const useCallOnce = <T extends unknown[], K>(
  fn: (...args: T) => K,
  allowAfter?: number,
) => {
  const ref = React.useRef<number | undefined>();
  const resultFn = (...args: T) => {
    const now = new Date().getTime();
    if (!ref.current || (allowAfter && ref.current + allowAfter < now)) {
      ref.current = now;
      return fn(...args);
    }
  };
  return resultFn;
};
Then, you can just call it as in the following example:
const navigation = useNavigation<NativeStackNavigationProp<{ ExampleScreen: undefined }>>();
const push = useCallOnce(() => navigation.push('ExampleScreen'), 500);
// just call on the button click event as: onSomeEvent={() => push()}
You can create a generic button component that accepts the push parameters and uses the hook above, similar to the example, and use this button whenever you want a button that navigates between pages.
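A minimal sketch of such a button (component name, props, and the import path for the hook are illustrative):

import React from 'react';
import { Button } from 'react-native';
import { useNavigation } from '@react-navigation/native';
import { NativeStackNavigationProp } from '@react-navigation/native-stack';
import { useCallOnce } from './useCallOnce'; // the hook above; path is an assumption

type PushOnceButtonProps = { title: string; route: string; allowAfter?: number };

export const PushOnceButton = ({ title, route, allowAfter = 500 }: PushOnceButtonProps) => {
  const navigation = useNavigation<NativeStackNavigationProp<Record<string, undefined>>>();
  // navigation.push runs at most once per allowAfter milliseconds
  const push = useCallOnce(() => navigation.push(route), allowAfter);
  return <Button title={title} onPress={() => push()} />;
};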

Trying to load obj & mtl file with Three.js in React Native

Main objective : Load animated models exported from Maya into React Native app
Exported files : obj, mtl & png file
I have set up https://github.com/react-community/react-native-webgl in my React Native project and it is working properly.
Now, when I am trying to load the MTL file using the MTLLoader, I am getting following error:
Can't find variable: document
Apparently, the MTLLoader calls TextureLoader, which internally calls a load function that references 'document'. So what could be the solution to this?
Here are the two files that I am using:
three.js
const THREE = require("three");
global.THREE = THREE;
if (!window.addEventListener)
window.addEventListener = () => { };
// require("three/examples/js/renderers/Projector");
require("three/examples/js/loaders/MTLLoader");
require("three/examples/js/loaders/OBJLoader");
export default THREE;
ThreeView.js
import React, { Component } from "react";
import { StyleSheet, View } from "react-native";
import { WebGLView } from "react-native-webgl";
import THREE from "./three";
import { image } from "src/res/image";

export default class ThreeView extends Component {
  requestId: *;

  componentWillUnmount() {
    cancelAnimationFrame(this.requestId);
  }

  onContextCreate = (gl: WebGLRenderingContext) => {
    const rngl = gl.getExtension("RN");
    const { drawingBufferWidth: width, drawingBufferHeight: height } = gl;
    const renderer = new THREE.WebGLRenderer({
      canvas: {
        width,
        height,
        style: {},
        addEventListener: () => { },
        removeEventListener: () => { },
        clientHeight: height
      },
      context: gl
    });
    renderer.setSize(width, height);
    renderer.setClearColor(0xffffff, 1);
    let camera, scene;
    let cube;

    function init() {
      camera = new THREE.PerspectiveCamera(75, width / height, 1, 1100);
      camera.position.y = 150;
      camera.position.z = 500;
      scene = new THREE.Scene();
      var mtlLoader = new THREE.MTLLoader();
      mtlLoader.load('female-croupier-2013-03-26.mtl', function (materials) {
        materials.preload();
        var objLoader = new THREE.OBJLoader();
        objLoader.setMaterials(materials);
        objLoader.load('female-croupier-2013-03-26.obj', function (object) {
          scene.add(object);
        }, onLoading, onErrorLoading);
      }, onLoading, onErrorLoading);
    }

    const onLoading = (xhr) => {
      console.log((xhr.loaded / xhr.total * 100) + '% loaded');
    };

    const onErrorLoading = (error) => {
      console.log('An error happened', error);
    };

    const animate = () => {
      this.requestId = requestAnimationFrame(animate);
      renderer.render(scene, camera);
      // cube.rotation.y += 0.05;
      gl.flush();
      rngl.endFrame();
    };

    init();
    animate();
  };

  render() {
    return (
      <View style={styles.container}>
        <WebGLView
          style={styles.webglView}
          onContextCreate={this.onContextCreate}
        />
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: "#fff",
    alignItems: "center",
    justifyContent: "center"
  },
  webglView: {
    width: 300,
    height: 300
  }
});
This error is, as others have said, caused by three.js trying to use browser features that React Native does not have.
I've gotten far enough to load the textures (which is the stage your error comes from) by monkey-patching the texture loader to use the loader in react-native-webgl. Add this in your init function (preferably right near the top).
// make sure you have defined renderer and rngl
/*
const renderer = new THREE.WebGLRenderer(...)
const rngl = gl.getExtension("RN");
*/
const loadTexture = async function (url, onLoad, onProgress, onError) {
  let textureObject = new THREE.Texture();
  console.log("loading", url, 'with fancy texture loader');
  let properties = renderer.properties.get(textureObject);
  var texture = await rngl.loadTexture({ yflip: false, image: url });
  /*
  rngl.loadTexture({ image: url })
    .then(({ texture }) => {
  */
  console.log("Texture [" + url + "] Loaded!")
  texture.needsUpdate = true;
  properties.__webglTexture = texture;
  properties.__webglInit = true;
  console.log(texture);
  if (onLoad !== undefined) {
    // console.warn('loaded tex', texture);
    onLoad(textureObject);
  }
  // });
  return textureObject;
}
THREE.TextureLoader.prototype.load = loadTexture;
This solves the problem of loading textures, and I can see them load in Charles, but they still don't render on the model, so I'm stuck past that point. Technically a correct answer, but you'll be stuck as soon as you've implemented it. I'm hoping you can comment back and tell me you've gotten further.
I had a similar setup and encountered the same issue. My option was to switch to JSONLoader, which doesn't need document to render in React Native. So I just loaded my model in Blender with a three.js addon, then exported it as JSON. Just check out this process of adding a three.js addon to Blender:
https://www.youtube.com/watch?v=mqjwgTAGQRY
All the best
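A minimal sketch of the JSONLoader route described above, using the legacy three.js API from that era (the file name is illustrative):

// JSONLoader parses the Blender three.js JSON export without touching `document`.
const loader = new THREE.JSONLoader();
loader.load('croupier.json', function (geometry, materials) {
  const mesh = new THREE.Mesh(geometry, materials); // materials is an array of materials
  scene.add(mesh);
});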
This might get you closer:
The GLTF format supports embedding texture images (as base64). If your asset pipeline allows it, you could convert to GLTF and then load into three/react-native.
I had to provide some "window" polyfills for "decodeURIComponent" and "atob" because GLTFLoader uses FileLoader to parse the base64:
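A minimal sketch of such polyfills (assuming the base-64 npm package for atob):

// Hypothetical polyfills: GLTFLoader/FileLoader reach for window.atob and
// window.decodeURIComponent when parsing embedded base64 buffers.
import { decode as atobPolyfill } from 'base-64';

if (!window.atob) {
  window.atob = atobPolyfill;
}
if (!window.decodeURIComponent) {
  window.decodeURIComponent = decodeURIComponent;
}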
I've successfully loaded embedded buffers, but you'll need more polyfills to load textures. TextureLoader uses ImageLoader, which uses document.createElementNS.
You are using the MTLLoader, which uses TextureLoader, and the TextureLoader uses the ImageLoader.
The ImageLoader uses the document.createElementNS() function.
What I did to solve this was to construct the three.js Texture directly:
let texture = new THREE.Texture(
  url // url = a base64 JPEG string in this case
);
(for the use of Texture check the Texture documentation)
Then I used the Image class from React Native (instead of the three.js Image, which requires the DOM to be constructed) and gave that to the Texture as a property:
import { Image } from 'react-native';
var img = new Image(128, 128);
img.src = url;
texture.normal = img;
And then finally map the texture over the target material:
const mat = new THREE.MeshPhongMaterial();
mat.map = texture;
The React Native documentation explains how the React Native Image element can be used; it supports base64-encoded JPEG.
Maybe there's a way for you to single out the part where it calls for the TextureLoader and replace that part with this answer. Let me know how it works out.
Side note: I haven't tried to display this in my WebGLView yet, but in the logs it looked like normal three.js objects, so it's worth a try.
Use TextureLoader from expo-three
import { TextureLoader } from "expo-three";
import { NearestFilter } from "three";

const textureCache = {};

export function loadTexture(resource) {
  if (textureCache[resource]) {
    return textureCache[resource].clone();
  }
  const texture = new TextureLoader().load(resource);
  texture.magFilter = NearestFilter;
  texture.minFilter = NearestFilter;
  textureCache[resource] = texture;
  return texture;
}
Source: https://github.com/EvanBacon/Expo-Crossy-Road/blob/master/src/Node/Generic.js
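A quick usage sketch (the asset path is illustrative):

import { MeshBasicMaterial } from 'three';

// Map the cached texture onto a material; expo-three's TextureLoader accepts
// require()'d asset modules.
const material = new MeshBasicMaterial({
  map: loadTexture(require('./assets/grass.png')),
});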

Multiple gesture responders at the same time

I need some buttons that can be pressed at the same time, but currently if you press one, it 'claims' responsiveness and the others can't be pressed anymore. How do I do this?
Got it. You have to use ReactNativeEventEmitter to directly listen to touch events and bypass the Gesture Responder stuff entirely. Below is a decorator class that calls onTouchStart, onTouchEnd and onTouchMove in the wrapped class whenever those touch events are received.
'use strict';

import React, {Component} from 'react-native';
import ReactNativeEventEmitter from 'ReactNativeEventEmitter';
import NodeHandle from 'NodeHandle';

export const multitouchable = BaseComponent => {
  return class extends Component {
    constructor(props, context) {
      super(props, context);
      this.comp = null;
      this.compId = null;
    }

    componentDidMount() {
      if (this.comp && this.compId) {
        this.comp.onTouchStart && ReactNativeEventEmitter.putListener(this.compId, 'onTouchStart', e => this.comp.onTouchStart(e));
        this.comp.onTouchEnd && ReactNativeEventEmitter.putListener(this.compId, 'onTouchEnd', e => this.comp.onTouchEnd(e));
        this.comp.onTouchMove && ReactNativeEventEmitter.putListener(this.compId, 'onTouchMove', e => this.comp.onTouchMove(e));
      }
    }

    componentWillUnmount() {
      if (this.comp && this.compId) {
        this.comp.onTouchStart && ReactNativeEventEmitter.deleteListener(this.compId, 'onTouchStart');
        this.comp.onTouchEnd && ReactNativeEventEmitter.deleteListener(this.compId, 'onTouchEnd');
        this.comp.onTouchMove && ReactNativeEventEmitter.deleteListener(this.compId, 'onTouchMove');
      }
    }

    render() {
      return (
        <BaseComponent {...this.props} {...this.state}
          ref={c => {
            this.comp = c;
            const handle = React.findNodeHandle(c);
            if (handle)
              this.compId = NodeHandle.getRootNodeID(handle);
          }}
        />
      );
    }
  };
}
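A minimal usage sketch (the wrapped component and its styling are illustrative, assuming the same old React Native version as the decorator above):

import React, { Component, View } from 'react-native';

// Hypothetical wrapped component: it only needs to define the touch handlers
// the decorator looks for (onTouchStart / onTouchEnd / onTouchMove).
class Pad extends Component {
  onTouchStart(e) { console.log('pad down', e); }
  onTouchEnd(e) { console.log('pad up', e); }
  render() {
    return <View style={{ width: 100, height: 100, backgroundColor: 'skyblue' }} />;
  }
}

const MultitouchPad = multitouchable(Pad);
// Render several <MultitouchPad /> side by side; each receives its own touch
// events, so pressing one no longer blocks the others.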