ngCordova imagePicker plugin error: Cannot read property 'getPictures' of undefined - ngcordova

I have installed the imagePicker plugin, as described in:
http://ngcordova.com/docs/plugins/imagePicker/
Here's a snippet of my code:
//See: http://ngcordova.com/docs/plugins/imagePicker/
var imagePickerOptions = {
    maximumImagesCount: 10,
    width: 800,
    height: 800,
    quality: 80
};
$scope.pickImage = function () {
    $cordovaImagePicker.getPictures(imagePickerOptions).then(function (imageData) {
        for (var i = 0; i < imageData.length; i++) {
            $scope.registration.imgSrc = imageData[i];
        }
    }, function (error) {
        console.log(error);
    });
    $scope.registerform.show();
};
While debugging the code, I can see that $cordovaImagePicker is correctly injected.
However, when I call:
$cordovaImagePicker.getPictures(imagePickerOptions)
I get this error: "TypeError: Cannot read property 'getPictures' of undefined".
How can I fix this error?
Thanks!

It looks like this is an intrinsic limitation of the plugin itself: Windows Phone (WP) is not supported (yet).
This is documented on the imagePicker page http://ngcordova.com/docs/plugins/imagePicker/ (see the icons of supported platforms at the top-right corner of the page).
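As a defensive workaround (not part of the original answer), you can guard the call so unsupported platforms fail gracefully instead of throwing; window.imagePicker is assumed to be the global the underlying Cordova plugin exposes when it is available:
// Hedged sketch: bail out when the native plugin is unavailable (e.g. on
// Windows Phone or in a desktop browser) instead of throwing.
$scope.pickImage = function () {
    if (!window.imagePicker) {
        console.log('imagePicker plugin is not available on this platform');
        $scope.registerform.show();
        return;
    }
    $cordovaImagePicker.getPictures(imagePickerOptions).then(function (imageData) {
        for (var i = 0; i < imageData.length; i++) {
            $scope.registration.imgSrc = imageData[i];
        }
    }, function (error) {
        console.log(error);
    });
    $scope.registerform.show();
};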

Related

Getting "cannot read proper 'popperRef' of undefined" while adding a tooltip for cytoscape(cy)nodes

Getting "cannot read proper 'popperRef' of undefined" while adding a tooltip for cytoscape(cy)nodes. I am using Vue.js and Cytoscape.js. Not
mounted() {
    cytoscape.use(popper)
    this.addTooltip()
},
methods: {
    addTooltip() {
        let makeTippy = function (nodeTemp, node) {
            return tippy(node.popperRef(), {
                content: function () {
                    var div = document.createElement('div');
                    div.innerHTML = text;
                    return div;
                },
                trigger: 'manual',
                arrow: true,
                placement: 'bottom',
                hideOnClick: false,
                interactive: true
            }).tooltips[0]
        }
        var nodes = this.cy.nodes();
        for (var i = 0; i < nodes.length; i++) {
            var tippy = makeTippy(nodes[i]);
            tippy.show();
        }
    }
}
Follow the documentation: https://github.com/cytoscape/cytoscape.js-popper#usage-with-tippyjs
If you have trouble making things work within your app and you're not comfortable with using a debugger, then you should try to reproduce your issue outside of your app in a simple demo that's easier for you to reason about.
Here are materials for learning how to use the browser's debugger: https://developers.google.com/web/tools/chrome-devtools/javascript/
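Following that documentation, a minimal sketch (assuming the same tippy.js / cytoscape-popper setup as the question) would actually pass each node into makeTippy, which the original loop never does:
// Sketch only: pass the node (and its text) into makeTippy; the original call
// makeTippy(nodes[i]) leaves the second parameter undefined, which is what
// triggers "Cannot read property 'popperRef' of undefined".
addTooltip() {
    const makeTippy = (node, text) => {
        return tippy(node.popperRef(), {
            content: function () {
                var div = document.createElement('div');
                div.innerHTML = text;
                return div;
            },
            trigger: 'manual',
            arrow: true,
            placement: 'bottom',
            hideOnClick: false,
            interactive: true
        }).tooltips[0];
    };
    this.cy.nodes().forEach(function (node) {
        makeTippy(node, node.id()).show();
    });
}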

ArcGIS APi JS 3.28, panTo is not a function

I'm working with ArcGIS js-API 3.28 and Angular 7.
I have this code and it works well:
The constructor of a map:
constructMap(opts: { container: string, basemap: any, center: any, zoom: number, showAttribution: boolean }): Promise<any[]> {
    return new Promise((resolve, reject) => {
        loadModules([
            'esri/map',
            'esri/config',
            'dojo/domReady!'
        ]).then(([Map, esriConfig/*, Search ,HomeButton*/]) => {
            esriConfig.defaults.map.zoomDuration = 250;
            esriConfig.defaults.map.zoomRate = 50;
            esriConfig.defaults.map.panDuration = 250; // time in milliseconds, default panDuration: 350
            esriConfig.defaults.map.panRate = 50; // default panRate: 25
            this.map = new Map(opts.container, {
                basemap: opts.basemap,
                center: opts.center,
                zoom: opts.zoom,
                showAttribution: opts.showAttribution
            });
            resolve(this.map);
        });
    });
}
And in the Component where I set new centers for each select of the dropdown, I have this code (part of it):
loadModules([
    'esri/geometry/Point'
]).then(([lang, Point]) => {
    const my_center = new Point([-99.94867549215655, 20.55088183550196]);
    this.mapa.map.centerAndZoom(my_center, 5);
});
I can centerAndZoom to my desired point (the same works with centerAt). I can also change some of the pan config, like this:
esri.config.defaults.map.panDuration = 1000;
esri.config.defaults.map.panRate = 25;
And I can see the slower pan on each point I move to with centerAt, but when I try to use just this.mapa.map.panTo(my_center); I get this error:
ERROR Error: Uncaught (in promise): TypeError: _this.mapa.map.panTo is not a function
TypeError: _this.mapa.map.panTo is not a function
Why? I don't get why the other methods work fine but panTo() doesn't.
It looks like you are using version 3.x of the ArcGIS API, and in that version the Map class does not have a panTo method.
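A minimal sketch of the 3.x alternative, reusing the centerAt call that the question already shows working:
// Sketch: in the 3.x API, panning to a point is done with centerAt (or
// centerAndZoom); the animated pan honors the panDuration / panRate
// defaults set in esriConfig above.
loadModules([
    'esri/geometry/Point'
]).then(([Point]) => {
    const my_center = new Point([-99.94867549215655, 20.55088183550196]);
    this.mapa.map.centerAt(my_center);
});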

Trying to load obj & mtl file with Three.js in React Native

Main objective: load animated models exported from Maya into a React Native app.
Exported files: .obj, .mtl & .png files
I have setup https://github.com/react-community/react-native-webgl in my React Native project and it is working properly.
Now, when I am trying to load the MTL file using the MTLLoader, I am getting following error:
Can't find variable: document
Apparently, the MTLLoader calls TextureLoader, which internally calls a load function that references 'document'. So what could be the solution to this?
Here are the two files that I am using:
three.js
const THREE = require("three");
global.THREE = THREE;
if (!window.addEventListener)
    window.addEventListener = () => { };
// require("three/examples/js/renderers/Projector");
require("three/examples/js/loaders/MTLLoader");
require("three/examples/js/loaders/OBJLoader");
export default THREE;
ThreeView.js
import React, { Component } from "react";
import { StyleSheet, View } from "react-native";
import { WebGLView } from "react-native-webgl";
import THREE from "./three";
import { image } from "src/res/image";

export default class ThreeView extends Component {
    requestId: *;

    componentWillUnmount() {
        cancelAnimationFrame(this.requestId);
    }

    onContextCreate = (gl: WebGLRenderingContext) => {
        const rngl = gl.getExtension("RN");
        const { drawingBufferWidth: width, drawingBufferHeight: height } = gl;
        const renderer = new THREE.WebGLRenderer({
            canvas: {
                width,
                height,
                style: {},
                addEventListener: () => { },
                removeEventListener: () => { },
                clientHeight: height
            },
            context: gl
        });
        renderer.setSize(width, height);
        renderer.setClearColor(0xffffff, 1);
        let camera, scene;
        let cube;
        function init() {
            camera = new THREE.PerspectiveCamera(75, width / height, 1, 1100);
            camera.position.y = 150;
            camera.position.z = 500;
            scene = new THREE.Scene();
            var mtlLoader = new THREE.MTLLoader();
            mtlLoader.load('female-croupier-2013-03-26.mtl', function (materials) {
                materials.preload();
                var objLoader = new THREE.OBJLoader();
                objLoader.setMaterials(materials);
                objLoader.load('female-croupier-2013-03-26.obj', function (object) {
                    scene.add(object);
                }, onLoading, onErrorLoading);
            }, onLoading, onErrorLoading);
        }
        const onLoading = (xhr) => {
            console.log((xhr.loaded / xhr.total * 100) + '% loaded');
        };
        const onErrorLoading = (error) => {
            console.log('An error happened', error);
        };
        const animate = () => {
            this.requestId = requestAnimationFrame(animate);
            renderer.render(scene, camera);
            // cube.rotation.y += 0.05;
            gl.flush();
            rngl.endFrame();
        };
        init();
        animate();
    };

    render() {
        return (
            <View style={styles.container}>
                <WebGLView
                    style={styles.webglView}
                    onContextCreate={this.onContextCreate}
                />
            </View>
        );
    }
}

const styles = StyleSheet.create({
    container: {
        flex: 1,
        backgroundColor: "#fff",
        alignItems: "center",
        justifyContent: "center"
    },
    webglView: {
        width: 300,
        height: 300
    }
});
This error is, as others have said, caused by three.js trying to use browser features that React Native does not have.
I've gotten as far as being able to load the textures (which is the stage your error comes from) by monkey-patching the texture loader to use the loader in react-native-webgl. Add this in your init function (preferably right near the top):
// make sure you have defined renderer and rngl
/*
const renderer = new THREE.WebGLRenderer(...)
const rngl = gl.getExtension("RN");
*/
const loadTexture = async function (url, onLoad, onProgress, onError) {
    let textureObject = new THREE.Texture();
    console.log("loading", url, 'with fancy texture loader');
    let properties = renderer.properties.get(textureObject);
    var texture = await rngl.loadTexture({ yflip: false, image: url });
    /*
    rngl.loadTexture({ image: url })
        .then(({ texture }) => {
    */
    console.log("Texture [" + url + "] Loaded!")
    texture.needsUpdate = true;
    properties.__webglTexture = texture;
    properties.__webglInit = true;
    console.log(texture);
    if (onLoad !== undefined) {
        // console.warn('loaded tex', texture);
        onLoad(textureObject);
    }
    // });
    return textureObject;
}
THREE.TextureLoader.prototype.load = loadTexture;
This solves the problem of loading textures, and I can see them load in Charles, but they still don't render on the model, so I'm stuck past that point. It's technically a correct answer, but you'll be stuck as soon as you've implemented it. I'm hoping you can comment back and tell me you've gotten further.
I had a similar setup and encountered the same issue. My option was to switch to JSONLoader, which doesn't need document to render in React Native. So I just loaded my model into Blender with the three.js add-on, then exported it as JSON. Just check out this process of adding the three.js add-on to Blender:
https://www.youtube.com/watch?v=mqjwgTAGQRY
All the best.
This might get you closer:
The GLTF format supports embedding texture images (as base64). If your asset pipeline allows it, you could convert to GLTF and then load that into three.js in React Native.
I had to provide some "window" polyfills for "decodeURIComponent" and "atob" because GLTFLoader uses FileLoader to parse the base64.
I've successfully loaded embedded buffers, but you'll need more polyfills to load textures: TextureLoader uses ImageLoader, which uses document.createElementNS.
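A minimal sketch of the kind of polyfills meant here; the base-64 package and the exact checks are assumptions, not the original author's code:
// Hedged sketch: expose the globals that GLTFLoader's FileLoader expects when
// parsing embedded base64 "data:" URIs. Assumes the `base-64` npm package.
import { decode } from 'base-64';

if (typeof window.atob !== 'function') {
    window.atob = decode;                             // base64 decoding
}
if (typeof window.decodeURIComponent !== 'function') {
    window.decodeURIComponent = decodeURIComponent;   // reuse the JS built-in
}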
You are using the MTLLoader which uses TextureLoader, and the TextureLoader uses the ImageLoader.
The ImageLoader uses the document.createElementNS() function.
What I did to solve this was to create the three.js Texture directly (instead of going through the TextureLoader):
let texture = new THREE.Texture(
url //URL = a base 64 JPEG string in this case
);
(for the use of Texture check the Texture documentation)
Then I used the Image class from React Native (instead of the three.js Image, which requires the DOM to be constructed) and gave that to the Texture as a property:
import { Image } from 'react-native';
var img = new Image(128, 128);
img.src = url;
texture.normal = img;
And then finally map the texture over the target material:
const mat = new THREE.MeshPhongMaterial();
mat.map = texture;
The React Native documentation explains how the React Native Image element can be used; it supports base64-encoded JPEG.
Maybe there's a way for you to single out the part where it calls for the TextureLoader and replace that part with this answer. Let me know how it works out.
Side note: I haven't tried to display this in my WebGLView yet, but in the logs it looked like normal three.js objects, so it's worth a try.
Use TextureLoader from expo-three
import { TextureLoader } from "expo-three";
export function loadTexture(resource) {
if (textureCache[resource]) {
return textureCache[resource].clone();
}
const texture = new TextureLoader().load(resource);
texture.magFilter = NearestFilter;
texture.minFilter = NearestFilter;
textureCache[resource] = texture;
return texture;
}
Source: https://github.com/EvanBacon/Expo-Crossy-Road/blob/master/src/Node/Generic.js
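Example usage (the asset path is hypothetical):
// Hypothetical usage: map a bundled image onto a material via the helper above.
import { MeshBasicMaterial } from "three";

const material = new MeshBasicMaterial({
    map: loadTexture(require("./assets/texture.png")),
});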

Interacting with color input with protractor

It's simple to set a checkbox or text input value, but how can I set the value of a color-type input using Protractor? I tried to do this:
element(by.id("prop_border-color")).click();
browser.driver.actions()
    .sendKeys(protractor.Key.BACK_SPACE)
    .sendKeys(protractor.Key.BACK_SPACE)
    .sendKeys("00")
    .sendKeys(protractor.Key.ENTER)
    .perform();
but it triggers this error:
Failed: : Failed to read the 'sessionStorage' property from
'Window': Storage is disabled inside 'data:' URLs.
Is it possible to interact with the color picker window somehow?
UPD:
full test:
describe('Panel Editor app', function() {
    function addToplevel() {
        var elem = element(by.css(".widget-list-item-toplevel"));
        var target = element(by.id('droparea'));
        browser.driver.actions()
            .mouseDown(elem)
            .mouseMove(target)
            .mouseUp(target)
            .perform();
    }
    function addToToplevel(selector) {
        var elem = element(by.css(selector));
        var target = element(by.css('.toplevel'));
        browser.driver.actions()
            .mouseDown(elem)
            .mouseMove(target)
            .mouseUp(target)
            .perform();
    }
    beforeEach(function() {
        browser.get('http://localhost:8080/webapps/panel_editor/index.html');
    });
    afterEach(function() {
        browser.executeScript('window.sessionStorage.clear();');
        browser.executeScript('window.localStorage.clear();');
    });
    it('should check all widgets in toplevel', function() {
        addToplevel();
        addToToplevel(".widget-list-item-rows");
        browser.sleep(300);
        element(by.model("dialogCtrl.dialogs.widget.widget_model.props[q].value")).clear().sendKeys(4);
        element(by.id("widget_modal")).element(by.buttonText("OK")).click();
        browser.sleep(300);
        element.all(by.css(".builder-rows > div")).then(function(rows) {
            for (var i = 0, l = rows.length - 1; i < l; i++) {
                rows[i].getCssValue("border-color").then(function(val) {
                    expect(val == "rgb(221, 221, 221)").toBe(true);
                })
            }
        });
        element(by.id("prop_border-color")).click();
        // Color picker shows.
        browser.driver.actions()
            .sendKeys(protractor.Key.BACK_SPACE)
            .sendKeys(protractor.Key.BACK_SPACE)
            .sendKeys("00")
            .sendKeys(protractor.Key.ENTER)
            .perform();
        // ERROR HERE
        element.all(by.css(".builder-rows > div")).then(function(rows) {
            for (var i = 0, l = rows.length - 1; i < l; i++) {
                rows[i].getCssValue("border-color").then(function(val) {
                    expect(val == "rgb(221, 221, 0)").toBe(true);
                })
            }
        });
    });
});
UPD2:
I temporarily solved this problem by using the executeScript method and setting the value directly from JavaScript:
browser.executeScript("$('#prop_border-color').val('#FF0000'); $('#prop_border-color').change();");
But I'm still looking for a better solution.
I suspect the backspaces are not sent to the color input, but instead make the browser go back in the browser history, which leads to a blank page and the local storage access error.
Instead, resolve the click promise explicitly, use clear() to clear the input field, and then send the keys:
var colorInput = element(by.id("prop_border-color"));
colorInput.click().then(function () {
    colorInput.clear();
    colorInput.sendKeys("#FF0000");
});
Another approach to try would be to replace browser.driver with browser when calling the actions().
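A sketch of that second approach, reusing the selector and key sequence from the question:
// Sketch: send the key sequence through browser.actions() rather than
// browser.driver.actions(), after the click promise has resolved.
element(by.id("prop_border-color")).click().then(function () {
    browser.actions()
        .sendKeys(protractor.Key.BACK_SPACE)
        .sendKeys(protractor.Key.BACK_SPACE)
        .sendKeys("00")
        .sendKeys(protractor.Key.ENTER)
        .perform();
});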

React Native Animated singleValue.stopTracking is not a function

I have the following code to animate in React Native
Animated.timing(
    this.state.absoluteChangeX,
    { toValue: 0 },
).start(function() {
    this.lastX = 0;
    this.lastY = 0;
});
Pretty simple, but whenever it's triggered, I receive the error:
singleValue.stopTracking is not a function
Here's where the error originates:
/react-native/Libraries/Animated/src/AnimatedImplementation.js
var timing = function(
    value: AnimatedValue | AnimatedValueXY,
    config: TimingAnimationConfig,
): CompositeAnimation {
    return maybeVectorAnim(value, config, timing) || {
        start: function(callback?: ?EndCallback): void {
            var singleValue: any = value;
            var singleConfig: any = config;
            singleValue.stopTracking(); // <--------------- HERE!!!
            if (config.toValue instanceof Animated) {
                singleValue.track(new AnimatedTracking(
                    singleValue,
                    config.toValue,
                    TimingAnimation,
                    singleConfig,
                    callback
                ));
            } else {
                singleValue.animate(new TimingAnimation(singleConfig), callback);
            }
        },
        stop: function(): void {
            value.stopAnimation();
        },
    };
};
I'm not extremely versed in TypeScript, but var singleValue: any means that "singleValue" could be any type. In my case, it's a plain number. Since a number doesn't have a stopTracking method, it makes sense that this errors.
Am I doing something wrong?
The value you wish to animate must be an instance of Animated.Value, or one of its subtypes. When you initialize your state, it should look something like this:
getInitialState() {
    return { absoluteChangeX: new Animated.Value(0) };
}
The fact that the type declaration in the framework method is any is just a lack of constraint, not an explicit invitation to pass any value into it.
See the Animated docs for more examples.
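Applied to the code from the question, a minimal sketch (assuming absoluteChangeX was initialized as above) looks like this:
// Sketch: this.state.absoluteChangeX is now an Animated.Value; an arrow
// function keeps `this` bound to the component inside the completion callback.
Animated.timing(
    this.state.absoluteChangeX,
    { toValue: 0 },
).start(() => {
    this.lastX = 0;
    this.lastY = 0;
});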
I run into this issue sometimes (using React hooks instead of classes) when I forget to set my variable to the .current of the ref:
function MyComponent() {
    const animatedValue = useRef(new Animated.Value(0.0)).current; // Notice the .current
}
This may not necessarily answer the original question, but developers who encounter this error while using React hooks may end up here so maybe it will help someone.
I ran into this issue because I passed the interpolated value (2) to Animated.timing instead of the Animated.Value object (1):
const animatedValue = useRef(new Animated.Value(0.0)).current; // (1)
const transform = animatedValue.interpolate({
    inputRange: [0.0, 1.0],
    outputRange: [0, 100]
}); // (2)

Animated.timing(animatedValue, { // USE animatedValue, NOT transform HERE!
    toValue: 1.0,
    duration: 3000,
});
Hope this can help anyone who is new to React Native animation (like me :) )...