I'd like to use two geolocation watchers in my app: one with useSignificantChanges and one with high accuracy.
The "lowres" watcher would provide my Redux store with approximate locations all the time, whereas the "highres" watcher would be enabled when the user is working in a live view.
Here are the options for the low-res watcher:
const options = {
enableHighAccuracy: false,
useSignificantChanges: true,
distanceFilter: 500,
};
And the high res watcher:
const options = {
enableHighAccuracy: true,
timeout: 60e3,
maximumAge: 10e3,
};
I have played around with the settings, but I can't see any difference in the output. Both watchers emit the exact same positions at the same time. I'm using the iOS simulator for the moment.
Questions:
I should be able to have several watchers, shouldn't I? What would be the point of the returned watchId otherwise?
Is this a problem only in the simulator?
Have I misunderstood or goofed?
Edit, the actual question is:
Why do I get frequent, highly accurate GPS positions even in "significant changes" mode? This mode is supposed to save battery, if I have understood correctly.
Thanks!
The useSignificantChanges option is fairly new and only recently implemented, so you need to make sure that:
You are using a version of React Native that supports it. Based on merge dates, it looks to be v0.47+.
You are testing on an iOS version that can use it. The Github issue states that this is a new feature that impacts iOS 11, so I presume that you would need at least that version to see any differences.
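As for running several watchers: each call to watchPosition returns its own watchId, so the two can be registered and cleared independently. A minimal sketch, assuming placeholder handler names:
// Minimal sketch: two independent watchers, cleared separately via their watchIds.
const lowResId = navigator.geolocation.watchPosition(
  onLowResPosition,               // placeholder handler feeding the Redux store
  (error) => console.warn(error),
  {enableHighAccuracy: false, useSignificantChanges: true, distanceFilter: 500}
);
const highResId = navigator.geolocation.watchPosition(
  onHighResPosition,              // placeholder handler for the live view
  (error) => console.warn(error),
  {enableHighAccuracy: true, timeout: 60e3, maximumAge: 10e3}
);

// e.g. when the user leaves the live view:
navigator.geolocation.clearWatch(highResId);
// and when location tracking is no longer needed at all:
navigator.geolocation.clearWatch(lowResId);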
I need to add alarm functionality to my react-native app: an alarm that you have to manually stop/snooze, like these ones. To that end I have been browsing to find out which library I should use. These are the ones I found:
react-native-alarms: but it only works for Android...
react-native-alarm-notification: also only works for Android... And it seems more complex, because it is based on making a local push notification look like an alarm.
react-native-alarm: the only one that works on both iOS and Android, but the documentation is very poor and I couldn't figure out how to use it (I think this library doesn't work, as you can check here).
None of these solutions seems to work on both iOS and Android.
Is there any other solution?
Thanks in advance; this is my first React Native project and I'm kind of lost on this problem.
I was having the same issue earlier. I tried all kinds of third-party libraries, but some major and minor issues remained that I was unable to get rid of, so I decided to implement react-native-push-notification:
import PushNotification from 'react-native-push-notification';
PushNotification.localNotificationSchedule({
  id: 1,                                  // unique id, needed if you later want to cancel/update it
  message: `Reminder Message`,
  date: new Date(Date.now() + 60 * 1000), // fire one minute from now
  repeatType: 'day',                      // repeat daily
  // repeatTime:
  allowWhileIdle: false,
  exact: true,
})
For more details, see the official react-native-push-notification example:
https://github.com/zo0r/react-native-push-notification/tree/master/example
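To make this behave like an alarm the user can stop or snooze, you would cancel (and, for snooze, reschedule) the notification by the id used above. The exact call name has varied between versions of the library, so treat this as a sketch:
// Stop the "alarm": cancel the scheduled notification by its id (passed as a string).
PushNotification.cancelLocalNotification('1');
// Older versions of the library used:
// PushNotification.cancelLocalNotifications({ id: '1' });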
I'm building a React Native application (using Redux with Thunk) where I have lately been facing performance issues that I haven't been able to find a good solution to.
The issue is that the UI/App is freezing/blocked every time I call an action, whether the state changes or not. The app is dealing with accounts data, and the issue started when the account count grew to about 3k accounts. I tested the following:
The time it takes to create the new state in the reducer.
Whether there are unnecessary renders.
Any manipulation on the data done in mapStateToProps or elsewhere.
None of the above takes long or seems to be the issue. The example I am testing is a list view of the accounts with an animated pop-up for filtering. When I add a filter in the pop-up and save it (which updates the store and closes the pop-up), I can see the accounts filtering instantly behind the pop-up, but the UI/pop-up animation is frozen for 2-3 seconds before closing.
The data is stored with IDs as keys for the accounts and a separate object for the filters:
{
accountByIds: { acc1: {...}, acc2: {...} },
filters: {...}
}
I have been wondering whether ImmutableJS would help, but since the application has grown rather large and the issue doesn't seem to be too many renders etc., I am not sure it would, so I haven't changed it.
I am currently updating the store like below:
return {
...state,
filters: {
...state.filters,
newFilter
}
}
To sum up, it seems that when the store updates, the appropriate renders are called right away, but then the app freezes up for a moment.
If anyone is able to shed any light on this or point me in the right direction I would appreciate it.
For the last few days I've been fighting performance issues with a FlatList inside an application I've created.
The FlatList consists of a static header and x rows. In the measured case there are 62 rows; each of them consists of 4 components, one of which is used 4 times, which sums up to 7 row cells. Every cell of this list is either a TouchableNativeFeedback or a TouchableOpacity (for testing purposes these have () => null attached to onPress). shouldComponentUpdate is used on a few levels (list container, row, single cell), and I think render performance is good enough for this kind of list.
For consistent measurements I've used initialNumToRender={data.length}, so the whole list renders at once. The list is rendered by pressing a button; data loading isn't part of the measurement, as it's preloaded into the component's local state.
According to the attached Chrome Performance Profile, the JS thread takes 1.33s to render the component. I've used 6x CPU slowdown to simulate an Android device more accurately.
However, the list shows up on the device at around the 15s mark, so the actual render, from button press to the list showing up, takes more than 14s!
What I am trying to figure out is what happens between the JS thread rendering the component and the component actually showing up on screen, because the device is unresponsive that whole time. Every touch event is registered, but it is played back only when the list is finally on screen.
I've attached a trace from Chrome DevTools, a systrace taken with the Android systrace tool, and a screenshot from the Android Profiler (sadly I couldn't find an option to export the latter).
Tracing was run almost simultaneously - the order is systrace, Android Profiler, Chrome DevTools.
What steps should I take to understand what's going on while the app freezes?
Simple reproduction application (code in src.js, first commit)
Chrome Performance Profile
Systrace HTML
Android Profiler
Looking at the source code you posted, I don't think this is a React rendering problem.
The issue is that you're doing too much work in your render method and the helper methods you are calling during the render pass.
Every time you call .filter, .forEach or .map on an array of n items, the entire array is iterated over. When you do this in each of m components, you end up with a computational complexity of O(n * m).
For example, this is the TransportPaymentsListItem render method:
/**
* Render table row
*/
render() {
const {
chargeMember,
obligatoryChargeNames,
sendPayment,
eventMainNavigation
} = this.props;
/**
* Filters obligatory and obligatory_paid_in_system charges for ChargeMember
*/
const obligatoryChargesWithSys = this.props.chargeMember.membership_charges.filter(
membershipCharge =>
membershipCharge.type === "obligatory" ||
membershipCharge.type === "obligatory_paid_in_system"
);
/**
* Filters obligatory charges for ChargeMember
*/
const obligatoryCharges = obligatoryChargesWithSys.filter(
membershipCharge => membershipCharge.type === "obligatory"
);
/**
* Filters obligatory adjustments for ChargeMember
*/
const obligatoryAdjustments = this.props.chargeMember.membership_charges.filter(
membershipCharge =>
membershipCharge.type === "optional" &&
membershipCharge.obligatory === true
);
/**
* Filters obligatory trainings for ChargeMember
*/
const obligatoryTrainings = this.props.chargeMember.membership_charges.filter(
membershipCharge =>
membershipCharge.type === "training" &&
membershipCharge.obligatory === true
);
/**
* Filters paid obligatory adjustments for ChargeMember
*/
const payedObligatoryTrainings = obligatoryTrainings.filter(
obligatoryTraining =>
obligatoryTraining.price.amount ===
obligatoryTraining.amount_paid.amount
);
// This one does a .forEach as well...
const sums = this.calculatedSums(obligatoryCharges);
// Actually render...
return (
In the sample code there are 11 such iterator calls. For the 62-row dataset you used, the array iterators are invoked a total of 4216 times! Even if each iterator callback is a very simple comparison, just walking through all those lists is too slow and blocks the main JS thread.
To solve this, you should lift the state transformation higher up in the component chain, so that you do the iteration only once, build up a view model, and pass that view model down the component tree to be rendered declaratively without additional work.
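For example, a single pass over membership_charges per account, done once when the data changes rather than on every render, could produce something like the view model below. buildChargeViewModel and the field names are only illustrative, not part of your code:
// Illustrative sketch: group the charges once, outside of render.
function buildChargeViewModel(chargeMember) {
  const viewModel = {
    obligatoryChargesWithSys: [],
    obligatoryCharges: [],
    obligatoryAdjustments: [],
    obligatoryTrainings: [],
  };
  chargeMember.membership_charges.forEach((charge) => {
    if (charge.type === "obligatory" || charge.type === "obligatory_paid_in_system") {
      viewModel.obligatoryChargesWithSys.push(charge);
    }
    if (charge.type === "obligatory") {
      viewModel.obligatoryCharges.push(charge);
    }
    if (charge.type === "optional" && charge.obligatory === true) {
      viewModel.obligatoryAdjustments.push(charge);
    }
    if (charge.type === "training" && charge.obligatory === true) {
      viewModel.obligatoryTrainings.push(charge);
    }
  });
  return viewModel; // pass this down as a prop and render it without further filtering
}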
I had been trying different things for a while; I was even considering wrapping the native Android RecyclerView, but to be fair it seemed quite a challenge, as I had no prior experience with native Android code.
One of the things I tried over the past few days was react-native-largelist, but it didn't deliver the promised performance improvement. To be fair, it might even have been slower than FlatList, but I didn't take exact measurements.
After a few days of googling, coding and profiling I finally got my hands on this Medium post, which references the recyclerlistview package, and it seems to provide a better experience than FlatList. For the profiled case, rendering time dropped to about 2s, including 300ms of JS thread work.
It's important to note that the improvement in initial rendering comes from the reduced number of items rendered (11 in my case). A FlatList with initialNumToRender={11} initially renders in about the same time.
In my case initial rendering, while still important, isn't the only thing that matters. FlatList performance drops for larger lists mainly because it keeps all rendered rows in memory while scrolling, whereas recyclerlistview recycles rendered rows, filling them with new data.
The reason for the observed re-rendering improvement is actually simple to test. I added a console.log to shouldComponentUpdate in my row component and counted how many rows are actually re-rendered. For my row height and test device resolution, recyclerlistview re-renders only 17 rows, while FlatList triggers shouldComponentUpdate for every item in the dataset. It's also worth noting that the number of rows re-rendered by recyclerlistview does not depend on the dataset size.
My conclusion is that FlatList performance may degrade even further with bigger datasets, while recyclerlistview speed should stay at a similar level.
TouchableNativeFeedback inside recyclerlistview also seems more responsive, as the animation fires without delay, but I can't explain that behaviour from the profilers.
There's definitely still room for improvement in my row component, but for now I am happy with overall list rendering performance.
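For reference, a basic recyclerlistview setup looks roughly like this; the row height, the MyRow component, and the rows array are placeholder assumptions, not my actual code:
import { Dimensions } from 'react-native';
import { RecyclerListView, DataProvider, LayoutProvider } from 'recyclerlistview';

const { width } = Dimensions.get('window');

// DataProvider decides when a row's data has changed
const dataProvider = new DataProvider((r1, r2) => r1 !== r2).cloneWithRows(rows);

// LayoutProvider must know each row's dimensions up front - that is what enables recycling
const layoutProvider = new LayoutProvider(
  () => 'ROW',                                             // a single view type in this sketch
  (type, dim) => { dim.width = width; dim.height = 64; }   // assumed fixed row height
);

// Inside render():
// <RecyclerListView
//   dataProvider={dataProvider}
//   layoutProvider={layoutProvider}
//   rowRenderer={(type, item) => <MyRow item={item} />}
// />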
Simple reproduction application with recyclerlistview (code in src.js, second commit)
Chrome Performance Profile (recyclerlistview)
Systrace HTML (recyclerlistview)
Just to give one more option to try.
You can enable Hermes in android/app/build.gradle like this.
project.ext.react = [
entryFile : "index.js",
enableHermes: true, // clean and rebuild if changing
]
My app is not crashing, but the touchables stop working after some heavy processing. When this happens the process is still running fine and the app's drawer still works, but every touchable in my app stops responding without any error message. I spent 3 days looking for a solution and finally tried Hermes. The React Native docs say:
Hermes is an open-source JavaScript engine optimized for running React Native apps on Android.
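After a clean rebuild you can verify that Hermes is actually in use; the React Native docs suggest checking the HermesInternal global:
// true when the app is running on Hermes (check taken from the React Native docs)
const isHermes = () => !!global.HermesInternal;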
I am in the process of building a mobile application that relies heavily on the user's current location. Until recently, I could easily enter custom coordinates in the simulator via Debug -> Location and have my app reflect that change. I recently upgraded to react-native 0.41.1 and my app no longer registers the change in location.
My stack:
react: 15.4.2
react-native: 0.41.1
xcode: 8.2.1
mac os sierra
ios 10.2
My code:
componentDidMount() {
//this._getUserLocation();
navigator.geolocation.getCurrentPosition(
(position) => {
var initialPosition = JSON.stringify(position);
console.log(initialPosition);
},
(error) => alert(JSON.stringify(error)),
{enableHighAccuracy: true, timeout: 20000, maximumAge: 1000}
);
}
I have all the correct location permissions in my Info.plist, and I have tried numerous things: removing and re-adding the permission, removing the app from Location Services in Settings and re-allowing it, etc. But I keep getting the same results.
My app will load whatever the Debug -> Location is set to on boot, and then not register any change I make to simulate a new location.
example:
On boot it loads:
{"coords": {"speed":0,"longitude":-122.0312186,"latitude":37.33233141,"accuracy":5,"he ading":-1,"altitude":0,"altitudeAccuracy":-1},"timestamp":1488260204384.006}
That's Apple HQ. I then go to Debug -> Location and change to custom coordinates, reload my app, and I get the same output. This used to not be the case. I can try any new coordinates and it's always the same. Though sometimes, if I change the coordinates and reboot, it will load with the new coordinates, and then the problem repeats itself.
Changing the maximumAge value has no effect.
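(For reference, getCurrentPosition is a one-shot request; the continuous variant, which should fire on every subsequent change, looks like this:)
const watchId = navigator.geolocation.watchPosition(
  (position) => console.log(JSON.stringify(position)),
  (error) => alert(JSON.stringify(error)),
  {enableHighAccuracy: true, timeout: 20000, maximumAge: 1000}
);
// and later, e.g. in componentWillUnmount:
// navigator.geolocation.clearWatch(watchId);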
Any help at all would be greatly appreciated. I've been at this issue for a few weeks now and totally stuck.
Thanks in advance.
We're exploring WebRTC but have seen conflicting information on what is possible and supported today.
With WebRTC, is it possible to recreate a screen sharing service similar to join.me or WebEx where:
You can share a portion of the screen
You can give control to the other party
No downloads are necessary
Is this possible today with any of the WebRTC browsers? How about Chrome on iOS?
The chrome.tabCapture API is available for Chrome apps and extensions.
This makes it possible to capture the visible area of the tab as a stream which can be used locally or shared via RTCPeerConnection's addStream().
For more information see the WebRTC Tab Content Capture proposal.
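A rough sketch of what that looks like inside an extension; the "tabCapture" permission must be declared in the manifest, and peerConnection is an assumed, already-created RTCPeerConnection:
chrome.tabCapture.capture({ audio: false, video: true }, (stream) => {
  if (!stream) {
    console.error(chrome.runtime.lastError);
    return;
  }
  // Share the captured tab with a peer, as mentioned above
  peerConnection.addStream(stream);
});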
Screensharing was initially supported for 'normal' web pages using getUserMedia with the chromeMediaSource constraint – but this has been disallowed.
EDIT 1 April 2015: Edited now that screen sharing is only supported by Chrome in Chrome apps and extensions.
You probably know that screen capture (not tabCapture) is available in Chrome Canary (26+). We recently published a demo at https://screensharing.azurewebsites.net
Note that you need to run it under https:// and use the following constraint:
video: {
  mandatory: {
    chromeMediaSource: 'screen'
  }
}
You can also find an example here: https://html5-demos.appspot.com/static/getusermedia/screenshare.html
I know I am answering a bit late, but I hope it helps those who stumble upon this page, if not the OP.
At this moment, both Firefox and Chrome support sharing the entire screen, or part of it (an application window which you can select), with peers through WebRTC as a MediaStream, just like your camera/microphone feed, so there is no option yet to let the other party take control of your desktop. There is another catch: your website has to run over HTTPS, and in both Firefox and Chrome the users will have to install extensions.
You can give it a try in this Muaz Khan's Screen-sharing Demo, the page contains the required extensions too.
P.S.: If you do not want to install an extension to run the demo (there is no way to avoid extensions in Chrome), in Firefox you just need to modify two flags:
go to about:config
set media.getusermedia.screensharing.enabled to true.
add *.webrtc-experiment.com to the media.getusermedia.screensharing.allowed_domains flag.
refresh the demo page and click the share screen button.
To the best of my knowledge, it's not possible right now with any of the browsers, though the Google Chrome team has said that they're eventually intending to support this scenario (see the "Screensharing" bullet point on their roadmap); and I suspect that this means that eventually other browsers will follow, presumably with IE and Safari bringing up the tail. But all of that is probably out somewhere past February, which is when they're supposed to finalize the current WebRTC standard and ship production bits. (Hopefully Microsoft's last-minute spanner in the works doesn't screw that up.) It's possible that I've missed something recent, but I've been following the project pretty carefully, and I don't think screensharing has even made it into Chrome Canary yet, let alone dev/beta/prod. Opera is the only browser that has been keeping pace with Chrome on its WebRTC implementation (Firefox seems to be about six months behind), and I haven't seen anything from that team either about screensharing.
I've been told that there is one way to do it right now, which is to write your own web camera driver, so that your local screen would appear to the WebRTC getUserMedia() API as just another video source. I don't know that anybody has done this, and of course it would require installing the driver on the machine in question. By the time all is said and done, it would probably just be easier to use VNC or something along those lines.
navigator.mediaDevices.getDisplayMedia(constraint).then((stream)=>{
// todo...
})
Now you can do that, though Safari differs from Chrome in its audio handling.
It is possible. I have worked on this and built a demo for screen sharing. During the session the watcher can access your mouse and keyboard: if they move their mouse, your mouse also moves, and if they type on their keyboard, it is typed into your PC.
View this code; it is the code for the screen share...
These days you can share the screen with this; you don't need any extensions.
const getLocalScreenCaptureStream = async () => {
try {
const constraints = { video: { cursor: 'always' }, audio: false };
const screenCaptureStream = await navigator.mediaDevices.getDisplayMedia(constraints);
return screenCaptureStream;
} catch (error) {
console.error('failed to get local screen', error);
}
};
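A possible usage of the returned stream, assuming an existing RTCPeerConnection named peerConnection and a <video id="preview"> element for local preview:
// e.g. inside an async function:
const stream = await getLocalScreenCaptureStream();
if (stream) {
  // Local preview of the shared screen
  document.querySelector('video#preview').srcObject = stream;
  // Send the screen track(s) to the remote peer
  stream.getTracks().forEach((track) => peerConnection.addTrack(track, stream));
}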