React Native navigation params and the Law of Demeter - react-native

When you want to access params passed to a screen component through navigation (React Native navigation), you have to do it like this, for example:
this.myParameter = this.props.navigation.state.params.myParameter
Does this break the Law of Demeter? And what if you want to use deep linking later on?
One solution would be to just create a wrapper component that maps the navigation parameters to props.

Indeed, the Law of Demeter (or Principle of Least Knowledge) would require that your component know nothing about the internal structure of the navigation object.
In my view, the best design would have been for the navigation object passed down to the component to already expose a params() function that returns a map of parameters.
However, since this is not the case, you can add it yourself: either introduce a layer of indirection through a function of your own (something like screenParamsFrom(navigation)), or hook into the navigation object and add the function yourself by supplying the navigation object to the root navigator:
<MyRootNavigator navigation={addNavigationHelpers({
  dispatch: dispatch,
  state: state,
  params: paramsFunction,
})} />
In this case, however, you'll have to manage the navigation state yourself (through Redux or some similar mechanism; see the integration guide).
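For the first option, here's a minimal sketch of what that indirection could look like (screenParamsFrom and withScreenParams are illustrative names, not part of react-navigation; it assumes react-navigation 4.x and below, where params live at navigation.state.params):
import React from 'react'
// Hypothetical helper: the only place that knows the internal shape
// of the navigation object.
function screenParamsFrom(navigation) {
  return (navigation && navigation.state && navigation.state.params) || {}
}
// Hypothetical wrapper that maps navigation params to plain props,
// so the wrapped screen never touches the navigation object itself.
function withScreenParams(ScreenComponent) {
  return function ScreenWithParams(props) {
    return <ScreenComponent {...props} {...screenParamsFrom(props.navigation)} />
  }
}
// Usage (also hypothetical): export default withScreenParams(MyScreen),
// then read this.props.myParameter inside MyScreen.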

Related

React Navigation: Why use methods on the navigation object instead of action creators and dispatch (i.e. CommonActions)?

In the official docs of react-navigation it says:
It's highly recommended to use the methods on the navigation object instead of using action creators and dispatch. It should only be used for advanced use cases.
But why is that? I didn't find any explanation for this strong recommendation in the react-navigation docs.
With CommonActions (or NavigationActions in react-navigation 4.x and below) I can nicely create navigation functions that I can access from anywhere, without the need to pass a navigation prop around. And I haven't encountered any issues that I could attribute to using a CommonAction instead of the navigation prop, even in more complex scenarios with many screens and special routing arrangements.
There are several reasons:
Methods on the navigation object are strictly type-checked: if you use something like TypeScript or Flow, you'll get errors when you pass incorrect values. Type checking is much more relaxed with action objects, which will only log an error at runtime.
When a method exists, it means the related navigator is definitely rendered. You can't call openDrawer if you don't have a drawer, but you can call dispatch(DrawerActions.openDrawer()).
It's less typing: navigate('Home') vs dispatch(CommonActions.navigate('Home')).
"without the need to pass a navigation prop around"
Not sure how action creators help with that, but you could just use the useNavigation hook.
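For reference, a minimal sketch of that approach, assuming @react-navigation/native v5 or later (the component and route names are made up):
import React from 'react'
import { Button } from 'react-native'
import { useNavigation } from '@react-navigation/native'
// Any component rendered inside a navigator can get the navigation object
// from the hook, so no navigation prop has to be passed down manually.
function GoHomeButton() {
  const navigation = useNavigation()
  return <Button title="Go home" onPress={() => navigation.navigate('Home')} />
}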

Why won't the useObserver hook rerender twice, compared to the observer HOC?

I'm reading the docs of mobx-react-lite and am confused about the difference between the observer HOC and the useObserver hook. According to the docs, the observer HOC will trigger a re-render twice while useObserver won't:
One good thing about this is that if any hook changes an observable for some reason then the component won't rerender twice unnecessarily. (example pending)
I'm not that familiar with mobx-react-lite, but I'm interested in what causes that difference.
Here are the docs: https://mobx-react.js.org/observer-hook
The useObserver hook is only aware of observables referenced within a functional component, whereas the observer HOC is reactive to any observable props. The observer HOC actually just wraps the entire component in useObserver.
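A minimal sketch of the two forms, assuming mobx-react-lite v1/v2 (where useObserver is still exported); the timer store is made up:
import React from 'react'
import { Text } from 'react-native'
import { observable } from 'mobx'
import { observer, useObserver } from 'mobx-react-lite'
const timer = observable({ seconds: 0 })
// HOC form: the entire component render runs inside a reactive tracking scope.
const TimerWithHoc = observer(function TimerWithHoc() {
  return <Text>{timer.seconds}</Text>
})
// Hook form: only the function passed to useObserver is tracked;
// the rest of the component body sits outside the reactive scope.
function TimerWithHook() {
  return useObserver(() => <Text>{timer.seconds}</Text>)
}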

UI is rendered but screen is not painted unless touched

We are able to confirm that the render() method of our component is getting invoked. We also see that the data that needs to be shown is correctly passed in via props. However, the actual phone display won't repaint the updated UI until it is touched.
Interestingly, this only happens in the production build, not in the development builds of the app. Sigh.
We have seen this in the past when updates are done from InteractionManager.runAfterInteractions, but in this case we have removed every use of runAfterInteractions and are still seeing this behavior.
We're using React Native 0.57, but we also see the same issue on 0.58.
I can provide more specifics if needed, but wanted to know if anyone has seen anything like this before and what if anything they did to fix such an issue.
I also had this issue when the content that should repaint was inside a ScrollView. I refactored my code to use a FlatList instead of a ScrollView and the problem disappeared.
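A minimal sketch of that kind of refactor (the component and its props are made up):
import React from 'react'
import { FlatList, ScrollView, Text } from 'react-native'
// Before: rows mapped inside a ScrollView.
function ItemListBefore({ items }) {
  return (
    <ScrollView>
      {items.map((item) => <Text key={String(item.id)}>{item.name}</Text>)}
    </ScrollView>
  )
}
// After: the same rows rendered by a FlatList.
function ItemListAfter({ items }) {
  return (
    <FlatList
      data={items}
      keyExtractor={(item) => String(item.id)}
      renderItem={({ item }) => <Text>{item.name}</Text>}
    />
  )
}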
It was a struggle, but I figured out what was happening.
TL;DR
The solution is to wrap your Redux action dispatches inside a setTimeout(..., 0) whenever you are doing UI updates from a function passed to runAfterInteractions or from a Realm callback.
Some quick background:
I was dispatching a Redux action inside a Realm callback. The action would in turn cause the UI to re-render, except the rendered UI would not repaint on screen until it was touched.
Similar behavior was observed when dispatching Redux actions from a callback passed to InteractionManager.runAfterInteractions. This makes sense, because callbacks passed to runAfterInteractions wait for the UI to finish its work and are then executed as a setImmediate batch. These callbacks were intended for heavy background jobs that would otherwise block UI rendering, so if you do UI-triggering work in such a callback, it gets held back just like a background job!
Solution:
The point of the Realm callback is to notify you that some data has changed, so that you can re-render your UI. However, the Realm callback was behaving like one passed to runAfterInteractions, and so UI re-rendering was getting held up. I took a leap of faith and decided to move the Redux action dispatching code to a different queue, namely setTimeout. Here's what the fix looked like:
...
// Assume this is in a callback passed to runAfterInteractions
// or in a Realm change callback.
// Put your Redux action dispatches in a setTimeout callback:
setTimeout(() => {
  getFilteredCartAreas(dispatch, realm)
  dispatch(new ActionCartAreaCardChanged())
}, 0)
...
The fix is to do the UI rendering logic inside a callback passed to setTimeout. Notice the zero delay; the goal is to simply move logic to a different queue that is not held up by UI interactions.

Will a JS root view always receive events from native after it has been removed from its superview?

My (heavily simplified) code looks like this:
// objc
self.currentSwipeUpView = [[RCTRootView alloc]
     initWithBridge:_bridge
         moduleName:@"PhotoSwipeUpView"
  initialProperties:nil
];
// elsewhere...
[self.currentSwipeUpView removeFromSuperview];
self.currentSwipeUpView = nil;
// js
function PhotoSwipeUpView() {
  return <TextInput style={{flex: 1}} onChangeText={console.warn} />
}
AppRegistry.registerComponent('PhotoSwipeUpView', () => PhotoSwipeUpView)
self.currentSwipeUpView is removed from its superview and dereferenced. When this happens, is it possible for the JS thread to still receive a pending onChangeText event? Would that lead to console.warn being called with the new text?
I'm imagining a case where the event would have been sent to the JS thread after the RCTRootView was removed from its superview and dereferenced.
Furthermore, I'm curious whether it's possible, from the JS side, to be notified when an RCTRootView is removed from its superview.
I'm using ARC and I'm not holding any other references to self.currentSwipeUpView.
Edit: I have two RCTRootViews. Only one of them will be unloaded; the other persists for the lifetime of the application. They share the same bridge, and therefore the same bundle and JS environment.
Loading/Unloading JS Bundles
Your loaded JS bundle is tied to the root view and is not persistent other than in that view.
Once you remove the root view from its parent, the bundle is also unloaded. If the root view is removed and dereferenced, the text change listeners should also be removed and should not trigger.
The JSX in the bundle is ultimately used to set up native elements plus their listeners; the JSX is basically the script. JSX with an RN TextInput essentially results in a native UITextField, and if the JSX specifies a listener, a native listener for text changes is added to that native element instance.
Even if you had two root views loaded, they would operate independently of each other. If one root view is unloaded, the listeners associated with the TextInputs on that screen should no longer receive events. Your second root view should be unaffected, as it would have separate listeners set up for its TextInput.
Events/Timing issue
I suppose there is a small chance, with perfect timing and depending on where removeFromSuperview is called, that the JS could receive the event just before the view is completely removed and dereferenced. But if that could affect the functionality of your app, I would suggest using another pattern to interact with your data and remove the screen.
Possible Solution
It's hard to say without knowing what you're trying to do, but you could, for example, post a notification to close the React Native view using RCTEventEmitter/NativeEventEmitter, then have the React Native screen close itself with a native function that just removes it from the superview. This type of pattern would also answer your second question (in a sense), because you're effectively telling the RN screen that it is about to close, allowing it to take any necessary actions before being removed.
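As a rough sketch of the JS half of that idea (the SwipeUpViewManager module, its dismiss method, and the event name are all made up; the native side would need a module that subclasses RCTEventEmitter and actually removes the root view):
import { NativeEventEmitter, NativeModules } from 'react-native'
// Hypothetical native module that emits a close request and can remove
// the root view from its superview.
const { SwipeUpViewManager } = NativeModules
const emitter = new NativeEventEmitter(SwipeUpViewManager)
const subscription = emitter.addListener('closeSwipeUpView', () => {
  // Flush any pending JS-side work here, then ask native to remove the view.
  SwipeUpViewManager.dismiss()
})
// When the screen unmounts: subscription.remove()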

How can my React JS code interact with existing Obj-C logic?

I have an existing Obj-C project with rich business logic. I want to try using React Native on one particular screen (i.e. a view controller in Cocoa terms), but every example I see in the repo keeps the logic in JavaScript. How can I treat React Native as the rendering layer, but pass user actions to my Objective-C code?
EDIT 31 March 2015:
A native view module does not seem to be a good solution, because native modules get instantiated from React code. So if I want to use the already created view model for that view controller, I would need some singleton acting as shared state on the side. I think this is bad.
It's not possible for React views to call native methods directly without going through JavaScript, unless you create custom native view plugins for literally everything onscreen.
Your best bet is probably to create a custom native module that exports all the native methods you want to call, then write a minimal React JavaScript application that does nothing except forward touch events from the views to your module by calling those methods.
If you need to communicate back to the JS application, your module can either use callbacks passed to your exported methods, or broadcast events which the JS code can observe.
To get the most out of React Native though, I'd recommend that you try to keep all the view and controller logic in the JS part, and only expose the business logic from the native side. Otherwise you lose all the benefits of rapid reloading to test changes, etc.
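A minimal sketch of that shape (the BusinessLogic module name, its placeOrder method, and the screen are made up; the real methods would be whatever your native module exports):
import React from 'react'
import { AppRegistry, Button, NativeModules } from 'react-native'
// Hypothetical native module exposing the existing Obj-C business logic.
const { BusinessLogic } = NativeModules
// The JS layer only renders the controls and forwards user actions to native.
function CheckoutScreen() {
  return <Button title="Place order" onPress={() => BusinessLogic.placeOrder()} />
}
AppRegistry.registerComponent('CheckoutScreen', () => CheckoutScreen)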
I had the same problem; here is what I did to solve it:
1. Create a native module wrapper around NSNotificationCenter, so the JavaScript code can publish iOS events.
2. Subscribe to that event in the ReactController wrapper (the Obj-C wrapper around the React Native code).
3. Raise the event when the user clicks the button (when we need to give control back to Objective-C), passing the needed data as a dictionary. A sketch of this step from the JS side follows the list.
4. Catch the event in the ReactController wrapper (Objective-C), process the data, open other controllers, etc.
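From the JS side, step 3 could look roughly like this (NotificationBridge and its postNotification method are made-up names for the NSNotificationCenter wrapper described above):
import { NativeModules } from 'react-native'
// Hypothetical native module wrapping NSNotificationCenter.
const { NotificationBridge } = NativeModules
// Called from the button's onPress: hand control back to Objective-C,
// passing the data it needs as a dictionary.
function onDonePressed(selectedItem) {
  NotificationBridge.postNotification('ReactScreenDone', { itemId: selectedItem.id })
}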
Using React Native within existing iOS app for some views only
Have a look at the documentation:
http://facebook.github.io/react-native/docs/nativemodulesios.html#content
React Native provides "Native Modules" as a way of bridging between Obj-C and JavaScript. There's also a little bit more about this on the React Native homepage:
http://facebook.github.io/react-native/
Under the "Extensibility" heading, it shows how to expose custom iOS views.
If you implement an RCTRootView in your view controller, you can use it to access the RCTBridge and, from that, gain access to your module.
This allows you to keep a reference to your view controller inside your module, so you can call its methods directly instead of relying on NSNotifications.
Example:
In your view controller
let reactView = RCTRootView(
  bundleURL: jsCodeLocation,
  moduleName: "MyReactApp",
  initialProperties: nil,
  launchOptions: nil
)
self.reactBridge = reactView.bridge
let myModule = self.reactBridge.module(for: MYModuleClass.self) as! MYModuleClass
myModule.viewController = self
And in your module, keep the reference:
@objc(MYModuleClass)
class MYModuleClass: NSObject {
  // Weak reference back to the hosting view controller so the module
  // can call its methods directly.
  weak var viewController: MyViewController!
}