Circular module dependencies between stores - react-native

In my react native app that tracks instrument practice I have three stores:
SessionStore
GoalStore
InstrumentStore
Each store manages one model (Session, Goal, Instrument) and handles getting/updating it on the server via a REST API.
The SessionStore listens to actions regarding Sessions (obviously): session.add, session.update. But it also listens to actions for the other stores, so it can update its Sessions if a Goal or Instrument changes name.
Correspondingly, the InstrumentStore listens to Instrument actions, but also to Session actions so it can update statistics on how many sessions use a particular instrument.
To avoid race conditions, the InstrumentStore acts on the session.add action but waits for the SessionStore to handle the action first (to ensure the Session has been updated in the API). To do this I use dispatcher.waitFor with the SessionStore dispatchToken as a semaphore.
The problem: since all stores use each other's dispatchTokens, they all have to import each other. This is a circular dependency between modules and leads to strange race conditions. Sometimes one of the stores hasn't been constructed yet when it's imported by one of the others.
Here are my stores: https://github.com/osirisguitar/GuitarJournalApp/tree/feature/flat-ui/js/stores
Am I using the flux pattern the wrong way?
Addition
This is what I want to happen (in sequence):
Session is updated:
1. Send updated session to API
2. Refresh SessionStore
3. Refresh GoalStore
4. Refresh InstrumentStore
2, 3 and 4 need to wait for 1 to complete; that's why GoalStore and InstrumentStore need the SessionStore dispatch token.
Goal is updated:
1. Send updated goal to API
2. Refresh GoalStore
3. Refresh SessionStore
2 and 3 need to wait for 1; this is why SessionStore needs the GoalStore dispatchToken, which introduces the circular dependency.
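Roughly, the store handlers look like this (a hypothetical sketch with illustrative names, not my actual files); the top-level require of the other store is what creates the cycle:

// SessionStore.js (sketch)
var dispatcher = require('../dispatcher');
var GoalStore = require('./GoalStore'); // needed only for its dispatchToken

var SessionStore = { /* getters, refresh(), emitChange(), ... */ };

SessionStore.dispatchToken = dispatcher.register(function (action) {
  switch (action.type) {
    case 'goal.update':
      // make sure GoalStore has pushed the goal to the API first
      dispatcher.waitFor([GoalStore.dispatchToken]);
      SessionStore.refresh();
      break;
  }
});

module.exports = SessionStore;

// GoalStore.js mirrors this, waiting for SessionStore.dispatchToken on
// session.update, hence each module requires the other.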

You have some duplication going on.
All stores will hear all dispatches. That's the beauty of having a single dispatcher. So when you dispatch a session.add or session.update action, you're hitting three different stores, and two of them are doing the exact same thing. That's a no-no.
As a rule, each store's dispatch token should only be responsible for updating that store. So your GoalStore and InstrumentStore should not be updating the SessionStore. The .refresh and .emit for Sessions should happen within the SessionStore dispatch token only.
EDIT to answer your edited question.
I think your confusion comes from not recognizing that dispatcher.register takes a function as its argument, not an object.
Functions in JS do not evaluate their contents on declaration. They are only evaluated when executed.
Simple example:
const func = function(){ console.log(testVar); }; // No error, even though testVar hasn't been declared yet
func(); // ReferenceError: Cannot access 'testVar' before initialization
let testVar = 'hey';
func(); // logs: 'hey'
dispatcher.register takes a function as its input and returns a key (in the format ID_#). That key is generated by the dispatcher itself without running the input function. The input function is simply stored for later and run each time a payload is dispatched.
That means you don't need the internal variables to be defined until your first dispatch. And because you also don't want to dispatch anything until you've created your stores, this becomes a non-issue.
But it also means that the dispatcher, by default, has a sort-of circular dependency against itself (relying on the return values of its own functions, as stored in external variables). But that's the design of the dispatcher. Unless you're going to write a new dispatcher, that's just part of the deal.
It's also worth pointing out that if you create a true circular dependency by calling multiple waitFors that deadlock against one another, the dispatcher will correctly throw an error saying as much:
Dispatcher.waitFor(...): Circular dependency detected while waiting for ID_#
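To make that concrete, here is a minimal sketch (illustrative names, not your actual stores). One way to lean on the fact that the callback body only runs at dispatch time is to require the other store lazily inside the callback, so its dispatchToken is dereferenced long after all modules have been constructed:

// SessionStore.js (sketch)
var dispatcher = require('../dispatcher');

var SessionStore = { /* state, refresh(), emitChange(), ... */ };

SessionStore.dispatchToken = dispatcher.register(function (action) {
  if (action.type === 'goal.update') {
    // GoalStore is only needed once a dispatch actually happens;
    // by then every store module has long since finished loading.
    var GoalStore = require('./GoalStore');
    dispatcher.waitFor([GoalStore.dispatchToken]);
    SessionStore.refresh();
  }
});

module.exports = SessionStore;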

Related

How to queue requests in React Native without Redux?

Let's say I have a notes app. I want to let the user make changes while they are offline, save the changes optimistically in a MobX store, and add a request to save the changes (on the server) to a queue.
Then when the internet connection is re-established I want to run the requests in the queue one by one so the data in the app syncs with data on the server.
Any suggestions would help.
I tried using react-native-job-queue but it doesn't seem to work.
I also considered react-native-queue but the library seems to be abandoned.
You could create a separate store (or an array in AsyncStorage) for pending operations, and add the operations to an array there when the network is disconnected. Tell your existing stores to look there for data, so you can render it optimistically. Then, when you detect a connection, run the updates in array order, and clear the array when done.
You could also use your existing stores, and add something like pending: true to values that haven't posted to your backend. However, you'll have less control over the order of operations, which sounds like it is important.
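A minimal sketch of the first idea, assuming MobX 6, @react-native-async-storage/async-storage and @react-native-community/netinfo (the PendingQueue name and the storage key are made up; loading persisted ops on startup is omitted for brevity):

// PendingQueue.js (sketch)
import { makeAutoObservable, runInAction } from 'mobx';
import AsyncStorage from '@react-native-async-storage/async-storage';
import NetInfo from '@react-native-community/netinfo';

class PendingQueue {
  ops = []; // each op: { url, method, body }

  constructor() {
    makeAutoObservable(this);
    // flush whenever the connection comes back
    NetInfo.addEventListener(state => {
      if (state.isConnected) this.flush();
    });
  }

  async enqueue(op) {
    this.ops.push(op);
    await AsyncStorage.setItem('pendingOps', JSON.stringify(this.ops));
  }

  async flush() {
    // run in array order so the server sees changes in the order they were made
    while (this.ops.length > 0) {
      const op = this.ops[0];
      await fetch(op.url, {
        method: op.method,
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(op.body),
      });
      runInAction(() => this.ops.shift());
      await AsyncStorage.setItem('pendingOps', JSON.stringify(this.ops));
    }
  }
}

export default new PendingQueue();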
As it turns out, I was in the wrong. The react-native-job-queue library does work; my mistake was passing a function reference (the API call) to the Worker instead of passing an object that contains the request URL and method, and then implementing the Worker to make the API call from those parameters.
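Independent of the specific queue library, the idea looks roughly like this (URLs and field names are made up):

// The queued job payload must be serializable, so store a description of
// the request, not a function reference.
const job = {
  url: 'https://api.example.com/notes/42', // made-up endpoint
  method: 'PUT',
  body: { text: 'updated note text' },
};

// The worker the queue runs later rebuilds the request from that payload.
async function noteWorker(payload) {
  await fetch(payload.url, {
    method: payload.method,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload.body),
  });
}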

control moto's state transitions of EC2 instances?

To test that my application handles state transitions correctly, I'd like control over the lifecycle of moto's fake EC2 instances:
Rather than have instances start immediately in running, it would be nice to have them start in pending, let me confirm some things, and then explicitly transition them to running.
Relatedly, there are some actions I'd like to trigger in my tests when the instances switch to running.
Is any of this possible? I found InstanceBackend in moto's code -- is there a way for users to hook into or override methods there?
There are a few feature requests for more control over the transition cycle, but nothing has been implemented yet.
It is possible to use the internal API to set the status directly, as you said, using the InstanceBackend.
If you only have one instance, you can use the following code:
ec2_backend = moto.ec2.models.ec2_backends["my-region"]  # e.g. "us-east-1"
list(ec2_backend.reservations.values())[0].instances[0].state = "..."
If you have multiple reservations, you can use the reservation ID like this:
ec2_backend.reservations["r-7df1884b"].instances[0].state = "..."
Note that both AWS and Moto use two properties to track state, state and state_code. Only updating state may result in undefined behaviour, so you may want to update both to ensure they are in sync:
ec2_backend.reservations["r-7df1884b"].instances[0].state_code = ..
Note that this is an internal API, so changes to this data structure may occur without warning.

Redux saga: How can I make sure only my saga is able to update a certain state?

I have a mobile app made in React Native, and I've just run into a best-practice dilemma I've encountered many times while using Redux/Redux Saga. I would love to get someone else's thoughts on this.
For a new piece of functionality I'm implementing, I need to be able to tell how many times the app has been launched. This involves asynchronously retrieving how many times the app was previously launched from device storage. If a new launch is happening, I also need to add 1 to that number and store it in device storage.
This is how I currently do it (sketched in code after the list):
1. Dispatch appLaunched() action when the app launches.
2. Redux Saga takes the event.
3. Inside the saga: retrieve how many times the app was previously launched (appLaunchCount) from device storage (wait for async to finish).
4. Add 1 to the previous appLaunchCount.
5. Store the new appLaunchCount in device storage (wait for async to finish).
6. Dispatch put() with the new appLaunchCount to the reducer.
7. Update state with the new appLaunchCount inside the reducer.
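Roughly sketched, this is what the saga looks like (action types, the storage key, and the use of AsyncStorage are illustrative, not my exact code):

import { takeEvery, call, put } from 'redux-saga/effects';
import AsyncStorage from '@react-native-async-storage/async-storage';

function* handleAppLaunched() {
  // step 3: read the previous count from device storage
  const stored = yield call([AsyncStorage, 'getItem'], 'appLaunchCount');
  // step 4: add 1
  const appLaunchCount = (Number(stored) || 0) + 1;
  // step 5: persist the new count
  yield call([AsyncStorage, 'setItem'], 'appLaunchCount', String(appLaunchCount));
  // step 6: dispatch to the reducer
  yield put({ type: 'SET_APP_LAUNCH_COUNT', appLaunchCount });
}

export function* appLaunchSaga() {
  yield takeEvery('APP_LAUNCHED', handleAppLaunched); // step 2
}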
My problem with this method is step 6. Technically any part of my app could dispatch a new app launch count to my reducer, with any integer, and the reducer would update the state just the same even though it didn't come from the saga.
My question is this: how can I protect my reducers/sagas/actions so that only my saga can dispatch the action with the current appLaunchCount?
P.S. The only solution I can think of is writing my saga and reducer in the same file and using private actions that only the saga and reducer can access. I would really hate to have to keep all that code together, though.
Private actions aren't really a thing. The store is, by design, a global object. And since actions are just objects with a type property, anyone who can construct an action object of the right type can in principle dispatch an action and kick off your reducer.
What you could do is make the action have a type that makes it obvious that it's meant to be private. For example, maybe the action looks like:
{
type: '__PRIVATE_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED__'
// You could tone it down a bit from this :)
}
That of course doesn't make it actually private, but at least if someone wants to use it, it's impossible for them to not realize your intent.
If you wanted to make it more secure, perhaps you could use a symbol as the type, so that only code with access to the symbol could construct the right action. For example:
const appLaunchCount = Symbol('appLaunchCount');
// action would look like:
{
type: appLaunchCount
}
But then the issue is making sure that symbol stays hidden and can be accessed only by those you want to access it. Similar to one of the things you mentioned, if you have the saga/reducer in the same file, then you can make sure that other files can't access this symbol; but once you start exporting it, it becomes harder to control.
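A sketch of that co-location (illustrative names); the symbol never leaves the module, so only this saga can build an action the reducer will accept. One caveat: symbol-typed actions aren't serializable, so tooling like Redux DevTools or state persistence may not handle them well.

// launchCount.js (sketch)
import { put } from 'redux-saga/effects';

const SET_APP_LAUNCH_COUNT = Symbol('appLaunchCount'); // deliberately not exported

export function* setAppLaunchCount(newCount) {
  // only code inside this file can construct an action with this type
  yield put({ type: SET_APP_LAUNCH_COUNT, appLaunchCount: newCount });
}

export function launchCountReducer(state = { appLaunchCount: 0 }, action) {
  switch (action.type) {
    case SET_APP_LAUNCH_COUNT:
      return { ...state, appLaunchCount: action.appLaunchCount };
    default:
      return state;
  }
}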

Keeping SAP RFC data between consecutive RFC calls using JCo

I was wondering if it is possible to keep an RFC called via JCo open in SAP memory so I can cache stuff. This is the scenario I have in mind:
Suppose a simple function increments a number. The function starts with 0, so the first time I call it with import parameter 1 it should return 1.
The second time I call it, it should return 2 and so on.
Is this possible with JCO?
If I have the function object and make two successive calls, it always returns 1.
Can I do what I'm depicting?
Designing an application around the stability of a certain connection is almost never a good idea (unless you're building a stability monitoring software). Build your software so that it just works, no matter how often the connection is closed and re-opened and no matter how often the session is initialized and destroyed on the server side. You may want to persist some state using the database, or you may need to (or want to) use the shared memory mechanisms provided by the system. All of this is inconsequential for the RFC handling itself.
Note, however, that you may need to ensure that a sequence of calls happen in a single context or "business transaction". See this question and my answer for an example. These contexts are short-lived and allow for what you probably intended to get in the first place - just be aware that you should not design your application so that it has to keep these contexts alive for minutes or hours.
The answer is yes. In order to make it work, you need to implement two tasks:
The ABAP code needs to store its variable in the ABAP session memory. A variable in the function group's global section will do that. Or alternatively you could use the standard ABAP technique "EXPORT TO MEMORY/IMPORT FROM MEMORY".
JCo needs to keep the user session between calls. By default, JCo resets the backend-side user session after every call, which of course destroys all data stored in that user session memory. In order to prevent it, you need to use JCoContext.begin() and JCoContext.end() to get a stateful RFC connection that keeps the user session alive on backend side.
Sample code:
JCoDestination dest = ...
JCoFunction func = ...
try {
    JCoContext.begin(dest);
    func.execute(dest); // Will return "1"
    func.execute(dest); // Will return "2"
}
catch (JCoException e) {
    // Handle network problems, ABAP exceptions, SYSTEM_FAILUREs
}
finally {
    // Make sure to release the stateful connection, otherwise you have
    // a resource leak in your program and on backend side!
    JCoContext.end(dest);
}

Flux without data caching?

Almost all examples of Flux involve caching data on the client side; however, I don't think I'd be able to do that for much of my application.
In the system I'm thinking about using React/Flux in, a single user can have hundreds of thousands of the main piece of data we store (and one record probably has at least 75 data properties). Caching this much data on the client side seems like a bad idea and probably makes things more complex.
If I were not using Flux, I would just have an ORM-like system that talks to a REST API, in which case a request like userRepository.getById(123) would always hit the API regardless of whether I requested that data on the last page. My idea is to just give the store these methods.
Does Flux consider it bad if requests for data always hit the API and never pull from a local cache? Can I use Flux in a way where the majority of data retrieval requests always hit an API?
The closest you can sanely get to no caching is to reset any store state to null or [] when an action requesting new data comes in. If you do this, you must emit a change event, or else you invite race conditions.
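For illustration, a minimal sketch of such a store (the store name and action types are made up):

// UserStore.js (sketch)
var dispatcher = require('../dispatcher');
var EventEmitter = require('events').EventEmitter;

var _users = null; // null means "nothing cached / a request is in flight"

var UserStore = Object.assign(new EventEmitter(), {
  getUsers: function () { return _users; },

  dispatchToken: dispatcher.register(function (action) {
    switch (action.type) {
      case 'users.request':
        _users = null;            // throw the stale data away...
        UserStore.emit('change'); // ...and tell the views, to avoid races
        break;
      case 'users.received':
        _users = action.users;
        UserStore.emit('change');
        break;
    }
  })
});

module.exports = UserStore;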
As an alternative to Flux, you can simply use promises and a simple mixin with an API to modify state. For example, with bluebird:
var Promise = require('bluebird'); // needed for Promise.props and promise .bind

var promiseStateMixin = {
  thenSetState: function(updates, initialUpdates){
    // promisify setState
    var setState = this.setState.bind(this);
    var setStateP = function(changes){
      return new Promise(function(resolve){
        setState(changes, resolve);
      });
    };
    // if we have initial updates, apply them and ensure the state change happens
    return Promise.resolve(initialUpdates ? setStateP(initialUpdates) : null)
      // wait for our main updates to resolve (Promise.props resolves every value in the object)
      .then(function(){ return Promise.props(updates); })
      // apply our unwrapped updates
      .then(function(resolvedUpdates){
        return setStateP(resolvedUpdates);
      }).bind(this);
  }
};
And in your components:
handleRefreshClick: function(){
  this.thenSetState(
    // users is Promise<User[]>
    {users: Api.Users.getAll(), loading: false},
    // we can't do our own setState here due to unlikely race conditions,
    // so instead we supply initial updates as this second (optional)
    // argument; don't worry, the getAll request is already running
    {users: [], loading: true}
  ).catch(function(error){
    // the rejection reason for our getAll promise
    // `this` is our component instance here (thanks to .bind in the mixin)
    error.users
  });
}
Of course this doesn't prevent you from using Flux when/where it makes sense in your application. For example, react-router is used in many, many React projects, and it uses Flux internally. React and related libraries/patterns are designed to only help where desired, and never to control how you write each component.
I think the biggest advantage of using Flux in this situation is that the rest of your app doesn't have to care that data is never cached, or that you're using a specific ORM system. As far as your components are concerned, data lives in stores, and data can be changed via actions. Your actions or stores can choose to always go to the API for data or cache some parts locally, but you still win by encapsulating this magic.
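For instance, a sketch of an action-creator module that always goes to the API (hypothetical names); components only dispatch actions and read from the store, and never know whether anything was cached:

// UserActions.js (sketch)
var dispatcher = require('../dispatcher');

var UserActions = {
  loadById: function (id) {
    dispatcher.dispatch({ type: 'user.request', id: id });
    // Always hits the API; only this module knows nothing is cached locally.
    fetch('/api/users/' + id)
      .then(function (res) { return res.json(); })
      .then(function (user) {
        dispatcher.dispatch({ type: 'user.received', user: user });
      });
  }
};

module.exports = UserActions;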