In my Vue app I have the following computed property in two different components:
normalizeName() {
    const website = this.form.website_id;
    // Look up the selected website once instead of calling find() twice
    const res = this.websites.find(obj => obj.website_id === website);
    if (res) {
        const new_val = res.acronym + ' - ';
        this.form.name = new_val;
        return new_val;
    }
    return '';
}
Now, I have state management set up using the $store, but the question is:
In terms of best practice and performance, should I define normalizeName() in the $store and use its logic from there, or should I implement the exact same logic in two different components?
Realistically there will be little performance difference either way. As for the code duplication vs. $store abstraction trade-off: personally, I find that a small amount of duplication leads to better readability and maintainability than pushing that function somewhere else.
This article explains why that can be the case much better than I could.
You need to consider whether it makes sense for a $store to have a function for normalising a name. Also, if more than just these two components use the store, that function probably belongs in the components themselves rather than in the store. To me, that normalisation functionality would look out of place in a store.
Perhaps if you thought that function would need to be implemented a third time, then you should find a way to move it elsewhere for the sake of consistency/convenience, as sketched below.
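For example, a minimal sketch of one way to share the logic without putting it in the $store: a plain helper module imported by both components (the file name and helper name are hypothetical):

// utils/names.js (hypothetical shared helper)
export function acronymPrefix(websites, websiteId) {
    const match = websites.find(obj => obj.website_id === websiteId);
    return match ? match.acronym + ' - ' : '';
}

// In each component:
import { acronymPrefix } from '@/utils/names';

computed: {
    normalizeName() {
        const value = acronymPrefix(this.websites, this.form.website_id);
        this.form.name = value;
        return value;
    }
}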
Related
What is better to use in templates: expressions or computed properties?
Ex:
<span :class="'static_part' + dynamic_part"></span>
...
data: {
dynamic_part: 'xxx',
}
or
<span :class="span_class"></span>
...
data: {
dynamic_part: 'xxx',
},
computed: {
span_class() {
return 'static_part' + this.dynamic_part;
}
}
The first way is smaller and easier to understand. But what about performance?
According to the official docs:
In-template expressions are very convenient, but they are meant for simple operations. Putting too much logic in your templates can make them bloated and hard to maintain
and
Instead of a computed property, we can define the same function as a method. For the end result, the two approaches are indeed exactly the same. However, the difference is that computed properties are cached based on their reactive dependencies
I also see that using a computed property separates the logic from the content and helps others who read your code see that these properties are calculated from other ones.
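On the performance point from the docs: a computed property is cached on its reactive dependencies, while a method (or an in-template expression) is re-evaluated on every re-render. A small illustrative sketch, using the names from the question:

computed: {
    // Evaluated once, then cached until this.dynamic_part changes
    span_class() {
        return 'static_part' + this.dynamic_part;
    }
},
methods: {
    // Re-evaluated every time it is called, e.g. on every re-render
    spanClass() {
        return 'static_part' + this.dynamic_part;
    }
}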
Within the mobx-react documentation there are variations in how stores are created. For example, on the React Context page:
In the first code sample, the store is instantiated with useLocalStore:
const store = useLocalStore(createStore)
In the second code sample, the stores are instantiated by directly "newing" the store classes:
counterStore: new CounterStore(),
themeStore: new ThemeStore(),
By inference, the first is a "local" store (and thus needs useLocalStore), and the second is a "global" store and thus doesn't. However, it is not clear why this is, or what the subsequent difference in behaviour is.
Why is useLocalStore not needed in the second example, and what difference does this make to the behavior of the stores and MobX within React?
Thanks for any input
OK, I found the answer. useLocalStore turns a JavaScript object literal into a store with observable properties. This is not needed if a store is created from a class whose attributes are already observable.
Thanks to #freddyc for the answer
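As a rough sketch of that second case, assuming modern MobX (the class shape and the makeAutoObservable call are illustrative, not taken from the linked docs):

import { makeAutoObservable } from 'mobx';

class CounterStore {
    count = 0;

    constructor() {
        // Marks the fields as observable and the methods as actions,
        // so no useLocalStore/useLocalObservable wrapper is needed
        makeAutoObservable(this);
    }

    increment() {
        this.count += 1;
    }
}

// A module-level ("global") store can then simply be newed up:
const counterStore = new CounterStore();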
useLocalStore has been deprecated in favor of useLocalObservable, see here.
Links from the question are no longer valid; the mobx-react docs are now here:
https://mobx.js.org/react-integration.html
useLocalObservable(initializer, annotations)
is just a shorthand for:
useState(() => observable(initializer(), annotations, {autoBind: true}))[0]
useLocalStore is almost the same, but it was deprecated because it wasn't as practical.
Why is this needed?
useState() is the main way to store a variable in React.
observable({...}) is the main way to create observable objects in Mobx.
And when using Mobx+React you'll often need the useState + observable combo, so it is nice to have a shorthand.
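A minimal sketch of that shorthand in use (the component and store shape here are made up for illustration, and observer comes from mobx-react-lite):

import React from 'react';
import { observer, useLocalObservable } from 'mobx-react-lite';

const Counter = observer(function Counter() {
    // Component-local observable store: kept across re-renders, reactive to changes
    const store = useLocalObservable(() => ({
        count: 0,
        increment() {
            this.count += 1;
        },
    }));
    return <button onClick={() => store.increment()}>{store.count}</button>;
});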
When is it not needed?
If you are using a class to create your store then you don't need the observable wrapper and you can simply do:
const myStore = useState(() => new MyStore())[0];
If you are using React class components, you can save the store on a class field.
class MyComponent extends React.Component {
    myStore = new MyStore();

    render() { /* ... */ }
}
Bonus tip:
Why do we use useState for memoizing values instead of useMemo?
useMemo is meant to be used as a cache for performance optimization rather than for storing app state; React does not guarantee that a memoized value will be kept.
I'm trying to get Mobx's autorun to work correctly.
My use case: I have one model that I'd like to serialize (or dehydrate) whenever it changes, adding that snapshot to another model's data. This gives me rudimentary time travel over model states. Both are observables.
Edit: The idea behind separating the models is that one is the app's data model and the other should be a completely separate library that I can use from the app. I need to track changes in the app regularly, but show the UI for the state tool on the same page.
Now, autorun seems to make its own inferences about what I'm actually tracking. When I moved the model instance inside the observing model's instantiation, autorun was no longer called when changes happened. When the model instance was created at the module's top level, it worked as I expected. That was when I only changed one property of the observing model (the one that gets changed by every autorun call). When I tried changing two things at once in the observing model, autorun was called for those changes as well, leading to an unending cycle (which MobX caught).
I'd like to know how to be more explicit about what I'm tracking with the autorun function, or whether there are other ways to track changes in one model and update the other whenever anything happens.
Edit with code example.
This is what I did (greatly simplified):
import {observable, autorun, toJSON} from "mobx";

class DataModel {
    @observable one_state = null;
}

class StateStore {
    @observable states = [];
}

let data = new DataModel();
let store = new StateStore();

autorun(() => {
    store.states.push(data.one_state);
    console.log("new data", toJSON(store.states));
});

data.one_state = "change 1";
data.one_state = "change 2";
And this creates a circular dependency, because autorun gets called both for the original data model change and for the resulting store change, whilst I'm only interested in tracking changes to the former.
Edit with working result:
import {observable, autorun, asFlat} from "mobx";

class DataModel {
    @observable one_state = null;
}

class StateStore {
    @observable states = asFlat([]);
}

let data = new DataModel();
let store = new StateStore();

autorun(() => {
    store.states.push(data.one_state);
});

data.one_state = "change 1";
data.one_state = "change 2";
As per #mweststrate's answer, using asFlat on the store's states variable and removing the logging from autorun broke the problematic cycle.
It is a bit tough to answer this question without any real code. Could you share some code? But note that MobX works best if you make a small mind shift: instead of imperatively saying "if X happens Y should be changed" it is better to say "Y can be derived from X". If you think along those lines, MobX will really start to shine.
So instead of having two observable models, I think one of them should be a derivation of the other (by using computed indeed). Does that make sense? Otherwise, feel free to elaborate on your question a bit more :)
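As a tiny sketch of that "Y can be derived from X" idea, written in the same old decorator-style MobX as the question (the serialized shape is purely illustrative):

class DataModel {
    @observable one_state = null;

    // Derived value: recomputed (and cached) automatically whenever one_state changes
    @computed get serialized() {
        return JSON.stringify({ one_state: this.one_state });
    }
}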
Edit:
OK, thanks for the code. You should remove the log statement to keep it from looping; currently you log the states model, so each time it changes the autorun will run again, pushing the item again, changing the states model, and so on...
Secondly, I'm not sure whether the states list should be observable, but at least its contents should not be observable (since it is a snapshot and the data per state should not change). To express that, you can use the asFlat modifier, which indicates that the states collection should only be shallowly observable: @observable states = asFlat([]).
Does that answer your question?
I must be missing something simple, but I can't figure it out. I'm retrieving a bunch of lookup tables in one Web API call.
return EntityQuery.from('Lookups')
.noTracking(true)
.using(manager).execute()
.then(processLookups);
In processLookups I'm calling getLocal for each array that was returned. Example: State table
datacontext.lookups = {
    state: getLocal('States', orderBy.state, true),
    ....
};

function getLocal(resource, ordering, includeNullos) {
    var query = EntityQuery.from(resource)
        .orderBy(ordering)
        .noTracking(true);
    if (!includeNullos) {
        query = query.where('id', '!=', 0);
    }
    return manager.executeQueryLocally(query);
}
The arrays are not observable, but each property on the array objects is an observable function. This is just overhead I don't need, since these will not be changing.
How can I prevent the object properties from being observable?
Thanks
The raw lookups are available to you right there in the success callback from the query. No reason to look at the cache ... even if they were there (which they are not, as Jay makes clear).
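For instance, a rough sketch of that idea, reusing the query from the question (what processLookups does with the raw results is up to you):

return EntityQuery.from('Lookups')
    .noTracking(true)
    .using(manager).execute()
    .then(function (data) {
        // data.results holds the raw lookup objects; no cache round-trip needed
        return processLookups(data.results);
    });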
But what would you DO with these lookups? Presumably you want them to be related (by Breeze navigation paths) to real entities. For example, you'd like session.room to return the related room object. But if the room is one of your lookups and is NOT an entity, then the session.room navigation property won't return it; nav properties always return entities.
I can think of ways around this. But it's just more work and more trickery.
Let's stop for a moment and ask the most important question: Why?
Why do you care if the lookups are entities with observable properties? It may be "overhead you don't need". But is it overhead that hurts you? Hurts you how? Have you measured it?
Forgive me but I sense premature optimizations that could be distracting you from more worthy pursuits. Happy to be proven wrong.
I'm not sure I completely understand the situation, but the 'noTracking' option is really only relevant for 'remote' queries, i.e. not local ones. Basically, 'noTracking' tells Breeze not to process the results of the query into Breeze entities AND ALSO not to cache those results.
When you are querying the cache, which is what 'executeQueryLocally' is doing, both of these steps have already occurred, so 'noTracking' is ignored.
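Put another way, a small sketch using the calls from the question:

// Remote query: noTracking applies, so the results are not turned into entities or cached
EntityQuery.from('Lookups')
    .noTracking(true)
    .using(manager).execute();

// Local query against the cache: the cache already holds entities,
// so a noTracking(true) here would simply be ignored
manager.executeQueryLocally(EntityQuery.from('States').orderBy(orderBy.state));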
I don't know much about Dojo but is the following possible:
I assume it has a getter/setter for access to its datastore; is it possible to override this?
For example:
In the Dojo store I have 'Name: #Joe'.
Is it possible to change the get to something like:
get()
    if the value's first char is '#' then just return 'Joe' (i.e. drop the '#')
and:
set(var)
    if the value's first char is '#' then store '#' + var
Is this sort of thing possible, or will I need a wrapper API?
You can find the best docs at http://docs.dojocampus.org/dojo/data/api/Read
First, to get data from a store you have to use:
getValue(item, "key")
I believe you can solve the problem the following way. It assumes you are using an ItemFileReadStore; if you use another one, just replace it.
dojo.require("dojo.data.ItemFileReadStore");
dojo.declare("MyStore", dojo.data.ItemFileReadStore, {
getValue:function(item, key){
var ret = this.inherited(arguments);
return ret.charAt(0)=="#" ? ret.substr(1) : ret;
}
})
And then just use "MyStore" instead of ItemFileReadStore (or whatever store you are using).
I just hacked the code out and didn't try it, but it should show the solution well enough.
Good luck
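For completeness, a hypothetical usage of such a subclass might look like the following (the sample data, identifier and field names are made up for illustration):

var myStore = new MyStore({
    data: {
        identifier: "id",
        items: [{ id: 1, Name: "#Joe" }]
    }
});

myStore.fetchItemByIdentity({
    identity: 1,
    onItem: function (item) {
        console.log(myStore.getValue(item, "Name")); // logs "Joe" - the leading '#' is stripped
    }
});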
Yes, I believe so. I think what you'll want to do is read the following and determine whether, and how, it will work:
The following statement leads me to believe the answer is yes:
...
By requiring access to go through store functions, the store can hide the internal structure of the item. This allows the item to remain in a format that is most efficient for representing the datatype for a particular situation. For example, the items could be XML DOM elements and, in that case, the store would access the values using DOM APIs when store.getValue() is called.
As a second example, the item might be a simple JavaScript structure and the store can then access the values through normal JavaScript accessor notation. From the end-users perspective, the access is exactly the same: store.getValue(item, "attribute"). This provides a consistent look and feel to accessing a variety of data types. This also provides efficiency in accessing items by reducing item load times by avoiding conversion to a defined internal format that all stores would have to use.
...
Going through store accessor function provides the possibility of lazy-loading in of values as well as lazy reference resolution.
http://www.dojotoolkit.org/book/dojo-book-0-9/part-3-programmatic-dijit-and-dojo/what-dojo-data/dojo-data-design
I'd love to give you an example but I think it's going to take a lot more investigation.