Redux store design – two arrays or one - react-native

I guess this could be applied to any Redux-backed system, but imagine we are building a simple React Native app that supports two actions:
fetching a list of messages from a remote API
the ability to mark those messages as having been read
At the moment I have a messagesReducer that defines its state as...
const INITIAL_STATE = {
  messages: [],
  read: []
};
The messages array stores the objects from the remote API, for example...
messages: [
  { messageId: 1234, title: 'Hello', body: 'Example' },
  { messageId: 5678, title: 'Goodbye', body: 'Example' }
];
The read array stores the numerical IDs of the messages that have been read, plus some other metadata, for example...
read: [
  { messageId: 1234, meta: 'Something' },
  { messageId: 5678, meta: 'Time etc' }
];
In the React component that displays a message in a list, I run this test to see if the message should be shown as being read...
const isRead = this.props.read.filter(m => m.messageId == this.props.currentMessage.messageId).length > 0;
This is working great at the moment. Obviously I could have put a boolean isRead property on the message object but the main advantage of the above arrangement is that the entire contents of the messages array can be overwritten by what comes from the remote API.
My concern is how well this will scale, and how expensive the array.filter call becomes when the array gets large. Keep in mind that the app displays a list that could contain hundreds of messages, so the filtering happens for each message in the list. It works on my modern iPhone, but it might not work so well on less powerful phones.
I'm also thinking I might be missing some well established best practice pattern for this sort of thing.
Let's call the current approach Option 1. I can think of two other approaches...
Option 2 is to put isRead and readMeta properties on the message object. This would make rendering the message list super quick. However when we get the list of messages from the remote API, instead of just overwriting the current array we would need to step through the JSON returned by the API and carefully update and delete the messages in the local store.
Option 3 is keep the current read array but also to add isRead and readMeta properties on the message object. When we get the list of messages from the remote API we can overwrite the entire messages array, and then loop through the read array and copy the data into the corresponding message objects. This would also need to happen whenever the user reads a message – data would be duplicated in two places. This makes me feel uncomfortable, but maybe it's ok.
I've struggled to find many other examples of this type of store, but it could be that I'm just Googling the wrong thing. I'm quite new to Redux and some of my terminology is probably incorrect.
I'd really value any thoughts on this.

Using reselect you can memoize the result of the array.filter so the array is not re-filtered when neither the messages nor the read array has changed, which will allow you to keep using Option 1.
In this way, you can easily store the raw data in your reducers, and also access the computed data efficiently for display. A benefit from this is that you are decoupling the requirements for data structure and storage from the requirements for the way the data is displayed.
You can learn more about efficiently computing derived data in the Redux docs.
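For illustration, here is a minimal sketch of such a memoized selector, assuming the selector is given the slice managed by messagesReducer (the exact state path depends on how your reducers are combined):

import { createSelector } from 'reselect';

const getMessages = state => state.messages;
const getRead = state => state.read;

// Recomputed only when the messages or read array actually changes.
export const getMessagesWithReadFlag = createSelector(
  [getMessages, getRead],
  (messages, read) =>
    messages.map(message => ({
      ...message,
      isRead: read.some(r => r.messageId === message.messageId)
    }))
);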

How about using a lookup table object, where the ids are the keys?
This way you don't need to filter or loop to see if a certain message id is there. Just check whether the object holds a key with the corresponding id.
So in your case it would be:
const isRead = !!this.props.read[this.props.currentMessage.messageId];
Small running example:
const read = {
  1234: {
    meta: 'Something'
  },
  5678: {
    meta: 'Time etc'
  }
};
const someMessage = {id: 5678};
const someOtherMessage = {id: 999};
const isRead = id => !!read[id];
console.log('someMessage is ',isRead(someMessage.id));
console.log('someOtherMessage is ',isRead(someOtherMessage.id));
Edit
I recommend reading about Normalizing State Shape in the Redux documentation.
There are great examples of designing and organizing the data and state.
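For example, a reducer keyed this way might look roughly like the sketch below; the action types MESSAGES_FETCHED and MARK_MESSAGE_READ are made up for illustration:

const INITIAL_STATE = { messages: [], read: {} };

function messagesReducer(state = INITIAL_STATE, action) {
  switch (action.type) {
    case 'MESSAGES_FETCHED':
      // The messages array can still be overwritten wholesale by the API response.
      return { ...state, messages: action.payload };
    case 'MARK_MESSAGE_READ':
      // Adding a key to the lookup table marks the message as read.
      return {
        ...state,
        read: { ...state.read, [action.messageId]: { meta: action.meta } }
      };
    default:
      return state;
  }
}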

Related

Mobx-state-tree create form with types.identifier field on model

I've started using mobx-state-tree recently and I have a practical question.
I have a model that has a types.identifier field, this is the database id of the resource and when I query for existing stuff it gets populated.
When I am creating a new instance, though, following the example Michel has on egghead, I need to pass an initial id to MyModel.create() in the initial state. However, this ID will only be known once I post the creation to the API and get back the created resource.
I have searched around for a simple CRUD example using mobx-state-tree but couldn't find one (suggestions?).
What is the best practice here? Should I do MyModel.create({ id: 'foobar' }) and weed it out when I post to the API (and update the instance once I get the response from the API)?
This is a limitation of mobx-state-tree's current design. Identifiers are immutable.
One strategy I've seen to get around this issue is to store your persistence layer's id in a separate field from your types.identifier field. You would then use a library like uuid to generate the types.identifier value:
import { v4 } from "node-uuid"
import { types } from "mobx-state-tree"

const Box = types
  .model("Box", {
    id: types.identifier,
    name: "hal",
    x: 0,
    y: 0
  })

const box = Box.create({ id: v4(), name: "hal", x: 10, y: 10 })
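To make the separate-field strategy explicit, here is a rough sketch; the serverId field, the setServerId action and api.createBox are hypothetical names, not part of mobx-state-tree:

import { types, getSnapshot } from "mobx-state-tree"
import { v4 } from "node-uuid"

const Box = types
  .model("Box", {
    id: types.identifier,                 // client-generated uuid, never changes
    serverId: types.maybe(types.number),  // database id, filled in after the POST
    name: "hal",
    x: 0,
    y: 0
  })
  .actions(self => ({
    setServerId(id) {
      self.serverId = id
    }
  }))

const box = Box.create({ id: v4(), name: "hal", x: 10, y: 10 })

// api.createBox is a placeholder for whatever call persists the box remotely.
api.createBox(getSnapshot(box)).then(created => box.setServerId(created.id))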

Zapier lazy load input fields choices

I'm building a Zapier app for a platform that has dynamic fields. I have an API that returns the list of fields for one of my resources, for example:
[
  { name: "First Name", key: "first_name", type: "String" },
  { name: "Civility", key: "civility", type: "Multiple" }
]
I build my action's inputFields based on this API:
create: {
  [...],
  operation: {
    inputFields: [
      fetchFields()
    ],
    [...]
  },
}
The API returns types that are lists of values (e.g. Civility), but to get those values I have to make another API call.
For now, what I have done is that in my fetchFields function, each time I encounter a type: "Multiple", I make another API call to get the possible values and set them as choices on the input field. However, this is expensive and the page on Zapier takes too long to display the fields.
I tried to use the z.dehydrate feature provided by Zapier but it doesn't work for input choices.
I can't use a dynamic dropdown here as I can't pass the key of the field whose possible values I'm looking for. For example, to get back the possible values for Civility, I'd need to pass the civility key to my API.
What are the options in this case?
David here, from the Zapier Platform team.
Thanks for writing in! I think what you're doing is possible, but I'm also not 100% sure that I understand what you're asking.
You can have multiple API calls in the function (which it sounds like you do). In the end, the function should return an array of Field objects (as described here).
The key thing you might not be aware of is that subsequent steps have access to a partially-filled bundle.inputData, so you can have a first function that gets field options and allows a user to select something, then a second function that runs and pulls in fields based on that choice.
Otherwise, I think a function that does 2 API calls (one to fetch the field types and one to turn them into Zapier field objects) is the best bet.
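As a rough illustration of the two-step idea above (the URLs, field keys and response shapes below are assumptions about your API, not part of the Zapier platform):

const fieldPicker = (z, bundle) => {
  // Step 1: let the user pick which "Multiple" field they want to configure.
  return z.request('https://example.com/api/fields').then(response => {
    const fields = z.JSON.parse(response.content);
    return [{
      key: 'field_key',
      label: 'Field',
      choices: fields.filter(f => f.type === 'Multiple').map(f => f.key),
      altersDynamicFields: true
    }];
  });
};

const fieldValues = (z, bundle) => {
  // Step 2: bundle.inputData now holds the user's choice, so only one extra
  // call is needed to fetch that field's possible values.
  if (!bundle.inputData.field_key) { return []; }
  return z
    .request(`https://example.com/api/fields/${bundle.inputData.field_key}/values`)
    .then(response => {
      const values = z.JSON.parse(response.content);
      return [{ key: bundle.inputData.field_key, choices: values }];
    });
};

// inputFields accepts a mix of plain field objects and functions like these:
// inputFields: [fieldPicker, fieldValues]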
If this didn't answer your question, feel free to email partners#zapier.com or join the slack org (linked at the bottom of the readme) and we'll try to solve it there.

Searching in large object arrays

I have a Vuex.Store where I have centralized all site data; this object is actually pretty big. On different pages, the user needs to search the data, and the results have to be paged.
In order to show all the data, I partition a particular object array in my store: I keep the length and totalPages in a component, and then iterate through the original array maintained by Vuex.
For searching, however, I have created an action in my Vuex module which takes an argument, compares items within the object array against it, and finally returns a promise.
The actual problem is that I then create a temporary array in my component for the searched items and re-assign the length and totalPages values. This temporary array feels redundant, it only adds to what gets rendered, and it seems like a bad idea.
The point of posting this question is to find out whether this approach would be considered a standard solution, or whether there are better ways to search large collections.
Let's take for example a simple blog.
We have the following pages:
/blog/page/1
/blog/page/4
/slug-of-the-post
/slug-of-another-post
My Vuex store has the following keys:
{
  posts: {},
  pages: {},
}
Now when the user goes to /blog/page/1 I fetch all the data for that page from the API and store each post in the posts object, and in the pages object I store only a reference to the post.
This is how the store looks after going to /blog/page/1:
{
  posts: {
    'blog-post-example': {
      id: 1,
      title: 'test'
    },
    'another-blog-post-example': {
      id: 2,
      title: 'another title'
    }
  },
  pages: {
    1: [
      'blog-post-example',
      'another-blog-post-example'
    ]
  },
}
The same happens if they go to the next page "/blog/page/2".
However, if they go back to "/blog/page/1", I can detect that the data for that page has already been fetched and I don't need to hit the API again.
This way you smooth out the UX without loading all your blog posts into the store at once.
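A rough sketch of that check in a Vuex action; the endpoint URL and mutation names are placeholders:

const actions = {
  fetchPage({ state, commit }, page) {
    // Skip the API entirely if this page has already been fetched.
    if (state.pages[page]) { return Promise.resolve(); }

    return fetch(`/api/blog/page/${page}`)
      .then(response => response.json())
      .then(posts => {
        commit('ADD_POSTS', posts);                                  // fill the posts lookup
        commit('SET_PAGE', { page, slugs: posts.map(p => p.slug) }); // store references only
      });
  }
};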
Hope that helps.

Unable to use Ember data with JSONAPI and fragments to support nested JSON data

Overview
I'm using Ember Data and have a JSONAPI. Everything works fine until I have a more complex object (let's say an invoice, as a generic concept) with an array of items called lineEntries. The line entries are not mapped directly to a table, so they need to be stored as raw JSON object data. The line entry model also contains default and computed values. I wish to store the list data as a JSON object and then, when it is loaded back from the store, be able to manipulate it as normal in Ember as an array of my model.
What I've tried
I've looked at and tried several approaches, the best appear to be (open to suggestions here!):
Fragments
Replace problem models with fragments
I've tried making the line entry model a fragment and then referencing the fragment on the invoice model as a fragmentArray. Line entries add to the array as normal, but default values don't work (should they?). It creates the object and I can store it in the backend, but when I return it, it fails with either a normalisation issue or a serialiser issue. Can anyone state the format the data should be returned in? It's confusing, as normalising the data seems to require JSONAPI but the fragment requires the JSON serialiser. I've tried several combinations but no luck so far. My line entries don't have actual ids as the data is saved and loaded as a block. Is this an issue?
DS.EmbeddedRecordsMixin
Although not supported in JSONAPI, it sounds possible to use JSONAPI and then switch to JSONSerializer or RESTSerializer for the problem models. If this is possible, could someone give me a working example and the JSON format that should be returned by the API? I have header authorisation and other such data, so would I still be able to set this at the application level for all requests not using my JSONAPI?
Ember-data-save-relationships
I found an add-on here that provides a way to do this. It seems more involved than the other approaches, but when I've tried it I can send the data up by setting the data as embedded. Great! But although it saves, it doesn't unwrap the response correctly and I'm back with the same issues.
Custom serialiser
Replace the model's serialiser with something that takes the data and sends it as plain JSON data, then deserialises it back into something Ember can use. This sounds similar to the above except that I do the heavy lifting. The only reason to do this is that all examples for the above solutions are quite light and don't really show how to set this up with an actual JSONAPI setup that would need it.
Where I am and what I need
Basically all approaches lead to the JSON saving fine, but the JSON returned from the server is not in the correct format or the deserialisation fails, and it's unclear what the format should be or what needs to change without breaking the existing JSONAPI models that work fine.
If anyone knows the format for the returned API data it may resolve this. I've tried JSONAPI with lineEntries returning the same format it was saved in. I've tried placing relationship sections like the add-on suggested, and I've also tried placing relationship-only data against the entries with an include section holding all the references. Any help on this would be great, as I've learned a lot through this, but deadlines are looming and I can't see a viable solution that doesn't break as much as it fixes.
If you are looking for the return format for relational data from the API server, you need to make sure of the following:
Make sure the relationship is defined in the ember model
Return all successes with a status code of 200
From there you need to make sure you return relational data correctly. If you've set the Ember model for the relationship to {async: true}, you need only return the id of the related model (which should also be defined in Ember). If you do not set {async: true}, Ember expects all relational data to be included.
Return data with relationships per the JSON API specification.
Example:
models\unicorn.js in ember:
import DS from 'ember-data';

export default DS.Model.extend({
  user: DS.belongsTo('user', {async: true}),
  staticrace: DS.belongsTo('staticrace', {async: true}),
  unicornName: DS.attr('string'),
  unicornLevel: DS.attr('number'),
  experience: DS.attr('number'),
  hatchesAt: DS.attr('number'),
  isHatched: DS.attr('boolean'),
  raceEndsAt: DS.attr('number'),
  isRacing: DS.attr('boolean'),
});
in routes\unicorns.js on the api server on GET/:id:
var jsonObject = {
  "data": {
    "type": "unicorn",
    "id": unicorn.dataValues.id,
    "attributes": {
      "unicorn-name": unicorn.dataValues.unicornName,
      "unicorn-level": unicorn.dataValues.unicornLevel,
      "experience": unicorn.dataValues.experience,
      "hatches-at": unicorn.dataValues.hatchesAt,
      "is-hatched": unicorn.dataValues.isHatched,
      "race-ends-at": unicorn.dataValues.raceEndsAt,
      "is-racing": unicorn.dataValues.isRacing
    },
    "relationships": {
      "staticrace": {
        "data": {"type": "staticrace", "id": unicorn.dataValues.staticRaceId}
      },
      "user": {
        "data": {"type": "user", "id": unicorn.dataValues.userId}
      }
    }
  }
}
res.status(200).json(jsonObject);
In ember, you can call this by chaining model functions. For example when this unicorn goes to race in controllers\unicornracer.js:
raceUnicorn() {
  if (this.get('unicornId') === '') { return false }
  else {
    return this.store.findRecord('unicorn', this.get('unicornId'), { backgroundReload: false }).then(unicorn => {
      return this.store.findRecord('staticrace', this.get('raceId')).then(staticrace => {
        if (unicorn.getProperties('unicornLevel').unicornLevel >= staticrace.getProperties('raceMinimumLevel').raceMinimumLevel) {
          unicorn.set('isRacing', true);
          unicorn.set('staticrace', staticrace);
          unicorn.set('raceEndsAt', Math.floor(Date.now()/1000) + staticrace.get('duration'));
          this.set('unicornId', '');
          return unicorn.save();
        }
        else { return false; }
      });
    });
  }
}
The above code sends a PATCH to the api server route unicorns/:id
Final note about GET, POST, DELETE, PATCH:
GET assumes you are getting ALL of the information associated with a model (the example above shows a GET response). This is associated with model.findRecord (GET /:id) (expects one record), model.findAll (GET /) (expects an array of records), model.query (GET /?query=&string=) (expects an array of records), and model.queryRecord (GET /?query=&string=) (expects one record).
POST assumes you return at least what you POST to the API server from Ember, but you can also return additional information created on the API server side, such as createdAt dates. If the data returned is different from what you used to create the model, it'll update the created model with the returned information. This is associated with model.createRecord (POST /) (expects one record).
DELETE assumes you return the type and the id of the deleted object, not data or relationships. This is associated with model.deleteRecord (DELETE /:id) (expects one record).
PATCH assumes you return at least the information that was changed. If you only change one field, for instance the unicornName in my unicorn model, it would only PATCH the following:
{
  data: {
    "type": "unicorn",
    "id": req.params.id,
    "attributes": {
      "unicorn-name": "This is a new name!"
    }
  }
}
So it only expects a returned response of at least that, but like POST, you can return other changed items!
I hope this answers your questions about the JSON API adapter. Most of this information was originally gleaned by reading over the specification at http://jsonapi.org/format/ and the Ember implementation documentation at https://emberjs.com/api/data/classes/DS.JSONAPIAdapter.html

Actual property name on REQUIRED_CHILDREN connection

In relay, when using REQUIRED_CHILDREN like so:
return [{
  type: 'REQUIRED_CHILDREN',
  children: [
    Relay.QL`
      fragment on Payload {
        myConnection (first: 50) {
          edges {
            node {
              ${fragment}
            }
          }
        }
      }
    `
  ]
}]
and reading off the response through the onSuccess callback:
Relay.Store.commitUpdate(
  new AboveMutation({ }), { onFailure, onSuccess }
)
the response turns the property myConnection into a hashed name (e.g. __myConnection652K), which is presumably used to prevent connection/list conflicts inside the Relay store.
However, since this is REQUIRED_CHILDREN and I'm reading myConnection manually, it just prevents access to it.
Is there a way to get the actual property names when using the onSuccess callback?
Just as Ahmad wrote: using REQUIRED_CHILDREN means you're not going to store the results. The consequence is that the data supplied to the callback is in raw shape (nearly as it came from the server) and data masking does not apply.
Despite not storing the data, there seems to be no reason (though a core team member's opinion would certainly be more appropriate here) not to convert it to the client-style shape. This is the newest type of mutation, so there is a chance such a feature was accidentally omitted. It is normal that queries are transformed to the server-style shape, so the opposite transformation could take place as well. Until now, however, it has not been needed: while saving data to the store and updating component props, the transformation happened along the way. Currently most of the Relay team is highly focused on rewriting much of the implementation, so I would not expect this issue to be improved very soon.
So again, the solution proposed by Ahmad to convert the type to a GraphQLList seems to be the easiest and most reliable. If for any reason you want to stick with a connection, there is an option to take the GraphQL fragment supplied as children (actually its parsed form, stored in the __cachedFragment__ attribute of the original fragment) and traverse it to obtain the serializationKey for the desired field (e.g. __myConnection652K).