Using meteor subscribe onReady function followed by observe results in repeated data - datatables

I use datatables on the client to allow speedy live sorting/filtering of around 10,000 rows of data. It is much faster to supply an array of rows to a DataTable during table creation than to add the rows individually. I can use the onReady function in subscribe to achieve this.
If I then call observe to pick up changes, I get the data already supplied in subscribe again.
While I can hack around this, I presume I am just not using Meteor correctly and would appreciate any advice.
Here is some sample code:
Meteor.subscribe("books", function () {
  // Runs when the subscription is complete
  var mData = Books.find().fetch();
  MyTable = $('#testTable').dataTable({
    'aoColumns': [
      { sTitle: 'title', sClass: 'alignRight', mDataProp: 'title' },
    ],
    'aaData': mData
  });
  // Add any new books.
  Books.find().observe({
    added: function (item) {
      // ERR: Adds the books already fetched into mData as well as any new books.
      MyTable.fnAddData([item]);
    }
  });
});

There's a hidden option to observe ({_suppress_initial: true}) that avoids this behaviour. I'm not sure if it's a good idea to use it, but it is there.
As for advice on how to structure your code: it's not as easy as it should be, but I think you want to do something like the following:
Wrap your table in a {{#constant}} helper so it never gets re-rendered.
Make sure the table doesn't get rendered (its one and only time) until the data is ready (this could help: https://github.com/oortcloud/unofficial-meteor-faq#how-do-i-know-when-my-subscription-is-ready-and-not-still-loading).
Do your code above in the table's Template.table.rendered callback (sketched below).
That approach seems more modular.
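For illustration, here's a rough, untested sketch of that structure. The template and element names are illustrative, it assumes the subscription is already ready when the template renders, and it uses an initializing flag rather than the hidden _suppress_initial option:
Template.table.rendered = function () {
  // Build the table in one go from the data already on the client
  var mData = Books.find().fetch();
  MyTable = $('#testTable').dataTable({
    'aoColumns': [
      { sTitle: 'title', sClass: 'alignRight', mDataProp: 'title' }
    ],
    'aaData': mData
  });
  // observe delivers the initial result set synchronously, so skip those
  // callbacks and only append books added after this point
  var initializing = true;
  Books.find().observe({
    added: function (item) {
      if (!initializing) MyTable.fnAddData([item]);
    }
  });
  initializing = false;
};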


KeystoneJS `filter` vs `Item` list access control

I am trying to understand in more depth the difference between filter and item access control.
Basically, I understand that item access control is, in a sense, a higher-order check and will run before the GraphQL filter.
My question is: if I am applying a filter on a specific field while updating, for instance a groupID or something like that, do I need to do the same check in item access control?
That would cause an extra database query on top of the check that is already part of the filter.
Any thoughts on that?
The TL;DR answer...
if I am doing a filter on a specific field [..] do I need to do the same check in Item Access Control?
No, you only need to apply the restriction in one place or the other.
Generally speaking, if you can describe the restriction using filter access control (i.e. as a GraphQL-style filter, with the args provided), then that's the best place to do it. But if your access control needs to behave differently based on values in the current item or the specific changes being made, item access control may be required.
Background
Access control in Keystone can be a little hard to get your head around but it's actually very powerful and the design has good reasons behind it. Let me attempt to clarify:
Filter access control is applied by adding conditions to the queries run against the database.
Imagine a content system with lists for users and posts. Users can author a post but some posts are also editable by everyone. The Post list config might have something like this:
// ..
access: {
  filter: {
    update: () => ({ isEditable: { equals: true } }),
  }
},
// ..
What that's effectively doing is adding a condition to all update queries run for this list. So if you update a post like this:
mutation {
  updatePost(where: { id: "123" }, data: { title: "Best Pizza" }) {
    id name
  }
}
The SQL that runs might look like this:
update "Post"
set title = 'Best Pizza'
where id = '123' and "isEditable" = true;
Note the isEditable condition that's automatically added by the update filter. This is pretty powerful in some ways but also has its limits – filter access control functions can only return GraphQL-style filters which prevents them from operating on things like virtual fields, which can't be filtered on (as they don't exist in the database). They also can't apply different filters depending on the item's current values or the specific updates being performed.
Filter access control functions can access the current session, so can do things like this:
filter: {
  // If the current user is an admin, don't apply the usual filter for editability
  update: (session) => {
    return session.isAdmin ? {} : { isEditable: { equals: true } };
  },
}
But you couldn't do something like this, referencing the current item data:
filter: {
  // ⚠️ This is broken; filter access control functions don't receive the current item ⚠️
  // The current user can update any post they authored, regardless of the isEditable flag
  update: (session, item) => {
    return item.author === session.itemId ? {} : { isEditable: { equals: true } };
  },
}
The benefit of filter access control is that it doesn't force Keystone to read an item before an operation occurs; the filter is effectively added to the operation itself. This can make them more efficient for the DB but does limit them somewhat. Note that things like hooks may also cause an item to be read before an operation is performed, so this performance difference isn't always evident.
Item access control is applied in the application layer, by evaluating the JS function supplied against the existing item and/or the new data supplied.
This makes them a lot more powerful in some respects. You can, for example, implement the previous use case, where authors are allowed to update their own posts, like this:
item: {
  // The current user can update any post they authored, regardless of the isEditable flag
  update: (session, item) => {
    return item.author === session.itemId || item.isEditable;
  },
}
Or add further restrictions based on the specific updates being made, by referencing the inputData argument.
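For example, a rough sketch using the same simplified signature as above (the title check is made up purely for illustration; inputData holds the changes being submitted):
item: {
  // Authors can update their own posts, but nobody may blank out the title
  update: (session, item, inputData) => {
    if (inputData.title === '') return false;
    return item.author === session.itemId || item.isEditable;
  },
}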
So item access control is arguably more powerful, but it can have significant performance implications – not so much for mutations, which are likely to be performed in small quantities, but definitely for read operations. In fact, Keystone won't let you define item access control for read operations. If you stop and think about this, you might see why – doing so would require reading all items in the list out of the DB and running the access control function against each one, every time the list was read. As such, the items accessible can only be restricted using filter access control.
Tip: If you think you need item access control for reads, consider putting the relevant business logic in a resolveInput hook that stores the relevant values as regular fields, then referencing those fields using filter access control.
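A minimal sketch of that tip, assuming a list-level resolveInput hook; the isPubliclyReadable field and computeReadability helper are hypothetical:
hooks: {
  resolveInput: async ({ resolvedData, item }) => ({
    ...resolvedData,
    // Denormalise whatever the read restriction depends on into a real field...
    isPubliclyReadable: computeReadability(resolvedData, item),
  }),
},
access: {
  filter: {
    // ...so reads can be restricted with plain filter access control
    query: () => ({ isPubliclyReadable: { equals: true } }),
  },
},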
Hope that helps

Redux store design – two arrays or one

I guess this could be applied to any Redux-backed system, but imagine we are building a simple React Native app that supports two actions:
fetching a list of messages from a remote API
the ability to mark those messages as having been read
At the moment I have a messagesReducer that defines its state as...
const INITIAL_STATE = {
  messages: [],
  read: []
};
The messages array stores the objects from the remote API, for example...
messages: [
  { messageId: 1234, title: 'Hello', body: 'Example' },
  { messageId: 5678, title: 'Goodbye', body: 'Example' }
];
The read array stores the numerical IDs of the messages that have been read plus some other meta data, for example...
read: [
  { messageId: 1234, meta: 'Something' },
  { messageId: 5678, meta: 'Time etc' }
];
In the React component that displays a message in a list, I run this test to see if the message should be shown as being read...
const isRead = this.props.read.filter(m => m.messageId == this.props.currentMessage.messageId).length > 0;
This is working great at the moment. Obviously I could have put a boolean isRead property on the message object but the main advantage of the above arrangement is that the entire contents of the messages array can be overwritten by what comes from the remote API.
My concern is about how well this will scale and how expensive the array.filter method is when the array gets large. Keep in mind that the app displays a list that could contain hundreds of messages, so the filtering happens for every message in the list. It works on my modern iPhone, but it might not work so well on less powerful phones.
I'm also thinking I might be missing some well established best practice pattern for this sort of thing.
Let's call the current approach Option 1. I can think of two other approaches...
Option 2 is to put isRead and readMeta properties on the message object. This would make rendering the message list super quick. However when we get the list of messages from the remote API, instead of just overwriting the current array we would need to step through the JSON returned by the API and carefully update and delete the messages in the local store.
Option 3 is keep the current read array but also to add isRead and readMeta properties on the message object. When we get the list of messages from the remote API we can overwrite the entire messages array, and then loop through the read array and copy the data into the corresponding message objects. This would also need to happen whenever the user reads a message – data would be duplicated in two places. This makes me feel uncomfortable, but maybe it's ok.
I've struggled to find many other examples of this type of store, but it could be that I'm just Googling the wrong thing. I'm quite new to Redux and some of my terminology is probably incorrect.
I'd really value any thoughts on this.
Using reselect you can memoize the result of the array.filter call so the array isn't re-filtered when neither the messages nor the read arrays have changed, which will allow you to stick with Option 1.
In this way, you can easily store the raw data in your reducers, and also access the computed data efficiently for display. A benefit of this is that you are decoupling the requirements for data structure and storage from the requirements of the way the data is displayed.
You can learn more about efficiently computing derived data in the Redux docs.
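A minimal sketch of what that could look like (assuming the messagesReducer is mounted at state.messages; the selector names are illustrative):
import { createSelector } from 'reselect';

const getMessages = state => state.messages.messages;
const getRead = state => state.messages.read;

// Recomputed only when the messages or read arrays actually change
export const getMessagesWithReadFlag = createSelector(
  [getMessages, getRead],
  (messages, read) =>
    messages.map(m => ({
      ...m,
      isRead: read.some(r => r.messageId === m.messageId),
    }))
);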
How about using a lookup table object, where the IDs are the keys?
This way you don't need to filter or loop to see if a certain message ID is there; just check whether the object holds a key with the corresponding ID.
So in your case it would be:
const isRead = !!this.props.read[this.props.currentMessage.messageId];
Small running example:
const read = {
  1234: {
    meta: 'Something'
  },
  5678: {
    meta: 'Time etc'
  }
};

const someMessage = { id: 5678 };
const someOtherMessage = { id: 999 };
const isRead = id => !!read[id];

console.log('someMessage is ', isRead(someMessage.id));
console.log('someOtherMessage is ', isRead(someOtherMessage.id));
Edit
I recommend reading about Normalizing State Shape in the Redux documentation.
There are great examples of designing and organizing the data and state.
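For reference, a normalized shape along the lines those docs suggest might look something like this (field names are illustrative):
const state = {
  messages: {
    byId: {
      1234: { messageId: 1234, title: 'Hello', body: 'Example' },
      5678: { messageId: 5678, title: 'Goodbye', body: 'Example' }
    },
    allIds: [1234, 5678]
  },
  read: {
    1234: { meta: 'Something' }
  }
};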

how to populate keystoneJS relationship

In KeystoneJS's docs:
Populating related data in queries
You can populate related data for relationship fields thanks to Mongoose's populate functionality. To populate the author and category documents when loading a Post from the example above, you would do this:
Post.model.findOne().populate('author categories').exec(function (err, post) {
  // the author is a fully populated User document
  console.log(post.author.name);
});
My question is: is there any option I can configure so these List APIs populate the many relationship automatically?
Thanks.
I think not. This is how I do it when I use Keystone as an API (using .populate).
exports.getStoreWithId = function (req, res) {
  Store.model
    .find()
    .populate('productTags productCategories')
    .where('_id', req.params.id)
    .exec(function (err, item) {
      if (err) return res.apiError('database error', err);
      res.apiResponse({
        store: item,
      });
    });
};
Pretty sure the short answer here is no. If you want to populate, you'll need to include the .populate call.
That being said, Keystone gives you access to the Mongoose schema, so the answer here should work. Their mongoose.Schema setup is handled by your Post.add configuration, so I think you can ignore their first snippet, and you should be able to add the hooks as Post.schema.pre(... for the second snippet.
The Post.schema.pre('save', ...) hooks definitely work with Keystone, so I assume the pre-find hooks work too; however, I've not actually tested this. (I'd be interested to know the outcome, though!)
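Something like this, for example (untested, as noted; it assumes a Mongoose version with query middleware and relationship fields named author and categories):
// Auto-populate on every find/findOne for this list
Post.schema.pre('find', function (next) {
  this.populate('author categories');
  next();
});
Post.schema.pre('findOne', function (next) {
  this.populate('author categories');
  next();
});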
Finally, if that works, you could also have a look at the mongoose-autopopulate package, and see if you can get that to play nicely with keystone.

Actual property name on REQUIRED_CHILDREN connection

In Relay, when using REQUIRED_CHILDREN like so:
return [{
  type: 'REQUIRED_CHILDREN',
  children: [
    Relay.QL`
      fragment on Payload {
        myConnection(first: 50) {
          edges {
            node {
              ${fragment}
            }
          }
        }
      }
    `
  ]
}]
and reading off the response through the onSuccess callback:
Relay.Store.commitUpdate(
  new AboveMutation({ }), { onFailure, onSuccess }
)
the response turns the property myConnection into a hashed name (i.e. __myConnection652K), which presumably is used to prevent connection/list conflicts inside the Relay store.
However, since this is a REQUIRED_CHILDREN query and I'm reading myConnection manually, it just prevents access to it.
Is there a way to get the actual property names when using the onSuccess callback?
Just as Ahmad wrote: using REQUIRED_CHILDREN means you're not going to store the results. The consequence is that the data supplied to the callback is in raw shape (nearly as it came from the server) and data masking does not apply.
Despite not storing the data, there seems to be no reason (though a core team member's opinion would certainly be more appropriate here) not to convert it to the client-style shape. This is the newest type of mutation, so there is a chance such a feature was accidentally omitted. It is normal for queries to be transformed to the server-style shape, and the opposite transformation could take place as well. However, until now it has not been needed – the transformation happened while saving the data to the store and updating component props. Currently most of the Relay team is highly focused on rewriting much of the implementation, so I would not expect this issue to be improved very soon.
So again, the solution proposed by Ahmad to convert the type to a GraphQLList seems to be the easiest and most reliable. If for any reason you want to stick with a connection, there is an option to take the GraphQL fragment supplied as children (actually its parsed form, stored in the __cachedFragment__ attribute of that original fragment) and traverse it to obtain the serializationKey for the desired field (e.g. __myConnection652K).

Right way to dynamically update view in Angular

What is the right way to update the model in the view, say after a successful API POST? I have a textarea, something like in Twitter, where a user can enter text and post it. The entered text must show up soon after it is posted successfully.
How do I achieve this? Should I make another call to get the posts separately, or is there another way to do this?
My code looks like this:
feedsResolve.getFeeds().then(function (feeds) {
  $scope.feeds = feeds;
});
where feedsResolve is a service returning a promise
$scope.postFeed = function () {
  var postObj = Restangular.all('posts');
  postObj.post($scope.feed.text).then(function (res) {
    // res contains only the new feed id
  });
};
How do I update the $scope.feeds in the view?
I assume you are posting a new post and that generally posts look like:
{
  id: 42,
  text: 'This is my text'
}
In this case you can do something like:
$scope.postFeed = function () {
  var postObj = Restangular.all('posts');
  var feedText = $scope.feed.text;
  postObj.post(feedText).then(function (res) {
    $scope.feeds.push({ id: res.id, text: feedText });
  });
};
A better practice when writing a RESTful service, though, is to have your POST return the actual JSON object for the new feed that was added (not just its id). If that were the case, you could just add it to your feeds array.
If your JSON object is complex, this practice is the most common and easiest way to handle this without needing extra requests to the server. Since you are already on the server, and you've likely already created the object (in order to insert it into the database), all you have to do is serialize it back out to the HTTP response. This adds little to no overhead and gives the client all the information it needs to update effortlessly.
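In that case the client side reduces to something like this (a sketch, assuming the response body is the created feed object):
$scope.postFeed = function () {
  Restangular.all('posts').post($scope.feed.text).then(function (newFeed) {
    // The server returned the full feed, so just append it to the list
    $scope.feeds.push(newFeed);
  });
};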