I am creating an app that shows announcements stored in Firestore, and each announcement has a hasRead object.
It works in that when a user reads an announcement, it is shown as read in that user's app. But when another user reads the same announcement, his/her user ID is stored, overwriting any other user ID already there.
Here is how I store it:
setAnnounceToRead(userId) {
  firebase.firestore().collection('announcements').doc(this.state.id).set({
    hasread: {
      userId
    }
  },
  { merge: true });
}
I already found out that it is because of the merge: it doesn't "add" the user ID but overwrites it instead.
How can I add the ID of every user who reads the announcement, while keeping the already existing IDs?
Cheers
Right now you're storing each user's UID as a field named userId. Since you're using the same field name for each user, you end up storing only the last user's UID.
To store the UID for all users, you'd usually have a structure like this:
hasread: {
  udartsUid: true,
  pufsUid: true
}
In your code that would translate to something like:
let update = {};
update[userId] = true;

firebase.firestore().collection('announcements').doc(this.state.id).set({
  hasread: update
},
{ merge: true });
But this type of operation got a lot easier recently, since Firestore now has operations that allow you to use an array for this type of information.
let doc = firebase.firestore().collection('announcements').doc(this.state.id);
doc.update({ hasRead: firebase.firestore.FieldValue.arrayUnion(userId) });
This snippet will add the userId value to the array if it isn't already in there. If the value is already in the array, it does nothing.
For more on the latter, see the blog post Better arrays in Cloud Firestore.
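A useful side effect of the array approach is that you can query on it. A minimal sketch (web SDK, namespaced style as above), assuming the hasRead array field from the previous snippet:

// Find all announcements the current user has already read
firebase.firestore().collection('announcements')
  .where('hasRead', 'array-contains', userId)
  .get()
  .then(snapshot => {
    snapshot.forEach(doc => console.log(doc.id, 'has been read'));
  });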
I am trying to understand in more depth the difference between filter and item access control.
Basically I understand that item access control is, sort of, a higher-order check that will run before the GraphQL filter.
My question is: if I am applying a filter on a specific field while updating, for instance a groupID or something like that, do I need to do the same check in item access control?
This will cause an extra database query as part of the filter.
Any thoughts on that?
The TL;DR answer...
if I am doing a filter on a specific field [..] do I need to do the same check in Item Access Control?
No, you only need to apply the restriction in one place or the other.
Generally speaking, if you can describe the restriction using filter access control (i.e. as a GraphQL-style filter, with the args provided) then that's the best place to do it. But if your access control needs to behave differently based on values in the current item or the specific changes being made, item access control may be required.
Background
Access control in Keystone can be a little hard to get your head around, but it's actually very powerful and the design has good reasons behind it. Let me attempt to clarify:
Filter access control is applied by adding conditions to the queries run against the database.
Imagine a content system with lists for users and posts. Users can author a post but some posts are also editable by everyone. The Post list config might have something like this:
// ..
access: {
  filter: {
    update: () => ({ isEditable: { equals: true } }),
  }
},
// ..
What that's effectively doing is adding a condition to all update queries run for this list. So if you update a post like this:
mutation {
  updatePost(where: { id: "123" }, data: { title: "Best Pizza" }) {
    id
    title
  }
}
The SQL that runs might look like this:
update "Post"
set title = 'Best Pizza'
where id = 123 and "isEditable" = true;
Note the isEditable condition that's automatically added by the update filter. This is pretty powerful in some ways but also has its limits – filter access control functions can only return GraphQL-style filters which prevents them from operating on things like virtual fields, which can't be filtered on (as they don't exist in the database). They also can't apply different filters depending on the item's current values or the specific updates being performed.
Filter access control functions can access the current session, so they can do things like this:
filter: {
  // If the current user is an admin, don't apply the usual filter for editability
  update: ({ session }) => {
    return session.isAdmin ? {} : { isEditable: { equals: true } };
  },
}
But you couldn't do something like this, referencing the current item data:
filter: {
  // ⚠️ this is broken; filter access control functions don't receive the current item ⚠️
  // The current user can update any post they authored, regardless of the isEditable flag
  update: ({ session, item }) => {
    return item.author === session.itemId ? {} : { isEditable: { equals: true } };
  },
}
The benefit of filter access control is that it doesn't force Keystone to read an item before an operation occurs; the filter is effectively added to the operation itself. This can make filters more efficient for the DB but does limit them somewhat. Note that things like hooks may also cause an item to be read before an operation is performed, so this performance difference isn't always evident.
Item access control is applied in the application layer, by evaluating the JS function supplied against the existing item and/or the new data supplied.
This makes them a lot more powerful in some respects. You can, for example, implement the previous use case, where authors are allowed to update their own posts, like this:
item: {
  // The current user can update any post they authored, regardless of the isEditable flag
  update: ({ session, item }) => {
    return item.author === session.itemId || item.isEditable;
  },
}
Or add further restrictions based on the specific updates being made, by referencing the inputData argument.
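For illustration, a sketch of that (assuming, as in the earlier filter example, an isAdmin flag on the session; inputData contains the fields being written):

item: {
  // Non-admins may not reassign the author; otherwise the same rule as above
  update: ({ session, item, inputData }) => {
    if (inputData.author !== undefined && !session.isAdmin) return false;
    return item.author === session.itemId || item.isEditable;
  },
}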
So item access control is arguably more powerful, but it can have significant performance implications – not so much for mutations, which are likely to be performed in small quantities, but definitely for read operations. In fact, Keystone won't let you define item access control for read operations. If you stop and think about this, you might see why – doing so would require reading all items in the list out of the DB and running the access control function against each one, every time the list was read. As such, the items accessible can only be restricted using filter access control.
Tip: If you think you need item access control for reads, consider putting the relevant business logic in a resolveInput hook that stores the relevant values as fields, then referencing those fields using filter access control.
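For instance, a minimal sketch of that tip against a Keystone 6 list, assuming a hypothetical stored isPublic field that the hook keeps up to date:

import { list } from '@keystone-6/core';
import { allowAll } from '@keystone-6/core/access';
import { checkbox, text } from '@keystone-6/core/fields';

export const Post = list({
  access: {
    operation: allowAll,
    filter: {
      // reads can now be restricted with a plain filter on the stored field
      query: () => ({ isPublic: { equals: true } }),
    },
  },
  hooks: {
    resolveInput: ({ resolvedData }) => ({
      ...resolvedData,
      // derive and store whatever value the read restriction needs here
    }),
  },
  fields: {
    title: text(),
    isPublic: checkbox(),
  },
});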
Hope that helps
This, to me, is the most basic authentication scheme for user-generated content, given a collection called "posts":
Allow any authenticated user to insert into "posts" collection
Allow the user who inserted the document into collection "posts", to read, update, and destroy the document, and deny all others
Allow the user to list all documents in collection "posts" if they are the one who created the documents originally
All the examples I've found so far seem to rely on the document ID being the same as the user's ID, which would only work for a user's "profile" data (again, all the examples seem to cover this single, limited scenario).
It doesn't seem that there is any sort of metadata recording who the authenticated user was when a document was created, so it seems I must store the ID on the document myself. But I haven't been able to get past this point and create a working example. Also, this opens up the opportunity for users to create documents as other users, since the user ID is set by the client.
I feel like I am missing something fundamental here since this has to be the most basic scenario but have not yet found any concise examples for doing this.
This answer is from this github gist. Basically, each document in the posts collection has a uid associated with it, and the rules check whether it matches the user's uid. (Note that this first snippet uses the Firebase Realtime Database rules syntax; the Firestore equivalent follows in the edit below.)
// Checks auth uid equals database node uid
// In other words, the User can only access their own data
{
  "rules": {
    "posts": {
      "$uid": {
        ".read": "$uid === auth.uid",
        ".write": "$uid === auth.uid"
      }
    }
  }
}
-- Edit --
Firestore rules (DSL):
match /posts/{postId} {
  // uid must be read off the stored document (resource.data)
  allow read, update, delete: if resource.data.uid == request.auth.uid;
  // on create, check the incoming data so users can't write someone else's uid
  allow create: if request.resource.data.uid == request.auth.uid;
}
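On the client, a create then has to include the current user's uid for the rule to pass, and list queries must be constrained to match the read rule. A sketch (web SDK, field names assumed from the question):

// the create rule rejects this write unless uid matches the signed-in user
firebase.firestore().collection('posts').add({
  uid: firebase.auth().currentUser.uid,
  title: 'My first post'
});

// list my posts; the query must filter on uid or the read rule fails it
firebase.firestore().collection('posts')
  .where('uid', '==', firebase.auth().currentUser.uid)
  .get();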
I guess this could apply to any Redux-backed system, but imagine we are building a simple React Native app that supports two actions:
fetching a list of messages from a remote API
the ability to mark those messages as having been read
At the moment I have a messagesReducer that defines its state as...
const INITIAL_STATE = {
  messages: [],
  read: []
};
The messages array stores the objects from the remote API, for example...
messages: [
  { messageId: 1234, title: 'Hello', body: 'Example' },
  { messageId: 5678, title: 'Goodbye', body: 'Example' }
];
The read array stores the numerical IDs of the messages that have been read plus some other meta data, for example...
read: [
  { messageId: 1234, meta: 'Something' },
  { messageId: 5678, meta: 'Time etc' }
];
In the React component that displays a message in a list, I run this test to see if the message should be shown as being read...
const isRead = this.props.read.filter(m => m.messageId == this.props.currentMessage.messageId).length > 0;
This is working great at the moment. Obviously I could have put a boolean isRead property on the message object but the main advantage of the above arrangement is that the entire contents of the messages array can be overwritten by what comes from the remote API.
My concern is about how well this will scale, and how expensive the array.filter method is when the array gets large. Also keep in mind that the app displays a list that could contain hundreds of messages, so the filtering happens for every message in the list. It works on my modern iPhone, but it might not work so well on less powerful phones.
I'm also thinking I might be missing some well established best practice pattern for this sort of thing.
Let's call the current approach Option 1. I can think of two other approaches...
Option 2 is to put isRead and readMeta properties on the message object. This would make rendering the message list super quick. However when we get the list of messages from the remote API, instead of just overwriting the current array we would need to step through the JSON returned by the API and carefully update and delete the messages in the local store.
Option 3 is keep the current read array but also to add isRead and readMeta properties on the message object. When we get the list of messages from the remote API we can overwrite the entire messages array, and then loop through the read array and copy the data into the corresponding message objects. This would also need to happen whenever the user reads a message – data would be duplicated in two places. This makes me feel uncomfortable, but maybe it's ok.
I've struggled to find many other examples of this type of store, but it could be that I'm just Googling the wrong thing. I'm quite new to Redux and some of my terminology is probably incorrect.
I'd really value any thoughts on this.
Using reselect you can memoize the result of the array.filter call, so the array isn't re-filtered when neither the messages nor the read arrays have changed. This allows you to keep using Option 1.
In this way, you can easily store the raw data in your reducers, and also access the computed data efficiently for display. A benefit from this is that you are decoupling the requirements for data structure and storage from the requirements for the way the data is displayed.
You can learn more about efficiently computing derived data in the Redux docs.
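A minimal sketch of such a selector, assuming the state shape from the question:

import { createSelector } from 'reselect';

const getMessages = state => state.messages;
const getRead = state => state.read;

// recomputed only when messages or read actually change;
// otherwise the memoized result is returned
export const getMessagesWithReadFlag = createSelector(
  [getMessages, getRead],
  (messages, read) => {
    const readIds = new Set(read.map(r => r.messageId));
    return messages.map(m => ({ ...m, isRead: readIds.has(m.messageId) }));
  }
);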
How about using a lookup table object, where the IDs are the keys?
This way you don't need to filter or loop to see if a certain message ID is there. Just check if the object holds a key with the corresponding ID:
So in your case it will be:
const isRead = !!this.props.read[this.props.currentMessage.messageId];
Small running example:
const read = {
  1234: {
    meta: 'Something'
  },
  5678: {
    meta: 'Time etc'
  }
};
const someMessage = {id: 5678};
const someOtherMessage = {id: 999};
const isRead = id => !!read[id];
console.log('someMessage is ',isRead(someMessage.id));
console.log('someOtherMessage is ',isRead(someOtherMessage.id));
Edit
I recommend reading about Normalizing State Shape in the Redux documentation.
There are great examples there of designing and organizing data and state.
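For illustration, the question's state might look roughly like this when normalized (a sketch of the pattern described in those docs):

const state = {
  messages: {
    byId: {
      1234: { messageId: 1234, title: 'Hello', body: 'Example' },
      5678: { messageId: 5678, title: 'Goodbye', body: 'Example' }
    },
    allIds: [1234, 5678]
  },
  // read becomes a lookup table too, keyed by messageId
  read: {
    1234: { meta: 'Something' }
  }
};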
I can't figure out how Ember determines whether it should update or create a record. I would assume it's based on the ID or on the store entry, but it seems to be something else. The code example clarifies:
// this returns the user without making an api call
currentUser.get('store').find('user_detail', '49');

// this returns 49
currentUser.get('id');

// this returns true
currentUser.get('store').hasRecordForId('user_detail', 49);

// this issues a create to /api/userDetails instead
// of updating /api/userDetails/49
currentUser.save();

// maybe this is a lead; note the 48 at the end
currentUser.toString();
// <EmberApp.UserDetail:ember461:48>

// it looks as though currentState is involved here
// http://emberjs.com/api/data/classes/DS.RootState.html
currentUser.currentState;

// returns "root.loaded.created.uncommitted"
currentUser.get('currentState.stateName');

// also isNew is wrong and returns true
currentUser.get('isNew');
Let me explain why I have this issue. My app has a current user. When you log out, I update the current user by calling Ember.currentUser.setProperties(newUserData). I update the currentUser object in place so that Ember automatically triggers updates throughout my app; if I replaced it instead (Ember.currentUser = newUser;), nothing would update. If I can't solve the above problem, an alternative solution for swapping the user object would also work.
This is how I handle the global user state
container.register('user:current', Ember.currentUser);
// and handle updates via Ember.currentUser.setProperties()
application.inject('controller', 'user', 'user:current');
application.inject('route', 'user', 'user:current');
A proper solution would replace Ember.currentUser; however, doing that doesn't trigger updates.
A new model will have the isNew and isDirty properties set to true; an existing record that needs to be updated will only have isDirty set to true.
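In other words, it comes down to how the record got into the store. A sketch using the same (older) Ember Data API as the question:

// a record created locally is new: save() issues a create (POST)
var localUser = store.createRecord('user_detail', { id: '49' });
localUser.get('isNew'); // true

// a record loaded from the backend is not new: save() issues an update (PUT)
store.find('user_detail', '49').then(function (loadedUser) {
  loadedUser.get('isNew'); // false
});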
I'd recommend pushing your user one level deeper and not storing it on the Ember namespace; that way you can set it from anywhere later, yet still inject it during application setup:
var users = Em.Object.create({
  current: currentUser
});

container.register('users:current', users, { instantiate: false });
// and handle updates via this.users.set('current', newUser)
application.inject('controller', 'users', 'users:current');
application.inject('route', 'users', 'users:current');
Then from any controller you can access/watch it as users.current, yet you can also set it using this.users.set('current', newUser), which would affect anyone watching that property on any controller or route.
Example: http://emberjs.jsbin.com/OxIDiVU/1145/edit
Additionally, a lot of the things you are doing are async calls, so you should use the promise pattern when reading these properties etc.
What is the right way to update the model in the view, say after a successful API POST? I have a textarea, something like in Twitter, where a user can enter text and post it. The entered text must show up soon after it is posted successfully.
How do I achieve this? Should I make another call to get the posts separately, or is there another way to do this?
My code looks like:
feedsResolve.getFeeds().then(function(feeds){
  $scope.feeds = feeds;
});
where feedsResolve is a service returning a promise
$scope.postFeed = function(){
  var postObj = Restangular.all('posts');
  postObj.post( $scope.feed.text ).then(function(res){
    // res contains only the new feed id
  });
};
How do I update the $scope.feeds in the view?
I assume you are posting a new post and that generally posts look like:
{
  id: 42,
  text: 'This is my text'
}
In this case you can do something like:
$scope.postFeed = function(){
  var postObj = Restangular.all('posts');
  var feedText = $scope.feed.text;
  postObj.post( feedText ).then(function(res){
    $scope.feeds.push({ id: res.id, text: feedText });
  });
};
A better practice when writing a RESTful service, though, is to have your POST return the actual JSON object for the new feed that was added (not just its id). If that were the case, you could just add it to your feeds array.
If your JSON object is complex, this practice is the most common and easiest way to handle this without needing extra requests to the server. Since you are already on the server, and you've likely already created the object (in order to insert it into the database), all you have to do is serialize it back out to the HTTP response. This adds little to no overhead and gives the client all the information it needs to update effortlessly.
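With that change on the server, the success handler becomes trivial. A sketch, assuming the response body is the complete feed object:

$scope.postFeed = function(){
  var postObj = Restangular.all('posts');
  postObj.post( $scope.feed.text ).then(function(newFeed){
    // the API now returns the full feed object, so push it directly
    $scope.feeds.push(newFeed);
  });
};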