How to add one item from observable to an array of items in a different observable? - typescript2.0

I am struggling with observables and pipes: I want to add a single item emitted by one observable to another observable that contains a list of items of the same type.
I have a type X, and for that type I have an observable array:
readonly arrayOfx$: Observable<X[]>;
I also have an observable of a single item of type X:
private readonly _x$: Observable<UpdateOfX>;
interface UpdateOfX {
  x: X,
  updateState: "Add" | "Modified" | "Removed"
}
All this code is in a service class that should only expose the array of X. The data in the array is shown in my HTML with the async pipe, and that part of the functionality works. The host and the client are connected via SignalR, and on connection an array of items of type X is retrieved. But while the application runs, new items of type X can be created in the backend, and existing items can be changed or removed. When this occurs, only that single item and its modification state are sent over the SignalR connection.
On the frontend, this item must be merged into the already retrieved array of items of type X. The service uses the pipe technique, and my question is: how do I add the single item that arrives at a later moment to the list of items I retrieved earlier?
constructor() {
  this.arrayOfx$ = this._someSignalRHelperService.retrieveMultipleItems$.pipe(
    tap((xArray: X[]) => console.log(xArray)),
    // can I somehow get a later created x from the server here...
  );

  this._x$ = this._someSignalRHelperService.retrieveOneItem$.pipe(
    tap((updateOfX: UpdateOfX) => console.log(updateOfX)),
    map((updateOfX: UpdateOfX) => {
      // process the updateState
      // ... or must I do something here to get x into x[]?
    })
  );
}
Since SignalR is used, the backend decides when the client receives a newly created item of type X.

You can use combineLatest() and do whatever manipulation you want as soon as you receive the two emissions:
constructor() {
  this.combinedOfX$ = combineLatest([
    this._someSignalRHelperService.retrieveMultipleItems$,
    this._someSignalRHelperService.retrieveOneItem$
  ]).pipe(
    map(([multipleOfX, singleOfX]) => {
      // do your adding or mapping and whatever here.
    })
  );
}
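To make that concrete, here is a minimal sketch of what that map callback could look like, assuming each X carries some identifying field (a hypothetical id is used below for matching) and the UpdateOfX shape from the question:

map(([multipleOfX, singleOfX]: [X[], UpdateOfX]) => {
  switch (singleOfX.updateState) {
    case "Add":
      // append the new item to the current array
      return [...multipleOfX, singleOfX.x];
    case "Modified":
      // replace the matching item (id is an assumed identity field on X)
      return multipleOfX.map(x => x.id === singleOfX.x.id ? singleOfX.x : x);
    case "Removed":
      // drop the matching item
      return multipleOfX.filter(x => x.id !== singleOfX.x.id);
  }
})

Note that combineLatest only applies the latest single update to the latest base array; if you need to fold a whole series of updates into the list over time, an accumulating operator such as scan is the usual choice.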
Not sure if that is by design, but having the frontend (client) manage the data is not ideal. Your single source of truth now lives on the client, and different machines with different processing speeds can produce subtle inconsistencies in what is displayed. The code also gets messy: every time a singleOfX update arrives you have to scan the entire arrayOfX, check whether the item already exists, and then edit/delete it or append it to the list. And if the user accidentally refreshes the browser, all of that processing is lost.
Since you are already using SignalR, it would be more advisable to let the server handle all the data and act as the single source of truth. Then the client only needs to subscribe to one hub and listen for changes to arrayOfX, without caring about the single-item updates.

Related

KeystoneJS `filter` vs `Item` list access control

I am trying to understand more in depth the difference between filter and item access control.
Basically I understand that Item access control is, sort of, a higher-order check and will run before the GraphQL filter.
My question is, if I am doing a filter on a specific field while updating, for instance a groupID or something like this, do I need to do the same check in Item Access Control?
This will cause an extra database query that will be part of the filter.
Any thoughts on that?
The TL;DR answer...
if I am doing a filter on a specific field [..] do I need to do the same check in Item Access Control?
No, you only need to apply the restriction in one place or the other.
Generally speaking, if you can describe the restriction using filter access control (ie. as a graphQL-style filter, with the args provided) then that's the best place to do it. But, if your access control needs to behave differently based on values in the current item or the specific changes being made, item access control may be required.
Background
Access control in Keystone can be a little hard to get your head around but it's actually very powerful and the design has good reasons behind it. Let me attempt to clarify:
Filter access control is applied by adding conditions to the queries run against the database.
Imagine a content system with lists for users and posts. Users can author a post but some posts are also editable by everyone. The Post list config might have something like this:
// ..
access: {
  filter: {
    update: () => ({ isEditable: { equals: true } }),
  }
},
// ..
What that's effectively doing is adding a condition to all update queries run for this list. So if you update a post like this:
mutation {
  updatePost(where: { id: "123" }, data: { title: "Best Pizza" }) {
    id name
  }
}
The SQL that runs might look like this:
update "Post"
set title = 'Best Pizza'
where id = 234 and "isEditable" = true;
Note the isEditable condition that's automatically added by the update filter. This is pretty powerful in some ways but also has its limits – filter access control functions can only return GraphQL-style filters which prevents them from operating on things like virtual fields, which can't be filtered on (as they don't exist in the database). They also can't apply different filters depending on the item's current values or the specific updates being performed.
Filter access control functions can access the current session, so can do things like this:
filter: {
  // If the current user is an admin don't apply the usual filter for editability
  update: (session) => {
    return session.isAdmin ? {} : { isEditable: { equals: true } };
  },
}
But you couldn't do something like this, referencing the current item data:
filter: {
  // ⚠️ this is broken; filter access control functions don't receive the current item ⚠️
  // The current user can update any post they authored, regardless of the isEditable flag
  update: (session, item) => {
    return item.author === session.itemId ? {} : { isEditable: { equals: true } };
  },
}
The benefit of filter access control is that it doesn't force Keystone to read an item before an operation occurs; the filter is effectively added to the operation itself. This can make them more efficient for the DB but does limit them somewhat. Note that things like hooks may also cause an item to be read before an operation is performed, so this performance difference isn't always evident.
Item access control is applied in the application layer, by evaluating the JS function supplied against the existing item and/or the new data supplied.
This makes them a lot more powerful in some respects. You can, for example, implement the previous use case, where authors are allowed to update their own posts, like this:
item: {
  // The current user can update any post they authored, regardless of the isEditable flag
  update: (session, item) => {
    return item.author === session.itemId || item.isEditable;
  },
}
Or add further restrictions based on the specific updates being made, by referencing the inputData argument.
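For instance, here is a hedged sketch that keeps the same argument shape as the snippets above (the exact signature may differ between Keystone versions), with inputData assumed to carry the incoming update, so only the author may change the title:

item: {
  // Only the author may change the title; other fields follow the editable flag
  update: (session, item, inputData) => {
    const changesTitle = typeof inputData.title !== 'undefined';
    if (changesTitle) return item.author === session.itemId;
    return item.isEditable || item.author === session.itemId;
  },
}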
So item access control is arguably more powerful, but it can have significant performance implications – not so much for mutations, which are likely to be performed in small quantities, but definitely for read operations. In fact, Keystone won't let you define item access control for read operations. If you stop and think about this, you might see why – doing so would require reading all items in the list out of the DB and running the access control function against each one, every time the list was read. As such, the items accessible can only be restricted using filter access control.
Tip: If you think you need item access control for reads, consider putting the relevant business logic in a resolveInput hook that flattens and stores the relevant values as regular fields, then referencing those fields using filter access control.
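A rough sketch of that pattern, assuming a Keystone 6-style resolveInput hook and a hypothetical derived isPubliclyReadable field computed from a status field:

// ..
hooks: {
  // Keep a plain field in sync so it can be used by filter access control
  resolveInput: ({ resolvedData, item }) => {
    const status = resolvedData.status ?? item?.status;
    return { ...resolvedData, isPubliclyReadable: status === 'published' };
  },
},
access: {
  filter: {
    // Reads can now be restricted with a simple filter on the derived field
    query: () => ({ isPubliclyReadable: { equals: true } }),
  },
},
// ..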
Hope that helps

Redux store design – two arrays or one

I guess this could be applied to any Redux-backed system, but imagine we are building simple React Native app that supports two actions:
fetching a list of messages from a remote API
the ability to mark those messages as having been read
At the moment I have a messagesReducer that defines its state as...
const INITIAL_STATE = {
  messages: [],
  read: []
};
The messages array stores the objects from the remote API, for example...
messages: [
  { messageId: 1234, title: 'Hello', body: 'Example' },
  { messageId: 5678, title: 'Goodbye', body: 'Example' }
];
The read array stores the numerical IDs of the messages that have been read plus some other meta data, for example...
read: [
  { messageId: 1234, meta: 'Something' },
  { messageId: 5678, meta: 'Time etc' }
];
In the React component that displays a message in a list, I run this test to see if the message should be shown as being read...
const isRead = this.props.read.filter(m => m.messageId == this.props.currentMessage.messageId).length > 0;
This is working great at the moment. Obviously I could have put a boolean isRead property on the message object but the main advantage of the above arrangement is that the entire contents of the messages array can be overwritten by what comes from the remote API.
My concern is about how well this will scale, and how expensive the array.filter method is when the array gets large. Also keep in mind that the app displays a list of messages that could be hundreds of messages, so the filtering is happening for each message in the list. It works on my modern iPhone, but it might not work so well on less powerful phones.
I'm also thinking I might be missing some well established best practice pattern for this sort of thing.
Let's call the current approach Option 1. I can think of two other approaches...
Option 2 is to put isRead and readMeta properties on the message object. This would make rendering the message list super quick. However when we get the list of messages from the remote API, instead of just overwriting the current array we would need to step through the JSON returned by the API and carefully update and delete the messages in the local store.
Option 3 is keep the current read array but also to add isRead and readMeta properties on the message object. When we get the list of messages from the remote API we can overwrite the entire messages array, and then loop through the read array and copy the data into the corresponding message objects. This would also need to happen whenever the user reads a message – data would be duplicated in two places. This makes me feel uncomfortable, but maybe it's ok.
I've struggled to find many other examples of this type of store, but it could be that I'm just Googling the wrong thing. I'm quite new to Redux and some of my terminology is probably incorrect.
I'd really value any thoughts on this.
Using reselect you can memoize the result of the array.filter call, so the array is not re-filtered when neither the messages nor the read arrays have changed, which will allow you to use Option 1.
In this way, you can easily store the raw data in your reducers, and also access the computed data efficiently for display. A benefit from this is that you are decoupling the requirements for data structure and storage from the requirements for the way the data is displayed.
You can learn more about efficiently computing derived data in the redux docs
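A minimal sketch of what such a memoized selector might look like with reselect, assuming the reducer state shape from the question (messages and read arrays); the selector and prop names here are hypothetical:

import { createSelector } from 'reselect';

const selectMessages = state => state.messages;
const selectRead = state => state.read;

// Recomputed only when messages or read actually change
const selectMessagesWithReadFlag = createSelector(
  [selectMessages, selectRead],
  (messages, read) => {
    const readIds = new Set(read.map(r => r.messageId));
    return messages.map(m => ({ ...m, isRead: readIds.has(m.messageId) }));
  }
);

// Usage, e.g. in mapStateToProps:
// const mapStateToProps = state => ({ messages: selectMessagesWithReadFlag(state) });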
How about using a lookup table object, where the IDs are the keys?
This way you don't need to filter or loop to see whether a certain message ID is there; just check if the object holds a key with the corresponding ID:
So in your case it will be:
const isRead = !!this.props.read[this.props.currentMessage.messageId];
Small running example:
const read = {
  1234: {
    meta: 'Something'
  },
  5678: {
    meta: 'Time etc'
  }
};

const someMessage = { id: 5678 };
const someOtherMessage = { id: 999 };

const isRead = id => !!read[id];

console.log('someMessage is ', isRead(someMessage.id));
console.log('someOtherMessage is ', isRead(someOtherMessage.id));
Edit
I recommend reading about Normalizing State Shape in the Redux documentation.
There are great examples of designing and organizing the data and state.
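As a rough illustration of that normalized shape applied to this case (a sketch only, with hypothetical action names), the read slice could be stored keyed by messageId so lookups stay O(1):

const INITIAL_STATE = {
  messages: [],
  // read entries keyed by messageId instead of stored in an array
  readById: {}
};

const messagesReducer = (state = INITIAL_STATE, action) => {
  switch (action.type) {
    case 'MESSAGES_RECEIVED':
      // safe to overwrite wholesale; read state lives separately
      return { ...state, messages: action.messages };
    case 'MESSAGE_READ':
      return {
        ...state,
        readById: { ...state.readById, [action.messageId]: { meta: action.meta } }
      };
    default:
      return state;
  }
};

// Component check:
// const isRead = !!this.props.readById[this.props.currentMessage.messageId];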

Actual property name on REQUIRED_CHILDREN connection

In relay, when using REQUIRED_CHILDREN like so:
return [{
  type: 'REQUIRED_CHILDREN',
  children: [
    Relay.QL`
      fragment on Payload {
        myConnection(first: 50) {
          edges {
            node {
              ${fragment}
            }
          }
        }
      }
    `
  ]
}]
and reading off the response through the onSuccess callback:
Relay.Store.commitUpdate(
  new AboveMutation({ }), { onFailure, onSuccess }
)
the response turns the property myConnection into a hashed name (e.g. __myConnection652K), which presumably is used to prevent connection/list conflicts inside the Relay store.
However, since this is a REQUIRED_CHILDREN and I'm manually reading myConnection, it just prevents access to it.
Is there a way to get the actual property names when using the onSuccess callback?
Just as Ahmad wrote: using REQUIRED_CHILDREN means you're not going to store the results. The consequence is that the data supplied to the callback is in raw shape (nearly as it came from the server) and data masking does not apply.
Despite the data not being stored, there seems to be no reason (though a core team member's opinion would certainly be more appropriate here) not to convert it to the client-style shape. This is the newest type of mutation, so there is a chance such a feature was simply omitted. It is normal for queries to be transformed to the server-style shape, and the opposite transformation could take place as well. Until now it has not been needed, because the transformation happened implicitly while the data was saved to the store and component props were updated. Currently most of the Relay team is focused on rewriting much of the implementation, so I would not expect this to improve very soon.
So again, the solution proposed by Ahmad to convert the type to GraphQLList seems the easiest and most reliable. If for any reason you want to stick with a connection, one option is to take the GraphQL fragment supplied as children (actually its parsed form, stored in the __cachedFragment__ attribute of that original fragment) and traverse it to obtain the serializationKey for the desired field (e.g. __myConnection652K).

Adobe Dynamic Tag Management - Dynamic direct-call rules

We are using the data layer specification from W3 (http://www.w3.org/2013/12/cedd...), which defines event data as an array of events. That is not an issue for data elements accessing the last item in the array. The problem arises when multiple events happen in quick succession. Now when DTM goes to collect the event data, the last event object in the array might not be the right one, and if two events are sent quickly, the first event's data object is skipped and the last event's data object is used twice.
Strategy 1:
Creating many direct-call rules, one for each possible position in the events array, with the data element for each rule accessing that item in the array:
_satellite.track('event_0')
_satellite.track('event_1')...
Not exactly fun to set up, it still runs the risk of not having enough rules defined, and it is not clean.
Strategy 2:
There's also a possibility of using data elements in the direct-call condition:
event_%event_number%
Not sure how using data-elements in the condition string would work.
Strategy 3:
Use a FIFO queue to hold the keys in the order the events occurred, and an object that maps each key to the position of its event in the data layer:
var order_of_events = ['asdf', '1234'];
var events_number = {
  'asdf': 1,
  '1234': 2
};
Then send a direct-call event rule:
_satellite.track('event');
Then in the tag, use data elements to query for the correct event data:
// Data Element code
// %next_eventName%
var event_key = getKey();               // returns the first key in the queue, e.g. 'asdf'
var event_number = getValue(event_key); // returns 1
getEventName(event_number);             // returns "Event Name"
How to notify the queue that the tag is done with the event details and to move the key out of the queue?
What strategy could be used to assure the correct event data object is used by the data element, any of the above, or has this problem already been tackled?
One possible solution:
Use the event object's eventInfo.eventAction key value. When you add an event to the array, you can call a generic _satellite.track('event_added') direct-call rule. In that direct-call rule, you can pull the events out of digitalData.events, loop over them and call a corresponding method for each one.
var eventActions = {
  method1: function(event) {
    // _satellite.track('method1');
    // or s.tl(event.eventInfo.eventName, true, 'hello world');
    // handle each event however you may like for analytics, testing, page manipulation etc.
  }
};

digitalData.events.forEach(function(item, index, arr) {
  eventActions[item.eventInfo.eventAction](item);
});
This is what I was planning on doing, but it may not suit your needs.
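One gap this leaves is the asker's question about moving handled events out of the way: the loop above walks all of digitalData.events each time the rule fires, so earlier events would be processed again. One hedged option (lastProcessedIndex below is a hypothetical name) is to remember how far you have already processed:

// Hypothetical cursor so previously handled events are not processed twice
var lastProcessedIndex = window.lastProcessedIndex || 0;

digitalData.events.slice(lastProcessedIndex).forEach(function(item) {
  eventActions[item.eventInfo.eventAction](item);
});

window.lastProcessedIndex = digitalData.events.length;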

WCF Data Service - update a record instead of inserting it

I'm developing a WCF Data Service with self-tracking entities and I want to prevent clients from inserting duplicate content. Whenever they POST data without providing a value for the data key, I have to execute some logic to determine whether that data is already present in my database. I've written a change interceptor like this:
[ChangeInterceptor("MyEntity")]
public void OnChangeEntity(MyEntity item, UpdateOperations operations)
{
    if (operations == UpdateOperations.Add)
    {
        // Here I search the database to see if a matching record exists.
        // If a record is found, I'd like to use its ID and basically change an insertion
        // into an update.
        item.EntityID = existingEntityID;
        item.MarkAsModified();
    }
}
However, this is not working. The existingEntityID is ignored and, as a result, the record is always inserted, never updated. Is this even possible to do? Thanks in advance.
Hooray! I managed to do it.
item.EntityID = existingEntityID;
this.CurrentDataSource.ObjectStateManager.ChangeObjectState(item, EntityState.Modified);
I had to change the object state elsewhere, i.e. by calling .ChangeObjectState on the ObjectStateManager, which is a property of the underlying EntityContext. I was misled by the .MarkAsModified() method which, at this point, I'm not sure what it does.