Self contained component data model vs master root data model - vue.js

I'm looking for feedback on how I'm implementing Vue components.
I have a Vue instance that contains a list of orders and a reference to the current order. Let’s call this root instance "orders".
When the current order is set (by clicking one of the orders in the list), I create a new component inside "orders" called "current-order". The parent passes "current-order" a property called "order_id", which the component uses to fetch the data for the order and present an editable form.
Within "current-order", besides the meta-data associated with the order (customer, etc), I have a third component to contain a group of items, let’s call this final instance "item-group".
Here is the general layout of how these instances would look:
orders
  current-order
    item-group
    item-group
    item-group
"orders" only saves a list of the orders; it does not save any order data.
"current-order" saves the meta-data associated with the order, as well as the item data.
When I discussed this model with a co-worker, he told me that this is not the best-practice way to implement it. He felt the proper way would be to save all of the data for all of the components on the root instance "orders", as opposed to the way I implemented it, with data saved at each level.
The model he explained seems less maintainable to me. "current-order" may be used on other pages of our application, so if I maintained its data in the root instance, I would have to do that in all of the root instances that I attach it to.
With the way I have implemented it, all you have to pass the component is an order_id, and it will fill itself with data.
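For illustration, a minimal sketch of that self-contained approach might look like the following (fetchOrder is a hypothetical helper that returns a Promise resolving to the order's data):

    Vue.component('current-order', {
      props: ['order_id'],
      data() {
        return {
          customer: null, // order meta-data lives on the component itself
          items: []       // item data handed down to the item-group children
        };
      },
      created() {
        // The component fills itself with data based only on the prop it receives.
        fetchOrder(this.order_id).then(order => {
          this.customer = order.customer;
          this.items = order.items;
        });
      },
      template: '<!-- editable order form and <item-group> children -->'
    });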
He went on to explain that data saved on the component, the way I am doing it, can be reset by re-renders of the instance, which I didn't quite understand.
Both the way I implemented, and the way he described would work, but I'm trying to find out what the best-practice approach would be for maintainability.
Note: This is not a large SPA, and I don’t think Vuex would suit what we are trying to achieve right now.
Any feedback would be appreciated.

It all depends on what your child components are responsible for. If a child is extending the functionality of its parent component, then the state should always be maintained in the parent component. For example, in a CRUD-based situation, instead of creating separate components for create and update, you can write just one and maintain its state (updated/created) in the parent component.
If your current-order does not update anything related to the order, there is no need to maintain its state in your order, i.e. if the meta-data can be treated as an entity separate from your order, there is no need to maintain it in the parent. But if the order and its meta-data are one single entity, you should maintain that state in your parent.
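As a rough sketch of the root-owned-state model the co-worker describes (names are illustrative, not prescriptive), the root fetches and holds the current order's data, and the child only receives it as a prop and reports edits back up:

    new Vue({
      el: '#orders',
      data: {
        orders: [],        // list of orders
        currentOrder: null // full data for the selected order lives on the root
      },
      methods: {
        selectOrder(id) {
          // hypothetical fetch helper, as in the sketch above
          fetchOrder(id).then(order => { this.currentOrder = order; });
        }
      }
    });

    Vue.component('current-order', {
      props: ['order'], // receives the data, owns none of it
      methods: {
        update(changes) {
          this.$emit('order-changed', changes); // parent applies the change
        }
      },
      template: '<!-- editable form bound to the "order" prop -->'
    });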

What is the "key" which changes on every route change with connected-react-router?

When an action for navigating to a route is triggered, it produces a new state in which router.location.pathname changes according to the browser's history.
Another property changes as well: router.location.key, to a new random string.
Even when the pathname itself doesn't change (clicking on a link to a page from the page itself), the key still updates.
What's the purpose of the key property? In which situations would I want my own state to have a randomly generated key that updates on every action dispatch? Why is it not a number that simply increments?
connected-react-router simply stores the location object from react-router, which in turn creates the location object using the history package. The history readme describes the key property:
Locations may also have the following properties:
location.key - A unique string representing this location (supported in createBrowserHistory and createMemoryHistory)
It is used internally (e.g. in https://github.com/ReactTraining/history/blob/master/modules/createBrowserHistory.js to find locations in the current history stack) and should be treated as an implementation detail of react-router. I suspect a random key, rather than an incrementing sequence number, was simply the easiest way to implement unique ids (you don't have to store the current sequence number).
This causes an unnecessary re-render of the current route when it is visited again, because the props change. One way to fix that is to use React.memo and compare location.pathname, which stays the same. But then you have to be careful if your component receives other props, and include them in the comparison as well.
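For example, a sketch of that workaround (component and prop names are made up):

    import React from 'react';

    const OrderPage = ({ location, orders }) => {
      // render using location.pathname and the other props
      return <div>{location.pathname}</div>;
    };

    // Re-render only when the props we actually care about change.
    export default React.memo(OrderPage, (prevProps, nextProps) =>
      prevProps.location.pathname === nextProps.location.pathname &&
      prevProps.orders === nextProps.orders // include any other props you receive
    );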
From the React Router Docs
Each location gets a unique key. This is useful for advanced cases like location-based scroll management, client side data caching, and more. Because each new location gets a unique key, you can build abstractions that store information in a plain object, new Map(), or even locationStorage.
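For instance, a sketch of the scroll-management case the docs mention, using location.key as the cache key for per-location state:

    // Save and restore scroll positions keyed by location.key.
    const scrollPositions = new Map();

    function saveScroll(location) {
      scrollPositions.set(location.key, window.scrollY);
    }

    function restoreScroll(location) {
      window.scrollTo(0, scrollPositions.get(location.key) || 0);
    }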

FactoryImpl to set atts via props for bound inputs

First, thanks for any advice. I am new to all of this and apologize for any obvious blunders.
Second, the question:
In an interface for entering clients who often possess a number of roles, it seemed efficient to create a set of inputs that possessed both visual characteristics and associated data binding based simply on the input's name.
For example, inquirerfirstname would be any caller or emailer who contacted our company.
The name would dictate a label, placeholder, and the location in firebase where the data would be stored.
The single name could be used--I thought--with a relational table (state machine or series of nested ifs) to define the properties of the input and change its outward appearance and inner bindings through property manipulation.
I created a set of nested ifs and console-logged the property changes in the inputs, but their representation in the host element (a collection of inputs that generated messages to clients as well as messages to sales staff) remained unaffected.
I attempted using the ready callback. I forced the state change with a button.
I was unable to use var name = new MyInput(name). I believe this method would be most effective, but I am unsure how to "stamp" the JavaScript into a heavyweight stamped parent element.
An example of a more complicated and dynamic use of a constructor and a factory implementation that can read database (JSON) objects and generate HTML elements in response would be awesome.
In vanilla JavaScript a forEach would seem to do the trick, but definitions, structure, and binding would not be organic; read: it might be easier just to hand-stamp the inputs in Polymer HTML.
I would be really grateful for any help. I have looked for a week and failed to find one example that takes data binding, physical appearance, attribute swapping, property binding, and object reading into account.
I guess it's a lot, but each piece independently (save the use of the constructor) I think I get.
Thanks again.
Jason
P.S.: I am aware that the stamping of the element seems to preclude dynamic property, attribute, and binding assignments. I was hoping a computed attribute mixed with a factoryImpl would be an option (with a nice example).

In Symfony2, should I use an Entity or a custom Repository

I am creating a new web app and would like some help on design plans.
I have "store" objects, and each one has a number of "message" objects. I want to show a store page that shows this store's messages. Using Doctrine, I have mapped OneToMany using http://symfony.com/doc/current/book/doctrine.html
However, I want to show messages in reverse chronological order, so I added:
* @ORM\OrderBy({"whenCreated" = "DESC"})
I am still fetching the "store" object and then calling
$store->getMessages();
Now I want to show only messages that have been "verified". At this point I am unsure how to do this using the @ORM annotations, so I was thinking I need a custom repository layer.
My question is twofold:
First, can I do this using the Entity @ORM framework?
And second, which is the correct way to wrap this database query?
I know I eventually want the SQL SELECT * FROM message WHERE verified=1 AND store_id=? ORDER BY myTime DESC, but how do I do this the "Symfony2 way"?
For part 1 of your question... technically I think you could do this, but I don't think you'd be able to do it in an efficient way, or a way that doesn't go against good practices (i.e. injecting the entity manager into your entity).
Your question is an interesting one, because at first glance, I would also think of using $store->getMessages(). But because of your custom criteria, I think you're better off using a custom repository class for Messages. You might then have methods like
$messageRepo->getForStoreOrderedBy($storeId, $orderBy)
and
$messageRepo->getForStoreWhereVerified($storeId).
Now, you could do this from the Store entity with methods like $store->getMessagesWhereVerified(), but I think that would be polluting the Store entity, especially if you need more and more of these custom methods. By keeping them in a Message repository, you're separating your concerns in a cleaner fashion. Also, with the Message repository, you might save yourself a query by not needing to fetch your Store object first, since you would only need to query the Message table and use its store_id in your WHERE clause.
Hope this helps.

Event Sourcing using NHibernate

I'm moving from a pure DDD paradigm to CQRS. My current concern is with Event Sourcing and, more specifically, with organizing the Event Store. I've read tons of blog posts but still can't understand some things, so correct me if I'm wrong.
Each event basically consists of:
- Event date/time
- Event type (we can figure out the type of the AggregateRoot from this as well)
- AggregateRoot id (Guid)
- AggregateRoot version (to maintain the order of updates)
- Event data (some serialized class with the data necessary to make the update)
Now, if my Event data consists of simple value types (ints, strings, enums, etc.) then it's easy. But what if I have to pass another AggregateRoot? I can't serialize the whole AR as part of the Event data (think of all the data and lazy loading); basically I only need to store the Id of that AR. But then, when I need to apply that event, I'd have to get that AR from the database first, and it doesn't feel right to do so from my Domain Model (calling repositories and working with AR Ids).
What's the best approach for this?
p.s. For a concrete example, let's assume there's a model which consists of Task and User entities (both ARs). A Task holds a reference to the responsible User, but the responsible User can be changed.
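For that Task/User example, a sketch of such an event might store nothing but ids and raw data (the shape below is illustrative, not a prescribed schema):

    interface ResponsibleUserChanged {
      eventType: 'ResponsibleUserChanged';
      occurredAt: string;       // event date/time
      aggregateId: string;      // the Task's id (Guid)
      aggregateVersion: number; // to maintain the order of updates
      data: {
        newResponsibleUserId: string; // only the User's id, never the full AR
      };
    }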
Update: I think I've found the source of my confusion. I believe event sourcing should be used only for building the read model, and in that case passing Ids and raw data is OK. But the same events are used on the aggregates themselves, and that is what I cannot understand.
In DDD an aggregate is a consistency/invariant boundary, so one aggregate may never depend on another to maintain its invariants. When we adopt this design restriction we find very few situations where it is necessary to store a full reference to the other aggregate; usually we store its id and (if necessary) version and a copy of the relevant attributes.
For example, using the usual Order/LineItem and Product problem, we would copy the Product's id and price into the LineItem instead of holding a full reference. This prevents changes to the Product's price from affecting the Order/LineItem aggregate's invariants. If it is necessary to update the LineItem price after the Product price changes, we need to keep track of the PriceChanged event from the Products used and send a compensating command to the Order/LineItem. Usually this coordination/synchronization is handled by a saga.
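A sketch of that "copy what you need" idea for the Order/LineItem example (again, names and types are illustrative):

    interface LineItemAdded {
      aggregateId: string;      // the Order's id
      aggregateVersion: number;
      data: {
        productId: string;    // reference the Product by id only
        productPrice: number; // copied at the moment the event was raised
        quantity: number;
      };
    }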
In Event Sourcing, the state of the aggregate is defined by events, and nothing more. All the domain model machinery (a la DDD) is there just to decide which domain events should be raised. An event should know nothing about your domain; it should be a simple DTO. In fact, it is perfectly OK to have Event Sourcing without DDD.
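In other words, rebuilding an aggregate is just replaying its events. A minimal sketch, continuing the Task/User example with made-up names:

    class Task {
      responsibleUserId: string | null = null;

      apply(event: { eventType: string; data: any }): void {
        // State changes only in response to events.
        if (event.eventType === 'ResponsibleUserChanged') {
          this.responsibleUserId = event.data.newResponsibleUserId;
        }
      }

      static fromHistory(events: Array<{ eventType: string; data: any }>): Task {
        const task = new Task();
        events.forEach(e => task.apply(e));
        return task;
      }
    }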
As I understand Event Sourcing, it is supposed to help people get rid of relational data models and ORMs like NHibernate or Entity Framework, since each of them is a science of its own; programmers can then focus on business logic. I have seen some relational schemas used for event stores here, and they were simply Id, Version, Timestamp plus an NClob or NVarchar(max) column storing the event payload schema-less.

Fetched Properties, cross store relationships

I've got one store that is synchronized externally and another store that is unique to the application instance, so in order to cleanly differentiate the two I want to have some join entities between them and then resolve from those join entities to the real entities using Fetched Properties, as "discussed" in the Core Data Programming Guide:
developer.apple.com/documentation/Cocoa/Conceptual/CoreData/Articles/cdRelationships.html#//apple_ref/doc/uid/TP40001857-SW5
I think I just don't really "get" how Fetched Properties are supposed to be used - and I've spent a fair number of hours looking for examples with no real luck.
The way I think of it is,
I have the following Entities each in a different store
Foo with attribute relatedBarName in store A
Bar with attribute barName in store B
I need to create a fetched property on Foo named findRelatedBar that relates Foo to Bar loosely through barName = relatedBarName.
However, since Foo and Bar are in different stores, I don't understand how to declare a relationship of any sort from Foo to Bar, whether through the fetched property or not.
The predicate builder in Xcode seems to want a destination entity. If they are in different schemas, how can you declare the destination? And if you don't declare a destination, how do you indicate at runtime that findRelatedBar on Foo describes Bar?
Otherwise, do they need to be in the same schema but just stored in different stores?
In crafting this question, I thought of these questions and answered them myself with a more focused examination of the documentation. I assume that if I found it confusing, others might as well, so I'll inline them in this post to make it easier to find related answers about fetched properties / Core Data stores.
Q) If a store coordinator has more than one store associated with it with the same schema, how do insertions know which store to insert into?
A) You use the assignObject:toPersistentStore: method on the managed object context.
Q) What does FETCH_SOURCE refer to in specific?
A) It's simply the managed object which has the fetched property associated with it. Sort of like "self"
Q) What does FETCHED_PROPERTY refer to in specific?
A) It is a reference to the fetched property description instance you are using to query with - you can use this to insert per query variable substitution. By setting a property (as in the Core Data Programming example) on the userInfo of the property description instance you're using, you can inject that value into the expression.
Thanks!!!!
The answer is:
Yes, you need to use a cross-store fetched property with shared schemas. If you do this, you need to make sure you attribute the inserts with the assignObject:toPersistentStore: method described in the question. However, due to the limitations of the SQLite persistent store, natural things like IN $FETCH_SOURCE.attribute do not work.
Q) If a store coordinator has more than one store associated with it with the same schema, how do insertions know which store to insert into?
This is what configurations are for. You create a configuration for each store and then assign entities to that configuration. You then create the store with the proper configuration. When you save the context, each entity will automatically go to the correct store.