Returning objects on CQRS commands with MediatR - asp.net-core

I have been reading about MediatR and CQRS lately, and I have seen many people say that commands shouldn't return domain objects. They can return values, but those are limited to error values, failure/success information, and the Id of a newly created entity.
My question is how to return this new object to the client if the command can only return the Id of the new entity.
1) Should I query the database again with this new Id? If so, isn't it wasteful to make a new trip to the database for an object that was in memory a few seconds ago?
2) What's the correct way of returning the entities created by the commands?

I think the more important question is why you shouldn't return domain objects from commands. If that reason seems valid to you, look into alternatives such as executing a query right after the command to fetch the domain object.
If, however, returning the domain object from the command fits your needs and does not impose any direct problems, then why not just do it and keep things simple and straightforward?
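For illustration, a minimal sketch of the query-after-command approach (the names CreateOrderCommand, GetOrderQuery, OrderDto, Order, AppDbContext, _mediator and Get are all hypothetical, not from the question):

using MediatR;

public record CreateOrderCommand(string Product) : IRequest<Guid>;
public record GetOrderQuery(Guid Id) : IRequest<OrderDto>;

public class CreateOrderHandler : IRequestHandler<CreateOrderCommand, Guid>
{
    private readonly AppDbContext _db;   // hypothetical EF Core context
    public CreateOrderHandler(AppDbContext db) => _db = db;

    public async Task<Guid> Handle(CreateOrderCommand command, CancellationToken ct)
    {
        var order = new Order(command.Product);
        _db.Orders.Add(order);
        await _db.SaveChangesAsync(ct);
        return order.Id;                 // the command returns only the new Id
    }
}

// In the controller: send the command, then a separate read-side query.
var id = await _mediator.Send(new CreateOrderCommand("book"));
var dto = await _mediator.Send(new GetOrderQuery(id));
return CreatedAtAction(nameof(Get), new { id }, dto);

Whether that extra round trip matters is usually a performance question; if the read model lives in the same database, the follow-up query by Id is typically cheap.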

Related

Retrieving records and manipulating them as Ruby objects

New to Sequel and SQL in general, so bear with me. I'm using Sequel's many_through_many plugin, and I retrieve resources that are indirectly associated with particular tasks through groups, via a groups_tasks join table and a groups_resources join table. Then when I query task.resources on a Task dataset I get Resource objects in Ruby, like so:
>>[#<Resource #values={:id=>2, :group_id=>nil, :display_name=>"some_name"}>, #<Resource #values={:id=>3, :group_id=>nil, :display_name=>"some_other_name"}>]
Now, I want to be able to add a new instance variable, schedule, to these resource objects and do work on it in Ruby. However, every time I query task.resources for each task, Sequel brings the resources into Ruby as different Resource objects each time (which makes sense), despite their being the same records in the database:
>>
"T3"
#<Resource:0x007fd4ca0c6fd8>
#<Resource:0x007fd4ca0c6920>
#<Resource:0x007fd4ca0c60d8>
#<Resource:0x007fd4ca0c57a0>
"T1"
#<Resource:0x007fd4ca0a4c08>
#<Resource:0x007fd4ca097f58>
#<Resource:0x007fd4ca097b48>
"T2"
#<Resource:0x007fd4ca085ba0>
#<Resource:0x007fd4ca0850d8>
I had wanted to just put a setter in class Resource and do resource.schedule = Schedule.new, but since they're all different objects, each resource is going to end up with a ton of different schedules. What's the most straightforward way to manipulate these resource objects client-side, while maintaining the task associations that I query from the server?
If I am understanding your question correctly, you want to retrieve Resource objects and then manipulate an attribute named schedule. I am not very familiar with Sequel, but looking over the docs it seems to work similarly to ActiveRecord.
Set up your instance variable (I imagine using something like attr_accessor :schedule on the Resource class).
Store the records in a variable; that way you will be working with the same instances each time, rather than the new instances Sequel returns.
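A rough sketch of those two steps (assuming Resource is a Sequel::Model and Schedule is your own class):

class Resource < Sequel::Model
  # plain Ruby attribute, never persisted to the database
  attr_accessor :schedule
end

tasks.each do |task|
  resources = task.resources          # query once, keep these instances
  resources.each { |r| r.schedule = Schedule.new }
  # keep working with `resources`; calling task.resources again would build
  # brand-new Resource objects and the schedules would be lost
end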

How to get multiple data from gemfire cacheloader?

We are going to implement GemFire for our project. We are currently syncing the GemFire cache with our DB2 database, and we are facing an issue while putting DB data into the cache.
To put DB data into a region, I have implemented com.gemstone.gemfire.cache.CacheLoader and overridden its load method. As written in the Javadoc, the load method returns only one Object, but for our requirement we have to return multiple VOs from the load method:
public List<CmDvceInvtrGemfireBean> load(LoaderHelper<CmDvceInvtrGemfireBean, CmDvceInvtrGemfireBean> helper)
throws CacheLoaderException
When returning multiple VOs in the form of a List<CmDvceInvtrGemfireBean>, the GemFire region treats the list as a single value.
So, when I invoke
System.out.println("return COUNT" + cmDvceInvtrRecord.query("SELECT COUNT(*) FROM /cmDvceInvtrRecord"));
it returns a count of one, even though I can see a total of 7 records in it.
So, I want to implement a mechanism that puts all 7 values into the region as separate VOs.
Is there any way to do this using a GemFire CacheLoader?
A CacheLoader was meant to load a value only for a single entry in the GemFire Region on a cache miss. As the Javadoc states...
..creates the value for the desired key..
While a key can map to a multi-valued (e.g. an array/Collection) value, the CacheLoader can only populate a single entry.
You will have to resort to other means of populating the cache with multiple "entries" in a single operation.
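For example (a sketch, not part of the original answer), you could bulk-load the rows yourself, say from an initialization routine, using Region.putAll instead of a CacheLoader; the loadAllFromDb2 DAO call and getKey accessor below are assumed:

Map<String, CmDvceInvtrGemfireBean> entries = new HashMap<>();
for (CmDvceInvtrGemfireBean vo : loadAllFromDb2()) {   // your own JDBC/DAO query
    entries.put(vo.getKey(), vo);                      // one region entry per VO
}
cmDvceInvtrRecord.putAll(entries);                     // single bulk operation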
Out of curiosity, why do you need (requirement?) to load multiple entries (from the DB) at once? Are you trying to minimize the number of round trips to the DB?
Also, what logic are you using to decide what VO from the DB will be loaded based on the information (i.e. key) provided in the CacheLoader?
For instance, are you somehow trying to predictably select values from the DB based on the CacheLoader key that would subsequently minimize cache misses on future Region.get(key) calls?
Sorry, I don't have a better answer for you right now, but answers to some of these questions may help me give you some ideas for alternatives.
Cheers,
John

Getting specific Backbone.js models from a collection without getting all models first

I'm new to Backbone.js. I'm intrigued by the idea that you can just supply a URL to a collection and then proceed to create, update, delete, and get models from that collection and it handle all the interaction with the API.
In the small task-management sample applications and numerous demos I've seen on the web, collection.fetch() is used to pull down all models from the server and then do something with them. However, more often than not, in a real application you don't want to pull down hundreds of thousands or even millions of records with a single GET request to the API.
Using the baked-in collection.sync method, how can I specify parameters to GET specific record sets? For example, I may want to GET records with a date of 2/1/2014, or GET records owned by a specific user id.
In this question, collection.find is used to do this, but does that still pull down all records to the client first and then "find" them, or does collection.sync know to specify arguments when doing a GET to the server?
You do use fetch, but you provide options as seen in collection.fetch([options]).
So for example to obtain the one model where id is myIDvar:
collection.fetch({
  data: { id: myIDvar },
  success: function (collection, response, options) {
    // do a little dance;
  }
});
My offhand recollection is that find, findWhere and where operate on the models already downloaded, with the filtering taking place on the client. I believe that with fetch the filtering takes place on the server side.
You can implement some kind of pagination on the server side and update your collection with a limited number of records. In this case all your data will be up to date with the backend.
You can do it by overriding the fetch method with your own implementation, or by specifying params.
For example:
collection.fetch({data: {page: 3}})
You can also use the findWhere method here:
collection.findWhere(attributes)
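Tying it back to the original example, a sketch with an assumed collection, endpoint, and parameter names (use whatever your API expects); the data options end up on the query string, so the filtering happens server side:

var tasks = new TaskCollection();   // hypothetical collection with url: '/api/tasks'
tasks.fetch({
  data: { owner_id: 42, date: '2014-02-01' },   // GET /api/tasks?owner_id=42&date=2014-02-01
  success: function (collection, response, options) {
    console.log('fetched ' + collection.length + ' filtered models');
  }
});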

Are NHibernate ICriteria queries cached or put in the identity map?

Using NHibernate I usually query for single records using the Get() or Load() methods (depending on if I need a proxy or not):
SomeEntity obj = session.Get<SomeEntity>(new PrimaryKeyId(1));
Now, if I execute this statement twice, like in the example below, I only see one query being executed in my unit tests:
SomeEntity obj1 = session.Get<SomeEntity>(new PrimaryKeyId(1));
SomeEntity obj2 = session.Get<SomeEntity>(new PrimaryKeyId(1));
So far, so good. But I noticed some strange behaviour when getting the same object using an ICriteria query. Check out my code below: I get the first object instance, then change the value of a property to 10 (the value in the database is 8), get another instance, and finally check the values of the second object instance.
//get the first object instance.
SomeEntity obj1 = session.CreateCriteria(typeof(SomeEntity))
.Add(Restrictions.Eq("Id", new PrimaryKeyId(1)))
.UniqueResult<SomeEntity>();
//the value in the database and the property is 8 at this point. Let's set it to 10.
obj1.SomeValue = 10;
//get the second object instance.
SomeEntity obj2 = session.CreateCriteria(typeof(SomeEntity))
.Add(Restrictions.Eq("Id", new PrimaryKeyId(1)))
.UniqueResult<SomeEntity>();
//check if the values match.
Assert.AreEqual(8, obj2.SomeValue);
Now, for some reason the assert fails, because obj2's value is 10 even though I asked for the object with a new query. The funny thing is, according to my unit test output window two identical select queries are executed. My question: why are two queries executed if the second object is fetched from the first-level cache?
Am I missing something or is this a bug?
Regards, Ted
edit #1: using NHibernate v2.1.2GA
edit #2: I added some extra explanation about the 2 queries being executed to the last paragraph.
Well, having learned a lot more about NHibernate I can now answer this question myself:
The ICriteria query returns a list of objects fetched by NHibernate. NHibernate does not know which objects are returned until they are matched one by one against the objects in the first-level cache. If an item is already in the first-level cache map, the item read from the database is discarded; if it is not in the identity map, the item is put into the first-level cache.
Another "a-ha!" moment: suppose you run the query for the first time while there are 5 rows in the database; all 5 rows are fetched and put into the first-level cache. Over time 5 more records are added to the table and you rerun the query. Now all 10 records are fetched, but NHibernate sees that 5 of them are already in the cache and only adds the 5 newer records. So basically you fetched 5 records for nothing (just to match their identifiers against the object identifiers in the identity map).
Get/Load use the 1st level cache, which is why you don't see the 2nd call out to the db. Queries do not use the 1st level cache. However, you can set up queries to use the 2nd level cache. See details here
UPDATE What's likely happening is the query is doing a 2 phase load. So it's getting the result set, but also checking the 1st level cache to see if any entities exist there. If they do, then it returns the cached object. See NHibernate.Loader.Loader.GetRow method.
Here is the relevant line:
//If the object is already loaded, return the loaded one
obj = session.GetEntityUsingInterceptor(key);
AFAIK, only 'Get' (and maybe Load) use the 1st level cache.
Using the Criteria API always results in a query hitting the DB, unless the 2nd level cache is enabled.
Edit: more information can be found here
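A minimal sketch of what that looks like with the Criteria API (assuming the second-level cache and query cache are configured for the session factory):

SomeEntity obj = session.CreateCriteria(typeof(SomeEntity))
    .Add(Restrictions.Eq("Id", new PrimaryKeyId(1)))
    .SetCacheable(true)   // store the result in the query cache, backed by the 2nd level cache
    .UniqueResult<SomeEntity>();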
I am not sure why a second query is run, but the expected behavior of NHibernate is that if you ask for the same object by ID from the same session, you get the instance from the first-level cache.
In my understanding, when using a Criteria, you are basically saying to NHibernate: "I want to filter rows based on expressions".
When seen that way, NHibernate has no way of knowing if the query will always return the same filtered row(s) from the database, so it has to query it again.
Also, you can use query caching only with second-level caching, as per the documentation:
So the query cache should always be used in conjunction with the second-level cache.
From here
NHibernate is probably issuing an update between the first and second queries to protect you from a concurrency problem. As Frederik pointed out, you should always use Get to retrieve an object by its key.
I'm curious, what is the PrimaryKeyId wrapper adding?
EDIT:
However it's working (my money's still on an update before the select), this behavior is by design. If you want to discard your in-memory object and load a new instance of it from the session, then Evict the original from the session first. There is also a Refresh method you could try.
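A short sketch of both options against the code from the question (Evict and Refresh are standard ISession methods):

// Option 1: evict the modified instance so the next query materializes a fresh one.
session.Evict(obj1);
SomeEntity obj2 = session.CreateCriteria(typeof(SomeEntity))
    .Add(Restrictions.Eq("Id", new PrimaryKeyId(1)))
    .UniqueResult<SomeEntity>();   // obj2.SomeValue is 8 again

// Option 2: keep the instance but reload its state from the database.
session.Refresh(obj1);             // obj1.SomeValue reverts to 8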

Grouping a Core Data result?

I am prototyping an idea on the iPhone, but I am at the SQLite vs. Core Data crossroads. The main reason is that I can't seem to figure out how to do grouping with Core Data.
Essentially I want to show the most recent item posted, grouped by username. It is really easy to do in a SQL statement, but I have not been able to make it work in Core Data. I figure that since I am starting a new app, I might as well try to make Core Data work, but this part is a major snag.
I added a predicate to my fetch request, but that only gave me the single most recently added record, not the most recently added record per user.
The data model is pretty basic at this point. It uses the following fields:
username (string), post (string), created (datetime)
So, long story short, are these types of queries possible with Core Data? I imagine that if SQLite is under the hood, there has to be some way to do it.
First of all, don't think of Core Data as another way of doing SQL. SQL is not "under the hood" of Core Data. Core Data deals with objects. Entity descriptions are not tables and entity instances are not records. Programming with Core Data has nothing to do with SQL; it merely uses SQLite as one of several possible types of persistent stores. You don't deal with it directly and should never, ever think of Core Data in SQL terms.
That way lies madness.
You need to drink a lot of tequila and punch yourself in the head repeatedly until you forget everything you ever knew about SQL. Otherwise, you will just end up with an object graph that is nothing but a big spreadsheet.
There are several ways to accomplish what you want in Core Data. Usually you would construct a fetch with a compound predicate that returns all posts within a certain date range made by a specific user. Fetched results controllers are especially handy for this.
The most straightforward method would be to set up your object graph like:
UserEntity
--Attribute username
--Relationship post <-->> PostEntity
PostEntity
--Attribute creationDate
--Attribute content
--Relationship user <<--> UserEntity
Then in your UserEntity class have a method like so:
- (NSArray *)mostRecentPost {
    // posts made within the last 24 hours
    NSPredicate *recentPred = [NSPredicate predicateWithFormat:@"creationDate > %@", [NSDate dateWithTimeIntervalSinceNow:-(60*60*24)]];
    NSSet *recentSet = [self.post filteredSetUsingPredicate:recentPred];
    NSSortDescriptor *dateSort = [[[NSSortDescriptor alloc] initWithKey:@"creationDate" ascending:NO] autorelease];
    NSArray *returnArray = [[recentSet allObjects] sortedArrayUsingDescriptors:[NSArray arrayWithObject:dateSort]];
    return returnArray;
}
When you want a list of the most recent post of a particular user sorted by date just call:
NSArray *arrayForDisplay=[aUserEntityClassInstance mostRecentPost];
Edit:
...do I just pass each post block of data (content, creationDate) to the post entity? Do I also pass the username to the post entity? How does the user entity know when to create a new user?
Let me pseudocode it. You have two classes whose instances are userObj and postObj. When a new post comes in, you:
Parse inputPost for a user;
Search existing userObj for that name;
if userObj with name does not exist
create new userObj;
set userObj.userName to name;
else
return the existing userObj that matches the name;
Parse inputPost for creation date and content;
Search post of chosen userObj;
if an existing post does not match the content or creation date
create new postObj
set postObj.creationDate to creation date;
set postObj.content to content;
set postObj.user to userObj; // the reciprocal in userObj is created automatically
else // most likely ignore as duplicate
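A rough Objective-C sketch of the "find or create user" branch above (a sketch only; it assumes the UserEntity/username names from the model above and manual reference counting):

- (UserEntity *)userWithName:(NSString *)name inContext:(NSManagedObjectContext *)moc {
    NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
    [request setEntity:[NSEntityDescription entityForName:@"UserEntity" inManagedObjectContext:moc]];
    [request setPredicate:[NSPredicate predicateWithFormat:@"username == %@", name]];

    NSError *error = nil;
    UserEntity *user = [[moc executeFetchRequest:request error:&error] lastObject];
    if (user == nil) {
        // no existing user with that name: create one
        user = [NSEntityDescription insertNewObjectForEntityForName:@"UserEntity" inManagedObjectContext:moc];
        user.username = name;
    }
    return user;
}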
You have separate userObj and postObj because while each post is unique, each user may have many posts.
The important concept to grasp is that you're dealing with objects, i.e. encapsulated instances of data AND logic. This isn't just rows and columns in a db. For example, you could write managed object subclasses in which a single instance decides whether to form a relationship with an instance of another class based on its own internal state. Records in dbs don't have that sort of logic or autonomy.
The best way to get a handle on using object graphs for data models is to ignore not only the db but Core Data itself. Instead, set out to write a small test app in which you hand-code all the data model classes. It doesn't have to be elaborate, just a couple of attributes per class and a reference of some sort to the other class. Think about how you would manage parsing the data out to each class, linking the classes and their data together, and then getting it back out. Do that by hand once or twice and the nature of object graphs becomes readily apparent.
There are other considerations that might tip your decision in the direction of SQLite versus Core Data with a SQLite store. I found myself nodding in agreement while reading a good blog post on the subject. I've found exactly the same thing (and am consequently moving a high-performance app away from Core Data): "Core Data is the right answer, except when it’s not..."
It's a great technology, but one size definitely does not fit all.
If posts is an NSSet relationship on User, you could get the last post with a predicate:
NSSet *setOfPosts = userInstance.posts;
NSDate *lastDate = [setOfPosts valueForKeyPath:@"@max.date"];
NSSet *resultsTemp = [setOfPosts filteredSetUsingPredicate:[NSPredicate predicateWithFormat:@"date == %@", lastDate]];
The resultsTemp set will contain the Post object that has the newest date.