Preventing breeze.js from creating observable properties on array objects

I must be missing something simple, but can't figure it out. I'm retrieving a bunch of lookup tables in 1 Web API call.
return EntityQuery.from('Lookups')
    .noTracking(true)
    .using(manager).execute()
    .then(processLookups);
In processLookups I'm calling getLocal for each array that was returned. Example: State table
datacontext.lookups = {
    state: getLocal('States', orderBy.state, true),
    ....
}
function getLocal(resource, ordering, includeNullos) {
    var query = EntityQuery.from(resource)
        .orderBy(ordering)
        .noTracking(true);
    if (!includeNullos) {
        query = query.where('id', '!=', 0);
    }
    return manager.executeQueryLocally(query);
}
The arrays are not observable, but each property on the array objects is an observable function. This is just overhead I don't need, since these values will not be changing.
How can I prevent the object properties from being observable?
Thanks

The raw lookups are available to you right there in the success callback from the query. No reason to look at the cache ... even if they were there (which they are not, as Jay makes clear).
But what would you DO with these lookups? Presumably you want them to be related (by Breeze navigation paths) to real entities. For example, you'd like session.room to return the related room object. But if the room is one of your lookups and is NOT an entity, then the session.room navigation property won't return it; nav properties always return entities.
I can think of ways around this. But it's just more work and more trickery.
Let's stop for a moment and ask the most important question: Why?
Why do you care if the lookups are entities with observable properties? It may be "overhead you don't need". But is it overhead that hurts you? Hurts you how? Have you measured it?
Forgive me but I sense premature optimizations that could be distracting you from more worthy pursuits. Happy to be proven wrong.
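That said, if you do want to hang on to the raw (non-entity) lookups, a minimal sketch of that idea might be the following (it assumes the 'Lookups' endpoint returns a single object whose properties are the lookup arrays, which is a common Breeze pattern; adjust the property names to your actual payload):
function processLookups(data) {
    var lookups = data.results[0]; // plain objects, because of noTracking(true)
    datacontext.lookups = {
        state: lookups.states // hypothetical property name on the payload
        // ... the other lookup arrays ...
    };
}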

I'm not sure I completely understand the situation, but the 'noTracking' option is really only relevant to 'remote' queries, i.e. not local ones. Basically, 'noTracking' tells Breeze not to process the results of the query into Breeze entities AND ALSO not to cache those results.
When you are querying the cache, which is what 'executeQueryLocally' is doing, both of these steps have already occurred, so 'noTracking' is ignored.

Can QAbstractItemModel::beginResetModel and endResetModel create a performance issue?

My Dev setup:
Qt version : Qt 5.15.0
OS: Embedded Linux
I have a list of information.
Assume I have a structure called MyStruct.
My model class has a member variable of type QList<MyStruct> that holds the data for my view. Whenever I open the view, I update that QList (note: there may or may not be an actual change). Updating here means assigning a new QList to the existing one. Before the assignment I call beginResetModel, and after the assignment I call endResetModel:
void MyModelClass::SomeInsertMethod(const QList<MyStruct>& aNewData)
{
    beginResetModel();
    m_lstData = aNewData;
    endResetModel();
}
One thing I believe can be improved is adding a check that the new data differs from the existing data before doing the above. Something like this:
void MyModelClass::SomeInsertMethod(const QList<MyStruct>& aNewData)
{
    if (m_lstData != aNewData)
    {
        beginResetModel();
        m_lstData = aNewData;
        endResetModel();
    }
}
Apart from that, is there any possibility of a performance issue from calling beginResetModel/endResetModel? I'm seeing a very small delay before the view comes up in my application.
I checked the documentation of QAbstractItemModel for the above methods, but didn't find anything specific about performance.
The other way this can be done is by individually comparing the elements of the lists and emitting a dataChanged signal with the appropriate model index and roles. But I feel this will unnecessarily introduce additional loops and comparisons, which again may cause some other performance issue. Correct me if I am wrong.
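For reference, what I have in mind for the per-element approach is roughly this (just a sketch; it assumes a flat list-style model where index(row, 0) is valid, that MyStruct has an operator==, and that I fall back to a reset when the row count changes):
void MyModelClass::SomeInsertMethod(const QList<MyStruct>& aNewData)
{
    // Fall back to a full reset if the row count changed.
    if (aNewData.size() != m_lstData.size())
    {
        beginResetModel();
        m_lstData = aNewData;
        endResetModel();
        return;
    }
    for (int row = 0; row < aNewData.size(); ++row)
    {
        if (!(m_lstData[row] == aNewData[row]))
        {
            m_lstData[row] = aNewData[row];
            const QModelIndex idx = index(row, 0);
            emit dataChanged(idx, idx); // notify the view about this row only
        }
    }
}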
Is there any advantage to using dataChanged over beginResetModel/endResetModel?
Please let me know your views on the above.

Making decisions on designing class interfaces

I would like to get some thoughts from others about the following problem.
Let's assume we have two classes, Products and Item. A Products object allows us to access any Item object. Here's an example.
$products = new Products();
// get existing item from products
$item = $products->get(123);
// create item
$item = $products->create();
$item->setName("Some new product");
$item->setPrice(2.50);
Now what would be the best way to update/save state of the item? I see 2 options:
$item->save();
or
$products->save($item);
The first approach seems very straightforward: once the attributes of the Item object are set, calling save on it will persist the changes.
On the other hand, I feel the latter approach is better. We're separating the roles of the two objects: the Item object contains only state, and the Products object operates on that state. This solution may also be better for writing unit tests.
Any thoughts?
So, effectively the items are buffering the actual changes.
Clearly both approaches will work; however, it comes down to how closely you want to adhere to the underlying database's model versus the overlaid object model.
Viewed from the outside, $item->save() makes the most sense in terms of the model - as you point out, you update the item's properties and then save them down. Plus it is conceptually an action that is performed on the item.
However, $products->save($item) offers two noticeable advantages, and a drawback.
On the plus side, by moving save into Products, it can (potentially) handle batching/reordering of the updates in a smarter way, since it has visibility of all the items. It also allows the save code to be reused as ->add() (more or less).
A downside is that it is going to (from the object-model view) permit the following use, which you probably don't want:
$p1 = new Products();
$p2 = new Products();
$item = $p1->create();
// set $item values
$p2->save($item);
Obviously, you could just add an "is this mine? no? then throw an error" check to Products::save, but that is extra code to block a use case the syntax implies could/should work, or at least one that would probably slip through a code review.
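For illustration, such a guard might look something like this (hypothetical; it assumes Products keeps a list of the items it has created or fetched):
public function save(Item $item)
{
    // Hypothetical ownership check -- assumes $this->items holds every Item
    // this Products instance has created or fetched.
    if (!in_array($item, $this->items, true)) {
        throw new InvalidArgumentException('Item does not belong to this Products instance.');
    }
    // ... persist $item ...
}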
So, I'd say go with the approach that seems the simplest and binds tightest to the desired functionality ($item->save()), unless you need to do caching/batching/whatever that forces you to go with the other.

Stateful objects, properties and parameter-less methods in favour of stateless objects, parameters and return values

I find this class definition a bit odd:
http://www.extremeoptimization.com/Documentation/Reference/Extreme.Mathematics.LinearAlgebra.SingleLeastSquaresSolver_Members.aspx
The Solve method does have a return value, but it would not need one, because the result is also available in the Solution property.
This is what I see as traditional code:
var sqrt2 = Math.Sqrt(2);
This would be an alternative in the same spirit as the solver in the link:
var sqrtCalculator = new SqrtCalculator();
sqrtCalculator.Parameter = 2;
sqrtCalculator.Run();
var sqrt2 = sqrtCalculator.Result;
What are the pros and cons besides the second version being a bit "untraditional"?
Yes, the compiler won't help the user who forgot to assign some property (parameter) BUT this is the case with all components that contain writeable properties and don't have mandatory values in the constructor.
Yes, threading will not work, BUT each thread can create its own solver.
Yes, the garbage collector won't be able to reclaim the solver's result while the solver holds on to it, BUT once the entire solver is released it will.
Yes, compilers and processors have special treatment of parameters and return values which makes them fast, BUT the time for parameter handling is mostly negligible.
And so on. Other ideas?
Well, after a year I found a clear flaw in this "introvert" approach. I am using an existing filter object which should operate on a measurement object but instead operates on itself, in the "it's all me and nothing else" fashion described above. Now the customer wants a measurement object recalculated a few minutes after the first calculation, and meanwhile the filter has processed other measurement objects. If the filter had been stateless and had stored its data in the measurement object, implementing a Recalculate method would have been an easy matter. The only way to solve the problem with an introvert filter is to make a filter instance part of the measurement object. Then filters need to be instantiated for every new measurement object, and since filters are part of a chain, the entire chain needs to be recreated. Well, there is some merit to being stateless after all.
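To make the contrast concrete, here is a hypothetical sketch of the stateless shape I mean (all type and member names are invented for illustration):
// Hypothetical types, purely to illustrate the stateless alternative.
class Measurement
{
    public double[] RawData { get; set; }
    public double[] FilteredData { get; set; }   // results live on the measurement
}

class Filter
{
    // Stateless: the filter keeps nothing, so recalculating an old
    // measurement later is just a matter of calling Apply again.
    public void Apply(Measurement m)
    {
        m.FilteredData = (double[])m.RawData.Clone(); // placeholder for real filtering
    }
}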

Spring.Net HibernateTemplate.Execute Clarification

I am taking over a project that was written by third party consultants who have already left.
I come from an EF background. One of the DAO classes has the following, which I find very hard to get my head around in terms of what exactly is happening step by step. If anyone could kindly help me understand this code section, it would be much appreciated.
return HibernateTemplate.Execute(
    delegate(ISession hbSession) // <-- What is this code actually trying to do?
    {
        string queryText = "from {0} x where x.Code = :Code";
        queryText = string.Format(queryText, typeof(Product));
        IQuery query = hbSession.CreateQuery(queryText);
        query.SetParameter("Code", productCode);
        query.SetCacheable(true);
        query.SetCacheRegion(CoreCacheConstants.ProductQueryCacheRegion); // <-- What is this code trying to do?
        var fund = query.UniqueResult(); // <-- Is this similar to the DISTINCT keyword in LINQ?
        if (fund == null)
            throw new ArgumentException(String.Format("No product found with productcode: {0}", productCode));
        NHibernateUtil.Initialize(((Product)fund).Details); // <-- What is this code trying to do? And where is the execute method for the above queries?
        return fund;
    }
) as Product;
Basically I am confused by the delegate part and why a delegate is being used instead of a simple query to the database. And what is the benefit of the above approach?
Also, I can't see any NHibernate ORM mapping XML. Does Spring.NET require mapping files in order to pass data to/from the data source? In other words, how does ISession know which database to connect to, which table to use, etc.?
This is what the Spring documentation refers to as Classic Hibernate Usage. It is not the currently recommended approach to working with NHibernate, which is described in the chapter on object relational mappers.
The "convenient" usage of delegates here is basically done to provide the HibernateTemplate the means to manage a session and hand this managed session over to other custom methods (in this particular case an anonymous method). (I think it's an implementation of the visitor pattern, btw).
Using this approach, the classic HibernateTemplate can provide functionality to methods it doesn't "know of", such as opening and closing sessions correctly and participating in transactions.
The query is actually being executed by HibernateTemplate.Execute(myMethod); I imagine it creates and initializes a session for you, does transaction management, executes your method with the managed session and cleans everything up.
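Conceptually it does something along these lines (a simplified sketch of the template idea only, not Spring.NET's actual implementation or signature; sessionFactory stands for whatever session factory the template was configured with):
// Simplified sketch -- not the real Spring.NET HibernateTemplate code.
public object Execute(Func<ISession, object> action)
{
    using (ISession session = sessionFactory.OpenSession())   // create and initialize a session
    using (ITransaction tx = session.BeginTransaction())      // transaction management
    {
        object result = action(session);   // your delegate runs with the managed session
        tx.Commit();
        return result;                     // session/transaction cleaned up by 'using'
    }
}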
I've never used HibernateTemplate, but I'm sure it would require mapping files and a SessionFactory, so if this code is hit during execution and doesn't throw any exceptions, the configuration for those has to be there somewhere!
With respect to the questions (besides the delegate part) within your posted code: the use of HibernateTemplate hasn't really got anything to do with them; you could just as well run this code anywhere you've got hold of a valid ISession instance:
the session executes an HQL query; this query is added to the query cache. I've never used SetCacheRegion myself, but apparently it gives you "fine-grained control over query cache expiration policies".
query.UniqueResult executes the query and returns a single result (or null if nothing matches; it throws if more than one row comes back), so it is not the same thing as DISTINCT in LINQ.
NHibernateUtil.Initialize forces the lazily loaded Details of the returned Product to be loaded while the session is still open, so it can be used safely after the session is gone.

Is Dojo get/set overriding possible?

I don't know much about Dojo, but is the following possible?
I assume it has a getter/setter for access to its data store; is it possible to override this code?
For example:
In the Dojo store I have 'Name: #Joe'.
Is it possible to hook the get so that:
get()
    if name.firstChar = '#' then just return 'Joe'
and:
set(var)
    if name.firstChar = '#' then set to '#' + var
Is this sort of thing possible, or will I need a wrapper API?
You can get the best doc from http://docs.dojocampus.org/dojo/data/api/Read
First, for getting the data from a store you have to use
getValue(item, "key")
I believe you can solve the problem the following way. It assumes you are using an ItemFileReadStore; if you use another one, just replace it.
dojo.require("dojo.data.ItemFileReadStore");
dojo.declare("MyStore", dojo.data.ItemFileReadStore, {
    // Strip a leading '#' from string values on the way out.
    getValue: function (item, key) {
        var ret = this.inherited(arguments);
        return (typeof ret == "string" && ret.charAt(0) == "#") ? ret.substr(1) : ret;
    }
});
And then just use "MyStore" instead of ItemFileReadStore (or whatever store you are using).
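Usage would then look something like this (the url and query here are just placeholders):
// Hypothetical usage -- url and query are placeholders.
var store = new MyStore({ url: "data/items.json" });
store.fetch({
    query: { Name: "*" },
    onItem: function (item) {
        console.log(store.getValue(item, "Name")); // "#Joe" comes back as "Joe"
    }
});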
I just hacked out the code and didn't try it, but it should show the solution well enough.
Good luck
Yes, I believe so. I think what you'll want to do is read the documentation linked below and determine how, and whether, it will work.
The following statement leads me to believe the answer is yes:
...
By requiring access to go through store functions, the store can hide the internal structure of the item. This allows the item to remain in a format that is most efficient for representing the datatype for a particular situation. For example, the items could be XML DOM elements and, in that case, the store would access the values using DOM APIs when store.getValue() is called.
As a second example, the item might be a simple JavaScript structure and the store can then access the values through normal JavaScript accessor notation. From the end-users perspective, the access is exactly the same: store.getValue(item, "attribute"). This provides a consistent look and feel to accessing a variety of data types. This also provides efficiency in accessing items by reducing item load times by avoiding conversion to a defined internal format that all stores would have to use.
...
Going through store accessor functions provides the possibility of lazy-loading of values as well as lazy reference resolution.
http://www.dojotoolkit.org/book/dojo-book-0-9/part-3-programmatic-dijit-and-dojo/what-dojo-data/dojo-data-design
I'd love to give you an example but I think it's going to take a lot more investigation.