When there are nested objects, I am currently parsing them as object1.object2.object3 and so on. However, this is troublesome. Is there a design pattern or a better way to deal with nested objects?
From my understanding, the Visitor pattern might work for your case. You could take a look at this link: https://www.tutorialspoint.com/design_pattern/visitor_pattern.htm
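To make that concrete, here is a minimal sketch of the Visitor pattern in Python. The node class and field names are hypothetical placeholders, not anything from your code; the idea is that the traversal logic lives in `accept`, so a visitor never has to spell out object1.object2.object3 itself:

```python
# Minimal Visitor sketch. Node/NameCollector are illustrative names only.
class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def accept(self, visitor):
        # The node drives the traversal, so visitors stay flat.
        visitor.visit(self)
        for child in self.children:
            child.accept(visitor)

class NameCollector:
    """A visitor that collects every node's name, however deeply nested."""
    def __init__(self):
        self.names = []

    def visit(self, node):
        self.names.append(node.name)

tree = Node("object1", [Node("object2", [Node("object3")])])
collector = NameCollector()
tree.accept(collector)
print(collector.names)  # ['object1', 'object2', 'object3']
```

Each new operation over the graph becomes a new visitor class, instead of another hand-written chain of attribute accesses.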
I've been trying out various approaches to "string interning" in a database that's accessed primarily using SQLAlchemy ORM. I've tried a couple of things, and so far I'm not really loving any of them. It seems like a common pattern, and I feel like I might be missing some obvious, elegant solution.
To elaborate: the situation is that my database table (Postgres, if it matters) is likely to contain many of the same strings, but they are still arbitrary, and not bounded in a way that would make a native enum type the right solution. I want to collect these strings in another table with an auto-incrementing PK and then reference them in the main table by FK. The goals here include both space savings and string "hygiene" (i.e. I'd like to be able to easily assess and track the growth of this string table).
I've tried the obvious naive solution of creating a separate entity, but this seems to foist the mechanics of the string interning onto every consumer of the entity; i.e. every consumer has to traverse the relationship to get the value, like this: obj.interned_property.value. And absent joinedload hints, it causes another database hit for every new access. (In general, I try to keep loading strategies out of the model itself, since different use cases often benefit from different loading strategies.) Adding a Python property to traverse the relationship is not a good approach because it can't participate in SQLAlchemy filtering/ordering operations.
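For reference, the naive separate-entity approach described above might look something like this; table, class, and column names here are illustrative guesses, not your actual schema, and it's sketched against SQLite just to be self-contained:

```python
# Sketch of the "separate entity" interning approach with SQLAlchemy ORM.
# Names (InternedString, Record, label) are hypothetical placeholders.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class InternedString(Base):
    __tablename__ = "interned_string"
    id = Column(Integer, primary_key=True)
    value = Column(String, unique=True, nullable=False)

class Record(Base):
    __tablename__ = "record"
    id = Column(Integer, primary_key=True)
    label_id = Column(Integer, ForeignKey("interned_string.id"))
    label = relationship(InternedString)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Record(label=InternedString(value="widget")))
    session.commit()
    rec = session.query(Record).first()
    # Every consumer must traverse the relationship, and without a
    # joinedload/selectinload hint this line issues a second SELECT:
    label_value = rec.label.value

print(label_value)  # widget
```

This reproduces exactly the pain point described: the extra hop is visible at every call site, and the lazy load costs a round trip per record.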
I've tried using the AssociationProxy extension, but I've been generally disappointed with it. I discovered that AssociationProxy attributes don't follow the same metadata contract as other SA ORM attributes; they lack an info property, for instance. An info dictionary was relatively simple to graft on, but this was really just the first shoe to drop. After that, I discovered that you can't filter against them in a query (at least not with the LIKE operator). I've gotten to the point where I'm kinda sick of discovering the next thing that AssociationProxy attributes can't do.
The next thought I had was to do all the interning inside the database using triggers and updatable views, but that inherently hampers portability w/r/t database engine, and splits the logic between Python and PL/pgSQL, which makes it harder for future developers coming into this code to figure out what's going on. And it's a bunch of effort, so if I'm going to do it, I would like to feel more confident that it's the right way to go.
Anyway, it seems like this is a pretty common pattern, and I feel like someone must have figured out an elegant solution by now. So, I'd love to hear from someone who's been down this road before: what's the best way to handle string interning with SQLAlchemy?
A wise man told me that learning how a syntax works doesn't make you a good programmer; rather, grasping programming constructs like iterators and conditionals does, since it means you can pick up any syntax more easily.
How would one go about learning these constructs?
The easiest construct you mention is a conditional.
The basic pattern of a conditional is:
if <some-condition> then
<do-action>
else
<do-other-action>
end if
This basic pattern is expressed in many different ways according to the language of choice, but is the basic decision-making building block of any program.
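For instance, in Python the same pattern reads (the temperature example is just an arbitrary illustration):

```python
# The generic if/then/else pattern above, expressed in Python.
temperature = 30  # an arbitrary example value

if temperature > 25:
    verdict = "hot"       # <do-action>
else:
    verdict = "not hot"   # <do-other-action>

print(verdict)  # hot
```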
An iterator is a construct which abstracts the physical layout of a data structure, allowing you to iterate (pass through) it without worrying about where in memory each element in the data structure is.
So, for example, you can define a data structure such as any of Array, Vector, Deque, Linked List, etc.
When you go to iterate, or pass through the data structure one element at a time, the iterator presents you with an interface in which each element in the data structure follows sequentially, allowing you to loop through with a basic for loop structure:
for <element> in <data-structure>
<do-action>
end loop
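In Python, for example, the same for-each loop works unchanged over several data structures, because each one supplies its own iterator that hides the memory layout:

```python
# The identical loop body works over a list (array-like), a deque,
# and a set: the iterator abstracts away where each element lives.
from collections import deque

array_like = [1, 2, 3]
deque_like = deque([1, 2, 3])
set_like = {1, 2, 3}

totals = []
for structure in (array_like, deque_like, set_like):
    total = 0
    for element in structure:  # <element> in <data-structure>
        total += element       # <do-action>
    totals.append(total)

print(totals)  # [6, 6, 6]
```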
As for other constructs, take a look at some books on Data Structures and Algorithms (usually a 2nd-year level computer science course).
Syntax is only a technical form of expressing your solution. How you implement it and the concepts you use in your solution are what make the difference between a beginner and an experienced developer. Programming languages are the means, not the skill itself!
Apologies for the shopping list, but I've played with a few ORM-type libraries, and most are good, but none have done everything :) On my next project, I'm hoping to find one that can do a few more things out of the box. Have you got any good suggestions?
This is what I am looking for:
Easily select deeply nested data.
for example, PHP Yii's CActiveRecord can do something like this: Contact::model()->with('phone_numbers', 'addresses', 'createdBy.user.company')->findAll();
Easily create/return deeply nested JSON from the database or ORM
Easily load deeply nested JSON data, validate it, and save it to the database correctly
Supports optimistic concurrency control
Handles multi-tenant systems gracefully
ORM stands for Object-Relational Mapper. It lets you convert from the world of rows to the world of objects and the associations between those objects. Nothing in either world has anything to do with JSON or XML serialization. In order to achieve what you want, you will need to employ a separate serialization framework. It also looks like you don't need an ORM, because you don't plan on having an actual object model; you seem to be thinking in terms of 'data', not 'objects'. You just need 'glue' between a database and a network app.
Easily select deeply nested data / Easily create/return deeply nested JSON from the database or ORM
Yet to find one... you need a generic way to convert to/from objects, arrays, and JSON, in and out, recursively.
Easily load deeply nested JSON data, validate it, and save it to the database correctly
Yet to find one.
Supports optimistic concurrency control
Doctrine, or brew your own with a "version" counter on each record
Handles multi-tenant systems gracefully
Ruby ActiveRecord + Postgres
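As a sketch of the "brew your own" version-counter idea mentioned above for optimistic concurrency control (a pure in-memory illustration, not tied to any particular ORM; the names are made up):

```python
# Optimistic concurrency via a version counter: an update succeeds only
# if the version the writer originally read is still current.
class StaleRecordError(Exception):
    pass

class Record:
    def __init__(self, value):
        self.value = value
        self.version = 0

def update(record, new_value, expected_version):
    # Simulates: UPDATE t SET value = ?, version = version + 1
    #            WHERE id = ? AND version = ?
    if record.version != expected_version:
        raise StaleRecordError("record changed since it was read")
    record.value = new_value
    record.version += 1

rec = Record("a")
v = rec.version          # writer A reads version 0
update(rec, "b", v)      # writer A commits; version is now 1
conflict = False
try:
    update(rec, "c", v)  # writer B still holds version 0 -> conflict
except StaleRecordError:
    conflict = True
print(conflict)  # True
```

In a real database you'd rely on the `WHERE ... AND version = ?` clause and check the affected row count, rather than raising from application code.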
I want to start building better persistence layers using NHibernate and other ORMs. What are the best design patterns or important design considerations for implementing persistence with ORM?
A couple off the top of my head...
1) Be careful not to group data that changes at drastically different rates into the same object. This can end up bloating tables with redundant data.
2) Avoid throwing in text fields that you intend to search, better to use something like Lucene for this. DBs aren't as efficient as dedicated text search libraries when doing LIKE style queries.
3) If you can make it so that your objects are immutable once written (i.e. they have a state id), then you can get very nice caching benefits on the front end and keep people from even needing to hit your server in the first place.
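A tiny sketch of point 3, assuming a hypothetical cache keyed by state id: because the object never changes once written, a cache entry never needs invalidation, so repeat reads can skip the database (or the server) entirely.

```python
# Immutable, state-id-addressed objects cache trivially: once an id is
# cached, the entry is valid forever. Names here are illustrative only.
cache = {}  # state_id -> object; safe to keep indefinitely
store = {1: {"state_id": 1, "name": "v1"}}  # stand-in for the database

def load(state_id):
    if state_id in cache:
        return cache[state_id]   # no database hit needed
    obj = store[state_id]        # pretend this is a DB round trip
    cache[state_id] = obj
    return obj

a = load(1)  # first access: goes to the "database"
b = load(1)  # second access: served from cache
print(a is b)  # True
```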
A design pattern that we use frequently is Singleton. Other things we consider using are lazy loading and data pagination.
Singleton is very useful. Also useful for you would be the following link:
http://www.yoda.arachsys.com/csharp/singleton.html
I have a fairly deep object graph (5-6 nodes), and as I traverse portions of it NHProf is telling me I've got a "Select N+1" problem (which I do).
The two solutions I'm aware of are
Eager load children
Break apart my object graph (and eager load)
I don't really want to do either of these (although I may break the graph apart later, as I foresee it growing).
For now....
Is it possible to tell NHibernate (with FluentNHibernate) that whenever I try to access children, to load them all in one go, instead of select-n+1-ing as I iterate over them?
I'm also getting "unbounded results set"s, which is presumably the same problem (or rather, will be solved by the above solution if possible).
Each child collection (throughout the graph) will only ever have about 20 members, but 20^5 is a lot, so I don't want to eager load everything when I get the root, but simply get all of a child collection whenever I go near it.
Edit: an afterthought.... what if I want to introduce paging when I want to render children? Do I HAVE to break my object graph here, or is there some sneakiness I can employ to solve all these issues?
It sounds to me like you want to work through your domain model rather than creating a specific NHibernate query to handle this scenario. Given this, I would suggest you take a look at the batch-size attribute, which you can apply to your collections. The Fluent NHibernate fluent interface does not yet support this attribute, but as a workaround you can use:
HasMany(x => x.Children).AsSet().SetAttribute("batch-size", "20");
Given the general lack of information about your exact scenario, I cannot say for sure whether batch-size is the ideal solution, but I certainly recommend you give it a go. If you haven't already, I suggest you read these:
http://www.nhforge.org/wikis/howtonh/lazy-loading-eager-loading.aspx
http://nhibernate.info/doc/nhibernate-reference/performance.html
The NHibernate performance documentation will explain how batch-size works.
Edit: I am not aware of any way to page from your domain model. I recommend you write NH queries for scenarios where paging is required.
Edit: an afterthought.... what if I want to introduce paging when I want to render children? Do I HAVE to break my object graph here, or is there some sneakiness I can employ to solve all these issues?
Well, if you only load the children, then you can page them :). But if you want something like LoadParent AND PageChildren, then I don't think you can do that.