How do I build object-oriented software with seemingly objectless ideas?

My question is not a philosophical one, but one of methodology. I understand how we build software to mimic everyday processes and objects, like the oft-demonstrated 'car' object. My question is about less concrete things. For example, if I were to build a search application that returns products and prices from a database (as part of a much larger application), would I imagine this in terms of how it might be done in real life? For example, would I write a 'search agent' object that mimics a real-life 'human search agent' (and derive attributes and methods from that)? Essentially, if something doesn't have an obvious real-life counterpart, do we invent a real-life representation of it so we CAN code it? How should I think about objects in an application that don't mimic life so much?
Thanks for all your input

Object Oriented design is not about mimicking real life objects.
Mimicking real-life objects is just rough guidance on how to abstract objects for your problem domain: a good start, but not sufficient for more abstract problem domains.
The underlying idea is to identify the units within your problem domain that can be separated from the other units, based on their data and behavior.

No, objects are conceptual units.
Real-life objects are just used to INTRODUCE object-oriented ideas, but objects don't have to correspond to "real" things; they can also represent abstract concepts that have no real-life equivalent.

Look at the functionality: your inputs and your outputs.
Look for the tasks to be done, for example retrieving the data, processing the data into logical tables, and presenting the data.
You can break these tasks into smaller, related tasks, for example the connect-to-server task, the request-sending task, and parsing the response that comes back. Try to break your objective down into the smallest pieces, and then group those small functions and data members into classes.
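For instance, a minimal sketch of that decomposition in Java (all names here, such as Product, ProductRepository and ProductSearch, are invented for the illustration, not taken from any particular framework):

import java.util.List;
import java.util.stream.Collectors;

// Plain data holder for one result row.
class Product {
    private final String name;
    private final double price;

    Product(String name, double price) {
        this.name = name;
        this.price = price;
    }

    String name()  { return name; }
    double price() { return price; }
}

// Knows how to talk to the data store and nothing else.
interface ProductRepository {
    List<Product> findByName(String nameFragment);
}

// Coordinates the smaller tasks: run the query, then format the results.
class ProductSearch {
    private final ProductRepository repository;

    ProductSearch(ProductRepository repository) {
        this.repository = repository;
    }

    List<String> search(String term) {
        return repository.findByName(term).stream()
                .map(p -> p.name() + ": " + p.price())
                .collect(Collectors.toList());
    }
}

The point is only that each class owns one small task; the search application from the question does not need a 'human search agent' object, just a handful of units like these.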

Related

Why map Java beans?

I ran across Orika recently.
I couldn't find a good explanation of why I should use it. If I have, say, a User domain object, why not use that directly? Why do I need to create a UserDTO with more or less the same members?
Of course there are times when I need to hide some fields, but that doesn't explain the need for dozens of mapping libraries.
Can someone explain why I should not re-use domain objects from one architectural boundary to another? I'm using 'boundary' to include layers, micro-service interfaces, or anything similar.
It all depends! Good design patterns for big systems are often overkill for smaller ones. Is the data you are getting really the same as your logical, intuitive domain objects, or is there extra data in there?
If you find yourself in the situation described in the answer to this post, then DTO it up. DTOs exist to limit the number of expensive network operations by transmitting more data in each request. Say you have 'User' and 'AddressDetail' domain objects, and you could get the data for both objects in a single call (and the data is useful in the same area of the application); then you'd use a DTO and send all the data at once.
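A minimal sketch of what such a DTO might look like, assuming hypothetical User and AddressDetail classes with the fields shown (none of this is from a real codebase):

// Minimal stand-ins for the two domain objects mentioned above.
class User {
    String name;
    String email;
}

class AddressDetail {
    String street;
    String city;
}

// A flat transfer object carrying the fields of both, so one network call
// can return everything the client needs at once. It is deliberately "dumb":
// no business behavior, it only ferries data across the boundary.
class UserWithAddressDto {
    String userName;
    String email;
    String street;
    String city;

    static UserWithAddressDto from(User user, AddressDetail address) {
        UserWithAddressDto dto = new UserWithAddressDto();
        dto.userName = user.name;
        dto.email = user.email;
        dto.street = address.street;
        dto.city = address.city;
        return dto;
    }
}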
It can be hard to predict how your system will grow (especially if you are working against a living API which someone else controls), and data transfer objects at some level provide a clear separation of responsibility, which is often a good thing.
I'd say: reuse domain objects with caution in large systems.

How do we decide what should be an object

An object oriented application is made up of several different objects. Before engineers start writing code for the participating objects:
How does one decide what should be an object?
Your question is too open to admit a concrete answer, so let me offer some thoughts that could help you get started.
Let's assume that we are going to use a pure OO language like Smalltalk where everything is an object.
When working with objects for representing some reality (or fantasy) you usually don't try to identify all of them before you start. Instead, you identify the few that first come to mind as good representatives of your domain. For instance, if you are writing an application for modeling some aspects of Chemistry, you might want to start with Atom and Molecule.
Generally speaking, you should try to focus on concrete and important entities from your domain. Do not think about their interactions yet.
Once you have identified the first few objects, you should focus on the inherent behavior of each of them. For instance, if you are modeling a colony of ants, then your Ant objects will have to know how to move around, how to get back to the nest, how to cut or carry a leaf, etc.
As you add more and more methods to your objects you will discover new objects that are required to enhance the behavior of the ones you have already identified. For instance, if you are modeling a chess game and you have already identified the Pawn object, you will soon realize that you need to model the ChessBoard, the ChessGame and so on.
So, the identification of objects is not something you try to address before you start. Instead, it is something that you will naturally discover by the evolution of your model. As your objects mature, i.e., as they learn more things, they will "reveal" which other objects are still missing.
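As a rough illustration of that discovery process, here is a hedged sketch of the chess example: writing the Pawn's own behavior is what forces a Board (and a Square) into existence. All names and details are invented:

// A square on the board, identified by file and rank.
class Square {
    final int file, rank;
    Square(int file, int rank) { this.file = file; this.rank = rank; }
}

// Discovered only because Pawn needed somewhere to move on.
class Board {
    boolean isEmpty(Square square) {
        return true; // placeholder: real occupancy tracking comes later
    }
}

class Pawn {
    private Square position;

    Pawn(Square start) { this.position = start; }

    // Writing this inherent behavior is what "reveals" the Board class.
    void advance(Board board) {
        Square ahead = new Square(position.file, position.rank + 1);
        if (board.isEmpty(ahead)) {
            position = ahead;
        }
    }
}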
In the course of your modeling you will likely hit some pieces of knowledge you lack. Here is when you need to ask a domain expert, read or study related material, ask questions in fora such as Stack Overflow, etc. In this regard you will find yourself learning more about the domain so you can "explain" your newly acquired knowledge to your objects. In some regards you will feel that your objects are asking you questions that you would have never conceived yourself and all these activities will fruitfully populate your model with an increasing number of classes (or prototypes) and methods.
I must say this is a very interesting question: how does someone identify the objects? Based on my knowledge, I would say the programmer should not worry about the objects at first, since an object is just the representation of something. It is good to relate object-oriented programming to the real world, so the programmer should first identify the classes, which correspond to real-world entities, and then create objects to describe those entities.
Suppose you are developing a project for a librarian, who describes the following problem:
"As a librarian, I want a library management system where there are different types of readers for books, magazines, CDs and DVDs" (something like this).
The developer can then easily identify the classes from their real-world counterparts.
The classes would be:
Librarian
Book
Magazine
CD
Reader
Objects would then be specific instances: a Programming C++ book, an Advanced Java book, Hardik the reader, and so on.
So an object is the representation of a real-world entity, and it carries some specific meaning.
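A small, hypothetical sketch of that mapping from the librarian's story to classes and objects (only Book and Reader are shown; Librarian, Magazine and CD would follow the same pattern):

// Classes map to the nouns in the librarian's story.
class Book {
    final String title;
    Book(String title) { this.title = title; }
}

class Reader {
    final String name;
    Reader(String name) { this.name = name; }

    void borrow(Book book) {
        System.out.println(name + " borrowed " + book.title);
    }
}

public class LibraryExample {
    public static void main(String[] args) {
        Book cpp  = new Book("Programming C++");   // an object of class Book
        Book java = new Book("Advanced Java");     // another Book object
        Reader hardik = new Reader("Hardik");      // an object of class Reader
        hardik.borrow(cpp);
        hardik.borrow(java);
    }
}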

How to design a business logic layer

To be perfectly clear, I do not expect a solution to this problem. A big part of figuring this out is obviously solving the problem. However, I don't have a lot of experience with well architected n-tier applications and I don't want to end up with an unruly BLL.
At the moment of writing this, our business logic is largely an intermingled ball of twine: an intergalactic mess of dependencies, with the same business logic replicated in more than one place. My focus right now is to pull the business logic out of the thing we refer to as a data access layer, so that I can define well-known events that can be subscribed to. I think I want to support an event-driven/reactive programming model.
My hope is that there are certain attainable goals that tell me how to design this collection of classes in a manner well suited for business logic. If there are things that differentiate a good BLL from a bad BLL, I'd like to hear more about them.
As a seasoned programmer but fairly modest architect I ask my fellow community members for advice.
Edit 1:
So the validation logic goes into the business objects, but that means the business objects need to communicate validation errors/logic back to the GUI. That gets me thinking about implementing business operations as objects, rather than as methods on objects, in order to provide a lot more metadata about what an operation requires. I'm not a big fan of code cloning.
Kind of a broad question. Separate your DB from your business logic (horrible term) with ORM technology (NHibernate, perhaps?). That lets you stay in OO land mostly (obviously) and you can mostly ignore the DB side of things from an architectural point of view.
Moving on, I find Domain Driven Design (DDD) to be the most successful method for breaking a complex system into manageable chunks, and although it gets no respect I genuinely find UML, especially activity and class diagrams, to be critically useful in understanding and communicating system design.
General advice: interface everything, build your unit tests from the start, and learn to recognise and separate the reusable service components that can exist as subsystems. FWIW, if there's a bunch of you working on this, I'd also agree on and aggressively use StyleCop from the get-go :)
I have found some of the practices of Domain Driven Design to be excellent when it comes to splitting up complex business logic into more manageable/testable chunks.
Have a look through the sample code from the following link:
http://dddpds.codeplex.com/
DDD focuses on your Domain layer or BLL if you like, I hope it helps.
We're just talking about this from an architecture standpoint, and the gist of it remains "abstraction, abstraction, abstraction".
You could use EBC to design top-down and pass the interface definitions to the programmer teams. Using a methodology like this (or any other visualization technique) to make the dependencies visible keeps you from duplicating business logic anywhere in your project.
Hmm, I can tell you the technique we used for a rather large database-centered application. We had one class which managed the data layer, as you suggested, and its name carried the suffix DL. We had a program which automatically generated this source file (which was quite convenient), though it also meant that if we wanted to extend the functionality, we needed to derive from the class, since regenerating the source would overwrite it.
We had another file ending with OBJ which simply defined the actual database row handled by the data layer.
And last but not least, with a well-formed base class, there was a file ending in BS (standing for business logic), the only file not generated automatically, defining event methods such as "New" and "Save" so that calling the base performed the default action. Any deviation from the norm could therefore be handled in this file (including complete rewrites of the default functionality if necessary).
You should create a single group of such files for each table and its children (or grandchildren) tables which derive from that master table. You'll also need a factory which contains the full names of all objects so that any object can be created via reflection. So to patch the program, you'd merely have to derive from the base functionality and update a line in the database so that the factory creates that object rather than the default.
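A very rough sketch of that arrangement, translated into Java for illustration (the class and method names are invented, and the original was presumably .NET, so treat this as the shape of the idea rather than the actual implementation):

// Would be (re)generated; never edited by hand.
class CustomerBS {
    public void save() {
        System.out.println("default save: write the row to the Customer table");
    }
}

// Hand-written patch for deviations from the norm.
class DiscountedCustomerBS extends CustomerBS {
    @Override
    public void save() {
        System.out.println("apply discount rules first");
        super.save();   // fall back to the generated default action
    }
}

class BusinessObjectFactory {
    // The fully qualified class name would normally come from a database row,
    // so patching the program only requires updating that one row.
    static CustomerBS create(String className) throws Exception {
        return (CustomerBS) Class.forName(className)
                                 .getDeclaredConstructor()
                                 .newInstance();
    }
}

The design choice worth noting is that the generated class is never touched by hand; deviations live in the subclass, and the factory's database entry decides which class actually runs.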
Hope that helps, though I'll leave this a community wiki response so perhaps you can get some more feedback on this suggestion.
Have a look at this thread; it may give you some thoughts.
How should my business logic interact with my data layer?
This guide from Microsoft could also be helpful.
Regarding "Edit 1" - I've encountered exactly that problem many times. I agree with you completely: there are multiple places where the same validation must occur.
The way I've resolved it in the past is to encapsulate the validation rules somehow. Metadata/XML, separate objects, whatever. Just make sure it's something that can be requested from the business objects, taken somewhere else and executed there. That way, you're writing the validation code once, and it can be executed by your business objects or UI objects, or possibly even by third-party consumers of your code.
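A minimal sketch of that encapsulation idea, using a hypothetical rule interface (names invented; metadata or XML would work just as well):

import java.util.List;

interface ValidationRule<T> {
    // Returns an error message, or null if the value passes.
    String validate(T target);
}

class RequiredLastNameRule implements ValidationRule<Customer> {
    @Override
    public String validate(Customer customer) {
        return (customer.getLastName() == null || customer.getLastName().isBlank())
                ? "Last name is a required field"
                : null;
    }
}

class Customer {
    private String lastName;
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    // The business object exposes its rules; UI code (or a third-party
    // consumer) can run the same list instead of duplicating the checks.
    public List<ValidationRule<Customer>> validationRules() {
        return List.of(new RequiredLastNameRule());
    }
}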
There is one caveat: some validation rules are easy to encapsulate/transport; "last name is a required field" for example. However, some of your validation rules may be too complex and involve far too many objects to be easily encapsulated or described in metadata: "user can include that coupon only if they aren't an employee, and the order is placed on labor day weekend, and they have between 2 and 5 items of this particular type in their cart, unless they also have these other items in their cart, but only if the color is one of our 'premiere sale' colors, except blah blah blah...." - you know how business 'logic' is! ;)
In those cases, I usually just accept the fact that there will be some additional validation done only at the business layer, and ensure there's a way for those errors to be propagated back to the UI layer when they occur (you're going to need that communication channel anyway, to report back persistence-layer errors).

First step in OOD?

What is the first step in OOD?
There are no steps, it's not a process.
The answer is the book Head First Object-Oriented Analysis and Design (cover image source: headfirstlabs.com):
http://headfirstlabs.com/books/hfooad/
http://www.amazon.com/dp/0596008678/?tag=forelangstud-20
Practice, read broadly and more practice.
Especially with others to review and comment on approaches.
Reading should cover not just OOD, but also patterns to see how others have approached common problems.
It's a lot of practice. The first thing is to get your mind around the way objects work--especially if you are a procedural programmer.
Practice making many small objects. I've literally never seen a system with too many objects; it's possible, but I've never run across it. It should be really obvious when you need to combine many objects into one, but it's not as obvious when an object should be broken up.
Ask an object to do something; don't ask for its data. Try to avoid getters and setters and concentrate on methods where you ask the object to do something with its data. If you EVER see code like o.a = o.b + o.c or o.setA(o.getB() + o.getC()), you are doing it wrong.
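A small before/after sketch of that guideline (names invented):

class Account {
    private double balance;
    private double pendingCharges;

    // Bad (done from the outside): total = acct.getBalance() - acct.getPendingCharges();

    // Better: the object uses its own data to answer the question.
    double availableFunds() {
        return balance - pendingCharges;
    }

    void charge(double amount) {
        pendingCharges += amount;   // behavior lives with the data it changes
    }
}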
Constantly try to refactor out duplication. Rewrite your code repeatedly until there is none (or as little as possible). This will probably do more for your OO design skills than any other practice. As you get more knowledgeable, try refactoring things you didn't think you could refactor before. Anything that even looks like a pattern can probably be refactored. For instance here's a very basic example--if you had lines of code that looked like this:
a = b + c * d;
g = h + i * d;
Chances are there are HUGE refactorings missing in your code, even though it doesn't look like it off the bat. You are probably missing an object that would hold a, b and c (with a second instance holding g, h and i); after creating that object, a bunch of surrounding code would factor into it. Learning to recognize opportunities like this is critical.
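As a hedged sketch of where that refactoring might lead (the class name and the roles of the fields are invented; your domain would suggest better ones):

class PricedLine {
    private final double base;       // plays the role of b or h above
    private final double unitPrice;  // plays the role of c or i above

    PricedLine(double base, double unitPrice) {
        this.base = base;
        this.unitPrice = unitPrice;
    }

    // The duplicated formula now lives in exactly one place.
    double totalFor(double quantity) {
        return base + unitPrice * quantity;
    }
}

// a = new PricedLine(b, c).totalFor(d);
// g = new PricedLine(h, i).totalFor(d);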
I've been programming for over 20 years now, over half of it has been OO at this point and it seems like every few years I think I know it all--a year later I look back and realize how ignorant I was.
The first step is object-oriented analysis. Its aim is to identify the objects that make up a system and how they interact; given this knowledge, you can then specify the behavior of each object (its interface methods) and then its internals (the required data members).
The design process produces a number of diagrams; these are tools that are supposed to help with working out the details of the system:
First comes a set of 'use cases'. A use case is a verbal description of a scenario that is implemented by the system (one is supposed to pick the most substantial ones); these are then used to identify the main actors and concepts, which are supposed to map to the classes of the system. This understanding is then refined by working out 'object interaction diagrams', 'class diagrams' and 'sequence diagrams'; sometimes state charts are used to visualize state machines. These diagrams are tools for gaining an even better understanding of the system, and as a result you end up understanding the system well enough to write the class header files/class definitions. There are no fixed rules about which of these diagrams comes first; they are used as appropriate.
I found the following book very useful:
Object-Oriented Analysis and Design with Applications (second edition) by Grady Booch
The book goes through the process of designing several example systems step by step (I think it is enough to read the design process for those example systems). One minor problem is that the notation used in the book is a bit dated: modern practice is to use UML notation for diagrams, whereas the book still uses the older Booch notation. The strong point of the book is that it always explains each concept by working through concrete examples.
There are some preliminary steps:
Understand OOD (in general)
Understand the problem/application domain (the functional specification)
Have a high-level/architectural design: know what O/S, libraries, frameworks etc. you can use
I then use a mixture of top-down and bottom-up development:
Top-down: decide what components and what APIs (object interfaces) I would like to have in order to implement the application (and then develop those APIs)
Bottom-up: decide how to add new functionality to existing APIs (object interfaces), by adding new methods and new types of object (and sometimes splitting a large object into several smaller objects).
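A tiny sketch of those two directions (interface names invented): the first interface is decided top-down before any implementation exists; the second grows bottom-up from it later.

// Top-down: the API the application wants, decided before implementation.
interface OrderService {
    void placeOrder(String productId, int quantity);
}

// Bottom-up: new functionality added to the existing API later, here by
// extending the interface so existing call sites stay untouched.
interface CancellableOrderService extends OrderService {
    void cancelOrder(String orderId);
}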
The first step of OOD is to learn the OOD principles. Check out The Principles of OOD.

Are ORMs counterproductive to OO design?

In OOD, design of an object is said to be characterized by its identity and behavior.
Having used ORMs in the past, the primary purpose, in my opinion, revolves around the ability to store/retrieve data. That is to say, ORM objects are not designed around behavior, but rather around data (i.e. database tables). Case in point: many ORM tools come with a point-to-a-database-table-and-click object generator.
If objects are no longer characterized by behavior this will, in my opinion, muddy the identity and responsibility of the objects. Subsequently, if objects are not defined by a responsibility this could lend a hand to having tightly coupled classes and overall poor design.
Furthermore, I would think that in an application setting, you would be heading towards scalability issues.
So, my question is, do you think that ORMs are counterproductive to OO design? Perhaps the underlying question is whether or not they are counterproductive to application development.
There's a well-known and oft-ignored impedance mismatch between the requirements of good database design and the requirements of good OO design. Most developers (in my experience) either do not understand this impedance mismatch or do not care. Since it's more common to start with the database and generate the objects from it (rather than the reverse), then yes, you'll end up with objects that are great as a persistence layer but sub-optimal from an OO perspective. (The reverse, generating the database from the object model, makes me want to stab my eyes out.)
Why are they sub-optimal from an OO perspective? Because the objects produced by an ORM are not business objects, even with partial classes and the like. Business objects model behavior. ORM objects model persistence. I'm not going to spend ten paragraphs arguing this distinction. It's something Rocky Lhotka has covered quite well in his books on Business Objects and his CSLA framework. Whether or not you like or use CSLA, I think his arguments are solid ones.
If objects are no longer characterized by behavior this will, in my opinion, muddy the identity and responsibility of the objects.
The objects in question do have database reading and writing as defined behavior. They just don't have much other than that.
The reality of the situation is pretty simple: object orientation isn't an end in itself, it's a means to an end, but in some cases it just doesn't do much to improve the end result. A lot of uses of ORM are a case in point: there are thousands of variations of CRUD applications that don't need or want to attach any real behavior to most of the data they process.
The application as a whole gains flexibility by not encoding much (if any) of the data's "behavior" into the application code itself. Instead, such applications are often better off treating it as "dumb" data that they simply pass through from UI to database and back out to reports and such. With a bit of care, this can allow a substantial level of user customization that's almost impossible to match when you try to treat the data as real objects with real behavior encoded into the application proper.
Of course, there's another side to that: it can make it substantially more difficult to ensure the integrity of the data or that the data is only used appropriately -- I've seen code that accidentally used the wrong field in a calculation, so they were averaging the office numbers instead of office sizes in square feet. Both were user-defined fields that just said the contents should be numeric. The application had no way to know that one made perfect sense, and the other didn't at all.
Case in point: Many OR/M tools come with a point-to-a-database-table-and-click object generator.
Yes, but there are equal, if not greater, numbers of ORM solutions that base themselves off your objects and generate your database tables.
If you start with the data and then force the auto-generated objects to take on object behaviours, yes, you might get confused... But if you start with the objects and generate the database as a secondary layer, you end up with something a lot more usable, even if the database isn't perhaps as optimised as it could be.
If you're looking for an excuse not to use ORM, don't use it. I personally find it saves me thousands of lines of code doing trivial things that the ORM does just great.
I don't believe ORMs are counterproductive to OO design, unless you want to insist that persistence is an integral part of behavior.
I'd separate persistence from business behavior. You're certainly free to add business behavior to any object that an ORM generates for you.
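For example, a hedged sketch of an object that is mapped for persistence but still defined by its behavior (JPA annotations are used here only as an illustrative mapping technology, from the javax.persistence package; newer stacks use jakarta.persistence instead, and the class and fields are invented):

import javax.persistence.Entity;
import javax.persistence.Id;

@Entity
public class Invoice {
    @Id
    private Long id;
    private double amount;
    private boolean paid;

    // The annotations above are just persistence metadata; the behavior below
    // is what makes this a business object rather than a table-shaped holder.
    public void pay(double payment) {
        if (payment >= amount) {
            paid = true;
        }
    }

    public boolean isPaid() {
        return paid;
    }
}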
I've also seen ORM systems which tend to go from the OO model and generate the database.
If anything, I would say ORMs are more biased towards producing good OO code than they are to producing good database code.
Ideally, a successful ORM bridges the two worlds and your application code would be great from a business domain problem-solving and implementation perspective and your database code and model would be great from a normalization, performance and ETL/reporting/replication whatever perspective.
On systems where it separates your data from its behavior, it's absolutely counterproductive.
ORM systems tend to analyze existing database tables to create stupid "objects" in your language. These things are not true objects because they do not contain business behavior, but since people have these structures, they tend to want to use them.
Ruby on Rails (Active Record) actually binds your data to a "Live" class--this is much better.
I've never seen a system I really liked. ActiveRecord is close, but it makes a few Ruby-esque assumptions that I'm not quite comfortable with, the biggest being supplying public setters and getters by default.
But to sum up--I've seen a lot of good OO programmers write screwed up code because of ORM.
As with most questions of this type, it depends on your usage.
The main ways to use ORM tools are:
Define objects by the data, and use those objects throughout the application code (BAD BAD BAD)
Use ORM objects only for data access, and define your own objects with an interpretive layer in between (Much Better)
If starting from scratch, a third option is to design the data model from your object model (Best, if possible)
So yes, if you define your objects by the data tables and use them throughout your code, you will not be using OOD and you will be introducing very poor design and maintenance issues.
But if you use the ORM objects only as a data access tool (replacing ADO), then you are free to use good OOD and ORM together. Sure, more code is required to build the interpretation layer (a rough sketch follows), but it enables much better practices, with not much more code than the old ADO approach required.
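Here is the rough sketch mentioned above, showing the interpretation layer between an ORM-shaped row object and a hand-designed domain object (all names invented):

class CustomerRow {            // shape dictated by the database / ORM tool
    String firstName;
    String lastName;
    String statusCode;
}

class Customer {               // shape dictated by the business domain
    private final String fullName;
    private final boolean active;

    Customer(String fullName, boolean active) {
        this.fullName = fullName;
        this.active = active;
    }

    boolean canPlaceOrders() { return active; }
}

class CustomerMapper {         // the interpretive layer between the two
    Customer toDomain(CustomerRow row) {
        return new Customer(row.firstName + " " + row.lastName,
                            "A".equals(row.statusCode));
    }
}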
I'm answering this from a C# perspective, since that is where I do the majority of my development....
I see your point, but I think with the ability to create partial classes you can still create objects with any behavior you like and still get the power that an OR/M brings to the table for data retrieval.
From what I've seen of ORMs, they do not go against OO principles; in fact they're fairly orthogonal to them. (For info: I'm pretty new to ORM technology, coming from a Java perspective.)
My reasoning is that ORMs help you store the data members of a class to a persistent store without having to couple to that store and write that code yourself. You still decide on the data members and write the behaviours of a class.
I guess you could abuse ORMs to break OO principles, but then you can do that with anything. You might use tooling to create skeletal data classes from a pre-existing table, but you would still create methods etc.