Help! Evil services are killing my objects - oop

When I believed in the American dream of encapsulation and polymorphism, the intrusion of Web Services washed my objects away with RPC calls...
When I cherished my resurrected PONOs, an ugly army of barbarians called proxy objects conquered my lands...
Later, peace seemed to return with DDD and NHibernate on the server side, but then the SilverLightning hit my castle. Now there is hunger again, delicious lazy loading lives only in my memories, and for years my poor objects have had to consume stale services again...
And I am full of fear...the world is talking more and more about some other terrifying procedural monsters...they call them "Workflows"...
How can I save my objects?
Literally, I do not provide services to anyone. I am building a simple, small system. I don't want to use services to find my data. I don't want to use services for one part of my web application to talk to another part, just as I don't want to use snail mail to talk to my colleagues.
Any ideas? Did you manage to save your objects? Did you manage to save more than your domain model? (hopefully you managed the latter...)
Update:
If this was not clear...
Our architecture has been killed because everything is web-service based.
There was a fashion: "OO is dead, services rule."
In SOA it is still quite hard to focus on objects when everything focuses on verbs ("operation contracts"). I find it hard to take care of the design.

Beware, you foolish mortals. The Entity That Is has indeed fed on your polymorphed objects. But this also means that you have inherited the Big Slimy Interface that lurks in the dark. So you can retire your puny barbarians (by proxy, if you wish).
And yes, thanks to the Entity That Is, your objects got lazy and have their garbage collected. Their joy is only temporary, though, because their life is immediately ended when they move out of scope. And not a single one of them can get away.
If you show fear to the Entity That Is, death is only a destructor away. So be careful when you ride the waves of the workflow, because they are as unpredictable as the average stock market.
Your objects are never safe from the Entity That Is. Persistence can back them up for a while, but eventually all will fail when the last clock cycle has rung. Fortunately, thanks to persistence, your objects can be sent to better places where they can multiply and live in peace.
The Entity That Is is strict but fair, so if you use the proper command, your virtual doors to other realities will open and allow swift and reliable traffic.
Good luck, and honour the Entity That Is; you may not always agree with it, but its rule is the law and death the only penalty.

I think you're complaining about verb-centered design when using SOA. If so, that is not a requirement of SOA, but it is a temptation.
"Anywhere in a normal OO application that you would do something, change that something to a web service" is probably overkill.
The best uses of SOA that I've seen just replace the data access layer with an SOA layer, plus expose a few high-level "public" operations, such as registering a new user (see the sketch below).
One could make a web service out of every class method, but this would be ridiculous in most cases...
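For illustration, a minimal sketch of such a coarse-grained WCF contract (the operation and types here are hypothetical):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class RegistrationRequest
    {
        [DataMember] public string Name { get; set; }
        [DataMember] public string Email { get; set; }
    }

    [DataContract]
    public class RegistrationResult
    {
        [DataMember] public bool Succeeded { get; set; }
        [DataMember] public int UserId { get; set; }
    }

    // One operation per business task, not one per class method.
    [ServiceContract]
    public interface IUserService
    {
        [OperationContract]
        RegistrationResult RegisterUser(RegistrationRequest request);
    }

    // The fine-grained anti-pattern would be a chatty round-trip per property:
    //   [OperationContract] void SetUserName(int id, string name);
    //   [OperationContract] void SetUserEmail(int id, string email);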

It seems like the two aren't compatible at scale. Amazon apparently lost plenty of time and money to issues with versioning and their object models. SOA layers seem to work better if no dependent object definition is shared, leaving the consumer of the services to map each call onto their own domain model... hmm...

Clearly you haven't taken your abstraction pills this morning. Now, take your nice medicine and you'll feel better in a little while...

Related

Object oriented programming - subobject vs globals (use case)

I hope this is the correct stack.
I am developing a nethack-like game, and I would like some advice on how to approach the design. For now I have classes like Location, Npc, Item, etc. But I have a problem with how to easily access parts of a location.
Let's say I have a Door object inside a location (composed, not inherited). If the player is inside the location, it is easy to check whether the door is open. But (I had this solution in my previous, non-object implementation) I also had a script that opened all the shops at 06:00. Now I would need to iterate through all locations, check whether there are doors inside, and open them if the location is a shop. Is that really an efficient way to do it?
I could also use globals (like a singleton) holding the door states and quickly run through those - but that would hardly be OOP.
What are the possibilities here?
If this problem is covered somewhere, please share a link with me and that will surely be enough :)
Thanks!
Zaqqen
In my opinion, there is a difference between theoretical and practical OOP. If you want to learn the basics or write a thesis on OOP, the theoretical kind may be fine. In most other cases you will prefer the practical kind.
Why, and how is this related to your question?
When I was a young developer, I had a lot of trouble rationalizing my code. Should I put the selling method in the class Product, Store or Consumer?
Then I discovered SOA, and I put my selling method in the class SaleHandler.
This is what you would call a singleton. I prefer to implement it as a service, with a framework helping me do dynamic dependency injection. From there I had, in short, my data classes (Product, Store, ...) and my service classes (SaleHandler). All my logic was coded in these services. This was not pure theoretical OOP, but it helped me enormously in handling the increasing complexity of growing applications.
I don't know exactly how you can use this in your case, but I can give you some leads:
A service class DoorsRegistry containing all your doors (registry pattern).
A service class DoorOpener handling the opening of the doors.
If you do not use a framework that helps you with this, you can implement your services as singletons, but be aware that the singleton pattern is arguably an anti-pattern.
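A minimal C# sketch of those two services (all names are illustrative, and the registry is shown as a plain class you would inject rather than a true singleton):

    using System.Collections.Generic;
    using System.Linq;

    public class Door
    {
        public bool IsOpen { get; set; }
        public bool BelongsToShop { get; set; }
    }

    // Registry pattern: one place that knows every door,
    // so nothing has to iterate over all locations.
    public class DoorsRegistry
    {
        private readonly List<Door> doors = new List<Door>();

        public void Register(Door door) { doors.Add(door); }

        public IEnumerable<Door> ShopDoors
        {
            get { return doors.Where(d => d.BelongsToShop); }
        }
    }

    // Service owning the opening logic, e.g. the 06:00 shop opening.
    public class DoorOpener
    {
        private readonly DoorsRegistry registry;

        public DoorOpener(DoorsRegistry registry) { this.registry = registry; }

        public void OpenShops()
        {
            foreach (var door in registry.ShopDoors)
                door.IsOpen = true;
        }
    }

Each Location would register its doors with the registry on creation, so the 06:00 script becomes a single call to OpenShops().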
Hope this is the kind of answer you are waiting for.

What is meant by "getting the right level of abstraction"?

I've read that a hard job in programming is writing code "at the right level of abstraction". Although abstraction generally means hiding details behind a layer, is writing something at the right level of abstraction basically about getting the decision of which methods to implement correct? For example, if I am going to write a class to represent a car, I will provide properties for the wheels etc., but if I am asked to write a class for a car with no wheels, I will not provide the wheels and thus no method to "drive" the car. Is this what is meant by getting the abstraction right?
Thanks
Not quite.
Providing the right level of abstraction is knowing how much of the information from the lower levels to pass up through your level.
Suppose you were writing a high level HTTP library. Perhaps you would provide a Get() method, a Head() method, a Post() method etcetera, but you wouldn't need to provide access to the underlying Sockets because you are abstracting that detail away from the user.
And below that Socket you are using, there are layers of abstraction that you don't need to deal with. (You only access the abstraction one layer below you; beyond that, it is the job of that layer to deal with the layer below it, and so forth.)
For instance, you don't care about the sliding window flow control protocol because TCP is abstracting away those details.
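A rough sketch of that idea (a hypothetical class; a real library would handle headers, errors, redirects and much more). The public surface is HTTP verbs; the socket stays private:

    using System.IO;
    using System.Net.Sockets;

    // Callers see HTTP verbs, never the socket.
    public class SimpleHttpClient
    {
        private readonly string host;

        public SimpleHttpClient(string host) { this.host = host; }

        public string Get(string path)  { return Send("GET", path); }
        public string Head(string path) { return Send("HEAD", path); }

        // The Socket (and, below it, TCP's sliding window) is hidden here.
        private string Send(string verb, string path)
        {
            using (var client = new TcpClient(host, 80))
            using (var stream = client.GetStream())
            using (var writer = new StreamWriter(stream))
            using (var reader = new StreamReader(stream))
            {
                writer.Write(verb + " " + path + " HTTP/1.1\r\n" +
                             "Host: " + host + "\r\nConnection: close\r\n\r\n");
                writer.Flush();
                return reader.ReadToEnd();
            }
        }
    }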
--
If you are coding at too high an abstraction layer for the purposes you are trying to achieve, you will keep running into things the layer won't let you do. When you are fighting the library for control, it is an indication that perhaps you are working at too high a level.
Conversely, if you are coding at too low a level of abstraction, you will get lost in the implementation details. Going back to my HTTP example: if you just want to run a GET request against the server and you find yourself implementing a TCP handshake in your code, then perhaps you either want to use a library, or abstract your TCP code out into a library and interface with it through that.
--
In one class that I took on the subject, the teacher had an interesting method of explaining abstractions. He had us think of them simply as a 'point of view' or 'perspective' on an object or a scenario.
The details that are important from one perspective aren't important at all from another perspective.
He put a book on a table and assigned roles to students such as "Reader", "Book Seller", "Author", "Librarian", or "Book Shipper" and asked us what details about the book we thought were important to us in that role. Depending on the role assigned to a person, their answers varied widely.
This represents an abstraction: needing only those details that are important to you, and letting all other details be handled elsewhere (or simply fall by the wayside).
I don't think that's it.
To me, abstraction is synonymous with generalization. The more abstract something is, the more the author is trying to think about a problem in such a way that it's easier to extend and apply to new problems.
In general, greater abstraction requires more effort to develop and to understand. If you're a developer, and you're given a highly abstract framework to work with, you'll be forced to think in terms of the framework rather than using concepts that your common sense might suggest.
So, as in your example, a Car would be a very low level of abstraction. A RollingVehicle might be a higher one, and Transport might be the most abstract of all.
The right choice depends on how broadly you wish to apply your classes and how easily understood you'd like them to be.
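In code, those three levels might look something like this (hypothetical types; the point is how little the most abstract level promises):

    // Most abstract: all we promise is that it can move something somewhere.
    public interface ITransport
    {
        void MoveTo(string destination);
    }

    // More specific: adds the notion of wheels.
    public abstract class RollingVehicle : ITransport
    {
        public int WheelCount { get; protected set; }
        public abstract void MoveTo(string destination);
    }

    // Least abstract: a concrete car with concrete behaviour.
    public class Car : RollingVehicle
    {
        public Car() { WheelCount = 4; }

        public override void MoveTo(string destination)
        {
            // Drive there.
        }
    }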
I think one dangerous aspect of abstraction is its ability to erase or hide the reality or the design it represents. You should always maintain a reasonable distance between what you represent and the representation. By "reasonable" I mean easily understandable by an external developer who hasn't been coding on this specific project.
Joel Spolsky put it quite well when talking about the dangers of "architecture astronauts":
When great thinkers think about problems, they start to see patterns. They look at the problem of people sending each other word-processor files, and then they look at the problem of people sending each other spreadsheets, and they realize that there's a general pattern: sending files. That's one level of abstraction already. Then they go up one more level: people send files, but web browsers also "send" requests for web pages. And when you think about it, calling a method on an object is like sending a message to an object! It's the same thing again! Those are all sending operations, so our clever thinker invents a new, higher, broader abstraction called messaging, but now it's getting really vague and nobody really knows what they're talking about any more. Blah.

How should Web Application Interfaces be designed?

I am building a web application and have been told that using object-oriented programming paradigms affects the performance of the application.
I am looking for input and recommendations about design choices that come from moving from One Giant Function to an Object-Oriented Programming Interface.
To be more specific: if a web application used OOP and created objects that live for a very short time, would the performance hit of creating objects on the server justify a more functional (I am thinking static functions here) design?
Wow, big question, but in short (and comment if you want more info): OOP code and practices (ideally well written, at that) will give you far more maintainability, testability and joy to code than OGF coding.
As for the speed argument, that is really only an issue if you are trying to squeeze every possible last ounce of CPU out of a server that's going to get hammered. In which case you are probably doing something wrong and need to think about better or more servers - or you work for NASA, or are doing it for a dare.
I don't know about performance, but it definitely makes the code easier to maintain.
"I am looking for input and recommendations about design choices that come from moving from One Giant Function to an Object-Oriented Programming Interface."
As David suggested, answering the above would require a lot of pages.
Perhaps you should be looking at frameworks. They make some design choices for you.
The most popular way of designing a web app with OOD/OOP is to use the Model-View-Controller pattern. To summarise the 3 main participants:
Model - I'm the stuff in the problem domain that you're manipulating.
View - I'm responsible for drawing and managing what you see in the browser. In web apps, this often means setting up an HTML template and pushing name-value pairs out into it.
Controller - I'm responsible for handling requests that come in from the web and working out what to do with them and arranging with the other objects to get that work done.
Start with the controller...
Views and Controllers often come in pairs. The controller accepts the HTTP request and works out what needs doing; it either does it (if the work is trivial) or delegates the work to another object. Typically it finds the view that is to be used and gives it to the object doing the actual work, for that object to write its output into.
What I've described here corresponds with what you'd expect to find in something like Ruby on Rails.
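A bare-bones C# sketch of that division of labour (all names hypothetical; real frameworks add routing, templating and much more):

    using System.Collections.Generic;

    // Model: the problem-domain object being manipulated.
    public class Article
    {
        public string Title { get; set; }
        public string Body { get; set; }
    }

    // Stands in for whatever loads the model (ORM, SOA layer, ...).
    public class ArticleRepository
    {
        public Article Find(int id)
        {
            return new Article { Title = "Hello", Body = "World" };
        }
    }

    // View: turns name-value pairs into what the browser sees.
    public class ArticleView
    {
        public string Render(IDictionary<string, string> values)
        {
            return "<h1>" + values["title"] + "</h1><p>" + values["body"] + "</p>";
        }
    }

    // Controller: accepts the request, works out what to do,
    // and arranges for the model and view to do it.
    public class ArticleController
    {
        private readonly ArticleRepository repository;

        public ArticleController(ArticleRepository repository)
        {
            this.repository = repository;
        }

        public string Show(int id)
        {
            Article article = repository.Find(id);
            return new ArticleView().Render(new Dictionary<string, string>
            {
                { "title", article.Title },
                { "body", article.Body }
            });
        }
    }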
Making lots of objects that you use once is certainly a concern - but I wouldn't worry about that aspect of performance up front. Proper virtual machines know how to manage short-lived objects. There are plenty of things you can do to speed up a web app - and I wouldn't start by sacrificing the benefit of clear organization for a speed-up that might not even be the most important optimization...
MVC isn't the only way to go; there are other patterns, like Model-View-Presenter, and some really unusual approaches, like continuation servers (e.g. Seaside), which reuse the same objects between HTTP requests...
Yes. When taking an OO approach to web app development (using Seaside), I can deliver functionality so much faster that I have sufficient time to think about how to deliver the right amount of performance.

Jumping into N-Tier architecture with WCF?

I work for a large state government agency that is a tad behind the times. Our skill sets are outdated and budgetary freezes prevent any training or hiring of new employees/consultants (firing people is also impossible). Designing business objects, implementing design patterns, establishing code libraries and services, unit testing, source control, etc. are all things that you will not find being done here. We are as much of a 0 on the Joel Test as you can possibly get. The good news is that we can only go up from here!
We develop desktop CRUD applications (in C++, C#, or Java) that hit the Oracle database directly through an ODBC connection. We basically have GUIs littered with SQL statements and patchwork code. We have been told to move towards a service-oriented n-tier architecture to prevent direct access to the database and remove the need for the Oracle Client on user machines.
Is WCF the path we should be headed down? We've done a few of the n-tier application walkthroughs (like this one) and they seem easy to implement, but we just don't know enough to understand whether we are even considering the right technologies. Utilizing the .NET-generated typed DataSets seems like a nice stopgap to save us months or years of work (as opposed to creating new business objects from the ground up for numerous projects). Is this canned approach viable as a first step?
I recently started using WCF services for my Data Layer in some web applications and I must say, it's frustrating at the beginning (the first week or so), but it is totally worth it once the code is deployed.
You should first try it out with a small existing app, or maybe a proof of concept to make sure it will fit your needs.
From the description of the environment you are in, I'm sure you'll realize the benefit almost immediately.
The last company I worked for chose WCF for almost exactly the reason you describe above. There is lots of good documentation and books for WCF, it's relatively easy to get working, and WCF supports a lot of configuration options.
There can be some headaches when you start trying to bend WCF to work in a way not specifically designed out of the box. These are generally configuration issues. But sites like this or IDesign can help you through those.
First of all, I would definitely not (sorry for the emphasis) worry about the time you'll save using typed DataSets versus creating your own business objects. That is usually not where you will spend most of your development time. I prefer using business objects myself.
In your situation I would implement a proof of concept first - one that addresses all the issues you may encounter. This proof of concept should implement an entire use case, starting on the client, retrieving data from the database and returning it to the client. You should feel confident about your implementation before continuing.
Then there is the choice of technology. WCF is definitely a good choice for communication between your client applications and the service layer. I suppose both your clients and your service layer will become C# applications? That makes things a lot easier, since interoperability between different platforms (Java/C#, for example) is still not trivial, although it should work in most cases.
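For a concrete feel, a minimal sketch of such a service-layer contract (all names hypothetical; the point is that clients talk to operations and data contracts, never to Oracle directly):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    [DataContract]
    public class CustomerDto
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
    }

    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        CustomerDto GetCustomer(int id);

        [OperationContract]
        void SaveCustomer(CustomerDto customer);
    }

    // The implementation is the only tier that knows about the database.
    public class CustomerService : ICustomerService
    {
        public CustomerDto GetCustomer(int id)
        {
            // ...query Oracle here and map the row onto the DTO...
            return new CustomerDto { Id = id, Name = "example" };
        }

        public void SaveCustomer(CustomerDto customer)
        {
            // ...validate and persist...
        }
    }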
Take a look at Entity Framework (there are already a couple of Oracle providers available for it) in conjunction with .NET 3.5 SP1, which enables built-in WCF serialization of your EF-generated classes.
Here is a good blog to get started: http://blogs.msdn.com/dsimmons
CSLA might be a good fit for your N-Tier desktop apps. It supports WCF, has a large dev community, and is well documented. It is very object oriented.

traversing object graph from n-tier client

I'm a student currently dabbling in a .NET n-tier app that uses NHibernate + WCF + WPF.
One of the things that is done quite terribly is object graph serialisation. In fact, it isn't done at all: currently associations are ignored and we are using DTOs everywhere.
As far as I can tell, one way to proceed is to predefine which objects and collections should be loaded and serialised to go across the wire, thus being able to present some associations to the client. However, this seems limited, inflexible and inconsistent (can you tell that I don't like this idea?).
One option that occurred to me was simply to replace the NHProxies that lazy-load collections on the client tier with a "disconnected proxy" that would retrieve the associated data over the wire. This would mean we'd have to expand our web service signature a little and do some hackery on our generated proxies, but it seemed like a good T4/other code-gen experiment.
As far as I can tell this is a common stumbling block, but after doing a lot of reading I haven't been able to find any good, generally accepted solutions. I'm looking for a bit of direction as much as any particular solution, but if there is an easy way to make the client "feel" connected, please let me know.
You ask a very good question that unfortunately does not have a very clean answer. Even if you were able to get lazy loading to work over WCF (which we were able to do), you would still have issues with the proxy interceptor. Trust me on this one: you want POCO objects on the client tier!
What you really need to consider (and, from the research I have seen, what has been conceived as the industry-standard approach to this problem) is called persistence vs. usage, or persistence ignorance. In other words, your object model and mappings represent your persistence domain, but they do not match your ideal usage scenarios. You don't want to bring the whole database down to the client just to display a couple of properties, right?
It seems like such a simple problem, but the solution is either very simple or very complex. On the one hand, you can design your entities around your usage scenarios, but then you end up with a proliferation of your object domain, making it difficult to maintain. On the other, you still want the rich object-model relationships in order to write granular business logic.
To simplify this problem, let's examine the two main gaps we need to fill: the gap between the database and the service layer, and the gap between the service and the client. NHibernate fills the first one just fine by providing an ORM to load data into your objects. It does a decent job, but in order to achieve great performance it needs to be tweaked using custom loading strategies. I digress...
The second gap, between the server and the client, is where things get dicey. To simplify, imagine that you did not send any mapped entities over the wire to the client. Try creating a mechanism that translates business entities into DTO objects, and likewise DTO objects back into business entities. That way your client deals only with DTOs (POCOs, of course), and your business logic can maintain its rich structure. This allows you to leverage not only NHibernate's lazy loading mechanism, but other benefits of the session, such as the L1 cache.
For brevity and intellectual-property reasons I will not go into the design of said mechanism, but hopefully this is enough information to point you in the right direction. If you don't care about performance or latency at all, just turn lazy loading off altogether and work through the serialization issues.
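As a generic illustration of that entity-to-DTO exchange (not the author's actual mechanism; all names are hypothetical):

    // Rich domain entity: stays on the server with its lazy-loaded associations.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public virtual Address Address { get; set; } // lazily loaded by NHibernate
    }

    public class Address
    {
        public string City { get; set; }
    }

    // Flat DTO: the only thing that crosses the wire to the client.
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string City { get; set; }
    }

    // The assembler walks exactly the associations this usage scenario needs,
    // inside the NHibernate session, so lazy loading and the L1 cache still apply.
    public static class CustomerAssembler
    {
        public static CustomerDto ToDto(Customer entity)
        {
            return new CustomerDto
            {
                Id = entity.Id,
                Name = entity.Name,
                City = entity.Address.City
            };
        }

        public static void UpdateEntity(Customer entity, CustomerDto dto)
        {
            entity.Name = dto.Name;
            entity.Address.City = dto.City;
        }
    }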
It has been a while for me, but the injected/disconnected proxies may not be as bad as they sound. Since you are a student, I am going to assume you have some time and want to muck around a bit.
If you want to inject your own custom serialization/deserialization logic, you can use IDataContractSurrogate, which can be applied using DataContractSerializerOperationBehavior. I have only done a few basic things with it, but it may be worth looking into. By adding some fun logic (read: potentially hackish) at this layer, you might be able to make the client feel more connected.
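A rough sketch of where such logic could plug in, assuming NHibernate's INHibernateProxy interface (only the first two members do anything interesting; the idea is to unwrap proxies to their real entity type before serialization):

    using System;
    using System.CodeDom;
    using System.Collections.ObjectModel;
    using System.Reflection;
    using System.Runtime.Serialization;
    using NHibernate.Proxy;

    public class NHibernateProxySurrogate : IDataContractSurrogate
    {
        // Tell the serializer to treat proxies as their underlying entity type.
        public Type GetDataContractType(Type type)
        {
            return typeof(INHibernateProxy).IsAssignableFrom(type) ? type.BaseType : type;
        }

        // Force initialization and hand the real entity to the serializer.
        public object GetObjectToSerialize(object obj, Type targetType)
        {
            var proxy = obj as INHibernateProxy;
            return proxy != null ? proxy.HibernateLazyInitializer.GetImplementation() : obj;
        }

        public object GetDeserializedObject(object obj, Type targetType) { return obj; }

        // The remaining members are not needed for this scenario.
        public object GetCustomDataToExport(MemberInfo memberInfo, Type dataContractType) { return null; }
        public object GetCustomDataToExport(Type clrType, Type dataContractType) { return null; }
        public void GetKnownCustomDataTypes(Collection<Type> customDataTypes) { }
        public Type GetReferencedTypeOnImport(string typeName, string typeNamespace, object customData) { return null; }
        public CodeTypeDeclaration ProcessImportedType(CodeTypeDeclaration typeDeclaration, ImportOptions importOptions) { return typeDeclaration; }
    }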
Here is an MSDN post by someone who came to the same realization: the DynamicProxy used by NHibernate makes it impossible to directly serialize NHibernate objects that use lazy loading.
If you are really determined to transport the object graph across the network and preserve lazy-loading functionality, take a look at some code I produced over here: http://slagd.com/?page_id=6 . Basically, it creates a fake session on the other side of the wire and allows the NHibernate proxies to retain their functionality. I'm not saying it's the right way to do things, but it might give you some ideas.