Difference between interface and API

Could you please explain the difference between an interface and an API?
I was looking for this information here and via Google, but I only found Oracle-specific information.
What I'm looking for is the general difference.
Much appreciated!
Update:
Thank you all for the answers. My question was kept deliberately general because I
1) do not have detailed information about the programming languages used (the question is based on brief information about one vendor's implementation in my project);
2) wanted to understand the general, high-level difference between the two terms.

Without further context, your question is a bit broad; but let's try, by looking up the definition of API on Wikipedia:
In computer programming, an application programming interface (API) is a set of subroutine definitions, protocols, and tools for building software and applications.
Then: API stands for application programming interface, indicating that an API contains all elements (plural!) required to build an application that interacts with the component behind that API.
Whereas an interface in its stricter sense typically denotes a single, specific entity; the List interface in Java, for example, describes an ordered collection (without providing details about any specific implementation).
But of course, there are certain commonalities - both terms are about the description of the "boundary" of a "system". Long story short: there is simply no sharp, precise separation between those two concepts. Thus, when using those words within a group of people, you might want to first "step back" and discuss terminology - to ensure that all people involved have the same understanding of such terms. Or as DDD puts it: you want to create a Ubiquitous Language in which such terms have a clear, well defined meaning.
Finally: it is also worth mentioning that the term interface has different meanings in different programming languages. In Java, an interface is a concept embodied in the language core, whereas in C++ an "interface" would probably be seen as the contents of a single header file, leading to subtle but important "flavors" of "interface" in those two languages.
Edit:
Both interface and API should avoid exposing (too much of?!) the internals to the outside world.
Generally speaking, an API outlines a "component" (a complete "application", for example), whereas an interface might outline a "smaller" entity.
For your other refinement, about one company providing the API and another the interface, I can't say anything. As others have explained, the definitions of those terms are really too unclear/fuzzy. We would need to know much more about the application and its requirements to even comment on your statement here.

The question "what is the difference between X and Y" is only meaningful when X and Y have single meanings and strict definitions. But "interface" and "API" do not have either single meanings nor strict definitions, so one cannot tell what is the difference between them.
For the most part, there is no difference, but there exist certain contexts where one would be suitable to use, while the other would be less suitable, or even unsuitable.
So, for example, a class implements an interface, never an API. On the other hand, an entire software system is more likely to be said to expose APIs rather than interfaces, though to say interfaces in this case would not be wrong either.
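That first usage can be sketched in Python; the Comparable contract and the Version class below are hypothetical names, chosen only to illustrate what "a class implements an interface" looks like:

```python
from abc import ABC, abstractmethod

# An "interface" in the narrow sense: one specific, named contract.
class Comparable(ABC):
    @abstractmethod
    def compare_to(self, other) -> int:
        """Return a negative, zero, or positive number, like Java's Comparable."""

# A class implements the interface; nobody would say it "implements an API".
class Version(Comparable):
    def __init__(self, major: int, minor: int):
        self.major, self.minor = major, minor

    def compare_to(self, other) -> int:
        # Compare majors first; fall back to minors when majors are equal.
        return (self.major - other.major) or (self.minor - other.minor)
```
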
I wish there was some easy distinction, like "interfaces are small-scale, APIs are large-scale", or "interfaces are more specific, APIs are more nebulous", but there is no such distinction.


OOP design of tool for teaching formal languages & automata [closed]

Closed 9 years ago.
I am thinking of using some spare time to play around with designing and implementing a teaching tool for a course on formal languages and automata theory. I am trying to decide whether an OOP implementation would be appropriate and, if so, whether anyone can suggest high-level improvements to the design I outline below.
There are lots of potential classes revealed in the linguistic analysis. Some (and please let me know if I've missed anything fundamental) are: grammar; nonterminal; terminal; production; regular grammar; context-free grammar; context-sensitive grammar; unrestricted grammar; automaton; state; symbol; transition; DFA; NFA; NFA-Lambda; DPDA; PDA; LBA; Turing machine.
Question 1: Should each kind of grammar get its own class in the implementation, or should an over-arching grammar class have methods to determine what kind of grammar it is (e.g., "isRegular()", "isContextFree()", etc.) (more generally, should classes which differ only a little in the domain model, and only in terms of behavior, be represented via inheritance in the implementation, or is it better to simply push different kinds of behavior into the parent class?)
Question 2: Should things like "Symbol", "State", "Nonterminal", etc. get their own classes in the implementation, or should these be dominated by their containers? (more generally, should very simple classes in the domain model be given their own classes in the implementation - e.g. for extensibility - or should that be pushed into the container class?)
Question 3: Should Transition be its own class in the implementation and, if so, will I need to subclass it in order to support each kind of automaton (since, besides differing in terms of the state, they also differ in terms of what happens during transitions)? (more generally, is it good practice to have two abstract parent classes where there is a bijection between the children of one and the children of another... coupling?)
I realize that, at the end of the day, a lot of these decisions are simply design decisions, but I would like to know what you guys think about best practices in OOP design. Moreover, and the reason I'm not just asking the "more generally" questions as pure OOP design questions is that I'd like special perspective from people who have experience with this kind of domain (languages & automata).
Any help is greatly appreciated.
more generally, should classes which differ only a little in the domain model, and only in terms of behavior, be represented via inheritance in the implementation, ...
Behavior is not "only a little": behavior is the most important part of an object.
or is it better to simply push different kinds of behavior into the parent class?
Certainly not. That would be a violation of the Liskov substitution principle.
Inheritance should not be used for "easier access to common stuff".
The contract of the parent class must hold completely for every child class; avoid inheritance and use composition where a child cannot comply with it.
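As a rough sketch of that advice (a hypothetical Stack example): inheriting a stack from a list type would expose operations that break the stack's contract, so composition is the safer choice:

```python
# Inheriting from list would drag in insert()/remove()/indexing, which a
# stack must not offer; composition keeps only the compliant operations.
class Stack:
    def __init__(self):
        self._items = []          # has-a list, is-not-a list

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def __len__(self):
        return len(self._items)
```
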
more generally, should very simple classes in the domain model be given their own classes in the implementation - e.g. for extensibility - or should that be pushed into the container class?
It kind of depends on how "deep" your logic is going to go.
Usually it's a good idea to start decomposition only when you hit some limit.
Some people call it evolutionary design.
For example, I use rules like "a class must not be larger than ~200 lines of code", "an object must not talk to more than 5-7 other objects", "a method should not be larger than ~10 lines of code", "the same logic should be written only once", etc.
Also, it's easy to underestimate semantics. if order.isOverdue() is much more readable and understandable than if order.dueDate < date.Now(). Introducing classes that reflect concepts of the domain greatly "humanizes" code and raises the abstraction level (think "asm vs. Java").
But decomposition for the sake of decomposition leads to unnecessary complexity.
It always must be justified.
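The order.isOverdue() point above, sketched in Python (the Order class and its fields are hypothetical):

```python
from datetime import date

class Order:
    def __init__(self, due_date: date):
        self.due_date = due_date

    def is_overdue(self, today: date) -> bool:
        # The domain concept gets a name; callers no longer repeat the
        # raw comparison order.due_date < today everywhere.
        return self.due_date < today
```
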
Should Transition be its own class in the implementation and, if so, will I need to subclass it in order to support each kind of automaton (since, besides differing in terms of the state, they also differ in terms of what happens during transitions)?
There is nothing wrong with that as long as it complies with your domain.
Creating abstractions is an artistic activity that depends heavily on your domain (e.g., advice from experts in formal languages & automata might be severely over-complicated if the requirement that your code serve as a teaching tool for a course is forgotten).
Good idea. I have done similar things, first with procedural programming and later with OO.
Answer 1: Both. Some grammars or tokens will have specific methods or attributes, while others should be shared among all grammars.
Answer 2: They should have their own classes, although they may share common ancestors.
Answer 3: It can be handled both ways, although defining a specific class could be useful. I agree that intersections or associations between other classes/objects exist, but they are difficult to model. The "proxy" design pattern is an example of that.
Teaching language design with lex, Bison, or yacc is difficult. I "heard" that ANTLR is a good tool for teaching compiler-related material.
What do you want to do?
An object-oriented parser/scanner?
There are some already; I have trouble understanding how to use them. Some of them declare new grammars the way you would define new classes, but in this case I find functional or logic programming syntax more suitable for declaring rules.
Visual related tools:
http://www.ust-solutions.com/ultragram.aspx
http://www.sand-stone.com/
http://antlr.org/
http://antlr.org/works/index.html
Good Luck ;-)
If you want to go with an existing solution instead of writing one, some friends of mine at NCSU wrote a tool called ProofChecker that might fit the bill.
It's written in Java, so that might satisfy your OO angle as well.

Why use classes instead of functions? [closed]

Closed 5 years ago.
I do know some advantages of classes, such as variable and function scopes, but other than that it just seems easier to me to have groups of functions rather than many instances and abstractions of classes. So why is the "norm" to group similar functions in a class?
Simple, non-OOP programs may be one long list of commands. More complex programs will group lists of commands into functions or subroutines, each of which might perform a particular task. With designs of this sort, it is common for the program's data to be accessible from any part of the program. As programs grow in size, allowing any function to modify any piece of data means that bugs can have wide-reaching effects.
In contrast, the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program. Instead, the data is accessed by calling specially written functions, commonly called methods, which are either bundled in with the data or inherited from "class objects" and act as the intermediaries for retrieving or modifying those data. The programming construct that combines data with a set of methods for accessing and managing those data is called an object.
Advantages of OOP programming:
MAINTAINABILITY Object-oriented programming methods make code more maintainable. Identifying the source of errors is easier because objects are self-contained.
REUSABILITY Because objects contain both data and the methods that act on that data, objects can be thought of as self-contained black boxes. This feature makes it easy to reuse code in new systems. Messages provide a predefined interface to an object's data and functionality. With this interface, an object can be used in any context.
SCALABILITY Object-oriented programs are also scalable. An object's interface provides a road map for reusing the object in new software and provides all the information needed to replace the object without affecting other code. This way aging code can be replaced with faster algorithms and newer technology.
The point of OOP is not to 'group similar functions in a class'. If this is all you're doing then you're not doing OOP (despite using an OO language). Having classes instead of just a bunch of functions has a side effect of 'variable and function scopes' that you mention, but I see it just as a side effect.
OOP is about such concepts as encapsulation, inheritance, polymorphism, abstraction and many others. It is a specific way of software design, a specific way of mapping a problem to a software solution.
Grouping functions in a class is by no means the norm. Let me share some of the things I have learned through experimentation with different languages and paradigms.
I think the core concept here is that of a namespace. Namespaces are so useful that they are present in almost any programming language.
Namespaces can help you overcome common problems and model patterns that appear in many domains: avoiding name collisions, hiding details, representing hierarchies, defining access control, grouping related symbols (functions or data), defining a context or scope... and I'm sure there are more applications.
Classes are a type of namespace, and the specific properties of classes vary from language to language and sometimes from version to version of the same language, e.g., some provide access modifiers, some do not; some allow inheritance from multiple classes, others do not. People have been trying to find the magic mix of features that will be the most useful and that in part explains the plethora of available options in different programming languages.
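A small sketch of "classes as namespaces" in Python (the store classes are hypothetical): two operations may share a name without colliding, because each lives in its own namespace:

```python
# Two unrelated "save" operations coexist; the class name qualifies them.
class DiskStore:
    @staticmethod
    def save(data: str) -> str:
        return "disk:" + data

class CloudStore:
    @staticmethod
    def save(data: str) -> str:
        return "cloud:" + data
```
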
So, why use classes, because they help solve certain kinds of problems in a way that seems natural or maybe intuitive. Every time we write a computer program we're trying to capture the essence of the problem and if the problem can be modeled by using some of the patterns mentioned above then it makes perfect sense to use those features of a language that help you do that.
As the problem becomes better understood you might realize that certain parts of the program could be better implemented by using a different paradigm/feature/pattern and then it's time for refactoring. Most programs I have had the chance to work on keep evolving until either the money/resources run out or when we arrive at the point of diminishing returns, many times you have something that's good enough for now and there's little incentive to keep working on it.
It's not the norm, it's just one way of doing it. Classes group methods (functions) AND data together, based on the concept of encapsulation.
For larger projects it often becomes easier to group things this way. Many people find it easier to conceptualize the problem with objects.
There are many reasons to use classes, not the least of which is encapsulation of logic. Objects more closely match the world we live in, and are thus often more intuitive than other methodologies. Consider a car: a car has properties like body color, interior color, engine horsepower, features, current mileage, etc. It also has methods, like Start(), TurnRight(0.30), ApplyBrakes(0.50). It has events, like the ding when you open your car door with the keys in the ignition.
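A minimal Python sketch of that car (the names and behavior are illustrative, not a real API):

```python
class Car:
    def __init__(self, body_color: str, horsepower: int):
        # Properties: the state the object carries around.
        self.body_color = body_color
        self.horsepower = horsepower
        self.running = False
        self.mileage = 0

    # Methods: behavior bundled with that state.
    def start(self):
        self.running = True

    def apply_brakes(self, pressure: float):
        if not 0.0 <= pressure <= 1.0:
            raise ValueError("pressure must be between 0 and 1")
        # ...slow the car proportionally to pressure...
```
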
Probably the biggest reason is that most applications seem to have a graphical component these days and most of the libraries for graphical user interface are implemented with object models.
Polymorphism is probably a big reason, too. The ability to treat multiple types of objects generically is quite helpful.
If you are a mathematician, a functional style may be more intuitive, ML, F#. If you’re interacting with data in a predictable format, a declarative style would be better like SQL or LINQ.
In simple words, it seems to me (apart from everything everyone else is saying) that classes are best suited for large projects, especially those implemented by more than one programmer, as they help keep things tidy; using loose functions in such situations can become rather cumbersome.
Otherwise, for simple programs/projects that you would implement yourself to do one thing or another, then functions would do nicely.

Why is multiple inheritance not supported in most programming languages?

Why is multiple inheritance not supported in most programming languages?
I could really use this feature to develop different layouts for an application.
Multiple inheritance is useful in many situations as a developer, but it greatly increases the complexity of the language, which makes life harder for both the compiler developers and the programmers.
One problem occurs when two parent classes have data members or methods of the same name. It is difficult to resolve which is being referenced by the sub-class.
Another occurs when two parent classes inherit from the same base class, forming a "diamond" pattern in the inheritance hierarchy.
The order in which the initialisation/elaboration of the parent classes happens needs to be specified; this can sometimes lead to behaviour changing when the order of inheritance changes, which may catch developers by surprise.
Some languages support a reference to 'super', or equivalent, which refers to an attribute of the base class for this object. That becomes difficult to support in a language with multiple inheritance.
Some languages attempt to provide automatic object-relational mapping, so that objects can be made persistent in a regular RDBMS. This mapping is difficult at the best of times (it has been described as the "Vietnam War" of software development), but it is much more difficult if multiple inheritance is supported.
One reason not to support it is ambiguity of method resolution.
http://en.wikipedia.org/wiki/Diamond_problem
However, I'm not sure what you mean by "most" programming languages. Many that are in use today support it directly (C++, Python, Perl, OCaml) or have a mechanism for similar functionality (Ruby and Scala come to mind).
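Since Python is on that list, here is a minimal sketch of the diamond and how Python disambiguates it with its method resolution order (C3 linearization):

```python
class A:
    def who(self):
        return "A"

class B(A):
    def who(self):
        return "B"

class C(A):
    def who(self):
        return "C"

class D(B, C):      # the diamond: D inherits A twice, via B and via C
    pass

# Python removes the ambiguity with a fixed lookup order:
# D.__mro__ is (D, B, C, A, object), so B's method wins.
```
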
The real reason why multiple inheritance is not supported in many languages is just the laziness of language developers. To cover up this embarrassing failure, all sorts of excuses are made ("it makes life difficult for the developer", bla bla), but for anyone who has actually used a language that implements it well, multiple inheritance becomes natural and easy after about a month. No big deal.
The only problem with it is after you have realized how useful and easy it is, you tend to become allergic to languages that don't support it and this may constrain your career prospects.
So my advice would be to stay well away from it.

Practical uses of OOP

I recently had a debate with a colleague who is not a fan of OOP. What took my attention was what he said:
"What's the point of doing my coding in objects? If it's reuse then I can just create a library and call whatever functions I need for whatever task is at hand. Do I need these concepts of polymorphism, inheritance, interfaces, patterns or whatever?"
We are in a small company developing small projects for e-commerce sites and real estate.
How can I take advantage of OOP in an "everyday, real-world" setup? Or was OOP really meant to solve complex problems and not intended for "everyday" development?
My personal view: context.
When you program in OOP you have a greater awareness of the context. It helps you organize the code in such a way that it is easier to understand, because the real world is also object oriented.
The good things about OOP come from tying a set of data to a set of behaviors.
So, if you need to do many related operations on a related set of data, you can write many functions that operate on a struct, or you can use an object.
Objects give you some code reuse help in the form of inheritance.
IME, it is easier to work with an object with a known set of attributes and methods than it is to keep a set of complex structs and the functions that operate on them.
Some people will go on about inheritance and polymorphism. These are valuable, but the real value in OOP (in my opinion) comes from the nice way it encapsulates and associates data with behaviors.
Should you use OOP on your projects? That depends on how well your language supports OOP. That depends on the types of problems you need to solve.
But, if you are doing small websites, you are still talking about enough complexity that I would use OOP design given proper support in the development language.
More than just getting something to work (your friend's point), a well-designed OO system is easier to understand, follow, expand, extend, and implement. It is so much easier, for example, to delegate work that is categorically similar, or to hold together data that belongs together (yes, even a C struct is an object).
Well, I'm sure a lot of people will give much more academically correct answers, but here's my take on a few of the most valuable advantages:
OOP allows for better encapsulation
OOP allows the programmer to think in more logical terms, making software projects easier to design and understand (if well designed)
OOP is a time saver. For example, look at the things you can do with a C++ string object, vectors, etc. All that functionality (and much more) comes for "free." Now, those are really features of the class libraries and not OOP itself, but almost all OOP implementations come with nice class libraries. Can you implement all that stuff in C (or most of it)? Sure. But why write it yourself?
Look at the use of Design Patterns and you'll see the utility of OOP. It's not just about encapsulation and reuse, but extensibility and maintainability. It's the interfaces that make things powerful.
A few examples:
Implementing a stream (decorator pattern) without objects is difficult
Adding a new operation to an existing system such as a new encryption type (strategy pattern) can be difficult without objects.
Look at the way PostgreSQL is implemented versus the way your database book says a database should be implemented and you'll see a big difference. The book will suggest node objects for each operator. Postgres uses myriad tables and macros to try to emulate these nodes. It is much less pretty and much harder to extend because of that.
The list goes on.
The power of most programming languages is in the abstractions that they make available. Object Oriented programming provides a very powerful system of abstractions in the way it allows you to manage relationships between related ideas or actions.
Consider the task of calculating areas for an arbitrary and expanding collection of shapes. Any programmer can quickly write functions for the area of a circle, square, triangle, etc., and store them in a library. The difficulty comes when trying to write a program that identifies and calculates the area of an arbitrary shape. Each time you add a new kind of shape, say a pentagon, you would need to update and extend something like an IF or CASE structure to allow your program to identify the new shape and call the correct area routine from your "library of functions". After a while, the maintenance costs associated with this approach begin to pile up.
With object-oriented programming, a lot of this comes for free: just define a Shape class that contains an area method. Then it doesn't really matter what specific shape you're dealing with at run time; just make each geometrical figure an object that inherits from Shape and call the area method. The object-oriented paradigm handles the details of whether, at this moment in time, with this user input, we need to calculate the area of a circle, triangle, square, pentagon, or the ellipse option that was just added half a minute ago.
What if you decided to change the interface behind the way the area function was called? With Object Oriented programming you would just update the Shape class and the changes automagically propagate to all entities that inherit from that class. With a non Object Oriented system you would be facing the task of slogging through your "library of functions" and updating each individual interface.
In summary, Object Oriented programming provides a powerful form of abstraction that can save you time and effort by eliminating repetition in your code and streamlining extensions and maintenance.
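The shape discussion above, as a small Python sketch (the class names follow the example; the code itself is illustrative):

```python
import math
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self) -> float: ...

class Circle(Shape):
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side: float):
        self.side = side

    def area(self) -> float:
        return self.side ** 2

# No IF/CASE on the kind of shape: adding Pentagon later means adding
# one class, not editing every caller.
def total_area(shapes) -> float:
    return sum(s.area() for s in shapes)
```
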
Around 1994 I was trying to make sense of OOP and C++ at the same time, and found myself frustrated, even though I could understand in principle what the value of OOP was. I was so used to being able to mess with the state of any part of the application from other languages (mostly Basic, Assembly, and Pascal-family languages) that it seemed like I was giving up productivity in favor of some academic abstraction. Unfortunately, my first few encounters with OO frameworks like MFC made it easier to hack, but didn't necessarily provide much in the way of enlightenment.
It was only through a combination of persistence, exposure to alternate (non-C++) ways of dealing with objects, and careful analysis of OO code that both 1) worked and 2) read more coherently and intuitively than the equivalent procedural code that I started to really get it. And 15 years later, I'm regularly surprised at new (to me) discoveries of clever, yet impressively simple OO solutions that I can't imagine doing as neatly in a procedural approach.
I've been going through the same set of struggles trying to make sense of the functional programming paradigm over the last couple of years. To paraphrase Paul Graham, when you're looking down the power continuum, you see everything that's missing. When you're looking up the power continuum, you don't see the power, you just see weirdness.
I think, in order to commit to doing something a different way, you have to 1) see someone obviously being more productive with more powerful constructs and 2) suspend disbelief when you find yourself hitting a wall. It probably helps to have a mentor who is at least a tiny bit further along in their understanding of the new paradigm, too.
Barring the gumption required to suspend disbelief, if you want someone to quickly grok the value of an OO model, I think you could do a lot worse than to ask someone to spend a week with the Pragmatic Programmers book on Rails. It unfortunately does leave out a lot of the details of how the magic works, but it's a pretty good introduction to the power of a system of OO abstractions. If, after working through that book, your colleague still doesn't see the value of OO for some reason, he/she may be a hopeless case. But if they're willing to spend a little time working with an approach that has a strongly opinionated OO design that works, and gets them from 0-60 far faster than doing the same thing in a procedural language, there may just be hope. I think that's true even if your work doesn't involve web development.
I'm not so sure that bringing up the "real world" would be as much a selling point as a working framework for writing good apps, because it turns out that, especially in statically typed languages like C# and Java, modeling the real world often requires tortuous abstractions. You can see a concrete example of the difficulty of modeling the real world by looking at thousands of people struggling to model something as ostensibly simple as the geometric abstraction of "shape" (shape, ellipse, circle).
All programming paradigms have the same goal: hiding unneeded complexity.
Some problems are easily solved with an imperative paradigm, like your friend uses. Other problems are easily solved with an object-oriented paradigm. There are many other paradigms. The main ones (logic programming, functional programming, and imperative programming) are all equivalent to each other; object-oriented programming is usually thought as an extension to imperative programming.
Object-oriented programming is best used when the programmer is modeling items that are similar, but not the same. An imperative paradigm would put the different kinds of models into one function. An object-oriented paradigm separates the different kinds of models into different methods on related objects.
Your colleague seems to be stuck in one paradigm. Good luck.
To me, the power of OOP doesn't show itself until you start talking about inheritance and polymorphism.
If one's argument for OOP rests on the concepts of encapsulation and abstraction, well, that isn't a very convincing argument for me. I can write a huge library and only document the interfaces to it that I want the user to be aware of, or I can rely on language-level constructs, like packages in Ada, to make fields private and only expose what I want to expose.
However, the real advantage comes when I've written code in a generic hierarchy so that it can be reused later such that the same exact code interfaces are used for different functionality to achieve the same result.
Why is this handy? Because I can stand on the shoulders of giants to accomplish my current task. The idea is that I can boil the parts of a problem down to the most basic parts: the objects that compose the objects that compose... the objects that compose the project. By using a class that defines behavior very well in the general case, I can use that same proven code to build a more specific version of the same thing, and then a more specific version, and then an even more specific version. The key is that each of these entities has commonality that has already been coded and tested, and there is no need to reimplement it later. If I don't use inheritance for this, I end up reimplementing the common functionality or explicitly linking my new code against the old code, which creates opportunities to introduce control-flow bugs.
Polymorphism is very handy in instances where I need to achieve a certain functionality from an object, but the same functionality is also needed from similar, but unique, types. For instance, in Qt there is the idea of inserting items onto a model so that the data can be displayed and you can easily maintain metadata for that object. Without polymorphism, I would need to bother myself with much more detail than I currently do (i.e., I would need to implement the same code interfaces that conduct the same business logic as the item that was originally intended to go on the model). Because the base class of my data-bound object interacts natively with the model, I can instead insert metadata onto the model with no trouble. I get what I need out of the object with no concern over what the model needs, and the model gets what it needs with no concern over what I have added to the class.
Ask your friend to visualize any object in his room, house, or city... and ask whether he can name a single such object that is a system in itself and capable of doing some meaningful work on its own. A button does nothing alone; it takes lots of objects to make a phone call. Similarly, a car engine is made of the crankshaft, pistons, and spark plugs. OOP concepts evolved from our perception of natural processes and things in our lives. The "Inside COM" book explains the purpose of COM with an analogy to a childhood game of identifying animals by asking questions.
Design trumps technology and methodology. Good designs tend to incorporate universal principles of complexity management, such as the Law of Demeter, which is at the heart of what OO language features strive to codify.
Good design is not dependent on the use of OO-specific language features, although it is typically in one's best interest to use them.
Not only does it make programming easier and more maintainable for other people (and yourself) in the current situation;
it also enables easier database CRUD (Create, Read, Update, Delete) operations.
You can find more info about it looking up:
- Java : Hibernate
- .NET : Entity Framework
See also how LINQ (Visual Studio) can make your programming life MUCH easier.
Also, you can start using design patterns for solving real-life problems (design patterns are all about OO).
Perhaps it is even fun to demonstrate with a little demo:
Let's say you need to store employees, accounts, members, books in a text file in a similar way.
P.S. I tried writing it in a PSEUDO way :)
the OO way
Code you call:
io.file.save(objectsCollection.ourFunctionForSaving())
class objectsCollection
    function ourFunctionForSaving() As String
        String _Objects = ""
        for each _Object in objectsCollection
            _Objects &= _Object & "-"
        end for
        return _Objects
    end function
end class
NON-OO Way
I don't think i'll write down non-oo code. But think of it :)
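The pseudocode above might look roughly like this in Java (names are illustrative, not a definitive implementation): each object knows how to serialize itself, so one collection class can save employees, accounts, members or books without caring which it holds.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical rendering of the pseudocode: anything storable can
// serialize itself, so the saving logic is written exactly once.
abstract class Storable {
    abstract String serialize();
}

class Employee extends Storable {
    private final String name;
    Employee(String name) { this.name = name; }
    String serialize() { return name; }
}

class ObjectsCollection {
    private final List<Storable> items = new ArrayList<>();
    void add(Storable s) { items.add(s); }

    // Equivalent of ourFunctionForSaving() in the pseudocode.
    String forSaving() {
        StringBuilder out = new StringBuilder();
        for (Storable s : items) {
            out.append(s.serialize()).append("-");
        }
        return out.toString();
    }
}

public class SaveDemo {
    public static void main(String[] args) {
        ObjectsCollection c = new ObjectsCollection();
        c.add(new Employee("alice"));
        c.add(new Employee("bob"));
        System.out.println(c.forSaving()); // alice-bob-
    }
}
```

The non-OO version would need a separate save routine (or a switch on the record type) for every kind of thing being stored.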
NOW LET'S SAY
In the OO way, the above class is the parent class of everything that saves books, employees, members, accounts, ...
What happens if we want to change the way of saving to a text file? For example, to make it compatible with a current standard (.CSV).
And let's say we would like to add a load function - how much code do you need to write?
In the OO way you only need to add one new method that splits the data back into its parts (this happens once).
Let your colleague think about that :)
In domains where state and behavior are poorly aligned, Object-Orientation reduces the overall dependency density (i.e. complexity) within these domains, which makes the resulting systems less brittle.
This is because the essence of Object-Orientation is based on the fact that, organizationally, it doesn't distinguish between state and behavior at all, treating both uniformly as "features". Objects are just sets of features clumped together to minimize overall dependency.
In other domains, Object-Orientation is not the best approach. There are different language paradigms for different problems. Experienced developers know this, and are willing to use whatever language is closest to the domain.

Service-Orientation vs Object-Orientation - can they coexist?

There's been a lot of interest in Service-Oriented Architecture (SOA) at my company recently. Whenever I try to see how we might use it, I always run up against a mental block. Crudely:
Object-orientation says: "keep data and methods that manipulate data (business processes) together";
Service-orientation says: "keep the business process in the service, and pass data to it".
Previous attempts to develop SOA have ended up converting object-oriented code into data structures and separate procedures (services) that manipulate them, which seems like a step backwards.
My question is: what patterns, architectures, strategies etc. allow SOA and OO to work together?
Edit: The answers saying "OO for internals, SOA for system boundaries" are great and useful, but this isn't quite what I was getting at.
Let's say you have an Account object which has a business operation called Merge that combines it with another account. A typical OO approach would look like this:
Account mainAccount = database.loadAccount(mainId);
Account lesserAccount = database.loadAccount(lesserId);
mainAccount.mergeWith(lesserAccount);
mainAccount.save();
lesserAccount.delete();
Whereas the SOA equivalent I've seen looks like this:
Account mainAccount = accountService.loadAccount(mainId);
Account lesserAccount = accountService.loadAccount(lesserId);
accountService.merge(mainAccount, lesserAccount);
// save and delete handled by the service
In the OO case the business logic (and entity awareness thanks to an ActiveRecord pattern) are baked into the Account class. In the SOA case the Account object is really just a structure, since all of the business rules are buried in the service.
Can I have rich, functional classes and reusable services at the same time?
My opinion is that SOA can be useful, at a macro level, but each service probably still will be large enough to need several internal components. The internal components may benefit from OO architecture.
The SOA API should be defined more carefully than the internal APIs, since it is an external API. The data types passed at this level should be as simple as possible, with no internal logic. If there is some logic that belongs with the data type (e.g. validation), there should preferably be one service in charge of running the logic on the data type.
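One way to get both, sketched below with hypothetical types: keep the business rule (`mergeWith`) inside the rich `Account` class, and make the service a thin facade that handles orchestration and persistence around it. The persistence calls are elided here since they would depend on your actual repository.

```java
// Rich domain object: the business rule lives with the data it manipulates.
class Account {
    private int balance;
    Account(int balance) { this.balance = balance; }
    int balance() { return balance; }

    void mergeWith(Account other) {
        this.balance += other.balance;
        other.balance = 0;
    }
}

// Thin service facade: owns the boundary, delegates the rule to the domain.
class AccountService {
    Account merge(Account main, Account lesser) {
        main.mergeWith(lesser);          // OO: rule stays in the class
        // save(main); delete(lesser);   // SOA: persistence stays at the edge
        return main;
    }
}

public class HybridDemo {
    public static void main(String[] args) {
        Account merged = new AccountService().merge(new Account(70), new Account(30));
        System.out.println(merged.balance()); // 100
    }
}
```

The service API stays simple and reusable, while the class stays functional rather than degrading into a bare data structure.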
SOA is a good architecture for communicating between systems or applications.
Each application defines a "service" interface which consists of the requests it will handle and the responses expected.
The key points here are well defined services, with a well defined interface. How your services are actually implemented is irrelevant as far as SOA is concerned.
So you are free to implement your services using all the latest and greatest OO techniques, or any other methodology that works for you. (I have seen extreme cases where the "service" is implemented by actual humans entering data on a screen - yet everything was still textbook SOA!)
I really think SOA is only useful for external interfaces (generally speaking, to those outside your company), and even then only in cases where performance doesn't really matter and you don't need ordered delivery of messages.
In light of that, I think they can coexist. Keep your applications working and communicating using the OO philosophy, and only when external interfaces (to third parties) are needed, expose them via SOA (this is not essential, but it is one way).
I really feel SOA is overused, or at least architectures with SOA are getting proposed too often. I don't really know of any big systems that use SOA internally, and I doubt they could. It seems like more of a thing you might just use to do mashups or simple weather forecast type requests, not build serious systems on top of.
I think that this is a misunderstanding of object orientation. Even in Java, the methods are generally not part of the objects but of their class (and even this "membership" is not necessary for object orientation, but that is a different subject). A class is just a description of a type, so this is really a part of the program, not the data.
SOA and OO do not contradict each other. A service can accept data, organize them into objects internally, work on them, and finally give them back in whatever format is desired.
I've heard James Gosling say that one could implement SOA in COBOL.
If you read Alan Kay's own description of the origins of OOP, it describes a bunch of little computers interacting to perform something useful.
Consider this description:
Your X should be made up of Ys. Each Y should be responsible for a single concept, and should be describable completely in terms of its interface. One Y can ask another Y to do something via an exchange of messages (per their specified interfaces).
In some cases, a Y may be implemented by a Z, which it manages according to its interface. No Y is allowed direct access to another Y's Z.
I think that the following substitutions are possible:
Term     Programming       Architecture
----     ---------------   ------------
X        Program           System
Y        Objects           Services
Z        Data structure    Database
----     ---------------   ------------
result   OOP               SOA
If you think primarily in terms of encapsulation, information hiding, loose coupling, and black-box interfaces, there is quite a bit of similarity. If you get bogged down in polymorphism, inheritance, etc. you're thinking programming / implementation instead of architecture, IMHO.
If you allow your services to remember state, then they can just be considered to be big objects with a possibly slow invocation time.
If they are not allowed to retain state then they are just as you've said - operators on data.
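That distinction can be shown with a small hedged sketch (hypothetical service classes): a stateless service is a pure operator on the data passed in, while a stateful one behaves like a big object whose answers depend on its history.

```java
// Stateless: no fields, the result depends only on the arguments.
class StatelessTaxService {
    static int tax(int amountCents, int ratePercent) {
        return amountCents * ratePercent / 100;
    }
}

// Stateful: remembers state between invocations - effectively a big,
// possibly slow-to-invoke object.
class StatefulCounterService {
    private int calls = 0;
    int nextTicket() { return ++calls; }
}

public class StateDemo {
    public static void main(String[] args) {
        System.out.println(StatelessTaxService.tax(1000, 20)); // 200
        StatefulCounterService s = new StatefulCounterService();
        System.out.println(s.nextTicket()); // 1
        System.out.println(s.nextTicket()); // 2 - same call, different answer
    }
}
```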
It sounds like you may be dividing your system up into too many services. Do you have written, mutually agreed criteria for how to divide?
Adopting SOA does not mean throwing out all your objects; it is about dividing your system into large reusable chunks.