To be perfectly clear, I do not expect a solution to this problem. A big part of figuring this out is obviously solving the problem. However, I don't have a lot of experience with well-architected n-tier applications, and I don't want to end up with an unruly BLL.
At the moment of writing this, our business logic is largely an intermingled ball of twine: an intergalactic mess of dependencies, with the same business logic replicated more than once. My focus right now is to pull the business logic out of the thing we refer to as a data access layer, so that I can define well-known events that can be subscribed to. I think I want to support an event-driven/reactive programming model.
My hope is that there are certain attainable goals that tell me how to design this collection of classes in a manner well suited to business logic. If there are things that differentiate a good BLL from a bad BLL, I'd like to hear more about them.
As a seasoned programmer but fairly modest architect, I ask my fellow community members for advice.
Edit 1:
So the validation logic goes into the business objects, but that means that the business objects need to communicate validation errors/logic back to the GUI. That gets me thinking of implementing business operations as objects, rather than as plain methods, to provide a lot more metadata about the necessities of an operation. I'm not a big fan of code cloning.
Kind of a broad question. Separate your DB from your business logic (horrible term) with ORM tech (NHibernate, perhaps?). That lets you stay in OO land mostly (obviously), and you can mostly ignore the DB side of things from an architectural point of view.
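To make that concrete, here's a minimal sketch (all names hypothetical) of what "staying in OO land" tends to look like: a plain entity class plus a repository abstraction, with the NHibernate (or other ORM) implementation hidden away in the data layer:

```csharp
using System;

// Hypothetical domain entity: a plain C# object, no database code in sight.
public class Customer
{
    public virtual int Id { get; set; }           // virtual so an ORM can proxy it
    public virtual string Name { get; set; }
    public virtual DateTime ApprovedDate { get; set; }
}

// The business layer depends only on this abstraction; the ORM-backed
// implementation lives in the data layer and can be swapped out.
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Save(Customer customer);
}
```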
Moving on, I find Domain-Driven Design (DDD) to be the most successful method for breaking a complex system into manageable chunks, and although it gets no respect, I genuinely find UML - especially activity and class diagrams - to be critically useful in understanding and communicating system design.
General advice: interface everything, build your unit tests from the start, and learn to recognise and separate the reusable service components that can exist as subsystems. FWIW, if there's a bunch of you working on this, I'd also agree on and aggressively use StyleCop from the get-go :)
I have found some of the practices of Domain-Driven Design to be excellent when it comes to splitting up complex business logic into more manageable/testable chunks.
Have a look through the sample code from the following link:
http://dddpds.codeplex.com/
DDD focuses on your domain layer, or BLL if you like. I hope it helps.
We're just talking about this from an architecture standpoint, and the gist of it remains "abstraction, abstraction, abstraction".
You could use EBC to design top-down and pass the interface definitions to the programmer teams. Using a methodology like this (or any other visualisation technique) to visualize the dependencies prevents you from duplicating business logic anywhere in your project.
Hmm, I can tell you the technique we used for a rather large database-centered application. We had one class which managed the data layer, as you suggested, which had the suffix DL. We had a program which automatically generated this source file (which was quite convenient), though it also meant that if we wanted to extend functionality, you needed to derive from the class, since regenerating the source would overwrite it.
We had another file ending with OBJ which simply defined the actual database row handled by the data layer.
And last but not least, with a well-formed base class, there was a file ending in BS (standing for business logic) as the only file not generated automatically. It defined event methods such as "New" and "Save", such that by calling the base, the default action was done. Therefore, any deviation from the norm could be handled in this file (including complete rewrites of the default functionality if necessary).
You should create a single group of such files for each table and its child (or grandchild) tables which derive from that master table. You'll also need a factory which contains the full names of all objects, so that any object can be created via reflection. So to patch the program, you'd merely have to derive from the base functionality and update a line in the database so that the factory creates that object rather than the default.
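A minimal sketch of what such a reflection factory might look like, assuming the database stores an assembly-qualified type name per logical object (all names here are hypothetical):

```csharp
using System;

// Hypothetical factory: the database stores the full type name for each
// logical object, so a patch is just a new derived class plus a row update.
public static class BusinessObjectFactory
{
    // In the real system, the type name would be looked up from the database.
    public static object Create(string assemblyQualifiedTypeName)
    {
        Type type = Type.GetType(assemblyQualifiedTypeName, throwOnError: true);
        return Activator.CreateInstance(type);
    }
}

// Usage: after updating the factory row from "App.CustomerBS" to
// "App.PatchedCustomerBS", this returns the patched subclass without
// recompiling any of the callers.
// var customer = BusinessObjectFactory.Create(typeNameFromDatabase);
```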
Hope that helps, though I'll leave this as a community wiki response, so perhaps you can get some more feedback on this suggestion.
Have a look at this thread; it may give you some thoughts.
How should my business logic interact with my data layer?
This guide from Microsoft could also be helpful.
Regarding "Edit 1" - I've encountered exactly that problem many times. I agree with you completely: there are multiple places where the same validation must occur.
The way I've resolved it in the past is to encapsulate the validation rules somehow. Metadata/XML, separate objects, whatever. Just make sure it's something that can be requested from the business objects, taken somewhere else and executed there. That way, you're writing the validation code once, and it can be executed by your business objects or UI objects, or possibly even by third-party consumers of your code.
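For instance, something along these lines - a hypothetical sketch, not a prescription - where the rules are plain objects that the business object hands out to whoever wants to run them:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical rule object: the business object exposes its rules as data,
// so the UI (or any other consumer) can run the same checks without
// duplicating the validation code.
public class ValidationRule
{
    public string PropertyName { get; set; }
    public string ErrorMessage { get; set; }
    public Func<object, bool> IsValid { get; set; }   // executable wherever the rule travels
}

public class Order
{
    public string LastName { get; set; }

    // Written once here; executable by the business layer, the UI,
    // or a third-party consumer of the code.
    public IEnumerable<ValidationRule> GetValidationRules()
    {
        yield return new ValidationRule
        {
            PropertyName = nameof(LastName),
            ErrorMessage = "Last name is a required field.",
            IsValid = target => !string.IsNullOrEmpty(((Order)target).LastName)
        };
    }
}
```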
There is one caveat: some validation rules are easy to encapsulate/transport; "last name is a required field" for example. However, some of your validation rules may be too complex and involve far too many objects to be easily encapsulated or described in metadata: "user can include that coupon only if they aren't an employee, and the order is placed on labor day weekend, and they have between 2 and 5 items of this particular type in their cart, unless they also have these other items in their cart, but only if the color is one of our 'premiere sale' colors, except blah blah blah...." - you know how business 'logic' is! ;)
In those cases, I usually just accept the fact that there will be some additional validation done only at the business layer, and ensure there's a way for those errors to be propagated back to the UI layer when they occur (you're going to need that communication channel anyway, to report back persistence-layer errors).
I look around and see some great snippets of code for defining rules, validation, business objects (entities) and the like, but I have to admit to having never seen a great and well-written business layer in its entirety.
I'm left knowing what I don't like, but not knowing what a great one is.
Can anyone point out some good OO business layers (or great business objects) or let me know how they judge a business layer and what makes one great?
Thanks
I've never encountered a well-written business layer.
Here is Alex Papadimoulis's take on this:
[...] If you think about it, virtually every line of code in a software application is business logic:
The Customers database table, with its CustomerNumber (CHAR-13), ApprovedDate (DATETIME), and SalesRepName (VARCHAR-35) columns: business logic. If it wasn't, it'd just be Table032 with Column01, Column02, and Column03.
The subroutine that extends a ten-percent discount to first-time customers: definitely business logic. And hopefully, not soft-coded.
And the code that highlights past-due invoices in red: that's business logic, too. Internet Explorer certainly doesn't look for the strings "unpaid" and "30+ days" and go, hey, that sure would look good with a #990000 background!
So how then is it possible to encapsulate all of this business logic in a single layer of code? With terrible architecture and bad code, of course!
[...] By implying that a system’s architecture should include a layer dedicated to business logic, many developers employ all sorts of horribly clever techniques to achieve that goal. And it always ends up in a disaster.
I imagine this is because business logic, as a general rule, is arbitrary and nasty. Garbage in, garbage out.
Also, most of the really good business layers are most probably proprietary. ;-)
Good business layers have been designed after a thorough domain analysis. If you can capture the business' semantics and isolate it from any kind of implementation, whether that be in data storage or any specific application (including presentation), then the logic should be well-factored and reusable in different contexts.
Just as a good database schema design should capture business semantics and isolate itself from any application, a business layer should do the same. Even if a database schema and a business layer describe the same entities and concepts, the two should be usable in separate contexts: a database schema shouldn't have to change when the business logic changes, unless the schema no longer reflects the current business. A business layer should work with any storage schema, provided that it's abstracted via an intermediate layer. For example, the ADO.NET Entity Framework lets you design a conceptual schema which maps to the business layer and has a separate mapping to the storage schema, which can be changed without recompiling the business object layer or conceptual layer.
If a person from the business side of things can look at code written with the business layer and have a rough idea of what's going on, then it might be a good indication that the objects were designed right - you've successfully conveyed a solution in the problem domain without obfuscating it with artifacts from the solution domain.
I've always been stuck between a rock and a hard place. Ideally, your business logic wouldn't be at all concerned with database or UI-related issues.
Keys Cause Problems
Still, I find things like primary and foreign keys causing problems. Even tools like Entity Framework don't completely eliminate this creep. It can be extremely inefficient to convert IDs passed as POST data into their respective objects, only to pass them to the business layer, which then passes them to the data layer where they're just stripped back down to IDs again.
Even NoSQL databases come with problems. They tend to return full object models, but they usually return more than you need and can lead to problems because you're assuming that object model won't change. And keys are still found in NoSQL databases.
Reuse vs. Overhead
There's also the issue of code reuse. It's pretty common for data layers to return fully populated objects, including every column in that particular table or tables. However, the business logic often cares about only a limited subset of this information. That lends itself to specialized data transfer objects that only carry the relevant data. Of course, you then need to convert between representations, so you create a mapper class. Then, when you save, you need to somehow convert these lesser objects back into the full database representation or do a partial UPDATE (meaning another SQL command).
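A hypothetical sketch of that shape - the full row type mirrors the table, the DTO carries only what one use case needs, and a mapper converts between them:

```csharp
// Hypothetical names. The full row has every column; the business logic
// works against a narrow DTO instead.
public class CustomerRow            // mirrors the table: all columns
{
    public int Id;
    public string FirstName;
    public string LastName;
    public string Phone;
    public string Fax;              // ... and dozens more columns
}

public class CustomerNameDto        // just what this use case needs
{
    public int Id;
    public string FullName;
}

public static class CustomerMapper
{
    public static CustomerNameDto ToDto(CustomerRow row) =>
        new CustomerNameDto { Id = row.Id, FullName = row.FirstName + " " + row.LastName };
}
```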
So, I see a lot of business layer classes accepting objects that map directly to database tables (data transfer objects). I also see a lot of business layers accepting raw UI values (presentation objects) as well. It's also not unusual to see business layers calling out to the database mid-computation to retrieve needed data. Trying to grab it all up-front would probably be inefficient (think about how an if-statement can affect the data that gets retrieved), and lazy-loaded values result in a lot of magic or unintended calls out to the database.
Write Your Logic First
Recently, I've been trying to write the "core" code first. This is the code that performs the actual business logic. I don't know about you, but many times when going over someone else's code, I ask the question, "But where does it do [business rule]?" Often, the business logic is so crowded with concerns about grabbing data, transforming it, and whatnot that I can't even see it (needle in a haystack). So now I implement the logic first, and as I figure out what data I need, I add it as a parameter or add it to a parameter object. Getting the rest of the code to fit this new interface usually falls to a mediator class of some kind.
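As an illustration (hypothetical names), the rule ends up as a pure method over a parameter object, so it's readable at a glance; a mediator elsewhere is responsible for filling the parameter object in:

```csharp
// Hypothetical sketch of "logic first": the rule is pure, with no data
// access mixed in. What data it needs is discovered while writing it
// and added to the parameter type.
public class DiscountInput
{
    public bool IsFirstTimeCustomer;
    public decimal OrderTotal;
}

public static class DiscountPolicy
{
    // The business rule is visible at a glance: ten percent off first orders.
    public static decimal Apply(DiscountInput input) =>
        input.IsFirstTimeCustomer ? input.OrderTotal * 0.90m : input.OrderTotal;
}
```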
Like I said, though, you have to keep a lot in mind when writing business layers, including performance. The approach above has been useful lately because I don't have rights to version control or the database schema yet. I am working in a dark room with just my understanding of the requirements so far.
Write with Testing in Mind
Utilizing dependency injection can be useful for designing a good architecture up-front. Try to think about how you would test your code without hitting a database or other service. This also lends itself to small, reusable classes that can run in multiple contexts.
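A small hypothetical sketch of the idea - constructor injection against an interface, so a test never touches a database:

```csharp
// Hypothetical names: the business class takes its dependency through the
// constructor, so a test can pass a fake instead of hitting a database.
public interface IInvoiceStore
{
    decimal GetOutstandingBalance(int customerId);
}

public class CreditChecker
{
    private readonly IInvoiceStore store;

    public CreditChecker(IInvoiceStore store) { this.store = store; }

    public bool CanPlaceOrder(int customerId, decimal orderTotal) =>
        store.GetOutstandingBalance(customerId) + orderTotal <= 1000m;  // assumed credit limit
}

// In a unit test, IInvoiceStore is a hand-written fake returning a canned
// balance; in production it is the ADO.NET- or ORM-backed implementation.
```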
Conclusion
My conclusion is that there really is no such thing as a perfect business layer. Even in the same application, there can be times when one approach only works 90% of the time. The best we can do is try to write the simplest thing that works. For the longest time I avoided DTOs and wrapped ADO.NET DataRows with objects so updates were immediately recorded in the underlying DataTable. This was a HUGE mistake because I couldn't copy objects and constraints caused exceptions to be thrown at weird times. I only did it to avoid setting parameter values explicitly.
Martin Fowler has blogged extensively about DSLs. I would recommend starting there.
http://martinfowler.com/bliki/dsl.html
It was helpful to me to learn and play with CSLA.Net (if you are a MS guy). I've never implemented a "pure" CSLA application, but have used many of the ideas presented in the architecture.
Your best bet is keep looking for that elusive magic bullet and use the ideas that best fit the problem you are solving. Keep it simple.
One problem I find is that even when you have a nicely designed business layer it is hard to stop business logic leaking out, and development tools tend to encourage this. For example as soon as you add a validator control to an ASP.NET WebForm you have let business logic leak out into the view. The validation should occur in the business layer and only the results of it displayed in the view. And as soon as you add constraints to a database you then have business logic in your database as well. DBA types tend to disagree strongly with this last point though.
Neither have I. We don't create a business layer in our applications. Instead we use MVC-ARS. The business logic is embedded in the (S) state machine and the (A) action.
Possibly because in reality we are never able to fully decouple the business logic from the "process", the inputs, outputs, interface and that ultimately people find it hard to deal with the abstract let alone relating it back to reality.
What do you suggest for the data access layer? Using ORMs like Entity Framework and Hibernate, or code generators like SubSonic, .netTiers, T4, etc.?
For me, this is a no-brainer, you generate the code.
I'm going to go slightly off-topic here, because there's a bigger underlying fallacy at play. The fallacy is that these ORM frameworks solve the object/relational impedance mismatch. This claim is a barefaced lie.
I find the best way to resolve the object/relational impedance mismatch is to either use OOP exclusively and use an object database or use the idioms of the relational database exclusively and ignore OOP.
The abstraction "everything is a table" is to me, much more powerful than the abstraction "everything is a class." It takes less code, less intellectual effort and leads to faster code when you code to the database rather than to an object model.
To me this seems obvious. If your application is data driven then surely your code should be data driven too? Yet to say this is hugely controversial.
The central problem here is that OOP becomes a really leaky abstraction when used in conjunction with a database. Code that looks perfectly sensible when written to the idioms of OOP looks completely insane when you see the traffic that code generates at the database. When that messiness becomes a performance problem, OOP is the first casualty.
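Here's a hypothetical illustration of that leak - the classic N+1 problem under a lazy-loading ORM (entity names assumed):

```csharp
using System.Collections.Generic;

// Hypothetical entities whose collections an ORM populates lazily.
public class Customer  { public virtual IList<Order> Orders { get; set; } }
public class Order     { public virtual IList<OrderLine> Lines { get; set; } }
public class OrderLine { public virtual decimal Price { get; set; } }

public static class Totals
{
    // Reads as idiomatic OOP, but with lazy collections it issues
    // one query for the orders, then one more query PER order.
    public static decimal GrandTotal(Customer customer)
    {
        decimal total = 0m;
        foreach (var order in customer.Orders)      // 1 query
            foreach (var line in order.Lines)       // +1 query per order
                total += line.Price;
        return total;
    }
}

// The set-based equivalent is a single statement the database can optimize:
//   SELECT SUM(l.Price) FROM OrderLine l
//   JOIN [Order] o ON o.Id = l.OrderId
//   WHERE o.CustomerId = @customerId
```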
There is really no way to resolve this. Databases work with sets of data. OOP focuses on instances of classes. Trying to marry the two is always going to end in divorce.
So to answer your question, I believe you should generate your classes and try and make them map the underlying database structure as closely as possible.
Perhaps controversially, I've always felt that using code generators for the ADO.NET plumbing is fundamentally solving the wrong problem.
At some point, hopefully not too long after learning about Connection Strings, SqlCommands, DataAdapters, and all that, we notice that:
Such code is ugly
It is very boring to write
It's very easy to miss something if you're doing it by hand
It has to be repeated every time you want to access the database
So, the problem to solve is "how to do the same thing lots of times fast"?
I say no.
Using code generators to make this process quick still means that you have a ton of code, all the same, all over your business classes (or your data access layer, if you separate that from your business logic).
And then, if you want to do something generically (like track stored procedure usage, for instance), you end up having to customise your code generator if it doesn't already have the feature you want. And even if it does, you still have to regenerate everything all the time.
I like to do things once, not many times, no matter how fast I can do them.
So I rolled my own Data Access class that knows how to add parameters, set up and close connections, manage transactions, and other cool stuff. It only had to be written once, and calling its methods from a Business object that needs some database stuff done consists of one line of code.
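For illustration, a stripped-down sketch of what such a class can look like (hypothetical names; the real one also handled transactions and the other cool stuff):

```csharp
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Connections, commands, and parameters are handled in one place,
// so a business object call is a single line.
public class DataAccess
{
    private readonly string connectionString;

    public DataAccess(string connectionString) { this.connectionString = connectionString; }

    public DataTable ExecuteProc(string procName, IDictionary<string, object> args)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(procName, conn) { CommandType = CommandType.StoredProcedure })
        {
            foreach (var arg in args)
                cmd.Parameters.AddWithValue(arg.Key, arg.Value);

            var table = new DataTable();
            new SqlDataAdapter(cmd).Fill(table);   // the adapter opens and closes the connection
            return table;
        }
    }
}

// Business object usage: one line, no plumbing.
// var orders = dataAccess.ExecuteProc("GetOrders",
//     new Dictionary<string, object> { ["@CustomerId"] = 42 });
```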
When I needed to make the application support multithreaded database accesses, it required a change to the Data Access class only, and all the business classes just do what they already did.
There is no right answer; it all depends on your project. As Simon points out, if your application is all data-driven, then it might make sense, depending on the size and complexity of the domain, to use a non-OOP paradigm. I had a lot of success building a system using the Transaction Script pattern, which passed XML messages around the system.
However, this system started to break down after five or six years as the application grew in size and complexity (5 or 6 web sites, several web services, tons of COM+ components, legacy and .NET code, 8+ databases with 800+ tables and 4,000+ procedures). No one knew what anything was, and duplication was running rampant.
There are other ways than OOP to alleviate the maintenance burden; however, if you have a very complex domain, then having a rich domain model is ideal IMHO, as it allows the business rules to be expressed in nicely encapsulated components.
To answer your question: avoid code generators if you can. Code generators are a recipe for disaster, but if you do go with code generation, do not modify the generated code. Also, be sure to have a good process in place that makes it easy for developers to get the newly generated code.
I recommend either an ORM or a hand-rolled lightweight DAL. I am currently transitioning a project from my hand-rolled DAL over to NHibernate and am having a lot of success; however, I like having the option of using either. Furthermore, if you properly separate your concerns (data access from business layer from presentation), you can have a single service layer that talks to a DAO (Data Access Object) which, for one object, is an ORM and, for another, is hand-rolled. I like this flexibility, as it allows me to apply the best tool to the job.
I like NHibernate over a hand-rolled DAL because, while my DAL does abstract away most of the ADO.NET code, you still have to write the code that maps a data reader to an object, or takes an object and creates the parameters.
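This is the kind of repetitive mapping code I mean - a hypothetical example of what you still write by hand for every type, and exactly what an ORM generates for you:

```csharp
using System.Collections.Generic;
using System.Data;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical names: one of these mappers per mapped type.
public static class CustomerReaderMapper
{
    public static List<Customer> MapAll(IDataReader reader)
    {
        var result = new List<Customer>();
        while (reader.Read())
        {
            result.Add(new Customer
            {
                Id   = reader.GetInt32(reader.GetOrdinal("Id")),
                Name = reader.GetString(reader.GetOrdinal("Name"))
            });
        }
        return result;
    }
}
```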
I've always preferred to go the code generator route, especially in C# where you can make use of extended classes to add functionality to the basic data objects.
Hate to say this, but it depends. If you find an ORM tool that fits your needs, go for it. We wrote our own system in small steps while developing the application. We are using C++, and there are not that many tools out there anyway. Ours ended up being an XML description of the database; from that, the SQL generation script, the basic object layer, and the metadata were generated.
Do your homework and evaluate these tools, and you will find a good fit for your needs.
When designing an application, does there come a point where you have too many objects? How do you determine when you've crossed the line of granularity in your object model?
At the point when you wonder "why the hell is this an object?"
I recently had a "BetForgetter" class which I refactored into 2 lines of code in another class.
I assume that you're talking about types and not objects (I know it is called an object model, but it really is a model of the types involved).
Anyway, if you subscribe to the Single Responsibility Principle, which states that each type should only handle a single responsibility, the number of types will grow as the application grows.
However, each type should be fairly easy to comprehend due to its limited size, and assuming that you have some kind of structure in place, you should rarely (if ever) need to look at all the types at a time.
Managing a large software project is all about splitting things up into manageable pieces and labeling them in a sensible way. If you do that, the number of types becomes less important, in my experience.
This is probably a symptom of another problem.
IMHO - as long as you have good project, namespace, and/or directory organization, along with a reasonable naming convention, you should be able to handle any number of classes easily.
PS - I'm assuming you meant 'classes' when you said 'objects'.
I'm not sure what the answer is in the scope of an entire project, but I can usually tell that two classes might be better off as one if they depend too much on each other.
No hard and fast rule; it's pretty much up to the programmer's discretion.
But as a general rule of thumb, if you're wasting time writing classes for the tiniest operation, then that's too many... and too complex.
If you're writing megalumps of code, then that's too few.
Not only with too many objects but also with too much complexity: I detect the point when I feel lost reading my own code after one day off.
I'd say that you should probably worry more about having too few objects/classes. SRP is something worth abiding by, as it usually leads to cleaner and more readable code.
I think that it's impossible to answer your question without knowing your problem.
It depends a lot on the problem that you're trying to solve with your application.
Iraimbilanja is absolutely right. If you start creating objects for everything that crosses your path, you will soon be wondering what use they serve, and why the hell you spent time creating them.
You can never have too many of them, ask any decent Java programmer.
A project with proper OO design, with hundreds of objects, is in a much better state than one with fewer, monolithic classes.
How long is a piece of string? (Twice the length from the middle to the end, but that's beside the point.)
I think it really depends on what you are writing and how you have written it. The number of objects is a bad metric; you should be worrying about the complexity of your object model, not how many objects you have. How tightly coupled your objects are is what you need to be asking.
If you have 100 different types of Bird object, all of which implement an interface, and the shared methods are in an abstract class, then you don't have too many objects, because maybe you need each Bird to act differently.
However, if you have a mess of objects that all implement the same code, then you have too many objects and can refactor down to fewer.
Similarly, if you have one huge class that contains lots of disparate functionality, then you have too few classes.
Unless you have classes that do the same thing, or are replicating things that other classes in the framework already do, you are fine.
And remember: just because you can use inheritance doesn't mean you have to; composition is often better.
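To illustrate composition with the Bird example above (a hypothetical sketch): the varying behavior lives in small strategy objects, so 100 bird types become 100 configurations rather than 100 subclasses:

```csharp
// Hypothetical sketch: shared behavior composed in, not inherited.
public interface IFlightBehavior { string Fly(); }

public class Soaring    : IFlightBehavior { public string Fly() => "soars on thermals"; }
public class Flightless : IFlightBehavior { public string Fly() => "stays on the ground"; }

public class Bird
{
    private readonly IFlightBehavior flight;
    public string Name { get; }

    public Bird(string name, IFlightBehavior flight)
    {
        Name = name;
        this.flight = flight;
    }

    public string Describe() => Name + " " + flight.Fly();
}

// var eagle = new Bird("Eagle", new Soaring());
// var emu   = new Bird("Emu", new Flightless());
```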
Everyone I work with is obsessed with the data-centric approach to enterprise development and hates the idea of using custom collections/objects. What is the best way to convince them otherwise?
Do it by example and tread lightly. Anything stronger will just alienate you from the rest of the team.
Remember to consider the possibility that they're onto something you've missed. Being part of a team means taking turns learning & teaching.
No single person has all the answers.
If you are working on legacy code (e.g., apps ported from .NET 1.x to 2.0 or 3.5) then it would be a bad idea to depart from datasets. Why change something that already works?
If you are, however, creating new apps, there are a few things that you can cite:
Appeal to the pain of maintaining apps that stick with DataSets
Cite performance benefits for your new approach
Bait them with a good middle ground. Move to .NET 3.5 and promote LINQ to SQL, for instance: while still sticking to a data-driven architecture, it is a huge, huge departure from string-indexed DataSets, and it enforces... voila! Custom collections - in a manner that is hidden from them (sketched below).
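A minimal LINQ to SQL sketch (hypothetical table and column names) showing the typed collection you get in place of a string-indexed DataSet:

```csharp
using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

[Table(Name = "Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public int Id;
    [Column] public string Name;
    [Column] public DateTime ApprovedDate;
}

public static class Example
{
    public static void Run(string connectionString)
    {
        using (var db = new DataContext(connectionString))
        {
            // No more row["ApprovedDate"] string indexing; this is
            // compile-time checked and comes back as a typed collection.
            var recent = db.GetTable<Customer>()
                           .Where(c => c.ApprovedDate > DateTime.Today.AddDays(-30))
                           .ToList();
        }
    }
}
```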
What is important is that whatever approach you use you remain consistent, and you are completely honest with the pros and cons of your approaches.
If all else fails (e.g., you have a development team that utterly refuses to budge from old practices and is skeptical of learning new things), it's a very, very clear sign that you've outgrown your team and it's time to leave your company!
Remember to consider the possibility that they're onto something you've missed. Being part of a team means taking turns learning & teaching.
Seconded. The whole idea that "enterprise development" is somehow distinct from (and usually the implication is 'more important than') normal development really irks me.
If there really is a benefit for using some technology, then you'll need to come up with a considered list of all the pros and cons that would occur if you switched.
Present this list to your co workers along with explanations and examples for each one.
You have to be realistic when creating this list. You can't just say "Saves us lots of time!!! WIN!!" without addressing the fact that sometimes it is going to take MORE time, will require X months to come up to speed on the new tech, etc. You have to show concrete examples where it will save time, and exactly how.
Likewise you can't just skirt over the cons as if they don't matter, your co-workers will call you on it.
If you don't do these things, or come across as just pushing what you personally like, nobody is going to take you seriously, and you'll just get a reputation for being the guy who's full of enthusiasm and energy but has no idea about anything.
BTW. Look out for this particular con. It will trump everything, unless you have a lot of strong cases for all your other stuff:
Requires 12+ months work porting our existing code. You lose.
Of course, "it depends" on the situation. Sometimes DataSets or DataTables are better suited - when the business logic really is pretty light, the hierarchy of entities/records is flat, or you need some versioning capabilities.
Custom object collections shine when you want to implement a deep hierarchy/graph of objects that cannot be efficiently represented in flat 2D tables. What you can demonstrate is a large graph of objects and getting certain events to propagate down the correct branches without invoking inappropriate objects in other branches. That way it is not necessary to loop or Select through each and every DataTable just to get the child records.
For example, in a project I got involved in two and a half years ago, there was a UI module that was supposed to display questions and answer controls in a single WinForms DataGrid (to be more specific, it was Infragistics' UltraGrid). Some of the more tricky requirements:
The answer control for a question can be anything - text box, check box options, radio button options, drop-down lists, or even to pop up a custom dialog box that may pull more data from a web service.
Depending on what the user answered, it can trigger more sub-questions to appear directly under the parent question. If a different answer is given later, it should expose another set of sub-questions (if any) related to that answer.
The original implementation was written entirely with DataSets, DataTables, and arrays. The amount of looping through hundreds of rows across multiple tables was purely mind-bending. It did not help that the programmer came from a C++ background and attempted to ref everything (hello, objects living on the heap are accessed through reference variables, like pointers!). Nobody, not even the original programmer, could explain why the code did what it did. I came onto the scene more than six months after this, and it was still flooded with bugs. No wonder the second-generation developer I took over from decided to quit.
After two months of trying to fix the chaotic mess, I took it upon myself to redesign the entire module as an object-oriented graph to solve this problem. Yep, complete with abstract classes (to render a different answer control on a grid cell depending on question type), delegates, and eventing. The end result was a 2D DataGrid bound to a deep hierarchy of questions, naturally sorted according to the parent-child arrangement. When a parent question's answer changed, it would raise an event to the child questions, and they would automatically show/hide their rows in the grid according to the parent's answer. Only question objects down that path were affected. The UI responsiveness of this solution compared to the old method was better by orders of magnitude.
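A hypothetical reconstruction of the core idea, boiled down: each parent raises an event that only its own children subscribe to, so only the affected branch of the graph reacts:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch: answering a parent question raises an event handled
// only by its own children - no DataTable loops, no other branch touched.
public class Question
{
    public event EventHandler<string> AnswerChanged;
    public List<Question> SubQuestions { get; } = new List<Question>();
    public bool Visible { get; private set; }

    private readonly string triggerAnswer;

    public Question(string triggerAnswer = null)
    {
        this.triggerAnswer = triggerAnswer;
    }

    public void Answer(string value)
    {
        AnswerChanged?.Invoke(this, value);   // propagates down this branch only
    }

    public void AttachTo(Question parent)
    {
        parent.SubQuestions.Add(this);
        // Show this sub-question only when the parent's answer matches.
        parent.AnswerChanged += (sender, answer) => Visible = (answer == triggerAnswer);
    }
}
```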
Ironically, I wanted to post a question that was the exact opposite of this. Most of the programmers I've worked with have gone with the custom data objects/collections approach. It breaks my heart to watch someone with their SQL Server table definition open on one monitor, slowly typing up a matching row-wrapper class in Visual Studio on another monitor (complete with private fields and getter/setter properties for each column). It's especially painful if they're also prone to creating 60-column tables. I know there are ORM systems that can build these classes automagically, but I've seen the manual approach used much more frequently.
Engineering choices always involve trade-offs between the pros and cons of the available options. The DataSet-centric approach has its advantages (db-table-like in-memory representation of actual db data, classes written by people who know what they're doing, familiarity to a large pool of developers, etc.), as do custom data objects (compile-time checking, users don't need to learn SQL, etc.). If everyone else at your company is going the DataSet route, it's at least technically possible that DataSets are the best choice for what they're doing.
Datasets/tables aren't so bad are they?
The best advice I can give is to use it as much as you can in your own code, and hopefully, through peer reviews and bug fixes, the other developers will see how the code becomes more readable (make sure to push the point when these occurrences happen).
Ultimately, if the code works, then the rest is semantics, in my view.
I guess you can try selling the idea of O/R mapping and mapper tools. The benefit of treating rows as objects is pretty powerful.
I think you should focus on performance: create an application that shows the performance difference between using DataSets and custom entities. Also, try to show them Domain-Driven Design principles and how they fit with entity frameworks.
Don't make it a religion or faith discussion. Those are hard to win (and is not what you want anyway)
Don't frame it the way you just did in your question. The issue is not getting anyone to agree that this way or that way is the general way they should work. You should talk about how each person needs to think in order to make the right choice at any given time. Give an example of when to use a DataSet, and when not to.
I had developers using DataTables to store data they fetched from the database, and then business logic code working against those DataTables... And I showed them how I reduced the time to load a page from 7 seconds at 100% CPU (on the web server) to not being able to see the CPU line move at all, by changing the in-memory object from a DataTable to a Hashtable.
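In miniature, the kind of change that was (hypothetical schema; a generic Dictionary standing in for the Hashtable we used):

```csharp
using System.Collections.Generic;
using System.Data;

// Repeated DataTable.Select calls scan rows every time;
// a dictionary lookup after a one-time build is constant time.
public static class PriceLookup
{
    // Before: called thousands of times per page, each call scans the table.
    public static decimal FromDataTable(DataTable prices, int productId)
    {
        DataRow[] rows = prices.Select("ProductId = " + productId);
        return (decimal)rows[0]["Price"];
    }

    // After: build the map once, then every lookup is O(1).
    public static Dictionary<int, decimal> ToDictionary(DataTable prices)
    {
        var map = new Dictionary<int, decimal>();
        foreach (DataRow row in prices.Rows)
            map[(int)row["ProductId"]] = (decimal)row["Price"];
        return map;
    }
}
```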
So take an example or case that you think is better implemented differently, and win that battle. Don't fight a high-level war...
If interoperability is, or will be, a concern down the line, DataSet is definitely not the right direction to go in. You CAN expose DataSets/DataTables over a service, but whether you SHOULD is debatable. If you are talking .NET-to-.NET you're probably OK; otherwise you are going to have a very unhappy client developer on the other side of the fence consuming your service.
You can't convince them otherwise. Pick a smaller challenge or move to a different organization. If your manager respects you see if you can do a project in the domain-driven style as a sort of technology trial.
If you can profile, just do it and profile. DataSets are heavier than a simple Collection<T>.
DataReaders are faster than using adapters (see the sketch below)...
Changing behavior in an object is much easier than massaging a DataSet.
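If you want to measure it yourself, a minimal profiling sketch (hypothetical query and table) comparing the two paths:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;

public static class Profiler
{
    // DataReader path: stream rows straight into a typed collection.
    public static TimeSpan TimeReader(string connectionString)
    {
        var watch = Stopwatch.StartNew();
        var names = new List<string>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Name FROM Customers", conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                    names.Add(reader.GetString(0));
        }
        watch.Stop();
        return watch.Elapsed;
    }

    // Adapter path: fill a DataTable, paying its schema and row overhead.
    public static TimeSpan TimeAdapter(string connectionString)
    {
        var watch = Stopwatch.StartNew();
        var table = new DataTable();
        using (var adapter = new SqlDataAdapter("SELECT Name FROM Customers", connectionString))
            adapter.Fill(table);
        watch.Stop();
        return watch.Elapsed;
    }
}
```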
Anyway: Just Do It, ask for forgiveness not permission.
Most programmers don't like to stray out of their comfort zones (note that the intersection of the 'most programmers' set and the 'Stack Overflow' set is probably the empty set). "If it worked before (or even just worked), then keep on doing it." The project I'm currently on required a lot of argument to get the older programmers to use XML/schemas/DataSets instead of just CSV files (the previous version of the software used CSVs). It's not perfect; the schemas aren't robust enough at validating the data. But it's a step in the right direction. The code I develop uses OO abstractions over the DataSets rather than passing DataSet objects around. Generally, it's best to teach by example, one small step at a time.
There is already some very good advice here but you'll still have a job to convince your colleagues if all you have to back you up is a few supportive comments on stackoverflow.
And, if they are as sceptical as they sound, you are going to need more ammo.
First, get a copy of Martin Fowler's "Patterns of Enterprise Application Architecture", which contains a detailed analysis of a variety of data access techniques.
Read it.
Then force them all to read it.
Job done.
Data-centric means less code complexity.
Custom objects mean potentially hundreds of additional objects to organize, maintain, and generally live with. It's also going to be a bit faster.
I think it's really a code-complexity vs performance question, which can be answered by the needs of your app.
Start small. Is there a utility app you can use to illustrate your point?
For instance, at a place where I worked, the main application had a complicated build process, involving changing config files, installing a service, etc.
So I wrote an app to automate the build process. It had a rudimentary WinForms UI. But since we were moving towards WPF, I changed it to a WPF UI while keeping the WinForms UI as well, thanks to Model-View-Presenter. For those who weren't familiar with Model-View-Presenter, it was an easily comprehensible example they could refer to.
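A bare-bones sketch of the pattern (hypothetical names): the presenter owns the logic and talks to a view interface that both UIs implement, which is why adding the WPF front end didn't require touching the build logic:

```csharp
// Hypothetical MVP sketch: the presenter never references WinForms or WPF.
public interface IBuildView
{
    string ConfigPath { get; }
    void ShowStatus(string message);
}

public class BuildPresenter
{
    private readonly IBuildView view;

    public BuildPresenter(IBuildView view) { this.view = view; }

    public void RunBuild()
    {
        // ... change config files, install the service, etc. ...
        view.ShowStatus("Build of " + view.ConfigPath + " completed.");
    }
}

// A WinForms Form and a WPF Window each implement IBuildView and construct
// a BuildPresenter around themselves; the presenter stays identical.
```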
Similarly, find something small where you can show them what a non-DataSet app would look like without having to make a major development investment.