Many developers seem to be either intimidated or a bit overwhelmed when an application design requires both procedural code and a substantial database. In most cases, "database" means an RDBMS with an SQL interface.
Yet it seems to me that many of the techniques for addressing the "impedance mismatch" between the two paradigms would be much better suited to an ISAM (indexed-sequential access method) toolset, where you can (must) specify tables, indexes, row navigation, etc. overtly - exactly the behavior prescribed by the ActiveRecord model, for instance.
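For concreteness, here is a minimal sketch of the overt navigation I mean, in Java against a purely hypothetical ISAM API (IsamTable and IsamCursor are invented names, not a real library):

IsamTable orders = IsamTable.open("orders.dat");         // hypothetical API
IsamCursor cur = orders.byIndex("customer_id_idx");      // you choose the access path yourself
cur.seek(42);                                            // position at first row with customer_id = 42
while (cur.valid() && cur.getInt("customer_id") == 42) {
    System.out.println(cur.getString("order_no"));       // explicit row-by-row navigation
    cur.next();
}
cur.close();
orders.close();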
In early PC days, dBASE and its progeny were the dominant DBMS platforms, and it was an enhanced ISAM. FoxPro continues this lineage quite successfully through to today. MySQL and Informix are two RDBMSs that were at least initially built on top of ISAM implementations, so this approach should be at least equally performant. I get the feeling that many developers who are unhappy with SQL are at least unconsciously yearning for the ISAM approach to be revived, so that the database could be more easily viewed as a set of massively efficient linkable hyper-arrays. It seems to me that it could be a really good idea.
Have you ever tried, say, an ORM-to-ISAM implementation? How successfully? If not, do you think it might be worth a try? Are there any toolsets for this model explicitly?
Maybe Pig Latin is what you want? According to this article
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=693D79B5EFDC0452E1C9A87D1C495D4C?doi=10.1.1.124.5496&rep=rep1&type=pdf :
"Besides, many of the people who ana-
lyze this data are entrenched
procedural programmers, who find the
declarative, SQL style to be
unnatural. The success of the more
procedural map-reduce programming
model, and its associated scalable
implementations on commodity hard-
ware, is evidence of the above.
However, the map-reduce paradigm is
too low-level and rigid, and leads to
a great deal of custom user code that
is hard to maintain, and reuse. We
describe a new language called Pig
Latin that we have designed to fit in a
sweet spot between the declarative
style of SQL, and the low-level,
procedural style of map-reduce."
There are certainly times and places where ISAM provides the services needed by the application with less cost and overhead than a full-blown SQL DBMS. One downside of an ISAM mechanism is that there isn't necessarily a system catalogue to describe the data; another is that generally there are few user-friendly tools to get at the data. These are both places where the RDBMS provides considerable advantage. The best ISAM (or similar) systems provide transaction support - even XA transactions, sometimes.
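To illustrate the catalogue point: any JDBC-compliant RDBMS can describe its own tables at runtime, which a bare set of ISAM files cannot. A minimal sketch (the connection URL and table name are assumptions):

import java.sql.*;

public class DescribeTable {
    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:postgresql:demo")) { // assumed URL
            DatabaseMetaData meta = con.getMetaData();
            // ask the system catalogue for the columns of an assumed 'orders' table
            try (ResultSet cols = meta.getColumns(null, null, "orders", "%")) {
                while (cols.next()) {
                    System.out.println(cols.getString("COLUMN_NAME")
                        + " : " + cols.getString("TYPE_NAME"));
                }
            }
        }
    }
}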
Where you need to do complex joins and computations (aggregates, for example), the work done by the DBMS provides huge benefits. Where all you need is access to records, then ISAM could be beneficial.
Security tends to be harder to enforce with an ISAM-based system than with a DBMS. Also, you need to worry about integrity of the files in case of a crash. Most DBMS use a two-process architecture (DBMS client in a separate process from the DBMS server), which provides resilience in the face of the client crashing (or the client PC being turned off). You also have to worry about backup and restore - a competent DBMS has systems in place for providing a coherent backup of a database while the database is in use; it is not clear that ISAM systems would provide that level of integrity.
Overall, given a suitable ISAM mechanism, there would at least sometimes, maybe often, be advantages to using an ISAM mechanism in an ORM system instead of a full RDBMS.
I implemented an ORM-to-ISAM library back in the 1990s that enjoyed some (very) modest success as shareware. I largely agree with what you say about the virtues of ISAMs and I think it better to use an ISAM when building an ORM layer or product if you are looking only for flexibility and speed.
However, the risk that you take is that you'll lose out on the benefits of the wide range of SQL-related products now on the market. In particular, reporting tools have evolved to be ever more tightly integrated with the most popular SQL packages. While ISAM product vendors in the 1990s provided ODBC drivers to integrate with products like Crystal Reports, it seemed, even then, that the market was trending away from ISAM and that I would be risking obsolescence if I continued using that technology. Thus, I switched to SQL.
One caveat: it has been nearly a decade since I was playing in the ISAM sandbox so I cannot purport to be up on the latest ISAM tools and their solutions to this problem. However, unless I was convinced that I was not going to be trapped without reporting tools support, I would not adopt an ISAM-based ORM regardless of its virtues. And that doesn't even cover the other tools available for SQL-based development!
I did my share of dBase, Clipper and FoxPro. However I believe the relational model provided by SQL is infinitely more powerful and useful, and products like Oracle and SQL Server deserve their success in the marketplace.
I'm always surprised that people make such a big deal of creating a mapping layer for the ~80-90% of cases and writing 10-20% custom SQL to deal with complex queries (mostly reports) and batch data movement. I must be doing something really right or something really silly by adopting the DAL/DAO model, given the level of hatred against Hibernate, ActiveRecord, etc. - vide the "Vietnam" discussion from earlier.
Multivalue database, anyone? (aka Pick) Think XML without the tags. They predate RDBMSs by at least a decade, and they're still going strong if you know where to look.
If you know exactly what you want to do with your data and how you want to do that, pick ISAM. You will be happy because you will have structured your indexes to serve your exact needs. Know upfront that if your needs change, you will want to change your indexing. Data access will be blazing fast.
If you are not sure what uses the data will be put to, or you know your data needs will change a lot over time, pick SQL. You will have the flexibility of ad hoc queries, quick reporting turnaround, data mining, etc.
Both types of databases have matured over the years. Both can have robust servers with live backup, transactions, security, metadata, etc.
Old question, but interesting discussion. The concepts of ISAM are important, and the additional features provided by today's RDBMSs (as discussed: e.g. backup, consistency, security, metadata) offer significant benefits.
The NoSQL craze (yes, I said it... craze) doesn't mean we can't model ISAM-like access inside the RDBMS. You can be sure I'm gonna push off as much logic to the DB as I possibly can, but there are times - like "traditional" data gridding/multi-dimensional data interpolation - where I'll traverse all necessary records via my own logical index.
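As a rough sketch of what I mean, assuming an open JDBC Connection 'con' and an invented grid(x, y, value) table keyed on (x, y): pin the access path down to an index-ordered range scan and walk the rows yourself:

try (PreparedStatement ps = con.prepareStatement(
        "SELECT x, y, value FROM grid WHERE x BETWEEN ? AND ? ORDER BY x, y")) {
    ps.setInt(1, 0);
    ps.setInt(2, 99);
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {                  // explicit row-by-row traversal,
            interpolate(rs.getInt("x"),      // much like ISAM next-record logic;
                        rs.getInt("y"),      // interpolate() is an invented helper
                        rs.getDouble("value"));
        }
    }
}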
For most programming tasks, you've got quite the selection of languages to choose from, and good strong communities behind plenty of them. But when you need to work with a database, there's really only one viable choice these days: SQL. Sure, there are different companies with different implementations and dialects, but you're still looking things up with
SELECT columns
FROM table
JOIN other_table ON criteria
WHERE other_criteria
It wasn't always this way, though. As late as the early 90s, there was no single obvious way to interact with a database. But today, there is. And with the way computer languages tend to proliferate rather than converge, I find that a bit odd. What historical and technical factors led to SQL's almost complete dominance of the database access domain?
It's like this Winston Churchill quote:
Indeed, it has been said that democracy is the worst form of government except all those other forms that have been tried from time to time.
There were alternative database technologies before 1970 when the relational model was first proposed. There have been alternatives the whole time since then, and there are new alternatives today.
But of all the alternatives, no solution besides SQL provides as good a balance for:
Widespread standardization
Popular and long-lived products such as Oracle
Plays nicely with many application programming languages
Support for formal data modeling, strong data integrity, ACID transactions
Here's a reference from the Codd Wikipedia article - some detail on how SQL 'won out'.
Committee on Innovations in Computing and Communications: Lessons from History: The Rise of Relational Databases.
Edgar F. Codd started the madness.
Enjoy!
Codd and Churchill aside, SQL isn't a horribly bad language for defining and querying table-based datasets. As another general said, "It got there the firstest with the mostest."
One factor is that data persists. It is a lot harder to replace/migrate a company's data than its applications. Applications can come and go, coded in the latest 'flavor of the month' language, but the database platform lives on. This is a bit like the QWERTY effect. While the QWERTY keyboard layout is known to be inefficient, it persists because there would be a massive cost in switching to anything else.
Secondly, there is massive market domination by Oracle and IBM (and more recently Microsoft). While they might not agree on every detail, none of them has seen a benefit to a non-SQL interface to their databases. I used Ingres back in the early 90s when its QUEL was being pushed out by SQL.
Thirdly, there's a benefit to the application developers (especially the likes of SAP and Oracle) to have a standard(ish) platform to sit on.
I suppose the flip side to this question is why do we need/want so many different programming languages.
Why aren't there many DDD stories that use newer NoSQL tools such as db4o and Cassandra for persistence?
In my view, the effort involved in O/R mapping seems too high for the returned value. Being able to report right off the database is the main advantage I can see for my projects.
On the other hand, db4o seems to almost be the Repository pattern and Cassandra's concept of Column Families and SuperColumns seems to be perfect for defining Aggregates and their value objects (the scalability would just be an added bonus). Yet, most of the online resources giving examples of DDD projects seem to always default to using [N]Hibernate.
I don't have enough time/resources to take big risks by trying these newer tools on my projects which makes me want to opt for a very well documented approach to persistence. Is it possible that O/R mapping remains the norm just because people are afraid to give up the oh so reliable SQL? Should I make the leap?
From what I've seen, DDD is most common in long-lived, business-oriented code bases. That's an area where the SQL database mindset reigns almost unchallenged so far. Some factors that play into that:
People writing long-lived code bases tend to like technologies that have been around a long time.
Large, business-oriented projects often take place in large businesses, which are naturally conservative.
If you are starting your project with any existing data, it's likely to be in an SQL database to start with, and existing code likely tied to that.
Most business projects are not very performance-sensitive, at least not in the same way that purely technical or consumer-focused efforts are.
And I'm sure there are more.
If you can't afford the financial risks that come with trying novel tools, then you should probably stick with the known thing. Some of the alternative persistence approaches are fantastic, and can get you radically more performance depending on need. But they are all early in their lifecycles. Although SQL databases have a lot of limitations, at least those limitations are pretty well known, both by you and the developers who will inherit your code.
Relational databases are designed for a specific category of use cases, particularly in business applications. As such, they have certain capabilities that are valuable in these scenarios. Data retrieval is often accompanied by sophisticated search and analysis. If you use NoSql or object databases, you may be giving up some of these capabilities in favor of others, such as the handling of huge, distributed datasets, a task at which NoSQL databases typically excel.
In other words, you may need more capabilities than just data persistence, capabilities which relational databases already provide. Relational databases are a mature, well-known and predictable technology, with many experts having abundant expertise in them. All of these reasons are good reasons for continuing to choose relational databases over more "exotic" solutions.
I have to prepare a case to convince managers to promote development using an ORM. I don't want to go into technical details in this case, the benefits have to be visible to business people.
I'm not quite happy with the arguments I've written down so far. Are there any points I'm forgetting, both PRO and CONTRA?
The case I'm going to make will be in two points:
Convince managers to use an ORM
Convince managers to use NHibernate
The PROs for ORM:
Solves the impedance mismatch between a rich ecosystem of connected objects with behaviour and tabular lists of scalar values
Higher productivity => reduced time writing tedious data access code lets you focus more on solving 'real' business issues
Higher maintainability => reduced number of LOC == system is easier to understand (hmm... maybe...)
Almost no performance hit when used right
The CONTRAs for ORM:
O/R mapping tools do not perform well with bulk processing of data. Stored procedures may have better performance, but are not portable
Heavy reliance on ORM software has been pointed to as a major factor in producing poorly designed databases
The PROs for NH:
Very mature product
Supports a lot of DBs => developers don't have to learn a new SQL dialect on every other project
High mind-share amongst .net community leaders
Many examples, articles, blog posts
It's open source
The CONTRAs for NH:
Not suited at all for batch processing
No code generation or code designer => some people think such tools make developers more productive
Bad reputation due to lazy coding (== abuse of lazy loading)
It's not from Microsoft
It's open source => some companies just don't like that
Business people typically think in terms of cost and deliverables. Beyond that, most don't grasp or care about the technical reasons.
You already mentioned the higher productivity aspect... try phrasing that as spending more time on business rules and less time on repetitive CRUD code.
I'd add, and highlight this:
Lower development friction makes meeting deadlines easier
Reduces cost of development in time spent working with the database
Reduces cost of maintenance
NHib is flexible; it can handle object mapping automatically, and allows specific queries/stored procedures when needed (see the sketch below)
Also check out Fluent NHibernate and Fluent Migrator.
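To make the flexibility point concrete, here is a rough sketch in Java Hibernate (NHibernate's parent project; the APIs are closely analogous), with invented entity and column names:

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;

@Entity
public class Customer {          // mapped automatically by convention
    @Id @GeneratedValue
    private Long id;
    private String name;         // column mapping inferred, no hand-written SQL
    // getters/setters omitted
}

// ...and a hand-written query only where the mapper needs help
// (lifetime_value is an assumed column):
List<Customer> vips = session
    .createNativeQuery("SELECT * FROM Customer WHERE lifetime_value > :v", Customer.class)
    .setParameter("v", 100000)
    .getResultList();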
I'd actually like to rebut some of the negative points.
O/R mapping tools do not perform well with bulk processing of data. Stored procedures may have better performance, but are not portable
NH has many optimizations for bulk data processing (batching and caching are the first that come to mind). And you can always add stored procedures for some cases; it's not an "all-or-nothing" proposal.
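For instance, a minimal write-batching sketch, shown in Java Hibernate since the pattern is the same in NHibernate; it assumes hibernate.jdbc.batch_size is set to 50 in the configuration and LogEntry is a mapped entity:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    session.persist(new LogEntry("row " + i));   // LogEntry: an assumed mapped entity
    if (i % 50 == 0) {                           // work in batch-sized chunks
        session.flush();                         // pushes a JDBC batch to the server
        session.clear();                         // evicts entities so memory stays flat
    }
}
tx.commit();
session.close();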
Heavy reliance on ORM software has been pointed to as a major factor in producing poorly designed databases
I've actually seen the opposite: supposedly optimized DB-first designs that fall apart when the real use cases are implemented. In any case, it's a developer failure; you can do poorly with or without an ORM.
Not suited at all for batch processing
Absolutely not true. NH has many features specifically designed for batch processing, like Stateless Sessions. Of course it's hard to beat the performance of an SP running in the DB server, but apart from that, it'll usually do just as well as ad hoc ADO.NET code.
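A sketch of the stateless approach, in Java Hibernate syntax (NHibernate's IStatelessSession is the direct equivalent; the entity and input names are invented):

StatelessSession ss = sessionFactory.openStatelessSession();
Transaction tx = ss.beginTransaction();
for (Row r : sourceRows) {            // no first-level cache, no dirty checking,
    ss.insert(new Measurement(r));    // no lazy loading: each insert goes straight to JDBC
}
tx.commit();
ss.close();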
No code generation or code designer
False. There are several products that provide that. Check http://nhforge.org/wikis/general/commercial-product-ecosystem.aspx
Bad reputation due to lazy coding
If you do SELECT * FROM TABLE it will perform badly too. I fail to see how NH is to blame.
It's not from Microsoft
Neither are Oracle, the iPod or BMWs, yet people use them
It's open source
There is commercial support available. And, unlike what happens with MS products, the support is provided by people who know the internals and can fix them in a few hours instead of a few years.
When your manager decides which technology you have to use, your manager should be able to understand the technical reasons. Otherwise, he should trust the developers (they have the knowledge to make the decision) to decide. I can think of more useful things for a manager to do than making technical decisions he does not fully understand.
When your boss is like Dilbert's pointy-haired boss, I would go for the time- and money-saving argument (for every technical decision the manager wants to be involved in).
Take a look at this list of features of a real-world ORM. For any non-trivial project you will eventually end up using 80%+ of those features. So, either you can build those features yourself or you can use NHibernate. If you choose to build it yourself, be prepared to invest about $7.5M and 140 person-years.
Or you can just use NHibernate, save the money and effort, and benefit from the huge knowledge base available on the net, books, community, etc, and even use (if necessary) one of the high-quality commercial support providers available.
You need to express it in terms of time and money - impact on development and maintenance time, and impact on users' time. ORM use is far too often the cause of a system that is strangling itself with poor performance (and which is by then too far along the design path to change), so you need to explain to the managers how you intend to avoid that. Frankly, dev time and maintenance time are peanuts compared to the user time wasted by a bad ORM implementation. Yes, it can be done right, but it usually is not. Often this is because the devs who use an ORM don't understand databases and don't want to understand databases. An ORM in the hands of a database expert is a good thing; an ORM in the hands of an application programmer who can't write basic SQL and doesn't even understand joins is a disaster waiting to happen.
No code generation or code designer
As Diego pointed out, there are third party tools. However, I would like to stress that NOT having to use a designer is a strength of NHibernate. Designers typically don't scale in the following ways:
What good is a design surface when an application has many, many entities? A visual designer just slows you down with noise at that point.
Designers typically have issues with merging. They work great when only one developer needs to edit the model at a time. The more devs you have on a project, the more potential trouble there is for merging the designer files.
I am really torn right now between using O/R mappers or just sticking to traditional data access. For some reason, every time I bring up O/R mappers, fellow developers cringe and speak about performance issues or how they're just bad in general. What am I missing here? I'm looking at LINQ to SQL and Microsoft Entity Framework. Is there any basis to any of these claims? What kind of things do I have to compromise if I want to use an O/R mapper. Thanks.
This will seem like an unrelated answer at first, but: one of my side interests is WWII-era fighter planes. All of the combatant nations (US, Great Britain, Germany, USSR, Japan etc.) built a bunch of different fighters during the war. Some of them used radial engines (P47, Corsair, FW-190, Zero); some used inline liquid-cooled engines (Bf-109, Mustang, Yak-7, Spitfire); and some used two engines instead of one (P38, Do-335). Some used machine guns, some used cannons, and some used both. Some were even made out of plywood, if you can imagine.
In the end, they all went really really fast, and in the hands of a competent, experienced pilot, they would shoot your rookie ass down in a heartbeat. I don't imagine many pilots flew around thinking "oh, that idiot is flying something with a radial engine - I don't have to worry about him at all". Everyone understood that there were many different ways of achieving the ultimate goal, and each approach had its particular advantages and disadvantages, depending on the circumstances.
The debate between ORM and traditional data access is just like this, and it behooves any programmer to become competent with both approaches, and choose the option that is right for the job at hand.
I struggled with this decision for a long time. I think I was hesitant for two primary reasons. First, O/R mappers represented a lack of control over what was happening in a critical part of the app and, second, because so many times I've been disappointed by solutions that are awesome for the 90% case but miserable for the last 10%. Everything works for select * from authors, of course, but when you crank up the complexity and have a high-volume, critical system and your career is on the line, you feel you need to have complete control to tune every query pattern and byte over the wire. Most developers, including me, get frustrated the first time the tool fails us, and we cannot do what we need to do, or our need deviates from the established pattern supported by the tool. I'll probably get flamed for mentioning specific flaws in tools, so I'll leave it at that.
Fortunately, Anderson Imes finally convinced me to try CodeSmith with the netTiers template. (No, I don't work for them.) After more than a year using this, I can't believe we didn't do it sooner. My team uses Visual Studio DB Pro, and on every check-in our continuous integration build drops out a new set of data access layer assemblies. This handles all the common, low-risk stuff automatically, yet we can still write custom sprocs for the tricky bits and have them included as methods on the generated classes, and we can customize the templates for the generated code as well. I highly recommend this approach. There may be other tools that allow this level of control as well, and there is a newer CodeSmith template called PLINQO that uses LINQ to SQL under the hood. We haven't examined that yet (haven't needed to), but this overall approach has a lot of merit.
Jerry
O/RM tools are designed to perform very well in most situations. They will cache entities for you, execute queries in batches, provide low-level optimised access to objects that is way faster than manually assigning values to properties, give you an easy way to incorporate variations of aspect-oriented programming using modern techniques like interceptors, manage entity state for you and help resolve conflicts, and more.
Now, the cons of this approach usually lie in a lack of understanding of how things work at a very low level. The most classic problem is "SELECT N+1" (link).
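For readers who haven't hit it, a minimal illustration in Java Hibernate - the trap and the fix look the same in NHibernate, and the entity names are invented:

// BAD: 1 query for the orders, then 1 more per order when each
// lazy 'customer' proxy is first touched - N+1 round trips.
List<Order> orders = session.createQuery("from Order o", Order.class).getResultList();
for (Order o : orders) {
    System.out.println(o.getCustomer().getName());   // triggers a SELECT per row
}

// BETTER: fetch the association up front - exactly one round trip.
List<Order> fetched = session
    .createQuery("from Order o join fetch o.customer", Order.class)
    .getResultList();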
I've been working with NHibernate for 2.5 years now, and I'm still discovering something new about it almost on a daily basis...
Good. In most cases.
The productivity benefit of using an ORM will in most cases outweigh the loss of control over how the data is accessed.
There are not many who would avoid C# in order to program in MSIL or assembly, although that would give them more control.
The problem that I see with a lot of O/R mappers is that you get bloated domain objects, which are usually highly coupled with the rest of your data access framework. Our developers cringe at that as well :) It's just harder to port these objects to another data access technology. If you use L2S, you can take a look at the generated code. It looks like a complete mess. NHibernate is probably one of the best at this. Your entities are completely unaware of your data access layer, if you design them right.
It really depends on the situation.
I went from a company that used a tweaked-out ORM to a company that did not use an ORM and wrote SQL queries all the time. When I asked about using an ORM to simplify the code, I got that blank look in the face followed by all the negatives of it:
It's high bloat
you don't have fine control over your queries and execute unnecessary ones
there is a heavy object-to-table mapping
it's not DRY code because you have to repeat yourself
and on and on
Well, after working there for a few weeks, I had noticed that:
we had several queries that were almost identical, and a lot of times if there was a bug, only a handful would get fixed
instead of caching common table queries, we would end up reading a table multiple times
we were repeating ourselves all over the place
we had several different skill levels, so some queries were not written the most efficiently
After I pointed most of this out, they wrote a "DBO" because they didn't want to call it an ORM. They decided to write one from scratch instead of adapting an existing one.
Also, a lot of the arguments come from ignorance of ORMs, I feel. Every ORM that I have seen allows for custom queries, and even following the ORM's conventions, you can write very complex and detailed queries that are normally more human-readable. Also, they tend to be very DRY: you give them your schema, and they figure the rest out, down to relationship mapping.
Modern ORMs have a lot of tools to help you out, like migration scripts and multiple DB types accessible through the same objects, so you can leverage the advantages of both NoSQL and SQL DBs. But you have to pick the right ORM for your project if you're going to use one.
I first got into ORM mapping and data access layers from reading Rockford Lhotka's book on C# business objects. He's spent years working on a framework for DALs. While his framework out of the box is quite bloated and, in some cases, overkill, he has some excellent ideas. I highly recommend the book for anyone looking at ORM mappers. I was influenced by his book enough to take away a lot of his ideas and build them into my own framework and code generation.
There is no simple answer to this since each ORM provider will have its own particular pluses and minuses. Some ORM solutions are more flexible than others. The onus is on the developer to understand these before using one.
However, take LinqToSql - if you are sure you are not going to need to switch away from SQL Server then this solves a lot of the common problems seen in ORM mappers. It allows you to easily add stored procedures (as static methods), so you aren't just limited to generated SQL. It uses deferred execution, so that you can chain queries together efficiently. It uses partial classes to allow you to easily add custom logic to generated classes without needing to worry about what happens when you re-generate them. There is also nothing stopping you using LINQ to create your own, abstracted DAL - it just speeds up the process. The main thing, though, is that it alleviates the tedium and time required to create a basic CRUD layer.
But there are downsides, too. There will be a tight coupling between your tables and classes, there will be a slight performance drop, you may occasionally generate queries that are not as efficient as you expected. And you are tied in to SQL Server (though some other ORM technologies are database agnostic).
As I said, the main thing is to be aware of the pros and cons before pinning your colours to a particular methodology.
Time and again, I've seen people here and everywhere else advocating avoidance of nonportable extensions to the SQL language, this being the latest example. I recall only one article stating what I'm about to say, and I don't have that link anymore.
Have you actually benefited from writing portable SQL and dismissing your dialect's proprietary tools/syntax?
I've never seen a case of someone taking pains to build a complex application on MySQL and then saying, "You know what would be just peachy? Let's switch to (PostgreSQL|Oracle|SQL Server)!"
Common libraries in - say - PHP do abstract the intricacies of SQL, but at what cost? You end up unable to use efficient constructs and functions, for a presumed glimmer of portability you most likely will never use. This sounds like textbook YAGNI to me.
EDIT: Maybe the example I mentioned is too snarky, but I think the point remains: if you are planning a move from one DBMS to another, you are likely redesigning the app anyway, or you wouldn't be doing it at all.
Software vendors who deal with large enterprises may have no choice (indeed that's my world) - their customers may have policies of using only one database vendor's products. To miss out on major customers is commercially difficult.
When you work within an enterprise you may be able to benefit from the knowledge of the platform.
Generally speaking the DB layer should be well encapsulated, so even if you had to port to a new database the change should not be pervasive. I think it's reasonable to take a YAGNI approach to porting unless you have a specific requirement for immediate multi-vendor support. Make it work with your current target database, but structure the code carefully to enable future portability.
The problem with extensions is that you need to update them when you're updating the database system itself. Developers often think their code will last forever but most code will need to be rewritten within 5 to 10 years. Databases tend to survive longer than most applications since administrators are smart enough to not fix things that aren't broken, so they often don't upgrade their systems with every new version. Still, it's a real pain when you upgrade your database to a newer version yet the extensions aren't compatible with that one and thus won't work. It makes the upgrade much more complex and demands more code to be rewritten. When you pick a database system, you're often stuck with that decision for years. When you pick a database and a few extensions, you're stuck with that decision for much, much longer!
The only case where I can see it necessary is when you are creating software the client will buy and use on their own systems. By far the majority of programming does not fall into this category. To refuse to use vendor-specific code is to ensure that you have a poorly performing database, as the vendor-specific code is usually written to improve the performance of certain tasks over ANSI-standard SQL and is written to take advantage of the specific architecture of that database. I've worked with databases for over 30 years and never yet have I seen a company change their backend database without a complete application rewrite as well. Avoiding vendor-specific code in this case means that you are harming your performance for no reason whatsoever most of the time.
I have also used a lot of different commercial products with database backends through the years. Without exception, every one of them was written to support multiple backends and, without exception, every one of them was a miserable, slow dog of a program to actually use on a daily basis.
In the vast majority of applications, I would wager, there is little to no benefit and even a negative effect of trying to write portable SQL; however, in some cases there is a real use case. Let's assume you are building a time-tracking web application, and you'd like to offer a self-hosted solution.
In this case your clients will need to have a DB server. You have some options here. You could force them into using a specific DBMS, which could limit your client base. If you can support multiple DBMSs, then you have a wider potential client base that can use your web application.
If you're corporate, then you use the platform you are given
If you're a vendor, you have to plan for multiple platforms
Longevity for corporate:
You'll probably rewrite the client code before you migrate DBMS
The DBMS will probably outlive your client code (Java or C# against an '80s mainframe)
Remember:
SQL within a platform is usually backward compatible, but client libraries are not. You are forced to migrate if the OS cannot support an old library, or security environment, or driver architecture, or 16-bit library, etc.
So, assume you had an app on SQL Server 6.5. It still runs, with a few tweaks, on SQL Server 2008. I bet you're not using the same client code...
There are always some benefits and some costs to using the "lowest common denominator" dialect of a language in order to safeguard portability. I think the dangers of lock-in to a particular DBMS are low, when compared to the similar dangers for programming languages, object and function libraries, report writers, and the like.
Here's what I would recommend as the primary way of safeguarding future portability. Make a logical model of the schema that includes tables, columns, constraints and domains. Make this as DBMS independent as you can, within the context of SQL databases. About the only thing that will be dialect dependent is the datatype and size for a few domains. Some older dialects lack domain support, but you should make your logical model in terms of domains anyway. The fact that two columns are drawn from the same domain, and don't just share a common datatype and size, is of crucial importance in logical modelling.
If you don't understand the distinction between logical modeling and physical modeling, learn it.
Make as much of the index structure portable as you can. While each DBMS has its own special index features, the relationship between indexes, tables, and columns is just about DBMS independent.
In terms of CRUD SQL processing within the application, use DBMS specific constructs whenever necessary, but try to keep them documented. As an example, I don't hesitate to use Oracle's "CONNECT BY" construct whenever I think it will do me some good. If your logical modeling has been DBMS independent, much of your CRUD SQL will also be DBMS independent even without much effort on your part.
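For example, a hedged sketch of keeping the dialect-specific bit isolated and documented, as Java string constants (table and column names are assumptions):

// Oracle-only hierarchical query, kept in one documented place:
String oracleOnly =
    "SELECT employee_id, manager_id, LEVEL FROM employees " +
    "START WITH manager_id IS NULL " +
    "CONNECT BY PRIOR employee_id = manager_id";

// Portable SQL:1999 recursive CTE that most other DBMSs accept,
// ready to swap in if a move ever happens:
String portable =
    "WITH RECURSIVE emp(employee_id, manager_id, lvl) AS (" +
    "  SELECT employee_id, manager_id, 1 FROM employees WHERE manager_id IS NULL" +
    "  UNION ALL" +
    "  SELECT e.employee_id, e.manager_id, emp.lvl + 1" +
    "  FROM employees e JOIN emp ON e.manager_id = emp.employee_id) " +
    "SELECT employee_id, manager_id, lvl FROM emp";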
When it comes time to move, expect some obstacles, but expect to overcome them in a systematic way.
(The word "you" in the above is to whom it may concern, and not to the OP in particular.)