Is ORM slow? Does it matter? [closed] - orm

I really like ORM as compared to stored procedures, but one thing I'm afraid of is that ORM could be slow because of the layers and layers of abstraction. Will using ORM slow down my application? Or does it matter?

Yes, it matters. It uses more CPU cycles and consequently slows your application down. Hear me out, though...
But, consider this: what is more expensive? Server hardware or another programmer? Server hardware, generally, is cheaper than hiring another team of programmers. So, while ORM may be costing you CPU cycles, you need one less programmer to manage your SQL queries, often resulting in a lower net cost.
To determine if it's worth it for you, estimate how many hours you saved by using an ORM. Then figure out how much money you spent on server hardware to support the ORM. Multiply the hours you saved by your hourly rate and compare that to the server cost.
Of course, whether an ORM actually saves you time is a whole other debate...

Is ORM slow?
Not inherently. Some heavyweight ORMs can add a general drag to things, but we're not talking an orders-of-magnitude slowdown.
What does make ORM slow is naïve usage. If you're using an ORM because it looks easy and you don't know how the underlying relational data model works, you can easily write code that seems reasonable to an OO programmer, but will murder performance.
ORM is a handy tool, but you need the lower-level understanding (that usually comes from writing SQL queries) to go with it.
Does it matter?
If you end up performing a looped query for each of thousands of entities at once, instead of a single fast join, then certainly it can.
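To make that concrete, here is a rough sketch assuming a JPA/Hibernate-style ORM and made-up Blog/Post entities (jakarta.persistence on newer stacks, javax.persistence on older ones): the first method falls into the N+1 trap, the second asks the ORM for the join up front.
import jakarta.persistence.*;
import java.util.List;

@Entity
class Blog {
    @Id @GeneratedValue Long id;
    @OneToMany(mappedBy = "blog", fetch = FetchType.LAZY)
    List<Post> posts;
}

@Entity
class Post {
    @Id @GeneratedValue Long id;
    @ManyToOne(fetch = FetchType.LAZY) Blog blog;
}

class FetchDemo {
    // Naive: one query for the blogs, then one lazy-load query per blog (N+1 round trips).
    static long countPostsNaively(EntityManager em) {
        long total = 0;
        for (Blog b : em.createQuery("SELECT b FROM Blog b", Blog.class).getResultList()) {
            total += b.posts.size();   // each access can trigger its own SELECT
        }
        return total;
    }

    // Better: let the ORM issue the join itself - one round trip.
    static List<Blog> loadBlogsWithPosts(EntityManager em) {
        return em.createQuery(
                "SELECT DISTINCT b FROM Blog b JOIN FETCH b.posts", Blog.class)
            .getResultList();
    }
}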

ORMs are slower and do add overhead to applications (unless you specifically know how to get around these problems, which is not very common). The database is the most critical element, and web applications should be designed around it.
Many developers using OOP frameworks with Active Record or ORMs treat the database as an unimportant afterthought and tend to look at it as something they don't really need to learn. But performance and scalability usually suffer as the database gets heavily taxed!
Many large-scale web apps have fallen flat, wasting millions and months to years of time, because they didn't recognize the importance of the database. Hundreds of concurrent users and tables with millions of records require database tuning and optimization, but I believe the problem becomes noticeable even with a few users and far less data.
Why are developers so afraid to learn proper SQL and tuning measures when it's the key to performance?

In a Windows Mobile 5 project using SqlCe, I went from hand-coded objects to code-generated (CodeSmith) objects built from an ORM template. In the process, all my data access used CSLA as a base layer.
The straight conversion improved my performance by 32% in local testing, almost all of it a result of better access methods.
After that change, we adjusted the templates (after seeing some SqlCe performance material at PDC by Steve Lasker), and in less than 20 minutes our entire data layer was greatly improved: our average 'slow' calls went from 460ms to ~20ms. The cool part about the ORM approach is that we only had to implement (and unit test) these changes once and all the data access code got changed. It was an amazing time saver; we probably saved 40 hours or more.
The above being said, we did lose some time by taking out a bunch of 'waiting' and 'progress' dialogs that were no longer needed.
I have used a few of the ORM tools, and I can recommend two of them:
.NET Tiers
CSLA codegen templates
Both of them have performed quite nicely and any performance loss has not been noticeable.

I've always found it doesn't matter. You should use whatever will make you the most productive, responsive to changes, and whatever is easiest to debug and maintain.
Most applications never see enough load for the difference between ORM and stored procedures to be noticeable. And there are optimizations to make ORM faster.
Finally, a well-written app will have its data access separated from everything else, so that switching from ORM to something else remains possible in the future.
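For what it's worth, here's a minimal sketch of that separation (hypothetical names): the rest of the app talks only to an interface, so the ORM-backed implementation can later be swapped for hand-written SQL without touching callers.
import java.math.BigDecimal;
import java.util.List;
import java.util.Optional;

// The application depends only on this boundary; one implementation can use an ORM,
// another stored procedures or plain SQL, and callers never know the difference.
interface OrderRepository {
    Optional<Order> findById(long id);
    List<Order> findByCustomer(long customerId);
    void save(Order order);
}

// Simple value type used across the boundary (illustrative only).
record Order(long id, long customerId, BigDecimal total) {}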

Is ORM slow?
Yes (compared with stored procedures).
Does it matter?
No (unless speed is your main concern).
I think the problem is that many people think of ORM as an object "trick" for databases, a way to code less or simplify SQL usage, while in reality it is... well, an Object-to-Relational (DB) Mapping.
ORM is used to persist your objects to a relational database management system, and not (just) to substitute for SQL or make it easier (although it does a good job of that too).
If you don't have a good object model, or you're using it to build reports, or you're just trying to pull out some information, ORM is not worth it.
If, on the other hand, you have a complex system modeled through objects, where each has different rules and they interact dynamically, and your concern is persisting that information to the database rather than replacing some existing SQL scripts, then go for ORM.

Yes, ORM will slow down your application. By how much depends on how far the abstraction goes, how well your object model maps to the database, and other factors. The real question is whether you're willing to spend more developer time on straight data access, or to trade faster development for slower runtime performance.
Overall, the good ORMs have little overhead and, by and large, are considered well worth the trade off.

Yes, ORMs affect performance; whether that matters ultimately depends on the specifics of your project.
Programmers often love ORM because they like the nice front-end coding environments like Visual Studio and dislike writing raw SQL with no IntelliSense, etc.
ORMs have other limitations besides the performance hit: they often don't do what you need 100% of the time, they add the complexity of an additional abstraction layer that must be maintained and re-established every time changes are made, and there are caching issues to be dealt with.
Just a thought -- if the database vendors made the SQL programming environment as nice as Visual Studio, and provided a more natural linkage between the db code and front-end code, we wouldn't need the ORMs... I guess things may go in that direction eventually.

Obvious answer: It depends
ORM does a good job of insulating a programmer from SQL. This in effect substitutes mediocre, computer generated queries for the catastrophically bad queries a programmer might give.
Even in the best case, an ORM is going to do some extra work, loading fields it doesn't need to, explicitly checking constraints, and so forth.
When these become a bottleneck, most ORMs let you side-step them and inject raw SQL.
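For example (a hedged JPA-flavoured sketch; the table and column names are invented), a reporting hot spot can drop to hand-written SQL while the ORM keeps handling the ordinary CRUD elsewhere:
import jakarta.persistence.EntityManager;
import java.util.List;

class ReportingDao {
    // One hand-tuned native query for the hot path; everything else stays on the ORM.
    @SuppressWarnings("unchecked")
    List<Object[]> topAuthorsByPostCount(EntityManager em, int limit) {
        return em.createNativeQuery(
                "SELECT a.name, COUNT(p.id) AS post_count " +
                "FROM author a JOIN post p ON p.author_id = a.id " +
                "GROUP BY a.name ORDER BY post_count DESC")
            .setMaxResults(limit)
            .getResultList();
    }
}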
If your application fits well with objects, but not quite so easily with relations, then this can still be a win. If instead your app fits nicely around a relational model, then the ORM represents a coding bottleneck on top of a possible performance bottleneck.
One thing I've found particularly offensive about most ORMs is their handling of primary keys. Most ORMs require PKs for everything they touch, even if there is no conceivable use for them. Example: Authors should have PKs, blog posts should have PKs, but the links (the join table) between authors and posts should not.
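The kind of mapping I'd want for that link table is something like this rough JPA-style sketch (invented names; whether a surrogate key is forced varies by ORM), where author_post is keyed only by the two foreign keys:
import jakarta.persistence.*;
import java.util.Set;

@Entity
class Author {
    @Id @GeneratedValue Long id;

    // author_post carries only the two foreign keys; no extra primary key column.
    @ManyToMany
    @JoinTable(name = "author_post",
               joinColumns = @JoinColumn(name = "author_id"),
               inverseJoinColumns = @JoinColumn(name = "post_id"))
    Set<Post> posts;
}

@Entity
class Post {
    @Id @GeneratedValue Long id;
    String title;
}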

I have found that the difference between "too slow" and "not too much slower" depends on if you have your ORM's 2nd level (SessionFactory) cache enabled. With it off it handles fine under development load, but will crush your system under mild production load. After turning on the 2nd Level cache the server handled the expected load and scaled nicely.
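For reference, enabling it in a Hibernate-style ORM looks roughly like this; treat it as a sketch, since the exact property names and cache-provider wiring vary by version and provider.
import jakarta.persistence.Cacheable;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

// In the persistence configuration, something like:
//   hibernate.cache.use_second_level_cache = true
//   hibernate.cache.region.factory_class   = <your cache provider's region factory>
// then mark the read-mostly entities that are worth caching:
@Entity
@Cacheable
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
class Country {
    @Id Long id;
    String name;
}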

ORM can be an order of magnitude slower, not just because it wastes a lot of CPU cycles on its own, but also because it uses much more memory, which then has to be garbage-collected.
Much worse than that, however, is that there is no standard for ORM (unlike SQL) and that, by and large, ORMs use SQL very inefficiently, so at the end of the day you still have to dig into SQL to fix performance issues, and every time an ORM makes a mess you have to debug it. Meaning you haven't gained anything at all.
It's terribly immature technology for real production-level applications. Very problematic areas are handling indexes and foreign keys, tweaking tables to fit object hierarchies, and terribly long transactions, which mean many more deadlocks and retries - if an ORM knows how to handle that at all.
It actually makes servers less scalable, which multiplies costs, but these costs don't get mentioned at the beginning - a little inconvenient truth :-) When something uses transactions 10-100 times bigger than optimal, it becomes impossible to scale the SQL side at all. I'm talking about serious systems again, not home/toy/academic stuff.

An ORM will always add some overhead because of the layers of abstraction but unless it is a poorly designed ORM that should be minimal. The time to actually query the database will be many times more than the additional overhead of the ORM infrastructure if you are doing it correctly, for example not loading the full object graph when not required. A good ORM (nHibernate) will also give you many options for the queries run against the database so you can optimise as required as well.
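As a small illustration of "don't load the full object graph" (a hedged JPA-style sketch with invented names): project just the columns a list view needs instead of hydrating whole entities.
import jakarta.persistence.*;
import java.util.List;

@Entity
class Post {
    @Id Long id;
    String title;
    @Lob String body;   // large column a list view never needs
}

// Lightweight read model populated straight from the query.
record PostSummary(Long id, String title) {}

class PostQueries {
    List<PostSummary> summaries(EntityManager em) {
        // Some providers require the fully qualified class name in the constructor expression.
        return em.createQuery(
                "SELECT new PostSummary(p.id, p.title) FROM Post p", PostSummary.class)
            .getResultList();
    }
}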

Using an ORM is generally slower. But the boost in productivity you get will get your application up and running much faster. And the time you save can later be spent finding the portions of your application that are causing the biggest slow down - you can then spend time optimizing the areas where you get the best return on your development effort. Just because you've decided to use an ORM doesn't mean you can't use other techniques in the sections of code that can really benefit from it.

An ORM can be slower, but this is offset by its ability to cache data; however fast the alternative is, you can't get much faster than reading from memory.

I never really understood why people think this is slower or that is slower... get a real machine, I say. I have had mixed results: I've seen cases where execution time for a stored procedure was much slower than the ORM, and vice versa, but in both cases the performance difference was due to differences in hardware.

Related

What is the expected performance gap switching from SQL to TSDB for handling time series?

We are using a SQL database as single-node storage for roughly 1 hour of high-frequency metrics (several thousand inserts a second). We quickly ran into I/O issues that proper buffering alone would not handle, and we are willing to put time into solving the performance problem.
I suggested switching to a specialised database for handling time series, but my colleague stayed pretty skeptical. His argument is that the gain "out of the box" is not guaranteed: he knows SQL well and has already spent time optimizing the storage, whereas we have no TSDB experience with which to properly optimize one.
My intuition is that using a TSDB would be much more efficient even with an out-of-the-box configuration, but I don't have any data to measure this, and internet benchmarks such as InfluxDB's are nowhere near trustworthy. We should run our own, except we can't afford to lose time on a dead end or a mediocre improvement.
What would be, in my use case but very roughly, the performance gap between relational storage and a TSDB when it comes to single-node throughput?
This question may be bordering on a software recommendation. I just want to point one important thing out: You have an existing code base so switching to another data store is expensive in terms of development costs and time. If you have someone experienced with the current technology, you are probably better off with a good-faith effort to make that technology work.
Whether you switch or not depends on the actual requirements of your application. For instance, if you don't need the data immediately, perhaps writing batches to a file is the most efficient mechanism.
Your infrastructure has ample opportunity for in-place growth -- more memory, more processors, solid-state disk (for example). These might meet your performance needs with a minimal amount of effort.
If you cannot make the current solution work (and 10k inserts per second should be quite feasible), then there are numerous alternatives. Some NoSQL databases relax some of the strict ACID requirements of traditional RDBMSs, providing faster throughput.
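On the "make the current technology work" side, the change that usually matters most for raw insert throughput is batching. Roughly, in a plain JDBC sketch (table and column names are invented):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

record Sample(long tsMillis, int sensorId, double value) {}

class MetricWriter {
    void writeBatch(Connection conn, List<Sample> samples) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO metric_sample (ts_millis, sensor_id, value) VALUES (?, ?, ?)")) {
            for (Sample s : samples) {
                ps.setLong(1, s.tsMillis());
                ps.setInt(2, s.sensorId());
                ps.setDouble(3, s.value());
                ps.addBatch();
            }
            ps.executeBatch();   // one round trip per batch instead of one per row
            conn.commit();       // one commit per batch instead of one per insert
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}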

Which ORM has better performance: OpenAccess or LLBLGen Pro?

I'm working on a new project right now and thinking of using an ORM such as OpenAccess, LLBLGen Pro, or SubSonic. This project may see large data volumes and many concurrent hits, so our performance requirements are very high.
Please compare them and recommend one to me.
Thanks
Jim.
Jim,
For the best results in answering this question, you'll need to do your own comparison since your specific requirements and data access scenarios will likely affect the results of any such performance testing.
That said, we use LLBLGen for a high-throughput web application and the performance is exceptional. We have found that the big issue is in the application design itself. Using SQL Server Profiler we are able to see (during development) which parts of the application create a lot of hits on the database. The biggest penalty we found was loading a grid and then doing further database operations in the OnDataBinding / DataBound events. This creates a huge amount of traffic to the SQL Server database: a lot of reads and a lot of disk swapping. We have been very well served by making sure we get all the data in the first query, through good design choices about the set of data/joins/etc. when building the application -- or by refactoring later once we find performance is slow.
The overhead for LLBLGen, at least, is very minimal. It's fast even when creating huge numbers of objects. The much, much bigger performance hit comes when we make queries that spawn other queries (example above) and our DB reads go through the roof.
You may wish to evaluate both for which one you feel is a better match for your skills and productivity as well.

What is the resource impact from normalizing a database?

When taking a database from a relatively un-normalized form and normalizing it, what, if any, changes in resource utilization might one expect?
For example, normalization often means more tables get created from fewer which means the database now has a higher number of tables, but many of them are quite small, allowing the often used ones to fit into memory better.
The higher number of tables also means that more joins are needed (potentially) to get at the data that was abstracted out, so one would expect some sort of impact from the higher number of joins the system needs to do.
So, what impact on resource usage (ie. what will change) does normalizing an un-normalized database have?
Edit:
To add a bit of context, I have an existing (ie. legacy) database with over 300 horrible tables. About 1/2 of the data is TEXT and the other half is either char fields or integers. There are no constraints of any kind. The reason I ask is primarily to get more information for convincing others that things need to change and that there won't be a decrease in performance or maintainability. Unfortunately, those I have to convince know just enough about the performance benefits of a de-normalized database to want to avoid normalization as much as possible.
This cannot really be answered in a general manner, as the impact will vary heavily depending on the specifics of the database in question and the apps using it.
So you basically stated the general expectations concerning the impact:
Overall memory demands for storage should go down, as redundant data gets removed
CPU needs might go up, as queries might get more expensive (Note that in many cases queries on a normalized database will actually be faster, even if they are more complex, as there are more optimization options for the query engine)
Development resource needs might go up, as developers might need to construct more elaborate queries (But on the other hand, you need less development effort to maintain data integrity)
So the only real answer is the usual: it depends ;)
Note: This assumes that we are talking about cautious and intentional denormalization. If you are referring to the 'just throw some tables together as data comes along' approach, all too common with inexperienced developers, I'd risk the statement that normalization will reduce resource needs on all levels ;)
Edit: Concerning the specific context added by cdeszaq, I'd say 'Good luck getting your point through' ;)
Obviously, with over 300 tables and no constraints (!), the answer to your question is definitely 'normalizing will reduce resource needs on all levels' (and probably very substantially), but:
Refactoring such a mess will be a major undertaking. If there is only one app using this database, it is already dreadful - if there are many, it might become a nightmare!
So even if normalizing would substantially reduce resource needs in the long run, it might not be worth the trouble, depending on circumstances. The main questions here are about long term scope - how important is this database, how long will it be used, will there be more apps using it in the future, is the current maintenance effort constant or increasing, etc. ...
Don't ignore that it is a running system - even if it's ugly and horrible, according to your description it is not (yet) broken ;-)
"Normalization" applies only and exclusively to the logical design of a database.
The logical design of a database and the physical design of a database are two completely distinct things. Database theory has always intended for things to be this way. The fact that the developers who overlook/disregard this distinction (out of ignorance or out of carelessness or out of laziness or out of whatever other so-called-but-invalid "reason") are the vast majority, does not make them right.
A logical design can be said to be normalized or not, but a logical design does not inherently carry any "performance characteristic" whatsoever. Just like 'c:=c+1;' does not inherently carry any performance characteristic.
A physical design does determine "performance characteristics", but then again a physical design simply does not have the quality of being "normalized or not".
This flawed perception of "normalization hurting performance" is really nothing else than concrete proof that all the DBMS engines that exist today are just seriously lacking in physical design options.
There's a very simple answer to your question: it depends.
Firstly, I'd rephrase your question as 'what is the benefit of denormalization', because normalization is something that should be done by default (as the result of a pure logical model), and denormalization can then be applied to very specific tables where performance is critical. The main problem with denormalization is that it can complicate data-integrity management, but the benefits in some cases outweigh the risks.
My advice on denormalization: do it only when it really hurts, and make sure you have all scenarios covered when it comes to maintaining data integrity after any inserts, updates or deletes.
To underscore some points made by prior posters: Is your current schema really denormalized? The proper way (IMHO) to design a database is to:
Understand as best you can the system/information to be modeled
Build a fully normalized model
Then, if and as you find it necessary, denormalize in a controlled fashion to enhance performance
(There may be other reasons to denormalize, but the only ones I can think of off-hand are political ones--have to match the existing code, the developers/managers don't like it, etc.)
My point is, if you never fully normalized, you don't have a denormalized database, you've got an unnormalized one. And I think you can think of more descriptive if less polite terms for those databases.
I've found that normalization, in some cases, will improve performance.
Small tables read more quickly. A badly denormalized database will often have (a) longer rows and (b) more rows than a normalized design.
Reading fewer shorter rows means less physical I/O.
For one thing, you'll end up having to do resultset calculations. For example, if you have a Blog, with a number of Posts, you could either do:
select count(*) from Post where BlogID = #BlogID
which is more expensive than
select PostCount from Blog where ID = #BlogID
and can lead to the SELECT N+1 problem, if you're not careful.
Of course with the second option you have to deal with keeping the data integrity, but if the first option is painful enough, then you make it work.
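One way (of several) to handle that integrity burden is to bump the counter in the same transaction that inserts the post, so the two can never drift apart. A rough JDBC sketch, reusing the illustrative table and column names from the queries above:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class PostDao {
    void addPost(Connection conn, long blogId, String title) throws SQLException {
        conn.setAutoCommit(false);
        try (PreparedStatement insert = conn.prepareStatement(
                 "INSERT INTO Post (BlogID, Title) VALUES (?, ?)");
             PreparedStatement bump = conn.prepareStatement(
                 "UPDATE Blog SET PostCount = PostCount + 1 WHERE ID = ?")) {
            insert.setLong(1, blogId);
            insert.setString(2, title);
            insert.executeUpdate();
            bump.setLong(1, blogId);      // counter and row change commit or roll back together
            bump.executeUpdate();
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        }
    }
}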
Be careful you don't fall foul of premature optimisation. Do it in the normalised fashion, then measure performance against requirements, and only if it falls short should you look to denormalise.
Normalized schemas tend to perform better for INSERT/UPDATE/DELETE because there are no "update anomalies" and the actual changes that need to be made are more localized.
SELECTs are mixed. Denormalization is essentially materializing a join. There's no doubt that materializing a join sometimes helps; however, materialization is often very pessimistic (probably more often than not), so don't assume that denormalization will help you. Also, normalized schemas are generally smaller and therefore might require less I/O. A join is not necessarily expensive, so don't automatically assume that it will be.
I wanted to elaborate on Henrik Opel's #3 bullet point. Development costs might go up, but they don't have to. In fact, normalization of a database should simplify or enable the use of tools like ORMs, Code Generators, Report Writers, etc. These tools can significantly reduce the time spent on the data access layer of your applications and move development on through to adding business value.
You can find a good StackOverflow discussion here about the development aspect of normalized databases. There were many good answers, comments and things to think about.

How important is optimization?

I got into a bit of a debate yesterday with my boss about the proper role of optimization when building software. Essentially, his position was that optimization needs to be a primary concern during the entire process of development.
My opinion is that you need to make the right algorithmic decisions during development, but you should never be counting cycles during development. In fact, I feel so strongly about this I had to walk away from the conversation. I've seen too many bad programming decisions in the name of "optimization", and too much bad code defended with the excuse "this way is faster".
What does the StackOverflow.com community think?
"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
- Donald Knuth
I think the premature optimization quote is used by too many to avoid thinking about the hard stuff concerning how well the application will run. I guarantee the users want you to think about how to design it so it will run as fast as possible.
This is not to say you should be timing everything, but the design phase is the easiest place to optimize and not cost lots of time later.
There are often several ways to do anything, you should pick in the design phase the one which is most likely to perform the best (if it turns out to be one of the times when it isn't the best, then optimize later). This should trump the need to have easy to read code.
If you aren't considering performance in the design phase, you aren't going to have a well designed system. That doesn't mean it should be the only concern (although in a database I'd rate it 3rd in importance, right after data integrity and security), but trying to fix a system where poorly performing techniques were used throughout, because the developers thought they were easier to understand, is a nightmare. Being a user of such a system, where you have to wait minutes every time you want to move from one screen to another, is a nightmare for everyone stuck with the badly designed system (developers really should spend all day, every day, for at least a week using their own systems!). It costs less to design properly than to fix later, and considering performance is critical to designing properly.
I work somewhere where the original developers drank the kool-aid about premature optimization and did everything the way they thought was simplest (but which in almost every case was the wrong choice from a performance perspective). Now we are at 10 times the size we were three years ago, every screen on every website takes 30 seconds or so to load (or worse, times out), and we are losing customers because of it. But changing it will be too hard, because at the base they designed the database without considering how it would perform, and redesigning a database with many gigabytes of data into a new structure is far too time-consuming and costly. If it had been designed to perform from the start, it would be both easier to maintain and faster for the clients. We aren't talking about the need to performance-tune the top 10 slowest queries here; we are talking about the fact that the overall structure requires a drastic change (one that would affect virtually every query against the system) to perform well.
Yes, don't do micro-optimization until you need to, but please do the macro stuff. Consider whether this is the best way before you commit to the path. Don't write cursors to hit tables with millions of records when a set-based statement will do. Don't try to have as few tables as possible because that seems to be a more elegant solution when the tables are storing disparate items (such as people, places, and vehicles), causing every single query to hit the same table and every delete to check all sorts of foreign-key tables that will never have a record for that type of entity (it takes minutes to delete one record from the main table in our database; it's a real joy when something goes wrong in an import (bad data from a client, usually) and we have to delete 200,000, let me tell you).
Optimization is almost tautologically a tradeoff: you gain runtime efficiency at the cost of other things (readability, maintainability, flexibility, compile times, etc.). As such, it's really never a good idea to do unless you know what the tradeoff is and why it's worthwhile.
Even worse, thinking about "how do I do X fast" can be very distracting. Too much energy in that direction can easily lead to you missing out on method Y, which is much better (and often faster -- this is how "optimization" can make your code slower). Particularly if you do too much of this on a big project from the beginning, it represents a lot of momentum. If you can't afford to overcome that momentum, you can easily become locked into a bad design because you can't afford the time to restructure it. This way lie dragons.
What your boss may be thinking of is more of an issue of writing bad code via inappropriate representations and algorithms. It's not really the same thing as optimizing, but an approach where you pay no attention whatsoever to appropriate data structures etc. can result in a codebase that is slow everywhere, and (much like the above "lock in") requires heroic effort to fix.
In general though, premature optimization really honestly is a terrible idea. Particularly when you end up with a complex, finely tuned, well documented (because that's the only way you can understand it) piece of code you end up not using anyway. And that's not even getting into the issue of subtle bugs that are often introduced when "optimizing"
[edit: pshaw, of course a Knuth quote encapsulates this well. That's what I get for typing too much]
Engineer throughout, optimize at the end.
Since we're going with pithy, I'll say that optimization is as important as the impact of not doing it.
I think the "premature optimization is the root of all evil" quote has to be understood literally - it does not say when optimization is premature, and it does not say you should only optimize at the end. Just not too early. Also, "use the right algorithm, O(n^2) vs O(n)" is a bit dangerous if taken literally, because for many problems the n is actually small, etc...
I think it depends a lot on the type of software you are writing: some software is such that every part is very independent and can be optimized separately. But that's not always the case. For many (most?) applications, speed just does not matter at all; the brute-force but obviously correct way is the best one. But for projects where speed matters, it often has to be taken into account early - maybe that's another possible interpretation of Knuth's saying: many applications don't need to be optimized at all, just know which ones do and plan ahead.
Optimization is a primary concern through development when you have a good reason to expect that performance will be unfixably bad if optimization is a secondary concern.
This depends a lot what kind of code you're writing, but there are often better reasons to believe that your code will be unfixably difficult to use; or maintain; or full of bugs; or late; if all those things become secondary to tweaking performance.
Bad managers say, "all of those things are our primary concerns". Good managers work to find out which are dangers for this project.
Of course, good design does have to consider all these things, and the earlier you have a back-of-the-envelope estimate of any of them, the better. If all your manager is saying, is that if you never think about how fast your code will run then too much of your code will be dog-slow, then he's right. I just wouldn't say that makes optimization your "primary" concern.
If the USP of your software is that it's faster than your competitors', then optimization is a primary concern. With experience, you can often predict what sorts of operations will be the bottlenecks, design those with optimization in mind right from the start, and more-or-less ignore optimization elsewhere. A lot of projects won't even need this: they'll be fast enough without much effort, provided you use sensible algorithms and don't do anything stupid. "Don't do anything stupid" is always a primary concern, with no need to mention performance in particular.
I think that code needs to be, first and foremost, readable and understandable. So, optimisations that are done, should not be at the expense of readability. However, optimisation is often a trade-off.
Whether or not you should optimise your code depends on your application domain. If you are working on an embedded processor with only 8Mb of memory, then optimisation is probably something that every team member needs to keep in mind, when writing code - optimising for space vs speed.
However, premature optimisation is not useful unless your system has been clearly spec'ed and understood. This is because most programmers do not make good optimisation decisions unless they can factor in the influence of the overall system, including processor architectural factors such as cache memory, hardware threads, pipelines, etc.
From 2 years of building highly optimized Java code (and that needed to be optimized that way) I would say that there is a time-spent rule that governs optimization:
optimizing on the spot: 5%-10% of your development time, because you have to do it countless times (every single time you have to amend your design)
optimizing just when you have had it working: 2% of your development time (you do it only once)
going back to it and optimizing when it's too slow: 30% of your development time, because you have to plunge yourself back into the system
So I would come to the conclusion that there is a right time and a right way to optimize: do it entity by entity (class by class, if you have classes that have a single, well-defined job and can be tested and optimized), test well, make sure the logic is working, optimize just afterward, and forget about that entity's implementation details forever.
When developing, just keep it simple. IMHO, most performance problems are caused by over-engineering - making mountains out of molehills, often because of wanting "the right algorithm".
Periodically, stress test with a big data set, profiling or (my favorite technique) manual random sampling. You find a problem, you fix it. You find another, you fix it.
That way you avoid creating slugs (slowness bugs), and when they do arise, you kill them.
Added: If I can just elaborate on point 1: OO is seemingly the law of the land, and it certainly has good reasons behind it. Unfortunately it causes many young programmers to feel that programming is all about having lots of data structure, with layers upon layers of abstraction. Not that those are inherently bad, but combine that with the natural tendency to assume that the time something takes is roughly proportional to the number of characters you have to type to invoke it, and that this tendency multiplies over the layers (and besides, the machines are really fast), and it's easy to create a perfect storm of cycle-waste.
Quote from a friend: "It's easier to make a working system efficient than to make an efficient system work".
I think it is important to use smart practices and patterns from the start, but get the system actually running for small test cases then do performance analysis. Frequently the areas of code that have poor performance aren't anticipated at the beginning, so get some real data and then optimize the bottlenecking 3% (or 20%, or whatever it is).
I think your boss is a lot more right than you are.
All too often the user experience is lost, only to be "rediscovered" at the last possible moment, when performance activities are prohibitively costly and inefficient. Or when it is discovered that the batch program that will process today's transactions requires forty hours to run.
Things such as database organization, and when and when not to do which SELECTs, are examples of design decisions that can make or break an application. Still, you run the risk of a single programmer deciding to do otherwise, misinterpreting, or simply not understanding what to do. Following up on performance during an entire project decreases the risk that such things will happen. It also allows design decisions to be changed when there is a need for that, without putting the entire project at risk.
"You need to make the right algorithmic decisions during development" is certainly true yet how many mainstream programmers are able to do that? Browsing the net for information does not guarantee finding a high quality solution. "Right" could be interpreted as meaning that it is ok to choose a poor algorithm because it is easy to understand and implement (= less development time, lower cost) than a more complicated one (= more development time, higher cost).
The pendulum of quantity vs quality is almost always on the quantity side because more code per hour or faster development time means money in the short term. The quality side means money in the long term.
EDIT
This article discusses performance and optimization quite thoroughly.
Performance preacher Rico Mariani sums it up in the short statement: "never give up your performance accidentally."
Premature optimization is the root of all evil... There is a fine balance, but I would say 95% of the time you need to optimize at the end; however, there are decisions you can make early on to help prevent issues. For example, assume we are talking about an e-commerce web site. You have a requirement to display the catalog. Now you can grab all 100,000 items and display 50 of them, or you can grab just 50 from the database. These types of decisions should be made up front.
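That "grab just 50" decision usually means pushing the paging down into the query itself - roughly, in a JPA-style sketch with an invented entity name:
import jakarta.persistence.*;
import java.util.List;

@Entity
class Product {
    @Id Long id;
    String name;
}

class CatalogQueries {
    // Ask the database for one page of rows instead of pulling the whole catalog
    // and slicing it in memory.
    List<Product> page(EntityManager em, int pageNumber, int pageSize) {
        return em.createQuery("SELECT p FROM Product p ORDER BY p.name", Product.class)
            .setFirstResult(pageNumber * pageSize)
            .setMaxResults(pageSize)
            .getResultList();
    }
}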
Cycle counting should only be done when a problem has been identified.
Your boss is partly right: optimisation does need to be considered throughout the development lifecycle, but it is rarely the primary concern. Also, the term 'optimisation' is vague - it's always 'optimise for ...', which could be 'memory', 'speed', 'usability', 'maintainability' and so on.
However, the OP is right that counting cycles is pointless for many projects. For most PC applications the CPU is never the bottleneck. Also, the IA32 is not consistent - what worked well on one architecture, performs poorly on another. Cycle counting should only ever be done when it will actually make a difference - usually in CPU limited code or code with very specific timing needs.
Optimisation, of any kind, must always be driven by hard evidence. Never assume anything about the system or how the code is behaving. In an ideal world, application performance / constraints will be specified in the initial product design and tools to monitor the application's performance during development will be added early on in the development phase to guide the programmers as the product is made.

When do transactions become more of a burden than a benefit?

Transactional programming is, in this day and age, a staple of modern development. Concurrency and fault tolerance are critical to an application's longevity and, rightly so, transactional logic has become easy to implement. As applications grow, though, it seems that transactional code tends to become more and more of a burden on the scalability of the application, and when you bridge into distributed transactions and mirrored data sets the issues start to become very complicated. I'm curious what seems to be the point, in data size or application complexity, where transactions frequently start becoming the source of issues (causing timeouts, deadlocks, performance issues in mission-critical code, etc.) that are more bothersome to fix, troubleshoot, or work around than designing a data model that is more fault-tolerant in itself, or using other means to ensure data integrity. Also, what design patterns serve to minimize these impacts or make standard transactional logic obsolete or a non-issue?
--
EDIT: We've got some answers of reasonable quality so far, but I think I'll post an answer myself to bring up some of the things I've heard about to try to inspire some additional creativity; most of the responses I'm getting are pessimistic views of the problem.
Another important note is that not all dead-locks are a result of poorly coded procedures; sometimes there are mission critical operations that depend on similar resources in different orders, or complex joins in different queries that step on each other; this is an issue that can sometimes seem unavoidable, but I've been a part of reworking workflows to facilitate an execution order that is less likely to cause one.
I think no design pattern can solve this issue in itself. Good database design, good stored procedure programming, and especially learning how to keep your transactions short will ease most of the problems.
There is no 100% guaranteed method of not having problems though.
In basically every case I've seen in my career though, deadlocks and slowdowns were solved by fixing the stored procedures:
making sure all tables are accessed in order prevents deadlocks
fixing indexes and statistics makes everything faster (hence diminishes the chance of deadlock)
sometimes there was no real need of transactions, it just "looked" like it
sometimes transactions could be eliminated by turning multi-statement stored procedures into single-statement ones.
The use of shared resources is wrong in the long run, because by reusing an existing environment you are creating more and more possibilities for interference. Just look at the busy beavers :) The way Erlang goes is the right way to produce fault-tolerant and easily verifiable systems.
But transactional memory is essential for many applications in widespread use. If you consult a bank with its millions of customers for example you can't just copy the data for the sake of efficiency.
I think monads are a cool concept to handle the difficult concept of changing state.
One approach I've heard of is to make a versioned insert only model where no updates ever occur. During selects the version is used to select only the latest rows. One downside I know of with this approach is that the database can get rather large very quickly.
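Roughly, that idea looks like the sketch below (plain JDBC with an invented schema; a unique constraint on (customer_id, version) would be needed to stop two concurrent writers picking the same version, and parameter typing may need per-database tweaks):
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class VersionedCustomerDao {
    // Every change becomes a new row with a higher version; nothing is ever updated in place.
    void saveNewVersion(Connection conn, long customerId, String email) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO customer_version (customer_id, version, email) " +
                "SELECT ?, COALESCE(MAX(version), 0) + 1, ? " +
                "FROM customer_version WHERE customer_id = ?")) {
            ps.setLong(1, customerId);
            ps.setString(2, email);
            ps.setLong(3, customerId);
            ps.executeUpdate();
        }
    }

    // Readers take the highest version per business key.
    String latestEmail(Connection conn, long customerId) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT email FROM customer_version WHERE customer_id = ? " +
                "ORDER BY version DESC")) {
            ps.setLong(1, customerId);
            ps.setMaxRows(1);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }
}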
I also know that some solutions, such as FogBugz, don't use enforced foreign keys, which I believe would also help mitigate some of these problems because the SQL query plan can lock linked tables during selects or updates even if no data is changing in them, and if it's a highly contended table that gets locked it can increase the chance of DeadLock or Timeout.
I don't know much about these approaches though since I've never used them, so I assume there are pros and cons to each that I'm not aware of, as well as some other techniques I've never heard about.
I've also been looking into some of the material from Carlo Pescio's recent post, which I've not had enough time to do justice to, unfortunately, but the material seems very interesting.
If you are talking 'cloud computing' here, the answer would be to localize each transaction to the place where it happens in the cloud.
There is no need for the entire cloud to be consistent, as that would kill performance (as you noted). Simply, keep track of what is changed and where and handle multiple small transactions as changes propagate through the system.
The situation where user A updates record R and user B at the other end of the cloud does not see it (yet) is the same as the one where user A hasn't made the change yet in the current strict-transactional environment. This could lead to discrepancies in an update-heavy system, so systems should be architected to do as few updates as possible - moving things toward aggregation of data and pulling out the aggregates only once the exact figure is critical (i.e. moving the requirement for consistency from write time to critical read time).
Well, just my POV. It's hard to conceive of a system that is application-agnostic in this case.
Try to make changes at the database level in the least number of possible instructions.
The general rule is to lock a resource for the least possible time. Using T-SQL, PL/SQL, Java on Oracle, or any similar approach, you can reduce the time each transaction locks a shared resource. In fact, transactions in the database are optimized with row-level locks, multi-versioning, and other kinds of intelligent techniques. If you can perform the transaction in the database, you save the network latency, apart from other layers like ODBC/JDBC/OLEDB.
Sometimes programmers try to obtain the good things of a database (it is transactional, parallel, distributed, ...) but keep a cache of the data. Then they need to manually add back some of the database features.