Tips for optimizing SQL commands while worrying about legacy SQL

Concern about the legacy my SQL statements leave behind is constantly on my mind, especially under Scrum, where the code has no single owner and everyone must be able to repair and maintain each piece. Optimizing SQL procedures usually means converting them into set-based commands and using special operators. I need tips for keeping the code working without losing sight of the trade-off between optimization and readability.

Comments. If it's a newer or more obscure command, make sure to leave a comment associated with the statement describing what it does. That way you won't have another developer down the road refactoring the statement to improve readability at the cost of performance. My general guideline: if someone of intermediate skill or below would have to spend several minutes or more working out what the statement is really doing, leave the comment to save them time.

I wouldn't worry so much about readability other than having the formatting conform to defined standards. Optimization is much more important than using only simple SQL that anyone can understand. That is where comments should come in... Explain what the SQL should be doing and why you chose a certain optimization technique. The added advantage to this is that it will help the next person who reads it to learn new SQL techniques.

I've found the best solution to be to include, in your comments, a clearly qualified, duplicable optimization test for the query, using statistics from the optimizer. (This also works nicely for stored procedures, where the same issues appear.)
Include a statement about the nature of the testing context (hardware and data), data generation code if necessary, and a clear description of testing conditions (cache settings, repetitions, etc.) Better yet, agree on a team template for this spec.
Enforcing such comparisons then needs to be built into your culture somewhere; the best outcome is a culture that expects documented before-and-after optimization testing.
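As a minimal sketch (the layout and placeholder fields are illustrative, not a prescribed standard), such a template could sit in a block comment right above the statement or procedure it documents:

    /* Optimization spec (hypothetical team template):
       Intent      : <what the statement is supposed to do, in one line>
       Environment : <DB version, hardware, approximate row counts>
       Data setup  : <generator script, or "production-like copy of schema X">
       Test method : <tool used (e.g. autotrace), repetitions, cache handling>
       Baseline    : <plan / statistics before the optimization>
       Result      : <plan / statistics after, and why the optimized form was kept>
    */

Keeping the spec next to the statement means the before-and-after comparison is reviewable in the same commit as the change itself.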

Related

When to use hints in an Oracle query [duplicate]

I have gone through some documentation on the net, and using hints is mostly discouraged. I still have doubts about this. Can hints really be useful in production, especially when the same query is used by hundreds of different customers?
Are hints only useful when we know how many records are present in the tables? I am using LEADING in my query: it gives faster results when the data set is very large, but performance is not that great when fewer records are fetched.
This answer by David is very good, but I would appreciate it if someone clarified this in more detail.
Most hints are a way of communicating our intent to the optimizer. For instance, the LEADING hint you mention means "join the tables in this order". Why is this necessary? Often it's because the optimal join order is not obvious, because the query is badly written or the database statistics are inaccurate.
So one use of hints such as leading is to figure out the best execution path, then to figure out why the database doesn't choose that plan without the hint. Does gathering fresh statistics solve the problem? Does rewriting the FROM clause solve the problem? If so, we can remove the hints and deploy the naked SQL.
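For illustration only (the tables and aliases here are hypothetical, not taken from the question), that experiment might look like this:

    -- force a join order to see whether the default order is the problem
    SELECT /*+ LEADING(o c) */ c.name, SUM(o.amount)
    FROM   orders o
    JOIN   customers c ON c.customer_id = o.customer_id
    GROUP  BY c.name;

    -- if the hinted plan is clearly faster, look for the underlying cause
    -- (stale statistics, query shape) before deciding to keep the hint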
Sometimes we cannot resolve this conundrum and have to keep the hints in production. However, this should be a rare exception. Oracle has had lots of very clever people working on the cost-based optimizer for many years, so its decisions are usually better than ours.
But there are other hints we would not blink at seeing in production. APPEND is often crucial for tuning bulk inserts. DRIVING_SITE can be vital in tuning distributed queries.
Conversely, other hints are almost always abused. Yes, PARALLEL, I'm talking about you. Blindly adding /*+ parallel (t23, 16) */ will probably not make your query run sixteen times faster, and not infrequently will result in slower retrieval than single-threaded execution.
So, in short, there is no universally applicable advice on when we should use hints. The key things are:
understand how the database works, and especially how the cost-based optimizer works;
understand what each hint does;
test hinted queries in a proper tuning environment with Production-equivalent data.
Obviously the best place to start is the Oracle documentation. However, if you feel like spending some money, Jonathan Lewis's book on the Cost-Based Optimizer is the best investment you could make.
I couldn't put it better by rephrasing, so I will paste it here
(it's a brief explanation of "When Not To Use Hints" that I had bookmarked):
In summary, don’t use hints when
What the hint does is poorly understood, which is of course not limited to the (ab)use of hints;
You have not looked at the root cause of bad SQL code and thus not yet tapped into the vast expertise and experience of your DBA in tuning the database;
Your statistics are out of date, and you can refresh the statistics more frequently or even fix the statistics to a representative state;
You do not intend to check the correctness of the hints in your statements on a regular basis, which means that, when statistics change, the hint may be woefully inadequate;
You have no intention of documenting the use of hints anyway.
Source link here.
I can summarize this as: the use of hints is not only a last resort, it usually also signals a lack of knowledge about the root cause of the issue. The CBO (cost-based optimizer) does an excellent job if you just ensure some basics for it. Those include:
1. Fresh statistics
1.1. Index statistics
1.2. Table statistics
1.3. Histograms
2. Correct JOIN conditions and index utilization
3. Correct database settings
This article here is worth reading:
Top 10 Reasons for poor Oracle performance
Presented by none other than Mr. Donald Burleson.
Cheers
In general, hints should be used only in exceptional cases. I know the following situations where they make sense:
Workaround for Oracle bugs
Example: Once, for a SELECT statement, I got the error ORA-01795: maximum expression number in list - 1000, although the query did not contain an IN expression at all.
The problem was that the queried table contained more than 1000 (sub)partitions and Oracle applied a transformation to my query. Using the (undocumented) hint NO_EXPAND_TABLE solved the issue.
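Purely as a hedged sketch of what that workaround might look like (the table, the bind variable, and the alias-style argument to the hint are all assumptions, since the hint is undocumented):

    SELECT /*+ NO_EXPAND_TABLE(t) */ *   -- undocumented hint; argument form assumed to be the table alias
    FROM   heavily_partitioned_table t   -- hypothetical table with more than 1000 (sub)partitions
    WHERE  t.some_column = :value;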
Data warehouse applications during staging
During staging you can have significant changes to your data that the table/index statistics are not aware of, since statistics are gathered only once a week by default. If you know your data structure, hints can be useful because they are faster than running DBMS_STATS.GATHER_TABLE_STATS(...) manually all the time in between your operations. On the other hand, you can run DBMS_STATS.GATHER_TABLE_STATS() even for single columns, which might be the better approach.
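A minimal sketch of that narrower alternative, assuming a hypothetical staging schema, table, and column:

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => 'STAGE',                                 -- hypothetical schema
        tabname    => 'FACT_SALES',                            -- hypothetical table
        method_opt => 'FOR COLUMNS load_batch_id SIZE 254');   -- refresh only the column the load just changed
    END;
    /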
Online Application Upgrade Hints
From Oracle documentation:
The CHANGE_DUPKEY_ERROR_INDEX, IGNORE_ROW_ON_DUPKEY_INDEX, and RETRY_ON_ROW_CHANGE hints are unlike other hints in that they have a semantic effect. The general philosophy explained in "Hints" does not apply for these three hints.
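For example, IGNORE_ROW_ON_DUPKEY_INDEX lets a bulk insert skip rows that would violate a unique index instead of failing; the table, index, and source names here are made up for illustration:

    INSERT /*+ IGNORE_ROW_ON_DUPKEY_INDEX(customers, customers_pk) */
    INTO   customers (customer_id, name)
    SELECT customer_id, name
    FROM   staged_customers;
    -- rows that would raise ORA-00001 against customers_pk are silently skipped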

Is premature optimization in SQL as "evil" as it is in procedural programming languages?

I'm learning SQL at the moment and I've read that joins and subqueries can potentially be performance destroyers. I (somewhat) know the theory about algorithmic complexity in procedural programming languages and try to be mindful of that when programming, but I don't know how expensive different SQL queries can be. I'm deciding whether I should invest time in learning about SQL performance or just notice it when my queries run slow. The base question for me then is: is premature optimization for SQL as evil as it is for procedural languages?
As added information, I work in an environment where, most of the time, high performance is not an issue and the biggest tables I have to work with have some 150k rows.
Here's the Donald Knuth quote I refer to when saying "evil":
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
I would say that some general notions about performance are a must-have: they'll prevent you from writing really bad queries that can hurt your application (even if you don't have millions of rows in your tables).
It'll also help you design your database to be more efficiency-oriented: you'll have some idea of where to put indexes, for instance.
But you shouldn't have performance as a first goal: the first thing is to have an application that works; then, if needed, you'll optimize it (having some performance notions while developing will help you build an application that's easier to optimize, though).
Note that I would not say that "having notions about performance" is "premature optimization", as long as you don't just "optimize" but simply "write correctly"; I would rather call it a good practice that helps you write better quality code ;-)
What Knuth means is: it's really, really important to know about SQL optimisation but only when you need to. As you say, "most of the time ... high performance is not an issue."
It's that 3% of times when you do need high performance that it's important to know what rules to break and why.
However, unlike procedural languages, even at 150k rows it can be important to know a little about how your query is processed. For instance, free-text searching will be very slow compared with exact matches on indexed columns. It's at the final steps, e.g. sharding or full denormalisation, that most DBAs and developers draw the line.
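A tiny illustration of that gap, using an assumed articles table:

    SELECT * FROM articles WHERE body LIKE '%optimizer%';    -- leading wildcard: full scan, a plain B-tree index cannot help
    SELECT * FROM articles WHERE slug = 'oracle-optimizer';  -- exact match: an index on slug (if present) can be used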
I wouldn't say that SQL optimization has as many pitfalls as premature programming optimization. Designing your schema and queries ahead of time with performance in mind can help you avoid some really nasty redesigns later on. That being said, spending a day getting rid of a table scan can be utterly worthless to you in the long run if that query isn't a slow query, can be cached, or is rarely called in a manner that would impact your application.
I personally profile my queries and focus on the worst, and most used queries. Careful design ahead of time cuts out most of the worst.
I would say that you should make the SQL as easily readable as possible, and only worry about the performance once it hits you.
That said.
Be mindful of standard things as you develop, such as indexes, subselects, use of cursors where a standard query would do the job, etc.
It will not hurt to develop the original correctly, and you can optimize the problems later when it is needed.
EDIT
Also remember that maintainability of your SQL code is very important, and that debugging SQL is slightly more difficult than normal coding.
Knuth says "forget about 97%" but for a typical web app it's in the database IO where 97% of the request time is spent. This is where a little optimization effort can yield greatest results.
If this is the kind of app you're writing, I strongly suggest learning as much about how RDBMSes work as you can afford. Other people have given you excellent suggestions, and I'd add that I usually follow this list top-down when deciding how to spend my "optimization budget":
1. Schema design. Think twelve times about normalization and access strategies. This will save you many painful hours later.
2. Query readability. Related to #1, sometimes trying to reorganize your queries gives a better understanding of how the schema should look. It'll also help later when you ask for help.
3. Avoid subqueries in the SELECT list; use JOINs (see the sketch after this list).
4. If there are slow queries, reach for the profiler. Check for missing indexes first and, finally, if there are still slow queries, try to rewrite them.
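As a sketch of point 3, with assumed orders and customers tables:

    -- scalar subquery evaluated per row
    SELECT o.order_id,
           (SELECT c.name
            FROM   customers c
            WHERE  c.customer_id = o.customer_id) AS customer_name
    FROM   orders o;

    -- the same result as a join; a LEFT JOIN keeps orders with no matching
    -- customer, just as the scalar subquery would (it returns NULL for them)
    SELECT    o.order_id, c.name AS customer_name
    FROM      orders o
    LEFT JOIN customers c ON c.customer_id = o.customer_id;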
Keep in mind also, that database performance very much depends on data distribution and number of simultaneous requests (because of locking). Even though a query completes in 1 sec. on your underpowered netbook it could take 15 seconds on the 8-core server. If possible, check your queries on actual data. If you know that concurrency is going to be high it's (paradoxically) better to use many small queries than one big one.
I agree with everything that's said here, and I'd like to add: make sure that your SQL is well-encapsulated so that, when you discover what needs to be optimized, there's only one place you need to change it, and the change will be transparent to whatever code calls it.
Personally, I like to encapsulate all of my SQL in PL/SQL procedures, but there are some who disagree with that. Whatever you do, I recommend trying to avoid putting your SQL "inline" with other sourcecode. That seems to always lead to cut-and-pasting and quickly becomes hard to maintain. Put your SQL elsewhere, and try to re-use it as much as possible.
Also, read up on indexes: how they really work, and when you should and shouldn't use them. A lot of people's first instinct when they get a slow query is to index the table to death. That might solve the problem in the short term, but long-term an over-indexed table will be slow to insert into and update. A few well-chosen indexes are much better than indexing every field. Try reading "Refactoring SQL Applications" by Stephane Faroult.
Finally, as said above, a properly normalized database design will help you avoid 99% of your slow queries. Denormalization is necessary sometimes, but it's important that you know the rules before you break them.
Good luck!

Why no love for SQL? [closed]

I've heard a lot lately that SQL is a terrible language, and it seems that every framework under the sun comes pre-packaged with a database abstraction layer.
In my experience though, SQL is often the much easier, more versatile, and more programmer-friendly way to manage data input and output. Every abstraction layer I've used seems to be a markedly limited approach with no real benefit.
What makes SQL so terrible, and why are database abstraction layers valuable?
This is partly subjective. So this is my opinion:
SQL has a pseudo-natural-language style. The inventors believed that they could create a language just like English and that database queries would then be very simple. A terrible mistake. SQL is very hard to understand except in trivial cases.
SQL is declarative. You can't tell the database how it should do things, only what you want as the result. This would be perfect and very powerful - if you didn't have to care about performance. So you end up writing SQL, reading execution plans, and rephrasing the SQL to try to influence the execution plan, wondering why you can't just write the execution plan yourself.
Another problem of a declarative language is that some problems are easier to solve in an imperative manner. So you either write those parts in another language (you'll need standard SQL and probably a data access layer) or use vendor-specific language extensions, say by writing stored procedures and the like. Doing so you will probably find that you're using one of the worst languages you've ever seen - because it was never designed to be used as an imperative language.
SQL is very old. It has been standardized, but too late: many vendors had already developed their own language extensions, so SQL ended up with dozens of dialects. That's why applications are not portable, and one reason to have a DB abstraction layer.
But it's true - there are no feasible alternatives. So we all will use SQL for the next few years.
Aside from everything that was said, a technology doesn't have to be bad to make an abstraction layer valuable.
If you're doing a very simple script or application, you can afford to mix SQL calls into your code wherever you like. However, if you're building a complex system, isolating the database calls in separate module(s) is good practice, and so is isolating your SQL code. It improves your code's readability, maintainability and testability. It allows you to quickly adapt your system to changes in the database model without breaking all the high-level stuff, etc.
SQL is great. Abstraction layers over it make it even greater!
One point of abstraction layers is the fact that SQL implementations tend to be more or less incompatible with each other, since the standard is slightly ambiguous and also because most vendors have added their own (nonstandard) extras. That is, SQL written for a MySQL DB might not work quite the same with, say, an Oracle DB — even if it "should".
I agree, though, that SQL is way better than most of the abstraction layers out there. It's not SQL's fault that it's being used for things that it wasn't designed for.
SQL gets badmouthed from several sources:
Programmers who are not comfortable with anything but an imperative language.
Consultants who have to deal with many incompatible SQL-based products on a daily basis
Nonrelational database vendors trying to break the stranglehold of relational database vendors on the market
Relational database experts like Chris Date who view current implementations of SQL as insufficient
If you stick to one DBMS product, then I definitely agree that SQL DBs are more versatile and of higher quality than their competition, at least until you hit a scalability barrier intrinsic in the model. But are you really trying to write the next Twitter, or are you just trying to keep some accounting data organized and consistent?
Criticism of SQL is often a stand-in for criticism of RDBMSes. What critics of RDBMSes seem not to understand is that they solve a huge class of computing problems quite well, and that they are here to make our lives easier, not harder.
If they were serious about criticizing SQL itself, they'd back efforts like Tutorial D and Dataphor.
It's not so terrible. It's an unfortunate trend in this industry to rubbish the previous reliable technology when a new "paradigm" comes out. At the end of the day, these frameworks are most probably using SQL to communicate with the database, so how can it be THAT bad? That said, having a "standard" abstraction layer means that a developer can focus on the application code and not the SQL code. Without such a standard layer you'd probably write a lightweight one each time you develop a system, which is a waste of effort.
SQL is designed for the management and querying of set-based data. It is often used to do more, and the edge cases lead to frustration at times.
Actual USE of SQL can be SO impacted by the underlying database design that the SQL may not be the issue, but the design might be - and when you toss in the legacy code associated with a bad design, changes become more disruptive and costly to implement (no one likes to go back and "fix" stuff that is "working" and meeting objectives).
Carpenters can pound nails with hammers, saw lumber with saws and smooth boards with planes. It IS possible to "saw" using hammers and planes, but dang it is frustrating.
I won't say it's terrible. It's unsuitable for some tasks. For example: you cannot write good procedural code with SQL. I was once forced to do set manipulation with SQL; it took me a whole weekend to figure that out.
SQL was designed for relational algebra - that's where it should be used.
I've heard a lot lately that SQL is a terrible language, and it seems that every framework under the sun comes pre-packaged with a database abstraction layer.
Note that these layers just convert their own stuff into SQL. For most database vendors SQL is the only way to communicate with the engine.
In my experience though, SQL is often the much easier, more versatile, and more programmer-friendly way to manage data input and output. Every abstraction layer I've used seems to be a markedly limited approach with no real benefit.
… for the reason I just described above.
The database layers don't add anything; they just limit you. They arguably make the queries simpler, but never more efficient.
By definition, there is nothing in the database layers that is not in SQL.
What makes SQL so terrible, and why are database abstraction layers valuable?
SQL is a nice language; however, it takes a certain mental twist to work with it.
In theory, SQL is declarative, that is you declare what you want to get and the engine provides it in the fastest way possible.
In practice, there are many ways to formulate a correct query (that is the query that return correct results).
The optimizers are able to build a Lego castle out of some predefined algorithms (yes, there are several of them), but they just cannot invent new algorithms. It still takes an SQL developer to assist them.
However, some people expect the optimizer to produce "the best plan possible", not "the best plan available for this query with given implementation of the SQL engine".
And as we all know, when the computer program does not meet people's expectations, it's the program that gets blamed, not the expectations.
In most cases, however, reformulating a query can indeed produce the best plan possible. There are tasks for which it's impossible, but with the new and growing improvements to SQL these cases get fewer and fewer in number.
It would be nice, though, if the vendors provided some low-level access to functions like "get the index range" or "get a row by rowid", the way C compilers let you embed assembly right into the language.
I recently wrote an article on this in my blog:
Double-thinking in SQL
I'm a huge ORM advocate and I still believe that SQL is very useful, although it's certainly possible to do terrible things with it (just like anything else).
I look at SQL as a super-efficient language that does not have code re-use or maintainability/refactoring as priorities.
So lightning fast processing is the priority. And that's acceptable. You just have to be aware of the trade-offs, which to me are considerable.
From an aesthetic point of view, as a language I feel that it is lacking some things since it doesn't have OO concepts and so on -- it feels like very old school procedural code to me. But it's far and away the fastest way to do certain things, and that's a powerful niche!
SQL is excellent for certain kinds of tasks, especially manipulating and retrieving sets of data.
However, SQL is missing (or only partially implements) several important tools for managing change and complexity:
Encapsulation: SQL's encapsulation mechanisms are coarse. When you write SQL code, you have to know everything about the implementation of your data. This limits the amount of abstraction you can achieve.
Polymorphism: if you want to perform the same operation on different tables, you've got to write the code twice. (One can mitigate this with imaginative use of views - see the sketch after this list.)
Visibility control: there's no standard SQL mechanism for hiding pieces of the code from one another or grouping them into logical units, so every table, procedure, etc. is accessible from every other one, even when it's undesirable.
Modularity and Versioning
Finally, manually coding CRUD operations in SQL (and writing the code to hook it up to the rest of one's application) is repetitive and error-prone.
A modern abstraction layer provides all of those features, and allows us to use SQL where it's most effective while hiding the disruptive, repetitive implementation details. It provides tools to help overcome the object-relational impedance mismatch that complicates data access in object-oriented software development.
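As a small sketch of the view-based workaround mentioned above (the tables and columns are made up for illustration):

    -- present two similarly shaped tables through one "polymorphic" view
    CREATE OR REPLACE VIEW all_parties AS
      SELECT customer_id AS party_id, name, 'CUSTOMER' AS party_type FROM customers
      UNION ALL
      SELECT supplier_id AS party_id, name, 'SUPPLIER' AS party_type FROM suppliers;

    -- callers that only need "a party" query the view instead of duplicating logic
    SELECT party_id, name FROM all_parties WHERE party_type = 'SUPPLIER';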
I would say that a database abstraction layer included with a framework is a good thing because it solves two very important problems:
It keeps the code distinct. By putting the SQL into another layer, which is generally very thin and should only be doing the basics of querying and handoff of results (in a standardized way), you keep your application free from the clutter of SQL. It's the same reason web developers (should) put CSS and Javascript in separate files. If you can avoid it, do not mix your languages.
Many programmers are just plain bad at using SQL. For whatever reason, a large number of developers (especially web developers) seem to be very, very bad at using SQL, or RDBMSes in general. They treat the database (and SQL by extension) as the grubby little middleman they have to go through to get to data. This leads to extremely poorly thought out databases with no indexes, tables stacked on top of tables in dubious manners, and very poorly written queries. Or worse, they try to be too general (Expert System, anyone?) and cannot reasonably relate data in any meaningful way.
Unfortunately, sometimes the way that someone tries to solve a problem and tools they use, whether due to ignorance, stubbornness, or some other trait, are in direct opposition with one another, and good luck trying to convince them of this. As such, in addition to just being a good practice, I consider a database abstraction layer to be a sort of safety net, as it not only keeps the SQL out of the poor developer's eyes, but it makes their code significantly easier to refactor, since all the queries are in one place.
SQL is based on Set Theory, while most high level languages are object oriented these days. Object programmers typically like to think in objects, and have to make a mental shift to use Set based tools to store their objects. Generally, it is much more natural (for the OO programmer) to just cut code in the language of their choice and do something like object.save or object.delete in application code instead of having to write sql queries and call the database to achieve the same result.
Of course, sometimes for complex things, SQL is easier to use and more efficient, so it is good to have a handle on both types of technology.
IMO, the problem that I see that people have with SQL has nothing to do with relational design nor the SQL language itself. It has to do with the discipline of modeling the data layer which in many ways is fundamentally different than modeling a business layer or interface. Mistakes in modeling at the presentation layer are generally much easier to correct than at the data layer where you have multiple applications using the database. These problems are the same as those encountered in modeling a service layer in SOA designs where you have to account for current consumers of your service and the input and output contracts.
SQL was designed to interact with relational database models. There are other data models that have existed for some time, but the discipline about designing the data layer properly exists regardless of the theoretical model used and thus, the difficulties that developers typically have with SQL are usually related to attempts to impose a non-relational data model onto a relational database product.
For one thing, they make it trivial to use parameterized queries, protecting you from SQL injection attacks. Using raw SQL, from this perspective, is riskier, that is, easier to get wrong from a security perspective. They also often present an object-oriented perspective on your database, relieving you of having to do this translation.
Heard a lot recently? I hope you're not confusing this with the NoSQL movement. As far as I'm aware, that is mainly a bunch of people who use NoSQL for high-scalability web apps and appear to have forgotten that SQL is an effective tool outside the "high-scalability web app" scenario.
The abstraction-layer business is just about sorting out the difference between object-oriented code and the table/set-based code that SQL likes to talk. Usually this results in writing lots of boilerplate and dull transition code between the two. ORMs automate this and thus save time for the business-object people.
For an experienced SQL programmer the bad sides are:
Verbosity
As many have said here, SQL is declarative, which means optimizing is not direct. It's like rallying compared to circuit racing.
Frameworks that try to address all possible dialects and don't support shortcuts of any of them
No easy version control.
For others, the reasons are that
some programmers are bad at SQL. Probably because SQL operates on sets, while programming languages work in the object or functional paradigm. Thinking in sets (union, product, intersect) is a matter of habit that some people don't have.
some operations aren't self-explanatory: e.g., at first it's not clear that WHERE and HAVING filter different sets.
there are too many dialects
The primary goal of SQL frameworks is to reduce your typing. They somehow do, but too often only for very simple queries. If you try doing something complex, you have to use strings and type a lot. Frameworks that try to handle everything possible, like SQL Alchemy, become too huge, like another programming language.
[update on 26.06.10] Recently I worked with the Django ORM module. This is the only worthy SQL framework I've seen, and it makes working with data a lot easier. Complex aggregates are a bit harder, though.
SQL is not a terrible language, it just doesn't play too well with others sometimes.
If, for example, you have a system that wants to represent all entities as objects in some OO language or another, then combining this with SQL without any kind of abstraction layer can become rather cumbersome. There's no easy way to map a complex SQL query onto the OO world. To ease the tension between those worlds, additional layers of abstraction are inserted (an OR mapper, for example).
SQL is a really good language for data manipulation. From a developer perspective, what I don't like about it is that changing the database doesn't break your code at compile time... So I use an abstraction which adds this feature, at the price of performance and perhaps of the expressiveness of the SQL language, because in most applications you don't need all the stuff SQL offers.
The other reason why SQL is hated is relational databases themselves.
The CAP theorem has become popular:
What goals might you want from a shared-data system?
Strong Consistency: all clients see the same view, even in the presence of updates.
High Availability: all clients can find some replica of the data, even in the presence of failures.
Partition-tolerance: the system properties hold even when the system is partitioned.
The theorem states that you can always have only two of the three CAP properties at the same time.
Relational databases address strong consistency and partition tolerance.
So more and more people realize that the relational database is not a silver bullet, and more and more people begin to reject it in favor of high availability, because high availability makes horizontal scaling easier. Horizontal scaling has gained popularity because we have reached the limits of Moore's law, so the best way to scale is to add more machines.
If relational databases are rejected, SQL is rejected too.
Quick, write me SQL to paginate a dataset that works in MySQL, Oracle, MSSQL, PostgreSQL, and DB2.
Oh, right, standard SQL doesn't define any operators to limit the number of results coming back and which row to start at.
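To make the point concrete, here is roughly how the same "rows 41-60" page looks across engines (the table and ordering column are assumed; OFFSET ... FETCH is the later standard form supported by recent SQL Server, Oracle, DB2 and PostgreSQL versions):

    -- MySQL / PostgreSQL
    SELECT * FROM orders ORDER BY order_id LIMIT 20 OFFSET 40;

    -- SQL Server 2012+, Oracle 12c+, recent DB2 (standard OFFSET ... FETCH)
    SELECT * FROM orders ORDER BY order_id
    OFFSET 40 ROWS FETCH NEXT 20 ROWS ONLY;

    -- older Oracle, with ROWNUM
    SELECT *
    FROM  (SELECT t.*, ROWNUM rn
           FROM  (SELECT * FROM orders ORDER BY order_id) t
           WHERE ROWNUM <= 60)
    WHERE rn > 40;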
• Every vendor extends the SQL syntax to suit their needs. So unless you're doing fairly simple things, your SQL code is not portable.
• The syntax of SQL is not orthogonal; e.g., the SELECT, INSERT, UPDATE, and DELETE statements all have completely different syntactical structures.
I agree with your points, but to answer your question, one thing that makes SQL so "terrible" is the lack of complete standardization of T-SQL between database vendors (Sql Server, Oracle etc.), which makes SQL code unlikely to be completely portable. Database abstraction layers solve this problem, albeit with a performance cost (sometimes a very severe one).
Living with pure SQL can really be a maintenance hell. For me the greatest advantage of ORMs is the ability to safely refactor code without tedious "DB refactoring" procedures. There are good unit-testing frameworks and refactoring tools for OO languages, but I have yet to see ReSharper's counterpart for SQL, for example.
Still, all DALs have SQL behind the scenes, and you still need to know it to understand what's happening to your database, but daily work with a good abstraction layer becomes easier.
If you haven't used SQL too much, I think the major problem is the lack of good developer tools.
If you have lots of experience with SQL, you will have, at one point or another, been frustrated by the lack of control over the execution plan. This is an inherent problem in the way SQL was specified to the vendors. I think SQL needs to become a more robust language to truly harness the underlying technology (which is very powerful).
SQL has many flaws, as some other posters here have pointed out. Still, I much prefer to use SQL over many of the tools that people offer as alternatives, because the "simplifications" are often more complicated than the thing they were supposed to simplify.
My theory is that SQL was invented by a bunch of ivory-tower blue-skiers. The whole non-procedural structure. Sounds great: tell me what you want rather than how you want to do it. But in practice, it's often easier to just give the steps. Often this seems like trying to give car maintenance instructions by describing how the car should perform when you're done. Yes, you could say, "I want the car to once again get 30 miles per gallon, and to run with this humming sound like this ... hmmmm ... and, etc" But wouldn't it be easier for everyone to just say, "Replace the spark plugs" ? And even when you do figure out how to express a complex query in non-procedural terms, the database engine often comes up with a very inefficient execution plan to get there. I think SQL would be much improved by the addition of standardized ways to tell it which table to read first and what index to use.
And the handling of nulls drives me crazy! Yes, theoretically it must have sounded great when someone said, "Hey, if null means unknown, then adding an unknown value to a known value should give an unknown value. After all, by definition, we have no idea what the unknown value is." Theoretically, absolutely true. In practice, if we have 10,000 customers and we know exactly how much money 9,999 of them owe us but there's some question about the amount owed by the last one, and management says, "What are our total accounts receivable?", yes, the mathematically correct answer is "I don't know". But the practical answer is "we calculate $4,327,287.42, but one account is in question, so that number isn't exact". I'm sure management would much rather get a close if not certain number than a blank stare. But SQL insists on this mathematically pristine approach, so for every operation you do, you have to add extra code to check for nulls and handle them specially.
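A minimal sketch of the kind of extra null handling being described, against an assumed accounts table where the disputed amount is stored as NULL:

    SELECT balance + adjustment         FROM accounts;  -- any NULL adjustment makes the whole expression NULL
    SELECT balance + NVL(adjustment, 0) FROM accounts;  -- explicitly treat "unknown" as zero (COALESCE is the portable form)

    SELECT SUM(amount_owed)             FROM accounts;  -- aggregate functions simply skip NULL values
    SELECT COUNT(*) - COUNT(amount_owed) AS accounts_in_question
    FROM   accounts;                                    -- how many rows that total silently ignored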
All that said, I'd still rather use SQL than some layer built on top of SQL that just creates another whole set of things I need to learn. I then have to know that ultimately it will be translated to SQL; sometimes I can trust it to do the translation correctly and efficiently, but when things get complex I can't, so now I have to know the extra layer, I still have to know SQL, and I have to know how it's going to translate so I can trick the layer into tricking SQL into doing the right thing. Arggh.
There's no love for SQL because SQL is bad in syntax, semantics and current usage. I'll explain:
its syntax is COBOL shrapnel, and all the COBOL criticism applies here (to a lesser degree, to be fair). Trying to be natural-language-like without actually attempting to interpret natural language creates arbitrary syntax (is it DROP TABLE or DROP, UPDATE TABLE or UPDATE or UPDATE IN, DELETE or DELETE FROM ...) and syntactical monstrosities like SELECT (how many pages does it fill?);
the semantics are also deeply flawed. Date explains it in great detail, but it suffices to note that a three-valued boolean logic doesn't really fit a relational algebra in which a row can only be, or not be, part of a table;
having a programming language as the main (and often only) interface to databases proved to be a really bad choice and it created a new category of security flaws
I'd agree with most of the posts here that the debate over the utility of SQL is mostly subjective, but I think it's subjective mainly in relation to your business needs.
Declarative languages, as Stefan Steinegger has pointed out, are good for specifying what you want, not how you want to do it. This means that the various implementations of SQL are decent from a high-level perspective: that is, if all you want is to get some data and nothing else matters, you can satisfy yourself with writing relatively simple queries and choosing the implementation of SQL that is right for you.
If you work at a much "lower" level and you need to optimize all of that yourself, it's far from ideal. Using a further layer of abstraction can help, but if what you're really trying to do is specify the method for optimizing queries and so forth, it's a little counterintuitive to add a middleman when trying to optimize.
The biggest problem I have with SQL is like other "standardized" languages, there are very few real standards. I'd almost prefer having to learn a whole new language between Sybase and MySQL so that I don't get the two conventions confused.
While SQL does get the job done it certainly has issues...
it tries to simultaneously be the high level and the low level abstraction, and that's ... odd. Perhaps it should have been two or more standards at different levels.
it is a huge failure as a standard. Lots of things go wrong when a standard either stirs in everything, asks too much of implementations, asks too little, or for some reason does not accomplish the partially social goal of motivating vendors and implementors to produce strictly conforming interoperable complete implementations. You certainly cannot say SQL has done any of that. Look at some other standards and note that success or failure of the standard is clearly a factor of the useful cooperation attained:
RS-232 (Bad, not nearly enough specified, even which pin transmits and which pin receives is optional, sheesh. You can comply but still achieve nothing. Chance of successful interop: really low until the IBM PC made a de-facto useful standard.)
IEEE 754-1985 Floating Point (Bad, overreach: not a single supercomputer or scientific workstation or RISC microprocessor ever adopted it, although eventually after 20 years we were able to implement it nicely in HW. At least the world eventually grew into it.)
C89, C99, PCI, USB, Java (Good, whether standard or spec, they succeeded in motivating strict compliance from almost everyone, and that compliance resulted in successful interoperation.)
it failed to be selected for arguably the most important database in the world. While this is more of a datapoint than a reason, the fact that Google Bigtable is not SQL and not relational is kind of an anti-achievement for SQL.
I don't dislike SQL, but I also don't want to have to write it as part of what I am developing. The DAL is not about speed to market - actually, I have never thought that there would be a DAL implementation that would be faster than direct queries from the code. But the goal of the DAL is to abstract. Abstraction comes at a cost, and here it is that it will take longer to implement.
The benefits are huge, though. Writing native tests around the code, using expressive classes, strongly typed datasets, etc. We use a "DAL" of sorts, which is a pure DDD implementation using Generics in C#. So we have generic repositories, unit of work implementations (code based transactions), and logical separation. We can do things like mock out our datasets with little effort and actually develop ahead of database implementations. There was an upfront cost in building such a framework, but it is very nice that business logic is the star of the show again. We consume data as a resource now, and deal with it in the language we are natively using in the code. An added benefit of this approach is the clear separation it provides. I no longer see a database query in a web page, for example. Yes, that page needs data. Yes, the database is involved. But now, no matter where I am pulling data from, there is one (and only one) place to go into the code and find it. Maybe not a big deal on smaller projects, but when you have hundreds of pages in a site or dozens of windows in a desktop application, you truly can appreciate it.
As a developer, I was hired to implement the requirements of the business using my logical and analytical skills - and our framework implementation allows for me to be more productive now. As a manager, I would rather have my developers using their logical and analytical skills to solve problems than to write SQL. The fact that we can build an entire application that uses the database without having the database until closer to the end of the development cycle is a beautiful thing. It isn't meant as a knock against database professionals. Sometimes a database implementation is more complex than the solution. SQL (and in our case, Views and Stored Procs, specifically) are an abstraction point where code can consume data as a service. In shops where there is a definite separation between the data and development teams, this helps to eliminate sitting in a holding pattern waiting for database implementation and changes. Developers can focus on the problem domain without hovering over a DBA and the DBA can focus on the correct implementation without a developer needing it right now.
Many posts here seem to argue that SQL is bad because it doesn't have "code optimization" features, and that you have no control over execution plans.
What SQL engines are good at is to come up with an execution plan for a written instruction, geared towards the data, the actual contents. If you care to take a look beyond the programming side of things, you will see that there is more to data than bytes being passed between application tiers.

What simple guidelines would you give your developers for writing good SQL against Oracle? [closed]

I work in a group of about 25 developers. I'm responsible for coming up with the database design (tables, views, etc.) and am called upon for performance tuning when necessary.
There are a couple of different applications that connect. Database access is via JDBC, hibernate, and iBatis SQL maps. Developers with various levels of experience write SQL statements.
What guidelines would you give to developers to write good SQL?
By good I mean: correct, performs well, easy to understand and maintain.
These are just meant to be easy to follow guidelines - I want to get people onto the right track for the majority of situations. We will break these guidelines when it makes sense.
EDIT: We have in place code reviews for all source commits (SQL, java, etc) enforced through a jira workflow.
If you have 25 developers writing SQL queries against your database you are in quite a bit of trouble. Guidelines are not worth much when your junior developers are learning SQL and checking in a mess.
I would like to offer 4 recommendations
Use an ORM of sorts so all your devs write less SQL.
Invest in training, buy books, send people to courses.
Have all the SQL reviewed by the senior SQL developers, by all, I mean every SQL statement, no exceptions. This way your senior guys will be able to teach the juniors over time.
Have a single person, who lives and breathes Oracle, responsible for the database. By responsible I mean someone who knows every query, understands the whole structure, and is able to give expert advice.
Here are some additional things you may add to your existing guidelines/checklist.
Have you tested your queries on a large data set? How was performance?
Have you performed a quick index review on the tables that are being accessed? Are all the right indexes in place? Do you recommend any new indexes?
For high volume queries, are any covering indexes required?
Are you using "NOT IN" in cases where a "LEFT JOIN" should be used? (See the sketch after this checklist.)
Is your work transactionally sound? Are you missing a transaction somewhere?
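For that NOT IN point, a hedged sketch with assumed customers/orders tables; the anti-join form also avoids the classic trap where a NULL in the subquery makes NOT IN return no rows:

    -- risky if orders.customer_id can ever be NULL
    SELECT c.customer_id
    FROM   customers c
    WHERE  c.customer_id NOT IN (SELECT o.customer_id FROM orders o);

    -- anti-join written as a LEFT JOIN; NOT EXISTS is an equally common rewrite
    SELECT    c.customer_id
    FROM      customers c
    LEFT JOIN orders o ON o.customer_id = c.customer_id
    WHERE     o.customer_id IS NULL;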
Here's what I already have in my guidelines.
Work in sets, not row by row
The best way to make something go quicker is to avoid doing work you do not have to do
Databases love to join
Fully qualify and specify column names (so SQL does not break when additional columns are added)
Select only the data you need (never SELECT *, never more rows than you require, never every column just because it's there)
How to use rownum to limit resultsets
Bind Variables vs Literals (use bind variables in all but a few special cases related to skewed data)
Avoid functions or calculations on columns in the WHERE clause (except for a special case of function based index)
Use ORDER BY for all queries returning more than one row (this is mostly for testability)
Each of these points is expanded a bit in the actual guidelines I've written out with an example relevant to our database schema.
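As one illustration of the kind of example those guidelines carry, here is the point about functions on WHERE-clause columns, with an assumed orders table and a plain index on order_date:

    -- wrapping the indexed column in a function hides the index from the optimizer
    SELECT * FROM orders
    WHERE  TRUNC(order_date) = DATE '2010-06-01';

    -- an equivalent range predicate keeps the plain index on order_date usable
    -- (the alternative is a function-based index on TRUNC(order_date))
    SELECT * FROM orders
    WHERE  order_date >= DATE '2010-06-01'
    AND    order_date <  DATE '2010-06-02';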
Read Tom Kyte's books. He explains how you can write fast code and how you can measure performance and scalability. If you have a problem, you can probably find the answer on the "Ask Tom" site.
Introduce basic style guide that covers:
naming (of everything: tables, columns, procedures, aliases, ...)
formatting style
line width
which reserved words require a new line (e.g. WHERE)
whether reserved words are capitalized or lowercase
indenting
...
Here are some examples:
Oracle PL/SQL Programming, Fourth Edition. There is an older, 2nd edition available online.
SQL and PL/SQL Coding Standards
Be very strict about naming, it will be easier for you to read other people's code.
As far as formatting is concerned, there are tools available that can format automatically, so maybe you don't need a very detailed description here.
If you are a database developer, you need to know what an EXECUTION PLAN is. If you don't then go mine coal or something.
Before developing:
first, you think about what the best execution plan will be;
second, you create tables and indexes; and
third, you use hints to persuade the optimizer to come up with the plan you designed.
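Checking what the optimizer actually chose is the feedback loop for all three steps; a minimal sketch (the query and tables are illustrative):

    EXPLAIN PLAN FOR
      SELECT c.name, SUM(o.amount)
      FROM   orders o
      JOIN   customers c ON c.customer_id = o.customer_id
      GROUP  BY c.name;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);  -- shows the plan Oracle would use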
You do use hints. Forget automatic optimization; it's a marketing myth. No optimizer knows your data better than you do, and it never will.
There are no "programmers who create queries" and "system administrators who create indexes". Programmers program, system administrators make backups (or whatever they make).
Triggers are evil.
Prefix your columns, tables and views (SELECT prs_name FROM t_person).
Break lines and indent.
An hour-long presentation on some Oracle fundamentals (e.g. parsing, SGA vs. PGA). "Do this" rules may or may not apply to your situation. Give them an understanding of what the DB side does, and they will at least have a basis on which to make a decision.
Plus Code reviews.
Pair-program. Any advantage it provides for agile development in general at least doubles for SQL development.
Second choice, code reviews for all SQL.
Along with the recommendation to have queries reviewed by senior programmers, if you can get the buy-in, have code reviews which involve as many team members as possible.
I'm by no means a guru but here are my tips:
Don't use ORDER BY unless you really need an ordered list as it incurs a performance hit.
Understand the explain plan, and also recognise that the plan in your development environment is often different from the one in your production environment. Don't expect it to accurately reflect real-life performance.
The pro of using hints is that you get to choose your explain plan; the con is that the optimal plan may change over time and you might be locking in a plan that is suboptimal in the long term.
Make sure the developers know when to use INNER JOIN, OUTER JOIN, [NOT] IN, [NOT] EXISTS - you can put in place a lot of processes but one or two Cartesian products will bring production performance to its knees
Ensure your developers understand indexes - what they are, when they should be used, when they should be avoided
Have a DBA monitor the most executed queries and the most expensive queries and highlight these as candidates for optimisation
Peer review
Coding standards (especially code comments on particularly long/complex queries)
Unit testing
Don't write SQL if you can help it; use HQL (or JPQL if on Java EE) whenever possible.
Don't use SELECT *
Pick your internet sources wisely (e.g. asktom.oracle.com)
Don't use cursors
Don't do string concatenation in SQL
Write queries such that they use indexes (fundamentally this means base WHERE predicates on the indexes that exist)
Use MERGE instead of other awkward 'upsert'-type logic (see the sketch after this list).
When working with dates, make sure you understand how they're stored in Oracle vs. how they are stored in Java, especially when it relates to TimeZone. Depending on the Calendar/Date types, this information can be stripped out, remapped to the TZ of the default locale, etc.
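A hedged sketch of that MERGE point, with made-up target and staging tables:

    MERGE INTO customers c
    USING staged_customers s
      ON (c.customer_id = s.customer_id)
    WHEN MATCHED THEN
      UPDATE SET c.name = s.name, c.email = s.email
    WHEN NOT MATCHED THEN
      INSERT (customer_id, name, email)
      VALUES (s.customer_id, s.name, s.email);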
Most importantly: Don't use the excuse of being a developer for not knowing how to write good SQL, and how the database works. You don't have to be a DBA, but you need to invest in your own training to make yourself suitable for the task. By the same token, your company needs to invest in that as well.
I don't mean to say that these "Don'ts" always apply. It's just that, if you're talking about a developer who is not comfortable with Oracle, they need to know what they're doing before they start deciding whether those types of things are necessary and appropriate.

Signs of a great SQL developer

Based on their work, how do you distinguish a great SQL developer?
Examples might include:
Seldom uses CURSORs, and tries to refactor them away.
Seldom uses temporary tables, and tries to refactor them away.
Handles NULL values in OUTER JOINs with confidence.
Avoids SQL extensions that are not widely implemented.
Knows how to indent with elegance.
I've found that a great SQL developer is usually also a great database designer, and will prefer to be involved in both the design and implementation of the database. That's because a bad database design can frustrate and hold back even the best developer - good SQL instincts don't always work right in the face of pathological designs, or systems where RI is poor or non-existent. So, one way to tell a great SQL developer is to test them on data modeling.
Also, a great DB developer has to have complex join logic down cold, and know exactly what the results of various multi-way joins will be in different situations. Lack of comfort with joins is the #1 cause of bad SQL code (and bad SQL design, for that matter).
As for specific syntax things, I'd hesitate at directives like:
Does not use CURSORs.
Does not use temporary tables.
Use of those techniques might allow you to tell the difference between a dangerously amateur SQL programmer (who uses them when simple relational predicates would be far better) and a decent starting SQL programmer (who knows how to do most stuff without them). However, there are many situations in real world usage where temp tables and cursors are perfectly adequate ways (sometimes, the only ways) to accomplish things (short of moving to another layer to do the processing, which is sometimes better anyway).
So, use of advanced concepts like these isn't forbidden, but unless you're clearly dealing with a SQL expert working on a really tough problem that, for some reason, doesn't lend itself to a relational solution ... yeah, they're probably warning signs.
I don't think that cursors, temporary tables or other SQL practices are inherently bad or that their usage is a clear sign of how good a database programmer is.
I think there is the right tool for every type of problem. Sure, if you only have a hammer, everything looks like a nail. I think a great SQL programmer or database developer is a person who knows which tool is the right one in a specific situation. IMHO you can't generalize excluding specific patterns.
But a rule of thumb may be: a great database developer will find a shorter and more elegant solution to complex situations than the average programmer.
Here are a few things that don't apply to run-of-the-mill software developers, but do apply to someone with good SQL skills:
Defines beneficial indexes, but not redundant or unused indexes.
Employs transactions effectively.
Values referential integrity.
Applies normalization to database design.
Thinks in terms of sets, not in terms of loops.
Uses JOIN confidently.
Knows how NULL and three-valued logic work.
Understands the uses and benefits of query parameters.
The examples you give, of not using cursors, temp tables, or knowing 3 alternative queries for a given task, I would not consider indications of being a great SQL developer. Perhaps I would call someone who does those things an "acrobat."
Just to add to the already great answers: a great developer can reduce a complex problem to something simple and easy to maintain.
Knows how to use INFORMATION_SCHEMA and table metadata in order to write generic code, or to generate code, to save on repetitive database tasks.