How do I 'refactor' SQL Queries?

I have several MS Access queries (in views and stored procedures) that I am converting to SQL Server 2000 (T-SQL). Due to Access's limitations regarding sub-queries, and/or the limitations of the original developer, many views have been created that function only as sub-queries for other views.
I don't have a clear business requirements spec, except to 'do what the Access application does', and half a page of notes on reports/CSV extracts, but the Access application doesn't even do properly what I suspect is required.
I therefore have to take a bottom-up approach and 'copy' the Access DB to T-SQL, whereas I would normally have a better understanding of the requirements and take a top-down approach, creating new queries to satisfy well-defined requirements.
Is there a method I can follow in doing this? Do I spread it all out and spend a few days 'grokking' it, or do I just continue copying the Access views and take an evolutionary approach to optimising the queries?

Work out what Access does with the queries, and then use this knowledge to check that you've transferred them properly. Only once you've done this can you think about refactoring. I'd start with the slow queries and go from there: work out what indexes you need and then progressively rewrite. This way you can deliver as soon as you've proved that you moved everything successfully (even if it is potentially a bit slower). That's much better than not being able to deliver at all because problem X came along.
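To make that index-first step concrete (the table, column, and index names below are invented for illustration, not taken from the question), the usual T-SQL pattern is: find the slow query, look at its WHERE/ORDER BY columns, and add a supporting index before touching the query text itself.

    -- A hypothetical report query that turned out to be slow after the move:
    SELECT OrderID, CustomerID, TotalDue
    FROM dbo.Orders
    WHERE OrderDate >= '2008-01-01'
    ORDER BY OrderDate;

    -- A supporting index on the filtered/sorted column often helps
    -- without rewriting the query at all:
    CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);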

I'd probably start with the Access database, exercise the queries in situ and see what the resultset is. Often you can understand what the query accomplishes and then work back to your own design to accomplish it. (To be thorough, you'll need to understand the intent pretty completely anyway.) And that sounds like the best statement of requirements you're going to get - "Just like it's implemented now."
Other than that, your approach is the best I can think of. Once they are in SQL Server, just start testing and grokking.

When you are dealing with a problem like this it's often helpful to keep things working as they are while you make incremental changes. This is better from a risk management perspective.
I'd concentrate on getting it working, then checking the database performance and optimizing performance problems. Then, as you add features and fix bugs, clean up the code that's hard to maintain. As you said, a sub-query is really very similar to a view. So if it's not broken you may not need to change it.

This depends on your timeline. If you have to get the project running absolutely as soon as possible (I know this is true for EVERY project, but if it's REALLY true for you), then yes, duplicate the functionality and infrastructure from Access then do your refactoring either later or as you go.
If you have SOME time you can dedicate to it, then refactoring it now will give you two things:
You'll be happier with the code, and it will (likely) perform better, since actual analysis was done rather than the transcoding equivalent of a copy-paste
You'll likely gain a greater understanding of what the true business rules are, since you'll almost certainly come across things that aren't in the spec (especially considering how you describe them)

I would recommend copying the views to SQL Server immediately, and then use its sophisticated tools to help you grok them.
For example, SQL Server can tell you which views, stored procedures, etc. rely on a particular view, so you can see from there whether the view is a one-off or whether it's actually used in more than one place. That will help you determine which views are more important than others.
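For instance, on SQL Server 2000 you can ask for an object's dependencies directly (the view name here is made up); just bear in mind that sysdepends, which sp_depends reads, can be incomplete if objects were created before the things they reference existed.

    -- Lists what the view references and what references the view.
    EXEC sp_depends 'dbo.vwCustomerOrders';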

Related

Query design practices in SQL

I am building queries for a database in MS Access 2007 and I am wondering if my current design practices are up to par. Basically, the database was configured before I came, but I have been given the responsibility of building efficient queries to extract the data.
My current queries are small and simple, each accomplishing 2-3 tasks (sometimes only 1) at a time. The reason I am taking this approach is because I am completely new to SQL, and I find it easier to work with many, simple queries and use reports to consolidate the data, as opposed to building extremely complex queries which are 1) hard to build (for me, anyways) and 2) hard to maintain.
I was just curious if anyone had any best practices for query design, and if you could give me some specific feedback on the approach listed above, and whether I should start making complex queries or just stick to simple queries and reports to consolidate the relevant data.
Thanks.
The people answering this question are not coming to it from an Access point of view, so I'll offer some observations as somebody who has been creating Access applications professionally full-time since 1996.
First off, there are several places where you'll have SQL in an Access application:
stored queries.
stored properties of forms, reports, combo boxes and list boxes.
in VBA code where you are writing SQL on the fly.
Managing all of these SQL statements in an organized fashion is difficult, if not impossible. But I'm not sure it's worth it!
First off, consider just stored queries. If you follow the advice of saving a query for every individual task so that each SQL statement is used in only one place, you'll soon have a mess in the list of queries, and you'll be forced into some kind of naming convention to keep track of what's what. Because of this, I generally don't save queries EXCEPT where they MUST be saved, or where the optimization that comes with a saved query is going to be helpful (i.e., large dataset or complex joins/filtering).
For example, when I first started programming in Access, I'd save all the rowsources of my combo boxes as saved queries. I developed a naming convention so they wouldn't be mixed in with the other queries in the list of queries, so it wasn't too hard to manage. At first, I thought I'd be re-using the saved queries, but it quickly became clear that I needed to make changes for individual circumstances, and changing a query that was used elsewhere might alter its results in other contexts, so really, there was no "shared code" benefit to the saved queries (as I thought there would be). The only place where it was helpful was where I had the same combo box on multiple forms; then I could save its rowsource as a saved query, and if I needed to alter it, I could do it in just one place. However, that was really only an advantage for a relatively complex rowsource -- a simple SELECT on a couple of fields doesn't really benefit from that kind of sharing, particularly when it's used in only a couple of different places.
In short, I quickly concluded that it was just easier to save the SQL statements where they were used -- since there was very little re-use in the first place (once I gained enough experience to realize the pitfalls of trying to re-use them), this worked much better, and it kept the SQL close to where it was being used.
For forms and reports, I do some of the same things, but in general I use saved queries to avoid having to write too many complex subselects for use as derived tables. Where I needed those, it was always easier to write the subselect, save it, and then use it with a JOIN in another SQL statement than to try to use the subselect inline as a derived table (which just makes for complicated SQL that's hard to read -- particularly when you can't comment or format your SQL, as is the case with saved Access queries).
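To make the contrast concrete (the query, table, and field names are invented, and the SQL is written in generic/SQL Server style rather than Access's own quirkier derived-table syntax), the inline version and the saved-query version of the same report look roughly like this:

    -- Inline derived table: hard to read, and you can't comment or format it
    -- inside a saved Access query.
    SELECT c.CustomerID, c.CustomerName, t.TotalOrders
    FROM Customers AS c
    INNER JOIN (SELECT CustomerID, COUNT(*) AS TotalOrders
                FROM Orders
                GROUP BY CustomerID) AS t
           ON t.CustomerID = c.CustomerID;

    -- The same thing with the subselect saved separately as qryOrderTotals:
    SELECT c.CustomerID, c.CustomerName, t.TotalOrders
    FROM Customers AS c
    INNER JOIN qryOrderTotals AS t
           ON t.CustomerID = c.CustomerID;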
In general, I don't save the recordsources of forms or reports except where there is real re-use going on (a report will often use the same recordsource as a form, so in that case, it's useful to save it, so that when you change the SQL of the form, the report that goes with it inherits the alteration).
That all leaves dynamic SQL assembled in VBA code. I use lots of this, from dynamically setting the rowsources of combo/listboxes, to setting the recordsources of subforms for filtering purposes. This is harder to manage, and sometimes I use string constants in the module to make that easier. For instance, in a case where you're writing dynamic SQL where everything remains the same except the WHERE clause, a constant with the SELECT and a second constant with the ORDER BY makes it a lot easier to assemble the complete SQL statement.
I don't know if this really answers your questions, but I have learned over the years that the benefits of re-using SQL statements are vastly outweighed by the uncertainty that comes from the inability to easily track where that SQL statement may be used. I find that storing the SQL statement as close to where it is used as possible is the best practice, as that is a form of "self-documentation" (though not a great one!).
I do make many exceptions and save queries when there is a real and demonstrable benefit in terms of performance or managing what would otherwise become much more complex SQL. However, I would also note that one should not go too far in the other direction, using tons of nested saved queries, because then you run into other problems (i.e., the "too many databases" problem, which is actually caused by using up the 2048 table handles available at one time -- that happens more easily than you might think).
In my humble opinion, it doesn't matter whether the DB engine is big and monstrous like MSSQL or Oracle, or tiny and simple like SQLite: every query (or stored procedure, or any other unit of data processing) should be responsible for only one function. I use this principle everywhere (not only in DB development) and I can say it works.
If you are not sure, try reading books about refactoring, Fowler for example. I suppose his principles are applicable to any area of development.
If you are storing your data in MS Access then your database cannot be very large, and any optimization you do is limited by the constraints MS Access imposes. If better (more optimized) queries are the goal, then migrating the data out of Access and into SQL Server may allow you better flexibility in development going forward. You can leverage cached execution plans, stored procedures, and views.
This may mean that you will need to enhance your T-SQL skills to accomplish this.
So weigh out the options you propose in your question:
1. Keep code simple (comfortable at your current skill level)
2. Meet the responsibility to create efficient queries for data extraction.
SQL Server Express could be a good starting point (it's free).

Raw SQL vs OOP based queries (ORM)?

I am doing a project that requires frequent database access, with insertions and deletions. Should I go for raw SQL commands, or should I prefer an ORM technique? Can the project work fine without any objects, using only SQL commands? Does this affect scalability in general?
EDIT: The project is one of those where I don't provide the content; the users generate the content, and the project is online. So the amount of content depends on the number of users. If the project has even 50,000 users, and every user can create or read content, what would be the most apt approach?
If you have no (or limited) experience with ORMs, then it will take time to learn the new API. Plus, you have to keep in mind that you sacrifice speed for 'magic'. For example, most ORMs will select the wildcard '*' for fields, even when you just need a list of titles from your Articles table.
And ORMs will always fail in niche cases.
Most of ORMs out there ( the ones based on ActiveRecord pattern ) are extremely flawed from OOP's point of view. They create a tight coupling between your database structure and class/model.
You can think of ORMs as technical debt. It will make the start of the project easier. But as the code grows more complex, you will begin to encounter more and more problems caused by limitations in the ORM's API. Eventually, you will have situations when it is impossible to do something with the ORM and you will have to start writing SQL fragments and entire statements directly.
I would suggest staying away from ORMs and implementing a DataMapper pattern in your code. This will give you separation between your Domain Objects and the Database Access Layer.
I'd say it's better to try to achieve the objective in the most simple way possible.
If using an ORM has no real added advantage, and the application is fairly simple, I would not use an ORM.
If the application is really about processing large sets of data, and there is no business logic, I would not use an ORM.
That doesn't mean that you shouldn't design your application properly, though. But again: if using an ORM doesn't give you any benefit, then why should you use it?
For speed of development, I would go with an ORM, in particular if most data access is CRUD.
This way you don't have to also develop the SQL and write data access routines.
Scalability shouldn't suffer, though you do need to understand what you are doing (you could hurt scalability with raw SQL as well).
If the project is oriented toward either:
- data editing (as in viewing simple tables of data and editing them)
- performance (as in designing the fastest algorithm to do a simple task)
then you could go with direct SQL commands in your code.
The thing you don't want to do is take this route on a large piece of software, where you end up with many classes and lots of code. If you are in that situation and you scatter SQL everywhere in your code, you will clearly regret it someday. You will have a hard time making changes to your domain model; any modification will become really hard (except for adding functionality or entities independent of the existing ones).
More information would be good, though, such as:
- What do you mean by frequent (how frequent)?
- What performance do you need?
EDIT
It seems you're making some sort of CMS service. My bet is you don't want to start stuffing your code with SQL. #teresko's pattern suggestion seems interesting: it separates your application logic from the DB (which is always good) while still giving you the possibility to customize every query. Nonetheless, adding a layer that fills in-memory objects can take more time than simply using the database result to write your page, but I don't think that small difference should matter in your case.
I'd suggest choosing a good pattern that separates your business logic and data access, like what #teresko suggested.
It depends a bit on timescale and your current knowledge of MySQL and ORM systems. If you don't have much time, just do whatever you know best, rather than wasting time learning a whole new set of code.
With more time, an ORM system like Doctrine or Propel can massively improve your development speed. When the schema is still changing a lot, you don't want to be spending a lot of time just rewriting queries. With an ORM system, it can be as simple as changing the schema file and clearing the cache.
Then when the design settles down, keep an eye on performance. If you do use ORM and your code is solid OOP, it's not too big an issue to migrate to SQL one query at a time.
That's the great thing about coding with OOP - a decision like this doesn't have to bind you forever.
I would always recommend using some form of ORM for your data access layer, as there has been a lot of time invested into the security aspect. That alone is a reason to not roll your own, unless you feel confident about your skills in protecting against SQL injection and other vulnerabilities.

Strategies for Fixing Problems / Tweaking NHibernate Apps in Production

First off, I am not a DBA, but I do work in an environment where DBAs do tune/make changes in the production database from time to time in ways that do not cause the need for an application rebuild/redeployment. Usually these changes consist of reworking indexes, changing procs, and sometimes changing the table structure in minor ways (usually abstracted from the app via procs).
Obviously, a team should strive to catch performance problems with NHibernate before they get into production using things like NHProf, SQL Profiler, and load tests. That being said, are there certain strategies that can be used to allow some amount of tweaking once the code is built and out running in production? Using stored procedures 100% of the time seems like it would allow the most flexibility for the DBA's, but obviously that would really kill the efficiency of NHibernate. From what I've read, updatable views (in SQL Server) don't really work that well with NHibernate either (this may-or-may-not be true).
I've read quite a bit about NHibernate and experimented with it over the years, but I have never put it into practice in a production environment. I have yet to come across a set of "best practices" to allow for maximum tweaking once deployed.
As an NHibernate user, how are you and your team dealing with issues if they arise in production? My production environment is made up of ASP.NET apps and SQL server, but I don't think the answers need to be restricted to that platform.
I am in a similar position, and in order to keep our DBA happy, I did the following:
Wrote some of the queries in HQL, some others in SQL (especially the performance-sensitive ones)
Externalized those queries to files, one file per query.
When your app needs to execute one of these queries, it just loads the appropriate file, optionally runs it through a pre-processor, and executes it.
With this approach, the DBA could theoretically tweak the queries just by modifying those files. That's quite similar to having stored procedures.
In practice, it's up to you to decide if you'll really give the DBA access to those files (if you catch my drift...)
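As a sketch of what one of those externalized files might contain (the file name, parameter names, and schema are all invented), the file is just the query text with named parameters that the application binds at run time:

    -- File: Queries/GetOverdueInvoices.sql (hypothetical)
    -- The app loads this text, binds :customerId and :cutoffDate, and runs it;
    -- a DBA can rework the joins or add hints here without an application rebuild.
    SELECT i.InvoiceId, i.Amount, i.DueDate
    FROM Invoices i
    WHERE i.CustomerId = :customerId
      AND i.DueDate < :cutoffDate
    ORDER BY i.DueDate;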
IMHO the DBA should just use the DBMS's profiling tools and report her findings back to the devs (as in "there's this query that is running 20 times/sec and does 10 joins -- is that really necessary? can it be cached? do you really need all those joins? can we denormalize this?", etc.).
I'm not in the deploy phase yet, but on my current project I've come up against this already and my solution presently has been to replace my queries with stored procs. As long as the shape of the data coming back from the DB remains the same it's not a big deal. Yeah you do lose some of that agility you enjoyed during development but I'm not sure it's as bad as it initially sounds. You'll have a code push when you first make the change of course, and then from that point it's just proc changes.
You can use a profiler like NHProf to see the SQL queries being executed, so you can show them to a DBA. This tool can also detect some problems, like N+1 selects.
Using a second-level cache can also be useful: http://web.archive.org/web/20110514214657/http://blogs.hibernatingrhinos.com/nhibernate/archive/2008/11/09/first-and-second-level-caching-in-nhibernate.aspx

Can you have too many stored procedures?

Is there such a thing as too many stored procedures?
I know there is no limit to the number you can have, but is there any performance or architectural reason not to create hundreds, or thousands?
To me the biggest limitation of having hundreds or thousands of stored procedures is maintainability. Even though that is not a direct performance hit, it should be a consideration. From an architectural standpoint, you have to plan not just for the initial development of the application, but for future changes and maintenance.
That being said, you should design/create as many as your application requires. Although with Hibernate, NHibernate, or .NET LINQ, I would try to keep as much of the stored-procedure logic in the code as possible, and only put it in the database when speed is a factor.
Yes you can.
If you have more than zero, you have too many
Do you mean besides the fact that you have to maintain all of them when you make a change to the database? That's the big reason for me. It's better to have fewer ones that can handle many scenarios than hundreds that can only handle one.
I would just mention that with all such things, maintenance becomes an issue. Who knows what they all are and what purpose they serve? What happens years down the road when they are just artifacts of a legacy system that no one can recall the purpose of, but everyone is scared to mess with?
I think the main question is not whether it's possible, but whether it should be done. That's just one thought, anyway.
I have just under 200 in a commercial SQL Server 2005 product of mine, which is likely to increase by about another 10-20 in the next few days for some new reports.
Where possible I write 'subroutine' sprocs, so that anytime I find myself putting 3 or 4 identical statements together in more than a couple of sprocs, it's time to turn those few statements into a subroutine, if you see what I mean.
I don't tend to use the sprocs to perform all the business logic as such; I just prefer to have sprocs doing anything that could be seen as 'transactional'. So whereas my client code (in Delphi, but whatever) might do the odd insert or update itself, as soon as something requires a couple of things to be updated or inserted 'at once', it's time for a sproc.
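A minimal sketch of that split, with invented object names (and only the bare-bones @@ERROR handling of the SQL Server 2000 era; real code would be more thorough, or use TRY/CATCH on 2005 and later): the 'subroutine' proc owns the shared statements, and the 'transactional' proc wraps the multi-step work.

    -- RXP_-style subroutine: a few statements shared by several RP_ procs.
    CREATE PROCEDURE RXP_DecrementStock
        @ProductID int,
        @Quantity  int
    AS
        UPDATE Products
        SET QuantityOnHand = QuantityOnHand - @Quantity
        WHERE ProductID = @ProductID;
    GO

    -- RP_-style 'transactional' proc: two changes that must succeed or fail together.
    CREATE PROCEDURE RP_ShipOrderLine
        @OrderLineID int,
        @ProductID   int,
        @Quantity    int
    AS
        BEGIN TRANSACTION;

        UPDATE OrderLines
        SET ShippedDate = GETDATE()
        WHERE OrderLineID = @OrderLineID;

        IF @@ERROR <> 0
        BEGIN
            ROLLBACK TRANSACTION;
            RETURN;
        END;

        EXEC RXP_DecrementStock @ProductID, @Quantity;

        IF @@ERROR <> 0
        BEGIN
            ROLLBACK TRANSACTION;
            RETURN;
        END;

        COMMIT TRANSACTION;
    GO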
I have a simple, crude naming convention to assist readability (and maintenance!).
The product's code name is 'Rachel', so we have
RP_whatever - these are general sprocs that update/insert data, or return small result sets
RXP_whatever - these are subroutine sprocs that serve as 'functions' or workers for the RP_ type procs
REP_whatever - these are sprocs that simply act almost as glorified views; they don't alter data, they return potentially complex result sets for reporting purposes, etc.
XX_whatever - these are internal test/development/maintenance sprocs that the client application (or the local dba) would not normally use
The naming is arbitrary; it's just to help distinguish these from the sp_ prefix that SQL Server uses.
I guess if I found I had 400-500 sprocs in the database I might become concerned, but a few hundred isn't a problem at all as long as you have a system for identifying what kind of activity each sproc is responsible for. I'd rather chase down schema changes in a few hundred sprocs (where the SQL Server tools can help you find dependencies etc) than try to chase down schema changes in my high-level programming language.
I suppose the deeper question here is- where else should all of that code go? Views can be used to provide convenient access to data through standard SQL. More powerful application languages and libraries can be used to generate SQL. Stored procedures tend to obscure the nature of the data and introduce an unnecessary layer of abstraction, complexity, and maintenance to a system.
Given the power of object relational mapping tools to generate basic SQL, any system that is overly dependent on stored procedures is going to be less efficient to develop. With ORM, there is simply a lot less code to write.
Hell yes, you can have too many. I worked on a system a couple of years ago that had something like 10,000 procs. It was insane. The people who wrote the system didn't really know how to program, but they did know how to write badly structured procs, so they put almost all of the application logic in the procs. Some of the procs ran to thousands of lines. Managing the mess was a nightmare.
Too many (and I can't draw you a specific line in the sand) is probably an indicator of poor design. Besides, as others have pointed out, there are better ways to achieve a granular database interface without resorting to massive numbers of procs.
My question is why aren't you using some sort of middleware platform if you're talking about hundreds or thousands of stored procedures?
Call me old fashioned but I thought the database should just hold the data and the program(s) should be the one making the business logic decisions.
In an Oracle database you can use packages to group related procedures together.
A few of those and your namespace would free up.
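For what that looks like (the package and procedure names are invented), a package spec publishes a group of related procedures under a single entry in the schema's namespace:

    CREATE OR REPLACE PACKAGE order_pkg AS
        PROCEDURE create_order(p_customer_id IN NUMBER, p_order_id OUT NUMBER);
        PROCEDURE cancel_order(p_order_id IN NUMBER);
        FUNCTION  order_total(p_order_id IN NUMBER) RETURN NUMBER;
    END order_pkg;
    /

Callers then reference order_pkg.create_order and so on, and the package body can be changed and recompiled without invalidating callers as long as the spec stays the same.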
Too much of any particular thing probably means you're doing it wrong. Too many config files, too many buttons on the screen...
Can't speak for other RDBMSs but an Oracle application that uses PL/SQL should be logically bundling procedures and functions into packages to prevent cascading invalidations and manage code better.
To me, using lots of stored procedures seems to lead you toward something equivalent to the PHP API: thousands of global functions which may have nothing to do with each other, all grouped together. The only way to relate them is to have some naming convention where you prefix each function with a module name, similar to the mysql_ functions in PHP. I think this is very hard to maintain, and very hard to keep consistent. Stored procedures work well for things that really need to take place on the server. A stored procedure for a simple select query, or even a select query with a join, is probably going too far. Only use stored procedures where you actually require advanced logic to be handled on the database server.
As others have said, it really comes down to the management aspect. After a while, it turns into finding the proverbial needle in the haystack. That's even more so when there are poor naming standards in place...
No, not that I know of. However, you should have a good naming convention for them. For example, starting them with usp_ is something that a lot of people like to do (faster than starting with sp_).
Maybe your reporting sprocs should be usp_reporting_, and your business objects should be usp_business_, etc. As long as you can manage them, there shouldn't be a problem with having a lot of them.
If they get too big you might want to split the database up into multiple databases. This might make more logical sense and can help with database size and such.
Yep, you can have toooooo many stored procedures.
Naming conventions can really help with this. For instance, if you have a naming convention where you always have the table name and InsUpd or Get, it's pretty easy to find the right stored procedure when you need it. If you don't have some kind of standard naming convention it would be really easy to come up with two (or more) procedures that do almost exactly the same thing.
It would be easy to find a couple of the following without having a standardized naming convention... usp_GetCustomersOrder, CustomerOrderGet_sp, GetOrdersByCustomer_sp, spOrderDetailFetch, GetCustOrderInfo, etc.
Another thing that starts happening when you have a lot of stored procedures is that some are never used and some are rarely used... If you don't have some way of tracking stored-procedure usage, you'll either end up with a lot of unused procedures... or worse, you'll get rid of one you think is never used and find out afterwards that it's only used once a year or once a quarter. :(
Jeff
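One hedged way to get that tracking on newer versions (SQL Server 2008 and later) is the procedure-stats DMV; note it only sees procedures whose plans are currently cached, so absence from the list is a hint, not proof, that a proc is unused.

    -- Procedures with cached plans, ordered by how often they've run since caching.
    SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
           ps.execution_count,
           ps.last_execution_time
    FROM sys.dm_exec_procedure_stats AS ps
    WHERE ps.database_id = DB_ID()
    ORDER BY ps.execution_count DESC;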
Not while you have sane naming conventions, up-to-date documentation and enough unit tests to keep them organized.
My first big project was developed with an object-relational mapper that abstracted the database, so we did everything object-oriented. It was easier to maintain and fix bugs, and it was especially easy to make changes, since all the data access and business logic was C# code. However, when it came to doing complex stuff the system felt slow, so we had to work around those things or re-architect the project, especially when the ORM had to do complex joins in the database.
I'm now working at a different company whose motto is "our applications are just frontends to the database", and it has its advantages and disadvantages. We have really fast applications, since all of the data operations are done in stored procedures (this is mainly with SQL Server 2005), but when it comes to making a change or a fix to the software it's harder, because you have to change both the C# code and the SQL stored procedures, so it's like doing the work twice. Contrary to when I used an ORM, you don't have refactoring or strongly typed objects in SQL; Management Studio and other tools help a lot, but normally you spend more time going this way.
So I would say that it depends on the needs of the project. If you are not going to keep really complex business data, then maybe you could avoid using stored procedures at all and use an ORM that makes developers' lives easier. If you are concerned about performance and need to save every resource, then you should use stored procedures. Of course, the quantity depends upon the architecture that you design. For example, if you always have stored procedures for CRUD operations, you need one for inserts, one for updates, one for deletions, one for selecting and one for listing, since these are the most common operations; if you have 100 business objects, multiply those five by that and you get 500 stored procedures just to manage the most basic operations on your objects. So yes, they could be too many.
If you have a lot of stored procedures, you'll find you are tying yourself to one database - some of them may not easily be transferred.
Tying yourself to one database isn't good design.
Moreover, if you have business logic on the database and in a business layer then maintenance becomes a problem.
I think as long as you name them uspXXX and not spXXX, the lookup by name will be direct, so there's no downside -- though you might wonder about generalizing the procs if you have thousands...
You have to put all that code somewhere.
If you are going to have stored procedures at all (I personally do favour them, but many do not), then you might as well use them as much as you need to. At least all the database logic is in one place. The downside is that there isn't typically much structure to a bucket full of stored procs.
Your call! :)

Languages other than SQL in postgres

I've been using PostgreSQL a little bit lately, and one of the things that I think is cool is that you can use languages other than SQL for scripting functions and whatnot. But when is this actually useful?
For example, the documentation says that the main use for PL/Perl is that it's pretty good at text manipulation. But isn't that more of something that should be programmed into the application?
Secondly, is there any valid reason to use an untrusted language? It seems like making it so that any user can execute any operation would be a bad idea on a production system.
PS. Bonus points if someone can make PL/LOLCODE seem useful.
#Mike: this kind of thinking makes me nervous. I've heard too many times "this should be infinitely portable", but when the question is asked -- do you actually foresee any porting? -- the answer is: no.
Sticking to the lowest common denominator can really hurt performance, as can the introduction of abstraction layers (ORMs, PHP PDO, etc.). My opinion is:
Evaluate realistically if there is a need to support multiple RDBMS's. For example if you are writing an open source web application, chances are that you need to support MySQL and PostgreSQL at least (if not MSSQL and Oracle)
After the evaluation, make the most of the platform you decided upon
And BTW: you are mixing relational with non-relational databases (CouchDB is not an RDBMS comparable with Oracle, for example), which further exemplifies the point that the perceived need for portability is often greatly overestimated.
"isn't that [text manipulation] more of something that should be programmed into the application?"
Usually, yes. The generally accepted "three-tier" application design for databases says that your logic should be in the middle tier, between the client and the database. However, sometimes you need some logic in a trigger or need to index on a function, requiring that some code be placed into the database. In that case all the usual "which language should I use?" questions come up.
If you only need a little logic, the most-portable language should probably be used (pl/pgSQL). If you need to do some serious programming though, you might be better off using a more expressive language (maybe pl/ruby). This will always be a judgment call.
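Two small illustrations of code that genuinely has to live in the database (the table and column names are made up): an index on an expression, and a trivial pl/pgSQL trigger function.

    -- The planner can only use lower(email) in lookups if the index knows about it,
    -- so this can't be pushed up into the middle tier.
    CREATE INDEX idx_users_email_lower ON users (lower(email));

    -- A pl/pgSQL trigger function that stamps every updated row.
    CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
    BEGIN
        NEW.updated_at := now();
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER users_touch
    BEFORE UPDATE ON users
    FOR EACH ROW EXECUTE PROCEDURE touch_updated_at();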
"is there any valid reason to use an untrusted language?"
As above, yes. Again, putting direct file access (for example) into your middle tier is best when possible, but if you need to fire things off based on triggers (that might need access to data not available directly to your middle tier), then you need untrusted languages. It's not ideal, and should generally be avoided. And you definitely need to guard access to it.
These days, any "unique" or "cool" feature in a DBMS makes me incredibly nervous. I break out in a rash and have to stop work until the itching goes away.
I just hate to be locked in to a platform unnecessarily. Suppose you build a big chunk of your system in PL/Perl inside the database. Or in C# within SQL Server, or PL/SQL within Oracle, there are plenty of examples*.
Now you suddenly discover that your chosen platform doesn't scale. Or isn't fast enough. Or something. Worse, there's a new kid on the database block (something like MonetDB, CouchDB, Cache, say, but much cooler) that would solve all your problems (even if your only problem, like mine, is having an uncool database platform). And you can't switch to it without recoding half your application.
(*Admittedly, the paid-for products are to some extent seeking to lock you in by persuading you to use their unique features, which is not an accusation that can directly be levelled at the free providers, but the effect is the same).
So that's a rant on the first part of the question. Heart-felt, though.
"is there any valid reason to use an untrusted language? It seems like making it so that any user can execute any operation would be a bad idea"
My goodness, yes it does! A sort of "Perl injection attack"? Almost worth doing it just to see what happens, I'd have thought.
For philosophical reasons outlined above I think I'll pass on the PL/LOLCODE challenge. Although I was somewhat amazed to discover it was a link to something extant.
From my perspective, I guess the answer is 'it depends'.
There is an argument that manipulation of the data belongs in the database layer, so that the business logic does not need to be overly concerned about how the manipulation happens, it just knows that it has.
Another very good reason to process data on the db layer is if the volume of data being crunched means that network bandwidth will become an issue. I once had to categorise very large amounts of data. Processing this in the application layer was severely restricted by the time required to transfer all the data across the network for processing.
I then wrote a binning algorithm in PL/pgSQL and it worked much faster.
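The original binning function isn't reproduced here, but a minimal pl/pgSQL sketch of the idea (invented table and column names, and simple equal-width binning via the built-in width_bucket) shows the shape: the rows never have to cross the network.

    CREATE OR REPLACE FUNCTION categorise_readings() RETURNS void AS $$
    BEGIN
        -- 10 equal-width bins over the range [0, 100)
        UPDATE readings
        SET bin = width_bucket(value, 0, 100, 10);
    END;
    $$ LANGUAGE plpgsql;

    SELECT categorise_readings();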
Regarding untrusted languages, I heard a podcast with Josh Berkus (a Postgres advocate) who discussed an application of PostgreSQL that brought in data from MySQL as part of its processing, so that the communication itself was handled by the Postgres server. I don't remember the full details; I think it was on the FLOSS Weekly podcast, which has quite an interesting discussion of the history of PostgreSQL and some of the uses it is put to.
The untrusted versions of the procedural languages allow you to access I/O on the system.
This can come in handy if you need a trigger or something similar to send an email or connect to a socket server to send a popup notification. There are tons of uses for this type of thing, and because of PostgreSQL's isolation levels you can safely do things like this.
You can put checkpoints in the function so that if the transaction fails, the email or whatever won't go out. The nice thing about doing this is that it removes the logic from the client and puts it on the server.
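As a small sketch of the kind of I/O a trusted language would refuse (the function name and file path are invented, and real code would want to validate its inputs), a PL/PerlU function can write straight to the filesystem, and a trigger can call it after the interesting change:

    CREATE OR REPLACE FUNCTION notify_ops(msg text) RETURNS void AS $$
        my ($msg) = @_;
        # plperlu is untrusted, so plain Perl file I/O is allowed here
        open(my $fh, '>>', '/var/log/app/db_events.log')
            or elog(ERROR, "could not open log file: $!");
        print $fh scalar(localtime) . " $msg\n";
        close($fh);
    $$ LANGUAGE plperlu;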
I think most additional languages are offered so that if you develop in that language on a regular basis, you can feel comfortable writing db functions, triggers, etc. The usefulness of these features is to provide a control over data as close to the data as possible.
An example of a useful stored procedure I recently wrote in an external language, which would not have been possible in pl/pgSQL, is a version of 'df' which allowed SQL table generators to pick the tablespace with the most free space available at runtime.
I used plperlu, and it was relatively simple, although I had to be careful with data typing.
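That original function isn't shown here, but a minimal plperlu sketch of the same idea (shelling out to df and returning the 'Available' column; the function name is invented, and the path is interpolated unchecked, which real code should not do) looks like this:

    CREATE OR REPLACE FUNCTION free_space_kb(path text) RETURNS bigint AS $$
        my ($path) = @_;
        my @out    = `df -Pk $path`;        # POSIX format, sizes in 1K blocks
        my @fields = split ' ', $out[1];    # second line carries the numbers
        return $fields[3];                  # the "Available" column
    $$ LANGUAGE plperlu;

A table generator could then order candidate tablespace locations by free_space_kb() and pick the roomiest one.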