How do I organize my queries in Oracle?

I have a base query already written (thanks, Justin Cave!).
Now I have to take that query and join it to different tables and subqueries, over and over, to run checks against our data, and more checks are likely to be added in the future. In the end there will be perhaps two dozen checks, with the findings summarized in an SSRS report. If this were MSSQL, I would put the results of the queries into a temp table and finally run a select on that temp table. After doing as much research as I could, I've decided the best way would be to use the WITH clause, join the factored subqueries to the other tables and queries to get results, then UNION all of the check queries together to get my result. However, this seems like it is going to be extremely messy and large. I'd use global temporary tables, but they seem to be frowned upon in Oracle. Perhaps you have a better method for modularizing and organizing this?
Per our licensing agreement we are not able to add new tables in Oracle (so I am told), but we are able to add views, stored procedures, and functions.
Thanks in advance!

If materialized views are not forbidden to use, you can use them to get all the advantages of a temp table.
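A minimal sketch, assuming a hypothetical check query and name (adjust the refresh mode to your needs):

CREATE MATERIALIZED VIEW mv_base_check
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
AS
SELECT t.id, COUNT(*) AS cnt
FROM   some_table t
GROUP  BY t.id;

-- refresh manually before the report runs:
-- EXEC DBMS_MVIEW.REFRESH('MV_BASE_CHECK');

Like a temp table, the result set is physically stored, so your two dozen check queries can read it without re-running the base query each time.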
But unless you need the results of a sub-query in several different queries, you can just use as many independent sub-queries per query as you like and operate on them as if they were tables. Most of the time you'll get pretty decent query plans.
Also, in my eyes, using a global temp table to speed up analysis 10x is worth it, as long as you don't expose sensitive data to someone who isn't trusted.

Roll them all up into various stored procedures and enclose them in Oracle packages.
Then you can have a package for each logical area of your application, e.g. PKG_USERS, PKG_ACCOUNTS, etc.
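A minimal sketch of one such package (the check name and table are hypothetical; each check returns a cursor the SSRS report can consume):

CREATE OR REPLACE PACKAGE pkg_users AS
  FUNCTION missing_email_check RETURN SYS_REFCURSOR;
END pkg_users;
/

CREATE OR REPLACE PACKAGE BODY pkg_users AS
  FUNCTION missing_email_check RETURN SYS_REFCURSOR IS
    rc SYS_REFCURSOR;
  BEGIN
    OPEN rc FOR
      SELECT u.user_id, u.user_name
      FROM   users u
      WHERE  u.email IS NULL;  -- one self-contained data check
    RETURN rc;
  END missing_email_check;
END pkg_users;
/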
It is also easier to track changes because you can put these under version control and see all changes at a glance.
It works for me, hopefully it helps you...

Related

Refactoring big database views

I am an intern (which means I have no decision-making power; I do know SQL and PL/SQL) and I've been given the task of refactoring some huge database views.
Basically, the problem is that all the views rely on one big view that queries all the data, and the other views just filter it further (these views all query data for letters and emails). The views are not documented, so I need to work out what they do and rebuild them so that there is no longer one query pulling everything from the database.
My problem is that those views (each about 600 lines, by the way) are written very weirdly (as far as I can tell, by a non-programmer) and I have no idea where to start. What is the first thing to do? What would you suggest?
This is one of those "stick your finger in the air" kind of questions; it's almost impossible to say what you should do, as it depends entirely on the SQL used to create the views, the data in the tables, the structure of the tables, and so on.
My suggestion would be to start with the base view and see what basic things you can do to aid performance. For example, can you make use of sub-query factoring? Can you change things to filter early? Can things be replaced by an analytic query?
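For instance, sub-query factoring (the WITH clause) lets you name the shared base once and filter early; the table and column names below are made up:

WITH letter_docs AS (
  SELECT d.doc_id, d.doc_type, d.recipient, d.created_date
  FROM   documents d
  WHERE  d.doc_type IN ('LETTER', 'EMAIL')  -- filter as early as possible
)
SELECT *
FROM   letter_docs
WHERE  created_date >= ADD_MONTHS(TRUNC(SYSDATE), -12);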
Break things down, too. If you have a subquery in the original view that you think can be optimised, it's easy to compare the results of two different SQL statements; if they match, that's generally a good indication that you've rewritten the statement correctly (not always, though, so be careful and test thoroughly!).
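In Oracle, one quick comparison is a symmetric MINUS between the old view and your rewrite; an empty result is a good sign. The names here are invented, both queries must select the same column list, and note that MINUS removes duplicates, so also compare row counts to be safe:

(SELECT doc_id, doc_type FROM old_letter_view
 MINUS
 SELECT doc_id, doc_type FROM new_letter_view)
UNION ALL
(SELECT doc_id, doc_type FROM new_letter_view
 MINUS
 SELECT doc_id, doc_type FROM old_letter_view);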
Breaking a complex query down into its constituent parts should also make it much easier to work out what it's trying to do!

Managing very large SQL queries

I'm looking for some ideas managing very large SQL queries in Oracle.
My employer is looking to build very wide reports, with 150-200 columns of data per report.
Each item is a sub-query or an element from a view. The data has to be real-time, so DW-style batch processing is not an option. We also don't use any BI tools, just a Java app that generates Excel (outputting to Excel is a requirement).
The query also contains unions, acting as feeds from other systems.
The queries result in very large SQL (about 1,500 lines) that is very difficult to manage.
What strategies can I employ to make the work more manageable?
It is also not a performance problem; I was able to optimize the query to be very efficient. It's mostly the width of the query: managing 200 columns is a challenge in itself.
I deal with queries of this length daily, and here is some of what helps me maintain them:
First, alias every single one of those columns. When you are building the query you may know where each one came from, but when it is time to make a change, it is really helpful to know exactly where each column came from. This applies to join conditions, group by, and where conditions as well as the select columns.
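A small illustration (table and column names invented), qualifying every column so its source is obvious, with one column per line:

SELECT c.customer_id,
       c.customer_name,
       SUM(o.order_total) AS total_spent
FROM   customers c
JOIN   orders o
       ON o.customer_id = c.customer_id
WHERE  o.order_date >= DATE '2013-01-01'
GROUP  BY c.customer_id,
          c.customer_name;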
Organize in easily understandable and testable chunks. I use temp tables to pull things that make sense together and so I can see the results before the final query while in test mode.
This brings me to test mode. If I have chunks of data, I design the proc with a test mode and then query the individual temp tables when in test mode, so I can see where the data went wrong if there is a bug. I'm not sure how Oracle handles this, but in SQL Server I make the test-mode flag the last parameter and give it a default value, so that it doesn't need to be passed in by the application.
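In SQL Server it looks roughly like this; a sketch only, with made-up object names:

CREATE PROCEDURE dbo.BuildWideReport
    @StartDate DATE,
    @EndDate   DATE,
    @TestMode  BIT = 0  -- last parameter with a default, so the app never passes it
AS
BEGIN
    -- stage one understandable chunk into a temp table
    SELECT o.CustomerID,
           SUM(o.Total) AS OrderTotal
    INTO   #OrderTotals
    FROM   dbo.Orders o
    WHERE  o.OrderDate BETWEEN @StartDate AND @EndDate
    GROUP  BY o.CustomerID;

    -- in test mode, expose the intermediate result
    IF @TestMode = 1
        SELECT * FROM #OrderTotals;

    -- final query joins the staged chunks
    SELECT c.CustomerID,
           c.CustomerName,
           t.OrderTotal
    FROM   dbo.Customers c
    JOIN   #OrderTotals t
           ON t.CustomerID = c.CustomerID;
END;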
Consider logging the execution details and the values of passed in parameters and certainly log any error messages. This will help tremendously when you have to troubleshoot why this report that has functioned perfectly for six years doesn't work for this one user.
Put each column on a separate line, and do the same for where clauses. At times you may have to troubleshoot by commenting out joins until you find the one that is causing the problem. It is easier if you can easily comment out the associated fields as well.
If you don't have a technical design document, then at least use comments to explain your thought process. You want to understand the whys not the hows in any comments. This stuff is hard to come back to later and understand even when you wrote it. Give your future self some help.
When developing from scratch, I put the select list in and then comment out all but the first item. Then I build the query only until I get that value, testing until I am sure what I got is correct. Then I add the next item and whatever joins or where conditions I need to get it, and test again. (Oops, why did that go from 1,000 records to 20,000 when I added that? Maybe there is something I need to handle there, or is that right?) By adding only one thing at a time, you will find an error in the logic much faster and be much more confident of your results. It will also take you less time than trying to build a massive query in one go.
Finally, there is no substitute for understanding your data. There are plenty of complex queries that work but do not give the correct answer. Know whether you need an inner join or a left join. Know what where conditions you need to get the records you want. Know how to handle records when you have a one-to-many relationship (this may require pushing back on the requirements): should you have three lines (one for each child record), should you put that data in a comma-delimited list, or should you pick only one of the many records and produce one line using aggregation? If the latter, what is the criterion for choosing the record you want to keep?
Without seeing the specifics of your problem, here are a couple of ideas that immediately come to mind:
If you are looking purely for management, I might suggest organizing your subqueries as a number of views and then referencing those views in your final query.
For performance, on the other hand, you may want to consider creating temp tables or even materialized views (views whose results are physically stored and can be refreshed) to break up the heavier parts of your process.
If your queries require an enormous amount of subquerying in order to gain usable data, you might need to rethink your database design and possibly create a number of datamarts to easily access reporting data. Think of these as mini-warehouses sans the multi-year trended data.
Finally, I know you said you don't use any BI tools, but this problem certainly seems like one that could be well served by organizing your data into "cubes" or Business Objects "universes". It might be worthwhile to at least weigh the cost of bringing on a BI tool against the programming hours needed to support the current setup.

What is the overhead for a cross server query vs. a native query?

If I am connected to a particular server in SQL Server and I run a query referencing a table on another server, does this slow down the query, and if so, by how much (e.g. if I am joining heavily to tables on the server I am currently on)? I ask because I want to know whether it is ever more efficient to import that table into my current server than to run a cross-server query.
I would say it depends. Yeah, probably not the answer you wanted, but it depends on the situation. Using linked servers does come with a cost, especially depending on the number of rows you're trying to return and the types of queries you're trying to run. You should be able to view your execution plan on your queries and see how much is being used to hit the linked server tables. That along with the time it takes to return the results would probably help determine if it's needed.
In regards to bringing the tables locally, then that depends as well. Do you need up-to-date data or is the data static? If static, then importing the table would not be a bad idea. If that data constantly changes and you need the changes, then this might not be the best solution. You could always look into creating SSIS packages to do this on a nightly basis, but again, it just depends.
Good luck.
As sgeddes mentions, it depends.
My own experience with linked-server queries is that they are pretty slow for large tables. Depending on how you write the query, the predicates may have to be evaluated on the server that is running the statement (very likely if you're joining to the remote tables), meaning it could be transferring the entire remote table anyway and then filtering the result locally. And that is definitely bad for performance.
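One mitigation in SQL Server, as a hedged sketch (server and table names invented), is OPENQUERY, which sends the statement to the linked server so only the filtered rows travel back:

-- four-part name: the local server may pull remote rows across before filtering
SELECT t.id, t.amount
FROM   REMOTESRV.SalesDb.dbo.BigTable AS t
WHERE  t.amount > 1000;

-- OPENQUERY: the filter runs remotely; only matching rows come back
SELECT q.id, q.amount
FROM   OPENQUERY(REMOTESRV,
       'SELECT id, amount FROM SalesDb.dbo.BigTable WHERE amount > 1000') AS q;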

How can my application benefit from temporary tables?

I've been reading a little about temporary tables in MySQL but I'm an admitted newbie when it comes to databases in general and MySQL in particular. I've looked at some examples and the MySQL documentation on how to create a temporary table, but I'm trying to determine just how temporary tables might benefit my applications and I guess secondly what sorts of issues I can run into. Granted, each situation is different, but I guess what I'm looking for is some general advice on the topic.
I did a little googling but didn't find exactly what I was looking for on the topic. If you have any experience with this, I'd love to hear about it.
Thanks,
Matt
Temporary tables are often valuable when you have a fairly complicated SELECT you want to perform and then perform a bunch of queries on that...
You can do something like:
CREATE TEMPORARY TABLE myTopCustomers
SELECT customers.*, COUNT(*) AS num
FROM customers
JOIN purchases USING (customerID)
JOIN items USING (itemID)
GROUP BY customers.customerID
HAVING num > 10;
And then do a bunch of queries against myTopCustomers without having to do the joins to purchases and items on each query. Then when your application no longer needs the database handle, no cleanup needs to be done.
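For example (column names taken from the sketch above):

SELECT customerID, num FROM myTopCustomers ORDER BY num DESC LIMIT 5;
SELECT AVG(num) AS avg_purchases FROM myTopCustomers;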
Almost always you'll see temporary tables used for derived tables that were expensive to create.
First a disclaimer - my job is reporting so I wind up with far more complex queries than any normal developer would. If you're writing a simple CRUD (Create Read Update Delete) application (this would be most web applications) then you really don't want to write complex queries, and you are probably doing something wrong if you need to create temporary tables.
That said, I use temporary tables in Postgres for a number of purposes, and most of them will translate to MySQL. I use them to break complex queries into a series of individually understandable pieces. I use them for consistency: by generating a complex report through a series of queries, and offloading some of those queries into modules I use in multiple places, I can make sure that different reports are consistent with each other (and that if I need to fix something, I only need to fix it once). And, rarely, I deliberately use them to force a specific query plan. (Don't try this unless you really understand what you are doing!)
So I think temp tables are great. That said, it is very important to understand that databases generally come in two flavors: one optimized for pumping out lots of small transactions, and one optimized for producing a smaller number of complex reports. The two types need to be tuned differently, and a complex report run on a transactional database runs the risk of blocking transactions (and therefore making web pages slow to return). Therefore you generally want to avoid using one database for both purposes.
My guess is that you're writing a web application that needs a transactional database. In that case, you shouldn't use temp tables. And if you do need complex reports generated from your transactional data, a recommended best practice is to take regular (e.g. daily) backups, restore them on another machine, and then run reports against that machine.
The best place to use temporary tables is when you need to pull a bunch of data from multiple tables, do some work on that data, and then combine everything to one result set.
In MS SQL, temporary tables should also be used in place of cursors whenever possible, because of the speed and resource impact associated with cursors.
If you are new to databases, there are some good books by Joe Celko that review best practices for ANSI SQL. SQL for Smarties describes in great detail the use of temp tables, the impact of indexes, where clauses, etc. It's a great, in-depth reference book.
I've used them in the past when I needed to create evaluated data. That was before MySQL had views and sub-selects, though, and nowadays I generally use those where I would previously have needed a temporary table. The only time I might still use one is if the evaluated data took a long time to create.
I haven't done them in MySQL, but I've done them on other databases (Oracle, SQL Server, etc).
Among other tasks, temporary tables provide a way for you to create a queryable (and returnable, say from a sproc) dataset that's purpose-built. Let's say you have several tables of figures -- you can use a temporary table to roll those figures up to nice, clean totals (or other math), then join that temp table to others in your schema for final output. (An example of this, in one of my projects, is calculating how many scheduled calls a given sales-related employee must make per week, bi-weekly, monthly, etc.)
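As a hedged sketch of that pattern (table and column names invented): roll the figures up into a temp table, then join it for the final output:

CREATE TEMPORARY TABLE weekly_call_totals AS
SELECT employee_id,
       SUM(calls_made) AS total_calls
FROM   scheduled_calls
GROUP  BY employee_id;

SELECT e.employee_name,
       t.total_calls
FROM   employees e
JOIN   weekly_call_totals t
       ON t.employee_id = e.employee_id;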
I also often use them as a means of "tilting" the data -- turning columns to rows, etc. They're good for advanced data processing -- but only use them when you need to. (My golden rule, as always, applies: If you don't know why you're using x, and you don't know how x works, then you probably shouldn't use it.)
Generally, I wind up using them most in sprocs, where complex data processing is needed. I'd love to give a concrete example, but mine would be in T-SQL (as opposed to MySQL's more standard SQL), and also they're all client/production code which I can't share. I'm sure someone else here on SO will pick up and provide some genuine sample code; this was just to help you get the gist of what problem domain temp tables address.

Is it okay to have a lot of database views?

I infrequently (monthly/quarterly) generate hundreds of Crystal Reports reports using Microsoft SQL Server 2005 database views. Are those views wasting CPU cycles and RAM during all the time that I am not reading from them? Should I instead use stored procedures, temporary tables, or short-lived normal tables since I rarely read from my views?
I'm not a DBA so I don't know what's going on behind the scenes inside the database server.
Is it possible to have too many database views? What's considered best practice?
For the most part, it doesn't matter. Yes, SQL Server will have more choices when it parses SELECT * FROM table (it'll have to look in the system catalogs for 'table') but it's highly optimized for that, and provided you have sufficient RAM (most servers nowadays do), you won't notice a difference between 0 and 1,000 views.
However, from a people-perspective, trying to manage and figure out what "hundreds" of views are doing is probably impossible, so you likely have a lot of duplicated code in there. What happens if some business rules change that are embedded in these redundant views?
The main point of views is to encapsulate business logic into a pseudo table (so you may have a person table, but then a view called "active_persons" which does some magic). Creating a view for each report is kind of silly unless each report is so isolated and unique that there is no ability to re-use.
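For instance, a minimal sketch of such a view (the person table's columns are assumed):

CREATE VIEW active_persons AS
SELECT p.*
FROM   person p
WHERE  p.is_active = 1
  AND  p.deleted_date IS NULL;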
A view is a query that you run often with preset parameters. If you know you will be looking at the same data all the time you can create a view for ease of use and for data binding.
That being said, when you select from a view, the view's defining query runs along with the query you are running.
For example, if vwCustomersWhoHavePaid is:
SELECT * FROM customers WHERE paid = 1
and you want the customers who paid after August 1st:
SELECT * FROM vwCustomersWhoHavePaid WHERE datepaid > '08/01/08'
then the query you are actually running is:
SELECT * FROM (SELECT * FROM customers WHERE paid = 1) AS vwCustomersWhoHavePaid WHERE datepaid > '08/01/08'
This is something to keep in mind when creating views: they store a query you run often, not the data itself. They're simply a way of organizing access to data so it's easier to get at.
The views are only going to take up CPU/memory resources when they are queried.
Anyhow, best practice would be to consolidate what can be consolidated, remove what can be removed, and if it's literally only used by your reports, choose a consistent naming standard for the views so they can easily be grouped together when looking for a particular view.
Also, unless you really need transactional isolation, consider using the NOLOCK table hint in your queries.
-- Kevin Fairchild
You ask: What's going on behind the scenes?
A view is a bunch of SQL text. When a query uses a view, SQL Server places that SQL text into the query. This happens BEFORE optimization. The result is that the optimizer can consider the combined code, rather than two separate pieces of code, when searching for the best execution plan.
You should look at the execution plans of your queries! There is so much to learn there.
SQL Server also has a concept of an indexed view (sometimes informally called a clustered view): a system-maintained result set, where each insert/update/delete on the underlying tables can cause inserts/updates/deletes on the view's stored data. It is a common mistake to think that ordinary views operate the way indexed views do.