Patterns or ideas for a web-based, domain-specific query builder (not for reporting)? - sql

Maybe this is a shot in the dark here but I'm trying to find out if anyone has thoughts on this problem we have been presented with.
The situation is that we have a database containing all kinds of data about a large list of projects. Dozens of tables provide supporting information about a project, both one-to-one, where a specific type of project info (say ProjectInfoTypeA) is stored in a table called ProjectInfoTypeA and joined to the Projects table with an inner join, and one-to-many, like ProjectScopeKeywords, where a project can be assigned N attributes, in this case "keywords", drawn from a number of different attribute/lookup tables.
In the end we need to have the user in our web app build up things like:
Show me all projects completed in the last 5 years that took at least 4 years to do, cost at least $1MM, and have all 3 of these keywords ( x,y,z ) associated with it.
We also want users to be able to save their queries so they, and other users, can select them from a list of saved queries.
Once we get the list of projects from their filter, we need to then work with it in all different ways: but not as a report. If this were a report I'd just give them some report builder of some kind, but we need to work with their filtered list in the web app.
Currently we are thinking of 2 different ideas:
1) We write our own UI for building up the query and then generate one large SQL statement from it.
2) We store the data about each of their filters in the database, and when they click "Search" we prune the list of projects iteratively, stripping out the projects that don't match each filter based on the stored criteria.
I'm guessing no one out there has had to deal with something like this, but if any of you have, I'd be interested to hear any suggestions/patterns that would be worth looking into.

I would recommend choosing option 1. I have used a query-builder approach on a number of projects, with varying degrees of sophistication depending on the complexity of the requirements.
If you are in a position to use a ready-made solution, you can find several on the web: http://www.google.com/search?q=sql+query+builder
For a custom built solution, at a minimum, you would probably want to provide flattened views for the user to query from; this will simplify the designer complexity, reduce the learning curve for the user, and provide some abstraction against future schema changes.
After defining your base data sources, you need to provide means by which the user can select specific columns, define filter criteria, specify value aggregation, and define sub-queries (based on your example query requirement). The column selection and filter definition should not be too difficult, but the value aggregation and sub-query creation would be non-trivial to define. You should be able to use the ready-made solutions as examples of how to present this functionality to the user.
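To make that concrete, here is a minimal sketch of a flattened view plus the kind of statement a builder might generate for the example filter above (SQL Server syntax; the view name, extra columns and parameters are invented for illustration - only Projects, ProjectInfoTypeA and ProjectScopeKeywords come from your description):

    -- Hypothetical flattened view for the builder to target.
    CREATE VIEW vw_ProjectSearch AS
    SELECT p.ProjectId,
           p.ProjectName,
           p.CompletedDate,
           p.DurationYears,
           p.TotalCost
    FROM Projects p
    INNER JOIN ProjectInfoTypeA a ON a.ProjectId = p.ProjectId;

    -- The kind of statement the builder might generate for the sample filter:
    -- completed in the last 5 years, took at least 4 years, cost at least $1MM,
    -- and carries all three chosen keywords.
    SELECT s.*
    FROM vw_ProjectSearch s
    WHERE s.CompletedDate >= DATEADD(year, -5, GETDATE())
      AND s.DurationYears >= 4
      AND s.TotalCost >= 1000000
      AND (SELECT COUNT(DISTINCT k.KeywordId)
           FROM ProjectScopeKeywords k
           WHERE k.ProjectId = s.ProjectId
             AND k.KeywordId IN (@Keyword1, @Keyword2, @Keyword3)) = 3;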

Related

Retrieve and interpret large amounts of data from a SQL Server database

I'm an electrical test engineer. I have programming experience with C, mostly for devices with 256 B of RAM or less, and not a lot of experience with SQL databases...
We have a database with production data, serial numbers and testing results.
When the database was created, no tools were built to retrieve the data.
If we can't retrieve the data, the database may as well not exist.
We have the data, the database exists. I want to create tools to retrieve and interpret data. And in the future do statistical analysis on the data.
The database has over 500k unique devices. With over 10 million measurements.
My question is: what's the most sensible way to retrieve and display the data?
For instance, a program that loops through every entry and records the data would be complicated to write and would take days to complete.
The program and queries get complicated very fast.
We have device types, batch numbers and serial numbers.
For every DISTINCT DeviceType
  For every DISTINCT BatchNumber
    COUNT DISTINCT SerialNumber where...
      User <> 'development'...
      AND Testing result <> 'FAIL'...
      AND Date between ... and ...
Not to mention the measurement data, as each device may be tested multiple times. It seemed a trivial task; I'm now overwhelmed by the complexity.
I will write the code and queries myself. What I'm asking for is help finding a strategy.
Ask yourself what questions you want the data to answer. If the data is recorded in the most granular way possible, then it may be appropriate to consider common grouping or aggregation methods. These might include grouping by device, or location, or something else - each of these dimensions will have a business interpretation.
Writing down your top three business questions should give you a starting point for constructing your extraction/analysis strategy.
Next, try to draw up a data model: find out what tables exist and what references and relations they have to one another.
Between the questions you want to answer and the table design, you should then be in a position to start constructing sensible general-use queries.
Sometimes you may find that different business questions can be answered using a common view of the data. When you're happy with an extraction path, you can write it out in SQL and, if appropriate, create a VIEW from it. This abstracts the problem and makes it more convenient for users to actually get the answers they're looking for.
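For example, the device/batch/serial count you sketched could become a single grouped query along these lines (table and column names are guesses, not your actual schema, and the date range is just an example):

    SELECT DeviceType,
           BatchNumber,
           COUNT(DISTINCT SerialNumber) AS PassedDevices
    FROM   TestResults
    WHERE  TestUser   <> 'development'
      AND  TestResult <> 'FAIL'
      AND  TestDate BETWEEN '2023-01-01' AND '2023-12-31'
    GROUP  BY DeviceType, BatchNumber;

If a grouping like this is needed repeatedly, the part without the date range can be saved as a VIEW, and the date filter applied when selecting from that view.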
Your database will provide tools to write and run SQL statements, and you will need to refer to the documentation for your database to figure out how that happens - it's usually similar, but implementations differ across databases.

Managing very large SQL queries

I'm looking for some ideas managing very large SQL queries in Oracle.
My employer is looking to build very wide reports (150-200 columns of data per report).
Each item is a sub-query or an element from a view. The data has to be real-time, so DW-style batch processing is not an option. We also don't use any BI tools, just a Java app that generates Excel (it's a requirement to output the data in Excel).
The query also contains unions as feeds from other systems.
The queries result in very large SQL (about 1,500 lines) that is very difficult to manage.
What strategies can I employ to make the work more manageable?
It is also not a performance problem; I was able to optimize the query to be very efficient. It's mostly the width of the query: managing 200 columns is a challenge in itself.
I deal with queries this length daily, and here is some of what helps me in maintaining them:
First, alias every single one of those columns. When you are building the query you may know where each one came from, but when it is time to make a change, it is really helpful to know exactly where each column came from. This applies to join conditions, group by and where conditions as well as the select columns.
Organize the query in easily understandable and testable chunks. I use temp tables to pull together things that belong together, so that I can see the intermediate results before the final query while in test mode.
This brings me to test mode. If I have chunks of data, I design the proc with a test mode and then query individual temp tables when in test mode, so I can see where the data went wrong if there is a bug. Not sure how Oracle works but in SQL Server, I make this the last parameter and give it a default value, so that it doesn't need to be passed in by the application.
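As a rough sketch of that temp-table-plus-test-mode pattern (SQL Server syntax, since that is what I know; every object name here is made up):

    CREATE PROCEDURE dbo.BigReport
        @StartDate date,
        @EndDate   date,
        @TestMode  bit = 0   -- last parameter, defaulted so the app never passes it
    AS
    BEGIN
        -- Chunk 1: one logical piece of the report goes into a temp table.
        SELECT o.OrderId, o.CustomerId, o.OrderTotal
        INTO #Orders
        FROM dbo.Orders o
        WHERE o.OrderDate BETWEEN @StartDate AND @EndDate;

        IF @TestMode = 1
            SELECT * FROM #Orders;   -- inspect the intermediate result

        -- The final query joins the pre-built chunks instead of one giant statement.
        SELECT c.CustomerName, o.OrderId, o.OrderTotal
        FROM #Orders o
        JOIN dbo.Customers c ON c.CustomerId = o.CustomerId;
    END;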
Consider logging the execution details and the values of passed in parameters and certainly log any error messages. This will help tremendously when you have to troubleshoot why this report that has functioned perfectly for six years doesn't work for this one user.
Put each column on a separate line, and do the same for where clauses. At times you may have to troubleshoot by commenting out joins until you find the one that is causing the problem. It is easier if you can easily comment out the associated fields as well.
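For example, a layout along these lines (names invented) lets you comment out a join and its columns together:

    SELECT  p.ProjectId     AS ProjectId,
            p.ProjectName   AS ProjectName,
            --b.BudgetTotal AS BudgetTotal,   -- commented out with its join below
            c.CostToDate    AS CostToDate
    FROM    Projects p
    --JOIN  Budgets  b ON b.ProjectId = p.ProjectId
    JOIN    Costs    c ON c.ProjectId = p.ProjectId
    WHERE   c.CostToDate > 0;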
If you don't have a technical design document, then at least use comments to explain your thought process. You want to understand the whys not the hows in any comments. This stuff is hard to come back to later and understand even when you wrote it. Give your future self some help.
In developing from scratch, I put the select list in and then comment all but the first item. Then I build the query only until I get that value - testing until I am sure what I got was correct. Then I add the next one and whatever joins or where conditions I might need to get it. Test again making sure it is right. (Oops why did that go from 1000 records to 20000 when I added that? Hmm maybe there is something I need to handle there or is that right?) By adding only one thing at a time, you will find an error in the logic much faster and be much more confident of your results. It will also take you less time than trying to build a massive query in one go.
Finally, there is no substitute for understanding your data. There are plenty of complex queries that work but do not give the correct answer. Know whether you need an inner join or a left join. Know what where conditions you need to get the records you want. Know how to handle the records when you have a one-to-many relationship (this may require pushing back on the requirements): should you have 3 lines (one for each child record), should you put that data in a comma-delimited list, or should you pick only one of the many records and have one line using aggregation? If the latter, what are the criteria for choosing the record you want to keep?
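As an illustration of those one-to-many choices, here are the two aggregation-style options sketched in Oracle syntax (table and column names are invented):

    -- a) One line per parent with the child values in a comma-delimited list:
    SELECT p.project_id,
           LISTAGG(k.keyword, ',') WITHIN GROUP (ORDER BY k.keyword) AS keywords
    FROM   projects p
    JOIN   project_keywords k ON k.project_id = p.project_id
    GROUP  BY p.project_id;

    -- b) Keep only one child row per parent, chosen by an explicit rule:
    SELECT project_id, keyword
    FROM  (SELECT k.project_id,
                  k.keyword,
                  ROW_NUMBER() OVER (PARTITION BY k.project_id
                                     ORDER BY k.created_date DESC) rn
           FROM   project_keywords k)
    WHERE  rn = 1;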
Without seeing the specifics of your problem, here are a couple of ideas that immediately come to mind:
If you are looking purely for management, I might suggest organizing your subqueries as a number of views and then referencing those views in your final query.
For performance, on the other hand, you may want to consider creating temp tables or even materialized views (views whose results are physically stored and refreshed) to break up the heavier parts of your process.
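For example (Oracle syntax, with placeholder names), the two look like this:

    -- A plain view, purely to organize one of the subqueries:
    CREATE OR REPLACE VIEW rpt_customer_totals AS
    SELECT customer_id, SUM(order_total) AS total_spent
    FROM   orders
    GROUP  BY customer_id;

    -- A materialized view, to precompute a heavy part of the report:
    CREATE MATERIALIZED VIEW mv_customer_totals
    REFRESH FORCE ON DEMAND
    AS
    SELECT customer_id, SUM(order_total) AS total_spent
    FROM   orders
    GROUP  BY customer_id;

Since your data has to be real-time, a materialized view only helps where an acceptable refresh schedule (or fast refresh) exists for that part of the report.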
If your queries require an enormous amount of subquerying in order to gain usable data, you might need to rethink your database design and possibly create a number of datamarts to easily access reporting data. Think of these as mini-warehouses sans the multi-year trended data.
Finally, I know you said you don't use any BI tools, but this certainly seems like a problem that might be better served by organizing your data into "cubes" or BusinessObjects "universes". It might be worthwhile to at least weigh the cost of bringing in a BI tool against the programming hours needed to support the current setup.

Elasticsearch querying multiple types and grouped by types?

Suppose I am searching against two types, [cars] and [buildings], and I want the results to be separated. Is there a way one can group results by type?
I understand one simple way would be to query each type separately, but for other use cases one may actually need to query tens or hundreds of types together. Is there a native way or a hacky way (like using sort) to achieve this?
This type of grouping behavior is (currently) not available in elasticsearch. It has been a long-standing request:
https://github.com/elasticsearch/elasticsearch/issues/256
There are two approaches that can help, both of which are far from perfect, but may be good enough for some use cases.
Client-side aggregation. Request a lot more results than you plan on displaying and then bucket those.
Using multi-query. This allows you to easily pass down some number of queries in a single batch, but will have potential scaling problems if the number of queries gets too large.
This is one feature that Solr has that elasticsearch doesn't, but I have never tried it. I used a similar feature with Autonomy IDOL years back, but the performance was abysmal.
If you want the results separated in groups of documents, you're going to have to restructure your documents, since elasticsearch is focused on finding matching documents. You might get around this by designing a document that has child documents; then you can query for matches on the parent document that represents your type.
I guess there might be some common field (let's say it's [price]) if you want to search against different types. Then it would be reasonable to add a different type like [price_aggregator] and put the fields [type] and [price] into it. You could then easily build your query against just one type. This requires some additional work while indexing and more memory to store the index, but it's much more performant when you search.

Recursively querying through structured table data / process design

After my first try to misappropriate Ms-Access - with your help - turned out to be a great success, I have been sent back to do "more of this".
A bit of introduction you can skip if you want:
I am building a data foundation about certain projects from which I want to create analyses and overviews.
The data and findings are to be represented in programs like Excel or Powerpoint, so the process itself is very open. It will probably be very visual with detailed points on request.
However, the data might be changing periodically and if this turns out well, I might repeat the process.
Therefore I think the ideal way would be to have a data layer, then a fixed set of queries on that data and then I would (semi-)manually compile the results into a report in whatever format fits, maybe using external data analysis tools such as R in between.
Trouble is, the only database I have access to is.. well.. Ms Access 2010. I am not at liberty to install anything on this machine.
I could of course use non-install or online tools if you have recommendations for this.
tl;dr: I want to use Ms Access to query data from a relational db into tabular format to be processed further by hand, using as little of Ms-Access VBA and forms as possible.
I have since started to implement a prototype in ms-Access, a standard relational database.
One interesting problem I have run into with this kind of design is that I have a table for companies involved in the projects. Along with this, I have a table of "relationships" - like stakeholdings, ownerships or cooperations.
So let's say company A is building project A, but is just a subsidiary of company B, which is then partly owned by company C, and so on.
Now let's say I want to query all companies involved in a project, but as owners I just want to show the last "elements" of the chain.
Imagine I want to sort the list by net assets, which is a figure usually only available for the public companies at the end of the chain, not the project subsidiaries up the chain, etc.
Is this possible with (Ms-)SQL or would I need to do this in VBA?
Right now I think I could manage do write a VBA function and dump it into a temporary table, but then I'd have to create forms and such.
Another idea that immediately springs from this is to answer the question "In which projects does company C have a stake?" by a query. You can see where this is going.
I would prefer the database and the queries to be as flexible as possible (and in this case, independent from the actual Access).
So this time, no mock program or user interface. It was a pain to get what I wanted from Access in the last project, and that was with a very specific question set...
But in general I am also open to use different tools if I can.
Thank you so much!
Modelling hierarchies in an RDBMS is a fairly tricky process - some (like Oracle) have built-in functionality to query hierarchical data, but I don't think Access does.
The best solution is to use a "nested set" model. This allows you to model hierarchical data while using standard SQL; it's also pretty fast for querying.
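As a rough illustration (generic SQL, invented names), a nested-set table stores left/right bounds per row, and ancestor queries become simple range comparisons:

    CREATE TABLE Company (
        CompanyId   INTEGER PRIMARY KEY,
        CompanyName VARCHAR(255),
        Lft         INTEGER,
        Rgt         INTEGER
    );

    -- All companies above company 42 in the ownership chain:
    SELECT anc.CompanyName
    FROM   Company AS anc, Company AS c
    WHERE  c.CompanyId = 42
      AND  anc.Lft < c.Lft
      AND  anc.Rgt > c.Rgt;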
If your data isn't hierarchical, the nested set isn't so useful; the typical solution in that case is to introduce a table to map the relationship - typically including the two related entities, and often with a "relationship type" field (e.g. "parent", "part owner" etc.). This is often called a Directed Acyclic Graph or DAG. There are several ways of modelling these in a database; a "closure table" is probably the most efficient. This article shows how to do this - it's a heavy read, but I think it answers your question.
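A minimal closure-table sketch, again with invented names, might look like this; the last query picks out the "end of the chain" owners you described:

    -- One table holds the direct relationships; the closure table holds every
    -- ancestor/descendant pair, including depth 0 (a company and itself).
    CREATE TABLE CompanyRelation (
        ParentCompanyId INTEGER,
        ChildCompanyId  INTEGER,
        RelationType    VARCHAR(50)    -- e.g. 'parent', 'part owner'
    );

    CREATE TABLE CompanyClosure (
        AncestorId   INTEGER,
        DescendantId INTEGER,
        Depth        INTEGER           -- 0 = self, 1 = direct owner, ...
    );

    -- "Last elements of the chain" for the company with id 42: ancestors that
    -- have no ancestor of their own.
    SELECT cc.AncestorId
    FROM   CompanyClosure AS cc
    WHERE  cc.DescendantId = 42
      AND  NOT EXISTS (SELECT 1
                       FROM   CompanyClosure AS up
                       WHERE  up.DescendantId = cc.AncestorId
                         AND  up.Depth > 0);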

How should I organize complex SQL views in Rails?

I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use.
I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a select * from my_view;.
The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there's a lot of changes being made to the views all the time -- and a ton of them are being created. Many of them need to be repeatable.
I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require.
Some options I've thought of:
Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model.
Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.)
How would you deal with a complex reporting problem like this?
UPDATE for Mladen Jablanović
It started with a couple of views for reporting purposes. My boss(es) decided they wanted more, so I started writing more. Some return a couple hundred columns of data, based on the requirements I've been given.
I have a couple thousand lines of views all shoved in a single file now. I don't like that situation, so I want to reorganize/refactor the code. I'd also like an easy way of providing CSVs -- I'm currently running queries and emailing them by hand, which could easily be automated. Finally, I would like to be able to write some tests on the output of the views, since a couple of regressions have already popped up.
I haven't worked much with SQL and views directly, so I can't help you there, but you can certainly build an ActiveRecord model on top of a view, very easily in fact. The book Enterprise Rails has a whole chapter on it (here it is at Google Books).
We are using views in our DB extensively and some of them are exposed as Rails models. You work with them as you would with tables, except that you can't update them, of course.
Also, some of the columns may be calculated using other columns (different ratios, for example), so we don't do that in the view, but in the model instead (OK, not entirely true: we construct an SQL snippet and pass it to the :select => '' portion of the find call).
Presentation logic (such as date and number formatting) goes to Rails views.
I'm afraid I can't help you with more concrete advice, as the scope of the question is pretty wide.
EDIT:
Hundreds of columns doesn't sound reasonable. It sounds like an immense amount of data in one place. How do they use it at all? We have a web application where they can drill down and filter the results, narrow the timespan and time step, etc., so they never have more than 10-20 columns in the reports.
We store our views one view per SQL file. Also, you can combine this with a numerical prefix in order to ensure the proper creation order (in case some of them depend on others). No migrations there; the whole DB layer is app-agnostic.
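For example (assuming a database that supports CREATE OR REPLACE VIEW, such as PostgreSQL or MySQL; file and view names are invented):

    -- e.g. db/views/010_study_summaries.sql (the numeric prefix fixes creation order)
    CREATE OR REPLACE VIEW study_summaries AS
    SELECT s.id        AS study_id,
           s.title     AS study_title,
           COUNT(m.id) AS measurement_count
    FROM   studies s
    LEFT JOIN measurements m ON m.study_id = s.id
    GROUP  BY s.id, s.title;

    -- e.g. db/views/020_study_summary_export.sql, which depends on the first
    CREATE OR REPLACE VIEW study_summary_export AS
    SELECT * FROM study_summaries WHERE measurement_count > 0;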
For CSV, you can either create a set of scripts that you invoke manually or via cron, or use FasterCSV from your Rails app and generate the CSVs on HTTP request.