Implementing and indexing User Defined Fields in an SQL DB

I need to store a large table (several million rows) that contains a large number of user-defined fields (not known at compile time, but probably around 20 to 40 custom fields). It is very important (performance-wise) for me to be able to query the data based on those custom fields: i.e. "Select the rows where this attribute has that value, that attribute has that value, etc.". Each query has some 20 to 30 WHERE clauses.
My ideas so far:
Change the database schema every time a new user field is added. Keep each user-defined field as a column in the table, and add and maintain indexes on each custom-created column. How to properly build those indexes is a big problem, as I don't know which attributes (columns) will be used in the WHERE clauses.
Store the custom fields in an XML-typed column. As I understand it, from SQL Server 2005 on I can query inside XML-typed columns. I'm not so sure about performance, though.
Entity-Attribute-Value (EAV). This is what I am using now, but it's a pain.
Any suggestions?
Edit:
Some clarifications on my requirements. I have a table of 40-50 million rows of (say) ID numbers and various attributes associated with those IDs.
Let's say 20 million of them have "CustomAttribute1" equal to 2, 5 million have "CustomAttribute2" equal to 'Yes', and 3 million have "CustomAttribute20" equal to 'No'.
I need a FAST method of returning all IDs where:
1. CustomAttribute1 = 2
2. CustomAttribute2 = 'Yes'
3. CustomAttribute4 = null
4. CustomAttribute20 != 'No'
etc...
We have this implemented as EAV: the select query is a nightmare to implement and maintain, it takes a long time to return results, and most annoyingly the DB grows to huge sizes even for small amounts of data, which is weird since EAV essentially normalizes the data; I assume all the indexes take up a lot of space.
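For illustration, here is roughly what one of those queries looks like against a hypothetical Attributes(EntityID, AttrName, AttrValue) EAV table (names are made up, but the shape is accurate - one subquery per criterion):
SELECT e.EntityID
FROM Entities e
WHERE EXISTS (SELECT 1 FROM Attributes a1
              WHERE a1.EntityID = e.EntityID
                AND a1.AttrName = 'CustomAttribute1' AND a1.AttrValue = '2')
  AND EXISTS (SELECT 1 FROM Attributes a2
              WHERE a2.EntityID = e.EntityID
                AND a2.AttrName = 'CustomAttribute2' AND a2.AttrValue = 'Yes')
  -- the NULL test means the attribute row is simply absent
  AND NOT EXISTS (SELECT 1 FROM Attributes a4
                  WHERE a4.EntityID = e.EntityID AND a4.AttrName = 'CustomAttribute4')
  -- the != test must also allow for the attribute being absent
  AND NOT EXISTS (SELECT 1 FROM Attributes a20
                  WHERE a20.EntityID = e.EntityID
                    AND a20.AttrName = 'CustomAttribute20' AND a20.AttrValue = 'No');
-- ...and so on for 20-30 criteria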

It seems like you've listed your available options. EAV can be a pain for querying (and slow, depending on how many criteria you want to search on simultaneously), but it tends to be the most "sane" and RDBMS-agnostic implementation.
Modifying the schema is a no-no...obviously it can be done, but such a practice is abhorrent. I do not approve.
The XML option is a solution, and SQL Server can query inside the structure. I'm not certain about other RDBMS's, and you don't list which one you're using in the post or the tags.
If you're going to be querying on many attributes (say, 20+) simultaneously, then I would probably recommend the XML solution just to limit the number of joins you'll have to make. Aside from that, I would stick with EAV.
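As a rough illustration of the join-free XML approach (the table, column and element names here are hypothetical), several criteria can be checked against a single XML column:
SELECT ID
FROM dbo.Items   -- hypothetical table with an XML column named CustomFields
WHERE CustomFields.exist('/UDF/Field[@Name="CustomAttribute1"][. = "2"]') = 1
  AND CustomFields.exist('/UDF/Field[@Name="CustomAttribute2"][. = "Yes"]') = 1;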

You could represent all of the user-defined fields with an XML column, e.g.
<UDF>
<Field Name="ConferenceAddress" DBType="NVarChar" Size="255">Some Address</Field>
<Field Name="ConferenceCity" DBType="NVarChar" Size="255">Some City</Field>
...etc.
</UDF>
I am not sure what the performance impact of doing this would be, but it is definitely the prettiest way of handling UDFs in a database, in my opinion.
Then what I would do is put a trigger on the table so that when the column is updated, it recreates a view for the table which pulls the XML values out as columns on the view. Lock the view etc. during the recreation to prevent access errors on the application side.
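A minimal sketch of such a view, assuming a hypothetical dbo.Records table whose UDF column holds XML in the format above:
CREATE VIEW dbo.RecordsUDF
AS
SELECT r.ID,
       -- one line per Field name currently present in the XML;
       -- the trigger would regenerate this list on every change
       r.UDF.value('(/UDF/Field[@Name="ConferenceAddress"])[1]', 'nvarchar(255)') AS ConferenceAddress,
       r.UDF.value('(/UDF/Field[@Name="ConferenceCity"])[1]', 'nvarchar(255)') AS ConferenceCity
FROM dbo.Records r;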
Then I would create stored procedures for updating the XML so that they work for any XML column following your user-defined-field XML format, e.g. Insert/Update/Remove/Get:
GetUDFFieldValue
AddUDFField
UpdateUDFField
DeleteUDFField
--Shared Parameters
TableName
ColumnName
(e.g. use dynamic SQL to get the XML from table X by column name X to make it universal/generic to all of your UDF fields; a rough sketch follows)
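A rough sketch of what GetUDFFieldValue might look like with sp_executesql (all table, column and parameter names here are illustrative, not from the original post):
CREATE PROCEDURE dbo.GetUDFFieldValue
    @TableName  sysname,
    @ColumnName sysname,
    @FieldName  nvarchar(255)
AS
BEGIN
    DECLARE @sql nvarchar(max);
    -- QUOTENAME guards against injection through the table/column names
    SET @sql = N'SELECT t.' + QUOTENAME(@ColumnName)
             + N'.value(''(/UDF/Field[@Name=sql:variable("@FieldName")])[1]'', ''nvarchar(max)'')'
             + N' AS FieldValue FROM ' + QUOTENAME(@TableName) + N' t;';
    EXEC sp_executesql @sql, N'@FieldName nvarchar(255)', @FieldName = @FieldName;
END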
Here is an article on XML performance optimization for SQL Server 2005 (I'm not seeing an equivalent for newer versions):
http://technet.microsoft.com/en-us/library/ms345118(v=sql.90).aspx
Lastly:
Are you sure you even need an RDBMS? NoSQL is a better fit for user-generated fields; I might even consider using both NoSQL and SQL Server.

Related

Using a single field in Oracle SQL with multiple JSON values inside. How slow is it to query this?

We currently have an audit table of the following form:
TABLE(timestamp, type, data, id) -- all the fields are of type varchar / clob
The issue is that for some types there are actually several well-suited ids. We currently only store one of them, but it would be interesting to be able to query by any of those ids while still keeping only one column for them.
With Oracle's recent support for JSON we were thinking about maybe storing all the ids as a JSON object:
{ orderId:"xyz123"; salesId:"31232131" }
This would be interesting if we could continue to make queries by id with very good performance. Is this possible? Does Oracle allow for indexing in this kind of situation, or would it always end up being an O(n) text search over all the millions of rows this table has?
Thanks
Although you can start fiddling around with features such as JSON types, nested tables and the appropriate indexing methods, I wouldn't recommend that unless you specifically want to enhance your Oracle skills.
Instead, you can store the ids in a junction table:
table_id  synonym_id
1         1
1         10
1         100
2         2
With an index on (synonym_id, table_id), looking up a synonym should be really fast. This should be simple to maintain, and you can guarantee that a given synonym only applies to one id. You can hide the join in a view, if you like.
Of course, you can go down another route, but this should be quite efficient.
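A minimal sketch of the junction table (names are illustrative; since a given synonym maps to exactly one id, synonym_id can be the primary key, and its index makes the lookup a simple probe):
CREATE TABLE audit_id_synonyms (
    synonym_id VARCHAR2(100) NOT NULL,
    table_id   VARCHAR2(100) NOT NULL,   -- the audit row's own id
    CONSTRAINT audit_id_synonyms_pk PRIMARY KEY (synonym_id)
);

SELECT a.*
FROM audit_table a
JOIN audit_id_synonyms s ON s.table_id = a.id
WHERE s.synonym_id = 'xyz123';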
1) Oracle's JSON support allows functional indexing on the JSON data (a B-tree index on a JSON_VALUE virtual column).
2) For non-indexed fields, the JSON data needs to be parsed to get to the field values. Oracle uses a streaming parser with early termination, which means that fields at the beginning of the JSON data are found faster than those at the end.
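For example (assuming the JSON lives in the existing data column; the index name is made up), a functional index on JSON_VALUE lets Oracle answer the lookup without parsing every row:
CREATE INDEX audit_order_id_ix
    ON audit_table (JSON_VALUE(data, '$.orderId'));

SELECT *
FROM audit_table
WHERE JSON_VALUE(data, '$.orderId') = 'xyz123';
-- the predicate matches the indexed expression, so the B-tree index is usable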

A database in which the users can create data types

I have a really old application using an SQL database that I need to update. I would also like to take the opportunity to improve the database structure, and I would appreciate some advice.
The basic problem is that an important part of the database must be user configurable without touching the code. To be more concrete, the DB stores products and these products have different specs (i.e. columns) depending on the type. The app must be able to search for any of the columns. There are only a few types (~20) but the administrator must be able to create a new one without touching the code.
The data that needs to be stored for each product is either strings or floats, and there are never more than 7 of each type.
Instead of creating an interface to create and delete tables, the following "solution" was implemented.
- In the Products table, there is one column for the ID, one column for the ProductTypeID, 7 string columns and 7 float columns.
- In a ProductTypes table, there is one column for the ProductTypeID and 14 string columns giving the names of the 7 string columns and 7 float columns for each product type. If a product type does not need that many columns, the column name is NULL.
This works, but due to the extra indirection it is extremely annoying to maintain the client code.
The question is: Should I stay with an SQL DB and add a way to create/delete tables or should I use a noSQL DB? Which are the pros and cons in each case?
Keep in mind that in SQL databases, adding and removing columns on a large table can be a very expensive operation which can take minutes or even hours. Doing it on-the-fly is a really bad idea. Adding a bunch of "multi-purpose" columns to a table is not much better. It's hard to query and you have a limit on how many properties a product can have.
The usual by-the-book solution when each product has 0-n dynamic properties is to create a second table: ProductID (primary key) | PropertyName (primary key) | PropertyValue. This allows each product to have any number of properties, and you can easily JOIN it with the main products table to get all products with their properties, as sketched below.
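A minimal sketch of that table (names are illustrative; the separate string/float columns mirror the two value types in the question):
CREATE TABLE product_properties (
    product_id    INT          NOT NULL,   -- references products(id)
    property_name VARCHAR(100) NOT NULL,
    string_value  VARCHAR(255),            -- exactly one of these two is set,
    float_value   FLOAT,                   -- matching the product type's spec
    PRIMARY KEY (product_id, property_name)
);

-- all products with their dynamic properties
SELECT p.id, pp.property_name, pp.string_value, pp.float_value
FROM products p
LEFT JOIN product_properties pp ON pp.product_id = p.id;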
When you are open to switching database technologies, you could also use a document-oriented NoSQL database which doesn't use a fixed schema like MongoDB or CouchDB. In such databases, each document in a collection can have a different set of fields. But before you decide to make this step, evaluate how such a database would affect other parts of your application. Listing everything that could be positively or negatively affected without knowing your whole application in and out would be too broad of a question.

XML vs Relational Database

I am looking at a cloud based solution which will give people the ability to enter information which is stored in a SQL database.
The benefits of my application will be that people can also change what type of information is stored (i.e an administrator would be able to add/remove certain attributes to change what data people can store).
Doing this in a relational database does work, but it means the administrator would be changing the actual structure of the database, which carries so many risks and issues that I really don't want to go down this route.
I have thought about using XML, so the database contains two tables, for example:
Template Data
columns (ID, XML) - This will contain the "Default Templates/Structure" of what people will enter which will is used when the users enter data and submit
Data Table
columns (ID, XML) - This will contain the actual data, using the XML template from my first table but storing the actual data in it.
Does this sound like it would work, and could I hit potential performance issues? A lot of the data will be searchable and there could potentially be a LOT of records in the database. I guess I could look at storing the searchable data in separate fields that the administrator can't modify.
Thanks
It is possible and if you do it a little smart it is feasible.
Contrary to Justing's wrong answer, you are not stuck with string manipulation and search... if you actually care to read the documentation.
SQL Server added an XML field type a long time ago.
It takes XML (only), decomposes it internally, and has an indexing mechanism (check http://technet.microsoft.com/en-us/library/ms191497.aspx for details).
Queries then look like:
SELECT
    EventID, EventTime,
    AnnouncementValue = t1.EventXML.value('(/Event/Announcement/Value)[1]', 'decimal(10,2)'),
    AnnouncementDate = t1.EventXML.value('(/Event/Announcement/Date)[1]', 'date')
FROM
    dbo.T1
WHERE
    t1.EventXML.exist('/Event/Indicator/Name[text() = "GDP"]') = 1
(copied from How to query xml column in tsql)
How far it gets you depends - this is heavier on the database and may have limitations, but it is a far cry from the alternative of storing strings and saying goodbye to any indexing.
You can actually even add XML schemata so the data has to conform to some specific schema, as sketched below.
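A minimal sketch of typing a column against a schema collection (the schema content here is abbreviated and hypothetical):
CREATE XML SCHEMA COLLECTION dbo.TemplateSchema AS
N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="Data">
        <xs:complexType>
            <xs:sequence>
                <xs:element name="Field" type="xs:string" maxOccurs="unbounded"/>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
</xs:schema>';

CREATE TABLE dbo.Submissions (
    ID   int IDENTITY PRIMARY KEY,
    Data xml(dbo.TemplateSchema)   -- inserts that do not validate against the schema are rejected
);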
This is possible, but data retrieval will suffer if you query based on the values in the XML string. If you use this, you're stuck with a LIKE filter, which is not recommended for searching a table with many rows. If you will always read data using the ID column only, I think this would be great.
On the other hand, if you separate the data in the XML into several columns, you can refine the way you query data based on multiple columns. This will speed up your searches, especially if the columns are indexed.

One large table or split into two smaller tables?

Is there any performance benefit to splitting a large table with roughly 100 columns into 2 separate tables? This would be in terms of inserting, deleting and selecting tasks? I'm using SQL Server 2008.
If one of the fields is a CLOB or BLOB, and you anticipate it holding a huge amount of data, and you won't need that field very often, and the result set will be transmitted over a long pipe (like server to a web-based client), then I think putting that field in a separate table would be appropriate.
But just returning 100 regular fields probably won't tax your system so much as to justify a separate table and a join.
The only benefit you might see is if a number of columns are only occasionally populated. In which case putting those into their own table and only adding a row when there is data might make sense in terms of overall row overhead and, depending on the number of rows, overall page count for the table(s). That said, this is one of the reasons they introduced sparse columns in SQL Server 2008.
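For instance (table and column names made up), marking rarely-populated columns SPARSE means their NULLs take no space in the row:
CREATE TABLE dbo.Products (
    ProductID   int IDENTITY PRIMARY KEY,
    Name        nvarchar(100) NOT NULL,
    -- occasionally-populated attributes
    GiftMessage nvarchar(500) SPARSE NULL,
    PromoCode   varchar(20)   SPARSE NULL
);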
For the maintenance and other overhead of managing two tables instead of one (especially given that people can act on individual tables if they choose), it's unlikely it would be worth it.
Can you describe what type of entity needs to have over 100 columns? Perhaps the data model is just wrong in the first place.
I would say no, as it would take more execution time to join the 2 tables whenever you wanted to do anything.
It depends on whether you use these fields at the same time in your application.
This kind of performance "improvement" is really bad: it makes your source code impossible to understand. If you have performance trouble with this table, add something (like a table containing the 15 fields you'll use in a given request, updated via a trigger); don't modify your clean solution.
If you don't have a performance problem, don't do anything - you'll see later!

What is wrong with using SELECT * FROM sometable [duplicate]

I've heard that SELECT * is generally bad practice to use when writing SQL commands because it is more efficient to SELECT columns you specifically need.
If I need to SELECT every column in a table, should I use
SELECT * FROM TABLE
or
SELECT column1, colum2, column3, etc. FROM TABLE
Does the efficiency really matter in this case? I'd think SELECT * would be more optimal internally if you really need all of the data, but I'm saying this with no real understanding of database.
I'm curious to know what the best practice is in this case.
UPDATE: I probably should specify that the only situation where I would really want to do a SELECT * is when I'm selecting data from one table where I know all columns will always need to be retrieved, even when new columns are added.
Given the responses I've seen, however, this still seems like a bad idea, and SELECT * should never be used for a lot more technical reasons than I ever thought about.
One reason that selecting specific columns is better is that it raises the probability that SQL Server can access the data from indexes rather than querying the table data.
Here's a post I wrote about it: The real reason select queries are bad: index coverage
It's also less fragile to change, since any code that consumes the data will be getting the same data structure regardless of changes you make to the table schema in the future.
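To illustrate the covering-index point with made-up names: when an index carries every selected column, the base table is never touched.
CREATE INDEX IX_Orders_Customer
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, Total);

-- served entirely from the index
SELECT CustomerID, OrderDate, Total
FROM dbo.Orders
WHERE CustomerID = 42;
-- SELECT * needs columns outside the index, forcing lookups into the table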
Given your specification that you are selecting all columns, there is little difference at this time. Realize, however, that database schemas do change. If you use SELECT * you are going to get any new columns added to the table, even though in all likelihood, your code is not prepared to use or present that new data. This means that you are exposing your system to unexpected performance and functionality changes.
You may be willing to dismiss this as a minor cost, but realize that columns that you don't need still must be:
Read from database
Sent across the network
Marshalled into your process
(for ADO-type technologies) Saved in a data-table in-memory
Ignored and discarded / garbage-collected
Item #1 has many hidden costs including eliminating some potential covering index, causing data-page loads (and server cache thrashing), incurring row / page / table locks that might be otherwise avoided.
Balance this against the potential savings of specifying the columns versus an *, where the only potential savings are:
Programmer doesn't need to revisit the SQL to add columns
The network-transport of the SQL is smaller / faster
SQL Server query parse / validation time
SQL Server query plan cache
For item 1, the reality is that you're going to add / change code to use any new column you might add anyway, so it is a wash.
For item 2, the difference is rarely enough to push you into a different packet-size or number of network packets. If you get to the point where SQL statement transmission time is the predominant issue, you probably need to reduce the rate of statements first.
For item 3, there are NO savings, as the expansion of the * has to happen anyway, which means consulting the table(s) schema anyway. Realistically, listing the columns incurs the same cost because they have to be validated against the schema. In other words, this is a complete wash.
For item 4, when you specify specific columns, your query plan cache could get larger but only if you are dealing with different sets of columns (which is not what you've specified). In this case, you do want different cache entries because you want different plans as needed.
So, this all comes down, because of the way you specified the question, to the issue of resiliency in the face of eventual schema modifications. If you're burning this schema into ROM (it happens), then an * is perfectly acceptable.
However, my general guideline is that you should only select the columns you need, which means that sometimes it will look like you are asking for all of them, but DBAs and schema evolution mean that some new columns might appear that could greatly affect the query.
My advice is that you should ALWAYS SELECT specific columns. Remember that you get good at what you do over and over, so just get in the habit of doing it right.
If you are wondering why a schema might change without code changing, think in terms of audit logging, effective/expiration dates and other similar things that get added systemically by DBAs for compliance issues. Another source of underhanded changes is denormalizations for performance elsewhere in the system, or user-defined fields.
You should only select the columns that you need. Even if you need all columns, it's still better to list the column names so that SQL Server does not have to query the system tables for the columns.
Also, your application might break if someone adds columns to the table. Your program would get columns it didn't expect, too, and it might not know how to process them.
Apart from this, if the table has a binary column, the query will be much slower and use more network resources.
There are four big reasons that select * is a bad thing:
The most significant practical reason is that it forces the user to magically know the order in which columns will be returned. It's better to be explicit, which also protects you against the table changing, which segues nicely into...
If a column name you're using changes, it's better to catch it early (at the point of the SQL call) rather than when you're trying to use the column that no longer exists (or has had its name changed, etc.)
Listing the column names makes your code far more self-documented, and so probably more readable.
If you're transferring over a network (or even if you aren't), columns you don't need are just waste.
Specifying the column list is usually the best option because your application won't be affected if someone adds/inserts a column to the table.
Specifying column names is definitely faster - for the server. But if
performance is not a big issue (for example, this is a website content database with hundreds, maybe thousands - but not millions - of rows in each table); AND
your job is to create many small, similar applications (e.g. public-facing content-managed websites) using a common framework, rather than creating a complex one-off application; AND
flexibility is important (lots of customization of the db schema for each site);
then you're better off sticking with SELECT *. In our framework, heavy use of SELECT * allows us to introduce a new website managed content field to a table, giving it all of the benefits of the CMS (versioning, workflow/approvals, etc.), while only touching the code at a couple of points, instead of a couple dozen points.
I know the DB gurus are going to hate me for this - go ahead, vote me down - but in my world, developer time is scarce and CPU cycles are abundant, so I adjust accordingly what I conserve and what I waste.
SELECT * is a bad practice even if the query is not sent over a network.
Selecting more data than you need makes the query less efficient - the server has to read and transfer extra data, so it takes time and creates unnecessary load on the system (not only the network, as others mentioned, but also disk, CPU etc.). Additionally, the server is unable to optimize the query as well as it might (for example, use covering index for the query).
After some time your table structure might change, so SELECT * will return a different set of columns. So, your application might get a dataset of unexpected structure and break somewhere downstream. Explicitly stating the columns guarantees that you either get a dataset of known structure, or get a clear error on the database level (like 'column not found').
Of course, all this doesn't matter much for a small and simple system.
Lots of good reasons answered here so far, here's another one that hasn't been mentioned.
Explicitly naming the columns will help you with maintenance down the road. At some point you're going to be making changes or troubleshooting, and find yourself asking "where the heck is that column used".
If you've got the names listed explicitly, then finding every reference to that column -- through all your stored procedures, views, etc -- is simple. Just dump a CREATE script for your DB schema, and text search through it.
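In SQL Server you can also search the catalog directly instead of dumping a script (a quick-and-dirty sketch; it matches plain text, so expect some false positives):
SELECT OBJECT_SCHEMA_NAME(object_id) AS SchemaName,
       OBJECT_NAME(object_id)        AS ObjectName
FROM sys.sql_modules
WHERE definition LIKE '%MyColumn%';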
Performance-wise, SELECT with specific columns can be faster (no need to read in all the data). If your query really does use ALL the columns, SELECT with explicit column names is still preferred. Any speed difference will be basically unnoticeable and near constant-time. One day your schema will change, and this is good insurance against the problems that will cause.
Definitely define the columns, because SQL Server will not have to do a lookup on the columns to pull them. If you define the columns, SQL can skip that step.
It's always better to specify the columns you need; if you think about it, SQL doesn't have to think "wtf is *" every time you query. On top of that, someone may later add columns to the table that you do not actually need in your query, and you'll be better off in that case by having specified all of your columns.
The problem with "select *" is the possibility of bringing data you don't really need. During the actual database query, the selected columns don't really add to the computation. What's really "heavy" is the data transport back to your client, and any column that you don't really need is just wasting network bandwidth and adding to the time you're waiting for you query to return.
Even if you do use all the columns brought from a "select *...", that's just for now. If in the future you change the table/view layout and add more columns, you'll start bring those in your selects even if you don't need them.
Another point in which a "select *" statement is bad is on view creation. If you create a view using "select *" and later add columns to your table, the view definition and the data returned won't match, and you'll need to recompile your views in order for them to work again.
I know that writing a "select *" is tempting, 'cause I really don't like to manually specify all the fields on my queries, but when your system start to evolve, you'll see that it's worth to spend this extra time/effort in specifying the fields rather than spending much more time and effort removing bugs on your views or optimizing your app.
While explicitly listing columns is good for performance, don't get crazy.
So if you use all the data, try SELECT * for simplicity (imagine having many columns and doing a JOIN... the query may get awful). Then measure. Compare it with a query with the column names listed explicitly.
Don't speculate about performance, measure it!
Explicit listing helps most when you have some column containing big data (like the body of a post or article) and don't need it in a given query. Then by not returning it, the DB server can save time, bandwidth, and disk throughput. Your query result will also be smaller, which is good for any query cache.
You should really be selecting only the fields you need, and only the rows you need, i.e.
SELECT Field1, Field2 FROM SomeTable WHERE --(constraints)
Outside of the database, dynamic queries run the risk of injection attacks and malformed data. Typically you get round this using stored procedures or parameterised queries. Also (although not really that much of a problem) the server has to generate an execution plan each time a dynamic query is executed.
It is NOT faster to use explicit field names versus *, if, and only if, you need to get the data for all fields.
Your client software shouldn't depend on the order of the fields returned, so that's a nonsense argument too.
And it's possible (though unlikely) that you need to get all fields using * because you don't yet know what fields exist (think very dynamic database structure).
Another disadvantage of using explicit field names is that if there are many of them and they're long then it makes reading the code and/or the query log more difficult.
So the rule should be: if you need all the fields, use *, if you need only a subset, name them explicitly.
The result can be too huge, and it is slow to generate and send the result from the SQL engine to the client.
The client side, being a generic programming environment, is not and should not be designed to filter and process the results (e.g. the WHERE clause, ORDER clause), as the number of rows can be huge (e.g. tens of millions of rows).
Naming each column you expect to get in your application also ensures your application won't break if someone alters the table, as long as your columns are still present (in any order).
Performance-wise I have seen comments that both are equal, but from a usability aspect there are some +'s and -'s.
When you use a (select *) in a query, and someone alters the table and adds new fields which are not needed by the previous query, that is unnecessary overhead. And what if the newly added field is a blob or an image field? Your query response time would become really slow then.
On the other hand, if you use a (select col1, col2, ...) and the table gets altered with new fields which are needed in the result set, you always need to edit your select query after the table alteration.
But I suggest always using select col1, col2, ... in your queries and altering the query if the table gets altered later...
This is an old post, but still valid. For reference, I have a very complicated query consisting of:
12 tables
6 Left joins
9 inner joins
108 total columns on all 12 tables
I only need 54 columns
A 4 column Order By clause
When I execute the query using Select *, it takes an average of 2869ms.
When I execute the query listing the 54 needed columns explicitly, it takes an average of 1513ms.
Total rows returned is 13,949.
There is no doubt selecting column names means faster performance over Select *
A SELECT is equally efficient (in terms of speed) whether you use * or name the columns.
The difference is about memory, not speed. When you select several columns, SQL Server must allocate memory to serve you the query, including all the data for all the columns that you've requested, even if you're only using one of them.
What does matter in terms of performance is the execution plan, which in turn depends heavily on your WHERE clause and the number of JOINs, OUTER JOINs, etc.
For your question, just use SELECT *. If you need all the columns, there's no performance difference.
It depends on the version of your DB server, but modern versions of SQL can cache the plan either way. I'd say go with whatever is most maintainable with your data access code.
One reason it's better practice to spell out exactly which columns you want is because of possible future changes in the table structure.
If you are reading in data manually using an index based approach to populate a data structure with the results of your query, then in the future when you add/remove a column you will have headaches trying to figure out what went wrong.
As to what is faster, I'll defer to others for their expertise.
As with most problems, it depends on what you want to achieve. If you want to create a db grid that will allow all columns in any table, then "Select *" is the answer. However, if you will only need certain columns and adding or deleting columns from the query is done infrequently, then specify them individually.
It also depends on the amount of data you want to transfer from the server. If one of the columns is a defined as memo, graphic, blob, etc. and you don't need that column, you'd better not use "Select *" or you'll get a whole bunch of data you don't want and your performance could suffer.
To add on to what everyone else has said: if all of the columns that you are selecting are included in an index, your result set will be pulled from the index instead of looking up additional data in the table.
SELECT * is necessary if one wants to obtain metadata such as the number of columns.
Gonna get slammed for this, but I do a select * because almost all my data is retrieved from SQL Server views that pre-combine needed values from multiple tables into a single easy-to-access view.
I do then want all the columns from the view, which won't change when new fields are added to the underlying tables. This has the added benefit of allowing me to change where data comes from. FieldA in the view may at one time be calculated, and then I may change it to be static. Either way the view supplies FieldA to me.
The beauty of this is that it allows my data layer to get datasets. It then passes them to my BL which can then create objects from them. My main app only knows and interacts with the objects. I even allow my objects to self-create when passed a datarow.
Of course, I'm the only developer, so that helps too :)
What everyone above said, plus:
If you're striving for readable maintainable code, doing something like:
SELECT foo, bar FROM widgets;
is instantly readable and shows intent. If you make that call you know what you're getting back. If widgets only has foo and bar columns, then selecting * means you still have to think about what you're getting back, confirm the order is mapped correctly, etc. However, if widgets has more columns but you're only interested in foo and bar, then your code gets messy when you query for a wildcard and then only use some of what's returned.
And remember: if you have an inner join, by definition you do not need all the columns, as the data in the join columns is repeated.
It's not like listing columns in SQL Server is hard or even time-consuming. You just drag them over from the object browser (you can get them all in one go by dragging the word Columns). Putting a permanent performance hit on your system (because this can reduce the use of indexes and because sending unneeded data over the network is costly), and making it more likely that you will have unexpected problems as the database changes (sometimes columns get added that you do not want the user to see, for instance), just to save less than a minute of development time is short-sighted and unprofessional.
Absolutely define the columns you want to SELECT every time. There is no reason not to and the performance improvement is well worth it.
They should never have given the option to "SELECT *"
If you need every column then just use SELECT * but remember that the order could potentially change so when you are consuming the results access them by name and not by index.
I would ignore comments about how * needs to go get the list - chances are parsing and validating named columns takes as much processing time, if not more. Don't prematurely optimize ;-)