I am creating a database table that'll have a list of all Tags available in my application (just like SO's tags).
Currently, I don't have anything associated with each tag (and I'll probably never have), so my idea was to have something of the form
Tags (Tag(pk) : string)
Should this be the way to do it? Or should I instead do something like
Tags (tag_id(pk) : int, tag : string)
I guess lookups on the table in the second case would be faster than in the first, but it would also take up more space?
Thanks
I'd go for the second option with the surrogate key.
It will mean the table takes up more space, but it will likely reduce space overall, assuming you store the tag as a foreign key in other tables (e.g. a posts/tags table).
Using an int rather than a string will make the lookups required to enforce the foreign key more efficient, and it means that updates to tag titles don't need to touch multiple tables.
Indexes work better with integers than with CHAR/VARCHAR, so go with a dedicated integer primary key column. If you need tag names to be unique, you can add a constraint, but it's probably not worth the hassle.
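For illustration, a minimal sketch of that design; the table and column names (and the posts/tags junction table) are hypothetical:

-- Surrogate int key plus an optional uniqueness constraint on the name.
CREATE TABLE tags (
    tag_id INT NOT NULL PRIMARY KEY,
    tag    VARCHAR(50) NOT NULL UNIQUE
);

-- Junction table referencing the narrow int key instead of the string.
CREATE TABLE post_tags (
    post_id INT NOT NULL,
    tag_id  INT NOT NULL REFERENCES tags (tag_id),
    PRIMARY KEY (post_id, tag_id)
);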
You should go for the second option. Firstly, you never know what the future holds. Secondly, you may later want multi-language support or other features that make string-as-the-primary-key feel awkward. Thirdly, I like the idea of using a standard convention for table definitions, i.e. that there is always a column 'id' or 'pk'. It separates business from technology.
Quite possibly you'll have a faster lookup with an integer index. Also, consider making that index clustered for a further speedup.
I wouldn't put too much emphasis on the performance issue, though. As soon as a program talks to a database over a network, the round trip adds a bigger delay than 99% of your queries take themselves (with the exception of reporting queries, of course!).
Those two options achieve quite different things: in the first case you have unique tags, and in the second you don't. You haven't said what purpose TAG_ID serves in this model. Unless you put in TAG_ID for a good reason, I'd stick with the first design. It's smaller, appears to meet your requirements precisely, and Tag seems like a more obvious choice for a key (on grounds of familiarity and simplicity).
I'm looking for a solution for reverse engineering a DB without foreign keys (really! a 20-year-old DB...). The intention is to do this completely without additional application or persistence logic, just by analyzing the data.
I know this would be somewhat difficult, but it should be possible if the data itself, especially the PKs, is analyzed as well.
I don't think there is a universal solution to your problem. Hopefully there is some sort of naming convention for the tables/columns that can guide you. You can query the system tables to try and figure out what's going on (Oracle: user_tab_columns, SQL Server: INFORMATION_SCHEMA.COLUMNS, etc.). Good luck!
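For example, on SQL Server (MySQL and PostgreSQL expose the same view) you could start with a full column inventory like this:

-- List every table, column, and declared type as raw material for analysis.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION;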
I also don't think you'll find a universal solution to this... but I'd like to suggest an approach, especially if you don't have any source code that could be read to guide you on the mapping:
First, scan all the tables in your database. By "scan" I mean store the table names and columns.
You can infer column types by trying to convert the data to specific formats (start by trying to convert to dates, numbers, booleans and so on). You can also try to discover the nature of a column by analysing its contents (whether it has only numbers without decimal points, numbers with slashes, long or short texts, only a single digit, etc.).
Once you have mapped all the tables, start by comparing the contents of all columns that have a numeric type. (Why? If the database was designed by a human, then he/she/they probably used numbers as primary/foreign keys.)
Every time you find more than X matching values between the contents of two columns from two different tables, log this connection. (The X threshold depends on how many records you have...)
This analysis must run for each table, comparing it against every other table, column by column - so... this will take some time...
Of course, this is just an overview of what needs to be done, but the code isn't complex to write - the core comparison for one candidate pair can be sketched in a single query, as below.
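A sketch under made-up names (orders.customer_ref and customers.id are hypothetical):

-- Count how many distinct values of one numeric column also appear in
-- another table's candidate key; a high match rate suggests a foreign key.
SELECT COUNT(DISTINCT o.customer_ref) AS matched_values
FROM orders o
WHERE o.customer_ref IN (SELECT c.id FROM customers c);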
Good luck and let me know if you find any sort of tool to do this! :-)
No offense, but you can't have been working with databases very long if this surprises you.
I am going to assume that by "reverse engineering" you are just looking to fill in the foreign keys, not moving to NoSQL or something. It could be an interesting project. Here is how I would go about it:
Look at all the SELECT statements and see how joins are made to a table. Twenty years ago this would be in a WHERE clause, but it gets more complicated than that, of course: correlated subqueries, UPDATE statements with FROM clauses, and whatever else implies a join of some sort. You have to be able to figure all that out. If you want to do it formally (you can probably suss all this out intuitively), you could count the number of times column combinations are used in joins between tables. List them by pairs of tables, not by the full set of tables in the join. Those pairs are the candidate foreign keys whenever one side is a primary key; the other side gets the foreign key. There are multi-column PKs, but you can figure those out (if the other side of the primary key is spread across two tables, that's not a foreign key). If one column ends up pointing to the PKs of two different tables, that's not a proper foreign key either, but it might be appropriate to pick one table and use it as the target.
If you don't already have primary keys, you should sort that out first. Indexes, perhaps even clustered indexes (in Sybase/MSSQL), aren't always the correct primary keys. In any case, you may have to change the primary keys accordingly.
Collecting all the statements might be challenging in itself. You could use perl/awk to parse them out of their C/Java/PHP/Basic/COBOL programs, or you could capture them by monitoring input to the server. You would want to look for WHERE/JOIN/APPLY etc. rather than just SELECT. There are lots of other ways.
I'm developing a job service that has features like radial search, full-text search, and the ability to combine full-text search with disabling certain job listings (such as un-checking a checkbox so full-time jobs are no longer returned).
The developer who is working on Sphinx wants the database information to all be stored as integers with a key (so in the table "Job Type", values might be stored such that 1 = "part-time" and 2 = "full-time")... whereas the other developers want to keep the database as strings (so in the table "Job Type" it says "part-time" or "full-time").
Is there a reason to keep the database as ints? Or should strings be fine?
Thanks!
Walker
Choosing your key can have a dramatic performance impact. Whenever possible, use ints instead of strings. This is called using a "surrogate key", where the key presents a unique and quick way to find the data, rather than the data standing on its own.
String comparisons are resource intensive, potentially orders of magnitude worse than comparing numbers.
You can drive your UI off the surrogate key but show another column (such as the job type name). This way, when you hit the database you pass the int in and avoid scanning the table for a row with a matching string.
When it comes to joining tables in the database, the joins will run much faster if you have ints or another numeric type as your primary keys.
Edit: In the specific case you have mentioned, if you only have two options for what your field may be, and it's unlikely to change, you may want to look into something like a bit field, and you could name it IsFullTime. A bit or boolean field holds a 1 or a 0, and nothing else, and typically isn't related to another field.
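A sketch of that idea, with hypothetical table and column names (BIT is the SQL Server spelling; other engines use BOOLEAN):

-- Two fixed options collapse into a single bit column; no lookup table needed.
CREATE TABLE job (
    id         INT NOT NULL PRIMARY KEY,
    title      VARCHAR(100) NOT NULL,
    IsFullTime BIT NOT NULL DEFAULT 0   -- 1 = full-time, 0 = part-time
);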
If you are normalizing your structure (I hope you are), then numeric keys will be the most efficient.
Aside from the usual reasons to use integer primary keys, the use of integers with Sphinx is essential, as the result set returned by a successful Sphinx search is a list of document IDs associated with the matched items. These IDs are then used to extract the relevant data from the database. Sphinx does not return rows from the database directly.
For more details, see the Sphinx manual, especially 3.5. Restrictions on the source data.
I've been working on a database and I have to deal with a TEXT field.
Now, I believe I've seen some place mentioning it would be best to isolate the TEXT column from the rest of the table (putting it in a table of its own).
However, now I can't find this reference anywhere and since it was quite a while ago, I'm starting to think that maybe I misinterpreted this information.
Some research revealed this, suggesting that
Separate text/blobs from metadata, don't put text/blobs in results if you don't need them.
However, I am not familiar with the definition of "metadata" being used here.
So I wonder if there are any relevant advantages in putting a TEXT column in a table of its own. What are the potential problems of having it with the rest of the fields? And potential problems of keeping it in a separated table?
This table (without the TEXT field) is supposed to be searched (SELECTed) rather frequently. Does the "premature optimization is evil" principle apply here? (If there really is a penalty for TEXT columns, how significant is it, considering it is fairly easy to change this later if needed?)
Besides, are there any good links on this topic? (Perhaps stackoverflow questions&answers? I've tried to search this topic but I only found TEXT vs VARCHAR discussions)
Yep, it seems you've misinterpreted the meaning of the sentence. What it says is that you should only do a SELECT including a TEXT field if you really need the contents of that field. This is because TEXT/BLOB columns can contain huge amounts of data which would need to be delivered to your application - this takes time and of course resources.
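For example, with a hypothetical articles table:

-- List view: leave the TEXT column out of the result set entirely.
SELECT id, title, created_at FROM articles WHERE author_id = 42;

-- Detail view: fetch the body only for the single row being displayed.
SELECT body FROM articles WHERE id = 1234;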
Best wishes,
Fabian
This is probably premature optimisation. Performance tuning MySQL is really tricky and can only be done with real performance data for your application. I've seen plenty of attempts to second guess what makes MySQL slow without real data and the result each time has been a messy schema and complex code which will actually make performance tuning harder later on.
Start with a normalised, simple schema; then, when something proves too slow, add complexity only where/if needed.
As others have pointed out, the quote you mentioned is more applicable to query results than to the schema definition; in any case, your choice of storage engine affects the validity of the advice anyway.
If you do find yourself needing to add the complexity of moving TEXT/BLOB columns to a separate table, then it's probably worth considering the option of moving them out of the database altogether. Often file storage has advantages over database storage especially if you don't do any relational queries on the contents of the TEXT/BLOB column.
Basically, get some data before taking any MySQL tuning advice you get on the Internet, including this!
The data for a TEXT column is already stored separately. Whenever you SELECT * from a table with text column(s), each row in the result-set requires a lookup into the text storage area. This coupled with the very real possibility of huge amounts of data would be a big overhead to your system.
Moving the column to another table simply requires an additional lookup, one into the secondary table, and the normal one into the text storage area.
The only time that moving TEXT columns into another table will offer any benefit is if there is a tendency to usually select all columns from tables. This is merely introducing a second bad practice to compensate for the first. It should go without saying that two wrongs are not the same as three lefts.
The concern is that a large text field—like way over 8,192 bytes—will cause excessive paging and/or file i/o during complex queries on unindexed fields. In such cases, it's better to migrate the large field to another table and replace it with the new table's row id or index (which would then be metadata since it doesn't actually contain data).
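A sketch of such a migration, with hypothetical names; note that reassembling the full row afterwards costs one join:

-- The narrow, frequently searched table keeps only the small columns.
CREATE TABLE article (
    id    INT NOT NULL PRIMARY KEY,
    title VARCHAR(100) NOT NULL
);

-- The large field moves to a side table keyed by the same id.
CREATE TABLE article_body (
    article_id INT NOT NULL PRIMARY KEY REFERENCES article (id),
    body       TEXT NOT NULL
);

-- Needed only when the large field is actually displayed.
SELECT a.title, b.body
FROM article a
JOIN article_body b ON b.article_id = a.id
WHERE a.id = 1234;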
The disadvantages are:
a) More complicated schema
b) If the large field is usually inspected or retrieved anyway, there is no advantage
c) Ensuring data consistency is more complicated and a potential source of database malaise.
There might be some good reasons to separate a text field out of your table definition. For instance, if you are using an ORM that loads the complete record no matter what, you might want to create a properties table to hold the text field so it doesn't load all the time. However, if you control the code 100%, then for simplicity leave the field on the table and only select it when you need it, to cut down on data transfer and reading time.
Now, I believe I've seen some place mentioning it would be best to isolate the TEXT column from the rest of the table (putting it in a table of its own).
However, now I can't find this reference anywhere and since it was quite a while ago, I'm starting to think that maybe I misinterpreted this information.
You probably saw this, from the MySQL manual
http://dev.mysql.com/doc/refman/5.5/en/optimize-character.html
If a table contains string columns such as name and address, but many queries do not retrieve those columns, consider splitting the string columns into a separate table and using join queries with a foreign key when necessary. When MySQL retrieves any value from a row, it reads a data block containing all the columns of that row (and possibly other adjacent rows). Keeping each row small, with only the most frequently used columns, allows more rows to fit in each data block. Such compact tables reduce disk I/O and memory usage for common queries.
This is indeed telling you that in MySQL you are discouraged from keeping TEXT data (and BLOB, as written elsewhere) in tables that are frequently searched.
Suppose you have a few quite large (100k+) sets of objects and can provide this data (e.g. the name) in 20+ languages. What is an efficient way to store/handle this data in a SQL database?
The obvious way to do that looks like this - however, are there other ways which make more sense? I'm a bit worried about performance.
CREATE TABLE "object" (
"id" serial NOT NULL PRIMARY KEY
);
CREATE TABLE "object_name" (
"object_id" integer NOT NULL REFERENCES "object" ("id")
"lang" varchar(5) NOT NULL,
"name" varchar(50) NOT NULL
);
As for usage: a user will only ever select one language, and that will result in potentially large joins over the object_name table.
Premature optimization or not, I'm interested in other approaches, if only to gain some peace of mind that the obvious solution isn't a very stupid one.
To clarify the actual model is way more complicated. That's just the pattern identified so far.
If you have a composite key on (object_id, lang), there shouldn't be any joins, just a single index lookup, right? (Try EXPLAIN SELECT to be sure.)
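For instance, using the schema from the question:

-- Composite primary key: the per-object, per-language lookup becomes
-- a single index probe, and it also enforces one name per language.
ALTER TABLE object_name ADD PRIMARY KEY (object_id, lang);

EXPLAIN SELECT name FROM object_name WHERE object_id = 123 AND lang = 'en';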
In my own projects, I don't translate at the DB level. I let the user (or the OS) give me a lang code and then I load all the texts in one go into a hash. The DB then sends me IDs for that hash and I translate the texts the moment I display them somewhere.
Note that my IDs are strings, too. That way, you can see which text you're using (compare "USER" with "136" -- who knows what "136" might mean in the UI without looking into the DB?).
[EDIT] If you can't translate at the UI level, then your DB design is about the best you can aim for. It's as small as possible, it's easy to index, and the joins are cheap.
If you want to take it one step further, and you can generate the SQL queries at the app level, you can consider creating views (one per language) and then using the views in your joins, which would avoid the two-column join. But I doubt that such a complex approach will have a positive ROI.
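A sketch of the view idea, with one hypothetical language shown:

-- One view per language; joins against it no longer need the lang column.
CREATE VIEW object_name_en AS
SELECT object_id, name
FROM object_name
WHERE lang = 'en';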
Have you considered using multiple tables, one for each language?
It will cost a bit more in terms of coding complexity, but you will be loading/accessing only one table per language, and each table will be smaller and therefore quicker to work with (possibly also space-wise, as you won't have a "lang" value in each row).
Also, if you really want one-table-to-rule-them-all, you can create a view and join them :)
In addition to what Wim wrote, the OBJECT table in your case is useless. There's no need for such a table, since it doesn't store any information that isn't already contained in OBJECT_NAME.
I need a status column that will have about a dozen possible values.
Is there any reason why I should choose int (StatusID) over char(4) (StatusCode)?
Since SQL Server doesn't support named constants, char is far more descriptive than int when used as a constant in stored procedures and views.
To clarify, I would still use a lookup table either way, since I will need more descriptive text for the UI. So this decision is only to help me as the developer when I'm maintaining the stored procedures and views.
Right now I'm leaning toward char(4). Especially since designing views in SQL Server Management Studio prevents me from adding comments (I know it's possible to add it in the script editor, but realistically I will use the View Designer far more often, especially if the view is trivial). StateCODE = 'NEW' is much more readable than StateID = 1000.
I guess the question is: will there be cases where char(4) is problematic? Since the database is pretty small, I'm not too concerned about a slight performance hit (like using tinyint versus int); I'm more afraid of code maintenance problems.
Database purists will say a key should have no meaning in the business domain, and that you should create a status table where you look up the description and other meanings of the status.
But for operators and end users, having a descriptive status code can be a blessing. And it doesn't even have to be char(4), you can make it varchar(20). This allows them to query without joins, and inspect the database in an easier way.
In the end, I think the varchar(20) organization will run more smoothly and go home earlier on Friday. But the int organization has a better abstraction of the database, and they can enjoy meta-programming on Friday evening (or boasting on forums).
(All of this assumes that you're writing business support software. One of the more successful business support systems, SAP, makes successful use of meaningful keys.)
There are many pros and cons to each method, and I'm sure other arguments will come up in favour of using a char(4). My reasons for choosing an int over a char include:
I always use lookup tables. They allow for an audit trail of the value to be retained and easily examined. For example, if one of your status codes is 'MING' and a business decision is made to change it from 'MING' to 'MONG' from a certain date, my lookup table handles this.
Smaller index - if you need to index this column, it will be thinner.
Extendability - OK, I made that word up, but if you need to go from 4 chars to 5 chars for example, a lookup table would be a blessing.
Descriptions: We use a lot of TLAs here, which is fine once you know what they mean, but if I gave a business user a report that said "GDA's 2007 1001", they wouldn't necessarily twig that GDA = Good Dead on Arrival. With a lookup table, I can add this description.
Best practice: Can't find the link to hand, but it might be something I read in a K. Tripp article. Aim to make your clustered primary key an incrementing integer to optimise the index.
Of course if you are absolutely positive that you will never need any more than a handful of 4 characters, there is no reason not to bang it in the table.
The best approach is a lookup table with the defined values, related back to the original table that uses the enumeration.
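For example, with illustrative names:

-- The lookup table holds the defined codes plus descriptive text for the UI.
CREATE TABLE order_status (
    status_code CHAR(4) NOT NULL PRIMARY KEY,
    description VARCHAR(50) NOT NULL
);

-- The foreign key guarantees only defined status values ever appear.
CREATE TABLE orders (
    id          INT NOT NULL PRIMARY KEY,
    status_code CHAR(4) NOT NULL REFERENCES order_status (status_code)
);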
Collation ambiguities are one reason to say no to char(4): does ABcD = abCD = äBCd?
If you have 12 possible values, why not tinyint/byte and a Status table?
If you have to store the status for 10 million rows, the 3-byte difference and the collation/string comparisons add up.
The place where I've run into this use case is columns that would map onto things that I would typically use an Enum for when programming. Do you store the integer value of the Enum or the name of the Enum in the database column? Honestly, I've done it both ways. Usually, I ask myself if the database will be used outside the application I'm building. If so, I will choose the human readable format to store in the database. If not, then I'll choose the integer value as it saves a little time when reconstituting (it's just a cast instead of a parse operation) the Enum in code.
You could also use a tinyint over an int
I always choose ints, simply because they are easier to map to enums in code.
If you're dealing with huge amounts of data and high throughput then a smallint or tinyint can give better performance and a smaller footprint on the hard disk. If the data in your application is often viewed directly through applications like Access or Cognos then your business people will probably appreciate the descriptive values. I know that when I'm analyzing data as part of my Database Developer role I get tired of joining a lot of lookup tables because I can't remember if 1 = Foo and 2 = Bar or 1 = Bar and 2 = Foo.
Also, although performance will be enhanced when you look up rows by these codes (the indexes are smaller), it can be hurt (in a minor way) by the extra joins when you often fetch rows regardless of the code but still have to include the text value. In most applications that's not an issue, though; it would probably only come into play in large data warehousing/reporting environments.