Multiple tables vs one table with more columns - sql

My chosen database is MongoDB, but the question should be database-independent.
So for example, each record will have a flag that can take 1 of 2 possible values.
What are the pros and cons of:
Having 1 table with a column to hold the value of this flag,
versus:
Having 2 tables to hold the two different types of records distinguished by the aforementioned flag?
Would this be cheaper in terms of storage, since you don't have that extra column?
Would this also be faster in queries, since you know exactly which table to look in without having to apply a filter?
What is the common practice in industry?

Storage for a single column holding just a flag (e.g. active and archived) should be negligible. Queries could be faster with two tables; however, your application becomes more complex, since you have to write 2 queries.
When you have only 2 distinct values and these values are more or less evenly distributed, then an index will not be used, thus the performance should be equal - unless you select the entire table.
It might be useful to have 2 tables if the flags are not evenly distributed. For example you have a rather small active data set which is queried frequently, and a big archive data set which is much bigger but hardly queried.
If available, you can also work with partitions, which is effectively a good combination of both approaches.
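For illustration, a minimal sketch of that partitioned variant using PostgreSQL declarative list partitioning; the table and column names here are made up, and other RDBMSs have their own partitioning syntax:

-- Hypothetical example: one logical table, physically split by the flag value.
CREATE TABLE records (
    id      bigint NOT NULL,
    status  text   NOT NULL,  -- 'active' or 'archived'
    payload text
) PARTITION BY LIST (status);

CREATE TABLE records_active   PARTITION OF records FOR VALUES IN ('active');
CREATE TABLE records_archived PARTITION OF records FOR VALUES IN ('archived');

-- The application still writes a single query; the planner only touches
-- the matching partition (partition pruning).
SELECT * FROM records WHERE status = 'active';

This keeps the single-query simplicity of one table while giving you the storage and scan benefits of two.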

Related

Selecting one column from a table that has 100 columns

I have a table with 100 columns (yes, a code smell and arguably a suboptimal design). The table has an 'id' as PK. No other column is indexed.
So, if I fire a query like:
SELECT first_name from EMP where id = 10
Will SQL Server (or any other RDBMS) have to load the entire row (all columns) in memory and then return only the first_name?
(In other words - the page that contains the row id = 10 if it isn't in the memory already)
I think the answer is yes, unless it has column markers within a row. I understand there might be optimization techniques, but is that the default behavior?
[EDIT]
After reading some of your comments, I realized I asked an XY question unintentionally. Basically, we have tables with 100s of millions of rows with 100 columns each and receive all sorts of SELECT queries on them. The WHERE clause also changes but no incoming request needs all columns. Many of those cell values are also NULL.
So, I was thinking of exploring a column-oriented database to achieve better compression and faster retrieval. My understanding is that column-oriented databases will load only the requested columns. Yes! Compression will help too to save space and hopefully performance as well.
For MySQL: Indexes and data are stored in "blocks" of 16KB. Each level of the B+Tree holding the PRIMARY KEY needs to be accessed in your case; for a million rows, that is about 3 levels, hence 3 blocks. Within the leaf block, there are probably dozens of rows, with all their columns (unless a column is "too big"; but that is a different discussion).
For MariaDB's ColumnStore: The contents of one column for 64K rows are held in a packed, compressed structure that varies in size and layout. Before getting to that, the clump of 64K rows must be located. After fetching it, it must be unpacked.
In both cases, the structure of the data on disk is a compromise between speed and space, for both simple and complex queries.
Your simple query is easy and efficient to do in a regular RDBMS, but messier to do in a columnstore. Columnstore is a niche market in which your query is atypical.
Be aware that fetching blocks is typically the slowest part of performing the query, especially when I/O is required. There is a cache of blocks in RAM.
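If the only goal is to avoid reading the wide rows for this particular lookup, a covering index is one common workaround; a minimal sketch in SQL Server syntax, assuming the EMP table and columns from the question:

-- Hypothetical: a narrow index that contains everything the query needs,
-- so the engine can answer it without touching the 100-column rows.
CREATE NONCLUSTERED INDEX IX_EMP_id_first_name
    ON EMP (id) INCLUDE (first_name);

SELECT first_name FROM EMP WHERE id = 10;

The trade-off is extra storage and slightly slower writes, and it only helps the specific column combinations you index.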

How to store large currency like data in database

My data is similar to currency in many aspects so I will use it for demonstration.
I have 10-15 different groups of data; we can say different currencies, like Dollar or Euro.
They need to have these columns:
timestamp INT PRIMARY KEY
value INT
Each of them will have more than 1 billion rows, and I will append new rows as time passes.
I will just select them over some intervals and create graphs, probably with multiple currencies in the same graph.
The question is: should I add a group column and store everything in one table, or keep them separate? If they are in the same table, the timestamp will no longer be unique, and I would probably have to use more advanced SQL techniques to keep it efficient.
10 - 15 "currencies"? 1 billion rows each? Consider list partitioning in Postgres 11 or later. This way, the timestamp column stays unique per partition. (Although I am not sure why that is a necessity.)
Or simply have 10 - 15 separate tables without storing the "currency" redundantly per row. Size matters with this many rows.
Or, if you typically have multiple values (one for each "currency") for the same timestamp, you might use a single table with 10-15 dedicated "currency" columns. Much smaller overall, as it saves the tuple overhead for each "currency" (28 bytes per row or more). See:
Making sense of Postgres row sizes
The practicality of a single row for multiple "currencies" depends on detailed specs. For example: might not work so well for many updates on individual values.
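A minimal sketch of that last single-table variant (hypothetical column names; one row per shared timestamp, NULL where a "currency" has no value at that time):

CREATE TABLE currency_values (
    ts  integer PRIMARY KEY,  -- the shared timestamp from the question
    usd integer,
    eur integer,
    gbp integer
    -- ... one column per "currency", 10-15 in total
);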
You added:
I have read about clustered indexes, which order data in physical order on disk. I will not insert new rows in the middle of the table.
That seems like a perfect use case for BRIN indexes, which are dramatically smaller than their B-tree relatives. Typically a bit slower, but with your setup maybe even faster. Related:
How do I improve date-based query performance on a large table?
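Going back to the BRIN suggestion: a minimal sketch, assuming the hypothetical single-table layout sketched above:

-- BRIN stores only summary metadata per block range, so it stays tiny even
-- with billions of rows; it works well here because rows are appended in
-- timestamp order, keeping the physical order correlated with ts.
CREATE INDEX currency_values_ts_brin ON currency_values USING brin (ts);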

SQL Structure, Dynamic Two Columns or Unique Columns

I'm not sure which is faster. I need to store lists of possible data.
Currently I have an SQL table with the following structure, accessed with PHP.
boxID
place
name -- (serialNum, itemNum, idlock, etc, etc)
data
--(Note: The Primary Key here would be boxId, place, name, and data, to prevent duplicate data.)
The reason I set it up like this was to prevent creating columns per named data. It's a possibility in the future to have 5-10 different named data or more. It is also possible to store 1,000 - 10,000 entries of data in one week for just one named data. It will be searched as well, like when I get the place from a specific serialNum, then get all data related to that place (a specific serialNum, itemNum, idLock, etc, etc).
But my concern is that my structure could be slower than just creating a named column for each named data. For example:
boxID
place
serialNum
itemNum
idLock
etc
etc
--(Note: Not even sure how to add keys to this if I did it this way)
To sum it up: Which is faster and better practice? (Keep in mind I'm still a novice with SQL.)
The best practice is to model your data as entities with specific attributes. Typically an entity has at most a few dozen attributes. The entities typically turn into tables, and the attributes typically turn into columns. That is, the physical model and the logical model are often very similar.
There may be other considerations. For instance, there is a limit on the number of columns a row can have -- and if you have more columns, you need another solution. Similarly, if the data is sparse (that is, most values are NULL), then having lots of unused columns may be a waste of space. That is, it is more efficient to store it in another format. SQL Server offers sparse columns for this reason.
My suggestion is that you design your table in an intuitive way with named columns. A volume of 1,000 - 10,000 rows per week is not that much data. That turns into 50,000 - 500,000 rows per year, a volume SQL Server should easily be able to handle. You don't say how many named entities you have, but tables with millions or tens of millions of rows are quite reasonable for modern databases.
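To make that concrete, a minimal sketch in SQL Server syntax; the column types, the key choice, and the index are illustrative guesses rather than a definitive design, and SPARSE only pays off when most values are NULL:

CREATE TABLE box (
    boxID     int         NOT NULL,
    place     varchar(50) NOT NULL,
    serialNum varchar(50) SPARSE NULL,
    itemNum   varchar(50) SPARSE NULL,
    idLock    varchar(50) SPARSE NULL,
    CONSTRAINT PK_box PRIMARY KEY (boxID, place)
);

-- Supports the "find the place for a given serialNum" lookups.
CREATE INDEX IX_box_serialNum ON box (serialNum);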

Correlation between amount of rows and amount columns in database performance

Is there a correlation between the number of rows/number of columns used and its impact on performance within the (MS)SQL database?
A little more background:
We have to store lots of data from measurement devices. These devices ping a string with data back to us around 100 times a day. These strings contain roughly 300 fields. Assuming we have 100 devices in operation, that means we get 10,000 records back every day. At our back-end we split these data strings and have to put them into the database. When these data strings are fixed, that means we add around 10,000 new rows to the database each day. No big deal.
However, the contents of these data strings may change over time. There are two options we are considering:
Using vertical tables to store the data dynamically
Using horizontal tables and add a new column now and then when it's needed.
For ease of use we'd like to choose the first approach. However, that means we're adding 100*100*300 = 3,000,000 rows each day. Data has to be stored for a year and a month (395 days), so we end up at around 1.2 billion rows, not counting the expected growth.
From a performance perspective, is it smarter to use a 'vertical' or a 'horizontal' approach?
When choosing the 'vertical' solution, how can we actually optimize performance by using PKs/FKs wisely?
When choosing for the 'horizontal' solution, are there recommendations for adding columns to the table?
I have a vertical DB with 275 million rows in the "values" table. We took this approach because we couldn't accurately define the schema at the outset either. Inserts are fantastic. Selects suck. To be fair, we throw in a couple of extra doohickies the typical vertical schema doesn't have to deal with.
Have a search for EAV, aka Entity Attribute Value models. You'll find a lot of heat on both sides of the debate. Two good articles on making it work are:
What is so bad about EAV, anyway?
dave’s guide to the eav
My guess is these sensors don't just start sending you extra fields. You have to release new sensors or sensor code for this to happen. That's your chance to do change control on your schema and add the extra columns. If external parties can connect sensors without notifying you this argument is null and void and you may be stuck with an EAV.
For the horizontal option you can split tables, putting the frequently-used columns in one table and the less-used ones in a second; both tables share the same primary key values so you can link the less-used columns back to the more-used ones. You can also use the RDBMS's built-in partitioning functionality to split each day's (or week's or month's) data from the others'.
Generally, you can tune a table more for inserts (or any DML) or for queries. Improving one side comes at the expense of the other. Usually, it's a balancing act.
First of all, 10K inserts a day is not really a large number. Sure, it's not insignificant, but it doesn't even come close to what would be considered "large" nowadays. So, while we don't want to make inserts downright sluggish, this gives you some wiggle room.
Creating an index on the device id and/or entry timestamp will do some logical partitioning of the data for you. The exact makeup of your index(es) will depend on your queries. Are you looking for all entries for a given date or date range? Then index the timestamp column. Are you looking for all entries received from a particular device? Then index the device id column. Are you looking for entries from a particular device on a particular date or date range or sorted by the date? Then create an index on both columns.
So if you ask for the entries for device x on date y, then you are going out to the table and looking only at the rows you need. The fact that the table is much larger than the small subset you query is incidental. It's as if the rest of the table doesn't even exist. The total size of the table need not be intimidating.
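For instance, a composite index along those lines might look like this (the table and column names are hypothetical):

-- With the device id as the leading column, this index serves both
-- "all entries for device x" and "device x within a date range".
CREATE INDEX IX_readings_device_ts
    ON readings (device_id, entry_timestamp);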
Another option: As it looks like the data is written to the table and never altered after that, then you may want to create a data warehouse schema for the data. New entries can be moved to the warehouse every day or several times a day. The point is, the warehouse schema can have the data sliced, diced, quartered and cubed to make queries much more efficient. So you can have the existing table tuned for more efficient inserts and the warehouse tuned for more efficient queries. That is, after all, what data warehouses are for.
You also imply that some of each entry is (or can be) duplicated from one entry to the next. See if you can segment the data into three types:
Type 1: Data that never changes (the device id, for example)
Type 2: Data that rarely changes
Type 3: Data that changes often
Now all you have is a normalization problem, something a lot easier to solve. Let's say the row is equally split between the types. So you have one table with 100 rows of 33 columns. That's it. It never changes. Linked to that is a table with at least 100 rows of 33 columns, to which maybe a few new rows are added each day. Finally, linked to the second table is a table with rows of 33 columns that possibly grows by the full 10K every day.
This minimizes the grow-space required by the online database. The warehouse could then denormalize back to one huge table for ease of querying.
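A minimal sketch of that three-way split in SQL Server syntax; the table names, column names, and types are illustrative only:

-- Type 1: data that never changes (one row per device).
CREATE TABLE device (
    device_id int PRIMARY KEY,
    model     varchar(50)
);

-- Type 2: data that rarely changes, linked to the device.
CREATE TABLE device_config (
    config_id int PRIMARY KEY,
    device_id int NOT NULL REFERENCES device (device_id),
    firmware  varchar(50)
);

-- Type 3: data that changes often; this is the table that grows by ~10K rows a day.
CREATE TABLE device_reading (
    reading_id bigint PRIMARY KEY,
    config_id  int       NOT NULL REFERENCES device_config (config_id),
    read_at    datetime2 NOT NULL,
    value_a    int,
    value_b    int
);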

Will one query run faster than multiple queries, if they are deleting the same amount of records

I have a table like this:
Table company
companyid | companyname  | owner
5         | coffecompany | Mike
6         | juicecompany | Mike
For some reason, I need to use this:
DELETE FROM company WHERE companyid='5';
DELETE FROM company WHERE companyid='6';
instead of
DELETE FROM company WHERE owner='Mike';
But I wonder if the second choice runs faster; if it does, will it run much faster? In the future, I may have to use it to delete a large number of records, so I really need to know.
delete from company where companyId in (5, 6); should always be faster, even though the difference might be negligible if, e.g., you've got proper indexes, no concurrent queries, no issues with locking, etc.
Note that my query is for MS SQL. If your database server allows the same construct (i.e. specifying all the values in such a concise way), you should probably use it; if not, go with something like delete from company where companyId = 5 or companyId = 6;. Also, don't use string literals if companyid is a number (is the table column actually a number, or text?).
In any case, it gives the server more lee-way in implementing the actual operation, and DB servers tend to be very good at query optimization.
One possible bottleneck for deletion could be the transaction log, however. It might very well be that if you're deleting a huge number of rows at once, it would be better to do a few separate deletes in separate transactions to fit within transaction size limits.
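A minimal sketch of that batching idea in T-SQL; the batch size is arbitrary and the WHERE clause is just the example from the question:

-- Delete in chunks so each batch commits on its own and the
-- transaction log stays small.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM company WHERE owner = 'Mike';
    IF @@ROWCOUNT = 0 BREAK;
END;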
Generally, SQL is a language operating on sets of data, so the second query will be much faster for a huge number of rows.
The first choice might be slower, as you'll have to send the query text as many times as you have rows to delete. Imagine the network traffic if you want to delete 1,000,000 rows.
On small numbers of rows you probably won't be able to see any difference.
If you are using Oracle, think of using a bind variable:
execute immediate 'DELETE FROM company WHERE companyid=:ID' USING 6;
But other than that, there is no specific answer to your question; you need to benchmark it yourself. It depends on the amount of data, your indexes, etc.
When using a WHERE clause in a query, the RDBMS has to find the result set by applying the condition.
Normally the RDBMS will do a full table scan to find the result set, which means every record is inspected to see whether the condition matches. Depending on the table size, that can be time-consuming.
The above approach changes when the column(s) listed in the WHERE condition are indexed.
Indexing is a way of sorting a number of records on one or more fields. Creating an index on a field in a table creates another data structure which holds the field value and a pointer to the record it relates to. This index structure is then sorted, allowing binary searches to be performed on it.
As a simplified example:
A linear search (full table scan) on the field A of table T containing N records would require an average of N/2 accesses to find a value.
If the field A is indexed, then a binary search over the sorted index requires an average of log2 N block accesses. Assuming that N = 1,000,000, we get:
N/2 = 500,000
log2 1,000,000 ≈ 19.93 ≈ 20
Instantly we can see this is a drastic improvement.
It looks like companyid is the primary key of the company table; if so, the primary key column will be indexed automatically by the RDBMS, and searching by it will be more effective than searching by owner.
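If you do end up deleting by owner regularly, an index on that column gives the owner-based delete the same kind of benefit; a minimal sketch using the table from the question:

-- Lets the WHERE owner = ... predicate use an index lookup instead of a
-- full table scan.
CREATE INDEX idx_company_owner ON company (owner);

DELETE FROM company WHERE owner = 'Mike';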