I have a question about columnar databases such as Sybase. I understand columnar databases are very fast when your operation is restricted to single columns and doesn't go across columns, i.e. no row-based filtering.
But most queries are a combination of both: filter some rows, then aggregate some columns.
So really, where do columnar databases shine?
Columnar databases can definitely access data across different columns. By storing columns separately, they offer a few advantages not available in row-based storage:
They only need to read the columns accessed in a particular query.
They make it easy to add new columns to a table.
They allow an individual column to be compressed, using a compression algorithm optimal for that column.
They sometimes provide built-in indexing for each column.
All of these can be used to speed SELECT queries.
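For example (a hypothetical sales table, purely for illustration), a query like the one below only touches two of the table's columns; a columnar engine can skip every other column entirely, whereas a row store has to read whole rows:

    -- Only the region and amount columns need to be read from disk:
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region;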
One big issue with columnar databases is inserting a new row (or deleting a row), because this requires touching all the columns. That makes ensuring ACID properties . . . trickier.
There are definitely some SELECT queries that may not perform as well in a columnar database as in other databases. But they do surprisingly well at increasing the performance of many queries.
Suppose I have a User table, and other tables (e.g. UserSettings, UserStatistics) which have a one-to-one relationship with a user.
Since SQL databases don't store complex structs in table fields (some allow JSON fields with an undefined format), is it OK to just add those tables, allowing me to store individual (complex) data for each user? Will it hurt performance by requiring more joins in my queries?
And in the case of distributed databases, will those (connected) tables be stored randomly on different nodes, causing extra requests between nodes and decreasing efficiency?
1:1 joins can definitely add overhead, especially in a distributed database. Using a JSON or other schema-less column is one way to avoid that, but there are others.
The simplest approach is a "wide table": instead of creating a new table UserSettings with columns a, b, c, add columns setting_a, setting_b, setting_c to your User table. You can still treat them as separate objects when using an ORM; it'll just need a little extra code.
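A minimal sketch of the wide-table version (the column names are invented):

    -- Instead of User + UserSettings joined 1:1, one wider users table:
    CREATE TABLE users (
        id        UUID PRIMARY KEY,
        name      TEXT,
        setting_a TEXT,   -- columns that would otherwise live in UserSettings
        setting_b TEXT,
        setting_c INT
    );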
Some databases (like CockroachDB which you've tagged in your question) let you subdivide a wide table into "column families". This tends to let you get the best of both worlds: the database knows to store rows for the same user on the same node, but also to let them be updated independently.
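Roughly, with the same hypothetical table in CockroachDB syntax, the column families are declared in the CREATE TABLE:

    -- The settings live in the same rows/ranges as the user,
    -- but sit in their own family so they can be written independently:
    CREATE TABLE users (
        id        UUID PRIMARY KEY,
        name      STRING,
        setting_a STRING,
        setting_b STRING,
        setting_c INT,
        FAMILY profile  (id, name),
        FAMILY settings (setting_a, setting_b, setting_c)
    );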
The main downside of using JSON columns is that they're harder to query efficiently: if you want all users with a certain setting, or want to know just one setting for a user, you're going to take at least a minor performance hit if the database has to parse a JSON column to figure that out, or if you have to fetch the entire blob and do it in your app. If they're more convenient for other reasons, though, you can work around this by adding inverted indexes on your JSON columns, or expression indexes on the specific values you're interested in. Indexes can have a similar cost to 1:1 joins, but you can mitigate that in CockroachDB by using the STORING keyword to tell the DB to write a copy of all the user columns to the index.
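For concreteness, here is roughly what those options look like in CockroachDB (recent versions also support expression indexes; the settings JSONB column, email and name columns here are all made up):

    CREATE INVERTED INDEX settings_idx ON users (settings);       -- index the whole JSONB column
    CREATE INDEX theme_idx ON users ((settings->>'theme'));       -- expression index on one value
    CREATE INDEX email_idx ON users (email) STORING (name);       -- STORING makes the index cover other user columns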
I was trying to build an in-memory indexing structure for all fields in my data so that I can have efficient access. Then I realized that this may already be an important requirement for NoSQL in-memory databases such as MongoDB, Cassandra, or Memcached. Therefore I want to know whether mature solutions to my problem already exist.
According to my understanding, most NoSQL databases arrange data in the format of key-value pairs, or key-key-value pairs. This is efficient for accessing specific data points.
However, if I want to perform a lot of range queries, do NoSQL databases have indexing techniques that can efficiently handle those queries?
In addition, let's say I have 100 fields in my sales data: how do I efficiently handle queries that require multiple filters, e.g. one filter on id, another on a range of dates, and another on a price range? Do NoSQL databases already provide such functionality?
Thanks for all your inputs.
I want to move multiple SQLite files to PostgreSQL.
Data contained in these files are monthly time-series (one month in a single *.sqlite file). Each has about 300,000 rows. There are more than 20 of these files.
My dilemma is how to organize the data in the new database:
a) Keep it in multiple tables
or
b) Merge it into one huge table with a new column describing the time period (e.g. 04.2016, 05.2016, ...)
The database will be used only to pull data out of it (with the exception of adding data for each new month).
My concern is that selecting data from multiple tables (join) would not perform very well and the queries can get quite complicated.
Which structure should I go for - one huge table or multiple smaller tables?
I think I would definitely go for one table - just make sure you use sensible indexes.
If you have the space and the resources, go for one table; as other users have appropriately pointed out, databases can handle millions of rows with no problem. Well, it depends on the data that is in them: the row size can make a big difference, such as when storing VARCHAR(MAX) or VARBINARY(MAX) columns, and several of them per row.
There is no doubt that writing queries and ETL (extract, transform, load) is significantly easier on a single table, and maintenance is easier too from an archival perspective.
But if you never access the older data and you need the performance in the primary table, some sort of archive might make sense.
There are some BI related reasons to maintain multiple tables but it doesn't sound like that is your issue here.
There is no perfect answer; it will depend on your situation.
PostgreSQL is easily able to handle millions of rows in a table.
Go for option b), but...
with a new column describing the time period (e.g. 04.2016, 05.2016, ...)
Please don't. Querying the different periods will become a pain, an unnecessary one. Just put the date in one column, put an index on that column, and you can probably execute fast queries on it.
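A minimal sketch of what that could look like in PostgreSQL (table and column names are mine, not from your schema):

    CREATE TABLE readings (
        id     bigserial PRIMARY KEY,
        period date NOT NULL,        -- a real date, not a text value like '04.2016'
        value  numeric
    );
    CREATE INDEX readings_period_idx ON readings (period);

    -- Pulling out one month stays simple and can use the index:
    SELECT * FROM readings
    WHERE period >= DATE '2016-04-01' AND period < DATE '2016-05-01';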
My concern is that selecting data from multiple tables (join) would not perform very well and the queries can get quite complicated.
Complicated for you to write, or for the database to execute? An example would be nice, to give us a picture of your actual requirements.
I want to know optimization techniques for a database that has nearly 80,000 records,
a list of possibilities for optimizing it.
I am using it for my mobile project on the Android platform.
I use SQLite; it takes a lot of time to retrieve the data.
Thanks
Well, with only 80,000 records and assuming your database is well designed and normalized, just adding indexes on the columns that you frequently use in your WHERE or ORDER BY clauses should be sufficient.
There are other more sophisticated techniques you can use (such as denormalizing certain tables, partitioning, etc.) but those normally only start to come into play when you have millions of records to deal with.
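For example (purely illustrative table and column names), if a slow query looks like the first statement below, an index that covers both the WHERE and ORDER BY columns usually helps:

    SELECT name, price FROM products WHERE category_id = 7 ORDER BY price;

    CREATE INDEX idx_products_category_price ON products (category_id, price);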
ETA:
I see you updated the question to mention that this is on a mobile platform - that could change things a bit.
Assuming you can't pare down the data set at all, one thing you might be able to do would be to try to partition the database a bit. The idea here is to take your one large table and split it into several smaller identical tables that each hold a subset of the data.
Which of those tables a given row would go into would depend on how you choose to partition it. For example, if you had a "customer_id" field that could range from 0 to 10,000 you might put customers 0-2,500 in table1, 2,500-5,000 in table2, etc., splitting the one large table into 4 smaller ones. You would then have logic in your app that would figure out which table (or tables) to query to retrieve a given record.
You would want to partition your data in such a way that you generally only need to query one of the partitions at a time. Exactly how you would partition the data would depend on what fields you have and how you are using them, but the general idea is the same.
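A rough sketch of that manual partitioning idea in SQLite, using invented column names and the ranges from the example (CHECK constraints keep each table honest about which rows it holds):

    CREATE TABLE table1 (
        customer_id INTEGER PRIMARY KEY CHECK (customer_id BETWEEN 0 AND 2499),
        name        TEXT,
        balance     REAL
    );
    CREATE TABLE table2 (
        customer_id INTEGER PRIMARY KEY CHECK (customer_id BETWEEN 2500 AND 4999),
        name        TEXT,
        balance     REAL
    );
    -- ... table3 and table4 for the remaining ranges.
    -- The app then routes each lookup; e.g. customer 3100 lives in table2:
    SELECT name FROM table2 WHERE customer_id = 3100;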
Create indexes
Delete indexes
Normalize
DeNormalize
80k rows isn't many rows these days. Clever index(es), with queries that utilise those indexes, will serve you well.
Learn how to display query execution maps, then learn to understand what they mean, then optimize your indices, tables, queries accordingly.
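In SQLite (which the question uses), the execution map is shown with EXPLAIN QUERY PLAN; the table and column below are assumptions:

    EXPLAIN QUERY PLAN
    SELECT * FROM orders WHERE customer_id = 42;
    -- Output along the lines of "SCAN orders" means a full table scan
    -- (a candidate for an index); "SEARCH orders USING INDEX ..."
    -- means an index is being used.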
Such a wide topic, which does depend on what you want to optimise for. But the basics:
indexes. A good indexing strategy is important: index the right columns, the ones that are frequently queried on or ordered by. However, the more indexes you add, the slower your INSERTs and UPDATEs will be, so there is a trade-off.
maintenance. Keep indexes defragged and statistics up to date
optimised queries. Identify queries that are slow (using the profiler/built-in information available from SQL 2005 onwards) and see if they could be written more efficiently (e.g. avoid CURSORs, use set-based operations where possible).
parameterisation/SPs. Use parameterised SQL to query the db instead of ad-hoc SQL with hardcoded search values. This will allow better execution plan caching and reuse (see the sketch below).
start with a normalised database schema, and then de-normalise if appropriate to improve performance
80,000 records is not much, so I'll stop there (for large dbs with millions of data rows, I'd have suggested partitioning the data).
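To illustrate the parameterisation point, a hedged T-SQL sketch (the table, columns and values are invented):

    -- Ad-hoc SQL with a hard-coded value; each distinct value may compile its own plan:
    SELECT OrderId, Total FROM Orders WHERE CustomerId = 42;

    -- Parameterised version; one cached plan is reused for every customer id:
    EXEC sp_executesql
        N'SELECT OrderId, Total FROM Orders WHERE CustomerId = @cid',
        N'@cid INT',
        @cid = 42;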
You really have to be more specific with respect to what you want to do. What is your mix of operations? What is your table structure? The generic advice is to use indices as appropriate but you aren't going to get much help with such a generic question.
Also, 80,000 records is nothing. It is a moderate-sized table and should not make any decent database break a sweat.
First of all, indexes are really a necessity if you want a well-performing database.
Besides that, though, the techniques depend on what you need to optimize for: Size, speed, memory, etc?
One thing that is worth knowing is that using a function on an indexed field in the WHERE clause will cause the index not to be used.
Example (Oracle):
SELECT indexed_text FROM your_table WHERE upper(indexed_text) = 'UPPERCASE TEXT';
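Two common ways around that, sketched in Oracle syntax against the same hypothetical table:

    -- 1. A function-based index matching the expression used in the WHERE clause:
    CREATE INDEX your_table_upper_idx ON your_table (UPPER(indexed_text));

    -- 2. Or avoid wrapping the column in a function, e.g. by storing the text
    --    in a canonical case so the plain index can be used:
    SELECT indexed_text FROM your_table WHERE indexed_text = 'UPPERCASE TEXT';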
I know it's probably not the right way to structure a database, but does the database perform faster if the data is put in one huge table instead of broken up logically into other tables?
I want to design and create the database properly, using keys to create relational integrity across tables, but when querying, is JOINing slower than reading the required data from one table? I want to make the database queries as fast as possible.
So many other facets affect the answer to your question. What is the size of the table? Its width? How many rows? What is the usage pattern? Are there different usage patterns for different subsets of the columns in the table (i.e., are two columns hit 1,000 times per second, and the other 50 columns only hit once or twice a day)? That scenario would be a prime candidate for splitting (partitioning) the table vertically (the two hot columns in one table, the rest in another).
In general, normalize the schema to the maximum degree possible, then run performance testing with typical or predicted loads and usage patterns, and denormalize and partition to the point where the performance becomes acceptable, and no more...
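A sketch of the vertical split described above, with made-up names (the frequently-hit columns in one table, the rest in another, sharing the same key):

    CREATE TABLE product_hot (
        product_id INT PRIMARY KEY,
        price      DECIMAL(10,2),
        stock_qty  INT
    );
    CREATE TABLE product_cold (
        product_id  INT PRIMARY KEY REFERENCES product_hot (product_id),
        description TEXT,
        spec_sheet  TEXT
        -- ... the remaining, rarely-read columns
    );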
It depends on the dbms flavor and your actual data, of course. But generally more smaller (narrower) tables are faster than fewer larger (wider) tables.
Access is a little slower when joins must be performed. How much slower depends greatly on the features offered by your particular DBMS, and how the physical database design exploits those features, and on the most frequent access patterns. There are a few access patterns where storing a lot of data in one row wastes time, because the entire row is retrieved, but only a little of the row is used. It depends.
When data is stored in a single table and the normalization rules are deviated from, updates are typically slower. How important speed of update is versus speed of query depends on the particular way you use this database.
In general, a lot of newbie database designers tend to put more weight on speed issues than those issues deserve. If your data model is inflexible and incomprehensible, but you gain a 10% speed improvement, you have probably done more harm than good.
Are you building a "read-only" database like a data warehouse? If so, storing data "pre-joined" may make sense. For everyday OLTP databases you need to take into account the performance and ease of inserts, updates and deletes as well. Also, what about queries that only want the data that would have been in one or two of the smaller tables? Now they have to grind through a big fat table full of stuff they don't care about.
It's worth remembering that joining tables is bread-and-butter stuff to a decent DBMS - they are very good at it.
It is often true that querying a single table is faster than querying multiple joined tables. But a normalized design allows you to query the data in multiple ways, with adequate performance across many types of queries.
If you denormalize the tables, you may improve performance of one specific query, while sacrificing performance of other queries against that data. And of course you'll have to manage referential integrity and redundancy manually.
What you're asking about is denormalization - it can speed up reads if done in the right way, and if you are able to ensure that you're not introducing anomalies into your database because of it.
Remember also that there is a hard limit to the amount of data that can be stored in one record (not knowing which database you have, I can't say what it is). Too many columns and you will hit that limit. Also, if you have columns like phone1, phone2, phone3, then you need to normalize. If you would need to add a column whenever the number of items stored about a record changes (if you started needing 4 instead of 3 phone numbers, for instance), you need to normalize instead.
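For the phone example, the normalised alternative is a child table rather than extra columns (all names here are illustrative):

    CREATE TABLE customer_phone (
        customer_id  INT NOT NULL REFERENCES customer (id),
        phone_number VARCHAR(32) NOT NULL,
        phone_type   VARCHAR(16),          -- e.g. 'home', 'mobile', 'work'
        PRIMARY KEY (customer_id, phone_number)
    );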
What's true for optimising SELECTS is often not so great at optimising INSERTS, UPDATES and DELETES, and thus it is with this approach. Breaking out the data into properly normalised tables reduces the overhead of changing the data.
While it's true that in a data warehouse or decision support system we'd often store pre-joined data (as Tony says), it usually only happens in the context of a precomputed summary (e.g. a materialized view) and not for data at the atomic level of granularity. The reason for this is that pushing repeated longer character strings (e.g. "Supplier Name") into a dimension table reduces the total required storage space and the number of physical reads required to retrieve the data. The joins are usually equijoins, and these are performed at almost no cost for large data sets.
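A hedged example of that kind of precomputed summary (materialized-view syntax as in Oracle or PostgreSQL; the star-schema table names are invented):

    CREATE MATERIALIZED VIEW sales_by_supplier AS
    SELECT s.supplier_name,
           SUM(f.sale_amount) AS total_sales
    FROM   sales_fact f
    JOIN   supplier_dim s ON s.supplier_id = f.supplier_id
    GROUP  BY s.supplier_name;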