Column Store Indexes in SQL Server

Are column store indexes in SQL Server useful only when the query uses aggregate functions?

Certainly not. Even though they were designed for DWH environments, they can also be used in OLTP environments.
And even when used in a DWH, aggregation is not a requirement.
Columnstore indexes use a different storage format for data, storing
compressed data on a per-column rather than a per-row basis. This
storage format benefits query processing in data warehousing,
reporting, and analytics environments where, although they typically
read a very large number of rows, queries work with just a subset of
the columns from a table.
So the first benefit is data compression.
Compression in columnstore is table-wide rather than page-wide (the compression dictionary is applied across the whole table, whereas PAGE data compression works page by page), so the compression ratio is the best available. A table with a clustered columnstore index uses less space than the same table without a columnstore index but with PAGE compression enabled.
The second benefit is for queries that filter out nothing (or almost nothing, i.e. they need nearly all the rows) but need only some of the columns to be returned.
When the table is stored "per-row", even if you want only 10 columns out of 100 and you want all the rows, the whole table will be read, because each whole row has to be read to extract your 10 requested columns from it. When you use "per-column" storage, only the needed columns are read.
Of course you could define a row-store index with your 10 needed columns as included columns, but that means additional space used and the overhead of maintaining that index. Now imagine that your queries need these 10 columns, and another 10, and another 20 of the 100, so you would need to create more indexes for those queries.
With one columnstore index you are able to satisfy all of these queries.
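For example, a minimal sketch (the table, column and index names below are hypothetical) of a single columnstore index over the frequently read columns:
-- One nonclustered columnstore index over the commonly read columns
-- can serve queries that touch different subsets of those columns.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (OrderDateKey, ProductKey, CustomerKey, Quantity, UnitPrice, Discount);
-- Or make the whole table columnar instead:
-- CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales ON dbo.FactSales;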

Columnstore indexes store data in columnar format, so they are quite helpful when you are using aggregate functions. One of the reasons is that homogeneous column data compresses very well, which makes aggregating over a column much faster.
But this is not the only use of columnstore indexes. They are really helpful when you are processing millions of rows (for example in multidimensional data models).
Check out the official documentation for a better understanding.

You can't say they are always useful for aggregate functions, because it depends on which rows are included in the aggregation. If you are aggregating over all rows, they are useful. If you are selecting only a small number of rows because of filtering, you can even get a worse result than with a traditional non-clustered index.
As it is written in MSDN, they can be used:
- to achieve up to 10x query performance gains in your data warehouse over traditional row-oriented storage
- to get 10x data compression over the uncompressed data size (if you are interested in compression you should also check the COLUMNSTORE_ARCHIVE option; see the sketch below)
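For instance, a hedged sketch (the index and table names are hypothetical) of enabling archival compression on an existing columnstore index:
-- Rebuild an existing clustered columnstore index with archival compression
ALTER INDEX CCI_FactSales ON dbo.FactSales
REBUILD PARTITION = ALL
WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);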
Also, depending on your SQL Server version (SQL Server 2017 or later), you can look at Adaptive Query Processing, as one of its conditions is having such an index.
You should look through the documentation to see what options you have for your SQL Server version, and test to be sure how this index is going to affect performance, because it is quite possible to make things worse.
It's good that in every article Microsoft mentions the scenarios in which each type of columnstore index can be put to good use.

Related

Indexes in SQL vs ORDER BY clause

What is the difference between using Indexes in SQL vs Using the ORDER BY clause?
From what I understand, indexes arrange the specified column(s) in an ordered manner, which helps the query engine look through the tables quickly (and hence prevents a table scan).
My question - why can't the query engine simply use the ORDER BY for improving performance?
Thanks !
You put the tag as sql-server-2008, but the question has nothing to do with SQL Server specifically. This question applies to all databases.
From wikipedia:
Indexing is a technique some storage engines use for improving
database performance. The many types of indexes share the common
property that they reduce the need to examine every entry when running
a query. In large databases, this can reduce query time/cost by orders
of magnitude. The simplest form of index is a sorted list of values
that can be searched using a binary search with an adjacent reference to the location of the entry, analogous to the index in the back of a book. The same data can have multiple indexes (an employee database could be indexed by last name and hire date).
From a related thread on StackExchange:
In the SQL world, order is not an inherent property of a set of data.
Thus, you get no guarantees from your RDBMS that your data will come
back in a certain order -- or even in a consistent order -- unless you
query your data with an ORDER BY clause.
To answer why indexes are necessary:
Note the bolded text about indexing regarding the reduction in the need to examine every entry. In the absence of an index, when an ORDER BY is issued in SQL, every entry needs to be examined, which increases query time and cost.
ORDER BY is applied only when reading. A single column may appear in several indexes, and different queries may request different kinds of ordering, so it is not possible to define sensible indexes unless we understand how the queries are written.
A lot of the time, indexes are added once new query patterns emerge, so as to keep those queries performant, which means index creation is driven by how you define your ORDER BY in SQL.
The query engine, which processes your SQL with or without ORDER BY, defines your execution plan and does not deal directly with how the data is stored. The data retrieved for a query may come partly from memory, if it was in cache, and partly or fully from disk. When reading from disk, the storage engine uses the indexes to figure out how to read the data quickly.
ORDER BY affects the performance of a query only when reading. An index affects the performance of all Create, Read, Update and Delete operations.
A query engine may choose to use an index or totally ignore the index based on the data characteristics.
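As a small illustration (hypothetical table and index names), an index whose key order matches the ORDER BY lets the engine return rows already sorted instead of sorting them at query time:
-- The index stores LastName in sorted order,
-- so this ORDER BY can be satisfied by reading the index in order.
CREATE INDEX IX_Employees_LastName ON Employees (LastName);
SELECT LastName
FROM Employees
ORDER BY LastName;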

Why does Redshift not need materialized views or indexes?

In the Redshift FAQ under
Q: How does the performance of Amazon Redshift compare to most traditional databases for data warehousing and analytics?
It says the following:
Advanced Compression: Columnar data stores can be compressed much more than row-based data stores because similar data is stored sequentially on disk. Amazon Redshift employs multiple compression techniques and can often achieve significant compression relative to traditional relational data stores. In addition, Amazon Redshift doesn't require indexes or materialized views and so uses less space than traditional relational database systems. When loading data into an empty table, Amazon Redshift automatically samples your data and selects the most appropriate compression scheme.
Why is this the case?
It's a bit disingenuous to be honest (in my opinion). Although RedShift has neither of these, I'm not sure that's the same as saying it wouldn't benefit from them.
Materialised Views
I have no real idea why they make this claim. Possibly because they consider the engine so performant that the gains from having them are minimal.
I would dispute this and the product I work on maintains its own materialised views and can show significant performance gains from doing so. Perhaps AWS believe I must be doing something wrong in the first place?
Indexes
RedShift does not have indexes.
It does have sort keys (SORTKEY), which are exceptionally similar to a clustered index. A sort key is simply a list of fields by which the data is ordered (like a composite clustered index).
It has even recently introduced INTERLEAVED sort keys. This is a direct attempt to have multiple independent sort orders. Instead of ordering by a THEN b THEN c, it effectively orders by each of them at the same time.
That becomes kind of possible because of how RedShift implements its column store.
- Each column is stored separately from each other column
- Each column is stored in 1MB blocks
- Each 1MB block has summary statistics
As well as being the storage pattern this effectively becomes a set of pseudo indexes.
- If the data is sorted by a then b then x
- But you want z = 1234
- RedShift looks at the block statistics (for column z) first
- Those stats will say the minimum and maximum values stored by that block
- This allows Redshift to skip many of those blocks in certain conditions
- This in turn allows RedShift to identify which blocks to read from the other columns
As of December 2019, Redshift has a preview of materialized views: Announcement
from the documentation: A materialized view contains a precomputed result set, based on a SQL query over one or more base tables. You can issue SELECT statements to query a materialized view, in the same way that you can query other tables or views in the database. Amazon Redshift returns the precomputed results from the materialized view, without having to access the base tables at all. From the user standpoint, the query results are returned much faster compared to when retrieving the same data from the base tables.
Indexes are basically used in OLTP systems to retrieve a specific row or a small group of rows. OLAP systems, in contrast, retrieve a large set of rows and perform aggregation over them. Indexes are not a good fit for OLAP systems; instead, Redshift uses a secondary structure called zone maps together with sort keys.
Traditional indexes are based on B-trees. The 'life without a btree' section in the blog below explains with examples how a B-tree-based index affects OLAP workloads.
https://blog.chartio.com/blog/understanding-interleaved-sort-keys-in-amazon-redshift-part-1
The combination of columnar storage, compression encodings, data distribution, query compilation, optimization, etc. is what makes Redshift fast.
Using these features well reduces I/O operations on Redshift and eventually provides better performance. Building an efficient solution requires a good deal of knowledge of those areas, as well as of the queries that you will run on Amazon Redshift.
For example:
Redshift supports Sort keys, Compound Sort keys and Interleaved Sort keys.
Say your table structure is lineitem(orderid, linenumber, supplier, quantity, price, discount, tax, returnflag, shipdate).
If you select orderid as your sort key but your queries filter on shipdate, Redshift will not be operating efficiently.
If you have a compound sort key on (orderid, shipdate) and you query only on shipdate, Redshift will not be operating efficiently.
If you have an interleaved sort key on (orderid, shipdate), you can query efficiently on either column.
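In DDL terms, a sketch using a trimmed-down, hypothetical version of the lineitem table above:
-- Compound sort key: efficient when filters lead with orderid
CREATE TABLE lineitem_compound (
    orderid  BIGINT,
    shipdate DATE,
    quantity INT,
    price    DECIMAL(12,2)
)
COMPOUND SORTKEY (orderid, shipdate);
-- Interleaved sort key: gives equal weight to both columns,
-- so filters on either orderid or shipdate can skip blocks
CREATE TABLE lineitem_interleaved (
    orderid  BIGINT,
    shipdate DATE,
    quantity INT,
    price    DECIMAL(12,2)
)
INTERLEAVED SORTKEY (orderid, shipdate);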
Redshift does not support materialized views, but it easily allows you to create (temporary/permanent) tables by running SELECT queries on existing tables. This duplicates the data, but in the shape required by your queries (similar to a materialized view). The blog below gives some information on that approach.
https://www.periscopedata.com/blog/faster-redshift-queries-with-materialized-views-lifetime-daily-arpu.html
Redshift also fared well against other systems like Hive, Impala, Spark, BigQuery, etc. in one of our recent benchmarks.
The simple answer is: because it can read the needed data really, really fast and in parallel.
One of the primary uses of indexes is "needle-in-the-haystack" queries, where only a relatively small number of rows matching a WHERE clause are needed. Columnar data stores handle these differently. The entire column is read into memory -- but only that column, not the rest of the row's data. This is somewhat similar to having an index on each column, except that the values need to be scanned for the match (which is where the parallelism comes in handy).
Other uses of indexes are for matching key pairs for joining or for aggregations. These can be handled by alternative hash-based algorithms.
As for materialized views, RedShift's strength is not updating data. Many such queries are fast enough without materialization, and materialization incurs a lot of overhead for maintaining the data in a high-transaction environment. If you don't have a high-transaction environment, you can instead rebuild or incrementally update temporary tables after batch loads.
They recently added support for Materialized Views in Redshift: https://aws.amazon.com/about-aws/whats-new/2019/11/amazon-redshift-introduces-support-for-materialized-views-preview/
Syntax for materialized view creation:
CREATE MATERIALIZED VIEW mv_name
[ BACKUP { YES | NO } ]
[ table_attributes ]
AS query
Syntax for refreshing a materialized view:
REFRESH MATERIALIZED VIEW mv_name
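A concrete (hypothetical) example of the above, precomputing a daily aggregate:
-- Precompute daily revenue from a hypothetical sales table
CREATE MATERIALIZED VIEW mv_daily_revenue AS
SELECT saledate, SUM(amount) AS revenue
FROM sales
GROUP BY saledate;
-- Re-run after base-table loads to bring the precomputed results up to date
REFRESH MATERIALIZED VIEW mv_daily_revenue;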

Some (hopefully) basic questions about big table management ( > 10 Billion rows) in SQL Server

I'm doing some experimentation with table design for a table that we expect to have LOTS of rows (upwards of 10 billion). Some things that immediately come to mind:
In what I call the 'Tall' table approach, each row will have one of about 25 'types', along with a value corresponding to this type. Should I turn this into a 'wide approach' with a single row containing a NULLable column for a value for each type? This isn't a great approach from a maintainability standpoint (what if I have to add more 'types'), but I am more concerned about performance, with size being a secondary consideration.
Rows will have a date-time stamp (probably a smalldatetime, since I only need precision to the minute). I've heard that I may be better off using an integer representation of the datetime in the table, as opposed to the datetime itself. I expect that this datetime will be used heavily in queries (maybe even to the extent that it is part of the clustered index).
My main concerns are query performance, then size, in that order. Lots of data will be dumped into the table, but it won't change or grow much (maybe daily or monthly updates, but not a lot of updates and not anything that I would consider transactional).
You may benefit from table partitioning. Both SQL Server and Oracle have good support for this functionality. Table partitioning lets you keep one logical table that you continue to query, while the DBMS actually breaks it into several physical files that it maintains according to rules you specify. For example, you can partition by date so that rows whose CreateDate falls within 1990, 2000, 2010, or 2020 are placed in their respective partition.
The DBMS also uses partitions to leverage parallelism and could improve performance on queries that span multiple partitions.
Outside of database partitioning, you won't see performance gains without sharding the table which is difficult to maintain and makes queries more complex.
Microsoft documentation on partitioning
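As a rough T-SQL sketch (object names and boundary values are made up), date-based partitioning looks like this:
-- Partition function: map rows into yearly ranges by their date stamp
CREATE PARTITION FUNCTION pfByYear (smalldatetime)
AS RANGE RIGHT FOR VALUES ('2019-01-01', '2020-01-01', '2021-01-01');
-- Partition scheme: keep all partitions on the PRIMARY filegroup for simplicity
CREATE PARTITION SCHEME psByYear
AS PARTITION pfByYear ALL TO ([PRIMARY]);
-- Partitioned table keyed by the minute-precision date stamp
CREATE TABLE dbo.Readings
(
    ReadingDate  smalldatetime  NOT NULL,
    ReadingType  tinyint        NOT NULL,
    ReadingValue decimal(18,4)  NULL
) ON psByYear (ReadingDate);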
Update:
Regarding the idea of using an integer for your datetime to improve performance: it can help, provided you place your index on that integer date column. The reason is that integers are cheaper to compare and sort, so a B-tree index on them is a bit faster. However, if you aren't going to query on that column (in a WHERE or GROUP BY clause), it is not recommended to add indexes just because you can. In fact, I wouldn't be surprised if your index storage ended up larger than the table itself.

Sql optimization Techniques

I want to know optimization techniques for a database that has nearly 80,000 records,
a list of possibilities for optimizing.
I am using it for my mobile project on the Android platform.
I use SQLite, and it takes a lot of time to retrieve the data.
Thanks
Well, with only 80,000 records and assuming your database is well designed and normalized, just adding indexes on the columns that you frequently use in your WHERE or ORDER BY clauses should be sufficient.
There are other more sophisticated techniques you can use (such as denormalizing certain tables, partitioning, etc.) but those normally only start to come into play when you have millions of records to deal with.
ETA:
I see you updated the question to mention that this is on a mobile platform - that could change things a bit.
Assuming you can't pare down the data set at all, one thing you might be able to do would be to try to partition the database a bit. The idea here is to take your one large table and split it into several smaller identical tables that each hold a subset of the data.
Which of those tables a given row would go into would depend on how you choose to partition it. For example, if you had a "customer_id" field that could range from 0 to 10,000 you might put customers 0 - 2500 in table1, 2,500 - 5,000 in table2, etc. splitting the one large table into 4 smaller ones. You would then have logic in your app that would figure out which table (or tables) to query to retrieve a given record.
You would want to partition your data in such a way that you generally only need to query one of the partitions at a time. Exactly how you would partition the data would depend on what fields you have and how you are using them, but the general idea is the same.
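A small SQLite sketch of both ideas (the table and column names are made up):
-- Index the columns used in WHERE / ORDER BY clauses
CREATE INDEX idx_orders_customer ON orders (customer_id);
-- Manual "partitioning": identical tables, each holding a range of customer_id
CREATE TABLE orders_p1 (customer_id INTEGER, order_date TEXT, amount REAL);
CREATE TABLE orders_p2 (customer_id INTEGER, order_date TEXT, amount REAL);
-- App logic decides: customer_id <= 2500 goes to orders_p1, otherwise orders_p2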
Create indexes
Delete indexes
Normalize
DeNormalize
80k rows isn't many rows these days. Clever index(es), with queries that utilise those indexes, will serve you well.
Learn how to display query execution plans, then learn to understand what they mean, then optimize your indices, tables and queries accordingly.
Such a wide topic, which does depend on what you want to optimise for. But the basics:
indexes. A good indexing strategy is important, indexing the right columns that are frequently queried on/ordered by is important. However, the more indexes you add, the slower your INSERTs and UPDATEs will be so there is a trade-off.
maintenance. Keep indexes defragged and statistics up to date
optimised queries. Identify queries that are slow (using Profiler / the built-in information available from SQL 2005 onwards) and see if they could be written more efficiently (e.g. avoid CURSORs, use set-based operations where possible)
parameterisation/SPs. Use parameterised SQL to query the db instead of ad-hoc SQL with hardcoded search values. This allows better execution plan caching and reuse (see the sketch after this list).
start with a normalised database schema, and then de-normalise if appropriate to improve performance
80,000 records is not much so I'll stop there (large dbs, with millions of data rows, I'd have suggested partitioning the data)
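For the parameterisation point, a hedged T-SQL sketch (the Customers table is hypothetical):
-- Parameterised query via sp_executesql: the plan can be cached and reused
DECLARE @city nvarchar(50);
SET @city = N'Lisbon';
EXEC sp_executesql
    N'SELECT CustomerID, CustomerName FROM dbo.Customers WHERE City = @City',
    N'@City nvarchar(50)',
    @City = @city;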
You really have to be more specific with respect to what you want to do. What is your mix of operations? What is your table structure? The generic advice is to use indices as appropriate but you aren't going to get much help with such a generic question.
Also, 80,000 records is nothing. It is a moderate-sized table and should not make any decent database break a sweat.
First of all, indexes are really a necessity if you want a well-performing database.
Besides that, though, the techniques depend on what you need to optimize for: Size, speed, memory, etc?
One thing worth knowing is that applying a function to the indexed field in the WHERE clause will cause the index not to be used.
Example (Oracle):
SELECT indexed_text FROM your_table WHERE upper(indexed_text) = 'UPPERCASE TEXT';
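Two hypothetical ways around that (sketches, not drop-in fixes):
-- Option 1 (Oracle): a function-based index so the UPPER() predicate can still use an index
CREATE INDEX idx_upper_text ON your_table (UPPER(indexed_text));
-- Option 2: compare the raw column so the plain index stays usable
-- (only equivalent if the stored values are already uppercase)
SELECT indexed_text FROM your_table WHERE indexed_text = 'UPPERCASE TEXT';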

Do indexes suck in SQL?

Say I have a table with a large number of rows and one of the columns which I want to index can have one of 20 values.
If I were to put an index on the column would it be large?
If so, why? If I were to partition the data into 20 tables, one for each value of the column, the index size would be trivial but the indexing effect would be the same.
It's not the indexes that will suck. It's putting indexes on the wrong columns that will suck.
Seriously though, why would you need a table with a single column? What would the meaning of that data be? What purpose would it serve?
And 20 tables? I suggest you read up on database design first, or otherwise explain to us the context of your question.
Indexes (or indices) don't suck. A lot of very smart people have spent a truly remarkable amount of time over the last several decades ensuring that this is so.
Your schema, however, lacking the same amount of expertise and effort, may suck very badly indeed.
Partitioning, in the case described, is equivalent to applying a clustered index. If the table is sorted otherwise (or is in arbitrary order) then the index necessarily has to occupy much more space. Depending on the platform, a non-clustered index may shrink as the sortedness of the rows with respect to the indexed value increases.
YMMV.
The short answer:
Do indexes suck: Yes and No
The longer answer:
They don't suck if used properly. Maybe you should start reading about how indexes work, why they can work and why they sometimes don't work.
Good starting points:
http://www.sqlservercentral.com/articles/Indexing/
No, indexes don't suck, but you have to pay attention to how you use them, or they can backfire on the performance of your queries.
First: Schema / design
Why would you create a table with only one column? That's probably taking normalization one step too far. Database design is one of the most important things to consider when optimizing performance.
Second: Indexes
In a nutshell, indexes help the database perform a binary search for your record. Without an index on a column (or set of columns), the database will often fall back to a table scan. A table scan is very expensive because it involves examining each and every record.
For index seeks it doesn't really matter THAT much how many records there are in the table: because of the (balanced) tree search, doubling the number of records only adds one extra search step.
Determine the primary key of your table; SQL Server will automatically place a clustered index on those column(s). Clustered indexes perform really well. In addition, you can place non-clustered indexes on columns that are used often in SELECT, JOIN, WHERE, GROUP BY and ORDER BY clauses. Do remember that non-clustered indexes already contain the clustered index key, so try not to include the clustered key columns in a non-clustered index again.
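A minimal sketch (hypothetical table) of the clustered/non-clustered combination described above:
-- The PRIMARY KEY gets the clustered index automatically
CREATE TABLE dbo.Orders
(
    OrderID    int IDENTITY PRIMARY KEY,
    CustomerID int      NOT NULL,
    OrderDate  datetime NOT NULL
);
-- Non-clustered index on a column used often in WHERE / JOIN clauses
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);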
Also interesting is the fill factor on the indexes. Do you want to optimize your table for reads (high fill factor: less storage, less I/O) or for writes (lower fill factor: more storage, but fewer page splits when modifying data)?
Third: Partitioning
One of the reasons to use partitioning is to optimize your data access. Let's say you have 1 million records of which 500,000 records are no longer relevant but stored for archiving purposes. In this case you could decide to partition the table and store the 500,000 old records on slow storage and the other 500,000 records on fast storage.
To measure is to know
The best way to get insight into what happens is to measure CPU and I/O. Microsoft SQL Server has tools like the Profiler and the execution plans in Management Studio that will tell you the duration of your query, the number of reads/writes and the CPU usage. The execution plan will also tell you which indexes are being used, if any. You might be surprised to see a table scan where you didn't expect one.
Say I have a table with a large number of rows and one column which I want to index can have one of 20 values. If I were to put an index on the column would it be large?
The index size will be proportional to the number of your rows and the length of the indexed values.
The index keeps not only the indexed value, but also some kind of pointer to the row (ROWID in Oracle, ctid in PostgreSQL, the primary key in InnoDB, etc.).
If you have 10,000 rows and only 1 distinct value, you will still have 10,000 entries in your index.
If so, why? If I were to partition the data into 20 tables, one for each value of the column, the index size would be trivial but the indexing effect would be the same.
In this case, you would end up with 20 indexes whose total size is the same as your original one.
This technique is in fact sometimes used, in so-called partitioned indexes. It has its advantages and drawbacks.
Standard b-tree indexes are best suited to fairly selective indexes, which this example would not be. You don't say what DBMS you are using; Oracle has another type of index called a bitmap index which is more suited to low-selectivity indexes in OLAP environments (since these indexes are expensive to maintain, making them unsuitable for OLTP environments).
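For illustration (hypothetical table and column), the Oracle bitmap index mentioned above would look like this:
-- Bitmap index suited to a low-cardinality column (e.g. ~20 distinct status values)
CREATE BITMAP INDEX idx_orders_status ON orders (status);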
The optimiser will decide, based on statistics, whether it thinks the index will help get the data in the fastest time; if it won't, the optimiser won't use it.
Partitioning is another strategy. In Oracle you can define a table as partitioned on some set of columns, and the optimiser can automatically perform "partition elimination" like you suggest.
Sorry, I'm not quite sure what you mean by "large".
If your index is clustered, all the data for each record will be on the same leaf page, thereby creating the most efficient index available to your table as long as you write your queries against it properly.
If your index is non-clustered, then only the index-related data will be on your leaf pages. Then, depending on such things as how many other indexes you have, coupled with details like your fill factor, your index may or may not be efficient. In general, if you don't have a ton of indexes on your table, you should be safe.
The efficiency of your index will also be determined by the data type of the 20 values you're speaking of going into the column. If those are pre-defined values, then their details should probably be in a lookup table with a simple primary key datatype (like Int/Number). Then add that column to your table as a foreign key with an index on the column.
Ultimately, you could have a perfect index on a column, but its best use will be determined for the most part by the queries you write. So if your queries make use of the indexes, you're golden.
Indexes are purely for performance. If an index doesn't boost performance for the queries you're interested in, then it sucks.
As for disk usage, you have to weigh your concerns. Different SQL providers build indexes differently, but as a client, you generally trust that they do the best that can be done. In the case you're describing, a clustered index may be optimal for both size and performance.
It would be large enough to hold those values for all the rows, in a sorted order.
Say you have 20 different strings of 4 characters each and 1 million rows: it would take at least 4 million bytes (or 8 million with 16-bit Unicode) just to hold those values.