Improve performance of PostgreSQL array queries - sql

I am storing large vectors (1.4 million values) of doubles in a PostgreSQL table. This table's create statement follows.
CREATE TABLE analysis.expression
(
celfile_name character varying NOT NULL,
core double precision[],
extended double precision[],
"full" double precision[],
probeset double precision[],
CONSTRAINT expression_pkey PRIMARY KEY (celfile_name)
)
WITH (
OIDS=FALSE
);
ALTER TABLE analysis.expression ALTER COLUMN core SET STORAGE EXTERNAL;
ALTER TABLE analysis.expression ALTER COLUMN extended SET STORAGE EXTERNAL;
ALTER TABLE analysis.expression ALTER COLUMN "full" SET STORAGE EXTERNAL;
ALTER TABLE analysis.expression ALTER COLUMN probeset SET STORAGE EXTERNAL;
Each entry in this table is written only once and possibly read many times at random indices. PostgreSQL doesn't seem to scale terribly well for these lookups as the vector length grows, even with STORAGE set to EXTERNAL (access appears to be O(n)). This makes queries like the following, where we select many individual values from the array, very, very slow (minutes to hours).
SELECT probeset[2], probeset[15], probeset[102], probeset[1007], probeset[10033], probeset[200101], probeset[1004000] FROM expression LIMIT 1000;
If enough individual indices are being pulled, it can even be slower than pulling the whole array.
Is there any way to make such queries faster?
Edits
I am using PostgreSQL 9.3.
All the queries I am running are simple SELECTs, possibly with a join, for example:
SELECT probeset[2], probeset[15], probeset[102], probeset[1007], probeset[10033], probeset[200101], probeset[1004000] FROM expression JOIN samples s USING (celfile_name) WHERE s.study = 'x';
In one scenario the results of these queries are fed through prediction models, and the prediction probabilities get stored back into the DB in another table. In other cases selected items are pulled from the arrays for downstream analysis.
Currently 1.4 million items is the length of the longest single array; the others are shorter, with the smallest being 22 thousand and the average being ~100 thousand items long.
Ideally I would store the array data as a wide table, but with 1.4 million entries this isn't feasible, and long tables (i.e. rows of celfile_name, index, value) are much slower than PostgreSQL arrays when we want to pull a full array out of the DB. We do this to load our downstream data stores for when we run analysis on the full dataset.
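For reference, a sketch of that long-table layout under the same key (column names here are illustrative, not from the question):
CREATE TABLE analysis.expression_long
(
celfile_name character varying NOT NULL,
idx integer NOT NULL,
value double precision,
CONSTRAINT expression_long_pkey PRIMARY KEY (celfile_name, idx)
);
-- Random-index reads become primary-key index scans:
-- SELECT value FROM analysis.expression_long
-- WHERE celfile_name = 'some_celfile' AND idx IN (2, 15, 102, 1007);
This trades fast whole-array reads for fast random-index reads, which is exactly the tension described above.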

You store your data in a structured data management container (i.e. PostgreSQL), but due to the nature of your data (large but irregularly sized collections of like data) you effectively store your data outside of that container. PostgreSQL is not good at retrieving data from irregular (and unpredictable?) large arrays, as you have noticed; the fact that the arrays are stored externally is already testament to the fact that your requirements are not aligned with where PostgreSQL excels. It is very likely that there are much better solutions for storing and reading your arrays than PostgreSQL. The fact that the results of analyzing the arrays through prediction models are stored in PostgreSQL tables hints at a hybrid solution: store your data in some form that allows efficient access in the patterns that you need, then store the results in PostgreSQL for further processing.
Since you do not provide any details on the prediction models, it is impossible to be specific in this answer, but I hope this will help you on your way.
If your prediction models are written in some language for which a PostgreSQL driver is available, then store your data in some format that is suited to that language, do your predictions, and write the results to a table in PostgreSQL. This would work for languages like C and C++ with the libpq library, and for Java, C#, Python, etc. using their respective high-level drivers (JDBC and the like).
If your prediction model is written in MATLAB, then store your arrays in a MATLAB format and connect to PostgreSQL for the results. If it is written in R, you can use the R extension for PostgreSQL.
The key here is that you should store the arrays in a form that allows for efficient use in your prediction models. Match your data storage to the prediction models, not the other way around.
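As a minimal, hypothetical sketch of that hybrid approach (the table and column names below are illustrative, not taken from the question): keep the heavy arrays in whatever store your modeling code reads fastest, and write only the model output back, keyed by the same celfile_name.
CREATE TABLE analysis.prediction_result
(
celfile_name character varying NOT NULL REFERENCES analysis.expression (celfile_name),
model_name character varying NOT NULL,
probability double precision,
CONSTRAINT prediction_result_pkey PRIMARY KEY (celfile_name, model_name)
);
-- The modeling code reads arrays from its native store and only INSERTs
-- (celfile_name, model_name, probability) rows here for further SQL processing.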

Related

Efficient storage of a text array column in PostgreSQL

My question is concerned primarily with database storage/space optimization. I have a table in my database that has
the following columns:
id : PRIMARY KEY INTEGER
array_col : UNIQUE TEXT[]
This table is - by far - the largest in the database (in terms of storage space) and contains about
~200 million records. The array_col has a few characteristics which make me suspicious that I
am not storing it in a very space optimal manner. They are as follows:
The majority of strings have a decent length to them (on average 25 characters)
The length of the text array is variable (typically 100+ strings per array)
The individual strings will repeat themselves with a decent frequency across records. On average
a given string will appear in several thousand other records. (The array order tends to be similar
across records too)
id | array_col
1 | […,"20 torque clutch settings",…]
2 | […,"20 torque clutch settings",…]
3 | […,"20 torque clutch settings",…]
… | …
The above table shows values repeating across records.
I do not want to normalize this table because treating the text array as an atomic unit is the most
useful approach for my application, and it also makes querying much simpler. I also care about the ordering of
the strings in the array.
I can think of two approaches to this problem:
Create a lookup table to avoid repeating strings. The assumption here is that an INT[] is probably
more space efficient than a TEXT[].
Table 1
id | array_col
1 | […,47,…]
2 | […,47,…]
3 | […,47,…]
… | …
Table 2
id | name
… | …
47 | "20 torque clutch settings"
… | …
Problem: PostgreSQL, to my knowledge, does not support arrays of foreign keys. I'm also not sure what a trigger or stored procedure for this would look like (a rough sketch follows after the second option below). Database consistency would probably become more of a concern for me too.
ZSON? I have no experience using this extension, but it sounds like it does something
similar in terms of creating a lookup table of frequently used strings. To my understanding I would
need to convert the array column to some kind of JSON string.
{"array_col":[…,"20 torque clutch settings",…]}
GitHub - postgrespro/zson: ZSON is a PostgreSQL extension for transparent JSONB compression
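A rough, hypothetical sketch of what option 1 could look like (the names term_lookup and encode_terms are mine, not from the question, and the consistency caveat above still applies):
CREATE TABLE term_lookup
(
id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
name text UNIQUE NOT NULL
);
-- Translate a text[] into an int[] of lookup ids, registering unseen strings first.
CREATE FUNCTION encode_terms(p_terms text[]) RETURNS integer[]
LANGUAGE plpgsql AS
$$
BEGIN
-- Insert any strings not seen before.
INSERT INTO term_lookup (name)
SELECT DISTINCT t FROM unnest(p_terms) AS t
ON CONFLICT (name) DO NOTHING;
-- Map each string to its id, preserving the original array order.
RETURN (
SELECT array_agg(l.id ORDER BY u.ord)
FROM unnest(p_terms) WITH ORDINALITY AS u(t, ord)
JOIN term_lookup l ON l.name = u.t
);
END;
$$;
Referential integrity between the INT[] elements and term_lookup would still only be enforced by convention (or additional triggers), which is the consistency concern mentioned above.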
Any advice on how to approach this problem would be greatly appreciated. Do any of the above choices
seem reasonable, or is there a better long-term approach in terms of database design? I'm currently using
PostgreSQL 14 for this.
If you really want to optimize for storage space, tell PostgreSQL to start compressing (TOASTing) a row whenever it exceeds 128 bytes:
ALTER TABLE tab SET (toast_tuple_target = 128);
Of course optimizing for space may not be good for performance.

Oracle SQL Database In-Memory - compare compression sizes

I'm playing with In-Memory storage in Oracle SQL. I would like to compare the results of compression, that is, the amount of space used. For example, I'm running these queries:
ALTER TABLE RENTING INMEMORY MEMCOMPRESS FOR QUERY LOW(RETURN_DATE);
vs
ALTER TABLE RENTING INMEMORY MEMCOMPRESS FOR CAPACITY HIGH(RETURN_DATE);
Is there any easy way to check the size used by these compressions in SQL developer?
I found this article https://blogs.oracle.com/in-memory/database-in-memory-compression; it contains a table showing 'space used' for each type of compression. This is exactly what I am trying to do on my own. Thanks for any advice.
Querying v$im_segments after population will show you how many bytes from the table were loaded and how much of the in-memory store was utilised.
Since the column space is part of the In-Memory Compression Units (IMCU), there is no way to see how much space is consumed by individual columns. It is possible to display the individual column level compression setting in the view v$im_column_level though. The closest you could come would be to compare the populated size between the two compression levels. As Connor said, you can do this with v$im_segments or you can display individual IMCU information for an object with the view v$im_header.
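A sketch of such a comparison, assuming the standard V$IM_SEGMENTS columns (run it after population under each compression setting):
SELECT segment_name,
inmemory_compression,
bytes AS disk_size_bytes,
inmemory_size,
bytes_not_populated
FROM v$im_segments
WHERE segment_name = 'RENTING';
Comparing inmemory_size between the two MEMCOMPRESS settings gives the populated-size difference described above.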

JSONB performance degrades as the number of keys increases

I am testing the performance of the jsonb data type in PostgreSQL. Each document will have about 1500 keys that are NOT hierarchical; the document is flattened. Here is what the table and document look like.
create table ztable0
(
id serial primary key,
data jsonb
)
Here is a sample document:
{ "0": 301, "90": 23, "61": 4001, "11": 929} ...
As you can see, the document does not contain hierarchies and all values are integers. However, some will be text in the future.
Rows: 86,000
Columns: 2
Keys in document: 1500+
When searching for a particular value of a key or performing a GROUP BY, the performance is very noticeably slow. This query:
select (data ->> '1')::integer, count(*) from ztable0
group by (data ->> '1')::integer
limit 100
took about 2 seconds to complete. Is there any way to improve the performance of jsonb documents?
This is a known issue in 9.4beta2; please have a look at this blog post, it contains some details and pointers to the mailing list threads.
About the issue.
PostgreSQL uses TOAST to store data values. This means that big values (typically around 2 kB and more) are stored in a separate, special kind of table. PostgreSQL also tries to compress the data, using its pglz method (which has been there for ages). By "tries" I mean that before deciding to compress data, the first 1 kB is probed, and if the results are not satisfactory, i.e. compression gives no benefit on the probed data, the decision is made not to compress.
So, the initial JSONB format stored a table of offsets at the beginning of its value. For values with a high number of root keys in the JSON, this resulted in the first 1 kB (and more) being occupied by offsets. This was a series of distinct data, i.e. it was not possible to find two adjacent 4-byte sequences that were equal. Thus, no compression.
Note that if one skips over the offset table, the rest of the value is perfectly compressible.
So one of the options would be to tell the pglz code explicitly whether compression is applicable and where to probe for it (especially for newly introduced data types), but the existing infrastructure doesn't support this.
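As a side note (my addition, not part of the original answer): you can check whether stored values are actually being compressed by comparing the on-disk datum size with the raw text length, using the table and column from the question:
-- pg_column_size reports the stored (possibly compressed/TOASTed) size;
-- octet_length of the text form approximates the uncompressed size.
SELECT avg(pg_column_size(data)) AS avg_stored_bytes,
avg(octet_length(data::text)) AS avg_text_bytes
FROM ztable0;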
The fix
So decision was made to change the way data is stored inside the JSONB value, making it more suitable for pglz to compress. Here's a commit message by Tom Lane with the change that implements a new JSONB on-disk format. And despite the format changes, lookup of a random element is still O(1).
It took around a month to get fixed, though. As far as I can see, 9.4beta3 has already been tagged, so you'll be able to re-test this soon, after the official announcement.
Important note: you'll have to do the pg_dump/pg_restore exercise or use the pg_upgrade tool to switch to 9.4beta3, as the fix for the issue you've identified required changes in the way data is stored, so beta3 is not binary compatible with beta2.

SQL Server row length calculator

I am looking for an up-to-date tool to accurately calculate the total row size and page density of any SQL table definition for SQL Server 2005+.
Please note that there are plenty of resources concerning calculating sizes of rows in existing tables, estimating techniques for sizing, etc... However, I am designing tables and have some options about column size which I am trying to balance with efficient data access - meaning that I can relocate less-frequently accessed long text into dedicated tables to allow the most frequent access of these new tables to operate at optimum speed.
Ideally there would be an online facility where a create statement can be cut and pasted, or a sproc I can run on a dev db.
The answer is a simple one until you start making proper table design decisions and balancing them against joins, FK data, and disk access.
I'd have a look and see how many data pages you are using, and remember that SQL Server reads an extent (8 data pages) from disk, not only the data page you are looking for. Then there is the option of data compression in your table, as well as sparse columns, out-of-row data storage, and variable-length characters.
It's not about how much data is in a column; it's really about how many data reads and how much CPU you need to get it. This you can test by executing a query and looking at the ACTUAL QUERY PLAN.
As for space used, you can use a stored procedure called sp_spaceused. Here is a source you can use to see how one could use it in dbforms.
Hope it helps
Walter
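For reference, a minimal usage sketch of sp_spaceused (the table name is a placeholder):
-- Space used by the current database as a whole
EXEC sp_spaceused;
-- Rows, reserved, data, and index space for a single table
EXEC sp_spaceused N'dbo.MyTable';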

Representing Sparse Data in PostgreSQL

What's the best way to represent a sparse data matrix in PostgreSQL? The two obvious methods I see are:
Store data in a single table with a separate column for every conceivable feature (potentially millions), but with a default value of NULL for unused features. This is conceptually very simple, but I know that with most RDBMS implementations this is typically very inefficient, since the NULL values usually take up some space. However, I read an article (can't find its link, unfortunately) that claimed PG doesn't use up space for NULL values, making it better suited for storing sparse data.
Create separate "row" and "column" tables, as well as an intermediate table to link them and store the value for the column at that row. I believe this is the more traditional RDBMS solution, but there's more complexity and overhead associated with it.
I also found PostgreDynamic, which claims to better support sparse data, but I don't want to switch my entire database server to a PG fork just for this feature.
Are there any other solutions? Which one should I use?
I'm assuming you're thinking of sparse matrices in the mathematical sense:
http://en.wikipedia.org/wiki/Sparse_matrix (The storage techniques described there are for in-memory storage (fast arithmetic operations), not persistent storage (low disk usage).)
Since one usually operates on these matrices on the client side rather than on the server side, an SQL ARRAY[] is the best choice!
The question is how to take advantage of the sparsity of the matrix. Here are the results of some investigations.
Setup:
Postgres 8.4
Matrices w/ 400*400 elements in double precision (8 Bytes) --> 1.28MiB raw size per matrix
33% non-zero elements --> 427kiB effective size per matrix
averaged using ~1000 different random populated matrices
Competing methods:
Rely on the automatic server side compression of columns with SET STORAGE MAIN or EXTENDED.
Only store the non-zero elements plus a bitmap (bit varying(xx)) describing where to locate the non-zero elements in the matrix. (One double precision is 64 times bigger than one bit. In theory (ignoring overheads) this method should be an improvement if <=98% are non-zero ;-).) Server side compression is activated.
Replace the zeros in the matrix with NULL. (The RDBMSs are very effective in storing NULLs.) Server side compression is activated.
(Indexing of non-zero elements using a second index ARRAY[] is not very promising and was therefore not tested.)
Results:
Automatic compression
no extra implementation efforts
no reduced network traffic
minimal compression overhead
persistent storage = 39% of the raw size
Bitmap
acceptable implementation effort
network traffic slightly decreased; dependent on sparsity
persistent storage = 33.9% of the raw size
Replace zeros with NULLs
some implementation effort (API needs to know where and how to set the NULLs in the ARRAY[] while constructing the INSERT query)
no change in network traffic
persistent storage = 35% of the raw size
Conclusion:
Start with the EXTENDED/MAIN storage parameter. If you have some free time, investigate your data and use my test setup with your sparsity level. But the effect may be smaller than you expect.
I suggest always using matrix serialization (e.g. row-major order) plus two integer columns for the matrix dimensions N x M. Since most APIs use textual SQL, you save a lot of network traffic and client memory compared to nested "ARRAY[ARRAY[..], ARRAY[..], ARRAY[..], ARRAY[..], ..]"! (A sketch of such a table follows after the size query below.)
Tebas
CREATE TABLE _testschema.matrix_dense
(
matdata double precision[]
);
ALTER TABLE _testschema.matrix_dense ALTER COLUMN matdata SET STORAGE EXTERNAL;
CREATE TABLE _testschema.matrix_sparse_autocompressed
(
matdata double precision[]
);
CREATE TABLE _testschema.matrix_sparse_bitmap
(
matdata double precision[],
bitmap bit varying(8000000)
);
Insert the same matrices into all tables (the concrete data depends on the particular table).
Do not modify the data on the server side afterwards, since updates leave unused but still allocated pages; or do a VACUUM.
SELECT
pg_total_relation_size('_testschema.matrix_dense') AS dense,
pg_total_relation_size('_testschema.matrix_sparse_autocompressed') AS autocompressed,
pg_total_relation_size('_testschema.matrix_sparse_bitmap') AS bitmap;
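To make the serialization suggestion above concrete, a minimal sketch (my naming, assuming row-major order) could look like this:
CREATE TABLE _testschema.matrix_serialized
(
nrows integer NOT NULL,
ncols integer NOT NULL,
-- row-major: element (i, j), 1-based, is matdata[(i - 1) * ncols + j]
matdata double precision[]
);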
A few solutions spring to mind,
1) Separate your features into groups that are usually set together, create a table for each group with a one-to-one foreign key relationship to the main data, only join on tables you need when querying
2) Use the EAV anti-pattern, create a 'feature' table with a foreign key field from your primary table as well as a fieldname and a value column, and store the features as rows in that table instead of as attributes in your primary table
3) Similarly to how PostgreDynamic does it, create a table for each 'column' in your primary table (they use a separate namespace for those tables), and create functions to simplify (as well as efficiently index) accessing and updating the data in those tables
4) create a column in your primary data using XML, or VARCHAR, and store some structured text format within it representing your data, create indexes over the data with functional indexes, write functions to update the data (or use the XML functions if you are using that format)
5) use the contrib/hstore module to create a column of type hstore that can hold key-value pairs, and can be indexed and updated
6) live with lots of empty fields
A NULL value will take up no space when it's NULL. It'll take up one bit in a bitmap in the tuple header, but that will be there regardless.
However, the system can't deal with millions of columns, period. There is a theoretical max of a bit over a thousand, IIRC, but you really don't want to go that far.
If you really need that many in a single table, you need to go with the EAV method, which is basically what you're describing in (2).
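A minimal sketch of that EAV layout (the table and column names are mine, just to illustrate the shape):
CREATE TABLE entity
(
id serial PRIMARY KEY
);
CREATE TABLE entity_feature
(
entity_id integer NOT NULL REFERENCES entity (id),
feature text NOT NULL,
value double precision,
PRIMARY KEY (entity_id, feature)
);
-- Only features that are actually present get stored as rows.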
If each entry has only relatively few keys, I suggest you look at the "hstore" contrib module, which lets you store this type of data very efficiently, as a third option. It's been enhanced further in the upcoming 9.0 version, so if you are a bit away from production deployment, you might want to look directly at that one. However, it's well worth it in 8.4 as well, and it does support some pretty efficient index-based lookups. Definitely worth looking into.
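A small hstore sketch (my naming; on modern PostgreSQL the module is installed with CREATE EXTENSION, on 8.4 via the contrib SQL script):
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE sparse_features
(
id serial PRIMARY KEY,
features hstore
);
-- Only keys that are actually set get stored; a GIN index supports key lookups.
CREATE INDEX ON sparse_features USING gin (features);
INSERT INTO sparse_features (features) VALUES ('feature_a=>1.5, feature_b=>0.25');
SELECT id, features -> 'feature_a' AS feature_a
FROM sparse_features
WHERE features ? 'feature_a';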
I know this is an old thread, but MADlib provides a sparse vector type for Postgres, along with several machine learning and statistical methods.