Parse records by version number in PostgreSQL - sql

I have a table of records that have a 'Version' property. The version is stored as text. I know how to match on specific versions (e.g. WHERE version = '1.2.3'), but I want to be able to use comparison operators (<, >, =, etc.) on the version number (e.g. WHERE version > '1.2.3').
Does anyone know of a good way to achieve this?

For version sorting, casting to cidr might be a solution. It works for the trivial cases; non-numeric characters will probably spoil the party.
-- some data
DROP TABLE IF EXISTS meuk;
CREATE table meuk
( id SERIAL NOT NULL PRIMARY KEY
, version text
);
INSERT INTO meuk(version) VALUES
( '1.1.1'), ( '1.1.2'), ( '1.1.11'), ( '1.1.102')
, ( '0.1.1'), ( '2.1.2'), ( '22.1.11'), ( '0.1.102')
;
SELECT * FROM meuk
ORDER BY version::cidr -- cast to cidr (less restrictive than inet)
;
Result:
DROP TABLE
CREATE TABLE
INSERT 0 8
 id | version
----+----------
  5 | 0.1.1
  8 | 0.1.102
  1 | 1.1.1
  2 | 1.1.2
  3 | 1.1.11
  4 | 1.1.102
  6 | 2.1.2
  7 | 22.1.11
(8 rows)
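If you need the comparison operators from the question rather than just sorting, a variant worth sketching (my addition, with the same all-numeric assumption as the cidr trick): cast the dotted text to an int array, which PostgreSQL compares element-wise.
-- Sketch only: integer-array comparison; breaks on parts like '1.2.3-beta'.
SELECT *
FROM meuk
WHERE string_to_array(version, '.')::int[]
    > string_to_array('1.2.3', '.')::int[];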

Related

Postgres json column with versions, return the oldest one

I have the following table:
create table subscriptions (id int, data jsonb);
filled with the following data:
insert into subscriptions (id, data)
values
(1, '{"versions": {"10.2.3": 3, "9.2.3": 4, "12.2.3": 5, "1.2.3": 5}}'),
(2, '{"versions": {"0.2.3": 3, "2.2.3": 4, "3.2.3": 5}}');
And I am trying to write a query that will result in:
-------------------
| id | minVersion |
-------------------
| 1  | 1.2.3      |
| 2  | 0.2.3      |
-------------------
What I have tried:
select
(string_to_array(jsonb_object_keys((data->'versions')::jsonb), '.'))[1] as total_prices
from subscriptions;
(does not work but I think those methods will be useful here)
I would create a function that extracts the version keys and sorts them to pick the smallest.
create or replace function smallest_version(p_version jsonb)
returns text
as
$$
select version
from jsonb_each_text(p_version) as x(version, num)
order by string_to_array(x.version, '.')::int[]
limit 1
$$
language sql
immutable;
The above will only work if there are no versions like 1.2.3-beta or other non-numeric characters in your version number.
Then use it like this:
select smallest_version(data->'versions')
from subscriptions;
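To reproduce the expected output (id plus minVersion), something like:
-- Same function as above, adding the id column from the question.
select id, smallest_version(data->'versions') as "minVersion"
from subscriptions;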

Incremental integer ID in Impala

I am using Impala for querying parquet-tables and cannot find a solution to increment an integer-column ranging from 1..n. The column is supposed to be used as ID-reference. Currently I am aware of the uuid() function, which
Returns a universal unique identifier, a 128-bit value encoded as a string with groups of hexadecimal digits separated by dashes.
Anyhow, this is not suitable for me since I have to pass the ID to another system which requests an ID in style of 1..n. I also already know that Impala has no auto-increment-implementation.
The desired result should look like:
-- UUID() provided as example - I want to achieve the `my_id`-column.
| my_id | example_uuid  | some_content |
|-------|---------------|--------------|
| 1     | 50d53ca4-b... | "a"          |
| 2     | 6ba8dd54-1... | "b"          |
| 3     | 515362df-f... | "c"          |
| 4     | a52db5e9-e... | "d"          |
|-------|---------------|--------------|
How can I achieve the desired result (integer-ID ranging from 1..n)?
Note: This question differs from this one which specifically handles Kudu-tables. However, answers should be applicable for this question as well.
Since other Q&As like this one only came up with uuid()-like answers, I put some thought into it and finally came up with this solution:
SELECT
row_number() OVER (PARTITION BY "dummy" ORDER BY "dummy") as my_id
, some_content
FROM some_table
row_number() generates a continuous integer sequence over the given partition. Unlike rank(), row_number() always yields an incremented number within its partition (even if duplicates occur).
PARTITION BY "dummy" puts the entire table into one partition. This works because "dummy" is interpreted in the execution graph as a temporary column yielding only the string value "dummy"; any analogous constant works as well.
ORDER BY is required to generate the increment. Since we don't care about the order in this example, the same "dummy" workaround is used (otherwise, order by the relevant column).
The command creates the desired incremental ID without any nested SQL-statements or other tricks.
| my_id | some_content |
|-------|--------------|
| 1     | "a"          |
| 2     | "b"          |
| 3     | "c"          |
| 4     | "d"          |
|-------|--------------|
I used Markus's answer for a large partitioned table and found that I was getting duplicate ids. I think the ids were only unique within their partition; possibly PARTITION BY "dummy" leads Impala to think that each partition can execute row_number() on its own. I was able to get it working by specifying an actual column to order by and no partition by:
SELECT
row_number() OVER (ORDER BY actual_column) as my_id
, some_content
FROM some_table
It doesn't seem to matter whether the values in the column are unique (mine weren't), but using the actual partition key might result in the same issue as the "dummy" column.
Understandably, it took a lot longer to run than the dummy version.

What are the technical differences between "select * from table_name" and "select a.* from table_name a"?

This might be a basic question, but I can't find explanations after googling around.
Anyway, a short background story. I have this table that I don't have the permission to alter on DB2:
other_field | date_field | time_field
---------------------------------------
          1 |     180101 |     101010
          2 |     180102 |     202020
          3 |     180103 |     303030
          4 |     180104 |     404040
I tried to use:
select *, concat(date_field, time_field) as TIME
from Table_Name
My expected result is displaying something like this:
other_field | date_field | time_field | TIME
--------------------------------------------------------
          1 |     180101 |     101010 | 180101101010
          2 |     180102 |     102020 | 180102102020
          3 |     180103 |     103030 | 180103103030
          4 |     180104 |     104040 | 180104104040
But I can't use that query for some reason. It gave me an error, "...Token , was not valid. Valid tokens: FROM INTO", which basically says that the comma (,) after * is invalid.
Then I tried tweaking it a little into:
select a.*, concat(a.date_field, a.time_field) as TIME
from Table_Name a
And it works!
I understand that Table_Name a is often used when joining tables, but I'm curious about the underlying mechanism.
What are the technical differences between using Table_Name and Table_Name a? And what is this a called?
Technically, there is no difference between the output of
SELECT * FROM TAB_NAME and SELECT a.* FROM TAB_NAME a.
Here you are just specifying an alias name.
But you can see the difference when you try to fetch another column along with * from TAB_NAME.
That means if you want to get data as below
SELECT *, COL_1, COL2...
FROM TAB_NAME
or
SELECT *, CONCAT(...)
FROM TAB_NAME
or anything else with *, you have to specify an alias name.
But the question is: why? Let me try to explain.
As you know, SELECT * means you are selecting all columns. So * means "all", and once you put a bare * after the SELECT clause, you have told your system/database to select everything; nothing is left to select after that, so the parser will only accept a FROM clause next and throws an error for anything else, such as a comma.
BUT now the question is how the queries below work internally:
SELECT a.*, COL_1, COL2...
FROM TAB_NAME a
or
SELECT a.*, a.COL_1, a.COL2...
FROM TAB_NAME a
or
SELECT a.*, CONCAT(c1,c2)
FROM TAB_NAME a
or
SELECT a.*, CONCAT(a.c1,a.c2)
FROM TAB_NAME a
or anything else like that.
Here your system understands that you are selecting all columns from table a, which leaves open the possibility of selecting further columns or expressions, from table a or from any other table. That is why the system/database allows additional columns or functions after a.* when required.
Db2 (LUW) 11.1 supports this syntax
create table Table_Name (
other_field int not null primary key
, date_field date not null
, time_field time not null
)
;
insert into Table_Name values
(1,'2018-01-01', '10.10.10')
, (2,'2018-01-01', '20.20.20')
, (3,'2018-01-01', '13.13.13')
, (4,'2018-01-01', '14.14.14')
;
select *, timestamp(date_field, time_field) as TIME from Table_Name
;
which will return
OTHER_FIELD DATE_FIELD TIME_FIELD TIME
----------- ---------- ---------- ---------------------
          1 2018-01-01 10:10:10   2018-01-01 10:10:10.0
          2 2018-01-01 20:20:20   2018-01-01 20:20:20.0
          3 2018-01-01 13:13:13   2018-01-01 13:13:13.0
          4 2018-01-01 14:14:14   2018-01-01 14:14:14.0
BTW, I took the liberty of using sensible data types for your example. Use DATE, TIME and TIMESTAMP (or TIMESTAMP(0)) for working with date and time values...
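If the trailing .0 bothers you, a sketch of the TIMESTAMP(0) variant mentioned above (assuming Db2 11.1 as before):
-- Sketch: timestamp(0) drops the fractional seconds from the result.
select other_field
     , cast(timestamp(date_field, time_field) as timestamp(0)) as time
from Table_Name
;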

Store multidimensional array in database: relational or multidimensional?

I have read numerous posts along the lines of multidimensional to single dimension, multidimensional database, and so on, but none of the answers helped. I did find a lot of documentation on Google but that only provided background information and didn't answer the question at hand.
I have a lot of strings that are related to one another. They are needed in a PHP script. The structure is hierarchical. Here is an example.
A:
    AA:
        AAA
        AAC
    AB
    AE:
        AEA
        AEE:
            AEEB
B:
    BA:
        BAA
    BD:
        BDC:
            BDCB
            BDCE
        BDD:
            BDDA
    BE:
        BED:
            BEDA
C:
    CC:
        CCB:
            CCBC
            CCBE
        CCC:
            CCCA
            CCCE
    CE
Each indent indicates a new level in the multidimensional array.
The goal is to retrieve an element with PHP by name and all its descendants. If for instance I query for A, I want to receive an array of string containing array('A', 'AA', 'AAA', 'AAC', 'AB', 'AE', 'AEA', 'AEE', 'AEEB'). The 'issue' is that queries can also be made to lower-level elements. If I query AEE, I want to get array('AEE', 'AEEB').
As I understand the concept of relational databases, this means that I cannot use a relational database because there is no common 'key' between elements. The solution that I thought is possible, is assigning PARENT elements to each cell. So, in a table:
CELL | PARENT
-----+-------
A    | NULL
AA   | A
AAA  | AA
AAC  | AA
AB   | A
AE   | A
AEA  | AE
AEE  | AE
AEEB | AEE
By doing so, I think you should be able to query the given string, and all items that share this parent, and then recursively go down this path until no more items are found. However, this seems rather slow to me because the whole search space would need to be looked through on each level - which is exactly what you don't want in a multidimensional array.
So I am a bit at loss. Note that there are actually around 100,000 strings structured in this way, so speed is important. Luckily the database is static and would not change. How can I store such a data structure in a database without having to deal with long loops and search times? And which kind of database software and data type is best suited for this? It has come to my attention that PostgreSQL is already present on our servers so I'd rather stick with that.
As I said I am new to databases but I am very eager to learn. Therefore, I am looking for an extensive answer that goes into detail and provides advantages and disadvantages of a certain approach. Performance is key. An expected answer would contain the best database type and language for this use case, and also script in that language to build such a structure.
The goal is to retrieve an element with PHP by name and all its descendants.
If that is all you need, you can use a LIKE search
SELECT *
FROM Table1
WHERE CELL LIKE 'AEE%';
With an index beginning with CELL this is a range check, which is fast.
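A minimal sketch of such an index (the name is made up; on PostgreSQL with a non-C locale you would add text_pattern_ops so LIKE can use it):
-- Hypothetical index: turns CELL LIKE 'AEE%' into an index range scan.
CREATE INDEX table1_cell_idx ON Table1 (CELL);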
If your data doesn't look like that, you can create a path column which looks like a directory path and contains all nodes "on the way/path" from root to the element.
| id | CELL | parent_id | path |
|====|======|===========|==========|
| 1 | A | NULL | 1/ |
| 2 | AA | 1 | 1/2/ |
| 3 | AAA | 2 | 1/2/3/ |
| 4 | AAC | 2 | 1/2/4/ |
| 5 | AB | 1 | 1/5/ |
| 6 | AE | 1 | 1/6/ |
| 7 | AEA | 6 | 1/6/7/ |
| 8 | AEE | 6 | 1/6/8/ |
| 9 | AEEB | 8 | 1/6/8/9/ |
To retrieve all descendants of 'AE' (including itself) your query would be
SELECT *
FROM tree t
WHERE path LIKE '1/6/%';
or (MySQL specific concatenation)
SELECT t.*
FROM tree t
CROSS JOIN tree r -- root
WHERE r.CELL = 'AE'
AND t.path LIKE CONCAT(r.path, '%');
Result:
| id | CELL | parent_id | path |
|====|======|===========|==========|
| 6 | AE | 1 | 1/6/ |
| 7 | AEA | 6 | 1/6/7/ |
| 8 | AEE | 6 | 1/6/8/ |
| 9 | AEEB | 8 | 1/6/8/9/ |
Performance
I have created 100K rows of fake data on MariaDB with the sequence plugin using the following script:
drop table if exists tree;
CREATE TABLE tree (
`id` int primary key,
`CELL` varchar(50),
`parent_id` int,
`path` varchar(255),
unique index (`CELL`),
unique index (`path`)
);
DROP TRIGGER IF EXISTS `tree_after_insert`;
DELIMITER //
CREATE TRIGGER `tree_after_insert` BEFORE INSERT ON `tree` FOR EACH ROW BEGIN
if new.id = 1 then
set new.path := '1/';
else
set new.path := concat((
select path from tree where id = new.parent_id
), new.id, '/');
end if;
END//
DELIMITER ;
insert into tree
select seq as id
, conv(seq, 10, 36) as CELL
, case
when seq = 1 then null
else floor(rand(1) * (seq-1)) + 1
end as parent_id
, null as path
from seq_1_to_100000
;
DROP TRIGGER IF EXISTS `tree_after_insert`;
-- runtime ~ 4 sec.
Tests
Count all elements under the root:
SELECT count(*)
FROM tree t
CROSS JOIN tree r -- root
WHERE r.CELL = '1'
AND t.path LIKE CONCAT(r.path, '%');
-- result: 100000
-- runtime: ~ 30 ms
Get subtree elements under a specific node:
SELECT t.*
FROM tree t
CROSS JOIN tree r -- root
WHERE r.CELL = '3B0'
AND t.path LIKE CONCAT(r.path, '%');
-- runtime: ~ 30 ms
Result:
| id | CELL | parent_id | path |
|=======|======|===========|=====================================|
| 4284 | 3B0 | 614 | 1/4/11/14/614/4284/ |
| 6560 | 528 | 4284 | 1/4/11/14/614/4284/6560/ |
| 8054 | 67Q | 6560 | 1/4/11/14/614/4284/6560/8054/ |
| 14358 | B2U | 6560 | 1/4/11/14/614/4284/6560/14358/ |
| 51911 | 141Z | 4284 | 1/4/11/14/614/4284/51911/ |
| 55695 | 16Z3 | 4284 | 1/4/11/14/614/4284/55695/ |
| 80172 | 1PV0 | 8054 | 1/4/11/14/614/4284/6560/8054/80172/ |
| 87101 | 1V7H | 51911 | 1/4/11/14/614/4284/51911/87101/ |
PostgreSQL
This also works for PostgreSQL. Only the string concatenation syntax has to be changed:
SELECT t.*
FROM tree t
CROSS JOIN tree r -- root
WHERE r.CELL = 'AE'
AND t.path LIKE r.path || '%';
Demo: sqlfiddle - rextester
How does the search work
If you look at the test example, you'll see that all paths in the result begin with '1/4/11/14/614/4284/'. That is the path of the subtree root with CELL='3B0'. If the path column is indexed, the engine will find them all efficiently, because the index is sorted by path. It's like you would want to find all the words that begin with 'pol' in a dictionary with 100K words. You wouldn't need to read the entire dictionary.
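You can confirm this on your own data with EXPLAIN (a sketch; the output format varies by engine and version, but it should report a range scan on the path index):
EXPLAIN SELECT t.* FROM tree t WHERE t.path LIKE '1/6/%';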
Performance
As others have already mentioned, performance shouldn't be an issue as long as you use a suitable indexed primary key and ensure that relations use foreign keys. In general, an RDBMS is highly optimised to perform joins efficiently on indexed columns, and referential integrity also has the advantage of preventing orphans. 100,000 may sound like a lot of rows, but this isn't going to stretch an RDBMS as long as the table structure and queries are well designed.
Choice of RDBMS
One factor in answering this question lies in choosing a database with the ability to perform a recursive query via a Common Table Expression (CTE), which can be very useful to keep the queries compact or essential if there are queries that do not limit the number of descendants being traversed.
Since you've indicated that you are free to choose the RDBMS but it must run under Linux, I'm going to throw PostgreSQL out there as a suggestion since it has this feature and is freely available. (This choice is of course very subjective and there are advantages and disadvantages of each but a few other contenders I'd be tempted to rule out are MySQL since it doesn't currently support CTEs, MariaDB since it doesn't currently support *recursive* CTEs, SQL Server since it doesn't currently support Linux. Other possibilities such as Oracle may be dependent on budget / existing resources.)
SQL
Here's an example of the SQL you'd write to perform your first example of finding all the descendants of 'A':
WITH RECURSIVE rcte AS (
    SELECT id, letters
    FROM cell
    WHERE letters = 'A'
    UNION ALL
    SELECT c.id, c.letters
    FROM cell c
    INNER JOIN rcte r
        ON c.parent_cell_id = r.id
)
SELECT letters
FROM rcte
ORDER BY letters;
Explanation
The above SQL sets up a "Common Table Expression", i.e. a SELECT to run whenever its alias (in this case rcte) is referenced. The recursion happens because this is referenced within itself. The first part of the UNION picks the cell at the top of the hierarchy. Its descendants are all found by carrying on joining on children in the second part of the UNION until no further records are found.
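For reference, a minimal sketch of the table shape the query assumes; the column names come from the query above, the constraints are my assumption:
-- Assumed structure (not shown in the original answer):
CREATE TABLE cell (
    id             integer PRIMARY KEY,
    letters        text NOT NULL,
    parent_cell_id integer REFERENCES cell (id)  -- NULL for the root
);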
Demo
The above query can be seen in action on the sample data here: http://rextester.com/HVY63888
You absolutely can do that (if I've read your question correctly).
Depending on your RDBMS you might have to choose a different way.
Your basic structure of having a parent is correct.
SQL Server use recursive common table expression (CTE) to anchor the start and work down
https://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx
Edit: For Linux use the same in PostgreSQL https://www.postgresql.org/docs/current/static/queries-with.html
Oracle has a different approach, though I think you might be able to use the CTE as well.
https://oracle-base.com/articles/misc/hierarchical-queries
For 100k rows I don't imagine performance will be an issue, though I'd still index PK & FK because that's the right thing to do. If you're really concerned about speed then reading it into memory and building a hash table of linked lists might work.
Pros & cons - it pretty much comes down to readability and suitability for your RDBMS.
It's an already solved problem (again, assuming I've not missed anything) so you'll be fine.
I have two words for you... "RANGE KEYS"
You may find this technique to be incredibly powerful and flexible. You'll be able to navigate your hierarchies with ease, and support variable depth aggregation without the need for recursion.
In the demonstration below, we'll build the hierarchy via a recursive CTE. For larger hierarchies (150K+), I'm willing to share a much faster build if needed.
Since your hierarchies are slow moving (like mine), I tend to store them in a normalized structure and rebuild as necessary.
How about some actual code?
Declare @YourTable table (ID varchar(25),Pt varchar(25))
Insert into @YourTable values
('A'   ,NULL),
('AA'  ,'A'),
('AAA' ,'AA'),
('AAC' ,'AA'),
('AB'  ,'A'),
('AE'  ,'A'),
('AEA' ,'AE'),
('AEE' ,'AE'),
('AEEB','AEE')
Declare @Top  varchar(25) = null      --<< Sets top of Hier Try 'AEE'
Declare @Nest varchar(25) = '|-----'  --<< Optional: Added for readability
IF OBJECT_ID('TestHier') IS NOT NULL
Begin
    Drop Table TestHier
End
;with cteHB as (
    Select Seq   = cast(1000+Row_Number() over (Order by ID) as varchar(500))
          ,ID
          ,Pt
          ,Lvl   = 1
          ,Title = ID
    From @YourTable
    Where IsNull(@Top,'TOP') = case when @Top is null then isnull(Pt,'TOP') else ID end
    Union All
    Select cast(concat(cteHB.Seq,'.',1000+Row_Number() over (Order by cteCD.ID)) as varchar(500))
          ,cteCD.ID
          ,cteCD.Pt
          ,cteHB.Lvl+1
          ,cteCD.ID
    From @YourTable cteCD
    Join cteHB on cteCD.Pt = cteHB.ID)
,cteR1 as (Select Seq,ID,R1=Row_Number() over (Order By Seq) From cteHB)
,cteR2 as (Select A.Seq,A.ID,R2=Max(B.R1) From cteR1 A Join cteR1 B on (B.Seq like A.Seq+'%') Group By A.Seq,A.ID )
Select B.R1
      ,C.R2
      ,A.ID
      ,A.Pt
      ,A.Lvl
      ,Title = Replicate(@Nest,A.Lvl-1) + A.Title
Into dbo.TestHier
From cteHB A
Join cteR1 B on A.ID=B.ID
Join cteR2 C on A.ID=C.ID
Order By B.R1
Show the entire hierarchy (I added the Title and nesting for readability):
Select * from TestHier Order By R1
Just to state the obvious, the Range Keys are R1 and R2. You may also notice that R1 maintains the presentation sequence. Leaf nodes are where R1=R2 and Parents or rollups define the span of ownership.
To Show All Descendants
Declare @GetChildrenOf varchar(25) = 'AE'
Select A.*
From TestHier A
Join TestHier B on B.ID=@GetChildrenOf and A.R1 Between B.R1 and B.R2
Order By R1
To Show Path
Declare @GetParentsOf varchar(25) = 'AEEB'
Select A.*
From TestHier A
Join TestHier B on B.ID=@GetParentsOf and B.R1 Between A.R1 and A.R2
Order By R1
Clearly these are rather simple illustrations. Over time, I have created a series of helper functions, both scalar and table-valued. I should also state that you should NEVER hard-code range keys in your work because they will change.
In Summary
If you have a point (or even a series of points), you'll have its range and therefore you'll immediately know where it resides and what rolls into it.
This approach does not depend on the existence of a path or parent column. It is relational not recursive.
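As an illustration of variable-depth aggregation without recursion (a sketch, not part of the original answer): counting each node's descendants takes a single range join.
-- Sketch only: every A whose R1 falls in B's range rolls up into B;
-- subtract 1 so a node does not count itself.
Select B.ID
      ,Descendants = Count(*) - 1
From TestHier A
Join TestHier B on A.R1 Between B.R1 and B.R2
Group By B.ID
Order By B.ID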
Since the table is static create a materialized view containing just the leaves to make searching faster:
create materialized view leave as
select cell
from (
    select cell,
           lag(cell,1,cell) over (order by cell desc) not like cell || '%' as leave
    from t
) s
where leave;
table leave;
cell
------
CCCE
CCCA
CCBE
CCBC
BEDA
BDDA
BDCE
BDCB
BAA
AEEB
AEA
AB
AAC
AAA
A materialized view is computed once at creation not at each query like a plain view. Create an index to speed it up:
create index cell_index on leave(cell);
If eventually the source table is altered just refresh the view:
refresh materialized view leave;
The search function receives text and returns a text array:
create or replace function get_descendants(c text)
returns text[] as $$
    select array_agg(distinct l order by l)
    from (
        select left(cell, generate_series(length(c), length(cell))) as l
        from leave
        where cell like c || '%'
    ) s;
$$ language sql immutable strict;
Pass the desired match to the function:
select get_descendants('A');
get_descendants
-----------------------------------
{A,AA,AAA,AAC,AB,AE,AEA,AEE,AEEB}
select get_descendants('AEE');
get_descendants
-----------------
{AEE,AEEB}
Test data:
create table t (cell text);
insert into t (cell) values
('A'),
('AA'),
('AAA'),
('AAC'),
('AB'),
('AE'),
('AEA'),
('AEE'),
('AEEB'),
('B'),
('BA'),
('BAA'),
('BD'),
('BDC'),
('BDCB'),
('BDCE'),
('BDD'),
('BDDA'),
('BE'),
('BED'),
('BEDA'),
('C'),
('CC'),
('CCB'),
('CCBC'),
('CCBE'),
('CCC'),
('CCCA'),
('CCCE'),
('CE');
For your scenario, I would suggest using the Nested Sets approach in PostgreSQL. It lets you query a tree (think nested XML tags) with plain relational queries.
Performance
If you index the lft and rgt columns, then you don't need recursive queries to get the data. Even though the data seems huge, retrieval will be very fast.
Sample
/*
 1  A:
 2    AA:
 3      AAA
 4      AAC
 5    AB
 6    AE:
 7      AEA
 8      AEE:
 9        AEEB
10  B:
*/
CREATE TABLE tree(id int, CELL varchar(4), lft int, rgt int);
INSERT INTO tree
("id", CELL, "lft", "rgt")
VALUES
(1, 'A', 1, 9),
(2, 'AA', 2, 4),
(3, 'AAA', 3, 3),
(4, 'AAC', 4, 4),
(5, 'AB', 5, 5),
(6, 'AE', 6, 9),
(7, 'AEA', 7, 7),
(8, 'AEE', 8, 8),
(9, 'AEEB', 9, 9)
;
SELECT hc.*
FROM tree hp
JOIN tree hc
ON hc.lft BETWEEN hp.lft AND hp.rgt
WHERE hp.id = 2
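A minimal sketch of the indexes suggested above (index names are made up):
-- Hypothetical names; these support the BETWEEN range join used above.
CREATE INDEX tree_lft_idx ON tree (lft);
CREATE INDEX tree_rgt_idx ON tree (rgt);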
Demo: Querying using Nested Sets approach

SQL Populate table with random data

I have a table with two fields:
id(UUID) that is primary Key and
description (var255)
I want to insert random data with SQL sentence.
I would like that description would be something random.
PS: I am using PostgreSQL.
I don't know exactly whether this fits the requirement for a "random description", and it's not clear if you want to generate the full data set, but, for example, this generates 10 records with consecutive ids and random texts:
test=# SELECT generate_series(1,10) AS id, md5(random()::text) AS descr;
 id |              descr
----+----------------------------------
  1 | 65c141ee1fdeb269d2e393cb1d3e1c09
  2 | 269638b9061149e9228d1b2718cb035e
  3 | 020bce01ba6a6623702c4da1bc6d556e
  4 | 18fad4813efe3dcdb388d7d8c4b6d3b4
  5 | a7859b3bcf7ff11f921ceef58dc1e5b5
  6 | 63691d4a20f7f23843503349c32aa08c
  7 | ca317278d40f2f3ac81224f6996d1c57
  8 | bb4a284e1c53775a02ebd6ec91bbb847
  9 | b444b5ea7966cd76174a618ec0bb9901
 10 | 800495c53976f60641fb4d486be61dc6
(10 rows)
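Applying the same idea to the question's table (a sketch; the table name is hypothetical, and gen_random_uuid() is built in from PostgreSQL 13, while earlier versions need the pgcrypto extension):
-- Sketch: UUID primary key plus a random description, 10 rows.
INSERT INTO my_table (id, description)
SELECT gen_random_uuid(), md5(random()::text)
FROM generate_series(1, 10);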
The following worked for me:
create table t_random as select s, md5(random()::text) from generate_Series(1,5) s;
Here is a more elegant way using the latest features. I will use the Unix dictionary (/usr/share/dict/words) and copy it into my PostgreSQL data directory:
cp /usr/share/dict/words data/pg95/words.list
Then you can easily create a ton of nonsense descriptions, still searchable via dictionary words, with the following steps:
1) Create the table and function. getNArrayS takes an array of words and the number of random elements to concatenate.
CREATE TABLE randomTable(id serial PRIMARY KEY, description text);
CREATE OR REPLACE FUNCTION getNArrayS(el text[], count int) RETURNS text AS $$
SELECT string_agg(el[random()*(array_length(el,1)-1)+1], ' ') FROM generate_series(1,count) g(i)
$$
VOLATILE
LANGUAGE SQL;
Once you have everything in place, run the insert using a CTE:
WITH t(ray) AS (
    SELECT (string_to_array(pg_read_file('words.list')::text, E'\n'))
)
INSERT INTO randomTable(description)
SELECT getNArrayS(T.ray, 3) FROM T, generate_series(1,10000);
And now, select as usual:
postgres=# select * from randomtable limit 3;
 id |                 description
----+---------------------------------------------
  1 | ultracentenarian splenodiagnosis manurially
  2 | insequent monopolarity funipendulous
  3 | ruminate geodic unconcludable
(3 rows)
I assume sentence == statement? You could use Perl or PL/Perl, as Perl has some good random data generators. Check out the CPAN module Data::Random to start.
Here's a sample Perl script, taken from CPAN, that generates some different random stuff.
use Data::Random qw(:all);
my @random_words = rand_words( size => 10 );
my @random_chars = rand_chars( set => 'all', min => 5, max => 8 );
my @random_set = rand_set( set => \@set, size => 5 );
my $random_enum = rand_enum( set => \@set );
my $random_date = rand_date();
my $random_time = rand_time();
my $random_datetime = rand_datetime();
open(FILE, ">rand_image.png") or die $!;
binmode(FILE);
print FILE rand_image( bgcolor => [0, 0, 0] );
close(FILE);