use case for uuid in database - sql

Let's say for example I have a table called test and the data is like that:
id name
1 John
2 Jay
3 Maria
Let's suppose this test table gets updated and the names are, for some reason, allocated to different ids. Consider the name column as unique: it is not the primary key of test, but it is unique.
Next time I query test it may look like that:
id name
10 John
12 Jay
13 Maria
So in that case the id changed, but the name is consistent and can be traced back to the previous state of the test table. I believe it is bad practice to change ids like that, but I don't have control over this table and this is how some folks currently handle the data. I would like to know if this is a good case for using a uuid? I'm not familiar with the concept of uuid, or with how best to create something consistent and uniquely identifiable that is also fast to search when I want to handle the data changes in this table. I would like to import this table on my end but create a key that is fast and that will not change during data imports.

I feel like the problem you're trying to solve isn't clear.
Problem 1: The id column keeps getting updated. This seems weird so getting to the root of why that is happening seems like the real issue to resolve.
Problem 2: Uniquely identifying rows. You would like to use the id column or a new uuid column to uniquely identify rows, but you've already said you can uniquely identify rows with the name column, so what problem are you trying to solve here?
Problem 3: Performance. You're going to get the best performance from an indexed integer column (preferably the primary key), most likely id in this case. uuid won't help with performance.
Problem 4: Data changing on imports. This is likely due to auto-increments or initial values set differently in the DDL. You need a better understanding of what exactly is going wrong with your import.
Problem 5: If you don't have control over the values of the id column, how would you be able to add your own uuid?
A uuid is just a way of creating a unique value.
Oracle's built-in function for this is SYS_GUID(), which returns a globally unique 16-byte RAW value.
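For the import use case in the question, here is a minimal hedged sketch (Oracle; the names test_import, stable_id and source_id are illustrative, not from the question):

```sql
-- SYS_GUID() generates a globally unique 16-byte RAW value once, at insert time;
-- it never changes afterwards, no matter what happens to the source ids.
CREATE TABLE test_import (
  stable_id RAW(16) DEFAULT SYS_GUID() NOT NULL PRIMARY KEY,
  source_id NUMBER,                       -- the volatile id from the source table
  name      VARCHAR2(100) NOT NULL UNIQUE -- the stable business key
);
```

On each re-import you would MERGE on name (the stable business key), so existing rows keep their stable_id and only source_id is refreshed.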

This is an XY-problem.
You have data in your table with a unique key of a given data type, and when the data gets reloaded the unique key is regenerated, so all the rows are given new unique values.
The data type you use does not matter; the issue is with the process of reloading the data and regenerating the unique keys. If you use a different data type and the unique keys are still regenerated, then you have exactly the same problem.
Fix that problem first.

Putting aside the reasons for this question and whether it makes sense or not: if I got it right, it is about generating a unique key from NAME, which is itself unique.
If that is the case then you could create your own function to do the job:
CREATE OR REPLACE FUNCTION NAME_2_ID(p_name VARCHAR2) RETURN NUMBER AS
BEGIN
  Declare
    mRet   Number(16);
    mAlpha VarChar2(64);
    mWrk   Number(16) := 0;
    mANum  VarChar2(4000) := '';
  Begin
    IF p_name Is Null Then
      mRet := 0;
      GOTO End_It;
    END IF;
    --
    mAlpha := ' ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'',.!?"()[]{}';
    -- Replace each character with its position in mAlpha and sum the positions
    For i In 1 .. Length(p_name) Loop
      mANum := mANum || SubStr(p_name, i, 1) ||
               To_Char(InStr(mAlpha, Upper(SubStr(p_name, i, 1)))) || '~';
      mWrk  := mWrk + InStr(mAlpha, Upper(SubStr(p_name, i, 1)));
    End Loop;
    mRet := mWrk * Length(mANum);
    <<End_It>>
    RETURN(mRet);
  End;
END NAME_2_ID;
As the ID column in your TEST table changes between loads, as in the sample data:
WITH
test_1 AS
(
Select 1 "ID", 'John' "A_NAME" From Dual Union All
Select 2 "ID", 'Jay' "A_NAME" From Dual Union All
Select 3 "ID", 'Maria' "A_NAME" From Dual
),
test_2 AS
(
Select 10 "ID", 'John' "A_NAME" From Dual Union All
Select 12 "ID", 'Jay' "A_NAME" From Dual Union All
Select 13 "ID", 'Maria' "A_NAME" From Dual
)
... you can get the same ID_2 whenever you query the table (if the name didn't change) ...
Select
ID,
A_NAME,
NAME_2_ID(A_NAME) "ID_2"
From
test_1
/*
ID A_NAME ID_2
---------- ------ ----------
1 John 765
2 Jay 429
3 Maria 846
*/
-- -------------------------
... ... ...
From
test_2
/*
ID A_NAME ID_2
---------- ------ ----------
10 John 765
12 Jay 429
13 Maria 846
*/
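As a side note, not from the original answer: if you're on Oracle 12c or later, the built-in STANDARD_HASH gives a deterministic value for the same input without a custom function, and with far lower collision risk than summing character positions. A sketch against the test_1 sample data above:

```sql
-- Same NAME always yields the same hash, so it survives reloads like ID_2 above.
-- STANDARD_HASH returns a RAW value; RAWTOHEX makes it readable.
Select
  ID,
  A_NAME,
  RAWTOHEX(STANDARD_HASH(A_NAME, 'MD5')) "ID_2"
From
  test_1;
```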


PostgreSQL two tables with expressions relations

I need to relate two tables via binary expressions between them. I'll try to clarify. I have two tables:
--First
id | Name
1 | First Test 1
2 | First Test 2
--Second
id | Name
1 | Second Test 1
2 | Second Test 2
I want to be able to link the two tables with a logical expression like the below pseudo code:
First(id=1) => Second(id=1) && (AND) Second(id=2)
Something like one-to-many but with a logical operator between all the relations. Is there a straightforward way of doing this?
Thanks in advance,
Julian
UPDATE:
As @Rezu requested: I want to be able to write a query that will return text like, for example:
First Test 1 := Second Test 1 AND Second Test 2
where the AND part can be AND, OR, NOT etc.
Hope this clarifies what I want to achieve.
UPDATE 1:
This is almost the thing I like to achieve. The result query is this:
First Test 1 := Second Test 1
First Test 1 := Second Test 2
First Test 2 := Second Test 3
What I want to achieve is:
First Test 1 := Second Test 1 AND First Test 1 := Second Test 2
First Test 2 := Second Test 3
Hope that explains my goal.
Basically this is my solution. Maybe there is a better one, but this is what I came up with:
create table first (
id serial primary key,
name text
);
insert into first (name) values (
'First Test 1'),('First Test 2');
create table second (
id serial primary key,
name text
);
insert into second (name) values (
'Second Test 1'),('Second Test 2'),('Second Test 3');
create table first_second (
first_id bigint,
second_id bigint,
logical_operator text
);
insert into first_second (first_id, second_id,logical_operator) values (
1,1,''),(1,2,'AND'),(2,3,'');
And the query is:
SELECT first.name || ' := ' ||
string_agg(first_second.logical_operator || ' ' || second.name, ' ')
as name
FROM
first
JOIN first_second ON first_second.first_id = first.id
JOIN second ON first_second.second_id = second.id
Group by first.name
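One caveat worth noting: without an ORDER BY inside string_agg, the aggregated terms come back in arbitrary order, so the operator may not attach to the intended row. A sketch of the same query with deterministic ordering:

```sql
-- Order the aggregated terms by second_id so the empty operator always
-- comes first and the AND/OR/NOT operators attach to the right rows.
SELECT first.name || ' := ' ||
       string_agg(first_second.logical_operator || ' ' || second.name, ' '
                  ORDER BY first_second.second_id) AS name
FROM first
JOIN first_second ON first_second.first_id = first.id
JOIN second ON first_second.second_id = second.id
GROUP BY first.name;
```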

Oracle Dynamic Pivoting

I have the below table. I need to create columns based off the column CCL. The values in column CCL are unknown. I'm not sure where to begin here. Any help would be appreciated.
TABLEA
ID CCL Flag
1 john x
1 adam x
1 terry
1 rob x
2 john x
Query:
SELECT *
FROM TABLEA
Output:
ID John Adam Terry Rob
1 x x x
2 x
Using dynamic SQL for a result where the columns are unknown at execution time is a bit of a hassle in Oracle compared to certain other RDBMSs.
Because the record type of the output is not yet known, it can't be defined beforehand.
In Oracle 11g, one way is to use an anonymous PL/SQL block that generates a temporary table holding the pivoted result.
Then select the results from that temporary table.
declare
  v_sqlqry clob;
  v_cols clob;
begin
  -- Generate a string with a list of the unique names
  select listagg(''''||CCL||''' as "'||CCL||'"', ', ') within group (order by CCL)
  into v_cols
  from
  (
    select distinct CCL
    from tableA
  );
  -- Drop the temporary table if it exists (the nested block swallows
  -- ORA-00942 "table or view does not exist" and re-raises anything else)
  begin
    execute immediate 'DROP TABLE tmpPivotTableA';
  exception
    when others then
      if sqlcode != -942 then raise; end if;
  end;
  -- A dynamic SQL statement to create a temporary table
  -- based on the results of the pivot
  v_sqlqry := '
    CREATE GLOBAL TEMPORARY TABLE tmpPivotTableA
    ON COMMIT PRESERVE ROWS AS
    SELECT *
    FROM (SELECT ID, CCL, Flag FROM TableA) src
    PIVOT (MAX(Flag) FOR (CCL) IN ('||v_cols||')) pvt';
  -- dbms_output.put_line(v_sqlqry); -- to check what the generated SQL looks like
  execute immediate v_sqlqry;
end;
/
select * from tmpPivotTableA;
Returns:
ID adam john rob terry
-- ---- ---- --- -----
1 x x x
2 x
You can find a test on db<>fiddle here
In Oracle 11g, another cool trick (created by Anton Scheffer) can be found in this blog, but you'll have to install the pivot function for it.
The source code can be found in this zip.
After that the SQL can be as simple as this:
select * from
table(pivot('SELECT ID, CCL, Flag FROM TableA'));
You'll find a test on db<>fiddle here
Oracle must know all the columns in the select list at PARSE time.
This has a couple of consequences:
It's not possible for Oracle to change the column list of a query without re-parsing it, regardless of what is supposed to drive the change, whether that's the distinct list of values in some column or something else. In other words, you cannot expect Oracle to add new columns to the output when you add a new value to the CCL column in your example.
In each and every query you must specify all the columns in the select list explicitly, unless you use "*" with a table alias. If you use "*", Oracle gets the column list from the metadata, and if you modify the metadata (i.e. run DDL on the table) then Oracle re-parses the query.
So the best option for dealing with "dynamic pivoting" is to pivot and format the result in the UI. However, there are still some options in the database which you may want to consider.
Generating XML with the pivoted result and parsing it.
Pivot for XML and then parse the results. In this case you eventually have to specify the pivoted columns one way or another.
create table tablea(id, ccl, flag) as
(
select 1, 'john', 'x' from dual
union all select 1, 'adam', 'x' from dual
union all select 1, 'terry', null from dual
union all select 1, 'rob', 'x' from dual
union all select 2, 'john', 'x' from dual
);
In the example below you do NOT have to provide the list of values for CCL; the only literals you specify are the pivoted expression (FLAG) and the column used for pivoting (CCL).
SQL> select id, x.*
2 from tablea t
3 pivot xml (max(flag) flag for ccl in(any))
4 -- parsing output
5 , xmltable('/PivotSet' passing ccl_xml
6 columns
7 name1 varchar2(30) path '/PivotSet/item[1]/column[@name="CCL"]/text()',
8 value1 varchar2(30) path '/PivotSet/item[1]/column[@name="FLAG"]/text()',
9 name2 varchar2(30) path '/PivotSet/item[2]/column[@name="CCL"]/text()',
10 value2 varchar2(30) path '/PivotSet/item[2]/column[@name="FLAG"]/text()',
11 name3 varchar2(30) path '/PivotSet/item[3]/column[@name="CCL"]/text()',
12 value3 varchar2(30) path '/PivotSet/item[3]/column[@name="FLAG"]/text()',
13 name4 varchar2(30) path '/PivotSet/item[4]/column[@name="CCL"]/text()',
14 value4 varchar2(30) path '/PivotSet/item[4]/column[@name="FLAG"]/text()') x;
ID NAME1 VALUE NAME2 VALUE NAME3 VALUE NAME4 VALUE
---------- ----- ----- ----- ----- ----- ----- ----- -----
1 adam x john x rob x terry
2 john x
You may have noticed two important details:
Each pivoted column is represented by two columns in the result: one for the caption and one for the value.
The names are ordered, so you cannot preserve an order like the one in your example ('john', 'adam', 'terry', 'rob');
moreover, one result column may represent different names, e.g. NAME1 holds the value for 'adam' in the first row and for 'john' in the second.
It's possible to use only indices to get the same output.
select id, x.*
from tablea
pivot xml (max(flag) flag for ccl in(any))
-- parsing output
, xmltable('/PivotSet' passing ccl_xml
columns
name1 varchar2(30) path '/PivotSet/item[1]/column[1]',
value1 varchar2(30) path '/PivotSet/item[1]/column[2]',
name2 varchar2(30) path '/PivotSet/item[2]/column[1]',
value2 varchar2(30) path '/PivotSet/item[2]/column[2]',
name3 varchar2(30) path '/PivotSet/item[3]/column[1]',
value3 varchar2(30) path '/PivotSet/item[3]/column[2]',
name4 varchar2(30) path '/PivotSet/item[4]/column[1]',
value4 varchar2(30) path '/PivotSet/item[4]/column[2]') x;
But still there are two columns for each pivoted column in the output.
Below query returns exactly the same data as in your example
SQL> select id, x.*
2 from tablea
3 pivot xml (max(flag) flag for ccl in(any))
4 -- parsing output
5 , xmltable('/PivotSet' passing ccl_xml
6 columns
7 john varchar2(30) path '/PivotSet/item[column="john"]/column[2]',
8 adam varchar2(30) path '/PivotSet/item[column="adam"]/column[2]',
9 terry varchar2(30) path '/PivotSet/item[column="terry"]/column[2]',
10 rob varchar2(30) path '/PivotSet/item[column="rob"]/column[2]') x;
ID JOHN ADAM TERRY ROB
---------- ----- ----- ----- -----
1 x x x
2 x
But wait... all the values for CCL are specified in the query. This is because a column caption cannot depend on the data in the table. So what is the point of pivoting for XML if you could just have hardcoded all the values in the FOR clause with the same result? One of the ideas is that the Oracle SQL engine transposes the query result, and the tool which displays the output just has to parse the XML properly. So you split the pivoting logic into two layers; the XML parsing can be done outside SQL, say, in your application.
ODCI table interface
There is already a link in another answer to Anton's solution.
You can also check an example here.
And, of course, it's explained in detail in Oracle Documentation.
Polymorphic Table Functions
One more advanced technology was introduced in Oracle 18: Polymorphic Table Functions.
But again, you should not expect the column list of your query to change after you add a new value to CCL. It can change only after re-parsing. There is a way to force a hard parse before each execution, but that is another topic.
Dynamic SQL
Finally, as also already pointed out in the comments, you can use good old dynamic SQL.
First step - generate SQL statement based on the table contents. Second step - execute it.
SQL> var rc refcursor
SQL> declare
2 tmp clob;
3 sql_str clob := 'select * from tablea pivot (max(flag) for ccl in ([dynamic_list]))';
4 begin
5 select listagg('''' || ccl || ''' as ' || ccl, ',') within group(order by max(ccl))
6 into tmp
7 from tablea
8 group by ccl;
9 open :rc for replace(sql_str, '[dynamic_list]', tmp);
10 end;
11 /
PL/SQL procedure successfully completed.
SQL> print rc
ID ADAM JOHN ROB TERRY
---------- ----- ----- ----- -----
1 x x x
2 x

Why multiset union is not working when I'm trying to concatenate a null with some number in plsql?

So I have two nested tables and I want to make a new one with the elements from both of them. The first nested table has a null value and the second one a number, and I want the result to be just the number from the second one, but it prints the null value. Is it possible to make a union between a null and a number with multiset union?
To answer your question: yes, it is possible to "make a union between a null and a number with multiset union". But what you end up with is two entries in the nested table:
SQL> update test
2 set marks = numberlist(null) multiset union all numberlist(42)
3 where id_std = 1
4 /
SQL> select id_std
2 , t2.column_value as mark
3 from test t1
4 , table(t1.marks) t2
5 /
ID_STD MARK
------ ----
1
1 42
SQL>
I suspect this effect is actually what you're complaining about. However, the null mark is still a valid entry. If you want to overwrite it you need to provide different logic.
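For instance, a hedged sketch of such logic (assuming the same test table, numberlist type and marks column as in the example above): filter the NULL elements out of the combined collection before storing it.

```sql
-- CAST(MULTISET(...)) rebuilds the collection from a query, here keeping
-- only the non-null elements of the union, so just 42 remains.
update test
set marks = (select cast(multiset(
                     select t2.column_value
                     from table(numberlist(null) multiset union all numberlist(42)) t2
                     where t2.column_value is not null
                   ) as numberlist)
             from dual)
where id_std = 1;
```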

Query to assign the same ID to row being inserted if it it already exists in the table

I am inserting customer records into a table where, if a record with the same name already exists, I assign the same ID to the newly inserted record.
Assume table T has this record:
ID | Name | Phone_Number | Date_Inserted
105| Sam | 111111 | 04/03/2014
106| Rita | 222222 |04/03/2014
And I'm inserting this from table A:
Name| Phone_Number
Sam | 333333
Then after insertion, table T should have:
ID | Name | Phone_Number | Date_Inserted
105| Sam | 111111 | 04/03/2014
106| Rita | 222222 | 04/03/2014
105| Sam | 333333 | 04/04/2014
Without the above change it would look like:
INSERT INTO T SELECT CustID.nextval,Name,Phone_Number,SYSDATE FROM A;
I was thinking of using,
INSERT INTO T
SELECT CASE
WHEN NOT EXISTS(select null from T WHERE T.Name=A.Name) THEN CustID.nextVal
ELSE (select ID from T where T.Name=A.Name)
END,
Name,
Phone_Number,
SYSDATE
FROM A;
But I'm not sure if it'll work and it seems redundant/bad for performance. If there's a preferred way to do this, please let me know.
If your schema is not set in stone, I would perhaps reconfigure it so that there is a "person" table and a separate "person phone number" table. With that sort of set up, you can associate multiple phone numbers with one person, and you won't be stomping on IDs, or creating confusing secondary ID columns that aren't primary keys.
If table A is small:
insert into T (id, name, phone_number, date_inserted)
select
nvl((select max(id) from T where T.name=A.name ), custid.nextval),
a.name, a.phone_number, sysdate
from A
If table A is big:
insert into T (id, name, phone_number, date_inserted)
select nvl(DT.id,custid.nextval), a.name, a.phone_number, sysdate
from A
left outer join (
select max(id) id, name from T
where name in (select distinct name from A)
group by T.name
) DT
on A.name=DT.name
If you want to "move" rows from table A to table T:
begin
for row in (select name, phone_number, rowid rid from A) loop
insert into T (id, name, phone_number, date_inserted)
select
nvl((select max(id) from T where T.name=row.name ), custid.nextval),
row.name, row.phone_number, sysdate
from dual;
delete from A where rowid=row.rid;
end loop;
end;
/
The characterisation of anything as "bad" is subjective. As long as the results are correct, something is only "bad" if it takes too long or uses too many system resources, and you define "too long" and "too many". If something is returning the correct results, in an acceptable time, using an acceptable amount of system resources, then there is no need to change it.
There are, however, a number of things that you can look out for (assuming that altering your data model is not an acceptable solution):
You're going to want an index on NAME, ID as you're selecting on NAME and returning ID.
Your second correlated sub-query, (select ID from T where T.Name=A.Name), returns multiple rows, which will cause an error. You either need to limit the result set to a single row, or use some aggregate function. It seems better to add an additional condition where rownum < 2 to limit the results, as an aggregate forces Oracle to perform a range scan over every row with that name, whereas you only need to find whether one exists.
CASE claims that it performs short-circuit evaluation; this isn't necessarily true when you get sequences involved.
I don't think it will affect your INSERT statement, but it might be worth giving your DATE_INSERTED column a default; that way you don't need to add it to every query and can't forget to do so:
alter table t modify date_inserted date default sysdate;
Putting these (pretty small) changes together your query might look like:
insert into t (id, name, phone_number)
select coalesce( ( select id from t where name = a.name and rownum < 2 )
               , custid.nextval
               )
, name
, phone_number
from a
Only you can tell whether this is acceptable or not.
I do something very similar - For one analytical database I have to maintain an old data-based primary key. The only way I could get the thing to work was running it in a background job every minute, using correlated sub-queries and explicitly adding a rownum restriction on the number of potential rows. I know that it's "better" to maintain this in the INSERT statement but the execution time was unacceptable. I know that the code can only deal with at most 10,000 rows a minute but it doesn't matter as I only add at most 5,000 rows a minute to the table. These numbers might change in the future and as the table grows the execution plan might change as well - when it does I'll deal with the problem then rather than attempting to solve a problem that doesn't exist.
tl;dr
Every bit of code is okay until it isn't. While knowledge and experience can help code remain okay for longer, don't prematurely optimise if there's no need to optimise.
Your version of the insert query will generate an error for the third and subsequent rows. I agree with @JeffN that you should fix the schema, because you clearly have a "person" entity and a "telephone" entity. But, given that you don't want to do that, the query you want is:
INSERT INTO T(id, name, phone_number, date_inserted)
SELECT (CASE WHEN oldid is null THEN CustID.nextVal
ELSE oldid
END) as Id, Name, Phone_Number, SYSDATE
FROM (select a.*, (select max(id) from T where T.Name = A.Name) as OldId
from A
) a;
For the purposes of this query, you should create an index on T(Name, Id):
create index idx_t_name_id on t(name, id);
You could also wrap this in a before insert trigger. I usually use a before insert trigger for auto-incrementing column in older versions of Oracle, rather than putting the sequence values in explicitly.
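A hedged sketch of that trigger approach (assuming table T and sequence CustID from the question; the trigger name is illustrative):

```sql
-- Reuse the existing ID for this name, if any; otherwise take a new one.
create or replace trigger t_assign_id
before insert on t
for each row
when (new.id is null)
declare
  v_id t.id%type;
begin
  select max(id) into v_id from t where name = :new.name;
  :new.id := nvl(v_id, CustID.nextval);
end;
/
```

Note that for multi-row INSERT ... SELECT statements this reads the mutating table and raises ORA-04091; it is safe for single-row VALUES inserts.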
I'd probably use an inline view to get distinct id,name and then outer join to it.
INSERT INTO T
( id
, name
, phone_number
, date_inserted
)
SELECT NVL( TVW.id, CustID.nextval )
, A.name
, A.phone_number
, SYSDATE
FROM A
, ( SELECT DISTINCT id, name
FROM T
) TVW
WHERE A.name = TVW.name (+)
I will suggest this kind of procedure:
create or replace procedure insert_id(name_in in varchar2,
                                      phone_in in number,
                                      date_ins_in date default sysdate) is
  cursor names is
    select id, name from names;
  type id is table of names.id%type;
  type name is table of names.name%type;
  sql_text varchar2(4000);
  r_ct     pls_integer;
  l_id     id;
  l_name   name;
begin
  open names;
  fetch names bulk collect into l_id, l_name;
  close names;
  r_ct := 0;
  for i in l_id.first .. l_id.last loop
    if l_name(i) = name_in then
      sql_text := q'{insert into names values(}' ||
                  q'{'}' || l_id(i) || q'{'}' || ',' ||
                  q'{'}' || name_in || q'{'}' || ',' ||
                  q'{'}' || phone_in || q'{'}' || ',' ||
                  q'{'}' || date_ins_in || q'{'}' || ')';
      execute immediate sql_text;
      r_ct := sql%rowcount;
      commit;
      exit;
    end if;
  end loop;
  if r_ct != 1 then
    for i in l_id.first .. l_id.last loop
      if l_name(i) != name_in then
        sql_text := 'insert into names values(' ||
                    q'{'}' || CustID.nextval -- this part may be wrong; it should be easy to correct if something's off
                    || q'{'}' || ',' ||
                    q'{'}' || name_in || q'{'}' || ',' ||
                    q'{'}' || phone_in || q'{'}' || ',' ||
                    q'{'}' || date_ins_in || q'{'}' || ')';
        execute immediate sql_text;
        commit;
        exit;
      end if;
    end loop;
  end if;
end;
I agree with the suggestions about splitting the table if you can. Your ID column really looks like it ought to be a foreign key, joining one person record to multiple phone numbers.
Assuming you can't change the tables, is it possible that there will be duplicate names in table A which don't yet appear in table T? If so, you'll need to write some PL/SQL and process the records one at a time.
For example, if A contains ...
Name| Phone_Number
Sam | 333333
Tom | 444444
Tom | 555555
... you won't be able to process the records in a single insert, because Tom's ID won't be available in table T until his first row is inserted; Tom would end up with two IDs in table T.
Given the sample data you provided, your insert will work. My version below will do exactly the same thing and ought to be slightly more efficient. Notice that the nextval of your sequence will be evaluated whether or not it's used, so you'll find that sequence numbers are skipped wherever an ID from table t is reused. If that's a problem, you're likely looking at writing a bit of PL/SQL.
insert into t
(id
,name
,phone_number
,date_inserted)
select nvl(t.id,CustID.nextval)
,a.name
,a.phone_number
,sysdate
from a
left join t on a.name = t.name;
I mostly use MySQL, so I'm not sure about the Oracle syntax, but logically we can achieve this using an if statement and a sub-query. Something like this:
INSERT INTO T SELECT (CASE WHEN (SELECT COUNT(ID) FROM T WHERE Name=A.Name) > 0 THEN (SELECT ID FROM T WHERE Name=A.Name AND ROWNUM <= 1) ELSE CustID.nextval END), Name, Phone_Number, SYSDATE FROM A;
Again, I'm not an Oracle programmer so the syntax could be incorrect.

Search for element in array of composite types

Using PostgreSQL 9.0
I have the following table setup
CREATE TABLE person (age integer, last_name text, first_name text, address text);
CREATE TABLE my_people (mperson person[]);
INSERT INTO my_people VALUES(array[ROW(44, 'John', 'Smith', '1234 Test Blvd.')::person]);
Now, I want to be able to write a select statement that can search and compare values of the composite types inside my mperson array column.
Example:
SELECT * FROM my_people WHERE 20 > ANY( (mperson) .age);
However, when trying to execute this query I get the following error:
ERROR: column notation .age applied to type person[], which is not a composite type
LINE 1: SELECT mperson FROM my_people WHERE 20 > ANY((mperson).age);
So you can see I'm trying to test the values of the composite type inside my array.
I know I'm not supposed to use arrays and composites in my tables, but this best suits our application's requirements.
Also, we have several nested composite arrays, so a generic solution that would let me search many levels deep would be appreciated.
The ANY construction in your case looks redundant. You can write the query this way:
SELECT * FROM my_people WHERE (mperson[1]).age < 20;
Of course, if you have multiple values in this array, that won't work, but you can't address an arbitrary array element the other way either.
Why do you need arrays at all? You can just write one element of type person per row.
Check also the excellent HStore module, which might better suit your generic needs.
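A minimal sketch of that normalized alternative (the table and column names are illustrative, not from the question):

```sql
-- One row per person; searching an attribute is then a plain WHERE clause,
-- and an index on age can support it.
CREATE TABLE person_row (
  person_id  serial PRIMARY KEY,
  age        integer,
  last_name  text,
  first_name text,
  address    text
);
CREATE INDEX person_row_age_idx ON person_row (age);

SELECT * FROM person_row WHERE age < 20;
```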
Temporary test setup:
CREATE TEMP TABLE person (age integer, last_name text, first_name text
, address text);
CREATE TEMP TABLE my_people (mperson person[]);
-- test-data, demonstrating 3 different syntax styles:
INSERT INTO my_people (mperson)
VALUES
(array[(43, 'Stack', 'Over', '1234 Test Blvd.')::person])
,(array['(44,John,Smith,1234 Test Blvd.)'::person,
'(21,Maria,Smith,1234 Test Blvd.)'::person])
,('{"(33,John,Miller,12 Test Blvd.)",
"(22,Frank,Miller,12 Test Blvd.)",
"(11,Bodi,Miller,12 Test Blvd.)"}');
Call (almost the solution):
SELECT (p).*
FROM (
SELECT unnest(mperson) AS p
FROM my_people) x
WHERE (p).age > 33;
Returns:
age | last_name | first_name | address
-----+-----------+------------+-----------------
43 | Stack | Over | 1234 Test Blvd.
44 | John | Smith | 1234 Test Blvd.
The key is the unnest() function, which is available in 9.0.
Your mistake in the example is that you forgot about the ARRAY layer in between. unnest() returns one row per base element; then you can access the columns of the complex type as demonstrated.
Brave new world
IF you actually want whole rows of people instead of the individuals that fit the criteria, I propose you add a primary key to the table and proceed as follows:
CREATE TEMP TABLE my_better_people (id serial, mperson person[]);
-- shortcut to populate the new world by emigration from the old world ;)
INSERT INTO my_better_people (mperson)
SELECT mperson FROM my_people;
Find individuals:
SELECT id, (p).*
FROM (
SELECT id, unnest(mperson) AS p
FROM my_better_people) x
WHERE (p).age > 20;
Find whole people (solution):
SELECT *
FROM my_better_people p
WHERE EXISTS (
SELECT 1
FROM (
SELECT id, unnest(mperson) AS p
FROM my_better_people
) x
WHERE (p).age > 20
AND x.id = p.id
);
You can do it without a primary key, but that would be foolish.