Filter list of points using list of Polygons - sql

Given a list of points and a list of polygons, how do you return the subset of points that fall within any of the polygons on the list?
I've removed other columns from the sample tables to simplify things.
Points Table:
| Longitude| Latitude |
|----------|-----------|
| 7.07491 | 51.28725 |
| 3.674765 | 51.40205 |
| 6.049105 | 51.86624 |
LocationPolygons Table:
| LineString |
|----------------------|
| CURVEPOLYGON (COMPOUNDCURVE (CIRCULARSTRING (-122.20 47.45, -122.81 47.0, -122.942505 46.687131 ... |
| MULTIPOLYGON (((-110.3086 24.2154, -110.30842 24.2185966, -110.3127...
If I had a row from the LocationPolygons table, I could do something like:
DECLARE @homeLocation geography;
SET @homeLocation = (select top 1 GEOGRAPHY::STGeomFromText(LineString, 4326)
FROM LocationPolygon where LocationPolygonId = '123abc')
select Id, Longitude, Latitude, @homeLocation.STContains(geography::Point(Latitude, Longitude, 4326))
as IsInLocation from Points where PointId in (1, 2, 3)
which would return what I want, in a format like the one below. However, this works for only one location on the list.
| Id | Longitude| Latitude | IsInLocation |
|----|----------|-----------|--------------|
| 1 | 7.07491 | 51.28725 | 0 |
| 2 | 3.674765 | 51.40205 | 1 |
| 3 | 6.049105 | 51.86624 | 0 |
How do I handle the scenario with multiple rows of the LocationPolygon table?
I'd like to know:
1. if any of the points are in any of the LocationPolygons?
2. what specific location polygon they are in, or whether they are in more than one polygon?
Question 2 is more of an extra. Can someone help?
Update #1
In response to @Ben-Thul's answer:
Unfortunately I don't have access/permission to make changes to the original tables. I can request access, but I'm not certain it'll be given, so I'm not sure I'll be able to add the columns or create the indexes. However, I can create temp tables in a stored proc, so I might be able to test your solution that way.
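For instance, something along these lines might work inside the proc (an untested sketch; note that a spatial index needs a clustered primary key on the table):
create table #Points (
    Id int primary key clustered,
    Longitude float,
    Latitude float,
    Point geography
);

-- populate the geography column once, at insert time
insert into #Points (Id, Longitude, Latitude, Point)
select Id, Longitude, Latitude, geography::Point(Latitude, Longitude, 4326)
from dbo.Points;

create spatial index IX_Point on #Points (Point);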
I stumbled on an answer like the one below, but I'm slightly worried about the performance implications of using a cross join.
WITH cte AS (
select *, (GEOGRAPHY::STGeomFromText(LineString, 4326)).STContains(geography::Point(Latitude, Longitude, 4326)) as IsInALocation from
(
select Longitude, Latitude from Points with (nolock)
) a cross join (
select LineString FROM LocationPolygons with (nolock)
) b
)
select * from cte where IsInALocation = 1
Obviously, it's better to look at a query plan, but is the solution I stumbled upon essentially the same as yours? Are there any potential issues that I missed? Apologies if this is basic; my SQL isn't very good.

Question 1 shouldn't be too bad. First, some set up:
alter table dbo.Points add Point as (GEOGRAPHY::Point(Latitude, Longitude, 4326));
create spatial index IX_Point on dbo.Points (Point) with (online = on);
alter table dbo.LocationPolygon add Polygon as (GEOGRAPHY::STGeomFromText(LineString, 4326));
create spatial index IX_Polygon on dbo.LocationPolygon (Polygon) with (online = on);
This will create a computed column on each of your tables that is of type geography that has a spatial index on it.
From there, you should be able to do something like this:
select pt.ID,
pt.Longitude,
pt.Latitude,
coalesce(pg.IsInLocation, 0) as IsInLocation
from Points as pt
outer apply (
select top(1) 1 as IsInLocation
from dbo.LocationPolygon as pg
where pg.Polygon.STContains(pt.Point) = 1
) as pg;
Here, you're selecting every row from the Points table and using outer apply to see if any polygons contain that point. If one does (it doesn't matter which one), that query will return a 1 in the result set and bubble that back up to the driving select.
To extend this to Question 2, you can remove the top() from the outer apply and have it return either the IDs from the Polygon table or whatever you want. Note though that it'll return one row per polygon that contains the point, potentially changing the cardinality of your result set!
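For example, a sketch of that variant, using the computed Point/Polygon columns from the setup above and assuming the polygon table's key column is named LocationPolygonId, as in the question:
select pt.ID,
       pt.Longitude,
       pt.Latitude,
       lp.LocationPolygonId
from dbo.Points as pt
outer apply (
    -- no top(): one row per polygon that contains the point
    select pg.LocationPolygonId
    from dbo.LocationPolygon as pg
    where pg.Polygon.STContains(pt.Point) = 1
) as lp;
Points that fall in no polygon come back with a NULL LocationPolygonId; points inside several polygons come back once per containing polygon.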

Related

How to select polygon from table by Latitude and Longitude (Postgis)

There is a polygon table. I need to find the record whose area contains a given point (lat/long).
Example coordinates: 149.14668176, -35.32202098
Could you please help me write a SELECT statement to find the area that contains my point?
SELECT PostGIS_full_version();
postgis_full_version
POSTGIS="2.5.2 r17328" [EXTENSION] PGSQL="96" GEOS="3.5.1-CAPI-1.9.1 r4246" PROJ="Rel. 4.9.3, 15 August 2016" GDAL="GDAL 2.1.2, released 2016/10/24" LIBXML="2.9.4" LIBJSON="0.12.1" LIBPROTOBUF="1.2.1" RASTER
Something like this:
SELECT id,name FROM area_polygon WHERE ST_Within('149.14668176, -35.32202098', geog);
bounds=# \d bounds.area_polygon;
 Column |          Type           | Collation | Nullable |
--------+-------------------------+-----------+----------+
 id     | integer                 |           | not null |
 geog   | geography(Polygon,4283) |           |          |
 name   | text                    |           |          |
I expected:
id | name
------+--------
1 | Alabama
st_within only supports geometry types, which is why you get the error with the query above: you have a geography column type.
You can either cast to geometry:
SELECT id,name
FROM area_polygon
WHERE ST_Within(ST_SetSRID(ST_POINT(149.14668176,-35.32202098),4283), geog::geometry);
Or you can use st_dwithin, with distance set to zero:
SELECT id,name
FROM area_polygon
WHERE ST_DWithin(ST_SetSRID(ST_POINT(149.14668176,-35.32202098),4283)::geography, geog,0);
Note that the order of the coordinates must be lon/lat (and not lat/lon) and I am assuming those coordinates are in your SRID 4283. They have to either match the geog SRID or be transformed to it...
See here for a list of which functions support which arguments.

Spatial Clustering - associate cluster attribute (id) to the geometry that is part of the cluster

I'm having trouble associating a clustered set of geometries with their own properties.
Data
I have a table with a set of geometries:
buildings {
gid integer,
geom geometry(MultiPolygon,4326)
}
And I've run the function ST_ClusterWithin with a certain threshold over the "buildings" table.
From that cluster analysis, I got a table named "clusters":
clusters {
cid Integer,
geom geometry(GeometryCollection,4326)
}
Question
I would like to extract into a table each geometry together with its own cluster information:
clustered_building {
gid Integer
cid Integer
geom geometry(MultiPolygon,4326)
}
gid | cid | geom |
-----+------------+-----------------------+
1 | 1 | multipolygon(...) |
2 | 1 | multipolygon(...) |
3 | 1 | multipolygon(...) |
4 | 2 | multipolygon(...) |
5 | 3 | multipolygon(...) |
6 | 3 | multipolygon(...) |
What I Did (but does not work)
I've been trying to use the two functions ST_GeometryN / ST_NumGeometries to parse each multi-geometry and extract the cluster information with this query, derived from one of the standard examples on the ST_GeometryN manual page.
INSERT INTO clustered_building (cid, c_item , geom)
SELECT sel.cid, n, ST_GeometryN(sel.geom, n) as singlegeom
FROM ( SELECT cid, geom, ST_NumGeometries(geom) as num
FROM clusters") AS sel
CROSS JOIN generate_series(1,sel.num) n
WHERE n <= ST_NumGeometries(sel.geom);
The query takes a few seconds if I force it to use a series of 10:
CROSS JOIN generate_series(1,10)
But it gets stuck when I ask it to generate the series according to the number of items in each GeometryCollection.
Also, this query does not allow me to link each single geometry back to its own features in the buildings table, because I'm losing the "gid".
Could someone please help me?
Thanks,
Stefano
I don't have your data, but using some dummy values, where ids 1, 2 and 3 intersect, as do 4 and 5, you can do something like the following:
WITH
temp (id, geom) AS
(VALUES (1, ST_Buffer(ST_Makepoint(0, 0), 2)),
(2, ST_Buffer(ST_MakePoint(1, 1), 2)),
(3, ST_Buffer(ST_MakePoint(2, 2), 2)),
(4, ST_Buffer(ST_MakePoint(9, 9), 2)),
(5, ST_Buffer(ST_MakePoint(10, 10), 2))),
clusters(geom) as
(SELECT
ST_Makevalid(
ST_CollectionExtract(
unnest(ST_ClusterIntersecting(geom)), 3))
FROM temp
)
SELECT array_agg(temp.id), cl.geom
FROM clusters cl, temp
WHERE ST_Intersects(cl.geom, temp.geom)
GROUP BY cl.geom;
If you wrap the final cl.geom in ST_AsText, you will see something like:
{1,2,3} | MULTIPOLYGON(((2.81905966523328 0.180940334766718,2.66293922460509 -0.111140466039203,2.4142135623731 -0.414213562373094,2.11114046603921 -0.662939224605089,1.81905966523328 -0.819059665233282,1.84775906502257 -0.765366864730179,1.96157056080646 -0.390180644032256,2 0,2 3.08780778723872e-16,2 0,2.39018064403226 0.0384294391935396,2.76536686473018 0.152240934977427,2.81905966523328 0.180940334766718))......
{4,5} | MULTIPOLYGON(((10.8190596652333 8.18094033476672,10.6629392246051 7.8888595339608,10.4142135623731 7.58578643762691,10.1111404660392 7.33706077539491,9.76536686473018 7.15224093497743,9.39018064403226 7.03842943919354,9 7,8.60981935596775 7.03842943919354,8.23463313526982 7.15224093497743,7.8888595339608 7.33706077539491,7.58578643762691 7.5857864376269,7.33706077539491 7.88885953396079,7.15224093497743 8.23463313526982
where you can see that ids 1, 2, 3 belong to the first multipolygon, and 4, 5 to the other.
The general idea is that you cluster the data, and then intersect the returned clusters with the original data, using array_agg to group the ids together, so that the returned MultiPolygons now contain the original ids. The use of ST_CollectionExtract with 3 as the second parameter, in conjunction with unnest, which splits the geometry collection returned by ST_ClusterIntersecting back into rows, returns each contiguous cluster as a (Multi)Polygon. The ST_MakeValid is there because sometimes when you intersect geometries with other related geometries, such as the original polygons with your clustered polygons, you get strange rounding effects and GEOS errors about non-noded intersections and the like.
I answered a similar question on gis.stackexchange recently that you might find useful.

Store multidimensional array in database: relational or multidimensional?

I have read numerous posts along the lines of multidimensional to single dimension, multidimensional database, and so on, but none of the answers helped. I did find a lot of documentation on Google but that only provided background information and didn't answer the question at hand.
I have a lot of strings that are related to one another. They are needed in a PHP script. The structure is hierarchical. Here is an example.
A:
AA:
AAA
AAC
AB
AE:
AEA
AEE:
AEEB
B:
BA:
BAA
BD:
BDC:
BDCB
BDCE
BDD:
BDDA
BE:
BED:
BEDA
C:
CC:
CCB:
CCBC
CCBE
CCC:
CCCA
CCCE
CE
Each indent denotes a new level in the multidimensional array.
The goal is to retrieve an element with PHP by name, together with all its descendants. If, for instance, I query for A, I want to receive an array of strings containing array('A', 'AA', 'AAA', 'AAC', 'AB', 'AE', 'AEA', 'AEE', 'AEEB'). The 'issue' is that queries can also be made for lower-level elements. If I query AEE, I want to get array('AEE', 'AEEB').
As I understand the concept of relational databases, this means that I cannot use a relational database, because there is no common 'key' between elements. The solution I think is possible is assigning a PARENT element to each cell. So, in a table:
CELL | PARENT
A NULL
AA A
AAA AA
AAC AA
AB A
AE A
AEA AE
AEE AE
AEEB AEE
By doing so, I think you should be able to query the given string, and all items that share this parent, and then recursively go down this path until no more items are found. However, this seems rather slow to me because the whole search space would need to be looked through on each level - which is exactly what you don't want in a multidimensional array.
So I am a bit at loss. Note that there are actually around 100,000 strings structured in this way, so speed is important. Luckily the database is static and would not change. How can I store such a data structure in a database without having to deal with long loops and search times? And which kind of database software and data type is best suited for this? It has come to my attention that PostgreSQL is already present on our servers so I'd rather stick with that.
As I said I am new to databases but I am very eager to learn. Therefore, I am looking for an extensive answer that goes into detail and provides advantages and disadvantages of a certain approach. Performance is key. An expected answer would contain the best database type and language for this use case, and also script in that language to build such a structure.
The goal is to retrieve an element with PHP by name and all its descendants.
If that is all you need, you can use a LIKE search
SELECT *
FROM Table1
WHERE CELL LIKE 'AEE%';
With an index beginning with CELL this is a range check, which is fast.
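A sketch of such an index, using the Table1 name from the query above:
-- a prefix LIKE ('AEE%') becomes a range scan on this index
CREATE INDEX idx_cell ON Table1 (CELL);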
If your data doesn't look like that, you can create a path column which looks like a directory path and contains all nodes "on the way/path" from root to the element.
| id | CELL | parent_id | path |
|====|======|===========|==========|
| 1 | A | NULL | 1/ |
| 2 | AA | 1 | 1/2/ |
| 3 | AAA | 2 | 1/2/3/ |
| 4 | AAC | 2 | 1/2/4/ |
| 5 | AB | 1 | 1/5/ |
| 6 | AE | 1 | 1/6/ |
| 7 | AEA | 6 | 1/6/7/ |
| 8 | AEE | 6 | 1/6/8/ |
| 9 | AEEB | 8 | 1/6/8/9/ |
To retrieve all descendants of 'AE' (including itself) your query would be
SELECT *
FROM tree t
WHERE path LIKE '1/6/%';
or (MySQL specific concatenation)
SELECT t.*
FROM tree t
CROSS JOIN tree r -- root
WHERE r.CELL = 'AE'
AND t.path LIKE CONCAT(r.path, '%');
Result:
| id | CELL | parent_id | path |
|====|======|===========|==========|
| 6 | AE | 1 | 1/6/ |
| 7 | AEA | 6 | 1/6/7/ |
| 8 | AEE | 6 | 1/6/8/ |
| 9 | AEEB | 8 | 1/6/8/9/ |
Demo
Performance
I have created 100K rows of fake data on MariaDB with the sequence plugin using the following script:
drop table if exists tree;
CREATE TABLE tree (
`id` int primary key,
`CELL` varchar(50),
`parent_id` int,
`path` varchar(255),
unique index (`CELL`),
unique index (`path`)
);
DROP TRIGGER IF EXISTS `tree_after_insert`;
DELIMITER //
CREATE TRIGGER `tree_after_insert` BEFORE INSERT ON `tree` FOR EACH ROW BEGIN
if new.id = 1 then
set new.path := '1/';
else
set new.path := concat((
select path from tree where id = new.parent_id
), new.id, '/');
end if;
END//
DELIMITER ;
insert into tree
select seq as id
, conv(seq, 10, 36) as CELL
, case
when seq = 1 then null
else floor(rand(1) * (seq-1)) + 1
end as parent_id
, null as path
from seq_1_to_100000
;
DROP TRIGGER IF EXISTS `tree_after_insert`;
-- runtime ~ 4 sec.
Tests
Count all elements under the root:
SELECT count(*)
FROM tree t
CROSS JOIN tree r -- root
WHERE r.CELL = '1'
AND t.path LIKE CONCAT(r.path, '%');
-- result: 100000
-- runtime: ~ 30 ms
Get subtree elements under a specific node:
SELECT t.*
FROM tree t
CROSS JOIN tree r -- root
WHERE r.CELL = '3B0'
AND t.path LIKE CONCAT(r.path, '%');
-- runtime: ~ 30 ms
Result:
| id | CELL | parent_id | path |
|=======|======|===========|=====================================|
| 4284 | 3B0 | 614 | 1/4/11/14/614/4284/ |
| 6560 | 528 | 4284 | 1/4/11/14/614/4284/6560/ |
| 8054 | 67Q | 6560 | 1/4/11/14/614/4284/6560/8054/ |
| 14358 | B2U | 6560 | 1/4/11/14/614/4284/6560/14358/ |
| 51911 | 141Z | 4284 | 1/4/11/14/614/4284/51911/ |
| 55695 | 16Z3 | 4284 | 1/4/11/14/614/4284/55695/ |
| 80172 | 1PV0 | 8054 | 1/4/11/14/614/4284/6560/8054/80172/ |
| 87101 | 1V7H | 51911 | 1/4/11/14/614/4284/51911/87101/ |
PostgreSQL
This also works for PostgreSQL. Only the string concatenation syntax has to be changed:
SELECT t.*
FROM tree t
CROSS JOIN tree r -- root
WHERE r.CELL = 'AE'
AND t.path LIKE r.path || '%';
Demo: sqlfiddle - rextester
How does the search work
If you look at the test example, you'll see that all paths in the result begin with '1/4/11/14/614/4284/'. That is the path of the subtree root with CELL='3B0'. If the path column is indexed, the engine will find them all efficiently, because the index is sorted by path. It's like you would want to find all the words that begin with 'pol' in a dictionary with 100K words. You wouldn't need to read the entire dictionary.
Performance
As others have already mentioned, performance shouldn't be an issue as long as you use a suitable indexed primary key and ensure that relations use foreign keys. In general, an RDBMS is highly optimised to efficiently perform joins on indexed columns, and referential integrity can also provide the advantage of preventing orphans. 100,000 may sound like a lot of rows, but this isn't going to stretch an RDBMS as long as the table structure and queries are well designed.
Choice of RDBMS
One factor in answering this question lies in choosing a database with the ability to perform a recursive query via a Common Table Expression (CTE), which can be very useful to keep the queries compact or essential if there are queries that do not limit the number of descendants being traversed.
Since you've indicated that you are free to choose the RDBMS but it must run under Linux, I'm going to throw PostgreSQL out there as a suggestion since it has this feature and is freely available. (This choice is of course very subjective and there are advantages and disadvantages of each but a few other contenders I'd be tempted to rule out are MySQL since it doesn't currently support CTEs, MariaDB since it doesn't currently support *recursive* CTEs, SQL Server since it doesn't currently support Linux. Other possibilities such as Oracle may be dependent on budget / existing resources.)
SQL
Here's an example of the SQL you'd write to perform your first example of finding all the descendants of 'A':
WITH RECURSIVE rcte AS (
SELECT id, letters
FROM cell
WHERE letters = 'A'
UNION ALL
SELECT c.id, c.letters
FROM cell c
INNER JOIN rcte r
ON c.parent_cell_id = r.id
)
SELECT letters
FROM rcte
ORDER BY letters;
Explanation
The above SQL sets up a "Common Table Expression", i.e. a SELECT to run whenever its alias (in this case rcte) is referenced. The recursion happens because this is referenced within itself. The first part of the UNION picks the cell at the top of the hierarchy. Its descendants are all found by carrying on joining on children in the second part of the UNION until no further records are found.
Demo
The above query can be seen in action on the sample data here: http://rextester.com/HVY63888
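For reference, here's a sketch of the cell table that query assumes (column names inferred from the query; exact types are a guess):
CREATE TABLE cell (
    id             serial PRIMARY KEY,
    letters        text NOT NULL,
    parent_cell_id integer REFERENCES cell (id)  -- NULL for the root rows
);

-- speeds up the recursive join on parent_cell_id
CREATE INDEX cell_parent_idx ON cell (parent_cell_id);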
You absolutely can do that (if I've read your question correctly).
Depending on your RDBMS you might have to choose a different way.
Your basic structure of having a parent is correct.
SQL Server uses a recursive common table expression (CTE) to anchor the start and work down:
https://technet.microsoft.com/en-us/library/ms186243(v=sql.105).aspx
Edit: For Linux use the same in PostgreSQL https://www.postgresql.org/docs/current/static/queries-with.html
Oracle has a different approach, though I think you might be able to use the CTE as well.
https://oracle-base.com/articles/misc/hierarchical-queries
For 100k rows I don't imagine performance will be an issue, though I'd still index PK & FK because that's the right thing to do. If you're really concerned about speed then reading it into memory and building a hash table of linked lists might work.
Pros & cons - it pretty much comes down to readability and suitability for your RDBMS.
It's an already solved problem (again, assuming I've not missed anything) so you'll be fine.
I have two words for you... "RANGE KEYS"
You may find this technique to be incredibly powerful and flexible. You'll be able to navigate your hierarchies with ease, and support variable depth aggregation without the need for recursion.
In the demonstration below, we'll build the hierarchy via a recursive CTE. For larger hierarchies (150K+), I'm willing to share a much faster build if needed.
Since your hierarchies are slow moving (like mine), I tend to store them in a normalized structure and rebuild as necessary.
How about some actual code?
Declare @YourTable table (ID varchar(25),Pt varchar(25))
Insert into @YourTable values
('A' ,NULL),
('AA' ,'A'),
('AAA' ,'AA'),
('AAC' ,'AA'),
('AB' ,'A'),
('AE' ,'A'),
('AEA' ,'AE'),
('AEE' ,'AE'),
('AEEB','AEE')
Declare @Top varchar(25) = null --<< Sets top of Hier Try 'AEE'
Declare @Nest varchar(25) ='|-----' --<< Optional: Added for readability
IF OBJECT_ID('TestHier') IS NOT NULL
Begin
Drop Table TestHier
End
;with cteHB as (
Select Seq = cast(1000+Row_Number() over (Order by ID) as varchar(500))
,ID
,Pt
,Lvl=1
,Title = ID
From @YourTable
Where IsNull(@Top,'TOP') = case when @Top is null then isnull(Pt,'TOP') else ID end
Union All
Select cast(concat(cteHB.Seq,'.',1000+Row_Number() over (Order by cteCD.ID)) as varchar(500))
,cteCD.ID
,cteCD.Pt
,cteHB.Lvl+1
,cteCD.ID
From @YourTable cteCD
Join cteHB on cteCD.Pt = cteHB.ID)
,cteR1 as (Select Seq,ID,R1=Row_Number() over (Order By Seq) From cteHB)
,cteR2 as (Select A.Seq,A.ID,R2=Max(B.R1) From cteR1 A Join cteR1 B on (B.Seq like A.Seq+'%') Group By A.Seq,A.ID )
Select B.R1
,C.R2
,A.ID
,A.Pt
,A.Lvl
,Title = Replicate(@Nest,A.Lvl-1) + A.Title
Into dbo.TestHier
From cteHB A
Join cteR1 B on A.ID=B.ID
Join cteR2 C on A.ID=C.ID
Order By B.R1
Show the entire hierarchy (I added the Title and nesting for readability):
Select * from TestHier Order By R1
Just to state the obvious, the Range Keys are R1 and R2. You may also notice that R1 maintains the presentation sequence. Leaf nodes are where R1=R2 and Parents or rollups define the span of ownership.
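For example, listing only the leaf nodes uses that fact directly:
-- leaves are exactly the rows whose range collapses to a point
Select * from TestHier Where R1 = R2 Order By R1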
To Show All Descendants
Declare @GetChildrenOf varchar(25) = 'AE'
Select A.*
From TestHier A
Join TestHier B on B.ID=@GetChildrenOf and A.R1 Between B.R1 and B.R2
Order By R1
To Show Path
Declare @GetParentsOf varchar(25) = 'AEEB'
Select A.*
From TestHier A
Join TestHier B on B.ID=@GetParentsOf and B.R1 Between A.R1 and A.R2
Order By R1
Clearly these are rather simple illustrations. Over time, I have created a series of helper functions, both scalar and table-valued. I should also state that you should NEVER hard-code range keys in your work, because they will change.
In Summary
If you have a point (or even a series of points), you'll have its range and therefore you'll immediately know where it resides and what rolls into it.
This approach does not depend on the existence of a path or parent column. It is relational not recursive.
Since the table is static, create a materialized view containing just the leaves to make searching faster:
create materialized view leave as
select cell
from (
select cell,
lag(cell,1,cell) over (order by cell desc) not like cell || '%' as leave
from t
) s
where leave;
table leave;
cell
------
CCCE
CCCA
CCBE
CCBC
BEDA
BDDA
BDCE
BDCB
BAA
AEEB
AEA
AB
AAC
AAA
A materialized view is computed once, at creation, not at each query like a plain view. Create an index to speed it up:
create index cell_index on leave(cell);
If the source table is eventually altered, just refresh the view:
refresh materialized view leave;
The search function receives text and returns a text array:
create or replace function get_descendants(c text)
returns text[] as $$
select array_agg(distinct l order by l)
from (
select left(cell, generate_series(length(c), length(cell))) as l
from leave
where cell like c || '%'
) s;
$$ language sql immutable strict;
Pass the desired match to the function:
select get_descendants('A');
get_descendants
-----------------------------------
{A,AA,AAA,AAC,AB,AE,AEA,AEE,AEEB}
select get_descendants('AEE');
get_descendants
-----------------
{AEE,AEEB}
Test data:
create table t (cell text);
insert into t (cell) values
('A'),
('AA'),
('AAA'),
('AAC'),
('AB'),
('AE'),
('AEA'),
('AEE'),
('AEEB'),
('B'),
('BA'),
('BAA'),
('BD'),
('BDC'),
('BDCB'),
('BDCE'),
('BDD'),
('BDDA'),
('BE'),
('BED'),
('BEDA'),
('C'),
('CC'),
('CCB'),
('CCBC'),
('CCBE'),
('CCC'),
('CCCA'),
('CCCE'),
('CE');
For your scenario, I would suggest using the Nested Sets approach in PostgreSQL. It models hierarchical (XML-like) structures with plain relational queries.
Performance
If you index the lft and rgt columns, then you don't require recursive queries to get the data. Even though the data seems huge, retrieval will be very fast.
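A sketch of that index, using the tree table from the sample below:
-- lets the BETWEEN range check on lft/rgt be answered from the index
CREATE INDEX tree_lft_rgt_idx ON tree (lft, rgt);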
Sample
/* 1 A:
   2   AA:
   3     AAA
   4     AAC
   5   AB
   6   AE:
   7     AEA
   8     AEE:
   9       AEEB
  10 B:
*/
CREATE TABLE tree(id int, CELL varchar(4), lft int, rgt int);
INSERT INTO tree
("id", CELL, "lft", "rgt")
VALUES
(1, 'A', 1, 9),
(2, 'AA', 2, 4),
(3, 'AAA', 3, 3),
(4, 'AAC', 4, 4),
(5, 'AB', 5, 5),
(6, 'AE', 6, 9),
(7, 'AEA', 7, 7),
(8, 'AEE', 8, 8),
(9, 'AEEB', 9, 9)
;
SELECT hc.*
FROM tree hp
JOIN tree hc
ON hc.lft BETWEEN hp.lft AND hp.rgt
WHERE hp.id = 2
Demo
Querying using Nested Sets approach

Unexpected result from CASE referencing another expression

The following statement always returns the result from st_area(st_buffer(polygon,100)).
select st_Area(polygon) as area,
case when area>100000 then st_area(st_buffer(polygon,500))
else st_area(st_buffer(polygon,100))
end from polygons limit 10;
area | st_area
------------------+------------------
383287.287473659 | 723738.615102036
47642.5395246768 | 192575.823383778
45546.753026985 | 174122.420564731
435204.455923533 | 725419.735987631
839954.564052786 | 1268251.88626391
315213.27742828 | 630424.785088617
966620.061916605 | 1447647.57269461
38446.6010009923 | 151584.647252579
82576.1182937309 | 238095.988431594
321682.125463567 | 695462.262796463
(10 rows)
st_area should have been the result of st_area(st_buffer(polygon,500)) when area > 100000, as shown below:
area | st_area
------------------+------------------
383287.287473659 | 2702203.34758147
47642.5395246768 | 192575.823383778
45546.753026985 | 174122.420564731
435204.455923533 | 2507469.89929028
839954.564052786 | 3568866.96452707
315213.27742828 | 2453576.33477712
966620.061916605 | 3953365.12876066
38446.6010009923 | 151584.647252579
82576.1182937309 | 238095.988431594
321682.125463567 | 2628693.69179652
(10 rows)
Can someone explain?
It doesn't become completely clear from the question (yet), but my educated guess is you want this:
SELECT st_Area(polygon) AS area -- or pick some other name!
, CASE WHEN st_Area(polygon) > 100000
THEN st_area(st_buffer(polygon,500))
ELSE st_area(st_buffer(polygon,100)) END AS st_area
FROM polygons
LIMIT 10;
You cannot reference the column alias (name of the output column) in another item of the same SELECT list. You can only reference input column names. So you have to repeat the expression or use a subquery:
SELECT area
, CASE WHEN area > 100000
THEN st_area(st_buffer(polygon,500))
ELSE st_area(st_buffer(polygon,100)) END AS st_area
FROM (SELECT st_Area(polygon) AS area, polygon FROM polygons LIMIT 10) sub;
Normally you should get a syntax error immediately. Obviously, there is another column named area in your base table. Hence the confusion. Additional wisdom to take away from this:
It's better to use a name different from any input column when attaching an alias to an output column.
Always include table definitions in questions. Clarifies a lot.

SQL magic - query shouldn't take 15 hours, but it does

Ok, so I have one really monstrous MySQL table (900k records, 180 MB total), and I want to extract from subgroups the records with the highest date_updated and calculate a weighted average in each group. The calculation runs for ~15 hours, and I have a strong feeling I'm doing it wrong.
First, monstrous table layout:
category
element_id
date_updated
value
weight
source_prefix
source_name
The only key here is on element_id (BTREE, ~8k unique elements).
And the calculation process:
Make a hash for each group and subgroup:
CREATE TEMPORARY TABLE `temp1` (INDEX ( `ds_hash` ))
SELECT `category`,
`element_id`,
`source_prefix`,
`source_name`,
`date_updated`,
`value`,
`weight`,
MD5(CONCAT(`category`, `element_id`, `source_prefix`, `source_name`)) AS `subcat_hash`,
MD5(CONCAT(`category`, `element_id`, `date_updated`)) AS `cat_hash`
FROM `bigbigtable` WHERE `date_updated` <= '2009-04-28'
I really don't understand this fuss with hashes, but it worked faster this way. Dark magic, I presume.
Find the maximum date for each subgroup:
CREATE TEMPORARY TABLE `temp2` (INDEX ( `subcat_hash` ))
SELECT MAX(`date_updated`) AS `maxdate` , `subcat_hash`
FROM `temp1`
GROUP BY `subcat_hash`;
Join temp1 with temp2 to find the weighted average values for categories:
CREATE TEMPORARY TABLE `valuebycats` (INDEX ( `category` ))
SELECT `temp1`.`element_id`,
`temp1`.`category`,
`temp1`.`source_prefix`,
`temp1`.`source_name`,
`temp1`.`date_updated`,
AVG(`temp1`.`value`) AS `avg_value`,
SUM(`temp1`.`value` * `temp1`.`weight`) / SUM(`weight`) AS `rating`
FROM `temp1` LEFT JOIN `temp2` ON `temp1`.`subcat_hash` = `temp2`.`subcat_hash`
WHERE `temp2`.`subcat_hash` = `temp1`.`subcat_hash`
AND `temp1`.`date_updated` = `temp2`.`maxdate`
GROUP BY `temp1`.`cat_hash`;
(Now that I've looked through it and written it all down, it seems to me that I should use an INNER JOIN in that last query, to avoid a 900k×900k temp table; see the sketch below.)
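I think the rewritten third query would look like this (untested; only the join changes):
CREATE TEMPORARY TABLE `valuebycats` (INDEX ( `category` ))
SELECT `temp1`.`element_id`,
       `temp1`.`category`,
       `temp1`.`source_prefix`,
       `temp1`.`source_name`,
       `temp1`.`date_updated`,
       AVG(`temp1`.`value`) AS `avg_value`,
       SUM(`temp1`.`value` * `temp1`.`weight`) / SUM(`weight`) AS `rating`
FROM `temp1`
INNER JOIN `temp2`
    ON `temp1`.`subcat_hash` = `temp2`.`subcat_hash`
   AND `temp1`.`date_updated` = `temp2`.`maxdate`
GROUP BY `temp1`.`cat_hash`;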
Still, is there a normal way to do so?
UPD: the reference picture has been removed (dead ImageShack link).
UPD: EXPLAIN for proposed solution:
+----+-------------+-------+------+---------------+------------+---------+--------------------------------------------------------------------------------------+--------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------+---------------+------------+---------+--------------------------------------------------------------------------------------+--------+----------+----------------------------------------------+
| 1 | SIMPLE | cur | ALL | NULL | NULL | NULL | NULL | 893085 | 100.00 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | next | ref | prefix | prefix | 1074 | bigbigtable.cur.source_prefix,bigbigtable.cur.source_name,bigbigtable.cur.element_id | 1 | 100.00 | Using where |
+----+-------------+-------+------+---------------+------------+---------+--------------------------------------------------------------------------------------+--------+----------+----------------------------------------------+
Using hashes is one of the ways in which a database engine can execute a join. It should be very rare that you'd have to write your own hash-based join; this certainly doesn't look like one of those cases, with a 900k-row table and some aggregates.
Based on your comment, this query might do what you are looking for:
SELECT cur.source_prefix,
cur.source_name,
cur.category,
cur.element_id,
MAX(cur.date_updated) AS DateUpdated,
AVG(cur.value) AS AvgValue,
SUM(cur.value * cur.weight) / SUM(cur.weight) AS Rating
FROM eev0 cur
LEFT JOIN eev0 next
ON next.date_updated < '2009-05-01'
AND next.source_prefix = cur.source_prefix
AND next.source_name = cur.source_name
AND next.element_id = cur.element_id
AND next.date_updated > cur.date_updated
WHERE cur.date_updated < '2009-05-01'
AND next.category IS NULL
GROUP BY cur.source_prefix, cur.source_name,
cur.category, cur.element_id
The GROUP BY performs the calculations per source+category+element.
The JOIN is there to filter out old entries. It looks for later entries, and then the WHERE statement filters out the rows for which a later entry exists. A join like this benefits from an index on (source_prefix, source_name, element_id, date_updated).
There are many ways of filtering out old entries, but this one tends to perform reasonably well.
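A sketch of that index (using the eev0 table name from the query above):
-- covers both join-key lookups and the date comparison
CREATE INDEX idx_entry
    ON eev0 (source_prefix, source_name, element_id, date_updated);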
Ok, so 900K rows isn't a massive table; it's reasonably big, but your queries really shouldn't be taking that long.
First things first, which of the 3 statements above is taking the most time?
The first problem I see is with your first query: your WHERE clause doesn't include an indexed column, which means it has to do a full scan of the entire table.
Create an index on the date_updated column, then run the query again and see what that does for you.
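That index might look like this (a sketch, using the bigbigtable name from your first query):
-- lets the date_updated <= '...' filter use an index range scan
CREATE INDEX idx_date_updated ON `bigbigtable` (`date_updated`);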
If you don't need the hashes and are only using them for the dark magic, then remove them completely.
Edit: Someone with more SQL-fu than me will probably reduce your whole set of logic into one SQL statement without the use of the temporary tables.
Edit: My SQL is a little rusty, but are you joining twice in the third SQL statement? Maybe it won't make a difference, but shouldn't it be:
SELECT temp1.element_id,
temp1.category,
temp1.source_prefix,
temp1.source_name,
temp1.date_updated,
AVG(temp1.value) AS avg_value,
SUM(temp1.value * temp1.weight) / SUM(weight) AS rating
FROM temp1 LEFT JOIN temp2 ON temp1.subcat_hash = temp2.subcat_hash
WHERE temp1.date_updated = temp2.maxdate
GROUP BY temp1.cat_hash;
or
SELECT temp1.element_id,
temp1.category,
temp1.source_prefix,
temp1.source_name,
temp1.date_updated,
AVG(temp1.value) AS avg_value,
SUM(temp1.value * temp1.weight) / SUM(weight) AS rating
FROM temp1, temp2
WHERE temp2.subcat_hash = temp1.subcat_hash
AND temp1.date_updated = temp2.maxdate
GROUP BY temp1.cat_hash;