SQL - Lat/Lng Distance Query - Returns nothing if distance = 0

If my input $latitude / $longitude values happen to exactly match the values stored in the DB, nothing is returned (e.g. searching for yourself)... I assume because the distance will be 0.
I'm using this query:
SELECT *,
(((acos(sin((".$latitude."*pi()/180)) * sin((`lat`*pi()/180))
+cos((".$latitude."*pi()/180)) * cos((`lat`*pi()/180))
* cos(((".$longitude."- `lng`)*pi()/180))))*180/pi())*60*1.1515)
AS distance
FROM ...
LEFT JOIN ...
ON ...
WHERE 'cat_id' = '$cat_id'
HAVING distance <= $radius
ORDER BY distance ASC
One workaround I found was to make the input less accurate by reducing the decimal precision of the lat/lng values, but that's not really a solution.
How can I alter the query so that the row is still returned if the distance is 0?

The HAVING clause is killing the output. Change the query so that its condition is part of the WHERE:
...
WHERE `cat_id` = '$cat_id' AND distance <= $radius
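As a concrete version of that change, here is a sketch of the full query (assuming MySQL, with $latitude, $longitude, $cat_id and $radius interpolated as in the question). Because MySQL does not let a WHERE clause reference a SELECT alias, the distance expression is wrapped in a derived table so distance can be filtered on and rows where it is 0 are still returned:
SELECT *
FROM (
    SELECT *,
           ((ACOS(SIN($latitude * PI() / 180) * SIN(`lat` * PI() / 180)
                + COS($latitude * PI() / 180) * COS(`lat` * PI() / 180)
                * COS(($longitude - `lng`) * PI() / 180)) * 180 / PI()) * 60 * 1.1515) AS distance
    FROM ...                      -- same FROM ... LEFT JOIN ... ON ... as above
    WHERE `cat_id` = '$cat_id'    -- backticks: column name, not a string literal
) AS results
WHERE distance <= $radius         -- rows with distance = 0 are kept
ORDER BY distance ASC;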

Related

Select records by their closest point and order by ascending distance

I have a table of listings, each of which has multiple listing_coordinates records. Given a specific point (for example, ST_Point('-118.1885', '33.7775')), I want to retrieve all listings that have a coordinate within 25 kilometres of that point and order them by their closest point. In addition, I'd like access to that distance in the result too.
listing_coordinates.coordinate is of type geography(Point, 4326) and has a spatial index of (coordinate)::geography on it.
SELECT
    "listings".*,
    (
        SELECT
            ST_Distance(coordinate, ST_Point('-118.1885', '33.7775')) AS distance
        FROM
            "listing_coordinates"
        WHERE
            "listing_coordinates"."listing_id" = "listings"."id"
        ORDER BY
            "distance" ASC
        LIMIT 1
    ) AS "distance"
FROM
    "listings"
WHERE EXISTS (
    SELECT
        *
    FROM
        "listing_coordinates"
    WHERE
        "listings"."id" = "listing_coordinates"."listing_id"
        AND ST_DWithin(ST_Point('-118.1885', '33.7775'), listing_coordinates.coordinate, 25000, FALSE)
)
ORDER BY
    "distance" ASC
LIMIT 16 OFFSET 0
The performance of this query isn't great: even locally it hovers around 300-400 ms. Removing the WHERE EXISTS clause doesn't appear to have an impact either.
I suspect I'm approaching this problem incorrectly, rather than missing or not using an index correctly.
What would be the best way to query listings that have a coordinate within a certain radius and order them by their closest point, ascending?
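For comparison, one alternative formulation (a sketch, assuming PostgreSQL with PostGIS; table and column names are taken from the question) uses a LATERAL join so the nearest coordinate and its distance are computed once per listing, with the 25 km filter applied in the same subquery:
SELECT l.*, nearest.distance
FROM listings AS l
JOIN LATERAL (
    SELECT ST_Distance(c.coordinate,
                       ST_SetSRID(ST_Point(-118.1885, 33.7775), 4326)::geography) AS distance
    FROM listing_coordinates AS c
    WHERE c.listing_id = l.id
      AND ST_DWithin(c.coordinate,
                     ST_SetSRID(ST_Point(-118.1885, 33.7775), 4326)::geography,
                     25000)
    ORDER BY distance
    LIMIT 1
) AS nearest ON TRUE
ORDER BY nearest.distance
LIMIT 16;
Whether this is actually faster depends on the data and the plans; comparing EXPLAIN ANALYZE output for both forms, and checking that the spatial index is used for the ST_DWithin filter, is the practical test.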

Filtering table with Spark SQL to rows with minimum value of column

I apologize in advance for what has got to be a very simple question. But I'm trying to filter a table down to only the rows that contain the minimum distance, where distance itself is calculated from another table.
I've tried:
SELECT * FROM
(SELECT
shape_id,
ST_Distance(ST_Point(-87.65751647949219, 41.89625930786133), geom) AS distance
FROM routes) a
WHERE distance = (SELECT MIN(distance) FROM a)
LIMIT 50
I've also tried:
"""SELECT shape_id, ST_Distance(ST_Point(-87.65751647949219, 41.89625930786133), geom) AS distance
FROM routes a
WHERE distance = (select MIN(distance) from a)
LIMIT 50
"""
But I always get:
Table or view not found: a
Can someone please help me with the syntax here?
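One formulation that should avoid the scoping problem (a sketch, assuming the ST_* functions are registered in the Spark session exactly as in the attempts above): a common table expression is visible both to the outer query and to the scalar subquery, unlike the derived-table alias a:
WITH d AS (
  SELECT
    shape_id,
    ST_Distance(ST_Point(-87.65751647949219, 41.89625930786133), geom) AS distance
  FROM routes
)
SELECT shape_id, distance
FROM d
WHERE distance = (SELECT MIN(distance) FROM d)
LIMIT 50
A window function such as MIN(distance) OVER () computed in a subquery would be an alternative that avoids referencing d twice.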

SQL Network Length Calculation Lon/Lat

I currently have an Azure PostgreSQL database containing OpenStreetMap data, and I was wondering if there's a SQL query that can get the total distance of a way using the lat/longs of the nodes the way uses.
I would like the SQL query to return way_id and distance.
My current approach is to use C# to download all the ways and all the nodes into dictionaries (with their ids as the keys). I then loop through all the ways, group the nodes that belong to each way, and use their lat/longs (value divided by 10000000) to calculate the distance. This part works as expected, but I'd rather it be done on the server.
The SQL I have attempted is below, but I'm stuck on calculating the total distance per way based on the lat/longs.
Update: the PostGIS extension is installed.
SELECT current_ways.id AS wId,
       node_id,
       CAST(latitude AS float) / 10000000 AS lat,
       CAST(longitude AS float) / 10000000 AS lon
FROM public.current_ways
JOIN current_way_nodes AS cwn ON current_ways.id = cwn.way_id
JOIN current_nodes AS cn ON cwn.node_id = cn.id
*output*
wId node_id latitude longitude
2 1312575 51.4761127 -3.1888786
2 1312574 51.4759647 -3.1874216
2 1312573 51.4759207 -3.1870016
2 1213756 51.4758761 -3.1865223
3 ....
*desired_output*
way_id length
2 x.xxx
3 ...
**Tables**
current_nodes
id
latitude
longitude
current_ways
id
current_way_nodes
way_id
node_id
sequence_id
It would be much simpler if you also had the geometry in your table, i.e. the actual points instead of just the coordinates, or, even better, the actual lines.
That being said, here is a query to get what you are looking for:
SELECT w.way_id,
       ST_Length(                      -- compute the length
           ST_MakeLine(                -- of a new line
               ST_SetSRID(             -- made of an aggregation of new points
                   ST_MakePoint(CAST(longitude AS float) / 10000000,
                                CAST(latitude AS float) / 10000000),  -- created from the raw longitude/latitude fields
                   4326)               -- specify the projection
               ORDER BY w.sequence_id  -- order the points using the given sequence
           )::geography                -- cast to geography so the length is in meters, not degrees
       ) AS length_m
FROM current_way_nodes w
JOIN current_nodes n ON w.node_id = n.id
GROUP BY w.way_id;
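Following the first point above, here is a sketch of materializing the geometry once (assuming PostGIS, per the update in the question; the geom column name and index name are illustrative, not part of the original schema), so later length or distance queries can use and index it directly:
-- Add a point geometry column and fill it from the integer lat/lon fields.
ALTER TABLE current_nodes
  ADD COLUMN geom geometry(Point, 4326);

UPDATE current_nodes
   SET geom = ST_SetSRID(
                ST_MakePoint(CAST(longitude AS float) / 10000000,
                             CAST(latitude AS float) / 10000000),
                4326);

-- Spatial index so ST_DWithin / ST_Distance style queries can use it.
CREATE INDEX current_nodes_geom_idx ON current_nodes USING GIST (geom);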

Data retrieving by Latitude longitude matching from both tables in mysql

I have two tables, A and B.
Table A has the columns hotelcode_id, latitude, longitude.
Table B has the columns latitude, longitude.
The requirement: I need to retrieve hotelcode_id where the latitude and longitude match between the two tables.
I have designed the following query, but it still has performance problems:
SELECT a.hotelcode_id, a.latitude,b.latitude,b.longitude,b.longitude
FROM A
JOIN B
ON a.latitude like concat ('%', b.latitude, '%') AND a.longitude like concat ('%', b.longitude, '%')
I also designed another query, but I could not get accurate data from it.
That query runs for a very long time and still does not return the data.
NOTE:
Table A has 150k records.
Table B has 250k records.
I have set DECIMAL(10,6) for the latitude and longitude columns in both tables.
I have already done the following, but query performance is still a problem:
checked the indexes using EXPLAIN
hash-partitioned the tables
I think the wildcard characters prevent the index from being used.
Also, LIKE performance in MySQL SELECT queries is very poor.
Is there any other solution that avoids the wildcard and LIKE issues in the SELECT query?
If you are sure that the numeric values of the LAT/LON pairs are equal across the two tables, the simple approach would be:
SELECT a.hotelcode_id, a.latitude, b.latitude, a.longitude, b.longitude
FROM A AS a
JOIN B AS b
  ON a.latitude = b.latitude
 AND a.longitude = b.longitude
If there is some inaccuracy in the data, you may want to define the maximum deviation (here 3.6 arc seconds) that you would still regard as "the same place", e.g.:
SELECT a.hotelcode_id, a.latitude, b.latitude, a.longitude, b.longitude
FROM A AS a
JOIN B AS b
  ON ABS(a.latitude - b.latitude) < 0.001
 AND ABS(a.longitude - b.longitude) < 0.001
Bear in mind that in the second case the actual distance (in km) represented by a given longitude difference is not the same at every latitude: the higher the latitude, the smaller the distance.
Also review the sizing of the LON and LAT columns ... you know that (usually ...)
-180 <= LON <= 180
-90 <= LAT <= 90
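For the performance side of the question, here is a sketch of the tolerance join written with range predicates instead of ABS() or LIKE (assuming MySQL and indexes on the latitude/longitude columns of both tables; the 0.001-degree tolerance is taken from the answer above). Range predicates are sargable, so the optimizer can actually use those indexes:
SELECT a.hotelcode_id, a.latitude, b.latitude, a.longitude, b.longitude
FROM A AS a
JOIN B AS b
  ON  b.latitude  BETWEEN a.latitude  - 0.001 AND a.latitude  + 0.001
  AND b.longitude BETWEEN a.longitude - 0.001 AND a.longitude + 0.001;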

Should I create an index on the columns if their values are used in functions? (SQLite)

I am working with a huge database and trying to optimize it.
I was wondering whether it would make any difference to index the columns that are used as criteria in the query, but only through a function.
For example, I have this GPS coordinate table:
Node(#id, lat, lng)
and this query:
SELECT * FROM Node WHERE distance( lat, lng, $lat, $lng ) < $threshold
Would creating an index on lat and lng provide any optimization? (I'm working with SQLite.)
Thanks
Edit: I just thought about the same question, but where I do the calculation directly, like:
SELECT * FROM Node WHERE (lat-$lat)*(lat-$lat) + (lng-$lng)*(lng-$lng) < $threshold
For queries, you would absolutely see a performance benefit.
But with a jumbo database you will also encounter a performance hit on insertions.
The database will need to calculate the distance for each node in your example and will not benefit from an index. If, however, you index the lat and lng columns and use them first to eliminate all nodes where abs(lat - $lat) > $threshold or abs(lng - $lng) > $threshold, you could see increased performance, since the database can use the index to eliminate a number of records before calculating the distance for the remaining ones.
The query would look something like this:
SELECT * FROM Node
WHERE lat >= $lat - $threshold
AND lat <= $lat + $threshold
AND lng >= $lng - $threshold
AND lng <= $lng + $threshold
AND distance( lat, lng, $lat, $lng ) < $threshold;
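If you go the bounding-box route shown above, here is a sketch of the supporting indexes (assuming plain SQLite B-tree indexes; SQLite's R*Tree module would be the more specialized alternative for this kind of range query):
-- SQLite will typically pick one of these for the range scan and evaluate the
-- remaining conditions, including distance(), on the surviving rows.
CREATE INDEX IF NOT EXISTS idx_node_lat ON Node(lat);
CREATE INDEX IF NOT EXISTS idx_node_lng ON Node(lng);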