I have a simple table containing 2 float columns representing X and Y coordinates, with a non-clustered index on each of those 2 columns. The table holds about 5 million data points, which I want to group into a custom grid using SQL like this:
SELECT COUNT(X) Count, AVG(X) CenterX, AVG(Y) CenterY
FROM DataPoints
GROUP BY FLOOR(X / 5), FLOOR(Y / 5)
On a test case I split a data set with 815,000 points into a grid where each point gets its own grid cell. It took SQL Server 2012 26,000 milliseconds to return the results, which is definitely too long. I made a C# implementation of the same grouping using LINQ on a simple point array, and there it took only 3,450 ms! I also wrapped the SQL in a stored procedure for some speed-up, but it still takes 26-30 seconds to calculate the grid cells.
I can't understand why it takes SQL Server that long to calculate those groups. I know it might take a while to compute the grid cell index for all 815,000 points, but 7 times longer than a simple C# program can't be a realistic result.
I also tried to use spatial types to calculate the grid, but those solutions are even slower. Using a geometry column and a spatial index (GEOMETRY_AUTO_GRID), the built-in sp_help_spatial_geometry_histogram needs 2:40 min to calculate 4 grid cells containing the data.
Does anybody have an idea how to speed up such a simple SQL query? In the future this data will be sent to a map in the browser and there will be a lot of requests, so <100 ms would be the ultimate goal.
What does the execution plan tell you?
Why is this slow? I suggest you put a single nonclustered index on X and Y together (not separate indexes). Is the result better then?
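A minimal sketch of that suggestion against the DataPoints table from the question; the index name is just an illustration, and whether it actually helps will show up in the execution plan:
-- one composite index covering both coordinates instead of two separate ones,
-- so the GROUP BY can be answered from a scan of the narrower index
CREATE NONCLUSTERED INDEX IX_DataPoints_X_Y
    ON DataPoints (X, Y);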
I would like to know if it is possible to limit the BigQuery query size when running a query through the web user interface.
My idea is just to test the query, but instead of querying all my tables I would like to query only a part of them, for instance a certain number of rows.
LIMIT does not reduce my query cost, so the idea is to find a function similar to "row_number" or "fetch".
Sorry I'm a marketer and not a developer, so thank you in advance for your kind help.
How to limit BigQuery query size for testing ... ?
1 - Try to minimize the number of tables involved in your testing
In your query there are 60+ tables involved, covering the dates from 2016-12-11 up to now:
SELECT <fields_list> FROM
TABLE_DATE_RANGE([XXX:85801771.ga_sessions_],
TIMESTAMP('20161211'),
TIMESTAMP('20170315'))
Instead, you can use the same day as the start and end of the time range, thus drastically reducing the number of involved tables (down to just one) and the overall scan size. For example:
SELECT <fields_list> FROM
TABLE_DATE_RANGE([XXX:85801771.ga_sessions_],
TIMESTAMP('20161211'),
TIMESTAMP('20161211'))
2 - Minimize the number of rows. The ability to do so really depends on how your table is loaded with data. If the table is loaded incrementally, you can use so-called table decorators.
Note: this technique only works with data from the last 7 days.
For example, the query below will scan only the data that was in the table one hour ago (the so-called snapshot decorator):
SELECT <fields_list> FROM [XXX:85801771.ga_sessions_20170212#-3600000]
This works well with the most recent day's table, especially at the start of the day when the table is not big yet.
To limit things further, you can use the version below (the so-called range decorator), which gives you only the data added between one hour and half an hour ago:
SELECT <fields_list> FROM [XXX:85801771.ga_sessions_20170212#-3600000--1800000]
Finally, #0 is a special case that references the oldest possible snapshot of the table: either 7 days in the past, or the table's creation time if the table is less than 7 days old. For example
SELECT <fields_list> FROM [XXX:85801771.ga_sessions_20170210#0]
3 - Test against a sampled table. If you expect to experiment with your query again and again, you can first prepare a downsized version of your table with just as many rows as you need, applying whatever sampling logic fits your business case. To limit the number of rows you can use the LIMIT clause; to get random rows you can use the RAND function (see the sketch below).
After the sampled table is prepared, run all your queries against it until you have the final version; after that, you can run it against your original table(s).
And by the way, to create the sampled table you need to set a destination table under Options in the Web UI.
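A rough sketch of that sampling step in legacy SQL, assuming the same ga_sessions_ tables as above; the sampling ratio and row cap are arbitrary illustrations, and the result goes into whatever destination table you configure under Options:
SELECT <fields_list>
FROM [XXX:85801771.ga_sessions_20161211]
WHERE RAND() < 0.1   # keep roughly 10% of the rows (adjust the ratio as needed)
LIMIT 100000         # hard cap on the number of sampled rows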
I have a dataset that I've loaded into BigQuery; it consists of 3 separate tables with a common identifier in each of the files.
When I set up my project in Tableau I performed an inner join on two of the tables. I set the connection up as an extract and not live.
There's some geo info in my file, lats and longs. When I drag Lat to the Rows section of my worksheet it takes an eternity to perform that task; it has currently been processing for 18 minutes and counting.
Is there some other way that I can take a random sample of my data for working on it rather than having to wait for each query to process? My data is not even that big, it's around 1M rows.
I've found Tableau to bog down quite a bit long before 1 million rows, and I suspect the join compounds the problem for you.
Aggregating as much as possible in BigQuery itself, before making the extract, is your friend. The random sample is a good idea, too. You could try:
SELECT
*
FROM
([subquery joining your tables])
WHERE RAND() < 0.05 # or whatever gives an acceptable sample size
I'm trying to find a way to let my database support fast location-based searches (for example, all items that lie within a certain distance of some geopoint (LAT, LON)). I guess the brute-force solution, which calculates the distance between every point in the database and the query point, probably won't work for a large dataset, so some kind of indexing seems necessary. I'm not sure if there are any existing standard ways to do this (I know they are out there, but Google failed me), but here is a method (or more like a hack?) that I think might work:
Calculate a value from (LAT, LON) and store it in an indexed column, for example something like floor(LAT / 10) * 10 * 100 + floor(LON / 10) * 10. Each time a query arrives, we first calculate this value for the query point and find all the corresponding rows, and then calculate the Euclidean distances between those points and the query point.
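A minimal sketch of that idea with a hypothetical places table; the table, column, and index names are only illustrations, and the grid key uses the formula from above (a query point near a cell boundary would also need to check the neighbouring cells):
-- hypothetical table: one row per item, plus a precomputed grid key
CREATE TABLE places (
    id       INT PRIMARY KEY,
    lat      FLOAT,
    lon      FLOAT,
    grid_key INT   -- FLOOR(lat / 10) * 10 * 100 + FLOOR(lon / 10) * 10, maintained on insert/update
);
CREATE INDEX idx_places_grid_key ON places (grid_key);

-- candidate lookup for a query point (:qlat, :qlon); exact distances are then
-- computed only for the rows that fall in the same grid cell
SELECT id, lat, lon
FROM places
WHERE grid_key = FLOOR(:qlat / 10) * 10 * 100 + FLOOR(:qlon / 10) * 10;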
I have a table with around 5 million rows, and each row has 10 columns representing 10 dimensions.
I would like, when a new input comes in, to search the table and return the closest rows using the Manhattan distance.
The distance is the sum of abs(Ai-Aj)+abs(Bi-Bj)...
The problem is that at the moment, if I run a query, it does a full scan of the entire table to calculate the distance from every row, and then sorts the rows to find the top X.
Is there a way to speed the process and make the query more efficient?
I looked online at the distance function for SDO_GEOMETRY, but I couldn't find one for more than 4 dimensions.
Thank you
If you are inserting a point A and you want to look for points that are within a neighbourhood of radius r (i.e., less than r away, in any metric), you can do a really simple query:
select x1, x2, ..., xn
from points
where x1 between a1 - r and a1 + r
and x2 between a2 - r and a2 + r
...
and xn between an - r and an + r
...where A = (a1, a2, ..., an), to find a bound. If you have an index over all x1, ..., xn fields of points, then this query shouldn't require a full scan. Now, this result may include points that are outside the neighbourhood (i.e., the bits in the corners), but it is an easy win to find an appropriate subset: you can now check the exact distance against the records returned by this subquery, rather than against every point in your table.
You may be able to refine this query further because, with the Manhattan metric, a neighbourhood is square shaped (although at 45 degrees to the box above), and squares are relatively easy to work with, even in 10 dimensions. However, the more complicated logic required may be more of an overhead than an optimisation, ultimately.
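A rough sketch of that two-step approach, shown for the first three dimensions only (extend the same pattern to all 10 columns); the column names, bind variables, and top-10 cut-off are illustrations, and FETCH FIRST requires Oracle 12c or later:
SELECT *
FROM (
    SELECT p.*,
           ABS(p.x1 - :a1) + ABS(p.x2 - :a2) + ABS(p.x3 - :a3) AS manhattan_dist
    FROM points p
    WHERE p.x1 BETWEEN :a1 - :r AND :a1 + :r   -- cheap bounding-box filter that can use the index
      AND p.x2 BETWEEN :a2 - :r AND :a2 + :r
      AND p.x3 BETWEEN :a3 - :r AND :a3 + :r
)
WHERE manhattan_dist <= :r                     -- exact Manhattan check on the small candidate set
ORDER BY manhattan_dist
FETCH FIRST 10 ROWS ONLY;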
I suggest using a function-based index. You need this distance calculated, so pre-calculate it using a function-based index.
You may want to read the following question and its links. A function-based index creates a hidden column for you; this hidden column will hold the Manhattan distance, so sorting will be easier.
Thanks to @Xophmeister's comment: a function-based index will not help you for an arbitrary point, and I do not know of any SQL function that helps here. But if you are willing to use a machine learning / data mining algorithm:
I suggest clustering your 5 million rows using k-means clustering. Let's say you end up with 1,000 cluster centers. Put these cluster centers into another table.
By definition of the clustering, your points are assigned to cluster centers, so you know which points are nearest to each center, say
cluster (1) contains 20,000 points, ... cluster (987) contains 10,000 points, ...
Your arbitrary point will be near one cluster. Say you find that your point is nearest to cluster 987: run your SQL using only the points that belong to that cluster center, i.e. those 10,000 points.
You need to add a few tables/columns to your schema to make this effective. If your 5,000,000 rows change continuously, you will need to re-run the k-means clustering as they change; but if the values are fairly constant, one clustering per week or per month will be enough.
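A minimal sketch of that lookup, again shown for three of the ten dimensions; the cluster_centers table, the cluster_id column on points, and the bind variables are all hypothetical, with the centroids and assignments assumed to be produced offline by k-means (FETCH FIRST requires Oracle 12c or later):
-- Step 1: find the centroid closest to the query point
SELECT cluster_id
FROM cluster_centers
ORDER BY ABS(c1 - :a1) + ABS(c2 - :a2) + ABS(c3 - :a3)
FETCH FIRST 1 ROW ONLY;

-- Step 2: rank only the points assigned to that cluster
SELECT p.*
FROM points p
WHERE p.cluster_id = :nearest_cluster
ORDER BY ABS(p.x1 - :a1) + ABS(p.x2 - :a2) + ABS(p.x3 - :a3)
FETCH FIRST 10 ROWS ONLY;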
I have an Oracle table with 12K records (gyms), and the query below takes approximately ~0.3 s:
SELECT (acos(sin(41.922682*0.017453293) *
sin(to_number(LATITUDE)*0.017453293) + cos(41.922682*0.017453293) *
cos(to_number(LATITUDE)*0.017453293) * cos(to_number(LONGITUDE)*0.017453293 -
(-87.65432*0.017453293)))*3959) as distance
FROM gym
However, I would like to return all of the records where distance <= 10, and as soon as I run the following query, my query execution time jumps up to ~5.0s:
SELECT * from (SELECT (acos(sin(41.922682*0.017453293) *
sin(to_number(LATITUDE)*0.017453293) + cos(41.922682*0.017453293) *
cos(to_number(LATITUDE)*0.017453293) * cos(to_number(LONGITUDE)*0.017453293 -
(-87.65432*0.017453293)))*3959)
as distance FROM gym)
WHERE distance <= 10
ORDER BY distance asc
Any idea how I can optimize this in Oracle?
Most important:
Use a WHERE clause to exclude all longitudes and latitudes that will definitely be more than 10 km/miles (?) away from your point, so you only have to calculate the distance for the window within a 10 km/mile block.
As a very rough approximation you could use 0.1 degree as a rule of thumb; this is about 11 km at the equator, and less elsewhere.
so add
WHERE ABS(longitude - (-87.65)) < 0.1 AND ABS(latitude - 41.922) < 0.1
(If you use nested queries, add this to the deepest level)
Since your distance is smaller than 10 km or miles, you can treat the length of one unit of latitude/longitude as constant and calculate those lengths once using your formula. Then you can use the Pythagorean rule to calculate the distance (after adding the bounding box). This is basically why people usually use projected data for calculations.
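A rough sketch combining the bounding box with the original formula; the 0.2 and 0.15 degree margins are only ballpark values for roughly 10 miles at this latitude, so widen them if in doubt:
SELECT *
FROM (
    SELECT g.*,
           (acos(sin(41.922682*0.017453293) * sin(to_number(LATITUDE)*0.017453293)
               + cos(41.922682*0.017453293) * cos(to_number(LATITUDE)*0.017453293)
               * cos(to_number(LONGITUDE)*0.017453293 - (-87.65432*0.017453293))) * 3959) AS distance
    FROM gym g
    -- cheap pre-filter: discard rows that are clearly outside a ~10 mile box
    WHERE ABS(to_number(LONGITUDE) - (-87.65432)) < 0.2
      AND ABS(to_number(LATITUDE) - 41.922682) < 0.15
)
WHERE distance <= 10
ORDER BY distance;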
Other things:
order by is always slow if you don't have an index. Do you need to order?
Save your longitude and latitude as numbers in your table. Why would you store them as anything else in a database?
With money. Specifically, Oracle Spatial.
1) How are you measuring 0.3 seconds for the first query? I'll wager that you are measuring the time required to fetch the first row rather than the time required to fetch the last row. Most client tools will start displaying results long before the database has finished producing them if that is possible (which it almost certainly is if there is no ORDER BY). So you're probably measuring the time required by the first query to calculate the distance to the first 50 or 500 gyms against the time required by the last query to calculate the distance to all 12,000 gyms.
2) Oracle Locator is a feature that comes with all editions of the Oracle database; it includes the ability to use spatial indexes and provides built-in methods for computing distance. It's not nearly as powerful as Oracle Spatial, but it should be more than sufficient for what you're discussing here (see the sketch after this list).
3) If you want to roll your own, I'd second johanvdw's suggestion of using a bounding box.
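Tying back to point 2, a minimal sketch of the Locator route; it assumes a hypothetical SDO_GEOMETRY column geom built from LONGITUDE/LATITUDE with SRID 4326, plus a spatial index on it, neither of which exists in the original table:
-- assumes gym has an SDO_GEOMETRY column named geom (SRID 4326) with a spatial index on it
SELECT g.*
FROM gym g
WHERE SDO_WITHIN_DISTANCE(
          g.geom,
          SDO_GEOMETRY(2001, 4326, SDO_POINT_TYPE(-87.65432, 41.922682, NULL), NULL, NULL),
          'distance=10 unit=MILE'
      ) = 'TRUE';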