Counting the number of datapoints within a Euclidean distance in MS SQL

I have 2 data sets:
a list of 300 geocoordinates
a list of over 2 million geocoordinates
For each entry in list 1, I am trying to count the number of entries from list 2 that lie within a 5 mile radius.
I've decided to use the Euclidean distance, since I am only dealing with relatively small distances.
Here is my code. It takes forever to run. Any suggestions on how I can improve it?
SELECT
    DistFilter.storenumber,
    COUNT(companynumber) AS sohoCount
FROM
    (SELECT
         UKStoreCoord.storenumber,
         UKStoreCoord.latitude AS SLat,
         UKStoreCoord.longitude AS SLng,
         SohoCoordinates.companynumber,
         SohoCoordinates.latitude,
         SohoCoordinates.longitude
     FROM UKStoreCoord, SohoCoordinates
     WHERE ABS(UKStoreCoord.latitude - SohoCoordinates.latitude) < 0.1
       AND ABS(SohoCoordinates.longitude - UKStoreCoord.longitude) < 0.1
     GROUP BY
         UKStoreCoord.storenumber,
         UKStoreCoord.latitude,
         UKStoreCoord.longitude,
         SohoCoordinates.companynumber,
         SohoCoordinates.latitude,
         SohoCoordinates.longitude) AS DistFilter
WHERE POWER((DistFilter.latitude - DistFilter.SLat) * 69, 2)
    + POWER((DistFilter.longitude - DistFilter.SLng) * 46, 2) < 25
GROUP BY
    DistFilter.storenumber
cheers
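One thing that stands out: assuming storenumber and companynumber are unique in their respective tables, the inner SELECT with a GROUP BY over all six columns amounts to a DISTINCT over the whole 300 x 2M candidate set, which the outer count does not need, so the server does a lot of sorting for nothing. Below is a hedged sketch of the same count done in one pass. It keeps the bounding-box pre-filter and the 69/46 miles-per-degree constants from the question, and the box is written so an index on SohoCoordinates (latitude, longitude) can actually be used to seek; the table and column names are taken from the query above.

-- Hedged sketch, assuming the table/column names from the question and that
-- storenumber and companynumber are unique in their respective tables.
-- The bounding box is a cheap pre-filter; an index on
-- SohoCoordinates (latitude, longitude) is what lets it pay off.
SELECT
    s.storenumber,
    COUNT(*) AS sohoCount
FROM UKStoreCoord AS s
JOIN SohoCoordinates AS c
    ON  c.latitude  BETWEEN s.latitude  - 0.1 AND s.latitude  + 0.1
    AND c.longitude BETWEEN s.longitude - 0.1 AND s.longitude + 0.1
WHERE POWER((c.latitude  - s.latitude)  * 69.0, 2)
    + POWER((c.longitude - s.longitude) * 46.0, 2) < 25.0   -- 5 miles, squared
GROUP BY s.storenumber;

Stores with no companies inside their box simply drop out of this result; switch to a LEFT JOIN and COUNT(c.companynumber) if zero counts are needed as well.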

Related

Repeated sum of digits: big-O complexity

Let's say, for example, we have the number 12345.
This sums to 15 when you add 1 + 2 + 3 + 4 + 5, which sums to 6 when you add 1 + 5.
My question is, what would the time complexity be for a repeated adding algorithm like this? The process repeats until there is only a single digit left.
I know that for any given number, the number of digits is approximately ln(n). I'm thinking that this means the big O would look something like (ln(n))^k, for some k. However, I am not confident, because each time you sum, the number of digits gets smaller (first you summed 5 digits, then only 2).
How would I go about figuring this out?
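For what it's worth, here is a minimal sketch of the loop being analysed, written in T-SQL only because that is the language of the thread (the reasoning is language-independent). The inner loop does constant work per digit, and the first pass already touches all ~log10(n) digits; every later pass works on a number no bigger than 9 * log10(n), i.e. with O(log log n) digits, so the first pass dominates and the total work stays O(log n) rather than (log n)^k.

-- Minimal T-SQL sketch of the repeated digit-sum loop (illustration only).
-- The inner WHILE does one unit of work per digit; the outer WHILE repeats
-- until a single digit remains.
DECLARE @n BIGINT = 12345;
DECLARE @s BIGINT, @passes INT = 0;

WHILE @n >= 10
BEGIN
    SET @s = 0;
    WHILE @n > 0
    BEGIN
        SET @s = @s + @n % 10;   -- one digit
        SET @n = @n / 10;
    END;
    SET @n = @s;
    SET @passes = @passes + 1;
END;

SELECT @n AS single_digit, @passes AS passes;   -- 6 and 2 for 12345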

SQL: “The maximum number of stacked diagnostics areas has been exceeded.”

The following formula in my SELECT text yields the aforementioned error when attempting to refresh.
SELECT CAPCSQ / ((((ih01su + ih02su + ih03su + ih04su + ih05su + ih06su
                  + ih07su + ih08su + ih09su + ih10su + ih11su + ih12su) / 12) * 1.32) / 2)
The ih01su.... portion represents the last 12 months of sales data, divided by 12 to get an average. I then multiply by 1.32 to get a projection of future sales roughly 2 years out, based on store growth. I then divide by 2 because I really only want two weeks' worth of sales, not a full month.
The formula works fine until I attempt to divide the ‘CAPCSQ’ column by this calculated value. At this point I get the following error:
The maximum number of stacked diagnostics areas has been exceeded.
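This is a guess rather than a diagnosis: if the 12-month sum can work out to zero for some rows (a store with no sales history), the division itself raises an error, and that is worth eliminating before digging into the diagnostics-area message itself. Wrapping the computed denominator in NULLIF makes those rows return NULL instead of failing. A sketch, with the FROM clause as a placeholder:

-- Sketch only: NULLIF turns a zero denominator into NULL, so rows whose
-- 12-month average is zero return NULL instead of raising a division error.
-- "your_table" is a placeholder for the actual source of these columns.
SELECT CAPCSQ
       / NULLIF((((ih01su + ih02su + ih03su + ih04su + ih05su + ih06su
                 + ih07su + ih08su + ih09su + ih10su + ih11su + ih12su)
                 / 12) * 1.32) / 2, 0) AS two_week_ratio
FROM your_table;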

Distribute numbers as evenly as possible

This seems to be a 2-step problem I'm trying to solve.
Let's say we have N records, and we are trying to distribute them as evenly as possible into K groups.
The second problem: each of the K groups can only accept up to some maximum number of records, M.
For example, if we have 5 records and 3 groups, then we would distribute 2 into Group K1, 2 into Group K2 and 1 record into Group K3. However, if, say, Group K1 only accepts at most 1 record, then the arrangement would need to be 1 into Group K1, 2 into Group K2, and 2 into Group K3.
I'm not necessarily after the solution, but what algorithm might I need to use to solve this? Apparently for the distribution I need to use a greedy algorithm? But the second step seems to be a bit more complicated.
Edit:
The example I'm looking at is:
Number of records: 23
Groups: 10
Max records for each group
G1 = 4
G2 = 1
G3 = 0
G4 = 5
G5 = 0
G6 = 0
G7 = 2
G8 = 4
G9 = 2
G10 = 2
If N=12 and K=3 then, in the normal situation, you just split it as V = 12/3 = 4 for each group. But since you have the M limitation, and for example K3 can only accept 1, the distribution can become 6-5-1, which is not evenly distributed.
So I guess you need to sort K based on the M limitation, so for the example above the group order becomes K3-K1-K2.
Then, if the distributed value V is bigger than the accepted amount M for that group, you need to take the remainder and distribute it again over the remaining groups (K3 = 1, so 4 - 1 = 3 must be distributed to K1 and K2).
The implementation might be complicated; I hope you can find a simpler solution for this.
From what I understood, you need to separate out all groups which allow only a fixed number of records first, and then equally distribute records among the remaining groups. Let's take an example: say we have 15 records which need to be distributed among 5 groups (G1, G2, G3, G4 and G5). Also let's assume that G2 and G4 allow a maximum of 2 and 4 records respectively. Now the algorithm should go like this:
Get the average (ceiling integer) of records based on the number of groups (in this example we'll get 3).
Add up all max-allowed limits which are smaller than our average (in this example it's only G2, whose max limit of 2 is less than our average, so the sum comes to 2).
Now subtract the sum from step 2 from the total records, and subtract the number of groups involved in step 2 from the total groups (remaining total records: 13, remaining total groups: 4).
Get the new average (ceiling integer) using the remaining records and groups (new average: 4).
Allot that new average to each of the remaining groups except the last one (i.e. 3 groups get 4 records each).
Allot whatever is left over (i.e. 1 record) to the last group.
Now what we finally will have here:
G1(No limit): 4
G2(Limit 2): 2
G3(No limit): 4
G4(Limit 4): 4
G5(No limit): 1
Let me know if you think that this algorithm might fail for some scenarios (one way to code it up is sketched below the formula).
Formula to get ceiling integer average
floor((#total_records + #total_groups-1) / #total_groups)
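For what it's worth, here is one way to code up the greedy idea from both answers: walk the groups from the tightest cap to the loosest, giving each one min(cap, ceiling(remaining / groups left)). It's a sketch in T-SQL to match the rest of the thread; the table variable and names are invented, and the exact per-group numbers can differ slightly from the walk-through above because the remainder gets re-spread as evenly as possible. If the caps sum to less than N (as in the 23-record example, where they total 20), the overflow simply stays unassigned.

-- Sketch only: greedy fill, tightest cap first. Table and column names are
-- made up for the example from the question (23 records, 10 capped groups).
DECLARE @G TABLE (grp VARCHAR(10), cap INT, allotted INT NULL);
INSERT INTO @G (grp, cap) VALUES
    ('G1', 4), ('G2', 1), ('G3', 0), ('G4', 5), ('G5', 0),
    ('G6', 0), ('G7', 2), ('G8', 4), ('G9', 2), ('G10', 2);

DECLARE @remaining INT = 23,
        @groupsLeft INT, @share INT, @grp VARCHAR(10), @cap INT;

WHILE EXISTS (SELECT 1 FROM @G WHERE allotted IS NULL)
BEGIN
    SELECT @groupsLeft = COUNT(*) FROM @G WHERE allotted IS NULL;

    -- take the unfilled group with the smallest cap
    SELECT TOP 1 @grp = grp, @cap = cap
    FROM @G WHERE allotted IS NULL ORDER BY cap ASC;

    -- its fair share is ceiling(remaining / groups left), capped at its limit
    SET @share = (@remaining + @groupsLeft - 1) / @groupsLeft;
    IF @share > @cap SET @share = @cap;

    UPDATE @G SET allotted = @share WHERE grp = @grp;
    SET @remaining = @remaining - @share;
END;

SELECT grp, cap, allotted FROM @G ORDER BY grp;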

How to perform math calculations in Active Record Querying in Rails

I'm building an application and I'm finding it necessary to perform some simple math calculations in my query. Essentially, I've got a database with daily values from the S&P 500, and I need to get a listing of days depending on the criteria entered.
The user inputs both a day range and a % range. For instance, if the date range is Jan 1/2013 - Apr 1/2013 and the % range is -1% to 1%, it should return a list of all S&P 500 daily values between the dates where the difference between the opening and closing values is in the % range.
The problem is that I don't actually have a column for %; I only have columns for the opening/closing values. It is simple enough to calculate the % given only the opening and closing values: (close - open) / open * 100. But I'm not sure how to do this within the query.
Right now the query is successfully searching within the date range. My query is:
@cases = Close.find(:all, conditions: ["date between ? and ?",
                                       @f_start, @f_end])
But how can I get it to check if the current row's (close-open)/open*100 value is between the two % range values?
Alternatively, if this is not possible or is bad practice, where should I be handling this?
You can calculate the open/close range yourself in Ruby/Rails and pass it on the same way you do with the date range. Something like:
percent = @f_percent / 100.0  # 5% => 0.05
low  = @f_close * (1.0 - percent)
high = @f_close * (1.0 + percent)
Close.where 'date between ? and ? AND close between ? and ?',
            @f_start, @f_end, low, high
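Alternatively (and closer to what the question literally asks), the percentage can be computed inline in the SQL condition itself, so no extra column and no precomputed bounds are needed. A hedged sketch of the underlying SQL, where the table name "closes" and the literal dates/percent bounds are placeholders, and open/close/date may need quoting depending on the database:

-- Sketch only: the percentage change is computed per row in the WHERE clause.
SELECT *
FROM   closes
WHERE  date BETWEEN '2013-01-01' AND '2013-04-01'
  AND  (close - open) / open * 100 BETWEEN -1 AND 1;

In Active Record terms, that condition string would simply replace the close between ? and ? part shown above, with the two percent bounds passed as bind values.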

Power-law distribution in T-SQL

I basically need the answer to this SO question that provides a power-law distribution, translated to T-SQL for me.
I want to pull a last name, one at a time, from a census-provided table of names. I want to get roughly the same distribution as occurs in the population. The table has 88,799 names ranked by frequency. "Smith" is rank 1 with 1.006% frequency, "Alderink" is rank 88,799 with a frequency of 1.7 x 10^-6. "Sanders" is rank 75 with a frequency of 0.100%.
The curve doesn't have to fit precisely at all. Just give me about 1% "Smith" and about 1 in a million "Alderink".
Here's what I have so far.
SELECT [LastName]
FROM [LastNames] as LN
WHERE LN.[Rank] = ROUND(88799 * RAND(), 0)
But this of course yields a uniform distribution.
I promise I'll still be trying to figure this out myself by the time a smarter person responds.
Why settle for the power-law distribution when you can draw from the actual distribution?
I suggest you alter the LastNames table to include a numeric column which would contain a numeric value representing the actual number of individuals with a more common name. You'll probably want a number on a smaller but proportional scale, say, maybe 10,000 for each percent of representation.
The list would then look something like:
(other than the 3 names mentioned in the question, I'm guessing about White, Johnson et al)
Smith 0
White 10,060
Johnson 19,123
Williams 28,456
...
Sanders 200,987
..
Alderink 999,997
And the name selection would be
SELECT TOP 1 [LastName]
FROM [LastNames] as LN
WHERE LN.[number_described_above] < ROUND(1000000 * RAND(), 0)
ORDER BY [number_described_above] DESC
That's picking the first name whose number does not exceed the [uniform distribution] random number. Note how the query uses "less than" and orders in descending order; this guarantees that the very first entry (Smith) gets picked. The alternative would be to start the series with Smith at 10,060 rather than zero and to discard the random draws smaller than this value.
Aside from the matter of boundary management (starting at zero rather than 10,060) mentioned above, this solution, along with the two other responses so far, is the same as the one suggested in dmckee's answer to the question referenced in this question. Essentially the idea is to use the CDF (cumulative distribution function).
Edit:
If you insist on using a mathematical function rather than the actual distribution, the following should provide a power-law function which would somewhat convey the "long tail" shape of the real distribution. You may want to tweak the @PwrCoef value (which BTW needn't be an integer); essentially, the bigger the coefficient, the more skewed towards the beginning of the list the function is.
DECLARE @PwrCoef INT
SET @PwrCoef = 2
SELECT 88799 - ROUND(POWER(POWER(88799.0, @PwrCoef) * RAND(), 1.0/@PwrCoef), 0)
Notes:
- the extra ".0"s in the function above are important to force SQL to perform float operations rather than integer operations.
- the reason why we subtract the power calculation from 88799 is that the calculation's distribution is such that the closer a number is to the end of our scale, the more likely it is to be drawn. Since the list of family names is sorted in the reverse order (most likely names first), we need this subtraction.
Assuming a power of, say, 3 the query would then look something like
SELECT [LastName]
FROM [LastNames] as LN
WHERE LN.[Rank]
= 88799 - ROUND(POWER(POWER(88799.0, 3) * RAND(), 1.0/3), 0)
Which is the query from the question except for the last line.
Re-Edit:
In looking at the actual distribution, as apparent in the Census data, the curve is extremely steep and would require a very big power coefficient, which in turn would cause overflows and/or extreme rounding errors in the naive formula shown above.
A more sensible approach may be to operate in several tiers, i.e. to perform an equal number of draws in each of, say, three thirds (or four quarters, or...) of the cumulative distribution; within each of these sub-lists we would draw using a power-law function, possibly with the same coefficient but with different ranges.
For example
Assuming thirds, the list divides as follows:
First third = 425 names, from Smith to Alvarado
Second third = 6,277 names, up to Gainer
Last third = 82,097 names, from Frisby to the end
If we were to need, say, 1,000 names, we'd draw 334 from the top third of the list, 333 from the second third and 333 from the last third.
For each of the thirds we'd use a similar formula, maybe with a bigger power coefficient for the first third (where we are really interested in favoring the earlier names in the list, and also where the relative frequencies are more statistically relevant). The three selection queries could look like the following:
-- Random Drawing of a single Name in top third
-- Power Coef = 12
SELECT [LastName]
FROM [LastNames] as LN
WHERE LN.[Rank]
= 425 - ROUND(POWER(POWER(425.0, 12) * RAND(), 1.0/12), 0)
-- Second third; Power Coef = 7
...
WHERE LN.[Rank]
= (425 + 6277) - ROUND(POWER(POWER(6277.0, 7) * RAND(), 1.0/7), 0)
-- Bottom third; Power Coef = 4
...
WHERE LN.[Rank]
= (425 + 6277 + 82097) - ROUND(POWER(POWER(82097.0, 4) * RAND(), 1.0/4), 0)
Instead of storing the PDF as rank, store the CDF (the sum of all frequencies up to that name, starting from Alderink).
Then modify your SELECT to retrieve the first LN whose stored value exceeds your formula result.
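Spelled out, that lookup could look like the following. This is only a sketch: [CumFreq] is an assumed column holding the running sum of frequencies on a 0-1 scale, accumulated from Alderink up to Smith. Picking the smallest cumulative value at or above a uniform draw lands on each name with probability equal to its frequency, which gives roughly 1% "Smith" and about one in a million "Alderink".

-- Sketch of the CDF idea, assuming a precomputed [CumFreq] running-sum column.
DECLARE @r FLOAT = RAND();

SELECT TOP 1 [LastName]
FROM   [LastNames]
WHERE  [CumFreq] >= @r
ORDER BY [CumFreq] ASC;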
I read the question as "I need to get a stream of names which will mirror the frequency of last names from the 1990 US Census"
I might have read the question a bit differently from the other suggestions, and although an answer has been accepted (and a very thorough answer it is), I will contribute my experience with the Census last names.
I had downloaded the same data from the 1990 census. My goal was to produce a large number of names to be submitted for search testing during performance testing of a medical record app. I inserted the last names and the percentage frequency into a table. I added a column and filled it with an integer which was the product of "total names required * frequency". The frequency data from the census did not add up to exactly 100%, so my total number of names was also a bit short of the requirement. I was able to correct the number by selecting random names from the list and increasing their count until I had exactly the required number; the randomly added count never amounted to more than .05% of the total of 10 million.
I generated 10 million random numbers in the range of 1 to 88,799. With each random number I would pick that name from the list and decrement the counter for that name. My approach was to simulate dealing a deck of cards, except my deck had many more distinct cards and a varying number of each card.
Do you store the actual frequencies with the ranks?
Converting the algebra from that accepted answer to T-SQL is no bother, if you know what values to use for n. y would be what you currently have, ROUND(88799 * RAND(), 0), and x0, x1 = 1, 88799 I think, though I might misunderstand it. The only non-standard maths operator involved, from a T-SQL perspective, is ^, which is just POWER(x, y) == x^y.