I want to select the Armstrong numbers from the list below. I have searched for a solution to this question but was unable to find one for SQL Server:
Numbers
121
113
423
153
541
371
I am sure most of you know what an Armstrong number is and how to calculate it, but I am describing it for simplicity: a number where the sum of the cubes of its digits is equal to the number itself, i.e.
1*1*1 + 5*5*5 + 3*3*3 = 153
3*3*3 + 7*7*7 + 1*1*1 = 371
Please help me with this; I am trying myself as well but am looking for a quick solution. It will be very helpful to me. Thanks in advance.
Obviously, static processing during each query is not the correct approach, but we can create a function like this:
create function dbo.IsArmstrongNumber(@n int)
returns int as
begin
    declare @retValue int = 0
    declare @sum int = 0
    declare @num int = @n
    while @num > 0
    begin
        set @sum += (@num % 10) * (@num % 10) * (@num % 10)
        set @num = @num / 10
    end
    if @sum = @n
        set @retValue = 1
    return @retValue
end
Pre-processing, and then selecting with an IN clause (or filtering with the function as below), is better:
select * from #Numbers where dbo.IsArmstrongNumber(n) = 1
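For completeness, a minimal setup for the #Numbers table assumed above (the table and column names are taken from the query; the setup itself is an assumption):

-- hypothetical setup for the example
create table #Numbers (n int)
insert into #Numbers values (121), (113), (423), (153), (541), (371)

Alternatively, here is a set-based check without a UDF: a recursive CTE peels off one digit per level, and a GROUP BY compares the sum of the cubed digits to the number itself: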
select 153 x into #temp;
insert #temp values(371);
insert #temp values(541);
with cte as (select x, substring(cast(x as nvarchar(40)) ,1,1) as u, 1 as N FROM #temp
union all
select x, substring(cast(x as nvarchar(40)),n+1,1) as u , n+1 from cte where len(cast(x as nvarchar(40))) > n
)
select x from cte group by x having SUM(POWER(cast(u as int),3)) = x
drop table #temp;
Here is mark 2 - you can change @order to explore powers of 4, 5, etc.:
declare @order int = 3;
declare @limit int = 50000;
with nos as (select 1 no
union all
select no + 1 from nos where no < @limit),
cte as (select no as x, substring(cast(no as nvarchar(40)) ,1,1) as u, 1 as N FROM nos
union all
select x, substring(cast(x as nvarchar(40)),n+1,1) as u , n+1 from cte where len(cast(x as nvarchar(40))) > n
)
select x from cte group by x having SUM(POWER(cast(u as int), @order)) = x
option (maxrecursion 0);
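For reference, with @order = 3 this should return 1, 153, 370, 371 and 407 within the 50,000 limit; with @order = 4 it should return 1, 1634, 8208 and 9474.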
This is a quick mod to my sum-of-digits UDF:
Declare @Table table (Numbers int)
Insert into @Table values
(121),
(113),
(423),
(153),
(541),
(371)
Select * from @Table where [dbo].[udf-Stat-Is-Armstrong](Numbers) = 1
Returns
Numbers
153
371
The UDF
CREATE Function [dbo].[udf-Stat-Is-Armstrong](@Val bigint)
Returns Bit
As
Begin
    Declare @RetVal as bigint
    Declare @LenInp as bigint = len(cast(@Val as varchar(25)))
    ;with i AS (
        Select @Val / 10 n, @Val % 10 d
        Union ALL
        Select n / 10, n % 10
        From i
        Where n > 0
    )
    Select @RetVal = IIF(SUM(power(d, @LenInp)) = @Val, 1, 0) FROM i;
    Return @RetVal
End
You can use the following to find three-digit Armstrong numbers with plain SQL functions (the three SUBSTRING terms only cover the first three digits, so the check is meaningful only up to 999; the UDF above generalizes by raising each digit to the number of digits):
WITH Numbers AS(
SELECT 0 AS number UNION ALL SELECT number + 1 FROM Numbers WHERE number < 10000)
SELECT number AS ArmstrongNumber FROM Numbers
WHERE
number = POWER(COALESCE(SUBSTRING(CAST(number AS VARCHAR(10)),1,1),0),3)
+ POWER(COALESCE(SUBSTRING(CAST(number AS VARCHAR(10)),2,1),0),3)
+ POWER(COALESCE(SUBSTRING(CAST(number AS VARCHAR(10)),3,1),0),3)
OPTION(MAXRECURSION 0)
I have a query that takes a LINESTRING and converts it to a result set of POINTS.
What I can't figure out is how to find the distance between 2 specific row points in this result set.
This is what I have so far:
DECLARE @GeographyToConvert geography
SET @GeographyToConvert = geography::STGeomFromText('LINESTRING (26.6434033 -81.7097817, 26.6435367 -81.709785, 26.6435783 -81.7098033, 26.6436067 -81.709825, 26.6435883 -81.709875, 26.64356 -81.7100417, 26.6434417 -81.710125, 26.6433167 -81.7101467, 26.643195 -81.7101033, 26.6431533 -81.7099517, 26.643175 -81.7097867, 26.643165 -81.7097917, 26.6431633 -81.7097367, 26.6431583 -81.7097083)',4326);
WITH GeographyPoints(N, Point) AS
(
SELECT 1, @GeographyToConvert.STPointN(1)
UNION ALL
SELECT N + 1, @GeographyToConvert.STPointN(N + 1)
FROM GeographyPoints GP
WHERE N < @GeographyToConvert.STNumPoints()
)
SELECT N,Point.STBuffer(0.25) as point, Point.STAsText() FROM GeographyPoints
For example, how can I compare the distance between N=10 & N=11?
This is what I was trying, but it does not work:
Declare @Point1 geography;
Declare @Point2 geography;
DECLARE @GeographyToConvert geography
--SET @GeometryToConvert = (select top 1 geotrack from dbo.SYNCTESTING2 where geotrack is not null);
SET @GeographyToConvert = geography::STGeomFromText('LINESTRING (26.6434033 -81.7097817, 26.6435367 -81.709785, 26.6435783 -81.7098033, 26.6436067 -81.709825, 26.6435883 -81.709875, 26.64356 -81.7100417, 26.6434417 -81.710125, 26.6433167 -81.7101467, 26.643195 -81.7101033, 26.6431533 -81.7099517, 26.643175 -81.7097867, 26.643165 -81.7097917, 26.6431633 -81.7097367, 26.6431583 -81.7097083)',4326);
WITH GeographyPoints(N, Point) AS
(
SELECT 1, @GeographyToConvert.STPointN(1)
UNION ALL
SELECT N + 1, @GeographyToConvert.STPointN(N + 1)
FROM GeographyPoints GP
WHERE N < @GeographyToConvert.STNumPoints()
)
SELECT N,Point.STBuffer(0.25) as point, Point.STAsText() FROM GeographyPoints
select @Point1 = Point FROM GeometryPoints where N = 10;
select @Point2 = Point FROM GeometryPoints where N = 11
select @Point1.STDistance(@Point2) as [Distance in Meters]
Replace
SELECT N,Point.STBuffer(0.25) as point, Point.STAsText() FROM GeographyPoints
With
SELECT * INTO #GeographyPoints FROM GeographyPoints
DECLARE @N1 INT = 10
DECLARE @N2 INT = 11
SELECT (SELECT Point FROM #GeographyPoints WHERE N=@N1).STDistance(
(SELECT Point FROM #GeographyPoints WHERE N=@N2))
DROP TABLE #GeographyPoints
And just change the values for @N1 & @N2 as necessary.
Is this what you're looking for? Distance to the previous point?
DECLARE @GeographyToConvert geography
SET @GeographyToConvert = geography::STGeomFromText('LINESTRING (26.6434033 -81.7097817, 26.6435367 -81.709785, 26.6435783 -81.7098033, 26.6436067 -81.709825, 26.6435883 -81.709875, 26.64356 -81.7100417, 26.6434417 -81.710125, 26.6433167 -81.7101467, 26.643195 -81.7101033, 26.6431533 -81.7099517, 26.643175 -81.7097867, 26.643165 -81.7097917, 26.6431633 -81.7097367, 26.6431583 -81.7097083)',4326);
WITH GeographyPoints(N, Point, PreviousPoint, DistanceFromPrevious) AS
(
SELECT 1, @GeographyToConvert.STPointN(1), CAST(NULL AS GEOGRAPHY), CAST(0 AS Float)
UNION ALL
SELECT N + 1, @GeographyToConvert.STPointN(N + 1)
, @GeographyToConvert.STPointN(N)
, @GeographyToConvert.STPointN(N).STDistance(@GeographyToConvert.STPointN(N + 1))
FROM GeographyPoints GP
WHERE N < @GeographyToConvert.STNumPoints()
)
SELECT N,Point.STBuffer(0.25) as point, Point.STAsText(), PreviousPoint, DistanceFromPrevious FROM GeographyPoints
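To read off a specific pair, e.g. the distance from N=10 to N=11, filter the final SELECT with WHERE N = 11 and look at the DistanceFromPrevious column.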
We have had a request to provide some data to an external company.
They require only a sample of the data. Simple, right? Wrong.
Here is their sampling criteria:
Total number of records divided by 720 (the required sample size) - this gives the sampling interval (if the result is a fraction, round down to the next whole number).
Halve the sampling interval to get the starting point.
Return each record by adding on the sampling interval.
EXAMPLE:
10,000 records - sampling interval = 13 (10,000 / 720, rounded down)
Starting point = 6 (13 / 2, rounded down)
Return records 6, 19 (6+13), 32 (19+13), 45 (32+13) etc.....
Please can someone tell me how (if) something like this is possible in SQL.
If you have use of ROW_NUMBER(), then you can do this relatively easily.
SELECT
*
FROM
(
SELECT
ROW_NUMBER() OVER (ORDER BY a, b, c, d) AS record_id,
*
FROM
yourTable
)
AS data
WHERE
(record_id + 360) % 720 = 0
ROW_NUMBER() gives all your data a sequential identifier (this is important, as the id field must be unique and must NOT have any gaps). It also defines the order you want the data in (ORDER BY a, b, c, d).
With that id, you can use modulo (often the % operator) to test whether a record is the 720th, 1440th, etc. (because 720 % 720 = 0).
Then, if you offset your id value by 360, you can change the starting point of your result set.
EDIT
After re-reading the question, I see you don't want every 720th record, but rather a uniform selection of 720 records.
As such, replace 720 with (SELECT COUNT(*) / 720 FROM yourTable)
And replace 360 with (SELECT (COUNT(*) / 720) / 2 FROM yourTable)
EDIT
Ignoring the rounding conditions will allow a result of exactly 720 records. This requires using non-integer values and testing whether the result of the modulo is less than 1.
WHERE
(record_id + (SELECT COUNT(*) FROM yourTable) / 1440.0)
%
((SELECT COUNT(*) FROM yourTable) / 720.0)
<
1.0
declare @sample_size int
select @sample_size = 200
select top (@sample_size) col1, col2, col3, col4
from (
    select *, row_number() over (order by col1, col2) as row
    from your_table
) t
where (row % ((select count(*) from your_table) / @sample_size)) - ((select count(*) from your_table) / @sample_size / 2) = 0
It's going to work in SQL Server 2005+.
TOP (@variable) is used to limit the rows (the WHERE condition alone might not be enough because of integer rounding and may return more rows than needed), and ROW_NUMBER() is used to number and order the rows.
Working example: https://data.stackexchange.com/stackoverflow/query/62315/sql-data-sampling - the code is below:
declare @tab table (id int identity(1,1), col1 varchar(3), col2 varchar(3))
declare @i int
set @i = 0
while @i <= 1000
begin
    insert into @tab
    select 'aaa', 'bbb'
    set @i = @i + 1
end
declare @sample_size int
select @sample_size = 123
select ((select count(*) from @tab) / @sample_size) as sample_interval
select top (@sample_size) *
from (
    select *, row_number() over (order by col1, col2, id desc) as row
    from @tab
) t
where (row % ((select count(*) from @tab) / @sample_size)) - ((select count(*) from @tab) / @sample_size / 2) = 0
SQL Server has a built-in clause for sampling:
SELECT FirstName, LastName
FROM Person.Person
TABLESAMPLE (10 PERCENT) ;
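Note that TABLESAMPLE samples at the page level, so the number of rows returned is approximate and the selection is random rather than the fixed-interval sample the question describes; adding REPEATABLE (seed) makes the sample reproducible across runs.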
You can use rank to get a row number. The following code will create 10,000 records in a table, then select the 6th, 19th, 32nd, etc., for a total of 769 rows.
CREATE TABLE Tbl (
Data varchar (255)
)
GO
DECLARE @i int
SET @i = 0
WHILE (@i < 10000)
BEGIN
    INSERT INTO Tbl (Data) VALUES (CONVERT(varchar(255), NEWID()))
    SET @i = @i + 1
END
GO
DECLARE @interval int
DECLARE @start int
DECLARE @total int
SELECT @total = COUNT(*),
    @start = FLOOR(COUNT(*) / 720) / 2,
    @interval = FLOOR(COUNT(*) / 720)
FROM Tbl
PRINT 'Start record: ' + CAST(@start as varchar(10))
PRINT 'Interval: ' + CAST(@interval as varchar(10))
SELECT rank, Data
FROM (
SELECT rank()
OVER (ORDER BY t.Data) as rank, t.Data AS Data
FROM Tbl t) q
WHERE ((rank + 1) + @start) % @interval = 0
Are there any linear regression functions in SQL Server 2005/2008, similar to the linear regression functions in Oracle?
To the best of my knowledge, there is none. Writing one is pretty straightforward, though. The following gives you the constant alpha and slope beta for y = Alpha + Beta * x + epsilon:
-- test data (GroupIDs 1, 2 normal regressions, 3, 4 = no variance)
WITH some_table(GroupID, x, y) AS
( SELECT 1, 1, 1 UNION SELECT 1, 2, 2 UNION SELECT 1, 3, 1.3
UNION SELECT 1, 4, 3.75 UNION SELECT 1, 5, 2.25 UNION SELECT 2, 95, 85
UNION SELECT 2, 85, 95 UNION SELECT 2, 80, 70 UNION SELECT 2, 70, 65
UNION SELECT 2, 60, 70 UNION SELECT 3, 1, 2 UNION SELECT 3, 1, 3
UNION SELECT 4, 1, 2 UNION SELECT 4, 2, 2),
-- linear regression query
/*WITH*/ mean_estimates AS
( SELECT GroupID
,AVG(x * 1.) AS xmean
,AVG(y * 1.) AS ymean
FROM some_table
GROUP BY GroupID
),
stdev_estimates AS
( SELECT pd.GroupID
-- T-SQL STDEV() implementation is not numerically stable
,CASE SUM(SQUARE(x - xmean)) WHEN 0 THEN 1
ELSE SQRT(SUM(SQUARE(x - xmean)) / (COUNT(*) - 1)) END AS xstdev
, SQRT(SUM(SQUARE(y - ymean)) / (COUNT(*) - 1)) AS ystdev
FROM some_table pd
INNER JOIN mean_estimates pm ON pm.GroupID = pd.GroupID
GROUP BY pd.GroupID, pm.xmean, pm.ymean
),
standardized_data AS -- increases numerical stability
( SELECT pd.GroupID
,(x - xmean) / xstdev AS xstd
,CASE ystdev WHEN 0 THEN 0 ELSE (y - ymean) / ystdev END AS ystd
FROM some_table pd
INNER JOIN stdev_estimates ps ON ps.GroupID = pd.GroupID
INNER JOIN mean_estimates pm ON pm.GroupID = pd.GroupID
),
standardized_beta_estimates AS
( SELECT GroupID
,CASE WHEN SUM(xstd * xstd) = 0 THEN 0
ELSE SUM(xstd * ystd) / (COUNT(*) - 1) END AS betastd
FROM standardized_data pd
GROUP BY GroupID
)
SELECT pb.GroupID
,ymean - xmean * betastd * ystdev / xstdev AS Alpha
,betastd * ystdev / xstdev AS Beta
FROM standardized_beta_estimates pb
INNER JOIN stdev_estimates ps ON ps.GroupID = pb.GroupID
INNER JOIN mean_estimates pm ON pm.GroupID = pb.GroupID
Here GroupID is used to show how to group by some value in your source data table. If you just want the statistics across all data in the table (not specific sub-groups), you can drop it and the joins. I have used the WITH statement for the sake of clarity. As an alternative, you can use sub-queries instead. Please be mindful of the precision of the data type used in your tables, as numerical stability can deteriorate quickly if the precision is not high enough relative to your data.
EDIT: (in answer to Peter's question in the comments asking for additional statistics like R2)
You can easily calculate additional statistics using the same technique. Here is a version with R2, correlation, and sample covariance:
-- test data (GroupIDs 1, 2 normal regressions, 3, 4 = no variance)
WITH some_table(GroupID, x, y) AS
( SELECT 1, 1, 1 UNION SELECT 1, 2, 2 UNION SELECT 1, 3, 1.3
UNION SELECT 1, 4, 3.75 UNION SELECT 1, 5, 2.25 UNION SELECT 2, 95, 85
UNION SELECT 2, 85, 95 UNION SELECT 2, 80, 70 UNION SELECT 2, 70, 65
UNION SELECT 2, 60, 70 UNION SELECT 3, 1, 2 UNION SELECT 3, 1, 3
UNION SELECT 4, 1, 2 UNION SELECT 4, 2, 2),
-- linear regression query
/*WITH*/ mean_estimates AS
( SELECT GroupID
,AVG(x * 1.) AS xmean
,AVG(y * 1.) AS ymean
FROM some_table pd
GROUP BY GroupID
),
stdev_estimates AS
( SELECT pd.GroupID
-- T-SQL STDEV() implementation is not numerically stable
,CASE SUM(SQUARE(x - xmean)) WHEN 0 THEN 1
ELSE SQRT(SUM(SQUARE(x - xmean)) / (COUNT(*) - 1)) END AS xstdev
, SQRT(SUM(SQUARE(y - ymean)) / (COUNT(*) - 1)) AS ystdev
FROM some_table pd
INNER JOIN mean_estimates pm ON pm.GroupID = pd.GroupID
GROUP BY pd.GroupID, pm.xmean, pm.ymean
),
standardized_data AS -- increases numerical stability
( SELECT pd.GroupID
,(x - xmean) / xstdev AS xstd
,CASE ystdev WHEN 0 THEN 0 ELSE (y - ymean) / ystdev END AS ystd
FROM some_table pd
INNER JOIN stdev_estimates ps ON ps.GroupID = pd.GroupID
INNER JOIN mean_estimates pm ON pm.GroupID = pd.GroupID
),
standardized_beta_estimates AS
( SELECT GroupID
,CASE WHEN SUM(xstd * xstd) = 0 THEN 0
ELSE SUM(xstd * ystd) / (COUNT(*) - 1) END AS betastd
FROM standardized_data
GROUP BY GroupID
)
SELECT pb.GroupID
,ymean - xmean * betastd * ystdev / xstdev AS Alpha
,betastd * ystdev / xstdev AS Beta
,CASE ystdev WHEN 0 THEN 1 ELSE betastd * betastd END AS R2
,betastd AS Correl
,betastd * xstdev * ystdev AS Covar
FROM standardized_beta_estimates pb
INNER JOIN stdev_estimates ps ON ps.GroupID = pb.GroupID
INNER JOIN mean_estimates pm ON pm.GroupID = pb.GroupID
EDIT 2 improves numerical stability by standardizing data (instead of only centering) and by replacing STDEV because of numerical stability issues. To me, the current implementation seems to be the best trade-off between stability and complexity. I could improve stability by replacing my standard deviation with a numerically stable online algorithm, but this would complicate the implementation substantially (and slow it down). Similarly, implementations using e.g. Kahan(-Babuška-Neumaier) compensation for SUM and AVG seem to perform modestly better in limited tests, but make the query much more complex. And as long as I do not know how T-SQL implements SUM and AVG (e.g. it might already be using pairwise summation), I cannot guarantee that such modifications always improve accuracy.
This is an alternate method, based on a blog post on Linear Regression in T-SQL, which uses the following equations:
The SQL suggestion in the blog uses cursors, though. Here's a prettified version of a forum answer that I used:
table
-----
X (numeric)
Y (numeric)
/**
* m = (nSxy - SxSy) / (nSxx - SxSx)
* b = Ay - (Ax * m)
* N.B. S = Sum, A = Mean
*/
DECLARE @n INT
SELECT @n = COUNT(*) FROM table
SELECT (@n * SUM(X*Y) - SUM(X) * SUM(Y)) / (@n * SUM(X*X) - SUM(X) * SUM(X)) AS M,
    AVG(Y) - AVG(X) *
    (@n * SUM(X*Y) - SUM(X) * SUM(Y)) / (@n * SUM(X*X) - SUM(X) * SUM(X)) AS B
FROM table
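A quick sanity check of the formulas above (the #pts temp table is a hypothetical name; the two points lie exactly on Y = 2X + 1, so the expected output is M = 2, B = 1):

-- two points lying exactly on Y = 2X + 1 (decimal literals avoid integer division)
SELECT X, Y INTO #pts FROM (VALUES (1.0, 3.0), (2.0, 5.0)) v(X, Y)
DECLARE @n INT
SELECT @n = COUNT(*) FROM #pts
SELECT (@n * SUM(X*Y) - SUM(X) * SUM(Y)) / (@n * SUM(X*X) - SUM(X) * SUM(X)) AS M, -- expect 2
    AVG(Y) - AVG(X) *
    (@n * SUM(X*Y) - SUM(X) * SUM(Y)) / (@n * SUM(X*X) - SUM(X) * SUM(X)) AS B -- expect 1
FROM #pts
DROP TABLE #pts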
I've actually written an SQL routine using Gram-Schmidt orthogonalization. It, as well as other machine learning and forecasting routines, is available at sqldatamine.blogspot.com
At the suggestion of Brad Larson I've added the code here rather than just directing users to my blog. This produces the same results as the LINEST function in Excel. My primary source is Elements of Statistical Learning (2008) by Hastie, Tibshirani and Friedman.
--Create a table of data
create table #rawdata (id int,area float, rooms float, odd float, price float)
insert into #rawdata select 1, 2201,3,1,400
insert into #rawdata select 2, 1600,3,0,330
insert into #rawdata select 3, 2400,3,1,369
insert into #rawdata select 4, 1416,2,1,232
insert into #rawdata select 5, 3000,4,0,540
--Insert the data into x & y vectors
select id xid, 0 xn,1 xv into #x from #rawdata
union all
select id, 1,rooms from #rawdata
union all
select id, 2,area from #rawdata
union all
select id, 3,odd from #rawdata
select id yid, 0 yn, price yv into #y from #rawdata
--create a residuals table and insert the intercept (1)
create table #z (zid int, zn int, zv float)
insert into #z select id , 0 zn,1 zv from #rawdata
--create a table for the orthogonal (#c) & regression (#b) parameters
create table #c(cxn int, czn int, cv float)
create table #b(bn int, bv float)
--@p indexes the independent variables (the intercept is index 0)
declare @p int
set @p = 1
--Loop through each independent variable and estimate the orthogonal parameter (#c)
-- then estimate the residuals and insert into the residuals table (#z)
while @p <= (select max(xn) from #x)
begin
insert into #c
select xn cxn, zn czn, sum(xv*zv)/sum(zv*zv) cv
from #x join #z on xid = zid where zn = @p-1 and xn>zn group by xn, zn
insert into #z
select zid, xn,xv- sum(cv*zv)
from #x join #z on xid = zid join #c on czn = zn and cxn = xn where xn = @p and zn<xn group by zid, xn,xv
set @p = @p + 1
end
--Loop through each independent variable and estimate the regression parameter by regressing the orthogonal
-- residuals on the dependent variable y
while @p >= 0
begin
insert into #b
select zn, sum(yv*zv)/ sum(zv*zv)
from #z join
(select yid, yv-isnull(sum(bv*xv),0) yv from #x join #y on xid = yid left join #b on xn=bn group by yid, yv) y
on zid = yid where zn = @p group by zn
set @p = @p - 1
end
--The regression parameters
select * from #b
--Actual vs. fit with error
select yid, yv, fit, yv-fit err from #y join
(select xid, sum(xv*bv) fit from #x join #b on xn = bn group by xid) f
on yid = xid
--R Squared
select 1-sum(power(err,2))/sum(power(yv,2)) from
(select yid, yv, fit, yv-fit err from #y join
(select xid, sum(xv*bv) fit from #x join #b on xn = bn group by xid) f
on yid = xid) d
There are no linear regression functions in SQL Server. But to calculate a Simple Linear Regression (Y' = bX + A) between pairs of data points x,y - including the calculation of the Correlation Coefficient, Coefficient of Determination (R^2) and Standard Estimate of Error (Standard Deviation), do the following:
For a table regression_data with numeric columns x and y:
declare @total_points int
declare @intercept DECIMAL(38, 10)
declare @slope DECIMAL(38, 10)
declare @r_squared DECIMAL(38, 10)
declare @standard_estimate_error DECIMAL(38, 10)
declare @correlation_coefficient DECIMAL(38, 10)
declare @average_x DECIMAL(38, 10)
declare @average_y DECIMAL(38, 10)
declare @sumX DECIMAL(38, 10)
declare @sumY DECIMAL(38, 10)
declare @sumXX DECIMAL(38, 10)
declare @sumYY DECIMAL(38, 10)
declare @sumXY DECIMAL(38, 10)
declare @Sxx DECIMAL(38, 10)
declare @Syy DECIMAL(38, 10)
declare @Sxy DECIMAL(38, 10)
Select
    @total_points = count(*),
    @average_x = avg(x),
    @average_y = avg(y),
    @sumX = sum(x),
    @sumY = sum(y),
    @sumXX = sum(x*x),
    @sumYY = sum(y*y),
    @sumXY = sum(x*y)
from regression_data
set @Sxx = @sumXX - (@sumX * @sumX) / @total_points
set @Syy = @sumYY - (@sumY * @sumY) / @total_points
set @Sxy = @sumXY - (@sumX * @sumY) / @total_points
set @correlation_coefficient = @Sxy / SQRT(@Sxx * @Syy)
set @slope = (@total_points * @sumXY - @sumX * @sumY) / (@total_points * @sumXX - power(@sumX,2))
set @intercept = @average_y - (@total_points * @sumXY - @sumX * @sumY) / (@total_points * @sumXX - power(@sumX,2)) * @average_x
set @r_squared = (@intercept * @sumY + @slope * @sumXY - power(@sumY,2) / @total_points) / (@sumYY - power(@sumY,2) / @total_points)
-- calculate standard_estimate_error (standard deviation)
Select
    @standard_estimate_error = sqrt(sum(power(y - (@slope * x + @intercept),2)) / @total_points)
From regression_data
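To inspect the results, one might end the script with a select of the computed statistics:

-- output the computed statistics
select @total_points as n, @slope as slope, @intercept as intercept,
    @r_squared as r_squared, @correlation_coefficient as correlation_coefficient,
    @standard_estimate_error as standard_estimate_error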
Here it is as a function that takes a table-valued parameter of type table (Y float, X float), which is
called XYDoubleType, and assumes our linear function is of the form AX + B. It returns A and B as table columns, just in case you want to use them in a join or something.
CREATE FUNCTION FN_GetABForData(
    @XYData as XYDoubleType READONLY
) RETURNS @ABData TABLE(
    A FLOAT,
    B FLOAT,
    Rsquare FLOAT )
AS
BEGIN
    DECLARE @sx FLOAT, @sy FLOAT
    DECLARE @sxx FLOAT, @syy FLOAT, @sxy FLOAT, @sxsy FLOAT, @sxsx FLOAT, @sysy FLOAT
    DECLARE @n FLOAT, @A FLOAT, @B FLOAT, @Rsq FLOAT
    SELECT @sx = SUM(D.X), @sy = SUM(D.Y), @sxx = SUM(D.X*D.X), @syy = SUM(D.Y*D.Y),
        @sxy = SUM(D.X*D.Y), @n = COUNT(*)
    From @XYData D
    SET @sxsx = @sx*@sx
    SET @sxsy = @sx*@sy
    SET @sysy = @sy*@sy
    SET @A = (@n*@sxy - @sxsy)/(@n*@sxx - @sxsx)
    SET @B = @sy/@n - @A*@sx/@n
    SET @Rsq = POWER((@n*@sxy - @sxsy),2)/((@n*@sxx - @sxsx)*(@n*@syy - @sysy))
    INSERT INTO @ABData (A,B,Rsquare) VALUES(@A,@B,@Rsq)
    RETURN
END
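For reference, a minimal usage sketch (it assumes the XYDoubleType table type exists as described above; three points lying exactly on Y = 2X + 1 should return A = 2, B = 1, Rsquare = 1):

-- three collinear points on Y = 2X + 1
DECLARE @d XYDoubleType
INSERT @d (X, Y) VALUES (1, 3), (2, 5), (3, 7)
SELECT A, B, Rsquare FROM dbo.FN_GetABForData(@d)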
To add to @icc97's answer, I have included the weighted versions of the slope and the intercept. If the values are all constant, the slope will be NULL (with the appropriate settings SET ARITHABORT OFF; SET ANSI_WARNINGS OFF;) and will need to be substituted with 0 via coalesce().
Here is a solution written in SQL:
with d as (select segment,w,x,y from somedatasource)
select segment,
avg(y) - avg(x) *
((count(*) * sum(x*y)) - (sum(x)*sum(y)))/
((count(*) * sum(x*x)) - (Sum(x)*Sum(x))) as intercept,
((count(*) * sum(x*y)) - (sum(x)*sum(y)))/
((count(*) * sum(x*x)) - (sum(x)*sum(x))) AS slope,
avg(y) - ((avg(x*y) - avg(x)*avg(y))/var_samp(X)) * avg(x) as interceptUnstable,
(avg(x*y) - avg(x)*avg(y))/var_samp(X) as slopeUnstable,
(Avg(x * y) - Avg(x) * Avg(y)) / (stddev_pop(x) * stddev_pop(y)) as correlationUnstable,
(sum(y*w)/sum(w)) - (sum(w*x)/sum(w)) *
((sum(w)*sum(x*y*w)) - (sum(x*w)*sum(y*w)))/
((sum(w)*sum(x*x*w)) - (sum(x*w)*sum(x*w))) as wIntercept,
((sum(w)*sum(x*y*w)) - (sum(x*w)*sum(y*w)))/
((sum(w)*sum(x*x*w)) - (sum(x*w)*sum(x*w))) as wSlope,
(count(*) * sum(x * y) - sum(x) * sum(y)) / (sqrt(count(*) * sum(x * x) - sum(x) * sum(x))
* sqrt(count(*) * sum(y * y) - sum(y) * sum(y))) as correlation,
(sum(w) * sum(x*y*w) - sum(x*w) * sum(y*w)) /
(sqrt(sum(w) * sum(x*x*w) - sum(x*w) * sum(x*w)) * sqrt(sum(w) * sum(y*y*w)
- sum(y*w) * sum(y*w))) as wCorrelation,
count(*) as n
from d where x is not null and y is not null group by segment
Where w is the weight. I double checked this against R to confirm the results.
One may need to cast the data from somedatasource to floating point.
I included the unstable versions to warn you against those. (Special thanks goes to Stephan in another answer.)
Update: added weighted correlation
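As a quick sanity check: with w = 1 for every row, the weighted slope, intercept and correlation should match their unweighted counterparts exactly.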
I have translated the linear regression function used in the FORECAST function in Excel, and created an SQL function that returns a, b, and the forecast.
You can see the complete theoretical explanation in the Excel help for the FORECAST function.
First of all, you will need to create the table data type XYFloatType:
CREATE TYPE [dbo].[XYFloatType]
AS TABLE(
[X] FLOAT,
[Y] FLOAT)
Then write the following function:
/*
-- =============================================
-- Author: Me :)
-- Create date: Today :)
-- Description: (Copied Excel help):
--Calculates, or predicts, a future value by using existing values.
The predicted value is a y-value for a given x-value.
The known values are existing x-values and y-values, and the new value is predicted by using linear regression.
You can use this function to predict future sales, inventory requirements, or consumer trends.
-- =============================================
*/
CREATE FUNCTION dbo.FN_GetLinearRegressionForcast
(@PtXYData as XYFloatType READONLY, @PnFuturePoint int)
RETURNS @ABDData TABLE( a FLOAT, b FLOAT, Forecast FLOAT)
AS
BEGIN
    DECLARE @LnAvX Float
        ,@LnAvY Float
        ,@LnB Float
        ,@LnA Float
        ,@LnForeCast Float
    Select @LnAvX = AVG([X])
        ,@LnAvY = AVG([Y])
    FROM @PtXYData;
    SELECT @LnB = SUM ( ([X]-@LnAvX)*([Y]-@LnAvY) ) / SUM (POWER([X]-@LnAvX,2))
    FROM @PtXYData;
    SET @LnA = @LnAvY - @LnB * @LnAvX;
    SET @LnForeCast = @LnA + @LnB * @PnFuturePoint;
    INSERT INTO @ABDData ([a],[b],[Forecast]) VALUES (@LnA, @LnB, @LnForeCast)
    RETURN
END
/*
your tests:
(I used the same values that are in the Excel help)
DECLARE @t XYFloatType
INSERT @t VALUES(20,6),(28,7),(31,9),(38,15),(40,21) -- x and y values
SELECT *, a + b*30 AS [Test] FROM dbo.FN_GetLinearRegressionForcast(@t, 30);
*/
I hope the following answer helps one understand where some of the solutions come from. I am going to illustrate it with a simple example, but the generalization to many variables is theoretically straightforward as long as you know how to use index notation or matrices. For implementing the solution for anything beyond 3 variables you'll need Gram-Schmidt (see Colin Campbell's answer above) or another matrix inversion algorithm.
Since all the functions we need - variance, covariance, average, sum, etc. - are aggregation functions in SQL, one can easily implement the solution. I've done so in HIVE to do linear calibration of the scores of a Logistic model - amongst many advantages, one is that you can work entirely within HIVE without going out and back in from some scripting language.
The model for your data (x_1, x_2, y) where your data points are indexed by i, is
y(x_1, x_2) = m_1*x_1 + m_2*x_2 + c
The model appears "linear", but needn't be. For example, x_2 can be any non-linear function of x_1, as long as it has no free parameters in it, e.g. x_2 = Sinh(3*(x_1)^2 + 42). Even if x_2 is "just" x_2 and the model is linear, the regression problem isn't. Only when you decide that the problem is to find the parameters m_1, m_2, c such that they minimize the L2 error do you have a Linear Regression problem.
The L2 error is sum_i( (y[i] - f(x_1[i], x_2[i]))^2 ). Minimizing this w.r.t. the 3 parameters (set the partial derivatives w.r.t. each parameter = 0) yields 3 linear equations for 3 unknowns. These equations are LINEAR in the parameters (this is what makes it Linear Regression) and can be solved analytically. Doing this for a simple model (1 variable, linear model, hence two parameters) is straightforward and instructive. The generalization to a non-Euclidean metric norm on the error vector space is straightforward, the diagonal special case amounts to using "weights".
Back to our model in two variables:
y = m_1*x_1 + m_2*x_2 + c
Take the expectation value =>

E[y] = m_1*E[x_1] + m_2*E[x_2] + c    (0)
Now take the covariance w.r.t. x_1 and x_2, and use cov(x,x) = var(x):
cov(y, x_1) = m_1*var(x_1) + m_2*covar(x_2, x_1) (1)
cov(y, x_2) = m_1*covar(x_1, x_2) + m_2*var(x_2) (2)
These are two equations in two unknowns, which you can solve by inverting the 2X2 matrix.
In matrix form:

( cov(y, x_1) )   ( var(x_1)          covar(x_1, x_2) ) ( m_1 )
( cov(y, x_2) ) = ( covar(x_1, x_2)   var(x_2)        ) ( m_2 )

which can be inverted to yield

m_1 = ( var(x_2)*cov(y, x_1) - covar(x_1, x_2)*cov(y, x_2) ) / det
m_2 = ( var(x_1)*cov(y, x_2) - covar(x_1, x_2)*cov(y, x_1) ) / det

where

det = var(x_1)*var(x_2) - covar(x_1, x_2)^2
In any case, now that you have m_1 and m_2 in closed form, you can solve (0) for c: c = E[y] - m_1*E[x_1] - m_2*E[x_2].
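Since the variances and covariances above are plain aggregates, the two-variable solution drops straight into a query. A minimal T-SQL sketch (the table regression_data2(x1, x2, y) is a hypothetical name; the moments are expanded into AVG terms because T-SQL has no built-in covariance aggregate):

-- population moments via AVG, then the closed-form solution for m_1, m_2, c
with agg as (
    select avg(x1*1.0) as mx1, avg(x2*1.0) as mx2, avg(y*1.0) as my,
        avg(x1*x1*1.0) as mx1x1, avg(x2*x2*1.0) as mx2x2, avg(x1*x2*1.0) as mx1x2,
        avg(x1*y*1.0) as mx1y, avg(x2*y*1.0) as mx2y
    from regression_data2
), moments as (
    select mx1, mx2, my,
        mx1x1 - mx1*mx1 as var1, mx2x2 - mx2*mx2 as var2,
        mx1x2 - mx1*mx2 as cov12, mx1y - mx1*my as covy1, mx2y - mx2*my as covy2
    from agg
)
select (var2*covy1 - cov12*covy2) / (var1*var2 - cov12*cov12) as m1,
    (var1*covy2 - cov12*covy1) / (var1*var2 - cov12*cov12) as m2,
    my - mx1*(var2*covy1 - cov12*covy2) / (var1*var2 - cov12*cov12)
       - mx2*(var1*covy2 - cov12*covy1) / (var1*var2 - cov12*cov12) as c
from moments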
I checked the analytical solution above to Excel's Solver for a quadratic with Gaussian noise and the residual errors agree to 6 significant digits.
Contact me if you want to do Discrete Fourier Transform in SQL in about 20 lines.