I have two tables.
Student Table
Property Table
Result Table
How can I get the values from the Student table and the property ID of each column from the Property table, and merge them into the Result table?
Any advice would be helpful.
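For context, here is a minimal sketch of the table shapes the answers below assume; the column names are inferred from the queries further down, and the exact types are hypothetical:
-- Hypothetical schema, inferred from the answers below
CREATE TABLE Student (
    ID INT PRIMARY KEY,
    Name NVARCHAR(50), Class INT,
    ENG INT, TAM INT, HIN INT, MAT INT, PHY INT
);
CREATE TABLE Property (
    PropertyID INT PRIMARY KEY,
    PropertyName NVARCHAR(50)   -- 'Name', 'Class', 'ENG', 'TAM', 'HIN', 'MAT', 'PHY'
);
CREATE TABLE Result (
    ID INT, PropertyID INT, Value NVARCHAR(50), StudID INT
);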
Update #1:
I tried using Christian Moen's suggestion; this is what I get.
You need to UNPIVOT the Student table's columns first, to get the column (property) names in one column as rows. Then join with the Property table on the property name, like this:
WITH UnPivoted
AS
(
SELECT ID, value,col
FROM
(
SELECT ID,
CAST(Name AS NVARCHAR(50)) AS Name,
CAST(Class AS NVARCHAR(50)) AS Class,
CAST(ENG AS NVARCHAR(50)) AS ENG,
CAST(TAM AS NVARCHAR(50)) AS TAM,
CAST(HIN AS NVARCHAR(50)) AS HIN,
CAST(MAT AS NVARCHAR(50)) AS MAT,
CAST(PHY AS NVARCHAR(50)) AS PHY
FROM Student
) AS s
UNPIVOT
(value FOR col IN
([Name], [Class], [ENG], [TAM], [HIN], [MAT], [PHY])
) AS unpvt
)
SELECT
ROW_NUMBER() OVER(ORDER BY u.ID,PropertyID) AS ID,
p.PropertyID,
u.Value,
u.ID AS StudID
FROM Property AS p
INNER JOIN UnPivoted AS u ON p.PropertyName = u.col;
For the first column, ID, I used the ranking function ROW_NUMBER() to generate the sequence.
This will give the exact results that you are looking for.
Results:
| ID | PropertyID | Value | StudID |
|----|------------|--------|--------|
| 1 | 1 | Jack | 1 |
| 2 | 2 | 10 | 1 |
| 3 | 3 | 89 | 1 |
| 4 | 4 | 88 | 1 |
| 5 | 5 | 45 | 1 |
| 6 | 6 | 100 | 1 |
| 7 | 7 | 98 | 1 |
| 8 | 1 | Jill | 2 |
| 9 | 2 | 10 | 2 |
| 10 | 3 | 89 | 2 |
| 11 | 4 | 99 | 2 |
| 12 | 5 | 100 | 2 |
| 13 | 6 | 78 | 2 |
| 14 | 7 | 91 | 2 |
| 15 | 1 | Trevor | 3 |
| 16 | 2 | 12 | 3 |
| 17 | 3 | 100 | 3 |
| 18 | 4 | 50 | 3 |
| 19 | 5 | 49 | 3 |
| 20 | 6 | 94 | 3 |
| 21 | 7 | 100 | 3 |
| 22 | 1 | Jim | 4 |
| 23 | 2 | 8 | 4 |
| 24 | 3 | 100 | 4 |
| 25 | 4 | 91 | 4 |
| 26 | 5 | 92 | 4 |
| 27 | 6 | 100 | 4 |
| 28 | 7 | 100 | 4 |
Another option is to use CROSS APPLY if you don't want to go the UNPIVOT route:
select row_number() over (order by (select 1)) ID, p.PropertyID [PropID], a.Value, a.StuID
from Student s
cross apply
(
values (s.ID, 'Name', s.Name),
(s.ID, 'Class', cast(s.Class as varchar)),
(s.ID, 'ENG', cast(s.ENG as varchar)),
(s.ID, 'TAM', cast(s.TAM as varchar)),
(s.ID, 'HIN', cast(s.HIN as varchar)),
(s.ID, 'MAT', cast(s.MAT as varchar)),
(s.ID, 'PHY', cast(s.PHY as varchar))
) as a(StuID, Property, Value)
join Property p on p.PropertyName = a.Property
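Either query can be used to actually fill the Result table; here is a hedged sketch based on the CROSS APPLY form, assuming Result has the columns (ID, PropertyID, Value, StudID) shown in the expected output:
-- Sketch only: materialize the unpivoted rows into Result
-- (Result's column list is assumed, not confirmed by the question)
insert into Result (ID, PropertyID, Value, StudID)
select row_number() over (order by a.StuID, p.PropertyID) as ID,
       p.PropertyID, a.Value, a.StuID
from Student s
cross apply
(
    values (s.ID, 'Name', s.Name),
           (s.ID, 'Class', cast(s.Class as varchar(50))),
           (s.ID, 'ENG', cast(s.ENG as varchar(50))),
           (s.ID, 'TAM', cast(s.TAM as varchar(50))),
           (s.ID, 'HIN', cast(s.HIN as varchar(50))),
           (s.ID, 'MAT', cast(s.MAT as varchar(50))),
           (s.ID, 'PHY', cast(s.PHY as varchar(50)))
) as a(StuID, Property, Value)
join Property p on p.PropertyName = a.Property;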
Related
I have two tables:
TABLE A :
CREATE TABLE z_ostan ( id NUMBER PRIMARY KEY,
name VARCHAR2(30) NOT NULL CHECK (upper(name)=name)
);
TABLE B:
CREATE TABLE z_shahr ( id NUMBER PRIMARY KEY,
name VARCHAR2(30) NOT NULL CHECK (upper(name)=name),
ref_ostan NUMBER,
CONSTRAINT fk_ref_ostan FOREIGN KEY (ref_ostan) REFERENCES z_ostan(id)
);
How can I find the ids from Table A that rank second and third for being least referenced in Table B, without using predefined functions like count()?
This only processes existing references to Table A.
Updated for Oracle (tested on 12c).
Without using any aggregate or window functions:
Sample data for Table: tblb
+----+---------+---------+
| id | name | tbla_id |
+----+---------+---------+
| 1 | TBLB_01 | 1 |
| 2 | TBLB_02 | 1 |
| 3 | TBLB_03 | 1 |
| 4 | TBLB_04 | 1 | 4 rows
| 5 | TBLB_05 | 2 |
| 6 | TBLB_06 | 2 |
| 7 | TBLB_07 | 2 | 3 rows
| 8 | TBLB_08 | 3 |
| 9 | TBLB_09 | 3 |
| 10 | TBLB_10 | 3 |
| 11 | TBLB_11 | 3 |
| 12 | TBLB_12 | 3 |
| 13 | TBLB_13 | 3 | 6 rows
| 14 | TBLB_14 | 4 |
| 15 | TBLB_15 | 4 |
| 16 | TBLB_16 | 4 | 3 rows
| 17 | TBLB_17 | 5 | 1 row
| 18 | TBLB_18 | 6 |
| 19 | TBLB_19 | 6 | 2 rows
| 20 | TBLB_20 | 7 | 1 row
+----+---------+---------+
There are many ways to express this logic. Step by step with CTE terms, the intent is, for each set of tbla_id rows in tblb:
1. Generate a row_number (n) for the rows in each partition. We would normally use window functions for this, but I assume these are not allowed.
2. Use this row_number (n) to determine the count of rows in each tbla_id partition: find the last row in each partition (from step 1).
3. Order the results of step 2 by the n of these last rows.
4. Choose the 2nd and 3rd rows of this result. Done.
WITH first AS ( -- Find the first row per tbla_id
SELECT t1.*
FROM tblb t1
LEFT JOIN tblb t2
ON t1.id > t2.id
AND t1.tbla_id = t2.tbla_id
WHERE t2.id IS NULL
)
, rnum (id, name, tbla_id, n) AS ( -- Generate a row_number (n) for each tbla_id partition
SELECT f.*, 1 FROM first f UNION ALL
SELECT n.id, n.name, n.tbla_id, c.n+1
FROM rnum c
JOIN tblb n
ON c.tbla_id = n.tbla_id
AND c.id < n.id
LEFT JOIN tblb n2
ON n.tbla_id = n2.tbla_id
AND c.id < n2.id
AND n.id > n2.id
WHERE n2.id IS NULL
)
, last AS ( -- Find the last row in each partition to obtain the count of tbla_id references
SELECT t1.*
FROM rnum t1
LEFT JOIN rnum t2
ON t1.id < t2.id
AND t1.tbla_id = t2.tbla_id
WHERE t2.id IS NULL
)
SELECT * FROM last
ORDER BY n, tbla_id OFFSET 1 ROWS FETCH NEXT 2 ROWS ONLY
;
Final Result, where n is the count of references to tbla:
+------+---------+---------+------+
| id | name | tbla_id | n |
+------+---------+---------+------+
| 20 | TBLB_20 | 7 | 1 |
| 19 | TBLB_19 | 6 | 2 |
+------+---------+---------+------+
Some intermediate results...
last CTE term result. The 2nd and 3rd rows of this become the final result.
+------+---------+---------+------+
| id | name | tbla_id | n |
+------+---------+---------+------+
| 17 | TBLB_17 | 5 | 1 |
| 20 | TBLB_20 | 7 | 1 |
| 19 | TBLB_19 | 6 | 2 |
| 7 | TBLB_07 | 2 | 3 |
| 16 | TBLB_16 | 4 | 3 |
| 4 | TBLB_04 | 1 | 4 |
| 13 | TBLB_13 | 3 | 6 |
+------+---------+---------+------+
rnum CTE term result. This provides the row_number over tbla_id partitions ordered by id
+------+---------+---------+------+
| id | name | tbla_id | n |
+------+---------+---------+------+
| 1 | TBLB_01 | 1 | 1 |
| 2 | TBLB_02 | 1 | 2 |
| 3 | TBLB_03 | 1 | 3 |
| 4 | TBLB_04 | 1 | 4 |
| 5 | TBLB_05 | 2 | 1 |
| 6 | TBLB_06 | 2 | 2 |
| 7 | TBLB_07 | 2 | 3 |
| 8 | TBLB_08 | 3 | 1 |
| 9 | TBLB_09 | 3 | 2 |
| 10 | TBLB_10 | 3 | 3 |
| 11 | TBLB_11 | 3 | 4 |
| 12 | TBLB_12 | 3 | 5 |
| 13 | TBLB_13 | 3 | 6 |
| 14 | TBLB_14 | 4 | 1 |
| 15 | TBLB_15 | 4 | 2 |
| 16 | TBLB_16 | 4 | 3 |
| 17 | TBLB_17 | 5 | 1 |
| 18 | TBLB_18 | 6 | 1 |
| 19 | TBLB_19 | 6 | 2 |
| 20 | TBLB_20 | 7 | 1 |
+------+---------+---------+------+
There are a few other ways to tackle this problem in just SQL.
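For comparison only, if aggregates were allowed the whole exercise collapses to a plain GROUP BY; this is what the recursion above emulates (it deliberately violates the question's "no count()" constraint):
-- Comparison sketch: the aggregate version of the same query (Oracle 12c / SQL Server 2012+)
SELECT tbla_id, COUNT(*) AS n
FROM tblb
GROUP BY tbla_id
ORDER BY n, tbla_id
OFFSET 1 ROWS FETCH NEXT 2 ROWS ONLY;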
I have a simple Invoice table that has each item sold and the date it was sold.
Here is some sample data produced by taking the base data and counting how many times each item was sold per week.
+------+-----------------+------------+---------+
| Week | Item_Number | Color_Code | Touches |
+------+-----------------+------------+---------+
| 1 | 11073900LRGMO | 02000 | 7 |
| 1 | 11073900MEDMO | 02000 | 9 |
| 2 | 1114900011BMO | 38301 | 62 |
| 2 | 1114910012BMO | 21701 | 147 |
| 2 | 1114910012BMO | 38301 | 147 |
| 2 | 1114910012BMO | 46260 | 147 |
| 3 | 13MK430R03R | 00101 | 2 |
| 3 | 13MK430R03R | 10001 | 2 |
| 3 | 13MK430R03R | 65004 | 8 |
| 3 | 13MK430R03S | 00101 | 2 |
| 3 | 13MK430R03S | 10001 | 2 |
+------+-----------------+------------+---------+
Then I created a matrix out of this data using a dynamic query and the PIVOT operator. Here is how I did that.
First, I create a temporary table:
DECLARE @cols AS NVARCHAR(MAX)
DECLARE @query AS NVARCHAR(MAX)
IF OBJECT_ID('tempdb..#VTable') IS NOT NULL
DROP TABLE #VTable
CREATE TABLE #VTable
(
[Item_Number] NVARCHAR(100),
[Color_Code] NVARCHAR(100),
[Item_Cost] NVARCHAR(100),
[Week] NVARCHAR(10),
[xCount] int
);
Then I insert my data into that table:
INSERT INTO #VTable
(
[Item_Number],
[Color_Code],
[Item_Cost],
[Week],
[xCount]
)
SELECT
*
FROM (
SELECT
Item_Number
,Color_Code
,Item_Cost
,Week
,Count(Item_Number) Touches
FROM (
SELECT
DATEPART (year, I.Date_Invoiced) Year
,DATEPART (month, I.Date_Invoiced) Month
,Concat(CASE WHEN DATEPART (week, I.Date_Invoiced) <10 THEN CONCAT('0',DATEPART (week, I.Date_Invoiced)) ELSE CAST(DATEPART (week, I.Date_Invoiced) AS NVARCHAR) END,'-',RIGHT(DATEPART (year, I.Date_Invoiced),2) ) WEEK
,DATEPART (day, I.Date_Invoiced) Day
,I.Invoice_Number
,I.Customer_Number
,I.Warehouse_Code
,S.Pack_Type
,S.Quantity_Per_Carton
,S.Inner_Pack_Quantity
,LTRIM(RTRIM(ID.Item_Number)) Item_Number
,LTRIM(RTRIM(ID.Color_Code)) Color_Code
,CASE
WHEN ISNULL(s.Actual_Cost, 0) = 0
THEN ISNULL(s.Standard_Cost, 0)
ELSE s.Actual_Cost
END Item_Cost
,ID.Quantity
,case when s.Pack_Type='carton' then id.Quantity/s.Quantity_Per_Carton when s.Pack_Type='Inner Poly' then id.Quantity/s.Inner_Pack_Quantity end qty
,ID.Line_Number
FROM Invoices I
LEFT JOIN Invoices_Detail ID on I.Company_Code = ID.Company_Code and I.Division_Code = ID.Division_Code and I.Invoice_Number = ID.Invoice_Number
LEFT JOIN Style S on I.Company_Code = S.Company_Code and I.Division_Code = S.Division_Code and ID.Item_Number = S.Item_Number and ID.Color_Code = S.Color_Code
WHERE 1=1
AND (I.Company_Code = @LocalCompanyCode OR @LocalCompanyCode IS NULL)
AND (I.Division_Code = @LocalDivisionCode OR @LocalDivisionCode IS NULL)
AND (I.Warehouse_Code = @LocalWarehouse OR @LocalWarehouse IS NULL)
AND (S.Pack_Type = @LocalPackType OR @LocalPackType IS NULL)
AND (I.Customer_Number = @LocalCustomerNumber OR @LocalCustomerNumber IS NULL)
AND (I.Date_Invoiced BETWEEN @LocalFromDate AND @LocalToDate)
) T
GROUP BY Item_Number,Color_Code,Item_Cost,Week
) TT
Then I use a dynamic query to create the matrix:
select @cols = STUFF((SELECT ',' + QUOTENAME(Week)
from #VTable
group by Week
order by (Right(Week,2) + LEFT(Week,2))
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query = '
SELECT
*
FROM (
SELECT Item_Number, Color_Code, Item_Cost,' + @cols + ' from
(
select Item_Number, Color_Code, Item_Cost, week, xCount
from #Vtable
) x
pivot
(
sum(xCount)
for week in (' + @cols + ')
) p
)T
'
execute(@query);
This gives me what I am looking for; here is what the matrix looks like.
+---------------+------------+-----------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
| Item_Number | Color_Code | Item_Cost | 36-18 | 37-18 | 38-18 | 39-18 | 40-18 | 41-18 | 42-18 | 43-18 | 44-18 | 45-18 | 46-18 | 47-18 | 48-18 | 49-18 | 50-18 | 51-18 | 52-18 | 53-18 | 01-19 | 02-19 | 03-19 | 04-19 | 05-19 | 06-19 | 07-19 | 08-19 | 09-19 | 10-19 | 11-19 | 12-19 | 13-19 | 14-19 | 15-19 | 16-19 | 17-19 | 18-19 | 19-19 | 20-19 | 21-19 | 22-19 | 23-19 | 24-19 | 25-19 | 26-19 | 27-19 | 28-19 | 29-19 | 30-19 | 31-19 | 32-19 | 33-19 | 34-19 | 35-19 |
+---------------+------------+-----------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
| 11073900LRGMO | 02000 | 8.51 | 1 | NULL | 13 | NULL | 3 | NULL | NULL | 3 | 3 | NULL | 4 | 3 | 6 | NULL | 4 | NULL | NULL | NULL | 7 | 4 | NULL | 3 | 2 | 5 | 30 | 7 | 3 | 10 | NULL | 9 | 19 | 5 | NULL | 10 | 9 | 5 | 2 | 3 | 5 | 4 | 3 | 9 | 7 | NULL | 5 | 1 | 3 | 5 | NULL | NULL | 11 | 7 | 3 |
| 11073900MEDMO | 02000 | 8.49 | 11 | NULL | 22 | NULL | 5 | NULL | NULL | 14 | 4 | NULL | 4 | 3 | 8 | NULL | 9 | NULL | NULL | NULL | 9 | 3 | NULL | 7 | 6 | 4 | 37 | 10 | 8 | 9 | NULL | 7 | 30 | 14 | NULL | 12 | 5 | 7 | 8 | 7 | 2 | 4 | 6 | 15 | 4 | NULL | 2 | 7 | 3 | 7 | NULL | NULL | 11 | 9 | 3 |
| 11073900SMLMO | 02000 | 8.50 | 6 | NULL | 18 | NULL | 3 | NULL | NULL | 3 | 7 | NULL | 5 | NULL | 7 | NULL | 9 | NULL | NULL | NULL | 7 | 4 | NULL | 7 | 2 | 6 | 37 | 9 | 4 | 7 | NULL | 7 | 19 | 7 | NULL | 11 | 5 | 7 | 7 | 2 | 3 | 8 | 8 | 9 | 2 | NULL | 2 | 2 | 2 | 4 | NULL | NULL | 8 | 5 | 4 |
| 11073900XLGMO | 02000 | 8.51 | 2 | NULL | 6 | NULL | 3 | NULL | NULL | 2 | 4 | NULL | 3 | 1 | 3 | NULL | 4 | NULL | NULL | NULL | 4 | 4 | NULL | NULL | 3 | 1 | 27 | 4 | 3 | 4 | NULL | 8 | 11 | 9 | NULL | 7 | 2 | 4 | 1 | 5 | 1 | 6 | 5 | 6 | 1 | NULL | 1 | 3 | NULL | 3 | NULL | NULL | 3 | 4 | 2 |
+---------------+------------+-----------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+-------+
The last thing I want to do is find a good way to sort this table. I think the best way to do that would be to sort by which item numbers are picked the most across all weeks. A column-wise sum will give me the total number of touches per week for all items, but I want a row-wise sum, where there is another column at the end that has the touches per item. Does anyone know how I would do this? I've tried messing around with another dynamic query from this link -> (calculate Row Wise Sum - Sql server )
but I couldn't get it to work.
Here is a quick-and-dirty solution based on this answer to do the "coalesce sum" over your wk-yr columns. This does not modify your existing code, but for efficiency it may be better to do as @Sean Lange suggests.
Tested on the latest SQL Server 2017 (Linux Docker image).
Input dataset:
(Only 3 wk-yr columns here for simplicity; the code should work with an arbitrary number of columns):
create table WeeklySum (
Item_Number varchar(50),
Color_Code varchar(10),
Item_Cost float,
[36-18] float,
[37-18] float,
[38-18] float
)
insert into WeeklySum (Item_Number, Color_Code, Item_Cost, [36-18], [37-18], [38-18])
values ('11073900LRGMO', '02000', 8.51, 1, NULL, 13),
('11073900MEDMO', '02000', 8.49, 11, NULL, 22),
('11073900SMLMO', '02000', 8.50, 6, NULL, 18),
('11073900XLGMO', '02000', 8.51, 2, NULL, 6);
select * from WeeklySum;
Sample Code:
/* 1. Expression of the sum of coalesce(wk-yr, 0) */
declare @s varchar(max);
-- In short, this query selects the wanted columns by exclusion in sys.columns
-- and then builds the "coalesce sum" over the selected columns in a row.
-- The "@s = coalesce()" expression avoids a redundant '+' at the beginning.
-- NOTE: May have to change sys.columns -> syscolumns for SQL Server 2005
-- or earlier versions
select @s = coalesce(@s + ' + coalesce([' + C.name + '], 0)', 'coalesce([' + C.name + '], 0)')
from sys.columns as C
where C.object_id = (select top 1 object_id from sys.objects
where name = 'WeeklySum')
and C.name not in ('Item_Number', 'Color_Code', 'Item_Cost');
print @s;
/* 2. Perform the sorting query */
declare @sql varchar(max);
set @sql = 'select *, ' + @s + ' as totalCount ' +
'from WeeklySum ' +
'order by totalCount desc';
print @sql;
execute(@sql);
Output:
| Item_Number | Color_Code | Item_Cost | 36-18 | 37-18 | 38-18 | totalCount |
|---------------|------------|-----------|-------|-------|-------|------------|
| 11073900MEDMO | 02000 | 8.49 | 11 | NULL | 22 | 33 |
| 11073900SMLMO | 02000 | 8.5 | 6 | NULL | 18 | 24 |
| 11073900LRGMO | 02000 | 8.51 | 1 | NULL | 13 | 14 |
| 11073900XLGMO | 02000 | 8.51 | 2 | NULL | 6 | 8 |
Also check the generated expressions in the Messages window:
@s:
coalesce([36-18], 0) + coalesce([37-18], 0) + coalesce([38-18], 0)
@sql:
select *, coalesce([36-18], 0) + coalesce([37-18], 0) + coalesce([38-18], 0) as totalCount from WeeklySum order by totalCount desc
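Going back to the #VTable and @cols from the question, a possible alternative (a hedged sketch, not tested against the original data) is to compute the row total from the unpivoted data before pivoting and order by it:
-- Sketch: derive the per-item total from #VTable instead of summing the pivoted columns
-- (assumes #VTable and @cols exist as defined in the question's script)
declare @query2 nvarchar(max);
set @query2 = '
SELECT p.*, t.totalCount
FROM (
    select Item_Number, Color_Code, Item_Cost, week, xCount
    from #VTable
) x
pivot (sum(xCount) for week in (' + @cols + ')) p
JOIN (
    select Item_Number, Color_Code, sum(xCount) as totalCount
    from #VTable
    group by Item_Number, Color_Code
) t ON t.Item_Number = p.Item_Number AND t.Color_Code = p.Color_Code
ORDER BY t.totalCount DESC';
execute(@query2);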
Here is my table A.
| Id | GroupId | StoreId | Amount |
| 1 | 20 | 7 | 15000 |
| 2 | 20 | 7 | 1230 |
| 3 | 20 | 7 | 14230 |
| 4 | 20 | 7 | 9540 |
| 5 | 20 | 7 | 24230 |
| 6 | 20 | 7 | 1230 |
| 7 | 20 | 7 | 1230 |
Here is my table B.
| Id | GroupId | StoreId | Credit |
| 12 | 20 | 7 | 1230 |
| 14 | 20 | 7 | 15000 |
| 15 | 20 | 7 | 14230 |
| 16 | 20 | 7 | 1230 |
| 17 | 20 | 7 | 7004 |
| 18 | 20 | 7 | 65523 |
I want to get this result without duplicating the Ids of either table.
I need to get the Ids of tables B and A where Amount = Credit.
| A.ID | B.ID | Amount |
| 1 | 14 | 15000 |
| 2 | 12 | 1230 |
| 3 | 15 | 14230 |
| 4 | null | 9540 |
| 5 | null | 24230 |
| 6 | 16 | 1230 |
| 7 | null | 1230 |
My problem is that when I have two or more rows with the same Amount in table A, I get a duplicate ID from table B where it should be null. Please help me. Thank you.
I think you want a left join. This is tricky, though, because you have duplicate amounts but only want one of them to match. The solution is to use row_number():
select . . .
from (select a.*, row_number() over (partition by amount order by id) as seqnum
      from a
     ) a left join
     (select b.*, row_number() over (partition by credit order by id) as seqnum
      from b
     ) b
  on a.amount = b.credit and a.seqnum = b.seqnum;
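For reference, here is a hedged, filled-in version of the sketch above, using the TABLE_A/TABLE_B names and columns from the question (nothing beyond those is assumed):
-- Pairs the Nth duplicate Amount in A with the Nth duplicate Credit in B
select a.Id as [A.ID], b.Id as [B.ID], a.Amount
from (select a.*, row_number() over (partition by Amount order by Id) as seqnum
      from TABLE_A a
     ) a left join
     (select b.*, row_number() over (partition by Credit order by Id) as seqnum
      from TABLE_B b
     ) b
  on a.Amount = b.Credit and a.seqnum = b.seqnum
order by a.Id;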
Another approach, which I think is simpler and shorter :)
select ID [A.ID],
(select top 1 ID from TABLE_B where Credit = A.Amount) [B.ID],
Amount
from TABLE_A [A]
Code:
select name,rank() over (order by name asc) as rank,
ABS(CAST(CAST(NEWID() AS VARBINARY) AS INT)) AS [RandomNumber]
from Student
I would like to combine the random number and the name to get the rank.
I do not know why you want to use a random number with rank() instead of just using row_number(), but it goes something like this:
rextester: http://rextester.com/OLKI98516
create table student(
id int not null identity(1,1) primary key
, name varchar(255) not null
);
insert into student values
('Santosh'),('Kumar'),('Reddy'),('Badugula'),('SqlZim')
,('Emma'),('Xandra'),('Naida'),('Daria'),('Colby'),('Yetta')
,('Zena'),('Deacon'),('Francis'),('Lilah'),('Risa'),('Lee')
,('Vanna'),('Molly'),('Destiny'),('Tallulah'),('Meghan')
,('Deacon'),('Francis'),('Daria'),('Colby');
select
name
, RandomNumber = abs(cast(cast(newid() as varbinary) as int))
, Name_w_RandomNumber = concat(name, '_', abs(cast(cast(newid() as varbinary) as int)))
, rank = rank() over (order by name asc)
, row_number = row_number() over (order by name asc)
, rank_w_Rand = rank() over (order by name,abs(cast(cast(newid() as varbinary) as int)) asc)
from student
results:
+----------+--------------+---------------------+------+------------+-------------+
| name | RandomNumber | Name_w_RandomNumber | rank | row_number | rank_w_Rand |
+----------+--------------+---------------------+------+------------+-------------+
| Badugula | 1105357025 | Badugula_1036749632 | 1 | 1 | 1 |
| Colby | 1125329440 | Colby_1442709274 | 2 | 2 | 2 |
| Colby | 1891932149 | Colby_1045919975 | 2 | 3 | 3 |
| Daria | 1494409363 | Daria_112566484 | 4 | 4 | 4 |
| Daria | 666341314 | Daria_262264162 | 4 | 5 | 5 |
| Deacon | 1530588472 | Deacon_1783529467 | 6 | 6 | 6 |
| Deacon | 350443065 | Deacon_1150932866 | 6 | 7 | 7 |
| Destiny | 2007923301 | Destiny_793747374 | 8 | 8 | 8 |
| Emma | 435476101 | Emma_659930976 | 9 | 9 | 9 |
| Francis | 1638790395 | Francis_2132056162 | 10 | 10 | 10 |
| Francis | 793873129 | Francis_756254272 | 10 | 11 | 11 |
| Kumar | 20071275 | Kumar_2007808448 | 12 | 12 | 12 |
| Lee | 2069120264 | Lee_837143565 | 13 | 13 | 13 |
| Lilah | 1319087807 | Lilah_605243166 | 14 | 14 | 14 |
| Meghan | 487733175 | Meghan_1884481541 | 15 | 15 | 15 |
| Molly | 2086860257 | Molly_1914281986 | 16 | 16 | 16 |
| Naida | 169335218 | Naida_719205571 | 17 | 17 | 17 |
| Reddy | 528578158 | Reddy_1297094295 | 18 | 18 | 18 |
| Risa | 1826403411 | Risa_1530611023 | 19 | 19 | 19 |
| Santosh | 723134579 | Santosh_487617337 | 20 | 20 | 20 |
| SqlZim | 937324776 | SqlZim_738072767 | 21 | 21 | 21 |
| Tallulah | 521881065 | Tallulah_1717653898 | 22 | 22 | 22 |
| Vanna | 1508284361 | Vanna_1620612208 | 23 | 23 | 23 |
| Xandra | 532483290 | Xandra_493053714 | 24 | 24 | 24 |
| Yetta | 1735945301 | Yetta_1548495144 | 25 | 25 | 25 |
| Zena | 311372084 | Zena_1429570716 | 26 | 26 | 26 |
+----------+--------------+---------------------+------+------------+-------------+
Here is the query I was referring to in my comment; not pretty, but functional. You asked, so here it is. If I am correct, I still would never use a random value myself, as each time you run it you will get different results.
select name,
       rank() over (order by name,
                    ABS(CAST(CAST(NEWID() AS VARBINARY) AS INT)) asc) as rank
from Student
EDIT: Here it is with a CTE to show the random number. NEWID() is guaranteed unique, but I am not sure that still holds after the ABS/CAST conversion; you will need to look into that.
with cteQry As
( select name, ABS(CAST(CAST(NEWID() AS VARBINARY) AS INT)) NewIdVal
from Student
)
select name, NewIdVal,
rank() over (order by name, NewIdVal asc) as rank
from cteQry
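As an aside (not part of the answer above), a common alternative for a per-row random integer in SQL Server is ABS(CHECKSUM(NEWID())), which avoids the varbinary cast; a sketch of the same CTE using it:
-- Variant using CHECKSUM(NEWID()) for the random tiebreaker
with cteQry as
( select name, ABS(CHECKSUM(NEWID())) NewIdVal
  from Student
)
select name, NewIdVal,
       rank() over (order by name, NewIdVal asc) as rank
from cteQry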
I have data as per the table below. I pass in a list of numbers and need the raceId where all the numbers appear in the data column for that race.
+-----+--------+------+
| Id | raceId | data |
+-----+--------+------+
| 14 | 1 | 1 |
| 12 | 1 | 2 |
| 13 | 1 | 3 |
| 16 | 1 | 8 |
| 47 | 2 | 1 |
| 43 | 2 | 2 |
| 46 | 2 | 6 |
| 40 | 2 | 7 |
| 42 | 2 | 8 |
| 68 | 3 | 3 |
| 69 | 3 | 6 |
| 65 | 3 | 7 |
| 90 | 4 | 1 |
| 89 | 4 | 2 |
| 95 | 4 | 6 |
| 92 | 4 | 7 |
| 93 | 4 | 8 |
| 114 | 5 | 1 |
| 116 | 5 | 2 |
| 117 | 5 | 3 |
| 118 | 5 | 8 |
| 138 | 6 | 2 |
| 139 | 6 | 6 |
| 140 | 6 | 7 |
| 137 | 6 | 8 |
+-----+--------+------+
For example, if I pass in 1, 2, 7 I would get the following raceIds:
2 and 4
I have tried the simple statement
SELECT * FROM table WHERE ((data = 1) or (data = 2) or (data = 7))
But I don't really understand the GROUP BY clause, or indeed whether this is the correct way of doing it.
select raceId
from yourtable
where data in (1,2,7)
group by raceId
having count(raceId) = 3 /* length(1,2,7) */
This assumes the (raceId, data) pair is unique. If it's not, then you should use:
select raceId
from (select distinct raceId, data
      from yourtable
      where data in (1,2,7)) t
group by raceId
having count(raceId) = 3
SELECT DISTINCT raceId FROM yourtable WHERE data IN (1, 2, 7)
This is an example of a "set-within-sets" query. I like to solve these with group by and having.
select raceid
from races
where data in (1, 2, 7)
group by raceid
having count(*) = 3;
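If a (raceid, data) pair can repeat, a hedged variant of the same idea counts distinct values instead, so duplicates do not inflate the count:
-- Variant that tolerates duplicate (raceid, data) rows
select raceid
from races
where data in (1, 2, 7)
group by raceid
having count(distinct data) = 3;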