SQL Case statement with reference to different column in same row - sql

I'm trying to write a select statement that identifies whether a customer has signed up multiple times in the past 3 months, while still returning all the data in the table I'm working from (example below).
I imagine the case expression should look something like this, but I do not know how to reference the row I want the result to populate in:
case when name in
(select name
from table1
where count(name from this row) > 1
and count(email from this row) > 1
and (date from this row) >= getdate()-120
end
The table has 3 columns:
name, email, date
Any help? Thanks.

If your database is SQL Server 2005 or above, then you can write a query as:
create table table1 (name varchar(20), email varchar(50), date datetime);
insert into table1 values ('Name1','email@abc.com',getdate()-30);
insert into table1 values ('Name1','email@abc.com',getdate()-60);
insert into table1 values ('Name1','email@abc.com',getdate());
insert into table1 values ('Name2','email2@abc.com',getdate()-20);
insert into table1 values ('Name3','email3@abc.com',getdate());
insert into table1 values ('Name4','email4@abc.com',getdate()-120);
with CTE as
(
select
row_number() over (partition by [name],[email] order by [name] asc) as rownum,
[name],
[email],
[date]
from table1
where [date] >= GETDATE() - 120
)
select * from CTE
where rownum >= 3
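An alternative to the CTE above is a COUNT window function: every row keeps all of its columns, and an extra signups column says how many times that (name, email) pair appears in the window, which directly answers the "flag repeats while still providing all the data" requirement. A minimal sketch of the idea, translated to SQLite so it is self-contained (dates as text, the 120-day cutoff via date()); the same count(*) over (partition by ...) works in SQL Server 2005+ with GETDATE() arithmetic.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table table1 (name text, email text, date text);
insert into table1 values
  ('Name1','email@abc.com',  date('now','-30 days')),
  ('Name1','email@abc.com',  date('now','-60 days')),
  ('Name1','email@abc.com',  date('now')),
  ('Name2','email2@abc.com', date('now','-20 days')),
  ('Name3','email3@abc.com', date('now')),
  ('Name4','email4@abc.com', date('now','-120 days'));
""")

rows = conn.execute("""
select name, email, date,
       count(*) over (partition by name, email) as signups
from table1
where date >= date('now','-120 days')
""").fetchall()

# signups > 1 marks repeat sign-ups; every original column is still present
repeats = [r for r in rows if r[3] > 1]
```

Here only Name1 is flagged (three sign-ups inside the window); the single-sign-up rows come back with signups = 1 instead of being filtered out.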

Related

How can I print the first three rows and the last three rows one under another in ORACLE SQL?

In ORACLE SQL, I want to print the first two and the last two rows one under the other. How can I do this with one query? Suppose you have a table with ten rows. I want to print the first two rows and the last two rows as below:
Row Number | Values
1          | A
2          | B
9          | C
10         | D
You can use the row_number() over (order by row_number) window function to achieve what you are looking for. You can even take the first two rows ordered by row_number and the last two rows ordered by the valuess column if you want: row_number() over (order by row_number) FirstRowNumber, row_number() over (order by valuess desc) LastRowNumber.
Table structure and insert statement:
create table tableName (Row_Number int, Valuess varchar(20));
insert into tablename values(1,'A');
insert into tablename values(2,'B');
insert into tablename values(9,'C');
insert into tablename values(10,'D');
insert into tablename values(11,'E');
insert into tablename values(12,'F');
Query:
SELECT ROW_NUMBER, Valuess
FROM (
  select ROW_NUMBER, Valuess,
         row_number() over (order by row_number)      as FirstRowNumber,
         row_number() over (order by row_number desc) as LastRowNumber
  from tablename
) T
WHERE FirstRowNumber <= 2 OR LastRowNumber <= 2
ORDER BY FirstRowNumber
Output:
ROW_NUMBER | VALUESS
1          | A
2          | B
11         | E
12         | F
I think you need the query below:
select t.*
from (select tablename.*,
             row_number() over (order by Row_Number) as seqnum,
             count(*) over () as cnt
      from tablename
     ) t
where seqnum in (1, 2) or seqnum in (cnt, cnt - 1);
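The two-row_number trick in the accepted answer can be checked end to end. A small sketch translated to SQLite (any window-function-capable engine behaves the same), reusing the question's table and column names as-is:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table tablename (Row_Number int, Valuess text);
insert into tablename values
  (1,'A'),(2,'B'),(9,'C'),(10,'D'),(11,'E'),(12,'F');
""")

rows = conn.execute("""
select Row_Number, Valuess from (
  select Row_Number, Valuess,
         row_number() over (order by Row_Number)      as FirstRowNumber,
         row_number() over (order by Row_Number desc) as LastRowNumber
  from tablename
)
where FirstRowNumber <= 2 or LastRowNumber <= 2
order by FirstRowNumber
""").fetchall()
# -> [(1, 'A'), (2, 'B'), (11, 'E'), (12, 'F')]
```

The ascending numbering picks the first two rows, the descending numbering picks the last two, and the OR combines them in a single pass over the table.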

Finding a random sample of unique data across multiple columns - SQL Server

Given a set of data in a SQL Server database with the following columns
AccountID, UserID_Salesperson, UserID_Servicer1, UserID_Servicer2
All three UserID columns are foreign keys to the same users table. I need to find a random sample that includes every UserID that appears in any of the three columns, no matter the position, while guaranteeing the fewest unique AccountIDs possible.
--SET UP TEST DATA
CREATE TABLE MY_TABLE
(
AccountID int,
UserID_Salesperson int,
UserID_Servicer1 int,
UserID_Servicer2 int
)
INSERT INTO MY_TABLE (AccountID, UserID_Salesperson, UserID_Servicer1, UserID_Servicer2)
VALUES (12345, 1, 1, 2)
INSERT INTO MY_TABLE (AccountID, UserID_Salesperson, UserID_Servicer1, UserID_Servicer2)
VALUES (12346, 3, 2, 1)
INSERT INTO MY_TABLE (AccountID, UserID_Salesperson, UserID_Servicer1, UserID_Servicer2)
VALUES (12347, 4, 3, 1)
INSERT INTO MY_TABLE (AccountID, UserID_Salesperson, UserID_Servicer1, UserID_Servicer2)
VALUES (12348, 1, 2, 3)
--VIEW THE NEW TABLE
SELECT * FROM MY_TABLE
--NORMALIZE DATA (Unique List of UserID's)
SELECT DISTINCT MyDistinctUserIDList
FROM
(SELECT UserID_Salesperson as MyDistinctUserIDList, 'Sales' as Position
FROM MY_TABLE
UNION
SELECT UserID_Servicer1, 'Service1' as Position
FROM MY_TABLE
UNION
SELECT UserID_Servicer2, 'Service2' as Position
FROM MY_TABLE) MyDerivedTable
--NORMALIZED DATA
SELECT *
FROM
(SELECT AccountID, UserID_Salesperson as MyDistinctUserIDList, 'Sales' as Position
FROM MY_TABLE
UNION
SELECT AccountID, UserID_Servicer1, 'Service1' as Position
FROM MY_TABLE
UNION
SELECT AccountID, UserID_Servicer2, 'Service2' as Position
FROM MY_TABLE) MyDerivedTable
DROP TABLE MY_TABLE
For this example table, I could select AccountIDs (12347 and 12348) or (12347 and 12346) to get the fewest accounts covering all users.
My current solution is inefficient and can make mistakes. I currently select a random AccountID, insert the data into a temp table, and try to find the next insert from something I have not already put in the temp table. I loop through the records until it finds something not used before… and after a few thousand loops it will give up and select any record.
I don't know how you guarantee the fewest account ids, but you can get one row per user id using:
select t.*
from (select t.*,
row_number() over (partition by UserId order by newid()) as seqnum
from my_table t cross apply
(values (t.UserID_Salesperson), (t.UserID_Servicer1), (t.UserID_Servicer2)
) v(UserID)
) t
where seqnum = 1;
Your original table doesn't have a primary key. Assuming that there is one row per account, you can dedup this so it doesn't have duplicate accounts:
select top (1) with ties t.*
from (select t.*,
row_number() over (partition by UserId order by newid()) as seqnum
from my_table t cross apply
(values (t.UserID_Salesperson), (t.UserID_Servicer1), (t.UserID_Servicer2)
) v(UserID)
) t
where seqnum = 1
order by row_number() over (partition by accountID order by accountID);
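On the "fewest AccountIDs" part: choosing the minimum number of accounts that together cover every UserID is an instance of the minimum set-cover problem, which is NP-hard in general, so neither the looping approach nor a single SQL query can guarantee an optimal answer cheaply. A common practical compromise is the greedy heuristic: repeatedly pick the account that covers the most still-uncovered users. A sketch in Python; the greedy_cover name and the dict shape are illustrative assumptions, not part of the question's schema.

```python
def greedy_cover(accounts):
    """accounts: dict mapping AccountID -> set of UserIDs appearing on that
    account (in any of the three columns). Greedily picks accounts until
    every UserID is covered; near-optimal, not guaranteed optimal."""
    uncovered = set().union(*accounts.values())
    chosen = []
    while uncovered:
        # pick the account covering the most uncovered users
        best = max(accounts, key=lambda a: len(accounts[a] & uncovered))
        chosen.append(best)
        uncovered -= accounts[best]
    return chosen

# sample data from the question: each account's users across all 3 columns
accounts = {
    12345: {1, 2},
    12346: {3, 2, 1},
    12347: {4, 3, 1},
    12348: {1, 2, 3},
}
picked = greedy_cover(accounts)
```

On the question's sample data the greedy pick covers all four users with two accounts, matching the hand-found optimum; on larger inputs greedy is within a logarithmic factor of optimal, which is usually good enough for audit sampling.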

Counting repeated data

I'm trying to get the most-repeated value in a table. I have tried many ways but could not make it work. The result I'm looking for is:
"james";"108"
This 108 (the concat of the two fields loca + locb) is repeated two times, but the others are not. The sample table structure and the query I tried are at the sqlfiddle link below.
Query I tried is :
select * from (
select name,CONCAT(loca,locb),loca,locb
, row_number() over (partition by CONCAT(loca,locb) order by CONCAT(loca,locb) ) as att
from Table1
) tt
where att=1
Edit: adding complete table structure and data:
CREATE TABLE Table1
(name varchar(50),loca int,locb int)
;
insert into Table1 values ('james',100,2);
insert into Table1 values ('james',100,3);
insert into Table1 values ('james',10,8);
insert into Table1 values ('james',10,8);
insert into Table1 values ('james',10,7);
insert into Table1 values ('james',10,6);
insert into Table1 values ('james',0,7);
insert into Table1 values ('james',10,0);
insert into Table1 values ('james',10,null);
insert into Table1 values ('james',10,null);
What I'm looking for is to get (james, 108), as that value is repeated two times in the entire data. There is repetition of (james, 10), but those rows have a null locb. Zero values and null values are to be ignored; only rows that have a value in both loca and locb should be considered.
SQL Fiddle
select distinct on (name) *
from (
select name, loca, locb, count(*) as total
from Table1
where loca is not null and locb is not null
group by 1,2,3
) s
order by name, total desc
WITH concat AS (
-- get concat values
SELECT name,concat(loca,locb) as merged
FROM table1 t1
WHERE t1.locb NOTNULL
AND t1.loca NOTNULL
), concat_count AS (
-- calculate count for concat values
SELECT name,merged,count(*) OVER (PARTITION BY name,merged) as merged_count
FROM concat
)
SELECT cc.name,cc.merged
FROM concat_count cc
WHERE cc.merged_count = (SELECT max(merged_count) FROM concat_count)
GROUP BY cc.name,cc.merged;
SqlFiddleDemo
select name,
newvalue
from (
select name,
CONCAT(loca,locb) newvalue,
COUNT(CONCAT(loca,locb)) as total,
row_number() over (order by COUNT(CONCAT(loca,locb)) desc) as att
from Table1
where loca is not null
and locb is not null
GROUP BY name, CONCAT(loca,locb)
) tt
where att=1
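The group-by-and-count approach in the answers above can be verified end to end. A self-contained translation to SQLite (concatenation via ||, nulls filtered as in the answers); only the most frequent pair survives the order by total desc / limit 1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table Table1 (name text, loca int, locb int);
insert into Table1 values
  ('james',100,2),('james',100,3),('james',10,8),('james',10,8),
  ('james',10,7),('james',10,6),('james',0,7),('james',10,0),
  ('james',10,null),('james',10,null);
""")

row = conn.execute("""
select name, loca || locb as merged, count(*) as total
from Table1
where loca is not null and locb is not null
group by name, merged
order by total desc
limit 1
""").fetchone()
# -> ('james', '108', 2)
```

The two ('james', 10, null) rows drop out of the count because of the null filter, so (james, 108) is the only pair that repeats.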

Get detailed number of Rows affected by insert statement

My table is set up with one of the columns named "Month".
I run a sql insert statement on the table, and get the number of rows affected.
Is there a way to get the number of rows affected for each value of "Month"?
If my table is partition on the "Month" column, can that help?
My SQL is a standard one, like the following:
INSERT INTO TargetTable
(Column1, Column2, Month, Column4, ....)
SELECT column_names
FROM SourceTable;
I'm using SqlCommand from .Net's SqlClient to pass the sql into SQL server.
A single insert statement returns a single rows-affected count. One option is to perform a separate insert for each value of "Month", so 12 insert statements instead of 1; that will have some performance impact.
Alternatively, you could load the rows to be inserted into a temp table and do the insert and then report on things, something along these lines:
create table #work
( month                     int not null ,
  primary_key_of_real_table int not null
)
insert #work
select t.month , t.primary_key_column
from source_table t
where -- your filter criteria here
insert target_table
( ... column list ... )
select ... column list ...
from #work t
join source_table x on x.primary_key_column =t.primary_key_of_real_table
select m.month , cnt = count(t.month)
from ( select month = 1 union all
       select month = 2 union all
       ...
       select month = 12
     ) m
left join #work t on t.month = m.month
group by m.month
order by m.month
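The stage-then-insert idea above can be sketched end to end. A self-contained SQLite stand-in (table and column names are the answer's hypothetical ones, not from the question's real schema): stage the keys, do the real insert from the staged keys, then report per-month counts from the staging table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table SourceTable (id integer primary key, month int, payload text);
create table TargetTable (id int, month int, payload text);
create temp table work (month int, src_id int);

insert into SourceTable (month, payload) values
  (1,'a'),(1,'b'),(2,'c'),(3,'d'),(3,'e'),(3,'f');

-- stage the keys of the rows that will be inserted
insert into work select month, id from SourceTable;

-- perform the real insert from the staged keys
insert into TargetTable
select s.id, s.month, s.payload
from work w join SourceTable s on s.id = w.src_id;
""")

per_month = conn.execute(
    "select month, count(*) from work group by month order by month"
).fetchall()
# -> [(1, 2), (2, 1), (3, 3)]
```

On SQL Server specifically, the OUTPUT clause (INSERT ... OUTPUT inserted.Month INTO @captured) can capture the inserted rows in one statement without a pre-insert staging step, which avoids the extra join.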

Select latest revision of each row in a table

I have table structures that include a composite primary key of id & revision where both are integers.
I need a query that will return the latest revision of each row. If I understood this answer correctly then the following would have worked on an Oracle DB.
SELECT Id, Title
FROM ( SELECT Id, Revision, MAX(Revision) OVER (PARTITION BY Id) LatestRevision FROM Task )
WHERE Revision = LatestRevision
I am using SQL Server (2005) and need a performant query to do the same.
I think this should work (I didn't test it)...
SELECT ID,
Title
FROM Task AS T
INNER JOIN
(
SELECT ID,
Max(Revision) AS Revision
FROM Task
GROUP BY ID
) AS sub
ON T.ID = sub.ID
AND T.Revision = sub.Revision
See this post by Ayende for an evaluation of the best strategies.
I would try to create a subquery like this:
SELECT Id, Title
FROM Task T, (Select ID, Max(Revision) MaxRev from Task group by ID) LatestT
WHERE T.Revision = LatestT.MaxRev and T.ID = LatestT.ID
Another option is to "cheat" and create a trigger that flags the revision as the latest revision when an item is added.
Then add that field to the index. (I would limit the trigger to inserts only.)
Also, an index on (ID, Revision desc) could help the performance.
The query you posted will work in SQL 2005 (in compatibility mode 90) with the syntax errors corrected:
SELECT t1.Id, t1.Title
FROM ( SELECT Id, Revision, MAX(Revision) OVER (PARTITION BY Id) LatestRevision FROM Task ) AS x
JOIN Task as t1
ON t1.Revision = x.LatestRevision
AND t1.id = x.id
try this:
DECLARE #YourTable table(RowID int, Revision int, Title varchar(10))
INSERT INTO #YourTable VALUES (1,1,'A')
INSERT INTO #YourTable VALUES (2,1,'B')
INSERT INTO #YourTable VALUES (2,2,'BB')
INSERT INTO #YourTable VALUES (3,1,'C')
INSERT INTO #YourTable VALUES (4,1,'D')
INSERT INTO #YourTable VALUES (1,2,'AA')
INSERT INTO #YourTable VALUES (2,3,'BBB')
INSERT INTO #YourTable VALUES (5,1,'E')
INSERT INTO #YourTable VALUES (5,2,'EE')
INSERT INTO #YourTable VALUES (4,2,'DD')
INSERT INTO #YourTable VALUES (4,3,'DDD')
INSERT INTO #YourTable VALUES (6,1,'F')
;WITH YourTableRank AS
(
SELECT
RowID,Revision,Title, ROW_NUMBER() OVER(PARTITION BY RowID ORDER BY RowID,Revision DESC) AS Rank
FROM #YourTable
)
SELECT
RowID, Revision, Title
FROM YourTableRank
WHERE Rank=1
OUTPUT:
RowID Revision Title
----------- ----------- ----------
1 2 AA
2 3 BBB
3 1 C
4 3 DDD
5 2 EE
6 1 F
(6 row(s) affected)
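The ROW_NUMBER pattern in this last answer runs unchanged on any window-function-capable engine, so it is easy to verify against the output shown above. A self-contained SQLite check of the same data and query shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table YourTable (RowID int, Revision int, Title text);
insert into YourTable values
  (1,1,'A'),(2,1,'B'),(2,2,'BB'),(3,1,'C'),(4,1,'D'),(1,2,'AA'),
  (2,3,'BBB'),(5,1,'E'),(5,2,'EE'),(4,2,'DD'),(4,3,'DDD'),(6,1,'F');
""")

rows = conn.execute("""
with ranked as (
  select RowID, Revision, Title,
         row_number() over (partition by RowID order by Revision desc) as rn
  from YourTable
)
select RowID, Revision, Title from ranked
where rn = 1
order by RowID
""").fetchall()
# -> [(1,2,'AA'), (2,3,'BBB'), (3,1,'C'), (4,3,'DDD'), (5,2,'EE'), (6,1,'F')]
```

Partitioning by RowID and ordering each partition by Revision desc means rn = 1 is always the highest revision per id, which matches the six-row output listed in the answer.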