Finding a match for an entry on a date - SQL

I am looking for a query that would find all entries that have a login without a logout.
My data looks like this
Key Date Employee
LOGIN 20171225 111
LOGIN 20171225 111
LOGIN 20171226 111
There should be a record here. I need to catch that.
LOGIN 20171227 111
LOGIN 20171227 111
12345 20171227 222 (There is also a LOT of other random data in the table.)
Select Date, Employee
From [My Table]
Where [Key] = 'LOGIN'
Group by Date, Employee
Order by Employee
I don't know how to filter so I can see whether there are one or two logins for a given day. I need to find the days where there's only one, because that indicates the employee has not logged out. The query above isn't giving me the correct information.
Thank you.

You might use this
DECLARE @dummyTbl TABLE([Key] VARCHAR(100),[Date] DATE,Employee INT);
INSERT INTO @dummyTbl VALUES
('LOGIN','20171225',111)
,('LOGIN','20171225',111)
,('LOGIN','20171226',111)
,('LOGIN','20171227',111)
,('LOGIN','20171227',111);
SELECT *
FROM @dummyTbl
GROUP BY [Key],[Date],Employee
HAVING COUNT(*)=1
But I wonder: why is your Key LOGIN in both cases? Why not use LOGOUT for the second entry?

If your key is always LOGIN, then you really want to look for any ODD number of entries, which you can do by checking that the remainder (%) of division by 2 is not equal to 0.
DECLARE @dummyTbl TABLE([Key] VARCHAR(100),[Date] DATE,Employee INT);
INSERT INTO @dummyTbl VALUES
('LOGIN','20171225',111)
,('LOGIN','20171225',111)
,('LOGIN','20171226',111)
,('LOGIN','20171227',111)
,('LOGIN','20171227',111);
SELECT *
FROM @dummyTbl
GROUP BY [Key],[Date],Employee
HAVING COUNT(*) % 2 <> 0
If you have multiple keys (LOGIN/LOGOUT) and you are attempting to figure out whether the user is logged out, then it is best to look at the last value for the user for the day: if it is not LOGOUT, you know they are still logged in.
DECLARE @dummyTbl TABLE(Id INT IDENTITY(1,1), [Key] VARCHAR(100),[Date] DATE,Employee INT);
INSERT INTO @dummyTbl VALUES
('LOGIN','20171225',111)
,('LOGIN','20171225',111)
,('LOGOUT','20171225',111)
,('LOGIN','20171226',111)
,('LOGIN','20171227',111)
,('LOGOUT','20171227',111);
;WITH cteRowNum AS (
SELECT *
,LastDailyActivityRowNum = ROW_NUMBER() OVER (PARTITION BY Date, Employee ORDER BY Id DESC)
FROM
@dummyTbl
)
SELECT *
FROM
cteRowNum
WHERE
LastDailyActivityRowNum = 1
AND [Key] = 'LOGIN'
If you potentially have dirty data (a missing login or logout record), then it gets a bit more complicated and you will have to make some business decisions, but the last-record method is still generally the way to go. When you have employees who can work past midnight without logging out, it gets a bit more complicated too.
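For that messier case, a minimal sketch (my illustration, not part of the answer above): it assumes an event timestamp column, called EventTime here, because a date alone cannot order same-day events, and it uses LEAD (SQL Server 2012+) to flag any LOGIN that is not immediately followed by a LOGOUT, even across midnight.
DECLARE @log TABLE([Key] VARCHAR(100), EventTime DATETIME, Employee INT);
WITH ordered AS (
    SELECT *,
           LEAD([Key]) OVER (PARTITION BY Employee ORDER BY EventTime) AS NextKey
    FROM @log
    WHERE [Key] IN ('LOGIN','LOGOUT')   -- ignore the other random data in the table
)
SELECT Employee, EventTime AS UnmatchedLogin
FROM ordered
WHERE [Key] = 'LOGIN'
  AND (NextKey IS NULL OR NextKey <> 'LOGOUT');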

Related

Split one large, denormalized table into a normalized database

I have a large (5 million row, 300+ column) csv file I need to import into a staging table in SQL Server, then run a script to split each row up and insert data into the relevant tables in a normalized db. The format of the source table looks something like this:
(fName, lName, licenseNumber1, licenseIssuer1, licenseNumber2, licenseIssuer2..., specialtyName1, specialtyState1, specialtyName2, specialtyState2..., identifier1, identifier2...)
There are 50 licenseNumber/licenseIssuer columns, 15 specialtyName/specialtyState columns, and 15 identifier columns. There is always at least one of each of those, but the remaining 49 or 14 could be null. The first identifier is unique, but is not used as the primary key of the Person in our schema.
My database schema looks like this
People(ID int Identity(1,1))
Names(ID int, personID int, lName varchar, fName varchar)
Licenses(ID int, personID int, number varchar, issuer varchar)
Specialties(ID int, personID int, name varchar, state varchar)
Identifiers(ID int, personID int, value)
The database will already be populated with some People before adding the new ones from the csv.
What is the best way to approach this?
I have tried iterating over the staging table one row at a time with select top 1:
WHILE EXISTS (Select top 1 * from staging)
BEGIN
INSERT INTO People Default Values
SET @LastInsertedID = SCOPE_IDENTITY() -- might use the output clause to get this instead
INSERT INTO Names (personID, lName, fName)
SELECT top 1 @LastInsertedID, lName, fName from staging
INSERT INTO Licenses(personID, number, issuer)
SELECT top 1 @LastInsertedID, licenseNumber1, licenseIssuer1 from staging
IF (select top 1 licenseNumber2 from staging) is not null
BEGIN
INSERT INTO Licenses(personID, number, issuer)
SELECT top 1 @LastInsertedID, licenseNumber2, licenseIssuer2 from staging
END
-- Repeat the above 49 times, etc...
DELETE top (1) from staging
END
One problem with this approach is that it is prohibitively slow, so I refactored it to use a cursor. This works and is significantly faster, but has me declaring 300+ variables for Fetch INTO.
Is there a set-based approach that would work here? That would be preferable, as I understand that cursors are frowned upon, but I'm not sure how to get the identity from the INSERT into the People table for use as a foreign key in the others without going row-by-row from the staging table.
Also, how could I avoid copy and pasting the insert into the Licenses table? With a cursor approach I could try:
FETCH INTO ...@LicenseNumber1, @LicenseIssuer1, @LicenseNumber2, @LicenseIssuer2...
INSERT INTO @LicenseTemp (number, issuer) Values
(@LicenseNumber1, @LicenseIssuer1),
(@LicenseNumber2, @LicenseIssuer2),
... Repeat 48 more times...
.
.
.
INSERT INTO Licenses(personID, number, issuer)
SELECT @LastInsertedID, number, issuer
FROM @LicenseTemp
WHERE number is not null
There still seems to be some redundant copy and pasting there, though.
To summarize the questions, I'm looking for idiomatic approaches to:
Break up one large staging table into a set of normalized tables, retrieving the Primary Key/identity from one table and using it as the foreign key in the others
Insert multiple rows into the normalized tables that come from many repeated columns in the staging table with less boilerplate/copy and paste (Licenses and Specialties above)
Short of discrete answers, I'd also be very happy with pointers towards resources and references that could assist me in figuring this out.
Ok, I'm not an SQL Server expert, but here's the "strategy" I would suggest.
Calculate the personId on the staging table
As @Shnugo suggested before me, calculating the personId in the staging table will ease the next steps.
Use a sequence for the personID
Starting with SQL Server 2012 you can define sequences. If you use one for every person insert, you'll never risk overlapping IDs. Since (as it seems) you have personIDs that were loaded before the sequence existed, you can create the sequence with the first free personID as its starting value.
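A minimal sketch of that step (the sequence name, the starting value, and the identifier1 length are assumptions; identifier1 is used for the join back because the question says it is unique):
CREATE SEQUENCE dbo.PersonIDSeq
    AS INT
    START WITH 1000001   -- assumption: the first free ID after the existing People rows
    INCREMENT BY 1;
-- each staging row represents one person, so draw one new ID per row
CREATE TABLE #personMap (identifier1 VARCHAR(50) NOT NULL, personId INT NOT NULL);
INSERT INTO #personMap (identifier1, personId)
SELECT identifier1, NEXT VALUE FOR dbo.PersonIDSeq
FROM staging;
The personId column on staging can then be filled by joining back to #personMap on identifier1.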
Create a numbers table
Create a utility table keeping numbers from 1 to n (you need n to be at least 50; you can look at this question for some implementations).
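For example, one quick way to fill such a table (a sketch that just borrows sys.all_objects as a convenient row source):
CREATE TABLE numbers (n INT NOT NULL PRIMARY KEY);
INSERT INTO numbers (n)
SELECT TOP (50) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_objects;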
Use set logic to do the insert
I'd avoid cursors and row-by-row logic: you are right that it is better to limit the number of accesses to the table, but I'd say that you should strive to limit it to one access per target table.
You could proceed like this:
People:
INSERT INTO People (personID)
SELECT personId from staging;
Names:
INSERT INTO Names (personID, lName, fName)
SELECT personId, lName, fName from staging;
Licenses:
Here we'll need the numbers table:
INSERT INTO Licenses (personId, number, issuer)
SELECT * FROM (
SELECT personId,
case nbrs.n
when 1 then licenseNumber1
when 2 then licenseNumber2
...
when 50 then licenseNumber50
end as licenseNumber,
case nbrs.n
when 1 then licenseIssuer1
when 2 then licenseIssuer2
...
when 50 then licenseIssuer50
end as licenseIssuer
from staging
cross join
(select n from numbers where n>=1 and n<=50) nbrs
) AS t WHERE licenseNumber is not null;
Specialties:
INSERT INTO Specialties(personId, name, state)
SELECT * FROM (
SELECT personId,
case nbrs.n
when 1 then specialtyName1
when 2 then specialtyName2
...
when 15 then specialtyName15
end as specialtyName,
case nbrs.n
when 1 then specialtyState1
when 2 then specialtyState2
...
when 15 then specialtyState15
end as specialtyState
from staging
cross join
(select n from numbers where n>=1 and n<=15) nbrs
) AS t WHERE specialtyName is not null;
Identifiers:
INSERT INTO Identifiers(personId, value)
SELECT * FROM (
SELECT personId,
case nbrs.n
when 1 then identifier1
when 2 then identifier2
...
when 15 then identifier15
end as value
from staging
cross join
(select n from numbers where n>=1 and n<=15) nbrs
) AS t WHERE value is not null;
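As a side note that is not part of the steps above: the same unpivoting can be written more compactly with CROSS APPLY (VALUES ...), avoiding one CASE branch per repeated column. A sketch for Licenses:
INSERT INTO Licenses (personId, number, issuer)
SELECT s.personId, v.number, v.issuer
FROM staging AS s
CROSS APPLY (VALUES
    (s.licenseNumber1, s.licenseIssuer1),
    (s.licenseNumber2, s.licenseIssuer2)
    -- ...one pair per column set, up to (s.licenseNumber50, s.licenseIssuer50)
) AS v(number, issuer)
WHERE v.number IS NOT NULL;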
Hope it helps.
You say: but the staging table could be modified
I would
add a PersonID INT NOT NULL column and fill it with DENSE_RANK() OVER(ORDER BY fname,lname)
add an index to this PersonID
use this ID in combination with GROUP BY to fill your People table
do the same with your names table
And then use this ID for a set-based insert into your three side tables
Do it like this
SELECT AllTogether.PersonID, AllTogether.TheValue
FROM
(
SELECT PersonID,SomeValue1 AS TheValue FROM StagingTable
UNION ALL SELECT PersonID,SomeValue2 FROM StagingTable
UNION ALL ...
) AS AllTogether
WHERE AllTogether.TheValue IS NOT NULL
UPDATE
You say: might cause a conflict with IDs that already exist in the People table
You did not tell anything about existing People...
Is there any sure and unique mark to identify them? Use a simple
UPDATE StagingTable SET PersonID=xyz WHERE ...
to set existing PersonIDs into your staging table and then use something like
UPDATE StagingTable
SET PersonID=DENSE_RANK() OVER(...) + MaxExistingID
WHERE PersonID IS NULL
to set new IDs for PersonIDs still being NULL.
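As a rough sketch of that last UPDATE (assuming the staging table is literally named StagingTable, that fname/lname identify a person, and that People.ID holds the existing IDs), updating through a CTE lets DENSE_RANK feed the SET clause:
DECLARE @MaxExistingID INT = (SELECT ISNULL(MAX(ID), 0) FROM People);
;WITH ranked AS
(
    SELECT PersonID,
           DENSE_RANK() OVER (ORDER BY fname, lname) AS rn
    FROM StagingTable
    WHERE PersonID IS NULL
)
UPDATE ranked
SET PersonID = rn + @MaxExistingID;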

Higher Query result with the DISTINCT Keyword?

Say I have a table with 100,000 User IDs (UserID is an int).
When I run a query like
SELECT COUNT(Distinct User ID) from tableUserID
the result I get is HIGHER than the result from the following statement:
SELECT COUNT(User ID) from tableUserID
I thought Distinct implied unique, which would mean a lower result. What would cause this discrepancy and how would I identify those user IDs that don't show up in the 2nd query?
Thanks
UPDATE - 11:14 am EST
Hi All
I sincerely apologize as I should've taken the trouble to reproduce this in my local environment. But I just wanted to see if there was a general consensus about this. Here are the full details:
The query is a result of an inner join between 2 tables.
One has this information:
TABLE ACTIVITY (NO PRIMARY KEY)
UserID int (not Nullable)
JoinDate datetime
Status tinyint
LeaveDate datetime
SentAutoMessage tinyint
SectionDetails varchar
And here is the second table:
TABLE USER_INFO (CLUSTERED PRIMARY KEY)
UserID int (not Nullable)
UserName varchar
UserActive int
CreatedOn datetime
DisabledOn datetime
The tables are joined on UserID and the UserID being selected in the original 2 queries is the one from the TABLE ACTIVITY.
Hope this clarifies the question.
This is not technically an answer, but since I took time to analyze this, I might as well post it (although I risk being downvoted).
There was no way I could reproduce the described behavior.
This is the scenario:
declare @table table ([user id] int)
insert into @table values
(1),(1),(1),(1),(1),(1),(1),(2),(2),(2),(2),(2),(2),(null),(null)
And here are some queries and their results:
SELECT COUNT(User ID) FROM @table --error: this does not run
SELECT COUNT(distinct User ID) FROM @table --error: this does not run
SELECT COUNT([User ID]) FROM @table --result: 13 (nulls not counted)
SELECT COUNT(distinct [User ID]) FROM @table --result: 2 (nulls not counted)
And something interesting:
SELECT user --result: 'dbo' in my sandbox DB
SELECT count(user) from @table --result: 15 (nulls are counted because user value is not null)
SELECT count(distinct user) from @table --result: 1 (user is the same value always)
I find it very odd that you are able to run the queries exactly how you described. You'd have to let us know the table structure and the data to get further help.
how would I identify those user IDs that don't show up in the 2nd query
Try this query
SELECT UserID FROM tableUserID WHERE UserID NOT IN (SELECT DISTINCT UserID FROM tableUserID)
I think there will be no rows.
Edit:
USER is a reserved keyword. Do you mean UserID in your queries?
Ray : Yes
I tried to reproduce the problem in my environment, and my conclusion is that, given the conditions you described, the result from the first query cannot be higher than the second one. Even if there were NULLs, that just won't happen.
Did you run the query @Jean-Charles suggested?
I'm very intrigued with this, please let us know what turns out to be the problem.

Possible ways to get the group Last/Max record in a hierarchical query?

Assuming I have a table like this one:
CREATE TABLE user_delegates (
[id] INT IDENTITY(1,1) NOT NULL,
[user_from] VARCHAR(10) NOT NULL,
[user_to] VARCHAR(10) NOT NULL,
CONSTRAINT [PK_user_delegates] PRIMARY KEY CLUSTERED ([id] ASC),
CONSTRAINT [UK_user_delegates] UNIQUE ([user_from] ASC)
)
So a user A has the right to delegate her system access to another user B. When she does that, she won't be able to access the system anymore - user B will have to "break" that delegation before she is able to use the system again...
BUT also consider that, if user B delegates access to user C, user C will also start impersonating user A, and so on.
(I know this seems to be a security nightmare - please let's just forget about that, OK? :-))
Also consider those records:
INSERT INTO user_delegates([user_from], [user_to]) values ('ANTHONY', 'JOHN')
INSERT INTO user_delegates([user_from], [user_to]) values ('JOHN', 'JOHN')
INSERT INTO user_delegates([user_from], [user_to]) values ('KARL', 'JOSHUA')
INSERT INTO user_delegates([user_from], [user_to]) values ('JOSHUA', 'PIOTR')
INSERT INTO user_delegates([user_from], [user_to]) values ('PIOTR', 'HANS')
So what I need is finding the last (which means the active) delegation for each user.
I have come to a solution that I've decided to not show here (unless everybody ignores me, which is always a possibility). All I can say is that it is a somewhat long answer, and it surely seems like using a cannon to kill a flea...
But how would you do that? Consider any relevant SQL Server extension available, and notice we're looking for an answer that is both elegant and with a good performance...
BTW, this is the expected result set:
id user_from user_to
----------- ---------- ----------
1 ANTHONY JOHN
2 JOHN JOHN
3 KARL HANS
4 JOSHUA HANS
5 PIOTR HANS
(5 row(s) affected)
And thanks in advance!
WITH q (user_initial, user_from, user_to, link) AS
(
SELECT user_id, user_id, user_id, 0
FROM users
UNION ALL
SELECT user_initial, q.user_to, ud.user_to, link + 1
FROM q
JOIN user_delegates ud
ON ud.user_from = q.user_to
)
SELECT *
FROM (
SELECT *, ROW_NUMBER() OVER (PARTITION BY user_initial ORDER BY link DESC) rn
FROM q
) AS t
WHERE rn = 1
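Since the question only defines user_delegates (there is no users table), here is a hedged variant of the same idea that derives the anchor rows from user_delegates itself and skips self-delegations such as JOHN -> JOHN so the recursion terminates:
;WITH users AS
(
    SELECT DISTINCT user_from AS user_id FROM user_delegates
),
q (user_initial, user_from, user_to, link) AS
(
    SELECT user_id, user_id, user_id, 0 FROM users
    UNION ALL
    SELECT q.user_initial, ud.user_from, ud.user_to, q.link + 1
    FROM q
    JOIN user_delegates ud
      ON ud.user_from = q.user_to
     AND ud.user_from <> ud.user_to   -- guard against endless recursion on self-delegations
)
SELECT user_initial AS user_from, user_to
FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY user_initial ORDER BY link DESC) AS rn
    FROM q
) AS t
WHERE rn = 1;
With the sample rows above this returns ANTHONY/JOHN, JOHN/JOHN, KARL/HANS, JOSHUA/HANS and PIOTR/HANS, matching the expected result set.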

Simulating an identity column within an insert trigger

I have a table for logging that needs a log ID but I can't use an identity column because the log ID is part of a combo key.
create table StuffLogs
(
StuffID int,
LogID int,
Note varchar(255)
)
There is a combo key for StuffID & LogID.
I want to build an insert trigger that computes the next LogID when inserting log records. I can do it for one record at a time (see below to see how LogID is computed), but that's not really effective, and I'm hoping there's a way to do this without cursors.
select @NextLogID = isnull(max(LogID),0)+1
from StuffLogs where StuffID = (select StuffID from inserted)
The net result should allow me to insert any number of records into StuffLogs with the LogID column auto computed.
StuffID LogID Note
123 1 foo
123 2 bar
456 1 boo
789 1 hoo
Inserting another record using StuffID: 123, Note: bop will result in the following record:
StuffID LogID Note
123 3 bop
Unless there is a rigid business reason that requires each LogID to be a sequence starting from 1 for each distinct StuffID, then just use an identity. With an identity, you'll still be able to order rows properly with StuffID+LogID, but you'll not have the insert issues of trying to do it manually (concurrency, deadlocks, locking/blocking, slow inserts, etc.).
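For illustration, a minimal sketch of that identity-based table (the constraint name is just an assumption):
CREATE TABLE StuffLogs
(
    StuffID INT NOT NULL,
    LogID   INT IDENTITY(1,1) NOT NULL,
    Note    VARCHAR(255) NULL,
    CONSTRAINT PK_StuffLogs PRIMARY KEY (StuffID, LogID)
);
The LogID values will not restart at 1 for each StuffID, but they still sort correctly within each StuffID, which is usually all that matters.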
Make sure the LogId has a default value of NULL, so that it need not be supplied during insert statements, as if it were an identity column.
CREATE TRIGGER Insert ON dbo.StuffLogs
INSTEAD OF INSERT
AS
UPDATE #Inserted SET LogId = (select max(LogId)+1 from StuffLogs where StuffId=[INSERTED].StuffId)
Select Row_Number() Over( Order By LogId ) + MaxValue.LogId + 1
From inserted
Cross Join ( Select Max(LogId) As LogId From StuffLogs ) As MaxValue
You would need to thoroughly test this and ensure that if two connections were inserting into the table at the same time that you do not get collisions on LogId.
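If the per-StuffID numbering really is required, a minimal sketch of how the pieces above might be combined (the trigger name and the locking hints are my assumptions, not from the answer, and it still needs concurrency testing):
CREATE TRIGGER TR_StuffLogs_Insert ON dbo.StuffLogs
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.StuffLogs (StuffID, LogID, Note)
    SELECT i.StuffID,
           ROW_NUMBER() OVER (PARTITION BY i.StuffID ORDER BY (SELECT NULL))
               + ISNULL(x.MaxLogID, 0),
           i.Note
    FROM inserted AS i
    OUTER APPLY (SELECT MAX(s.LogID) AS MaxLogID
                 FROM dbo.StuffLogs AS s WITH (UPDLOCK, HOLDLOCK)
                 WHERE s.StuffID = i.StuffID) AS x;
END
An INSTEAD OF INSERT trigger is not fired recursively by the insert it performs on its own table, so this single set-based statement replaces the original insert for any number of rows.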

How can I query rankings for the users in my DB, but only consider the latest entry for each user?

Let's say I have a database table called "Scrape", possibly set up like:
UserID (int)
UserName (varchar)
Wins (int)
Losses (int)
ScrapeDate (datetime)
I'm trying to be able to rank my users based on their Wins/Loss ratio. However, each week I'll be scraping for new data on the users and making another entry in the Scrape table.
How can I query a list of users sorted by wins/losses, but only taking into consideration the most recent entry (ScrapeDate)?
Also, do you think it matters that people will be hitting the site and the scrape may possibly be in the middle of completing?
For example I could have:
1 - Bob - Wins: 320 - Losses: 110 - ScrapeDate: 7/8/09
1 - Bob - Wins: 360 - Losses: 122 - ScrapeDate: 7/17/09
2 - Frank - Wins: 115 - Losses: 20 - ScrapeDate: 7/8/09
Where, this represents a scrape that has only updated Bob so far, and is in the process of updating Frank but has yet to be inserted. How would you handle this situation as well?
So, my question is:
How would you handle querying only the most recent scrape of each user to determine the rankings
Do you think the fact that the database may be in a state of updating (especially if a scrape could take up to 1 day to complete), and not all users have completely updated yet matters? If so, how would you handle this?
Thank you, and thank you for your responses you have given me on my related question:
When scraping a lot of stats from a webpage, how often should I insert the collected results in my DB?
This is what I call the "greatest-n-per-group" problem. It comes up several times per week on StackOverflow.
I solve this type of problem using an outer join technique:
SELECT s1.*, s1.wins / s1.losses AS win_loss_ratio
FROM Scrape s1
LEFT OUTER JOIN Scrape s2
ON (s1.username = s2.username AND s1.ScrapeDate < s2.ScrapeDate)
WHERE s2.username IS NULL
ORDER BY win_loss_ratio DESC;
This will return only one row for each username -- the row with the greatest value in the ScrapeDate column. That's what the outer join is for, to try to match s1 with some other row s2 with the same username and a greater date. If there is no such row, the outer join returns NULL for all columns of s2, and then we know s1 corresponds to the row with the greatest date for that given username.
This should also work when you have a partially-completed scrape in progress.
This technique isn't necessarily as speedy as the CTE and RANKING solutions other answers have given. You should try both and see what works better for you. The reason I prefer my solution is that it works in any flavor of SQL.
Try something like:
Select user id and max date of last entry for each user.
Select and order records to get ranking based on above query results.
This should work; performance, however, depends on your database size.
DECLARE
@last_entries TABLE(id int, dte datetime)
-- insert date (dte) of last entry for each user (id)
INSERT INTO
@last_entries (id, dte)
SELECT
UserID,
MAX(ScrapeDate)
FROM
Scrape WITH (NOLOCK)
GROUP BY
UserID
-- select ranking
SELECT
-- optionally you can use RANK OVER() function to get rank value
UserName,
Wins,
Losses
FROM
@last_entries
JOIN
Scrape WITH (NOLOCK)
ON
UserID = id
AND ScrapeDate = dte
ORDER BY
Wins,
Losses
I did not test this code, so it might not compile on the first run.
The answer to part one of your question depends on the version of SQL server you are using - SQL 2005+ offers ranking functions which make this kind of query a bit simpler than in SQL 2000 and before. I'll update this with more detail if you will indicate which platform you're using.
I suspect the clearest way to handle part 2 is to display the stats for the latest complete scraping exercise, otherwise you aren't showing a time-consistent ranking (although, if your data collection exercise takes 24 hours, there's a certain amount of latitude already).
To simplify this, you could create a table to hold metadata about each scrape operation, giving each one an id, start date and completion date (at a minimum), and display those records which relate to the latest complete scrape. To make this easier, you could remove the "scrape date" from the data collection table, and replace it with a foreign key linking each data row to a row in the scrape table.
EDIT
The following code illustrates how to rank users by their latest score, regardless of whether they are time-consistent:
create table #scrape
(userName varchar(20)
,wins int
,losses int
,scrapeDate datetime
)
INSERT #scrape
select 'Alice',100,200,'20090101'
union select 'Alice',120,210,'20090201'
union select 'Bob' ,200,200,'20090101'
union select 'Clara',300,100,'20090101'
union select 'Clara',300,210,'20090201'
union select 'Dave' ,100,10 ,'20090101'
;with latestScrapeCTE
AS
(
SELECT *
,ROW_NUMBER() OVER (PARTITION BY userName
ORDER BY scrapeDate desc
) AS rn
,wins + losses AS totalPlayed
,wins - losses as winDiff
from #scrape
)
SELECT userName
,wins
,losses
,scrapeDate
,winDiff
,totalPlayed
,RANK() OVER (ORDER BY winDiff desc
,totalPlayed desc
) as rankPos
FROM latestScrapeCTE
WHERE rn = 1
ORDER BY rankPos
EDIT 2
An illustration of the use of a metadata table to select the latest complete scrape:
create table #scrape_run
(runID int identity
,startDate datetime
,completedDate datetime
)
create table #scrape
(userName varchar(20)
,wins int
,losses int
,scrapeRunID int
)
INSERT #scrape_run
select '20090101', '20090102'
union select '20090201', null --null completion date indicates that the scrape is not complete
INSERT #scrape
select 'Alice',100,200,1
union select 'Alice',120,210,2
union select 'Bob' ,200,200,1
union select 'Clara',300,100,1
union select 'Clara',300,210,2
union select 'Dave' ,100,10 ,1
;with latestScrapeCTE
AS
(
SELECT TOP 1 runID
,startDate
FROM #scrape_run
WHERE completedDate IS NOT NULL
ORDER BY startDate DESC
)
SELECT userName
,wins
,losses
,startDate AS scrapeDate
,wins - losses AS winDiff
,wins + losses AS totalPlayed
,RANK() OVER (ORDER BY (wins - losses) desc
,(wins + losses) desc
) as rankPos
FROM #scrape
JOIN latestScrapeCTE
ON runID = scrapeRunID
ORDER BY rankPos