Order by or where clause: which is efficient when retrieving a record from a versioned table - sql

This might be a basic SQL question, but I was curious to know the answer.
I need to fetch the top one record from the db. Which query would be more efficient: one with a where clause, or one with order by?
Example:
Table
Movie
id name isPlaying endDate isDeleted
Above is a versioned table for storing records for movie.
If the endDate is not null and isDeleted = 1, then the record is old and an updated one already exists in this table.
So to fetch the movie "Gladiator" which is currently playing, I can write a query in two ways:
1.
Select m.isPlaying
From Movie m
where m.name=:name (given)
and m.endDate is null and m.isDeleted=0
2.
Select TOP 1 m.isPlaying
From Movie m
where m.name=:name (given)
order by m.id desc --- This will always give me the active record (one which is not deleted)
Which query is faster and the correct way to do it?
Update:
id is the only indexed column and id is the unique key. I am expecting the queries to return me only one result.
Update:
Examples:
Movie
id name isPlaying EndDate isDeleted
3 Gladiator 1 03/1/2017 1
4 Gladiator 1 03/1/2017 1
5 Gladiator 0 null 0

I would go with the where clause:
Select m.isPlaying
From Movie m
where m.name = :name and m.endDate is null and m.isDeleted = 0;
This can take advantage of an index on (name, isDeleted, endDate).
Also, the two are not equivalent. The second might return multiple rows when the first returns 1. Or the second might return one row when the first returns none.
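For reference, a minimal sketch of such an index, assuming SQL Server (suggested by the TOP 1 syntax in the question); the index name is made up, and INCLUDE simply makes the index covering for this query:
-- Illustrative only: composite index matching the WHERE clause above
CREATE NONCLUSTERED INDEX IX_Movie_Name_Active
    ON Movie (name, isDeleted, endDate)
    INCLUDE (isPlaying);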

The first option might return more than one row. Maybe you know it won't because you know what data you have stored, but the SQL engine doesn't, and that will affect its execution plan.
Considering that you only have one index and it's on the id column, the second query should be faster in theory, since it would do an index scan from the highest id with a predicate on the given name, stopping at the first match.
The first query will do a full table scan while comparing the name, endDate and isDeleted columns, since it won't stop at the first matching result.
Posting your execution plans for both queries would shed more light on this.
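If it helps, one way to capture them in SQL Server (an assumption based on the TOP 1 syntax), using the literal from the question in place of the parameter: SHOWPLAN_TEXT returns the estimated plan as text without executing the queries.
SET SHOWPLAN_TEXT ON;
GO

SELECT m.isPlaying
FROM Movie m
WHERE m.name = 'Gladiator'
  AND m.endDate IS NULL
  AND m.isDeleted = 0;

SELECT TOP 1 m.isPlaying
FROM Movie m
WHERE m.name = 'Gladiator'
ORDER BY m.id DESC;
GO

SET SHOWPLAN_TEXT OFF;
GO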

Related

How to fetch required data using single query in sql developer

I have a subject table that has a subject_id column. In the table there is one row where subject_id is null; every other row has a distinct subject_id value.
I am looking for a single query that fetches the data on the basis of subject_id.
Select * from subject where subject_id = x;
If there is no data found for x, then it should return the row with subject_id = null.
In general this is a terrible pattern for tables. NULL as a primary key value is only going to cause you pain and suffering in the long run. Using a NULL-keyed row as a default for when your query matches no other rows will lead to strange behavior somewhere unexpected.
The simplest way would be to simply include the NULL row as the last row of any query and then only fetch the first row. But that only works when your query can only return at most one valid result.
select *
from subject
where subject_id = ? or subject_id is null
order by subject_id asc nulls last
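If the engine supports the standard FETCH FIRST clause (Oracle 12c+, PostgreSQL, DB2; an assumption, since the question only mentions SQL Developer), the "take only the first row" step can be folded into the query itself:
-- Same query, limited to one row: an exact match for x wins over the NULL fallback row
select *
from subject
where subject_id = ? or subject_id is null
order by subject_id asc nulls last
fetch first 1 row only;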
Possibly the biggest problem with a NULL PK for your default/placeholder row in subject is that anywhere else you have a NULL subject_id cannot simply join to that row using x.subject_id = y.subject_id.
If you really need such a row, I suggest using -1 instead of NULL as the "not exists" value. It will make your life much easier across the board, especially if you need to join to it.
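Purely as an illustration of that suggestion (the enrollment table, its columns, and the subject_name column are made up here), a -1 default row joins like any other key:
-- Hypothetical: seed a real default row instead of a NULL-keyed one
insert into subject (subject_id, subject_name) values (-1, 'NOT ASSIGNED');

-- Referencing rows store -1 rather than NULL, so a plain equijoin reaches the default row
select e.student_id, s.subject_name
from enrollment e
join subject s on s.subject_id = e.subject_id;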

SQL - compare multiple rows and return rows?

I have a large database and I'd like to pull info from a table (Term) where the Names are not linked to a PartyId for a certain SearchId. However:
There are multiple versions of the searches (sometimes 20-40 - otherwise I think SQL - Comparing two rows and two columns would work for me)
The PartyId will almost always be NULL for the first version for the search, and if the same Name for the same SearchId has a PartyId associated in a later version the NULL row should not appear in the results of the query.
I have 8 left joins to display the information requested - 3 of them are joined on the Term table
A very simplified sample of data is below
CASE statement? Join the table with itself for comparison? A temp table or do I just return the fields I'm joining on and/or want to display?
Providing sample data that yields no expected result is not as useful as providing data that gives an expected result.
When asking a question, start by defining the problem in plain English; if you can't, you don't understand your problem well enough yet. Then define the tables involved in the problem (including the columns), give sample data, the SQL you've tried, and the expected result based on that sample data. Without this minimum information we have to make many guesses, and even with it we may have to make assumptions; without a minimal verifiable example illustrating your question, helping is problematic.
--End soap box
I'm guessing you're after only the names for a searchID which has a NULL partyID for the highest SearchVerID
So if we eliminated ID 6 from your example data, then 'Bob' would be returned
If we added ID 9 to your sample data for name 'Harry' with a searchID of 2 and a searchVerID of 3 and a null partyID then 'Harry' too would be returned...
If my understanding is correct, then perhaps...
WITH CTE AS (
    SELECT Name, partyID,
           Row_Number() OVER (PARTITION BY Name ORDER BY SearchVerID DESC) AS RN
    FROM Term
    WHERE SearchID = 2)
SELECT Name
FROM CTE
WHERE RN = 1
  AND partyID IS NULL;
This assigns a row number (RN) to each name, starting at 1 and increasing by one for each entry, for rows with a searchID of 2. The highest search version will always have an RN of 1. Then we filter to include only those rows where RN is 1 and partyID is null. This results in only those names that have a searchID of 2, the highest search version, and a NULL partyID.
OK, so I took the question a different way too.
If you simply want all the names not linked to a PartyID for a given search.
SELECT A.*
FROM TERM A
WHERE NOT EXISTS (SELECT 1
                  FROM TERM B
                  WHERE A.Name = B.Name
                    AND B.SearchID = 2
                    AND B.partyID IS NOT NULL)
  AND A.SearchID = 2
The above should return all Term records associated to searchID 2 whose Name is not linked to any partyId. This last method is the exists / not exists set logic I was talking about in the comments.

Why does a select of count fetch a lot of rows?

Table teams contains 1169 rows; 1133 of them have UserId != 0. There is an index on the UserId field.
Query:
EXPLAIN SELECT count(*) FROM teams WHERE UserId != 0
returns output with an estimate of rows to be examined equal to 1133.
Why does the query need to examine all those rows? Shouldn't it just use the index for this purpose?
Thank you.
It will examine almost all rows because you are asking for almost all rows (you said UserId != 0). Sure, you then take a count, so only one record is shown, but all of the matching rows still had to be read in order to count them.
If you were to do
select count(1) from teams where UserId = 100
then it will examine only a few rows, because you are asking for a precise value (UserId = xx as opposed to UserId != yy).
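To see the difference, compare the two plans in MySQL; given the counts in the question, you would expect roughly the following:
-- Nearly all 1133 non-zero rows have to be read (even via the index) just to count them
EXPLAIN SELECT COUNT(*) FROM teams WHERE UserId != 0;

-- Only the index entries for the single value are read
EXPLAIN SELECT COUNT(*) FROM teams WHERE UserId = 100;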

How do I check if all posts from a joined table have the same value in a column?

I'm building a BI report for a client where there is a 1-n related join involved.
The joined table has a field for employee ID (EmplId).
The query that I've built for this report is supposed to give a 1 in its field "OneEmployee" if all the related posts have the same employee in the EmplId field, and null if there are different employees, i.e.:
TaskTrans
TaskTransHours > EmplId: 'John'
TaskTransHours > EmplId: 'John'
This should give a 1 in the said field in the query
TaskTrans
TaskTransHours > EmplId: 'John'
TaskTransHours > EmplId: 'George'
This should leave the said field blank
The idea is to create a field where a case function checks this and returns the correct value. But my problem is whether there is a way to check for this through SQL.
select not count(*) from your_table
where employee_id = GIVEN_ID
  and your_field not in (select min(your_field)
                         from your_table
                         where employee_id = GIVEN_ID);
Note: my first idea was to use LIMIT 1 in the inner query, but MySQL didn't like it, so MIN it was; the point is to use any value, but only one. MIN should work, and if the field is indexed this query will execute rather fast, as only indexes would be used (obviously employee_id should also be indexed).
Note 2: Don't be confused by the not in front of count(*): you want 1 when there are no rows that differ. I count the differing ones and then return not count(*), which is 1 if the count is 0, and 0 otherwise.
Seems a job for a window COUNT():
SELECT
…,
CASE COUNT(DISTINCT TaskTransHours.EmplId) OVER () WHEN 1 THEN 1 END
AS OneEmployee
FROM …
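If the target engine rejects DISTINCT inside a window aggregate (SQL Server and PostgreSQL do, for example), a grouped MIN/MAX comparison is a portable sketch of the same check; TaskTransId as the join key is an assumption, since the real key isn't shown in the question:
-- 1 when every joined hours row carries the same EmplId, NULL otherwise
SELECT t.TaskTransId,
       CASE WHEN MIN(h.EmplId) = MAX(h.EmplId) THEN 1 END AS OneEmployee
FROM TaskTrans t
JOIN TaskTransHours h ON h.TaskTransId = t.TaskTransId
GROUP BY t.TaskTransId;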

finding consecutive date pairs in SQL

I have a question here that looks a little like some of the ones that I found in search, but with solutions for slightly different problems and, importantly, ones that don't work in SQL 2000.
I have a very large table with a lot of redundant data that I am trying to reduce down to just the useful entries. It's a history table, and the way it works, if two entries are essentially duplicates and consecutive when sorted by date, the latter can be deleted. The data from the earlier entry will be used when historical data is requested from a date between the effective date of that entry and the next non-duplicate entry.
The data looks something like this:
id user_id effective_date important_value useless_value
1 1 1/3/2007 3 0
2 1 1/4/2007 3 1
3 1 1/6/2007 NULL 1
4 1 2/1/2007 3 0
5 2 1/5/2007 12 1
6 3 1/1/1899 7 0
With this sample set, we would consider two consecutive rows duplicates if the user_id and the important_value are the same. From this sample set, we would only delete the row with id=2, preserving the information from 1/3/2007, showing that the important_value changed on 1/6/2007, and then showing the relevant change again on 2/1/2007.
My current approach is awkward and time-consuming, and I know there must be a better way. I wrote a script that uses a cursor to iterate through the user_id values (since that breaks the huge table up into manageable pieces), and creates a temp table of just the rows for that user. Then to get consecutive entries, it takes the temp table, joins it to itself on the condition that there are no other entries in the temp table with a date between the two dates. In the pseudocode below, UDF_SameOrNull is a function that returns 1 if the two values passed in are the same or if they are both NULL.
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    SELECT * INTO #history FROM History WHERE user_id = @UserId

    --return entries to delete
    SELECT h2.id
    INTO #delete_history_ids
    FROM #history h1
    JOIN #history h2 ON
        h1.effective_date < h2.effective_date
        AND dbo.UDF_SameOrNull(h1.important_value, h2.important_value) = 1
    WHERE NOT EXISTS (SELECT 1 FROM #history hx WHERE hx.effective_date > h1.effective_date AND hx.effective_date < h2.effective_date)

    DELETE h1
    FROM History h1
    JOIN #delete_history_ids dh ON
        h1.id = dh.id

    FETCH NEXT FROM UserCursor INTO @UserId
END
It also loops over the same set of duplicates until there are none, since taking out rows creates new consecutive pairs that are potentially dupes. I left that out for simplicity.
Unfortunately, I must use SQL Server 2000 for this task and I am pretty sure that it does not support ROW_NUMBER() for a more elegant way to find consecutive entries.
Thanks for reading. I apologize for any unnecessary backstory or errors in the pseudocode.
OK, I think I figured this one out, excellent question!
First, I made the assumption that the effective_date column will not be duplicated for a user_id. I think it can be modified to work if that is not the case - so let me know if we need to account for that.
The process basically takes the table of values and self-joins on equal user_id and important_value and prior effective_date. Then, we do 1 more self-join on user_id that effectively checks to see if the 2 joined records above are sequential by verifying that there is no effective_date record that occurs between those 2 records.
It's just a select statement for now - it should select all records that are to be deleted. So if you verify that it is returning the correct data, simply change the select * to delete tcheck.
Let me know if you have questions.
select *
from History tcheck
inner join History tprev
    on tprev.[user_id] = tcheck.[user_id]
    and tprev.important_value = tcheck.important_value
    and tprev.effective_date < tcheck.effective_date
left join History checkbtwn
    on tcheck.[user_id] = checkbtwn.[user_id]
    and checkbtwn.effective_date < tcheck.effective_date
    and checkbtwn.effective_date > tprev.effective_date
where checkbtwn.[user_id] is null
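For completeness, here is the delete form described above (the same joins, with only the target changed); run it only after confirming the SELECT returns exactly the rows you expect:
-- Deletes the later (duplicate) row of each consecutive pair found by the joins above
delete tcheck
from History tcheck
inner join History tprev
    on tprev.[user_id] = tcheck.[user_id]
    and tprev.important_value = tcheck.important_value
    and tprev.effective_date < tcheck.effective_date
left join History checkbtwn
    on tcheck.[user_id] = checkbtwn.[user_id]
    and checkbtwn.effective_date < tcheck.effective_date
    and checkbtwn.effective_date > tprev.effective_date
where checkbtwn.[user_id] is null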
OK guys, I did some thinking last night and I think I found the answer. I hope this helps someone else who has to match consecutive pairs in data and for some reason is also stuck in SQL Server 2000.
I was inspired by the other results that say to use ROW_NUMBER(), and I used a very similar approach, but with an identity column.
--create table with identity column
CREATE TABLE #history (
id int,
user_id int,
effective_date datetime,
important_value int,
useless_value int,
idx int IDENTITY(1,1)
)
--insert rows ordered by effective_date and now indexed in order
INSERT INTO #history
SELECT * FROM History
WHERE user_id = @user_id
ORDER BY effective_date
--get pairs where consecutive values match
SELECT *
FROM #history h1
JOIN #history h2 ON
h1.idx+1 = h2.idx
WHERE h1.important_value = h2.important_value
With this approach, I still have to iterate over the results until it returns nothing, but I can't think of any way around that and this approach is miles ahead of my last one.
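A sketch of one way to wrap that iteration in T-SQL (SQL Server 2000 compatible); it assumes #history is truncated and re-populated, as in the snippet above, at the top of each pass:
-- Repeat until a pass deletes nothing; @@ROWCOUNT captures the DELETE's row count
DECLARE @deleted int
SET @deleted = 1

WHILE @deleted > 0
BEGIN
    -- rebuild #history here: truncate it, then re-insert ordered by effective_date

    DELETE h
    FROM History h
    JOIN (SELECT h2.id
          FROM #history h1
          JOIN #history h2 ON h1.idx + 1 = h2.idx
          WHERE h1.important_value = h2.important_value) dupes
        ON h.id = dupes.id

    SET @deleted = @@ROWCOUNT
END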