Selecting all columns using distinct against one specific column [duplicate] - sql

This question already has answers here:
SELECT DISTINCT on one column
(7 answers)
Closed 8 years ago.
I appreciate that this question has been asked before but I am struggling to find an answer that will even run within Oracle 10g (10.2.0.5.0)
I have a table called BASIC which contains approximately 70 columns. Currently, I return a specified number of rows using the following code (as an example) - the result being the first 20 members who have a MEMBNO after 5000
SELECT * FROM BASIC WHERE MEMBNO>5000 AND ROWNUM <=20 ORDER BY MEMBNO;
Within the 20 rows returned, several of the rows have the same value in the NINO column
I would like to modify my SELECT statement to return the next 20 rows with distinct/unique NINO values
Simply wrapping a DISTINCT around the * gives me an ORA-00936: missing expression error, plus it would not be as precise as I would like.

Can you try the code below? I have used the analytic (window) function approach to fetch only distinct NINO values.
select * from
  (select * from
    (SELECT b.*, row_number() over (partition by nino order by MEMBNO) rn
     FROM BASIC b WHERE MEMBNO>5000)
   where rn = 1 ORDER BY MEMBNO)
where ROWNUM <= 20;
Let me know in case you encounter any issues.

I think I have found a solution via another source
This shows the rows where there are duplicates...
select * from basic where rowid not in (select min(rowid) from basic group by nino)
This shows the rows with the duplicate rows removed...
select * from basic where rowid in (select min(rowid) from basic group by nino)
Then I can add my row count and membno filters for the final result...
select * from basic where rowid in (select min(rowid) from basic where membno>6615 group by NINO) and rownum <=20 order by membno;
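One caveat with that last statement: the ROWNUM filter is applied before the ORDER BY, so the 20 rows it keeps are not guaranteed to be the 20 lowest qualifying MEMBNOs. A sketch of a variant that orders first and limits afterwards (same tables and filters, just an extra nesting level):
-- order the de-duplicated rows first, then take the first 20 by membno
select *
from (select *
      from basic
      where rowid in (select min(rowid) from basic where membno>6615 group by NINO)
      order by membno)
where rownum <= 20;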

Related

How can I get the total result count, and a given subset ('page' of results) with the same SQL Query with Oracle

I would like to display a table of results. The data is sourced from a SQL query on an Oracle database. I would like to show the results one page (say, 10 records) at a time, minimising the actual data being sent to the front-end.
At the same time, I would like to show the total number of possible results (say, showing 1-10 of 123), and to allow for pagination (say, to calculate that 10 per page, 123 results, therefore 13 pages).
I can get the total number of results with a single count query.
SELECT count(*) AS NUM_RESULTS FROM ... etc.
and I can get the desired subset with another query
SELECT * FROM ... etc. WHERE ? <= ROWNUM AND ROWNUM < ?
But, is there a way to get all the relevant details in one single query?
Update
Actually, the above query using ROWNUM seems to work for 0 - 10, but not for 10 - 20, so how can I do that too?
ROWNUM is a bit tricky to use.
The ROWNUM pseudocolumn always starts at 1 for the first row that is actually fetched. If you filter for ROWNUM > 10, the first candidate row can never satisfy the condition, so no rows are ever returned.
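For example, a filter like this returns no rows at all (a minimal illustration; mytable is just the placeholder table used below):
-- returns zero rows: the first candidate row gets ROWNUM = 1, fails the
-- ROWNUM > 10 test, is discarded, and ROWNUM is never incremented past 1
select * from mytable where rownum > 10;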
If you want to use it for paging (not that you really should), it requires nested subqueries:
select * from
(select rownum n, x.* from
(select * from mytable order by name) x
)
where n between 3 and 5;
Note that you need another nested subquery to get the order by right; if you put the order by one level higher
select * from
(select rownum n, x.* from mytable x order by name)
where n between 3 and 5;
it will pick 3 random(*) rows and sort them, but that is usually not what you want.
(*) not really random, but probably not what you expect.
See http://use-the-index-luke.com/sql/partial-results/window-functions for more efficient ways to implement pagination.
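If you do want the page and the total in a single statement, one sketch (assuming Oracle's analytic functions and the same placeholder table mytable; adjust the column names to your schema) is to carry COUNT(*) OVER () alongside the row numbering:
-- each returned row carries its page row number plus the total result count
select *
from (select t.*,
             row_number() over (order by name) as rn,
             count(*) over () as total_results
      from mytable t)
where rn between 11 and 20
order by rn;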
You can use an inner join on your table and fetch the total number of results in a subquery. An example query is as follows:
SELECT E.emp_name, E.emp_age, E.emp_sal, T.emp_count
FROM EMP E
INNER JOIN (SELECT emp_name, COUNT(*) AS emp_count
            FROM EMP GROUP BY emp_name) T
    ON E.emp_name = T.emp_name
WHERE E.emp_age < 35;
Not sure exactly what you're after based on your question wording, but it seems like you want to see all records with a row number between two values, and in an adjacent field on each record see the total count of records. If so, you can select everything from your table and join a subquery that computes the COUNT, using an always-true condition (1=1) so that field is tacked onto every record. Example:
SELECT *
FROM table_name LEFT JOIN (SELECT COUNT(*) AS NUM_RESULTS FROM table_name) ON 1=1
WHERE ? <= ROWNUM AND ROWNUM < ?

MS Access select distinct random values

How can I select 4 distinct random values from the field answer in MS Access table question?
SELECT TOP 4 answer,ID FROM question GROUP BY answer ORDER BY rnd(INT(NOW*ID)-NOW*ID)
Gives error message:
Run-time error '3122': Your query does not include the specified
expression 'ID' as part of an aggregate function.
SELECT DISTINCT TOP 4 answer,ID FROM question ORDER BY rnd(INT(NOW*ID)-NOW*ID)
Gives error message:
Run-time error '3093': ORDER BY clause (rnd(INT(NOW*ID)-NOW*ID))
conflicts with DISTINCT.
Edit:
Tried this:
SELECT TOP 4 *
FROM (SELECT answer, Rnd(MIN(ID)) AS rnd_id FROM question GROUP BY answer) AS A
ORDER BY rnd_id;
Seems to work so far.
I suggest:
SELECT TOP 4 answer
FROM question
GROUP BY answer
ORDER BY Rnd(MIN(ID));
I don't think the subquery is necessary. And including the random value on the SELECT doesn't seem useful.
I created a simple quiz application 2 years ago, and this is the query that I use to get a random question from the table.
SELECT TOP 4 * FROM Questions ORDER BY NEWID()
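Note that NEWID() is SQL Server syntax and is not available in Access SQL, so the statement above will not run in Access as-is. A rough Access-flavoured equivalent, reusing the Rnd() expression from the question (this assumes an autonumber ID column), might be:
SELECT TOP 4 *
FROM Questions
ORDER BY Rnd(INT(NOW*ID)-NOW*ID);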

SQL Automatically Delete one of two duplicates [duplicate]

This question already has answers here:
Delete duplicate records in SQL Server?
(10 answers)
Closed 8 years ago.
I have a query which finds duplicate IDs
SELECT uniqueID2, count(uniqueID2) as 'count'
FROM gpDetailAfterMultiplier
group by uniqueID2
having count(uniqueID2) > 1
this produces an output something like:
uniqueID2 count
111111111 2
111111112 2
111111113 2
111111114 2
How do I automatically delete one of the two duplicates?
I can do this one at a time by doing
DELETE top(1) from gpDetailAfterMultiplier where UniqueID2 = '111111111'
is there any way to do this so that it automatically 'loops' through each result and deletes one of the two duplicates for each unique ID?
Try this:
WITH CTE AS(
SELECT *,
RN = ROW_NUMBER()OVER(PARTITION BY uniqueID2 ORDER BY uniqueID2)
FROM gpDetailAfterMultiplier
)
DELETE FROM CTE WHERE RN > 1
It will delete all the duplicate rows from the table, keeping one row for each uniqueID2.
See result in Fiddle (Used SELECT query in fiddle to see which records are going to be deleted).
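If you want to preview which rows the DELETE would remove before running it (as the fiddle does), you can run a SELECT over the same CTE; this sketch assumes the same table and column names:
-- rows with RN > 1 are the extra copies that the DELETE above removes
WITH CTE AS(
SELECT *,
RN = ROW_NUMBER()OVER(PARTITION BY uniqueID2 ORDER BY uniqueID2)
FROM gpDetailAfterMultiplier
)
SELECT * FROM CTE WHERE RN > 1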

Exclude specific column from result in SQL Server [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
SQL exclude a column using SELECT * [except columnA] FROM tableA?
I have the following query and I want to exclude the column RowNum from the result. How can I do it?
SELECT *
FROM
(SELECT
ROW_NUMBER() OVER ( ORDER BY [Report].[dbo].[Reflow].ReflowID ) AS RowNum, *
FROM
[Report].[dbo].[Reflow]
WHERE
[Report].[dbo].[Reflow].ReflowProcessID = 2) AS RowConstrainedResult
WHERE
RowNum >= 100 AND RowNum < 120
ORDER BY
RowNum
Thanks.
It's considered bad practice to not specify column names in your query.
You could push the data into a #temp table, then ALTER that #temp table to DROP the COLUMN you don't want, then SELECT * FROM #temp.
This would be inefficient, but it will get you the result you are asking for. By default, though, it's best to get into the habit of specifying all the columns you require. Even with the #temp approach above, if someone ALTERs your initial table you'll end up with different columns.
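A rough sketch of that #temp approach, assuming SQL Server and reusing the RowConstrainedResult query from the question:
-- materialise the paged rows, drop the helper column, then return what is left
SELECT *
INTO #temp
FROM
    (SELECT
         ROW_NUMBER() OVER ( ORDER BY [Report].[dbo].[Reflow].ReflowID ) AS RowNum, *
     FROM
         [Report].[dbo].[Reflow]
     WHERE
         [Report].[dbo].[Reflow].ReflowProcessID = 2) AS RowConstrainedResult
WHERE
    RowNum >= 100 AND RowNum < 120;

ALTER TABLE #temp DROP COLUMN RowNum;

SELECT * FROM #temp;

DROP TABLE #temp;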
Do not use *, but give the field list you are interested in. It's that simple. Using * is bad practice anyway, as the column order is not defined.
Because you want to order the results by RowNum's values, you cannot simply exclude this column from your results. You can save the result of your query in a temp table, then run another query on the temp table and name the columns you want to show (instead of SELECT *). That approach will show all columns except RowNum, still ordered by RowNum's values.
This should work; I don't know the names of your columns, so I used generic names. Try not to use *: it's considered bad practice and makes it difficult for people to read your code.
SELECT [column1],
       [column2],
       [etcetc]
FROM ( SELECT ROW_NUMBER() OVER(ORDER BY RowConstrainedResult.RowNum) [RN],
              *
       FROM ( SELECT ROW_NUMBER() OVER ( ORDER BY [Report].[dbo].[Reflow].ReflowID ) AS RowNum, *
              FROM [Report].[dbo].[Reflow]
              WHERE [Report].[dbo].[Reflow].ReflowProcessID = 2
            ) AS RowConstrainedResult
       WHERE RowNum >= 100
         AND RowNum < 120
     ) AS [Result]
ORDER BY [RN];

How to find unused ID in a column? [duplicate]

This question already has answers here:
Closed 13 years ago.
Possible Duplicate:
SQL query to find Missing sequence numbers
I have a table which has a user ID column. The user can choose which user ID to add to the table. I am wondering if there is a single piece of SQL that could give me the list of unused user IDs, or even just the smallest unused ID?
For example, I have the following IDs
USER_ID
1
2
3
5
6
7
8
10
I would like to know if there is a way to select 4, or even to select both 4 and 9?
You can try using the "NOT IN" clause:
select
user_id
from table
where
user_id not in (select user_id from another_table)
Like this:
select
u1.user_id + 1 as start
from users as u1
left outer join users as u2 on u1.user_id + 1 = u2.user_id
where
u2.user_id is null
From here.
It depends on the Database you are using. If you are using Oracle, something like this will work:
Step 1: Find out max value of userid in your table:
select max(userid) from tbl_userid
let this number be m
Step 2: Find out the max value of rownum in the following query
select rownum from all_objects
Step 3: If that max value is greater than m, then you can use the following query to list your unused user IDs
select n as unused_user_id
from (select rownum n from all_objects where rownum <= m)
where n not in (select user_id from tbl_userid)
If the max value returned by step 2 is less than m, you can tweak your query to the following
select n as unused_user_id
from (select rownum n
      from (select * from all_objects
            UNION ALL
            select * from all_objects)
      where rownum <= m)
where n not in (select user_id from tbl_userid)
Repeat the UNION ALL until you get max(rownum) >= m
If you are using SQL Server, kindly let me know. There is no direct equivalent of the ROWNUM pseudocolumn in SQL Server, but there are workarounds using the RANK() function.
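As a sketch of such a workaround (hypothetical names; ROW_NUMBER() is used here, which behaves like RANK() when there are no ties), you can generate the sequence 1..m from a system view and keep the numbers missing from the table:
-- build a number sequence from a system view, then anti-filter it against the table
SELECT seq.n AS unused_user_id
FROM (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects) AS seq
WHERE seq.n <= (SELECT MAX(user_id) FROM tbl_userid)
  AND seq.n NOT IN (SELECT user_id FROM tbl_userid);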
Given that SQL is generally a set-based language, the only way I could think to do this would be to create the full set of IDs and outer join your table where no IDs matched. The problem with that is that if your table has a significant number of records, you would have to generate a temporary table containing every ID from 1 through MAX(USER_ID). Given a table with tens or hundreds of millions of records, that could be very slow.
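For what it's worth, a sketch of that set-based approach in SQL Server (hypothetical table and column names), generating the full range with a recursive CTE and outer joining it to expose the holes:
-- generate 1..MAX(user_id), then keep the numbers with no matching row
WITH ids AS (
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM ids WHERE n < (SELECT MAX(user_id) FROM tbl_userid)
)
SELECT ids.n AS unused_user_id
FROM ids
LEFT JOIN tbl_userid u ON u.user_id = ids.n
WHERE u.user_id IS NULL
OPTION (MAXRECURSION 0);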
Just out of curiosity, why do you need to know the ID holes? Is there some specific reason, or are you just trying to not "waste" an ID? Given the processing effort to find the holes, I would think it is more efficient to just let them be.
Here's one way to do it using SQL Server 2005 or later. It may or may not work efficiently for you:
create table T (userid int);
insert into T values
(1),(2),(3),(5),(6),(9),(11);
with Trk as (
select userid,
row_number() over (
order by userid
) as rk
from T
), Truns(start,finish,gp) as (
select -1+min(userid), 1+max(userid),
userid-rk
from Trk
group by userid-rk
), Tregroup as (
select start, finish,
row_number() over (
order by gp
) as rk
from Truns
), Tpre as (
select a.finish, b.start
from Tregroup as a full outer join Tregroup as b
on a.rk + 1 = b.rk
)
select
rtrim(finish) + case when start = finish then '' else '-' + rtrim(start) end as gap
from Tpre
where finish+start is not null
drop table T;
Short of looping through all the ids (perhaps using binary search tree logic?) I don't have a good answer for you.
I would ask what you want this for? By their nature, ids are essentially meaningless - all they do is identify some data, not describe it, and as such it shouldn't be a problem if you have large gaps in your user ids. (In fact, some people would say that it's even better to have unguessable ids, to avoid users tampering with information to find security holes)