Hi, I am looking for a way to write a SQL statement that produces the following results.
Let's say we have Dept and Emp ID. I would like to generate records like this: the first two rows from Dept 3, then one row from Dept 2, then Dept 3 again, and so on:
DEPT EMPID
----- ------
3 1
3 2
2 3
3 7
3 8
2 9
Thank You.
You could use something like this
SELECT
  DEPT,
  EMPID
FROM (
  SELECT
    *,
    -- groups every two rows of a dept together (1, 1, 2, 2, ...)
    ceil((row_number() OVER (PARTITION BY dept ORDER BY EMPID)) / 2::numeric(5,2)) AS multiple_row_dept,
    -- plain row position within a dept (1, 2, 3, ...)
    row_number() OVER (PARTITION BY dept ORDER BY EMPID) AS single_row_dept
  FROM
    test_data2
) sub_query
ORDER BY
  CASE
    WHEN DEPT = 2 THEN single_row_dept
    ELSE multiple_row_dept
  END,
  DEPT DESC,
  EMPID
single_row_dept specifies which dept should appear only one row at a time; in this case it's DEPT 2, which follows the rows of the other departments grouped two at a time by multiple_row_dept.
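The query above assumes a table named test_data2 with DEPT and EMPID columns. If you want to try it, a minimal setup built from the sample rows in the question could look like this (the DDL and column types are my assumption; the original table definition isn't shown):
-- hypothetical setup for test_data2; column types are assumed
CREATE TABLE test_data2 (dept integer, empid integer);
INSERT INTO test_data2 (dept, empid) VALUES
  (3, 1), (3, 2), (2, 3), (3, 7), (3, 8), (2, 9);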
First order the table by empid in a subquery,
then calculate the remainder of rownum divided by 3,
and then, depending on the result of that calculation, return 2 or 3 using a CASE expression,
like this
SELECT
CASE REMAINDER( rownum, 3 )
WHEN 0 THEN 2
ELSE 3
END As DeptId,
empid
FROM (
SELECT empid
FROM table1
ORDER BY empid
)
demo: http://sqlfiddle.com/#!4/bd1bb/3
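To see why the mapping works, here is a small self-contained check (my own sketch, not part of the fiddle; it uses LEVEL from a CONNECT BY row generator as a stand-in for rownum over the ordered subquery):
-- REMAINDER(n, 3) is 0 only for every third row, so those rows get DeptId 2
SELECT level AS row_pos,
       REMAINDER(level, 3) AS rem,
       CASE REMAINDER(level, 3) WHEN 0 THEN 2 ELSE 3 END AS deptid
FROM dual
CONNECT BY level <= 6;
-- row_pos 1, 2 -> dept 3; row_pos 3 -> dept 2; row_pos 4, 5 -> dept 3; row_pos 6 -> dept 2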
The following code has some limitations:
1) There are only 2 DeptIds in the source table
2) The ratio between the record counts of the two DeptIds is exactly 2:1
3) The ordering in one place of the query has to be changed depending on whether the DeptId with more records is naturally greater than the other or not
with t as (
select 3 dept_id, 1 emp_id from dual
union all
select 3, 2 from dual
union all
select 3, 7 from dual
union all
select 3, 8 from dual
union all
select 2, 3 from dual
union all
select 2, 9 from dual)
select dept_id, emp_id
from (select dept_id,
emp_id,
dense_rank() over(partition by dept_id order by in_num + mod(in_num, out_num)) ord
from (select dept_id,
emp_id,
row_number() over(partition by dept_id order by emp_id) in_num,
/*in the following ORDER BY change to DESC or ASC depending on which dept_id has more records*/
dense_rank() over(order by dept_id) out_num
from t))
order by ord, dept_id desc;
DEPT_ID EMP_ID
---------- ----------
3 1
3 2
2 3
3 8
3 7
2 9
I have a table MY_TABLE like this, which stores changed data (update tracking) from many tables. Whenever another table has an update, I store the new data in MY_TABLE using an AFTER UPDATE trigger on the base tables.
ID  RECORD_DESC  EMP_ID  FIRST_NAME  LAST_NAME  GENDER  SALARY
1   EMP          5       ABC         XYZ        NULL    NULL
2   EMP          5       NULL        NULL       M       NULL
3   EMP          5       NULL        XYZ-NEW    F       NULL
4   SAL          5       NULL        NULL       NULL    1000
5   EMP          5       NULL        NULL       M       NULL
6   SAL          5       ABC-NEW     NULL       NULL    750
Now I want to query MY_TABLE to get the employee data with the latest changes across all columns, and the result should be this single row:
EMP_ID FIRST_NAME LAST_NAME GENDER SALARY
5 ABC-NEW XYZ-NEW M 750
What I have done up to now is get the MAX(ID) for each column, and with that ID I query the table again to get the column value for that ID.
But the problem is that this query puts quite a load on the DB, because I have 25 columns like this and the table will keep growing over time.
So, can someone suggest a better way to write the query below:
SELECT (SELECT FIRST_NAME FROM MY_TABLE WHERE ID = T2.FIRST_NAME_PK) AS FIRST_NAME
, (SELECT LAST_NAME FROM MY_TABLE WHERE ID = T2.LAST_NAME_PK ) AS LAST_NAME
, (SELECT GENDER FROM MY_TABLE WHERE ID = T2.GENDER_PK ) AS GENDER
, (SELECT SALARY FROM MY_TABLE WHERE ID = T2.SALARY_PK ) AS SALARY
FROM (SELECT (SELECT MAX(ID) FROM MY_TABLE WHERE EMP_ID = T1.EMP_ID AND FIRST_NAME IS NOT NULL) FIRST_NAME_PK -- ID = 6
, (SELECT MAX(ID) FROM MY_TABLE WHERE EMP_ID = T1.EMP_ID AND LAST_NAME IS NOT NULL) LAST_NAME_PK -- ID = 3
, (SELECT MAX(ID) FROM MY_TABLE WHERE EMP_ID = T1.EMP_ID AND GENDER IS NOT NULL) GENDER_PK -- ID = 5
, (SELECT MAX(ID) FROM MY_TABLE WHERE EMP_ID = T1.EMP_ID AND SALARY IS NOT NULL) SALARY_PK -- ID = 6
FROM (SELECT DISTINCT EMP_ID
FROM MY_TABLE
) T1
) T2;
Try this
SELECT DISTINCT EMP_ID
, LAST_VALUE(FIRST_NAME) IGNORE NULLS OVER (PARTITION BY EMP_ID ORDER BY ID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS FIRST_NAME
, LAST_VALUE(LAST_NAME) IGNORE NULLS OVER (PARTITION BY EMP_ID ORDER BY ID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS LAST_NAME
, LAST_VALUE(GENDER) IGNORE NULLS OVER (PARTITION BY EMP_ID ORDER BY ID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS GENDER
, LAST_VALUE(SALARY) IGNORE NULLS OVER (PARTITION BY EMP_ID ORDER BY ID ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS SALARY
FROM MY_TABLE
You can use the KEEP clause as follows:
SELECT EMP_ID,
MAX(FIRST_NAME) KEEP (DENSE_RANK FIRST ORDER BY CASE WHEN FIRST_NAME IS NULL THEN NULL ELSE ID END DESC NULLS LAST) AS FIRST_NAME,
MAX(LAST_NAME) KEEP (DENSE_RANK FIRST ORDER BY CASE WHEN LAST_NAME IS NULL THEN NULL ELSE ID END DESC NULLS LAST) AS LAST_NAME,
MAX(GENDER) KEEP (DENSE_RANK FIRST ORDER BY CASE WHEN GENDER IS NULL THEN NULL ELSE ID END DESC NULLS LAST) AS GENDER,
MAX(SALARY) KEEP (DENSE_RANK FIRST ORDER BY CASE WHEN SALARY IS NULL THEN NULL ELSE ID END DESC NULLS LAST) AS SALARY
FROM MY_TABLE
GROUP BY EMP_ID;
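In case the CASE expression looks odd: it turns rows where the column is NULL into a NULL sort key, NULLS LAST pushes those to the end, and DENSE_RANK FIRST then keeps the highest ID that actually has a value for that column. A stripped-down check of just the FIRST_NAME rule against the question's sample rows (my own sketch):
WITH my_table (id, emp_id, first_name) AS (
  SELECT 1, 5, 'ABC'     FROM dual UNION ALL
  SELECT 2, 5, NULL      FROM dual UNION ALL
  SELECT 6, 5, 'ABC-NEW' FROM dual
)
SELECT emp_id,
       MAX(first_name) KEEP (DENSE_RANK FIRST
           ORDER BY CASE WHEN first_name IS NULL THEN NULL ELSE id END DESC NULLS LAST) AS first_name
FROM my_table
GROUP BY emp_id;
-- expected: 5, ABC-NEW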
How about this? See comments within code.
SQL> with my_table (id, record_Desc, emp_id, first_name, last_name, gender, salary) as
2 -- sample data
3 (select 1, 'emp', 5, 'abc', 'xyz', null , null from dual union all
4 select 2, 'emp', 5, null, null, 'm', null from dual union all
5 select 3, 'emp', 5, null, 'xyz-new', 'f', null from dual union all
6 select 4, 'emp', 5, null, null, null, 1000 from dual union all
7 select 5, 'emp', 5, null, null, 'm', null from dual union all
8 select 6, 'emp', 5, 'abc-new', null, null, 750 from dual
9 ),
10 temp as
11 -- find last values
12 (select a.id,
13 a.record_desc,
14 a.emp_id,
15 last_value(a.first_name ignore nulls) over (partition by a.record_desc, a.emp_id order by a.id) first_name,
16 last_value(a.last_name ignore nulls) over (partition by a.record_desc, a.emp_id order by a.id) last_name,
17 last_value(a.gender ignore nulls) over (partition by a.record_desc, a.emp_id order by a.id) gender,
18 last_value(a.salary ignore nulls) over (partition by a.record_desc, a.emp_id order by a.id) salary
19 from my_table a
20 )
21 -- extract only the last row per RECORD_DESC and EMP_ID
22 select *
23 from temp c
24 where c.id = (select max(b.id) From my_table b
25 where b.record_desc = c.record_Desc
26 and b.emp_id = c.emp_id
27 );
ID REC EMP_ID FIRST_N LAST_NA G SALARY
---------- --- ---------- ------- ------- - ----------
6 emp 5 abc-new xyz-new m 750
SQL>
Your employee table has the current data. You only want to show the data that has been altered, though.
What I'd do is show the employee's data for a column only if that column has an update in the log table. We don't have to find the latest update, because no matter how often a column was updated, the employee table already contains the last value. This is a very simple operation, in spite of having to read the whole log table.
select
e.emp_id,
case when log.some_first_name is not null then e.first_name end as first_name,
case when log.some_last_name is not null then e.last_name end as last_name,
case when log.some_gender is not null then e.gender end as gender,
case when log.some_salary is not null then e.salary end as salary
from employees e
join
(
select
emp_id,
min(first_name) as some_first_name,
min(last_name) as some_last_name,
min(gender) as some_gender,
min(salary) as some_salary
from my_table
group by emp_id
) log on log.emp_id = e.emp_id
order by e.emp_id;
An alternative to running this query again and again would be a last_updates table with one row per employee and a trigger that fills it on every insert into the existing log table. If you need this often, that's the route I'd choose.
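One possible reading of that idea, as a rough sketch: the last_updates table keeps the latest non-null value per column, and a trigger merges every new log row into it. The table name, trigger name, column sizes and the COALESCE-based merge are all my assumptions, not something given in the question:
CREATE TABLE last_updates (
  emp_id     NUMBER PRIMARY KEY,
  first_name VARCHAR2(100),
  last_name  VARCHAR2(100),
  gender     VARCHAR2(10),
  salary     NUMBER
);

CREATE OR REPLACE TRIGGER trg_my_table_last_updates
AFTER INSERT ON my_table
FOR EACH ROW
BEGIN
  -- keep the previous value whenever the new log row has NULL in a column
  MERGE INTO last_updates lu
  USING (SELECT :new.emp_id AS emp_id FROM dual) src
  ON (lu.emp_id = src.emp_id)
  WHEN MATCHED THEN UPDATE SET
    lu.first_name = COALESCE(:new.first_name, lu.first_name),
    lu.last_name  = COALESCE(:new.last_name,  lu.last_name),
    lu.gender     = COALESCE(:new.gender,     lu.gender),
    lu.salary     = COALESCE(:new.salary,     lu.salary)
  WHEN NOT MATCHED THEN INSERT (emp_id, first_name, last_name, gender, salary)
    VALUES (:new.emp_id, :new.first_name, :new.last_name, :new.gender, :new.salary);
END;
/
With 25 columns this gets verbose, but reads then become a single-row lookup per employee instead of a scan over the whole log.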
Trainee here.
I need to create a list of last names from the same table.
Say we have a table named "sample"; this table only consists of:
What I want to do here is select the last name column twice, but with different ordering: the first column ascending and the second column descending, like the photo below.
Here's one option: it splits first and last names into two subqueries which use the row_number analytic function; that row number is then used to join the rows.
Lines #1 - 6 represent your sample data. The query you really need begins at line #7.
SQL> with test (last_name, first_name) as
2 (select 'L1one' , 'F1one' from dual union all
3 select 'L2two' , 'F2two' from dual union all
4 select 'L3three', 'F3hthree' from dual union all
5 select 'L4four' , 'F4four' from dual
6 ),
7 ln as
8 (select last_name,
9 row_Number() over (order by last_name) rn
10 from test
11 ),
12 fn as
13 (select first_name,
14 row_number() over (order by first_name desc) rn
15 from test
16 )
17 select l.last_name, f.first_name
18 from ln l join fn f on f.rn = l.rn
19 order by l.last_name
20 /
LAST_NA FIRST_NA
------- --------
L1one F4four
L2two F3hthree
L3three F2two
L4four F1one
SQL>
[EDIT: both last names? I thought it was a typo]
If that's so, a self-join is a better option:
SQL> with test (last_name, first_name) as
2 (select 'L1one' , 'F1one' from dual union all
3 select 'L2two' , 'F2two' from dual union all
4 select 'L3three', 'F3hthree' from dual union all
5 select 'L4four' , 'F4four' from dual
6 ),
7 temp as
8 (select last_name,
9 row_number() over (order by last_name asc) rna,
10 row_number() over (order by last_name desc) rnd
11 from test
12 )
13 select a.last_name, d.last_name
14 from temp a join temp d on a.rna = d.rnd
15 order by a.last_name;
LAST_NA LAST_NA
------- -------
L1one L4four
L2two L3three
L3three L2two
L4four L1one
SQL>
Here is one way to do it. In a single subquery, assign the ordinal number (rn) based on ascending order, but also keep track of the total row count. Then follow with a join.
with
test (last_name, first_name) as (
select 'L1one' , 'F1one' from dual union all
select 'L2two' , 'F2two' from dual union all
select 'L3three', 'F3hthree' from dual union all
select 'L4four' , 'F4four' from dual
)
, prep (last_name, rn, ct) as (
select last_name, row_number() over (order by last_name), count(*) over ()
from test
)
select a.last_name as last_name_asc, b.last_name as last_name_desc
from prep a inner join prep b on a.rn + b.rn = a.ct + 1  -- the k-th row in ascending order pairs with the k-th row in descending order
;
LAST_NAME_ASC LAST_NAME_DESC
-------------- --------------
L1one L4four
L2two L3three
L3three L2two
L4four L1one
ID Name dep_id
1 A 1
2 B 2
3 A 1
4 A 2
5 B 2
6 A 2
I think you want a SQL query such as this:
with tab( ID, Name, dep_id) as
(
select 1,'A',1 union all
select 2,'B',2 union all
select 3,'A',1 union all
select 4,'A',2 union all
select 5,'B',2 union all
select 6,'A',2
)
select name,
count(dep_id) as dept_count
from tab t
group by name
having count(name)>1;
NAME DEPT_COUNT
---- ----------
A 4
B 2
Due to your last edit (which you wanted added to this answer), consider grouping also by dep_id:
with tab( ID, Name, dep_id) as
(
select 1,'A',1 union all
select 2,'B',2 union all
select 3,'A',1 union all
select 4,'A',2 union all
select 5,'B',2 union all
select 6,'A',2
)
select name, dep_id,
count(dep_id) as dept_count
from tab t
group by name, dep_id
having count(name)>1;
NAME DEP_ID DEPT_COUNT
---- ------ ----------
A 2 2
A 1 2
B 2 2
I am trying to write a query to show the Id, Name, and number of departments for the rows in the given table that refer to more than one department.
ID Name Department
-- ---- ----------
1 Sam HR
1 Sam FINANCE
2 Ron PAYROLL
3 Kia HR
3 Kia IT
Result :
ID Name Department
-- ---- ----------
1 Sam 2
3 Kia 2
I tried using group by id and count(*), but the query gives an error.
How can I do this?
Without seeing your query, a blind guess is that you got the GROUP BY clause wrong (if you used it at all) and forgot to include the HAVING clause.
Anyway, something like this might be what you're looking for:
SQL> with test (id, name, department) as
2 (select 1, 'sam', 'hr' from dual union
3 select 1, 'sam', 'finance' from dual union
4 select 2, 'ron', 'payroll' from dual union
5 select 3, 'kia', 'hr' from dual union
6 select 3, 'kia', 'it' from dual
7 )
8 select id, name, count(*)
9 from test
10 group by id, name
11 having count(*) > 1
12 order by id;
ID NAM COUNT(*)
---------- --- ----------
1 sam 2
3 kia 2
SQL>
You were right about using count(). You need to group by the other columns though, count only unique departments, and then filter on that number in the HAVING clause.
select id, name, count(distinct department) as no_of_department
from table
group by id, name
having count(distinct department) > 1
This can also be done using analytic functions like below:
select *
from (
select id, name, count(distinct department) over (partition by id, name) as no_of_department
from table
) t
where no_of_department > 1
You can use a window function with a subquery:
select distinct id, name, Noofdepartment
from (select t.*, count(*) over (partition by id,name) Noofdepartment
from table t
) t
where Noofdepartment > 1;
However, you can also use a GROUP BY clause:
select id, name, count(*) as Noofdepartment
from table t
group by id, name
having count(*) > 1;
Here is my table:
ID Name Department
1 Michael Marketing
2 Alex Marketing
3 Tom Marketing
4 John Sales
5 Brad Marketing
6 Leo Marketing
7 Kevin Production
I am trying to find ID ranges where Department = 'Marketing':
Range From To
Range1 1 3
Range2 5 6
Any help would be appreciated.
This is easy to do with a technique called Tabibitosan.
What this technique does is compare the positions of each group's rows to the overall set of rows, in order to work out if rows in the same group are next to each other or not.
E.g., with your example data, this looks like:
WITH your_table AS (SELECT 1 ID, 'Michael' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 2 ID, 'Alex' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 3 ID, 'Tom' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 4 ID, 'John' NAME, 'Sales' department FROM dual UNION ALL
SELECT 5 ID, 'Brad' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 6 ID, 'Leo' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 7 ID, 'Kevin' NAME, 'Production' department FROM dual)
-- end of mimicking your table with data in it. See the SQL below:
SELECT ID,
NAME,
department,
row_number() OVER (ORDER BY ID) overall_rn,
row_number() OVER (PARTITION BY department ORDER BY ID) department_rn,
row_number() OVER (ORDER BY ID) - row_number() OVER (PARTITION BY department ORDER BY ID) grp
FROM your_table;
ID NAME DEPARTMENT OVERALL_RN DEPARTMENT_RN GRP
---------- ------- ---------- ---------- ------------- ----------
1 Michael Marketing 1 1 0
2 Alex Marketing 2 2 0
3 Tom Marketing 3 3 0
4 John Sales 4 1 3
5 Brad Marketing 5 4 1
6 Leo Marketing 6 5 1
7 Kevin Production 7 1 6
Here, I've given all the rows across the entire set of data a row number in ascending id order (the overall_rn column), and I've given the rows in each department a row number (the department_rn column), again in ascending id order.
Now that I've done that, we can subtract one from the other (the grp column).
Notice how the number in the grp column remains the same for department rows that are next to each other, but it changes each time there's a gap.
E.g. for the Marketing department, rows 1-3 are next to each other and have grp = 0, but the 4th Marketing row is actually on the 5th row of the overall results set, so it now has a different grp number. Since the 5th marketing row is on the 6th row of the overall set, it has the same grp number as the 4th marketing row, so we know they're next to each other.
Once we have that grp information, it's a simple matter of doing an aggregate query grouping on both the department and our new grp column, using min and max to find the start and end ids:
WITH your_table AS (SELECT 1 ID, 'Michael' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 2 ID, 'Alex' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 3 ID, 'Tom' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 4 ID, 'John' NAME, 'Sales' department FROM dual UNION ALL
SELECT 5 ID, 'Brad' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 6 ID, 'Leo' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 7 ID, 'Kevin' NAME, 'Production' department FROM dual)
-- end of mimicking your table with data in it. See the SQL below:
SELECT department,
MIN(ID) start_id,
MAX(ID) end_id
FROM (SELECT ID,
NAME,
department,
row_number() OVER (ORDER BY ID) - row_number() OVER (PARTITION BY department ORDER BY ID) grp
FROM your_table)
GROUP BY department, grp;
DEPARTMENT START_ID END_ID
---------- ---------- ----------
Marketing 1 3
Marketing 5 6
Sales 4 4
Production 7 7
N.B. I've assumed that gaps in the id column aren't important (i.e. if there were no row for id = 6, so that Leo and Kevin's ids were 7 and 8 respectively, then Leo and Brad would still appear in the same group, with start id = 5 and end id = 7).
If gaps in the id column count as indicating a new group, then you could just use the id to label the overall set of rows (i.e. there's no need to calculate the overall_rn; just use the id column instead).
That means your query would become:
WITH your_table AS (SELECT 1 ID, 'Michael' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 2 ID, 'Alex' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 3 ID, 'Tom' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 4 ID, 'John' NAME, 'Sales' department FROM dual UNION ALL
SELECT 5 ID, 'Brad' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 7 ID, 'Leo' NAME, 'Marketing' department FROM dual UNION ALL
SELECT 8 ID, 'Kevin' NAME, 'Production' department FROM dual)
-- end of mimicking your table with data in it. See the SQL below:
SELECT department,
MIN(ID) start_id,
MAX(ID) end_id
FROM (SELECT ID,
NAME,
department,
ID - row_number() OVER (PARTITION BY department ORDER BY ID) grp
FROM your_table)
GROUP BY department, grp;
DEPARTMENT START_ID END_ID
---------- ---------- ----------
Marketing 1 3
Sales 4 4
Marketing 5 5
Marketing 7 7
Production 8 8
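And if you need the exact Range / From / To shape from the question, limited to Marketing, a small wrapper over the same grouping trick could look like this (my own sketch, reusing the id-based grp from the last query):
SELECT 'Range' || ROW_NUMBER() OVER (ORDER BY MIN(ID)) AS range_name,
       MIN(ID) AS range_from,
       MAX(ID) AS range_to
FROM (SELECT ID,
             ID - row_number() OVER (PARTITION BY department ORDER BY ID) grp
      FROM your_table
      WHERE department = 'Marketing')
GROUP BY grp
ORDER BY range_from;
-- against the data in the question this should give Range1 (1 to 3) and Range2 (5 to 6)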
I don't have the environment at hand right now, but you can try something like this:
select * from tab1 where id in
(select min(id) from tab1 where Department = 'Marketing'
union
select max(id) from tab1 where Department = 'Marketing')