I have multiple lines of text in file.txt
4141 2019-01-08T14:42:55.000+02:00 JonhSmith LS08EE0I 30
2128 2019-11-02T13:47:34.000+02:00 James Davis RT84SO1 40
2293 2019-12-21T17:41:37.000+02:00 James Davis bissness 30
1931 2019-12-15T12:16:48.000+02:00 James Davis IL44DEAA 30
2124 2019-10-12T15:23:46.000+03:00 James Davis AA4074S21 40
2035 2019-12-09T15:33:28.000+02:00 James Davis bissness 30
4843 2022-03-02T12:48:34.000+02:00 Wilson Robert JR autotesit 20
5361 2022-03-02T12:44:55.000+02:00 Wilson Robert JR autotesit 40
2135 2019-10-12T21:06:30.000+03:00 James Davis FR4SA21 40
2122 2019-12-23T20:10:06.000+02:00 Administrator QQ2366I 10
2123 2019-10-12T15:40:16.000+03:00 James Davis LS1d0784EW 40
5075 2022-03-02T12:49:10.000+02:00 Lee Patricia JR autotesit 40
2224 2019-12-20T16:26:36.000+02:00 James Davis G1bissness 30
2582 2021-06-20T15:07:19.000+03:00 Jame E2bissness 30
2121 2019-10-12T17:12:38.000+03:00 James Davis AZ1878S 40
4694 2022-06-20T16:00:48.000+03:00 Oliver A autotest 50
2076 2019-12-02T18:32:42.000+02:00 James Davis bissness 40
2694 2021-04-23T11:42:58.000+03:00 Scott Harper JR AZ0410MAN 40
1721 2019-07-13T15:30:56.000+03:00 Hall Braylon AZ14089D 10
1863 2019-07-25T15:45:02.000+03:00 Diaz Thomas AZ141IJ 40
The last column is an access level: 10 Minimal acces, 20 Guest, 30 View, 40 Reporter, 50 Owner.
I tried sed 's/\b30\b/View/g' file.txt, but it changed the number throughout the whole file, and I only need to change the last column.
I need the text to look like this:
4141 2019-01-08T14:42:55.000+02:00 JonhSmith LS08EE0I View
2128 2019-11-02T13:47:34.000+02:00 James Davis RT84SO1 Reporter
2293 2019-12-21T17:41:37.000+02:00 James Davis bissness View
1931 2019-12-15T12:16:48.000+02:00 James Davis IL44DEAA View
2124 2019-10-12T15:23:46.000+03:00 James Davis AA4074S21 Reporter
2035 2019-12-09T15:33:28.000+02:00 James Davis bissness View
4843 2022-03-02T12:48:34.000+02:00 Wilson Robert JR autotesit Guest
5361 2022-03-02T12:44:55.000+02:00 Wilson Robert JR autotesit Reporter
2135 2019-10-12T21:06:30.000+03:00 James Davis FR4SA21 Reporter
2122 2019-12-23T20:10:06.000+02:00 Administrator QQ2366I Minimal acces
2123 2019-10-12T15:40:16.000+03:00 James Davis LS1d0784EW Reporter
5075 2022-03-02T12:49:10.000+02:00 Lee Patricia JR autotesit Reporter
2224 2019-12-20T16:26:36.000+02:00 James Davis G1bissness View
2582 2021-06-20T15:07:19.000+03:00 Jame E2bissness View
2121 2019-10-12T17:12:38.000+03:00 James Davis AZ1878S Reporter
4694 2022-06-20T16:00:48.000+03:00 Oliver A autotest Owner
2076 2019-12-02T18:32:42.000+02:00 James Davis bissness Reporter
2694 2021-04-23T11:42:58.000+03:00 Scott Harper JR AZ0410MAN Reporter
1721 2019-07-13T15:30:56.000+03:00 Hall Braylon AZ14089D Minimal acces
1863 2019-07-25T15:45:02.000+03:00 Diaz Thomas AZ141IJ Reporter
I would use GNU AWK for this task in the following way. Let file.txt content be
4141 2019-01-08T14:42:55.000+02:00 JonhSmith LS08EE0I 30
2128 2019-11-02T13:47:34.000+02:00 James Davis RT84SO1 40
2293 2019-12-21T17:41:37.000+02:00 James Davis bissness 30
1931 2019-12-15T12:16:48.000+02:00 James Davis IL44DEAA 30
2124 2019-10-12T15:23:46.000+03:00 James Davis AA4074S21 40
2035 2019-12-09T15:33:28.000+02:00 James Davis bissness 30
4843 2022-03-02T12:48:34.000+02:00 Wilson Robert JR autotesit 20
5361 2022-03-02T12:44:55.000+02:00 Wilson Robert JR autotesit 40
2135 2019-10-12T21:06:30.000+03:00 James Davis FR4SA21 40
2122 2019-12-23T20:10:06.000+02:00 Administrator QQ2366I 10
2123 2019-10-12T15:40:16.000+03:00 James Davis LS1d0784EW 40
5075 2022-03-02T12:49:10.000+02:00 Lee Patricia JR autotesit 40
2224 2019-12-20T16:26:36.000+02:00 James Davis G1bissness 30
2582 2021-06-20T15:07:19.000+03:00 Jame E2bissness 30
2121 2019-10-12T17:12:38.000+03:00 James Davis AZ1878S 40
4694 2022-06-20T16:00:48.000+03:00 Oliver A autotest 50
2076 2019-12-02T18:32:42.000+02:00 James Davis bissness 40
2694 2021-04-23T11:42:58.000+03:00 Scott Harper JR AZ0410MAN 40
1721 2019-07-13T15:30:56.000+03:00 Hall Braylon AZ14089D 10
1863 2019-07-25T15:45:02.000+03:00 Diaz Thomas AZ141IJ 40
then
awk 'BEGIN{a[10]="Minimal acces";a[20]="Guest";a[30]="View";a[40]="Reporter";a[50]="Owner"}{$NF=a[$NF];print}' file.txt
gives output
4141 2019-01-08T14:42:55.000+02:00 JonhSmith LS08EE0I View
2128 2019-11-02T13:47:34.000+02:00 James Davis RT84SO1 Reporter
2293 2019-12-21T17:41:37.000+02:00 James Davis bissness View
1931 2019-12-15T12:16:48.000+02:00 James Davis IL44DEAA View
2124 2019-10-12T15:23:46.000+03:00 James Davis AA4074S21 Reporter
2035 2019-12-09T15:33:28.000+02:00 James Davis bissness View
4843 2022-03-02T12:48:34.000+02:00 Wilson Robert JR autotesit Guest
5361 2022-03-02T12:44:55.000+02:00 Wilson Robert JR autotesit Reporter
2135 2019-10-12T21:06:30.000+03:00 James Davis FR4SA21 Reporter
2122 2019-12-23T20:10:06.000+02:00 Administrator QQ2366I Minimal acces
2123 2019-10-12T15:40:16.000+03:00 James Davis LS1d0784EW Reporter
5075 2022-03-02T12:49:10.000+02:00 Lee Patricia JR autotesit Reporter
2224 2019-12-20T16:26:36.000+02:00 James Davis G1bissness View
2582 2021-06-20T15:07:19.000+03:00 Jame E2bissness View
2121 2019-10-12T17:12:38.000+03:00 James Davis AZ1878S Reporter
4694 2022-06-20T16:00:48.000+03:00 Oliver A autotest Owner
2076 2019-12-02T18:32:42.000+02:00 James Davis bissness Reporter
2694 2021-04-23T11:42:58.000+03:00 Scott Harper JR AZ0410MAN Reporter
1721 2019-07-13T15:30:56.000+03:00 Hall Braylon AZ14089D Minimal acces
1863 2019-07-25T15:45:02.000+03:00 Diaz Thomas AZ141IJ Reporter
Explanation: in BEGIN I create array a with the replacements described in the requirements; then, for each line, I use the value from a to replace the value of the last field ($NF) and print the changed line. Disclaimer: this solution assumes that the value of the last field is always one of the keys present in a.
(tested in gawk 4.2.1)
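If the last field can hold a value that is not a key of a, a guarded variant of the same one-liner (a sketch) leaves such lines unchanged rather than blanking the field:
awk 'BEGIN{a[10]="Minimal acces";a[20]="Guest";a[30]="View";a[40]="Reporter";a[50]="Owner"}$NF in a{$NF=a[$NF]}{print}' file.txt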
Using any awk in any shell on every Unix box:
$ cat tst.awk
BEGIN {
    # split the mapping on commas (and surrounding spaces) into tmp[]
    n = split("10 Minimal acces, 20 Guest, 30 View, 40 Reporter, 50 Owner",tmp,/ *, */)
    for (i in tmp) {
        old = new = tmp[i]
        sub(/ .*/,"",old)       # old = the leading numeric code
        sub(/[^ ]* */,"",new)   # new = everything after the code
        map[old] = new
    }
}
$NF in map {                    # only touch lines whose last field has a mapping
    $NF = map[$NF]
}
{ print }
$ awk -f tst.awk file
4141 2019-01-08T14:42:55.000+02:00 JonhSmith LS08EE0I View
2128 2019-11-02T13:47:34.000+02:00 James Davis RT84SO1 Reporter
2293 2019-12-21T17:41:37.000+02:00 James Davis bissness View
1931 2019-12-15T12:16:48.000+02:00 James Davis IL44DEAA View
2124 2019-10-12T15:23:46.000+03:00 James Davis AA4074S21 Reporter
2035 2019-12-09T15:33:28.000+02:00 James Davis bissness View
4843 2022-03-02T12:48:34.000+02:00 Wilson Robert JR autotesit Guest
5361 2022-03-02T12:44:55.000+02:00 Wilson Robert JR autotesit Reporter
2135 2019-10-12T21:06:30.000+03:00 James Davis FR4SA21 Reporter
2122 2019-12-23T20:10:06.000+02:00 Administrator QQ2366I Minimal acces
2123 2019-10-12T15:40:16.000+03:00 James Davis LS1d0784EW Reporter
5075 2022-03-02T12:49:10.000+02:00 Lee Patricia JR autotesit Reporter
2224 2019-12-20T16:26:36.000+02:00 James Davis G1bissness View
2582 2021-06-20T15:07:19.000+03:00 Jame E2bissness View
2121 2019-10-12T17:12:38.000+03:00 James Davis AZ1878S Reporter
4694 2022-06-20T16:00:48.000+03:00 Oliver A autotest Owner
2076 2019-12-02T18:32:42.000+02:00 James Davis bissness Reporter
2694 2021-04-23T11:42:58.000+03:00 Scott Harper JR AZ0410MAN Reporter
1721 2019-07-13T15:30:56.000+03:00 Hall Braylon AZ14089D Minimal acces
1863 2019-07-25T15:45:02.000+03:00 Diaz Thomas AZ141IJ Reporter
This might work for you (GNU sed):
lookup=" 10 Minimal acces, 20 Guest, 30 View, 40 Reporter, 50 Owner,"
sed -E 's/$/\n'"${lookup}"'/;s/( \S+)\n.*\1( [^,]+).*/\2/;P;d' file
Append a lookup table to each line, then use regexp back references to replace the last field of the line with its matching description.
N.B. If no match is found the line is printed as is.
mapping="10 Minimal acces, 20 Guest, 30 View, 40 Reporter, 50 Owner, "
awk -v map="$mapping" '
BEGIN {
    split(map, a, ",");
    for (i in a) {
        num  = gensub(/^([ ]*)?([^ ]*)([ ]*)?(.*)$/, "\\2", "g", a[i])   # the leading numeric code
        desc = gensub(/^([ ]*)?([^ ]*)([ ]*)?(.*)$/, "\\4", "g", a[i])   # the description after it
        newnf[num] = desc
    }
}
{$NF = newnf[$NF]}1
' input_file
4141 2019-01-08T14:42:55.000+02:00 JonhSmith LS08EE0I View
2128 2019-11-02T13:47:34.000+02:00 James Davis RT84SO1 Reporter
2293 2019-12-21T17:41:37.000+02:00 James Davis bissness View
1931 2019-12-15T12:16:48.000+02:00 James Davis IL44DEAA View
2124 2019-10-12T15:23:46.000+03:00 James Davis AA4074S21 Reporter
2035 2019-12-09T15:33:28.000+02:00 James Davis bissness View
4843 2022-03-02T12:48:34.000+02:00 Wilson Robert JR autotesit Guest
5361 2022-03-02T12:44:55.000+02:00 Wilson Robert JR autotesit Reporter
2135 2019-10-12T21:06:30.000+03:00 James Davis FR4SA21 Reporter
2122 2019-12-23T20:10:06.000+02:00 Administrator QQ2366I Minimal acces
2123 2019-10-12T15:40:16.000+03:00 James Davis LS1d0784EW Reporter
5075 2022-03-02T12:49:10.000+02:00 Lee Patricia JR autotesit Reporter
2224 2019-12-20T16:26:36.000+02:00 James Davis G1bissness View
2582 2021-06-20T15:07:19.000+03:00 Jame E2bissness View
2121 2019-10-12T17:12:38.000+03:00 James Davis AZ1878S Reporter
4694 2022-06-20T16:00:48.000+03:00 Oliver A autotest Owner
2076 2019-12-02T18:32:42.000+02:00 James Davis bissness Reporter
2694 2021-04-23T11:42:58.000+03:00 Scott Harper JR AZ0410MAN Reporter
1721 2019-07-13T15:30:56.000+03:00 Hall Braylon AZ14089D Minimal acces
1863 2019-07-25T15:45:02.000+03:00 Diaz Thomas AZ141IJ Reporter
Another solution:
mapping="10 Minimal acces, 20 Guest, 30 View, 40 Reporter, 50 Owner, "
awk '
NR==FNR{ n=$1; $1=""; gsub(/^ /,"",$0); a[n]=$0; next }   # first input: build the code->name map
{ $NF=a[$NF] }1
' <(tr ',' '\n' <<<"$mapping") input_file
Here tr turns the comma-separated mapping into one pair per line, which awk reads first through process substitution (bash/ksh/zsh).
4141 2019-01-08T14:42:55.000+02:00 JonhSmith LS08EE0I View
2128 2019-11-02T13:47:34.000+02:00 James Davis RT84SO1 Reporter
2293 2019-12-21T17:41:37.000+02:00 James Davis bissness View
1931 2019-12-15T12:16:48.000+02:00 James Davis IL44DEAA View
2124 2019-10-12T15:23:46.000+03:00 James Davis AA4074S21 Reporter
2035 2019-12-09T15:33:28.000+02:00 James Davis bissness View
4843 2022-03-02T12:48:34.000+02:00 Wilson Robert JR autotesit Guest
5361 2022-03-02T12:44:55.000+02:00 Wilson Robert JR autotesit Reporter
2135 2019-10-12T21:06:30.000+03:00 James Davis FR4SA21 Reporter
2122 2019-12-23T20:10:06.000+02:00 Administrator QQ2366I Minimal acces
2123 2019-10-12T15:40:16.000+03:00 James Davis LS1d0784EW Reporter
5075 2022-03-02T12:49:10.000+02:00 Lee Patricia JR autotesit Reporter
2224 2019-12-20T16:26:36.000+02:00 James Davis G1bissness View
2582 2021-06-20T15:07:19.000+03:00 Jame E2bissness View
2121 2019-10-12T17:12:38.000+03:00 James Davis AZ1878S Reporter
4694 2022-06-20T16:00:48.000+03:00 Oliver A autotest Owner
2076 2019-12-02T18:32:42.000+02:00 James Davis bissness Reporter
2694 2021-04-23T11:42:58.000+03:00 Scott Harper JR AZ0410MAN Reporter
1721 2019-07-13T15:30:56.000+03:00 Hall Braylon AZ14089D Minimal acces
1863 2019-07-25T15:45:02.000+03:00 Diaz Thomas AZ141IJ Reporter
You can use
sed 's/\([[:space:]]\)30[[:space:]]*$/\1View/' file.txt > newfile.txt
Here, the number 30 is matched only at the end of the string ($) and only when preceded by a whitespace character.
Details:
\([[:space:]]\) - Capturing group 1 (\1 in the replacement pattern refers to this group value): a whitespace
30 - a fixed string
[[:space:]]* - zero or more (trailing) whitespaces
$ - end of string.
See an online demo:
#!/bin/bash
s='4141 2019-01-08T14:42:55.000+02:00 JonhSmith LS08EE0I 30
2128 2019-11-02T13:47:34.000+02:00 James Davis RT84SO1 40
10 Minimal acces, 20 Guest, 30 View, 40 Reporter, 50 Owner,'
sed 's/\([[:space:]]\)30[[:space:]]*$/\1View/' <<< "$s"
Output:
4141 2019-01-08T14:42:55.000+02:00 JonhSmith LS08EE0I View
2128 2019-11-02T13:47:34.000+02:00 James Davis RT84SO1 40
10 Minimal acces, 20 Guest, 30 View, 40 Reporter, 50 Owner,
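The same technique extends to all five codes by chaining one substitution per code; a sketch built from the pattern above:
sed 's/\([[:space:]]\)10[[:space:]]*$/\1Minimal acces/;
     s/\([[:space:]]\)20[[:space:]]*$/\1Guest/;
     s/\([[:space:]]\)30[[:space:]]*$/\1View/;
     s/\([[:space:]]\)40[[:space:]]*$/\1Reporter/;
     s/\([[:space:]]\)50[[:space:]]*$/\1Owner/' file.txt > newfile.txt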
As long as you're not concerned about existing extra padded spaces being squeezed away:
{m,g}awk '
BEGIN {
__[___ =_+= _+= (_+=_^=_<_) \
+_--] = "Minimal access"
__[_+=+_] = "Guest"
__[_+=+_] = "Reporter"
__[_+___] = "Owner"
__[_-___] = "View" } $NF = __[$NF]'
Output:
4141 2019-01-08T14:42:55.000+02:00 JonhSmith LS08EE0I View
2128 2019-11-02T13:47:34.000+02:00 James Davis RT84SO1 Reporter
2293 2019-12-21T17:41:37.000+02:00 James Davis bissness View
1931 2019-12-15T12:16:48.000+02:00 James Davis IL44DEAA View
2124 2019-10-12T15:23:46.000+03:00 James Davis AA4074S21 Reporter
2035 2019-12-09T15:33:28.000+02:00 James Davis bissness View
4843 2022-03-02T12:48:34.000+02:00 Wilson Robert JR autotesit Guest
5361 2022-03-02T12:44:55.000+02:00 Wilson Robert JR autotesit Reporter
2135 2019-10-12T21:06:30.000+03:00 James Davis FR4SA21 Reporter
2122 2019-12-23T20:10:06.000+02:00 Administrator QQ2366I Minimal access
2123 2019-10-12T15:40:16.000+03:00 James Davis LS1d0784EW Reporter
5075 2022-03-02T12:49:10.000+02:00 Lee Patricia JR autotesit Reporter
2224 2019-12-20T16:26:36.000+02:00 James Davis G1bissness View
2582 2021-06-20T15:07:19.000+03:00 Jame E2bissness View
2121 2019-10-12T17:12:38.000+03:00 James Davis AZ1878S Reporter
4694 2022-06-20T16:00:48.000+03:00 Oliver A autotest Owner
2076 2019-12-02T18:32:42.000+02:00 James Davis bissness Reporter
2694 2021-04-23T11:42:58.000+03:00 Scott Harper JR AZ0410MAN Reporter
1721 2019-07-13T15:30:56.000+03:00 Hall Braylon AZ14089D Minimal access
1863 2019-07-25T15:45:02.000+03:00 Diaz Thomas AZ141IJ Reporter
I am using SQL Server 2014 and I have the following T-SQL query running against a table (tbl1).
Extract of tbl1:
emp_code    Name     Address      Company
---------------------------------------
100         Peter    London       ABC
125         Allan    Cambridge    DCE
125         Allan    Cambridge    DCE
115         John     Suffolk      ABC
115         John     Suffolk      XYZ
154         Mary     Highlands    ABC
154         Mary     Bristol      ABC
154         Mary     Chester      ABC
My T-SQL query stands as follows:
SELECT
[emp_code],
[Name],
[Address],
[Company],
ROW_NUMBER() OVER (PARTITION BY [emp_code] ORDER BY [Address]) AS RowNumber
FROM
[tbl1]
Output from above query:
emp_code    Name     Address      Company   RowNumber
--------------------------------------------------------
100         Peter    London       ABC       1
125         Allan    Cambridge    DCE       1
125         Allan    Cambridge    DCE       2
115         John     Suffolk      ABC       1
115         John     Suffolk      XYZ       2
154         Mary     Highlands    ABC       1
154         Mary     Bristol      ABC       2
154         Mary     Chester      ABC       3
Output I'm after:
emp_code    Name     Address      Company   RowNumber
---------------------------------------------------------
100         Peter    London       ABC       1
125         Allan    Cambridge    DCE       1
125         Allan    Cambridge    DCE       1
115         John     Suffolk      ABC       1
115         John     Suffolk      XYZ       1
154         Mary     Highlands    ABC       1
154         Mary     Bristol      ABC       2
154         Mary     Chester      ABC       3
I want my RowNumber (or change the column name if need be) to change based on the [Address] column for each [emp_code]. If the employee has the SAME address, it should have the same value (that is, 1). Else, it should give the values as in the case of employee "Mary" (above output).
I am assuming the Row_Number() function is not the right one to be used for what I'm after.
Any help would be appreciated.
I think you want DENSE_RANK here rather than ROW_NUMBER():
SELECT [emp_code], [Name], [Address], [Company],
DENSE_RANK() OVER (PARTITION BY [emp_code]
ORDER BY [Address]) AS DenseRank
FROM [tbl1];
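To see the difference, both functions can be computed side by side over the same partition (a sketch against the same table):
SELECT [emp_code], [Name], [Address], [Company],
       ROW_NUMBER() OVER (PARTITION BY [emp_code] ORDER BY [Address]) AS RowNumber,
       DENSE_RANK() OVER (PARTITION BY [emp_code] ORDER BY [Address]) AS DenseRank
FROM [tbl1];
ROW_NUMBER() always increments within the partition, while DENSE_RANK() repeats the rank for rows that tie on [Address], which is the behaviour the expected output shows.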
I have three tables in my database Books, Borrowers and Movement:
Books
BookID      Title                          Author                    Category        Published
----------- ------------------------------ ------------------------- --------------- ----------
101         Ulysses                        James Joyce               Fiction         1922-06-16
102         Huckleberry Finn               Mark Twain                Fiction         1884-03-24
103         The Great Gatsby               F. Scott Fitzgerald       Fiction         1925-06-17
104         1984                           George Orwell             Fiction         1949-04-19
105         War and Peace                  Leo Tolstoy               Fiction         1869-08-01
106         Gullivers Travels              Jonathan Swift            Fiction         1726-07-01
107         Moby Dick                      Herman Melville           Fiction         1851-08-01
108         Pride and Prejudice            Jane Austen               Fiction         1813-08-13
110         The Second World War           Winston Churchill         NonFiction      1953-09-01
111         Relativity                     Albert Einstein           NonFiction      1917-01-09
112         The Right Stuff                Tom Wolfe                 NonFiction      1979-09-07
121         Hitchhikers Guide to Galaxy    Douglas Adams             Humour          1975-10-27
122         Dad Is Fat                     Jim Gaffigan              Humour          2013-03-01
131         Kick-Ass 2                     Mark Millar               Comic           2012-03-03
133         Beautiful Creatures: The Manga Kami Garcia               Comic           2014-07-01
Borrowers
BorrowerID  Name                      Birthday
----------- ------------------------- ----------
2           Bugs Bunny                1938-09-08
3           Homer Simpson             1992-09-09
5           Mickey Mouse              1928-02-08
7           Fred Flintstone           1960-06-09
11          Charlie Brown             1965-06-05
13          Popeye                    1933-03-03
17          Donald Duck               1937-07-27
19          Mr. Magoo                 1949-09-14
23          George Jetson             1948-04-08
29          SpongeBob SquarePants     1984-08-04
31          Stewie Griffin            1971-11-17
Movement
MoveID      BookID      BorrowerID  DateOut    DateIn     ReturnCondition
----------- ----------- ----------- ---------- ---------- ---------------
1           131         31          2012-06-01 2013-05-24 good
2           101         23          2012-02-10 2012-03-24 good
3           102         29          2012-02-01 2012-04-01 good
4           105         7           2012-03-23 2012-05-11 good
5           103         7           2012-03-22 2012-04-22 good
6           108         7           2012-01-23 2012-02-12 good
7           112         19          2012-01-12 2012-02-10 good
8           122         11          2012-04-14 2013-05-01 poor
9           106         17          2013-01-24 2013-02-01 good
10          104         2           2013-02-24 2013-03-10 bitten
11          121         3           2013-03-01 2013-04-01 good
12          131         19          2013-04-11 2013-05-23 good
13          111         5           2013-05-22 2013-06-22 poor
14          131         2           2013-06-12 2013-07-23 bitten
15          122         23          2013-07-10 2013-08-12 good
16          107         29          2014-01-01 2014-02-14 good
17          110         7           2014-01-11 2014-02-01 good
18          105         2           2014-02-22 2014-03-02 bitten
What is a query I can use to find out which book was borrowed by the oldest borrower?
I am new to SQL and am using Microsoft SQL Server 2014
Here are two different solutions:
First, using two subqueries and one equi-join:
select Title
from Books b, Movement m
where b.BookID = m.BookID
  and m.BorrowerID = (select BorrowerID
                      from Borrowers
                      where Birthday = (select MIN(Birthday)
                                        from Borrowers))
Using two equi-joins and one subquery:
select Title
from Books b, Borrowers r, Movement m
where b.BookID = m.BookID
and m.BorrowerID = r.BorrowerID
and Birthday = (select MIN(Birthday) from Borrowers)
Both of the above queries give the following answer:
Title
------------------------------
Relativity
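The same result can also be written with explicit JOIN syntax, which many find easier to read (a sketch):
select b.Title
from Books b
join Movement m on m.BookID = b.BookID
join Borrowers r on r.BorrowerID = m.BorrowerID
where r.Birthday = (select MIN(Birthday) from Borrowers)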
I'm trying to create a new column in a SELECT statement that picks out top-level lines from within the same table.
SAMPLE DATA:
ITEM_VALUE   DESCRIPTION         LEVEL_NO   ITEM_ABOVE
100          Ford                3          CAR
200          Honda Own           3          CAR
210          Honda 3rd Party     3          CAR
1000         Ford                4          100
2000         Honda T Own         4          200
801          Ford 1              4          1000
802          Ford 2              4          1000
803          Ford 3              4          1000
804          Ford 4              4          1000
805          Ford 5              4          1000
806          Ford 6              4          1000
807          Ford 7              4          1000
808          Ford 8              4          1000
814          Ford                4          1000
809          Honda               4          2000
2100         Honda T 3rd Party   4          210
EXPECTED OUTPUT:
DESCRIPTION         ITEM_GROUP
Ford                Ford
Honda Own           Honda Own
Honda 3rd Party     Honda 3rd Party
Ford                Ford
Honda T Own         Honda Own
Ford 1              Ford
Ford 2              Ford
Ford 3              Ford
Ford 4              Ford
Ford 5              Ford
Ford 6              Ford
Ford 7              Ford
Ford 8              Ford
Ford                Ford
Honda               Honda Own
Honda T 3rd Party   Honda 3rd Party
You can use a recursive CTE: the anchor member selects the top-level rows (those with ITEM_ABOVE = 'CAR') and seeds ITEM_GROUP with their own DESCRIPTION; the recursive member then joins each row to its parent via ITEM_ABOVE, carrying the top-level ITEM_GROUP down the hierarchy:
WITH CTE(ITEM_VALUE, ITEM_ABOVE, DESCRIPTION, ITEM_GROUP) AS
(
SELECT ITEM_VALUE, ITEM_ABOVE, DESCRIPTION, DESCRIPTION AS ITEM_GROUP
FROM mytable
WHERE ITEM_ABOVE = 'CAR'
UNION ALL
SELECT t1.ITEM_VALUE, t1.ITEM_ABOVE, t1.DESCRIPTION, t2.ITEM_GROUP
FROM mytable t1
JOIN CTE t2 ON t1.ITEM_ABOVE = t2.ITEM_VALUE
)
SELECT ITEM_VALUE, ITEM_ABOVE, DESCRIPTION, ITEM_GROUP
FROM CTE
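If you only want the two columns shown in the expected output, project just those from the CTE; and on SQL Server (an assumption here, since the engine isn't stated) a hierarchy deeper than 100 levels also needs the recursion limit raised:
SELECT DESCRIPTION, ITEM_GROUP
FROM CTE
OPTION (MAXRECURSION 1000); -- SQL Server's default limit is 100 recursion levels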
I'm trying to find a difference between two tables, PERSON_PHOTO and VALID_FRIEND. Sample data:
PERSON_PHOTO
ID USERID FNAME
801 uid01 Geroge
801 uid05 George
803 uid01 George
901 uid01 Alice
201 uid01 Alice
330 uid01 Alice
802 uid05 Alice
803 uid05 Alice
804 uid05 Alice
901 uid05 Alice
701 uid05 Alice
201 uid05 Alice
101 uid05 Alice
330 uid05 Alice
501 uid05 Alice
501 uid12 Jane
330 uid12 Jane
101 uid12 Jane
201 uid12 Jane
701 uid12 Jane
801 uid12 Jane
901 uid12 Jane
101 uid07 Mary
101 uid03 Mary
201 uid03 Mary
801 uid03 Mary
901 uid03 Mary
201 uid15 Tom
801 uid15 Tom
Table VALID_FRIEND
FNAME USERID
Bill uid02
George uid01
Mary uid07
Jane uid12
Tom uid15
Alice uid05
Mary uid03
SAMPLE OUTPUT
USERID PHOTOS NOT IN
uid02 0
uid01 5
uid07 9
uid12 3
uid15 8
uid05 8
uid03 6
The query I'm trying to perform should find the number of photos each person is not in, output as a USERID plus that count. I know I need to find the count of the distinct photo IDs in PERSON_PHOTO and take the difference with the count of photos for each USERID. Thanks for any help.
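A sketch of the counting logic you describe, with column names taken from the samples (assuming PERSON_PHOTO.ID identifies a photo and each row means that user appears in it):
SELECT vf.USERID,
       (SELECT COUNT(DISTINCT ID) FROM PERSON_PHOTO)  -- total distinct photos
       - COUNT(DISTINCT pp.ID) AS PHOTOS_NOT_IN       -- minus the photos this user is in
FROM VALID_FRIEND vf
LEFT JOIN PERSON_PHOTO pp ON pp.USERID = vf.USERID
GROUP BY vf.USERID;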