Need unique column in Union View - sql

I have a training application in which I need a union of two tables as a view in SQL Server. My company has a list of skills that are coded. Drilling is code 10, sanding is code 30, and so on. Employees are measured against these skill codes in our factory.
To illustrate, let's pick "Bob". Bob is a Metal Finisher who maintains several skills in the EmplSkills table.
Secondly, in the JobSkills table are the skill requirements for "Metal Finisher".
By comparing the two tables you can see that Bob's skills don't exactly match the requirements for his position as Metal Finisher ...but they're pretty close.
However, from an AS9100 Quality Audit standpoint, I need to know exactly how close. I need all of Bob's skills that match the job requirements UNION'ed with all of Bob's extra skills that are not in the job scope but are nice-to-haves UNION'ed with all of Bob's missing skills that are extremely important to get trained up right away.
So I'm creating the union view on the two tables with the SQL code below. In the results you can see that Bob can cut and deburr metal (that's nice), but Bob has no sanding or scraping skills in his job position, and that's very bad.
(SELECT a bunch of columns..., row_number() over (order by t1.ref_no) as RowNum
FROM emplskills as t1
LEFT OUTER JOIN jobskills as t2 on t1.skill_ID = t2.skill_ID
WHERE t2.skill_ID is null)
UNION
(SELECT the same bunch of columns..., row_number() over (order by t1.ref_no) as RowNum
FROM jobskills as t2
LEFT OUTER JOIN emplskills as t1 on t2.skill_ID = t1.skill_ID
WHERE t1.skill_ID is null)
UNION
(SELECT the same bunch of columns..., row_number() over (order by t1.ref_no) as RowNum
FROM emplskills as t1, jobskills AS t2
WHERE t1.skill_ID=t2.skill_ID);
I'm using an application environment (Catavolt) in which every data table needs at least one unique column that it uses remotely as a virtual "primary key" for its own housekeeping. So I need to invent one column of unique values, and I can't figure out how to do this. I tried row_number() over (order by ...). I also tried combining the two unique ref_no columns from each source table:
SELECT ... , ((t1.ref_no*100000)+t2.ref_no) as UniqueKey
But the NULLs in the results blow out the math and I lose the uniqueness.
Is there something else I can add instead? It is of no consequence where the values come from, what they mean, or how big they are as long as they are unique.
TIA,
John
EDIT: the RowNum suggestion from Tab, based on row_number() over (order by t1.ref_no):

Since it does sound like you really do just need to add an arbitrary unique column to your existing results, it's simply this:
SELECT *, ROW_NUMBER() OVER (ORDER BY {any column, really, it doesn't matter}) rn
FROM (
{Your existing query}
) t
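Applied to the union above, a minimal sketch (the select list is cut down to skill_ID, and the status label is invented here purely for illustration; since the key only exists for housekeeping, the ORDER BY column really is arbitrary):

SELECT u.*,
       ROW_NUMBER() OVER (ORDER BY u.skill_ID) AS UniqueKey
FROM (
    SELECT t1.skill_ID, 'extra' AS status    -- Bob has it; the job doesn't require it
    FROM emplskills AS t1
    LEFT OUTER JOIN jobskills AS t2 ON t1.skill_ID = t2.skill_ID
    WHERE t2.skill_ID IS NULL
    UNION
    SELECT t2.skill_ID, 'missing'            -- the job requires it; Bob lacks it
    FROM jobskills AS t2
    LEFT OUTER JOIN emplskills AS t1 ON t2.skill_ID = t1.skill_ID
    WHERE t1.skill_ID IS NULL
    UNION
    SELECT t1.skill_ID, 'matched'            -- Bob has it and the job requires it
    FROM emplskills AS t1
    INNER JOIN jobskills AS t2 ON t1.skill_ID = t2.skill_ID
) u;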

Related

Efficient approach to get two-dimensional data using SQL

For the sake of example, let's say I have the following models:
teams
each team has an arbitrary number of fans
In SQL, this means you end up with the following tables:
team: identifier, name
fan: identifier, name
team_fan: team_identifier, fan_identifier
I am looking for an approach to retrieve:
all teams, and
for each team, the first 5 fans whose names start with an 'A'.
What is an efficient approach to do this?
In my current naive approach, I do <# teams> + 1 queries, which is troublesome:
First: SELECT * FROM team
Then, for each team with identifier X:
SELECT *
FROM fan
INNER JOIN team_fan
ON fan.identifier = team_fan.fan_identifier AND team_fan.team_identifier = X
WHERE fan.name LIKE 'A%'
ORDER BY fan.name LIMIT 5
There should be a better way to do this.
I could first retrieve all teams, as I do now, and then do something like:
SELECT *
FROM fan
WHERE fan.name LIKE 'A%'
AND fan.identifier IN (
SELECT fan_identifier
FROM team_fan
WHERE team_identifier IN (<all team identifiers from first query>))
ORDER BY fan.name
However, this approach ignores the requirement that I need the first 5 fans for each team with his/her name starting with an 'A'. Just adding LIMIT 5 to the query above is not correct.
Also, with this approach, if I have a large amount of teams, I am sending the corresponding team identifiers back to the database in the second query (for the IN (<all team identifiers from first query>)), which might kill performance?
I am developing against PostgreSQL, Java, Spring and plain JDBC.
You need a three-table join:
SELECT team.*, fan.*
FROM team
JOIN team_fan
ON team.team_identifier = team_fan.team_identifier
JOIN fan
ON fan.fan_identifier = team_fan.fan_identifier
Now, to filter down to the first 5 fans per team, you need to do this:
with cte as (
    select team.*, fan.*,
           row_number() over (partition by team.team_identifier
                              order by fan.name) as rn
    from team
    join team_fan
        on team.team_identifier = team_fan.team_identifier
    join fan
        on fan.fan_identifier = team_fan.fan_identifier
    where fan.name like 'A%'
)
select *
from cte
where rn <= 5
Usually, RDBMSes have their own extensions to standard SQL that let you number rows within some grouping/ordering. Postgres is no exception: it has the ROW_NUMBER() window function. What you need is to partition your row numbers by team, order them alphabetically by fan name, and restrict the query to row numbers < 6.
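For completeness (this is not from the answer above): on PostgreSQL 9.3+ the same top-5-per-team result is often written with a LATERAL join, which runs the limited subquery once per team. A sketch, assuming the exact tables from the question:

SELECT t.*, f.*
FROM team t
LEFT JOIN LATERAL (
    SELECT fan.*
    FROM team_fan tf
    JOIN fan ON fan.identifier = tf.fan_identifier
    WHERE tf.team_identifier = t.identifier
      AND fan.name LIKE 'A%'
    ORDER BY fan.name
    LIMIT 5
) f ON true;

The LEFT JOIN ... ON true keeps teams with no matching fans, which mirrors the "all teams" part of the requirement.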

How to get the most frequent value in SQL

I have a table Orders(id_trip, id_order), table Trip(id_hotel, id_bus, id_type_of_trip) and table Hotel(id_hotel, name).
I would like to get the name of the most frequent hotel in table Orders.
SELECT name
FROM (SELECT hotel.name, rank() over (order by cnt desc) rnk
      FROM (SELECT hotel.name, count(*) cnt
            FROM Orders
            JOIN Trip ON Orders.id_trip = Trip.id_hotel
            JOIN hotel ON Trip.id_hotel = hotel.id_hotel
            GROUP BY hotel.name))
WHERE rnk = 1;
The "most frequently occurring value" in a distribution is a distinct concept in statistics, with a technical name. It's called the MODE of the distribution. And Oracle has the STATS_MODE() function for it. https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions154.htm
For example, using the EMP table in the standard SCOTT schema, select stats_mode(deptno) from scott.emp will return 30 - the number of the department with the most employees. (30 is the department "name" or number, it is NOT the number of employees in that department!)
In your case:
select stats_mode(h.name) from (the rest of your query)
Note: if two or more hotels are tied for "most frequent", then STATS_MODE() will return one of them (non-deterministic). If you need all the tied values, you will need a different solution - a good example is in the documentation (linked above). This is a documented flaw in Oracle's understanding and implementation of the statistical concept.
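If you do need all the tied hotels, a minimal sketch of that different solution (the rank() route, which a later answer also hints at; the id_trip join column is an assumption, since the question's Trip table doesn't list its key):

SELECT name
FROM (SELECT h.name,
             RANK() OVER (ORDER BY COUNT(*) DESC) AS rnk
      FROM Orders o
      JOIN Trip t ON o.id_trip = t.id_trip
      JOIN Hotel h ON t.id_hotel = h.id_hotel
      GROUP BY h.name)
WHERE rnk = 1;

RANK() gives every hotel tied for the highest count the rank 1, so all co-modes come back.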
Use FIRST for a single result:
SELECT MAX(t.name) KEEP (DENSE_RANK FIRST ORDER BY t.cnt DESC)
FROM (
SELECT hotel.name, COUNT(*) cnt
FROM orders
JOIN trip USING (id_trip)
JOIN hotel USING (id_hotel)
GROUP BY hotel.name
) t
Here is one method:
select name
from (select h.name,
             row_number() over (order by count(*) desc) as seqnum -- use `rank()` if you want duplicates
      from orders o join
           trip t
           on o.id_trip = t.id_trip join -- this seems like the right join condition
           hotel h
           on t.id_hotel = h.id_hotel
      group by h.name
     ) oth
where seqnum = 1;
Getting the most recent statistical mode out of a data sample
I know it's been more than a year, but here's my answer. I came across this question hoping to find a simpler solution than what I know, but alas, nope.
I had a similar situation where I needed to get the mode from a data sample, with the requirement to get the mode of the most recently inserted value if there were multiple modes.
In such a case neither the STATS_MODE nor the LAST aggregate functions would do (as they would tend to return the first mode found, not necessarily the mode with the most recent entries.)
In my case it was easy to use the ROWNUM pseudo-column because the tables in question were performance metric tables that only experienced inserts (not updates).
In this oversimplified example, I'm using ROWNUM - it could easily be changed to a timestamp or sequence field if you have one.
SELECT VALUE
FROM
  (SELECT VALUE,
          COUNT(*) CNT,
          MAX(R) R
   FROM
     (SELECT VALUE, ROWNUM R FROM FOO)
   GROUP BY VALUE
   ORDER BY CNT DESC,
            R DESC)
WHERE ROWNUM < 2;
That is, get the total count and max ROWNUM for each value (I'm assuming the values are discrete. If they aren't, this ain't gonna work.)
Then sort so that the ones with largest counts come first, and for those with the same count, the one with the largest ROWNUM (indicating most recent insertion in my case).
Then skim off the top row.
Your specific data model should have a way to discern the most recent (or the oldest or whatever) rows inserted in your table, and if there are collisions, then there's not much of a way other than using ROWNUM or getting a random sample of size 1.
If this doesn't work for your specific case, you'll have to create your own custom aggregator.
Now, if you don't care which mode Oracle is going to pick (your business case just requires a mode and that's it), then STATS_MODE will do fine.

SQL JOIN returning multiple rows when I only want one row

I am having a slow brain day...
The tables I am joining:
Policy_Office:
PolicyNumber OfficeCode
1 A
2 B
3 C
4 D
5 A
Office_Info:
OfficeCode AgentCode OfficeName
A 123 Acme
A 456 Acme
A 789 Acme
B 111 Ace
B 222 Ace
B 333 Ace
... ... ....
I want to perform a search to return all policies that are affiliated with an office name. For example, if I search for "Acme", I should get two policies: 1 & 5.
My current query looks like this:
SELECT
*
FROM
Policy_Office P
INNER JOIN Office_Info O ON P.OfficeCode = O.OfficeCode
WHERE
O.OfficeName = 'Acme'
But this query returns multiple rows, which I know is because there are multiple matches from the second table.
How do I write the query to only return two rows?
SELECT DISTINCT a.PolicyNumber
FROM Policy_Office a
INNER JOIN Office_Info b
ON a.OfficeCode = b.OfficeCode
WHERE b.officeName = 'Acme'
SQLFiddle Demo
To gain more knowledge about joins, kindly visit the link below:
Visual Representation of SQL Joins
A plain join returns the Cartesian product of the matching rows: you have two 'A' rows in the first table and three 'A' rows in the second, so you get 6 results. If you want only the policy number, you should do a DISTINCT on it.
(using MS-Sqlserver)
I know this thread is 10 years old, but I don't like distinct (in my head it means that the engine gathers all possible data, computes every selected row in each record into a hash and adds it to a tree ordered by that hash; I may be wrong, but it seems inefficient).
Instead, I use CTE and the function row_number(). The solution may very well be a much slower approach, but it's pretty, easy to maintain and I like it:
Given is a person table and a telephone table tied together with a foreign key (in the telephone table). This construct means that a person can have several numbers, but I only want the first, so that each person appears only once in the result set (I ought to be able to concatenate multiple telephone numbers into one string (pivot, I think), but that's another issue; see the sketch after this answer).
; -- don't forget this one!
with telephonenumbers
as
(
    select [id]
         , [person_id]
         , [number]
         , row_number() over (partition by [person_id] order by [activestart] desc) as rowno
    from [dbo].[telephone]
    where [activeuntil] is null or [activeuntil] > getdate()
)
select p.[id]
     , p.[name]
     , t.[number]
from [dbo].[person] p
left join telephonenumbers t on t.person_id = p.id
                            and t.rowno = 1
This does the trick (in fact the last line does), and the syntax is readable and easy to expand. The example is simple, but when creating large scripts that join tables left and right (literally), it is difficult to avoid unwanted duplicates in the result, and difficult to identify which tables create them. CTEs work great for me.
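On the concatenation side issue mentioned above: a minimal sketch, not part of the original answer, assuming SQL Server 2017+ (for STRING_AGG) and the same person/telephone tables:

select p.[id]
     , p.[name]
     , string_agg(t.[number], ', ') within group (order by t.[activestart] desc) as numbers
from [dbo].[person] p
left join [dbo].[telephone] t
    on t.person_id = p.id
    and (t.[activeuntil] is null or t.[activeuntil] > getdate())
group by p.[id], p.[name];

Each person still appears exactly once; the active numbers are collapsed into one comma-separated string instead of being filtered down to the first.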

Select query with join in huge table taking over 7 hours

Our system is facing performance issues selecting rows out of a 38-million-row table.
This table with 38 million rows stores information from clients/suppliers etc. These appear across many other tables, such as Invoices.
The main problem is that our database is far from normalized. The Clients_Suppliers table has a composite key made of 3 columns: Code, a varchar2(16); Category, a char(2); and up_date, a date. Every change in a client's address is stored in that same table with a new date. So we can have records such as this:
code ca up_date
---------------- -- --------
1234567890123456 CL 01/01/09
1234567890123456 CL 01/01/10
1234567890123456 CL 01/01/11
1234567890123456 CL 01/01/12
6543210987654321 SU 01/01/10
6543210987654321 SU 08/03/11
Worse, every table that uses a client's information stores only the code and category instead of the full composite key. Invoices, for instance, has its own keys, including the emission date. So we can have something like this:
invoice_no serial_no emission code ca
---------- --------- -------- ---------------- --
1234567890 12345 05/02/12 1234567890123456 CL
My specific problem is that I have to generate a list of clients for which invoices were created in a given period. Since I have to get the most recent info about the clients, I have to use max(up_date).
So here's my query (in Oracle):
SELECT
CL.CODE,
CL.CATEGORY,
-- other address fields
FROM
    CLIENTS_SUPPLIERS CL,
    INVOICES I
WHERE
CL.CODE = I.CODE AND
CL.CATEGORY = I.CATEGORY AND
CL.UP_DATE =
(SELECT
MAX(CL2.UP_DATE)
FROM
CLIENTS_SUPPLIERS CL2
WHERE
CL2.CODE = I.CODE AND
CL2.CATEGORY = I.CATEGORY AND
CL2.UP_DATE <= I.EMISSION
) AND
I.EMISSION BETWEEN DATE1 AND DATE2
It takes up to seven hours to select 178,000 rows. Invoices has 300,000 rows between DATE1 and DATE2.
It's a (very, very, very) bad design, and I've raised the fact that we should improve it by normalizing the tables. That would involve creating a table for clients with a new int primary key for each pair of code/category and another one for Addresses (with the client primary key as a foreign key), then using the Addresses' primary key in each table that relates to clients.
But it would mean changing the whole system, so my suggestion has been shunned. I need to find a different way of improving performance (apparently using only SQL).
I've tried indexes, views, temporary tables but none have had any significant improvement on performance. I'm out of ideas, does anyone have a solution for this?
Thanks in advance!
What does the DBA have to say?
Has he/she tried:
Coalescing the tablespaces
Increasing the parallel query slaves
Moving indexes to a separate tablespace on a separate physical disk
Gathering stats on the relevant tables/indexes
Running an explain plan
Running the query through the index optimiser
I'm not saying the SQL is perfect, but if performance is degrading over time, the DBA really needs to be having a look at it. The stats and explain-plan items from that list are sketched below.
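For reference, the stats-gathering and explain-plan steps might look like this (a sketch, assuming the current schema owns the tables; DBMS_STATS and DBMS_XPLAN are the standard Oracle packages):

-- Gather fresh optimizer statistics on the two tables involved:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'CLIENTS_SUPPLIERS', cascade => TRUE);
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'INVOICES', cascade => TRUE);
END;
/

-- Capture and display the execution plan of the slow query
-- (placeholder dates stand in for DATE1/DATE2):
EXPLAIN PLAN FOR
SELECT cl.code, cl.category
FROM clients_suppliers cl, invoices i
WHERE cl.code = i.code
  AND cl.category = i.category
  AND i.emission BETWEEN DATE '2012-01-01' AND DATE '2012-12-31';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);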
SELECT
    CL2.CODE,
    CL2.CATEGORY,
    MAX(CL2.UP_DATE) AS UP_DATE -- join back to CLIENTS_SUPPLIERS on this to pick up the other fields
FROM
    CLIENTS_SUPPLIERS CL2 INNER JOIN (
        SELECT DISTINCT
            CL.CODE,
            CL.CATEGORY,
            I.EMISSION
        FROM
            CLIENTS_SUPPLIERS CL INNER JOIN INVOICES I
                ON CL.CODE = I.CODE AND CL.CATEGORY = I.CATEGORY
        WHERE
            I.EMISSION BETWEEN DATE1 AND DATE2) CL3
        ON CL2.CODE = CL3.CODE AND CL2.CATEGORY = CL3.CATEGORY
WHERE
    CL2.UP_DATE <= CL3.EMISSION
GROUP BY
    CL2.CODE,
    CL2.CATEGORY
The idea is to separate the process: first we tell Oracle to give us the list of clients that have invoices in the period you want, and then we get the last version of them. In your version there's a check against MAX 38,000,000 times, which I really think is what cost most of the time spent in the query.
However, I'm not making any claims about indexes; I'm assuming they are correctly set up...
Assuming that the number of rows for a (code, ca) pair is smallish, I would try to force an index scan per invoice with an inline view, such as:
SELECT invoice_id,
       (SELECT MAX(rowid) KEEP (DENSE_RANK FIRST ORDER BY up_date DESC)
          FROM clients_suppliers c
         WHERE c.code = i.code
           AND c.category = i.category
           AND c.up_date < i.invoice_date) AS client_rowid
  FROM invoices i
 WHERE i.invoice_date BETWEEN :p1 AND :p2
You would then join this query to CLIENTS_SUPPLIERS, hopefully triggering a join via rowid (300k rowid reads are negligible).
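Spelling out that join-back (a sketch; invoice_id and invoice_date are the column names this answer assumed, where the question itself says invoice_no and emission):

SELECT x.invoice_id, c.*
  FROM (SELECT i.invoice_id,
               (SELECT MAX(c2.rowid) KEEP (DENSE_RANK FIRST ORDER BY c2.up_date DESC)
                  FROM clients_suppliers c2
                 WHERE c2.code = i.code
                   AND c2.category = i.category
                   AND c2.up_date < i.invoice_date) AS client_rowid
          FROM invoices i
         WHERE i.invoice_date BETWEEN :p1 AND :p2) x
  JOIN clients_suppliers c
    ON c.rowid = x.client_rowid;

Each invoice row carries the rowid of its client's latest applicable version, so the outer step is a direct rowid lookup.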
You could improve the above query by using SQL objects:
CREATE TYPE client_obj AS OBJECT (
    name VARCHAR2(50),
    add1 VARCHAR2(50)
    /* address2, city... */
);
SELECT i.o.name, i.o.add1 /*...*/
FROM (SELECT DISTINCT
             (SELECT client_obj(
                         MAX(name) KEEP (DENSE_RANK FIRST ORDER BY up_date DESC),
                         MAX(add1) KEEP (DENSE_RANK FIRST ORDER BY up_date DESC)
                         /* city... */
                     )
                FROM clients_suppliers c
               WHERE c.code = i.code
                 AND c.category = i.category
                 AND c.up_date < i.invoice_date) o
        FROM invoices i
       WHERE i.invoice_date BETWEEN :p1 AND :p2) i
The correlated subquery may be causing issues, but to me the real problem is in what seems to be your main client table: you cannot easily grab the most recent data without doing the max(up_date) mess. It's really a mix of history and current data and, as you describe, poorly designed.
Anyway, it will help you in this and other long-running joins to have a table/view with ONLY the most recent data for a client. So, first build a mat view for this (untested):
create or replace materialized view recent_clients_view
tablespace my_tablespace
nologging
build deferred
refresh complete on demand
as
select *
from (
    select c.*,
           row_number() over (partition by code, category
                              order by up_date desc, rowid desc) rnum
    from clients_suppliers c
)
where rnum = 1;
Add a unique index on (code, category). The assumption is that this will be refreshed periodically on some off-hours schedule, and that your queries using this will be OK with showing data AS OF the date of the last refresh. In a DW env or for reporting, this is usually the norm.
The snapshot table for this view should be MUCH smaller than the full clients table with all the history.
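The periodic refresh mentioned above might look like this (a sketch, using the standard DBMS_MVIEW package and assuming suitable privileges):

-- 'C' requests a complete refresh, matching the view's "refresh complete on demand":
BEGIN
  DBMS_MVIEW.REFRESH('RECENT_CLIENTS_VIEW', method => 'C');
END;
/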
Now you join invoices to this smaller view with an equijoin on (code, category), filtering emission between date1 and date2. Something like:
select cv.*
from
recent_clients_view cv,
invoices i
where cv.code = i.code
and cv.category = i.category
and i.emission between :date1 and :date2;
Hope that helps.
You might try rewriting the query to use analytic functions rather than a correlated subquery:
select *
from (SELECT CL.CODE, CL.CATEGORY, CL.UP_DATE, -- other address fields
             max(cl.up_date) over (partition by cl.code, cl.category) as max_up_date
      FROM CLIENTS_SUPPLIERS CL join
           INVOICES I
           on CL.CODE = I.CODE AND
              CL.CATEGORY = I.CATEGORY and
              I.EMISSION BETWEEN DATE1 AND DATE2 and
              cl.up_date <= i.emission
     ) t
where t.up_date = t.max_up_date
You might want to remove the max_up_date column in the outside select.
As some have noticed, this query is subtly different from the original, because it is taking the max of up_date over all dates. The original query has the condition:
CL2.UP_DATE <= I.EMISSION
However, by transitivity, this means that:
CL2.UP_DATE <= DATE2
So the only difference is when the max of the update date is less than DATE1 in the original query. However, these rows would be filtered out by the comparison to UP_DATE.
Although this query is phrased slightly differently, I think it does the same thing. I must admit to not being 100% positive, since this is a subtle situation on data that I'm not familiar with.

Best way to randomly select rows *per* column in SQL Server

A search of SO yields many results describing how to select random rows of data from a database table. My requirement is a bit different, though, in that I'd like to select individual columns from across random rows in the most efficient/random/interesting way possible.
To better illustrate: I have a large Customers table, and from that I'd like to generate a bunch of fictitious demo Customer records that aren't real people. I'm thinking of just querying randomly from the Customers table, and then randomly pairing FirstNames with LastNames, Address, City, State, etc.
So if this is my real Customer data (simplified):
FirstName LastName State
==========================
Sally Simpson SD
Will Warren WI
Mike Malone MN
Kelly Kline KS
Then I'd generate several records that look like this:
FirstName LastName State
==========================
Sally Warren MN
Kelly Malone SD
Etc.
My initial approach works, but it lacks the elegance that I'm hoping the final answer will provide. (I'm particularly unhappy with the repetitiveness of the subqueries, and the fact that this solution requires a known/fixed number of fields and therefore isn't reusable.)
SELECT
    FirstName = (SELECT TOP 1 FirstName FROM Customer ORDER BY newid()),
    LastName  = (SELECT TOP 1 LastName FROM Customer ORDER BY newid()),
    State     = (SELECT TOP 1 State FROM Customer ORDER BY newid())
Thanks!
ORDER BY NEWID() works with ROW_NUMBER in SQL Server 2008; not sure about SQL Server 2005.
The ROW_NUMBER is needed to generate values to join the 3 separate queries. It's slightly counterintuitive, because you'd think each CTE would just take the same first 100 rows in a different order, but it doesn't...
;With F AS
(
SELECT TOP 100
FirstName, ROW_NUMBER() OVER (ORDER BY NEWID()) AS Foo
FROM Customer
), L AS
(
SELECT TOP 100
LastName, ROW_NUMBER() OVER (ORDER BY NEWID()) AS Foo
FROM Customer
), S AS
(
SELECT TOP 100
State, ROW_NUMBER() OVER (ORDER BY NEWID()) AS Foo
FROM Customer
)
SELECT
F.FirstName, L.LastName, S.State
FROM
F
JOIN L ON F.Foo = L.Foo
JOIN S ON F.Foo = S.Foo
You could select the top N random rows at once (where N=3 is the number of columns), and then take column 1 from row 1, column 2 from row 2, etc. I'm not sure exactly how to do that last step in SQL, but if you're willing to do the last step in some other language I'm sure it would be simple.
Also, by selecting N rows at once you would have the new property that you would never be selecting two columns from the same row (though this could cause trouble if there are more columns than rows).
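For the last step this answer leaves open in SQL, here is a minimal T-SQL sketch: take N random rows (N = 3 columns here) and pick column k from row k. The CASE/MAX pivot trick is my own assumption, not something from the answer:

;WITH R AS
(
    SELECT TOP 3
           FirstName, LastName, State,
           ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn
    FROM Customer
    ORDER BY rn
)
SELECT
    MAX(CASE WHEN rn = 1 THEN FirstName END) AS FirstName,
    MAX(CASE WHEN rn = 2 THEN LastName END) AS LastName,
    MAX(CASE WHEN rn = 3 THEN State END) AS State
FROM R;

Because each output column comes from a different row, no two columns are ever taken from the same customer, which is exactly the property described above (and the noted more-columns-than-rows caveat still applies).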
It seems to me that you are actually trying to generate random data -- the fact that you already have a bunch that is non-random is really just a side note. If I were in your shoes, I would look at generating random customers by choosing random words from the dictionary to use as FName, LName, City, etc. That seems easier and more random anyway.