Ok, I am stuck on this one.
I have a PostgreSQL table customers that looks like this:
id  firm1  firm2  firm3  firm4  firm5  lastname  firstname
1   13     8      2      0      0      Smith     John
2   3      2      0      0      0      Doe       Jane
Each row corresponds to a client/customer. Each client/customer can be associated with one or multiple firms; the numeric value under each firm# column corresponds to the firm id in a different table.
So I am looking for a way of returning all rows of customers that are associated with a specific firm.
For example, SELECT id, lastname, firstname WHERE 8 exists in (firm1, firm2, firm3, firm4, firm5) would return just the John Smith row, as he is associated with firm 8 under the firm2 column.
Any ideas on how to accomplish that?
You can use the IN operator for that:
SELECT *
FROM customers
WHERE 8 IN (firm1, firm2, firm3, firm4, firm5);
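This works because value IN (col1, col2, ...) is shorthand for the equivalent OR comparisons:
SELECT id, lastname, firstname
FROM customers
WHERE firm1 = 8 OR firm2 = 8 OR firm3 = 8
   OR firm4 = 8 OR firm5 = 8;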
But it would be much better in the long run if you normalized your data model.
You should consider normalizing your tables; with the current schema you have to join the firms table as many times as there are firm fields in your customers table.
select *
from customers c
left join firms f1
on f1.firm_id = c.firm1
left join firms f2
on f2.firm_id = c.firm2
left join firms f3
on f3.firm_id = c.firm3
left join firms f4
on f4.firm_id = c.firm4
left join firms f5
on f5.firm_id = c.firm5
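For reference, the normalized design both answers are hinting at replaces the firm1..firm5 columns with a junction table. A sketch: customer_firms is a hypothetical name, and it assumes firms has a unique firm_id key.
create table customer_firms (
    customer_id integer references customers(id),
    firm_id     integer references firms(firm_id),
    primary key (customer_id, firm_id)
);

-- finding everyone associated with firm 8 then needs no repeated joins:
select c.id, c.lastname, c.firstname
from customers c
join customer_firms cf on cf.customer_id = c.id
where cf.firm_id = 8;
This also removes the artificial five-firm limit and the need for 0 as a "no firm" placeholder.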
You can "unpivot" using a combination of array and unnest, as specified in this answer: unpivot and PostgreSQL.
In your case, I think this should work:
select lastname,
       firstname,
       unnest(array[firm1, firm2, firm3, firm4, firm5]) as firm_id
from customers
Now you can select from this result (using either a WITH statement or an inner query) where firm_id is the value you care about.
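For example, wrapped in a CTE (a minimal sketch using the table and column names from the question; unpivoted is just an arbitrary name):
with unpivoted as (
    select id, lastname, firstname,
           unnest(array[firm1, firm2, firm3, firm4, firm5]) as firm_id
    from customers
)
select id, lastname, firstname
from unpivoted
where firm_id = 8;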
I'm doing a query in BigQuery:
SELECT id FROM [table] WHERE city = 'New York City' GROUP BY id
The weird part is it shows duplicate ids, often right next to each other.
There is absolutely nothing different between the ids themselves. There are around 3 million rows total, for ~500k IDs. So there are a lot of duplicates, but that is by design. We figured the filtering would easily eliminate that but noticed discrepancies in totals.
Is there a reason BigQuery's GROUP BY function would work improperly? For what it's worth, the dataset has ~3 million rows.
Example of duplicate ID:
56abdb5b9a75d90003001df6
56abdb5b9a75d90003001df6
The only explanation is that your id is a STRING, and in reality those two ids are different because of spaces before or, most likely, after what is "visible" to the eye.
I recommend adjusting your query like below:
SELECT REPLACE(id, ' ', '')
FROM [table]
WHERE city = 'New York City'
GROUP BY 1
Another option for troubleshooting would be:
SELECT id, LENGTH(id)
FROM [table]
WHERE city = 'New York City'
GROUP BY 1, 2
This lets you see whether those ids really are the same length or not. My initial assumption was a space, but it could be any other character(s), including non-printable ones.
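If the stray characters do turn out to be ordinary leading/trailing whitespace, TRIM is a more surgical fix than REPLACE, since it leaves interior spaces intact. A sketch in BigQuery standard SQL (in the legacy dialect used above, LTRIM/RTRIM serve the same purpose; replace the table reference with your own):
SELECT TRIM(id) AS id
FROM `project.dataset.table`
WHERE city = 'New York City'
GROUP BY 1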
Two tables:
Parts Table:
Part_Number  Load_Date  TQTY
m-123        19940102   32
1234Cf       20010809   3
wf9-2        20160421   14
Locations Table:
PartNo  Condition  Location  QTY
m-123   U          A02       2
1234Cf  S          A02       3
m-123   U          B01       1
wf9-2   S          A06       7
m-123   S          A18       29
wf9-2   U          F16       7
Result:
Part_Number  Load_Date  TQTY  U_LOC    UQTY  S_LOC  SQTY
m-123        19940102   32    A02,B01  3     A18    29
1234Cf       20010809   3                    A02    3
wf9-2        20160421   14    F16      7     A06    7
I am having trouble finding a solution to this with my current DB2 version. I am not completely sure how to find the version, but it is running on an AS400 system, and it seems the version of DB2 is tied to the OS version, which the box reports as: Operating system: i5/OS, Version: V5R4M0.
(I tried some commands to get the DB2 version using these suggestions here, but none of them worked, as most of the answers warned.)
Regarding concatenating multiple rows of column data into one row, I have come across many articles saying to use XMLAGG or XMLSERIALIZE (here and here), but I get an error stating the function is not recognized.
Not sure where to go from here, as there seem to be solutions, but I can't get those already suggested functions to work.
EDIT:
Using the accepted answer and explanation, as well as the example HERE, I got a basic idea of recursion with a simple example, and it was the "SELECT rownumber() over(partition by category)" statements HERE that really helped pull it all together, once I understood that statement, of course.
I also learned to make sure the data used in the recursion is narrowed down as much as possible, and then joined up with the extra data later. This makes for exponentially faster results. That seems pretty obvious, but while trying to figure all of this out it wasn't, and my query was pretty slow; once I better understood what was actually happening, it was easy to make adjustments for really fast results.
This is rather complicated, so I will show all my work:
Table definitions
create table parts
(part_number Varchar(64),
load_date Date,
total_qty Dec(5,0));
create table locations
(part_number Varchar(64),
condition Char(1),
location Char(3),
qty Dec(5,0));
insert into parts
values ('m-123', '1994-01-02', 32),
('1234Cf', '2001-08-09', 3),
('wf9-2', '2016-04-21', 14);
insert into locations
values ('m-123', 'U', 'A02', 2),
('1234Cf', 'S', 'A02', 3),
('m-123', 'U', 'B01', 1),
('wf9-2', 'S', 'A06', 7),
('m-123', 'S', 'A18', 29),
('wf9-2', 'U', 'F16', 7);
The query:
with -- CTEs
-- This collects locations into a comma-separated list
tmp (part_number, condition, location, csv, level) as (
select part_number, condition, min(location),
cast(min(location) as varchar(128)), 1
from locations
group by part_number, condition
union all
select a.part_number, a.condition, b.location,
a.csv || ',' || b.location, a.level + 1
from tmp a
join locations b using (part_number, condition)
where a.csv not like '%' || b.location || '%'
and b.location > a.location),
-- This chooses the correct csv list, and adds quantity for the condition
tmp2 (part_number, condition, csv, qty) as (
select t.part_number, t.condition, t.csv,
(select sum(qty) qty
from locations
where part_number = t.part_number
and condition = t.condition)
from tmp t
where level = (select max(level)
from tmp
where part_number = t.part_number
and condition = t.condition))
-- This is the final select that combines the parts file with
-- the second stage CTE and arranges things horizontally by condition
select p.part_number, p.load_date,
(select sum(qty)
from locations
where part_number = p.part_number) as total_qty,
coalesce(u.csv, '') as u_loc,
coalesce(u.qty, 0) as uqty,
coalesce(s.csv, '') as s_loc,
coalesce(s.qty, 0) as sqty
from parts p
left outer join tmp2 u
on u.part_number = p.part_number and u.condition = 'U'
left outer join tmp2 s
on s.part_number = p.part_number and s.condition = 'S'
order by p.load_date;
EDIT: I have had to add some extra bits in here to support more than two locations for a part/condition, and I have made the column naming in the CTEs more consistent.
Ok, so let me explain this a bit. There are three parts to this query, two CTEs and the final select; you can see the three parts separated by comments.
The first CTE is a recursive CTE. Its purpose is to produce the comma-separated location list. You should be able to run its select by itself to see just what it does. tmp is the table name; part_number, condition, location, csv, and level are the column names. A recursive CTE needs a SELECT to prime the CTE and a UNION ALL with a SELECT that fills in the next details. In this case the priming SELECT retrieves a part number, a condition, and the first location (alphabetically) for that combination. level is set to 1. If you run just the priming select, you will get:
part_number  condition  location  csv  level
-----------  ---------  --------  ---  -----
1234Cf       S          A02       A02  1
m-123        S          A18       A18  1
m-123        U          A02       A02  1
wf9-2        U          F16       F16  1
wf9-2        S          A06       A06  1
Note one line per part/condition. The remainder of the recursive CTE will fill in the remaining locations in csv, but it will actually add additional records, so we need to filter the results here and later. Records are processed as they are added: the rows listed above are joined with the locations file on part_number and condition. Note that in the priming select I cast the second min(location) to a varchar(128). This leaves room for the csv column to expand; without the cast, it will still expand, but not enough to hold more than 2 locations.
The second select in the recursive CTE concatenates a comma and the next location to the end of csv. The specific bit that does this is a.csv || ',' || b.location. It also increments the level column, which helps us keep track of where we are in the query; eventually, the row with the highest level is the one we want to use. We also need a way to end the recursion, and some filters to reduce the number of rows added to the temporary result set.
If we have 2 locations, A02 and B02, left unchecked we will get the following rows: A02; A02,A02; A02,B02; A02,A02,A02; A02,B02,A02; A02,A02,B02; A02,B02,B02; ... ad infinitum. The anti-duplication filter where a.csv not like '%' || b.location || '%' is sufficient for two locations to end the recursion and minimize rows: for locations A02 and B02 it yields just the rows A02 and A02,B02, and none of the duplicated-location results above are returned.
Adding a third location C02 yields, even with the anti-duplication filter, the rows A02; A02,B02; A02,C02; A02,B02,C02; A02,C02,B02. No duplicates here, but we do have redundant rows, and as you add locations it gets worse. This is where we need a way to detect the redundant rows. Since we are starting with the lowest location number, we can always make sure that locations added to csv are greater than the previously added location. To do that, all we need is a column in the result set that indicates which location was added last (we could interrogate csv, but that is harder); this is why we need the location column in tmp. Then we can write the filter b.location > a.location. In the three-location example above, this filter prevents the row A02,C02,B02, leaving just a single row with all three locations. Adding more than three locations to the locations file will still cause the number of rows in tmp to expand, but for each part and condition there will be only one row with all the locations, and it will contain them in ascending order.
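As a concrete trace using the sample data: part m-123 with condition U has locations A02 and B01, so tmp accumulates
level  location  csv
-----  --------  -------
1      A02       A02
2      B01       A02,B01
and only the level 2 row survives the filtering in the second CTE.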
The second CTE does two things. First, it filters TMP to drop all but the rows containing all locations for a given part/condition. Second, it accumulates the total quantity for each part/condition.
The bit that performs the filtering is in the where clause:
where level = (select max(level)
from tmp
where part_number = t.part_number
and condition = t.condition))
Pretty straightforward. The bit that accumulates the total quantity for a part/condition is also an easy-to-understand sub-query:
(select sum(qty) qty
from locations
where part_number = t.part_number
and condition = t.condition)
The final piece of this monster query is the main select. It joins the parts file with the results of the second CTE to form the ultimate result set:
select p.part_number, p.load_date,
(select sum(qty) from locations where part_number = p.part_number) as total_qty,
coalesce(u.csv, '') as u_loc, coalesce(u.qty, 0) as uqty,
coalesce(s.csv, '') as s_loc, coalesce(s.qty, 0) as sqty
from parts p
left outer join tmp2 u
on u.part_number = p.part_number and u.condition = 'U'
left outer join tmp2 s
on s.part_number = p.part_number and s.condition = 'S'
order by p.load_date
Of note here: the subquery retrieves the total quantity from the locations table; you could use the tqty field in parts, but that can get out of sync with the actual quantities in the locations table. In addition, there are two left outer joins with tmp2, one for condition U and another for condition S; these construct the horizontal array of location/quantity in the result row. The last thing is the coalesce functions, which give null values (when a result from an outer join is missing) a default value.
End of EDIT
The final result is:
part_number  load_date   tqty  u_loc    uqty  s_loc  sqty
-----------  ----------  ----  -------  ----  -----  ----
m-123        1994-01-02  32    A02,B01  3     A18    29
1234Cf       2001-08-09  3              0     A02    3
wf9-2        2016-04-21  14    F16      7     A06    7
Note: XMLAGG and XMLSERIALIZE became available in DB2 for i 7.1, and LISTAGG became available in DB2 for i 7.2. The most recent version as of 8/9/2017 is 7.3. As you are on V5R4, it is likely you will need not only a software but also a hardware upgrade to get current.
No idea what the rules are for UQTY, S_LOC, and SQTY, but here is the column you asked about:
SELECT
P.Part_Number,
P.Load_Date,
P.TQTY,
LISTAGG(L.Location, ', ') WITHIN GROUP (ORDER BY L.Location) AS U_LOC
FROM "Parts Table" AS P
LEFT JOIN "Locations Table" AS L ON P.Part_Number = L.Part_Number
GROUP BY P.Part_Number, P.Load_Date, P.TQTY
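For completeness, on a version that does support LISTAGG (7.2 or later), conditional aggregation could fill in the per-condition columns in a single pass. A sketch against the parts/locations tables defined in the accepted answer; it relies on LISTAGG ignoring the NULLs that the CASE expressions produce for the other condition:
SELECT p.part_number,
       p.load_date,
       p.total_qty,
       LISTAGG(CASE WHEN l.condition = 'U' THEN l.location END, ',')
           WITHIN GROUP (ORDER BY l.location) AS u_loc,
       SUM(CASE WHEN l.condition = 'U' THEN l.qty ELSE 0 END) AS uqty,
       LISTAGG(CASE WHEN l.condition = 'S' THEN l.location END, ',')
           WITHIN GROUP (ORDER BY l.location) AS s_loc,
       SUM(CASE WHEN l.condition = 'S' THEN l.qty ELSE 0 END) AS sqty
FROM parts p
LEFT JOIN locations l
    ON l.part_number = p.part_number
GROUP BY p.part_number, p.load_date, p.total_qty
ORDER BY p.load_date;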
Let's say I have the following data in the Employee table (nothing more):
ID  FirstName  LastName   x
--  ---------  ---------  ----------------------------------------
20  John       Mackenzie  <A>te</A><b>wq</b><a>342</a><d>rt21</d>
21  Ted        Green      <A>re</A><b>es</b><1>t34w</1><4>65z</4>
22  Marcy      Nate       <A>ds</A><b>tf</b><3>fv 34</3><6>65aa</6>
I need to search in the x column and get the highest number inside the <> brackets.
What sort of SELECT statement can get me, for example, the number 6 (as in <6>) from the x column?
This type of query generally works by finding patterns. In your sample data the numeric tag sits at a fixed offset from the end of the string, so the query below reads the single character at position LEN(X)-9, i.e. nine characters back from the end.
Please note that if the pattern changes, the below query will not work.
SELECT A.*
FROM YOURTABLE A
INNER JOIN
    (SELECT TOP 1 ID, Firstname, Lastname,
            SUBSTRING(X, LEN(X) - 9, 1) AS [ORDER]
     FROM YOURTABLE
     WHERE ISNUMERIC(SUBSTRING(X, LEN(X) - 9, 1)) = 1
     -- DESC so that TOP 1 picks the row with the highest digit
     ORDER BY SUBSTRING(X, LEN(X) - 9, 1) DESC) B
ON
    A.ID = B.ID AND
    A.FIRSTNAME = B.FIRSTNAME AND
    A.LASTNAME = B.LASTNAME
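If the tag positions are not fixed, a position-independent alternative is to probe for each possible single-digit tag and take the largest match. A sketch in T-SQL; it assumes, as in the sample data, that the numeric tag names are single digits, and rows with no numeric tag at all simply drop out of the join:
SELECT E.ID, E.FirstName, E.LastName, MAX(D.n) AS HighestTag
FROM Employee AS E
JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) AS D(n)
    ON CHARINDEX('<' + CAST(D.n AS varchar(1)) + '>', E.x) > 0
GROUP BY E.ID, E.FirstName, E.LastName;
Dropping the GROUP BY and selecting only MAX(D.n) returns the single highest tag in the whole table, i.e. 6 for the sample rows.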
I am using Oracle and I have a table with 1000 rows. There is a last name field, and I want to know the lengths of the name values, but I don't want one per row; I want a count of the various lengths.
Example:
lastname:
smith
smith
Johnson
Johnson
Jackson
Baggins
There are two Smiths with a length of five and four others with a length of seven. I want my query to return:
7
5
If there were 1,000 names I expect to get all kinds of lengths.
I tried,
Select count(*) as total, lastname from myNames group by total
It didn't know what total was. And grouping by lastname just groups each distinct name together, which is as expected but not what I need.
Can this be done in one SQL query?
SELECT Length(lastname)
FROM MyTable
GROUP BY Length(lastname)
select distinct LENGTH(lastname) from mynames;
Select count(*), Length(column_name) from table_name group by Length(column_name);
This will work for the different lengths in a single column.
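If you also want the lengths ordered as in the example output (7 before 5), add an ORDER BY; a sketch using the myNames table from the question:
SELECT LENGTH(lastname) AS name_length, COUNT(*) AS total
FROM myNames
GROUP BY LENGTH(lastname)
ORDER BY name_length DESC;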