SQL pivot with text based fields - sql

Forgive me, but I can't get this working.
I can find lots of complex pivots using numeric values, but nothing basic based on strings to build upon.
Let's suppose this is my source query from a temp table. I can't change this:
select * from #tmpTable
This provides 12 rows:
Row | Name | Code
---------------------------------
1 | July 2019 | 19/20-01
2 | August 2019 | 19/20-02
3 | September 2019 | 19/20-03
.. .. ..
12 | June 2020 | 19/20-12
I want to pivot this and return the data like this:
Data Type | [1] | [2] | [3] | [12]
---------------------------------------------------------------------------
Name | July 2019 | August 2019 | September 2019 | June 2020
Code | 19/20-01 | 19/20-02 | 19/20-03 | 19/20-12
Thanks in advance..

Strings and numbers aren't much different in pivot terms; it's just that you can't use numeric aggregators like SUM or AVG on them. MAX will work fine, and in this case each cell will only have one value, so nothing will be lost.
You need to pull your data out into a taller key/value representation before pivoting it back so it ends up oriented the other way round from how it is now.
First, unpivot the data:
WITH upiv AS (
SELECT 'Name' as t, Row as r, Name as v FROM #tmpTable
UNION ALL
SELECT 'Code' as t, Row, Code FROM #tmpTable
)
Now the data can be regrouped and conditionally aggregated on the r column:
SELECT
t,
MAX(CASE WHEN r = 1 THEN v END) as r1,
MAX(CASE WHEN r = 2 THEN v END) as r2,
...
MAX(CASE WHEN r = 12 THEN v END) as r12
FROM
upiv
GROUP BY
t
You'll need to put the two SQL blocks I present here together so they form a single SQL statement. If you want to know more about how this works, I suggest you run the SQL statement inside the WITH block on its own and look at it, and also remove the GROUP BY/MAX from the full statement and look at that result. You'll see the WITH block query makes the data taller, essentially a key/value pair that tracks what type the data is (Name or Code). When you run the full SQL without the GROUP BY/MAX, you'll see the tall data spread out sideways, giving a lot of NULLs and a diagonal set of cell data (if ordered by r). The GROUP BY collapses all these NULLs, because MAX will pick any value over NULL (and there is only one non-NULL value per cell).
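Put together (with the middle columns elided, following the same pattern), the whole statement looks like this:
WITH upiv AS (
SELECT 'Name' as t, Row as r, Name as v FROM #tmpTable
UNION ALL
SELECT 'Code' as t, Row, Code FROM #tmpTable
)
SELECT
t,
MAX(CASE WHEN r = 1 THEN v END) as r1,
MAX(CASE WHEN r = 2 THEN v END) as r2,
MAX(CASE WHEN r = 3 THEN v END) as r3,
MAX(CASE WHEN r = 12 THEN v END) as r12
FROM
upiv
GROUP BY
t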
You could also do this as an UNPIVOT followed by a PIVOT (a sketch follows below). I've always preferred the form above because not every database supports the UNPIVOT/PIVOT keywords. Arguably UNPIVOT/PIVOT could perform better, because there may be specific optimisations the developers can make (e.g. UNPIVOT can single-scan a table, whereas this multiple-UNION approach may require multiple scans, and the ways round that can be more memory intensive), but in this case it's only 12 rows. I suspect you're using SQL Server, but if you're using a database that doesn't understand WITH, you can place the bracketed statement of the WITH (including the brackets) between the FROM and the upiv to make it a subquery, so the pattern becomes SELECT ... FROM (SELECT ... UNION ALL SELECT ...) upiv GROUP BY ...; there is no difference in the result.
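For reference, a rough UNPIVOT/PIVOT sketch on SQL Server might look like the following (not part of the original answer; note that UNPIVOT requires Name and Code to have exactly the same data type and length, and the middle row numbers are elided as before):
SELECT DataType, [1], [2], [3], [12]
FROM (
SELECT [Row], DataType, Value
FROM #tmpTable
UNPIVOT (Value FOR DataType IN (Name, Code)) AS u
) AS src
PIVOT (MAX(Value) FOR [Row] IN ([1], [2], [3], [12])) AS p;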
I'll leave renaming the output columns as an exercise for you, but I would urge you to consider not putting spaces or square brackets in the column names as you show in your question.

Related

Horizontal and vertical output from same SQL Server table

I am not sure this can be done; I've tried numerous searches but with no real result yet.
I have a SQL Server database with a table where I want to output results from a single table both horizontally and vertically. I realise this will be a complex SQL statement and have managed part of the vertical using a UNION but the horizontal eludes me.
The table has a field called "reference" that contains a string of characters such as "A03ACCEVEN18JS-SN1AA" or "A02ACCVCOM18JS-FN1AA". I want to create an output with a row for the count of references commencing A02, then a row for A03, A04, etc., that also contain "18", and then expand horizontally to count the references with different letters after the hyphen, i.e. "-S", "-F", etc. So the output would look like below:
S_Count | F_Count | J_Count etc
---------------------------------
A02 Row --> 58 | 23 | 16
A03 Row --> 22 | 43 | 53
A04 Row --> 7 | 31 | 23
etc
I managed to get one column so far with multiple WHERE clauses and UNIONs like below, but the horizontal part still eludes me. Can this be done please?
SELECT COUNT(reference) FROM mytable
WHERE reference LIKE 'A02%' AND reference LIKE '%%18%%' AND PATINDEX('%-P%',
reference) <> 0
UNION
SELECT COUNT(reference) FROM mytable
WHERE reference LIKE 'A03%' AND reference LIKE '%%18%%' AND PATINDEX('%-P%',
reference) <> 0
UNION
SELECT COUNT(reference) AS TOTAL FROM mytable
WHERE reference LIKE 'A04%' AND reference LIKE '%%18%%' AND PATINDEX('%-P%',
reference) <> 0;
Let's do it all in one hit :)
SELECT
LEFT(reference, 3) as ao_number,
SUM(CASE WHEN reference LIKE '%-S%' THEN 1 ELSE 0 END) as s_count,
SUM(CASE WHEN reference LIKE '%-F%' THEN 1 ELSE 0 END) as f_count,
SUM(CASE WHEN reference LIKE '%-J%' THEN 1 ELSE 0 END) as j_count
FROM
mytable
WHERE
reference like 'A0%18%'
GROUP BY
LEFT(reference, 3)
Notes:
LEFT(reference, 3) pulls the A0x number off the start. Grouping by this will give us one row per distinct A0x number, so if a thousand variations of A00 to A09 are present, we'll get 10 rows.
You don't need to (and shouldn't) say WHERE reference LIKE 'A02%' AND reference LIKE '%%18%%' etc.; I just combine them into 'A0%18%'. Note that I didn't combine them into 'A03%18%', as that would restrict the data too much. Also, don't double up your percent signs when doing a LIKE.
The SUM performs a count; the CASE WHEN looks at the reference and, if it contains e.g. an -S, returns 1, otherwise 0. Summing these effectively counts the references matching each pattern.
By the way, for future searching purposes, this type of query is called a PIVOT. Most databases have some proprietary syntax to carry out pivoting, but I tend to remember/use this pattern because it's a bit more flexible and is cross-database compatible.
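For comparison, here is a rough sketch of the same query using SQL Server's PIVOT keyword. It is only a sketch: it assumes the letter of interest is always the single character immediately after the hyphen, and it reuses the table and column names from the question.
SELECT ao_number, [S] as s_count, [F] as f_count, [J] as j_count
FROM (
SELECT LEFT(reference, 3) as ao_number,
SUBSTRING(reference, CHARINDEX('-', reference) + 1, 1) as suffix, -- the letter after the hyphen
reference
FROM mytable
WHERE reference LIKE 'A0%18%'
) as src
PIVOT (COUNT(reference) FOR suffix IN ([S], [F], [J])) as p;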

Count particular substring text within column

I have a Hive table, titled 'UK.Choices' with a column, titled 'Fruit', with each row as follows:
AppleBananaAppleOrangeOrangePears
BananaKiwiPlumAppleAppleOrange
KiwiKiwiOrangeGrapesAppleKiwi
etc.
etc.
There are 2.5M rows and the rows are much longer than the above.
I want to count the number of instances that the word 'Apple' appears.
For example above, it is:
Number of 'Apple'= 5
My sql so far is:
select 'Fruit' from UK.Choices
Then in chunks of 300,000 I copy and paste into Excel, where I'm more proficient and able to do this using formulas. The problem is, it takes up to an hour and a half to generate each chunk of 300,000 rows.
Anyone know a quicker way to do this bypassing Excel? I can do simple things like counts using where clauses, but something like the above is a little beyond me right now. Please help.
Thank you.
I think I am 2 years too late. But since I was looking for the same answer and I finally managed to solve it, I thought it was a good idea to post it here.
Here is how I do it.
Solution 1:
+-----------------------------------+---------------------------+-------------+-------------+
| Fruits | Transform 1 | Transform 2 | Final Count |
+-----------------------------------+---------------------------+-------------+-------------+
| AppleBananaAppleOrangeOrangePears | #Banana#OrangeOrangePears | ## | 2 |
| BananaKiwiPlumAppleAppleOrange | BananaKiwiPlum##Orange | ## | 2 |
| KiwiKiwiOrangeGrapesAppleKiwi | KiwiKiwiOrangeGrapes#Kiwi | # | 1 |
+-----------------------------------+---------------------------+-------------+-------------+
Here is the code for it:
SELECT length(regexp_replace(regexp_replace(fruits, "Apple", "#"), "[A-Za-z]", "")) as number_of_apples
FROM fruits;
You may have numbers or other special characters in your fruits column, and you can just modify the second regexp to incorporate that. Just remember that in Hive, to escape a character you may need to use \\ instead of just one \.
Solution 2:
SELECT size(split(fruits,"Apple"))-1 as number_of_apples
FROM fruits;
This first splits the string using "Apple" as the separator, producing an array, and the size function then gives the size of that array. Note that the size of the array is one more than the number of separators, hence the -1.
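To get the single total the question asks for, the same idea can be wrapped in a SUM over all rows; this is a sketch using the table and column names from the question:
-- total occurrences of 'Apple' across the whole column
SELECT SUM(size(split(Fruit, 'Apple')) - 1) AS number_of_apples
FROM UK.Choices;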
This is straightforward if you have a delimiter (e.g. a comma) between the fruit names. The idea is to split the column into an array, and explode the array into multiple rows using the 'explode' function.
SELECT fruit, count(1) as count FROM
( SELECT
explode(split(Fruit, ',')) as fruit
FROM UK.Choices ) X
GROUP BY fruit
From your example, it looks like the fruits are delimited by capital letters. One idea is to split the column based on capital letters, assuming no two fruits share the same suffix.
SELECT fruit_suffix, count(1) as count FROM
( SELECT
explode(split(Fruit, '[A-Z]')) as fruit_suffix
FROM UK.Choices ) X
WHERE fruit_suffix <> ''
GROUP BY fruit_suffix
The downside is that the output will not have the first letter of each fruit:
pple - 5
range - 4
I think you want to run this in one select and use the Hive if UDF to sum the different cases. Something like the following...
select sum( if( fruit like '%Apple%' , 1, 0 ) ) as apple_count,
sum( if( fruit like '%Orange%', 1, 0 ) ) as orange_count
from UK.Choices
where ID > start and ID < end;
instead of the join used in the other answer.
No experience of Hive, I'm afraid, so this may or may not work. But on SQL Server, Oracle, etc. I'd do something like this:
Assuming that you have an int PK called ID on the row, something along the lines of:
select AppleCount, OrangeCount, AppleCount - OrangeCount score
from
(
select count(*) as AppleCount
from UK.Choices
where ID > start and ID < end
and Fruit like '%Apple%'
) a,
(
select count(*) as OrangeCount
from UK.Choices
where ID > start and ID < end
and Fruit like '%Orange%'
) o
I'd leave the division by the total count to the end, when you have all the rows in the spreadsheet and can count them there.
However, I'd urgently ask my boss to let me change the Fruit field to be a table with an FK to Choices and one fruit name per row. Unless this is something you can't do in Hive, this design is something that makes kittens cry.
PS I'd missed that you wanted the count of occurrences of Apple, which this won't do. I'm leaving my answer up, because I reckon that my However... paragraph is actually a good answer. :(

Adding a percent column to MS Access Query

I'm trying to add a column which calculates percentages of different products in MS Access Query. Basically, this is the structure of the query that I'm trying to reach:
Product | Total | Percentage
----------------------------
Prod1   | 15    | 21.13%
Prod2   | 23    | 32.39%
Prod3   | 33    | 46.48%
Product | 71    | 100%
The formula for finding the percent I use is: ([Total Q of a Product]/[Totals of all Products])*100, but when I try to use the expression builder (since my SQL skills are basic) in MS Access to calculate it..
= [CountOfProducts] / Sum([CountOfProducts])
..I receive an error message "Cannot have aggregate function in GROUP BY clause.. (and the expression goes here)". I also tried the option with two queries: one that calculates only the totals and another that uses the first one to calculate the percentages, but the result was the same.
I'll be grateful if someone can help me with this.
You can get all but the last row of your desired output with this query.
SELECT
y.Product,
y.Total,
Format((y.Total/sub.SumOfTotal),'#.##%') AS Percentage
FROM
YourTable AS y,
(
SELECT Sum(Total) AS SumOfTotal
FROM YourTable
) AS sub;
Since that query does not include a JOIN or WHERE condition, it returns a cross join between the table and the single row of the subquery.
If you need the last row from your question example, you can UNION the query with another which returns the fabricated row you want. In this example, I used a custom Dual table which is designed to always contain one and only one row. But you could substitute another table or query which returns a single row.
SELECT
y.Product,
y.Total,
Format((y.Total/sub.SumOfTotal),'#.##%') AS Percentage
FROM
YourTable AS y,
(
SELECT Sum(Total) AS SumOfTotal
FROM YourTable
) AS sub
UNION ALL
SELECT
'Product',
DSum('Total', 'YourTable'),
'100%'
FROM Dual;
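If you don't already have such a Dual table, a minimal sketch of one way to create it in Access SQL follows (the name Dual is just a convention, and keeping it at exactly one row is left to you):
CREATE TABLE Dual (id INTEGER NOT NULL, CONSTRAINT pk_dual PRIMARY KEY (id));
INSERT INTO Dual (id) VALUES (1);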

Explode range of integers out for joining in SQL

I have one table that stores a range of integers in a field, sort of like a print range (e.g. "1-2,4-7,9-11"). This field could also contain a single number.
My goal is to join this table to a second one that has discrete values instead of ranges.
So if table one contains
1-2,5
9-15
7
And table two contains
1
2
3
4
5
6
7
8
9
10
The result of the join would be
1-2,5 1
1-2,5 2
1-2,5 5
7 7
9-15 9
9-15 10
Working in SQL Server 2008 R2.
Use a string split function of your choice to split on comma. Figure out the min/max values and join using between.
SQL Fiddle
MS SQL Server 2012 Schema Setup:
create table T1(Col1 varchar(10))
create table T2(Col2 int)
insert into T1 values
('1-2,5'),
('9-15'),
('7')
insert into T2 values (1),(2),(3),(4),(5),(6),(7),(8),(9),(10)
Query 1:
select T1.Col1,
T2.Col2
from T2
inner join (
select T1.Col1,
cast(left(S.Item, charindex('-', S.Item+'-')-1) as int) MinValue,
cast(stuff(S.Item, 1, charindex('-', S.Item), '') as int) MaxValue
from T1
cross apply dbo.Split(T1.Col1, ',') as S
) as T1
on T2.Col2 between T1.MinValue and T1.MaxValue
Results:
| COL1 | COL2 |
----------------
| 1-2,5 | 1 |
| 1-2,5 | 2 |
| 1-2,5 | 5 |
| 9-15 | 9 |
| 9-15 | 10 |
| 7 | 7 |
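The query above assumes a dbo.Split function that returns an Item column. As a rough sketch (not part of the original answer), an XML-based inline version for SQL Server 2008 R2 could look like this; it assumes the list contains no XML special characters such as & or <:
CREATE FUNCTION dbo.Split (@List varchar(8000), @Delim char(1))
RETURNS TABLE
AS
RETURN
(
-- turn '1-2,5' into '<i>1-2</i><i>5</i>', then shred the XML into one row per Item
SELECT Item = x.i.value('.', 'varchar(8000)')
FROM (SELECT CAST('<i>' + REPLACE(@List, @Delim, '</i><i>') + '</i>' AS xml) AS doc) AS d
CROSS APPLY d.doc.nodes('/i') AS x(i)
);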
Like everybody has said, this is a pain to do natively in SQL Server. If you must then I think this is the proper approach.
First determine your rules for parsing the string, then break down the process into well-defined and understood problems.
Based on your example, I think this is the process:
Separate comma separated values in the string into rows
If the data does not contain a dash, then it's finished (it's a standalone value)
If it does contain a dash, parse the left and right sides of the dash
Given the left and right sides (the range) determine all the values between them into rows
I would create a temp table to hold the parsing results, which needs two columns:
SourceRowID INT, ContainedValue INT
and another to use for intermediate processing:
SourceRowID INT, ContainedValues VARCHAR
Parse your comma-separated values into their own rows using a CTE like this. Step 1 is now a well-defined and understood problem to solve:
Turning a Comma Separated string into individual rows
So your result from the source
'1-2,5'
will be:
'1-2'
'5'
From there, SELECT from that processing table where the field does not contain a dash. Step 2 is now a well-defined and understood problem to solve. These are standalone numbers and can go straight into the results temp table. The results table should also get the ID reference to the original row.
Next would be to parse the values to the left and right of the dash using CHARINDEX to locate it, then the appropriate LEFT and RIGHT functions as needed. This will give you the starting and ending value.
Here is a relevant question for accomplishing this. Step 3 is now a well-defined and understood problem to solve:
T-SQL substring - separating first and last name
Now you have separated the starting and ending values. Use another function which can explode this range. Step 4 is now a well-defined and understood problem to solve:
SQL: create sequential list of numbers from various starting points
SELECT all N between #min and #max
What is the best way to create and populate a numbers table?
and insert the exploded values into the results temp table, as sketched below.
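A rough sketch of that last step, using hypothetical names (#Intermediate holds the SourceRowID, MinValue and MaxValue parsed in the previous step; #Results is as described above; Numbers is an assumed numbers table with a single int column N):
INSERT INTO #Results (SourceRowID, ContainedValue)
SELECT i.SourceRowID, n.N
FROM #Intermediate AS i
JOIN Numbers AS n
ON n.N BETWEEN i.MinValue AND i.MaxValue;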
Now what you should have is a temp table with every value in the exploded range.
Simply JOIN that to the other table on the values now, then to your source table on the ID reference and you're there.
My suggestion is to add one more field and many more records to your ranges table. Specifically, the primary key would be the integer and the other field would be the range. Records would look like this:
number range
1 1-2,5
2 1-2,5
3 na
4 na
5 1-2,5
etc
Having said that, this is still rather limiting because a number can only have one range. If you want to be thorough, set up a many to many relationship between numbers and ranges.
As far as I can tell, your best option is something like below:
Create a table-valued function that accepts your ranges and converts them to a collection of ints. So 1-3,5 would return:
1
2
3
5
Then use these results to join to other tables. I don't have an exact function to do this at hand, but this one seems like an excellent start.
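A rough, hedged sketch of such a function, reusing the min/max parsing trick from the accepted answer and a split function like the dbo.Split sketched earlier (master.dbo.spt_values with type 'P' only covers 0 to 2047, so substitute a proper numbers table for bigger ranges):
CREATE FUNCTION dbo.ExpandRanges (@Ranges varchar(8000))
RETURNS TABLE
AS
RETURN
(
-- split on comma, then expand each piece's min-max span via a numbers source
SELECT DISTINCT Value = n.number
FROM dbo.Split(@Ranges, ',') AS s
JOIN master.dbo.spt_values AS n
ON n.type = 'P'
AND n.number BETWEEN CAST(LEFT(s.Item, CHARINDEX('-', s.Item + '-') - 1) AS int)
AND CAST(STUFF(s.Item, 1, CHARINDEX('-', s.Item), '') AS int)
);
For example, SELECT * FROM dbo.ExpandRanges('1-3,5') should return 1, 2, 3 and 5, and the function can be used with CROSS APPLY against table one to perform the join.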

Select distinct values for a particular column choosing arbitrarily from duplicates

I have health data relating to deaths. Each individual should die at most once. In the database they sometimes don't, probably because causes of death were changed but the original entry was not deleted. I don't really understand how this was allowed to happen, but it has. So, as a made up example, I have:
Row_number | Individual_ID | Cause_of_death | Date_of_death
------------+---------------+-----------------------+---------------
1 | 1 | Stroke | 3 march 2008
2 | 2 | Myocardial infarction | 1 jan 2009
3 | 2 | Pulmonary Embolus | 1 jan 2009
I want each individual to have only one cause of death.
In the example, I want a query that returns row 1 and either row 2 or row 3 (not both). I have to make an arbitrary choice between rows 2 and 3 because there is no timestamp in any of the fields that can be used to determine which is the revision; it's not ideal but is unavoidable.
I can't make the SQL work to do this. I've tried inner joining distinct Individual_ID to the other fields, but this still gives all the rows. I've tried adding a 'having count(Individual_ID) = 1' clause with it. This leaves out people with more than one cause of death completely. Suggestions on the internet seem to be based on using a timestamped field to choose the most recent, but I don't have that.
IBM DB2. Windows XP. Any thoughts gratefully received.
Have you tried using MIN (or MAX) against the cause of death (and the date of death, if they died on two different dates)?
SELECT IndividualID, MIN(Cause_Of_Death), MIN (Date_Of_Death)
from deaths
GROUP BY IndividualID
I don't know DB2 so I'll answer in general. There are two main approaches:
select *
from T
join (
select keys, min(ID) as MinID
from T
group by keys
) on T.ID = MinID
And
select *, row_number() over (partition by keys) as r
from T
where r = 1
Both return all rows, duplicated or not, but they return only one of the duplicates per "key".
Note that both statements are pseudo-SQL.
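As a concrete sketch of the first (MIN-join) approach for the table in the question, using Row_number as the arbitrary tie-breaker and the table name deaths from the earlier answer:
select d.*
from deaths d
join (
select Individual_ID, min(Row_number) as MinRow
from deaths
group by Individual_ID
) m
on d.Individual_ID = m.Individual_ID
and d.Row_number = m.MinRow;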
The row_number() approach is probably preferable from a performance standpoint. Here is usr's example, in DB2 syntax:
select * from (
select T.*, row_number() over (partition by Individual_ID) as r
from T
)
where r=1;