Transpose single row with multiple columns into multiple rows of two columns - sql
I have a SELECT query that works perfectly fine and it returns a single row with multiple named columns:
| registered | downloaded | subscribed | requested_invoice | paid |
|------------|------------|------------|-------------------|------|
| 9000 | 7000 | 5000 | 4000 | 3000 |
But I need to transpose this result to a new table that looks like this:
| type | value |
|-------------------|-------|
| registered | 9000 |
| downloaded | 7000 |
| subscribed | 5000 |
| requested_invoice | 4000 |
| paid | 3000 |
I have the additional module tablefunc installed in PostgreSQL, but I can't get the crosstab() function to work for this. What can I do?
You need the reverse operation of what crosstab() does. Some call it "unpivot". A LATERAL join to a VALUES expression should be the most elegant way:
SELECT l.*
FROM   tbl  -- or replace the table with your subquery
CROSS  JOIN LATERAL (
   VALUES
      ('registered'       , registered)
    , ('downloaded'       , downloaded)
    , ('subscribed'       , subscribed)
    , ('requested_invoice', requested_invoice)
    , ('paid'             , paid)
   ) l(type, value)
WHERE  id = 1;  -- or whatever
You may need to cast some or all columns to arrive at a common data type. Like:
...
VALUES
   ('registered' , registered::text)
 , ('downloaded' , downloaded::text)
 , ...
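Since the source here is a query rather than a table, a minimal sketch with the query wrapped as a subquery (the inner SELECT of constants is only a stand-in for the actual query):

SELECT l.*
FROM  (
   SELECT 9000 AS registered, 7000 AS downloaded, 5000 AS subscribed
        , 4000 AS requested_invoice, 3000 AS paid   -- stand-in for your query
   ) t
CROSS  JOIN LATERAL (
   VALUES
      ('registered'       , t.registered)
    , ('downloaded'       , t.downloaded)
    , ('subscribed'       , t.subscribed)
    , ('requested_invoice', t.requested_invoice)
    , ('paid'             , t.paid)
   ) l(type, value);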
Related:
Postgres: convert single row to multiple rows (unpivot)
For the reverse operation - "pivot" or "cross-tabulation":
PostgreSQL Crosstab Query
Related
How to get a value inside of a JSON that is inside a column in a table in Oracle sql?
Suppose that I have a table named agents_timesheet that has a structure like this:

ID | name | health_check_record                                    | date       | clock_in | clock_out
---+------+--------------------------------------------------------+------------+----------+----------
1  | AAA  | {"mental":{"stress":"no", "depression":"no"},          | 6-Dec-2021 | 08:25:07 |
   |      |  "physical":{"other_symptoms":"headache", "flu":"no"}} |            |          |
2  | BBB  | {"mental":{"stress":"no", "depression":"no"},          | 6-Dec-2021 | 08:26:12 |
   |      |  "physical":{"other_symptoms":"no", "flu":"yes"}}      |            |          |
3  | CCC  | {"mental":{"stress":"no", "depression":"severe"},      | 6-Dec-2021 | 08:27:12 |
   |      |  "physical":{"other_symptoms":"cancer", "flu":"yes"}}  |            |          |

Now I need to get all agents who had flu on that day. As for getting the flu value from a single JSON in Oracle SQL, I can already get it with this SQL statement:

SELECT *
FROM JSON_TABLE(
  '{"mental":{"stress":"no", "depression":"no"}, "physical":{"fever":"no", "flu":"yes"}}',
  '$'
  COLUMNS (fever VARCHAR(2) PATH '$.physical.flu')
);

As for getting the values from the column health_check_record, I can get them by utilizing a SELECT statement. But how do I get the values of flu inside the JSON in the health_check_record of that table?

Additional question: based on the table, how can I retrieve the full list of other_symptoms, so that it gets me this kind of output:

ID | name | other_symptoms
---+------+---------------
1  | AAA  | headache
2  | BBB  | no
3  | CCC  | cancer
You can use the JSON_EXISTS() function:

SELECT *
FROM agents_timesheet
WHERE JSON_EXISTS(health_check_record, '$.physical.flu == "yes"');

There is also the "plain old way" without JSON parsing, treating the column like a standard VARCHAR one. This will not work in 100% of cases, but if your data is laid out as consistently as you described, it might be sufficient:

SELECT *
FROM agents_timesheet
WHERE health_check_record LIKE '%"flu":"yes"%';
"How to get the values of flu in the JSON in the health_check_record of that table?"

From Oracle 12, to get the values you can use JSON_TABLE with a correlated CROSS JOIN to the table:

SELECT a.id,
       a.name,
       j.*,
       a."DATE",
       a.clock_in,
       a.clock_out
FROM   agents_timesheet a
       CROSS JOIN JSON_TABLE(
         a.health_check_record,
         '$'
         COLUMNS (
           mental_stress     VARCHAR2(3) PATH '$.mental.stress',
           mental_depression VARCHAR2(3) PATH '$.mental.depression',
           physical_fever    VARCHAR2(3) PATH '$.physical.fever',
           physical_flu      VARCHAR2(3) PATH '$.physical.flu'
         )
       ) j
WHERE  physical_flu = 'yes';

db<>fiddle here
You can use "dot notation" to access data from a JSON column. Like this: select "DATE", id, name from agents_timesheet t where t.health_check_record.physical.flu = 'yes' ; DATE ID NAME ----------- --- ---- 06-DEC-2021 2 BBB Note that this approach requires that you use an alias for the table name (so you can use it in accessing the JSON data). For testing I used the data posted by MT0 on dbfiddle. I am not a big fan of double-quoted column names; use something else for "DATE", such as dt or date_.
How to get the mode of a text[] in SQL?
In a table, I have a column of type text[]. I want to extract the most frequent string in each row. How can I do that? Trivial example:

id | fruit
---+-------------------------------
10 | ['apple','pear','apple']
20 | ['pear','pear','banana']
30 | ['pineapple','apple','apple']

After running the query I would like to have:

id | fruit                         | mode
---+-------------------------------+-------
10 | ['apple','pear','apple']      | apple
20 | ['pear','pear','banana']      | pear
30 | ['pineapple','apple','apple'] | apple
You can use a scalar sub-query after unnesting the elements:

select *,
       (select mode() within group (order by u.word)
        from   unnest(t.fruit) as u(word)) as mode
from   the_table t;

This assumes that fruit is a text[] column. If it's json or jsonb in reality, you need to use json_array_elements_text() instead of unnest(). If you need that a lot, you can create a function for it; see the sketch below.
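A minimal sketch of such a function, assuming a text[] (or any array) input; the name array_mode is made up:

create or replace function array_mode(anyarray)
  returns anyelement
  language sql immutable as
$$
   -- most frequent element; among equally frequent values, mode() picks
   -- the first one in sort order
   select mode() within group (order by u.elem)
   from   unnest($1) as u(elem);
$$;

The query then simplifies to:

select *, array_mode(fruit) as mode
from   the_table t;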
Postgres jsonb. Heterogeneous json fields
If I have a table with a single jsonb column and the table has data like this:

[{"body": {"project-id": "111"}},
 {"body": {"my-org.project-id": "222"}},
 {"body": {"other-org.project-id": "333"}}]

Basically it stores project-id differently for different rows. Now I need a query where the data->'body'->… values from different rows coalesce into a single field 'project-id'. How can I do that? E.g. if I do something like this:

select data->'body'->'project-id' projectid from mytable

it will return something like:

| projectid |
| 111       |

But I also want the project-ids from the other rows too, without additional columns in the results. I.e. I want this:

| projectid |
| 111       |
| 222       |
| 333       |
I understand that each of your rows contains a json object, with a nested object whose key varies over rows, and whose value you want to acquire. Assuming the 'body' always has a single key, you could do:

select jsonb_extract_path_text(t.js -> 'body', x.k) projectid
from t
cross join lateral jsonb_object_keys(t.js -> 'body') as x(k)

The lateral join on jsonb_object_keys() extracts all keys in the object as rows. Then we use jsonb_extract_path_text() to get the corresponding value.

Demo on DB Fiddle:

with t as (
    select '{"body": {"project-id": "111"}}'::jsonb js
    union all select '{"body": {"my-org.project-id": "222"}}'::jsonb
    union all select '{"body": {"other-org.project-id": "333"}}'::jsonb
)
select jsonb_extract_path_text(t.js -> 'body', x.k) projectid
from t
cross join lateral jsonb_object_keys(t.js -> 'body') as x(k)

| projectid |
| :--------- |
| 111        |
| 222        |
| 333        |
Recursive self join over file data
I know there are many questions about recursive self joins, but they're mostly in a hierarchical data structure as follows:

ID | Value | Parent id
----------------------

But I was wondering if there was a way to do this in a specific case that I have where I don't necessarily have a parent id. My data will look like this when I initially load the file:

ID | Line
---+--------------------------
1  | 3,Formula,1,2,3,4,...
2  | *,record,abc,efg,hij,...
3  | ,,1,x,y,z,...
4  | ,,2,q,r,s,...
5  | 3,Formula,5,6,7,8,...
6  | *,record,lmn,opq,rst,...
7  | ,,1,t,u,v,...
8  | ,,2,l,m,n,...

Essentially, it's a CSV file where each row in the table is a line in the file. Lines 1 and 5 identify an object header and lines 3, 4, 7, and 8 identify the rows belonging to the object. The object header lines can have only 40 attributes, which is why the object is broken up across multiple sections in the CSV file.

What I'd like to do is take the table, separate out the record # column, and join it with itself multiple times so it achieves something like this:

ID | Line
---+----------------------------------
1  | 3,Formula,1,2,3,4,5,6,7,8,...
2  | *,record,abc,efg,hij,lmn,opq,rst
3  | ,,1,x,y,z,t,u,v,...
4  | ,,2,q,r,s,l,m,n,...

I know it's probably possible, I'm just not sure where to start. My initial idea was to create a view that separates out the first and second columns, and use the view as a way of joining in a repeated fashion on those two columns. However, I have some problems:

- I don't know how many sections will occur in the file for the same object
- The file can contain other objects as well, so joining on the first two columns would be problematic if you have something like

ID | Line
---+--------------------------
1  | 3,Formula,1,2,3,4,...
2  | *,record,abc,efg,hij,...
3  | ,,1,x,y,z,...
4  | ,,2,q,r,s,...
5  | 3,Formula,5,6,7,8,...
6  | *,record,lmn,opq,rst,...
7  | ,,1,t,u,v,...
8  | ,,2,l,m,n,...
9  | ,4,Data,1,2,3,4,...
10 | *,record,lmn,opq,rst,...
11 | ,,1,t,u,v,...

In the above case, my plan could join rows from the Data object in row 9 with the first rows of the Formula object by matching the record value of 1.

UPDATE

I know this is somewhat confusing. I tried doing this with C# a while back, but I had to basically write a recursive descent parser to parse the specific file format, and it simply took too long because I had to get it into the database afterwards and it was too much for Entity Framework. It was taking hours just to convert one file, since these files are excessively large.

Either way, @Nolan Shang has the closest result to what I want. The only difference is this (sorry for the bad formatting):

+----+------------+--------------------------+----------------------------------+
| ID | header     | x                        | value                            |
+----+------------+--------------------------+----------------------------------+
| 1  | 3,Formula, | ,1,2,3,4,5,6,7,8         | 3,Formula,1,2,3,4,5,6,7,8        |
| 2  | ,,         | ,1,x,y,z,t,u,v           | ,1,x,y,z,t,u,v                   |
| 3  | ,,         | ,2,q,r,s,l,m,n           | ,2,q,r,s,l,m,n                   |
| 4  | *,record,  | ,abc,efg,hij,lmn,opq,rst | *,record,abc,efg,hij,lmn,opq,rst |
| 5  | ,4,        | ,Data,1,2,3,4            | ,4,Data,1,2,3,4                  |
| 6  | *,record,  | ,lmn,opq,rst             | ,lmn,opq,rst                     |
| 7  | ,,         | ,1,t,u,v                 | ,1,t,u,v                         |
+----+------------+--------------------------+----------------------------------+
I agree that it would be better to export this to a scripting language and do it there. This will be a lot of work in TSQL.

You've intimated that there are other possible scenarios you haven't shown, so I obviously can't give a comprehensive solution. I'm guessing this isn't something you need to do quickly on a repeated basis, more of a one-time transformation, so performance isn't an issue.

One approach would be to do a LEFT JOIN to a hard-coded table of the possible identifying sub-strings like:

3,Formula,
*,record,
,,1,
,,2,
,4,Data,

It looks like it pretty much has to be human-selected and hard-coded, because I can't find a reliable pattern that can be used to SELECT only these sub-strings.

Then you SELECT from this artificially-created table (or derived table, or CTE) and LEFT JOIN to your actual table with a LIKE to get all the rows that use each of these values as their starting substring, strip out the starting characters to get the rest of the string, and use the STUFF..FOR XML trick to build the desired Line. A sketch of this follows below.

How you get the ID column depends on what you want. For instance, in your second example, I don't know what ID you want for the ,4,Data,... line. Do you want 5 because that's the next number in the results, or do you want 9 because that's the ID of the first occurrence of that sub-string? Code accordingly. If you want 5, it's a ROW_NUMBER(). If you want 9, you can add an ID column to the artificial table you created at the start of this approach.

BTW, there's really nothing recursive about what you need done, so if you're still thinking in those terms, now would be a good time to stop. This is more of a "Group Concatenation" problem.
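A minimal sketch of that approach, using the question's sample rows (the @loaded table variable and the prefixes CTE are made-up names, and the trailing ,... has been stripped from the data):

DECLARE @loaded TABLE (ID int, Line varchar(8000));
INSERT INTO @loaded VALUES
      (1, '3,Formula,1,2,3,4')
    , (2, '*,record,abc,efg,hij')
    , (3, ',,1,x,y,z')
    , (4, ',,2,q,r,s')
    , (5, '3,Formula,5,6,7,8')
    , (6, '*,record,lmn,opq,rst')
    , (7, ',,1,t,u,v')
    , (8, ',,2,l,m,n');

-- human-selected, hard-coded list of identifying prefixes
;WITH prefixes(header) AS (
    SELECT '3,Formula,' UNION ALL
    SELECT '*,record,'  UNION ALL
    SELECT ',,1,'       UNION ALL
    SELECT ',,2,'
)
SELECT MIN(l.ID) AS ID,  -- ID of the first occurrence; use ROW_NUMBER() for a resequenced ID
       p.header + STUFF((
           -- group-concatenate the remainders of all lines sharing this prefix
           SELECT ',' + SUBSTRING(l2.Line, LEN(p.header) + 1, 8000)
           FROM @loaded l2
           WHERE l2.Line LIKE p.header + '%'
           ORDER BY l2.ID
           FOR XML PATH('')
       ), 1, 1, '') AS Line
FROM prefixes p
LEFT JOIN @loaded l ON l.Line LIKE p.header + '%'
GROUP BY p.header;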
Here is a sample, but it differs somewhat from what you need. That is because I use the value up to the second comma as the group header, so ,,1 and ,,2 will be treated as the same group; if you can use a parent id to indicate a group, that would be better.

DECLARE @testdata TABLE(ID int, Line varchar(8000))
INSERT INTO @testdata
SELECT 1,'3,Formula,1,2,3,4,...' UNION ALL
SELECT 2,'*,record,abc,efg,hij,...' UNION ALL
SELECT 3,',,1,x,y,z,...' UNION ALL
SELECT 4,',,2,q,r,s,...' UNION ALL
SELECT 5,'3,Formula,5,6,7,8,...' UNION ALL
SELECT 6,'*,record,lmn,opq,rst,...' UNION ALL
SELECT 7,',,1,t,u,v,...' UNION ALL
SELECT 8,',,2,l,m,n,...' UNION ALL
SELECT 9,',4,Data,1,2,3,4,...' UNION ALL
SELECT 10,'*,record,lmn,opq,rst,...' UNION ALL
SELECT 11,',,1,t,u,v,...'

;WITH t AS(
    SELECT *, REPLACE(SUBSTRING(t.Line, LEN(c.header)+1, LEN(t.Line)), ',...', '') AS data
    FROM @testdata AS t
    CROSS APPLY(VALUES(LEFT(t.Line, CHARINDEX(',', t.Line, CHARINDEX(',', t.Line)+1)))) c(header)
)
SELECT MIN(ID) AS ID, t.header, c.x, t.header + STUFF(c.x, 1, 1, '') AS value
FROM t
OUTER APPLY(SELECT ','+tb.data FROM t AS tb WHERE tb.header = t.header FOR XML PATH('')) c(x)
GROUP BY t.header, c.x

+----+------------+------------------------------------------+-----------------------------------------------+
| ID | header     | x                                        | value                                         |
+----+------------+------------------------------------------+-----------------------------------------------+
| 1  | 3,Formula, | ,1,2,3,4,5,6,7,8                         | 3,Formula,1,2,3,4,5,6,7,8                     |
| 3  | ,,         | ,1,x,y,z,2,q,r,s,1,t,u,v,2,l,m,n,1,t,u,v | ,,1,x,y,z,2,q,r,s,1,t,u,v,2,l,m,n,1,t,u,v     |
| 2  | *,record,  | ,abc,efg,hij,lmn,opq,rst,lmn,opq,rst     | *,record,abc,efg,hij,lmn,opq,rst,lmn,opq,rst  |
| 9  | ,4,        | ,Data,1,2,3,4                            | ,4,Data,1,2,3,4                               |
+----+------------+------------------------------------------+-----------------------------------------------+
SQL Query: Search with list of tuples
I have the following table (simplified version) in SQL Server.

Table Events
-----------------------------------------------------------
| Room | User | Entered             | Exited              |
-----------------------------------------------------------
| A    | Jim  | 2014-10-10T09:00:00 | 2014-10-10T09:10:00 |
| B    | Jim  | 2014-10-10T09:11:00 | 2014-10-10T09:22:30 |
| A    | Jill | 2014-10-10T09:00:00 | NULL                |
| C    | Jack | 2014-10-10T09:45:00 | 2014-10-10T10:00:00 |
| A    | Jack | 2014-10-10T10:01:00 | NULL                |
...

I need to create a query that returns a person's whereabouts at given timestamps. For example: Where was (Jim at 2014-10-09T09:05:00), (Jim at 2014-10-10T09:01:00), (Jill at 2014-10-10T09:10:00), ...

The result set must contain the given User and Timestamp as well as the found room (if any):

------------------------------------------
| User | Timestamp           | WasInRoom |
------------------------------------------
| Jim  | 2014-10-09T09:05:00 | NULL      |
| Jim  | 2014-10-10T09:01:00 | A         |
| Jim  | 2014-10-10T09:10:00 | A         |

The number of User-Timestamp tuples can be > 10 000. The current implementation retrieves all records from the Events table and does the search in Java code. I am hoping that I could push this logic down to SQL. But how? I am using the MyBatis framework to create SQL queries, so the tuples can be inlined into the query.
The basic query is:

select e.*
from events e
where (e.user = 'Jim' and '2014-10-09T09:05:00' >= e.entered and ('2014-10-09T09:05:00' <= e.exited or e.exited is null)) or
      (e.user = 'Jill' and '2014-10-10T09:10:00' >= e.entered and ('2014-10-10T09:10:00' <= e.exited or e.exited is null)) or
      . . .;

SQL Server can handle ridiculously large queries, so you can continue in this vein. However, if you have the name/time values in a table already (or it is the result of a query), then use a join:

select ut.*, e.*
from usertimes ut left join
     events e
     on e.user = ut.user and
        ut.thetime >= e.entered and
        (ut.thetime <= e.exited or e.exited is null);

Note the use of a left join here. It ensures that all the original rows are in the result set, even when there are no matches.
Answers from Jonas and Gordon got me on track, I think. Here is a query that seems to do the job:

CREATE TABLE #SEARCH_PARAMETERS([User] VARCHAR(16), "Timestamp" DATETIME)

INSERT INTO #SEARCH_PARAMETERS([User], "Timestamp")
VALUES
  ('Jim', '2014-10-09T09:05:00'),
  ('Jim', '2014-10-10T09:01:00'),
  ('Jill', '2014-10-10T09:10:00')

SELECT #SEARCH_PARAMETERS.*, Events.Room
FROM #SEARCH_PARAMETERS
LEFT JOIN Events
  ON #SEARCH_PARAMETERS.[User] = Events.[User]
 AND #SEARCH_PARAMETERS."Timestamp" > Events.Entered
 AND (Events.Exited IS NULL OR Events.Exited > #SEARCH_PARAMETERS."Timestamp")

DROP TABLE #SEARCH_PARAMETERS
By declaring a table-valued parameter type for the (user, timestamp) tuples, it should be simple to write a table-valued user-defined function which returns the desired result by joining the parameter table and the Events table; a sketch is below. See http://msdn.microsoft.com/en-us/library/bb510489.aspx

Since you are using MyBatis, it may be easier to just generate a table variable for the tuples inline in the query and join with that.
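For illustration, a minimal sketch of that approach; the type and function names (dbo.UserTimeList, dbo.WhereWas) are invented for the example:

CREATE TYPE dbo.UserTimeList AS TABLE (
    [User]      VARCHAR(16) NOT NULL,
    [Timestamp] DATETIME    NOT NULL
);
GO

CREATE FUNCTION dbo.WhereWas (@params dbo.UserTimeList READONLY)
RETURNS TABLE
AS RETURN (
    SELECT p.[User], p.[Timestamp], e.Room AS WasInRoom
    FROM @params p
    LEFT JOIN Events e
        ON  e.[User] = p.[User]
        AND p.[Timestamp] >= e.Entered
        AND (e.Exited IS NULL OR p.[Timestamp] <= e.Exited)
);
GO

-- usage:
DECLARE @t dbo.UserTimeList;
INSERT INTO @t VALUES ('Jim', '2014-10-09T09:05:00'), ('Jill', '2014-10-10T09:10:00');
SELECT * FROM dbo.WhereWas(@t);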