Crosstab splitting results due to presence of unrelated field - sql

I'm using Postgres 9.1 with tablefunc's crosstab().
I have a table with the following structure:
CREATE TABLE marketdata.instrument_data
(
    dt date NOT NULL,
    instrument text NOT NULL,
    field text NOT NULL,
    value numeric,
    CONSTRAINT instrument_data_pk PRIMARY KEY (dt, instrument, field)
)
This is populated by a script that fetches data daily, so it might look like this:
| dt | instrument | field | value |
|------------+-------------------+-----------+-------|
| 2014-05-23 | SGX.MiniJGB.2014U | PX_VOLUME | 1 |
| 2014-05-23 | SGX.MiniJGB.2014U | OPEN_INT | 2 |
I then use the following crosstab query to pivot the table:
select dt, instrument, vol, oi
FROM crosstab($$
select dt, instrument, field, value
from marketdata.instrument_data
where field = 'PX_VOLUME' or field = 'OPEN_INT'
$$::text, $$VALUES ('PX_VOLUME'),('OPEN_INT')$$::text
) vol(dt date, instrument text, vol numeric, oi numeric);
Running this I get the result:
| dt | instrument | vol | oi |
|------------+-------------------+-----+----|
| 2014-05-23 | SGX.MiniJGB.2014U | 1 | 2 |
The problem:
When running this with a lot of real data in the table, I noticed that for some fields the function was splitting the result over two rows:
| dt | instrument | vol | oi |
|------------+-------------------+-----+----|
| 2014-05-23 | SGX.MiniJGB.2014U | 1 | |
| 2014-05-23 | SGX.MiniJGB.2014U | | 2 |
I checked that the dt and instrument fields were identical and produced a workaround by grouping the output of the crosstab, as sketched below.
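The workaround looks roughly like this (a sketch: max() ignores NULLs, so the two half-filled rows collapse back into one):
select dt, instrument, max(vol) as vol, max(oi) as oi
FROM crosstab($$
select dt, instrument, field, value
from marketdata.instrument_data
where field = 'PX_VOLUME' or field = 'OPEN_INT'
$$::text, $$VALUES ('PX_VOLUME'),('OPEN_INT')$$::text
) vol(dt date, instrument text, vol numeric, oi numeric)
group by dt, instrument;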
Analysis
I've discovered that it's the presence of one other entry in the input table that causes the output to be split over 2 rows. If I have the input as follows:
| dt | instrument | field | value |
|------------+-------------------+-----------+-------|
| 2014-04-23 | EUX.Bund.2014M | PX_VOLUME | 0 |
| 2014-05-23 | SGX.MiniJGB.2014U | PX_VOLUME | 1 |
| 2014-05-23 | SGX.MiniJGB.2014U | OPEN_INT | 2 |
I get:
| dt | instrument | vol | oi |
|------------+-------------------+-----+----|
| 2014-04-23 | EUX.Bund.2014M | 0 | |
| 2014-05-23 | SGX.MiniJGB.2014U | 1 | |
| 2014-05-23 | SGX.MiniJGB.2014U | | 2 |
Where it gets really weird...
If I recreate the above input table manually then the output is as we would expect, combined into a single row.
If I run:
update marketdata.instrument_data
set instrument = instrument
where instrument = 'EUX.Bund.2014M'
Then again, the output is as we would expect, which is surprising as all I've done is set the instrument field to itself.
So I can only conclude that there is some hidden character/encoding issue in that Bund entry that is breaking crosstab.
Are there any suggestions as to how I can determine what it is about that entry that breaks crosstab?
Edit:
I ran the following on the raw table to try and see any hidden characters:
select instrument, encode(instrument::bytea, 'escape')
from marketdata.bloomberg_future_data_temp
where instrument = 'EUX.Bund.2014M';
And got:
| instrument | encode |
|----------------+----------------|
| EUX.Bund.2014M | EUX.Bund.2014M |
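A hex dump can make otherwise-invisible characters explicit. A sketch using convert_to() (which sidesteps the direct text-to-bytea cast, not available by default) and a LIKE pattern so that values carrying extra characters still match:
select instrument,
       encode(convert_to(instrument, 'UTF8'), 'hex') as hex_bytes
from marketdata.bloomberg_future_data_temp
where instrument like '%EUX.Bund.2014M%';
Any byte outside the expected printable-ASCII range, or a run of trailing 20s (spaces), would show up immediately.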

Two problems.
1. ORDER BY is required.
The manual:
In practice the SQL query should always specify ORDER BY 1,2 to ensure that the input rows are properly ordered, that is, values with the same row_name are brought together and correctly ordered within the row.
With the one-parameter form of crosstab(), ORDER BY 1,2 would be necessary.
2. One column with distinct values per group.
The manual:
crosstab(text source_sql, text category_sql)
source_sql is a SQL statement that produces the source set of data.
...
This statement must return one row_name column, one category column,
and one value column. It may also have one or more "extra" columns.
The row_name column must be first. The category and value columns must
be the last two columns, in that order. Any columns between row_name
and category are treated as "extra". The "extra" columns are expected
to be the same for all rows with the same row_name value.
Emphasis mine: one row_name column. It seems you want to form groups over two columns, which does not work the way you expect.
Related answer:
Pivot on Multiple Columns using Tablefunc
The solution depends on what you actually want to achieve. It's not in your question; you silently assumed the function would do what you hoped for.
Solution
I guess you want to group on both leading columns: (dt, instrument). You could play tricks with concatenation or arrays, but that would be slow and/or unreliable. I suggest a cleaner and faster approach with the window function rank() or dense_rank() to produce a single-column unique value per desired group. This is very cheap, because ordering rows is the main cost anyway and the order of the window frame is identical to the required sort order. You can remove the added column in the outer query if desired:
SELECT dt, instrument, vol, oi
FROM crosstab(
$$SELECT dense_rank() OVER (ORDER BY dt, instrument)::int AS rnk
, dt, instrument, field, value
FROM marketdata.instrument_data
WHERE field IN ('PX_VOLUME', 'OPEN_INT')
ORDER BY 1$$
, $$VALUES ('PX_VOLUME'),('OPEN_INT')$$
) vol(rnk int, dt date, instrument text, vol numeric, oi numeric);
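Fed the three sample rows from the question, this should come back combined into the expected two rows:
| dt | instrument | vol | oi |
|------------+-------------------+-----+----|
| 2014-04-23 | EUX.Bund.2014M | 0 | |
| 2014-05-23 | SGX.MiniJGB.2014U | 1 | 2 |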
More details:
PostgreSQL Crosstab Query

You could run a query that replaces irregular characters with an asterisk:
select regexp_replace(instrument, '[^a-zA-Z0-9]', '*', 'g')
from marketdata.instrument_data
where instrument = 'EUX.Bund.2014M'
Perhaps the instrument = instrument assignment discards trailing whitespace. That would also explain why where instrument = 'EUX.Bund.2014M' matches two values that crosstab sees as different.
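To test the whitespace hypothesis directly, something along these lines (a sketch against the same table) would flag values with leading or trailing whitespace:
select dt, instrument, length(instrument)
from marketdata.instrument_data
where instrument ~ '^\s|\s$';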

Related

How to sum the minutes of each activity in Postgresql?

The column "activitie_time_enter" has the times.
The column "activitie_still" indicates the type of activity.
The column "activitie_walking" indicates the other type of activity.
Table example:
activitie_time_enter | activitie_still | activitie_walking
17:30:20 | Still |
17:31:32 | Still |
17:32:24 | | Walking
17:33:37 | | Walking
17:34:20 | Still |
17:35:37 | Still |
17:45:13 | Still |
17:50:23 | Still |
17:51:32 | | Walking
What I need is to sum up the total minutes for each activity separately.
Any suggestions or solution?
First calculate the duration of each activity (the WITH query, i.e. the CTE t), then do a conditional sum.
with t as
(
select
*, lead(activitie_time_enter) over (order by activitie_time_enter) - activitie_time_enter as duration
from _table
)
select
sum (duration) filter (where activitie_still = 'Still') as total_still,
sum (duration) filter (where activitie_walking = 'Walking') as total_walking
from t;
/** Result:
total_still|total_walking|
-----------+-------------+
00:19:16| 00:01:56|
*/
BTW, do you really need two columns (activitie_still and activitie_walking)? A single activity column holding those values would do, and would allow more activities (Running, Sleeping, Working, etc.) without changing the table structure. See the sketch below.
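With a single activity column the query collapses to a plain GROUP BY; a sketch (the column name activity is hypothetical, _table is reused from above):
with t as
(
select
*, lead(activitie_time_enter) over (order by activitie_time_enter) - activitie_time_enter as duration
from _table
)
select activity, sum (duration) as total
from t
group by activity;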

SQL: How to find string occurrences, sort them randomly by key and assign them to new attributes?

I have the following sample data:
key | source_string
---------------------
1355 | hb;am;dr;cc;
3245 | am;xa;sp;cc;
9831 | dr;hb;am;ei;
What I need to do:
Find strings from a fixed list ('hb','am','dr','ac') in the source_string
Create 3 new attributes and assign the found strings to them randomly but fixed (no difference after query re-execution)
If possible no subqueries and all in one SQL SELECT statement
The solution should look like this:
key | source_string | t_1 | t_2 | t_3
---------------------------------------
1355 | hb;am;dr;cc; | hb | dr |
3245 | am;xa;sp;cc; | am | |
9831 | dr;hb;am;ei; | hb | dr | am
My thought process:
I wanted to return the strings that occurred per row -> 1355: hb,am,dr,cc, (no idea how)
Rank them based on the key to have it randomly (maybe with rank() and mod())
Assign the strings based on their rank to the new attributes. At key 1355 4 attributes match, but only 3 need to be assigned, so the one left has to be ignored.
(Everything in Postgres)
In my current solution I created a rule for every case, which results in a huge query which is not desirable.
One simple method is to split the string, reaggregate the matches into an array, and use that for the results:
select t.*,
ar[1], ar[2], ar[3]
from t cross join lateral
(select array_agg(el order by random()) as ar
from regexp_split_to_table(t.source_string, ';') el
where el in ('hb','am','dr','ac')
) s;
Here is a db<>fiddle.
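One caveat: the question asks for an assignment that is random but stable across re-executions, and order by random() reshuffles on every run. A deterministic variant (a sketch, ordering by a hash of the key combined with each element; md5() is just one convenient stable function):
select t.*,
ar[1], ar[2], ar[3]
from t cross join lateral
(select array_agg(el order by md5(t.key::text || el)) as ar
from regexp_split_to_table(t.source_string, ';') el
where el in ('hb','am','dr','ac')
) s;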

How can I return the best matched row first in sort order from a set returned by querying a single search term against multiple columns in Postgres?

Background
I have a Postgres 11 table like so:
CREATE TABLE
some_schema.foo_table (
id INTEGER PRIMARY KEY GENERATED ALWAYS AS IDENTITY,
bar_text TEXT,
foo_text TEXT,
foobar_text TEXT
);
It has some data like this:
INSERT INTO some_schema.foo_table (bar_text, foo_text, foobar_text)
VALUES ('eddie', '123456', 'something0987');
INSERT INTO some_schema.foo_table (bar_text, foo_text, foobar_text)
VALUES ('Snake', '12345-54321', 'that_##$%_snake');
INSERT INTO some_schema.foo_table (bar_text, foo_text, foobar_text)
VALUES ('Sally', '12345', '24-7avocado');
id | bar_text | foo_text | foobar_text
----+----------+-------------+-----------------
1 | eddie | 123456 | something0987
2 | Snake | 12345-54321 | that_##$%_snake
3 | Sally | 12345 | 24-7avocado
The problem
I need to query each one of these columns and compare the values to a given term (passed in as an argument from app logic), and make sure the best-matched row (considering comparison with all the columns, not just one) is returned first in the sort order.
There is no way to know in advance which of the columns is likely to be a better match for the given term.
If I compare the given term to each value using the similarity() function, I can see at a glance which row has the best match in any of the three columns and can see that's the one I would want ranked first in the sort order.
SELECT
f.id,
f.foo_text,
f.bar_text,
f.foobar_text,
similarity('12345', foo_text) AS foo_similarity,
similarity('12345', bar_text) AS bar_similarity,
similarity('12345', foobar_text) AS foobar_similarity
FROM some_schema.foo_table f
WHERE
(
f.foo_text ILIKE '%12345%'
OR
f.bar_text ILIKE '%12345%'
OR
f.foobar_text ILIKE '%12345%'
)
;
id | foo_text | bar_text | foobar_text | foo_similarity | bar_similarity | foobar_similarity
----+-------------+----------+-----------------+----------------+----------------+-------------------
2 | 12345-54321 | Snake | that_##$%_snake | 0.5 | 0 | 0
3 | 12345 | Sally | 24-7avocado | 1 | 0 | 0
1 | 123456 | eddie | something0987 | 0.625 | 0 | 0
(3 rows)
Clearly in this case, id #3 (Sally) is the best match (exact, as it happens); this is the row I'd like to return first.
However, since I don't know ahead of time that foo_text is going to be the column with the best match, I don't know how to define the ORDER BY clause.
I figured this would be a common enough problem, but I haven't found any hints in a fair bit of searching SO and DDG.
How can I always rank the best-matched row first in the returned set, without knowing which column will provide the best match to the search term?
Use greatest() in the ORDER BY clause:
order by greatest(similarity('12345', foo_text),
                  similarity('12345', bar_text),
                  similarity('12345', foobar_text)) desc
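Plugged into the query from the question, the complete statement might look like this:
SELECT f.id, f.bar_text, f.foo_text, f.foobar_text
FROM some_schema.foo_table f
WHERE f.foo_text ILIKE '%12345%'
OR f.bar_text ILIKE '%12345%'
OR f.foobar_text ILIKE '%12345%'
ORDER BY greatest(similarity('12345', f.foo_text),
                  similarity('12345', f.bar_text),
                  similarity('12345', f.foobar_text)) DESC;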

Teradata SQL - select distinct returning duplicate rows where one row has null values

I'm stuck on a query where I'm joining multiple tables to bring in the attributes I need for a specific primary key. What I'm finding is that I'm receiving duplicate rows which are essentially the same, except that one row has null (?) values in a few columns. I only want to return the row with populated data.
I've checked all of my vol tables up to this point: there are no duplicates, and I have the same distinct row count until my final vol table, where I join in new data from other tables. I will post that query, but I'm just curious whether anyone knows why this would happen with "SELECT DISTINCT".
I tried using the clause "WHERE PROD_LN IS NOT NULL", but I have some rows that are not duplicates and will not have values for PROD_LN. I was also thinking of trying a "CASE WHEN PROD_LN IS NULL THEN PROD_LN = PROD_LINE NOT NULL", but I'm not sure that would work. Any help is appreciated!
ACCT_NAME    | GRP_ID   | GRP_B  | ASGND_CD | PROD_LN | PROD_TYP | PLCY_TYP | FINCL | MKT_SGMT
ENTERPRISE A | 00012345 | N12345 | 1        | ?       | ?        | 8        | ?     | ?
ENTERPRISE A | 00012345 | N12345 | 1        | H       | SPPO     | 8        | ASO   | AFG
I think you want something like this:
select t.*
from t
qualify row_number() over (partition by ACCT_NAME, GRP_ID, GRP_B, ASGND_CD
order by prod_ln nulls last
) = 1;
I am guessing that by duplicate, you mean on the first four columns. In any case, the partition by should be the columns that you want to be unique.
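If several of the nullable columns can be missing independently, extending the sort key (a sketch reusing the other columns from the sample) prefers the most fully populated row overall:
select t.*
from t
qualify row_number() over (partition by ACCT_NAME, GRP_ID, GRP_B, ASGND_CD
order by PROD_LN nulls last, PROD_TYP nulls last, FINCL nulls last
) = 1;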

JOIN two tables, but only include data from first table in first instance of each unique record

Title might be confusing.
I have a table of Cases, and each Case can contain many Tasks. To achieve a different workflow for each Task, I have different tables such as Case_Emails, Case_Calls, Case_Chats, etc...
I want to build a query that will eventually be exported to Excel. In this query I want to list out each Task; the Tasks are already joined together via a UNION, in another query, into a common format. For each Task, I want only the first Task associated with a Case to include the details from the Cases table. Example below:
+----+---------+------------+-------------+-------------+-------------+
| id | Case ID | Agent Name | Task Info 1 | Task Info 2 | Task Info 3 |
+----+---------+------------+-------------+-------------+-------------+
| 1 | 4000000 | Some Name | Detailstuff | Stuffdetail | Thingsyo |
| 2 | | | Detailstuff | Stuffdetail | Thingsyo |
| 3 | | | Detailstuff | Stuffdetail | Thingsyo |
| 4 | 4000003 | Some Name | Detailstuff | Stuffdetail | Thingsyo |
| 5 | | | Detailstuff | Stuffdetail | Thingsyo |
| 6 | 4000006 | Some Name | Detailstuff | Stuffdetail | Thingsyo |
+----+---------+------------+-------------+-------------+-------------+
My original approach was attempting a LEFT JOIN on Case ID, but I couldn't figure out how to filter the data out from the extra rows.
This would be much simpler if Access supported the ROW_NUMBER function. It doesn't, but you can sort of simulate it with a correlated subquery using the Tasks table (this assumes that each task has a unique numeric ID). This basically assigns a row number to each task, partitioned by the CaseID. Then you can just conditionally display the CaseID and AgentName where RowNum = 1.
SELECT Switch(RowNum = 1, CaseID) as Case,
Switch(RowNum = 1, AgentName) as Agent,
TaskName
FROM (
SELECT c.CaseID,
c.AgentName,
t.TaskName,
(select count(*)
from Tasks t2
where t2.CaseID = c.CaseID and t2.ID <= t.ID) as RowNum
FROM Cases c
INNER JOIN Tasks t ON c.CaseID = t.CaseID
order by c.CaseID, t.TaskName
)
You didn't post your table structure, so I'm not sure this will work for you as-is, but maybe you can adapt it.
No matter what, when you join you will have duplicate values. To remove the duplicates, either put a DISTINCT in your SELECT or a GROUP BY after your filters. This should resolve the duplicates in your query for Task Info 1, 2, and 3.
I found out that I can alias my tables in the query like so:
FROM Case_Calls Calls
With this alias in place, I was able to filter based on a subquery:
IIF( Calls.[ID] <> (select top 1 [ID] from Case_Calls where [Case ID] = Calls.[Case ID]), '', Cases.[Creator]) As [Case Creator]
This solution gives me the results that I want :) It's rather ugly SQL, and difficult to parse when I'm dealing with dozens of columns, but it gets the job done!
I'm still curious if there is a better solution...
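One caveat on that approach: in Access, TOP 1 without an ORDER BY is not guaranteed to pick the same row every time, so an explicit sort makes "first" stable. A sketch (the join to Cases and its [Case ID] key are assumptions based on the description):
SELECT Calls.*,
IIF(Calls.[ID] <> (SELECT TOP 1 [ID]
FROM Case_Calls
WHERE [Case ID] = Calls.[Case ID]
ORDER BY [ID]), '', Cases.[Creator]) AS [Case Creator]
FROM Case_Calls AS Calls
INNER JOIN Cases ON Cases.[Case ID] = Calls.[Case ID];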