I have some data from a query and the shape pretty much looks like this:
| Id | category | value |
|----|----------|-------|
| 1 | 'a' | 2 |
| 1 | 'b' | 5 |
| 2 | 'a' | 3 |
| 2 | 'b' | 4 |
I want to group that data and insert it into a table with the following structure:
| Id | category_a_value | category_b_value|
|----|------------------|-----------------|
| 1 | 2 | 5 |
| 2 | 3 | 4 |
Is there a nice way to achieve this in Postgres? I couldn't figure out how to group the data the way I wanted, so eventually I tried an INSERT INTO ... ON CONFLICT approach selecting from the original query, but this failed because the same row can't be affected twice in one statement.
Thanks in advance
You can use conditional aggregation, which in Postgres uses the filter clause:
select id,
       max(value) filter (where category = 'a') as category_a_value,
       max(value) filter (where category = 'b') as category_b_value
from t
group by id;
You can then use insert . . . select to insert the results into an existing table.
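The whole flow can be sketched with Python's built-in sqlite3, since SQLite also supports the filter clause (version 3.30+); the source table t and its rows are taken from the question, while the target table name is made up for the demo:

```python
import sqlite3  # standard library; SQLite >= 3.30 supports the filter clause

con = sqlite3.connect(":memory:")
con.execute("create table t (id integer, category text, value integer)")
con.executemany("insert into t values (?, ?, ?)",
                [(1, "a", 2), (1, "b", 5), (2, "a", 3), (2, "b", 4)])

# Hypothetical target table with one column per category
con.execute("""create table target
               (id integer, category_a_value integer, category_b_value integer)""")

# Pivot with conditional aggregation, then insert . . . select into the target
con.execute("""
    insert into target
    select id,
           max(value) filter (where category = 'a'),
           max(value) filter (where category = 'b')
    from t
    group by id
""")
rows = con.execute("select * from target order by id").fetchall()
print(rows)  # [(1, 2, 5), (2, 3, 4)]
```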
I would like to dedup rows with case-insensitive values.
original table:
| ID | Name |
| ---| -------------- |
| 1 | Apple |
| 2 | Banana |
| 1 | apple |
desired output after deduping (keep the lowercase):
| ID | Name |
| ---| -------------- |
| 2 | Banana |
| 1 | apple |
The following statement only works for a case-sensitive match.
create table DELETE2 as select distinct * from DELETE1;
drop table DELETE1;
alter table DELETE2 rename to DELETE1;
I tried the following statement, but it did not work:
ALTER SESSION SET QUOTED_IDENTIFIERS_IGNORE_CASE = TRUE;
Thank you!
knozawa
You could group by lower(name):
select id, max(name) as name
from DELETE1
group by id, lower(name);
max() keeps 'apple' here because lowercase letters sort after uppercase in the default collation.
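A runnable sketch of the same idea, using Python's sqlite3 as a stand-in (SQLite's default binary collation also sorts lowercase after uppercase, so max() keeps the lowercase spelling):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table DELETE1 (ID integer, Name text)")
con.executemany("insert into DELETE1 values (?, ?)",
                [(1, "Apple"), (2, "Banana"), (1, "apple")])

# One row per (ID, lower(Name)); max(Name) keeps 'apple' over 'Apple'
# because 'a' sorts after 'A' in binary collation.
rows = con.execute("""
    select ID, max(Name) as Name
    from DELETE1
    group by ID, lower(Name)
    order by ID
""").fetchall()
print(rows)  # [(1, 'apple'), (2, 'Banana')]
```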
Say I have a table which looks like this, with two foreign keys:
| col1 | fkey1 | fkey2 |
|------|-------|-------|
| foo | 123 | null |
| foo | 123 | 456 |
| bar | 789 | null |
How would I group by col1, with a preference for the row in which fkey2 is not null? So that the result would look like this:
| col1 | fkey1 | fkey2 |
|------|-------|-------|
| foo | 123 | 456 |
| bar | 789 | null |
One other consideration is that fkey1 has a not null constraint on it, while fkey2 does not.
For this dataset, you could use simple aggregation:
select col1, fkey1, max(fkey2) fkey2
from mytable
group by col1, fkey1
But I suspect that you actually want distinct on:
select distinct on (col1) t.*
from mytable t
order by col1, fkey2;
In Postgres, ascending order puts nulls last by default, so for each col1 the first row (the one distinct on keeps) has a non-null fkey2 whenever such a row exists.
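distinct on is Postgres-specific, but the same "first row per group" logic can be emulated portably with row_number(); a sketch against SQLite using the question's sample rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table mytable (col1 text, fkey1 integer, fkey2 integer)")
con.executemany("insert into mytable values (?, ?, ?)",
                [("foo", 123, None), ("foo", 123, 456), ("bar", 789, None)])

# Rank rows within each col1 group; (fkey2 is null) is 0 for non-null
# values, so rows with a non-null fkey2 come first and win rn = 1.
rows = con.execute("""
    select col1, fkey1, fkey2
    from (
        select *,
               row_number() over (
                   partition by col1
                   order by (fkey2 is null)
               ) as rn
        from mytable
    )
    where rn = 1
""").fetchall()
print(sorted(rows))  # [('bar', 789, None), ('foo', 123, 456)]
```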
I have the following tables:
table_1
id | table_2_id | table_3_id | table_1_specific_columns
1 | null | 1 |
2 | 1 | null |
...
and:
table_2
id | table_2_specific_columns| date
1 | blabla | 01-01-1990
2 | bababa | 02-02-1992
...
and:
table_3
id | table_3_specific_columns| date
1 | blabla | 01-01-1991
2 | bababa | 02-02-1989
...
The database is PostgreSQL.
How can I order table_1 by the date column of the joined table_2 and table_3 tables in Laravel?
Can this be done using Eloquent?
Edit: the expected result, if it can be done using eloquent, is a Collection of table_1 model with table_2 and table_3 relations ordered by the date column in table_2 and table_3
Edit2:
Expected result for ascending ordering of table_1. The date column is shown for quick reference only; it is not required in the output.
id | table_2_id | table_3_id | date
1 | null | 2 | 02-02-1989
2 | 1 | null | 01-01-1990
1 | null | 1 | 01-01-1991
2 | 2 | null | 02-02-1992
From what you've shared, each table_1 row has either a table_2 or a table_3 relationship (if any), so you could do:
DB::table('table_1')
    ->leftJoin('table_2', 'table_1.table_2_id', 'table_2.id')
    ->leftJoin('table_3', 'table_1.table_3_id', 'table_3.id')
    ->select('table_1.*')
    ->orderByRaw('COALESCE(table_2.date, table_3.date)')
    ->get();
Note: if a row relates to both table 2 and table 3, the order is determined by table_2's date only.
The downside here is that you'll also include any rows of table_1 which don't relate to anything. You could omit those by adding ->whereNotNull('table_2.id')->orWhereNotNull('table_3.id').
If you have a model you can modify the above to use e.g. Table1::leftJoin(...) instead of DB::table. In that case select('table_1.*') becomes even more important, so you don't hydrate the model objects with the wrong values.
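In spirit, the builder chain above generates the SQL sketched below; this stand-alone demo uses Python's sqlite3 with made-up sample rows and ISO-formatted dates (so text ordering matches chronological ordering):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table table_1 (id integer, table_2_id integer, table_3_id integer);
    create table table_2 (id integer, date text);
    create table table_3 (id integer, date text);
    insert into table_1 values (1, null, 1), (2, 1, null);
    insert into table_2 values (1, '1990-01-01');
    insert into table_3 values (1, '1991-01-01');
""")

# Left-join both related tables and order by whichever date is present
rows = con.execute("""
    select table_1.*
    from table_1
    left join table_2 on table_1.table_2_id = table_2.id
    left join table_3 on table_1.table_3_id = table_3.id
    order by coalesce(table_2.date, table_3.date)
""").fetchall()
print(rows)  # [(2, 1, None), (1, None, 1)]
```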
You can use orderByRaw. Your code will be similar to:
Table::orderByRaw("column DESC, column2 ASC")->get();
I have a crosstab() query similar to the one in my previous question:
Unexpected effect of filtering on result from crosstab() query
The common case is to filter the extra1 field on multiple values: extra1 IN (value1, value2, ...). For each value included in the extra1 filter, I have added an ordering expression like (extra1 <> valueN), as described in the above-mentioned post. The resulting query is as follows:
SELECT *
FROM crosstab(
'SELECT row_name, extra1, extra2..., another_table.category, value
FROM table t
JOIN another_table ON t.field_id = another_table.field_id
WHERE t.field = certain_value AND t.extra1 IN (val1, val2, ...) --> more values
ORDER BY row_name ASC, (extra1 <> val1), (extra1 <> val2)', ... --> more ordering expressions
'SELECT category_name FROM category_name WHERE field = certain_value'
) AS ct(extra1, extra2...)
WHERE extra1 = val1; --> condition on the result
The first value of extra1 included in the ordering expression (value1) yields the correct resulting rows. However, the following ones (value2, value3, ...) yield the wrong number of results, each with fewer rows. Why is that?
UPDATE:
Giving this as our source table (table t):
+----------+--------+--------+------------------------+-------+
| row_name | Extra1 | Extra2 | another_table.category | value |
+----------+--------+--------+------------------------+-------+
| Name1 | 10 | A | 1 | 100 |
| Name2 | 11 | B | 2 | 200 |
| Name3 | 12 | C | 3 | 150 |
| Name2 | 11 | B | 3 | 150 |
| Name3 | 12 | C | 2 | 150 |
| Name1 | 10 | A | 2 | 100 |
| Name3 | 12 | C | 1 | 120 |
+----------+--------+--------+------------------------+-------+
And this as our category table:
+-------------+--------+
| category_id | value |
+-------------+--------+
| 1 | Cat1 |
| 2 | Cat2 |
| 3 | Cat3 |
+-------------+--------+
Using the CROSSTAB, the idea is to get a table like this:
+----------+--------+--------+------+------+------+
| row_name | Extra1 | Extra2 | cat1 | cat2 | cat3 |
+----------+--------+--------+------+------+------+
| Name1 | 10 | A | 100 | 100 | |
| Name2 | 11 | B | | 200 | 150 |
| Name3 | 12 | C | 120 | 150 | 150 |
+----------+--------+--------+------+------+------+
The idea is to be able to filter the resulting table so I get results with Extra1 column with values 10 or 11, as follow:
+----------+--------+--------+------+------+------+
| row_name | Extra1 | Extra2 | cat1 | cat2 | cat3 |
+----------+--------+--------+------+------+------+
| Name1 | 10 | A | 100 | 100 | |
| Name2 | 11 | B | | 200 | 150 |
+----------+--------+--------+------+------+------+
The problem is that with my query I get a different result size for Extra1 = 10 than for Extra1 = 11. With (Extra1 <> 10) I can get the correct result size for that value, but not for 11.
Here is a fiddle demonstrating the problem in more detail:
https://dbfiddle.uk/?rdbms=postgres_11&fiddle=5c401f7512d52405923374c75cb7ff04
All "extra" columns are copied from the first row of the group (as pointed out in my previous answer).
While you filter with:
.... WHERE extra1 = 'val1';
...it makes no sense to add more ORDER BY expressions on the same column. Only rows that have at least one extra1 = 'val1' in their source group survive.
From your various comments, I guess you might want to see all distinct existing values of extra - within the set filtered in the WHERE clause - for the same unixdatetime. If so, aggregate before pivoting. Like:
SELECT *
FROM crosstab(
$$
SELECT unixdatetime, x.extras, c.name, s.value
FROM (
SELECT unixdatetime, array_agg(extra) AS extras
FROM (
SELECT DISTINCT unixdatetime, extra
FROM source_table s
WHERE extra IN (1, 2) -- condition moves here
ORDER BY unixdatetime, extra
) sub
GROUP BY 1
) x
JOIN source_table s USING (unixdatetime)
JOIN category_table c ON c.id = s.gausesummaryid
ORDER BY 1
$$
, $$SELECT unnest('{trace1,trace2,trace3,trace4}'::text[])$$
) AS final_result (unixdatetime int
, extras int[]
, trace1 numeric
, trace2 numeric
, trace3 numeric
, trace4 numeric);
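The inner subquery x is the crux: it collapses all distinct, filtered extra values into a single array per unixdatetime, so each group pivots to exactly one row. A minimal sketch of just that aggregation step, with SQLite's group_concat standing in for Postgres array_agg and invented sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table source_table (unixdatetime integer, extra integer)")
con.executemany("insert into source_table values (?, ?)",
                [(100, 1), (100, 2), (100, 1), (200, 2), (200, 3)])

# Deduplicate and filter first, then collapse to one value list per timestamp
rows = con.execute("""
    select unixdatetime, group_concat(extra) as extras
    from (
        select distinct unixdatetime, extra
        from source_table
        where extra in (1, 2)          -- the filter moves into this step
        order by unixdatetime, extra
    )
    group by unixdatetime
    order by unixdatetime
""").fetchall()
print(rows)
```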
Aside: advice given in the following related answer about the 2nd function parameter applies to your case as well:
PostgreSQL crosstab doesn't work as desired
I demonstrate a query with a static 2nd parameter above. While we're at it, you don't need to join to category_table at all. The same result, a bit shorter and faster:
SELECT *
FROM crosstab(
$$
SELECT unixdatetime, x.extras, s.gausesummaryid, s.value
FROM (
SELECT unixdatetime, array_agg(extra) AS extras
FROM (
SELECT DISTINCT unixdatetime, extra
FROM source_table
WHERE extra IN (1, 2) -- condition moves here
ORDER BY unixdatetime, extra
) sub
GROUP BY 1
) x
JOIN source_table s USING (unixdatetime)
ORDER BY 1
$$
, $$SELECT unnest('{923,924,926,927}'::int[])$$
) AS final_result (unixdatetime int
, extras int[]
, trace1 numeric
, trace2 numeric
, trace3 numeric
, trace4 numeric);
db<>fiddle here - added my queries at the bottom of your fiddle.