Combine same-named columns from different tables *without* merging the columns - sql

I've got a table that stores collected data from several energy meters, and I created some views to show data from specific meters only. Now I want to combine those views for an overview of only the interesting data.
As far as I understood from reading other questions (so my question here could be a possible duplicate?), JOIN would be what I need, and it does create new columns, but the columns with the values of the meters get merged. I guess this is because the columns with the interesting values all have exactly the same name, but that is not what I want. I want the columns with the interesting values (named "1.8.0") not merged but in separate columns, as they are in the views, just next to each other for a better overview.
To shorten the post, I created the following example to show my problem:
http://sqlfiddle.com/#!17/a886d/31 (and maybe also http://sqlfiddle.com/#!17/a886d/30 )
The related query:
SELECT public.meter354123."0.9.2" AS datestamp,
public.meter354123."1.8.0" AS meter354123
FROM public.meter354123
FULL JOIN public.meter354124 ON public.meter354123."1.8.0" = public.meter354124."1.8.0";
For some reason I do not yet understand, the JOIN does not work as I would expect. If I JOIN ON the values (column "1.8.0") I get NULL rows, and if I JOIN ON the datestamps (column "0.9.2"), one column is missing completely from the result.
(If it is meaningful, feel free to edit the code from the fiddle into the question here; I thought it would be too much code to paste and I don't know how to explain my issue any more simply.)
In the end I would like to have a result like:
| datestamp (=col "0.9.2") | meterdata1 (=col "1.8.0") | meterdata2 (=col "1.8.0") | etc...
| 1220101                  | value1                    | value1                    | ...
| 1220201                  | value2                    | value2                    | ...
| 1220301                  | value3                    | value3                    | ...
Maybe the intermediate views are not necessary at all, and it is even possible to get this result straight from the original table without going through those views? I'm not a database expert, so I went with my current knowledge to accomplish this.
Thank you very much for looking into this and for any hints!

You could aggregate meter data into a CSV:
SELECT
"0.9.2" AS datestamp,
string_agg("1.8.0", ',') AS meterdata
FROM public.meter354123
GROUP BY "0.9.2"
Or to get an actual array:
SELECT
"0.9.2" AS datestamp,
array_agg("1.8.0") AS meterdata
FROM public.meter354123
GROUP BY "0.9.2"

Thank you for looking into this. I could "solve" it by using several more intermediate views and then simply JOIN-ing those views as follows:
see fiddle: http://sqlfiddle.com/#!17/a886d/40
CREATE VIEW meter354123 AS SELECT meterdata."0.0.0",
meterdata."0.9.1",
meterdata."0.9.2",
meterdata."1.8.0"
FROM meterdata
WHERE meterdata."0.0.0" = 354123::numeric AND meterdata."0.9.1" = 0::numeric
ORDER BY meterdata."0.0.0", meterdata."0.9.2" DESC
LIMIT 12;
CREATE VIEW meter354124 AS SELECT meterdata."0.0.0",
meterdata."0.9.1",
meterdata."0.9.2",
meterdata."1.8.0"
FROM meterdata
WHERE meterdata."0.0.0" = 354124::numeric AND meterdata."0.9.1" = 0::numeric
ORDER BY meterdata."0.0.0", meterdata."0.9.2" DESC
LIMIT 12;
CREATE VIEW meter354127 AS SELECT meterdata."0.0.0",
meterdata."0.9.1",
meterdata."0.9.2",
meterdata."1.8.0"
FROM meterdata
WHERE meterdata."0.0.0" = 354127::numeric AND meterdata."0.9.1" = 0::numeric
ORDER BY meterdata."0.0.0", meterdata."0.9.2" DESC
LIMIT 12;
CREATE VIEW "meter354123_1.8.0" AS SELECT public.meter354123."0.9.2" AS datestamp,
public.meter354123."1.8.0" AS meter354123
FROM public.meter354123
ORDER BY datestamp DESC
LIMIT 12;
CREATE VIEW "meter354124_1.8.0" AS SELECT public.meter354124."0.9.2" AS datestamp,
public.meter354124."1.8.0" AS meter354124
FROM public.meter354124
ORDER BY datestamp DESC
LIMIT 12;
CREATE VIEW "meter354127_1.8.0" AS SELECT public.meter354127."0.9.2" AS datestamp,
public.meter354127."1.8.0" AS meter354127
FROM public.meter354127
ORDER BY datestamp DESC
LIMIT 12;
SELECT "meter354123_1.8.0".datestamp,
"meter354123_1.8.0".meter354123,
"meter354124_1.8.0".meter354124,
"meter354127_1.8.0".meter354127
FROM "meter354123_1.8.0"
JOIN "meter354124_1.8.0" ON "meter354123_1.8.0".datestamp = "meter354124_1.8.0".datestamp
JOIN "meter354127_1.8.0" ON "meter354123_1.8.0".datestamp = "meter354127_1.8.0".datestamp;
which results in:
 datestamp | meter354123 | meter354124 | meter354127
-----------+-------------+-------------+-------------
   1220301 |    11055.66 |     5403.16 |    88556.23
   1220201 |    11054.64 |     5399.47 |    88195.41
   1220101 |    11053.33 |     5395.27 |    87799.84
I don't know if there is a more efficient/elegant solution, but at least this gives the wanted result.
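One small refinement, in case the three meters ever report on different datestamps: replacing the inner JOINs with FULL JOIN ... USING (datestamp) keeps rows where one of the meters has no reading. A sketch built on the same "_1.8.0" views:
SELECT datestamp, meter354123, meter354124, meter354127
FROM "meter354123_1.8.0"
FULL JOIN "meter354124_1.8.0" USING (datestamp)
FULL JOIN "meter354127_1.8.0" USING (datestamp)
ORDER BY datestamp DESC;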


Aggregating or Bundle a Many to Many Relationship in SQL Developer

So I have a single table with 2 columns: Sales_Order (called ccso) and Arrangement (called arrmap).
The table has distinct values for this combination, and the two fields have a many-to-many relationship:
1 ccso can have multiple arrmap
1 arrmap can have multiple ccso
All such combinations should be considered as one single bundle.
Objective:
Assign a final map to each Sales Order, namely the largest Arrangement in that bundle.
Example:
ccso 100-10015 has 3 arrangements --> each of those arrangements has a set of Sales Orders --> those sales orders in turn have a list of other arrangements, and so on
(Image : 1)
Therefore the answer definitely points to something that checks recursively. I've managed to write the code below, and it works as long as I hard-code a ccso in the WHERE clause, but I don't know how to proceed from here. (I'm an accountant by profession but have been finding more passion in coding recently.) I've searched the forums and the web for things like
recursive CTEs,
many-to-many aggregation,
cartesian products etc.,
and I'm sure there must be a term for this which I don't know yet.
I have to use SQL Developer or Google Sheets query and filter formulas.
SQL Developer has restrictions on some CTEs. If recursion is the way to go, I'd like to know how, and whether I can control the depth to, say, 4 or 5 iterations.
Ideally I'd want to update a third column with the final map if possible, but if not, a SELECT query result is just fine.
Codes I've tried
Code 1: As per Screenshot
WITH a1(ccso, amap) AS
(SELECT distinct a.ccso, a.arrmap
FROM rg_consol_map2 A
WHERE a.ccso = '100-10115' -- this condition defines the ultimate ancestors in your chain, change it as appropriate
UNION ALL
SELECT m.ccso, m.arrmap
FROM rg_consol_map2 m
JOIN a1
ON M.arrmap = a1.amap -- or m.ccso=a1.ccso
) /*if*/ CYCLE amap SET nemap TO 1 /*else*/ DEFAULT 0
SELECT DISTINCT amap FROM (SELECT ccso, amap FROM a1 ORDER BY 1 DESC) WHERE ROWNUM = 1
The main challenge here is how to remove the hard-coded ccso and do a join for each ccso.
Code 2 : Manual CTEs for depth
Here again the join outside the CTE gives me an error, and SQL Developer does not allow a WITH clause with an UPDATE statement - it only works for SELECT and cannot be enclosed within brackets as a subquery.
SELECT distinct ccso FROM
(
WITH ar1 AS
(SELECT distinct arrmap
FROM rg_consol_map
WHERE ccso = a.ccso
)
,so1 AS
(SELECT DISTINCT ccso
FROM rg_consol_map
WHERE arrmap IN (SELECT arrmap FROM ar1)
)
,ar2 AS
(SELECT DISTINCT ccso FROM rg_consol_map
where arrmap IN (select distinct arrmap FROM rg_consol_map
WHERE ccso IN (SELECT ccso FROM so1)
))
SELECT ar1.arrmap, NULL ccso FROM ar1
union all
SELECT null, ar2.ccso FROM ar2
UNION ALL
SELECT NULL arrmap, so1.ccso FROM so1
)
Am I missing something here, or is there an easier way to do this? I read something about MERGE and PROC SQL JOIN but was unable to get them to work; if that's the way to go, I will try further if someone can point me in the right direction.
(Image : 2)
(CSV File : [3])
Edit : Fixing CSV file link
https://github.com/karan360note/karanstackoverflow.git
It can be downloaded from there as "IC mapping many to many.csv".
Oracle 11g is being used.
Apologies in advance for the wall of text.
Your problem is a complex, multi-layered many-to-many query; there is no "easy" solution to this, because it is not a terribly ideal design choice. The safest bet really does involve multiple layers of CTEs or subqueries in order to reach all the depths you want, as the only ways I know to do this recursively rely on an anchor column (like "parentID") to direct the recursion in a linear fashion. We don't have that option here; we'd go in circles without a way to track our path.
Therefore, I went basic, with several subqueries. Every level checks for (a) all orders containing a particular ARRMAP item, and then (b) all additional items on those orders. It's clear enough for you to see the logic and modify it to your needs. It will generate a new table that contains the original CCSO, the linking ARRMAP, and the related CCSO. Link: https://pastebin.com/un70JnpA
This should enable you to go back and perform the updates you want, based on order # or order date, etc., in a much more straightforward fashion. Once you have an anchor column, a future CTE is much more trivial (just search for "CTE recursion tree hierarchy").
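Since the pastebin may not stay available, here is a rough, hypothetical sketch of the kind of self-join the linked script builds on (not the exact pastebin code), using the rg_consol_map2 table from the question; repeating the same join against the result adds one more depth level each time:
-- Level 1: for every order, the other orders sharing at least one arrangement
CREATE TABLE myTempTable AS
SELECT DISTINCT a.ccso,
                b.arrmap,
                b.ccso AS relatedOrder
FROM rg_consol_map2 a
JOIN rg_consol_map2 b
  ON b.arrmap = a.arrmap
 AND b.ccso  <> a.ccso;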
SELECT DISTINCT
CCSO, RELATEDORDER
FROM myTempTable
WHERE CCSO = '100-10115'; /* to find all orders by CCSO, query SELECT DISTINCT RELATEDORDER */
--WHERE ARRMAP = 'ARR10524'; /* to find all orders by ARRMAP, query SELECT DISTINCT CCSO */
EDIT:
To better explain what this table generates, let me simplify the problem.
If you have order
A with arrangements 1 and 2;
B with arrangements 2 and 3; and
C with arrangement 3;
then, by your initial inquiry and image, order A should relate to orders B and C, right? The query generates the following table when you SELECT DISTINCT ccso, relatedOrder:
+------+--------------+
| CCSO | RelatedOrder |
+------+--------------+
| A    | B            |
| A    | C            |
+------+--------------+
| B    | C            |
| B    | A            |
+------+--------------+
| C    | A            |
| C    | B            |
+------+--------------+
You can see here if you query WHERE CCSO = 'A' OR RelatedOrder = 'A', you'll get the same relationships, just flipped between the two columns.
+------+--------------+
| CCSO | RelatedOrder |
+------+--------------+
| A    | B            |
| A    | C            |
+------+--------------+
| B    | A            |
+------+--------------+
| C    | A            |
+------+--------------+
So query only CCSO or RelatedOrder.
As for the results of WHERE CCSO = '100-10115', see image here, which includes all the links you showed in your Image #1, as well as additional depths of relations.

PostgreSQL finding the 3 most popular articles in a news database

I'm currently trying to find the 3 most popular articles in a database. I want to print out the title and amount of views for each. I know I'll have to join two of the tables together (articles & log) in order to do so.
The articles table has a column of the titles, and one with a slug for the title.
The log table has a column of the paths in the format of /article/'slug'.
How would I join these two tables, filter out the path to compare to the slug column of the articles table, and use count to display the number of times it was viewed?
The correct query used was:
SELECT title, count(*) as views
FROM articles a, log l
WHERE a.slug=substring(l.path, 10)
GROUP BY title
ORDER BY views DESC
LIMIT 3;
If I understood you correctly, you just need to join the two tables on one column and aggregate. The catch is that you can't compare the columns directly; you have to apply some string functions first.
Assuming a schema like this:
article
| title  | slug   |
-------------------
| title1 | myslug |
| title2 | myslug |
log
| path                   |
--------------------------
| /article/'myslug'      |
| /article/'unmentioned' |
Try out something like the following:
select title, count(*) from article a join log l on concat('''', a.slug, '''') = substring(l.path, 10) group by title;
For more complex queries it can be helpful to first write smaller queries that help you figure out the whole query later. For example, just check whether the string functions return what you expect:
select substring(l.path, 10) from log l;
select concat('''', a.slug, '''') from article a;

The best way to keep count data in postgres

I need to create a statistic for some aggregate data split by days.
For example:
select
(select count(*) from bananas) as bananas_count,
(select count(*) from apples) as apples_count,
(select count(*) from bananas where color = 'yellow') as yellow_bananas_count;
obviously I will get:
bananas_count | apples_count | yellow_bananas_count
--------------+--------------+----------------------
          123 |          321 |                   15
but I need to get that data grouped by day; we need to know how many bananas we had yesterday.
My first thought was to create a view, but in that case I will not be able to split by dates (or I don't know how to do it).
I need a performant, database-side implementation of this task.
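A minimal sketch of one way to do this, assuming each table has a timestamp column recording when the row was inserted; the column name created_at below is an assumption, not something from the question:
-- Counts per day over the last 30 days (created_at is a hypothetical column)
SELECT g.day::date AS day,
       (SELECT count(*) FROM bananas b WHERE b.created_at::date = g.day::date) AS bananas_count,
       (SELECT count(*) FROM apples  a WHERE a.created_at::date = g.day::date) AS apples_count,
       (SELECT count(*) FROM bananas b
         WHERE b.created_at::date = g.day::date
           AND b.color = 'yellow') AS yellow_bananas_count
FROM generate_series(current_date - interval '30 days', current_date, interval '1 day') AS g(day)
ORDER BY day;
If the row counts get large, materializing these daily counts into a small summary table refreshed by a nightly job is the usual performance-oriented approach.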

Access join on first record

I have two tables in an Access database, tblProducts and tblProductGroups.
I am trying to run a query that joins both of these tables and brings back a single record for each product. The problem is that the current design allows a product to be listed in the tblProductGroups table more than once - i.e. a product can be a member of more than one group (I didn't design this!)
The query is this:
select tblProducts.intID, tblProducts.strTitle, tblProductGroups.intGroup
from tblProducts
inner join tblProductGroups on tblProducts.intID = tblProductGroups.intProduct
where tblProductGroups.intGroup = 56
and tblProducts.blnActive
order by tblProducts.intSort asc, tblProducts.curPrice asc
At the moment this returns results such as:
intID | strTitle | intGroup
1 | Product 1 | 1
1 | Product 1 | 2
2 | Product 2 | 1
2 | Product 2 | 2
Whereas I only want the join to be based on the first matching record, so that would return:
intID | strTitle | intGroup
1 | Product 1 | 1
2 | Product 2 | 1
Is this possible in Access?
Thanks in advance
Al
This option runs a subquery to find the minimum intGroup for each tblProducts.intID.
SELECT tblProducts.intID
, tblProducts.strTitle
, (SELECT TOP 1 intGroup
FROM tblProductGroups
WHERE intProduct=tblProducts.intID
ORDER BY intGroup ASC) AS intGroup
FROM tblProducts
WHERE tblProducts.blnActive
ORDER BY tblProducts.intSort ASC, tblProducts.curPrice ASC
This works for me. Maybe this helps someone:
SELECT
a.Lagerort_ID,
FIRST(a.Regal) AS frstRegal,
FIRST(a.Fachboden) AS frstFachboden,
FIRST(a.xOffset) AS frstxOffset,
FIRST(a.yOffset) AS frstyOffset,
FIRST(a.xSize) AS frstxSize,
FIRST(a.ySize) AS frstySize,
FIRST(a.Platzgr) AS frstyPlatzgr,
FIRST(b.Artikel_ID) AS frstArtikel_ID,
FIRST(b.Menge) AS frstMenge,
FIRST(c.Breite) AS frstBreite,
FIRST(c.Tiefe) AS frstTiefe,
FIRST(a.Fachboden_ID) AS frstFachboden_ID,
FIRST(b.BewegungsDatum) AS frstBewegungsDatum,
FIRST(b.ErzeugungsDatum) AS frstErzeugungsDatum
FROM ((Lagerort AS a)
LEFT JOIN LO_zu_ART AS b ON a.Lagerort_ID = b.Lagerort_ID)
LEFT JOIN Regal AS c ON a.Regal = c.Regal
GROUP BY a.Lagerort_ID
ORDER BY FIRST(a.Regal), FIRST(a.Fachboden), FIRST(a.xOffset), FIRST(a.yOffset);
I have non-unique entries for Lagerort_ID in the table LO_zu_ART. My goal was to use only the first entry found in LO_zu_ART to match against Lagerort.
The trick is to use FIRST() on any column but the grouped one. This may also work with MIN() or MAX(), but I have not tested it.
Also make sure to alias the fields with AS to names different from the original field names. I used frstFIELDNAME. This is important; otherwise I got errors.
Create a new query, qryFirstGroupPerProduct:
SELECT intProduct, Min(intGroup) AS lowest_group
FROM tblProductGroups
GROUP BY intProduct;
Then JOIN qryFirstGroupPerProduct (instead of tblProductGroups) to tblProducts.
Or you could do it as a subquery instead of a separate saved query, if you prefer.
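If you go the subquery route, a minimal sketch reusing the tables and columns from the question (depending on your Access version you may need to keep the inner SELECT as a saved query instead of an inline derived table):
SELECT p.intID, p.strTitle, g.lowest_group AS intGroup
FROM tblProducts AS p
INNER JOIN (
    SELECT intProduct, Min(intGroup) AS lowest_group
    FROM tblProductGroups
    GROUP BY intProduct
) AS g ON p.intID = g.intProduct
WHERE p.blnActive
ORDER BY p.intSort ASC, p.curPrice ASC;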
It's not very optimal, but if you're bringing in a few thousand records this will work:
Create a query that gets the max of tblProducts.intID from one table and call it qry_Temp.
Create another query and join qry_temp to the table you are trying to join against, and you should get your results.

SQL magic - query shouldn't take 15 hours, but it does

OK, so I have one really monstrous MySQL table (900k records, 180 MB total), and I want to extract from subgroups the records with the highest date_updated and calculate a weighted average in each group. The calculation runs for ~15 hours, and I have a strong feeling I'm doing it wrong.
First, monstrous table layout:
category
element_id
date_updated
value
weight
source_prefix
source_name
Only key here is on element_id (BTREE, ~8k unique elements).
And the calculation process:
Make a hash for each group and subgroup.
CREATE TEMPORARY TABLE `temp1` (INDEX ( `ds_hash` ))
SELECT `category`,
`element_id`,
`source_prefix`,
`source_name`,
`date_updated`,
`value`,
`weight`,
MD5(CONCAT(`category`, `element_id`, `source_prefix`, `source_name`)) AS `subcat_hash`,
MD5(CONCAT(`category`, `element_id`, `date_updated`)) AS `cat_hash`
FROM `bigbigtable` WHERE `date_updated` <= '2009-04-28'
I really don't understand this fuss with hashes, but it worked faster this way. Dark magic, I presume.
Find maximum date for each subgroup
CREATE TEMPORARY TABLE `temp2` (INDEX ( `subcat_hash` ))
SELECT MAX(`date_updated`) AS `maxdate` , `subcat_hash`
FROM `temp1`
GROUP BY `subcat_hash`;
Join temp1 with temp2 to find weighted average values for categories
CREATE TEMPORARY TABLE `valuebycats` (INDEX ( `category` ))
SELECT `temp1`.`element_id`,
`temp1`.`category`,
`temp1`.`source_prefix`,
`temp1`.`source_name`,
`temp1`.`date_updated`,
AVG(`temp1`.`value`) AS `avg_value`,
SUM(`temp1`.`value` * `temp1`.`weight`) / SUM(`weight`) AS `rating`
FROM `temp1` LEFT JOIN `temp2` ON `temp1`.`subcat_hash` = `temp2`.`subcat_hash`
WHERE `temp2`.`subcat_hash` = `temp1`.`subcat_hash`
AND `temp1`.`date_updated` = `temp2`.`maxdate`
GROUP BY `temp1`.`cat_hash`;
(Now that I've looked through it and written it all down, it seems to me that I should use an INNER JOIN in that last query, to avoid a 900k*900k temp table.)
Still, is there a normal way to do so?
UPD: some picture for reference:
removed dead ImageShack link
UPD: EXPLAIN for proposed solution:
+----+-------------+-------+------+---------------+------------+---------+--------------------------------------------------------------------------------------+--------+----------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-------+------+---------------+------------+---------+--------------------------------------------------------------------------------------+--------+----------+----------------------------------------------+
| 1 | SIMPLE | cur | ALL | NULL | NULL | NULL | NULL | 893085 | 100.00 | Using where; Using temporary; Using filesort |
| 1 | SIMPLE | next | ref | prefix | prefix | 1074 | bigbigtable.cur.source_prefix,bigbigtable.cur.source_name,bigbigtable.cur.element_id | 1 | 100.00 | Using where |
+----+-------------+-------+------+---------------+------------+---------+--------------------------------------------------------------------------------------+--------+----------+----------------------------------------------+
Using hashes is one of the ways in which a database engine can execute a join. It should be very rare that you'd have to write your own hash-based join; this certainly doesn't look like one of those cases, with a 900k-row table and some aggregates.
Based on your comment, this query might do what you are looking for:
SELECT cur.source_prefix,
cur.source_name,
cur.category,
cur.element_id,
MAX(cur.date_updated) AS DateUpdated,
AVG(cur.value) AS AvgValue,
SUM(cur.value * cur.weight) / SUM(cur.weight) AS Rating
FROM eev0 cur
LEFT JOIN eev0 next
ON next.date_updated < '2009-05-01'
AND next.source_prefix = cur.source_prefix
AND next.source_name = cur.source_name
AND next.element_id = cur.element_id
AND next.date_updated > cur.date_updated
WHERE cur.date_updated < '2009-05-01'
AND next.category IS NULL
GROUP BY cur.source_prefix, cur.source_name,
cur.category, cur.element_id
The GROUP BY performs the calculations per source+category+element.
The JOIN is there to filter out old entries. It looks for later entries, and then the WHERE statement filters out the rows for which a later entry exists. A join like this benefits from an index on (source_prefix, source_name, element_id, date_updated).
There are many ways of filtering out old entries, but this one tends to perform reasonably well.
OK, so 900K rows isn't a massive table; it's reasonably big, but your queries really shouldn't be taking that long.
First things first, which of the 3 statements above is taking the most time?
The first problem I see is with your first query. Your WHERE clause doesn't include an indexed column, which means it has to do a full table scan of the entire table.
Create an index on the date_updated column, then run the query again and see what that does for you.
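For example (the index name is arbitrary):
-- Supports the WHERE `date_updated` <= '...' filter in the first statement
CREATE INDEX idx_date_updated ON bigbigtable (date_updated);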
If you don't need the hashes and are only using them to avail of the dark magic, then remove them completely.
Edit: Someone with more SQL-fu than me will probably reduce your whole set of logic into one SQL statement without the use of the temporary tables.
Edit: My SQL is a little rusty, but are you joining twice in the third SQL statement? Maybe it won't make a difference, but shouldn't it be:
SELECT temp1.element_id,
temp1.category,
temp1.source_prefix,
temp1.source_name,
temp1.date_updated,
AVG(temp1.value) AS avg_value,
SUM(temp1.value * temp1.weight) / SUM(weight) AS rating
FROM temp1 LEFT JOIN temp2 ON temp1.subcat_hash = temp2.subcat_hash
WHERE temp1.date_updated = temp2.maxdate
GROUP BY temp1.cat_hash;
or
SELECT temp1.element_id,
temp1.category,
temp1.source_prefix,
temp1.source_name,
temp1.date_updated,
AVG(temp1.value) AS avg_value,
SUM(temp1.value * temp1.weight) / SUM(weight) AS rating
FROM temp1, temp2
WHERE temp2.subcat_hash = temp1.subcat_hash
AND temp1.date_updated = temp2.maxdate
GROUP BY temp1.cat_hash;