Merge computed data from two tables back into one of them - SQL

I have the following situation (as a reduced example). Two tables, Measures1 and Measures2, each of which stores an ID, a Weight in grams, and optionally a Volume in fluid ounces. (In reality, Measures1 has a good deal of other data that is irrelevant here.)
Contents of Measures1:
+----+----------+--------+
| ID | Weight   | Volume |
+----+----------+--------+
|  1 | 100.0000 | NULL   |
|  2 | 200.0000 | NULL   |
|  3 | 150.0000 | NULL   |
|  4 | 325.0000 | NULL   |
+----+----------+--------+
Contents of Measures2:
+----+----------+----------+
| ID | Weight   | Volume   |
+----+----------+----------+
|  1 |  75.0000 |  10.0000 |
|  2 | 400.0000 |  64.0000 |
|  3 | 100.0000 |  22.0000 |
|  4 | 500.0000 | 100.0000 |
+----+----------+----------+
These tables describe equivalent weights and volumes of a substance. E.g. 10 fluid ounces of substance 1 weighs 75 grams. The IDs are related: ID 1 in Measures1 is the same substance as ID 1 in Measures2.
What I want to do is fill in the NULL volumes in Measures1 using the information in Measures2, but keeping the weights from Measures1 (then, ultimately, I can drop the Measures2 table, as it will be redundant). For the sake of simplicity, assume that all volumes in Measures1 are NULL and all volumes in Measures2 are not.
I can compute the volumes I want to fill in with the following query:
SELECT Measures1.ID, Measures1.Weight,
       (Measures2.Volume * (Measures1.Weight / Measures2.Weight)) AS DesiredVolume
FROM Measures1 JOIN Measures2 ON Measures1.ID = Measures2.ID;
Producing:
+----+----------+-----------------+
| ID | Weight   | DesiredVolume   |
+----+----------+-----------------+
|  4 | 325.0000 | 65.000000000000 |
|  3 | 150.0000 | 33.000000000000 |
|  2 | 200.0000 | 32.000000000000 |
|  1 | 100.0000 | 13.333333333333 |
+----+----------+-----------------+
But I am at a loss for how to actually insert these computed values into the Measures1 table.
Preferably, I would like to be able to do it with a single query rather than writing a script or stored procedure that iterates through every ID in Measures1. But even then, I am worried this might not be possible, because the MySQL documentation says you cannot update a table and select from that same table in a subquery, and I think any solution would need to do that.
I know that one workaround might be to create a new table with the results of the above query (also selecting all of the other non-Volume fields in Measures1) and then drop both tables and replace Measures1 with the newly-created table, but I was wondering if there was any better way to do it that I am missing.

MySQL's multi-table UPDATE syntax lets you join Measures2 directly in the UPDATE statement, so no SELECT subquery is needed:

UPDATE Measures1
JOIN Measures2 ON Measures1.ID = Measures2.ID
SET Measures1.Volume = Measures2.Volume * (Measures1.Weight / Measures2.Weight);
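Once the computed volumes have been verified, the now-redundant table can be dropped, as the question anticipates:

DROP TABLE Measures2;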

Related

Select data from multiple existing tables dynamically

I have monthly tables ("T1") in the database, named like table_082020, table_092020, table_102020. Each contains several million records.
table_082020:
+----+----------+-------+
| id | date     | value |
+----+----------+-------+
|  1 | 20200816 | abc   |
|  2 | 20200817 | xyz   |
+----+----------+-------+
table_092020:
+----+----------+-------+
| id | date     | value |
+----+----------+-------+
|  1 | 20200901 | cba   |
|  2 | 20200901 | zyx   |
+----+----------+-------+
There is a second table, "T2", that stores a reference to the primary key of the first table and, effectively, to the monthly table itself, only without the word "table_".
+------------+--------+--------+--------+--------+
| rec_number | period | field1 | field2 | field3 |
+------------+--------+--------+--------+--------+
|        777 | 092020 | aaa    | bbb    | ccc    |
|        987 | 102020 | eee    | fff    | ggg    |
|     123456 | 082020 | xxx    | yyy    | zzz    |
+------------+--------+--------+--------+--------+
There is also a third table, "T3", which maps each period to the corresponding table name.
+--------+--------------+
| period | table_name   |
+--------+--------------+
| 082020 | table_082020 |
| 092020 | table_092020 |
| 102020 | table_102020 |
+--------+--------------+
How can I combine the three tables to get data dynamically for several periods? For example, from 15082020 to 04092020, where the data will be located in different monthly tables.
There really is no good reason for storing data in this format. It makes querying a nightmare.
If you cannot change the data format, then add a view each month that combines the data:
create view t as
select '202010' as YYYYMM, t.*
from table_102020 t
union all
select '202009' as YYYYMM, t.*
from table_092020 t
union all
. . .;
For a once-a-month effort, you can spend 10 minutes writing the code, with a calendar reminder to do so. Or, better yet, set up a job that uses dynamic SQL to generate the code, and run it as a job after the underlying tables are created.
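As a rough sketch of that dynamic-SQL job (assuming SQL Server 2017+ for STRING_AGG and CREATE OR ALTER VIEW, and that T3 lists every monthly table; everything else here is illustrative):

-- Rebuild the combined view from the period/table mapping in T3
DECLARE @sql nvarchar(max);

SELECT @sql = N'CREATE OR ALTER VIEW t AS '
            + STRING_AGG(CONVERT(nvarchar(max),
                  N'SELECT ''' + period + N''' AS period, x.* FROM '
                + QUOTENAME(table_name) + N' x'), N' UNION ALL ')
FROM T3;

EXEC sp_executesql @sql;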
What should you be doing? Well, 5 million rows a month isn't actually that much data. But if you are concerned about it, you can use table partitioning to store the data by month. This can be a little tricky; for instance, the primary key needs to include the partitioning key.
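For reference, a minimal partitioning sketch, again assuming SQL Server; the partition function and scheme names are illustrative:

-- One partition per month; new boundaries are added later with ALTER PARTITION FUNCTION
CREATE PARTITION FUNCTION pf_monthly (date)
AS RANGE RIGHT FOR VALUES ('2020-08-01', '2020-09-01', '2020-10-01');

CREATE PARTITION SCHEME ps_monthly
AS PARTITION pf_monthly ALL TO ([PRIMARY]);

CREATE TABLE t (
    id     bigint       NOT NULL,
    [date] date         NOT NULL,
    value  varchar(100) NULL,
    CONSTRAINT pk_t PRIMARY KEY (id, [date])  -- the partitioning column must be part of the key
) ON ps_monthly ([date]);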

Perform join on joined queries with N number of rows per type

This is a follow-up question to [stackexchange]: Perform join on joined queries. Please read this first.
The initial question, on how to transpose rows based on a certain type, has been answered, but now I also need to extend this to a potential N number of rows.
Example table for this case:
+------+--------------+--------------+--------+----------+
| type | information1 | information2 | Notes  | uniqueID |
+------+--------------+--------------+--------+----------+
| IN   | infoA        | anotherinfo1 | NotesA | SN1      |
| IN   | infoB        | anotherinfo2 | NotesB | SN1      |
| OUT  | infoC        | anotherinfo3 | NotesC | SN2      |
| OUT  | infoD        | anotherinfo4 | NotesD | SN2      |
+------+--------------+--------------+--------+----------+
Basically, I think I would need a loop over each 'IN' and 'OUT' type in order to transpose the N IN and OUT rows to columns, and then later join with another table. I have no idea how to do this.
Table (B) in the related SO ticket shows the target output; it would need to be extended by N columns (Info N, etc.).
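One common pattern for this kind of N-row transposition is conditional aggregation over numbered rows (a sketch only, assuming a database with window functions; the table name example_table is hypothetical, and the two slots per type shown here would have to be extended, or generated with dynamic SQL, to reach N):

SELECT uniqueID,
       MAX(CASE WHEN type = 'IN'  AND rn = 1 THEN information1 END) AS in_information1_a,
       MAX(CASE WHEN type = 'IN'  AND rn = 2 THEN information1 END) AS in_information1_b,
       MAX(CASE WHEN type = 'OUT' AND rn = 1 THEN information1 END) AS out_information1_a,
       MAX(CASE WHEN type = 'OUT' AND rn = 2 THEN information1 END) AS out_information1_b
FROM (
    -- Number the rows within each uniqueID/type group so each gets its own column slot
    SELECT t.*,
           ROW_NUMBER() OVER (PARTITION BY uniqueID, type ORDER BY information1) AS rn
    FROM example_table t
) numbered
GROUP BY uniqueID;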

Pivot Way or Straight Way in SQL

I have the following association stored in a pivoted way.
| DOCID | Note1 | Note2 | Note3 |
|-------|-------|-------|-------|
| 1     | N11   | N21   | N31   |
| 2     | N12   | NULL  | N32   |
| 3     | N13   | N23   | N33   |
| 4     | N14   | N24   | NULL  |
| 5     | NULL  | N25   | N35   |
The other way of storing the above is as below.
| DOCID | Field | Value |
|-------|-------|-------|
| 1     | Note1 | N11   |
| 1     | Note2 | N21   |
| 1     | Note3 | N31   |
| 2     | Note1 | N12   |
| 2     | Note3 | N32   |
| 3     | Note1 | N13   |
| 3     | Note2 | N23   |
| 3     | Note3 | N33   |
| 4     | Note1 | N14   |
| 4     | Note2 | N24   |
| 5     | Note2 | N25   |
| 5     | Note3 | N35   |
Which of the above two options is better?
I might have more NULL values; in that case the 2nd option seems better, as it will have fewer records.
But when I have 10 million records, that count gets multiplied by the number of notes (in our case, 30 million records minus the NULLs).
So, considering performance for fetching associated records, which option is better and why?
I will have more notes associated with DocIDs.
"Better" is often subjective. In this case, though, I think one method is generally better than the other.
The second approach is the better approach -- one row per document/note pair. In general, when you have columns that are only distinguished by a number -- but otherwise contain the same things -- then the data model is suspect. There may be good reasons for representing the data across columns, but the structure should be questioned. If you still need it, then fine.
Consider a simple query such as which ids have a particular note. In the first representation, you need to check all three columns. This makes it hard to use an index. And, it negates the value of columnar storage.
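To illustrate, a sketch of that query against both layouts (docs_pivoted and doc_notes are hypothetical names for the first and second representations above):

-- First representation: every note column must be searched, defeating a single-column index
SELECT DOCID FROM docs_pivoted
WHERE Note1 = 'N23' OR Note2 = 'N23' OR Note3 = 'N23';

-- Second representation: one indexable predicate
SELECT DOCID FROM doc_notes
WHERE Value = 'N23';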
If the business changes and you suddenly want 4 notes per docid -- or want to limit them to 2 -- then the table needs to be restructured. That is an expensive process.
I'm not sure what the notes refer to. But if they represent a foreign key relationship to another table, then the pivoted version needs to maintain multiple foreign key relationships -- for essentially the same purpose.

SAP Business Objects Cross Table Data Duplication

I'm using Business Objects to construct a simple report on whether a unit is on or off for a given day. When constructing a vertical table, the data is correct and looks like this:
Unit ID | Status | Date
1       | On     | 2016-09-10
1       | On     | 2016-09-11
1       | Off    | 2016-09-12
2       | Off    | 2016-09-10
2       | Off    | 2016-09-11
2       | On     | 2016-09-12
However, the cross table I've created, with columns of "Date" and rows of "Unit ID", duplicates each Unit ID, producing an entire row of 'On' followed by an entire row of 'Off', like:
____| 2016-09-10 | 2016-09-11 | 2016-09-12
 1  | On         | On         | On
 1  | Off        | Off        | Off
 2  | On         | On         | On
 2  | Off        | Off        | Off
instead of what it should be:
____| 2016-09-10 | 2016-09-11 | 2016-09-12
 1  | On         | On         | Off
 2  | Off        | Off        | On
Any suggestions as to why it's doing this? The table isn't particularly useful if it has these duplicate rows and I can't understand why it's resulting in this odd table.
It turns out the "Status" field was a dimension, but the cross table requires the body data field to be a measure. Simply making a new variable that was a measure equal to "Status" solved the issue.

How should I create an SQL table with stock information so that I can add new stocks and new fields easily?

I want to create an SQL table where I can have any number of stocks (e.g. MSFT, GOOG, IBM) and any number of fields (e.g. Full Name, Sector, Country). But I want the flexibility to add new stocks and new fields as I go along. Say I want to add a new stock like AAPL, or a new boolean field for whether they pay dividends or not. I don't expect to store dynamic fields like CurrentStockPrice, but the information will have to change periodically, for instance when a company changes its dividend policy. How do I design the table so that I don't have to change its structure?
I had one idea where I could have a new table for each stock, and a master table that has all the stocks, and a pointer to each individual stock's table. That way, I can freely add new stocks, and new fields easily. But I'm not very familiar with SQL, and would like an expert opinion on how it should be implemented.
The simple answer is that your requirements are not a good fit for SQL. The most important concern is not how to store the data, but how you will retrieve it - what kind of query will you need to run?
EAV allows you to store data whose schema you don't know in advance - but it has lots of drawbacks when querying. Even moderately complex queries (find all stocks where the dividend was paid between 1 and 12 Jan, in the tech sector, whose CEO is female) run into a lot of complexity.
Creating a new table for each type of record very quickly gets crazy too - imagine the query above if you have to search dozens or hundreds of type-specific tables.
The relational model works best when you know the schema of the information in advance.
If you don't know the schema, consider using a NoSQL solution, or use SQL Server's support for XML or JSON. Store the fixed data in rows & columns, and the variable data in XML or JSON. Performance for searching is pretty good, and it's much less convoluted as a solution.
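A rough sketch of that hybrid approach, assuming SQL Server 2016+ for JSON_VALUE; the table and column names here are illustrative:

-- Fixed attributes as ordinary columns, variable ones in a JSON blob
CREATE TABLE Stocks (
    StockID  int IDENTITY PRIMARY KEY,
    Symbol   varchar(10)   NOT NULL,
    FullName varchar(100)  NOT NULL,
    Extra    nvarchar(max) NULL  -- e.g. {"sector": "Tech", "paysDividend": true}
);

-- Search into the variable part with JSON_VALUE
SELECT Symbol, JSON_VALUE(Extra, '$.sector') AS Sector
FROM Stocks
WHERE JSON_VALUE(Extra, '$.paysDividend') = 'true';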
Just to expand on my comment, because the question itself invites a couple of common schema anti-patterns. Some hybrid of EAV may actually be a good fit if you are willing to give up some flexibility and simplicity in your SQL, and you aren't looking for fast queries.
EAV
EAV, or Entity-Attribute-Value, is a design where, in your case, you would have a master table of stocks with some common attributes, or maybe even ticker info with a datetime. Something like:
+---------+--------+--------------+
| stockid | symbol | name         |
+---------+--------+--------------+
|       1 | goog   | Google       |
|       2 | msft   | Microsoft    |
|       3 | gpro   | GoPro        |
|       4 | xom    | Exxon Mobile |
+---------+--------+--------------+
And a second table (the EAV table) to store ever-changing attributes:
+---------+-----------+------------+
| stockid | attribute | value      |
+---------+-----------+------------+
|       1 | country   | us         |
|       1 | favorite  | TRUE       |
|       1 | startyear | 2004       |
|       3 | favorite  |            |
|       3 | bobspick  | TRUE       |
|       4 | country   | us         |
|       3 | country   | us         |
|       2 | startyear | 1986       |
|       2 | employees | 18000      |
|       3 | marketcap | 1850000000 |
+---------+-----------+------------+
And perhaps a third table to store that minute-by-minute ticker info:
+---------+----------------+--------+
| stockid | datetime       | value  |
+---------+----------------+--------+
|       1 | 9/21/2016 8:15 | 771.41 |
|       1 | 9/21/2016 8:14 | 771.39 |
|       1 | 9/21/2016 8:12 | 771.37 |
|       1 | 9/21/2016 8:10 | 771.35 |
|       1 | 9/21/2016 8:08 | 771.33 |
|       1 | 9/21/2016 8:06 | 771.31 |
|       1 | 9/21/2016 8:04 | 771.29 |
|       2 | 9/21/2016 8:15 |  56.81 |
|       2 | 9/21/2016 8:14 |  56.82 |
|       2 | 9/21/2016 8:12 |  56.83 |
|       2 | 9/21/2016 8:10 |  56.84 |
+---------+----------------+--------+
Generally this is considered poor design, since stitching the data back together into a format like:
+-------------+-----------+---------+-----------+----------+--------------+
| stocksymbol | stockname | country | startyear | bobspick | currentvalue |
+-------------+-----------+---------+-----------+----------+--------------+
causes you to write a query that is not fun to look at:
SELECT
    stocks.symbol AS stocksymbol,
    stocks.name,
    country.value AS country,
    bobspick.value AS bobspick,
    startyear.value AS startyear,
    stockvalue.stockvalue
FROM
    stocks
    LEFT OUTER JOIN (SELECT stockid, value FROM fieldsTable WHERE attribute = 'country') AS country
        ON stocks.stockid = country.stockid
    LEFT OUTER JOIN (SELECT stockid, value FROM fieldsTable WHERE attribute = 'bobspick') AS bobspick
        ON stocks.stockid = bobspick.stockid
    LEFT OUTER JOIN (SELECT stockid, value FROM fieldsTable WHERE attribute = 'startyear') AS startyear
        ON stocks.stockid = startyear.stockid
    LEFT OUTER JOIN (SELECT stockid, MAX(value) AS stockvalue FROM ticketTable GROUP BY stockid) AS stockvalue
        ON stocks.stockid = stockvalue.stockid
WHERE stocks.symbol IN ('goog', 'msft')
You can see that every "field" in the EAV table gets its own subquery, which means we read that table from storage three times. We gain flexibility in the database design, but we lose it when querying.
Imagine a more traditional schema:
+---------+--------+--------------+---------+----------+----------+-----------+------------+-----------+
| stockid | symbol | name         | country | bobspick | favorite | startyear | marketcap  | employees |
+---------+--------+--------------+---------+----------+----------+-----------+------------+-----------+
|       1 | goog   | Google       | us      |          | TRUE     | 2004      |            |           |
|       2 | msft   | Microsoft    |         |          |          | 1986      |            | 18000     |
|       3 | gpro   | GoPro        | us      | TRUE     |          |           | 1850000000 |           |
|       4 | xom    | Exxon Mobile | us      |          |          |           |            |           |
+---------+--------+--------------+---------+----------+----------+-----------+------------+-----------+
and
+---------+----------------+--------+
| stockid | datetime       | value  |
+---------+----------------+--------+
|       1 | 9/21/2016 8:15 | 771.41 |
|       1 | 9/21/2016 8:14 | 771.39 |
|       1 | 9/21/2016 8:12 | 771.37 |
|       1 | 9/21/2016 8:10 | 771.35 |
|       1 | 9/21/2016 8:08 | 771.33 |
|       1 | 9/21/2016 8:06 | 771.31 |
|       1 | 9/21/2016 8:04 | 771.29 |
|       2 | 9/21/2016 8:15 |  56.81 |
|       2 | 9/21/2016 8:14 |  56.82 |
|       2 | 9/21/2016 8:12 |  56.83 |
|       2 | 9/21/2016 8:10 |  56.84 |
+---------+----------------+--------+
To get the same results:
SELECT
    stocks.symbol AS stocksymbol,
    stocks.name,
    stocks.country,
    stocks.bobspick,
    stocks.startyear,
    stockvalue.stockvalue
FROM
    stocks
    LEFT OUTER JOIN (SELECT stockid, MAX(value) AS stockvalue FROM ticketTable GROUP BY stockid) AS stockvalue
        ON stocks.stockid = stockvalue.stockid
WHERE stocks.symbol IN ('goog', 'msft')
Now we have the flexibility in the query where we can quickly change out fields without monkeying around in subqueries, but we have to hassle our DBA every time we want to add a field.
There is a further abstraction from EAV that is definitely something to avoid. I don't know if it has a name, but I call it "database in a database". Here you have a table of tables, a table of fields, and a table of values. The entire schema is kept as records, as are the values that would be stored in that schema. Ultimate flexibility is gained, but the SQL you will write to get at your data will be nightmarish, and your query speeds will degrade quickly as you add to your data/schema/data/schema mess.
As for your last idea of adding a new table for each stock: if the fields you are going to track for each stock are different (startyear, employees, and marketcap for one stock; marketmax, country, address, yearsinbusiness for another) and you aren't planning on adding new stocks often, then it may be a good fit. I'm betting, though, that the attributes/fields you track on stock1 will also be tracked on stock2, and therefore suggest that you have a single stock table with all those common attributes, and maybe an EAV table to track attributes that are particular to each stock, so you can have the flexibility you need.
In each of these schemas I would also suggest that you put your ticker data in its own table. Whether you are capturing ticker data by the minute, hour, day, week, or month, because it is datetime-level data, it deserves its own table. (Unless you are only going to track the most current value, in which case it becomes a field.)
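A minimal sketch of that separate ticker table, matching the ticketTable used in the queries above (the composite primary key is one reasonable choice, not the only one):

CREATE TABLE ticketTable (
    stockid    int           NOT NULL,
    [datetime] datetime      NOT NULL,
    value      decimal(12,4) NOT NULL,
    PRIMARY KEY (stockid, [datetime])  -- one reading per stock per timestamp
);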
If you want to add fields dynamically, but without actually altering the schema of the table, then you should use a vertical schema for the table and retrieve the data via a PIVOT statement.
In this manner you can add as many Field/Value pairs as you wish for each stock/customer pairing.
The basic table would have 5 columns perhaps:
ID (Identity); StockName; AttributeName; Value; Timestamp;
If you take a look at how SQL organizes its table schema in INFORMATION_SCHEMA.COLUMNS, it provides this very same vertical schema layout for you.
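A sketch of reading such a vertical table back out with PIVOT (T-SQL syntax; the attribute list in the IN clause must be hard-coded or generated with dynamic SQL, and the table name StockAttributes is illustrative):

SELECT StockName, [Sector], [Country], [PaysDividend]
FROM (
    -- Keep only the columns that matter so PIVOT groups on StockName alone
    SELECT StockName, AttributeName, Value
    FROM StockAttributes
) AS src
PIVOT (
    MAX(Value) FOR AttributeName IN ([Sector], [Country], [PaysDividend])
) AS p;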