sum rows from one table and move it to another table - sql

How can I sum rows from one table (based on selected criteria) and put the result in another table?
I have a table related to costs within project:
Table "costs":
id| CostName |ID_CostCategory| PlanValue|DoneValue
-------------------------------------------------------
1 | books |1 |100 |120
2 | flowers |1 |90 |90
3 | car |2 |150 |130
4 | gas |2 |50 |45
and I want to put the sum of "DoneValue" of each ID_CostCategory into table "CostCategories"
Table "CostCategories":
id|name |planned|done
------------------------
1 |other|190 |takes the sum from above table
2 |car |200 |takes the sum from above table
Many thanks

I would not store this: as soon as anything changes in Costs, CostCategories will be out of date. Instead I would create a view, e.g.:
CREATE VIEW CostCategoriesSum
AS
SELECT CostCategories.ID,
CostCategories.Name,
SUM(COALESCE(Costs.PlanValue, 0)) AS Planned,
SUM(COALESCE(Costs.DoneValue, 0)) AS Done
FROM CostCategories
LEFT JOIN Costs
ON Costs.ID_CostCategory = CostCategories.ID
GROUP BY CostCategories.ID, CostCategories.Name;
Now instead of referring to the table, you can refer to the view and the Planned and Done totals will always be up to date.
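As a quick sanity check, here is the view approach run end-to-end with Python's sqlite3 (SQLite is just a stand-in for whatever engine the asker uses; table and column names follow the question):

```python
# Demonstrates the CostCategoriesSum view from the answer, using the
# question's sample data in an in-memory SQLite database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CostCategories (id INTEGER PRIMARY KEY, name TEXT, planned INTEGER);
CREATE TABLE Costs (id INTEGER PRIMARY KEY, CostName TEXT,
                    ID_CostCategory INTEGER, PlanValue INTEGER, DoneValue INTEGER);
INSERT INTO CostCategories VALUES (1, 'other', 190), (2, 'car', 200);
INSERT INTO Costs VALUES
  (1, 'books',   1, 100, 120),
  (2, 'flowers', 1,  90,  90),
  (3, 'car',     2, 150, 130),
  (4, 'gas',     2,  50,  45);

CREATE VIEW CostCategoriesSum AS
SELECT CostCategories.id AS id,
       CostCategories.name AS name,
       SUM(COALESCE(Costs.PlanValue, 0)) AS Planned,
       SUM(COALESCE(Costs.DoneValue, 0)) AS Done
FROM CostCategories
LEFT JOIN Costs ON Costs.ID_CostCategory = CostCategories.id
GROUP BY CostCategories.id, CostCategories.name;
""")

rows = conn.execute(
    "SELECT id, name, Planned, Done FROM CostCategoriesSum ORDER BY id"
).fetchall()
print(rows)  # [(1, 'other', 190, 210), (2, 'car', 200, 175)]
```

The Done totals (120 + 90 = 210 and 130 + 45 = 175) stay current automatically as rows in Costs change.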

If you do want to store the totals, update the existing CostCategories rows rather than inserting new ones:
UPDATE CostCategories
SET done = (SELECT SUM(c.DoneValue)
            FROM costs c
            WHERE c.ID_CostCategory = CostCategories.id);

Related

If condition TRUE in a row (that is grouped)

Table:
|Months |ID|Commission|
|2020-01|1 |2312 |
|2020-02|2 |24412 |
|2020-02|1 |123 |
|... |..|... |
What I need:
COUNT(Months),
ID,
SUM(Commission),
Country
GROUP BY ID...
How it should look:
|Months |ID|Commission|
|4 |1 |5356 |
|6 |2 |5436 |
|... |..|... |
So I want to know how many months each ID received commission. However (and that's the part where I need your help), if the ID is still receiving commission this month (the current month), I want to exclude him from the list. If he stopped receiving commission last month or last year, I want to see him in the table.
In other words, I want a table of old clients (those who don't receive commission anymore).
Use aggregation. Assuming there is one row per month:
select id, count(*)
from t
group by id
having max(months) < date_format(now(), '%Y-%m');
Note this uses MySQL syntax, which was one of the original tags.
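The same pattern, demonstrated with Python's sqlite3 (SQLite's strftime('%Y-%m', 'now') replaces MySQL's date_format(now(), '%Y-%m'); the toy data is an assumption based on the question's sample):

```python
# A client whose latest commission month is before the current month is kept;
# a client with commission in the current month is filtered out by HAVING.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (months TEXT, id INTEGER, commission INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    ("2020-01", 1, 2312),
    ("2020-02", 1, 123),     # id 1 stopped receiving commission long ago
    ("2020-02", 2, 24412),
])
# id 2 also has commission in the current month, so it must be excluded
conn.execute("INSERT INTO t VALUES (strftime('%Y-%m', 'now'), 2, 500)")

rows = conn.execute("""
    SELECT id, COUNT(*)
    FROM t
    GROUP BY id
    HAVING MAX(months) < strftime('%Y-%m', 'now')
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 2)] -- only the client who stopped before this month
```

This relies on the 'YYYY-MM' string format sorting chronologically, which is why the plain `<` comparison works.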

Aggregate query in MS-Access that calculates current and cumulative amounts by account is exceptionally slow

I'm having trouble with an Access query. Although it runs, it is exceptionally slow, and I fear I may be overlooking a simpler, more elegant solution in my query design.
For context, I work in an accounts receivable office. We have thousands of customers, and each customer can have one or more accounts. Every month, transactions post to the various accounts, and I am preparing the invoices for the customers. In my particular case, a customer's first invoice is always 001, then 002, and so on. We bill monthly.
To describe a simplified example, in month of January 2020, customer A may have the following transactions in the Transaction table:
+-----------------------------+
|TransID|Account|Amount|InvNum|
+-----------------------------+
|1 |1 |$10.00|001 |
|2 |2 |$5.00 |001 |
|3 |3 |$2.00 |001 |
+-----------------------------+
So, in the above example, I would want to issue invoice 001 to customer A for a total of $17.00, broken out by account. The invoice would look something like this:
+-----------------------+
|Account|Current|ToDate |
|1 |$10.00 |$10.00 |
|2 |$5.00 |$5.00 |
|3 |$2.00 |$2.00 |
+-----------------------+
$17.00 $17.00
Now, suppose that in February 2020, additional transactions post. A simplified version of the Transaction table would look like this:
+-----------------------------+
|TransID|Account|Amount|InvNum|
+-----------------------------+
|1 |1 |$10.00|001 |
|2 |2 |$5.00 |001 |
|3 |3 |$2.00 |001 |
|4 |1 |$3.00 |002 |
|5 |3 |$4.00 |002 |
+-----------------------------+
Invoice #002 issued to customer A would need to look something like this:
+-----------------------+
|Account|Current|ToDate |
|1 |$3.00 |$13.00 |
|2 |$0.00 |$5.00 |
|3 |$4.00 |$6.00 |
+-----------------------+
$7.00 $24.00
The query I'm having trouble with is specifically designed to capture the month's activity by account and to calculate a cumulative total for the "ToDate" column on the invoice. The challenge is that not every account will have transactions in a given month. Note that account 2 did not post any transactions in February. So invoice 002 has to show a current amount of $0.00 for account 2, but it also needs to know the cumulative amount ($5.00 + $0.00 = $5.00) for account 2.
The problematic query is itself made up of a few subqueries:
BillNumByAccountQ: an aggregate query that selects and groups all accounts by invoice number.
CurrentQ: Also an aggregate query that selects and sums all the transaction amounts (from the transaction table), which is left-joined to BillNumByAccountQ. The left-join is necessary to ensure that there is a row for every bill number. The "Current" field in this query is given by the expression Sum(Nz(Amount,0)). The result set of this query contains over 20K rows.
Finally, the problematic query is defined by the following SQL statement:
SELECT
Q1.Account
,Q1.InvNum
,Q1.CURRENT
,(
SELECT SUM(CURRENT)
FROM CurrentQ
WHERE Q1.Account = Account
AND Q1.InvNum >= InvNum
) AS ToDate
FROM CurrentQ AS Q1;
This query runs and runs and runs, and it eventually causes Access to stop responding. I do not even know how many rows it has because it never finishes running. I fear that I'm overlooking a way simpler solution.
Apologies for so much information, and I appreciate any advice on simplifying this.
Generally, doing a sub-query inside a select statement is slow, since it often needs to run the sub-query for every single row of the main query.
Doing the aggregation all at once is likely going to be faster:
SELECT
Q1.Account
,Q1.InvNum
,Q1.CURRENT
,SUM(i.CURRENT) AS ToDate
FROM CurrentQ AS Q1
JOIN CurrentQ AS i
ON i.Account = Q1.Account
AND i.InvNum <= Q1.InvNum
GROUP BY Q1.Account, Q1.InvNum, Q1.Current;
In addition, if you're able to edit the database, you'd probably want to add indexes for the Account and InvNum columns.
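Here is the aggregated self-join run against the question's sample figures with Python's sqlite3 (the CURRENT column is renamed Amount here, since CURRENT is a reserved word in several engines; note the join direction, i.InvNum <= Q1.InvNum, so each invoice sums everything up to and including itself):

```python
# CurrentQ stands in for the Access sub-query: one row per account per invoice.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CurrentQ (Account INTEGER, InvNum TEXT, Amount REAL)")
conn.executemany("INSERT INTO CurrentQ VALUES (?, ?, ?)", [
    (1, '001', 10.0), (2, '001', 5.0), (3, '001', 2.0),   # invoice 001
    (1, '002', 3.0),  (2, '002', 0.0), (3, '002', 4.0),   # invoice 002
])

rows = conn.execute("""
    SELECT Q1.Account, Q1.InvNum, Q1.Amount, SUM(i.Amount) AS ToDate
    FROM CurrentQ AS Q1
    JOIN CurrentQ AS i
      ON i.Account = Q1.Account
     AND i.InvNum <= Q1.InvNum          -- everything up to this invoice
    GROUP BY Q1.Account, Q1.InvNum, Q1.Amount
    ORDER BY Q1.InvNum, Q1.Account
""").fetchall()
print(rows)
# invoice 002 shows Current $3/$0/$4 and ToDate $13/$5/$6, as in the question
```
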

vertica sql delta

I want to calculate the delta between consecutive records. My table has two columns, id and timestamp, and I want the time difference between each record and the previous one:
id |timestamp |delta
----------------------------------
1 |100 |0
2 |101 |1 (101-100)
3 |106 |5 (106-101)
4 |107 |1 (107-106)
I work with a Vertica database, and I want to create a view/projection of this table in my DB.
Is it possible to do this calculation without a UDF?
You can use lag() for this purpose:
select id, timestamp,
coalesce(timestamp - lag(timestamp) over (order by id), 0) as delta
from t;
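The same lag() pattern runs unchanged in SQLite (window functions need SQLite 3.25+; the column is named ts here only to sidestep the TIMESTAMP keyword):

```python
# Delta between each row's ts and the previous row's ts, ordered by id;
# COALESCE turns the first row's NULL lag into 0, as in the answer.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, ts INTEGER)")  # ts = the question's "timestamp"
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(1, 100), (2, 101), (3, 106), (4, 107)])

rows = conn.execute("""
    SELECT id, ts,
           COALESCE(ts - LAG(ts) OVER (ORDER BY id), 0) AS delta
    FROM t
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 100, 0), (2, 101, 1), (3, 106, 5), (4, 107, 1)]
```
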

Fetch data from multiple tables in postgresql

I am working on an application where I want to fetch the records from multiple tables which are connected through foreign key. The query I am using is
select ue.institute, ue.marks, uf.relation, uf.name
from user_education ue, user_family uf where ue.user_id=12 and uf.user_id=12
The result of that query has repeating data. I only want each record once, with no repetition. I want something like this:
T1:
id|name|fid
1 |A   |1
2 |B   |1
2 |B   |1
T2:
id|descrip|fid
1 |DA     |1
2 |DB     |1
Result I want:
id|name|fid|id|descrip|fid
1 |A   |1  |1 |DA     |1
2 |B   |1  |2 |DB     |1
2 |B   |1  |  |       |
The query above returns 5 rows in total.
More information
I want the rows for the same user_id from both tables, but as you can see, T1 has 3 rows and T2 has 2. I do not want repetition, but I do want to fetch all the data for that user_id.
I can't see why you would want that, but the solution could be to use the window function row_number():
SELECT ue.institute, ue.marks, uf.relation, uf.name
FROM (SELECT institute, marks, row_number() OVER ()
FROM user_education
WHERE user_id=12) ue
FULL OUTER JOIN
(SELECT relation, name, row_number() OVER ()
FROM user_family
WHERE user_id=12) uf
USING (row_number);
The result would be pretty meaningless though, as there is no ordering defined in the individual result sets.

MSSQL/TSQL separating fields into rows based on value

I have two tables with data like:
table: test_results
ID |test_id |test_type |result_1 |amps |volts |power |
----+-----------+-----------+-----------+-----------+-----------+-----------+
1 |101 |static |10.1 |5.9 |15 |59.1 |
2 |101 |dynamic |300.5 |9.1 |10 |40.1 |
3 |101 |prime |48.9 |8.2 |14 |49.2 |
4 |101 |dual |235.2 |2.9 |11 |25.8 |
5 |101 |static |11.9 |4.3 |9 |43.3 |
6 |101 |prime |49.9 |5.8 |15 |51.6 |
and
table: test_records
ID |model |test_date |operator |
----+-----------+-----------+-----------+
101 |m-300 |some_date |john doe |
102 |m-243 |some_date |john doe |
103 |m-007 |some_date |john doe |
104 |m-523 |some_date |john doe |
105 |m-842 |some_date |john doe |
106 |m-252 |some_date |john doe |
and I'm making a report that looks like this:
|static |dynamic |
test_id |model |test_date |operator |result_1 |amps |volts |power |result_1 |amps |volts |power |
-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+-----------+
101 |m-300 |some_date |john doe |10.1 |5.9 |15 |59.1 |300.5 |9.1 |10 |40.1 |
with left outer joins like so:
SELECT
A.ID AS test_id, model, test_date, operator,
B.result_1, B.amps, B.volts, B.power,
C.result_1, C.amps, C.volts, C.power
FROM
test_records A
LEFT JOIN
test_results B
ON
A.ID = B.test_id
AND
B.test_type = 'static'
LEFT JOIN
test_results C
ON
A.ID = C.test_id
AND
C.test_type = 'dynamic'
But I have run into a problem. The "static" and "prime" tests are run twice.
I don't know how to differentiate between them to create their own 4 fields.
An abstracted(simplified) view of the desired report would look like:
|static |dynamic |prime |dual |static2 |prime2 |
|4 fields |4 fields |4 fields |4 fields |4 fields |4 fields |
Is this even possible?
Notes:
I'm labeling the groups of 4 fields with html so don't worry about the labels
Not every test will run "static" and "prime" twice, so this is a case of: if "static" and "prime" are found twice, do this SQL.
I think we're going to get our engineers to append a 2 to the second tests, eliminating the problem, so this question is more out of curiosity to know what method could solve a problem like this.
If you have another field (here I use ID) that you know is always going to be ordered in respect to the field you can use a windowing function to give them sequential values and then join to that. Like this:
WITH test_results_numbered AS
(
SELECT test_id, test_type, result_1, amps, volts, power,
ROW_NUMBER() OVER (PARTITION BY test_id, test_type ORDER BY ID) as type_num
FROM test_results
)
SELECT
A.ID AS test_id, model, test_date, operator,
B.result_1, B.amps, B.volts, B.power,
C.result_1, C.amps, C.volts, C.power,
D.result_1, D.amps, D.volts, D.power
FROM test_records A
LEFT JOIN test_results_numbered B
ON A.ID = B.test_id AND B.test_type = 'static' and B.type_num = 1
LEFT JOIN test_results_numbered C
ON A.ID = C.test_id AND C.test_type = 'dynamic' and C.type_num = 1
LEFT JOIN test_results_numbered D
ON A.ID = D.test_id AND D.test_type = 'static' and D.type_num = 2
I use a CTE to make it clearer, but you could use sub-queries; you would (of course) have to repeat the same sub-query in the SQL, and most servers should have no issue optimizing it without the CTE, I expect.
I feel this solution is a bit of a "hack." You really want your original data to have all the information it needs, so I think it is good that you are having your engineers modify the data (FWIW).
If this had to go into production, I think I would break out the numbering as a view, to highlight the codification of questionable business rules (and to make it easy to change).
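Here is the numbering-plus-join idea run with Python's sqlite3, trimmed to the test_id/test_type/result_1 columns (the sample data is a subset of the question's, including the repeated "static" run):

```python
# ROW_NUMBER() PARTITION BY (test_id, test_type) numbers repeated runs 1, 2, ...
# so the first static, second static, and dynamic runs each get their own join.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_records (ID INTEGER, model TEXT);
CREATE TABLE test_results (ID INTEGER, test_id INTEGER, test_type TEXT, result_1 REAL);
INSERT INTO test_records VALUES (101, 'm-300');
INSERT INTO test_results VALUES
  (1, 101, 'static', 10.1),
  (2, 101, 'dynamic', 300.5),
  (5, 101, 'static', 11.9);   -- the second "static" run
""")

rows = conn.execute("""
    WITH numbered AS (
        SELECT test_id, test_type, result_1,
               ROW_NUMBER() OVER (PARTITION BY test_id, test_type ORDER BY ID) AS type_num
        FROM test_results
    )
    SELECT A.ID,
           B.result_1 AS static1, C.result_1 AS static2, D.result_1 AS dynamic1
    FROM test_records A
    LEFT JOIN numbered B ON A.ID = B.test_id AND B.test_type = 'static'  AND B.type_num = 1
    LEFT JOIN numbered C ON A.ID = C.test_id AND C.test_type = 'static'  AND C.type_num = 2
    LEFT JOIN numbered D ON A.ID = D.test_id AND D.test_type = 'dynamic' AND D.type_num = 1
""").fetchall()
print(rows)  # [(101, 10.1, 11.9, 300.5)]
```

Tests that ran only once simply produce NULLs in the type_num = 2 columns, which is what the LEFT JOINs are for.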