SQL computed column for sum of data in another table

I have two tables:
DiskUsage table:

Server   DiskUsage (GB)
1        10
2        212
3        11

StatisticsTable:

Total Disk Usage
xxxx
I need to create "Total Disk Usage" as a column that works out the sum of the "DiskUsage" column. This would need to be dynamic, as more servers will be added to the "DiskUsage" table over time.
I've done some looking into this, and I believe a computed column would be the easiest way to achieve it, but I'm not sure how to a) reference the other table's data, or b) dynamically obtain the total of that column.

What is the issue with just running a query?
select sum(diskusage)
from diskusage;
This seems simple enough, and unless you have millions of rows it should be quite fast.

Create a trigger:
CREATE TRIGGER test
ON DiskUsage
AFTER INSERT, UPDATE
AS
BEGIN
    UPDATE StatisticsTable
    SET TotalDiskUsage = (SELECT SUM(DiskUsage)
                          FROM DiskUsage)
END
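Note that this UPDATE only modifies an existing row; if StatisticsTable starts out empty, you would need to seed it once with something like the following (TotalDiskUsage is assumed to be its only column):
-- assumes StatisticsTable has a single TotalDiskUsage column and no rows yet
INSERT INTO StatisticsTable (TotalDiskUsage)
SELECT SUM(DiskUsage)
FROM DiskUsage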
Or, as mentioned by King.code, create a view instead of having a table:
CREATE VIEW StatisticsTable
AS
SELECT SUM(DiskUsage) AS TotalDiskUsage
FROM DiskUsage
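Reading the total is then just an ordinary query against the view (column name as defined above):
-- recomputed from DiskUsage every time the view is read
SELECT TotalDiskUsage
FROM StatisticsTable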

Related

How to create a computed column in PrestoDB with values persisted?

I am working in PrestoDB to query and create tables so I can shape the data and then work with it in Excel and Power BI. I am trying to create a persisted calculated column that is simply the quotient of two other existing columns.
A colleague suggested:
CREATE TABLE B AS
SELECT *, ColumnA / ColumnB AS ColumnQ
FROM A;
However, when I run
SELECT *
FROM B;
ColumnQ is there but completely empty.
What can I run to permanently add these computed columns so that when I query this data the values are persisted?
I don't think PrestoDB supports computed columns, much less persisted computed columns.
I think what you want is a view, though, not a table:
create view v_a as
    select a.*, ColumnA / ColumnB as ColumnQ
    from A a;
Anyone who queries v_a will see ColumnQ with the latest calculation of the ratio.
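As a quick sanity check, querying the view (assuming it was created as v_a above) recomputes the ratio on the fly:
-- ColumnQ is calculated from the base table at query time
SELECT ColumnA, ColumnB, ColumnQ
FROM v_a;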

SQL Server Update Column based on value from another table

Please advise whether I should use a trigger or a procedure. I am trying to update the ScaleRating in table GSelfAssessment from GRatingScale when the Score in GSelfAssessment falls between the minimum and maximum score in GRatingScale.
GSelfAssessment table
GRatingScale Table
Preferably this should happen for each row on either update or insert. I believe a SQL trigger is the most appropriate approach, and from my research I understand the inserted/deleted concept inside a trigger. E.g.
CREATE TRIGGER [dbo].[TR_GSelfAssessment_update] ON [dbo].[GSelfAssessment]
FOR UPDATE
AS
BEGIN
    UPDATE GSelfAssessment
    SET GSelfAssessment.ScaleRating = (SELECT ) ---this is where I have a problem-----
END
I believe there is a guru out here who can give me a solution to this. I will learn a lot.
SQL Server supports computed columns. If you want ScaleRating to always be aligned with the rest of the data, then that is the best approach:
alter table GSelfAssessment
add ScaleRating as ( . . . );
This adds a new "column" that gets calculated when the value is used in a query. If the computation is expensive or you want to build an index, then use persisted so the value is actually stored with the rest of the data -- and recalculated when needed.
You can add the computed column in the create table statement as well. If you have already created the table, you can drop the column and re-add it or modify it.
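For reference, a minimal sketch of the create-table form, with an illustrative expression standing in for the real rating logic (a computed column can only reference columns of its own table, so the GRatingScale lookup itself would need a scalar function or a join):
CREATE TABLE GSelfAssessment (
    AssessmentId int PRIMARY KEY,            -- hypothetical key column
    Score        int,
    ScaleRating  AS (Score / 10) PERSISTED   -- illustrative expression only
)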
You should not have that column. Join to the rating table when you need to. You can create a view if it makes it easier.
select …
from GSelfAssessment a
inner join GRatingScale r
    on a.Score > r.MinScore and a.Score <= r.MaxScore
Adjust/create view as required
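For example, a view along these lines exposes the rating without storing it (AssessmentId and Rating are assumed column names):
CREATE VIEW v_GSelfAssessment
AS
SELECT a.AssessmentId,               -- assumed key column
       a.Score,
       r.Rating AS ScaleRating       -- assumed rating column in GRatingScale
FROM GSelfAssessment a
INNER JOIN GRatingScale r
    ON a.Score > r.MinScore AND a.Score <= r.MaxScore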

Is it possible to update one table from another table without a join in Vertica?

I have two tables A(i,j,k) and B(m,n).
I want to update the 'm' column of table B with sum(j) from table A. Is it possible to do this in Vertica?
The following code works in Teradata, but does Vertica have this kind of flexibility?
UPDATE B FROM (SELECT SUM(j) AS m FROM A) a1 SET m = a1.m;
The Teradata SQL syntax won't work with Vertica, but the following query should do the same thing:
update B set m = (select sum(j) from A)
Depending on the size of your tables, this may not be an efficient way to update data. Vertica is a WORM (write once, read many) store, and is not optimized for updates or deletes.
An alternative would be to first move the data in the target table to an intermediate (but not temporary) table, then write a join query against the other table to produce the desired result and use export table with that join query, and finally drop the intermediate table. Of course, this assumes you have partitioned your table in a way suitable for your update logic.
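A rough sketch of that staging approach, using plain CREATE TABLE AS and INSERT ... SELECT (B_tmp is a hypothetical intermediate table; the partition/export details are left out):
CREATE TABLE B_tmp AS SELECT * FROM B;          -- stage the current rows
TRUNCATE TABLE B;
INSERT INTO B (m, n)
SELECT s.total_j, t.n
FROM B_tmp t
CROSS JOIN (SELECT SUM(j) AS total_j FROM A) s; -- one-row derived table with the sum
DROP TABLE B_tmp;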

Database Update Query for Huge Records

We have around 2,080,000 records in the table.
We needed to add a new column to it, and we did.
Since this new column needs to be the primary key, we want to populate all rows with values from a sequence.
Here's the query:
BEGIN
    FOR loop_counter IN 1 .. 211 LOOP
        UPDATE user_char
        SET id = USER_CHAR__ID_SEQ.nextval
        WHERE user_char.id IS NULL
          AND rownum < 100000;
        COMMIT;
    END LOOP;
END;
But it's now been almost a day, and the query is still running.
Note: I am not a DB developer/programmer.
Is there anything wrong with this query, or is there another (quicker) query that would do the same job?
First, there does not appear to be any reason to use PL/SQL here. It would be more efficient to simply issue a single SQL statement that updates every row:
UPDATE user_char
SET id = USER_CHAR__ID_SEQ.nextval
WHERE id IS NULL;
Depending on the situation, it may also be more efficient to create a new table and move the data from the old table to the new table in order to avoid row migration, i.e.
ALTER TABLE user_char
    RENAME TO user_char_old;

CREATE TABLE user_char
AS
SELECT USER_CHAR__ID_SEQ.nextval AS id, <<list of other columns>>
  FROM user_char_old;
<<Build indexes on user_char>>
<<Drop and recreate any foreign key constraints involving user_char>>
If this was a large table, you could use parallelism in the CREATE TABLE statement. It's not obvious that you'd get a lot of benefit from parallelism with a small 2 million row table but that might shave a few seconds off the operation.
Second, if it is taking a day to update a mere 2 million rows, there must be something else going on. A 2 million row table is pretty small these days-- I can populate and update a 2 million row table on my laptop in somewhere between a few seconds and a few minutes. Are there triggers on this table? Are there foreign keys? Are there other sessions updating the rows? What is the query waiting on?
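If you have access to the data dictionary, a query along these lines against V$SESSION would show what the active session is waiting on (adjust the filter to pick out the session running the update):
-- lists active user sessions and their current wait events
SELECT sid, status, event, wait_class, seconds_in_wait
FROM v$session
WHERE status = 'ACTIVE'
  AND username IS NOT NULL;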

SQL Server 2005: stored procedure to move rows from one table to another

I have 2 tables with identical schemas. I need to move rows older than 90 days (based on a datetime column present in the table) from table A to table B. Here is the pseudo-code for what I want to do:
DECLARE @Criteria datetime
SET @Criteria = GETDATE() - 90

SELECT *
INTO TableB
FROM TableA
WHERE ColumnX < @Criteria

-- now clean up the records we just moved to table B, in table A
DELETE FROM TableA
WHERE ColumnX < @Criteria
My questions are:
What is the most efficient way to do this (will SELECT ... INTO perform well under high volume)? Table A will have ~180,000,000 rows in it, and I will need to move ~4,000,000 rows at a time to table B.
How do I encapsulate this in one transaction so that I will not delete rows from Table A if there was an error inserting them into Table B? I just want to make sure that I don't accidentally delete a row from Table A unless I have successfully written it to Table B.
Are there any good SQL Server 2005 books that you recommend?
Thanks,
Chris
I think SSIS is probably the best solution for your needs.
You can use SSIS tasks such as the Data Flow task to achieve this; there doesn't seem to be any need to create a separate procedure for the logic.
Transactions can be enabled for any Data Flow task using the TransactionOption property. Check out this article on how to use transactions in SSIS.
Some basic tutorials on SSIS packages and how to create them can be found here and here.
Regarding:
How do I encapsulate this under one transaction so that I will not delete rows from Table A if there was an error inserting them into Table B?
You can delete all rows from A that are in B using a join. Then, if the copy to B failed, nothing will be deleted from A.
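If you do want to keep it in T-SQL rather than SSIS, a sketch of that pattern for SQL Server 2005 might look like the following (TableA, TableB, an Id key, and a ColumnX datetime column are all assumed names):
BEGIN TRY
    BEGIN TRANSACTION;

    DECLARE @Criteria datetime;
    SET @Criteria = DATEADD(day, -90, GETDATE());

    -- copy the old rows first
    INSERT INTO TableB
    SELECT * FROM TableA WHERE ColumnX < @Criteria;

    -- then delete only rows that are verifiably present in TableB
    DELETE a
    FROM TableA a
    INNER JOIN TableB b ON b.Id = a.Id
    WHERE a.ColumnX < @Criteria;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
END CATCH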