SUM Every row as Total - sql-server-2005

I searched before posting but didn't find anything, and I don't know if what I want is possible.
I want the sum of every column in the same row. For a better explanation, I attached a picture. I am using SQL Server 2005.
Example:
Thanks for your time.

Normally this requirement suggests a flaw in the design of your database. Consider refactoring it: create another table with one row per column value and a foreign key back to the main table. That will be much more efficient and will make it easier to maintain and to write queries.
However, you can do it this way:
SELECT [TOTAL ROW] = Col1 + Col2 + Col3 + Col4 + .....,
       OtherColumns ...
FROM dbo.TableName
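As a quick runnable sketch of the same idea (using SQLite via Python here rather than SQL Server, with a hypothetical three-column table), the row total is just an expression over the columns in the SELECT list:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (Col1 INT, Col2 INT, Col3 INT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, 2, 3), (10, 20, 30)])

# The per-row total is simply an arithmetic expression over that row's columns.
rows = conn.execute(
    "SELECT Col1, Col2, Col3, Col1 + Col2 + Col3 AS Total FROM t").fetchall()
print(rows)  # [(1, 2, 3, 6), (10, 20, 30, 60)]
```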

Related

Query Distinct/Unique Value Counts for All Tables in a database - MS SQL Server

Is there a way to query the table name, column name, and distinct value count per column for all tables in a database? How can I do this?
I'm on SQL Server Management Studio (v18).
I need to get a resulting table like this
TableName  ColumnName  DistinctValues
table1     col1        10
table1     col2        9
table1     col3        20
table2     col1        10
table2     col2        9
...        ...         ...
Thank you in advance.
Is there an efficient way to query the table name, column name, and distinct value count per column for all tables in a Database?
TL;DR: No, there is not an efficient way.
To do this, you're going to need dynamic SQL and then (a hell of) a lot of COUNT(DISTINCT {Column}) aggregate functions. Adding the DISTINCT operator to a COUNT makes the function far more expensive. Even a single COUNT(DISTINCT {Column}) can easily slow down a simple query; you, however, want to do this on EVERY column in EVERY table in your database (at least every column that COUNT(DISTINCT {Column}) can be applied to).
There is no way to make that efficient. It will instead be incredibly slow, and I would not be surprised if it takes hours, or days, to run on a large enough database, and it could easily end up a deadlock victim.
Personally, I would rethink whatever it is you are trying to achieve with this requirement.
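For illustration only, here is the same idea sketched against SQLite (the catalog names differ on SQL Server, where you would read sys.tables/sys.columns and assemble the statements with dynamic SQL); the toy table name and data are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table1 (col1 INT, col2 TEXT);
    INSERT INTO table1 VALUES (1, 'a'), (1, 'b'), (2, 'b');
""")

results = []
# Enumerate user tables from the catalog, then build and run one
# COUNT(DISTINCT ...) query per column -- this is the dynamic SQL part,
# and also why it is so expensive on a real database.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
for (table,) in tables:
    for row in conn.execute(f"PRAGMA table_info({table})"):
        column = row[1]
        (n,) = conn.execute(
            f"SELECT COUNT(DISTINCT {column}) FROM {table}").fetchone()
        results.append((table, column, n))

print(results)  # [('table1', 'col1', 2), ('table1', 'col2', 2)]
```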

Python/SQL query to use ROWID to realign out of sync data in SQLite3 db

I have an SQLite3 table with 4 columns: "date", "price", "price2" and "vol". There are about 200k rows of data, but the last two columns are out of sync by 798 rows. That is, the values of the last two columns in row 1 actually correspond to the values of the first two columns at row 798.
I am using Python 2.7.
I was thinking there must be a way of using the ROWID column as a unique identifier, where I can extract the first two columns, then extract the second two columns, and rejoin them based on "ROWID + 798" or something like that.
Is this possible, and if so, does anyone know how to do it?
I'm curious how your database could have become corrupted like that, and sceptical of your assessment that you know exactly what is wrong. If something like this could happen, it seems likely that there are many other problems.
In any case, the query you describe should look like the following, if I understood correctly.
In most DBMSs you could do this with a single subquery assigning both columns at once, but the syntax (col1, col2) = (...) is not allowed in SQLite, so you need one subquery per column. Note also that SQLite does not accept a bare alias after the table name in UPDATE, so the outer table is referenced by its full name:
UPDATE table1
SET col1 = (SELECT s.col1 FROM table1 s
            WHERE s.rowid = table1.rowid + 798),
    col2 = (SELECT s.col2 FROM table1 s
            WHERE s.rowid = table1.rowid + 798);
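A small sanity check of this shift-by-N update, with a toy four-column table and a shift of 2 instead of 798 (the trailing rows have no partner 2 rows ahead, so their shifted columns become NULL and would need trimming afterwards):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (col1 INT, col2 INT, col3 INT, col4 INT)")
# col3/col4 in row N actually belong with col1/col2 from row N+2:
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?, ?)",
                 [(10, 10, 12, 12), (11, 11, 13, 13),
                  (12, 12, 14, 14), (13, 13, 15, 15)])

# Pull col1/col2 forward by 2 rows so they line up with col3/col4.
conn.execute("""
    UPDATE table1
    SET col1 = (SELECT s.col1 FROM table1 s
                WHERE s.rowid = table1.rowid + 2),
        col2 = (SELECT s.col2 FROM table1 s
                WHERE s.rowid = table1.rowid + 2)
""")
rows = conn.execute("SELECT col1, col3 FROM table1").fetchall()
print(rows)  # [(12, 12), (13, 13), (None, 14), (None, 15)]
```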

Optimize where clause SQL

I am trying to figure out a way to optimize the below SQL query:
select * from SOME_TABLE
where (col1 = 123 and col2 = 'abc')
or (col1 = 234 and col2 = 'cdf')
or (col1 = 755 and col2 = 'cvd') ---> I have around 2000 'OR' statements in a single query.
Currently this query takes a long time to execute. Is there any way to make it run faster?
Create a lookup table: CREATE TABLE lookup (col1 INT, col2 VARCHAR(3), PRIMARY KEY (col1, col2)), plus whatever indexing options fit your needs and DBMS (e.g. KEY(col2), ORGANIZATION INDEX).
Make sure you have indexes on col1 and col2 in your original table.
Populate the lookup table with your 2000 combinations.
Now query:
SELECT mytable.*
FROM mytable
INNER JOIN lookup
        ON mytable.col1 = lookup.col1
       AND mytable.col2 = lookup.col2
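As a minimal sketch of the lookup-table approach (SQLite via Python, with made-up table contents; on SQL Server the composite primary key likewise gives the optimizer an index to join on instead of evaluating 2000 OR branches per row):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (col1 INT, col2 TEXT, payload TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?, ?)",
                 [(123, 'abc', 'keep1'), (234, 'cdf', 'keep2'),
                  (999, 'zzz', 'drop')])

# The lookup table holds the (col1, col2) pairs that would otherwise form
# a long OR chain; its composite primary key is the join index.
conn.execute(
    "CREATE TABLE lookup (col1 INT, col2 TEXT, PRIMARY KEY (col1, col2))")
conn.executemany("INSERT INTO lookup VALUES (?, ?)",
                 [(123, 'abc'), (234, 'cdf'), (755, 'cvd')])

rows = conn.execute("""
    SELECT mytable.payload
    FROM mytable
    INNER JOIN lookup ON mytable.col1 = lookup.col1
                     AND mytable.col2 = lookup.col2
""").fetchall()
print(sorted(rows))  # [('keep1',), ('keep2',)]
```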
Difficult to say without seeing the query plan, but I'd imagine this resolves to a full table scan with a lot of CPU spent evaluating the OR logic.
If the general pattern is col1=x and col2=y then try creating a table with your 2000 pairs and joining instead. If your 2000 pairs come from other tables, factor the select statement that retrieves them straight into your SELECT statement here.
Also make sure you've got all your unique and NOT NULL constraints in place as that will make a difference. Consider an index on col1 & col2, though don't be surprised if it doesn't use it.
Not sure if that's going to do the trick, but post more details if not.
Select only the columns you need, not all (*)... but surely you know that.
And if you have more than 2000 ORs in your SQL statement, maybe it's time to change the approach! If you tell us a bit more about your database, we can help you better.

Select all values in a column into a string in SQLite?

In SQL Server, I can do this with a cursor (looping through each row of the result set and building my string). But I don't know how to do the same in SQLite. This task is not like so-called pivoting, where as far as I know you have to know the exact rows you want to turn into columns. My case is different: I want to select all the values of a specified column into a string (which can then be selected as a single-row column), and this column can have varying values depending on the SELECT query. Here is a sample of how it looks:
A | B
------------
1 | 0
8 | 1
3 | 2
... ....
I want to select all the values of column A into a string like "183..." (or "1,8,3,..." would be fine).
This can then be used as a column in another SELECT. I have to implement this because I need to display all the sub-values (each row of column A) as a comma-separated list within another row.
I don't have any idea how to do this. It seems to need procedural statements (such as in a stored procedure), but SQLite is very limited when it comes to loops, variable declarations, ... I'm really stuck. This kind of task is very common in database programming, and there is no reason for me to avoid it.
Please help; your help would be highly appreciated! Thanks!
If you're just trying to get all the values from Column A into a single record, then use GROUP_CONCAT:
select group_concat(a, ',')
from yourtable
SQL Fiddle Demo
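A quick runnable check of the group_concat answer, using the sample data from the question (note that without an ORDER BY, SQLite does not guarantee the concatenation order):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yourtable (a INT, b INT)")
conn.executemany("INSERT INTO yourtable VALUES (?, ?)",
                 [(1, 0), (8, 1), (3, 2)])

# group_concat collapses every value of the column into one
# delimiter-separated string in a single row.
(joined,) = conn.execute(
    "SELECT group_concat(a, ',') FROM yourtable").fetchone()
print(joined)
```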
The main problem is that you are thinking like an application programmer. SQL does a lot of things for you:
SELECT *
FROM tableA
WHERE A IN (SELECT A FROM tableB)
No need to resort to cursors, stored procedures, or multiple queries.

Fastest way to perform time average of multiple calculations in SQL?

I have a question about the fastest way to perform a SQL Server query on a table, TheTable, that has the following fields: TimeStamp, Col1, Col2, Col3, Col4
I don't maintain the database, I just can access it. I need to perform 10 calculations that are similar to:
Col2*Col3 + 5
5*POWER(Col3,7) + 4*POWER(Col2,6) + 3*POWER(Col1,5)
Then I have to find the AVG and MAX of the calculation results using data from a chosen day (there are 8 months of data in the database so far). Since the data is sampled every 0.1 seconds, 864,000 rows go into each calculation. I want to make sure the query runs as quickly as possible. Is there a better way than this:
SELECT AVG(Col2*Col3 + 5),
AVG(5*POWER(Col3,7) + 4*POWER(Col2,6) + 3*POWER(Col1,5)),
MAX(Col2*Col3 + 5),
MAX(5*POWER(Col3,7) + 4*POWER(Col2,6) + 3*POWER(Col1,5))
FROM TheTable
WHERE TimeStamp >= '2010-08-31 00:00:00:000'
AND TimeStamp < '2010-09-01 00:00:00:000'
Thanks!
You could create those as computed (calculated) columns, and set Is Persisted to true. That will persist the calculated value to disk on insert, and make subsequent queries against those values very quick.
Alternately, if you cannot modify the table schema, you could create an Indexed View that calculates the values for you.
How about doing these calculations when you insert the data rather than when you select it? Then you only have to aggregate the precomputed values for a given day.
TableName
---------
TimeStamp
Col1
Col2
Col3
Col4
Calc1
Calc2
Calc3
and insert each row with the calculation already applied, e.g.:
INSERT INTO TableName (..., Calc1, ...)
VALUES
(..., @Col2Val * @Col3Val + 5, ...)
Your only bet is to calculate the values ahead of time, either as computed columns or as persisted columns in an indexed view; see Improving Performance with SQL Server 2005 Indexed Views. If you are unable to alter the database, you could pull the data out of that database into your own one. Just compute the columns as you insert the rows into your own database, then run your queries against it.
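As a sketch of the precompute-at-insert idea (SQLite via Python with made-up sample rows; on SQL Server a PERSISTED computed column achieves the same without touching every INSERT), the Col2*Col3 + 5 expression is evaluated once per row at insert time, and the daily query only aggregates the stored result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE TheTable
                (ts TEXT, col1 REAL, col2 REAL, col3 REAL, calc1 REAL)""")

# Precompute Col2*Col3 + 5 once, at insert time, instead of on every query.
samples = [("2010-08-31 00:00:00", 1.0, 2.0, 3.0),
           ("2010-08-31 00:00:01", 2.0, 4.0, 5.0),
           ("2010-09-01 00:00:00", 9.0, 9.0, 9.0)]   # outside the chosen day
conn.executemany(
    "INSERT INTO TheTable VALUES (?, ?, ?, ?, ? * ? + 5)",
    [(ts, c1, c2, c3, c2, c3) for ts, c1, c2, c3 in samples])

# The daily query now only aggregates a stored column.
avg1, max1 = conn.execute("""
    SELECT AVG(calc1), MAX(calc1)
    FROM TheTable
    WHERE ts >= '2010-08-31 00:00:00' AND ts < '2010-09-01 00:00:00'
""").fetchone()
print(avg1, max1)  # 18.0 25.0
```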