Is it possible to return the count of the values in a single row?
For example, this is a test table and I want the count of daily_typing_pages:
SQL> SELECT * FROM employee_tbl;
+------+------+------------+--------------------+
| id   | name | work_date  | daily_typing_pages |
+------+------+------------+--------------------+
|    1 | John | 2007-01-24 |                250 |
|    2 | Ram  | 2007-05-27 |                220 |
|    3 | Jack | 2007-05-06 |                170 |
|    3 | Jack | 2007-04-06 |                100 |
|    4 | Jill | 2007-04-06 |                220 |
|    5 | Zara | 2007-06-06 |                300 |
|    5 | Zara | 2007-02-06 |                350 |
+------+------+------------+--------------------+
The result should be 1610; however, if I simply wrap the column in COUNT() it returns:
SQL> SELECT COUNT(daily_typing_pages) FROM employee_tbl;
+---------------------------+
| COUNT(daily_typing_pages) |
+---------------------------+
|                         7 |
+---------------------------+
1 row in set (0.01 sec)
So it returns the number of rows instead of the total in a single row.
Is there some way to do this without using an external programming language to compute it for me?
Thanks
You want SUM instead of COUNT. COUNT merely counts the number of records; you want them summed.
You didn't mention your DBMS, but see, for example, the SQL Server documentation for SUM.
Did you mean that you want to sum all the values of daily_typing_pages? If so, you can use sum(daily_typing_pages):
SELECT SUM(daily_typing_pages) FROM employee_tbl
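Against the sample data above, that returns the 1610 you expected (output in the same mysql client format):
SQL> SELECT SUM(daily_typing_pages) FROM employee_tbl;
+-------------------------+
| SUM(daily_typing_pages) |
+-------------------------+
|                    1610 |
+-------------------------+
1 row in set (0.00 sec)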
Related
I want to insert multiple rows into a table based on the "Material" column. Below is a snippet of the table:
+----------+---------+-----------+
| Material | Station | BuildTime |
+----------+---------+-----------+
| ABC      | #1      |         5 |
| ABC      | #2      |        10 |
| ABC      | #3      |        15 |
| DEF      | #1      |         7 |
| DEF      | #2      |        12 |
| DEF      | #3      |        19 |
| GHI      | #1      |        11 |
| GHI      | #2      |        24 |
| GHI      | #3      |        13 |
+----------+---------+-----------+
As you can see, there are three types of material here at three different stations - making for a total of 9 entries. Let's say I wanted to add a "Station #4" row for each type of material. How can I insert per material "group"?
In my specific case, the "BuildTime" values for Station #4 will all be identical, with a value of 50.
This would be simple if there were truly only three material groups, but in the actual case there are hundreds. Is there a way to iterate through them and insert a row per group?
Thanks in advance.
You can use INSERT ... SELECT:
insert into t (material, station, buildtime)
    select distinct material, '#4', 50  -- one new '#4' row per distinct material
    from t;
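For a quick sanity check afterwards, counting rows per group on the same table t should now show four stations per material:
select material, count(*) as stations
from t
group by material
order by material;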
I have a table in Access named Spells which holds patient spells (where a patient has a spell within a hospital). Its structure is as below:
| ID | SpellID | MultipleSpell | FirstSpell | LastSpell |
|----|---------|---------------|------------|-----------|
| 1  | 1       | False         |            |           |
| 2  | 2       | True          |            |           |
| 3  | 2       | True          |            |           |
| 4  | 3       | False         |            |           |
| 5  | 4       | False         |            |           |
| 6  | 5       | True          |            |           |
| 7  | 5       | True          |            |           |
| 8  | 5       | True          |            |           |
The MultipleSpell column indicates that there are multiple occurrences of the spell within the table.
I'd like to run a query which would update the FirstSpell column to True for the records with IDs 1, 2, 4, 5, and 6. So basically, where a spell is the first one in the table, it should be marked in the FirstSpell column.
I would then also like to update the LastSpell column to True for the records with IDs 1, 3, 4, 5, and 8.
The reasoning for this (if you're interested) is that the table links to a separate table containing the names of wards. It would be useful to link to this other table and indicate whether the ward is the admitting ward (FirstSpell) or the discharging ward (LastSpell).
You can update the first using:
update spells
    set firstspell = 1
    where id = (select min(id)
                from spells as s2
                where spells.spellid = s2.spellid
               );
Similar logic (using max()) can be used for the last spell.
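For completeness, that would be the same pattern with max():
update spells
    set lastspell = 1
    where id = (select max(id)
                from spells as s2
                where spells.spellid = s2.spellid
               );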
I have a table in Postgres like the one below:
 alg_campaignid | alg_score |   cp    |   sum
----------------+-----------+---------+----------
           9829 |  30.44056 | 12.4000 |  12.4000
           9880 |  29.59280 | 12.0600 |  24.4600
           9882 |  29.59280 | 12.0600 |  36.5200
           9827 |  29.27504 | 11.9300 |  48.4500
           9821 |  29.14840 | 11.8800 |  60.3300
           9881 |  29.14840 | 11.8800 |  72.2100
           9883 |  29.14840 | 11.8800 |  84.0900
          10026 |  28.79280 | 11.7300 |  95.8200
          10680 |  10.31504 |  4.1800 | 100.0000
From this I have to select a record based on a randomly generated number from 0 to 100, i.e. the first record should be returned if the random number picked is between 0 and 12.4000, the second if it is between 12.4000 and 24.4600, and likewise the last if it is between 95.8200 and 100.0000.
For Example
if the random number picked is 8 then the first record should be returned
or
if the random number picked is 48 then the fourth record should be returned
Is it possible to do this in Postgres? If so, kindly recommend a solution.
Yes, you can do this in Postgres. If you want to generate the number in the database:
with r as (
      select random() * 100 as r
     )
select t.*
from campaigns t cross join r  -- "campaigns" stands in for your table name
where t.sum >= r.r             -- first row whose cumulative sum reaches the random number
order by t.sum asc
limit 1;
Note the direction of the comparison: you want the smallest cumulative sum at or above the random number, so it is sum >= r with an ascending sort (with <= and a descending sort, a draw of 8 would match no rows at all).
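If the random number is generated in the application instead, the same lookup works with the value passed in directly. For example, with the 48 from the question (and again with "campaigns" standing in for your table name):
select t.*
from campaigns t
where t.sum >= 48   -- smallest cumulative sum at or above 48 is 48.4500
order by t.sum asc
limit 1;
This returns campaign 9827, the fourth record, as expected.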
I have a table in which several identifiers of a person may be stored. In this table I would like to create a single calculated identifier column that stores the best identifier for each record, depending on which identifiers are available.
For example (some fictional sample data) ....
Table = "Citizens"
Id | LastName | DL-No       | SS-No        | State-Id-No | Calculated
----------------------------------------------------------------------
1  | Smith    | NULL        | 374-784-8888 | 7383204848  | ?
2  | Jones    | JG892435262 | NULL         | 4537409273  | ?
3  | Trask    | NULL        | NULL         | 9276542119  | ?
4  | Clinton  | CL231429888 | 543-123-5555 | 1840430324  | ?
I know the order in which I would like to choose identifiers ...
Drivers-License-No
Social-Security-No
State-Id-No
So I would like the calculated identifier column to be part of the table schema. The desired results would be ...
Id | LastName | DL-No       | SS-No        | State-Id-No | Calculated
----------------------------------------------------------------------
1  | Smith    | NULL        | 374-784-8888 | 7383204848  | 374-784-8888
2  | Jones    | JG892435262 | NULL         | 4537409273  | JG892435262
3  | Trask    | NULL        | NULL         | 9276542119  | 9276542119
4  | Clinton  | CL231429888 | 543-123-5555 | 1840430324  | CL231429888
Is this possible? If so, what SQL would I use to calculate what goes in the "Calculated" column?
I was thinking of something like ...
SELECT
    CASE
        WHEN ([DL-No] IS NOT NULL) THEN [DL-No]
        WHEN ([SS-No] IS NOT NULL) THEN [SS-No]
        WHEN ([State-Id-No] IS NOT NULL) THEN [State-Id-No]
    AS "Calculated"
    END
FROM Citizens
The easiest solution is to use coalesce():
select c.*,
       coalesce([DL-No], [SS-No], [State-Id-No]) as calculated
from citizens c
However, I think your case statement will also work if you fix the syntax: the alias has to come after the END (... END AS "Calculated"), not inside the CASE.
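Since you mentioned wanting the calculated identifier to be part of the table schema, one option is a computed column; a minimal sketch, assuming SQL Server (suggested by the bracketed identifiers):
ALTER TABLE Citizens
    ADD Calculated AS COALESCE([DL-No], [SS-No], [State-Id-No]);
The column is then computed on read, so it stays in sync as the underlying identifiers change.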
I have a varchar column with mixed data: strings, integers, decimals, blank strings, and null values. I'd like to sort the column the same way that Excel would, first sorting numbers and then sorting the strings. For example:
1
2
3
3.5
10
11
12
alan
bob
carl
(blank/null)
(blank/null)
I've tried doing something like 'ORDER BY my_column+0' which sorts the numbers correctly but not the strings. I was hoping someone might know of an efficient way to accomplish this.
MartinofD's suggestion works for the most part, and if I expand on it a little bit I can get exactly what I want:
SELECT a FROM test
ORDER BY
    a IS NULL OR a='',  -- blanks and NULLs go last
    a<>'0' AND a=0,     -- strings (which cast to 0) go after numbers
    a+0,                -- numeric order for the numbers
    a;                  -- lexicographic order for the strings
Pretty ugly though, and I'm not sure if there are any better options.
That's because my_column+0 evaluates to 0 for every string, so all strings compare as equal.
Just use ORDER BY my_column+0, my_column
mysql> SELECT a FROM test ORDER BY a+0, a;
+-------+
| a     |
+-------+
| NULL  |
| alan  |
| bob   |
| carl  |
| david |
| 1     |
| 2     |
| 3     |
| 3.5   |
| 10    |
| 11    |
| 12    |
+-------+
12 rows in set (0.00 sec)
If you strictly need the numbers to be above the strings, here's a solution (though I'm not sure how quick this will be on big tables). The test (a = CONCAT('', 0+a)) is true only for values that survive the round trip through a numeric conversion, so sorting on it DESC puts the numbers first:
mysql> SELECT a FROM test ORDER BY (a = CONCAT('', 0+a)) DESC, a+0, a;
+-------+
| a     |
+-------+
| 1     |
| 2     |
| 3     |
| 3.5   |
| 10    |
| 11    |
| 12    |
| alan  |
| bob   |
| carl  |
| david |
| NULL  |
+-------+
12 rows in set (0.00 sec)