Using PostgreSQL, supposing a table like the following:
col1  | col2 | col3
12184 | 4    | 83
12183 | 3    | 171
12176 | 6    | 95
How can I compute a math expression for each row in the table?
For example, to divide column 2 by column 3, such that the output would be:
12184 | 0.04819277108
12183 | 0.01754385965
12176 | 0.06315789474
My instinct was to try:
SELECT col1, col2 / col3 FROM table_name;
But that returns only the truncated (i.e. rounded-down) integer part; I need the floating-point value.
The typical cast trick is needed because col2 and col3 are integers (so the result is by default an integer):
select col1, col2*1.0/col3 from table
or
select col1, col2/col3::float from table
or (SQL Standard way)
select col1, col2/cast(col3 as float) from table
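For example, applying the cast to the sample table from the question (columns assumed to be named col1, col2, col3, as in the queries above) gives the desired output:
select col1, col2::float / col3 from table_name;
12184 | 0.04819277108
12183 | 0.01754385965
12176 | 0.06315789474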
You can use arithmetic expressions in the SELECT clause, like this:
SELECT col1 / col2 AS new_name
FROM t
select col1, col2/col3 from table;
Should work. Aren't col2 and col3 numeric?
Try query like this:
SELECT col1, col2 / col3::float FROM table_name;
In PostgreSQL the columns are typed, so if you want to operate on them you may need to cast the column.
Suppose you have a column minutes (stored as text) and you want to add 5 to every value in that column. Because you are adding an integer value, the minutes column must be an integer; only then can the addition be performed. Hence:
select *, minutes + 5 from mytable >> error
select *, minutes::int + 5 from mytable >> gives the output
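A minimal, self-contained sketch (PostgreSQL), assuming a hypothetical table where minutes is stored as text, which is the situation where the cast matters:
create table mytable (id int, minutes text);
insert into mytable values (1, '30'), (2, '45');
-- select id, minutes + 5 from mytable;        -- fails: operator does not exist: text + integer
select id, minutes::int + 5 as minutes_plus_5  -- cast to integer first, then add
from mytable;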
I have data that looks like:
row | col1 | col2 | col3 | ... | coln
1   | A    | null | B    | ... | null
2   | null | B    | C    | ... | D
3   | null | null | null | ... | A
I want to condense the columns together to get:
row | final
1   | A, B
2   | B, C, D
3   | A
The order of the letters doesn't matter, and if the solution includes the nulls, e.g. A, null, B, null etc., I can work out how to remove them later. I've written it up to coln because I have about 200 columns to condense.
I've tried a few things; if I were trying to condense rows I could use STRING_AGG(), for example.
Additionally I could do this:
SELECT
CONCAT(col1, ", ", col2, ", ", col3, ", ", coln) # etc.
FROM mytable
However, this would involve writing out each column name by hand, which isn't really feasible. Is there a better way to achieve this, ideally for the whole table?
Additionally CONCAT returns NULL if any value is NULL.
#standardSQL
select row,
  (select string_agg(col, ', ' order by offset)
   from unnest(split(trim(format('%t', (select as struct t.* except(row))), '()'), ', ')) col with offset
   where not upper(col) = 'NULL'
  ) as final
from `project.dataset.table` t
If applied to the sample data in your question, the output is:
row | final
1   | A, B
2   | B, C, D
3   | A
Not in exact format that you asked for, but you can try if this simplifies things for you:
SELECT TO_JSON_STRING(mytable) FROM mytable
If you want the exact format, you can write a regex to extract values from the output JSON string.
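A hedged sketch of that follow-up step (BigQuery), assuming string columns and the mytable name from the question; the regex just pulls every quoted value out of the JSON string, so it would need adjusting if values can themselves contain quotes:
select row,
  array_to_string(
    regexp_extract_all(to_json_string((select as struct t.* except(row))), r':"([^"]+)"'),
    ', ') as final
from mytable t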
I'm asking for a solution without functions or procedures (Permissions problem).
I have a table like this, where k = the number of columns (in reality, k = 500):
col1 | col2 | col3 | col4 | col5 | ... | col(k)
10   | 20   | 30   | -50  | 60   | ... | 100
and I need to create a cumulative row like this:
col1 | col2 | col3 | col4 | col5 | ... | col(k)
10   | 30   | 60   | 10   | 70   | ... | X
In Excel, it is simple to write a formula and drag it across, but in SQL with a lot of columns it seems very clumsy to add them manually (col1 as col1, col1+col2 as col2, col1+col2+col3 as col3, and so on up to colk).
Any way of finding a good solution for this problem?
You say that you've changed your data model to rows. So let's say that the new table has three columns:
grp (some group key to identify which rows belong together, i.e. what was one row in your old table)
pos (a position number from 1 to 500 to indicate the order of the values)
value
You get the cumulative sums with SUM OVER:
select grp, pos, value, sum(value) over (partition by grp order by pos) as running_total
from mytable
order by grp, pos;
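If the old wide table still has to be converted into that row-oriented shape first, a hedged sketch (PostgreSQL-style syntax, hypothetical table and key names, only three of the 500 columns written out) could look like:
select t.id as grp, v.pos, v.value
from old_wide_table t
cross join lateral (values (1, t.col1), (2, t.col2), (3, t.col3)) as v(pos, value);
-- the (pos, column) pairs would have to be generated for all 500 columns,
-- e.g. from information_schema.columns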
If this "colk" is going to be needed/used in a lot of reports, I suggest you create a computed column or a view to sum all the columns using k = cola+colb+...
There's no function in SQL to sum up a range of columns (e.g. everything between colA and colJ).
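A hedged sketch of that suggestion, with a hypothetical table name and only the first few of the k columns written out:
create view mytable_cumulative as
select col1,
       col1 + col2 as col2,
       col1 + col2 + col3 as col3
       -- ... continue the pattern up to col(k)
from mytable;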
How to get a duplicate substring count in DB2 SQL
col1
abc_123
abc_2
xyz_123
The output should be:
col1    | output
abc_123 | 2
abc_23  | 1
xyz_123 |
How can I get this count using substr(), locate() and a GROUP BY clause in DB2 SQL? I want two columns, i.e. col1 and output.
First take the prefix part of the string, then group by that prefix to find the count:
select substr(col1, 1, locate('_', col1) - 1) as col1, count(col1) as output
from mytable t1
group by substr(col1, 1, locate('_', col1) - 1)
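A self-contained way to try this in DB2, using a VALUES list in place of the real table; the prefix extraction and grouping are the same as in the answer above:
with t1(col1) as (values ('abc_123'), ('abc_2'), ('xyz_123'))
select substr(col1, 1, locate('_', col1) - 1) as col1, count(*) as output
from t1
group by substr(col1, 1, locate('_', col1) - 1)
-- col1 | output
-- abc  | 2
-- xyz  | 1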
Consider there is a table tableA
col1 | col2
1    | some random string and number 1213 aa5 string aaasome number
2    | some random string 432682 aa3 test
1    | aa7
I need to get the result as below.
1 | 12
2 | 3
Group by col1, and the result for col1 = 1 will be 5 + 7 = 12 (the partial integer after the 'aa' string).
To add more clarity to the question: col2 has some other strings as well, like 'test test test aa2 again test test 23u45 ajsdk 4834...'. Here I need to pick out the 2 alone.
Kindly suggest a solution for this.
You need to get rid of the prefix, cast to a number, and sum. One method looks like:
select col1, sum(cast(replace(col2, 'aa', '') as number))
from tablea a
group by col1;
You can use regular expression to get the required digits from the string:
Select col1, sum(regexp_replace(col2,'(^|.*\s)aa(\d+)(\s.*|$)', '\2'))
From t
Group by col1
demo
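If the database will not implicitly convert the extracted text to a number before summing (PostgreSQL, for instance, will not), an explicit cast is needed; a sketch assuming the same table and the same pattern as above:
select col1,
       sum(cast(regexp_replace(col2, '(^|.*\s)aa(\d+)(\s.*|$)', '\2') as numeric)) as result
from tableA
group by col1;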
I'm finding that when I try to select a column in a SQL CASE expression, it doesn't work unless I wrap it in a numeric function. max(price) seems to select the value in the column, while plain price always returns blank.
I think it's a bug.
This doesn't work:
SELECT
  auction,
  CASE WHEN auction = '1'
    THEN (select max(bid.amount) from bid where bid.auction_id = auction.id)
    ELSE price
  END as price_string
FROM product
This works:
SELECT
  auction,
  CASE WHEN auction = '1'
    THEN (select max(bid.amount) from bid where bid.auction_id = auction.id)
    ELSE max(price)
  END as price_string
FROM product
Edit: fixed comma.
This is not a bug.
In the query that doesn't work, the two halves of your CASE statement are incompatible. You have an aggregate function (MAX(bid.amount)) that returns a single value for one part of your CASE statement, and the name of a column, which will return a set, for the other part of your CASE statement. You cannot mix aggregates and sets like this.
The query that works does so because both halves of the CASE statement are returning aggregate values and are therefore compatible.
Take a simple table:
test_table
col1 | col2
1 | 7
5 | 14
8 | 3
3 | 9
If I query like this:
SELECT col1 FROM test_table
I'll get this result, a set:
col1
1
5
8
3
But if I query like this:
SELECT MAX(col1) FROM test_table
I'll get a single value:
8