sql query to calculate odd and even place digits sum

I need a SQL query/function which will find the SUM of the digits in even positions and the SUM of the digits in odd positions.
Example:
If the number is 440065385:
x = 4+0+6+3+5 = 18 (x holds the sum of the odd positions)
y = 4+0+5+8 = 17 (y holds the sum of the even positions).
Thanks.

You can mimic an iteration using level and connect by to achieve the same result:
with iterate as
(select level as i from dual connect by level <= 25)
select
table1.column1,
sum(decode(mod(iterate.i, 2), 0, substr(table1.column1, iterate.i, 1))) sum_even,
sum(decode(mod(iterate.i, 2), 1, substr(table1.column1, iterate.i, 1))) sum_odd
from table1, iterate
where iterate.i <= length(table1.column1)
group by table1.column1

You could join your table with a numbers table to split all digits into rows, then sum the odd- and even-position digits separately.
with positions(position) as
(select level from dual connect by level <=25
),
digits as
(
select
t1.v,
n.position ,
cast(substr(t1.v ,length(t1.v) - n.position + 1, 1) as int) as digit
from my_table t1 left join positions n
on n.position <= length(t1.v)
)
select
v,
sum(case when mod(position,2)=0 then digit else 0 end) as evensum,
sum(case when mod(position,2)=1 then digit else 0 end) as oddsum
from
digits
group by
v
Tested in SQL Fiddle; the result:
V          EVENSUM  ODDSUM
440065385       17      18

Related

Apply order by in comma separated string in oracle

I have a column in an Oracle table which has the below value:
select csv_val from my_table where date='09-OCT-18';
output
==================
50,100,25,5000,1000
I want these values to be in ascending order with a select query, so the output would look like:
output
==================
25,50,100,1000,5000
I tried this link, but it looks like it has some restriction on the number of digits.
Here, I made you a modified version of the answer you linked to that can handle an arbitrary (hardcoded) number of commas. It's pretty heavy on CTEs. As with most LISTAGG answers, it'll have a 4000-char limit. I also changed your regexp to be able to handle null list entries, based on this answer.
WITH
T (N) AS --TEST DATA
(SELECT '50,100,25,5000,1000' FROM DUAL
UNION
SELECT '25464,89453,15686' FROM DUAL
UNION
SELECT '21561,68547,51612' FROM DUAL
),
nums (x) as -- arbitrary limit of 20, can be changed
(select level from dual connect by level <= 20),
splitstr (N, x, substring) as
(select N, x, regexp_substr(N, '(.*?)(,|$)', 1, x, NULL, 1)
from T
inner join nums on x <= 1 + regexp_count(N, ',')
order by N, x)
select N, listagg(substring, ',') within group (order by to_number(substring)) as sorted_N
from splitstr
group by N
;
Probably it can be improved, but eh...
Based on the sample data you posted, a relatively simple query would work (you need lines 3 - 7). If the data doesn't really look like that, the query might need adjustment.
SQL> with my_table (csv_val) as
2 (select '50,100,25,5000,1000' from dual)
3 select listagg(token, ',') within group (order by to_number(token)) result
4 from (select regexp_substr(csv_val, '[^,]+', 1, level) token
5 from my_table
6 connect by level <= regexp_count(csv_val, ',') + 1
7 );
RESULT
-------------------------
25,50,100,1000,5000
SQL>

selecting the value from column having highest digit count after decimal places

I have the below table named SAXTION_EG, and this table contains various columns, out of which there is one column named STR_RATE. This column contains values like
STR_RATE
1.11317
123.08546759
8.49111
Now please advise me the Oracle query by which I can select the value with the most digits after the decimal point. For example, in the above case the value 123.08546759 has the highest decimal count, with 8 digits after the decimal point; my objective is to fetch the value having the highest digit count after the decimal.
You can try something like this. The logic is: first get the position of the decimal point, then get the string after the decimal, then count the number of characters in that substring. Finally, use MAX to get the aggregated maximum value.
SELECT MAX(LENGTH(SUBSTR(STR_RATE, INSTR(STR_RATE, '.')+ 1)))
FROM your_table
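Note that this returns only the maximum count itself. A possible sketch for fetching the value that has that count (assuming STR_RATE is stored as, or implicitly convertible to, text, and reusing the placeholder your_table name from above):
-- keep only the row(s) whose decimal-digit count equals the maximum count
-- assumes every value contains a decimal point, as in the sample data
SELECT STR_RATE
FROM your_table
WHERE LENGTH(SUBSTR(STR_RATE, INSTR(STR_RATE, '.') + 1)) =
      (SELECT MAX(LENGTH(SUBSTR(STR_RATE, INSTR(STR_RATE, '.') + 1)))
       FROM your_table);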
In the solution below, I assume str_rate is of data type NUMBER, so it must be converted to character first. I added a couple of sample values to check that integer values are treated correctly, and to illustrate a case when two values have the same, highest number of decimal digits. In this case the solution returns both such values (see result set at the bottom).
with
saxtion_eg ( str_rate ) as (
select 1.11317 from dual union all
select 123.08546759 from dual union all
select 8.49111 from dual union all
select 582 from dual union all
select 0.00000001 from dual
),
prep ( str_rate, char_rate, pos ) as (
select str_rate, to_char(str_rate), instr(to_char(str_rate), '.')
from saxtion_eg
),
final ( char_rate, dec_digits, max_dec_digits ) as (
select char_rate,
case pos when 0 then 0 else length(char_rate) - pos end,
max(case pos when 0 then 0 else length(char_rate) - pos end) over ()
from prep
)
select char_rate as str_rate
from final
where dec_digits = max_dec_digits
;
STR_RATE
----------------
123.08546759
.00000001
This query returns the value with the highest number of decimal digits:
select max(STR_RATE) keep (dense_rank last order by length(STR_RATE-trunc(STR_RATE)))
from SAXTION_EG

Select where record does not exist

I am trying my hand at Oracle 11g. I have a requirement where I want to fetch those ids from a list which do not exist in the table.
For example:
SELECT * FROM STOCK
where item_id in ('1','2'); -- return those ids for which no record is found
I mean, if item_id '1' is not present in the db then the query should return 1.
How can I achieve this?
You need to store the values in some sort of "table". Then you can use left join or not exists or something similar:
with ids as (
select 1 as id from dual union all
select 2 from dual
)
select ids.id
from ids
where not exists (select 1 from stock s where s.item_id = ids.id);
You can use a LEFT JOIN to an in-line table that contains the values to be searched:
SELECT t1.val
FROM (
SELECT '1' val UNION ALL SELECT '2'
) t1
LEFT JOIN STOCK t2 ON t1.val = t2.item_id
WHERE t2.item_id IS NULL
First create the list of possible IDs (e.g. 0 to 99 in the query below). You can use a recursive cte for this. Then select these IDs and remove the IDs already present in the table from the result:
with possible_ids(id) as
(
select 0 as id from dual
union all
select id + 1 as id from possible_ids where id < 99
)
select id from possible_ids
minus
select item_id from stock;
A primary concern of the OP seems to be a terse notation of the query, notably of the set of values to test for. The straightforward recommendation would be to retrieve these values by another query or to generate them as a union of queries from the dual table (see the other answers for this).
The following alternative solution allows for a verbatim specification of the test values under the following conditions:
There is a character that does not occur in any of the test values provided (in the example that will be -)
The number of values to test stays well below 2000 (to be precise, the list of values plus separators must be written as a varchar2 literal, which imposes the length limit). However, this should not be an actual concern; if the test involves lists of hundreds of ids, these lists should definitely be retrieved from a table/view.
Caveat
Whether this method is worth the hassle ( not to mention potential performance impacts ) is questionable, imho.
Solution
The test values will be provided as a single varchar2 literal with - separating the values, which is as terse as specifying them as a list argument to the IN operator. The string starts and ends with -.
'-1-2-3-156-489-4654648-'
The number of items is computed as follows:
select cond, regexp_count ( cond, '[-]' ) - 1 cnt_items from (select '-1-2-3-156-489-4654648-' cond from dual)
A list of integers up to the number of items starting with 1 can be generated using the LEVEL pseudocolumn from hierarchical queries:
select level from dual connect by level < 42;
The n-th integer from that list will serve to extract the n-th value from the string (exemplified for the 4th value) :
select substr ( cond, instr(cond,'-', 1, 4 )+1, instr(cond,'-', 1, 4+1 ) - instr(cond,'-', 1, 4 ) - 1 ) si from (select cond, regexp_count ( cond, '[-]' ) - 1 cnt_items from (select '-1-2-3-156-489-4654648-' cond from dual) );
The non-existent stock ids are generated by subtracting the set of stock ids from the set of values. Putting it all together:
select substr ( cond, instr(cond,'-',1,level )+1, instr(cond,'-',1,level+1 ) - instr(cond,'-',1,level ) - 1 ) si
from (
select cond
, regexp_count ( cond, '[-]' ) - 1 cnt_items
from (
select '-1-2-3-156-489-4654648-' cond from dual
)
)
connect by level <= cnt_items
minus
select item_id from stock
;

detect gaps in integer sequence

Intention: detect whether a numeric sequence contains gaps. No need to identify the missing elements, just flag (true / false) the sequence if it contains gaps.
CREATE TABLE foo(x INTEGER);
INSERT INTO foo(x) VALUES (1), (2), (4);
Below is my (apparently correctly functioning) query to detect gaps:
WITH cte AS
(SELECT DISTINCT x FROM foo)
SELECT
( (SELECT COUNT(*) FROM cte a
CROSS JOIN cte b
WHERE b.x=a.x-1)
=(SELECT COUNT(*)-1 FROM cte))
OR (NOT EXISTS (SELECT 1 FROM cte))
where the OR is needed for the edge case where the table is empty. The query's logic is based on the observation that in a contiguous sequence the number of links equals the number of elements minus 1.
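For the sample data {1, 2, 4}, for example, only x = 2 has a predecessor present in the table, so the link count is 1 while COUNT(*) - 1 = 2, and the comparison correctly evaluates to false.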
Is there anything more idiomatic or performant? (Should I be worried by the CROSS JOIN for particularly long sequences?)
Try this:
SELECT
CASE WHEN ((MAX(x)-MIN(x)+1 = COUNT(DISTINCT X)) OR
(COUNT(DISTINCT X) = 0) )
THEN 'TRUE'
ELSE 'FALSE'
END
FROM foo
SQLFiddle demo
The following should detect whether or not there are gaps:
select (case when max(x) - min(x) + 1 = count(distinct x)
then 'No Gaps'
else 'Some Gaps'
end)
from foo;
If there are no gaps or duplicates, then the number of distinct values of x is the max minus the min plus 1.
A different approach...
If you subtract your min value from the max value and add 1, the result should equal the count.
if count = (max-min)+1 then "no gaps!"
If you can express that in SQL, it should be very efficient.
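A minimal sketch of that check, assuming the foo(x) table from the question:
SELECT CASE
         WHEN COUNT(*) = MAX(x) - MIN(x) + 1 THEN 'no gaps!'
         ELSE 'has gaps'
       END AS gap_check
FROM foo;  -- assumes x has no duplicates; use COUNT(DISTINCT x) if it might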
SELECT 'Has ' || count(*) - 1 || ' gaps.' AS gaps
FROM foo f1
LEFT JOIN foo f2 ON f2.x = f1.x + 1
WHERE f2.x IS NULL;
The trick is to count rows, where the next row is missing - which only happens for the last row(s) if there are no gaps.
If there are no rows, you get 'Has -1 gaps.'.
If there are no gaps, you get 'Has 0 gaps.'.
Else you get 'Has n gaps.' .. n being the exact number of gaps, no matter how big.
The count can be increased for duplicates, but 0 and -1 are immune to dupes.

How to find the average time difference between rows in a table?

I have a mysql database that stores some timestamps. Let's assume that all there is in the table is the ID and the timestamp. The timestamps might be duplicated.
I want to find the average time difference between consecutive rows that are not duplicates (timewise). Is there a way to do it in SQL?
If your table is t, and your timestamp column is ts, and you want the answer in seconds:
SELECT TIMESTAMPDIFF(SECOND, MIN(ts), MAX(ts) )
/
(COUNT(DISTINCT(ts)) -1)
FROM t
This will be miles quicker for large tables as it has no n-squared JOIN
This uses a cute mathematical trick which helps with this problem. Ignore the problem of duplicates for the moment. The average time difference between consecutive rows is the difference between the first timestamp and the last timestamp, divided by the number of rows -1.
Proof: The average difference between consecutive rows is the sum of the differences between consecutive rows, divided by the number of consecutive pairs. But the sum of the differences between consecutive rows is just the difference between the first row and the last row (assuming they are sorted by timestamp). And the number of consecutive pairs is the total number of rows - 1.
Then we just condition the timestamps to be distinct.
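As a quick sanity check of that formula (made-up sample timestamps, not from the question): with three distinct timestamps at 00:00:00, 00:00:10 and 00:00:30 the gaps are 10 and 20 seconds, so the average is 15 seconds, which is exactly (30 - 0) / (3 - 1):
SELECT TIMESTAMPDIFF(SECOND, MIN(ts), MAX(ts)) / (COUNT(DISTINCT ts) - 1) AS avg_gap_seconds
FROM (
  SELECT '2019-01-01 00:00:00' AS ts
  UNION ALL SELECT '2019-01-01 00:00:10'
  UNION ALL SELECT '2019-01-01 00:00:30'
) AS samples;
-- returns 15.0000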
Are the IDs contiguous?
You could do something like:
SELECT
a.ID
, b.ID
, a.Timestamp
, b.Timestamp
, b.timestamp - a.timestamp as Difference
FROM
MyTable a
JOIN MyTable b
ON a.ID = b.ID + 1 AND a.Timestamp <> b.Timestamp
That'll give you a list of time differences for each consecutive row pair...
Then you could wrap that up in an AVG grouping...
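A rough sketch of that wrapping (assuming the MyTable name and columns from the query above, and using TIMESTAMPDIFF so the sign and units are explicit):
SELECT AVG(TIMESTAMPDIFF(SECOND, b.Timestamp, a.Timestamp)) AS avg_diff_seconds
FROM MyTable a
JOIN MyTable b
  ON a.ID = b.ID + 1           -- a is the row directly after b
 AND a.Timestamp <> b.Timestamp;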
Here's one way:
select avg(timestampdiff(MINUTE,prev.datecol,cur.datecol))
from table cur
inner join table prev
on cur.id = prev.id + 1
and cur.datecol <> prev.datecol
The timestampdiff function allows you to choose between days, months, seconds, and so on.
If the id's are not consecutive, you can select the previous row by adding a rule that there are no other rows in between:
select avg(timestampdiff(MINUTE,prev.datecol,cur.datecol))
from table cur
inner join table prev
on prev.datecol < cur.datecol
and not exists (
select *
from table inbetween
where prev.datecol < inbetween.datecol
and inbetween.datecol < cur.datecol)
OLD POST but ....
The easiest way is to use the LAG function and TIMESTAMPDIFF:
SELECT
id,
TIMESTAMPDIFF(MINUTE, PREVIOUS_TIMESTAMP, TIMESTAMP) AS TIME_DIFF_IN_MINUTES
FROM (
SELECT
id,
TIMESTAMP,
LAG(TIMESTAMP, 1) OVER (ORDER BY TIMESTAMP) AS PREVIOUS_TIMESTAMP
FROM TABLE_NAME
) t
Adapted for SQL Server from this discussion.
Essential columns used are:
cmis_load_date: A date/time stamp associated with each record.
extract_file: The full path to a file from which the record was loaded.
Comments:
There can be many records in each file. Records have to be grouped by the file they were loaded from, using the extract_file column. Intervals of days may pass between one file and the next being loaded. There is no reliable sequential value in any column, so the grouped rows are sorted by the minimum load date in each file group, and the ROW_NUMBER() function then serves as an ad hoc sequential value.
SELECT
AVG(DATEDIFF(day, t2.MinCMISLoadDate, t1.MinCMISLoadDate)) as ElapsedAvg
FROM
(
SELECT
ROW_NUMBER() OVER (ORDER BY MIN(cmis_load_date)) as RowNumber,
MIN(cmis_load_date) as MinCMISLoadDate,
CASE WHEN NOT CHARINDEX('\', extract_file) > 0 THEN '' ELSE RIGHT(extract_file, CHARINDEX('\', REVERSE(extract_file)) - 1) END as ExtractFile
FROM
TrafTabRecordsHistory
WHERE
court_id = 17
and
cmis_load_date >= '2019-09-01'
GROUP BY
CASE WHEN NOT CHARINDEX('\', extract_file) > 0 THEN '' ELSE RIGHT(extract_file, CHARINDEX('\', REVERSE(extract_file)) - 1) END
) t1
LEFT JOIN
(
SELECT
ROW_NUMBER() OVER (ORDER BY MIN(cmis_load_date)) as RowNumber,
MIN(cmis_load_date) as MinCMISLoadDate,
CASE WHEN NOT CHARINDEX('\', extract_file) > 0 THEN '' ELSE RIGHT(extract_file, CHARINDEX('\', REVERSE(extract_file)) - 1) END as ExtractFile
FROM
TrafTabRecordsHistory
WHERE
court_id = 17
and
cmis_load_date >= '2019-09-01'
GROUP BY
CASE WHEN NOT CHARINDEX('\', extract_file) > 0 THEN '' ELSE RIGHT(extract_file, CHARINDEX('\', REVERSE(extract_file)) - 1) END
) t2 on t2.RowNumber + 1 = t1.RowNumber