Selecting the maximum-value row using Documentum Query Language (DQL)

I have a few columns, let's say
r_object_id, date, c, d, e, f
Now I want the results to contain, for each object id, only the row with the maximum date.
E.g., if the values are
R1, 29nov, c1, d1, e1, f1
R1, 30nov, c2, d2, e2, f2
R2, 20nov, c3, d3, e3, f3
R2, 25nov, c4, d4, e4, f4
the result should be
R1, 30nov, c2, d2, e2, f2
R2, 25nov, c4, d4, e4, f4
I can't use MAX with GROUP BY here because the other columns differ between rows, and I need all columns visible in the result. Is there a way this can be achieved using Documentum Query Language (DQL)?

SELECT r_object_id, date, c, d, e, f
FROM <table_name>
ORDER BY date DESC ENABLE (RETURN_TOP 2)
ORDER BY will do the work. To limit the number of returned rows, use ENABLE (RETURN_TOP N).
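Note that a global ORDER BY with RETURN_TOP only limits the overall result set. If you need the single latest row per r_object_id, one option is a correlated subquery; a minimal sketch, assuming date is the real attribute name and that your DQL version accepts correlated subqueries:
SELECT t1.r_object_id, t1.date, t1.c, t1.d, t1.e, t1.f
FROM <table_name> t1
WHERE t1.date = (SELECT MAX(t2.date)
                 FROM <table_name> t2
                 WHERE t2.r_object_id = t1.r_object_id)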

Related

Can I multiply the output of a SQL query from two separate tables within the same query?

I am taking two values (A, B) from similar but different tables. E.g., A is the COUNT(*) of table R, while B is a complex calculation based on a slightly adapted table (call it S).
So I did this:
SELECT
    (SELECT COUNT(*)*60 FROM R) AS A,
    [calculation for B] AS B
FROM R
WHERE
    [modification to R to get S]
Not sure if this was the smartest way to do it (probably not; I'm a new user).
Now I want to do some multiplications:
A*B
B-(A*0.75)
B-(A*0.8)
B-(A*0.85)
etc.
Is there a way to do this within the same query?
Thanks.
The simplest way is to wrap your original query in a derived table and do the arithmetic in the outer SELECT:
SELECT A*B          AS p1,
       B - (A*0.75) AS p2,
       B - (A*0.8)  AS p3,
       B - (A*0.85) AS p4, ...
FROM (
    -- your original query producing columns A, B ...
) t
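For instance, a self-contained sketch with made-up values standing in for A and B, so the pattern can be tested on its own:
SELECT A*B          AS p1,
       B - (A*0.75) AS p2,
       B - (A*0.8)  AS p3,
       B - (A*0.85) AS p4
FROM (SELECT 60 AS A, 100 AS B) AS t;
-- returns p1 = 6000, p2 = 55, p3 = 52, p4 = 49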

Oracle SQL how to get rowmax?

I likely lack the correct vocabulary, which is why my Google searches were unsuccessful. I want to achieve the following row-wise max operation: create a new column that is, for each row, the maximum of two existing columns, bounded below by 0.
SELECT a,b, rowmax(a,b,0) as c
FROM ...
Use greatest():
greatest(a, b, 0) as c
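A quick illustration against DUAL with hypothetical values (note that GREATEST falls back to the 0 bound when both columns are negative, and returns NULL if any argument is NULL):
SELECT a, b, GREATEST(a, b, 0) AS c
FROM (SELECT -5 AS a, 3 AS b FROM dual);
-- c = 3; with a = -5 and b = -2 the query would return c = 0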

NTH in Legacy SQL in BigQuery doesn't work as expected

I have this query written in Legacy SQL:
select
nth(1, a) first_a,
nth(1, b) first_b
from (
select *
from
(select 12 a, null b),
(select null a, 54 b)
)
As a result, I was expecting one row with values (12, null), but I got (12, 54) instead. The documentation for NTH says:
NTH(n, field)
Returns the nth sequential value in the scope of the function, where n is a constant. The NTH function starts counting at 1, so there is no zeroth term. If the scope of the function has less than n values, the function returns NULL.
There is nothing indicating that nulls would be ignored.
Is this a bug in BigQuery?
This is the important part in the documentation:
in the scope of the function
The scope is normally a "record" (in legacy SQL terms), where you fetch the nth value within a repeated field. As written, though, this query has the effect of using NTH as an aggregate function. The values in the group have no well-defined order, but it so happens that NULL is ordered after the non-null values, so NTH(1, ...) gives a non-null value. Try using 2 as the ordinal instead, for instance:
select
nth(2, a) first_a,
nth(2, b) first_b
from (
select *
from
(select 12 a, null b),
(select null a, 54 b)
)
This returns null, null as output.
With that said, to ensure well-defined semantics in your queries, the best option is to use standard SQL instead. Some analogues to the NTH operator when using standard SQL are:
The array bracket operator, e.g. array_column[OFFSET(0)] to get the first element in an array.
The NTH_VALUE window function, e.g. NTH_VALUE(x, 1) OVER (PARTITION BY y ORDER BY z). See also FIRST_VALUE and LAST_VALUE.
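For instance, a sketch of a deterministic per-group "first value" in standard SQL, using ARRAY_AGG with an explicit ORDER BY (table and column names are hypothetical):
SELECT y,
       ARRAY_AGG(x IGNORE NULLS ORDER BY z)[SAFE_OFFSET(0)] AS first_x
FROM t
GROUP BY y;
Unlike legacy NTH, the ORDER BY inside ARRAY_AGG makes the result well defined, and SAFE_OFFSET returns NULL instead of an error when the array is empty.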

Access SQL: date-wise query with summarized column values

Below is my source data.
By using the query below, I can get summarized data for '17-09-2016'.
SQL query:
SELECT key_val.A, key_val.B, key_val.C, key_val.D, Sum(IIf(key_val.Store_date=#9/17/2016#,key_val.Val,0)) AS [17-09-2016]
FROM key_val
GROUP BY key_val.A, key_val.B, key_val.C, key_val.D;
but I want the output to look like this:
Specifically, I need summarized data for columns A, B, C and for the '17-09-2016' date. In Excel we would apply a SUMIFS formula to get the desired output, but in Access SQL I am not sure how to form the query to get the same data.
Can anyone assist me with how to achieve the above result using an Access query?
Specifically, I need summarized data for columns A, B, C and for the '17-09-2016' date
I'm not sure where you get the 34 figure from; is it the sum of the first two rows, even though the values in A, B, C & D are different (so the grouping won't work)?
Making an assumption that you want the values summed where all the other fields are equal (A, B, C, D & Store_Date):
This query will give you the totals, but not in the format you're after:
SELECT A, B, C, D, SUM(val) As Total, Store_Date
FROM key_val
WHERE Store_date = #9/17/2016#
GROUP BY A,B,C,D, Store_Date
This SQL will give you the same, but for all dates (just remove the WHERE clause).
SELECT A, B, C, D, SUM(val) As Total, Store_Date
FROM key_val
GROUP BY A,B,C,D, Store_Date
ORDER BY Store_Date
This will give the exact table shown in your example:
TRANSFORM Sum(val) AS SumOfValue
SELECT A, B, C, D
FROM key_val
WHERE Store_date = #9/17/2016#
GROUP BY A, B, C, D
PIVOT Store_Date
Again, just remove the WHERE clause to list all dates in the table.

SQLite3 How to calculate differential changes

I have a medium-size database (400,000 rows at the time) containing a Measurements table with the following schema:
CREATE TABLE `Measurements` (
    `timestamp` timestamp,
    `timetick`  INTEGER,
    `Sensor1`   REAL,
    `Sensor2`   REAL,
    PRIMARY KEY(`timestamp`));
As timestamp increases (the increases are not constant; there are gaps and delays, but timestamps are guaranteed to be monotonic), timetick normally increases too, but there are cases where it resets to a small but unpredictable value. I need to find all such rows. I have used the following query (inspired by Finding the difference in rows in query using SQLite):
select r0, r1, a, b, rd, d from
    (select M0.rowid as r0,
            M1.rowid as r1,
            M0.timestamp as a,
            M1.timestamp as b,
            min(M1.timestamp) - M0.timestamp as rd,
            M1.timetick - M0.timetick as d
     from Measurements M0, Measurements M1
     where M1.timestamp > M0.timestamp
     group by M0.timestamp
    ) where d < 0;
This works but takes hours, while the same job in Python finishes in 30 seconds. Yet this is a very common task; scientists calculate derivatives all the time, and financial professionals calculate price differences. There should be an efficient way to do it.
I would appreciate your help and comments.
A join with a GROUP BY is hard to optimize.
It is better to use a correlated subquery to find the respective next row:
SELECT m0.rowid AS r0,
       m1.rowid AS rn,
       m0.timestamp AS a,
       m1.timestamp AS b,
       m1.timestamp - m0.timestamp AS rd,
       m1.timetick - m0.timetick AS d
FROM (SELECT rowid,        -- the core query: attaches to each row
             timestamp,    -- the rowid of its next row
             timetick,
             (SELECT rowid
              FROM Measurements
              WHERE timestamp > m.timestamp
              ORDER BY timestamp
              LIMIT 1) AS r1
      FROM Measurements AS m
     ) AS m0
JOIN Measurements AS m1 ON m0.r1 = m1.rowid
WHERE m1.timetick - m0.timetick < 0;
If the timestamp is an integer, make that column an INTEGER PRIMARY KEY to avoid an extra index lookup.
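On SQLite 3.25 or later, a window function is another way to sketch this; LAG avoids the self-join entirely (same Measurements schema assumed):
SELECT *
FROM (SELECT rowid,
             timestamp,
             timetick,
             timetick - LAG(timetick) OVER (ORDER BY timestamp) AS d
      FROM Measurements)
WHERE d < 0;
The first row has no predecessor, so its difference is NULL and it is filtered out by the WHERE clause.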