I have a table that lists bugs along with information about who each one was assigned to and who resolved it.
Bugs | Assigned to | Resolved by
-----|-------------|------------
1    | Dev1        |
2    | Dev2        |
3    | Dev3        |
If, after a specific number of days (e.g., 14 days), the field 'Resolved by' is still blank, I want it to be updated with the value from the column 'Assigned to'.
I was trying to create a view with a timestamp, but I'm not sure how to specify the exact number of days and then update the value from another column.
You can do this in a view with something like this:
create view v_bugs as
    select bugs, assigned_to,
           coalesce(resolved_by,
                    case when createdAt <= sysdate - interval '14' day
                         then assigned_to
                    end
           ) as resolved_by
    from bugs;
This assumes, of course, that you have a column that specifies when each row was inserted.
Unfortunately, Oracle does not allow sysdate in a virtual column, so you cannot use generated always as to define the column.
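If you would rather persist the value than derive it in a view, a minimal sketch (again assuming a createdAt column recording when each row was inserted) is a periodic UPDATE, e.g. run from a DBMS_SCHEDULER job:
-- Backfill resolved_by for rows still unresolved after 14 days.
update bugs
   set resolved_by = assigned_to
 where resolved_by is null
   and createdAt <= sysdate - interval '14' day;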
I have two tables, temp_N and temp_C. The table script and data are given below. I am using Teradata.
Table Script and data
The first image is temp_N and the second one is temp_C.
Now I will try to explain my requirement. The key column for these two tables is 'nbr'. The two tables contain all the changes for a particular period of time (this is sample data; both tables get loaded daily based on the updates). I need to merge these two tables into one table with the date ranges assigned correctly. The expected result is given below. To explain the logic behind it: in the first row of the expected result, fstrtdate is the least date from the two tables, which is 2022-01-31, and for the same row the end date is 2022-07-10 because there is a change in the cpnrate on 2022-07-11. The second row starts with 2022-07-11, giving the changed cpnrate. In the third row there is a change in ntr on 2022-08-31, and the data is updated accordingly. Please note that all of these are date fields; there won't be any timestamps, so please ignore the timestamps in the screenshots.
Now I would like to know how to achieve this in SQL, or whether it is possible to achieve at all?
You can combine all the changes into a single table and order by effective start date (fstrtdate). Then you can compute the effective end date as the day prior to the next change, and where one of the data values is NULL, use LAG ... IGNORE NULLS to "bring down" the most recent non-NULL value:
SELECT nbr, fstrtdate,
       PRIOR(LEAD(fstrtdate) OVER (PARTITION BY nbr ORDER BY fstrtdate)) AS fenddate,
       COALESCE(ntr, LAG(ntr) IGNORE NULLS OVER (PARTITION BY nbr ORDER BY fstrtdate)) AS ntr,
       COALESCE(cpnrate, LAG(cpnrate) IGNORE NULLS OVER (PARTITION BY nbr ORDER BY fstrtdate)) AS cpnrate
FROM (
    SELECT nbr, fstrtdate, MAX(ntr) AS ntr, MAX(cpnrate) AS cpnrate
    FROM (
        SELECT nbr, fstrtdate, ntr, NULL (DECIMAL(9,2)) AS cpnrate
        FROM temp_n
        UNION ALL
        SELECT nbr, fstrtdate, NULL (DECIMAL(9,2)) AS ntr, cpnrate
        FROM temp_c
    ) AS COMBINED
    GROUP BY 1, 2
) AS UNIQUESTART
ORDER BY fstrtdate;
The innermost SELECTs make the structure the same for the data from both tables, with NULLs for the values that come from the other table, so we can UNION ALL them into one COMBINED derived table with rows for both types of change events. Note that you should explicitly assign a datatype for those added NULL columns to match the datatype of the corresponding column in the other table; I somewhat arbitrarily chose DECIMAL(9,2) above since I didn't know the real datatypes. They can't be INT as in the example, though, since that would truncate the decimal part. There's no reason to carry along the original fenddate; a new non-overlapping fenddate is computed in the outer query.
The intermediate GROUP BY is only to combine what would otherwise be two rows in the special case where both ntr and cpnrate changed on the same day for the same nbr. That case is not present in the example data - the dates are already unique - but it might be necessary to do this when processing the full table. The syntax requires an aggregate function, but there should be at most two rows for a (nbr, fstrtdate) group; and when there are two rows, in each of the other columns one row has NULL and the other row does not. In that case either MIN or MAX will return the non-NULL value.
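For illustration (hypothetical values, not from the example data), if both measures changed on 2022-09-01 for nbr 233, the UNION ALL would produce two rows that the GROUP BY collapses into one:
-- nbr  fstrtdate   ntr        cpnrate
-- 233  2022-09-01  500000.00  NULL      (from temp_n)
-- 233  2022-09-01  NULL       3.40      (from temp_c)
-- after GROUP BY 1, 2:
-- 233  2022-09-01  500000.00  3.40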
In the outer query, the COALESCEs return the value for that column from the current row in the UNIQUESTART derived table if it's not NULL; otherwise LAG is used to obtain the value from a previous row.
The first two rows in the result won't match the screenshot above but they do accurately reflect the data provided - specifically, the example does not identify a cpnrate for any date prior to 2022-05-11.
nbr | fstrtdate  | fenddate   | ntr        | cpnrate
----|------------|------------|------------|--------
233 | 2022-01-31 | 2022-05-10 | 311,000.00 | NULL
233 | 2022-05-11 | 2022-07-10 | 311,000.00 | 3.31
... | ...        | ...        | ...        | ...
So I have a table in Snowflake that I uploaded, and I made the mistake of uploading some data with an incorrect datetime. The first few columns of the table are different JSON results that correspond to the date they were fetched.
Example:
    name              date
0   [{},{},{},{},{}]  8/20/2019
1   [{}]              12/22/2019
2   [{},{},{},{}]     11/15/2019
3   [{},{},{}]        1/10/2019
4   [{},{},{},{}]     12/1/2019
The INSERT was pretty straightforward: I staged the file and inserted it into an already created table.
I want to go in and shift the date by two days, and I will make a note to change how I am generating the date and data so I don't have to do this manually anymore.
Is it possible to correct the column for a certain set of ids instead of a date range like I did below?
ALTER TABLE json_date ALTER COLUMN date Where date >"12-01-2019" and date < "12-30-2019" dateadd(day,2,date);
If this is simply a column, then you would use update:
update json_date
set date = dateadd(day, 2, date)
where date > '2019-12-01';
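To target a certain set of ids rather than a date range, the same UPDATE just needs a different WHERE clause. A sketch, assuming the table has an id column (the question implies a key but doesn't show one):
update json_date
set date = dateadd(day, 2, date)
where id in (101, 102, 103);  -- substitute your actual key column and ids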
The gem we have installed (Blazer) on our site limits us to one query.
We are trying to write a query to show how many hours each employee has for the past 10 days. The first column would have employee names and the rest would have hours with the column header being each date. I'm having trouble figuring out how to make the column headers dynamic based on the day. The following is an example of what we have working without dynamic column headers and only using 3 days.
SELECT
pivot_table.*
FROM
crosstab(
E'SELECT
"User",
"Date",
"Hours"
FROM
(SELECT
"q"."qdb_users"."name" AS "User",
to_char("qdb_works"."date", \'YYYY-MM-DD\') AS "Date",
sum("qdb_works"."hours") AS "Hours"
FROM
"q"."qdb_works"
LEFT OUTER JOIN
"q"."qdb_users" ON
"q"."qdb_users"."id" = "q"."qdb_works"."qdb_user_id"
WHERE
"qdb_works"."date" > current_date - 20
GROUP BY
"User",
"Date"
ORDER BY
"Date" DESC,
"User" DESC) "x"
ORDER BY 1, 2')
AS
pivot_table (
"User" VARCHAR,
"2017-10-06" FLOAT,
"2017-10-05" FLOAT,
"2017-10-04" FLOAT
);
This results in
| User | 2017-10-05 | 2017-10-04 | 2017-10-03 |
|------|------------|------------|------------|
| John | 1.5 | 3.25 | 2.25 |
| Jill | 6.25 | 6.25 | 6 |
| Bill | 2.75 | 3 | 4 |
This is correct, but tomorrow, the column headers will be off unless we update the query every day. I know we could pivot this table with date on the left and names on the top, but that will still need updating with each new employee – and we get new ones often.
We have tried using functions and queries in the "AS" section with no luck. For example:
AS
pivot_table (
"User" VARCHAR,
current_date - 0 FLOAT,
current_date - 1 FLOAT,
current_date - 2 FLOAT
);
Is there any way to pull this off with one query?
You could select a row for each user, and then per column sum the hours for one day:
with user_work as
(
    select u.name as user_name
    ,      w.date as work_date
    ,      w.hours
    from   qdb_works w
    join   qdb_users u
    on     u.id = w.qdb_user_id
    where  w.date >= current_date - 2
)
-- one output row per user; each column sums one day's hours
select user_name
,      sum(case when work_date = current_date     then hours end) as Today
,      sum(case when work_date = current_date - 1 then hours end) as Yesterday
,      sum(case when work_date = current_date - 2 then hours end) as DayBeforeYesterday
from   user_work
group by
       user_name;
It's often easier to return a list and pivot it client side. That also allows you to generate column names with a date.
Is there any way to pull this off with one query?
No, because a fixed SQL query cannot have any variability in its output columns. The SQL engine determines the number, types and names of every column of a query before executing it, without reading any data except in the catalog (for the structure of tables and other objects), execution being just the last of 5 stages.
A single-query dynamic pivot, if such a thing existed, couldn't be prepared, since a prepared query always has the same result structure, whereas by definition a dynamic pivot doesn't, as the rows that pivot into columns can change between executions. Again, that would be at odds with the Prepare-Bind-Execute model.
You may find some limited workarounds and additional explanations in other questions, for example: Execute a dynamic crosstab query. But since you mentioned specifically:
The gem we have installed (Blazer) on our site limits us to one query.
I'm afraid you're out of luck. Whatever the workaround, it always needs at least one step with a query to figure out the columns and generate a dynamic query from them, and a second step executing the query generated in the previous step.
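To illustrate the two-step approach (a sketch, assuming PostgreSQL 9.1+ for format()): the first query builds the column definition list for the last 10 days, and the application then splices the result into the crosstab query's AS pivot_table (...) clause before executing it as the second step:
-- Step 1: generate the column list, e.g. '"2017-10-06" FLOAT, "2017-10-05" FLOAT, ...'
SELECT string_agg(format('%I FLOAT', d::date), ', ' ORDER BY d DESC) AS column_defs
FROM generate_series(current_date - 9, current_date, interval '1 day') AS g(d);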
Date      jm   Text
--------  ---  ----
6/3/2015  ne   Good
6/4/2015  ne   Good
6/5/2015  ne   Same
6/8/2015  ne   Same
I want to count how often the value "Same" occurs on consecutive days, counting back from the current date. I don't want to count the value across the whole table; as of the current date, the count in the example above is 2.
It is very important that the count is zero whenever "Same" does not occur on the current date, as in the example below.
The query has to ignore the weekend (6 and 7 June).
Date      jm   Text
--------  ---  ----
6/3/2015  ne   Same
6/4/2015  ne   Same
6/5/2015  ne   Good
6/8/2015  ne   Good
In this example the count is zero
Okay, I'm starting to get the picture. At first I thought you wanted to count by jm, but now it seems you want to count by Text = 'Same'. Anyway, that's what this query should do. It gets the row for the current date, connects all the previous qualifying rows, and counts them. It also shows whether the current text (and that of the connected rows) is 'Same'.
So the query will return one row (if there is one for today), which will show the date, jm and Text of the current date, the number of consecutive days for which the Text has been the same (just in case you want to know how many days it is 'Good'), and the number of days (either 0 or the same as the other count) for which the Text has been 'Same'.
I hope this query is right, or at least it gives you an idea of how to solve the problem using CONNECT BY. I should mention I based the 'Friday-detection' on this question.
Also, I don't have Oracle at hand, so please forgive me for any minor syntax errors.
WITH
VW_SAMESTATUSES AS
( SELECT t.*
  FROM YourTable t
  START WITH -- Start with the row for today
    t.Date = trunc(sysdate)
  CONNECT BY -- Connect to the previous row, which has a lower date.
    -- Note that PRIOR refers to the prior record, which is
    -- actually the NEXT day. :)
    PRIOR t.Date = t.Date +
      CASE MOD(TO_CHAR(t.Date, 'J'), 7) + 1
        WHEN 5 THEN 3 -- Friday, so add 3 to skip the weekend
        ELSE 1        -- Other days, so add one
      END
    -- And the Text also has to match that of the next day.
    AND t.Text = PRIOR t.Text)
SELECT MAX(s.Date) AS CurrentDate,
       s.jm,
       MAX(s.Text) AS CurrentText, -- Not really MAX; they are all the same
       COUNT(*) AS ConsecutiveDays,
       COUNT(CASE WHEN s.Text = 'Same' THEN 1 END) AS SameCount
FROM VW_SAMESTATUSES s
GROUP BY s.jm
This recursive query (recursive subquery factoring is available from Oracle version 11g Release 2) might be useful:
with s(tcode, tdate) as (
select tcode, tdate from test where tdate = date '2015-06-08'
union all
select t.tcode, t.tdate from test t, s
where s.tcode = t.tcode
and t.tdate = s.tdate - decode(s.tdate-trunc(s.tdate, 'iw'), 0, 3, 1) )
select count(1) cnt from s
SQLFiddle
I prepared sample data according to your original question, without the later edits; you can see it in the attached SQLFiddle. Additional conditions for the column 'Text' are very simple: just add something like ... and Text = 'Same' in the where clauses.
In the current version, the query counts the number of previous days, starting from the given date (change it in line 2), where the dates are consecutive (excluding weekend days) and the value in the column tcode is the same for all days.
The part decode(s.tdate - trunc(s.tdate, 'iw'), 0, 3, 1) subtracts days depending on whether the date is a Monday or another day, and should work independently of NLS settings.
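As a quick illustration of that expression (a sketch using any Monday, here 2015-06-08):
-- Monday: tdate - trunc(tdate, 'iw') = 0, so decode returns 3,
-- stepping back to the previous Friday instead of Sunday.
select date '2015-06-08'
       - decode(date '2015-06-08' - trunc(date '2015-06-08', 'iw'), 0, 3, 1)
       as prev_workday
from dual;
-- => 2015-06-05, a Friday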
I have some SQL code that is inserting values from another (non-SQL-based) system. One of the values I get is a timestamp.
I can get multiple inserts that have the same timestamp (albeit different values for other fields).
My problem is that I am trying to get the first insert happening every day (based on the timestamp) since a particular day (i.e., give me the first insert of each day since January 28, 2007...).
My code to get the first timestamp of every day is as follows:
SELECT MIN(my_timestamp) AS first_timestamp
FROM my_schema.my_table
WHERE my_col1 = 'WHATEVER'
AND my_timestamp > timestamp '2010-Jul-27 07:45:24' - INTERVAL '365 DAY'
GROUP BY DATE (my_timestamp);
This delivers the list of times available. But when I join against these times, I can get several rows back, as there are lots of rows that match these times. So for 365 days, I may get 5,000 rows (I could be inserting 100 rows at 00:00:00 every day).
Assuming, in the example above, that my_table has columns my_col1 and my_col2, how can I get exactly 365 rows that contain my_col1 and my_col2? It doesn't matter which row I get back if there are multiple rows for a date; any row will suffice.
It's an odd question. The overall problem is: given a timestamp, how can one get one row per timestamp even if there are multiple rows that share said timestamp (assuming there is no other priority)?
Thanks in advance for the help.
EDIT:
So, let's say for example, this table has the following columns: my_col1, my_col2, and my_timestamp.
Here are example values (in order of my_col1 - my_col2 - my_timestamp):
'my_val1' - 10 - '2010-07-01 01:01:01'
'my_val2' - 11 - '2010-07-01 01:01:01'
'my_val3' - 12 - '2010-07-01 01:01:01'
'my_val4' - 13 - '2010-07-01 01:01:02'
'my_val5' - 14 - '2010-07-02 01:01:01'
'my_val6' - 15 - '2010-07-02 01:01:01'
'my_val7' - 16 - '2010-07-03 01:01:01'
In the end, I would want only 3 rows: one with timestamp '2010-07-01 01:01:01', one with '2010-07-02 01:01:01', and one with '2010-07-03 01:01:01'. The third one is easy, since there is only one row with that last timestamp, but the first two are the tricky ones; the SQL I posted above will ignore the row with 'my_val4'.
I need a query that will return all of the columns, not just the dates.
How would I get SQL to give me either the first or the last of the rows matching that timestamp? (It doesn't matter which; I just need one row per first-of-day timestamp.)
select distinct on (date(my_timestamp)) *
from my_table
order by date(my_timestamp), my_timestamp
This selects all columns, exactly one row per date(my_timestamp). The single row per day is the first row for the group, as determined by order by (so that's the row with minimal my_timestamp).
Of course you can add whatever joins, wheres etc. you need. But this is the stub you're looking for.
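Applied to the filters from the question, a sketch might look like:
-- DISTINCT ON keeps one row per day; the ORDER BY makes it the
-- row with the minimal my_timestamp within each day.
select distinct on (date(my_timestamp)) my_col1, my_col2, my_timestamp
from my_schema.my_table
where my_col1 = 'WHATEVER'
  and my_timestamp > timestamp '2010-07-27 07:45:24' - interval '365 days'
order by date(my_timestamp), my_timestamp;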
The solution is to use SQL's DISTINCT keyword (http://www.sql-tutorial.com/sql-distinct-sql-tutorial/):
SELECT DISTINCT MIN(my_timestamp) AS first_timestamp
FROM my_schema.my_table
WHERE my_col1 = 'WHATEVER'
  AND my_timestamp > timestamp '2010-Jul-27 07:45:24' - INTERVAL '365 DAY'
GROUP BY DATE(my_timestamp);
I know you already have an answer, but I still don't understand why you have mentioned a join in your question. Why not just include the rest of the columns in your query, like this:
SELECT MIN(my_timestamp) AS first_timestamp, my_col1, my_col2
FROM my_table
GROUP BY DATE(my_timestamp);
This works in MySQL. Does it not return the expected result in PostgreSQL?