I need to write a query that shows the changes certain fields have undergone within a given date period.
Example: from the CAM_CONCEN table, return the records where an ACCOUNT_NUMBER had a modification to its CONTACT field in the 6 months before a given date.
I would be grateful if you can guide me.
You can use LAG() to peek at the previous row of a particular subset of rows (the same account in this case).
For example:
select *
from (
    select c.*,
           lag(contact) over (partition by account_number
                              order by change_date) as prev_contact
    from cam_concen c
) x
where contact <> prev_contact
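The question also asks to restrict this to changes in the last six months. A minimal sketch of adding that filter, assuming the same change_date column used in the ORDER BY and Oracle-style date functions (ADD_MONTHS / SYSDATE); adjust for your DBMS:
select *
from (
    select c.*,
           lag(contact) over (partition by account_number
                              order by change_date) as prev_contact
    from cam_concen c
) x
where contact <> prev_contact
  -- assumption: Oracle-style syntax; only keep changes from the last 6 months
  and change_date >= add_months(sysdate, -6)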
I have a sales data table with cust_ids and their transaction dates.
I want to create a table that stores, for every customer, their cust_id, their last purchase date (based on the transaction dates) and the count of times they have purchased.
I wrote this code:
SELECT
cust_xref_id, txn_ts,
DENSE_RANK() OVER (PARTITION BY cust_xref_id ORDER BY CAST(txn_ts as timestamp) DESC) AS rank,
COUNT(txn_ts)
FROM
sales_data_table
But as I understand it, the above code would give an output like this (attached example picture).
How do I modify the code to get an output like this instead:
I am a beginner in SQL queries and would really appreciate any help! :)
This would be an aggregation query, which changes the table key from (cust_xref_id, txn_ts) to (cust_xref_id):
SELECT
cust_xref_id,
MAX(txn_ts) as last_purchase_date,
COUNT(txn_ts) as count_purchase_dates
FROM
sales_data_table
GROUP BY
cust_xref_id
You are looking for the last purchase date and the count of distinct transaction dates (so if a person buys twice on the same date, it is counted as a single time).
Although you mentioned you want a count of dates, the sample data shows you want a count of distinct dates: customer 284214 transacted 9 times, but distinct gives you 7.
So here is the SQL you can use to get your result:
SELECT
cust_xref_id,
MAX(txn_ts) as last_purchase_date,
COUNT(distinct txn_ts) as count_purchase_dates -- note: DISTINCT counts each transaction date only once
FROM sales_data_table
GROUP BY 1
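If you literally need this stored as its own table (as the question mentions), here is a minimal sketch, assuming a dialect that supports CREATE TABLE ... AS (MySQL, Postgres, Oracle; SQL Server would use SELECT ... INTO instead) and a hypothetical target name customer_purchase_summary:
-- hypothetical table name; CREATE TABLE ... AS is not available in every DBMS
CREATE TABLE customer_purchase_summary AS
SELECT
    cust_xref_id,
    MAX(txn_ts) AS last_purchase_date,
    COUNT(DISTINCT txn_ts) AS count_purchase_dates
FROM sales_data_table
GROUP BY cust_xref_id;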
Scenario: Person A takes test B three times in the span of two years, so there will be three entries for that person. However, I need to write a query that tells me the number of persons that have taken a test, counting just one test per person (the latest one). The problem is that the test date is split across two columns, Test_Month (xx) and Test_Year (xx).
What I need: I need to pull only the test with the most recent test month and year, basically the most recent test they took. (For example (see pic below), I need the record for 2/20 only.)
I have no idea how to retrieve only one record per person, by the last test they took, based on the separate columns Test_Month and Test_Year.
You can use window functions:
select *
from (
    select
        t.*,
        row_number() over (
            partition by last_name, first_name
            order by test_year desc, test_month desc
        ) rn
    from mytable t
) t
where rn = 1
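If what you ultimately need is the number of persons (one row per person, latest test only), you can wrap the same filtered result in an aggregate. A sketch reusing the assumed mytable / last_name / first_name names from above:
select count(*) as persons_tested
from (
    select
        t.*,
        row_number() over (
            partition by last_name, first_name
            order by test_year desc, test_month desc
        ) rn
    from mytable t
) t
where rn = 1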
I have a problem retrieving the last value of every category from my table, which must not be sorted. For example, I want the daily inventory value of the last appearance of nov-1 in the table, without sorting the daily inventory column, i.e. "471". Is there a way to achieve this?
Similarly, I need to get the last daily inventory value of the next week, and I should be able to do this for multiple items in the table too.
P.S.: nov-1 represents the 1st week of November.
Question from the comments of the initial post: will I be able to achieve what I need if I introduce an id column? If so, how can I do it?
Here's a way to do it (no guarantee that it's the most efficient way to do it)...
;WITH SetID AS
(
SELECT ROW_NUMBER() OVER(PARTITION BY Week ORDER BY Week) AS rowid, * FROM <TableName>
),
MaxRow AS
(
SELECT LastRecord = MAX(rowid), Week
FROM SetID
GROUP BY Week
)
SELECT a.*
FROM SetID a
INNER JOIN MaxRow b
ON a.rowid = b.LastRecord
AND b.Week = a.Week
ORDER BY a.Week
I feel like there's more to the table, though, and this is also untested on large amounts of data. I'd be afraid that a different rowid could be assigned on each run (ordering by the same column that you partition by leaves the order within each partition undefined, so ROW_NUMBER() may return unexpected data).
I suppose this example is to reinforce the idea that, if you had a dedicated row id on the table, it would be possible. Also, I believe @Larnu's comment on your original post - that introducing an ID column which retains the current order would mean reinserting all your data - is a concern too.
Here's a SQLFiddle example.
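On the follow-up about introducing an id column: yes, an identity/auto-increment column that reflects insertion order makes this much more direct. A sketch, assuming a hypothetical Id identity column alongside the Week column used above:
-- assumes a hypothetical Id column (identity) that preserves insertion order
SELECT *
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY Week ORDER BY Id DESC) AS rn
    FROM <TableName>
) t
WHERE rn = 1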
I have some records that track inquiries by DATETIME. There is a glitch in the system, and sometimes a record is entered multiple times on the same day. I have a query with a bunch of correlated subqueries attached to these records, but the numbers are off because, whenever that glitch occurred, those leads show up multiple times. I need the first entry of the day; I tried fooling around with MIN but couldn't quite get it to work.
I currently have this, though I am not sure I am on the right track:
SELECT SL.UserID, MIN(SL.Added) OVER (PARTITION BY SL.UserID)
FROM SourceLog AS SL
Here's one approach using row_number():
select *
from (
select *,
row_number() over (partition by userid, cast(added as date) order by added) rn
from sourcelog
) t
where rn = 1
You could use GROUP BY along with MIN to accomplish this.
Depending on how your data is structured: if you assign a unique sequential number to each record created, you could just return the lowest number created per day. Otherwise you would need to return the ID of the record with the earliest DATETIME value per day.
--Assumes sequential IDs
select
    min(Id)
from
    [YourTable]
group by
    --the conversion is used to strip the time value out of the date/time
    convert(date, [YourDateTime])
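To return the full rows rather than just the IDs, you can join the per-day minimum back to the table. A sketch reusing the assumed [YourTable], Id and [YourDateTime] names:
--returns the complete first-of-day rows (still assumes sequential IDs)
select t.*
from [YourTable] t
inner join (
    select min(Id) as FirstId
    from [YourTable]
    group by convert(date, [YourDateTime])
) firsts
    on t.Id = firsts.FirstId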
I have a database consisting of employees (one table) which can be assigned to groups (another table). Both are joined together by a third table, employee-to-group, which lists the group id, the employee id and the start date of the assignment.
An employee always has to be assigned to a group, but the assignments can change daily. One employee could work in group A for a day, then change to group B, and work in group C only a week later.
My task is to find out which employees are assigned to a certain group, given by its name, on any given date. So the input should be: group name, date; and I want the output to be the data of all employees that are part of that group at the given moment in time.
Here's an SQL fiddle with some test data:
http://sqlfiddle.com/#!9/6d0bb
I recreated the database with MySQL statements because I couldn't figure out the Oracle statements, I'm sorry.
As you can see from the test data, some employees may never change groups, while others change frequently. There are also employees who are planned to change assignments in the future; the query has to account for that.
Because the application is a legacy one, the values (especially in the date field) are questionable. They are given as "days since the 1st of January, 1990", so the entry "9131" means "1st of January, 2015". 9468 would be today (2015-12-04) and 9496 would be 2016-01-01.
What I already have is code to find out the "date value" for any given date in what I call the "legacy format" of the application I'm working with (here I've just used CURRENT_DATE):
SELECT FLOOR(CURRENT_DATE - TO_DATE('1990-01-01', 'YYYY-MM-DD')) AS diffdate FROM dual
For finding out which group a certain employee is assigned to, I tried:
SELECT * FROM history h
WHERE emp_nr = 1 AND valid_from <= 9131
ORDER BY valid_from DESC
FETCH FIRST ROW ONLY;
which should return the group that an employee is assigned to on the 1st of January 2015.
What I need help with is creating a statement that joins all the tables and does the same for a whole group instead of for only one employee (as there are thousands of employees in the database and I only want the data of at most 10 groups).
I'm thankful for any kind of pointers in the right direction.
Use row_number to rank your history and get the latest group, just as you did with your FETCH FIRST query:
select *
from
(
select
h.*,
row_number() over (partition by emp_nr order by valid_from desc) as rn
from history h
where valid_from <= 9131
)
where rn = 1
You can then join this result with other tables.
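For the full input (group name plus date) in one statement, the ranked history can be joined to the employee and group tables. A sketch with hypothetical table and column names - employees(emp_nr, ...), groups(group_id, group_name), history(emp_nr, group_id, valid_from) - and the date already converted to the legacy day number (9131 here):
select e.*, g.group_name
from (
    select
        h.*,
        row_number() over (partition by emp_nr order by valid_from desc) as rn
    from history h
    where valid_from <= 9131          -- legacy day number for the requested date
) cur
join employees e on e.emp_nr = cur.emp_nr
join groups g on g.group_id = cur.group_id
where cur.rn = 1
  and g.group_name = 'Group A'        -- hypothetical group name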