Converting day datetime to timestamp in Snowflake - sql

I have datetime data in a table in Snowflake and I want to convert it into a timestamp.
|--------------------------------|
| Date                           |
|--------------------------------|
| Wed 22 Mar 2022 12:51:21 -0500 |
| Sun 28 Apr 2022 02:21:19 -0500 |
| Mon 21 Mar 2021 18:31:59 -0500 |
| Fri 12 Jan 2022 19:41:46 -0500 |
| Thu 09 Feb 2022 23:51:17 -0500 |
| Tue 17 May 2021 07:61:07 -0500 |
| Wed 07 Oct 2022 01:71:01 -0500 |
|--------------------------------|
The output I want is:
|---------------------------|
| Date                      |
|---------------------------|
| 03/22/2022 12:51:21 -0500 |
| 04/28/2022 02:21:19 -0500 |
| 03/21/2021 18:31:59 -0500 |
| 01/12/2022 19:41:46 -0500 |
| 02/09/2022 23:51:17 -0500 |
| 05/17/2021 07:61:07 -0500 |
| 10/07/2022 01:71:01 -0500 |
|---------------------------|
The methods I tried:
select to_date(date) from my_table
select to_date(date, 'mm/dd/yyyy h24:mi:ss') from my_table
select to_timestamp_tz(date) from my_table
etc. None of the above conversions worked.

Using the correct format tokens, your valid datetime strings can be parsed. Whether or not you want to keep the timezone part on the resulting timestamp determines which function you should use.
SELECT column1
    ,TRY_TO_TIMESTAMP_TZ(column1, 'DY DD MON YYYY HH:MI:SS TZHTZM') AS tz
    ,TRY_TO_TIMESTAMP(column1, 'DY DD MON YYYY HH:MI:SS TZHTZM') AS "default"
    ,TRY_TO_TIMESTAMP_NTZ(column1, 'DY DD MON YYYY HH:MI:SS TZHTZM') AS ntz
FROM VALUES
('Wed 22 Mar 2022 12:51:21 -0500'),
('Sun 28 Apr 2022 02:21:19 -0500'),
('Mon 21 Mar 2021 18:31:59 -0500'),
('Fri 12 Jan 2022 19:41:46 -0500'),
('Thu 09 Feb 2022 23:51:17 -0500'),
('Tue 17 May 2021 07:61:07 -0500'),
('Thu 07 Oct 2022 01:71:01 -0500')
gives:
COLUMN1                        | TZ                            | DEFAULT                 | NTZ
-------------------------------+-------------------------------+-------------------------+------------------------
Wed 22 Mar 2022 12:51:21 -0500 | 2022-03-22 12:51:21.000 -0500 | 2022-03-22 12:51:21.000 | 2022-03-22 12:51:21.000
Sun 28 Apr 2022 02:21:19 -0500 | 2022-04-28 02:21:19.000 -0500 | 2022-04-28 02:21:19.000 | 2022-04-28 02:21:19.000
Mon 21 Mar 2021 18:31:59 -0500 | 2021-03-21 18:31:59.000 -0500 | 2021-03-21 18:31:59.000 | 2021-03-21 18:31:59.000
Fri 12 Jan 2022 19:41:46 -0500 | 2022-01-12 19:41:46.000 -0500 | 2022-01-12 19:41:46.000 | 2022-01-12 19:41:46.000
Thu 09 Feb 2022 23:51:17 -0500 | 2022-02-09 23:51:17.000 -0500 | 2022-02-09 23:51:17.000 | 2022-02-09 23:51:17.000
Tue 17 May 2021 07:61:07 -0500 | null                          | null                    | null
Thu 07 Oct 2022 01:71:01 -0500 | null                          | null                    | null
The last two rows return null because their times are invalid (minute values of 61 and 71). If you correct the times to be in the valid range, the mismatched day-of-week names are simply ignored.
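To also get the exact MM/DD/YYYY rendering the question asks for, you can format the parsed timestamp back to text with TO_VARCHAR; a small sketch building on the parse above:

SELECT TO_VARCHAR(
         TRY_TO_TIMESTAMP_TZ('Wed 22 Mar 2022 12:51:21 -0500',
                             'DY DD MON YYYY HH:MI:SS TZHTZM'),
         'MM/DD/YYYY HH24:MI:SS TZHTZM') AS formatted;
-- 03/22/2022 12:51:21 -0500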

Related

How to deduplicate table rows with the same date and keep the row with the most current date stamp?

A client (an e-commerce store) doesn't have a very well-built database. For instance, there are many users with a lot of shopping orders (= different IDs) for exactly the same products on the same day. It is obvious that these seemingly multiple orders are in many cases just one unique order; at least that's what we have decided to assume, to simplify the issue. (I am doing some basic data analytics.)
My table might look like this:
| Email               | OrderID | Order_date         | TotalAmount |
| ------------------- | ------- | ------------------ | ----------- |
| customerA#gmail.com | 1       | Jan 01 2021 1:00PM | 2000        |
| customerA#gmail.com | 2       | Jan 01 2021 1:03PM | 2000        |
| customerA#gmail.com | 3       | Jan 01 2021 1:05PM | 2000        |
| customerA#gmail.com | 4       | Jan 01 2021 1:10PM | 2000        |
| customerA#gmail.com | 5       | Jan 01 2021 1:14PM | 2000        |
| customerA#gmail.com | 6       | Jan 03 2021 3:55PM | 3000        |
| customerA#gmail.com | 7       | Jan 03 2021 4:00PM | 3000        |
| customerA#gmail.com | 8       | Jan 03 2021 4:05PM | 3000        |
| customerB#gmail.com | 9       | Jan 04 2021 2:10PM | 1000        |
| customerB#gmail.com | 10      | Jan 04 2021 2:20PM | 1000        |
| customerB#gmail.com | 11      | Jan 04 2021 2:30PM | 1000        |
| customerB#gmail.com | 12      | Jan 06 2021 5:00PM | 5000        |
| customerC#gmail.com | 13      | Jan 09 2021 3:00PM | 4000        |
| customerC#gmail.com | 14      | Jan 09 2021 3:06PM | 4000        |
And my desired result would look like this:
| Email               | OrderID | Order_date         | TotalAmount |
| ------------------- | ------- | ------------------ | ----------- |
| customerA#gmail.com | 5       | Jan 01 2021 1:14PM | 2000        |
| customerA#gmail.com | 8       | Jan 03 2021 4:05PM | 3000        |
| customerB#gmail.com | 11      | Jan 04 2021 2:30PM | 1000        |
| customerB#gmail.com | 12      | Jan 06 2021 5:00PM | 5000        |
| customerC#gmail.com | 14      | Jan 09 2021 3:06PM | 4000        |
I would guess this might be a common problem, but is there a simple solution to this?
Maybe there is, but I certainly can't seem to come up with one any time soon. I'd like to see even a complex solution, btw :-)
Thank you for any kind of help you can provide!
Do you mean this?
WITH
indata(Email,OrderID,Order_ts,TotalAmount) AS (
SELECT 'customerA#gmail.com', 1,TO_TIMESTAMP( 'Jan 01 2021 01:00PM','Mon DD YYYY HH12:MIAM'),2000
UNION ALL SELECT 'customerA#gmail.com', 2,TO_TIMESTAMP( 'Jan 01 2021 01:03PM','Mon DD YYYY HH12:MIAM'),2000
UNION ALL SELECT 'customerA#gmail.com', 3,TO_TIMESTAMP( 'Jan 01 2021 01:05PM','Mon DD YYYY HH12:MIAM'),2000
UNION ALL SELECT 'customerA#gmail.com', 4,TO_TIMESTAMP( 'Jan 01 2021 01:10PM','Mon DD YYYY HH12:MIAM'),2000
UNION ALL SELECT 'customerA#gmail.com', 5,TO_TIMESTAMP( 'Jan 01 2021 01:14PM','Mon DD YYYY HH12:MIAM'),2000
UNION ALL SELECT 'customerA#gmail.com', 6,TO_TIMESTAMP( 'Jan 03 2021 03:55PM','Mon DD YYYY HH12:MIAM'),3000
UNION ALL SELECT 'customerA#gmail.com', 7,TO_TIMESTAMP( 'Jan 03 2021 04:00PM','Mon DD YYYY HH12:MIAM'),3000
UNION ALL SELECT 'customerA#gmail.com', 8,TO_TIMESTAMP( 'Jan 03 2021 04:05PM','Mon DD YYYY HH12:MIAM'),3000
UNION ALL SELECT 'customerB#gmail.com', 9,TO_TIMESTAMP( 'Jan 04 2021 02:10PM','Mon DD YYYY HH12:MIAM'),1000
UNION ALL SELECT 'customerB#gmail.com',10,TO_TIMESTAMP( 'Jan 04 2021 02:20PM','Mon DD YYYY HH12:MIAM'),1000
UNION ALL SELECT 'customerB#gmail.com',11,TO_TIMESTAMP( 'Jan 04 2021 02:30PM','Mon DD YYYY HH12:MIAM'),1000
UNION ALL SELECT 'customerB#gmail.com',12,TO_TIMESTAMP( 'Jan 06 2021 05:00PM','Mon DD YYYY HH12:MIAM'),5000
UNION ALL SELECT 'customerC#gmail.com',13,TO_TIMESTAMP( 'Jan 09 2021 03:00PM','Mon DD YYYY HH12:MIAM'),4000
UNION ALL SELECT 'customerC#gmail.com',14,TO_TIMESTAMP( 'Jan 09 2021 03:06PM','Mon DD YYYY HH12:MIAM'),4000
)
,
-- Need a ROW_NUMBER() to identify the last row within each day (ordered descending, so the latest row gets 1).
-- We can't filter on an OLAP function directly, so it goes into a fullselect, with the WHERE condition in the final SELECT.
with_rank AS (
  SELECT
    *
  , ROW_NUMBER() OVER(
      PARTITION BY email, CAST(order_ts AS DATE)  -- partition by the calendar date; DAY() alone would collide across months
      ORDER BY order_ts DESC
    ) AS rank
  FROM indata
)
SELECT
*
FROM with_rank
WHERE rank = 1;
-- out Email | OrderID | Order_ts | TotalAmount | rank
-- out ---------------------+---------+---------------------+-------------+------
-- out customerA#gmail.com | 5 | 2021-01-01 13:14:00 | 2000 | 1
-- out customerA#gmail.com | 8 | 2021-01-03 16:05:00 | 3000 | 1
-- out customerB#gmail.com | 11 | 2021-01-04 14:30:00 | 1000 | 1
-- out customerB#gmail.com | 12 | 2021-01-06 17:00:00 | 5000 | 1
-- out customerC#gmail.com | 14 | 2021-01-09 15:06:00 | 4000 | 1
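If your database supports QUALIFY (Snowflake does, for example), the same keep-latest-row-per-day logic collapses into a single query; a sketch, with my_orders standing in for your real table name:

SELECT *
FROM my_orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY email, CAST(order_ts AS DATE)
                           ORDER BY order_ts DESC) = 1;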

How to create churn table from transactional data?

Currently my Transaction Table has each customer's transaction data for each month. Account_ID identifies the customer. Order_ID identifies each order the customer made. Reporting_week_start_date is the week, beginning on Monday, in which each transaction (Date_Purchased) falls.
How do I create a new table that identifies the customer_status after each transaction has been made? Note that the new table carries Reporting_week_start_date through to the current date, even for weeks in which no transactions were made.
Customer_Status
- New : customers who made their first paid subscription
- Recurring : customers with continuous payment
- Churned : when customers' subscriptions had expired and there's no renewal within the next month/same month
- Reactivated : customers who had churned and then returned to re-subscribe
Transaction Table
Account_ID | Order_ID | Reporting_week_start_date| Date_Purchased | Data_Expired
001 | 1001 | 31 Dec 2018 | 01 Jan 2019 | 08 Jan 2019
001 | 1001 | 07 Jan 2019 | 08 Jan 2019 | 15 Jan 2019
001 | 1001 | 14 Jan 2019 | 15 Jan 2019 | 22 Jan 2019 #Transaction 1
001 | 1001 | 21 Jan 2019 | 22 Jan 2019 | 29 Jan 2019
001 | 1001 | 28 Jan 2019 | 29 Jan 2019 | 31 Jan 2019
001 | 1002 | 28 Jan 2019 | 01 Feb 2019 | 08 Feb 2019
001 | 1002 | 04 Feb 2019 | 08 Feb 2019 | 15 Feb 2019 #Transaction 2
001 | 1002 | 11 Feb 2019 | 15 Feb 2019 | 22 Feb 2019
001 | 1002 | 18 Feb 2019 | 22 Feb 2019 | 28 Feb 2019
001 | 1003 | 25 Feb 2019 | 01 Mar 2019 | 08 Mar 2019
001 | 1003 | 04 Mar 2019 | 08 Mar 2019 | 15 Mar 2019
001 | 1003 | 11 Mar 2019 | 15 Mar 2019 | 22 Mar 2019 #Transaction 3
001 | 1003 | 18 Mar 2019 | 22 Mar 2019 | 29 Mar 2019
001 | 1003 | 25 Mar 2019 | 29 Mar 2019 | 31 Mar 2019
001 | 1004 | 27 May 2019 | 01 Jun 2019 | 08 Jun 2019
001 | 1004 | 03 Jun 2019 | 08 Jun 2019 | 15 Jun 2019 #Transaction 4
001 | 1004 | 10 Jun 2019 | 15 Jun 2019 | 22 Jun 2019
001 | 1004 | 17 Jun 2019 | 22 Jun 2019 | 29 Jun 2019
001 | 1004 | 24 Jun 2019 | 29 Jun 2019 | 30 Jun 2019
Expected Output
Account_ID | Order_ID | Reporting_week_start_date| Customer_status
001 | 1001 | 31 Dec 2018 | New
001 | 1001 | 07 Jan 2019 | New #Transaction 1
001 | 1001 | 14 Jan 2019 | New
001 | 1001 | 21 Jan 2019 | New
001 | 1001 | 28 Jan 2019 | New
001 | 1002 | 28 Jan 2019 | Recurring
001 | 1002 | 04 Feb 2019 | Recurring #Transaction 2
001 | 1002 | 11 Feb 2019 | Recurring
001 | 1002 | 18 Feb 2019 | Recurring
001 | 1003 | 25 Feb 2019 | Churned
001 | 1003 | 04 Mar 2019 | Churned #Transaction 3
001 | 1003 | 11 Mar 2019 | Churned
001 | 1003 | 18 Mar 2019 | Churned
001 | 1003 | 25 Mar 2019 | Churned
001 | - | 01 Apr 2019 | Churned
001 | - | 08 Apr 2019 | Churned
001 | - | 15 Apr 2019 | Churned
001 | - | 22 Apr 2019 | Churned
001 | - | 29 Apr 2019 | Churned
001 | - | 06 May 2019 | Churned
001 | - | 13 May 2019 | Churned
001 | - | 20 May 2019 | Churned
001 | - | 27 May 2019 | Churned
001 | 1004 | 27 May 2019 | Reactivated
001 | 1004 | 03 Jun 2019 | Reactivated #Transaction 4
001 | 1004 | 10 Jun 2019 | Reactivated
001 | 1004 | 17 Jun 2019 | Reactivated
001 | 1004 | 24 Jun 2019 | Reactivated
...
...
...
current date
I think you just want window functions and case logic. Assuming the date you are referring to is Reporting_week_start_date, the logic looks something like this:
select t.*,
       (case when Reporting_week_start_date = min(Reporting_week_start_date) over (partition by account_id)
             then 'New'
             when Reporting_week_start_date < dateadd(month, 1, lag(Reporting_week_start_date) over (partition by account_id order by Reporting_week_start_date))
             then 'Recurring'
             when Reporting_week_start_date < dateadd(month, -1, lead(Reporting_week_start_date) over (partition by account_id order by Reporting_week_start_date))
             then 'Churned'
             else 'Reactivated'
        end) as status
from transactions t;
These are not exactly the rules you have specified, but they seem like very reasonable interpretations of what you want to do. (Adjust the dateadd/interval syntax to your database's dialect.)
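One gap: the query above only classifies the weeks that actually appear in the table, while the expected output also lists churned weeks with no transactions. A common fix is to build a week spine first and left join the transactions onto it; a Postgres-flavored sketch with hypothetical names (weeks, transactions):

WITH RECURSIVE weeks(week_start) AS (
    SELECT DATE '2018-12-31'          -- first reporting week in the data
    UNION ALL
    SELECT week_start + 7 FROM weeks  -- step one reporting week at a time
    WHERE week_start + 7 <= CURRENT_DATE
)
SELECT w.week_start, t.Account_ID, t.Order_ID
FROM weeks w
LEFT JOIN transactions t
       ON t.Reporting_week_start_date = w.week_start;

With several accounts you would cross join the spine to the distinct Account_IDs first, then apply the case logic above to the joined result.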

Rails with postgresql: extract field from time is not working

I have a problem with ordering a collection by hour. But first things first.
Project details:
Rails version 5.1.3
Ruby version 2.4.1-p111 (x86_64-linux)
Database adapter postgresql
This is what my collection looks like:
#<DeliveryTimeslot:0x00562dd1ad2690
id: 1,
start: Sun, 02 Jan 2000 01:00:00 +03 +03:00,
stop: Sun, 02 Jan 2000 02:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:39 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:25 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1abcf98
id: 2,
start: Sun, 02 Jan 2000 02:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 03:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:39 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:25 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1ad23c0
id: 3,
start: Sat, 01 Jan 2000 03:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 04:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:39 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1ad1e70
id: 4,
start: Sat, 01 Jan 2000 04:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 05:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1ad1470
id: 5,
start: Sat, 01 Jan 2000 05:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 06:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1ad0bd8
id: 6,
start: Sat, 01 Jan 2000 06:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 07:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1acfdc8
id: 7,
start: Sat, 01 Jan 2000 07:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 08:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1acf7b0
id: 8,
start: Sat, 01 Jan 2000 08:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 09:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1acf198
id: 9,
start: Sat, 01 Jan 2000 09:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 10:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1acee00
id: 10,
start: Sat, 01 Jan 2000 10:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 11:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1aceb80
id: 11,
start: Sat, 01 Jan 2000 11:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 12:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1ace838
id: 12,
start: Sat, 01 Jan 2000 12:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 13:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1ace4c8
id: 13,
start: Sat, 01 Jan 2000 13:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 14:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1acdde8
id: 14,
start: Sat, 01 Jan 2000 14:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 15:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1acd758
id: 15,
start: Sat, 01 Jan 2000 15:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 16:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1acd168
id: 16,
start: Sat, 01 Jan 2000 16:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 17:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:26 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1acce20
id: 17,
start: Sat, 01 Jan 2000 17:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 18:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:27 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1acc678
id: 18,
start: Sat, 01 Jan 2000 18:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 19:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:27 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1abfb30
id: 19,
start: Sat, 01 Jan 2000 19:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 20:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:27 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1abf220
id: 20,
start: Sat, 01 Jan 2000 20:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 21:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:27 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1abea78
id: 21,
start: Sat, 01 Jan 2000 21:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 22:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:27 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1abe500
id: 22,
start: Sat, 01 Jan 2000 22:00:00 +03 +03:00,
stop: Sat, 01 Jan 2000 23:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:27 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1abdf10
id: 23,
start: Sat, 01 Jan 2000 23:00:00 +03 +03:00,
stop: Sun, 02 Jan 2000 00:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:27 +03 +03:00>,
#<DeliveryTimeslot:0x00562dd1abd3f8
id: 24,
start: Sun, 02 Jan 2000 00:00:00 +03 +03:00,
stop: Sun, 02 Jan 2000 01:00:00 +03 +03:00,
created_at: Wed, 05 Oct 2016 17:57:40 +03 +03:00,
updated_at: Thu, 16 Mar 2017 12:40:27 +03 +03:00>]
Both the start and stop columns are t.time in the schema (so the values come back as ActiveSupport::TimeWithZone).
What I want to do is to order them by hour and I have to do it using SQL statements.
So I have tried this:
DeliveryTimeslot.all.order("EXTRACT (HOUR FROM start) DESC").map(&:id)
and I expected to get array like this:
[23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 24]
but instead I've got this:
[2, 1, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3]
My thought was that this is because three records (ids 1, 2 and 24) have a different start date (2nd of January, while the rest have 1st of January). Isn't the query I wrote above supposed to extract the hour from a time-typed column? Is it really because of the date? I don't know why this is not working, and I basically can't change anything in the DB. Any hints on what I am doing wrong? Any hints on making it work without any changes to the DB?
Found the solution; it worked with my existing application. Try it:
DeliveryTimeslot.all.order('start DESC').sort_by {|item| item.start.to_date}.map{|item| item.id}
Or just use DeliveryTimeslot.all.order("start DESC").map(&:id)
Problem solved.
I believe this issue is caused by the construction of the DB itself. The PostgreSQL docs (https://www.postgresql.org/docs/9.2/static/datatype-datetime.html) say:
We do not recommend using the type time with time zone
...and ActiveRecord is somehow interfering via ActiveSupport::TimeWithZone.
A colleague told me to sort it by extracted hour.
In psql it looks ok:
kriseqsdb=# SELECT EXTRACT (HOUR FROM start) AS extracted_hour, * FROM delivery_timeslots ORDER BY EXTRACT (HOUR FROM start) DESC;
extracted_hour | id | start | stop | created_at | updated_at
----------------+----+----------+----------+----------------------------+----------------------------
23 | 2 | 23:00:00 | 00:00:00 | 2016-10-05 14:57:39.981764 | 2017-03-16 09:40:25.960571
22 | 1 | 22:00:00 | 23:00:00 | 2016-10-05 14:57:39.960375 | 2017-03-16 09:40:25.738402
21 | 24 | 21:00:00 | 22:00:00 | 2016-10-05 14:57:40.381005 | 2017-03-16 09:40:27.558225
20 | 23 | 20:00:00 | 21:00:00 | 2016-10-05 14:57:40.362648 | 2017-03-16 09:40:27.485668
19 | 22 | 19:00:00 | 20:00:00 | 2016-10-05 14:57:40.347286 | 2017-03-16 09:40:27.413084
18 | 21 | 18:00:00 | 19:00:00 | 2016-10-05 14:57:40.329297 | 2017-03-16 09:40:27.340032
17 | 20 | 17:00:00 | 18:00:00 | 2016-10-05 14:57:40.309307 | 2017-03-16 09:40:27.267091
16 | 19 | 16:00:00 | 17:00:00 | 2016-10-05 14:57:40.291424 | 2017-03-16 09:40:27.194525
15 | 18 | 15:00:00 | 16:00:00 | 2016-10-05 14:57:40.261229 | 2017-03-16 09:40:27.122137
14 | 17 | 14:00:00 | 15:00:00 | 2016-10-05 14:57:40.244531 | 2017-03-16 09:40:27.049617
13 | 16 | 13:00:00 | 14:00:00 | 2016-10-05 14:57:40.228901 | 2017-03-16 09:40:26.977144
12 | 15 | 12:00:00 | 13:00:00 | 2016-10-05 14:57:40.2118 | 2017-03-16 09:40:26.904671
11 | 14 | 11:00:00 | 12:00:00 | 2016-10-05 14:57:40.194678 | 2017-03-16 09:40:26.832177
10 | 13 | 10:00:00 | 11:00:00 | 2016-10-05 14:57:40.175353 | 2017-03-16 09:40:26.759804
9 | 12 | 09:00:00 | 10:00:00 | 2016-10-05 14:57:40.159382 | 2017-03-16 09:40:26.687357
8 | 11 | 08:00:00 | 09:00:00 | 2016-10-05 14:57:40.144921 | 2017-03-16 09:40:26.614746
7 | 10 | 07:00:00 | 08:00:00 | 2016-10-05 14:57:40.127898 | 2017-03-16 09:40:26.542091
6 | 9 | 06:00:00 | 07:00:00 | 2016-10-05 14:57:40.106023 | 2017-03-16 09:40:26.469586
5 | 8 | 05:00:00 | 06:00:00 | 2016-10-05 14:57:40.082284 | 2017-03-16 09:40:26.397126
4 | 7 | 04:00:00 | 05:00:00 | 2016-10-05 14:57:40.06161 | 2017-03-16 09:40:26.324512
3 | 6 | 03:00:00 | 04:00:00 | 2016-10-05 14:57:40.046009 | 2017-03-16 09:40:26.251731
2 | 5 | 02:00:00 | 03:00:00 | 2016-10-05 14:57:40.03066 | 2017-03-16 09:40:26.178915
1 | 4 | 01:00:00 | 02:00:00 | 2016-10-05 14:57:40.015266 | 2017-03-16 09:40:26.106172
0 | 3 | 00:00:00 | 01:00:00 | 2016-10-05 14:57:39.996529 | 2017-03-16 09:40:26.033393
(24 rows)
and so it does via rails console:
DeliveryTimeslot.connection.select_all("SELECT EXTRACT (HOUR FROM start) AS extracted_hour, * FROM delivery_timeslots ORDER BY EXTRACT (HOUR FROM start) DESC")
=>
[23.0, 2, "23:00:00", "00:00:00", "2016-10-05 14:57:39.981764", "2017-03-16 09:40:25.960571"],
[22.0, 1, "22:00:00", "23:00:00", "2016-10-05 14:57:39.960375", "2017-03-16 09:40:25.738402"],
[21.0, 24, "21:00:00", "22:00:00", "2016-10-05 14:57:40.381005", "2017-03-16 09:40:27.558225"],
[20.0, 23, "20:00:00", "21:00:00", "2016-10-05 14:57:40.362648", "2017-03-16 09:40:27.485668"],
[19.0, 22, "19:00:00", "20:00:00", "2016-10-05 14:57:40.347286", "2017-03-16 09:40:27.413084"],
[18.0, 21, "18:00:00", "19:00:00", "2016-10-05 14:57:40.329297", "2017-03-16 09:40:27.340032"],
[17.0, 20, "17:00:00", "18:00:00", "2016-10-05 14:57:40.309307", "2017-03-16 09:40:27.267091"],
[16.0, 19, "16:00:00", "17:00:00", "2016-10-05 14:57:40.291424", "2017-03-16 09:40:27.194525"],
[15.0, 18, "15:00:00", "16:00:00", "2016-10-05 14:57:40.261229", "2017-03-16 09:40:27.122137"],
[14.0, 17, "14:00:00", "15:00:00", "2016-10-05 14:57:40.244531", "2017-03-16 09:40:27.049617"],
[13.0, 16, "13:00:00", "14:00:00", "2016-10-05 14:57:40.228901", "2017-03-16 09:40:26.977144"],
[12.0, 15, "12:00:00", "13:00:00", "2016-10-05 14:57:40.2118", "2017-03-16 09:40:26.904671"],
[11.0, 14, "11:00:00", "12:00:00", "2016-10-05 14:57:40.194678", "2017-03-16 09:40:26.832177"],
[10.0, 13, "10:00:00", "11:00:00", "2016-10-05 14:57:40.175353", "2017-03-16 09:40:26.759804"],
[9.0, 12, "09:00:00", "10:00:00", "2016-10-05 14:57:40.159382", "2017-03-16 09:40:26.687357"],
[8.0, 11, "08:00:00", "09:00:00", "2016-10-05 14:57:40.144921", "2017-03-16 09:40:26.614746"],
[7.0, 10, "07:00:00", "08:00:00", "2016-10-05 14:57:40.127898", "2017-03-16 09:40:26.542091"],
[6.0, 9, "06:00:00", "07:00:00", "2016-10-05 14:57:40.106023", "2017-03-16 09:40:26.469586"],
[5.0, 8, "05:00:00", "06:00:00", "2016-10-05 14:57:40.082284", "2017-03-16 09:40:26.397126"],
[4.0, 7, "04:00:00", "05:00:00", "2016-10-05 14:57:40.06161", "2017-03-16 09:40:26.324512"],
[3.0, 6, "03:00:00", "04:00:00", "2016-10-05 14:57:40.046009", "2017-03-16 09:40:26.251731"],
[2.0, 5, "02:00:00", "03:00:00", "2016-10-05 14:57:40.03066", "2017-03-16 09:40:26.178915"],
[1.0, 4, "01:00:00", "02:00:00", "2016-10-05 14:57:40.015266", "2017-03-16 09:40:26.106172"],
[0.0, 3, "00:00:00", "01:00:00", "2016-10-05 14:57:39.996529", "2017-03-16 09:40:26.033393"]
In that case it is OK for me, as this was my main concern. But to be precise about my question and get the proper array, I just mapped it by extracted_hour:
DeliveryTimeslot.connection.select_all("SELECT EXTRACT (HOUR FROM start) AS extracted_hour, * FROM delivery_timeslots ORDER BY EXTRACT (HOUR FROM start) DESC").rows.map { |row| row[0].to_i }
and basically got what I asked about:
=> [23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
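Note that the mapped array above contains the extracted hours, not the ids. If the goal is the ids ordered by hour, selecting id with the same ORDER BY does it directly; a sketch against the delivery_timeslots table shown above:

SELECT id
FROM delivery_timeslots
ORDER BY EXTRACT(HOUR FROM start) DESC;
-- per the psql output above: 2, 1, 24, 23, 22, ..., 3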

Postgres convert string to time

I have the following query:
SELECT id, start_date::TIME, occurrence->0->>'startsOn' FROM service WHERE name='A.F';
Which returns:
id | start_date | ?column?
------+------------+---------------------------------
1573 | 18:00:00 | Mon, 29 Jun 2015 18:00:00 +0000
1592 | 10:00:00 | Wed, 24 Jun 2015 10:00:00 +0000
1605 | 18:00:00 | Thu, 25 Jun 2015 18:00:00 +0000
1571 | 10:00:00 | Mon, 29 Jun 2015 10:00:00 +0000
1591 | 20:15:00 | Tue, 30 Jun 2015 20:15:00 +0000
1578 | 18:00:00 | Mon, 29 Jun 2015 20:00:00 +0000
1620 | 12:00:00 | Sat, 27 Jun 2015 12:00:00 +0000
(7 rows)
What I am trying to do is convert occurrence->0->>'startsOn' to time, so the expected result should be:
id | start_date | ?column?
------+------------+---------------------------------
1573 | 18:00:00 | 18:00:00
1592 | 10:00:00 | 10:00:00
1605 | 18:00:00 | 18:00:00
1571 | 10:00:00 | 10:00:00
1591 | 20:15:00 | 20:15:00
1578 | 18:00:00 | 20:00:00
1620 | 12:00:00 | 12:00:00
I tried the following:
SELECT id, start_date::TIME, occurrence->0->>'startsOn'::TIME FROM service WHERE name='A.F';
But it is not working; it gives me the following error:
ERROR: invalid input syntax for type time: "startsOn"
select ('[{"startsOn":"Mon, 29 Jun 2015 18:00:00 +0000"}]'::json->0->>'startsOn')::timestamp::time
I did not have the column "occurrence", so I mocked it up from your output.
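The reason the original attempt fails: the cast operator :: binds tighter than ->>, so occurrence->0->>'startsOn'::TIME tries to cast the literal string 'startsOn' to time (hence the error message). Parenthesize the JSON extraction first, then cast. Applied to the question's query, a sketch over the same service table:

SELECT id,
       start_date::TIME,
       (occurrence->0->>'startsOn')::TIMESTAMP::TIME
FROM service
WHERE name = 'A.F';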

SQL working week in Oracle

I need Oracle SQL that returns the 'working' week number in the year:
no overflowing weeks from one year to another
each week starts on Monday
the first few days of the year are week 01
So the result should be:
2015-12-28 - MON - week 53
2015-12-29 - TUE - week 53
2015-12-30 - WED - week 53
2015-12-31 - THU - week 53
===
2016-01-01 - FRI - week 01 - resetting yearly week counter
2016-01-02 - SAT - week 01
2016-01-03 - SUN - week 01
---
2016-01-04 - MON - week 02 - Monday starts a new week
2016-01-05 - TUE - week 02
...
2016-12-31 - SAT - week 53
===
2017-01-01 - SUN - week 01 - resetting yearly week counter
2017-01-02 - MON - week 02 - Monday starts a new week
...
Oracle's TO_CHAR supports these week-related format masks:
W - week number within a month
WW - week number within a year; week 1 starts on the 1st of Jan
IW - week number within a year, according to the ISO standard
For your requirement, you need a combination of the IW and WW behaviour, which you can get with a CASE expression.
If you want to generate the list of dates for the entire year, then you could use the row generator method.
WITH sample_data AS (
  SELECT DATE '2015-12-28' + LEVEL - 1 dt FROM dual
  CONNECT BY LEVEL <= 15
)
-- end of sample_data mimicking real table
SELECT dt,
       TO_CHAR(dt, 'DY') day,
       NVL(
         CASE
           WHEN dt < DATE '2016-01-01'
             THEN TO_CHAR(dt, 'IW')
           WHEN dt >= NEXT_DAY(TRUNC(DATE '2016-01-01', 'YYYY') - 1, 'Monday')
             THEN TO_CHAR(dt + 7, 'IW')
         END, '01') week_number
FROM sample_data;
DT DAY WEEK_NUMBER
---------- --- -----------
2015-12-28 MON 53
2015-12-29 TUE 53
2015-12-30 WED 53
2015-12-31 THU 53
2016-01-01 FRI 01
2016-01-02 SAT 01
2016-01-03 SUN 01
2016-01-04 MON 02
2016-01-05 TUE 02
2016-01-06 WED 02
2016-01-07 THU 02
2016-01-08 FRI 02
2016-01-09 SAT 02
2016-01-10 SUN 02
2016-01-11 MON 03
15 rows selected.
NOTE:
The value 15 (to generate 15 rows) and the dates are hard-coded above purely for demonstration via the WITH clause, since the OP did not provide a test case with CREATE and INSERT statements. In reality, you would use your own table and column names.
An approach could be to count the number of days in the year and divide by 7, with some logic to handle the beginning and the end of the week and of the year:
with test(date_) as
(
  select to_date('23122016', 'ddmmyyyy') + level - 1 from dual connect by level < 30
)
SELECT date_,
       floor(
         to_number(
           to_char(
             greatest(
               least( trunc(date_, 'iw') + 6,
                      add_months( trunc(date_, 'year'), 12 ) - 1 ),
               trunc(date_, 'yyyy') ),
             'ddd') ) / 7 + 1
       ) week
FROM test
The LEAST is used to avoid spilling into the next year, while the GREATEST avoids spilling into the previous one.
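A couple of hypothetical spot checks at the year boundary show the clamping at work, with day counts plugged into the formula above:

-- 2016-12-28 (WED): trunc(date_,'iw') + 6 = 2017-01-01, but LEAST clamps it to 2016-12-31,
--   so 'ddd' = 366 and floor(366/7 + 1) = 53
-- 2017-01-01 (SUN): trunc(date_,'iw') + 6 = 2017-01-01, GREATEST keeps it at the year start,
--   so 'ddd' = 001 and floor(1/7 + 1) = 1
SELECT TO_CHAR( GREATEST( LEAST( TRUNC(DATE '2016-12-28', 'iw') + 6,
                                 ADD_MONTHS( TRUNC(DATE '2016-12-28', 'year'), 12 ) - 1 ),
                          TRUNC(DATE '2016-12-28', 'yyyy') ), 'ddd') AS clamped_ddd
FROM dual;
-- 366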
I found the answer myself. The TO_CHAR(date,'IW') format is of no use, because the very first week of a year according to this (ISO) standard can start after New Year but also before it (note that TO_CHAR(TO_DATE('2014-12-31','YYYY-MM-DD'),'IW') = 01, i.e. the first week belonging to the next year!):
Date       | DAY | WW | IW | MY
===========+=====+====+====+====
2014-12-28 | SUN | 52 | 52 | 52
2014-12-29 | MON | 52 | 01 | 53
2014-12-30 | TUE | 52 | 01 | 53
2014-12-31 | WED | 52 | 01 | 53
2015-01-01 | THU | 01 | 01 | 01
...        | ... | .. | .. | ..
2015-12-31 | THU | 53 | 53 | 53
2016-01-01 | FRI | 01 | 53 | 01
2016-01-02 | SAT | 01 | 53 | 01
2016-01-03 | SUN | 01 | 53 | 01
2016-01-04 | MON | 01 | 01 | 02
2016-01-05 | TUE | 01 | 01 | 02
2016-01-06 | WED | 01 | 01 | 02
2016-01-07 | THU | 01 | 01 | 02
2016-01-08 | FRI | 02 | 01 | 02
The logic is quite simple. Look at the very first day of the year and its offset from Monday. If the current day's day-of-week number is smaller than that first-day offset, the Monday-based week has already rolled over within the current WW week, so the week number should be incremented by 1.
The first day's number (its offset from Monday) is calculated with:
TO_CHAR(TO_DATE(TO_CHAR(dt,'YYYY')||'0101','YYYYMMDD'),'D')
So the final SQL statement is:
WITH dates AS
(
  SELECT DATE '2014-12-25' + LEVEL - 1 dt FROM dual CONNECT BY LEVEL <= 500
)
SELECT dt,
       TO_CHAR(dt,'DY') DAY,
       TO_CHAR(dt,'WW') WW,
       TO_CHAR(dt,'IW') IW,
       CASE WHEN TO_CHAR(dt,'D') < TO_CHAR(TO_DATE(TO_CHAR(dt,'YYYY')||'0101','YYYYMMDD'),'D')
            THEN LPAD(TO_CHAR(dt,'WW')+1, 2, '0')
            ELSE TO_CHAR(dt,'WW')
       END MY
FROM dates
Of course, one can create a function for that purpose, like:
CREATE OR REPLACE FUNCTION WorkingWeek(dt IN DATE) RETURN CHAR
IS
BEGIN
  IF TO_CHAR(dt,'D') < TO_CHAR(TO_DATE('0101'||TO_CHAR(dt,'YYYY'),'DDMMYYYY'),'D') THEN
    RETURN LPAD(TO_CHAR(dt,'WW')+1, 2, '0');
  ELSE
    RETURN TO_CHAR(dt,'WW');
  END IF;
END WorkingWeek;
/
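One caveat worth noting: TO_CHAR(dt,'D') depends on the session's NLS_TERRITORY setting (which weekday is numbered 1), so the function only matches the table above in a territory where the week starts on Monday. A hypothetical smoke test, with expected values taken from the MY column in the table:

-- Assumes an NLS_TERRITORY where Monday is day 1.
SELECT WorkingWeek(DATE '2016-01-01') AS w1,  -- expected '01'
       WorkingWeek(DATE '2016-01-04') AS w2   -- expected '02'
FROM dual;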