The table below contains customer reservations. When a customer arrives, one record is created in this table, and on their last day the checkout_date field is updated with the current time.
The Table
Now I need to list all customers along with the number of nights they are due for.
The Query
SELECT reservations.customerid, reservations.roomno, rooms.rate,
       reservations.checkin_date, reservations.billed_nights, reservations.status,
       DateDiff("d", reservations.checkin_date, Date())
           + Abs(DateDiff("s", #12/30/1899 14:30:0#, Time()) > 0) AS Due_nights
FROM reservations, rooms
WHERE reservations.roomno = rooms.roomno;
What I need is: if a customer has checkout status, the due nights should be calculated from checkin_date up to the checkout date instead of the current date. Also, if the customer has a checkout date, there is no need to add the extra value from the 14:30 comparison.
My current query result is below; note that my computer time is 14:39, so the query currently adds 1 to every row.
Since you want to calculate the due nights up to the checkout date, and use the current date if the customer is still checked in, I would suggest using an Immediate If (IIf).
The condition to check is the status of the reservation: if it is 'checkout', use checkout_date, otherwise use Now(). Something like:
SELECT
reservations.customerid,
reservations.roomno,
rooms.rate,
reservations.checkin_date,
reservations.billed_nights,
reservations.status,
DateDiff("d", checkin_date, IIF(status = 'checkout', checkout_date, Now())) As DueNights
FROM
reservations
INNER JOIN
rooms
ON reservations.roomno = rooms.roomno;
As you might have noticed, I used an explicit JOIN. This is clearer (and typically more efficient) than merging the two tables on the common identifier in the WHERE clause. Hope this helps!
In my program, I have a DataGridView. Some amounts are due for payment today, and I display the amounts that are due and have not been paid (late). I want code that displays the rows whose dates are earlier than the current date. I tried the following code, but it only compares the day and does not check whether the month or year is greater or less than the current date's:
tbl = db.readData("SELECT * from Payments where date_batch < CONVERT(varchar(50),GetDate(), 103)", "");
DgvSearch.DataSource = tbl;
The problem with the previous code is that it doesn't compare the full date by day, month and year. It only fetches dates less than the current date in terms of the day; I want the comparison to account for day, month and year.
Ok, so I'm going to assume date_batch is a VARCHAR(10) or similar and contains data like:
28/12/2021
29/11/2021
30/08/2021
31/12/2021
As you can see, these "strings that look like dates to a human" are in order. They are not in date order, they are in alphabetical order. Big difference: SQL Server sorts strings alphabetically, and when you ask for strings "less than x" it uses alphabetical sorting rules to determine "less than"-ness.
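A quick hedged illustration of the difference, using two of the sample values above (TRY_CONVERT with style 103 parses dd/MM/yyyy):

SELECT
    CASE WHEN '29/11/2021' < '30/08/2021'
         THEN 'earlier' ELSE 'later' END AS compared_as_strings,   -- 'earlier': '29...' sorts before '30...'
    CASE WHEN TRY_CONVERT(date, '29/11/2021', 103) < TRY_CONVERT(date, '30/08/2021', 103)
         THEN 'earlier' ELSE 'later' END AS compared_as_dates;     -- 'later': November comes after August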
Don't store dates in a string. SQL Server has several date-specific datatypes. Use them.
The following process will dig you out of the hole you've dug yourself into:
ALTER TABLE Payments ADD BatchDate DATE;
UPDATE Payments SET BatchDate = TRY_CONVERT(Date, date_batch, 103);
Now go look at your table and sanity check it:
SELECT * FROM payments WHERE batchdate is null and date_batch is not null
This shows any dates that didn't convert. Correct their wonky bad data and run the update again.
Do another select, of all the data, and eyeball it; does it look sensible? Do you have any dates that have been put in as 02/03/2021 when they should have been 03/02/2021 etc
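If you want a rough programmatic check for that, something like the following (assuming date_batch still holds dd/MM/yyyy strings at this point) lists the rows whose day part is 12 or less, i.e. the ones that are ambiguous between dd/MM and MM/dd and worth eyeballing:

SELECT date_batch, BatchDate
FROM Payments
WHERE TRY_CONVERT(int, LEFT(date_batch, 2)) <= 12;   -- ambiguous day/month candidates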
Now that your table is full of nice dates, get rid of the strings:
ALTER TABLE Payments DROP COLUMN date_batch;
Maybe rename the column back to the old name, though note that in SQL Server and C# WeCallThingsNamesLikeThis; we_dont_call_them_names_like_this.
EXEC sp_rename 'Payments.BatchDate', 'date_batch', 'COLUMN';
Now you can do:
SELECT * FROM payments WHERE batchDate < GetDate()
And never again store dates in a string.
I have some related tables that I want to run a Totals/Group By query on.
My "Tickets" table has a field called "PickDate" which is the date that the order/ticket was fulfilled.
I want to group by the weekday (name) (a calculated field) so that results for certain customers on the same day of the week are grouped. Then the average ticket completion time can be calculated per customer for each weekday. It would look something like the following.
--CustName----Day---AvTime
Customer 1  - MON - 72.3
            - TUE - 84.2
            - WED - 110.66
..etc
Customer 2  - ..etc
This works fine, but the problem I am having is that when this query is run, it works on every record in the tickets table. For certain reports, the data the query references should be restricted to a date range, for example to track a change in duration over a number of weeks.
In the query properties, there is a property, "filter", to which I can add a string such as:
"([qryCustomerDetails].[PickDate] Between #11/1/2021# And #11/14/2021#)"
to filter the results. The only issue is that since each date is unique, the "group by" of like days such as "Monday" is overridden by this unique date "11/1/2021". The query only works when the PickDate is removed as a field. However, then I can't access it to filter by it.
What I want to achieve would be the same as; in the "Tickets" table itself filtering the results between two dates and then having a query that could run on that filtered table.
Is there any way that I could achieve this?
For reference here is the SQL of the query.
FROM tblCustomers INNER JOIN tblTickets ON tblCustomers.CustomerID = tblTickets.CustomerID
GROUP BY tblCustomers.Customer, WeekdayName(Weekday([PickDate]),False,1), tblCustomers.Round, Weekday([PickDate])
ORDER BY tblCustomers.Round, Weekday([PickDate]);
You probably encountered two issues. The first is that, to filter results in a totals query by un-totaled fields, you use HAVING rather than WHERE. The second is that calculated fields like Day don't exist at the time the filter is applied, so you can't say HAVING Day > 'Mon'. Instead you must repeat the calculation of Day: HAVING CalculateDay(PickDate) > 'Monday'.
The designer will usually figure out whether you want HAVING or WHERE automatically. Here is my example query, which gives you the following SQL:
SELECT Tickets.Customer, WeekdayName(Weekday([PickDate])) AS [Day], Avg(Tickets.Time) AS AvTime
FROM Tickets
GROUP BY Tickets.Customer, WeekdayName(Weekday([PickDate])), Tickets.PickDate
HAVING (((Tickets.PickDate) Between #11/16/2021# And #11/17/2021#))
ORDER BY Tickets.PickDate;
I am using the following query to get record counts month-wise, and it is working fine:
SELECT MONTH(dte_cycle_count) MONTH, COUNT(*) COUNT
FROM inventory
WHERE YEAR(dte_cycle_count)='2021' --OR (MONTH(dte_cycle_count) = '1' OR MONTH(dte_cycle_count) = '12')
GROUP BY MONTH(dte_cycle_count);
Problem:
Now I need to bind a rolling calendar so that when the user scrolls or clicks the next or previous button, the next 12 months of records become visible.
E.g. the current month is March, so the default records will run from APR 2020 to MAR 2021. If the user clicks previous, the records shown will run from MAR 2020 to FEB 2021.
How can I achieve this?
Please let me know if you need more information; I will do my best to provide it.
I think what you are after is a date list to join to your inventory table.
Like a numbers table, build a static table with columns for date, year, and month, populated from as far back as you need to as far into the future as you need.
You then select from this, applying your filtering range criteria, and join to your inventory table.
For an efficient query, ideally your inventory table should also store the relevant date portions, e.g. year and month, so they match the calendar table.
You don't want to be using functions on a datetime to extract the year or month as this is not sargable and will not allow any index to be used for a seek lookup.
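A minimal sketch of that idea, assuming SQL Server syntax and made-up names (CalendarMonths, month_start, @anchor); adjust to your platform:

CREATE TABLE CalendarMonths (
    month_start DATE PRIMARY KEY,   -- first day of each month
    cal_year    INT NOT NULL,
    cal_month   INT NOT NULL
);

-- Populate it once, from as far back as you need to as far forward as you need.

-- Rolling 12-month window ending at @anchor; the UI's previous/next buttons
-- just move @anchor back or forward one month and rerun the query.
DECLARE @anchor DATE = '2021-03-01';

SELECT c.cal_year, c.cal_month, COUNT(i.dte_cycle_count) AS cnt
FROM CalendarMonths AS c
LEFT JOIN inventory AS i
       ON i.dte_cycle_count >= c.month_start
      AND i.dte_cycle_count <  DATEADD(MONTH, 1, c.month_start)
WHERE c.month_start BETWEEN DATEADD(MONTH, -11, @anchor) AND @anchor
GROUP BY c.cal_year, c.cal_month
ORDER BY c.cal_year, c.cal_month;

The range predicate on dte_cycle_count (rather than wrapping it in YEAR()/MONTH()) keeps the lookup sargable, so an index on that column can still be used, and the LEFT JOIN means months with no activity still appear with a count of zero.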
Since BigQuery is append-only, I was thinking about stamping each record I upload with an 'effective date', similar to how PeopleSoft works, if anybody is familiar with that pattern.
Then, I could issue a select statement and join on the max effective date
select UTC_USEC_TO_MONTH(timestamp) as month, sum(amt)/100 as sales
from foo.orders as all
join (select id, max(effdt) as max_effdt from foo.orders group by id) as latest
on all.effdt = latest.max_effdt and all.id = latest.id
group by month
order by month;
Unfortunately, I believe this won't scale because of the BigQuery 'small joins' restriction, so I wanted to see if anyone else had thought around this use case.
Yes, adding a timestamp for each record (or in some cases, a flag that captures the state of a particular record) is the right approach. The small side of a BigQuery "Small Join" can actually return at least 8MB (this value is compressed on our end, so is usually 2 to 10 times larger), so for "lookup" table type subqueries, this can actually provide a lot of records.
In your case, it's not clear to me what the exact query you are trying to run is... it looks like you are trying to return the most recent sales times of every individual item, and then JOIN this information with the SUM of sales amt per month of each item? Can you provide more info about the query?
It might be possible to do this all in one query. For example, in our wikipedia dataset, an example might look something like...
SELECT contributor_username,
       UTC_USEC_TO_MONTH(timestamp * 1000000) AS month,
       SUM(num_characters) AS total_characters_used
FROM [publicdata:samples.wikipedia]
WHERE (contributor_username != '' OR contributor_username IS NOT NULL)
  AND timestamp > 1133395200
  AND timestamp < 1157068800
GROUP BY contributor_username, month
ORDER BY contributor_username DESC, month DESC;
...to provide wikipedia contributions per user per month (like sales per month per item). This result is actually really large, so you would have to limit by date range.
UPDATE (based on comments below) a similar query that finds "num_characters" for the latest wikipedia revisions by contributors after a particular time...
SELECT current.contributor_username, current.num_characters
FROM
(SELECT contributor_username, num_characters, timestamp as time FROM [publicdata:samples.wikipedia] WHERE contributor_username != '' AND contributor_username IS NOT NULL)
AS current
JOIN
(SELECT contributor_username, MAX(timestamp) as time FROM [publicdata:samples.wikipedia] WHERE contributor_username != '' AND contributor_username IS NOT NULL AND timestamp > 1265073722 GROUP BY contributor_username) AS latest
ON
current.contributor_username = latest.contributor_username
AND
current.time = latest.time;
If your query requires you to first build a large aggregate (for example, you need to run what is essentially an accurate COUNT DISTINCT), another option is to break this query up into two queries. The first query could provide the max effective date by month along with a count, and save this result as a new table. Then you could run a sum query on the resulting table.
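For instance, the question's original query could be split roughly like this (legacy BigQuery SQL; the saved table name foo.orders_latest is just an assumption for illustration):

-- Step 1: run this and save the result as a table, e.g. foo.orders_latest
SELECT id, MAX(effdt) AS max_effdt
FROM foo.orders
GROUP BY id;

-- Step 2: join against the saved table and aggregate by month
SELECT UTC_USEC_TO_MONTH(o.timestamp) AS month, SUM(o.amt)/100 AS sales
FROM foo.orders AS o
JOIN foo.orders_latest AS latest
  ON o.id = latest.id AND o.effdt = latest.max_effdt
GROUP BY month
ORDER BY month;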
You could also store monthly sales records in separate tables, and only query the particular table for the months you are interested in, simplifying your monthly sales summaries (this could also be a more economical use of BigQuery). When you need to find aggregates across all tables, you could run your queries with multiple tables listed after the FROM clause.
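With per-month tables, the query might look something like the sketch below (legacy BigQuery SQL, where a comma in the FROM clause unions the listed tables; the table names are made up for illustration):

SELECT UTC_USEC_TO_MONTH(timestamp) AS month, SUM(amt)/100 AS sales
FROM foo.orders_201101, foo.orders_201102, foo.orders_201103
GROUP BY month
ORDER BY month;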
I've got a table with purchase orders stored in it. Each row has a timestamp indicating when the order was placed. I'd like to be able to create a report indicating the number of purchases each day, month, or year. I figured I would do a simple SELECT COUNT(xxx) FROM tbl_orders GROUP BY tbl_orders.purchase_time and get the value, but it turns out I can't GROUP BY a timestamp column.
Is there another way to accomplish this? I'd ideally like a flexible solution so I could use whatever timeframe I needed (hourly, monthly, weekly, etc.) Thanks for any suggestions you can give!
This does the trick without the date_trunc function (easier to read).
-- 2014
select created_on::DATE from users group by created_on::DATE
-- updated September 2018 (thanks to #wegry)
select created_on::DATE as co from users group by co
What we're doing here is casting the original value into a DATE rendering the time data in this value inconsequential.
Grouping by a timestamp column works fine for me here, keeping in mind that even a 1-microsecond difference will prevent two rows from being grouped together.
To group by larger time periods, group by an expression on the timestamp column that returns an appropriately truncated value. date_trunc can be useful here, as can to_char.
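For example (PostgreSQL, reusing the tbl_orders/purchase_time names from the question):

-- purchases per day
SELECT date_trunc('day', purchase_time) AS day, COUNT(*) AS purchases
FROM tbl_orders
GROUP BY 1
ORDER BY 1;

-- purchases per month, using to_char for a readable label
SELECT to_char(purchase_time, 'YYYY-MM') AS month, COUNT(*) AS purchases
FROM tbl_orders
GROUP BY 1
ORDER BY 1;

The same pattern covers hourly, weekly, or yearly reports by changing the date_trunc unit or the to_char format string.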