Microsoft Access Query Max Value per Date - sql

Ok, so my SQL and Access skills are very rusty. I have an Excel sheet with some data that I would like to start tracking in a database. Currently I pull the data, clean it up a bit, transpose it, and have some Excel magic do the rest.
There are two fields. Field1 is just a normal number field, less than 10 digits. Field2 is converted into Excel date format from a UNIX Epoch timestamp.
My goal is to get the Max of Field1 for each day. Most of the older data only has one data point per day, while the newer data will possibly have hundreds of data points.
Example data:
Field1 (normal number)   Field2 (Excel date format)
21107                    44200.88
31827                    44201.5
31827                    44201.5
29355                    44202.13
29355                    44202.13

Assuming that you have your data in a table called Sheet1, you can type the following query in SQL view in MS Access:
SELECT Int(Field2) AS ExcelDay, Max(Field1) AS MaxOfField1
FROM Sheet1
GROUP BY Int(Field2)
Int(Field2) removes the fraction from the time, leaving the day you requested.
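If you'd rather see the day as a real date instead of a bare serial number, a small variation should work; this is a sketch assuming your serials are modern dates (after February 1900, where Access's CDate and Excel's day numbers coincide):
SELECT CDate(Int(Field2)) AS TheDay, Max(Field1) AS MaxOfField1
FROM Sheet1
GROUP BY CDate(Int(Field2))
With the sample data above this returns one row per day: 21107, then 31827, then 29355.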

If you need to return both date & time as output, then you could try:
SELECT First(Field2) AS [Date], Max(Field1) AS MaxValue
FROM ExcelLinkTable
GROUP BY Format(Field2,"m/d/yyyy");
If you only want the date to display, then try:
SELECT Format(Field2,"mm/dd/yyyy") AS [Date], Max(Field1) AS MaxValue
FROM ExcelLinkTable
GROUP BY Format(Field2,"mm/dd/yyyy");

Related

Query another table with the results of another query that includes a CSV column

Brief Summary:
I am currently trying to get a count of completed parts that fall within a specific time range and match the machine number, operation number, and tool number.
For example:
SELECT Sequence, Serial, Operation, Machine, DateTime, value AS Tool
FROM tbPartProfile
CROSS APPLY STRING_SPLIT(Tool_Used, ',')
ORDER BY DateTime DESC
is running a query which pulls all the instances where a tool has been changed. I am splitting the CSV from the Tool_Used column because there can be multiple changes during one operation.
Objective:
This is where the production count comes into play. For example, record 1 has a tool change of 36 on 12/12/2022. I will need to go back into the table and get the number of parts completed that match the operation/machine/tool and fall within the date range.
For example:
SELECT *
FROM tbPartProfile
WHERE Operation = 20 AND Machine = 1 AND Tool_Used LIKE '%36%'
ORDER BY DateTime DESC
For example, this query will give me the datetimes when tools LIKE 36 were changed. I will need to take each datetime, compare it with the previous query's results, and get the sum of all parts that were run in that TimeRange/Operation/Machine/Tool Used.
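One way to combine both steps in a single query is to treat each tool change as the start of a window that ends at the next change of the same operation/machine/tool, then count the parts completed inside each window. This is only a sketch under stated assumptions: it presumes each tbPartProfile row represents one completed part, and it uses LEAD (SQL Server 2012+; STRING_SPLIT already implies 2016+):
;WITH ToolChanges AS (
    SELECT Operation, Machine, value AS Tool, DateTime AS ChangedAt,
           -- the next change of the same operation/machine/tool closes this window
           LEAD(DateTime) OVER (PARTITION BY Operation, Machine, value
                                ORDER BY DateTime) AS NextChangeAt
    FROM tbPartProfile
    CROSS APPLY STRING_SPLIT(Tool_Used, ',')
)
SELECT tc.Operation, tc.Machine, tc.Tool, tc.ChangedAt,
       COUNT(*) AS PartsCompleted
FROM ToolChanges tc
JOIN tbPartProfile p
  ON  p.Operation = tc.Operation
  AND p.Machine   = tc.Machine
  AND p.DateTime >= tc.ChangedAt
  AND (tc.NextChangeAt IS NULL OR p.DateTime < tc.NextChangeAt)
GROUP BY tc.Operation, tc.Machine, tc.Tool, tc.ChangedAt
ORDER BY tc.ChangedAt DESC;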

Google Sheets Query - fill zeros for dates when no data was recorded

How do I change my query in this Google Sheet so that it generates a table that has a column for every week, even if there is no data for that week? (Display 0 in the value fields.)
The problem I'm running into is that if there is a week or date with no data, it's not listed. I would like to generate a row/column with 0 for dates without data.
This is how the query is currently written:
=QUERY(A3:D9,"Select A, sum(C) group by A pivot D")
Here's the sheet (hyperlinked so you can see it):
The basic problem you need to solve is to know which data pieces are missing. Do you need the entries for every single day in a given date range? Only weekdays? Only weekdays, except public holidays? etc.
Once you know that, you can insert the missing data in the query itself by concatenating the source table with literal data as below (where I'm manually adding a combination of Red with Nov 5), or with another query/resultset from another formula that gives you the missing data:
=QUERY({A3:D9; {"Red", date(2018,11,5), 0, weeknum(date(2018,11,5))}},
"Select Col1, sum(Col3) group by Col1 pivot Col4")

Difference in minutes from datetime

I'm trying to obtain the total time difference from two timestamp columns (datetime).
I currently have Table 1 set up like the following:
Time_Line_Down => datetime
Time_Line_Ran => datetime
Total_Downtime => Computed column with formula:
(case when [Time_Line_Down] IS NULL then NULL
      else CONVERT([varchar],
                   case when [Time_Line_Ran] IS NULL then NULL
                        else [Time_Line_Ran] - [Time_Line_Down]
                   end, (108))
 end)
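For reference, a hypothetical sketch of how Table 1 might be declared with that computed column (the table name Table1 and the varchar(8) length are my assumptions; style 108 yields an hh:mm:ss string):
CREATE TABLE Table1 (
    Time_Line_Down datetime NULL,
    Time_Line_Ran  datetime NULL,
    -- computed: elapsed time rendered as hh:mm:ss (style 108), NULL if either endpoint is missing
    Total_Downtime AS (
        case when [Time_Line_Down] IS NULL then NULL
             else CONVERT(varchar(8),
                          case when [Time_Line_Ran] IS NULL then NULL
                               else [Time_Line_Ran] - [Time_Line_Down]
                          end, 108)
        end)
);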
Every time certain conditions occur, I copy those three columns (I have more columns, but the problem is with these ones) into another Table 2, originally set up like the following:
Time_Line_Down => datetime
Time_Line_Ran => datetime
Total_Downtime => datetime
I then use an Excel spreadsheet to "Get External Data" from SQL Server and use a pivot table to work with the data.
Example
Time_Line_Down = 2015-02-20 12:32:40.000
Time_Line_Ran = 2015-02-20 12:34:40.000
Total_Downtime = 1900-01-01 00:02:00.000
Desired Output
I want the pivot table to be able to give me a Grand Total of downtime from all rows in that table
Let's say it was forty-five hours, fifty minutes, and thirty seconds of accumulated downtime; it should read like (45:50:30).
The problem:
Even if I format the Total_Downtime column in the Excel pivot table as h:mm:ss to read like this:
Total_Downtime = 0:02:00
as rows accumulate and the Grand Total is calculated, the "Date" part of the timestamp messes up the result once the total exceeds 24 hours.
What I have tried
I changed the data type of the Total_Downtime column in Table 2 to time(0) so that it won't store the "Date" part, only the "Time" part of the timestamp. That works and reads out 00:02:00.
But now all the values in my Excel pivot table for that column are 0:00:00, no matter what value is actually in the SQL table.
Any suggestions?
You can use the Excel time format [h]:mm:ss which can go beyond 24 hours.
Alternatively, you can use the SQL function DATEDIFF to get the total downtime in seconds, and then convert that to however you need to display it in Excel, e.g.
case when [Time_Line_Down] IS NULL then NULL else case when [Time_Line_Ran] IS NULL then NULL else datediff(ss, Time_Line_Down, Time_Line_Ran) end end
I don't think you need the CASE statements here, since DATEDIFF returns NULL when either argument is NULL; you can just use
datediff(ss, Time_Line_Down, Time_Line_Ran)
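If you'd rather produce the grand total on the SQL side instead of in the pivot table, here is a hedged sketch (assuming Table 2 is literally named Table2) that renders the summed seconds in the same [h]:mm:ss spirit:
SELECT
    -- hours can exceed 24 because they are derived from the raw seconds total
    CAST(SUM(DATEDIFF(ss, Time_Line_Down, Time_Line_Ran)) / 3600 AS varchar(10))
    + ':' + RIGHT('0' + CAST(SUM(DATEDIFF(ss, Time_Line_Down, Time_Line_Ran)) % 3600 / 60 AS varchar(2)), 2)
    + ':' + RIGHT('0' + CAST(SUM(DATEDIFF(ss, Time_Line_Down, Time_Line_Ran)) % 60 AS varchar(2)), 2)
    AS GrandTotalDowntime
FROM Table2
WHERE Time_Line_Down IS NOT NULL
  AND Time_Line_Ran IS NOT NULL;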
Thank you all for your help,
I went ahead and tried the DATEDIFF function as suggested; I changed Table 1's computed column formula and Table 2's Total_Downtime column data type to int. Once imported into Excel, this numeric value needed some extra calculations.
In principle this is the best answer, and it should work for anyone trying to calculate the difference between two timestamps; as mentioned before, it is pretty straightforward.
But in my situation I needed to maintain two things:
1) The 00:00:00 format for the Total_Downtime column in Table 1, which changed to an integer value when using DATEDIFF
2) The [h]:mm:ss format (suggested by TobyLL) for the pivot table's Total_Downtime column in Excel, which required several calculations to convert from seconds
Solution
After learning that every time I copied from Table 1 to Table 2 the computed value (e.g. 00:02:00) changed to 1900-01-01 00:02:00.000, and that when imported into Excel it equaled 1.001388889, I decided to force the "Date" part of the timestamp to 1899-12-31 so that Excel would calculate the Grand Total in the pivot table from the "Time" (decimal) part only.
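A minimal sketch of that rebasing step, again assuming Table 2 is named Table2 (in Excel's serial scheme, 1899-12-31 is day 0, so only the time fraction survives into the pivot sum):
UPDATE Table2
SET Total_Downtime = DATEADD(ss,
                             DATEDIFF(ss, Time_Line_Down, Time_Line_Ran),
                             CAST('1899-12-31' AS datetime))  -- day 0 in Excel's serial scheme
WHERE Time_Line_Down IS NOT NULL
  AND Time_Line_Ran IS NOT NULL;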

How to convert a number to a date in Oracle SQL Developer

I have an Excel-format dataset that needs to be imported into a table; one column is a date, but the data is stored in number format, such as 41275. When importing the data, I tried to choose the date format yyyy-mm-dd, and it gives an error: not a valid month. I also tried MM/DD/YYYY, which gives another error: day of month must be between 1 and last day of month. Does anyone know what this number is and how I can convert it to a date format when importing it into the database? Thanks!
The expression (with respect to Excel's leap-year bug that AmmoQ mentions) you are looking for is:
case
  when yourNumberToBeImported <= 59 then date '1899-12-31' + yourNumberToBeImported
  else date '1899-12-30' + yourNumberToBeImported
end
Then, you may either
create a (global) temporary table in your Oracle DB, load your data from the Excel sheet into the table, and then reload the data from the temporary table into your target table with the above calculation included,
or you may
load the data from the Excel sheet into a persistent table in your Oracle DB and create a view over the persistent table which contains the above calculation.
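A minimal sketch of the second, view-based option, using hypothetical names (staging_table for the loaded Excel data, excel_serial for the number column):
-- exposes the loaded rows with the serial converted to a proper DATE,
-- honoring Excel's 1900 leap-year bug via the <= 59 split
CREATE OR REPLACE VIEW staging_with_dates AS
SELECT t.*,
       CASE
         WHEN t.excel_serial <= 59 THEN DATE '1899-12-31' + t.excel_serial
         ELSE DATE '1899-12-30' + t.excel_serial
       END AS real_date
FROM staging_table t;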
The number you got is the Excel representation of a certain date. Excel stores a date as the number of days since a certain starting point; to be precise:
1 = 1-JAN-1900
2 = 2-JAN-1900
...
30 = 30-JAN-1900
so, to get your Excel number into an Oracle date, you might want to try something like this (using the 1899-12-30 base, since a modern serial such as 41275 lies past Excel's phantom 29-FEB-1900):
to_date('1899-12-30','yyyy-mm-dd') + 41275
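As a quick sanity check in SQL Developer (Excel serial 41275 corresponds to 1-JAN-2013):
select to_date('1899-12-30','yyyy-mm-dd') + 41275 as converted_date
from dual;
-- returns 01-JAN-13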

Picking timeseries from SQL database in source priority order

I need to pull a timeseries from a table that looks about like this
TimeStamp (timestamp), Datapoint (float), Data_source (integer)
So the following query would give me all the data recorded by source 1.
SELECT *
FROM table
WHERE data_source = 1
Now, how do I pick so that data_source = 1 is prioritized over the other sources? I.e., I don't want duplicates; I always want a datapoint, preferably from source 1, but if that's not available, pick something else.
I did this with a subquery that counted the number of source = 1 entries for every row, but that is incredibly slow. There must be an efficient way to do this. Source 1 is only available for about 3% of the points. There may be multiple other sources for one point, but in general any other source will do.
I'm on MS SQL 2008, so T-SQL would be preferred, but I think this problem is quite general.
It sounds like you want to combine your data into a single series, preferring source 1.
How about this:
select timestamp,
       datapoint
from (select t.*,
             min(data_source) over (partition by timestamp) as minDataSource
      from t
     ) t
where data_source = minDataSource
This assumes that "1" is the smallest data source. It calculates the min data source for each timestamp, and then uses the data from that data source.
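If the preferred source ever stops being the smallest id, a ROW_NUMBER variant makes the priority explicit. This is a sketch assuming a hypothetical table name timeseries for the question's table:
select timestamp, datapoint
from (select t.*,
             -- rank rows per timestamp: source 1 first, then anything else
             row_number() over (partition by timestamp
                                order by case when data_source = 1 then 0 else 1 end) as rn
      from timeseries t
     ) t
where rn = 1
This also runs on SQL Server 2008, and it returns exactly one row per timestamp even when several fallback sources exist.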