The following is in DirectQuery mode:
I have a date/time table that 10 separate queries have a relationship to. All data is from the current day. The date/time table just returns all times on 15-minute intervals up to the current timestamp. This works fine.
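For context, here is a hypothetical T-SQL sketch of the kind of query that could back such a date/time table in DirectQuery (the question doesn't say what the actual source looks like, so this assumes a SQL Server backend):
;with ticks as (
    select cast(cast(getdate() as date) as datetime2(0)) as tick  -- midnight today
    union all
    select dateadd(minute, 15, tick)
    from ticks
    where dateadd(minute, 15, tick) <= getdate()
)
select tick
from ticks
option (maxrecursion 96);  -- at most 96 fifteen-minute ticks in a day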
I have 10+ queries that each return a record from whatever the latest 15-minute interval timestamp was. Each of these queries has a relationship to the date/time table.
I am wondering why any visual that uses the date/time table is incredibly slow. Say I have a visual that shows the timestamps on the x-axis and the data on a line (e.g. at 3:45 I had 10 records, at 4:00 I had 5, at 4:15 I had 1). If I use the date/time table's time on the x-axis, it is very slow. If I use the timestamp from the query on the x-axis, it is very fast.
Is it doing something with all of the related queries? Only about 2-3 of them should actually be running at any given time based on what page I am on.
I'm using Firebird 2.5 through Delphi 10.2/FireDAC. I have a table of historical data from industrial devices, one row per device at one-minute intervals. I need to be able to retrieve an arbitrary time window of records on an arbitrary interval span (every minute, every 3 minutes, every 15 minutes, etc.), which then gets used to generate a trending grid display or a report. This example query, pulling records at 3-minute intervals over a two-day window for 2 devices, does most of what I need:
select *
from avc_history
where (avcid in (11, 12))                                 /* devices of interest */
  and (tstamp >= '6/26/2022') and (tstamp < '6/27/2022')  /* time window */
  and (mod(extract(minute from tstamp), 3) = 0)           /* keep every 3rd minute */
order by tstamp, avcid;
The problem with this is that, if device #11 has no rows for part of the time span, then there is a mismatch in the number of rows returned for each device. I need to return a row for each device for each interval regardless of whether a row with that timestamp actually exists in the table - the other fields of the row can be null as long as the tstamp field has the timestamp and the avcid field has the device's ID.
I can fix this up in code, but that's a memory and time hog, so I want to avoid that if possible. Is there a way to structure a query (or if necessary, a stored procedure) to do what I need? Would some capability in the FireDAC library that I'm not aware of make this easier?
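In case it helps, here is a minimal sketch of the usual SQL-side approach, assuming Firebird 2.5's recursive CTE support and that history rows land exactly on whole-minute boundaries (the table name, device IDs and window are taken from the example above; everything else is illustrative): generate the interval grid, cross join it with the device IDs, and left join the history so missing rows come back with nulls.
with recursive ticks as (
  select timestamp '2022-06-26 00:00' as tstamp
  from rdb$database
  union all
  select dateadd(3 minute to tstamp)
  from ticks
  where dateadd(3 minute to tstamp) < timestamp '2022-06-27 00:00'
)
select t.tstamp, d.avcid, h.*  /* history columns come back null where no row exists */
from ticks t
cross join (select 11 as avcid from rdb$database
            union all
            select 12 from rdb$database) d
left join avc_history h on h.avcid = d.avcid and h.tstamp = t.tstamp
order by t.tstamp, d.avcid;
Note that Firebird caps recursion depth at 1024 rows per branch, which covers roughly two days of 3-minute ticks; longer windows or finer intervals would need a selectable stored procedure with a loop instead.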
I'm trying to obtain the average time of an "activity" in a Moodle database. I am not an SQL expert, but I have managed to get to the point shown in the picture. My question is whether there is a way to obtain, first, the timestamp/time difference by day (this "activity" does not have a starting-time column like many others do), and then sum them all to get the average for that activity. For the first part I tried the EXTRACT() function, comparing the dates in the "%Y-%m-%d" format, but it only sums the first row where they are equal. By the way, I have been doing this with a single SQL statement; I know stored procedures exist, but my SQL level is not that high.
Thanks in advance!
data obtained so far
Data in the logs table (the most important one, I think):
component   action     objecttable    userid  courseid  timecreated
mod_quiz*   viewed     quiz_attempts  6       2         1645287525
mod_forum   viewed     forum          5       2         1645288525
core        loggedout  user           2       0         1645291745
mod_page    viewed     page           5       2         1645291955
Data I've been trying to get:
Activity   StartTime  EndTime  Total
forum      19:01      19:10    9 minute(s)
quiz       15:45      16:00    15 minute(s)
page       ...        ...      ...
workshop   ...        ...      ...
but so far I've only managed to arrange the data in a single column:
Time
2022-x-x h:m
....
and when I try to sum by day with the EXTRACT() function, matching the dates in a very long query, it only gets the first record.
NOTE: * half of the "activities" were easy to calculate since they have "timestart" and "timeend" columns, but I cannot figure out how to solve the ones that do not have a "timestart" column.
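For the activities without a timestart column, one possible approach is sketched below, assuming MySQL 8+ (or MariaDB 10.2+) window functions and the standard Moodle log table mdl_logstore_standard_log (the question calls it "logs"; the courseid value comes from the sample rows above): treat each log row's timecreated as the start of the activity and the same user's next log event as its end, then average the differences per activity.
select component as activity,
       sec_to_time(avg(e.next_time - e.timecreated)) as avg_total  -- seconds to h:m:s
from (
    select component,
           timecreated,
           lead(timecreated) over (partition by userid
                                   order by timecreated) as next_time
    from mdl_logstore_standard_log
    where courseid = 2
) e
where e.next_time is not null  -- the user's last event has no successor
group by component;
A real query would probably also cap the gaps (e.g. ignore differences over 30 minutes) so that a logout or an overnight break does not skew the average.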
Quick question on optimizing a query pattern we use a lot when working with time-series data provided by a data logging system.
Database is SQL Server 2019 (v15) and for simplification assume the table is made up of just:
ID (bigint) - unique ID for the row
Timestamp (bigint) - Unix timestamp value.
Sample (float) - Value of sample taken (e.g. temperature measurement).
There is no regular interval or spacing with respect to the timestamps, as the data logger only logs data on a change to the data point being monitored (i.e. there is no reliable way to determine when the previous sample was taken).
Anyway, our queries often involve selecting a range of data between two timestamps, but as expected, the timestamps chosen as the bounds for the range rarely line up exactly with a timestamp in the data set. Because of this, what we really need to select is all the data in the range plus the one record immediately before the range (so we know what the data value is leading into the selected range).
Historically we have done this one of two ways:
Select the rows between the timestamps (inclusive) and union this with a top(1) select of the first row with a timestamp <= the range start.
OR
Select the top(1) timestamp <= the range start into a variable, and then run a select statement with this new timestamp as the lower bound for the range (a sketch of this variant follows below).
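For illustration, a minimal T-SQL sketch of the second variant; the table name dbo.SampleLog and the bound values are hypothetical, while the columns are the ones listed above:
declare @from bigint = 1656201600;  -- range start (example Unix timestamp)
declare @to   bigint = 1656288000;  -- range end
-- widen the lower bound to the last sample at or before the range start
declare @adjFrom bigint =
    (select top (1) [Timestamp]
     from dbo.SampleLog
     where [Timestamp] <= @from
     order by [Timestamp] desc);
select ID, [Timestamp], Sample
from dbo.SampleLog
where [Timestamp] >= coalesce(@adjFrom, @from)  -- no earlier row: keep @from
  and [Timestamp] <= @to
order by [Timestamp];
With an index on Timestamp, both variants should cost about the same, since the top(1) lookup is a single backward range seek; the variable form mainly keeps the plan simpler than a union.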
Since I am not an expert, I'm wondering whether either of these methods performs better than the other, or if there is some better, third option we haven't encountered.
Thanks!
I have a table (Access 2016) tbl_b with date/time registrations
b_customer (num)
b_date (date)
b_start (date/time)
b_end (date/time)
I want to make a chart of all time registrations per day in a selected month, including the gaps between those times. For this I need a query or table showing all times as the source for the chart. I'm a bit lost as to how to approach this.
I assume the chart source needs consecutive records with all date and time registrations to do this. My approach would be to create a temporary table (tmp) calculating all time periods where the customer is null. The next step would be a union query combining tbl_b and the tmp table.
tbl_b does not have records for every day, so I use a query generating all days in the selected month to be used in the chart (I found this solution here: Create a List of Dates in Access Query).
The disadvantage of using a tmp table for the "time gaps" is that it does not update in real time, whereas a query would. I currently use about 20 queries to produce the end result, but MS Access keeps giving (expected) errors that the queries are too complicated.
Each query looks for the difference between the end time found by the previous query and the next start time. This approach has a weakness as well: I assumed 15 steps would be enough (no more than 15 gaps expected), but that is not certain.
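For comparison, a single correlated subquery can replace the chain of 15 gap queries. A minimal sketch in Access SQL, using the tbl_b fields above (Access saved queries have no comment syntax, so the explanation stays in prose): for each registration, gap_start is its end time and gap_end is the earliest start time that day at or after it.
SELECT b.b_date,
       b.b_end AS gap_start,
       (SELECT MIN(b2.b_start)
        FROM tbl_b AS b2
        WHERE b2.b_date = b.b_date
          AND b2.b_start >= b.b_end) AS gap_end
FROM tbl_b AS b
ORDER BY b.b_date, b.b_end;
Rows where gap_end equals gap_start are back-to-back registrations (no gap), and a Null gap_end marks the last registration of the day; both can be filtered in an outer query or in the union that feeds the chart, with no fixed 15-step limit.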
Can anyone give me a head start on how this can be accomplished with an easier (and actually working) method? Maybe VBA?
Thx!
Art
I have two databases in SQL Server. I want to find out which data is older than (let's say 3) years.
I know the database creation dates. Currently I have around 550 GB of data (across both databases) spanning 7 years, and I want to know how much of the total 550 GB is older than 3 years (or 5 years).
I was going through this link but couldn't get the expected data.
SQL SERVER – Query to find number Rows, Columns, ByteSize for each table in the current database – Find Biggest Table in Database
One solution that comes to mind right now is to find the total number of rows accounted for over the 7 years (easy to get) and the total number of rows accounted for over the first 5 years from the creation date (I don't know how to get this number).
Then, if row_count_7_years accounts for 550 GB of data, what does row_count_5_years account for? That would give me approximate figures.
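To illustrate with hypothetical numbers: if the full 7 years hold 100 million rows and the oldest 4 years hold 60 million of them, the proportional estimate for data older than 3 years is 550 GB x 60/100 = roughly 330 GB.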
Please Help
For such purposes you should keep a datetime field, as marc mentioned. I suppose you don't have one.
In your suggested solution you can get the total count of rows in your table (for 7 years, I suppose), but you wouldn't be able to get the row count for 5 years, because there is no date.
You could get the total number of records for the 7 years and divide it by the number of years, and ONLY IF your database fills at a roughly even rate, you could query for the top (number of rows in one year) * 5 rows, ordered by row number (a sketch of this follows below). The result would be the rows you should delete. But I wouldn't recommend using this solution.
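A minimal T-SQL sketch of that estimate, with a hypothetical table name dbo.BigTable, assuming an ever-increasing ID column that reflects insert order, and adapting the multiplier to the 3-year question (the oldest 4 of 7 years):
-- total rows over the 7-year lifetime
declare @total bigint = (select count(*) from dbo.BigTable);
-- crude uniform-rate assumption: rows inserted per year
declare @perYear bigint = @total / 7;
-- approximate the data older than 3 years as the oldest 4 years' worth of rows
select top (@perYear * 4) *
from dbo.BigTable
order by ID asc;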
I would recommend you alter your tables and add a datetime column to each of them. Before that, you should back up the whole data and copy it somewhere. After 3 years you would be able to do your clean-up.
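A sketch of that alteration (same hypothetical table name); note that SQL Server stamps all existing rows with the default at the moment the column is added, not with their true insert time:
-- record an insert timestamp for all rows from now on
alter table dbo.BigTable
    add CreatedAt datetime2 not null
        constraint DF_BigTable_CreatedAt default sysutcdatetime();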
As mentioned above, you should have a date column. If you don't, then depending on the relationships in your tables you might be able to estimate the number of rows by looking them up via some other table that has a datetime column. Otherwise, if you have a backup (unlikely, but still), you could restore it to identify the delta.