Select based on table name with SQL for time series tables?

I'm trying to count the number of actions per account over a period of time. The period of time may be variable. The way I've thought of doing this is by creating a table for each time period, containing records for each account that get upserted on user action. For example, I'd have a table named '12-05-2013' which would contain a record with the following attributes:
{
  "account_id": uniqueid12345,
  "uploads": 4
}
The table would basically represent the uploads by all users on that specific day.
I want to be able to find all accounts that had < 2 uploads over the last seven days. This would require me to query over the 7 tables representing the last 7 days (i.e. 12-05-2013, 12-04-2013, 12-03-2013, etc.). So far I've been unable to find a method of selecting out of these specific tables. Am I modeling the problem wrong? Is there an easy way to do this? I'm new to relational databases so go easy on me :)

Why wouldn't you use a query such as this?
select userid, count(*) as NumActions
from ActionTable
where ActionDate between #date1 and #date2
group by userid;
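To get the accounts from your example - fewer than 2 uploads over the last seven days - you can then add a HAVING clause. A sketch, assuming the same ActionTable and date placeholders as above:
select userid, count(*) as NumActions
from ActionTable
where ActionDate between #date1 and #date2
group by userid
having count(*) < 2;
Note that accounts with zero uploads in the window won't appear at all; to include them you would need a LEFT JOIN from an accounts table.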


How to create an aggregate table (data mart) that will improve chart performance?

I created a table named user_preferences where user preferences have been grouped by user_id and month.
Each month I collect all user_ids and assign all preferences:
city
district
number of rooms
the maximum price they can spend
The plan assumes displaying a graph of users' shopping intentions.
The blue line is the number of interested users for the selected values in the filters.
The graph should enable filtering by parameters marked in red.
The above is a simplified form to clarify the subject. In fact, there are many more users: every month, the table grows by several hundred thousand records. The SQL query that feeds the chart takes up to 50 seconds. That's far too slow - I can't afford it.
So, I need to create a table (table/aggregation/data mart) where I can insert the previously calculated number of interested users for all combinations. Thanks to this, the end user will not have to wait for the counts to be computed.
Now the question is - how to create such a table in PostgreSQL?
I know how to write a SQL query that will calculate a specific example.
SELECT
month,
count(DISTINCT user_id) interested_users
FROM
user_preferences
WHERE
month BETWEEN '2020-01' AND '2020-03'
AND city = 'Madrid'
AND district = 'Latina'
AND rooms IN (1,2)
AND price_max BETWEEN 400001 AND 500000
GROUP BY
1
The question is - how do I calculate all possible combinations? Can I write multiple nested loops in SQL?
The topic is extremely important to me, and I think it will also be useful to others in the future.
I will be extremely grateful for any tips.
Well, based on your query, you have the following filters:
month
city
district
rooms
price_max
You can try creating a view with the following structure:
SELECT month
,city
,district
,rooms
,price_max
,count(DISTINCT user_id) AS interested_users
FROM user_preferences
GROUP BY month
,city
,district
,rooms
,price_max
You can make this view materialized, so the query behind the view is not executed each time it is queried - it behaves like a table.
When you add new records to the base table, you will need to refresh the view (unfortunately, PostgreSQL does not support auto-refresh like some other databases):
REFRESH MATERIALIZED VIEW my_view;
or you can schedule a task.
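A minimal sketch of the whole setup, assuming the table and column names from the question (the view name user_preferences_agg is hypothetical):
CREATE MATERIALIZED VIEW user_preferences_agg AS
SELECT month, city, district, rooms, price_max,
       count(DISTINCT user_id) AS interested_users
FROM user_preferences
GROUP BY month, city, district, rooms, price_max;
-- after each monthly load:
REFRESH MATERIALIZED VIEW user_preferences_agg;
If you create a unique index covering the grouping columns, PostgreSQL can also do REFRESH MATERIALIZED VIEW CONCURRENTLY, which avoids blocking readers during the refresh.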
If you only ever do exact matches on each field, this will work. But in your example, you have criteria like:
month BETWEEN '2020-01' AND '2020-03'
AND rooms IN (1,2)
AND price_max BETWEEN 400001 AND 500000
In such cases, I usually write the same query but SUM the data from the materialized view. In your case, though, you are using DISTINCT, and summing distinct counts may count a user multiple times.
If this is an issue, you would need to precalculate too many combinations, and I doubt that is the answer. Alternatively, you can try to normalize your data - this will improve the performance of the aggregations.
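For illustration, here is how the range query from the question might look against that materialized view - again with the hypothetical view name, and subject to the double-counting caveat above, because summing per-group distinct counts can count the same user several times:
SELECT month,
       SUM(interested_users) AS interested_users
FROM user_preferences_agg
WHERE month BETWEEN '2020-01' AND '2020-03'
  AND city = 'Madrid'
  AND district = 'Latina'
  AND rooms IN (1,2)
  AND price_max BETWEEN 400001 AND 500000
GROUP BY month;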

Counting the number of times the same record exists in a given period of time

I am trying to write a query to find out whether a record exists more than once in a given period of time, and if it does, how many times it has been repeated.
To approach this, I have sorted the records:
select * from table_name where date >= ? and date <= ? order by email
Then I tried to count the number of times the same record exists, but I have not been able to figure out a way to do that.
Here is the problem. The image below holds the basic data structure, along with the expected output for a year.
The table holds the name Xyz with email xyz@email.com three times, the name Abc with email abc@email.com twice, and the name Def with email def@email.com twice. I am trying to figure out a way to find the number of times each record is repeated in a given period of time using a single query. I was thinking of using recursion on a record and counting until a different record is found after sorting, but using recursion on every record seems expensive.
Is there a better solution to solve this problem ?
Regards
Group and count.
SELECT column_to_compare1, column_to_compare2, COUNT(*)
FROM table_name
WHERE [date] BETWEEN #date1 AND #date2
GROUP BY column_to_compare1, column_to_compare2
HAVING COUNT(*) > 1 -- IF YOU WANT TO ONLY INCLUDE RECORDS WITH DUPLICATES
BETWEEN is inclusive, so you can adjust your dates with DATEADD if you want a strictly-between (exclusive) range.
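For example, shifting both endpoints inward gives an exclusive range - a sketch, assuming SQL Server's DATEADD and the same placeholder dates:
WHERE [date] BETWEEN DATEADD(day, 1, #date1) AND DATEADD(day, -1, #date2)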
You can use the COUNT function to do this.
To do this using your own query:
SELECT Name, Email, COUNT(*) AS count
FROM table_name
WHERE date BETWEEN '01/01/2005' AND '31/12/2005'
GROUP BY Name, Email
However, your example query is poor, so I cannot give you a better solution. Here is an example of this working: SQL Fiddle
EDIT: Updated my solution to match your expected output.

How to add a column for each day in sql?

I'm trying to make an attendance management system for my college project.
I'm planning to create one table for each month.
Each table will have
OCT(Roll_no int, Name varchar, (dates...) bool)
Here the dates will run from 1 to 30 and store a boolean for present or absent.
Is this a good way to do it?
Is there a way to dynamically add a column for each day as the data is filled in?
Also, how can I populate the data according to the current day?
Edit: I'm planning to make a UI which will have only two options (Present, Absent) corresponding to each fetched roll no.
So the roll nos. and names are already going to be in the table; I'll just add a status (present or absent) corresponding to each row in the table for each date.
I would use Firebase. Make a node with a list of users, then inside the users make an attendance node with time-stamps for attended days. That way it's easier to parse, and you also leave room to bind data from other tables to users as well as to add additional properties to each user.
Or do the SQL equivalent, which would be to make a table of users (names and user properties) with associated keys (primary keys in the user table, foreign keys in the attendance table) and an attendance column holding an array of time-stamps representing attended days.
Either way, your UI would then only have to process timestamps and be able to parse through them with dates.
Though maybe add additional columns as the years go by, so it wouldn't be so much of a bulk download.
Edit: In your case you'd want the SQL columns to be by month, letting you select whichever month you'd like. For your UI, on injecting new attendance you'd simply add a column to the table if it does not already exist and then continue with the submission. On search/view you'd handle null results (say there were 2 months where no one attended at all; you'd catch any exceptions and continue with your display).
Ex:
User
Primary Key - Name
1 - Joe
2 - Don
3 - Rob
Attendance
Foreign Key - Dates Array (Oct 2017)
1 - 1508198400, 1508284800, 1508371200
2 - 1508284800
3 - 1508198400, 1508371200
I'd agree with Gordon. This is not a good way to store the data. (It might be a good way to present it). If you have a table with the following columns, you will be able to store the data you want:
roll_no (int)
Name (varchar)
Date (Date)
Present (bool)
If you want to then pull out the data for a particular month, you could just add this into your WHERE clause:
WHERE DATEPART(mm, [Date]) = 10 -- for October, or pass in a parameter
Dynamically adding columns is going to be a pain in the neck and is also quite messy.
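For what it's worth, here is a minimal sketch of that design in SQL Server syntax (the table name, the varchar length, and the use of bit for the boolean are assumptions):
CREATE TABLE attendance (
    roll_no int          NOT NULL,
    name    varchar(100) NOT NULL,
    [Date]  date         NOT NULL,
    present bit          NOT NULL,  -- 1 = present, 0 = absent
    CONSTRAINT pk_attendance PRIMARY KEY (roll_no, [Date])
);
-- October's attendance, one row per student per day:
SELECT roll_no, name, [Date], present
FROM attendance
WHERE DATEPART(mm, [Date]) = 10;
Turning that into a day-per-column grid is then a job for a PIVOT or the reporting layer, not the storage schema.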

MS Access - Log daily totals of query in new table

I have an ODBC database that I've linked to an Access table. I've been using Access to generate some custom queries/reports.
However, this ODBC database changes frequently and I'm trying to discover where the discrepancy is coming from. (There are hundreds of thousands of records to go through, but I can easily filter them down to what I'm concerned about.)
Right now I've been manually pulling the data each day, exporting to Excel, counting the totals for each category I want to track, and logging in another Excel file.
I'd rather automate this in Access if possible, but haven't been able to get my head around it yet.
I've already linked the ODBC databases I'm concerned with, and can generate the query I want to generate.
What I'm struggling with is how to capture this daily and then log that total so I can trend it over a given time period.
If the data were constant, this would be easy for me to understand and do. However, the data can change daily.
EX: This is a database of work orders. Work orders(which are basically my primary key) are assigned to different departments. A single work order can belong to many different departments and have multiple tasks/holds/actions tied to it.
Work Order 0237153-03 could be assigned to Department A today, but then could be reassigned to Department B tomorrow.
These work orders also have "ranking codes" such as Priority A, B, C. These too can be changed at any given time. Today Work Order 0237153-03 could be priority A, but tomorrow someone may decide that it should actually be Priority B.
This is why I want to capture all available data each day (The new work orders that have come in overnight, and all the old work orders that may have had changes made to them), count the totals of the different fields I'm concerned about, then log this data.
Then repeat this everyday.
The question you ask is very vague, so here is a general answer.
You are counting the items you get from a database table.
It may be that you don't need to actually count them every day: if the table in the database stores all the data for every day, you simply need a query that counts the items for each day stored in the table.
You are right that this would be best done in Access.
You might not need the "log the counts in another table" step, though.
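For instance, if the linked table keeps one row per item with a date column, the grouped count per day could be as simple as this (the table and column names are hypothetical):
SELECT WorkOrderDate, COUNT(*) AS DailyTotal
FROM LinkedOdbcTable
GROUP BY WorkOrderDate;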
It seems you are quite new to Access, so you might benefit from these links: videos numbered 61 and 70 here, and also video 7 here.
These will help; alternatively, buy a book or use web resources.
Part 2.
If you have to bodge it because you can't get the ODBC database to use triggers/data macros to log a history, you could store a history yourself like this... BUT you have to do it EVERY day.
0. On day 1, take a full copy of the ODBC data as YOURTABLE. Add a field "DumpNumber" and set it all to 1.
1. Link to the ODBC data every day.
2. Join from YOURTABLE to the ODBC table and find any records that have changed (i.e. test just the fields you want to monitor and see if any of them have changed).
3. Append these changed records to YOURTABLE with a new "DumpNumber" value of 2. This MUST always increment!
4. You can now write SQL to get the most recent record for each primary key:
SELECT Mytable.*
FROM Mytable
INNER JOIN
(
    SELECT PrimaryKeyFields, MAX(DumpNumber) AS MAXDumpNumber
    FROM Mytable
    GROUP BY PrimaryKeyFields
) AS T1
ON T1.PrimaryKeyFields = Mytable.PrimaryKeyFields
AND T1.MAXDumpNumber = Mytable.DumpNumber
You can compare the most recent records with any previous records.
i.e. to get the previous dump.
Note that this will NOT work in the above SQL (unless you always keep every record!):
AND t1.MAXDumpNumber-1 = Mytable.DumpNumber
5. Use something like this to get the previous row:
SELECT Mytable.*
FROM Mytable
INNER JOIN
(
    SELECT Mytable.PrimaryKeyFields
         , MAX(Mytable.DumpNumber) AS MAXDumpNumber
    FROM Mytable
    INNER JOIN
    (
        SELECT PrimaryKeyFields
             , MAX(DumpNumber) AS MAXDumpNumber
        FROM Mytable
        GROUP BY PrimaryKeyFields
    ) AS TabLatest
    ON TabLatest.PrimaryKeyFields = Mytable.PrimaryKeyFields
    AND TabLatest.MAXDumpNumber <> Mytable.DumpNumber
    -- Note that the <> is VERY important
    GROUP BY Mytable.PrimaryKeyFields
) AS T1
ON T1.PrimaryKeyFields = Mytable.PrimaryKeyFields
AND T1.MAXDumpNumber = Mytable.DumpNumber
Create 4 and 5 as MS Access named queries (or SQL Server views) and then treat them like tables to do the comparison.
Make sure you have indexes created on the PK fields and the DumpNumber, and that they are unique - this will speed things up.
Finish it in time for Christmas... and flag this as an answer!

Inefficient query or am I reaching the limits of Access?

I have been taking an online class on relational databases and created an Access database (for the first time) to practice my SQL queries and solve a couple of work-related problems along the way. The database consists of three tables, with the primary table being used to record company-wide sales summary information at the branch/store/menu item level (i.e. the lowest level of detail). With three periods of data, the database is presently 1.3 GB, and that one table contains 4,262,421 records.
Everything has gone well until I attempted to run the following query:
SELECT P1.*, P13.[Price?] AS P13Price
FROM (SELECT * FROM PBASE WHERE Period = 13) AS P13, (SELECT * FROM PBASE WHERE Period = 1) AS P1
WHERE P1.Key = P13.Key and P1.[Price?]<>P13.[Price?];
To explain: the big table is PriceAccData, so I first created a query (PBASE) that adds a field to PriceAccData that I can use as a key to compare price changes from one period to the next (a combination of branch, store, and menu item). Then I used subqueries to create data sets for the last period of 2013 (Period 13) and the first period of 2014 (Period 1). From there, I attempted to identify items that had changed in price from one period to the next in the WHERE clause.
Is there a more efficient way to write the query or to accomplish the comparison? It will work for one branch at a time, but it takes a long time and locks up Access if I run it for more than one branch.
Subqueries like this are notoriously inefficient and are best used as a last resort. There's usually a way to JOIN tables for better efficiency. I suggest something along the lines of:
SELECT ...... FROM PBASE P13 INNER JOIN PBASE P1 ON P13.KEY=P1.KEY
This will give you the data for the two periods; then you can apply your price-difference criteria. Let me know if you need further help with that.
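Fleshing that out with the columns from the original query, the rewritten version might look like this - a sketch, assuming PBASE exposes Period, Key, and [Price?] as shown above:
SELECT P1.*, P13.[Price?] AS P13Price
FROM PBASE AS P13
INNER JOIN PBASE AS P1
    ON P13.[Key] = P1.[Key]
WHERE P13.Period = 13
  AND P1.Period = 1
  AND P1.[Price?] <> P13.[Price?];
Filtering on Period and comparing prices in plain WHERE conditions, instead of materializing two subquery result sets first, should let Access use any indexes on Key and Period, which is where the speedup comes from.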