First of all, thank you for your help!
I want to find out which tables in the database are most heavily used: the number of users that query each table, the number of times it was queried, the resources consumed by users per table, the total time the tables were queried, and any other useful data.
For now I would limit the analysis to 9 specific tables.
I tried using stl_scan and pg_user with the following two queries:
SELECT
    s.perm_table_name AS table_name,
    count(*) AS qty_query,
    count(DISTINCT s.userid) AS qty_users
FROM stl_scan s
JOIN pg_user b
    ON s.userid = b.usesysid
JOIN temp_mone_tables tmt
    ON tmt.table_id = s.tbl AND tmt."table" = s.perm_table_name
WHERE s.userid > 1
GROUP BY 1
ORDER BY 1;
SELECT
    b.usename AS user_name,
    count(*) AS qty_scans,
    count(DISTINCT s.tbl) AS qty_tables,
    count(DISTINCT trunc(s.starttime)) AS qty_days
FROM stl_scan s
JOIN pg_user b
    ON s.userid = b.usesysid
JOIN temp_mone_tables tmt
    ON tmt.table_id = s.tbl AND tmt."table" = s.perm_table_name
WHERE s.userid > 1
GROUP BY 1
ORDER BY 1;
temp_mone_tables is a temporary table that contains the id and name of the tables I'm interested in.
These queries give me some information, but I need more detail. Surprisingly, there isn't much material online about this kind of statistics.
Thanks in advance!
Nice work! You are on the right track using the stl_scan table. I'm not clear what further details you're looking for.
For detailed metrics on resource usage you may want to use the SVL_QUERY_METRICS_SUMMARY view. Note that this data is summarized by query, not by table, because a query is the primary unit of resource use.
Generally, have a look at the admin queries (and views) in our Redshift Utils library on GitHub, particularly v_get_tbl_scan_frequency.sql.
Thanks to Joe Harris' answer I was able to add a lot of information to my previous query. With svl_query_metrics_summary joined to stl_scan you get important data about resource consumption, and this can be extended by joining to the many views listed in Joe's answer.
For me the solution begins with the following query:
SELECT *
FROM stl_scan ss
JOIN pg_user pu
    ON ss.userid = pu.usesysid
JOIN svl_query_metrics_summary sqms
    ON ss.query = sqms.query
JOIN temp_mone_tables tmt
    ON tmt.table_id = ss.tbl AND tmt."table" = ss.perm_table_name;
The query gives you a lot of data that can be summarized in whatever way you need.
Remember that temp_mone_tables is a temp table that contains the id and name of the tables I'm interested in.
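For illustration, here is one such per-table summary. It's only a sketch: it assumes the query_cpu_time and query_execution_time columns of svl_query_metrics_summary, dedupes stl_scan to one row per query and table, and still attributes a query's total metrics to every table that query scans:
SELECT
    ss.perm_table_name AS table_name,
    count(DISTINCT ss.query) AS qty_queries,
    count(DISTINCT ss.userid) AS qty_users,
    sum(sqms.query_cpu_time) AS total_cpu_time,
    sum(sqms.query_execution_time) AS total_execution_time
FROM (SELECT DISTINCT userid, query, tbl, perm_table_name
      FROM stl_scan
      WHERE userid > 1) ss
JOIN svl_query_metrics_summary sqms
    ON ss.query = sqms.query
JOIN temp_mone_tables tmt
    ON tmt.table_id = ss.tbl AND tmt."table" = ss.perm_table_name
GROUP BY 1
ORDER BY 1;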
Related
I have two tables: users and orders. Orders is a massive table (>100k entries) and users is relatively small (around 400 entries).
I want to find the number of orders per user. The column linking both tables is the email column.
I can achieve this with the following query:
SELECT sub_1.num, u.id FROM users AS u,
(SELECT cust_email AS email, COUNT(purchaseid) AS num
FROM orders AS o
WHERE o.status = 'COMPLETED'
GROUP BY cust_email) sub_1
WHERE u.email = sub_1.email
ORDER BY createdate DESC NULLS LAST
However, as mentioned previously, the orders table is very large, so I would ideally want to add another condition to the WHERE clause in the subquery so that it only retrieves those emails that exist in the users table.
I can simply add the users table to the subquery like this:
SELECT sub_1.num, u.id FROM users AS u,
(SELECT cust_email AS email, COUNT(purchaseid) AS num
FROM orders AS o, users AS u
WHERE o.status = 'COMPLETED'
and o.cust_email = u.email
GROUP BY cust_email) sub_1
WHERE u.email = sub_1.email
ORDER BY createdate DESC NULLS LAST
This does speed up the query, but sometimes the outer query is much more complex than just selecting all entries from the users table, so this solution does not always work. The goal would be to somehow link the outer and the inner query. I've thought of joining them but cannot figure out how to get it to work.
I noticed that the first query seems to perform faster than I expected, so perhaps PostgreSQL is already smart enough to connect the outer and inner tables. However, I was hoping someone could shed some light on how this works and on the best way to perform these types of subqueries.
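For what it's worth, one way to make that link explicit in PostgreSQL is a LATERAL subquery. This is only a sketch against the same schema, and whether it actually beats the planner's own rewrite is worth checking with EXPLAIN ANALYZE:
SELECT u.id, sub_1.num
FROM users AS u
CROSS JOIN LATERAL (
    -- runs once per user row, so only emails present in users are looked up
    SELECT count(o.purchaseid) AS num
    FROM orders AS o
    WHERE o.status = 'COMPLETED'
      AND o.cust_email = u.email
) AS sub_1
ORDER BY u.createdate DESC NULLS LAST;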
I have a subscription database containing Customers, Subscriptions and Publications tables.
The Subscriptions table contains ALL subscription records, and each record has three flags to mark the status: isActive, isExpired and isPending. These are Booleans and only one flag can be True; this is handled by the application.
I need to identify all customers who have not renewed any magazines to which they have previously subscribed and I'm not sure that I've written the most efficient SQL query. If I find a lapsed subscription I need to ignore it if they already have an active or pending subscription for that particular magazine.
Here's what I have:
SELECT DISTINCT Customers.id, Subscriptions.publicationName
FROM Subscriptions
LEFT JOIN Customers
ON Subscriptions.id_Customer = Customers.id
LEFT JOIN Publications
ON Subscriptions.id_Publication = Publications.id
WHERE Subscriptions.isExpired = 1
AND NOT EXISTS
( SELECT * FROM Subscriptions s2
WHERE s2.id_Publication = Subscriptions.id_Publication
AND s2.id_Customer = Subscriptions.id_Customer
AND s2.isPending = 1 )
AND NOT EXISTS
( SELECT * FROM Subscriptions s3
WHERE s3.id_Publication = Subscriptions.id_Publication
AND s3.id_Customer = Subscriptions.id_Customer
AND s3.isActive = 1 )
I have just over 50,000 subscription records and this query takes almost an hour to run, which tells me there's a lot of looping going on: for each record the SQL engine has to search again for any 'isPending' and 'isActive' records.
This is my first post so please be gentle if I've missed out any information in my question :) Thanks.
I don't have your complete database structure, so I can't test the following query, but it may contain some optimizations. I will leave the testing to you, but I will explain why I changed what I changed.
select distinct Customers.id, Subscriptions.publicationName
from Subscriptions
join Customers
    on Subscriptions.id_Customer = Customers.id
join Publications
    on Subscriptions.id_Publication = Publications.id
where Subscriptions.isExpired = 1
and not exists
    (select * from Subscriptions s2
     where s2.id_Customer = Subscriptions.id_Customer
       and s2.id_Publication = Subscriptions.id_Publication
       and (s2.isPending = 1 or s2.isActive = 1))
If there's no matching row in Customers or Publications, then the Subscriptions row isn't useful, so I eliminated the LEFT JOINs in favor of plain JOINs. I also combined the two EXISTS subqueries into one; these are pretty intensive, if I recall, so the fewer the better. One last thing, which I did not list above but may be worth looking into: can you return specific fields in the subquery used by the EXISTS clause? SELECT * returns all fields, which may slow down processing. I'm not sure whether you can limit the result, unfortunately, because I don't have an equivalent DB available to test on (Google probably knows).
I suspect there are further optimizations that could be made on this query. Replacing the EXISTS clause with an IN clause might help, but I can't think of a way right now, seeing how you've got to match two fields (the customer id and the relevant publication). Let me know if this helps at all.
With a table of 50k rows, you should be able to run a query like this in seconds.
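A covering index for the correlated NOT EXISTS lookups is usually what makes the difference here; a sketch, with a hypothetical index name:
CREATE INDEX IX_Subscriptions_Cust_Pub
    ON Subscriptions (id_Customer, id_Publication, isPending, isActive);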
I'm using PostgreSQL with big tables, and a query takes too much time.
I have two tables. The first one has about 6 million rows (data table), and the second one has about 30000 rows (users table).
Each user has about 200 rows in data table.
Later, data and users tables may increase up to 30 times.
My query is:
SELECT d.name, count(*) c
FROM users AS u JOIN data AS d on d.id = u.id
WHERE u.language = 'eng' GROUP BY d.name ORDER BY c DESC LIMIT 10;
90% of users have language 'eng', and the query takes 7 seconds. Each column involved is indexed!
I read that a Merge Join should be really fast, so I sorted the tables by id and forced a Merge Join, but the time increased to 20 seconds.
I suppose the table configuration is wrong, but I don't know how to fix it.
Should I make other improvements?
For this query:
SELECT d.name, count(*) c
FROM users u JOIN
data d
on d.id = u.id
WHERE u.language = 'eng'
GROUP BY d.name
ORDER BY c DESC
LIMIT 10;
First, try indexes: users(language, id), data(id, name). See if this speeds up the query.
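In PostgreSQL that would be something like this (the index names are just examples):
CREATE INDEX users_language_id ON users (language, id);
CREATE INDEX data_id_name ON data (id, name);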
Second, what is d.name? Can a user have more than one of them? Is there a table of valid values? Depending on the answers to these questions, there may be other ways to structure the query.
Say I have three tables: a table of users, a table of around 500 different items, and the corresponding join table. What I would like to do is:
select * from users u join items_users iu on iu.user_id = u.id
where iu.item_id in (1,2,3,4,5)
and u.city_id = 1 limit 10;
Except, instead of an IN condition, I would like to find users that have all the corresponding items. If it helps, assume that the max number of items searched for at a time will be 5. Also, I am using Postgres, and I don't mind denormalizing if it would help, as it's a read-only system and speed is the highest priority.
It's another case of relational division. We have assembled quite an arsenal of queries to deal with this class of problems here.
In this case, with 5 or more items, I might try:
SELECT u.*
FROM users AS u
WHERE u.city_id = 1
AND EXISTS (
SELECT *
FROM items_users AS a
JOIN items_users AS b USING (user_id)
JOIN items_users AS c USING (user_id)
...
WHERE a.user_id = u.user_id
AND a.item_id = 1
AND b.item_id = 2
AND c.item_id = 3
...
)
LIMIT 10;
It was among the fastest in my tests, and it fits the requirement of multiple criteria on items_users while only returning columns from users.
Read about indexes at the linked answer; these are crucial for performance.
As your tables are read-only, I would also CLUSTER both tables to minimize the number of pages that have to be visited. If nothing else, CLUSTER items_users using a multi-column index on (user_id, item_id).
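A sketch of that last step (the index name is an example); note that CLUSTER rewrites the table and takes an exclusive lock while it runs:
CREATE INDEX items_users_user_item_idx ON items_users (user_id, item_id);
CLUSTER items_users USING items_users_user_item_idx;
ANALYZE items_users;  -- refresh statistics after the rewrite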
Of all the thousands of queries I've written, I can probably count on one hand the number of times I've used a non-equijoin. e.g.:
SELECT * FROM tbl1 INNER JOIN tbl2 ON tbl1.date > tbl2.date
And most of those instances were probably better solved using another method. Are there any good/clever real-world uses for non-equijoins that you've come across?
Bitmasks come to mind. In one of my jobs, we had permissions for a particular user or group on an "object" (usually corresponding to a form or class in the code) stored in the database. Rather than including a row or column for each particular permission (read, write, read others, write others, etc.), we would typically assign a bit value to each one. From there, we could then join using bitwise operators to get objects with a particular permission.
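A sketch of the pattern (the tables and bit values here are hypothetical):
SELECT o.id, o.name
FROM objects AS o
JOIN object_permissions AS p
    ON p.object_id = o.id
   AND (p.permission_mask & 2) <> 0  -- 2 = the hypothetical "write" bit
WHERE p.user_id = 42;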
How about for checking for overlaps?
select ...
from employee_assignments ea1
   , employee_assignments ea2
where ea1.emp_id = ea2.emp_id
  and ea1.end_date >= ea2.start_date
  and ea1.start_date <= ea2.end_date
Whole-day intervals in date_time fields:
date_time_field >= begin_date and date_time_field < end_date_plus_1
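For example (the table and column names are made up), all rows for 2024-03-15 regardless of their time component:
SELECT *
FROM events
WHERE event_ts >= '2024-03-15'
  AND event_ts <  '2024-03-16';  -- end_date_plus_1: the half-open interval catches every time of day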
Just found another interesting use of an unequal join in the MCTS 70-433 (SQL Server 2008 Database Development) Training Kit book. Verbatim below.
By combining derived tables with unequal joins, you can calculate a variety of cumulative aggregates. The following query returns a running aggregate of orders for each salesperson (my note - with reference to the ubiquitous AdventureWorks sample db):
select
SH3.SalesPersonID,
SH3.OrderDate,
SH3.DailyTotal,
SUM(SH4.DailyTotal) RunningTotal
from
(select SH1.SalesPersonID, SH1.OrderDate, SUM(SH1.TotalDue) DailyTotal
from Sales.SalesOrderHeader SH1
where SH1.SalesPersonID IS NOT NULL
group by SH1.SalesPersonID, SH1.OrderDate) SH3
join
(select SH1.SalesPersonID, SH1.OrderDate, SUM(SH1.TotalDue) DailyTotal
from Sales.SalesOrderHeader SH1
where SH1.SalesPersonID IS NOT NULL
group by SH1.SalesPersonID, SH1.OrderDate) SH4
on SH3.SalesPersonID = SH4.SalesPersonID AND SH3.OrderDate >= SH4.OrderDate
group by SH3.SalesPersonID, SH3.OrderDate, SH3.DailyTotal
order by SH3.SalesPersonID, SH3.OrderDate
The derived tables are used to combine all orders for salespeople who have more than one order on a single day. The join on SalesPersonID ensures that you are accumulating rows for only a single salesperson. The unequal join allows the aggregate to consider only the rows for a salesperson where the order date is no later than the order date currently being considered within the result set.
In this particular example, the unequal join is creating a "sliding window" kind of sum on the daily total column in SH4.
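For comparison, on engines that support window functions (SQL Server 2012 and later, for instance), the same running total can be computed without the unequal join:
SELECT SalesPersonID, OrderDate, DailyTotal,
       SUM(DailyTotal) OVER (PARTITION BY SalesPersonID
                             ORDER BY OrderDate) AS RunningTotal
FROM (SELECT SalesPersonID, OrderDate, SUM(TotalDue) AS DailyTotal
      FROM Sales.SalesOrderHeader
      WHERE SalesPersonID IS NOT NULL
      GROUP BY SalesPersonID, OrderDate) AS Daily
ORDER BY SalesPersonID, OrderDate;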
Duplicates (my_table stands in for your table; rowid is the Oracle pseudo-column):
SELECT a.*
FROM my_table a,
     (SELECT id, MIN(rowid) AS min_rowid
      FROM my_table
      GROUP BY id) b
WHERE a.id = b.id
  AND a.rowid > b.min_rowid;
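The same non-equijoin condition is often used to remove the duplicates in place (Oracle syntax, as a sketch):
DELETE FROM my_table a
WHERE a.rowid > (SELECT MIN(b.rowid)
                 FROM my_table b
                 WHERE b.id = a.id);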
If you want to get all of the products to offer to a customer without offering them products they already have:
SELECT
C.customer_id,
P.product_id
FROM
Customers C
INNER JOIN Products P ON
P.product_id NOT IN
(
SELECT
O.product_id
FROM
Orders O
WHERE
O.customer_id = C.customer_id
)
Most often though, when I use a non-equijoin it's because I'm doing some kind of manual fix to data. For example, the business tells me that a person in a user table should be given all access roles that they don't already have, etc.
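That fix-up follows the same shape as the products query above; a sketch with hypothetical table names:
INSERT INTO user_roles (user_id, role_id)
SELECT u.user_id, r.role_id
FROM users u
JOIN roles r
    ON r.role_id NOT IN (SELECT ur.role_id
                         FROM user_roles ur
                         WHERE ur.user_id = u.user_id);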
If you want to do a dirty join of two not really related tables, you can join with a <>.
For example, you could have a Product table and a Customer table. Hypothetically, if you want to show a list of every product with every customer, you could do something like this:
SELECT *
FROM Product p
JOIN Customer c on p.SKU <> c.SSN
It can be useful. Be careful, though, because it can create ginormous result sets.