Randomly choose which group values to include in GROUP BY - sql

I have a fairly large table, and I'd like to do a GROUP BY over the entire table across users, but only return data for 10% of the users, sampled randomly. I know how to sample rows uniformly, but is there a one-step way to randomly decide which users to include in a GROUP BY on the user field?

Maybe you could order the results of the SELECT randomly and take the top x rows, like this:
SELECT TOP x *
FROM table_name
ORDER BY NEWID()
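Note that TOP x ... ORDER BY NEWID() samples rows uniformly, not whole users. A minimal sketch of a one-step per-user version, building on the same NEWID() idea (SQL Server syntax; table_name and userid are assumed names): pick a random 10% of the distinct users first, then aggregate only over their rows.
SELECT t.userid, COUNT(*) AS row_count
FROM table_name t
INNER JOIN (
    -- a fresh random 10% of users on every execution
    SELECT TOP 10 PERCENT userid
    FROM (SELECT DISTINCT userid FROM table_name) u
    ORDER BY NEWID()
) sampled ON sampled.userid = t.userid
GROUP BY t.userid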

Related

(Hive) SQL retrieving data from a column that has 1 to N relationship in another column

How can I retrieve rows where BID comes up multiple times in AID?
You can see the sample below: the AID and BID columns sit under the PrimaryID, and BIDs sit under AIDs. I want to come up with an output that only takes records where BIDs have a 1-to-many relationship with records in the AID column. Example output below.
I provided a small sample of data; I am actually trying to retrieve 20+ columns and joining 4 tables. I have unique PrimaryIDs, and under those I have multiple unique AIDs; however, under these AIDs I can have multiple non-unique BIDs that can repeatedly come up under different AIDs.
Hive supports window functions. A window function can associate every row in a group with an attribute of the group, COUNT() being one of the supported functions. In your case you can use that and select rows for which the count > 1.
In the PARTITION BY clause you specify which columns define the group, the same way you would in the more familiar GROUP BY clause.
Something like this:
select *
from (
    select *,
           count(*) over (partition by primaryID, AID) as counts
    from mytable
) x
where counts > 1
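If the requirement is specifically BIDs that appear under more than one AID, a hedged alternative (plain Hive SQL; mytable and the column names are taken from the question) is to aggregate per BID first and join back:
select t.*
from mytable t
join (
    -- BIDs that occur under more than one distinct AID
    select BID
    from mytable
    group by BID
    having count(distinct AID) > 1
) dup
on t.BID = dup.BID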

Paginated UNION ALL result - best performance

We have a situation where we need results from 4 different tables combined into one list and paginated through OFFSET/FETCH.
We want to select records from tables a, b, c and d, order them by CreatedDatetime, and then OFFSET X, FETCH Y. The tables are quite big (in terms of number of rows), and a plain UNION ALL followed by pagination sounds horrible, because it would probably mean compiling the whole list of records and then taking the paginated part.
The problem is that none of the tables can be used as a reference to extract a start/end datetime window, because any given result might or might not contain records from any of the tables. For example, the final result might contain records from any combination of tables: a; a/b; a/b/c; a/b/c/d; b; b/c; ... and we need a fixed number of rows to be returned (a page size of, say, 20).
Any ideas on how to approach this most effectively?
UPDATE
Based on a question from @HABO:
There are unfortunately no special clues like that about the queries. We are showing user activities in the system; there are different kinds of them (the tables we select over). The query brings up data for an administrator who views the activities. How the administrator will look at the data may vary drastically: some users will have thousands of activities in the last few hours and the admin will want to see them all; in other cases, a user will have 3 actions in a day and the admin will see just the first page of data.
PS. These are not pure log tables, as the activities act as state machines over time, each having its own states, which we also look for in these queries.
If you know the page size (e.g. 100), then you can simply write 4 TOP 100 queries (ORDER BY CreatedDatetime), then do a UNION ALL on the results.
That way, even if all of the first 100 records come from one table, you are covered.
For subsequent paging queries, you'll need to record the last displayed row from each table and use it as your high-water mark for the next fetch (SELECT TOP 100 ... FROM TableA WHERE RowID > @HighWater).
Should be fairly efficient...
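A sketch of the first page under those assumptions (T-SQL; tables a-d with Id and CreatedDatetime columns are assumed): each branch is wrapped in a derived table so it can carry its own TOP + ORDER BY, and the outer query keeps the newest page across all four.
DECLARE @PageSize int = 100;

SELECT TOP (@PageSize) *
FROM (
    SELECT * FROM (SELECT TOP (@PageSize) 'a' AS src, Id, CreatedDatetime
                   FROM a ORDER BY CreatedDatetime DESC) ta
    UNION ALL
    SELECT * FROM (SELECT TOP (@PageSize) 'b' AS src, Id, CreatedDatetime
                   FROM b ORDER BY CreatedDatetime DESC) tb
    UNION ALL
    SELECT * FROM (SELECT TOP (@PageSize) 'c' AS src, Id, CreatedDatetime
                   FROM c ORDER BY CreatedDatetime DESC) tc
    UNION ALL
    SELECT * FROM (SELECT TOP (@PageSize) 'd' AS src, Id, CreatedDatetime
                   FROM d ORDER BY CreatedDatetime DESC) td
) u
ORDER BY CreatedDatetime DESC;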
This is where a cache comes in useful. You can either cache the result of the query in your application layer and do the paging there if it is not too large, or cache the results of the query in a table (or temp table) if it is large.
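A sketch of the temp-table flavour (T-SQL; same assumed tables as above): materialize the combined, row-numbered result once, then serve every page from the cached copy.
SELECT u.src, u.Id, u.CreatedDatetime,
       ROW_NUMBER() OVER (ORDER BY u.CreatedDatetime DESC) AS rn
INTO #activity_cache
FROM (
    SELECT 'a' AS src, Id, CreatedDatetime FROM a UNION ALL
    SELECT 'b', Id, CreatedDatetime FROM b UNION ALL
    SELECT 'c', Id, CreatedDatetime FROM c UNION ALL
    SELECT 'd', Id, CreatedDatetime FROM d
) u;

-- page 2 with a page size of 20
SELECT * FROM #activity_cache WHERE rn BETWEEN 21 AND 40;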
There would be filters, I suppose. From what you say, those may vary a lot, so in the worst-case scenario all columns can be filters.
My suggestion is to use 5 views: one for each table, and a final one that UNIONs them. Just make sure all filter columns map back to the physical tables as straightforwardly as possible.
Finally, select from the master view and fetch, but be careful with the ORDER BY clause. Make sure the ORDER BY yields a unique combination of values, or else you might have cases where a row changes pages on a simple refresh. If there is a user-defined ORDER BY, force-append some key columns at the end.
How to safely ensure the ORDER BY has distinct values for 100% safe FETCH/OFFSET:
In the 4 views, create a new column with a simple constant number as its value, e.g. 1, 2, 3, 4 AS [TableSource].
Make sure you select the PK of each table. If you don't have one, you have to create one in the views, probably using ROW_NUMBER or NEWID, as [Pk] for example.
Finally, when selecting from the master view, ORDER BY CreateDate, Pk, TableSource. This way you are 100% safe that within the same set of data any row will be placed at exactly the same position, resulting in correct paging.
Example of safely isolating a page of 30 rows ordered by CreateDate:
SELECT * FROM (
    SELECT src, id,
           ROW_NUMBER() OVER (ORDER BY dt DESC, src, id) AS rn
    FROM (
        SELECT 1 AS src, id, dt FROM table1 /*WHERE x=y*/ UNION ALL
        SELECT 2, id, dt FROM table2 /*WHERE x=y*/ UNION ALL
        SELECT 3, id, dt FROM table3 /*WHERE x=y*/ UNION ALL
        SELECT 4, id, dt FROM table4 /*WHERE x=y*/
    ) alltables
) data
WHERE data.rn BETWEEN 3001 AND 3030

Find row number in a sort based on row id, then find its neighbours

Say that I have some SELECT statement:
SELECT id, name FROM people
ORDER BY name ASC;
I have a few million rows in the people table and the ORDER BY clause can be much more complex than what I have shown here (possibly operating on a dozen columns).
I retrieve only a small subset of the rows (say rows 1..11) in order to display them in the UI. Now, I would like to solve the following problems:
Find the number of a row with a given id.
Display the 5 items before and the 5 items after a row with a given id.
Problem 2 is easy to solve once I have solved problem 1, as I can then use something like this if I know that the item I was looking for has row number 1000 in the sorted result set (this is the Firebird SQL dialect):
SELECT id, name FROM people
ORDER BY name ASC
ROWS 995 TO 1005;
I also know that I can find the rank of a row by counting all of the rows which come before the one I am looking for, but this can lead to very long WHERE clauses with tons of OR and AND in the condition, and I have to do this repeatedly. With my test data, this takes hundreds of milliseconds, even when using properly indexed columns, which is way too slow.
Is there some means of achieving this by using SQL:2003 features (such as ROW_NUMBER, supported in Firebird 3.0)? I am by no means an SQL guru and I need some pointers here. Could I create a cached view where the result would include a rank/dense rank/row index?
Firebird appears to support window functions (called analytic functions in Oracle). So you can do the following:
To find the "row" number of a a row with a given id:
select id, row_number() over (partition by NULL order by name, id)
from t
where id = <id>
This assumes the id's are unique.
To solve the second problem:
select t.*
from (
    select id, name,
           row_number() over (order by name, id) as rownum
    from t
) t
join (
    select id, rownum
    from (
        select id, row_number() over (order by name, id) as rownum
        from t
    ) ranked
    where id = <id>
) tid
on t.rownum between tid.rownum - 5 and tid.rownum + 5
I might suggest something else, though, if you can modify the table structure. Most databases offer the ability to add an auto-increment column that is populated when a row is inserted. If your records are never deleted, this can serve as your counter, simplifying your queries.
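A sketch of that idea (standard SQL that also runs on Firebird; people.seq is a hypothetical never-deleted auto-increment column whose values follow the display order):
-- neighbours of the row with a given id, using seq as a precomputed rank
select p.id, p.name
from people p
join people target on target.id = 1234  /* hypothetical id */
where p.seq between target.seq - 5 and target.seq + 5
order by p.seq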

Filtering Database Results to Top n Records for Each Value in a Lookup Column

Let's say I have two tables in my database.
TABLE:Categories
ID|CategoryName
01|CategoryA
02|CategoryB
03|CategoryC
and a table that references the Categories and also has a column storing some random number.
TABLE:CategoriesAndNumbers
CategoryType|Number
CategoryA|24
CategoryA|22
CategoryC|105
.....(20,000 records)
CategoryB|3
Now, how do I filter this data? I want to know what the 3 smallest numbers are in each category and delete the rest. The end result would be like this:
TABLE:CategoriesAndNumbers
CategoryType|Number
CategoryA|2
CategoryA|5
CategoryA|18
CategoryB|3
CategoryB|500
CategoryB|1601
CategoryC|1
CategoryC|4
CategoryC|62
Right now, I can get the smallest numbers across all the categories combined, but I would like each category to be evaluated individually.
EDIT: I'm using Access and here's my code so far
SELECT TOP 10 cdt1.sourceCounty, cdt1.destCounty, cdt1.distMiles
FROM countyDistanceTable as cdt1, countyTable
WHERE cdt1.sourceCounty = countyTable.countyID
ORDER BY cdt1.sourceCounty, cdt1.distMiles, cdt1.destCounty
EDIT2: Thanks to Remou, here is the working query that solved my problem. Thank you!
DELETE
FROM CategoriesAndNumbers a
WHERE a.Number NOT IN (
    SELECT TOP 3 [Number]
    FROM CategoriesAndNumbers b
    WHERE b.CategoryType = a.CategoryType
    ORDER BY [Number])
You could use something like:
SELECT a.CategoryType, a.Number
FROM CategoriesAndNumbers a
WHERE a.Number IN (
    SELECT TOP 3 [Number]
    FROM CategoriesAndNumbers b
    WHERE b.CategoryType = a.CategoryType
    ORDER BY [Number])
ORDER BY a.CategoryType
The difficulty with this is that Jet/ACE TOP includes duplicate values where they exist, so you will not necessarily get three values, but more, if there are ties. The problem can often be solved with a key field, if one exists:
WHERE a.Number IN (
    SELECT TOP 3 [Number]
    FROM CategoriesAndNumbers b
    WHERE b.CategoryType = a.CategoryType
    ORDER BY [Number], [KeyField])
However, I do not think it will help in this instance, because the outer table will include ties.
Order it by Number and take 3, find out what the biggest of those numbers is, and then remove rows where Number is greater than that value.
I imagine it would need to be two separate queries: your business tier would hold the value of the biggest number among the 3 results and dynamically build the query to delete the rest.

MySQL - How do I order the results randomly inside a column?

I need to retrieve rows from a table (i.e. 'orders') ordered by a column (let's say 'user') randomly. That is, I need all orders from the same user to stay together (one after the other) and the users to be ordered randomly.
I'm going to assume you have a second table called "users" that has all the users in it. If not, you could still do this by adding another SELECT DISTINCT subquery on orders, but that would be much messier:
SELECT orders.*
FROM orders
INNER JOIN (SELECT userid, RAND() as random FROM users) tmp
ON orders.userid = tmp.userid
ORDER BY tmp.random, tmp.userid
You'll want to order by the random number AND the user id so if two user ids get the same random number their orders won't be all jumbled together.
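For completeness, a sketch of the messier no-users-table variant mentioned above (MySQL; userid is an assumed column name), deriving the distinct user list from orders itself:
SELECT orders.*
FROM orders
INNER JOIN (
    -- one random number per distinct user, taken from orders itself
    SELECT userid, RAND() AS random
    FROM (SELECT DISTINCT userid FROM orders) u
) tmp ON orders.userid = tmp.userid
ORDER BY tmp.random, tmp.userid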
How random does it have to be? I can think of a few possible answers.
If the "random" sequence should be repeatable, you can sort by a hash of the user ID, using MD5 or a custom one you create yourself e.g. ORDER BY MD5(), secondary_sort_column.
order by reverse(user) ?