Selecting Random Rows in relation to Random Rows - SQL

So I've been trying for a few days now and looking all over but I can't seem to find a solution for this situation. Maybe I'm over-thinking it, so I guess this is two questions in one:
Am I even going the right direction?
Why isn't this working as expected?
Situation:
I have a website where users can sign up, creating a "User". After registration, they can then create multiple profiles or "Characters".
I need a way to select a particular number of random users (in this case 12) and then select one of their characters at random.
Current Data Structure:
Users Table - {UserID}
Characters Table - {CharacterID, DisplayName, UserID}
Problem:
Usually I do this kind of stuff via code, but I wanted to approach this in a purely SQL way, primarily because I didn't want to hit the database once for the random list of users and then 12 more times for one random character per user.
Eventually I came to the conclusion that this wasn't something that could be done with a single one-line query (if I'm wrong please correct me, I may just not be seeing the trees in the forest here). So I decided to select the 12 random users, loop through them and on each one select a random character for each user.
This seems to work and from what I can tell, it's not horrendous in terms of performance. However... I'm running into a small problem with the returned data. It only returns 12 rows sometimes. Other times it jumps down to 11 rows or 10 rows and I can't for the life of me figure out why it's doing this. Would anyone be able to shed light on this?
Code:
Declare @UserTable TABLE(UserID int)
Insert Into @UserTable Select Top 12 UserID From Users Where ((ABS(CAST( (BINARY_CHECKSUM(*) * RAND()) as int)) % 100) < 10)
Declare @OutputTable TABLE(CharacterID int, CharacterDisplayName nvarchar(MAX), UserID int)
Declare @CurrentUserID int
Select @CurrentUserID = min(UserID) From @UserTable
While @CurrentUserID is not null
Begin
Insert Into @OutputTable Select Top 1 CharacterID, CharacterDisplayName, UserID FROM CharactersForListing Where UserID = @CurrentUserID Order By NewID()
Select @CurrentUserID = min(UserID) from @UserTable Where UserID > @CurrentUserID
End
Select * From @OutputTable

How about something like this? The main query gets 12 random Users. The correlated cross apply gets 1 randomly selected Character for that User.
select top 12
u.UserID
, c.CharacterID
from Users u
cross apply
(
select top 1 CharacterID
from Characters ch
where ch.UserID = u.UserID
order by newid()
) c
order by NEWID()

Where ((ABS(CAST( (BINARY_CHECKSUM(*) * RAND()) as int)) % 100) < 10) gives you approximately 10% of the table. If your Users table is really small, TOP 12 over that 10% sample is not guaranteed to return 12 rows.
If your table is small enough, you can change your insert query to
Insert Into @UserTable Select Top 12 UserID From Users ORDER BY NEWID()
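Putting the two together, a sketch (mine, not from the answers above) that adapts the CROSS APPLY query to the question's CharactersForListing columns; since the outer TOP 12 runs after the CROSS APPLY, users without characters never take up one of the 12 slots:
select top 12
u.UserID
, c.CharacterID
, c.CharacterDisplayName
from Users u
cross apply
(
select top 1 CharacterID, CharacterDisplayName
from CharactersForListing ch
where ch.UserID = u.UserID
order by newid()
) c
order by newid()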

Related

SQL Server random using seed

I want to add a column to my table with a random number using a seed.
If I use RAND:
select *, RAND(5) as random_id from myTable
I get the same value (0.943597390424144, for example) for all the rows in the random_id column. I want this value to be different for every row - and for every time I pass the same seed value (0.5, for example), it should be the same values again (as a seed should work).
How can I do this?
(
For example, in PostgreSQL I can write
SELECT setseed(0.5);
SELECT t.* , random() as random_id
FROM myTable t
And I will get different values in each row.
)
Edit:
After I saw the comments here, I have managed to work this out somehow - but it's not efficient at all.
If someone has an idea how to improve it - it will be great. If not - I will have to find another way.
I used the basic idea of the example here.
Creating a temporary table with a blank seed value:
select * into t_myTable from (
select t.*, -1.00000000000000000 as seed
from myTable t
) as temp
Adding a random number for each seed value, one row at a time (this is the bad part...):
USE CPatterns;
GO
DECLARE @seed float;
DECLARE @id int;
DECLARE VIEW_CURSOR CURSOR FOR
select id
from t_myTable t;
OPEN VIEW_CURSOR;
FETCH NEXT FROM VIEW_CURSOR
into @id;
set @seed = RAND(5);
WHILE @@FETCH_STATUS = 0
BEGIN
set @seed = RAND();
update t_myTable set seed = @seed where id = @id
FETCH NEXT FROM VIEW_CURSOR
into @id;
END;
CLOSE VIEW_CURSOR;
DEALLOCATE VIEW_CURSOR;
GO
Creating the view using the seed value and ordering by it
create view my_view AS
select row_number() OVER (ORDER BY seed, id) AS source_id ,t.*
from t_myTable t
I think the simplest way to get a repeatable random id in a table is to use row_number() or a fixed id on each row. Let me assume that you have a column called id with a different value on each row.
The idea is just to use this as a seed:
select rand(id*1) as random_id
from mytable;
Note that the seed for the id is an integer and not a floating point number. If you wanted a floating point seed, you could do something with checksum():
select rand(checksum(id*0.5)) as random_id
. . .
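For instance, a minimal sketch of the repeatable behavior (my illustration; it assumes an integer id column, and the 5 is an arbitrary constant you can vary to get a different fixed sequence):
select id, rand(checksum(id, 5)) as random_id
from mytable;
Running this twice returns identical random_id values per row; swapping the 5 for another constant produces a different, equally repeatable set.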
If you are doing this for sampling (where you will say random_id < 0.1 for a 10% sample, for instance), then I often use modulo arithmetic on row_number():
with t as (
select t.*, row_number() over (order by id) as seqnum
from mytable t
)
select *
from t
where ((seqnum * 17 + 71) % 101) < 10
This returns about 10% of the numbers (okay, really 10/101, since the modulo yields integers from 0 to 100). And you can adjust the sample by fiddling with the constants.
Someone suggested a similar query using newid(), but I'm giving you the solution that works for me.
There's a workaround that involves newid() instead of rand. You can execute it on its own or as a column in a select statement. It will result in a random value per row rather than the same value for every row.
If you need a random number from 1 to N, just change 100 to the desired number.
SELECT TOP 10 [Flag forca]
,1+ABS(CHECKSUM(NEWID())) % 100 AS RANDOM_NEWID
,RAND() AS RANDOM_RAND
FROM PAGSEGURO_WORK.dbo.jobSTM248_tmp_leitores_iso
So, in case it helps someone someday, here's what I eventually did.
I'm generating the random seeded values on the server side (Java in my case), and then creating a table with two columns: the id and the generated random_id.
Now I create the view as an inner join between the table and the original data.
The generated SQL looks something like that:
CREATE TABLE SEED_DATA(source_id INT PRIMARY KEY, random_id float NOT NULL);
select Rand(5);
insert into SEED_DATA values(1,Rand());
insert into SEED_DATA values(2, Rand());
insert into SEED_DATA values(3, Rand());
.
.
.
insert into SEED_DATA values(1000000, Rand());
and
CREATE VIEW DATA_VIEW
as
SELECT row_number() OVER (ORDER BY random_id, id) AS source_id,column1,column2,...
FROM
( select * from SEED_DATA tmp
inner join my_table i on tmp.source_id = i.id) TEMP
In addition, I create the random numbers in batches of 10,000 or so each (it may be higher), so they will not weigh heavily on the server side, and I insert each batch into the table in a separate execution.
All of that because I couldn't find a good way to do what I want purely in SQL. Updating row after row is really not efficient.
My own conclusion from this story is that SQL Server is sometimes really annoying...
You could convert a random number from the seed:
rand(row_number() over (order by ___, ___, ___))
Then cast that as a varchar, then use the last 3 characters as another seed.
That would give you a nice random value:
rand(right(cast(rand(row_number() over (order by x, y, z)) as varchar(15)), 3))

Check whether an array's values are a subset of a query?

I've a set of rows
SELECT id from Users WHERE...
1
2
6
8
9
and I have an array with the values 2,3,6.
How can I check in SQL that the array is a subset of the result of the query?
SQL doesn't as such support arrays so I'm not entirely sure how you're storing your array of numbers, and that will affect the best way to answer this question.
That said, I'd do this:
SELECT u.id
FROM Users U
RIGHT JOIN Numbers N
ON U.id=N.Number
WHERE N.Number IN (2,3,6)
That's the basic query; exact details from there depend on what you'd be doing to detect the failure. Any records where u.ID IS NULL indicate it isn't a subset. If you don't actually immediately want the set of IDs, you could modify it to
SELECT COUNT(*) AS Missing
FROM Users U
RIGHT JOIN Numbers N
ON U.id=N.Number
WHERE N.Number IN (2,3,6)
AND u.id IS NULL
and, whenever Missing was > 0, you'd know you didn't have a subset. (In SQL Server at least, you can then cast the int to a bit to get 0=false, !0=true if that's easier for your app to work with.)
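For instance, a minimal sketch of that bit cast (my illustration, building on the query above):
SELECT CAST(COUNT(*) AS bit) AS IsNotSubset
FROM Users U
RIGHT JOIN Numbers N
ON U.id=N.Number
WHERE N.Number IN (2,3,6)
AND U.id IS NULL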
Other details we can add with more info about what you're actually trying to do, but hopefully that makes sense as a basic technique.
(N.B. this all assumes that you've got a numbers / tally table in your database. They're incredibly useful so, if you haven't already, I'd get one set up.)
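If you don't have a Numbers table yet, here is a minimal sketch for creating one (my illustration; it fills 1 to 10,000 by cross joining a system view, and the exact source rows don't matter):
CREATE TABLE Numbers (Number int NOT NULL PRIMARY KEY);
INSERT INTO Numbers (Number)
SELECT TOP 10000 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_objects a CROSS JOIN sys.all_objects b;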
You have to check each record/item individually, then count them.
If the JOIN is the same size as the array, the array is a sub-set of the table.
Here is an example that assumes your array is in a table...
SELECT
COUNT(*)
FROM
Users
INNER JOIN
search
ON search.id = Users.id
HAVING
COUNT(*) = (SELECT COUNT(*) FROM search)
Use Dynamic SQL:
declare @cmd varchar(200)
select @cmd = 'select id from Users WHERE id in (' + @array + ')'
exec(@cmd)
If you can populate a one column table with the values that you need to test against then you could do this.
Select count(*)
From
(
Select id
From users
Intersect
Select id
From testValues
) test
If the count is equal to the number of values you're testing against then the array forms a subset.
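A usage sketch of that comparison (my illustration, reusing the names above):
Declare @valueCount int
Select @valueCount = Count(*) From testValues
Select Case When Count(*) = @valueCount Then 1 Else 0 End As IsSubset
From
(
Select id
From users
Intersect
Select id
From testValues
) test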

Random select is not always returning a single row

The intention of the following (simplified) code fragment is to return one random row.
Unfortunately, when we run this fragment in the query analyzer, it returns between zero and three results.
As our input table consists of exactly 5 rows with unique IDs, and as we perform a select on this table where ID equals a random number, we are stumped that there would ever be more than one row returned.
Note: among other things, we already tried casting the checksum result to an integer, to no avail.
DECLARE @Table TABLE (
ID INTEGER IDENTITY (1, 1)
, FK1 INTEGER
)
INSERT INTO @Table
SELECT 1
UNION ALL SELECT 2
UNION ALL SELECT 3
UNION ALL SELECT 4
UNION ALL SELECT 5
SELECT *
FROM @Table
WHERE ID = ABS(CHECKSUM(NEWID())) % 5 + 1
Edit
Our usage scenario is as follows (please don't comment on whether it is the right thing to do or not; it's the powers that be that have decided):
Ultimately, we must create a result with realistic values where the combination of producer and weights is obfuscated by selecting, at random, existing weights from the table itself.
The query then would become something like this (also a reason why RAND cannot be used):
SELECT t.ID
, FK1 = (SELECT FK1 FROM @Table WHERE ID=ABS(CHECKSUM(NEWID())) % 5 + 1)
FROM @Table t
Because the inner select can return zero rows, it would then return a NULL value, which again is not acceptable. It is the investigation of why the inner select returns between zero and x results that sprouted this question (is this even English?).
Answer
What turned the light on for me was the simple observation that ABS(CHECKSUM(NEWID())) % 5 + 1 was re-evaluated for each row. I was under the impression that it would get evaluated once, then matched.
Thank you all for answering and slowly but surely leading me to a better understanding.
The reason this happens is that NEWID() gives a different value for each row in the table. For each row, independently of the others, there is a one in five chance of it being returned. Consequently, as it stands, you actually have a 1 in 3125 chance of all 5 rows being returned!
To see this, run the following query. You'll see that each row gets a different GUID.
SELECT *, NEWID()
FROM @Table
This will fix your code:
DECLARE @Id int
SET @Id = ABS(CHECKSUM(NEWID())) % 5 + 1
SELECT *
FROM @Table
WHERE ID = @Id
However, I'm not sure this is the most efficient method of selecting a single random row from the table.
You might find this MSDN article useful: http://msdn.microsoft.com/en-us/library/Aa175776 (Random Sampling in T-SQL)
EDIT 1: now that I think about it, this probably is the most efficient way to do it, assuming the number of rows remains fixed and the IDs are guaranteed to be contiguous.
EDIT 2: to achieve the desired result when used as a sub-query, use TOP 1 like this:
SELECT t.ID
, FK1 = (SELECT TOP 1 FK1 FROM @Table ORDER BY NEWID())
FROM @Table t
A bit of a guess, and not sure that SQL works this way, but wouldn't SQL evaluate "ABS(CHECKSUM(NEWID())) % 5 + 1" for each row in the table? If so, then each evaluation may or may not return the value of ID of the current row.
Try this instead - generating the random number explicitly first, and matching on that single value:
declare @targetRandom int
set @targetRandom = ABS(CHECKSUM(NEWID())) % 5 + 1
select * from @Table where ID = @targetRandom
Try the following, so you can see what happens (the alias goes in a derived table, since the WHERE clause can't reference it directly):
SELECT *
FROM (SELECT ABS(CHECKSUM(NEWID())) % 5 + 1 AS Number, t.* FROM @Table t) x
WHERE ID = Number
Or you could use RAND() instead of NEWID(), which is only evaluated once per query in MS SQL
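For instance, a sketch of that RAND() variant (my illustration): because RAND() is evaluated only once, every row is compared against the same value and exactly one row matches.
SELECT *
FROM @Table
WHERE ID = CAST(RAND() * 5 AS int) + 1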
If you want to use CHECKSUM to obtain a random row, this is the way to do it.
SELECT TOP 1 *
FROM @Table
ORDER BY CHECKSUM(NEWID())
What about this?
SELECT t.ID
, FK1 = (SELECT TOP 1 FK1 FROM @Table ORDER BY NEWID())
FROM @Table t
This may help you understand the reasons.
Run the query multiple times. How many times does MY_FILTER = ID ?
SELECT *, ABS(CHECKSUM(NEWID())) % 5 + 1 AS MY_FILTER
FROM @Table
SELECT *, ABS(CHECKSUM(NEWID())) % 5 + 1 AS MY_FILTER
FROM @Table
SELECT *, ABS(CHECKSUM(NEWID())) % 5 + 1 AS MY_FILTER
FROM @Table
I don't know how much this will help you, but try this. All I understood is that you want one random row each time you execute the query.
select top 1 newid() as row, ID from @Table order by row
Here is the logic: each time you execute the query, a newid is assigned to each row, all of them unique, and you just order by that freshly generated row id. Then all you need to do is select the topmost one, or however many you want.

SQL Server 2005 Full Text forum Search

I'm working on a search stored procedure for our existing forums.
I've written the following code, which uses standard SQL full text indexes; however, I'm sure there is a better way of doing it and would like a pointer in the right direction.
To give some info on how it needs to work: the page has one search text box which, when submitted, will search thread titles, thread descriptions and post text, and should return the results with the title matches first, then descriptions, then post data.
Below is what I've written so far, which works but is not elegant or as fast as I would like. To give an example of performance: with 20K threads and 80K posts it takes about 12 seconds to search for 5 random words.
ALTER PROCEDURE [dbo].[SearchForums]
(
--Input Params
@SearchText VARCHAR(200),
@GroupId INT = -1,
@ClientId INT,
--Paging Params
@CurrentPage INT,
@PageSize INT,
@OutTotalRecCount INT OUTPUT
)
AS
--Create Temp Table to Store Query Data
CREATE TABLE #SearchResults
(
Relevance INT IDENTITY,
ThreadID INT,
PostID INT,
[Description] VARCHAR(2000),
Author BIGINT
)
--Create and populate a table of all GroupIDs this search will return from
CREATE TABLE #GroupsToSearch
(
GroupId INT
)
IF @GroupId = -1
BEGIN
INSERT INTO #GroupsToSearch
SELECT GroupID FROM SNetwork_Groups WHERE ClientId = @ClientId
END
ELSE
BEGIN
INSERT INTO #GroupsToSearch
VALUES(@GroupId)
END
--Get Thread Titles
INSERT INTO #SearchResults
SELECT
SNetwork_Threads.[ThreadId],
(SELECT NULL) AS PostId,
SNetwork_Threads.[Description],
SNetwork_Threads.[OwnerUserId]
FROM
SNetwork_Threads
INNER JOIN SNetwork_Groups ON SNetwork_Groups.GroupId = SNetwork_Threads.GroupId
WHERE
FREETEXT(SNetwork_Threads.[Name], @SearchText) AND
Snetwork_Threads.GroupID IN (SELECT GroupID FROM #GroupsToSearch) AND
SNetwork_Groups.ClientId = @ClientId
--Get Thread Descriptions
INSERT INTO #SearchResults
SELECT
SNetwork_Threads.[ThreadId],
(SELECT NULL) AS PostId,
SNetwork_Threads.[Description],
SNetwork_Threads.[OwnerUserId]
FROM
SNetwork_Threads
INNER JOIN SNetwork_Groups ON SNetwork_Groups.GroupId = SNetwork_Threads.GroupId
WHERE
FREETEXT(SNetwork_Threads.[Description], @SearchText) AND
Snetwork_Threads.GroupID IN (SELECT GroupID FROM #GroupsToSearch) AND
SNetwork_Groups.ClientId = @ClientId
--Get Posts
INSERT INTO #SearchResults
SELECT
SNetwork_Threads.ThreadId,
SNetwork_Posts.PostId,
SNetwork_Posts.PostText,
SNetwork_Posts.[OwnerUserId]
FROM
SNetwork_Posts
INNER JOIN SNetwork_Threads ON SNetwork_Threads.ThreadId = SNetwork_Posts.ThreadId
INNER JOIN SNetwork_Groups ON SNetwork_Groups.GroupId = SNetwork_Threads.GroupId
WHERE
FREETEXT(SNetwork_Posts.PostText, @SearchText) AND
Snetwork_Threads.GroupID IN (SELECT GroupID FROM #GroupsToSearch) AND
SNetwork_Groups.ClientId = @ClientId
--Return Paged Result Sets
SELECT @OutTotalRecCount = COUNT(*) FROM #SearchResults
SELECT
#SearchResults.[ThreadID],
#SearchResults.[PostID],
#SearchResults.[Description],
#SearchResults.[Author]
FROM
#SearchResults
WHERE
#SearchResults.[Relevance] >= (@CurrentPage - 1) * @PageSize + 1 AND
#SearchResults.[Relevance] <= @CurrentPage * @PageSize
ORDER BY Relevance ASC
--Clean Up
DROP TABLE #SearchResults
DROP TABLE #GroupsToSearch
I know it's a bit long-winded, but just a nudge in the right direction would be well appreciated.
In case it helps: 80% of the query time is taken up when searching posts and, according to the query plan, is spent on a "Clustered Index Scan" on the posts table. I can't see any way around this.
Thanks
Gavin
I'd really have to see an explain plan to know where the slow parts were, as I don't see anything particularly nasty in your code. Very first thing - make sure all your indexes are in good shape, they are being used, statistics are up to date, etc.
One other idea would be to do the search on thread title first, then use the results from that to prune the searches on thread description and post text. Similarly, use the results from the thread description search to prune the post text search.
The basic idea here is that if you find the keywords in the thread title, why bother searching the description and posts? I realize this may not work depending on how you are presenting the search results to the user, and it may not make a huge difference, but it's something to think about.
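A rough sketch of that pruning applied to the question's posts query (my adaptation; only the WHERE clause gains a NOT EXISTS against the rows already collected):
INSERT INTO #SearchResults
SELECT
SNetwork_Threads.ThreadId,
SNetwork_Posts.PostId,
SNetwork_Posts.PostText,
SNetwork_Posts.[OwnerUserId]
FROM
SNetwork_Posts
INNER JOIN SNetwork_Threads ON SNetwork_Threads.ThreadId = SNetwork_Posts.ThreadId
INNER JOIN SNetwork_Groups ON SNetwork_Groups.GroupId = SNetwork_Threads.GroupId
WHERE
FREETEXT(SNetwork_Posts.PostText, @SearchText) AND
Snetwork_Threads.GroupID IN (SELECT GroupID FROM #GroupsToSearch) AND
SNetwork_Groups.ClientId = @ClientId AND
NOT EXISTS (SELECT 1 FROM #SearchResults r WHERE r.ThreadID = SNetwork_Threads.ThreadId)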
80k records isn't that much. I'd recommend not inserting the resulting data into your temp table, and instead only inserting the IDs, then joining to that table afterward. This will save on writing to the temp table, as you may store 10,000 ints instead of 10,000 full posts (of which you discard all but one page). This may reduce the amount of time spent scanning posts, as well.
It looks like you would need two temp tables, one for threads and one for posts. You would union them in the final select.
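A keys-only sketch for the posts pass (my illustration; #PostHits is a hypothetical keys-only temp table, and the full text is fetched only for the one page returned):
CREATE TABLE #PostHits (Relevance INT IDENTITY, ThreadID INT, PostID INT)
INSERT INTO #PostHits (ThreadID, PostID)
SELECT SNetwork_Threads.ThreadId, SNetwork_Posts.PostId
FROM SNetwork_Posts
INNER JOIN SNetwork_Threads ON SNetwork_Threads.ThreadId = SNetwork_Posts.ThreadId
WHERE FREETEXT(SNetwork_Posts.PostText, @SearchText)
--join back for just the requested page of posts
SELECT h.ThreadID, h.PostID, p.PostText, p.[OwnerUserId]
FROM #PostHits h
INNER JOIN SNetwork_Posts p ON p.PostId = h.PostID
WHERE h.Relevance >= (@CurrentPage - 1) * @PageSize + 1
AND h.Relevance <= @CurrentPage * @PageSize
ORDER BY h.Relevance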

Paging in Pervasive SQL

How do you do paging in Pervasive SQL (version 9.1)? I need to do something similar to:
//MySQL
SELECT foo FROM table LIMIT 10, 10
But I can't find a way to define offset.
Tested query in PSQL:
select top n *
from tablename
where id not in(
select top k id
from tablename
)
where n = the number of records you need to fetch at a time,
and k = multiples of n (e.g. n=5; k=0,5,10,15,...).
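For instance, to fetch the third page with a page size of 5 (n = 5, k = 10); note that both the inner and outer query need the same ORDER BY for the pages to be stable:
select top 5 *
from tablename
where id not in
(
select top 10 id
from tablename
order by id
)
order by id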
Our paging required that we be able to pass in the current page number and page size (along with some additional filter parameters) as variables. Since a SELECT TOP @page_size doesn't work in MS SQL, we came up with creating a temporary or variable table to assign each row's primary key an identity that can later be filtered on for the desired page number and size.
** Note that if you have a GUID primary key or a compound key, you just have to change the objectid column on the temporary table to a uniqueidentifier or add the additional key columns to the table.
The downside to this is that it still has to insert all of the results into the temporary table, but at least it is only the keys. This works in MS SQL, but should be able to work for any DB with minimal tweaks.
declare @page_number int, @page_size int
-- add any additional search parameters here

--create the temporary table with the identity column and the id
--of the record that you'll be selecting. This is an in-memory
--table, so if the number of rows you'll be inserting is greater
--than 10,000, then you should use a temporary table in tempdb
--instead. To do this, use
--CREATE TABLE #temp_table (row_num int IDENTITY(1,1), objectid int)
--and change all the references to @temp_table to #temp_table
DECLARE @temp_table TABLE (row_num int IDENTITY(1,1), objectid int)

--insert into the temporary table the ids of the records
--we want to return. It's critical to make sure the order by
--reflects the order of the records to return so that the row_num
--values are set in the correct order and we are selecting the
--correct records based on the page
INSERT INTO @temp_table (objectid)
/* Example: Select that inserts
records into the temporary table
SELECT personid FROM person WITH (NOLOCK)
inner join degree WITH (NOLOCK) on degree.personid = person.personid
WHERE person.lastname = @last_name
ORDER BY person.lastname asc, person.firstname asc
*/

--get the total number of rows that we matched
DECLARE @total_rows int
SET @total_rows = @@ROWCOUNT

--calculate the total number of pages based on the number of
--rows that matched and the page size passed in as a parameter
DECLARE @total_pages int

--add @page_size - 1 to the total number of rows to
--calculate the total number of pages. This is because sql
--always rounds down for division of integers
SET @total_pages = (@total_rows + @page_size - 1) / @page_size

--return the result set we are interested in by joining
--back to the @temp_table and filtering by row_num
/* Example: Selecting the data to return. If the
insert was done properly, then you should always be joining the table
that contains the rows to return to the objectid column on the
@temp_table
SELECT person.* FROM person WITH (NOLOCK)
INNER JOIN @temp_table tt ON person.personid = tt.objectid
*/

--return only the rows in the page that we are interested in
--and order by the row_num column of the @temp_table to make sure
--we are selecting the correct records
WHERE tt.row_num < (@page_size * @page_number) + 1
AND tt.row_num > (@page_size * @page_number) - @page_size
ORDER BY tt.row_num
I face this problem in MS SQL too... no LIMIT or row-number functions. What I do is insert the keys for my final query result (or sometimes the entire list of fields) into a temp table with an identity column... then I delete from the temp table everything outside the range I want... then use a join against the keys and the original table to bring back the items I want. This works if you have a nice unique key - if you don't, well... that's a design problem in itself.
An alternative with slightly better performance is to skip the deleting step and just use the row numbers in your final join. Another performance improvement is to use the TOP operator so that, at the very least, you don't have to grab the stuff past the end of what you want.
So... in pseudo-code... to grab items 80-89...
create table #keys (rownum int identity(1,1), key varchar(10))
insert #keys (key)
select TOP 89 key from myTable ORDER BY whatever
delete #keys where rownum < 80
select <columns> from #keys join myTable on #keys.key = myTable.key
I ended up doing the paging in code. I just skip the first records in a loop.
I thought I had made up an easy way of doing the paging, but it seems that Pervasive SQL doesn't allow order clauses in subqueries. But this should work on other DBs (I tested it on Firebird).
select *
from (select top [rows] * from
(select top [rows * pagenumber] * from mytable order by id)
order by id desc)
order by id
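A worked instance of the pattern (my illustration; it assumes mytable has a unique id, and adds derived-table aliases for engines that require them): fetching the third page of 10 rows, i.e. rows = 10 and rows * pagenumber = 30. The inner query keeps the first 30 rows, the middle one keeps the last 10 of those, and the outer one restores ascending order:
select *
from (
select top 10 *
from (
select top 30 * from mytable order by id
) p1
order by id desc
) p2
order by id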