Pulling items out of a DB with weighted chance - sql

Let's say I had a table full of records that I wanted to pull random records from. However, I want certain rows in that table to appear more often than others (and which ones vary by user). What's the best way to go about this, using SQL?
The only way I can think of is to create a temporary table, fill it with the rows I want to be more common, and then pad it with other randomly selected rows from the table. Is there a better way?

One way I can think of is to create another column in the table which is a rolling sum of your weights, then pull your records by generating a random number between 0 and the total of all your weights, and pulling the row with the lowest rolling-sum value greater than the random number.
For example, if you had four rows with the following weights:
+-----+--------+------------+
| row | weight | rollingsum |
+-----+--------+------------+
| a   |      3 |          3 |
| b   |      3 |          6 |
| c   |      4 |         10 |
| d   |      1 |         11 |
+-----+--------+------------+
Then, choose a random number n with 0 <= n < 11, and return row a if 0 <= n < 3, row b if 3 <= n < 6, and so on.
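In MySQL, for example, the lookup might be sketched like this (assuming a hypothetical weighted_rows table shaped like the example above; the random number is drawn in a derived table because a bare RAND() in the WHERE clause would be re-evaluated for every row):
SELECT t.*
FROM weighted_rows t
JOIN (SELECT FLOOR(RAND() * 11) AS n) r -- 11 = total of all weights
  ON t.rollingsum > r.n
ORDER BY t.rollingsum
LIMIT 1;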
Here are some links on generating rolling sums:
http://dev.mysql.com/tech-resources/articles/rolling_sums_in_mysql.html
http://dev.mysql.com/tech-resources/articles/rolling_sums_in_mysql_followup.html

I don't know that it can be done very easily with SQL alone. With T-SQL or similar, you could write a loop to duplicate rows, or you could use SQL to generate the instructions for doing the row duplication instead.
I don't know your probability model, but you could use an approach like this to achieve the latter. Given these table definitions:
RowSource
---------
RowID

UserRowProbability
------------------
UserId
RowId
FrequencyMultiplier
You could write a query like this (SQL Server specific):
SELECT TOP 100 rs.RowId, ISNULL(urp.FrequencyMultiplier, 1) AS FrequencyMultiplier
FROM RowSource rs
LEFT JOIN UserRowProbability urp
    ON rs.RowId = urp.RowId
   AND urp.UserId = @UserId -- @UserId: the user whose preferences apply
ORDER BY ISNULL(urp.FrequencyMultiplier, 1) DESC, NEWID()
This would take care of selecting a random set of rows as well as how many should be repeated. Then, in your application logic, you could do the row duplication and shuffle the results.
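If you'd rather keep the duplication in SQL as well, one option is a tally-table join. This is only a sketch (SQL Server syntax) and assumes an auxiliary Numbers table holding the integers 1..N already exists:
SELECT x.RowId
FROM (
    SELECT TOP 100 rs.RowId,
           ISNULL(urp.FrequencyMultiplier, 1) AS Freq
    FROM RowSource rs
    LEFT JOIN UserRowProbability urp
        ON rs.RowId = urp.RowId
       AND urp.UserId = @UserId
    ORDER BY ISNULL(urp.FrequencyMultiplier, 1) DESC, NEWID()
) x
JOIN Numbers n ON n.Number <= x.Freq -- repeats each row Freq times
ORDER BY NEWID();                    -- shuffles the expanded set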

Start with three tables: users, data, and user_data, where user_data records which rows should be preferred for each user.
Then create one view based on the data rows that are preferred by the user.
Create a second view that has the non-preferred data.
Create a third view which is a union of the first two. The union should select more rows from the preferred data.
Then finally select random rows from the third view.
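A sketch of the idea (MySQL syntax; the table and column names are hypothetical, and the user id is hardcoded as 42 for illustration, since views cannot take parameters):
CREATE VIEW preferred_data AS
SELECT d.*
FROM data d
JOIN user_data ud ON ud.data_id = d.id
WHERE ud.user_id = 42;

CREATE VIEW other_data AS
SELECT d.*
FROM data d
WHERE d.id NOT IN (SELECT data_id FROM user_data WHERE user_id = 42);

CREATE VIEW weighted_data AS
SELECT * FROM preferred_data
UNION ALL
SELECT * FROM preferred_data -- listed twice to double its weight
UNION ALL
SELECT * FROM other_data;

SELECT * FROM weighted_data ORDER BY RAND() LIMIT 10;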

Changes to table not designed for SQL

I am supposed to make some changes to an enormous CSV file based on a different file. Therefore I chose to do it in SQL, but after further consideration I am not sure how to proceed.
In the 1st table I have a list of contracts. Columns represent some segments the contract belongs to and some products that can be linked to the contract (example in the table below).
Here contract no. 1234 belongs to segments X1 and Y2. There is no product number 1 linked to it, but it has product number 2 linked to it. The product originally ends on the 1st of January 2030.
cont_n | date | segment_1 | segment_2 | .. | prod_1 | date_prod_1 | product_2 | date_product_2 | ..
1234   | 3011 | X1        | Y2        | .. |        |             | YES       | 01/01/2030     | ..
The 2nd file is a list of combinations of segments and an indication of how the "date" columns should be adjusted. The example shows the following situation: if there is a prod_2 linked to a contract which belongs to groups X1 and Y2, end prod_2 this year. I need this result in order to alter table no. 1.
prod_no | segment_1 | segment_2 | result
prod_2  | X1        | Y2        | end the product on anniversary
Ergo I need to get to the result:
cont_n | date | segment_1 | segment_2 | .. | prod_1 | date_prod_1 | product_2 | date_product_2 | ..
1234   | 3011 | X1        | Y2        | .. |        |             | YES       | 30/11/2019     | ..
In the original files I have around 600k rows and more than 300 columns (meaning around 100 different products) in table 1 and around 800 possible combinations of segments in table 2.
The algorithm I need to implement (very generally):
for x = 1 to 100
    IF product_x = YES THEN date_product_x = date + "Search for result in table2"
Is there a reasonable way to change the "date_product_x" columns based on the 2nd table, or would it be better to find a different solution?
Thanks a lot!
I can only give you a general approach, because the information in your question is general (for example, why does "end the product on anniversary" translate to "30/11/2019"? It's not explained in the question, so I assume you're going to be able to handle that part of the logic).
You can approach this by using an UNPIVOT on Table 1 to get a structure like:
cont_n | segment1 | segment2 | product_number | product_date
You will UNPIVOT..FOR date_product_1 through date_product_100. You'll either have to type out all 100 column names, or use dynamic SQL to build the whole thing.
You'll do some string manipulation to grab the "x" portion of "date_product_x" and turn it into "prod_x". Then you can join to the second table on the two segment columns and the "prod_x" column, get the result column value, and apply whatever rules you're using to derive the value you want for date_product_x.
Finally, you take that result, and PIVOT it back to the one-row-per-contract form, and JOIN it to your original table to UPDATE the date_product_x columns.
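A minimal sketch of the unpivot step (SQL Server syntax; only two of the 100 date columns are listed for brevity, and the names follow the patterns used in the question):
SELECT u.cont_n,
       u.segment_1,
       u.segment_2,
       'prod_' + REPLACE(u.col_name, 'date_product_', '') AS prod_no,
       u.product_date
FROM Table1
UNPIVOT (
    product_date FOR col_name IN (date_product_1, date_product_2)
) AS u;
From there you can join prod_no and the two segment columns to the second table to pick up the result value.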

Why does Postgres return unordered data from a select query after a row is updated?

I am a bit confused about the default ordering of the rows returned by Postgres.
postgres=# select * from check_user;
id | name
----+------
1 | x
2 | y
3 | z
4 | a
5 | c1\
6 | c2
7 | c3
(7 rows)
postgres=# update check_user set name = 'c1' where name = 'c1\';
UPDATE 1
postgres=# select * from check_user;
id | name
----+------
1 | x
2 | y
3 | z
4 | a
6 | c2
7 | c3
5 | c1
(7 rows)
Before the update, it was returning rows ordered by id, but after the update the order changed. So my question is: if ORDER BY is not specified, what default ordering does Postgres use?
Thanks in advance.
Put very simply, the "default order" is whatever it happens to read from the disk. Updating a row will not change the row in place; usually it marks the old row as deleted and writes a new one.
When Postgres reads rows from pages of memory, it will (probably) read them in the order they are stored on the page. It will read pages in whatever order it thinks is quickest (which may or may not be how they appear on disk), and that can change based on whether or not it decides to use an index. So the order can suddenly change without your app asking for anything different.
If you don't specify an order by it will not take any action to re-order them.
NEVER rely on the default order. It is undefined behaviour.
SQL tables represent unordered sets.
SQL results sets are unordered unless you explicitly include an order by.
Your select has no order by. Hence, the rows can come back in any order. Even running the same query twice can produce different orders.
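If the order matters, request it explicitly, for example:
SELECT * FROM check_user ORDER BY id;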

Access query, if two values exist in one column, omit one

I have a series of queries that generate reports that contain chemical data. There are two compounds A and B where A is the total amount and B is a speciated amount (like total iron and ferrous iron, for example).
There are about one hundred total compounds in the query result, and I need a criterion to filter the results such that if both Compounds A and B are present, only Compound B is displayed. So far I've tried adding a few IIf statements to the criteria section in the query builder, with no luck.
Here is what I have so far:
SELECT Table1.KEY_ANLT
FROM Table1
WHERE (((Table1.KEY_ANLT)=IIf([Table1].[KEY_ANLT]=1223 And [Table1].[KEY_ANLT]=70,70,1223)));
This filters out Compound A but does not include the rest of the compounds. How can I modify the query to also include the other compounds?
So, to clarify some of the comments above, the problem here is you don't have (or haven't specified above) a way to identify values that go together. You gave 70 and 1223 as an example, but if you gave us a list of all the numbers, how would we be able to identify which ones go together? You might say "chemistry expertise", but that's based on another column with the compounds' names, right? So really, your query should use that column. But then there's still the problem of how to connect associated names (e.g., "total iron" and "ferrous iron" might be connected because they both have the word "iron", but what about "permanganate" and "manganese"?).

In short, you need another column to specify the thing in common between these separate rows, whether it's element, ion, charge, etc. You would also need a column identifying which row in each "group" you would want to include in your query (or which ones to exclude). For example:
+----------+-----------------+---------+---------+
| KEY_ANLT | Compound        | Element | Primary |
+----------+-----------------+---------+---------+
| 70       | total iron      | Fe      | Y       |
| 1223     | ferrous iron    | Fe      |         |
| 1224     | ferric iron     | Fe      |         |
| 900      | total manganese | Mn      | Y       |
| 901      | permanganate    | Mn      |         |
+----------+-----------------+---------+---------+
Then, to get a query that shows just the "primary" rows, it's pretty trivial:
SELECT * FROM Table1 WHERE [Primary]='Y';
Without that [Primary] column, you'd have to decide how to choose each row. Perhaps you'd want the one with the smallest KEY_ANLT?
SELECT Table1.*
FROM
(SELECT Element, min(KEY_ANLT) AS MinKey FROM Table1 GROUP BY Element) AS Subquery
INNER JOIN Table1 ON
Subquery.Element=Table1.Element AND
Subquery.MinKey=Table1.KEY_ANLT
The reason your query doesn't work is that the WHERE clause operates row-by-row, and doesn't compare different rows to one another. So in your SQL:
IIf([Table1].[KEY_ANLT]=1223 And [Table1].[KEY_ANLT]=70,70,1223)
NONE of the rows will evaluate this as 70, because no single row has KEY_ANLT=1223 AND KEY_ANLT=70. Each row only has one value for KEY_ANLT. So then that IIF expression evaluates as 1223 for every row, and your condition will only return rows where KEY_ANLT=1223 (compound B).

Spitting long column values to managable size for presenting data neatly

Hi, I was wondering if there is a way to split long column values. In this case I am using SSRS to get the distinct values, with the number of product IDs against each category, into a matrix/pivot table. The problem is that the number of distinct categories makes it a nightmare to make the report look pretty, shall we say. Is there a dynamic way to split the columns into, say, groups of 10 to make the table look nicer and easier to read? I was thinking of using the IN operator with a list of values, but that means managing the data every time a new category gets added. Is there a dynamic way to present the data in the best way possible? There are 135 distinct category values.
Also, I am open to suggestions on how to make the report nicer if anyone has any thoughts. I am new to SSRS and trying to get to grips with it.
Here is an example of my problem
Are your column names coming back from the database under the SubCat field you note in the comments above? If so, I imagine your dataset looks something like this:
Subcat  | Logno
--------+------
SubCatA | 34
SubCatB | 65
SubCatC | 120
SubCatD | 8
SubCatE | 19
You can edit this so that an index for each individual category is returned as well, using the ROW_NUMBER() function. Add the field
ROW_NUMBER() OVER (ORDER BY SubCat ASC) AS ColID
to your query. This will result in the following:
Subcat  | LogNo | ColID
--------+-------+------
SubCatA | 34    | 1
SubCatB | 65    | 2
SubCatC | 120   | 3
SubCatD | 8     | 4
SubCatE | 19    | 5
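For reference, the dataset query might then look something like this (a sketch, assuming the counts come from a hypothetical Logs table):
SELECT SubCat,
       COUNT(*) AS LogNo,
       ROW_NUMBER() OVER (ORDER BY SubCat ASC) AS ColID
FROM Logs
GROUP BY SubCat;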
Now that there is a numeric identifier for each column, you can perform some logic on it to arrange the columns neatly on the page.
This solution involves a Tablix nested inside a Matrix, which is nested inside another Matrix, as follows.
First create a Matrix (Matrix1) and set its data source to your dataset. Set the Row Group properties to group on the following expression, where '4' is the number of columns you wish to display horizontally:
=CInt(Floor((Fields!ColID.Value - 1) / 4))
Then, in the data section of the Matrix (bottom right corner), insert a rectangle, and on this insert a new Matrix (Matrix 2). Remove the leftmost row. Set the column header to be the column name, SubCat. This will automatically set the column grouping to be SubCat.
Finally, in the data section of Matrix 2, add a new rectangle and add a Tablix to it. Remove the header row and set it to be one column wide only. Set the data to be the information you wish to display, i.e. LogNo.
Then delete the leftmost and topmost rows/columns from Matrix 1 to make it look tidier (note: delete the column/row only, not the associated groups!).
Then when the report is run it should look similar to the following. Note in my example SubCat = ColName, and LogNo = NumItems, and I have multiple values per SubCat.
Hopefully you find this helpful. If not, please ask for clarification.
Can you do something like this:
The following gives the steps (in two columns, down then across)

Select N unique rows randomly while excluding 1 specific row (in SQLite DB)

I need to select x unique rows randomly from a table with n rows, while excluding 1 specific row (x is small, 3 for example). This can be done in several queries if needed, and I can also compute anything in a programming language (Java). The one important thing is that it must be done faster than O(n), consuming O(x) memory, and indefinite looping (retrying) is also undesirable.
The probability of selection should be equal for all rows (except the excluded one, of course).
Example:
| id | some data |
|----|-----------|
| 1  | …         |
| 2  | …         |
| 3  | …         |
| 4  | …         |
| 5  | …         |
The algorithm is run with arguments (x = 3, exclude_id = 4), so:
it should select 3 different random rows from the rows with id in 1, 2, 3, 5.
(end of example)
I tried the following approach:
get the row count (= n);
get the position of the excluded row with something like select count(*) from t where id < excluded_id, assuming id is monotonically increasing;
pick x numbers from 0..n-1, obeying all the rules, using some "clever" algorithm; this is roughly O(x), in other words fast enough;
select these x rows one by one using the LIMIT index, 1 SQL clause (sketched below).
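In SQLite, the per-step queries might look like this (a sketch, assuming a table t whose id column is monotonically increasing; the ORDER BY makes the offset deterministic instead of relying on implicit row order):
-- step 1: total row count n
SELECT COUNT(*) FROM t;
-- step 2: position of the excluded row among rows ordered by id
SELECT COUNT(*) FROM t WHERE id < 4; -- 4 = exclude_id
-- step 4: fetch the row at one randomly chosen position
-- (the x positions are generated in Java, skipping the excluded position)
SELECT * FROM t ORDER BY id LIMIT 1 OFFSET 2; -- 2 = one chosen position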
However, it turned out that it's possible for rows to change positions (I'm not sure why), so the auto-generated ids are not monotonically increasing. In that case the second step (getting the position of the excluded row) produces a wrong result and the algorithm fails to do its job correctly.
How can this be improved?
Also, if this is vastly easier with a different SQL-like database system, that would be interesting, too. (The DB is on a server and I can install any software there as long as it's compatible with Ubuntu 14.04 LTS.)
(I'm sorry for a bit of confusion.) The algorithm I used is actually correct if the id is monotonically increasing; I just forgot that the id was not itself auto-generated. It was taken from another table, where it is auto-generated, and it was possible to add these rows in a different order.
So I added another id column to this table, which is auto-generated, used it for the row selection, and now it works as it should.
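For reference, one way to get such a column in SQLite is to recreate the table with its own INTEGER PRIMARY KEY (which SQLite assigns automatically) and copy the data across. A sketch with hypothetical names:
CREATE TABLE t_new (
    local_id INTEGER PRIMARY KEY, -- assigned automatically; increases as long as rows are only appended
    id       INTEGER,             -- the original id taken from the other table
    data     TEXT
);
INSERT INTO t_new (id, data) SELECT id, data FROM t ORDER BY id;
DROP TABLE t;
ALTER TABLE t_new RENAME TO t;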