How to get the intersection of Redis sorted sets?

I want to display a leaderboard for users who earn the most points across two different games.
I am storing user scores per game in Redis sorted sets. How can I intersect these sorted sets to display a combined leaderboard?

This sounds like a job for ZINTERSTORE:
ZINTERSTORE leaders-sorted-set 2 game-1-sorted-set game-2-sorted-set AGGREGATE SUM
Since there is no AVG aggregate subcommand, you'll have to divide the resultant scores to obtain that.
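As a rough sketch (the key and member names here are only examples), the whole flow in redis-cli could look like:
ZADD game-1-sorted-set 100 alice 80 bob 60 carol
ZADD game-2-sorted-set 50 alice 120 bob
ZINTERSTORE leaders-sorted-set 2 game-1-sorted-set game-2-sorted-set AGGREGATE SUM
ZREVRANGE leaders-sorted-set 0 9 WITHSCORES
Only members present in both input sets make it into leaders-sorted-set (carol, who only played game 1, is dropped), and ZREVRANGE ... WITHSCORES then returns the combined leaderboard from highest to lowest score.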

Related

SQL: finding the average per transaction

I have a question regarding an average calculation. Suppose I have 5 transactions, each transaction has multiple items, and each item has its own quantity value. I want to find the average quantity per transaction. Note that in my ERD design there are 2 separate tables, HeaderTransaction and TransactionDetail.
If I use the AVG() function, it will be very weird, e.g.:
First transaction:
5 eggs
2 sausages
Second transaction:
3 eggs
10 sausages
AVG will compute (5+2+3+10)/4; what I want is ((5+2)+(3+10))/2.
My current solution is
SELECT SUM(ItemQuantity) / COUNT(DISTINCT SalesTransactionId) AS [aveg]
but I find it a bit rough.
If I use the AVG() function, it will be very weird
Not if you AVG the thing you say you want to average, which is the number of items per transaction:
SELECT AVG(num_of_items_in_transaction)
FROM
  (SELECT SUM(amount_of_item) AS num_of_items_in_transaction
   FROM detail
   GROUP BY tran_id) AS per_transaction -- MySQL requires an alias on the derived table
The inner query groups per transaction and adds up the total number of items in each. The outer query then averages these totals. The point is that because you first need an operation grouped by transaction, and then another operation grouped by something else (here, the whole dataset), you can't combine the two grouping operations into a single step; it has to be multi-stage, because you're feeding the output of one stage into the input of the next. Consider SELECT AVG(SUM(amount)) .. GROUP BY ??? - what would you put into the ??? to let MySQL know you wanted the SUM grouped by one thing but the AVG grouped by another? (You can't.)
You generally need to do it as a two-step reduction if you're not using window functions, and even a windowed version is arguably an implicit multi-step operation anyway.
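If your database supports window functions (MySQL 8.0+, for instance), a sketch of the same two-stage idea in a single statement, reusing the made-up detail/tran_id/amount_of_item names from above, would be:
SELECT DISTINCT AVG(SUM(amount_of_item)) OVER () AS avg_items_per_transaction
FROM detail
GROUP BY tran_id
The GROUP BY still runs first, producing one sum per transaction, and AVG(...) OVER () then averages those sums across the whole grouped result - the two stages are still there, just written as one query.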
I don't think there's any need to change what you have (sum of amounts divided by count of transactions); I just wanted to point out why AVG() probably wasn't working as you expected.

Filter by two values with ID column

I'm analyzing some e-sports soccer championship data.
My original table looks like this:
Every row corresponds to one match, with the date, the players involved, the teams they used and their scores:
my df head()
After searching around the Tableau community, I pivoted the "Player A" and "Player B" columns so I can filter on players individually. Now every match has 2 rows (one for each player in that match), and they're tied together by the 'MatchID' column:
my tableau table
That said, I want to build a view where the viewer can select two players and see statistics about all the matches they played against each other, like these two:
1- Last 10 matches info (Date, teams they played with, scores)
2- Most-frequent results like this graph:
the graph i want to show
I tried bringing some dimensions onto Columns, but I really couldn't find a way to show the entire row data in a view, and I have no idea how to filter on two players and keep only the matches where they meet, using MatchID.
I tried searching around and writing some calculated-field filters, but I came to Tableau with no background in SQL, Excel or anything else, just Python, so I'm a bit lost with so many options and approaches.
If anyone could give me directions on this I would be very happy. Thanks in advance (:
I think you should unpivot your data so you are back with 1 record per match. Then you will be able to use 2 parameters as your filters; one parameter for player 1 and the other for player 2. That would enable the user to select 2 different players.
As there's a chance a selected player could appear in either the Player 1 or the Player 2 column, using the parameters as filters is a little more complex. Your filter calculated field for the Player1 parameter would be something like:
[FilterParameterPlayer1]: [ParameterPlayer1] = [Player1] OR [ParameterPlayer1] = [Player2]
And for the Player2 parameter:
[FilterParameterPlayer2]: [ParameterPlayer2] = [Player1] OR [ParameterPlayer2] = [Player2]
Both filter fields should be set to only show True.
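If you'd rather manage a single filter, the same two checks can be combined into one boolean calculated field - call it [FilterBothPlayers] (the name is just illustrative, and it uses the same assumed parameter and column names as above):
[FilterBothPlayers]: ([ParameterPlayer1] = [Player1] OR [ParameterPlayer1] = [Player2]) AND ([ParameterPlayer2] = [Player1] OR [ParameterPlayer2] = [Player2])
Put that on the filter shelf set to True and only the matches featuring both selected players remain.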

PowerPivot Ranking Groups using DAX's Rankx - Ranking Using Sum of a Field

I am trying to rank groups by summing a field (not a calculated column) for each group, so that I get a static answer for each row in my table.
For example, I may have a table with state, agent, and sales. Sales is a field, not a measure. There can be many agents within a state, so there are many rows for each individual state. I am trying to rank the states by total sales within each state.
I have tried many things, but the ones that make the most sense to me are:
rankx(CALCULATETABLE(Table,allexcept(Table,Table[AGENT]),sum([Sales]),,DESC)
and
=rankx(SUMMARIZE(State,Table[State],"Sales",sum(Table[Sales])),[Sales])
The first one creates a table where it sums sales without grouping by agent, and then tries to rank based on that. I get #ERROR on this one.
The second one creates a table using SUMMARIZE with only sum of Sales grouped by state, then tries to take that table and rank the states based on Sales. For this one I get a rank of 1 for every row.
I think, but am not sure, that my problem is coming from the sales being a static field and not a calculated measure. I can't figure out where to go from here. Any help?
Assuming your data looks something like this...
...have you tried this:
Ranking Measure = RANKX(ALL('Table'[STATE]),CALCULATE(SUM('Table'[Sales])))
The ALL('Table'[STATE]) says to rank all states. The CALCULATE(SUM('Table'[Sales])) says to rank by the sum of their sales. The CALCULATE wrapper is important; a plain SUM('Table'[Sales]) will be filtered to the current row context, resulting in every state being ranked #1. (Alternatively, you can spin off SUM('Table'[Sales]) into a separate Sales measure - which I'd recommend.)
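If you do split it out, a sketch of that refactor (same assumed 'Table' column names) would be:
Sales = SUM('Table'[Sales])
Ranking Measure = RANKX(ALL('Table'[STATE]), [Sales])
Referencing the [Sales] measure inside RANKX triggers an implicit CALCULATE (context transition), so the explicit CALCULATE wrapper is no longer needed.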
Note: the ranks will change based on slicers/filters (e.g. a filter by agent will re-rank the states by that agent). If you're looking for a static rank of states by their total sales (i.e. not affected by filters on agent and always looking at the entire table), then try this:
Static Ranking Measure = CALCULATE([Ranking Measure], ALLEXCEPT('Table', 'Table'[State]))
This takes the same ranking measure, but removes all filters except the state filter (which you need to leave, as that's the column you're ranking by).
I did figure out a solution that's pretty simple, but it's messier than I'd like. If it's the only thing that works, though, that's okay.
I created a new table with each distinct state along with its total sales, and then just do a basic RANKX on that table.

Transform rows into columns in a SQL table

Suppose I would like to store a table with 440 rows and 138,672 columns. As the SQL limit is 1024 columns, I would like to transform rows into columns, i.e. convert the 440 rows and 138,672 columns into 138,672 rows and 440 columns.
Is this possible?
The SQL Server limit is actually 30,000 columns; see Sparse Columns.
But creating a query that returns 30k columns (not to mention 138k+) would be basically unmanageable: the sheer size of the metadata on each query result would slow the client to a crawl. One simply does not design databases like that. Go back to the drawing board; when you reach 10 columns stop and think, and when you reach 100 columns erase the board and start anew.
And read this: Best Practices for Semantic Data Modeling for Performance and Scalability.
The description of the data is as follows....
Each attribute describes the measurement of the occupancy rate
(between 0 and 1) of a captor location as recorded by a measuring
station, at a given timestamp in time during the day.
The ID of each station is given in the stations_list text file.
For more information on the location (GPS, Highway, Direction) of each
station please refer to the PEMS website.
There are 963 (stations) x 144 (timestamps) = 138,672 attributes for
each record.
This is perfect for normalisation.
You can have a stations table and a measurements table. Two nice long thin tables.
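As a sketch only (the table and column names are just an assumption based on the data description above), the normalised design could be something like:

CREATE TABLE stations (
    station_id INT PRIMARY KEY
    -- plus location details (GPS, highway, direction) if you load them from the PEMS site
);

CREATE TABLE measurements (
    station_id INT NOT NULL REFERENCES stations (station_id),
    record_id  INT NOT NULL,           -- which of the 440 original records
    slot       INT NOT NULL,           -- which of the 144 timestamps within the day
    occupancy  DECIMAL(5, 4) NOT NULL, -- rate between 0 and 1
    PRIMARY KEY (station_id, record_id, slot)
);

Each original record then becomes 963 x 144 = 138,672 narrow rows instead of 138,672 columns, which is a shape the database (and your queries) can handle comfortably.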

Dynamic user ranks

I have a basic karma/rep system that awards users points based on their activities (questions, answers, etc.). I want to have user ranks (titles) based on their points. Different ranks have different limitations and grant different powers.
ranks table
id rankname points questions_per_day
1 beginner 150 10
2 advanced 300 30
I'm not sure if I need to have both a lower and an upper limit, but for the sake of simplicity I have only kept a points threshold; that is, a user below 150 points is a 'beginner' and a user at 150 or above is 'advanced'.
For example, Bob with 157 points would have an 'advanced' tag displayed by his username.
How can I determine and display the rank/title of a user? Do I loop through each row and compare values?
What problems might arise if I scale this to thousands of users having their rank calculated this way? Surely it will tax the system to query and loop each time a user's rank is requested, no?
You would do better to cache both the score and the rank. If a user's score only changes when they perform certain activities, you can put a trigger on those activities: when the score changes, recalculate the rank and save it in the user's record. That way, retrieving the rank is trivial; you only need to calculate it when the score changes.
You can get the matching rank id like this: query the rank that is closest to (but at or below) the user's score, and store that rank id in the user's record.
I added the pseudo-variable {USERSCORE} because I don't know whether you use parameters or some other way to pass values into a query.
select r.id
from ranks r
where r.points <= {USERSCORE}
order by r.points desc
limit 1
A little difficult without knowing your schema. Try:
SELECT user.id, MIN(ranks.id) AS rankid FROM user JOIN ranks ON (user.score <= ranks.points) GROUP BY user.id;
Now you know the rank id.
This is non-trivial though (GROUP BY and MIN/MAX are pipeline breakers and so quite heavyweight operations), so GolezTrol's advice is good; you should cache this information and update it only when a user's score changes. A trigger sounds fine for this.
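For example, a rough MySQL-flavoured sketch (it assumes a users table with score and rank_id columns, and reuses the rank lookup from the first answer):

DELIMITER $$
CREATE TRIGGER users_cache_rank
BEFORE UPDATE ON users
FOR EACH ROW
BEGIN
    -- recalculate the cached rank only when the score actually changes
    IF NEW.score <> OLD.score THEN
        SET NEW.rank_id = (SELECT r.id
                           FROM ranks r
                           WHERE r.points <= NEW.score
                           ORDER BY r.points DESC
                           LIMIT 1);
    END IF;
END$$
DELIMITER ;

With the rank cached on the user's row, showing a title is a plain column read; the lookup against ranks only runs when a score changes.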