I'm developing a game and I'm using a leaderboard to keep track of a player's score. There is also the requirement to keep track of about 200 additional statistics. These stats are things like: kills, deaths, time played, weapon used, achievements gained and so on.
What players will be interested in is the score, kills, deaths and time played. The other stats don't necessarily need to be shown in the game, but should be accessible if I want to view them or compare them against other players. The expected number of players to be stored in this leaderboard table is about 2 million.
Currently the design is to store a player id together with all the stats in one table, for instance:
player_id,points,stat_1 .. stat_200,date_created,date_updated
If I want to show a sorted leaderboard based on points then I would have to put an index on points, sort on it in a select query, and limit the results to return, say, 50 rows each time. There is also the idea of letting a player sort the leaderboard on a couple of other stats, like time played or deaths, up to a maximum of say 5 sortable stats.
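In SQL, that approach would look roughly like this (a sketch only; the table name and the stat column names are illustrative, and the OFFSET/FETCH paging syntax is the ANSI form supported by e.g. Oracle 12c+ and SQL Server 2012+):

CREATE INDEX idx_leaderboard_points ON leaderboard (points DESC);

-- Top 50 players by points; the descending index lets the database read the
-- rows in sorted order instead of sorting 2 million rows on every request.
SELECT player_id, points, kills, deaths, time_played
FROM   leaderboard
ORDER  BY points DESC
OFFSET 0 ROWS FETCH NEXT 50 ROWS ONLY;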
The expected number of users playing the game is about 40k concurrently. Maybe a quarter of them (this is really a ballpark figure) will actively browse the leaderboard; the rest will just play the game and upload their scores when they are finished.
I have a number of questions about this approach below:
It seems, though I have my doubts, that the consensus is that leaderboards with millions of records that should be sortable on a couple of stats don't scale very well in an RDBMS. Is this correct?
Is sorting the leaderboard on points through a select query, assuming we have an index on it, going to be extremely slow, and if so how can I work around this?
Should I split the storage of the additional stats that don't need to be sortable into a separate table, or is there an even better approach?
Is caching the sorted results in memory or in a separate table going to be needed, keeping the expected load in mind, and if so which solutions or options should I consider ?
If my approach is completely wrong and I would be better off doing things another way, please let me know; even options like NoSQL solutions in cloud hosting environments are open to consideration.
Cheers
1) With multiple indexes it will become more costly to update the table. It all boils down to how often each player's stats are written to the db.
2) It will be very fast as long as the indexes are small enough to fit into RAM. After that, performance takes a big hit.
3) Sometimes you can gain performance by adding all the fields you need to the index, because then the DBMS doesn't need to access the table at all. This approach has the highest probability of working if the accessed fields are small compared to the size of a row (a covering-index sketch follows below).
4) Oracle will probably be good at doing the caching for you, but if you have a massive load of users all running the same query it is probably better to run that query regularly and store the result in memory (or a memory-mapped file).
For instance, if the high-score list is accessed 50 times per second, you can decrease the load caused by that query by 99% by dumping it every 2 seconds (a sketch of this also follows below).
My advice on this is: don't do it unless you need it. Measure the performance first, and add it if necessary.
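To make points 3 and 4 concrete, here are two hedged sketches (Oracle-style syntax; the leaderboard table and its columns are illustrative, borrowed from the question).

Point 3, a covering index - the top-50 query can then be answered from the index alone, without ever touching the table rows:

CREATE INDEX idx_leaderboard_points_cover
  ON leaderboard (points DESC, player_id, kills, deaths, time_played);

Point 4, a database-side variant of the periodic dump - a small snapshot that readers hit instead of the 2M-row table (an application-side in-memory copy works the same way; the refresh interval is only illustrative):

CREATE MATERIALIZED VIEW mv_leaderboard_top50
  BUILD IMMEDIATE
  REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 2/86400
AS
SELECT *
FROM  (SELECT player_id, points, kills, deaths, time_played
       FROM   leaderboard
       ORDER  BY points DESC)
WHERE ROWNUM <= 50;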
I've been working on a game with a leaderboard myself recently, using MS SQL Server rather than Oracle, and though the number of records and players aren't the same, here's what I've learnt - in answer to your questions:
As long as you have the right underlying hardware, creating a leaderboard with millions of records and sorting on score etc. should work just fine - databases are really, really efficient at querying and sorting based on indexes.
No, it will be fast.
I see no reason to partition into other tables - you'll have to join to those tables to retrieve the data, and that will incur a performance penalty. Though this might be the issue the normalization comment was aimed at.
I assume you will need to include caching to reach the scale you mention; I wouldn't cache in the database layer (your table is effectively a denormalized, flat record already - I don't think you can partition it much more). Not sure what other layers you've got, but I'd look at how "cacheable" your data is (sounds like leaderboards are fairly static), and cache either in the layer immediately above the database, or add something like ehcache to the mix.
General points:
I'd try it out to get a feel for how it would work. Use something like dbmonster to populate a test system with millions of records, and query against that puppy to get a feel for what works and doesn't.
Once you have that up and running, I'd invest in some more serious load and performance testing before deciding to add caching etc. - the more complex you make the architecture, the harder it is to debug, the more costly it is to build, and the more there is to go wrong. So, only add caching if you really need to because you can prove - through load and performance tests - that you can't meet your response time goals.
Whilst it's true that adding indexes to a table slows down insert/update/delete statements, in most cases that's a negligible penalty - I'd definitely not worry too much about it at this stage.
I don't like tables with hundreds of columns to begin with, but it could be OK. Personally I would prefer a separate player ID table and a scores table holding the ID, stat type and value, both indexed only on the ID column. If you organize them as a cluster, the parent and child records are all fetched in one I/O.
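A rough sketch of that layout (Oracle syntax; all names are illustrative):

-- An index cluster keyed on player_id: a player's parent row and all of its
-- stat rows are stored in the same block(s), so they come back in one I/O.
CREATE CLUSTER player_cluster (player_id NUMBER) SIZE 2048;
CREATE INDEX idx_player_cluster ON CLUSTER player_cluster;

CREATE TABLE players (
  player_id    NUMBER PRIMARY KEY,
  date_created DATE,
  date_updated DATE
) CLUSTER player_cluster (player_id);

CREATE TABLE player_stats (
  player_id  NUMBER NOT NULL REFERENCES players,
  stat_type  VARCHAR2(50) NOT NULL,
  stat_value NUMBER
) CLUSTER player_cluster (player_id);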
The transaction numbers you mention call for some scalability, but you have no real picture of the load yet. I assume there is some application server (or a farm of them) that handles the requests.
That is a good fit for the Oracle In-Memory Database Cache option (result caches are also worth a look, though they are less useful for heavily modified data). It is a smart way of caching your Oracle data on the application server: you create a cache grid consisting of at least one grid member, and for best performance you combine the grid members with the application server(s). When you add an application server, you automatically add a cache grid member. It works very well; it is the good old TimesTen technology integrated with the database.
You can make that combination, but you don't have to. If you don't, you lose some top-end performance but are more flexible in the number of grid members.
meh - millions of records? not a big table.
I'd just create the table (avoid the "stat_1, stat_2" naming - give them their proper names, e.g. "score", "kill_count", etc.), add indexes with leading columns on what the users are most likely to want to sort on (that way Oracle can avoid a sort by using the index to access the table in sorted order).
If the number of stats grows too large, you could "partition" it vertically - e.g. have most of the most frequently accessed stats in one table, then have one or more other tables which have extra stats. Each table would have an identical primary key.
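For example (a sketch; table and column names are illustrative):

-- Frequently accessed stats stay in the "hot" table...
CREATE TABLE player_core_stats (
  player_id   NUMBER PRIMARY KEY,
  score       NUMBER NOT NULL,
  kill_count  NUMBER NOT NULL,
  death_count NUMBER NOT NULL,
  time_played NUMBER NOT NULL
);

-- ...while rarely read stats move to a sibling table with the same primary key.
CREATE TABLE player_extra_stats (
  player_id           NUMBER PRIMARY KEY REFERENCES player_core_stats (player_id),
  achievements_gained NUMBER,
  favourite_weapon    VARCHAR2(50)
  -- ...the rest of the infrequently accessed columns
);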
I'm creating a client dashboard website that displays many different graphs and charts of different views of data in our database.
The data consists of records of medical patients and the companies they work for, for insurance purposes. The data is displayed as aggregate charts, but there is a filter feature on the page that the user can use to filter individual patient records. The fields they can filter by are:
Date range of the medical claim
Relationship to the insurance holder
Sex
Employer groups (user selects a number of different groups they work with, and can turn them on and off in the filter)
User Lists (the user of the site can create arbitrary lists of patients and save their IDs and edit them later). Either none, one, or multiple lists can be selected. There is also an any/all selector if multiple are chosen.
A set of filters that the user can define (with preset defaults) from other, more internally structured pieces of data. The user can customize up to three of them and can select any one, or none of them, and they return a list of patient IDs that is stored in memory until they're changed.
The problem is that loading the data can take a long time, some pages taking from 30 seconds to a minute to load (the page is loaded first and the data is then download as JSON via an ajax function while a loading spinner is displayed). Some of the stored procedures we use are very complex, requiring multiple levels of nested queries. I've tried using the Query Analyzer to simplify them, but we've made all the recommended changes and it still takes a long time. Our database people have looked and don't see any other way to make the queries simpler while still getting the data that we need.
The way it's set up now, only changes to the date range and the employer groups cause the database to be hit again. The database never filters on any of the other fields. Any other changes to the filter selection are made on the front end. I tried changing the way it worked and sending all the fields to the back end for the database to filter on, and it ended up taking even longer, not to mention having to wait on every change instead of just a couple.
We're using MS SQL 2014 (SP1). My question is, what are our options for speeding things up? Even if it means completely changing the way our data is stored?
You don't provide any specifics - so this is pretty generic.
Speed up your queries - this is the best, easiest, least error-prone option. Modern hardware can cope with huge datasets and still provide sub-second responses. Post your queries, DDL, sample data and EXPLAINs to Stack Overflow - it's very likely you can get significant improvements.
Buy better hardware - if you really can't speed up the queries, figure out what the bottleneck is, and buy better hardware. It's so cheap these days that maxing out on SSDs, RAM and CPU will probably cost less than the time it takes to figure out how to deal with the less optimal routes below.
Caching - rather than going back to the database for everything, use a cache. Figure out how "up to date" your dashboards need to be, and how unique the data is, and cache query results if at all possible. Many development frameworks have first-class support for caching. The problem with caching is that it makes debugging hard - if a user reports a bug, are they looking at cached data? If so, is that cache stale - is it a bug in the data, or in the caching?
Pre-compute - if caching is not feasible, you can pre-compute data. For instance, when you create a new patient record, you could update the reports for "patients by sex", "patients by date", "patients by insurance co", etc. (a sketch follows after the last option below). This creates a lot of work - and even more opportunity for bugs.
De-normalize - this is the nuclear option. Denormalization typically improves reporting speed at the expense of write speed, and at the expense of introducing lots of opportunities for bugs.
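For the pre-compute option, a hedged sketch in T-SQL (the patients table, the summary table and the trigger are all hypothetical, and it assumes the summary table is pre-seeded with one row per sex):

-- Tiny summary table the dashboard reads instead of re-aggregating claims data.
CREATE TABLE dbo.report_patients_by_sex (
  sex           CHAR(1) PRIMARY KEY,
  patient_count INT NOT NULL DEFAULT 0
);
GO

-- Keep the summary current as patient records are inserted.
CREATE TRIGGER dbo.trg_patients_insert ON dbo.patients
AFTER INSERT
AS
BEGIN
  SET NOCOUNT ON;
  UPDATE r
  SET    r.patient_count = r.patient_count + i.cnt
  FROM   dbo.report_patients_by_sex AS r
  JOIN   (SELECT sex, COUNT(*) AS cnt FROM inserted GROUP BY sex) AS i
         ON i.sex = r.sex;
END;
GO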
I'm currently designing an application where users can create/join groups, and then post content within a group. I'm trying to figure out how best to store this content in a RDBMS.
Option 1: Create a single table for all user content. One of the columns in this table will be the groupID, designating which group the content was posted in. Create an index using the groupID, to enable fast searching of content within a specific group. All content reads/writes will hit this single table.
Option 2: Whenever a user creates a new group, we dynamically create a new table. Something like group_content_{groupName}. All content reads/writes will be routed to the group-specific dynamically created table.
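Concretely, the two options look roughly like this (a sketch; table and column names are illustrative and the types are kept generic):

-- Option 1: one content table for every group, indexed on group_id.
CREATE TABLE group_content (
  content_id BIGINT PRIMARY KEY,
  group_id   BIGINT NOT NULL,
  author_id  BIGINT NOT NULL,
  body       TEXT,
  created_at TIMESTAMP NOT NULL
);
CREATE INDEX idx_group_content_group ON group_content (group_id, created_at);

-- Option 2 would instead create a structurally identical table per group at
-- runtime, e.g. group_content_chess, group_content_hiking, and so on.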
Pros for Option 1:
It's easier to search for content across multiple groups, using a single simple query, operating on a single table.
It's easier to construct simple cross-table queries, since the content table is static and well-defined.
It's easier to implement schema changes and changes to indexing/triggers etc, since there's only one table to maintain.
Pros for Option 2:
All reads and writes will be distributed across numerous tables, thus avoiding any bottlenecks that can result from a lot of traffic hitting a single table (though admittedly, all these tables are still in a single DB)
Each table will be much smaller in size, allowing for faster lookups, faster schema-changes, faster indexing, etc
If we want to shard the DB in future, the transition would be easier if all the data is already "sharded" across different tables.
What are the general recommendations between the above 2 options, from performance/development/maintenance perspectives?
One of the cardinal sins in computing is optimizing too early. It is the opinion of this DBA of 20+ years that you're overestimating the I/O that's going to happen to these groups. RDBMSs are very good at querying and writing this type of info within a standard set of tables. Worst case, you can partition them later. You'll have a lot more search capability and ease of management with one set of tables instead of a set per group.
Imagine if the schema needs to change: do you really want to update hundreds or thousands of tables, or write some long script to fix a mundane issue? Stick with a single set of tables and ignore sharding. Instead, think "maybe we'll partition the tables someday, if necessary".
It is a no-brainer. (1) is the way to go.
You list these as optimizations for the second method. All of these are misconceptions. See comments below:
All reads and writes will be distributed across numerous tables, thus
avoiding any bottlenecks that can result from a lot of traffic hitting
a single table (though admittedly, all these tables are still in a
single DB)
Reads and writes can just as easily be distributed within a table. The only issue would be write conflicts within a page. That is probably a pretty minor consideration, unless you are dealing with more than dozens of transactions per second.
Because of the next item (partially filled pages), you are actually much better off with a single table and pages that are mostly filled.
Each table will be much smaller in size, allowing for faster lookups,
faster schema-changes, faster indexing, etc
Smaller tables can be a performance disaster. Tables are stored on data pages, and each small table ends up occupying its own partially filled page(s). What you end up with is:
A lot of wasted space on disk.
A lot of wasted space in your page cache -- space that could be used to store records.
A lot of wasted I/O reading in partially filled pages.
If we want to shard the DB in future, the transition would be easier
if all the data is already "sharded" across different tables.
Postgres supports table partitioning, so you can store different parts of a table in different places. That should be sufficient for your purpose of spreading the I/O load.
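For example, with PostgreSQL declarative partitioning (a sketch; hash partitioning needs PostgreSQL 11+, and this re-declares the illustrative group_content table from the question as a partitioned table):

-- One logical content table, physically split by group_id; partitions can be
-- placed in different tablespaces to spread I/O, with no per-group tables.
CREATE TABLE group_content (
  content_id BIGINT NOT NULL,
  group_id   BIGINT NOT NULL,
  author_id  BIGINT NOT NULL,
  body       TEXT,
  created_at TIMESTAMP NOT NULL
) PARTITION BY HASH (group_id);

CREATE TABLE group_content_p0 PARTITION OF group_content
  FOR VALUES WITH (MODULUS 4, REMAINDER 0);
CREATE TABLE group_content_p1 PARTITION OF group_content
  FOR VALUES WITH (MODULUS 4, REMAINDER 1);
CREATE TABLE group_content_p2 PARTITION OF group_content
  FOR VALUES WITH (MODULUS 4, REMAINDER 2);
CREATE TABLE group_content_p3 PARTITION OF group_content
  FOR VALUES WITH (MODULUS 4, REMAINDER 3);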
Option 1: Performance=Normal Development=Easy Maintenance=Easy
Option 2: Performance=Fast Development=Complex Maintenance=Hard
I suggest choosing Option 1. For the big table you can manage performance with better indexes or cached indexes (in some databases). Nothing makes Option 2 worthwhile, because the development and maintenance time is the fatal factor.
I have a database with tables that grow every day. I cannot predict which tables are going to grow and which are not as I'm not the one who is putting the data into them.
Is there a way to find tables that need indexes at a particular point in time? Is there a way, in SQL Server, to notify me if a database needs tuning on certain tables?
This is a product we have deployed at different client locations, and we cannot go onto their servers every time to check whether they have a performance issue. What I was thinking of is something that can notify me if there are performance issues on certain tables, so that as new patches go out to the clients we can include these indexes or tuned queries.
After referring to "Insertion of data after creating index on empty table or creating unique index after inserting data on oracle?", I'm not willing to create indexes while installing databases or when the tables have few rows or are empty.
As per my understanding we must not create indexes on a smaller table as it can affect the write performances.
This is only a real concern if you're bulk loading or otherwise generating a hundred million records each day and write performance is a problem. Indexes do increase write times because they have to be updated when data is written, but unless you're running on a potato or running very high loads it's unlikely to be a problem. You'd know it was a problem before you encountered it.
If we're talking about small tables (less than 100 pages) then it's much more likely that indexes won't be useful because the data set is so small, but you shouldn't be concerned about impacting write performance.
Overall, your application should have indexes that support the queries that you expect should be run in your unit testing and staging. You will need feedback from your customers or clients, but until you really know how people use their data, you're going to have to make a best guess.
The general question of "How do I know what indexes I need when I don't know what queries will be run?" is better suited to DBA Stack Exchange. Briefly, you'll need to use dynamic management views for that. The three missing index dynamic views can be used for this. The example query given isn't horrible:
SELECT mig.*, statement AS table_name,
column_id, column_name, column_usage
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
INNER JOIN sys.dm_db_missing_index_groups AS mig
ON mig.index_handle = mid.index_handle
ORDER BY mig.index_group_handle, mig.index_handle, column_id;
You shouldn't just blindly follow what this view says, however. It's a good lead on what to look at, but you have to look at the column order and queries actually being used to tell.
You should also monitor index usage statistics and examine how much and in what way indexes are used compared to how much they have to be updated. Indexes that are updated a million times a day but are used once or twice should be considered for removal.
You will also want to monitor query stats to look for queries that run for a long time. This may be poor development on the part of your client, but can also be a sign of design problems.
This is not even a comprehensive overview of things to look for, however. There's a lot to database maintenance and operations. That's why DBAs make a good living. This is just the tip of the iceberg. Just the tip for indexes, even.
What I'd do if you want to maintain this is consider asking your customers to allow you to send feedback for performance analysis. Set up a broker that monitors the management views and sends compiled and sanitized information back to yourselves. You'll need to be very careful about what you send because you don't want to be sending actual customer data, of course.
Keep in mind that dynamic management views typically reset when the instance does, so the results will not typically represent the entire lifespan of the database.
I'm working on a Grails web app that would be similar in access patterns to StackOverflow or MyLifeIsAverage - users can vote on entries, and their votes are used to sort a list of entries based on the number of votes. Votes can be placed while the sorted select queries are being performed.
Since the selects would lock a large portion of the table, it seems that normal transaction locking would cause updates to take forever (given enough traffic). Has anyone worked on an app with a data access pattern such as this, and if so, did you find a way to allow these updates and selects to happen more or less concurrently? Does anyone know how sites like SO approach this?
My thought was to make the sorted selects dirty reads, since it is acceptable if they're not completely up to date all of the time. This is my only idea for possibly improving performance of these selects and updates, but I thought someone might know a better way.
Don't infer too much about your database - databases are extremely complicated animals and will almost always NOT work the way you would think. An MVCC database, which is any modern database (i.e. not MyISAM tables), will perform a table scan without heavy locking. The whole concept of MVCC is that writes don't block reads and vice versa. But no table scan will be performed here: to answer the query efficiently there is probably an index on votes. That index will be used to 1) limit the number of rows retrieved from the table and 2) retrieve them in sorted order (i.e. without performing a sort).
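A sketch of what the optimizer can do with such an index (table and column names are illustrative; LIMIT as in MySQL/PostgreSQL):

-- The index keeps entries ordered by votes, so the top-50 query walks the
-- first ~50 index entries in order: no table scan and no separate sort step.
CREATE INDEX idx_entries_votes ON entries (votes DESC);

SELECT entry_id, title, votes
FROM   entries
ORDER  BY votes DESC
LIMIT  50;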
We're thinking about putting up a data warehouse system to load with web access logs that our web servers generate. The idea is to load the data in real-time.
To the user we want to present a line graph of the data and enable the user to drill down using the dimensions.
The question is how to balance and design the system so that:
(1) the data can be fetched and presented to the user in real time (<2 seconds),
(2) data can be aggregated on a per-hour and per-day basis, and
(3) as large an amount of data as possible can still be stored in the warehouse.
Our current data rate is roughly ~10 accesses per second, which gives us ~800k rows per day. My simple tests with MySQL and a simple star schema show that my queries start to take longer than 2 seconds when we have more than 8 million rows.
Is it possible to get real-time query performance from a "simple" data warehouse like this, and still have it store a lot of data (it would be nice to never have to throw away any data)?
Are there ways to aggregate the data into coarser, higher-level summary tables?
I have a feeling that this isn't really a new question (I've googled quite a lot though). Could someone maybe give pointers to data warehouse solutions for this? One that comes to mind is Splunk.
Maybe I'm grasping for too much.
UPDATE
My schema looks like this:
dimensions:
client (ip-address)
server
url
facts:
timestamp (in seconds)
bytes transmitted
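In DDL that is roughly (a sketch; column names and types are assumptions):

CREATE TABLE dim_client (client_id INT PRIMARY KEY, ip_address  VARCHAR(45));
CREATE TABLE dim_server (server_id INT PRIMARY KEY, server_name VARCHAR(100));
CREATE TABLE dim_url    (url_id    INT PRIMARY KEY, url         VARCHAR(2048));

-- One row per access: dimension keys plus the measures.
CREATE TABLE fact_access (
  access_ts  TIMESTAMP NOT NULL,   -- timestamp (seconds resolution)
  client_id  INT NOT NULL,
  server_id  INT NOT NULL,
  url_id     INT NOT NULL,
  bytes_sent BIGINT NOT NULL
);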
Seth's answer above is very reasonable, and I feel confident that if you invest in the appropriate knowledge and hardware it has a high chance of success.
Mozilla does a lot of web service analytics. We keep track of details on an hourly basis and we use a commercial DB product, Vertica. It would work very well for this approach but since it is a proprietary commercial product, it has a different set of associated costs.
Another technology that you might want to investigate would be MongoDB. It is a document store database that has a few features that make it potentially a great fit for this use case.
Namely, capped collections (do a search for "mongodb capped collections" for more info), and the fast increment operation for things like keeping track of page views, hits, etc.
http://blog.mongodb.org/post/171353301/using-mongodb-for-real-time-analytics
Doesn't sound like it would be a problem. MySQL is very fast.
For storing logging data, use MyISAM tables -- they're much faster and well suited for web server logs. (I think InnoDB is the default for new installations these days - foreign keys and all the other features of InnoDB aren't necessary for the log tables). You might also consider using merge tables - you can keep individual tables to a manageable size while still being able to access them all as one big table.
If you're still not able to keep up, then get yourself more memory, faster disks, a RAID, or a faster system, in that order.
Also: Never throwing away data is probably a bad idea. If each line is about 200 bytes long, you're talking about a minimum of 50 GB per year, just for the raw logging data. Multiply by at least two if you have indexes. Multiply again by (at least) two for backups.
You can keep it all if you want, but in my opinion you should consider storing the raw data for a few weeks and the aggregated data for a few years. For anything older, just store the reports. (That is, unless you are required by law to keep it around; even then, it probably won't be for more than 3-4 years.)
Also, look into partitioning, especially if your queries mostly access latest data; you could -- for example -- set-up weekly partitions of ~5.5M rows.
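For example, in MySQL (a sketch; it assumes a fact table like the illustrative fact_access above, and the boundary dates are arbitrary):

-- Weekly range partitions on the access timestamp; queries constrained to
-- recent weeks can prune the older partitions entirely.
ALTER TABLE fact_access
PARTITION BY RANGE ( UNIX_TIMESTAMP(access_ts) ) (
  PARTITION w01  VALUES LESS THAN ( UNIX_TIMESTAMP('2010-01-11 00:00:00') ),
  PARTITION w02  VALUES LESS THAN ( UNIX_TIMESTAMP('2010-01-18 00:00:00') ),
  PARTITION w03  VALUES LESS THAN ( UNIX_TIMESTAMP('2010-01-25 00:00:00') ),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);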
If aggregating per-day and per-hour, consider having date and time dimensions -- you did not list them, so I assume you do not use them. The idea is not to have any functions in a query, like HOUR(myTimestamp) or DATE(myTimestamp). The date dimension should be partitioned the same way as the fact tables.
With this in place, the query optimizer can use partition pruning, so the total size of tables does not influence the query response as before.
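To make the "no functions in the query" point concrete (a sketch; the dim_date table and the date_key column added to the fact table are assumptions):

-- Avoid WHERE DATE(f.access_ts) = '2010-01-15': the function defeats
-- partition pruning and index use. Prefer filtering on a plain column.
SELECT d.cal_date, SUM(f.bytes_sent) AS bytes_total
FROM   fact_access AS f
JOIN   dim_date    AS d ON d.date_key = f.date_key
WHERE  f.date_key BETWEEN 20100101 AND 20100131
GROUP  BY d.cal_date;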
This has gotten to be a fairly common data warehousing application. I've run one for years that supported 20-100 million rows a day with 0.1 second response time (from database), over a second from web server. This isn't even on a huge server.
Your data volumes aren't too large, so I wouldn't think you'd need very expensive hardware. But I'd still go multi-core, 64-bit with a lot of memory.
But you will mostly want to hit aggregate data rather than detail data - especially for time-series graphing over days, months, etc. Aggregate data can be created periodically in your database by an asynchronous process, but in cases like this it typically works best if the ETL process that transforms your data also creates the aggregates. Note that the aggregate is typically just a group-by of your fact table.
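For instance (a sketch reusing the illustrative fact_access table from above; the aggregate really is just a GROUP BY):

-- Daily aggregate built by the ETL (or refreshed for the current day only);
-- dashboards and time-series graphs read this instead of the detail rows.
CREATE TABLE agg_access_daily AS
SELECT CAST(access_ts AS DATE) AS access_date,
       server_id,
       COUNT(*)                AS hits,
       SUM(bytes_sent)         AS bytes_total
FROM   fact_access
GROUP  BY CAST(access_ts AS DATE), server_id;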
As others have said - partitioning is a good idea when accessing detail data. But this is less critical for the aggregate data. Also, reliance on pre-created dimensional values is much better than on functions or stored procs. Both of these are typical data warehousing strategies.
Regarding the database - if it were me I'd try PostgreSQL rather than MySQL. The reason is primarily optimizer maturity: PostgreSQL handles the kinds of queries you're likely to run better. MySQL is more likely to get confused on five-way joins, go bottom-up when you run a subselect, etc. And if this application is worth a lot, then I'd consider a commercial database like DB2, Oracle or SQL Server. Then you'd get additional features like query parallelism, automatic query rewrite against aggregate tables, additional optimizer sophistication, etc.