Closed. This question is opinion-based. It is not currently accepting answers. Closed 4 years ago.
I mean: link text
Why should one use this over MySQL or something similar?
SPEED
Redis is pretty fast: roughly 110,000 SETs/second and 81,000 GETs/second on an entry-level Linux box. Check the benchmarks.
Most important is speed. There is no way you will get numbers like these from a SQL database.
COMMANDS
Think of Redis as a data-structures server; it is not just another key-value DB. See all the commands supported by Redis to get a first feeling for it.
Sometimes people call Redis "Memcached on steroids".
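The "data structures server" idea can be sketched in a few lines. This is a hypothetical pure-Python toy that mimics the semantics of a handful of Redis commands (SET/GET, LPUSH/LRANGE, SADD/SISMEMBER); it is not the real wire protocol or the real implementation.

```python
class MiniRedis:
    """Toy illustration of Redis-style data-structure commands."""

    def __init__(self):
        self.store = {}

    # Plain key/value, like Memcached.
    def set(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

    # Lists: LPUSH/LRANGE operate on a structure, not an opaque blob.
    def lpush(self, key, *values):
        lst = self.store.setdefault(key, [])
        for v in values:          # each value is pushed to the head in turn
            lst.insert(0, v)
        return len(lst)

    def lrange(self, key, start, stop):
        lst = self.store.get(key, [])
        # Redis LRANGE is inclusive of the stop index; -1 means "to the end".
        return lst[start:] if stop == -1 else lst[start:stop + 1]

    # Sets: SADD/SISMEMBER give set semantics server-side.
    def sadd(self, key, *members):
        s = self.store.setdefault(key, set())
        before = len(s)
        s.update(members)
        return len(s) - before    # number of newly added members

    def sismember(self, key, member):
        return member in self.store.get(key, set())

r = MiniRedis()
r.set("greeting", "hello")
r.lpush("queue", "job1", "job2")  # list is now ["job2", "job1"]
r.sadd("tags", "fast", "nosql")
```

The point is that the server understands lists and sets natively, so clients manipulate them with single commands instead of reading, deserializing, mutating, and writing back a blob.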
Like many NoSQL databases, you would use Redis if it fits your needs. It does not directly compete with RDBMS solutions like MySQL, PostgreSQL, etc. You may need to combine multiple NoSQL solutions to replace the functionality of an RDBMS. I personally do not consider Redis a primary data store, only something for specialty cases like caching, queuing, etc. Document databases like MongoDB or CouchDB may work as a primary data store and may be able to replace an RDBMS, but there are certainly projects where an RDBMS would work better than a document database.
This Wikipedia article on NoSQL will explain.
These data stores may not require fixed table schemas, and usually avoid join operations and typically scale horizontally.
Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
What is the best way of storing and querying data for a simple task-management application, for example? The goal is maximum performance with minimum resource consumption (CPU, disk, RAM) on a single EC2 instance.
This also depends on the use case: will the database see many reads or many writes? When you are talking about task management, you have to know how many records you expect, and whether you expect more INSERTs or more SELECTs, etc.
Regarding SQL databases, an interesting benchmark can be found here:
https://www.sqlite.org/speed.html
The benchmark shows that SQLite can be very fast in many cases, but also inefficient in some. (Unfortunately the benchmark is not the newest, but it may still be helpful.)
SQLite is also convenient in that the whole database is a single file on your disk, and you access it with plain SQL.
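As a minimal sketch of that single-file convenience, using Python's standard-library sqlite3 module (here with an in-memory database; a filename like "tasks.db" would behave the same way, just persisted to one file on disk):

```python
import sqlite3

# The whole database lives in one place and is queried with plain SQL.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT, done INTEGER)"
)
conn.executemany(
    "INSERT INTO tasks (title, done) VALUES (?, ?)",
    [("write report", 0), ("ship release", 1)],
)
# Fetch the tasks that are still open.
open_tasks = conn.execute(
    "SELECT title FROM tasks WHERE done = 0"
).fetchall()
conn.close()
```

No server process, no configuration: for a small single-instance app this is about as low-overhead as SQL gets.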
A very long and exhaustive benchmark of NoSQL stores can be found here, for example:
http://www.datastax.com/wp-content/themes/datastax-2014-08/files/NoSQL_Benchmarks_EndPoint.pdf
It is also good to know the database engines; e.g., when using MySQL, choose carefully between MyISAM and InnoDB (a nice answer is here: What's the difference between MyISAM and InnoDB?).
If you just want to optimize performance, you can also throw hardware resources at it: if you read a lot from the DB and do not have that many writes, you can cache the database in memory (via InnoDB's buffer pool, innodb_buffer_pool_size); with enough RAM you can effectively serve the whole database from RAM.
Long story short: if you are choosing an engine for a very simple and small database, SQLite might be the minimalistic approach you want. If you want to build something larger, first be clear about your needs.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
Why don't any of the major RDBMS systems like MySQL, SQL Server, Oracle, etc. have good full text indexing support?
I realize that most databases support full-text indexes to some degree, but they are usually slower and have a smaller feature set. It seems that every time you want a really good full-text index, you have to go outside the database and use something like Lucene/Solr or Sphinx.
Why isn't the technology in these full-text search engines completely integrated into the database engine? There are lots of problems with keeping the data in another system such as Lucene, including keeping it up to date and the inability to join the results with other tables. Is there a specific technological reason why these two technologies can't be integrated?
RDBMS indexes serve a different purpose. They are there to give the engine a way to optimize access to the data, both for the user and for the engine itself (to resolve joins, check foreign keys, etc.). As such, they are really not a functional data structure.
Tools like full-text search and tag clouds can be very useful for enhancing the user experience. These serve only the user and the applications. They are functional, and require real data structures: secondary tables or derived fields, typically with a whole lot of triggers and code to keep them updated.
And IMHO, there are many ways to implement these technologies. RDBMS vendors would have to choose one technology over another, for reasons that have nothing to do with the RDBMS engine itself. That does not really seem like their job.
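For what it's worth, the "some degree" of built-in support the question mentions can be seen in, e.g., SQLite's FTS5 module. A minimal sketch, assuming your SQLite build was compiled with FTS5 (true for most modern builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A virtual table backed by a full-text index, living inside the database.
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(body)")
conn.executemany(
    "INSERT INTO docs (body) VALUES (?)",
    [
        ("the quick brown fox",),
        ("full text search inside the database",),
    ],
)
# MATCH queries the full-text index; results are ordinary rows, so they
# can be joined with other tables in the same statement.
hits = conn.execute(
    "SELECT body FROM docs WHERE docs MATCH 'search'"
).fetchall()
conn.close()
```

This keeps the index inside the database and joinable, though the feature set (tokenizers, ranking, language analysis) is still far smaller than what Lucene/Solr or Sphinx offer.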
Closed. This question is opinion-based. It is not currently accepting answers. Closed 9 years ago.
Currently we have a complex business object which needs around 30 joins on our SQL database to retrieve one item (and this is our main use case). The database is around 2 GB in SQL Server.
We are using Entity Framework to retrieve data, and it takes around 3.5 seconds to retrieve one item. We have noticed that using subqueries in a parallel invoke performs better than using joins when there are a lot of rows in the other table (so we have something like 10 subqueries). We don't use stored procedures because we would like to keep the data access layer in plain C#.
The goal is to retrieve the item in under 1 second without changing the environment too much.
We are looking into NoSQL solutions (RavenDB, Cassandra, Redis with the "document client") and the new "in-memory database" feature of SQL Server.
What do you recommend? Do you think a single stored-procedure call with EF would do the job?
EDIT 1:
We have indexes on all the columns involved in the joins.
In my opinion, if you need 30 joins to retrieve one item, something is wrong with the design of your database. It may be correct from the relational point of view, but it is certainly impractical from the functional/performance point of view.
A couple of solutions came to my mind:
Denormalize your database design.
I am pretty sure that with that technique you can reduce the number of joins and improve your performance a lot.
http://technet.microsoft.com/en-us/library/cc505841.aspx
Use a NoSQL solution like you mention.
Due to the number of SQL tables involved this is not going to be an easy change, but maybe you can start by introducing NoSQL as a cache for these complex objects.
NoSQL Use Case Scenarios or WHEN to use NoSQL
Of course, using stored procedures for this case is much better and will improve the performance, but I do not believe it is going to make a dramatic change. You should try it and compare. Also revise all your indexes.
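The denormalization suggestion above can be sketched concretely. All schema and names here are made up for illustration (SQLite via Python for brevity): the hot fields from the joined tables are copied into one wide read table, so the main use case becomes a join-free, single-table lookup.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Normalized source tables: reading an order needs a join.
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER,
                         total REAL);
    INSERT INTO customers VALUES (1, 'Acme');
    INSERT INTO orders VALUES (10, 1, 99.5);

    -- Denormalized read model: the customer name is copied into the
    -- order row. In a real system this table is refreshed by triggers,
    -- a batch job, or application code on write.
    CREATE TABLE orders_read (
        order_id INTEGER PRIMARY KEY, customer_name TEXT, total REAL
    );
    INSERT INTO orders_read
        SELECT o.id, c.name, o.total
        FROM orders o JOIN customers c ON c.id = o.customer_id;
""");
# The hot path is now a single-table primary-key lookup, no joins.
row = conn.execute(
    "SELECT customer_name, total FROM orders_read WHERE order_id = 10"
).fetchone()
conn.close()
```

With 30 joined tables the same idea scales up: you pay the join cost once at write/refresh time instead of on every read, which is also exactly what a document-store cache of the complex object would buy you.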
Closed. This question is opinion-based. It is not currently accepting answers. Closed 9 years ago.
SQL is one of the most powerful and most widely used languages, but out of curiosity and for learning I want to test new technologies and find out which are the fastest.
I am talking about NoSQL (JSON) and also about plain-text files (.txt, .dat, or .ini) holding posts, settings, and the like.
Which is faster to process? Take the WordPress CMS as an example: it is very famous, one of the largest in the world, and it uses SQL. Say we request 50 posts from the database using the default template, all standardized, and compare that with a request for the same 50 posts from a .txt or JSON file. Which technology renders faster?
If your storage is read-only or write-only, a JSON or text file will be faster than MySQL; if you want to process complex data, MySQL is faster.
If you want less overhead, try SQLite or a similar embedded database.
NoSQL databases like Redis and MongoDB are faster than MySQL, but to use them you must have your own hosting with root access.
Although I don't have numbers to prove my guess, I think that any database will always be faster than a text file, just consider its indexing capabilities.
If instead you want to compare different databases, then, as others already said, it's a matter of the specific domain / problem you're working on and the structure you gave to the specific database schema.
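A small sketch of the indexing point above: looking up one record in a flat text file means scanning lines until one matches, while a database primary key is backed by a B-tree index that goes straight to the row. All names and data here are made up for illustration.

```python
import sqlite3

# Pretend "posts" is the content of a flat text file, one record per line.
posts = [f"post-{i}|title {i}" for i in range(1000)]

def find_in_lines(lines, key):
    """Text-file approach: linear scan over every line until the key hits."""
    for line in lines:
        if line.startswith(key + "|"):
            return line.split("|", 1)[1]
    return None

# Database approach: the PRIMARY KEY gives an indexed, direct lookup.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (slug TEXT PRIMARY KEY, title TEXT)")
conn.executemany(
    "INSERT INTO posts VALUES (?, ?)",
    [p.split("|", 1) for p in posts],
)
scanned = find_in_lines(posts, "post-500")
indexed = conn.execute(
    "SELECT title FROM posts WHERE slug = ?", ("post-500",)
).fetchone()[0]
conn.close()
```

Both return the same record, but the scan cost grows with file size while the indexed lookup stays roughly logarithmic, which is why the gap widens as the data grows.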
Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 years ago.
There are a lot of articles on the web supporting the trend toward graph databases like Neo4j, but I can't find much against them.
When would a graph database not be the best solution?
Any links to articles that compare graphs, nosql, and relational databases would be great.
Currently I would not use Neo4j in a high volume write situation. The writes are still limited to a single machine, so you're restricted to a single machine's throughput, until they figure out some way of sharding (which is, by the way, in the works). In high volume write situations, you would probably look at some other store like Cassandra or MongoDB, and sacrifice other benefits a graph database gives you.
Another thing I would not currently use Neo4j for is full-text search, although it does have some built-in facility (as it uses Lucene for indexing under the hood), it is limited in scope and difficult to use from the latest Cypher. I understand that this is going to be improving rapidly in the next couple of releases, and look forward to that. Something like ElasticSearch or Solr would do a better job for FTS-related things.
Contrary to popular belief, tabular data is often well-fitted to the graph, unless you really have very denormalized data, like log records.
The good news is you can take advantage of many of these things together, picking the best tool for the job, and implement a polyglot persistence solution to answer your questions the best way possible.
Also, I would not use Neo4j for serving and storing binary data. There are much better options out there for images, videos, and large text documents: keep them elsewhere and store only references (or index entries) in Neo4j.
When would a graph database not be the best solution?
When you work in a conservative company.
Insert some well thought-out technical reason here.