I need ideas to implement a (really) high performance in-memory Database/Storage Mechanism. In the range of storing 20,000+ objects, with each object updated every 5 or so seconds. I would like a FOSS solution.
What is my best option? What are your experiences?
I am working primarily in Java, but I need the datastore to have good performance, so the solution need not be Java-centric.
I also need to be able to query these objects, and I need to be able to restore all of the objects on program startup.
SQLite is an open-source self-contained database that supports in-memory databases (just connect to :memory:). It has bindings for many popular programming languages. It's a traditional SQL-based relational database, but you don't run a separate server – just use it as a library in your program. It's pretty quick. Whether it's quick enough, I don't know, but it may be worth an experiment.
Java driver.
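For example, here's a minimal sketch of the in-memory usage from Java, assuming the xerial sqlite-jdbc driver is on the classpath (the table layout is just illustrative):

import java.sql.*;

public class InMemorySqliteDemo {
    public static void main(String[] args) throws SQLException {
        // ":memory:" gives a private in-memory database tied to this connection.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite::memory:");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE obj (id INTEGER PRIMARY KEY, payload TEXT, updated_at INTEGER)");
            st.execute("CREATE INDEX idx_obj_updated ON obj(updated_at)");
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO obj (id, payload, updated_at) VALUES (?, ?, ?)")) {
                ps.setLong(1, 1L);
                ps.setString(2, "{\"price\": 42.0}");
                ps.setLong(3, System.currentTimeMillis());
                ps.executeUpdate();
            }
        }
    }
}

Note that an in-memory SQLite database lives and dies with the connection, so you would still need to persist a snapshot somewhere to satisfy the restore-on-startup requirement.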
Are you updating 20K objects every 5 seconds, or updating one of the 20K every 5 seconds?
What kind of objects? Why is a traditional RDBMS not sufficient?
Check out HSQLDB and Prevayler. Prevayler is a paradigm shift from traditional RDBMS - one which I have used (the paradigm, that is, not specifically Prevayler) in a number of projects and found it to have real merit.
Depends exactly how you need to query it, but have you looked into memcached?
http://www.danga.com/memcached/
Other options could include MySQL MEMORY tables, or the APC cache if you're using PHP.
Some more detail about the project/requirements would be helpful.
In-memory storage? Two options:
1) A simple C 'malloc' array in which all your structures are indexed.
2) BerkeleyDB: http://www.oracle.com/technology/products/berkeley-db/index.html. It is fast because you build your own indexes (secondary databases) and there is no SQL expression to be evaluated.
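A rough sketch of the same idea from Java, using Berkeley DB Java Edition rather than the C library (the environment directory and key/value layout are assumptions):

import com.sleepycat.je.*;
import java.io.File;
import java.nio.charset.StandardCharsets;

public class BerkeleyDbDemo {
    public static void main(String[] args) {
        File dir = new File("bdb-env");
        dir.mkdirs(); // the environment directory must exist

        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        Environment env = new Environment(dir, envConfig);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        Database db = env.openDatabase(null, "objects", dbConfig);

        // No SQL to parse or plan: a put/get is a direct B-tree operation.
        DatabaseEntry key = new DatabaseEntry("obj:1".getBytes(StandardCharsets.UTF_8));
        DatabaseEntry value = new DatabaseEntry("{\"price\": 42.0}".getBytes(StandardCharsets.UTF_8));
        db.put(null, key, value);

        DatabaseEntry found = new DatabaseEntry();
        db.get(null, key, found, LockMode.DEFAULT);

        db.close();
        env.close();
    }
}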
Look at some of the products listed here: http://en.wikipedia.org/wiki/In-memory_database
What level of durability do you need? 20,000 updates every 5 seconds will probably be difficult for most IO hardware in terms of number of transactions if you write the data back to disc for every one.
If you can afford to lose some updates, you could probably flush it to disc every 100ms with no problem with fairly cheap hardware if your database and OS support doing that.
If it's really an in-memory database that you don't want to flush to disc often, that sounds pretty trivial. I've heard that H2 is pretty good, but SQLite may work as well. A properly tuned MySQL instance could also do it (but may be more convoluted).
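As a rough sketch of the H2 route (the JDBC URL and file names are assumptions), an in-memory database plus a periodic SCRIPT dump gives you cheap, adjustable durability:

import java.sql.*;

public class H2InMemoryDemo {
    public static void main(String[] args) throws SQLException {
        // DB_CLOSE_DELAY=-1 keeps the in-memory database alive for the lifetime of the JVM.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:objects;DB_CLOSE_DELAY=-1", "sa", "");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS obj (id BIGINT PRIMARY KEY, payload VARCHAR, updated_at BIGINT)");
            // On startup you could replay the dump with: RUNSCRIPT FROM 'backup.sql'
            st.execute("SCRIPT TO 'backup.sql'");
        }
    }
}

How often you run the SCRIPT dump (every 100ms, every few seconds, ...) is the knob that trades durability against IO load.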
Oracle TimesTen In-Memory Database. See: http://www.informationweek.com/whitepaper/Business-Intelligence/Datamarts-Data-Warehouses/oracle-timesten-in-memory-databas-wp1228511232361
Chronicle Map is a pure Java key-value store.
It has really high performance, sustaining 1 million writes/second from a single thread. It's a myth that a fast database couldn't be written in Java.
It seamlessly stores and loads any serializable Java objects, and provides a simple Map interface.
LGPLv3
Since you don't have many "tables", a full-blown SQL database could be overkill; indexes and queries can be implemented with a handful of distinct key-value stores that are updated manually by vanilla Java code. Chronicle Map provides mechanisms to make such updates concurrently isolated from each other, if you need that.
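A minimal sketch with Chronicle Map, persisted to a file so all objects can be reloaded on startup (the key/value types, sizes, and file name are assumptions; exact builder options vary by version):

import net.openhft.chronicle.map.ChronicleMap;
import java.io.File;
import java.io.IOException;

public class ChronicleMapDemo {
    public static void main(String[] args) throws IOException {
        ChronicleMap<Long, String> store = ChronicleMap
                .of(Long.class, String.class)
                .name("objects")
                .entries(20_000)
                .averageValue("{\"price\": 42.0, \"qty\": 100}")
                .createPersistedTo(new File("objects.dat"));

        store.put(1L, "{\"price\": 43.5, \"qty\": 90}"); // each object updated every ~5 seconds
        String current = store.get(1L);

        store.close();
    }
}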
Related
We are trying to find an in-memory database with index support that we can use for our application.
We are looking at Aerospike, Apache Ignite, Geode, Voltdb.
There is not much to distinguish them; each one claims to be fast and to have great community support.
Of these, Aerospike and VoltDB are C/C++ based, while Apache Ignite and Geode are Java based.
Considering there is little to choose between the databases in terms of performance, and that it is tough to test which one will work better for us in production, I was trying to find out whether the performance of an in-memory database also depends on whether it is Java based or C/C++ based. Given that garbage collection issues are quite frequent and it is tough to tune GC properly for your use case (which may change over time), is it true that the Java-based databases will be at a disadvantage?
Thanks
You can't really conclude that one database is faster than another just because it is written in language X vs language Y. A database is a very complex product with many features. Some queries may be faster in one database, other queries in another.
The only way to find out is to test your specific use case.
For an in-memory DB that maintains consistency the way Geode does (i.e. synchronous replication to other nodes before releasing the client thread), your network is going to be a bigger concern than the HotSpot compiler. Still, here are two points of input to get you to the point where language is irrelevant:
1) If you are doing lots of creates/updates relative to reads: use off-heap memory on the server. This minimizes GC pauses.
2) Use Geode's serialization mapping between C/C++ and Java objects to avoid JNI. Specifically, use the DataSerializer http://gemfire.docs.pivotal.io/geode/developing/data_serialization/gemfire_data_serialization.html
If you plan to use queries extensively rather than gets/ puts, use the PDXSerializer: http://gemfire.docs.pivotal.io/geode/developing/data_serialization/use_pdx_serializer.html
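For illustration, a hedged sketch of a Geode client configured with PDX auto-serialization (the locator address, package pattern, and region name are assumptions):

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.pdx.ReflectionBasedAutoSerializer;

public class GeodeClientDemo {
    public static void main(String[] args) {
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334)
                // PDX keeps the serialized form language-neutral, so C++ and Java clients can share data without JNI.
                .setPdxSerializer(new ReflectionBasedAutoSerializer("com.example.model.*"))
                .create();

        Region<String, String> orders = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("Orders");

        orders.put("order-1", "{\"qty\": 100}");
        cache.close();
    }
}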
I guess I'm going to be the contrarian.
All else being equal, compiled code is faster than the JVM, and there is simply no garbage collection you have to employ tactics to avoid.
Having been written in C/C++, eXtremeDB (my company's product) is able to avoid using the C run-time memory management altogether. Managing the memory area entirely within the database software enables the use of highly efficient & purpose-specific memory managers, and eliminates the potential for memory leaks (from the whole system point of view, e.g. if 200GB is set aside for the in-memory database, it will never exceed 200GB). eXtremeDB is not unique in this regard; other in-memory DBMS written in C/C++ are also able to avoid the C run-time malloc/free or C++ new/delete. So please don't ding me for making a product pitch, I'm not. I'm pointing out a capability that is possible with a C/C++ implementation that may not be available with a JVM.
The other answerers are correct that a crappy implementation of a SQL execution plan for a given query can overwhelm any advantage of compiled code vs the JVM, but at some point you've got to have confidence that your DBMS vendor knows what they are doing (and is interested in improving the product if a plan is demonstrably inefficient or wrong). If you're not using SQL, then the goodness/badness of a SQL optimizer is not part of the equation, and it really comes down to how well the database system's index methods are written, and to the availability of different types of indexes for different search requirements (e.g. a hash index will generally be better than a b-tree for exact-match lookup, but a hash index can't support partial-key (wildcard) search or ordered retrieval).
There are some public (independent, audited) benchmarks you can look to. We have participated in a few STAC-M3 benchmarks, though only one other DBMS has done so (the DBMSs you listed specifically have not).
So, I've come to a place where I wanted to segment the data I store in Redis into separate databases, as I sometimes need to make use of the KEYS command on one specific kind of data and wanted to separate it to make that faster.
If I segment into multiple databases, everything is still single threaded, and I still only get to use one core. If I just launch another instance of Redis on the same box, I get to use an extra core. On top of that, I can't name Redis databases, or give them any sort of more logical identifier. So, with all of that said, why/when would I ever want to use multiple Redis databases instead of just spinning up an extra instance of Redis for each extra database I want? And relatedly, why doesn't Redis try to utilize an extra core for each extra database I add? What's the advantage of being single threaded across databases?
You don't want to use multiple databases in a single redis instance. As you noted, multiple instances lets you take advantage of multiple cores. If you use database selection you will have to refactor when upgrading. Monitoring and managing multiple instances is not difficult nor painful.
Indeed, you would get far better metrics on each db by segregation based on instance. Each instance would have stats reflecting that segment of data, which can allow for better tuning and more responsive and accurate monitoring. Use a recent version and separate your data by instance.
As Jonaton said, don't use the KEYS command. You'll find far better performance if you simply create a key index: whenever you add a key, add the key name to a set. The KEYS command is not terribly useful once you scale up, since it will take significant time to return.
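A minimal sketch of that key-index pattern with the Jedis client (key and set names are illustrative):

import redis.clients.jedis.Jedis;
import java.util.Set;

public class KeyIndexDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "user:1001";
            jedis.set(key, "{\"name\": \"alice\"}");
            jedis.sadd("index:user-keys", key);   // maintain the index as you write

            // Enumerate via the index set instead of KEYS user:*
            Set<String> userKeys = jedis.smembers("index:user-keys");
            for (String k : userKeys) {
                System.out.println(k + " -> " + jedis.get(k));
            }
        }
    }
}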
Let the access pattern determine how to structure your data, rather than storing it the way you think works and then working out how to access and mince it later. You will see far better performance, and you will often find the data-consuming code much cleaner and simpler.
Regarding single-threading, consider that Redis is designed for speed and atomicity. Sure, actions modifying data in one db need not wait on another db, but what if that action is saving to the dump file, or processing transactions on slaves? At that point you start getting into the weeds of concurrency programming.
By using multiple instances you turn multi threading complexity into a simpler message passing style system.
In principle, Redis databases on the same instance are no different than schemas in RDBMS database instances.
So, with all of that said, why/when would I ever want to use multiple Redis databases instead of just spinning up an extra instance of Redis for each extra database I want?
There's one clear advantage of using redis databases in the same redis instance, and that's management. If you spin up a separate instance for each application, and let's say you've got 3 apps, that's 3 separate redis instances, each of which will likely need a slave for HA in production, so that's 6 total instances. From a management standpoint, this gets messy real quick because you need to monitor all of them, do upgrades/patches, etc. If you don't plan on overloading redis with high I/O, a single instance with a slave is simpler and easier to manage provided it meets your SLA.
Even Salvatore Sanfilippo (creator of Redis) thinks it's a bad idea to use multiple DBs in Redis. See his comment here:
https://groups.google.com/d/topic/redis-db/vS5wX8X4Cjg/discussion
I understand how this can be useful, but unfortunately I consider Redis multiple database errors my worst decision in Redis design at all... without any kind of real gain, it makes the internals a lot more complex. The reality is that databases don't scale well for a number of reason, like active expire of keys and VM. If the DB selection can be performed with a string I can see this feature being used as a scalable O(1) dictionary layer, that instead it is not. With DB numbers, with a default of a few DBs, we are communication better what this feature is and how can be used I think. I hope that at some point we can drop the multiple DBs support at all, but I think it is probably too late as there is a number of people relying on this feature for their work.
I don't really know any benefits of having multiple databases on a single instance. I guess it's useful if multiple services use the same database server(s), so you can avoid key collisions.
I would not recommend building around using the KEYS command, since it's O(n) and that doesn't scale well. What are you using it for that you can accomplish in another way? Maybe redis isn't the best match for you if functionality like KEYS is vital.
I think they mention the benefits of a single-threaded server in their FAQ, but the main thing is simplicity - you don't have to bother with concurrency in any real way. Every action is blocking, so no two things can alter the database at the same time. Ideally you would have one (or more) instance per core on each server, and use a consistent hashing algorithm (or a proxy) to divide the keys among them. Of course, you'll lose some functionality - pipelining will only work for things on the same server, sorts become harder, etc.
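A bare-bones sketch of dividing keys across several single-threaded Redis instances (ports are assumptions, and plain modulo hashing stands in for proper consistent hashing or a proxy):

import redis.clients.jedis.Jedis;
import java.util.List;

public class ShardedRedis {
    private final List<Jedis> shards = List.of(
            new Jedis("localhost", 6379),
            new Jedis("localhost", 6380),
            new Jedis("localhost", 6381));

    private Jedis shardFor(String key) {
        // A real setup would use consistent hashing so adding a shard doesn't remap most keys.
        return shards.get(Math.floorMod(key.hashCode(), shards.size()));
    }

    public void set(String key, String value) {
        shardFor(key).set(key, value);
    }

    public String get(String key) {
        return shardFor(key).get(key);
    }
}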
Redis databases can be used in the rare cases of deploying a new version of the application, where the new version requires working with different entities.
I know this question is years old, but there's another reason multiple databases may be useful.
If you use a "cloud Redis" from your favourite cloud provider, you probably have a minimum memory size and will pay for what you allocate. If however your dataset is smaller than that, then you'll be wasting a bit of the allocation, and so wasting a bit of money.
Using databases you could use the same Redis cloud-instance to provide service for (say) dev, UAT and production, or multiple instances of your application, or whatever else - thus using more of the allocated memory and so being a little more cost-effective.
A use-case I'm looking at has several instances of an application which use 200-300K each, yet the minimum allocation on my cloud provider is 1M. We can consolidate 10 instances onto a single Redis without really making a dent in any limits, and so save about 90% of the Redis hosting cost. I appreciate there are limitations and issues with this approach, but thought it worth mentioning.
I am using Redis to implement a blacklist of email addresses, and I have different TTL values for different levels of blacklisting, so having different DBs on the same instance helps me a lot.
Using multiple databases in a single instance may be useful in the following scenario:
Different copies of the same database could be used for production, development, or testing with real-time data. People may use a replica to clone a Redis instance for the same purpose. However, the former approach makes it easier for existing running programs to simply select the right database to switch to the intended mode.
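For example, a minimal sketch with Jedis (the mapping of database numbers to environments is an assumption):

import redis.clients.jedis.Jedis;

public class SelectDatabaseDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.select(0);                 // production keyspace
            jedis.set("config:mode", "prod");

            jedis.select(1);                 // same instance, isolated keyspace for testing
            jedis.set("config:mode", "test");
        }
    }
}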
Our motivation has not been mentioned above. We use multiple databases because we routinely need to delete a large set of a certain type of data, and FLUSHDB makes that easy. For example, we can clear all cached web pages, using FLUSHDB on database 0, without affecting all of our other use of Redis.
There is some discussion here but I have not found definitive information about the performance of this vs scan and delete:
https://github.com/StackExchange/StackExchange.Redis/issues/873
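The FLUSHDB pattern itself is tiny; a hedged Jedis sketch (database numbers are illustrative):

import redis.clients.jedis.Jedis;

public class FlushCacheDatabaseDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.select(0);   // the database holding cached web pages
            jedis.flushDB();   // clears every key in DB 0; the other databases are untouched
        }
    }
}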
I have an interesting situation where I'm near the end of an evaluation period for a RavenDB prototype for use with a project at our company. The reason it's interesting is that 99.99% of the time, I believe it fits Raven's sweet spot; it repeatedly queries for new data, often, and in small batches (< 1000 documents at a time).
However, we do have an initial load period, where we need to load two days' worth of data, which can be 3 million (or more) records in some cases.
A diagram would help here, but in short: the Transfer Service is responsible for getting the correct data out of three production databases and storing it in RavenDB. The WCF service will query this data and make it available to its clients.
Once we do the initial load of millions of records/documents into RavenDB, we'll rarely have to do that again.
As an initial load test, on a machine with 4GB RAM and two processors, it took just over 23 minutes to read the initial data. In this case, it was only about 1.28 million records. I eliminated all async operations from this initial load, because I wanted each read to not be interfered with by other read operations. I found the best results this way.
To accomplish all this, I had to change settings that aren't recommended to be changed:
I had to increase the timeout:
documentStore.JsonRequestFactory.ConfigureRequest += (e, x) => ((HttpWebRequest)x.Request).Timeout = ravenTimeoutInMilliseconds;
In the Raven.Server.exe.config, I had to increase the page size (to int.MaxValue):
<add key="Raven/MaxPageSize" value="2147483647"/>
And in my retrieval methods, I had to use Take(int.MaxValue):
return session.Query<T>().Where(whereClause).Take(int.MaxValue).ToList();
Remember this is all for that one-time, initial load. After that, it's many queries, quickly, and often. I should also note that each document is self-contained in RavenDB. There are no relationships to manage.
Knowing all this, is RavenDB a good fit?
A good fit for what?
Full-text search? Yes. Background aggregations (map/reduce ones)? Yes. Easy replication and sharding, i.e. scaling? Yes...
Ad-hoc reporting? No. Support for probably thousands of third party tools? No...
If you're talking about performance, you probably want to look at Oren's latest post on that. His numbers are quite similar to yours: http://ayende.com/blog/154913/ravendb-amp-freedb-an-optimization-story
From what I understand of your question, you need to "prep" the WCF web service. To do this you read 1.2M docs from RavenDB (in about 23 minutes) and hold them in memory, so the WCF service can then serve queries from them. Is this right, or am I missing something?
Why not get the WCF service to send its queries to Raven one at a time? I.e. for each query it gets from a client, ask RavenDB to do the query for it?
From what you've told us in the other answer's comments, I believe the only good way to serve the WCF clients fast enough is to actually store everything in memory, so just the way you do it now.
Whether RavenDB is a good fit for that situation depends on whether your data model benefits in other ways from the document-oriented nature. If you have dynamic data that would require some kind of EAV in a relational database and lots of joins, then RavenDB will probably be a very good solution. However, if you just need something you can throw flat data into, then I would go with a relational database here. In terms of licensing costs and ease of use, you might also want to take a look at PostgreSQL, as it is a really awesome database that comes completely free.
In one of my processes I have a SQL query that takes 10-20% of the total execution time. This query filters my database and loads a list of PricingGrid objects.
So I want to improve its performance.
So far I have come up with two possible solutions:
Use a NoSQL solution; AFAIK these are good for improving read performance.
But the migration seems hard and needs a lot of work (like importing the data from SQL Server to NoSQL on a regular basis).
I don't have any knowledge of them; I don't even know which one I should use (the first I'd try is RavenDB, because I follow Ayende and it's made by the .NET community).
I might have some things to change in my model to make my objects suitable for a NoSQL database.
Load all my PricingGrid objects in memory (in a static IEnumerable).
This might be a problem when my server doesn't have enough memory to load everything.
I might end up reinventing the wheel (indexes, etc.) that the NoSQL providers have already built.
I think I'm not the first one wondering about this, so what would be the best solution? Are there any tools that could help me?
.NET 3.5, SQL Server 2005, Windows Server 2005
Migrating your data from SQL is only the first step.
Moving to a document store (like RavenDB or MongoDB) also means that you need to:
Denormalize your data
Perform schema validation in your code
Handle concurrency of complex operations in your code since you no longer have transactions (at least not the same way)
Perform rollbacks in the event of partial commits (changes)
Depending on your updates, reads and network model you might also need to handle conflicts
You provided very limited information but it sounds like your needs include a single database server and that your data fits well in the relational model.
In such a case I would vote against a NoSQL solution, it is more likely that you can speed up your queries with database optimizations and still retain all the added value of a RDBMS.
Non-relational databases are tools for a specific job (no matter how they sell them), if you need them it is usually because your data doesn't fit well in the relational model or if you have a need to distribute your data over multiple machines (size or availability). For instance, I use MongoDB for a write-intensive high throughput job management application. It is centralized and the data is very transient so the "cost" of having low durability is acceptable. This doesn't sound like the case for you.
If you prefer to use a NoSQL solution, perhaps you should try Memcached+MySQL (InnoDB). This will allow you to get the speed benefits of an in-memory cache (in the form of a memcached daemon plugin) with the underlying protection and capabilities of an RDBMS (MySQL). It should also ease data migration and somewhat reduce the amount of changes required in your code.
I myself have never used it, I find that I either need NoSQL for the reasons I stated above or that I can optimize the RDBMS using stored procedures, indexes and table views in a way which is sufficient for my needs.
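A rough read-through sketch of that Memcached+MySQL approach (shown in Java with spymemcached purely for illustration; table, column, and key names are assumptions):

import net.spy.memcached.MemcachedClient;
import java.sql.*;

public class ReadThroughPricingCache {
    private final MemcachedClient cache;
    private final Connection db;

    public ReadThroughPricingCache(MemcachedClient cache, Connection db) {
        this.cache = cache;
        this.db = db;
    }

    public String getPricingGrid(long id) throws SQLException {
        String key = "pricinggrid:" + id;
        Object cached = cache.get(key);                 // hit: served straight from memory
        if (cached != null) {
            return (String) cached;
        }
        try (PreparedStatement ps = db.prepareStatement(
                "SELECT grid_json FROM pricing_grid WHERE id = ?")) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return null;
                }
                String grid = rs.getString(1);
                cache.set(key, 300, grid);              // miss: populate the cache for 5 minutes
                return grid;
            }
        }
    }
}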
Asaf has provided great information in regards to the usage of NoSQL and when it is most appropriate. Given that your main concern was performance, I would tend to agree with his opinion - it would take you much more time and effort to adopt a completely new (and very different) data persistence platform than it would to trick out your SQL Server cluster. That said, my answer is mainly to address the "how" part of your question.
Addressing misunderstandings:
Denormalizing Data - You do not need to manually denormalize your existing data. This will be done for you when it is migrated over. More than anything you need to simply think about your data in a different fashion - root aggregates, entity and value types, etc.
Concurrency/Transactions - Transactions are possible in both Mongo and Raven, they are simply done in a different fashion. One of the inherent ways Raven does this is by using an ORM-like "unit of work" pattern with its RavenSession objects. Yes, your data validation needs to be done in code, but you already should be doing it there anyway. In my experience this is an over-hyped con.
How:
Install Raven or Mongo on a primary server, run it as a service.
Create or extend an existing application that uses the database you intend to port. This application needs all the model classes/libraries that your SQL database provides persistence for.
a. In your "data layer" you likely have a repository class somewhere. Extract an interface from this, and use it to build another repository class for your Raven/Mongo persistence. Both DBs have plenty of good documentation for using their APIs to push/pull/update changes in the document graphs. It's pretty damn simple.
b. Load your SQL data into C# objects in memory. Pull back your top-level objects (just the entities) and load their inner collections and related data in memory. Your repository is probably already doing this (e.g. when fetching an Order object, ensure not only its properties but also associated collections like Items are loaded in memory).
c. Instantiate your Raven/Mongo repository and push the data to it. Primary entities become "top level documents" or "root aggregates" serialized in JSON, with their collections' data nested within. Save changes and close the repository. Note: you may break this step down into as many little pieces as your data requires.
Once your data is migrated, play around with it and ensure you are satisfied. You may want to modify your application models a little to adjust the way they are persisted to Raven/Mongo - for instance you may want to make both Orders and Items top-level documents and simply use reference values (much like relationships in RDBMS systems). Watch out here though, as doing so sort of goes against the principle and the performance benefits behind NoSQL, since now you have to hit the DB twice to get the Order and the Items.
If satisfied, shard/replicate your mongo/raven servers across your remaining available server boxes.
Obviously there are tons of little details I did not explain, but that is the general process, and much of it depends on the applications already consuming the database and may be tricky if more than one app/system talks to it.
Lastly, just to reiterate what Asaf said... learn as much as you can about NoSQL and its best use cases. It is an amazing tool, but not a golden solution for all data persistence. In your case, try to really find the bottlenecks in your current solution and see if they are solvable. As one of my systems guys says, "technology for technology's sake is bullshit".
Can you please point to alternative data storage tools and give good reasons to use them instead of good-old relational databases? In my opinion, most applications rarely use the full power of SQL--it would be interesting to see how to build an SQL-free application.
Plain text files in a filesystem
Very simple to create and edit
Easy for users to manipulate with simple tools (i.e. text editors, grep etc)
Efficient storage of binary documents
XML or JSON files on disk
As above, but with a bit more ability to validate the structure.
Spreadsheet / CSV file
Very easy model for business users to understand
Subversion (or similar disk based version control system)
Very good support for versioning of data
Berkeley DB (Basically, a disk based hashtable)
Very simple conceptually (just un-typed key/value)
Quite fast
No administration overhead
Supports transactions I believe
Amazon's Simple DB
Much like Berkeley DB I believe, but hosted
Google's App Engine Datastore
Hosted and highly scalable
Per document key-value storage (i.e. flexible data model)
CouchDB
Document focus
Simple storage of semi-structured / document based data
Native language collections (stored in memory or serialised on disk)
Very tight language integration
Custom (hand-written) storage engine
Potentially very high performance in the required use cases
I can't claim to know anything much about them, but you might also like to look into object database systems.
Matt Sheppard's answer is great (mod up), but I would take these factors into account when thinking about a spindle:
Structure : does it obviously break into pieces, or are you making tradeoffs?
Usage : how will the data be analyzed/retrieved/grokked?
Lifetime : how long is the data useful?
Size : how much data is there?
One particular advantage of CSV files over RDBMSes is that they are easy to condense and move around to practically any other machine. We do large data transfers, and everything's simple enough that we just use one big CSV file, which is easy to script with tools like rsync. To reduce repetition in big CSV files, you could use something like YAML. I'm not sure I'd store anything like JSON or XML unless you had significant relationship requirements.
As far as not-mentioned alternatives, don't discount Hadoop, which is an open source implementation of MapReduce. This should work well if you have a TON of loosely structured data that needs to be analyzed, and you want to be in a scenario where you can just add 10 more machines to handle data processing.
For example, I started trying to analyze performance data that was essentially all timing numbers of different functions logged across around 20 machines. After trying to stick everything in an RDBMS, I realized that I really don't need to query the data again once I've aggregated it, and it's only useful to me in its aggregated format. So I keep the log files around, compressed, and leave the aggregated data in a DB.
Note I'm more used to thinking with "big" sizes.
The filesystem's pretty handy for storing binary data, which never works amazingly well in relational databases.
Try Prevayler:
http://www.prevayler.org/wiki/
Prevayler is an alternative to an RDBMS. The site has more info.
If you don't need ACID, you probably don't need the overhead of an RDBMS. So, determine whether you need that first. Most of the non-RDBMS answers provided here do not provide ACID.
Custom (hand-written) storage engine / Potentially very high performance in required use cases
http://www.hdfgroup.org/
If you have enormous data sets, instead of rolling your own, you might use HDF, the Hierarchical Data Format.
http://en.wikipedia.org/wiki/Hierarchical_Data_Format:
HDF supports several different data models, including multidimensional arrays, raster images, and tables.
It's also hierarchical like a file system, but the data is stored in one magic binary file.
HDF5 is a suite that makes possible the management of extremely large and complex data collections.
Think petabytes of NASA/JPL remote sensing data.
G'day,
One case that I can think of is when the data you are modelling cannot be easily represented in a relational database.
One such example is the database used by mobile phone operators to monitor and control base stations for mobile telephone networks.
In almost all of these cases, an OO DB is used, either a commercial product or a self-rolled system that allows hierarchies of objects.
I've worked on a 3G monitoring application for a large company who will remain nameless, but whose logo is a red wine stain (-: , and they used such an OO DB to keep track of all the various attributes for individual cells within the network.
Interrogation of such DBs is done using proprietary techniques that are, usually, completely free from SQL.
HTH.
cheers,
Rob
Object databases are not relational databases. They can be really handy if you just want to stuff some objects into a database. They also support versioning and modifying classes for objects that already exist in the database. db4o is the first one that comes to mind.
In some cases (financial market data and process control, for example) you might need to use a real-time database rather than an RDBMS. See the wiki link.
There was a RAD tool called JADE written a few years ago that has a built-in OODBMS. Earlier incarnations of the DB engine also supported Digitalk Smalltalk. If you want to sample application building using a non-RDBMS paradigm this might be a start.
Other OODBMS products include Objectivity, GemStone (You will need to get VisualWorks Smalltalk to run the Smalltalk version but there is also a java version). There were also some open-source research projects in this space - EXODUS and its descendent SHORE come to mind.
Sadly, the concept seemed to die a death, probably due to the lack of a clearly visible standard and relatively poor ad-hoc query capability compared to SQL-based RDBMS systems.
An OODBMS is most suitable for applications with core data structures that are best represented as a graph of interconnected nodes. I used to say that the quintessential OODBMS application was a Multi-User Dungeon (MUD) where rooms would contain players' avatars and other objects.
You can go a long way just using files stored in the file system. RDBMSs are getting better at handling blobs, but this can be a natural way to handle image data and the like, particularly if the queries are simple (enumerating and selecting individual items.)
Other things that don't fit very well in a RDBMS are hierarchical data structures and I'm guessing geospatial data and 3D models aren't that easy to work with either.
Services like Amazon S3 provide simpler storage models (key->value) that don't support SQL. Scalability is the key there.
Excel files can be useful too, particularly if users need to be able to manipulate the data in a familiar environment and building a full application to do that isn't feasible.
There are a large number of ways to store data - even "relational database" covers a range of alternatives, from a simple library of code that manipulates a local file (or files) as if it were a relational database on a single-user basis, through file-based systems that can handle multiple users, to a generous selection of serious "server"-based systems.
We use XML files a lot - you get well-structured data, nice tools for querying it, the ability to do edits if appropriate, something that's human readable, and you don't have to worry about the db engine working (or the workings of the db engine). This works well for stuff that's essentially read-only (in our case more often than not generated from a db elsewhere) and also for single-user systems where you can just load the data in and save it out as required - but you're creating opportunities for problems if you want multi-user editing, at least of a single file.
For us that's about it - we're either going to use something that will do SQL (MS offer a set of tools that run from a .DLL to do single user stuff all the way through to enterprise server and they all speak the same SQL (with limitations at the lower end)) or we're going to use XML as a format because (for us) the verbosity is seldom an issue.
We don't currently have to manipulate binary data in our apps so that question doesn't arise.
Murph
One might want to consider the use of an LDAP server in the place of a traditional SQL database if the application data is heavily key/value oriented and hierarchical in nature.
BTree files are often much faster than relational databases. SQLite contains within it a BTree library which is in the public domain (as in genuinely 'public domain', not using the term loosely).
Frankly though, if I wanted a multi-user system I would need a lot of persuading not to use a decent server relational database.
Full-text databases, which can be queried with proximity operators such as "within 10 words of," etc.
Relational databases are an ideal business tool for many purposes - easy enough to understand and design, fast enough, adequate even when they aren't designed and optimized by a genius who could "use the full power," etc.
But some business purposes require full-text indexing, which relational engines either don't provide or tack on as an afterthought. In particular, the legal and medical fields have large swaths of unstructured text to store and wade through.
Also:
* Embedded scenarios - where it is usually required to use something smaller than a full-fledged RDBMS. Db4o is an ODB that can be easily used in such a case.
* Rapid or proof-of-concept development - where you wish to focus on the business logic and not worry about the persistence layer.
CAP theorem explains it succinctly. SQL mainly provides "Strong Consistency: all clients see the same view, even in presence of updates".
K.I.S.S: Keep It Small and Simple
I would offer RDBMS :)
If you do not want any trouble with setup/administration, go for SQLite.
It is a built-in RDBMS with full SQL support. It even allows you to store any type of data in any column.
The main advantage over, for example, a log file: if you have a huge one, how are you going to search it? With an SQL engine you just create an index and speed the operation up dramatically.
About full-text search: SQLite has modules for full-text search too.
Just enjoy a nice standard interface to your data :)
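To make the indexing and full-text search points concrete, here's a small sketch via JDBC (the schema is illustrative, and the FTS query assumes the driver build ships the FTS5 module):

import java.sql.*;

public class SqliteSearchDemo {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app-data.db");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS log (ts INTEGER, level TEXT, message TEXT)");
            st.execute("CREATE INDEX IF NOT EXISTS idx_log_ts ON log(ts)");   // fast range scans over time
            st.execute("CREATE VIRTUAL TABLE IF NOT EXISTS log_fts USING fts5(message)");

            try (ResultSet rs = st.executeQuery(
                    "SELECT rowid, message FROM log_fts WHERE log_fts MATCH 'timeout'")) {
                while (rs.next()) {
                    System.out.println(rs.getLong(1) + ": " + rs.getString(2));
                }
            }
        }
    }
}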
One good reason not to use a relational database would be when you have a massive data set and want to do massively parallel and distributed processing on the data. The Google web index would be a perfect example of such a case.
Hadoop also has an implementation of the Google File System called the Hadoop Distributed File System.
I would strongly recommend Lua as an alternative to SQLite-kind of data storage.
Because:
The language was designed as a data description language to begin with
The syntax is human readable (XML is not)
One can compile Lua chunks to binary, for added performance
This is the "native language collection" option of the accepted answer. If you're using C/C++ as the application level, it is perfectly reasonable to throw in the Lua engine (100kB of binary) just for the sake of reading configs/data or writing them out.