Would MongoDB be a good idea for a social network site (developed in Ruby on Rails)?

My project (in Ruby on Rails 3) is to develop a "social network" site with the following features:
Users can be friends. Friendships are mutual, not asymmetric like Twitter's.
Users can publish links, to share them. Friends of a user can see what this user has shared.
Friends can comment on those shared links.
So basically we have Users, Links, and Comments, all interconnected. An interesting thing about social networks is that the User table has a kind of many-to-many relationship with itself.
I think I can handle that level of complexity with SQL and RoR.
My question is: would it be a good idea to use MongoDB (or CouchDB) for such a site?
To be honest, I think the answer is no. MongoDB doesn't seem to fit really well with many-to-many relationships. I can't think of a good MongoDB way to implement the friendship relationships. And I've read that Diaspora started with MongoDB but then switched back to classic SQL.
But some articles on the web defend MongoDB for social networks, and above all I want to make a well-informed decision, and not miss a really cool aspect of MongoDB that would change my life.
Also, I've heard about graph DBs, which are probably great, but they seem too young to me, and I don't know how they'd fit with RoR (not to mention Heroku).
So, am I missing something?
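For reference, here's roughly the relational shape I have in mind (sketched quickly in Python with sqlite3 rather than Rails migrations; the table and column names are just mine):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    -- Self-referential many-to-many: one row per direction of a friendship.
    CREATE TABLE friendships (
        user_id   INTEGER REFERENCES users(id),
        friend_id INTEGER REFERENCES users(id),
        PRIMARY KEY (user_id, friend_id)
    );
""")

def befriend(a, b):
    # Mutual friendship = both directions, written atomically in one transaction.
    with con:
        con.execute("INSERT INTO friendships VALUES (?, ?)", (a, b))
        con.execute("INSERT INTO friendships VALUES (?, ?)", (b, a))
```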

My advice would be to use whatever you're most familiar with so that you can get up and running quickly. From your question it sounds like that would be SQL rather than MongoDB.

I like MongoDB and use it a lot, but I am of the opinion that if you are dealing with relational data, you should use the right tool for it. We have relational databases for that. Mongo and Couch are document stores.
Mongo has a serious disadvantage if you are going to be maintaining a lot of inter-document links. Writes are only guaranteed to be atomic for one document. So you could have inconsistent updates for relations if you are not careful with your schema.
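For example, a mutual friendship stored as arrays of friend IDs spans two documents, so it takes two separate writes (a hedged sketch with pymongo; the collection and field names are my assumptions):

```python
from pymongo import MongoClient

db = MongoClient()["social"]  # assumes a local mongod

def befriend(alice_id, bob_id):
    # Each update_one is atomic on its own, but the pair is not:
    db.users.update_one({"_id": alice_id},
                        {"$addToSet": {"friend_ids": bob_id}})
    # a crash here leaves alice linked to bob without the reverse edge.
    db.users.update_one({"_id": bob_id},
                        {"$addToSet": {"friend_ids": alice_id}})
```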
The good thing about MongoDB is that it scales very well: you can shard and create replica sets. Foursquare currently uses MongoDB and it has been working pretty well for them. MongoDB also does map-reduce and has decent geospatial support. The team that develops MongoDB is excellent; I live in NY, where they are based, and have met them. That said, I doubt you will run into scaling issues starting out.
As far as Diaspora switching... I would not want to follow anything they are doing :)
Your comment about graph DBs is interesting, though. I would probably not use a graph DB as my primary DB either, but when dealing with relationships you can do amazing things with them. In fact, the demo the folks from graph DB companies will usually give you is extracting relationship knowledge from a social network. Nothing prevents you from playing with one in the future for network analysis.
In conclusion: when you are starting out, you are not yet running into the problems of massive scale, and you are probably limited on time and money. Keep in mind that even Facebook does not use just one technology; they have expanded into NoSQL for certain functionality (like Facebook messaging). Nothing stops you from later using, say, Mongo and GridFS to handle image uploads or geolocation, etc. It is good to grow as your needs change. I think your gut feeling that you have an SQL app here is right, and the benefits of MongoDB would not be realized for a while.

An interesting possibility for this is Riak. It is a cross between a key-value store and a graph database: the user ID could be your key, and comments and links could be stored in your value. But Riak also has key-to-key linking, which could be used to connect your users as friends. The links are asymmetric, so you would need to handle adding and deleting in both directions, but that shouldn't be too hard.
But note that Riak is not a document datastore, which means it is value-agnostic: it won't help you extract parts of your value. So if you want to pull out a single comment, you'll first need to retrieve every comment and link stored with the user.
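In other words, reaching one comment means fetching and decoding the whole value first (a generic sketch; `store` is a hypothetical key-value client, not Riak's actual API):

```python
import json

def get_comment(store, user_id, index):
    # A value-agnostic store returns the entire blob for a key...
    value = json.loads(store.get(user_id))
    # ...so every comment and link comes back before we can pick one out.
    return value["comments"][index]
```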
You may also want to check out other graph databases. Social graphs are the prototypical use-case for a graph database.

Sorry, but you've got to start learning about graphs. Use a graph database that spits out data in JSON format and write it to your MongoDB database. You are dealing with a network, which means there must be a graph of some sort that lets you define an ego (a user, for example) as a focal point or logical center, connected to the nodes around it. Nodes could be properties of a particular user, such as name, likes, friends, pictures, etc. An ego graph would resemble a hub and spokes if you were to draw it.
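As a rough illustration, an ego's immediate neighborhood is one Cypher query away (a sketch using the official neo4j Python driver; the label, relationship type, and credentials are assumptions):

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

with driver.session() as session:
    # Fetch every node directly connected to the ego, in either direction.
    result = session.run(
        "MATCH (ego:User {name: $name})-[:FRIEND]-(friend) "
        "RETURN friend.name AS name",
        name="alice",
    )
    print([record["name"] for record in result])
```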
For further reading, you can refer to how Facebook implemented its Graph API - https://developers.facebook.com/docs/graph-api/overview/
Here is an attempt to connect MongoDB and the Neo4j graph database - https://neo4j.com/developer/mongodb/

Related

What do I need to know about databases in order to create a quality Django app?

I'm trying to optimize my site and found this nice little Django doc:
Database Access Optimization, which suggests profiling followed by indexing and the selection of proper fields as the starting point for database optimization.
Normally, the Django docs explain things pretty well, even things that more experienced programmers might consider "obvious". Not so in this case. After no explanation of indexing, the doc goes on to say:
We will assume you have done the obvious things above.
Uhhh. Wait! What the heck is indexing?
Obviously I can figure out what indexing is via Google; my question is: what do I need to know about databases in order to create a scalable website? What should I be aware of about the Django framework specifically? What other "obvious" things should I know? Where can I learn them?
I'm looking to get pointed in a direction here. I don't need to learn anything and everything about SQL; I just want to be informed enough to build my app the right way.
Thanks in advance!
I encourage you to read all that the other answers suggest and whatever else you can find on the subject, because it's all good information to know and will make you a better programmer.
That said, one of the nice things about Django and other similar frameworks is that for the most part you don't have to know what's going on behind the scenes in the DB. Django adds indexes automatically for fields that need them. The encouragement to add more is based on the use cases of your app. If you continually query based on one particular field, you should ensure that that field is indexed. It might be already (if it's a foreign key, primary key, etc.), but other random fields typically aren't.
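For instance, if you constantly filter users by email, that field can be indexed declaratively (a minimal Django sketch; the model is hypothetical):

```python
from django.db import models

class Profile(models.Model):
    # Primary keys and ForeignKeys get indexes from Django automatically.
    user_name = models.CharField(max_length=100)
    # A "random" field you query on all the time should be indexed explicitly.
    email = models.CharField(max_length=254, db_index=True)
```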
There are also various optimizations that are specific to each database backend. Django can't do much here, because its goal is to remain database-independent. So if you're using PostgreSQL, MySQL, or whatever, read about the optimizations and best practices for that particular backend.
In addition to indexing, database design and database normalization are two very important concepts; Wikipedia covers both: http://en.wikipedia.org/wiki/Database_design and http://en.wikipedia.org/wiki/Database_normalization
In addition to these, having a basic understanding of your database of choice is necessary. Being able to add users, set permissions, and create a database are key things that you should know.
Learning how to back up your data is also crucial.
The list keeps getting longer; one should also be aware of the DB relationships that Django handles for you: OneToOne, ManyToMany, and ManyToOne. https://docs.djangoproject.com/en/dev/topics/db/models/
The performance impact of JOINs shouldn't be ignored. Accessing model properties in Django is easy, but understanding that some ForeignKey relationships can have huge performance impacts is something to consider too.
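As a concrete case, attribute access across a ForeignKey can silently issue one extra query per object unless you ask Django to fetch the relation up front (a minimal sketch; the models are hypothetical):

```python
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Comment(models.Model):
    author = models.ForeignKey(Author, on_delete=models.CASCADE)

# N+1 pattern: one query for the comments, plus one more per comment.
for comment in Comment.objects.all():
    print(comment.author.name)

# A single query with a JOIN instead; cheaper here, but JOIN costs add up.
for comment in Comment.objects.select_related("author"):
    print(comment.author.name)
```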
Once you have a basic understanding of these things you should be at a pretty good starting point for creating a non-trivial django app!
Wikipedia has a nice article about database indexes. They are similar(ish) to the index in a book, i.e. they let you (the computer) find things faster because you just look in the index (probably a very bad example :-)
As for performance, there are many things you can do. Since it is a very detailed subject in itself, and particular to each RDBMS, it would be distracting/irrelevant for the Django docs to go into great detail. The best thing is really to google performance tips for your particular RDBMS. There are some general tips, such as indexing and limiting queries to return only the required data.
I think one of the main things is good design: sticking as much as possible to normal form, and in general actually taking your database into consideration before programming your models (which you clearly seem to be doing). Naming conventions are also a big plus; remember, explicit is better than implicit :-)
To summarise:
Learn/understand the fundamentals, such as the relational model
Decide on a naming convention
Design your database, perhaps using an ERM tool
Prefer surrogate IDs
Use the correct data type of the minimum possible size
Use indexes appropriately and don't over-index
Avoid unnecessary queries and over-querying
Prioritise security and stability over raw performance
Once you have an up-and-running database, 'tune' it by analysing/profiling settings, queries, and design (see the sketch after this list)
Backup and archive regularly - cron
Hang out here :-)
If required, advance into replication (master/slave - Django supports this quite well too)
Consider upgrading your hardware
Don't get too hung up about it
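On the tuning point, the universal first step is asking the database how it executes a query (sketched with Python's sqlite3 for portability; every RDBMS has an equivalent EXPLAIN):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
con.execute("CREATE INDEX idx_users_email ON users(email)")

# The plan mentions idx_users_email when the index is actually used.
for row in con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?", ("a@b.c",)
):
    print(row)
```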

I need advice choosing a NoSQL database for a project with a lot of minute, related information

I am currently working on a private project that is going to use Google's GTFS spec to get information about hundreds of public transit agencies: their routes, stations, times, and other related information. I will be getting my information from here and the Google Code wiki page with similar info. There is a lot of data, and it's partitioned into multiple CSV-formatted text files. These can be huge, some ranging from 80-100 MB.
With the data I have, I want to translate it all into a nice solid database that I can build layers on top of to use for my project. I will be using GPS positioning to pinpoint a location and all surrounding stations/stops.
My goal is to access all the information for all these stops and stations with as few calls as possible, while keeping datasets small for queried results.
I am currently leaning towards MongoDB and CouchDB for their geospatial support, which can really optimize getting small datasets. But I also need to be sure to link all the stops on a route, because I will be propagating information along a transit route for that line. In this case I have found that I could benefit from a graph DB like Neo4j or OrientDB, but from what I know, neither has geospatial support, nor am I 100% sure that a graph DB is what I need.
The perfect solution might not exist, but I come here asking for help finding the best possible one for my situation. I know I will possibly have to work around limitations of whatever I choose, but I want to at least have done my research and know that it's the best I can get at the moment.
It has also been suggested that I split the data into multiple DBs, but that could get very messy because all the information is very tightly interconnected through IDs.
Any help would be appreciated.
Obviously a graph database fits your problem 100%. My advice here is to go for a geospatial module on top of Neo4j or OrientDB, although there are other free and open-source implementations.
I think the best option right now, with all the geospatial features implemented, is the neo4j-spatial package. But as far as I know, you can also reproduce most of the geospatial functionality on your own if necessary.
By the way, talking about splitting: if the volume of data/queries will be high, I strongly recommend you share the load and model with that in mind. Surely you can do something there.
I've used Mongo's geospatial features and can offer some guidance if you need help with a C# or JavaScript implementation - I would recommend it to start because it's super easy to use. I'm learning all about Neo4j right now, and I am working on a hybrid approach that takes advantage of both Mongo and Neo4j. You might want to cross-reference the documents in Mongo to the nodes in Neo4j using the Mongo object id.
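To give a feel for how easy the geo side is, a nearby-stops lookup is only a few lines (a sketch in Python with pymongo; the collection and field names are mine, and the 2dsphere index assumes a reasonably recent MongoDB):

```python
from pymongo import MongoClient, GEOSPHERE

stops = MongoClient()["transit"]["stops"]
stops.create_index([("loc", GEOSPHERE)])  # "loc" holds GeoJSON points

# Stops within 500 meters of a given [longitude, latitude], nearest first.
nearby = stops.find({"loc": {"$near": {
    "$geometry": {"type": "Point", "coordinates": [-73.97, 40.77]},
    "$maxDistance": 500,
}}})
```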
For my hybrid implementation, I'm storing profiles and any other large static data in Mongo. In Neo4j, I'm storing relationships like friend and friend-of-friend. If I wanted to analyze which movies two friends are most likely to want to watch together (or really any other relationship I hadn't thought of initially), keeping that object id reference lets me simply add some code instructing each node to go out and grab a list of movies from the related profile.
Added 2011-02-12:
Just wanted to follow up on this "hybrid" idea as I created prototypes for and implemented a few more solutions recently where I ended up using more than one database. Martin Fowler refers to this as "Polyglot Persistence."
I'm finding that I am often using a combination of a relational database, document database and a graph database (in my case this is generally SQL Server, MongoDB and Neo4j). Since the question is related to data modeling as much as it is to geospatial, I thought I would touch on that here:
I've used Neo4j for site organization (similar to the idea of hypermedia in the REST model), modeling social data and building recommendations (often based on social data). As a result, I will generally model this part of the application before I begin programming.
I often end up using MongoDB for prototyping the rest of the application because it provides such a simple persistence mechanism. I like to start developing an application with the user interface, so this ends up working well.
When I start moving entities from Mongo to SQL Server, the context is usually important. For instance, if I have an application that allows users to build daily reports based on periodically collected data, it may make sense to run a procedure each night that builds those reports and stores daily report objects in Mongo, which can be combined into larger aggregate reports as needed (obviously this ignores a few special cases, but they are not relevant to the point). On the other hand, if users need to pull on-demand reports limited to very specific time periods, it may make sense to keep everything in SQL Server and build those reports as needed.
That said, and this deserves more intense thought, here are some considerations that may be helpful:
I generally try to store entities in a relational database if I find that pulling an entity from the database - in other words, querying for the data required to generate an entity or list of entities that fulfills the requested parameters - does not require significant processing (multiple joins, for instance)
Do you require ACID compliance? (Aside: if you have a graph problem, you can leverage Neo4j for this.) There are document databases with ACID compliance, but there's a reason Mongo is not: What does MongoDB not being ACID compliant really mean?
One use of Mongo I saw in the wild that I thought was worth mentioning: Hadoop was being used to compute massive hash tables that were then stored in Mongo. I believe a similar approach is used by TripAdvisor for user-based customization in terms of targeting offers, advertising, etc.
NoSQL only exists because MySQL users assume that all databases share MySQL's performance problems once the database grows large and/or becomes complex.
I suggest that you use PostGIS. You can use the same database for the rest of your data needs as well.
http://postgis.refractions.net/

Strategic issue: Mixing relational and non-relational db?

There has been a lot of talk about contra-revolutionary NoSQL databases like Cassandra, CouchDB, Hypertable, MongoDB, Project Voldemort, BigTable, and so many more. As far as I am concerned, the strongest pros are scalability, performance and simplicity.
I am seriously considering suggesting a non-relational DB for our next project. However, some team members are RDBMS fanatics, so convincing them of a hard switch might be impossible just for emotional reasons. Also, when it comes to complex data models, I personally still believe in the power of RDBMSes, with their low-level consistency-enforcement mechanisms.
Now here comes my question: could one seriously consider using both an RDBMS and a non-relational DB in a new project? The complex but not performance-critical parts of the data model would still be implemented with a relational model and DB, while all performance-critical yet simple models would be implemented with a non-relational DB. Furthermore, such a soft paradigm shift would be much easier to sell to emotionally invested team members than a hard one.
Would anyone recommend such an approach? Or would you rather recommend a black or white, i.e. relational or non-relational approach? All comments highly welcome!
P.S.: Any idea if such a mix-up works well with Spring and Hibernate/JPA?
Rob Conery recently wrote about his experience building his popular web application TekPub with both MongoDB and MySQL, highlighting the strengths of both:
The high-read stuff (account info, productions and episode info) is perfect for a "right now" kind of thing like MongoDb. The "what happened yesterday" stuff is perfect for a relational system.
At a high level, Rob breaks their application data into two scopes: runtime data and historical data. The current state of a user's shopping cart, for example, is great for keeping in MongoDB: it's an object blob that is constantly mutating. The historical record of what went in and out of the cart, when it happened, and the state of each checkout is a great fit for relational, tabular data in MySQL.
He summarizes with this:
It works perfectly. I could not be happier with our setup. It's incredibly low maintenance, we can back it up like any other solution, and we have the data we need when we need it.
Update May 2016 (6 years later)
This has only become more true in the last 6 years. It's now common to see NoSQL databases powering transactional stores and traditional relational databases powering analytical workloads.
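To make the runtime/historical split concrete, a checkout might write the mutable cart to Mongo and the immutable record to SQL (a hedged sketch; the schema and names are mine, with sqlite3 standing in for the relational side):

```python
import sqlite3
from pymongo import MongoClient

carts = MongoClient()["shop"]["carts"]    # the "right now" state
history = sqlite3.connect("history.db")   # the "what happened" record
history.execute(
    "CREATE TABLE IF NOT EXISTS orders (user_id TEXT, item TEXT, ts TEXT)"
)

def checkout(user_id, items):
    # Mutable blob: replace the whole cart document in place.
    carts.replace_one({"_id": user_id},
                      {"_id": user_id, "items": items}, upsert=True)
    # Immutable, tabular rows for later reporting and aggregation.
    with history:
        history.executemany(
            "INSERT INTO orders VALUES (?, ?, datetime('now'))",
            [(user_id, item) for item in items],
        )
```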
I use both. An RDBMS is good for complex analysis, reports, and data accessed in different ways by multiple users. NoSQL is great when I'm looking at data from a single user's perspective. A question I ask myself is: "Who is accessing this data?" If the answer is one user, I use NoSQL to store it.
Of course, there are other times when it is appropriate, but that's just one example.
As you mentioned, NoSQL is simpler storage, and it could force you to write complex code to maintain data... for example, storing a list of connections.
Non-relational key-value databases are best used for blob storage and caching.

Does using an ORM for social networking sites make any sense?

The reason I ask is that I need to know whether forgoing an ORM for a social networking site makes any sense at all.
My arguments for why an ORM does not fit social networking sites are:
Social networking sites are not a product, so you don't need to support multiple databases. You know what database to use, and you most likely won't change it every now and then.
Social networking sites require many-to-many relationships between users, and in the end you will sometimes need to write plain SQL to get those relations. The value of an ORM is thus decreased again.
Related to the previous point, an ORM sometimes issues multiple queries behind the scenes to fetch its records, which may be inefficient and may create a bottleneck in the database. In the end you have to write plain SQL queries anyway. If we know we are going to write plain SQL anyway, what is the point of using an ORM?
This is my limited understanding based on my limited experience. What is your experience with building social networking sites? Are my points valid? Is it lame to use bare SQL without worrying about an ORM? Where might an ORM help in building a social networking site?
The value of using an ORM is to help speed up development, by automating the tedious work of assigning query results to object fields, and tracking changes to object fields so you can save them to the database. Hence the term Object-Relational Mapping.
An ORM has little value for you regarding database portability, since you only use the one database you deploy on.
The runtime performance of an ORM is no better than, and typically much worse than, writing plain SQL yourself. Generic query generation often makes naive mistakes and produces redundant queries, as you mentioned. Again, the benefit is in development time, not runtime efficiency.
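To illustrate what the mapping buys you, here is the save-and-track cycle an ORM automates, sketched with SQLAlchemy 2.x as a neutral stand-in (the question isn't tied to any one framework):

```python
from sqlalchemy import String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class User(Base):
    __tablename__ = "users"
    id: Mapped[int] = mapped_column(primary_key=True)
    email: Mapped[str] = mapped_column(String(254))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(email="a@example.com"))
    session.commit()
    # Rows come back as objects, and changed fields are tracked...
    user = session.scalars(select(User)).one()
    user.email = "b@example.com"
    session.commit()  # ...so this issues the UPDATE for you.
```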
Using an ORM versus not using an ORM doesn't seem to make a huge difference for scalability. Other techniques with more bang-for-the-buck for scalability include:
Managing indexes in the RDBMS. Improve as many lookups as possible from O(n) to O(log n).
Intelligent caching architecture.
Horizontal scaling by database partitioning/sharding.
Database load-balancing and replication. Read from slave databases where possible, and write to a single master database. Index slaves and masters differently.
Supplement the RDBMS with complementary technology, such as Sphinx Search.
Vertical scaling by throwing hardware at the problem. Jeff Atwood has commented about this on the StackOverflow podcast.
Some people advocate moving your data management to a distributed architecture using cloud computing or distributed non-relational databases. This is probably not necessary until you get a very large number of users. Once you grow past a certain order of magnitude, all the rules change and you probably can't use an RDBMS anyway. But unless you are the data architect at Yahoo or Facebook or LinkedIn, don't worry about it -- cloud computing is over-hyped.
There's a common wisdom that the database is always the bottleneck in web apps, but there's also a case that improving efficiency on the front-end is at least as important. Cf. books by Steve Souders.
Julia Lerman, in Programming Entity Framework (2009), p. 503, shows a 220% increase in query execution cost between using a DataReader directly and using Microsoft's LINQ to Entities.
Also see Jeff Atwood's post on All Abstractions are Failed Abstractions, where he shows that using LINQ is at least double the cost of using plain SQL even in a naive way.
Here's my response to your points:
An ORM does not need multiple databases to be effective; in fact, most ORM usage is not driven by the ability to adapt to different databases.
Most modern ORM frameworks are flexible enough to fetch 'lightweight' variants of mapped classes; it really depends on how you implement them.
If really required, you can write native SQL queries within ORM frameworks. Note that caching and performance-related algorithms are often part of these frameworks.
IMO, an ORM helps you write cleaner, clearer code. If you use it sloppily you can cause excessive queries, but that isn't a rule by any means. If I were you I would start using the ORM and best practices of a framework, and only drop to SQL if you find yourself needing functionality that the ORM does not provide.
Also note that in web applications, many people are moving away from SQL databases. An ORM might help you to migrate to a non-relational database (precisely because you do not have SQL in your application code). Look at the use of JDO and JPA in Google's App Engine.
IMHO, an ORM is needed.
It allows you to access the database in an OOP way, regardless of whether you use multiple databases.
Cleaner code: you can define all the methods related to a particular table in that table's class file, and if you need a raw SQL join query, no problem, define it there. It follows DRY and KISS, and is much better than writing similar raw SQL queries again and again.
The odds of your site becoming big enough that scaling is an issue are quite small, so why prematurely optimize by doing everything in raw SQL instead of using an ORM? You can get fairly far by throwing better hardware at the database, assuming the database and application design are decent. While you may need to write raw SQL for things like building friend graphs, what about all the little things like updating the database when someone changes their email, sends a private message, or uploads a photo? Using an ORM can simplify all the simple database tasks you will have to do, while still allowing you to hand-code where absolutely necessary.

Wiki Database, is there one?

I was searching the net for something like a wiki database, just like Wikipedia but storing structured content, editable by users. What I was looking for was an online database, accessible by everyone, where people can design the schema and data with proper versioning of both. I couldn't find any such site. I am not sure if it's my search skills or if there really is no wiki database as of now. Does anyone out there know of anything like this?
I think there is great potential for something like this. A possible example would be a website with a GUI for querying a MySQL DB, where any visitor can create DB objects and populate data.
UPDATE: I registered the domain wikidatabase.org to get started on a tool, but I haven't found enough time yet. If anyone is interested in spending some time coding on this, please let me know at wikidatabase.org
It's not quite what you're looking for, but Semantic Mediawiki adds database-like features to MediaWiki:
http://semantic-mediawiki.org/wiki/Semantic_MediaWiki
It's still fundamentally a Wiki, but you can add semantic tags to pages ([[foo::bar]] [[baz::1000]]) and then do database-type queries across them: SELECT baz FROM pages WHERE foo=bar would be {{#ask: [[foo::bar]] | ?baz}}. There is even an embryonic SPARQL implementation for pseudo-SQL queries.
OK this question is old, but Google led me here, so for anyone else out there looking for a wiki for structured data: Take a look at Foswiki.
This might be like what you're looking for: dbpedia.org. They're working on extracting data from Wikipedia, and encoding it in a structured format using RDF, so that it can be queried using SPARQL.
Linkeddata.org has a big list of RDF data sets.
Do you mean something like http://www.freebase.com?
You should check out https://www.wikidata.org/wiki/Wikidata:Main_Page which is a bit different but still may be of interest.
Something that might come close to your requirements is Google Docs.
What's offered is document editing roughly similar to MS Word, and spreadsheets roughly similar to Excel. I'm thinking of the latter, of course.
In Google Docs, you can create spreadsheets for free; being spreadsheets, they naturally have a row-and-column structure similar to a database, which you can define flexibly. You can also share these sheets with other people. Sharing seems to be a by-invite-only process rather than open-to-all, but there may be other possibilities I'm not aware of, or that level of sharing might be enough for you in any case.
MindTouch should be able to do it. It's rather easy to get data in and out (for example, it's trivial to aggregate all the IPs for servers into one table).
I pretty much use it as a DB within the wiki itself (pages have tables, key/value pairs, inheritance, templates, etc.), but you can also interface with the API, write DekiScript, or grab the XML...
I like this idea. I have heard of some sites that are trying to pull together large datasets for various things for open consumption, but none that would allow a wiki feel.
You could start with something as simple as an installation of phpMyAdmin with a known password that would allow people to log in, create a database, edit data and query from any other site on the web.
It might suffer from more accuracy problems than wikipedia though.
OpenRecord, development of which seems to have halted in 2008, approaches this. It is a structured wiki in which pages are views on the data. Unlike RDBMSes, it is loosely typed: the system tries to make a best guess about what data you entered, but defaults to text when it cannot guess. Schemas appear to be implicit.
http://openrecord.org
An example of the typing that is given is that of a date. If you enter '2008' in a record, the system interprets this as a date. If you enter 'unknown' however, the system allows that as well.
Perhaps you might be interested in CouchDB:
Apache CouchDB is a document-oriented database that can be queried and indexed in a MapReduce fashion using JavaScript. CouchDB also offers incremental replication with bi-directional conflict detection and resolution.
I'm working on an Open Source PHP / Symfony / PostgreSQL app that does this.
It allows multiple projects; each project can have multiple directories, and each directory has a defined field structure. Admins set all this up.
Then members of the public can suggest new records, or edit or report existing ones. All of this is moderated and versioned.
It's early days yet, but it basically works and is already in real-world use in several projects.
Future plans already in progress include tools to help keep the data up to date, better searching/querying and field types that allow translations of content between languages.
There is more at http://www.directoki.org/
I'm surprised that nobody has mentioned Wikibase yet, which is the software that powers Wikidata.