The other day I was looking at the SOGo SQL tables and saw that the records are stored as vCard data instead of a proper table with separate columns for surname, phone number, etc.
There is a table called sogo_quick_contacts with the kind of schema I was expecting, but not all the columns are there, only some basic ones.
I'm wondering why it is that way. Is it better to query a record as the whole vCard data and extract the information I require? Wouldn't it be better (faster) to run a SELECT query naming the columns I'm looking for, if they were available?
CardDAV seems to provide this vCard data; is it more suitable for contact lookups, and why?
What if I just want to list names and birthdays? Wouldn't extracting all the vCards be much slower than a SQL query where everything is split up into different columns?
There are a lot of things which played a role in the way the ScalableOGo database schema is designed. Which BTW was designed by me ;-)
I think the core thing here is that it is designed specifically for two types of clients: a) native CardDAV clients (macOS/iOS contacts, Thunderbird) and b) the ScalableOGo web interface.
Native clients essentially never do the type of query you are asking about. They always sync a full vCard to their local cache. So there has to be a fast way to store and retrieve a full vCard, it is the most common operation against the server.
Web clients in 2003 (I suppose that was around the time I wrote the original web client) didn't yet have the capacity to store full objects locally and had to do what you are asking for: query just the fields the web client needs to display on a respective page.
This is what the 'quick' tables are for. They contain the columns the web client needs to display overviews and such. It is essentially an app-server-provided index over the vCard content.
This should be the main answer to your question.
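To make the contrast concrete, here is a sketch of the two access patterns (table and column names are from memory and may not match the current SOGo schema exactly):

    -- a) native CardDAV client syncing its cache: fetch the raw vCard blob
    SELECT c_content
      FROM sogo_contacts            -- hypothetical name for the blob store
     WHERE c_name = 'ab12cd34.vcf';

    -- b) web client rendering an overview page: read only the 'quick' index,
    --    no vCard parsing involved
    SELECT c_cn, c_mail, c_telephonenumber
      FROM sogo_quick_contacts
     WHERE c_folder_id = 42
     ORDER BY c_cn;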
There are other reasons too, in no particular order:
a vCard is quite complex; converting it to a proper SQL schema / normalising it is quite compute intensive (it was at the time, and this is still relevant, since the scale of systems has grown 100-fold over the last 15 years; hence OpenGroupware.org vs ScalableOGo). A BLOB just needs to be streamed to disk.
a CardDAV server is supposed to store a full vCard as-is, byte-by-byte. So that the clients can do ETag protected requests. And store custom fields (all clients use their own X- tags for client specific fields)
the quick tables are also set up so that they can be built asynchronously, though I think that feature never made it into SOGo. If a client quickly loads 10000 vCards into the server (e.g. just dragging the vCards onto the server using Finder), the server can batch-update the quick table in the background. The vCard-to-DB conversion doesn't have to happen in real time.
(notably native clients often have a similar 'quick' table setup locally.)
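As the sketch for the byte-by-byte point above: the raw store serving the native clients from a) only needs to preserve the bytes plus a change marker. A hypothetical layout (not SOGo's actual DDL):

    CREATE TABLE sogo_contacts (
        c_name    VARCHAR(255) PRIMARY KEY, -- resource name used in the CardDAV URL
        c_content TEXT NOT NULL,            -- the vCard, byte-for-byte as the client sent it
        c_version INTEGER NOT NULL          -- bumped on every change; basis for the ETag
    );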
Hope this helps. Maybe one would design the thing a little differently in 2017, though I think the basic ideas are still sound ;-)
I'm trying to build a mobile app that involves users following each other. I've seen posts (here) that say it is a cardinal sin to store a user's followees and followers as a list in a SQL database, as each "cell" should only store one discrete value.
However, is this the case for noSQL, document-based databases? What are the pros and cons of storing followers and followees as a list in the user document, vs storing it in a separate collection?
The only ones I can see right now are that retrieving the follower/followee data could be faster with the former method, since you don't have to index an entire follower/followee collection, unlike the latter method (or is the time difference negligible?). On the other hand, the former requires 2 writes (one to each user's document) every time someone follows/unfollows another user, which may be disadvantageous for billing in cloud databases, but might not be a problem if the database is hosted locally (?)
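For reference, the one-row-per-relationship shape those posts argue for looks something like this in SQL (a sketch; the table and column names are invented):

    CREATE TABLE follows (
        follower_id BIGINT NOT NULL REFERENCES users(id),
        followee_id BIGINT NOT NULL REFERENCES users(id),
        PRIMARY KEY (follower_id, followee_id)          -- covers "whom does X follow?"
    );
    CREATE INDEX follows_followee ON follows (followee_id);  -- covers "who follows X?"

    -- a follow is then a single write:
    INSERT INTO follows (follower_id, followee_id) VALUES (7, 42);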
I’m very new to working with databases so I’m hoping to get some insight from more experienced people about long term/large scale effects of this choice. Thanks!
I'm considering MongoDB right now. Just so the goal is clear, here is what needs to happen:
In my app, Finch (finchformac.com for details), I have thousands and thousands of entries per day for each user: which window they had open, the time they opened it, the time they closed it, and a tag if they chose one for it. I need this data to be backed up online so it can sync to their other Mac computers, etc. I also need to be able to draw charts online from their data, which means some complex queries hitting hundreds of thousands of records.
Right now I have tried using Ruby/Rails/Mongoid with a JSON parser on the app side, sending up data in increments of 10,000 records at a time; the data is then processed into other collections with a background MapReduce job. But this all seems to block and is ultimately too slow. What recommendations does anyone have for how to go about this?
You've got a complex problem, which means you need to break it down into smaller, more easily solvable issues.
Problems (as I see it):
1. You've got an application which is collecting data. You just need to store that data somewhere locally until it gets sync'd to the server.
2. You've received the data on the server and now you need to shove it into the database fast enough so that it doesn't slow down.
3. You've got to report on that data, and this sounds hard and complex.
You probably want to write this as some sort of API. For simplicity (and since you've got loads of spare processing cycles on the clients) you'll want these chunks of data processed on the client side into JSON, ready to import into the database. Once you've got JSON you don't need Mongoid (you just throw the JSON into the database directly). You probably don't need Rails either, since you're just creating a simple API, so stick with just Rack or Sinatra (possibly using something like Grape).
Now you need to solve the whole "this all seems to block and is ultimately too slow" issue. We've already removed Mongoid (so no need to convert from JSON -> Ruby objects -> JSON) and Rails. Before we get to doing a MapReduce on this data, you need to ensure it's getting loaded into the database quickly enough. Chances are you should architect the whole thing so that your MapReduce supports your reporting functionality. For sync'ing of data you shouldn't need to do anything but pass the JSON around. If your data isn't writing into your DB fast enough, you should consider sharding your dataset. This will probably be done using some user-based key, but you know your data schema better than I do. You need to choose your shard key so that when multiple users are sync'ing at the same time, they will probably be using different servers.
Once you've solved Problems 1 and 2, you need to work on your reporting. This is probably supported by the MapReduce functions inside Mongo. My first comment on this part is to make sure you're running at least Mongo 2.0; in that release 10gen sped up MapReduce (my tests indicate it is substantially faster than 1.8). Beyond that you can achieve further increases by sharding and by directing reads to the secondary servers in your replica set (you are using a replica set?). If this still isn't working, consider structuring your schema to support your reporting functionality. This lets you use more cycles on your clients to do work rather than loading your servers. But this optimisation should be left until after you've proven that conventional approaches won't work.
I hope that wall of text helps somewhat. Good luck!
I'm currently working on a Silverlight / MS SQL project where the Entity Framework has not been implemented, and I would like to know the best practice for dealing with calculated fields in this particular situation.
Considering that some external system might also consume my data directly in the DB or through a web service, here are the 3 options I can see right now.
1) Force any external system to consume data thru a web service and create all the calculated fields in the objects only.
2) Create the calculated fields in a DB view and resync your object with the server each time a value needs to be calculated.
3) Replicate the calculation rules in the object and the database view.
Any other suggestions would also be welcomed.
I would recommend following two principles: data decoupling and minimum duplication of functionality. Both suggest putting your calculations in one place only and serving them already calculated. So I would implement the calculations in the DB and serve them via a web service.
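For example, a calculated field can live in a single view that both the web service and any direct DB consumer read, so the rule exists exactly once (a sketch with invented table and column names):

    CREATE VIEW OrderSummary AS
    SELECT o.OrderId,
           o.Quantity,
           o.UnitPrice,
           o.Quantity * o.UnitPrice AS LineTotal  -- the calculation rule lives here, once
    FROM   Orders o;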
However, you have to consider your particular case. For example, if the calculations are VERY heavy, you could delegate them to the client to spare server resources. This could even be the reason you are using Silverlight. I am in a similar situation on a project, and I found that the best compromise is to push raw data to the client and have it do the heavy computations.
Having a best practice or approach for this kind of problem is difficult, as circumstances change and what was formerly a good approach might start to seem less useful. That said, where possible I would do anything data-related at the DB level, including calculated fields. This way you know that no matter where you are looking at the data from, you will see the same results. So your web service, SQL reporting, and anything else that needs to look at or receive data will see the same result.
background:
I'm in the design phase of building an app.
I want the app to display text and images; the problem is that I will have A LOT of them, hundreds to thousands.
This is my largest app so far, and I am unsure on how to handle all the data.
The question:
What would be the best way to store and access these images and text?
Would I use a formal database approach like SQL?
Or would it be better to navigate files/folders e.g. dropping all the files in res/drawable?
potentially useful facts:
The database will be stored and accessed natively so it can be accessed off-line.
The user will not be adding to the database in any way, only accessing the data.
The database will be updated every 6 months.
The application 'page' will display 1-5 images along with several blocks of text.
Concept:
The app will be like a recipe app: the user will pick some parameters, e.g. ingredients, type, diet, then select a recipe. Several images and blocks of text will then be displayed, showing and detailing the process of the recipe.
I apologize if this is repeated but I didn't see a specific answer for my purposes.
The "Best" approach will depend on the functionality of the database server in question.
Generally, you should store the images "In" the database until that becomes a performance issue. Once you start storing images "Outside" of the database you will have to handle all the issues that are normally taken care of by the database: disk space management, orphan records, file name conflicts, folder file limits, to name just a few. Depending on your situation these may be big issues, or they may be nothing to worry about.
I've seen several applications where images (or attachments) were kept "Outside" the database, and in each case it was done poorly. There are just so many issues to handle, and most developers don't even think of half of them. In many cases the performance of storing the images "In" the database was acceptable, but the developers decided against it because they just knew it would not perform well.
If you're using SQL Server 2008, the FILESTREAM feature is ideal for your case. It stores the binary files outside of the database but behaves like a normal field. Also, you are able to read/write the files using a stream instead of getting/setting the whole file as a byte array (like when using varbinary(max)).
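Roughly like this (a sketch with invented names; note that FILESTREAM must be enabled on the instance, the database needs a FILESTREAM filegroup, and the table needs a ROWGUIDCOL column):

    CREATE TABLE RecipeImage (
        ImageId  UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
        RecipeId INT NOT NULL,
        Image    VARBINARY(MAX) FILESTREAM NULL  -- bytes live on the file system,
                                                 -- but you query it like any column
    );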
If you don't have this functionality in your database, I would recommend storing the images outside of the DB
It's probably a better idea to use a file-based approach for deployed static resources.
At the very least because taking a dependency on the file system is typically easier to manage than taking a dependency on a DB.
Also, this line indicates some sort of non-web client:
"The database will be stored and accessed natively so it can be accessed off-line."
This means if you go with the DB approach you'll have a couple of other interesting problems:
Deployment
Depending on your target platform, deploying a DB can be a real bear. What happens if they already have the engine, but it's a different version?
Resources
Is your DB going to be client/server based (like MySQL/SQL Server etc.)? If so, your app now has to manage the state of that process too. If not, you'll be using a file-based DB like SQLite or MS Access, at which point I would question whether using a static DB is worth doing at all.
One final note: there's nothing stopping your content production environment from using a DB. It's quite common for content producers to maintain a database for their content that you will later use to produce the files for publishing/deployment.
I am looking to insert IIS 6.0 access logs (5 servers, over 400MB daily) into a SQL database. What scares me is the size. There is a lot of information that is duplicated (i.e. site name, URL, referrer, browser) and could be normalized with an index and look-up tables.
The reason I am looking at my own database instead of using other tools is that there are 5 servers and I need very custom statistics and reports on each, a few, or all of them. Also, installing any (especially open source) software is a massacre (it needs to have 125% of the required functionality and takes months).
I wonder what would be the most efficient way to do it. Has anyone seen examples or articles about it?
Whilst I would suggest buying a decent log parsing tool, if you insist on going it alone take a look at Log Parser
http://www.microsoft.com/downloads/en/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en
to help you do some of the heavy lifting, either into SQL, or maybe it can get the results you are after directly.
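For example, something along these lines loads the raw logs straight into a table (the switches are from memory, so treat them as an assumption and double-check against Log Parser's bundled help):

    -- run as: LogParser.exe file:load.sql -i:IISW3C -o:SQL
    --         -server:MYSERVER -database:IISLogs -createTable:ON
    SELECT LogFilename, date, time, cs-uri-stem, cs(User-Agent), sc-status
    INTO   RawLog
    FROM   \\server1\logs\W3SVC1\ex*.log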
On the one hand, you will greatly reduce the disk space used for values by using artificial keys for things like server IP address, user agent, and referrer. Some of the space you save will be lost to the indexes, but the overall disk savings for 400MB per day, times 5 servers, should still be substantial.
The tradeoff, of course, is the need to use joins to bring that information back together for reporting.
My nitpick is that replacing one column's values with an artificial key to a two-column lookup table shouldn't be called "normalizing". You can do that without identifying any functional dependencies. (I'm not certain you're proposing to do that, but it sounds like it.)
You're looking at about 12 gigs a month in raw data, right? Did you consider approaching it from a data warehousing point of view, instead of an OLTP point of view?
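Concretely, the lookup-table layout (in warehouse terms, a dimension plus a fact table) might look like this (a sketch with invented names):

    CREATE TABLE UserAgent (
        UserAgentId INT IDENTITY PRIMARY KEY,
        Name        VARCHAR(512) NOT NULL UNIQUE
    );

    -- fact table: one narrow row per log line, keys instead of repeated strings
    CREATE TABLE Request (
        RequestTime DATETIME NOT NULL,
        ServerId    TINYINT  NOT NULL,
        UserAgentId INT      NOT NULL REFERENCES UserAgent (UserAgentId),
        UriId       INT      NOT NULL,  -- references a Uri lookup built the same way
        StatusCode  SMALLINT NOT NULL
    );

    -- reporting pays the join cost mentioned above
    SELECT ua.Name, COUNT(*) AS Hits
    FROM   Request r
    JOIN   UserAgent ua ON ua.UserAgentId = r.UserAgentId
    GROUP BY ua.Name;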