Centralized storage for large text files - SQL or NoSQL?

What the system should do: store and manage large (100-400 MB) text files in a centralized way.
What to store: the lines of each text file (for some files the lines must be unique), metadata about the file (filename, comment, last update, etc.), and a position within the file (the same file may have different positions for different applications).
Operations: concurrent reads of lines from a file (100-400 lines per query) and appends of lines (also 100-400 at a time); exporting is not critical and can be scheduled.
So which storage should I use? An SQL DBMS seems too slow, I think - maybe a NoSQL solution?

NoSQL: Cassandra is an option (you can store it line by line or in groups of lines, I guess), Voldemort is not too bad, and you might even get away with using MongoDB, but I'm not sure it fits the "large files" requirement.

400 MiB will be served entirely from the caches on every non-ridiculous database server. In that respect, the choice of database does not really matter too much; any database will be able to deliver fast (though there are different kinds of "fast", it depends on what you need).
If you are really desperate for raw speed, you can go with something like redis. Again, 400 MiB is no challenge for that.
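For instance, a minimal sketch of that idea on top of Redis (using the redis-py client; the key naming, the local connection and the file id are my own assumptions, not anything from the question):

    # Sketch: keep each file's lines in a Redis list, append and read in ranges.
    # Assumes a Redis instance on localhost and the redis-py client.
    import redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def append_lines(file_id, lines):
        # RPUSH appends the lines to the end of the list, in order.
        r.rpush(f"file:{file_id}:lines", *lines)

    def get_lines(file_id, start, count):
        # LRANGE's end index is inclusive.
        return r.lrange(f"file:{file_id}:lines", start, start + count - 1)

    append_lines("hosts-2024", ["line one", "line two", "line three"])
    print(get_lines("hosts-2024", 0, 2))

For the files whose lines must be unique, a Redis set (SADD/SISMEMBER) would be the more natural structure, at the cost of losing line order.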
SQL might be slightly slower (but not that much), but it has the huge advantage of being flexible. Flexibility, generality, and the presence of a "built-in programming language" are not free, but they should not hurt too much, because either way returning data from the buffer cache works more or less at the speed of RAM.
If you ever figure out at a later time that you need a different data layout, SQL will let you do it with a few commands; and if you ever want to do something else that you have not planned for, SQL will cope. There is no guarantee that doing something different will be feasible with a simple key-value store.
Personally, I wouldn't worry about performance for such rather "small" datasets. Really, every kind of DB will serve that well, worry not. Come again when your datasets are several dozens of gigabytes in size.
If you are 100% sure that you will definitely never need the extras that a fully blown SQL database system offers, go with NoSQL to shave off a few microseconds; otherwise, just stick with SQL to be on the safe side.
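To make the SQL option concrete, here is a rough sketch of what a relational layout for the stated requirements could look like. It uses SQLite only to keep the example self-contained, and all table and column names are assumptions:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE file (
        id        INTEGER PRIMARY KEY,
        filename  TEXT NOT NULL,
        comment   TEXT,
        updated   TEXT              -- last update timestamp
    );
    CREATE TABLE line (
        file_id   INTEGER NOT NULL REFERENCES file(id),
        line_no   INTEGER NOT NULL,
        content   TEXT NOT NULL,
        PRIMARY KEY (file_id, line_no)
    );
    -- Uniqueness of lines; if only some files need it, enforce it per file
    -- in application code or with a partial index instead.
    CREATE UNIQUE INDEX line_unique ON line(file_id, content);
    CREATE TABLE position (
        file_id   INTEGER NOT NULL REFERENCES file(id),
        app_name  TEXT NOT NULL,    -- same file, different position per application
        line_no   INTEGER NOT NULL,
        PRIMARY KEY (file_id, app_name)
    );
    """)

    # Fetch a block of a few hundred lines from one file:
    rows = conn.execute(
        "SELECT content FROM line "
        "WHERE file_id = ? AND line_no BETWEEN ? AND ? ORDER BY line_no",
        (1, 1000, 1399)).fetchall()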
EDIT:
To elaborate, consider that a "somewhat lower class" desktop has upwards of 2 GiB (usually rather 4 GiB) of RAM nowadays, and a typical "no big deal" server has something like 32 GiB. In that light, 400 MiB is nothing. The typical network uplink on a server (unless you are willing to pay extra) is 100 Mbit/s.
A 400 MiB text file might have somewhere around a million lines. That boils down to 6-7 memory accesses for a "typical SQL server", and to 2 memory accesses plus the time needed to calculate a hash for a "typical NoSQL server". Which is, give or take a few dozen cycles, the same in either case -- somewhere around half a microsecond on a relatively slow system.
If you use SQL, add a few dozen microseconds the first time a query is executed, because it must be parsed, validated, and optimized.
Network latency is somewhere around 2 to 3 milliseconds if you're lucky. That's 3 to 4 orders of magnitude more for establishing a connection, sending a request to the server, and receiving an answer. Compared to that, it seems ridiculous to worry whether the query takes 517 or 519 microseconds. If there are 1-2 routers in between, it becomes even more pronounced.
The same is true for bandwidth. You can in theory push around 119 MiB/s over a 1 Gbit/s link, assuming maximum-sized frames, no ACKs, absolutely no other traffic, and zero packet loss. RAM delivers tens of GiB per second without trouble.
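A quick back-of-the-envelope check of those numbers (the per-access latency and the round-trip time are the rough figures assumed above, not measurements):

    # Rough sanity check of the comparison above, using assumed order-of-magnitude figures.
    mem_access_ns = 100                        # one cache-missing memory access
    btree_lookup_ns = 7 * mem_access_ns        # ~6-7 accesses for ~1M rows in a B-tree
    hash_lookup_ns = 2 * mem_access_ns + 200   # ~2 accesses plus hashing the key
    network_rtt_ns = 2_500_000                 # ~2.5 ms network round trip

    print(f"B-tree lookup ~{btree_lookup_ns / 1000:.1f} us")
    print(f"hash lookup   ~{hash_lookup_ns / 1000:.1f} us")
    print(f"network RTT   ~{network_rtt_ns / 1000:.0f} us, "
          f"about {network_rtt_ns // btree_lookup_ns}x the lookup itself")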

Related

Why is the n+1 selects pattern slow?

I'm rather inexperienced with databases and have just read about the "n+1 selects issue". My follow-up question: Assuming the database resides on the same machine as my program, is cached in RAM and properly indexed, why is the n+1 query pattern slow?
As an example let's take the code from the accepted answer:
SELECT * FROM Cars;
/* for each car */
SELECT * FROM Wheel WHERE CarId = ?
With my mental model of the database cache, each of the SELECT * FROM Wheel WHERE CarId = ? queries should need:
1 lookup to reach the "Wheel" table (one hashmap get())
1 lookup to reach the list of k wheels with the specified CarId (another hashmap get())
k lookups to get the wheel rows for each matching wheel (k pointer dereferences)
Even if we multiply that by a small constant factor to account for the additional overhead of the internal memory structures, it should still be unnoticeably fast. Is the inter-process communication the bottleneck?
Edit: I just found this related article via Hacker News: Following a Select Statement Through Postgres Internals. - HN discussion thread.
Edit 2: To clarify, I do assume N to be large. A non-trivial overhead will add up to a noticeable delay then, yes. I am asking why the overhead is non-trivial in the first place, for the setting described above.
You are correct that avoiding n+1 selects is less important in the scenario you describe. If the database is on a remote machine, communication latencies of > 1 ms are common, i.e. the CPU would spend millions of clock cycles waiting for the network.
If we are on the same machine, the communication delay is several orders of magnitude smaller, but synchronous communication with another process necessarily involves a context switch, which commonly costs > 0.01 ms (source), which is tens of thousands of clock cycles.
In addition, both the ORM tool and the database will have some overhead per query.
To conclude, avoiding n+1 selects is far less important if the database is local, but still matters if n is large.
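You can see the per-query overhead even with a purely local, in-memory database. A minimal sketch, using SQLite only as a stand-in (a client/server DBMS adds IPC and context switches on top, so its gap will be larger):

    # Compare one query returning N rows vs. N queries returning a few rows each.
    import sqlite3, time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE wheel (id INTEGER PRIMARY KEY, car_id INTEGER)")
    conn.execute("CREATE INDEX wheel_car ON wheel(car_id)")
    conn.executemany("INSERT INTO wheel (id, car_id) VALUES (?, ?)",
                     [(i, i // 4) for i in range(40_000)])   # 10,000 cars, 4 wheels each

    t0 = time.perf_counter()
    conn.execute("SELECT * FROM wheel").fetchall()           # the "1 query" case
    t1 = time.perf_counter()
    for car_id in range(10_000):                             # the "N queries" case
        conn.execute("SELECT * FROM wheel WHERE car_id = ?", (car_id,)).fetchall()
    t2 = time.perf_counter()

    print(f"one query: {t1 - t0:.3f} s   10,000 queries: {t2 - t1:.3f} s")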
Assuming the database resides on the same machine as my program
Never assume this. Thinking about special cases like this is never a good idea. It's quite likely that your data will grow and you will need to put your database on another server. Or you will want redundancy, which involves (you guessed it) another server. Or, for security, you might not want your app server on the same box as the DB.
why is the n+1 query pattern slow?
The reason you don't think it should be slow is that your mental model of performance is probably all wrong.
1) RAM is horribly slow. Your CPU wastes around 200-400 CPU cycles each time it needs to read something from RAM. CPUs have a lot of tricks to hide this (caches, pipelining, hyperthreading).
2) Reading from RAM is not "Random Access". It's like a hard drive: sequential reads are faster.
See this article about how accessing RAM in the right order is 76.6% faster http://lwn.net/Articles/255364/ (Read the whole article if you want to know how horrifyingly complex RAM actually is.)
CPU cache
In your "N+1 query" case, the "loop" for each N includes many megabytes of code (on client and server) swapping in and out of caches on each iteration, plus context switches (which usually dumps the caches anyway).
The "1 query" case probably involves a single tight loop on the server (finding and copying each row), then a single tight loop on the client (reading each row). If those loops are small enough, they can execute 10-100x faster running from cache.
RAM sequential access
The "1 query" case will read everything from the DB to one linear buffer, send it to the client who will read it linearly. No random accesses during data transfer.
The "N+1 query" case will be allocating and de-allocating RAM N times, which (for various reasons) may not be the same physical bit of RAM.
Various other reasons
The networking subsystem only needs to read one or two TCP headers, instead of N.
Your DB only needs to parse one query instead of N.
When you throw in multiple users, the "locality/sequential access" gets even more fragmented in the N+1 case, but stays pretty good in the 1-query case.
Lots of other tricks that the CPU uses (e.g. branch prediction) work better with tight loops.
See: http://blogs.msdn.com/b/oldnewthing/archive/2014/06/13/10533875.aspx
Having the database on a local machine reduces the problem; however, most applications and databases will be on different machines, where each round trip takes at least a couple of milliseconds.
A database also needs to do a lot of locking and latching for each individual query. Context switches have already been mentioned by meriton. If you don't use a surrounding transaction, it also has to build an implicit transaction for each query. Some query parsing overhead is still there, even with a parameterized, prepared query or one remembered by string equality (with parameters).
If the database fills up, query times may increase, compared to the almost empty database you started with.
If your database is to be used by other applications, you will likely hammer it: even if your application works, others may slow down or even see an increasing number of failures, such as timeouts and deadlocks.
Also, consider having more than two levels of data. Imagine three levels: Blogs, Entries, Comments, with 100 blogs, each with 10 entries and 10 comments per entry (on average). That's a 1+N+(N×M) SELECT situation: it will require 100 queries to retrieve the blog entries, and another 1000 to get all the comments. With slightly more complex data, you'll quickly run into the tens or even hundreds of thousands of queries.
Of course, bad programming may work in some cases and to some extent. If the database will always be on the same machine, nobody else uses it and the number of cars is never much more than 100, even a very sub-optimal program might be sufficient. But beware of the day any of these preconditions changes: refactoring the whole thing will take you much more time than doing it correctly in the beginning. And likely, you'll try some other workarounds first: a few more IF clauses, memory cache and the like, which help in the beginning, but mess up your code even more. In the end, you may be stuck in a "never touch a running system" position, where the system performance is becoming less and less acceptable, but refactoring is too risky and far more complex than changing correct code.
Also, a good ORM offers you ways around N+1: (N)Hibernate, for example, allows you to specify a batch-size (merging many SELECT * FROM Wheels WHERE CarId=? queries into one SELECT * FROM Wheels WHERE CarId IN (?, ?, ..., ?)) or to use a subselect (like: SELECT * FROM Wheels WHERE CarId IN (SELECT Id FROM Cars)).
The simplest option to avoid N+1 is a join, with the disadvantage that each car row is multiplied by the number of wheels, and multiple child/grandchild collections easily end up producing a huge Cartesian product of join results.
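For illustration, here are the three access patterns side by side as plain parameterized SQL issued from Python. The table and column names follow the Cars/Wheels example (I assume the primary key of Cars is Id); the batching is hand-rolled here, which is roughly what an ORM's batch-size setting generates for you:

    def fetch_wheels_n_plus_1(conn, car_ids):
        # One query per car: simple, but N round trips.
        return {cid: conn.execute("SELECT * FROM Wheel WHERE CarId = ?", (cid,)).fetchall()
                for cid in car_ids}

    def fetch_wheels_batched(conn, car_ids, batch_size=100):
        # IN-list batching: N/batch_size round trips instead of N.
        wheels = []
        for i in range(0, len(car_ids), batch_size):
            batch = car_ids[i:i + batch_size]
            placeholders = ", ".join("?" * len(batch))
            wheels += conn.execute(
                f"SELECT * FROM Wheel WHERE CarId IN ({placeholders})", batch).fetchall()
        return wheels

    def fetch_cars_with_wheels_join(conn):
        # Single join: one round trip, but each car row is repeated once per wheel.
        return conn.execute(
            "SELECT c.*, w.* FROM Cars c JOIN Wheel w ON w.CarId = c.Id").fetchall()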
There is still overhead, even if the database is on the same machine, cached in RAM and properly indexed. The size of this overhead will depend on what DBMS you're using, the machine it's running on, the amount of users, the configuration of the DBMS (isolation level, ...) and so on.
When retrieving N rows, you can choose to pay this cost once or N times. Even a small cost can become noticeable if N is large enough.
One day someone might want to put the database on a separate machine or to use a different DBMS. This happens frequently in the business world (to be compliant with some ISO standard, to reduce costs, to change vendors, ...).
So, sometimes it's good to plan for situations where the database isn't lightning fast.
All of this depends very much on what the software is for. Avoiding the "select n+1 problem" isn't always necessary, it's just a rule of thumb, to avoid a commonly encountered pitfall.

Cost of HTTP requests vs file size, rule of thumb?

This sort of question has been asked before (HTTP Requests vs File Size?), but I'm hoping for a better answer. In that linked question, the answerer seemed to do a pretty good job of answering the question with the nifty formula of latency + transfer time, with an estimated latency of 80 ms and transfer speed of 5 Mb/s. But it seems flawed in at least one respect: don't multiple requests and transfers happen simultaneously in a normal browsing experience? That's what it looks like when I examine the Network tab in Chrome. Doesn't this mean that request latency isn't such a terrible thing?
Are there any other things to consider? Obviously latency and bandwidth will vary, but is 80 ms and 5 Mb/s a good rule of thumb? I thought of an analogy and I wonder if it is correct. Imagine a train station with only one track in and one track out (or maybe it is one for both). HTTP requests are like sending an engine out to get a bunch of cars at another station. They return pulling a long train of railway cars, which represents the requested file being downloaded. So you could send one engine out and have it bring back a massive load. Or you could send multiple engines out and they could each bring back smaller loads, but of course they would all have to wait their turn coming back into the station. And some engines couldn't be sent out until others had come in. Is this a flawed analogy?
I guess the big question then is how can you predict how much overlap there will be in http requests so that you can know, for example, if it is generally worth it to have two big PNG files on your page or instead have a webp image, plus the Webpjs js and swf files for incompatible browsers. That doubles the number of requests but more than halves the total file size (say 200kB savings).
Your analogy is not bad in general terms. Obviously, if you want to be really precise in all aspects, there are things that are oversimplified or incorrect (but that happens with almost all analogies).
Your estimates of 80 ms and 5 Mb/s might sound logical, but even though most of us like theory, you should manage this kind of problem in another way.
In order to make good estimates, you should measure to get some data and then analyze it. Every estimate depends on some context, and you should not ignore it.
Estimating latency and bandwidth is not the same for a 3G connection, an ADSL connection in Japan, or an ADSL connection in a less technologically developed country. Are clients accessing from the other end of the world or from the same country? Like your good observation about simultaneous connections on the client, there are millions of possible questions to ask yourself and very few good-quality answers without doing some measuring.
I know I'm not answering your question exactly, because I think it is unanswerable without many more details about the domain (plus constraints, and a huge etc.).
You seem to have some ideas about how to design your solution. My best advice is to implement each one of them and profile it. Make measurements, try to identify what your bottlenecks are, and see whether you have some control over them.
For some problems this kind of question might have an optimal solution, but the difference between optimal and suboptimal could be negligible in practice.
This is the kind of answer I was looking for. I did some simplistic tests to get a feel for the speed of many small files vs one large file.
I created html pages that loaded a bunch of random sized images from placekitten.com. I loaded them in Chrome with the Network tab open.
Here are some results:
# of imgs | Total size (KB) | Time (ms)
1         | 465             | 4000, 550
1         | 307             | 3000, 800, 350, 550, 400
30        | 192             | 1200, 900, 800, 900
30        | 529             | 7000, 5000, 6500, 7500
So one major thing to note is that single files become much quicker after they have been loaded once. (The comma-separated lists of times are successive page reloads.) I did a normal refresh and also Empty Cache and Hard Reload. Strangely, it didn't seem to make much difference which way I refreshed.
My connection had a latency (round-trip time) of around 120-130 ms and my download speed varied between 4 and 8 Mbps. Chrome seemed to do about 6 requests at a time.
Looking at these few tests it seems that, in this range of file sizes at least, it is obviously better to have fewer requests when the total file size is equal; but if you can cut the file size in half, even at the expense of increasing the number of HTTP requests to 30, it appears to be worth it, at least for a fresh page load.
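To tie these measurements back to the rule of thumb, here is a very rough model of load time with parallel connections. The 6-connections figure and the latency/bandwidth values are just the ones observed above; real browsers reuse connections, pipeline and cache, so treat it only as a sanity check:

    # Rough model: n requests fetched over a limited number of parallel connections.
    # Ignores connection reuse, TLS, server time, caching, etc.
    import math

    def estimate_ms(n_requests, total_kb, rtt_ms=125, mbps=6, parallel=6):
        waves = math.ceil(n_requests / parallel)          # requests go out in "waves"
        latency = waves * rtt_ms                          # one round trip per wave
        transfer = (total_kb * 8) / (mbps * 1000) * 1000  # KB -> kilobits -> ms
        return latency + transfer

    print(estimate_ms(1, 465))    # one big image
    print(estimate_ms(30, 192))   # thirty small images
    print(estimate_ms(30, 529))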
Any comments or better answers would be appreciated.

How to store 15 x 100 million 32-byte records for sequential access?

I've got 15 × 100 million 32-byte records. Only sequential access and appends are needed. The key is a Long. The value is a tuple - (Date, Double, Double). Is there something in this universe that can do this? I am willing to have 15 separate databases (SQL/NoSQL) or files, one for each batch of 100 million records. I only have an i7 CPU, 8 GB of RAM and a 2 TB hard disk.
I have tried PostgreSQL, MySQL, Kyoto Cabinet (with fine tuning) with Protostuff encoding.
SQL DBs (with indices) take forever to do the silliest query.
Kyoto Cabinet's B-tree can handle up to 15-18 million records, beyond which appends take forever.
I am so fed up that I am thinking of falling back to awk + CSV, which I remember used to work for this type of data.
If your scenario means always going through all the records in sequence, then it may be overkill to use a database. If you start to need random lookups, replacing/deleting records, or checking whether a new record is a duplicate of an older one, a database engine would make more sense.
For sequential access, a couple of text files or hand-crafted binary files will be easier to handle. You sound like a developer - I would probably go for my own binary format and access it with the help of memory-mapped files to improve the sequential read/append speed. No caching, just a sliding window over the data. I think it would perform better, even on ordinary hardware, than any DB would; I did such a data analysis once. It would also be faster than awking CSV files; however, I am not sure by how much, or whether it would justify the effort of developing the binary storage in the first place.
As soon as the database features become interesting, you can have a look at MongoDB and CouchDB. They are used for storing and serving very large amounts of data. (There is a flattering evaluation that compares one of them to traditional DBs.) Databases usually need reasonably powerful hardware to perform well; maybe you could check out how those two would do with your data.
--- Ferda
Ferdinand Prantl's answer is very good. Two points:
Given your requirements, I recommend that you create a very tight binary format. This will be easy to do because your records are a fixed size.
If you understand your data well, you might be able to compress it. For example, if your key is an increasing log value, you don't need to store it entirely. Instead, store the difference from the previous value (which is almost always going to be one). Then use a standard compression algorithm/library to save on data size big time.
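As a sketch of what such a tight format could look like for this exact record shape: a signed 64-bit key, the date as a 64-bit epoch timestamp, and two doubles pack into exactly 32 bytes. The file name and layout here are my own assumptions:

    # Fixed-size 32-byte records appended to a flat file, scanned sequentially via mmap.
    import mmap, struct

    RECORD = struct.Struct("<qqdd")   # key, epoch seconds, double, double = 32 bytes

    def append_records(path, records):
        with open(path, "ab") as f:
            for key, epoch_secs, a, b in records:
                f.write(RECORD.pack(key, epoch_secs, a, b))

    def scan_records(path):
        with open(path, "rb") as f, mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for offset in range(0, len(mm), RECORD.size):
                yield RECORD.unpack_from(mm, offset)

    append_records("shard_00.bin", [(1, 1_700_000_000, 1.5, 2.5),
                                    (2, 1_700_000_060, 1.6, 2.4)])
    for key, ts, a, b in scan_records("shard_00.bin"):
        print(key, ts, a, b)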
For sequential reads and writes, leveldb will handle your dataset pretty well.
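For instance, a sketch with the plyvel binding; encoding the key big-endian is my assumption so that LevelDB's lexicographic iteration matches numeric key order:

    # LevelDB via plyvel: put records, then scan sequentially in key order.
    import struct
    import plyvel

    db = plyvel.DB("records_db", create_if_missing=True)
    db.put(struct.pack(">q", 42), struct.pack("<qdd", 1_700_000_000, 1.5, 2.5))

    for key, value in db.iterator():
        print(struct.unpack(">q", key)[0], struct.unpack("<qdd", value))
    db.close()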
I think that's about 48 gigs of data in one table.
When you get into large databases, you have to look at things a little differently. With an ordinary database (say, tables less than a couple million rows), you can do just about anything as a proof of concept. Even if you're stone ignorant about SQL databases, server tuning, and hardware tuning, the answer you come up with will probably be right. (Although sometimes you might be right for the wrong reason.)
That's not usually the case for large databases.
Unfortunately, you can't just throw 1.5 billion rows straight at an untuned PostgreSQL server, run a couple of queries, and say, "PostgreSQL can't handle this." Most SQL DBMSs have ways of dealing with lots of data, and most people don't know that much about them.
Here are some of the things that I have to think about when I have to process a lot of data over the long term. (For short-term or one-off processing, it's usually not worth caring a lot about speed. A lot of companies won't invest in more RAM or a dozen high-speed disks--or even a couple of SSDs--for even a long-term solution, let alone a one-time job.)
Server CPU.
Server RAM.
Server disks.
RAID configuration. (RAID 3 might be worth looking at for you.)
Choice of operating system. (64-bit vs 32-bit, BSD v. AT&T derivatives)
Choice of DBMS. (Oracle will usually outperform PostgreSQL, but it costs.)
DBMS tuning. (Shared buffers, sort memory, cache size, etc.)
Choice of index and clustering. (Lots of different kinds nowadays.)
Normalization. (You'd be surprised how often 5NF outperforms lower NFs. Ditto for natural keys.)
Tablespaces. (Maybe putting an index on its own SSD.)
Partitioning.
I'm sure there are others, but I haven't had coffee yet.
But the point is that you can't determine whether, say, PostgreSQL can handle a 48 gig table unless you've accounted for the effect of all those optimizations. With large databases, you come to rely on the cumulative effect of small improvements. You have to do a lot of testing before you can defensibly conclude that a given dbms can't handle a 48 gig table.
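As one concrete example of the kind of optimization meant here, range partitioning keeps appends and sequential scans confined to one child table at a time. A sketch only, using PostgreSQL's declarative partitioning (available in recent versions); the connection string, table and column names are made up to mirror the question's records:

    import psycopg2

    conn = psycopg2.connect("dbname=records user=postgres")
    with conn, conn.cursor() as cur:
        # Parent table partitioned by key range.
        cur.execute("""
            CREATE TABLE measurement (
                key  bigint           NOT NULL,
                ts   timestamptz      NOT NULL,
                a    double precision NOT NULL,
                b    double precision NOT NULL
            ) PARTITION BY RANGE (key);
        """)
        # One partition per 100 million keys; add more as the data grows.
        cur.execute("""
            CREATE TABLE measurement_p0 PARTITION OF measurement
                FOR VALUES FROM (0) TO (100000000);
        """)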
Now, whether you can implement those optimizations is a different question--most companies won't invest in a new 64-bit server running Oracle and a dozen of the newest "I'm the fastest hard disk" hard drives to solve your problem.
But someone is going to pay either for optimal hardware and software, for dba tuning expertise, or for programmer time and waiting on suboptimal hardware. I've seen problems like this take months to solve. If it's going to take months, money on hardware is probably a wise investment.

SQL HW to performance ratio

I am seeking a way to find bottlenecks in SQL Server, and it seems that more than 32 GB of RAM and more than 32 spindles on 8 cores are not enough. Are there any metrics, best practices or hardware comparisons (e.g., transactions per second)? Our daily closure takes hours and I want it done in minutes, or in real time if possible. I was not able to merge more than 12k rows/sec. For now, I have had to split the traffic across more than one server, but is that a proper solution for a ~50 GB database?
The merge is enclosed in a stored procedure and kept as simple as it can be: deduplicate the input, insert new rows, update existing rows. I found that the more rows we put into a single merge, the more rows per second we get. The application server runs multiple threads and uses all the memory and CPU on its dedicated server.
Follow a methodology like Waits and Queues to identify the bottlenecks. That's exactly what it is designed for. Once you have identified the bottleneck, you can also judge whether it is a hardware provisioning and sizing issue (and if so, which piece of hardware is the bottleneck), or whether it is something else.
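As a starting point, the server-level wait statistics show what SQL Server has been waiting on. A sketch that pulls the top waits via pyodbc; the connection string is a placeholder, and in practice you would also filter out the benign idle wait types:

    import pyodbc

    conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                          "SERVER=myserver;DATABASE=master;Trusted_Connection=yes")
    cur = conn.cursor()
    cur.execute("""
        SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
        FROM sys.dm_os_wait_stats
        ORDER BY wait_time_ms DESC;
    """)
    for wait_type, wait_time_ms, tasks in cur.fetchall():
        print(f"{wait_type:40s} {wait_time_ms:>14} ms  {tasks:>12} waits")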
The basic idea is to avoid having to do random access to a disk, for both reading and writing. Without doing any analysis, a 50 GB database needs at least 50 GB of RAM. Then you have to make sure indexes are on a separate spindle from the data and the transaction logs, that you write as late as possible, and that critical tables are split over multiple spindles. Are you doing all of that?

GemStone-Linux-Apache-Seaside-Smalltalk.. how practical is 4GB?

I am really interested in GLASS. The 4 GB limit for the free version has me concerned, especially when I consider the price of the next level ($7000/year).
I know this can be subjective and variable, but can someone describe for me in everyday terms what 4 GB of GLASS will get you? Maybe a business example. 4 GB may get me more storage than I realize.. and I don't have to worry about it.
In my app, some messages have file attachments up to 5 MB in size. Can I conserve the 4 GB of Gemstone space by saving these attachments directly to files on the operating system, instead of inside Gemstone? I'm thinking yes.
I'm aware of one GLASS system that is ~944 MB and has 8.3 million objects, or ~118 bytes per object. At this rate, it can grow to over 36 million objects and stay under 4 GB.
As to "attachments", I'd suggest that even in an RDBMS you should consider storing larger, static data in the file system and referencing it from the database. If you are building a web-based application, serving static content (JPG, CSS, etc.) should be done by your web server (e.g., Apache) rather than through the primary application.
By comparison, Oracle and Microsoft SQL Server have no-cost licenses for a 4-GB database.
What do you think would be a good price for the next level?
The 4 GB limit was removed a while ago. The free version is now limited to the use of two cores and 2 GB of RAM.
4 GB is quite a decent size for a database. Not having used GemStone before, I can only speculate as to how efficient it is at storing objects, but having played with a few other similar object databases (MongoDB, db4o), I know that you're going to be able to fit several (5-10) million records before you even get close to that limit. In reality, how many records depends highly on the type of data you're storing.
As an example, I was storing ~2 million listings and ~1 million transactions in a MySQL database, and the space used was < 1 GB. There is a small overhead for serializing a whole object, but not that much.
Files can definitely be stored on the file system.
4 GB an issue... I guess you think you're building the next eBay!
Nowadays, there is no limit on the size of the repository. See the latest specs for GemStone
If you have multiple simultaneous users with attachments of 5 MB, you need a separate strategy for them anyway, as each one takes about a twentieth of a second of bandwidth on a gigabit Ethernet network.