For a personal project I am searching for the "most suitable" database engine to cover the following key requirements:
need to store large numbers of individual document files (PDF)
need to perform full-text search across the PDFs (for this I plan to use OCR and store the extracted text/metadata in the database as well)
need to retrieve subsets of the saved documents (for example, those from a specific year) and show previews of many of them in a nice web UI
as much performance as possible
Up to now I have worked a lot with SQL (MySQL) and have some theoretical knowledge of other systems (Memcached, Redis, PostgreSQL, MongoDB). But I've never used them in combination and have never worked out WHEN each should be used for WHAT exactly, or how they can be combined.
I think that especially for a project like this, it's very important to select the right engine from the beginning so as not to hit performance issues later.
So, especially to all the experienced developers out there: what would be your favourite choice for this kind of project (I guess SQL may not be the only right solution)?
Or, in the end, would it be better to store the files in the filesystem and keep only the metadata in a database?
BTW, my planned API backend for this is Laravel 7+; the frontend will be Vue 2+.
Thank you very much!
I would like to (re)start with GemStone/S. I have done multiple ETL transformations for relational databases, but I'm still fuzzy on how this is done in GemStone/S.
I would like to load data into GemStone from different sources. These could be files (CSV, Excel, XML, plain text, etc.) or other DBs like SQL Server, Postgres, Oracle, etc.
From what I saw on the product pages, there is GemConnect, which connects to Oracle databases. How do you do it from other databases or files? Is there any option to connect via ODBC? Is there a data pump for this, or do you "just" have to write one yourself?
What I'm asking, in the end, is how you create a staging area where you would clean up, transform, and then load the data into the GemStone DB. Are there any examples or documentation showing how this is done?
Note: the only similar answer I have found is on SO, from Stephan Eggermont, but it was short and without any "real" information.
Staging
I suspect that the reason that most environments have "ETL/staging" as a separate step is because the two endpoints are somewhat rigid and don't have a good programming language for data manipulation. That is, if you have TXT, CSV, XML, JSON, or SQL, and need it in another format/schema, then someone has to do the "transformation." But if you are working in GemStone, then you can do the transformation in Smalltalk--there is no need for a separate step.
Files
If you have files (TXT, CSV, XML, JSON, etc.), then use GsFile. In fact, if the other endpoint can deal with files, then just export from one source in an agreed format and import in the other (with GemStone doing the "heavy lifting" of transforming). Files are simpler, they avoid the communications layer, and they make debugging trivial: if the source hasn't created the file, then it is the source's problem; if the file is in the pending directory, then the destination hasn't processed it yet; if it is in the completed directory, then the destination has processed it.
With this approach you start (one or more) background jobs in GemStone to watch a directory, open a file for read, process the file, and then move it to another directory. Other than basic string manipulation, you only need to work with GsFile. Then you create and update your objects in the database.
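A minimal sketch of that watch/process/move loop, shown here in Python purely to illustrate the shape (in GemStone it would be a Smalltalk background job using GsFile; the directory names and CSV format are hypothetical):

```python
import shutil
import time
from pathlib import Path

PENDING = Path("pending")        # hypothetical directory names
COMPLETED = Path("completed")

def process(path):
    # GsFile plays this role in GemStone; here we just read lines.
    with path.open() as f:
        for line in f:
            fields = line.rstrip("\n").split(",")
            # ...transform fields and create/update domain objects here...
            print(fields)

def watch():
    # Daemon-style loop: pick up pending files, process, then move aside.
    while True:
        for path in sorted(PENDING.glob("*.csv")):
            process(path)
            shutil.move(str(path), str(COMPLETED / path.name))
        time.sleep(5)            # poll interval
```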
ODBC
While it would be possible to make FFI calls from GemStone to an ODBC library (or to a database's native library as is done with GemConnect), this would probably be unnecessarily complex. Instead, I'd create another layer using tools that have better interaction with the foreign system. This layer could write text files (as described above), or, with the proper interface, could communicate with GemStone directly. My inclination would be to use Dolphin to extract the data (good ODBC support), then communicate directly to GemStone from Dolphin. You could do something similar with other client Smalltalk dialects (Pharo, VA, or VW), or even from another language (I have a student working on a Python interface to GemStone).
O/R Mapping
Here again you are left with needing a way to take data in one format and translate it to another. These tend to be highly domain-specific and we find it easier to just write Smalltalk code. Alternatively, you could use something like GLORP in Pharo, VA, VW, etc.
Best Practices
I think you haven't found any "best practices" for ETL in GemStone because it isn't something we think of as an external process or separate step. There is just how to communicate with a file (GsFile), a socket (GsSocket), a library (CLibrary), or a client (GCI). From here we can look at internal processing issues like multiple producers and one consumer (RcQueue), or one producer and multiple consumers (locking).
So, it isn't that GemStone applications don't do ETL; they just do it internally, and the solutions are much more situation-specific.
I am doing this for the first time: how do I use SQL via VB.NET to insert, update, and delete data in a database? Do I need to set anything up before I can use SQL? If I move my executable app to a different computer, do I need to install SQL on that computer as well?
There are a number of tutorials which can get you started.
As for running the application on a different computer, it depends on where the database is located. Generally the database and the application are two separate things; the latter simply accesses the former. Can the target computer also access the database? Will the database be installed on a server accessible to the target machine? As long as the app can see the database and its config file is correct, it should work fine.
This question is pretty broad. It's like asking "How do I drive a vehicle?" There are many different types of vehicles: boats, cars, rockets, airplanes, skateboards, etc. To tell you how to drive a vehicle we need to know what type of vehicle you want (or at least whether or not it's a land vehicle, water vehicle, etc). Likewise, there are many different ways to "use SQL via vb.net". Do you want to do Web development, standard Windows apps, console apps, WPF?
The best starting point is probably to learn pure ADO.NET since all of the other methods are built on top of it. Learn the basics before getting into LINQ, Entity Framework, etc.
I'd start here: http://www.csharp-station.com/Tutorials/AdoDotNet/Lesson01.aspx
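Whichever flavor you pick, the core pattern is the same: open a connection, create a command with parameters, execute it. Here is that insert/update/delete shape sketched with Python's built-in sqlite3 purely for illustration; ADO.NET's connection/command/parameter objects follow the same flow:

```python
import sqlite3

conn = sqlite3.connect("app.db")          # in ADO.NET: open a connection
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS people (id INTEGER PRIMARY KEY, name TEXT)")

# Always pass values as parameters, never by string concatenation.
cur.execute("INSERT INTO people (name) VALUES (?)", ("Alice",))
cur.execute("UPDATE people SET name = ? WHERE id = ?", ("Alicia", 1))
cur.execute("DELETE FROM people WHERE id = ?", (1,))

conn.commit()
conn.close()
```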
This may not be a pure programming question, but I'm looking for information about enCapsa. Do you know what it is? Have you ever used it? I'm reading some papers about it, but I can't really see how it works or what it can be used for in an IT company (which is what I am supposed to find out).
Basically, enCapsa is a shared data storage system, focused on providing a way to store any kind of data (even from heterogeneous data sources, such as differently designed DB tables) and then retrieve it through human-friendly queries, much like a search engine. It offers the possibility to upload data from anywhere (it's CSV based) and later download it for use wherever you need it.
The uses are many; think of it as a centralized DB accessible through the web, and they say it meets high security standards.
One useful way to employ this service is to store data there without the need to keep it synchronized across company computers.
Can you please point to alternative data storage tools and give good reasons to use them instead of good old relational databases? In my opinion, most applications rarely use the full power of SQL--it would be interesting to see how to build an SQL-free application.
Plain text files in a filesystem
Very simple to create and edit
Easy for users to manipulate with simple tools (e.g. text editors, grep, etc.)
Efficient storage of binary documents
XML or JSON files on disk
As above, but with a bit more ability to validate the structure.
Spreadsheet / CSV file
Very easy model for business users to understand
Subversion (or similar disk based version control system)
Very good support for versioning of data
Berkeley DB (basically, a disk-based hashtable; see the sketch after this list)
Very simple conceptually (just un-typed key/value)
Quite fast
No administration overhead
Supports transactions I believe
Amazon's Simple DB
Much like Berkeley DB I believe, but hosted
Google's App Engine Datastore
Hosted and highly scalable
Per document key-value storage (i.e. flexible data model)
CouchDB
Document focus
Simple storage of semi-structured / document based data
Native language collections (stored in memory or serialised on disk)
Very tight language integration
Custom (hand-written) storage engine
Potentially very high performance in the required use cases
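As a taste of the Berkeley DB style of storage mentioned above, here is a small sketch using Python's standard dbm module (a disk-based hashtable in the same family; the file name is arbitrary):

```python
import dbm

# Keys and values are raw bytes; there is no schema to administer.
with dbm.open("cache.db", "c") as db:      # "c" = create the file if missing
    db[b"user:42"] = b"Matt"
    print(db[b"user:42"])                  # b'Matt'
    del db[b"user:42"]
```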
I can't claim to know anything much about them, but you might also like to look into object database systems.
Matt Sheppard's answer is great (mod up), but I would take these factors into account when thinking about a spindle:
Structure : does it obviously break into pieces, or are you making tradeoffs?
Usage : how will the data be analyzed/retrieved/grokked?
Lifetime : how long is the data useful?
Size : how much data is there?
One particular advantage of CSV files over RDBMSes is that they are easy to condense and move around to practically any other machine. We do large data transfers, and everything's simple enough that we just use one big CSV file and script it with tools like rsync. To reduce repetition in big CSV files, you could use something like YAML. I'm not sure I'd store anything like JSON or XML unless you had significant relationship requirements.
As far as not-mentioned alternatives, don't discount Hadoop, which is an open source implementation of MapReduce. This should work well if you have a TON of loosely structured data that needs to be analyzed, and you want to be in a scenario where you can just add 10 more machines to handle data processing.
For example, I started trying to analyze performance data that was essentially timing numbers for different functions, logged across around 20 machines. After trying to stick everything in an RDBMS, I realized that I really don't need to query the data again once I've aggregated it, and it's only useful to me in its aggregated form. So, I keep the log files around, compressed, and leave the aggregated data in a DB.
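A toy version of that aggregate-then-discard approach (the "function,milliseconds" log format and directory layout here are made up):

```python
import gzip
from collections import defaultdict
from pathlib import Path

totals = defaultdict(lambda: [0, 0.0])     # function -> [call count, total ms]

for log in Path("logs").glob("*.csv.gz"):  # compressed logs kept on disk
    with gzip.open(log, "rt") as f:
        for line in f:
            func, ms = line.rstrip("\n").split(",")
            totals[func][0] += 1
            totals[func][1] += float(ms)

for func, (count, total_ms) in sorted(totals.items()):
    print(func, count, total_ms / count)   # only these rows go into the DB
```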
Note I'm more used to thinking with "big" sizes.
The filesystem's pretty handy for storing binary data, which never works amazingly well in relational databases.
Try Prevayler:
http://www.prevayler.org/wiki/
Prevayler is an alternative to an RDBMS. There is more info on the site.
If you don't need ACID, you probably don't need the overhead of an RDBMS. So, determine whether you need that first. Most of the non-RDBMS answers provided here do not provide ACID.
Custom (hand-written) storage engine / Potentially very high performance in the required use cases
If you have enormous data sets, instead of rolling your own, you might use HDF, the Hierarchical Data Format: http://www.hdfgroup.org/
From http://en.wikipedia.org/wiki/Hierarchical_Data_Format:
HDF supports several different data models, including multidimensional arrays, raster images, and tables.
It's also hierarchical like a file system, but the data is stored in one magic binary file.
HDF5 is a suite that makes possible the management of extremely large and complex data collections.
Think petabytes of NASA/JPL remote sensing data.
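A small sketch of the idea using the h5py Python bindings (assuming h5py and NumPy are installed; the file and dataset names are invented):

```python
import h5py
import numpy as np

# One "magic binary file" holding a directory-like hierarchy of arrays.
with h5py.File("mission.h5", "w") as f:
    grid = f.create_dataset("sensing/elevation", data=np.random.rand(1024, 1024))
    grid.attrs["units"] = "meters"             # metadata rides along with the data

with h5py.File("mission.h5", "r") as f:
    tile = f["sensing/elevation"][0:64, 0:64]  # read a slice, not the whole file
    print(tile.shape, f["sensing/elevation"].attrs["units"])
```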
G'day,
One case that I can think of is when the data you are modelling cannot be easily represented in a relational database.
One such example is the database used by mobile phone operators to monitor and control base stations for mobile telephone networks.
In almost all of these cases, an OO DB is used--either a commercial product or a self-rolled system that allows hierarchies of objects.
I've worked on a 3G monitoring application for a large company that will remain nameless, but whose logo is a red wine stain (-: , and they used such an OO DB to keep track of all the various attributes of individual cells within the network.
Interrogation of such DBs is done using proprietary techniques that are, usually, completely free from SQL.
HTH.
cheers,
Rob
Object databases are not relational databases. They can be really handy if you just want to stuff some objects in a database. They also support versioning and modifying classes for objects that already exist in the database. db4o is the first one that comes to mind.
In some cases (financial market data and process control, for example) you might need to use a real-time database rather than an RDBMS. See the Wikipedia article on real-time databases.
There was a RAD tool called JADE written a few years ago that has a built-in OODBMS. Earlier incarnations of the DB engine also supported Digitalk Smalltalk. If you want to sample application building using a non-RDBMS paradigm, this might be a start.
Other OODBMS products include Objectivity and GemStone (you will need to get VisualWorks Smalltalk to run the Smalltalk version, but there is also a Java version). There were also some open-source research projects in this space--EXODUS and its descendant SHORE come to mind.
Sadly, the concept seemed to die a death, probably due to the lack of a clearly visible standard and relatively poor ad-hoc query capability compared to SQL-based RDBMS systems.
An OODBMS is most suitable for applications with core data structures that are best represented as a graph of interconnected nodes. I used to say that the quintessential OODBMS application was a Multi-User Dungeon (MUD) where rooms would contain players' avatars and other objects.
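A toy illustration of that graph shape, with plain Python classes standing in for persistent OODB objects:

```python
class GameObject:
    def __init__(self, name):
        self.name = name
        self.contents = []                 # arbitrary nesting: the object graph

class Room(GameObject): pass
class Avatar(GameObject): pass

tavern = Room("Tavern")
alice = Avatar("Alice")
alice.contents.append(GameObject("lantern"))
tavern.contents.append(alice)

def describe(obj, depth=0):
    # Walk the graph by following references, not by joining tables.
    print("  " * depth + obj.name)
    for child in obj.contents:
        describe(child, depth + 1)

describe(tavern)   # an OODBMS persists this graph as-is; no join tables needed
```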
You can go a long way just using files stored in the file system. RDBMSes are getting better at handling blobs, but this can be a natural way to handle image data and the like, particularly if the queries are simple (enumerating and selecting individual items).
Other things that don't fit very well in an RDBMS are hierarchical data structures, and I'm guessing geospatial data and 3D models aren't that easy to work with either.
Services like Amazon S3 provide simpler storage models (key->value) that don't support SQL. Scalability is the key there.
Excel files can be useful too, particularly if users need to be able to manipulate the data in a familiar environment and building a full application to do that isn't feasible.
There are a large number of ways to store data - even "relational database" covers a range of alternatives, from a simple library of code that manipulates a local file (or files) as if it were a relational database on a single-user basis, through file-based systems that can handle multiple users, to a generous selection of serious "server"-based systems.
We use XML files a lot - you get well-structured data, nice tools for querying it, and the ability to make edits if appropriate; it's human-readable, and you don't have to worry about the workings of a db engine. This works well for stuff that's essentially read-only (in our case, more often than not generated from a db elsewhere) and also for single-user systems, where you can just load the data in and save it out as required (sketched below) - but you're creating opportunities for problems if you want multi-user editing, at least of a single file.
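That whole load-edit-save cycle is only a few lines; for example, with Python's standard ElementTree (the file and element names are hypothetical):

```python
import xml.etree.ElementTree as ET

tree = ET.parse("orders.xml")              # load the data in
root = tree.getroot()

for order in root.findall("order"):        # query with simple path expressions
    if order.get("status") == "pending":
        order.set("status", "shipped")     # edit in place

tree.write("orders.xml", encoding="utf-8") # save it out when done
```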
For us that's about it - we're either going to use something that speaks SQL (MS offers a set of tools, from a .DLL for single-user stuff all the way up to the enterprise server, that all speak the same SQL, with limitations at the lower end) or we're going to use XML as a format, because (for us) the verbosity is seldom an issue.
We don't currently have to manipulate binary data in our apps so that question doesn't arise.
Murph
One might want to consider the use of an LDAP server in the place of a traditional SQL database if the application data is heavily key/value oriented and hierarchical in nature.
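For a feel of the query model, here is a minimal lookup using the Python ldap3 package (the host, bind DN, and base DN are hypothetical):

```python
from ldap3 import Server, Connection

# Hypothetical host, bind DN, and base DN.
server = Server("ldap.example.com")
conn = Connection(server, "cn=reader,dc=example,dc=com", "secret", auto_bind=True)

# Hierarchical, key/value-oriented lookup instead of SQL.
conn.search("ou=people,dc=example,dc=com", "(objectClass=person)",
            attributes=["cn", "mail"])
for entry in conn.entries:
    print(entry.cn, entry.mail)
```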
BTree files are often much faster than relational databases. SQLite contains within it a BTree library which is in the public domain (as in genuinely 'public domain', not using the term loosely).
Frankly though, if I wanted a multi-user system I would need a lot of persuading not to use a decent server relational database.
Full-text databases, which can be queried with proximity operators such as "within 10 words of," etc.
Relational databases are an ideal business tool for many purposes - easy enough to understand and design, fast enough, adequate even when they aren't designed and optimized by a genius who could "use the full power," etc.
But some business purposes require full-text indexing, which relational engines either don't provide or tack on as an afterthought. In particular, the legal and medical fields have large swaths of unstructured text to store and wade through.
Also:
* Embedded scenarios - where you usually need something smaller than a full-fledged RDBMS. db4o is an ODB that can easily be used in such a case.
* Rapid or proof-of-concept development - where you wish to focus on the business logic and not worry about the persistence layer
The CAP theorem explains it succinctly. SQL mainly provides "Strong Consistency: all clients see the same view, even in the presence of updates".
K.I.S.S: Keep It Small and Simple
I would offer an RDBMS :)
If you do not want to have troubles with setup/administration, go for SQLite.
A built-in RDBMS with full SQL support. It even allows you to store any type of data in any column.
The main advantage over, for example, a log file: if you have a huge one, how are you going to search it? With a SQL engine you just create an index and speed the operation up dramatically.
About full-text search: SQLite has modules for full-text search too.
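For example, the FTS5 module (bundled with most SQLite builds), driven from Python's sqlite3 here just for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(title, body)")
conn.execute("INSERT INTO notes VALUES (?, ?)",
             ("groceries", "buy milk and fresh bread"))

# Full-text query; FTS5 also supports NEAR(), prefixes, and boolean operators.
for row in conn.execute("SELECT title FROM notes WHERE notes MATCH 'bread'"):
    print(row[0])
```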
Just enjoy a nice, standard interface to your data :)
One good reason not to use a relational database would be when you have a massive data set and want to do massively parallel and distributed processing on the data. The Google web index would be a perfect example of such a case.
Hadoop also has an implementation of the Google File System called the Hadoop Distributed File System.
I would strongly recommend Lua as an alternative to SQLite-style data storage.
Because:
The language was designed as a data description language to begin with
The syntax is human readable (XML is not)
One can compile Lua chunks to binary, for added performance
This is the "native language collection" option of the accepted answer. If you're using C/C++ as the application level, it is perfectly reasonable to throw in the Lua engine (100kB of binary) just for the sake of reading configs/data or writing them out.