I want to build a Windows application (WPF) that uses RavenDB Embedded and supports using a USB key as the location of the central database.
When the USB key is not present, the application will use a local store.
When the USB Key is present, the application will use the store on the key as the main store.
Of course, when the USB key is present, the data between the two stores is merged/synced.
Is there a "known" pattern for doing this? Is there out-of-the-box support for "merging" data between two stores? Does RavenDB Embedded support multiple store databases?
Per the documentation, Embedded mode does not support multiple databases. However, you can manage multiple databases yourself by creating two separate EmbeddableDocumentStore instances with different DataDirectory paths.
You can enable Embedded+HTTP mode to replicate between the two instances, as long as you put them on different HTTP ports.
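A minimal sketch of that setup, assuming the classic RavenDB client where EmbeddableDocumentStore lives in Raven.Client.Embedded; the paths, ports, and the crude presence check are all placeholders:

// Sketch only: assumes the classic RavenDB EmbeddableDocumentStore API
// (Raven.Client.Embedded). Paths and ports are illustrative placeholders.
using System.IO;
using Raven.Client.Embedded;

var localStore = new EmbeddableDocumentStore
{
    DataDirectory = @"C:\MyApp\Data",   // always-available local store
    UseEmbeddedHttpServer = true        // expose HTTP so replication can reach it
};
localStore.Configuration.Port = 8081;
localStore.Initialize();

if (Directory.Exists(@"E:\MyAppData"))  // crude USB-key presence check
{
    var usbStore = new EmbeddableDocumentStore
    {
        DataDirectory = @"E:\MyAppData",    // store living on the USB key
        UseEmbeddedHttpServer = true
    };
    usbStore.Configuration.Port = 8082;     // must differ from the local store's port
    usbStore.Initialize();
}

Replication between the two would then be set up through Raven's replication bundle, pointing each instance at the other's HTTP endpoint; treat this as a starting point rather than a tested recipe.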
That said, I don't think it's a very good idea to have Raven use a USB key directly. The concerns are:
What happens if the key is removed in the middle of a write operation? Raven is supposed to handle shutdowns well, but my guess is that you will get some exceptions thrown, and I don't think it was designed with that scenario in mind.
Disk I/O may not be sufficient for acceptable performance, though that depends on the rated speed of the USB stick. Try it and let us know what your performance is like.
How can a single instance of Redis be used in a multi-tenant environment, meaning multiple different applications using the same Redis instance?
Suppose I have two apps, one a Baking App and the other a Delivery App. Both apps will use the same Redis instance, and both will save similar keys with similar key patterns (e.g. userid:uuid -> johnsmith). Obviously, using the same Redis will lead to collisions. Is there a way to "namespace" the database so that even identical keys are isolated from each other, allowing multiple apps to use the same Redis instance concurrently?
The same should apply to Redis search: search and indexing should be isolated per app, so a search in the Delivery App namespace would not fetch anything from the Baking App namespace.
How can this be achieved?
So there are multiple things that you can do:
You can prefix the keys with the app name, e.g. app1:userid:uuid (see the sketch after this list).
You can use the separate in-memory databases provided by Redis. Redis supports up to 16 numbered databases by default; you can store keys for different apps in different databases and connect to the respective database to fetch them.
You can use both of the above methods.
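A rough sketch of both options using the StackExchange.Redis client; the app names and key shapes are made up for illustration:

// Sketch using StackExchange.Redis; app names and keys are illustrative.
using StackExchange.Redis;

var mux = ConnectionMultiplexer.Connect("localhost:6379");

// Option 1: prefix keys with the app name.
IDatabase db = mux.GetDatabase();
db.StringSet("app1:userid:1f6c", "johnsmith");
db.StringSet("app2:userid:1f6c", "janedoe");    // same logical key, no collision

// Option 2: give each app its own numbered database (0-15 by default).
IDatabase bakingDb = mux.GetDatabase(0);
IDatabase deliveryDb = mux.GetDatabase(1);
bakingDb.StringSet("userid:1f6c", "johnsmith");
deliveryDb.StringSet("userid:1f6c", "janedoe"); // isolated from DB 0

Note that the numbered-database approach does not work with Redis Cluster, which only supports database 0, so the prefix convention is the more portable of the two.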
To improve security so that the Apps cannot access other App's data:
Implement Redis ACLs: if you are using Redis 6+, you can leverage ACLs (Access Control Lists). You can create users with passwords for each app and pass those credentials when making the Redis connection. You can even restrict the permissions/commands available to each user (see the sketch after this list).
Data in a different DB cannot be accessed through the current connection, i.e. if you connect to DB 0, you cannot fetch data from DB 1 (although note that any client can still issue SELECT to switch databases unless ACLs forbid it).
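And a hedged sketch of the ACL part, issuing the raw ACL SETUSER commands via Execute; user names, passwords, and key patterns are placeholders for whatever your Redis 6+ setup needs:

// Sketch: one Redis 6+ ACL user per app, each restricted to its own
// key prefix. User names and passwords are placeholders.
using StackExchange.Redis;

var mux = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true");
IDatabase db = mux.GetDatabase();

// Equivalent to: ACL SETUSER app1 on >app1-secret ~app1:* +@all
db.Execute("ACL", "SETUSER", "app1", "on", ">app1-secret", "~app1:*", "+@all");
db.Execute("ACL", "SETUSER", "app2", "on", ">app2-secret", "~app2:*", "+@all");

// Each app then connects with its own credentials:
var app1 = ConnectionMultiplexer.Connect("localhost:6379,user=app1,password=app1-secret");

Unlike a mere prefix convention, the key-pattern ACLs actually enforce the isolation on the server side.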
I currently have several projects that each run against their own Redis instance (for example, three different ASP.NET applications on different servers, each with its own Redis server).
We've been asked to virtualize and remove redundant instances, so I was wondering what happens if I have only one Redis server and all three ASP.NET applications point to the same Redis instance.
For the application keys I think there's no problem; I can prefix my own keys with the application name, for example "fi-agents", "ga-agents", and so on... but I was wondering what happens with the auth sessions?
As far as I've read, the prefix is used internally and can't be used by the end user to separate data... is it enough to just use different DBs?
Thanks
Generally, and unless there are truly compelling reasons, you don't want to mix different applications and their data in the same database. Yes, it lowers ops costs initially, but it can quickly deteriorate into a scaling and performance nightmare. This, I believe, is true for any database.
Specifically with Redis: technically, yes, you could use a key prefix or the shared/numbered database approach. I'm not sure what you meant by "auth" sessions, but you can probably apply the same approach to them. But you really shouldn't... since Redis is a single-threaded process, you can end up in a situation where one of the apps blocks the other two. Since Redis by itself is so lightweight, just spin up dedicated servers, one per app, even in the same VM if you must. You can read more background on why you don't want to opt for the shared approach here: https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances
On my current project I actively use Redis for various purposes. There are two Redis databases for the current application:
The first one contains purely temporary data: how many users are online, who is online, various admin counters. This DB is cleared by a start-up script before the application starts.
The second database is used for persistent data like users' ratings, users' friends, etc.
Everything seems to be correct and everybody is happy.
However, when I started implementing new functionality in my application, I discovered that I need to intersect the set of a user's friends with the set of online users. These sets are stored in different Redis databases, and I haven't found any way to do this in Redis, short of changing the application architecture and moving all keys into one namespace (database).
Is there actually any way to run a command in Redis using data from multiple databases? Or is my use of Redis wrong, and do I have to fix the system architecture?
There is not. There is, however, a command that makes it easy to move keys to another DB:
http://redis.io/commands/move
If you move all keys to one DB, make sure you don't have any key clashes! You could suffix or prefix the keys from the temp DB to be absolutely sure. MOVE will do nothing if the key already exists in the target DB, so make sure you act on a '0' reply.
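For example, a sketch of the move-and-verify step via StackExchange.Redis's KeyMove wrapper (the key names are illustrative):

// Sketch: MOVE a key from the temp DB (0) to the persistent DB (1),
// acting on the failure case as advised above. Key names are illustrative.
using StackExchange.Redis;

var mux = ConnectionMultiplexer.Connect("localhost:6379");
IDatabase tempDb = mux.GetDatabase(0);

// KeyMove wraps MOVE: it returns false (Redis reply '0') when the key
// already exists in the target DB.
if (!tempDb.KeyMove("online:users", database: 1))
{
    // Key clash: rename with a prefix first, then retry the move.
    tempDb.KeyRename("online:users", "temp:online:users");
    tempDb.KeyMove("temp:online:users", database: 1);
}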
Using multiple DBs is definitely not a good idea:
A quote from Salvatore Sanfilippo (the creator of Redis):
I understand how this can be useful, but unfortunately I consider Redis multiple database errors my worst decision in Redis design at all... without any kind of real gain, it makes the internals a lot more complex. The reality is that databases don't scale well for a number of reason, like active expire of keys and VM. If the DB selection can be performed with a string I can see this feature being used as a scalable O(1) dictionary layer, that instead it is not. With DB numbers, with a default of a few DBs, we are communication better what this feature is and how can be used I think. I hope that at some point we can drop the multiple DBs support at all, but I think it is probably too late as there is a number of people relying on this feature for their work.
https://groups.google.com/forum/#!msg/redis-db/vS5wX8X4Cjg/8ounBXitG4sJ
I'm creating a mobile app, and it requires an API service backend to get/put information for each user. I'll be developing the web service on ServiceStack, but I was wondering about the storage. I love the idea of a fast in-memory caching system like Redis, but I have a few questions:
I created a sample schema of what my data store should look like. Does this seem like a good case for using Redis as opposed to a MySQL DB or something like that?
(schema: http://www.miles3.com/uploads/redis.png)
How difficult is the setup for persisting the Redis store to disk, or is it kind of built in when you do writes to the store? (I'm a newbie at this NoSQL stuff.)
I currently have my setup on AWS using a Linux micro instance (because it's free for a year). I know many factors go into this answer, but in general, will this be enough for my web service and Redis? Since Redis is in-memory, will that be enough? I guess if my mobile app skyrockets (hey, we can dream, right?) then I'll start hitting the ceiling of the instance.
What to think about when designing a NoSQL Redis application
1) To develop correctly in Redis, you should think more about how you would structure the relationships in your C# program, i.e. with the C# collection classes, rather than in terms of a relational model meant for an RDBMS. The better mindset is to think about data storage as in a document database rather than RDBMS tables. Essentially everything gets blobbed in Redis via a key (index), so you just need to work out which entities are primary (i.e. aggregate roots), each of which gets kept in its own 'key namespace', and which are non-primary entities, i.e. simply metadata that should just get persisted with its parent entity.
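For instance, a minimal sketch with the ServiceStack.Redis typed client, treating User as an aggregate root; the POCO shape here is invented for illustration:

// Sketch: a made-up aggregate root blobbed under its own key namespace
// via the ServiceStack.Redis typed client.
using System.Collections.Generic;
using ServiceStack.Redis;

using (var redis = new RedisClient("localhost"))
{
    var redisUsers = redis.As<User>();      // typed client, keys like 'urn:user:1'
    redisUsers.Store(new User { Id = 1, Name = "johnsmith",
        Emails = new List<string> { "john@example.com" } });
    var user = redisUsers.GetById(1);       // the whole aggregate comes back
}

// A made-up aggregate root: the Emails metadata travels with its parent.
public class User
{
    public long Id { get; set; }            // convention: 'Id' is the primary key
    public string Name { get; set; }
    public List<string> Emails { get; set; }
}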
Examples of Redis as a primary Data Store
Here is a good article that walks through creating a simple blogging application using Redis:
http://www.servicestack.net/docs/redis-client/designing-nosql-database
You can also look at the source code of RedisStackOverflow for another real world example using Redis.
Basically you would need to store and fetch the items of each type separately.
var redisUsers = redis.As<User>();                 // typed client for the User entity
var user = redisUsers.GetById(1);                  // fetch the blobbed User with Id == 1
var userIsWatching = redisUsers.GetRelatedEntities<Watching>(user.Id); // child entities kept under the User
The way you store relationships between entities is by making use of Redis sets, e.g. you can store the Users/Watchers relationship conceptually with:
SET["ids:User>Watcher:{UserId}"] = [{watcherId1},{watcherId2},...]
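Concretely, with the ServiceStack.Redis client that conceptual set could be maintained like this (the ids are placeholders; the key shape mirrors the notation above):

// Sketch: maintain the Users/Watchers relationship as a Redis set;
// ids are placeholders and the key shape mirrors the notation above.
using ServiceStack.Redis;

long userId = 1, watcherId1 = 5, watcherId2 = 9;

using (var redis = new RedisClient("localhost"))
{
    string key = "ids:User>Watcher:" + userId;       // e.g. ids:User>Watcher:1
    redis.AddItemToSet(key, watcherId1.ToString());
    redis.AddItemToSet(key, watcherId2.ToString());
    var watcherIds = redis.GetAllItemsFromSet(key);  // all watchers of this user
}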
Redis is schema-less and idempotent
Storing ids in Redis sets is idempotent, i.e. you can add watcherId1 to the same set multiple times and it will only ever have one occurrence of it. This is nice because it means you don't ever need to check the existence of the relationship and can freely keep adding related ids as if they'd never existed.
Related: writing to or reading from a Redis collection (e.g. a list) that does not exist is the same as working with an empty collection, i.e. a list gets created on the fly when you add an item to it, whilst accessing a non-existent list simply returns 0 results. This is friction-free and a productivity win, since you don't have to define your schemas up front in order to use them. Should you need to, though, Redis provides the EXISTS operation to determine whether a key exists and a TYPE operation to determine its type.
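A quick sketch of both behaviours, i.e. idempotent set adds and collections springing into existence on first write:

// Sketch: idempotent set adds and on-the-fly collection creation.
using ServiceStack.Redis;

using (var redis = new RedisClient("localhost"))
{
    redis.AddItemToSet("ids:User>Watcher:1", "42");
    redis.AddItemToSet("ids:User>Watcher:1", "42");        // no-op: already a member
    var count = redis.GetSetCount("ids:User>Watcher:1");   // 1, not 2

    var none = redis.GetAllItemsFromList("no:such:list");  // empty list, no error
    bool exists = redis.ContainsKey("ids:User>Watcher:1"); // EXISTS under the hood
}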
Create your relationships/indexes on your writes
One thing to remember is that because there are no implicit indexes in Redis, you will generally need to set up the indexes/relationships needed for reading yourself, during your writes. Basically you need to think about all your query requirements up front and ensure you set up the necessary relationships at write time. The RedisStackOverflow source code above is a good example of this.
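As an illustration, a write that also maintains the set it will later be queried by might look like the following; the Answer entity and key names are made up:

// Sketch: maintain the read-side index at write time. The Answer entity
// and key names are made up.
using ServiceStack.Redis;

using (var redis = new RedisClient("localhost"))
{
    var answer = new Answer { Id = 7, QuestionId = 3, Text = "Use a set." };
    redis.As<Answer>().Store(answer);                  // write the entity blob...
    redis.AddItemToSet("ids:Question>Answer:3", "7");  // ...and its index, together

    // Read side: the index set drives the query.
    var answerIds = redis.GetAllItemsFromSet("ids:Question>Answer:3");
    var answers = redis.As<Answer>().GetByIds(answerIds);
}

public class Answer
{
    public long Id { get; set; }
    public long QuestionId { get; set; }
    public string Text { get; set; }
}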
Note: the ServiceStack.Redis C# provider assumes you have a unique field called Id that is its primary key. You can configure it to use a different field with the ModelConfig.Id() config mapping.
Redis Persistence
2) Redis supports two persistence modes out of the box: RDB and Append Only File (AOF). RDB writes periodic snapshots, whilst the Append Only File acts like a transaction journal recording all the changes in between snapshots. I recommend enabling both until you're comfortable with what each does and what your application needs. You can read all about Redis persistence at http://redis.io/topics/persistence.
Note: Redis also supports trivial replication, which you can read more about at: http://redis.io/topics/replication
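If you'd rather flip these settings from code than edit redis.conf, here's a sketch using StackExchange.Redis's admin API; the directives are standard Redis config keys, but treat the snippet as illustrative:

// Sketch: enable AOF alongside RDB snapshots at runtime via CONFIG SET.
// Requires an admin-enabled connection; the settings are standard
// redis.conf directives.
using StackExchange.Redis;

var mux = ConnectionMultiplexer.Connect("localhost:6379,allowAdmin=true");
IServer server = mux.GetServer("localhost", 6379);

server.ConfigSet("appendonly", "yes");        // turn on the Append Only File
server.ConfigSet("appendfsync", "everysec");  // fsync the AOF once per second
// RDB snapshotting stays on via the usual 'save' rules in redis.conf.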
Redis loves RAM
3) Since Redis operates predominantly in memory, the most important requirement is having enough RAM to hold your entire dataset in memory, plus a buffer for when it snapshots to disk. Redis is very efficient, so even a small AWS instance will be able to handle a lot of load; what you want to watch is having enough RAM.
Visualizing your data with the Redis Admin UI
Finally, if you're using the ServiceStack C# Redis Client, I recommend installing the Redis Admin UI, which provides a nice visual view of your entities. You can see a live demo of it at:
http://servicestack.net/RedisAdminUI/AjaxClient/
I am building out a solution that will be deployed in multiple data centers in multiple regions around the world, with each data center having a replicated copy of data actively updated in each region. I will have a combination of multiple databases and file systems in each data center, the state of which must be kept consistent (within a data center). These multiple repositories will be fronted by a SOA service tier.
I can tolerate some latency in the replication, and need to allow for regions to be off-line, and then catch up later.
Given the multiple back-end repositories of data, I can't easily rely on independent replication solutions for each one to maintain a consistent state. I am thus led to implementing replication at the application layer, by replicating the SOA requests in some manner. I'll need to make sure that replication loops don't occur and that last-writer conditions are sorted out correctly.
In your experience, what is the best pattern for solving this problem, and are there good products (free or otherwise) that should be investigated?
Lotus/Domino is your answer. I've been working with it for ten years and it's exactly what you need. It may not be trendy (a perception I would challenge), but it's powerful, adaptable, and very secure. The latest version, R8, is the best yet.
You should definitely consider IBM Lotus Domino. A Lotus Notes database can replicate between sites on a predefined schedule. Replication in Notes/Domino is a very powerful feature and enables full replication of data between sites. Even if a server is unavailable, the next time it connects it will simply replicate and get back in sync.
As far as the SOA service tier goes, you could use Domino Designer to write a web service. Since Notes/Domino 7.5.x (I believe), Domino has been able to provision and consume web services.
As others have advised, I will also recommend Lotus Notes/Domino. 8.5 is a really powerful application development platform.
You don't give enough specifics to be certain of your needs, but I think you should check out SQL Server merge replication. It allows for asynchronous replication of multiple databases with full conflict resolution. You will need to designate a global master, and all the other databases will replicate to that one, but all the database instances are fully functional (read/write), so you can schedule replication at whatever intervals suit you. If any region goes offline, it can catch up later with no issues; if the master goes offline, everyone will work independently until replication can resume.
I would be interested to know of other solutions this flexible (apart from Lotus Notes/Domino, of course, which is not very trendy these days).
I think your answer is going to have to be based on a pub/sub architecture. I am assuming that you have reliable messaging between your data centers, so that you can rely on published updates being received eventually. If all of your access to the data repositories is via services, you can add an event notification to the orchestration of each of your update services that notifies all interested data centers of the event. Ideally the master database is the only one that sends out these updates. If so, you can exclude routing the notifications to the node that generated them in the first place, thus avoiding update loops.
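A bare-bones sketch of that origin-filtering idea; every type and name here is invented for illustration:

// Sketch: replicate SOA updates via pub/sub, skipping events that
// originated locally to avoid replication loops. All names are invented.
using System;

public record UpdateEvent(Guid EventId, string OriginNode, string Payload);

public class ReplicationSubscriber
{
    private readonly string _localNode;    // e.g. "us-east-1"

    public ReplicationSubscriber(string localNode) => _localNode = localNode;

    public void OnUpdate(UpdateEvent evt)
    {
        if (evt.OriginNode == _localNode)
            return;                        // our own event echoed back: ignore it

        ApplyLocally(evt);                 // re-issue the SOA request locally
    }

    private void ApplyLocally(UpdateEvent evt)
    {
        // Apply against the local databases/file systems; last-writer-wins
        // or other conflict resolution would hook in here.
    }
}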