I am currently running IDS4 as a single instance in one region (single database for configuration and operational store). I now have to distribute the installation across two regions so that services/users in region A access IDS in region A and services/users in region B access IDS in region B.
Both instances should access the same datastore, but IDS in region B should not have to make cross-region read queries to the database in region A.
We use Azure SQL Server and the geo-replication feature which offers a single writable instance (either in region A or B) and multiple readable instances. We pointed IDS in region B to a read-only instance in the same region, but this does not work because IDS has to write operational data like persistent grants.
Is there a recommended architecture to achieve this or do you have any experience implementing a multi-region and load-balanced IDS deployment? Is it possible to configure IDS to use a different database for write operations and the database in the same region for read operations?
Instead of geo-replication you can use Azure SQL Data Sync to have writable replicas of Azure SQL Database: you define one of them as the hub database and the others as member databases. Synchronization between all databases can be configured bidirectionally, so every database is updateable. You can start configuring Azure SQL Data Sync with this documentation.
It is unlikely that you will find a recommended architecture for a scenario like this, because so much of the problem lies in your business domain. Also, there is nothing out of the box in the Identity Server 4 library or its supporting libraries that would satisfy your criteria.
Having said that, I've had a similar requirement (unrelated to Identity Server 4, but with essentially identical functional requirements), and it should be possible to adapt the same idea to your case.
Firstly, as you've said, your only problem is that out of the box the Identity Server 4 EF package's PersistedGrantStore uses a single IPersistedGrantDbContext for both writes and reads. To solve this, you need to create your own implementation of IPersistedGrantStore. In that custom implementation you can use two different DbContext types: one created with the connection string of the single writable database instance and used only by the interface methods that write, and another created with the connection string of the read-only instance and used only by the read methods.
The basic idea is sketched below:
public class MyCustomPersistedGrantStore : IPersistedGrantStore
{
    private readonly WriteOnlyPersistedGrantDbContext _writeContext;
    private readonly ReadOnlyPersistedGrantDbContext _readContext;

    public MyCustomPersistedGrantStore(
        WriteOnlyPersistedGrantDbContext writeContext,
        ReadOnlyPersistedGrantDbContext readContext)
    {
        _writeContext = writeContext;
        _readContext = readContext;
    }

    public Task StoreAsync(PersistedGrant token)
    {
        // Use _writeContext (writable instance) to implement the storage logic
    }

    public Task<PersistedGrant> GetAsync(string key)
    {
        // Use _readContext (read-only replica) to implement the read logic
    }

    // ...other interface methods (GetAllAsync, RemoveAsync, RemoveAllAsync) follow the same split
}
All you need to do after implementing your custom version is to register your implementation of IPersistedGrantStore, as well as the two DbContexts, in the DI system.
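As a rough idea of the wiring (assuming EF Core; the two context types are the hypothetical ones from the sketch above, and the connection string names are placeholders of my own choosing, not IdentityServer4 conventions):

// In ConfigureServices: point each context at its own connection string.
services.AddDbContext<WriteOnlyPersistedGrantDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("OperationalWrite")));

services.AddDbContext<ReadOnlyPersistedGrantDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("OperationalReadReplica")));

// Register the custom store so IdentityServer resolves it instead of the EF default.
services.AddTransient<IPersistedGrantStore, MyCustomPersistedGrantStore>();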
Lastly, it is worth noting that if you stop using .AddOperationalStore(...config) you also forfeit the TokenCleanupHostService, so you would need to implement that as well.
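For illustration only, a minimal replacement could be a hosted service that periodically removes expired grants through the write context. This is a sketch, not the actual TokenCleanup implementation, and it assumes the hypothetical write context above exposes a PersistedGrants DbSet:

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class ExpiredGrantCleanupService : BackgroundService
{
    private readonly IServiceProvider _services;

    public ExpiredGrantCleanupService(IServiceProvider services)
    {
        _services = services;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            using (var scope = _services.CreateScope())
            {
                // Delete grants that have passed their expiration, using the writable database.
                var db = scope.ServiceProvider.GetRequiredService<WriteOnlyPersistedGrantDbContext>();
                var expired = db.PersistedGrants.Where(g => g.Expiration < DateTime.UtcNow);
                db.PersistedGrants.RemoveRange(expired);
                await db.SaveChangesAsync(stoppingToken);
            }

            try { await Task.Delay(TimeSpan.FromHours(1), stoppingToken); }
            catch (TaskCanceledException) { /* host is shutting down */ }
        }
    }
}

Register it with services.AddHostedService<ExpiredGrantCleanupService>() alongside the rest of the wiring above.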
I'm in the process of hammering out the kinks in my own private fork of IdentityServer4.Contrib.CosmosDB. If you take a look at the (very unfinished at the moment) source code, you'll get a rough idea of how to implement your own DB provider that gracefully handles such a requirement. You may also want to consider a NoSQL datastore for Identity Server, as I believe it's better 'optimized' for multi-region reads/writes than SQL Server.
A simple question about scalability. I have been studying scalability and I think I understand the basic concept behind it. You use an orchestrator like Kubernetes to manage the automatic scaling of a system, so as a particular microservice sees increased demand, the orchestrator creates new instances of it to handle the load. Now, in our case, we are building a microservice architecture similar to the example in Microsoft's "eShop On Containers":
Here each microservice has its own database to manage, just like in our application. My question is: when scaling this system up by creating new instances of a certain microservice, say the "Ordering microservice" in the example above, wouldn't that create a new set of databases? In the case of our application, we are using SQLite, so each microservice has its own copy of the database. I would assume that in order to be able to scale such a system up, each microservice would need to connect to an external SQL Server. But if that were the case, wouldn't that be a bottleneck? I mean, having multiple instances of a microservice to serve more demand for a particular service, but with all those instances still accessing a single database server?
In the case of our application, we are using SQLite, so each microservice has its own copy of the database.
One of the most important aspects of services that scale out is that they are stateless - services on Kubernetes should be designed according to the 12-factor principles. This means that service instances cannot have their own copy of the database, unless it is a cache.
I would assume that in order to be able to scale such a system up, each microservice would need to connect to an external SQL Server.
Yes - if you want to be able to scale out, you need a database that lives outside the instances and is shared between them.
But if that were the case, wouldn't that be a bottleneck?
This depends very much on how you design your system. Comparing microservices to monoliths: a monolith typically uses one big database for everything, whereas with microservices it is easier to use multiple different databases, so it should be much easier to scale out the database this way.
I mean, having multiple instances of a microservice to serve more demand for a particular service, but with all those instances still accessing a single database server?
There are also many ways to scale a database system itself, e.g. caching read operations (but be careful). This is a large topic in its own right and depends very much on what you do and how you do it.
I have a few small tables saved in Table Storage that I only read from.
When my service starts, I'd like to read all the tables, save the data in a data structure (i.e. a List), and read from that List from then on.
Is there a way to do that, or must I read from the Table Storage each time I need data?
If there is a way, where should the List be declared, and where should it be initialized?
Thanks.
Azure Cache may be the best route, but there is an obvious cost.
Could you declare the WCF service as a singleton and store the data as a static property?
You could use the Windows Azure Cache service to store the data. See http://www.windowsazure.com/en-us/home/tour/caching/
If your list is not too big, you could use the Windows Azure caching component http://www.windowsazure.com/en-us/home/tour/caching/ . During the initialization of your service, read the information from your tables and store it there. You are also asking where the list should be declared and initialized. Are you also hosting your service on Windows Azure? Is this a web service running on IIS, or a Windows service? Are you using WCF to expose your service?
I see others are suggesting static properties (a good choice) and Azure Cache. In any case it is good to cache the data if it is not updated often, rather than reading it from Table Storage every time.
I want to give my two cents:
I would not use Azure Cache if the data is small enough (1 MB is small enough for me). A static property would do the job. But there is also something new in .NET 4.0 that is obviously missing from most programmers' view: the System.Runtime.Caching namespace. I haven't personally used it yet, but it seems good for small local caches. You could use the MemoryCache object and store your data in memory. And, of course, program against it like any other type of cache - in the getter of a property, check whether the data exists in the cache. If it exists, return it. If it does not exist, retrieve it from the tables, store it in the cache, and then return it.
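A minimal sketch of that read-through pattern with System.Runtime.Caching (the Customer type and the Table Storage call are placeholders, not something from your code):

using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class Customer
{
    public string Name { get; set; }
}

public static class CustomerLookup
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static List<Customer> Customers
    {
        get
        {
            // Return the cached copy if it has already been loaded.
            var cached = Cache.Get("customers") as List<Customer>;
            if (cached != null)
                return cached;

            // Not cached yet: read once from Table Storage and keep it for an hour.
            var fromStorage = LoadCustomersFromTableStorage();
            Cache.Set("customers", fromStorage,
                new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.UtcNow.AddHours(1) });
            return fromStorage;
        }
    }

    private static List<Customer> LoadCustomersFromTableStorage()
    {
        // Placeholder for the actual Table Storage query.
        return new List<Customer>();
    }
}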
I'm creating a mobile app and it requires an API service backend to get/put information for each user. I'll be developing the web service on ServiceStack, but was wondering about the storage. I love the idea of a fast in-memory caching system like Redis, but I have a few questions:
I created a sample schema of what my data store should look like. Does this seems like it's a good case for using Redis as opposed to a MySQL DB or something like that?
Sample schema: http://www.miles3.com/uploads/redis.png
How difficult is the setup for persisting the Redis store to disk or is it kind of built-in when you do writes to the store? (I'm a newbie on this NoSQL stuff)
I currently have my setup on AWS using a Linux micro instance (because it's free for a year). I know many factors go into this answer, but in general will this be enough for my web service and Redis? Since Redis is in-memory will that be enough? I guess if my mobile app skyrockets (hey, we can dream right?) then I'll start hitting the ceiling of the instance.
What to think about when designing a NoSQL Redis application
1) To develop correctly for Redis you should think more about how you would structure the relationships in your C# program, i.e. with the C# collection classes, rather than in terms of a relational model meant for an RDBMS. The better mindset is to think of data storage like a document database rather than RDBMS tables. Essentially everything gets blobbed in Redis via a key (index), so you just need to work out which of your entities are primary entities (i.e. aggregate roots), which get kept in their own 'key namespace', and which are non-primary entities, i.e. simply metadata that should just get persisted with its parent entity.
Examples of Redis as a primary Data Store
Here is a good article that walks through creating a simple blogging application using Redis:
http://www.servicestack.net/docs/redis-client/designing-nosql-database
You can also look at the source code of RedisStackOverflow for another real world example using Redis.
Basically you would need to store and fetch the items of each type separately.
var redisUsers = redis.As<User>();                                      // typed client for the User entity
var user = redisUsers.GetById(1);                                       // fetch a single User by its Id
var userIsWatching = redisUsers.GetRelatedEntities<Watching>(user.Id);  // fetch the entities related to that User
The way you store relationships between entities is by making use of Redis's Sets, e.g. you can store the Users/Watchers relationship conceptually with:
SET["ids:User>Watcher:{UserId}"] = [{watcherId1},{watcherId2},...]
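With the ServiceStack.Redis client, maintaining that set yourself could look roughly like this (the key simply mirrors the convention above; user and the watcher id are assumed to exist already):

var setKey = "ids:User>Watcher:" + user.Id;
redis.AddItemToSet(setKey, watcherId1.ToString());
redis.AddItemToSet(setKey, watcherId1.ToString());   // adding the same id twice still stores it only once
var watcherIds = redis.GetAllItemsFromSet(setKey);   // HashSet<string> of watcher ids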
Redis is schema-less and idempotent
Storing ids in Redis sets is idempotent, i.e. you can add watcherId1 to the same set multiple times and it will only ever have one occurrence. This is nice because it means you never need to check whether the relationship already exists and can freely keep adding related ids as if they'd never existed.
Related: writing to or reading from a Redis collection (e.g. a List) that does not exist is the same as working with an empty collection, i.e. a list gets created on the fly when you add an item to it, whilst accessing a non-existent list simply returns 0 results. This is a friction-free productivity win since you don't have to define your schemas up front in order to use them. Should you need to, Redis provides the EXISTS operation to determine whether a key exists and a TYPE operation to determine its type.
Create your relationships/indexes on your writes
One thing to remember is that because there are no implicit indexes in Redis, you will generally need to set up the indexes/relationships needed for reading yourself during your writes. Basically you need to think about all your query requirements up front and ensure you set up the necessary relationships at write time. The RedisStackOverflow source code mentioned above is a good example of this.
Note: the ServiceStack.Redis C# provider assumes you have a unique field called Id that is its primary key. You can configure it to use a different field with the ModelConfig.Id() config mapping.
Redis Persistence
2) Redis supports two persistence modes out of the box: RDB and Append Only File (AOF). RDB writes routine snapshots, whilst the Append Only File acts like a transaction journal recording all the changes in between snapshots - I recommend enabling both until you're comfortable with what each does and what your application needs. You can read all about Redis persistence at http://redis.io/topics/persistence.
Note: Redis also supports trivial replication, which you can read more about at http://redis.io/topics/replication
Redis loves RAM
3) Since Redis operates predominantly in memory, the most important resource is having enough RAM to hold your entire dataset, plus a buffer for when it snapshots to disk. Redis is very efficient, so even a small AWS instance will be able to handle a lot of load - what you want to watch out for is having enough RAM.
Visualizing your data with the Redis Admin UI
Finally if you're using the ServiceStack C# Redis Client I recommend installing the Redis Admin UI which provides a nice visual view of your entities. You can see a live demo of it at:
http://servicestack.net/RedisAdminUI/AjaxClient/
This is an issue that I have struggled with in a number of systems, but this one is a good example. It arises when one system consumes WCF services from another system, each system has its own database, but there are relationships between the two databases.
We have a central database that holds a record of all documents in the company. This database includes Document and Folder tables and mimics a Windows file structure. NHibernate takes care of data access, a domain layer handles logic (validating filenames, no identical filenames in the same folder, etc.) and a service layer sits on top of that, with services named 'CreateDocument(byte[])', 'RenameDocument(id, newName)', 'SearchDocuments(filename, filesize, createdDate)' etc. These services are exposed with WCF.
An HR system consumes these services. The HR system has a separate database with foreign keys into the Document database: it contains an HRDocument table with a foreign key DocumentId, plus HR-specific columns such as EmployeeId and ContractId.
Here are the problems, amongst others:
1) In order to save a document, I have to call the WCF service to save it to the central DB, return the ID, and then save to the HRDocument table (along with the HR-specific information). Because of the WCF call, and because all Document-specific data access is done within the Document application, this can't all be done within one transaction, resulting in a possible loss of transactional integrity.
2) In order to search on, say, employeeId and createdDate, I have to call the search service passing in the createdDate (a Document-database-specific field) and then search the HRDocument database on the IDs of the returned records to filter the results. This feels messy, slow and just wrong.
I could duplicate the NHibernate mapping files for the Document database in the DAL of the HR application. That would let me specify the relationship between HRDocument and Document, which means I could join the tables and search that way, but it would also mean duplicating domain logic and violating the DRY principle, with all that entails.
I can't help feeling I'm doing something wrong here and have missed something simple.
I recommend applying CQRS and Event-Driven Architecture principles here:

1. Use Guids as primary keys - then you will be able to generate the primary key for the document yourself and pass it to the WCF method call (roughly sketched below).
2. Use messaging on the other side of the WCF service to prevent data loss (in case of database failure and the like).
3. Remove constraints between the databases - immediately consistent applications don't scale. Use the eventual consistency paradigm instead.
4. Introduce a separate data store for reads that contains denormalized data. Then you will be able to search very easily. To ensure consistency in the read store (in case Document creation fails), you could implement a simple workflow (a saga in CQRS terms).
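A rough sketch of the first point (the service client and entity names are illustrative, not your actual contracts): generate the key up front so both databases share it and you never wait on an identity value coming back from the Document service.

// Client-generated key: both databases use the same Guid.
var documentId = Guid.NewGuid();

// Call the (illustrative) document service with the pre-generated id...
documentServiceClient.CreateDocument(documentId, fileBytes);

// ...then save the HR-specific row locally with the same id.
hrSession.Save(new HRDocument { DocumentId = documentId, EmployeeId = employeeId, ContractId = contractId });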
You can create a common codebase which includes the base implementation of Document along with all the mappings, the base domain model, etc.
The Document Service and the HR system both use the same codebase. But in the HR system you extend the base Document class (or classes) with your HRDocument, using whichever inheritance mapping strategy suits your needs best.
public class HRDocument : Document
From the HR system you don't even have to call the Document Service anymore; you just use NH and enjoy ACID and all that. But the Document Service is still there, and there's no code duplication.
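For instance, the shared codebase might hold the base entity while the HR system extends it (a simplified sketch; the property names are guesses based on the tables described in the question):

// Shared codebase:
public class Document
{
    public virtual Guid Id { get; set; }
    public virtual string FileName { get; set; }
}

// HR system, mapped with whatever inheritance strategy fits (joined subclass, for example):
public class HRDocument : Document
{
    public virtual int EmployeeId { get; set; }
    public virtual int ContractId { get; set; }
}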
At the company where I work we have a single database schema, but each of our clients uses their own dedicated database, with one central database that stores client contact details and which database each client is using, so we can connect to the appropriate database. I've looked at using NHibernate Shards, but it seems to have gone very quiet and doesn't look complete.
Does anyone know the status of this project? Has anyone used it in production?
If it's not yet at a point that is considered usable in production, what are the alternatives? The two main ones seem to be:
Create a session factory per database and then a wrapper to select the appropriate factory to generate the correct session (roughly sketched below) - this seems to me to involve redundant session factories and not be very efficient.
Create just one session factory but, when calling OpenSession, pass it an IDbConnection - which would allow the session to use a different database connection.
My concern with option 2 is how NHibernate will cope with the 2nd level cache, since I believe it is controlled by the session factory - and I believe the HiLo generator also uses the session factory. In these cases, will having sessions attached to different DBs cause problems? For example, we will end up with a MyCompany.Model.User class that has an id of 2 in both databases - will this cause conflicts within the cache?
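For reference, option 1 would look roughly like this (just a sketch of the wrapper idea, leaving the 2nd level cache question aside):

using System.Collections.Concurrent;
using NHibernate;
using NHibernate.Cfg;

public class TenantSessionFactoryProvider
{
    private readonly ConcurrentDictionary<string, ISessionFactory> _factories =
        new ConcurrentDictionary<string, ISessionFactory>();

    public ISession OpenSessionFor(string tenantConnectionString)
    {
        var factory = _factories.GetOrAdd(tenantConnectionString, cs =>
        {
            // Shared mappings/config, different connection string per tenant database.
            var cfg = new Configuration().Configure();
            cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionString, cs);
            return cfg.BuildSessionFactory();
        });

        return factory.OpenSession();
    }
}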
You could have a look at Enzo SQL Shard, a sharding library for SQL Server. If you are already using NHibernate, there might be a few changes required in your code though.
NHibernate Shards is up to date with the latest NHibernate API changes and now supports all query models of NHibernate, including LINQ. Complex scalar queries are currently not supported.
We use it in production for a multi-tenant environment, but there are a few things to be mindful of.
NHibernate Shards has a session factory for each shard, but only uses a single NHibernate Configuration instance to generate the session factories. This approach likely won't scale well to large numbers of shards.
Querying across shards does not play well with paging. It works, but can involve considerable client-side processing. It is best to keep result sets as small as possible and lock queries to single shards where feasible.