My question relates to in-memory embedded HSQLDB. Say I have one database instance called my_db.
I assume the following code allows me to access the above database instance:
org.hsqldb.util.DatabaseManagerSwing.main(new String[] { "--url", "jdbc:hsqldb:mem:my_db", "--noexit" });
Can I access the database from wherever I want provided it is in the same JVM process?
In which specific part of the memory is the data held?
More generally, what rules and restrictions determine from where and how I can access the database instance?
Q: Can I access the database from wherever I want provided it is in the same JVM process?
A: Yes you can.
Q: In which specific part of the memory is the data held?
A: In the memory heap of the JVM process.
Q: More generally, what rules and restrictions determine from where and how I can access the database instance?
A: The rule is that only one JVM process can access a given embedded database. If you need access from more than one JVM, then you need to run an HSQLDB Server instance.
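As a minimal sketch of the first point (assuming the HSQLDB driver is on the classpath and the database has already been created under the URL above with the default SA user), any other class running in the same JVM can open a plain JDBC connection to it:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MyDbAccess {
    public static void main(String[] args) throws Exception {
        // Connects to the in-memory database "my_db" held in this JVM's heap.
        // "SA" with an empty password is HSQLDB's default account.
        try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:my_db", "SA", "");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES")) {
            while (rs.next()) {
                System.out.println("Tables visible: " + rs.getInt(1));
            }
        }
    }
}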
I am currently running IDS4 as a single instance in one region (single database for configuration and operational store). I now have to distribute the installation across two regions so that services/users in region A access IDS in region A and services/users in region B access IDS in region B.
Both instances should access the same datastore, but IDS in region B should not have to make cross-region read queries to the database in region A.
We use Azure SQL Server and the geo-replication feature which offers a single writable instance (either in region A or B) and multiple readable instances. We pointed IDS in region B to a read-only instance in the same region, but this does not work because IDS has to write operational data like persistent grants.
Is there a recommended architecture to achieve this or do you have any experience implementing a multi-region and load-balanced IDS deployment? Is it possible to configure IDS to use a different database for write operations and the database in the same region for read operations?
Instead of geo-replication you can use Azure SQL Data Sync to have writable replicas of Azure SQL Database, defining one of them as the hub database and the others as member databases. Synchronization between all databases can be configured to be bidirectional, so all databases are updateable. You can get started with configuring Azure SQL Data Sync from this documentation.
It is unlikely that you will find a recommended architecture for a scenario like this due to how much of this problem is in your business domain. Also, there is nothing out of the box in Identity Server 4 library or its supporting libraries that would satisfy your criteria.
Having said that, I've had a similar requirement (unrelated to Identity Server 4 but identical functional requirements in a nutshell) and it should be possible to adapt the same idea in your case.
Firstly, your only problem, as you've said, is that out of the box the Identity Server 4 EF package's PersistedGrantStore uses one IPersistedGrantDbContext, which does both writes and reads from the database. To solve this, you need to create your own implementation of IPersistedGrantStore. In that custom implementation you could use two different DbContext types: one created with a connection string to the single writeable instance of the database and used only for the interface methods that write, and another created with the connection string for the read-only instance of the database and used only for the read methods.
The basic idea is sketched below:
public class MyCustomPersistedGrantStore : IPersistedGrantStore
{
    private readonly WriteOnlyPersistedGrantDbContext _writeContext;
    private readonly ReadOnlyPersistedGrantDbContext _readContext;

    public MyCustomPersistedGrantStore(WriteOnlyPersistedGrantDbContext writeContext, ReadOnlyPersistedGrantDbContext readContext)
    {
        _writeContext = writeContext;
        _readContext = readContext;
    }

    public Task StoreAsync(PersistedGrant token)
    {
        //Use _writeContext to implement storage logic
    }

    public Task<PersistedGrant> GetAsync(string key)
    {
        //Use _readContext to implement read logic
    }

    //...other interface methods
}
All you need after implementing your custom version is to register your implementation of IPersistedGrantStore, as well as the DbContexts, in the DI system.
Lastly, it is worth noting that if you stop using .AddOperationalStore(...config) then you also forfeit the TokenCleanupHostService, so you would need to implement that as well.
I'm in the process of hammering out the kinks in my own private fork of IdentityServer4.Contrib.CosmosDB. If you take a look at the (very unfinished atm) source code, you'll get a rough understanding of how to implement your own DB provider that gracefully handles such a requirement. You may also want to consider using a NoSQL datastore for IdentityServer, as I believe it's 'optimized' for multi-region reads/writes compared to SQL Server.
In my scenario, I connect my ABAP system to a non-ABAP system using an HTTP destination.
I want to implement caching in ABAP so that the performance of the application is improved and I don't have to hit the backend every time.
I guess that in ABAP, caching can only be implemented by using shared memory.
https://help.sap.com/doc/abapdocu_751_index_htm/7.51/en-US/abenuse_shared_memory_guidl.htm
Is this correct?
I guess that by "buffering" you mean "loaded into ABAP memory and avoiding additional database roundtrips"?
If yes, I share your understanding that shared memory would be the only means to do that.
However, consider that on top of your database, you can have 1..n application servers, each of which can have 1..n work processes. Shared memory will allow you to buffer stuff across the work processes within one application server, but not across application servers.
If you take other means of "buffering" into account, such as aggregated views on otherwise slow-to-join data, you could get additional means by using appropriate database views or materializations.
Each time the ABAP AS gets an HTTP request you get a new "session" (roll area). You are right that shared memory could be an option to implement a buffer. Another option could be to switch on table buffering in SE11.
But the typical way is really to always start from the database and read the data again.
To improve the performance of the application you could try to reduce the calls over HTTP to the ABAP AS and implement more logic on the HTTP side.
Please also consider that the DBMS caches too. That could also improve the response time in many configurations.
You would have to run all the SELECTs in ABAP first, then send all of the data in the HTTP request. You can process it on the other system and do whatever you want. At the end, send the results back to ABAP and complete your purpose. This is the fast way.
It depends on what your service is doing.
If it accesses a table you can simply buffer the table itself:
in ABAP transaction SE11, go to Technical Settings -> Buffering switched on -> Fully buffered.
That should be enough to speed up your service.
I am new to GemFire.
Currently we are using a MySQL DB and would like to move to GemFire.
How do we move the existing data stored in MySQL over to GemFire? I.e., is there any way to import existing MySQL data into GemFire?
There are many different options available for migrating data from one data store (e.g. an RDBMS like MySQL) to an IMDG (e.g. Pivotal GemFire). Pivotal GemFire does not provide any tools for this purpose out of the box.
However, you could...
A) Write a Spring Batch application to migrate all your data from MySQL to Pivotal GemFire in one fell swoop. This is typical for most large-scale conversion processes, converting from one data store to another, either as part of an upgrade or a migration.
The advantage of using Pivotal GemFire as your target data store is that it stores Java objects. So, if you are, say, using an ORM tool (e.g. Hibernate) to map the data stored in your MySQL database tables back to your application domain objects, you can then immediately and simply turn around and store those same objects directly into a corresponding Region in Pivotal GemFire. There is no additional mapping required to store an object into GemFire.
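As a rough sketch of option A (the Customer type, its getId() accessor, and the "Customers" Region are made-up illustrations, and the writer uses the Spring Batch 4.x ItemWriter signature), a chunk-oriented writer could push the ORM-mapped objects straight into a Region:

import java.util.List;

import org.apache.geode.cache.Region;
import org.springframework.batch.item.ItemWriter;

// Hypothetical domain type standing in for whatever your ORM (e.g. Hibernate) already maps.
class Customer {
    private final Long id;
    private final String name;
    Customer(Long id, String name) { this.id = id; this.name = name; }
    Long getId() { return id; }
    String getName() { return name; }
}

// Spring Batch ItemWriter that copies each chunk of ORM-mapped objects
// straight into a GemFire Region; no extra mapping step is needed.
public class CustomerRegionItemWriter implements ItemWriter<Customer> {

    private final Region<Long, Customer> customers;

    public CustomerRegionItemWriter(Region<Long, Customer> customers) {
        this.customers = customers;
    }

    @Override
    public void write(List<? extends Customer> items) {
        for (Customer c : items) {
            customers.put(c.getId(), c); // key mirrors the MySQL primary key
        }
    }
}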
Although, if you need something less immediate, then you can also...
B) Take advantage of Pivotal GemFire's CacheLoader, and maybe even the CacheWriter mechanisms. The CacheLoader and CacheWriter are implementations of the "Read-Through" and "Write-Through" design patterns.
More details of this approach can be found here.
In a nutshell, you implement a CacheLoader to load data from some external data source on a cache miss. You attach, or register, the CacheLoader with a GemFire Region when the Region is created. When a key (which can correspond to your MySQL table's primary key) is requested (Region.get(key)) and an entry does not exist, GemFire will consult the CacheLoader to resolve the value, provided you actually registered a CacheLoader with the Region.
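A minimal sketch of such a CacheLoader, reusing the hypothetical Customer type from the Spring Batch sketch above (the JDBC URL, credentials, and table/column names are assumptions for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.apache.geode.cache.CacheLoader;
import org.apache.geode.cache.CacheLoaderException;
import org.apache.geode.cache.LoaderHelper;

// Resolves a cache miss by reading the missing row from MySQL.
public class MySqlCustomerLoader implements CacheLoader<Long, Customer> {

    private static final String JDBC_URL = "jdbc:mysql://localhost:3306/mydb"; // hypothetical

    @Override
    public Customer load(LoaderHelper<Long, Customer> helper) throws CacheLoaderException {
        Long key = helper.getKey(); // e.g. the MySQL primary key
        try (Connection conn = DriverManager.getConnection(JDBC_URL, "user", "password");
             PreparedStatement ps = conn.prepareStatement("SELECT id, name FROM customers WHERE id = ?")) {
            ps.setLong(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? new Customer(rs.getLong("id"), rs.getString("name")) : null;
            }
        } catch (Exception e) {
            throw new CacheLoaderException("Failed to load key " + key, e);
        }
    }

    @Override
    public void close() {
        // nothing to release in this sketch
    }
}

You would register the loader when the Region is created, e.g. via RegionFactory.setCacheLoader(new MySqlCustomerLoader()).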
In this way, you slowly build up Pivotal GemFire from the MySQL RDBMS based on need.
Clearly, it is quite likely Pivotal GemFire will not be able to store all the data from your RDBMS in "memory". So, you can enable both Persistence and Overflow [to Disk] capabilities. By enabling Persistence, GemFire will load the data from its own DiskStores the next time the nodes come online, assuming you brought them down beforehand.
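For example (a sketch using the Java configuration API and the hypothetical Customer type again; the Region name is arbitrary), a persistent Region that also overflows to disk can be created with one of the built-in RegionShortcuts:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class RegionSetup {
    public static void main(String[] args) {
        Cache cache = new CacheFactory().create();

        // Persists entries to the DiskStore and overflows the least recently
        // used values to disk when the heap gets tight.
        Region<Long, Customer> customers = cache
            .<Long, Customer>createRegionFactory(RegionShortcut.PARTITION_PERSISTENT_OVERFLOW)
            .create("Customers");
    }
}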
The CacheWriter mechanism is nice if you want to run both Pivotal GemFire and MySQL in parallel for a while, until you can shift enough of the responsibilities from MySQL over to GemFire, for instance. The CacheWriter will write back to your underlying MySQL DB each time an entry is written or updated in the GemFire Region. You can even do this asynchronously (i.e. "Write-Behind") using GemFire's AsyncEventQueues and Listeners; see here.
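A write-through sketch along those lines, again using the hypothetical Customer type (CacheWriterAdapter provides no-op defaults so only the relevant callbacks need overriding; the actual MySQL upsert is left as a placeholder):

import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheWriterAdapter;

// Pushes every create/update in the GemFire Region back to MySQL ("Write-Through").
public class MySqlCustomerWriter extends CacheWriterAdapter<Long, Customer> {

    @Override
    public void beforeCreate(EntryEvent<Long, Customer> event) {
        upsert(event.getKey(), event.getNewValue());
    }

    @Override
    public void beforeUpdate(EntryEvent<Long, Customer> event) {
        upsert(event.getKey(), event.getNewValue());
    }

    private void upsert(Long id, Customer customer) {
        // Placeholder: run an INSERT ... ON DUPLICATE KEY UPDATE against the
        // MySQL table with plain JDBC, as in the CacheLoader sketch above.
    }
}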
Obviously, you have many options at your disposal. You need to carefully weigh them and choose an approach that best meets your application requirements and needs.
If you have additional questions, let me know.
I am using MFP 8.0, and there is a requirement to implement a cache at the adapter level.
Whenever the MFP server starts, we want to dump the whole database into a cache until the server restarts again.
Now, whenever a user hits a transaction or adapter procedure that calls the database, it must read from the cache instead of calling the database.
Adapters support read-only and transactional access modes to back-end systems.
Adapters are Maven projects that contain server-side code implemented in either Java or JavaScript. Adapters are used to perform any necessary server-side logic, and to transfer and retrieve information from back-end systems to client applications and cloud services.
JSONStore is an optional client-side API providing a lightweight, document-oriented storage system. JSONStore enables persistent storage of JSON documents. Documents in an application are available in JSONStore even when the device that is running the application is offline. This persistent, always-available storage can be useful to give users access to documents when, for example, there is no network connection available in the device.
From your description, assuming you are talking about some custom DB where your data is stored, you need to implement the caching logic yourself.
Adapters have two classes, <AdapterName>Application.java and <AdapterName>Resource.java. <AdapterName>Application.java contains the lifecycle methods init() and destroy().
You should put your custom code for loading data from your DB into the cache in the init() method, and also take care of removing it in destroy().
Then, during transactional access (which hits <AdapterName>Resource.java), you refer to the cache you have already created.
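A minimal sketch, assuming the standard MFP 8.0 Java adapter template in which <AdapterName>Application.java extends MFPJAXRSApplication (the cache keys/values and the loadAllRowsFromDatabase() helper are made up for illustration):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.ibm.mfp.adapter.api.MFPJAXRSApplication;

public class MyAdapterApplication extends MFPJAXRSApplication {

    // Simple in-heap cache shared with the Resource class.
    static final Map<String, String> CACHE = new ConcurrentHashMap<>();

    protected void init() throws Exception {
        // Load everything from the database once, when the adapter is deployed/started.
        CACHE.putAll(loadAllRowsFromDatabase());
    }

    protected void destroy() throws Exception {
        CACHE.clear();
    }

    protected String getPackageToScan() {
        return getClass().getPackage().getName();
    }

    private Map<String, String> loadAllRowsFromDatabase() {
        // Placeholder for your JDBC / data-source access code.
        return new ConcurrentHashMap<>();
    }
}

The methods in <AdapterName>Resource.java would then read from MyAdapterApplication.CACHE instead of querying the database.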
Your requirement, however, may not be ideal for heavily loaded systems. You need to consider that:
a) Your adapter initialization is delayed. Any wrongly written code can also break the adapter initialization, and an adapter isn't available to service requests until it has been initialized. In a clustered environment, the adapter load in all cluster members will be delayed depending on the amount of data you are loading. Any client request intended for this adapter will get a runtime exception until the initialization is complete.
b) Holding the cache in memory means that much of the heap is used up. If your DB keeps growing, this adversely affects adapter initialization and also heap usage.
c) You are in charge of keeping the data up to date and also of cleaning it up after use.
To summarize, while it is possible, it is not recommended. While this may work for a very small data set, it cannot scale well. Adapters are designed to provide you with transactional access to data/backend systems. You should use the adapter the way it was designed to be used.
I'm creating a mobile app and it requires a API service backend to get/put information for each user. I'll be developing the web service on ServiceStack, but was wondering about the storage. I love the idea of a fast in-memory caching system like Redis, but I have a few questions:
I created a sample schema of what my data store should look like. Does this seems like it's a good case for using Redis as opposed to a MySQL DB or something like that?
Schema: http://www.miles3.com/uploads/redis.png
How difficult is the setup for persisting the Redis store to disk or is it kind of built-in when you do writes to the store? (I'm a newbie on this NoSQL stuff)
I currently have my setup on AWS using a Linux micro instance (because it's free for a year). I know many factors go into this answer, but in general will this be enough for my web service and Redis? Since Redis is in-memory will that be enough? I guess if my mobile app skyrockets (hey, we can dream right?) then I'll start hitting the ceiling of the instance.
What to think about when designing a NoSQL Redis application
1) To develop correctly for Redis you should think more about how you would structure the relationships in your C# program, i.e. with the C# collection classes, rather than a relational model meant for an RDBMS. The better mindset is to think about data storage as in a document database rather than RDBMS tables. Essentially everything gets blobbed in Redis via a key (index), so you just need to work out which are your primary entities (i.e. aggregate roots), which get kept in their own 'key namespace', and which are non-primary entities, i.e. simply metadata that should just get persisted with its parent entity.
Examples of Redis as a primary Data Store
Here is a good article that walks through creating a simple blogging application using Redis:
http://www.servicestack.net/docs/redis-client/designing-nosql-database
You can also look at the source code of RedisStackOverflow for another real world example using Redis.
Basically you would need to store and fetch the items of each type separately.
var redisUsers = redis.As<User>();
var user = redisUsers.GetById(1);
var userIsWatching = redisUsers.GetRelatedEntities<Watching>(user.Id);
The way you store relationships between entities is by making use of Redis's Sets, e.g. you can store the Users/Watchers relationship conceptually with:
SET["ids:User>Watcher:{UserId}"] = [{watcherId1},{watcherId2},...]
Redis is schema-less and idempotent
Storing ids in Redis sets is idempotent, i.e. you can add watcherId1 to the same set multiple times and it will only ever have one occurrence of it. This is nice because it means you never need to check whether the relationship already exists and can freely keep adding related ids as if they'd never been added before.
Related: writing to or reading from a Redis collection (e.g. a List) that does not exist is the same as working with an empty collection, i.e. a list gets created on the fly when you add an item to it, whilst accessing a non-existent list will simply return 0 results. This is a friction-free productivity win since you don't have to define your schemas up front in order to use them. Should you need to, Redis provides the EXISTS operation to determine whether a key exists, and a TYPE operation so you can determine its type.
Create your relationships/indexes on your writes
One thing to remember is that because there are no implicit indexes in Redis, you will generally need to set up the indexes/relationships needed for reading yourself, during your writes. Basically you need to think about all your query requirements up front and ensure you set up the necessary relationships at write time. The RedisStackOverflow source code above is a good example that shows this.
Note: the ServiceStack.Redis C# provider assumes you have a unique field called Id that is its primary key. You can configure it to use a different field with the ModelConfig.Id() config mapping.
Redis Persistance
2) Redis supports two persistence modes out of the box: RDB and Append Only File (AOF). RDB writes routine snapshots whilst the Append Only File acts like a transaction journal recording all the changes in between snapshots. I recommend enabling both until you're comfortable with what each does and what your application needs. You can read all about Redis persistence at http://redis.io/topics/persistence.
Note: Redis also supports trivial replication, which you can read more about at: http://redis.io/topics/replication
Redis loves RAM
3) Since Redis operates predominantly in memory, the most important resource is RAM: you need enough to hold your entire dataset in memory, plus a buffer for when it snapshots to disk. Redis is very efficient, so even a small AWS instance will be able to handle a lot of load; what you want to look out for is having enough RAM.
Visualizing your data with the Redis Admin UI
Finally, if you're using the ServiceStack C# Redis Client, I recommend installing the Redis Admin UI, which provides a nice visual view of your entities. You can see a live demo of it at:
http://servicestack.net/RedisAdminUI/AjaxClient/