I have a question that I'm curious about but can't find an exact answer to. Suppose we have a DataWriter and a DataReader, and the topic carries an airplane's runtime flight information, such as coordinates. Suppose also that the DataReader has to run a heavy process on this information. When there are more than 100 flights at the same time, running multiple DataReaders in parallel seems like a suitable solution. But I don't think simply multiplying DataReaders makes sense, because they would all process the same messages (topic) in parallel in the same way. The alternative is to make the DataReader multithreaded, but then there is still only one DataReader, and that is a constraint.
What kind of approach can be used in DDS to create multiple DataReaders and have them process data in parallel so that the workload is distributed?
OpenSpliceDDS implements an extension to the DDS specification that does exactly this, i.e. its DDS_ShareQoSPolicy on a subscriber/reader allows entities to be shared by multiple processes or threads. When that policy is enabled, OpenSplice will try to look up an existing entity that matches the name supplied in the ShareQoSPolicy. A new entity is only created if a shared entity registered under the specified name doesn't exist yet.
Shared Readers can be useful for implementing algorithms like the worker pattern, where a single shared reader can contain samples representing different tasks that may be processed in parallel by separate processes. In this algorithm each process consumes the (samples related to the) task it is going to perform (i.e. it takes the samples representing that task), thus preventing other processes from consuming and therefore performing the same task.
NOTE: Entities can only be shared between co-located processes when OpenSplice is running in federated mode, where shared memory is used to share the data between the readers of the set of federated processes (so this doesn't work across machine boundaries).
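To make the worker pattern concrete, here is a rough sketch of what a single worker could look like with OpenSplice's classic Java API. The Flight* types stand in for whatever your IDL compiler generates, and the share fields on the subscriber QoS are written from memory of OpenSplice's ShareQosPolicy, so treat both as assumptions to verify against your OpenSplice release:

import DDS.*;                 // classic OpenSplice/DCPS Java API
import FlightModule.*;        // hypothetical IDL-generated Flight type, reader and holders

public class FlightWorker {
    public static void main(String[] args) {
        DomainParticipant dp = DomainParticipantFactory.get_instance().create_participant(
                DOMAIN_ID_DEFAULT.value, PARTICIPANT_QOS_DEFAULT.value, null, STATUS_MASK_NONE.value);

        new FlightTypeSupport().register_type(dp, "FlightModule::Flight");
        Topic topic = dp.create_topic("FlightData", "FlightModule::Flight",
                TOPIC_QOS_DEFAULT.value, null, STATUS_MASK_NONE.value);

        // OpenSplice-specific Share policy: every worker that supplies the same name
        // attaches to the same underlying subscriber/reader instead of creating its own.
        SubscriberQosHolder sQos = new SubscriberQosHolder();
        dp.get_default_subscriber_qos(sQos);
        sQos.value.share.enable = true;          // field names assumed from the ShareQosPolicy docs
        sQos.value.share.name = "flightWorkers";
        Subscriber sub = dp.create_subscriber(sQos.value, null, STATUS_MASK_NONE.value);

        FlightDataReader reader = (FlightDataReader) sub.create_datareader(
                topic, DATAREADER_QOS_USE_TOPIC_QOS.value, null, STATUS_MASK_NONE.value);

        FlightSeqHolder samples = new FlightSeqHolder();
        SampleInfoSeqHolder infos = new SampleInfoSeqHolder();
        while (true) {
            // take() (as opposed to read()) removes the samples from the shared reader,
            // so no other worker will pick up and process the same flight update.
            reader.take(samples, infos, LENGTH_UNLIMITED.value,
                    ANY_SAMPLE_STATE.value, ANY_VIEW_STATE.value, ANY_INSTANCE_STATE.value);
            for (int i = 0; i < samples.value.length; i++) {
                if (infos.value[i].valid_data) {
                    process(samples.value[i]);   // the heavy per-flight work
                }
            }
            reader.return_loan(samples, infos);
        }
    }

    private static void process(Flight flight) { /* heavy analysis goes here */ }
}

Run several copies of this worker as separate processes in the same OpenSplice federation; because they share the reader, each sample is taken, and therefore processed, by only one of them.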
I did my own research and found that there are several ways to do this, the most accurate apparently being Change Data Capture (CDC). However, I don't see its benefits compared to the asynchronous method. For example:
Synchronous double-write: Elasticsearch is updated synchronously when the DB is updated. This technical solution is the simplest, but it faces the largest number of problems, including data conflicts, data overwriting, and data loss. Make your choice carefully.
Asynchronous double-write: When the DB is updated, an MQ is recorded and used to notify the consumer. This allows the consumer to backward query DB data so that the data is ultimately updated to Elasticsearch. This technical solution is highly coupled with business systems. Therefore, you need to compile programs specific to the requirements of each business. As a result, rapid response is not possible.
Change Data Capture (CDC): Change data is captured from the DB, pushed to an intermediate program, and synchronously pushed to Elasticsearch by using the logic of the intermediate program. Based on the CDC mechanism, accurate data is returned at an extremely fast speed in response to queries. This solution is less coupled to application programs. Therefore, it can be abstracted and separated from business systems, making it suitable for large-scale use. (Source: Alibabacloud.com; the original article illustrates this with a figure.)
Another article says that asynchronous double-write is also risky: if one data source is down, we cannot easily roll back.
https://thorben-janssen.com/dual-writes/
So my question is: should I use CDC to perform persistence operations for multiple data sources? And why is CDC better than the asynchronous approach, given that it is based on the same principle?
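For reference, here is roughly how I imagine the "intermediate program" of the CDC option, assuming the change events are captured into Kafka (for example by a tool such as Debezium) and then applied to Elasticsearch. The topic name and the indexing step are placeholders of mine, not something from the article:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CdcToElasticsearch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "cdc-to-elasticsearch");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Hypothetical topic the CDC tool writes row-level change events to.
            consumer.subscribe(Collections.singletonList("dbserver.inventory.orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // record.value() holds the change event (e.g. before/after row images as JSON).
                    // This is where it would be transformed and indexed into Elasticsearch.
                    System.out.printf("key=%s change=%s%n", record.key(), record.value());
                }
            }
        }
    }
}

Unlike the double-write variants, the business code that updates the DB never has to know about this consumer; it only sees its normal transaction.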
We are soon going to start something with GEODE regarding reference data, and I would like to get some guidelines for it.
As you know, in the financial reference data world there exist complex relationships between various reference data entities like Instrument, Account, Client, etc., which might be available in a database in 3NF.
If my queries are mostly read-intensive and require joins across tables (2-5 tables), what's the best way to deal with this in an in-memory data grid?
Case 1:
Create separate regions for all the tables in your database and then do a similar join using OQL, as you do in the database?
Even if you do so, you will have to design it with great care so that related entities are always co-located within the same partition.
Model 1-to-many and many-to-many relationships using an object graph?
Case 2:
If you know what your join queries look like, create a view model per join query with equi-join characteristics.
Confusion:
(1) I have one join query requiring Employee and Department using emp.deptId = dept.deptId [OK, fantastic: one region with such a view model exists]
(2) I have another join query requiring Employee, Department, Salary and Address joins to address a different requirement.
So again I have to create a view model to address (2), which will contain Employee and Department data similar to (1). This may soon hit the memory threshold.
Changes in the database can still be managed by event listeners, but what are the recommendations for that?
Thanks,
Dharam
I think your general question is pretty broad and there isn't just one recommended approach to cover all use cases (primarily all your analytical views/models of your data as required by your application(s)).
Such questions involve many factors, such as the size of individual data elements, the volume of data, the frequency of access or access patterns originating from the application or applications, the timely delivery of information, how accurate the data needs to be, the size of your cluster, the physical resources of each (virtual) machine, and so on. Thus, any given approach will undoubtedly require application tuning, tuning GemFire accordingly and JVM tuning regardless of your data model. Still, a carefully crafted data model can determine the extent of such tuning.
In GemFire specifically, such tuning will involve different configuration such as, but not limited to: data management policies, eviction (Overflow) and expiration (LRU, or perhaps custom) settings along with different eviction/expiration thresholds, maybe storing data in Off-Heap memory, employing different partition strategies (PartitionResolver), and so on and so forth.
For example, if your Address information is relatively static, unchanging (i.e. actual "reference" data) then you might consider storing Address data in a REPLICATE Region. Data that is written to frequently (typically "transactional" data) is better off in a PARTITION Region.
Of course, as you know, any PARTITION data (managed in separate Regions) you "join" in a query (using OQL) must be collocated. GemFire/Geode does not currently support distributed joins.
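As a rough illustration (region names, key types and the split itself are my own example, not a prescription), that reference vs. transactional separation plus colocation could be wired up with the plain Geode/GemFire API roughly like this:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.PartitionAttributesFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class RegionSetup {
    public static void main(String[] args) {
        Cache cache = new CacheFactory().create();

        // Static "reference" data: replicate it on every node for cheap local reads.
        Region<String, Object> addresses =
                cache.<String, Object>createRegionFactory(RegionShortcut.REPLICATE).create("Address");

        // Frequently written ("transactional") data: partition it across the cluster.
        Region<Long, Object> departments =
                cache.<Long, Object>createRegionFactory(RegionShortcut.PARTITION).create("Department");

        // Colocate Employee with Department so an OQL equi-join on deptId can be answered
        // inside a single member; a matching PartitionResolver that routes Employee entries
        // by deptId is also needed for the related entries to land in the same bucket.
        PartitionAttributesFactory<Long, Object> paf = new PartitionAttributesFactory<>();
        paf.setColocatedWith("Department");
        Region<Long, Object> employees =
                cache.<Long, Object>createRegionFactory(RegionShortcut.PARTITION)
                        .setPartitionAttributes(paf.create())
                        .create("Employee");
    }
}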
Additionally, certain nodes could host certain Regions, thus dividing your cluster into "transactional" vs. "analytical" nodes, where the analytical-based nodes are updated from CacheListeners on Regions in transactional nodes (be careful of this), or perhaps better yet, asynchronously using an AEQ with AsyncEventListeners. AEQs can be separately made highly available and durable as well. This transactional vs analytical approach is the basis for CQRS.
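A bare-bones sketch of the AEQ variant (queue and region names are made up) might look like this; the listener is where the analytical models/views would be refreshed:

import java.util.List;

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.asyncqueue.AsyncEvent;
import org.apache.geode.cache.asyncqueue.AsyncEventListener;
import org.apache.geode.cache.asyncqueue.AsyncEventQueue;

public class AnalyticalFeed {
    public static void main(String[] args) {
        Cache cache = new CacheFactory().create();

        // Listener that pushes transactional changes towards the analytical side.
        AsyncEventListener listener = new AsyncEventListener() {
            @Override
            public boolean processEvents(List<AsyncEvent> events) {
                for (AsyncEvent event : events) {
                    // Update the analytical/view Regions (or an external store) here.
                    System.out.println(event.getKey() + " -> " + event.getDeserializedValue());
                }
                return true; // true tells GemFire the batch can be removed from the queue
            }

            @Override
            public void close() {
            }
        };

        AsyncEventQueue queue = cache.createAsyncEventQueueFactory()
                .setPersistent(true)   // survive member restarts
                .setParallel(true)     // one queue per primary bucket of the region
                .create("analyticalQueue", listener);

        // Attach the queue to the transactional Region so its updates flow through the listener.
        cache.createRegionFactory(RegionShortcut.PARTITION)
                .addAsyncEventQueueId(queue.getId())
                .create("Employee");
    }
}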
The size of your data is also impacted by the form in which it is stored, i.e. serialized vs. not serialized, and GemFire's proprietary serialization format (PDX) is quite optimal compared with Java Serialization. It all depends on how "portable" your data needs to be and whether you can keep your data in serialized form.
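For instance (the package pattern below is hypothetical), keeping the domain model in PDX form is just a couple of CacheFactory settings:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.pdx.ReflectionBasedAutoSerializer;

public class PdxSetup {
    public static void main(String[] args) {
        Cache cache = new CacheFactory()
                // Auto-serialize the domain classes to PDX via reflection.
                .setPdxSerializer(new ReflectionBasedAutoSerializer("com.example.refdata.*"))
                // Hand values back in serialized (PDX) form where possible instead of deserializing.
                .setPdxReadSerialized(true)
                .create();
    }
}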
Also, you might consider how expensive it is to join the data on the fly. Meaning, if you are able to aggregate, transform and enrich data at runtime relatively cheaply (compute vs. memory/storage), then you might consider using GemFire's Function Execution service, bringing your logic to the data rather than the data to your logic (the fundamental basis of MapReduce).
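A rough sketch of that "logic to the data" idea with the Function Execution service (class, region and id names invented for illustration):

import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;
import org.apache.geode.cache.partition.PartitionRegionHelper;

// Runs on every member hosting data for the target Region and aggregates locally,
// so only the (small) result travels over the wire, not the raw entries.
public class EmployeesPerMemberFunction implements Function<Object> {

    @Override
    public void execute(FunctionContext<Object> context) {
        RegionFunctionContext rfc = (RegionFunctionContext) context;
        // Only the data primary-owned by this member/partition is touched.
        Region<Object, Object> localData = PartitionRegionHelper.getLocalDataForContext(rfc);
        context.getResultSender().lastResult(localData.size());
    }

    @Override
    public String getId() {
        return "employeesPerMember";
    }
}

It would then be invoked with something like FunctionService.onRegion(employeeRegion).execute(new EmployeesPerMemberFunction()).getResult(), and the per-member counts combined on the caller.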
You should know, and I am sure you are aware, GemFire is a Key-Value store, therefore mapping a complex object graph into separate Regions is not a trivial problem. Dividing objects up by references (especially many-to-many) and knowing exactly when to eagerly vs. lazily load them is an overloaded problem, especially in a distributed, replicated data store such as GemFire where consistency and availability tradeoffs exist.
There are different APIs and frameworks to simplify persistence and querying with GemFire. One of the more notable approaches is Spring Data GemFire's extension of Spring Data Commons Repository abstraction.
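For illustration (entity, field and region names are invented, and the annotation packages vary a bit between Spring Data GemFire versions), a repository there can be as small as this:

import java.util.List;

import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.mapping.annotation.Region;
import org.springframework.data.repository.CrudRepository;

@Region("Employee")
class Employee {
    @Id
    Long id;
    Long deptId;
    String name;
}

// Spring Data generates the implementation; findByDeptId is derived into an OQL query.
interface EmployeeRepository extends CrudRepository<Employee, Long> {
    List<Employee> findByDeptId(Long deptId);
}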
It also might be a matter of using the right data model for the job. If you have very complex data relationships, then perhaps creating analytical models using a graph database (such as Neo4j) would be a simpler option. Spring also provides great support for Neo4j, led by the Neo4j team.
Any design choice you make will undoubtedly involve a hybrid approach. Often the path is not clear, since it really "depends" (i.e. it depends on the application and data access patterns, load, all that).
But one thing is for certain: make sure you have a good cursory knowledge and understanding of the underlying data store and its data management capabilities, particularly as they pertain to consistency and availability, beginning with this.
Note, there is also a GemFire Slack channel as well as an Apache DEV mailing list you can use to reach out to the GemFire experts and the community of (advanced) GemFire/Geode users if you have more specific problems as you proceed down this architectural design path.
I'm considering solving a problem using Elixir, mainly because of the ability to spawn large numbers of processes cheaply.
In my scenario, I'd want to create several "original" processes, which load specific, immutable data into memory, then make copies of those processes as needed. The copies would all use the same base data, but do different, read-only tasks with it; e.g., imagine that one "original" has the text of "War and Peace" in memory, and each copy of that original does a different kind of analysis on the text.
My questions:
Is it possible to copy an existing process, memory contents and all, in Elixir / the Erlang VM?
If so, does each copy consume as much memory as the original, or can they share memory, as Unix processes do with the "copy on write" strategy? (And in this case, there would be no subsequent writes.)
There is no built-in way to copy processes. The easiest way to do it is to start the "original" process and the "copies" and send all the relevant data in messages to the copies. Processes don't share data, so there is no more efficient way of doing it. Putting the data in ETS tables only partially helps with sharing, as the data in an ETS table is copied to the process when it is used; however, you then don't need to have all the data in the process heap.
An Erlang process has no process-specific data apart from what's stored in variables (and the process dictionary), so to make a copy of the memory of a process, just spawn a new process passing all relevant variables as arguments to the function.
In general, memory is not shared between processes; everything is copied. The exceptions are ETS tables (though data is copied from ETS tables when processes read it), and binaries larger than 64 bytes. If you store "War and Peace" in a binary, and send it to each worker process (or pass it along when you spawn those worker processes), then the processes would share the memory, only copying it if they wanted to modify the binary. See the chapter on binaries in the Erlang efficiency guide for more details.
You are thinking of Erlang/Elixir processes as similar to Unix processes. They aren't at all. I really wish they had a different name, because they really aren't either threads or processes in the standard Unix sense. It took me some time to wrap my head around the differences.
You have to throw out all your preconceived ideas about processes; they are all wrong. Eprocesses have the following characteristics.
They are cheap and fast. Use lots; there are always more.
They share no resources[1]. (Even writing to stdout is a message to another Eprocess.)
IPC (messages) is very fast, with relatively low overhead compared to standard Unix IPC.
What I would try in your case is to create a server that manages the data and have each analysis worker message the server for the data chunks it needs. It's perfectly acceptable to have an Eprocess be more or less a manager of shared memory.
To me the most useful way to think of Eprocesses is as objects with their own thread of execution.
[1] Well, there is the ETS table, but it's best to think of them as not sharing resources until you absolutely have to.
I am learning Apache Helix. I came across the keyword 'Partitions'.
According to the definition mentioned here http://helix.apache.org/Concepts.html, each subtask (of a main task) is referred to as a partition in Helix.
When I went through the recipe Distributed Lock Manager, partitions seemed to be nothing but instances of a resource (increasing numPartitions increases the number of locks).
// Register a resource whose 12 partitions act as the individual locks,
// using the OnlineOffline state model and full-auto rebalancing.
final int numPartitions = 12;
admin.addResource(clusterName, lockGroupName, numPartitions, "OnlineOffline",
    RebalanceMode.FULL_AUTO.toString());
Can someone explain, with a simple example, what exactly a partition in Apache Helix is?
I think you're right that a partition is essentially an instance of a resource. As is the case in other distributed systems, partitions are used to achieve parallelism. A resource with only one instance can only run on one machine. Partitions simply provide the construct necessary to split a single resource among many machines by, well, partitioning the resource.
This is a pattern that is found in a large portion of distributed systems. The difference, though, is while e.g. distributed databases explicitly define partitions essentially as a subset of some larger data set that can fit on a single node, Helix is more generic in that partitions don't have a definite meaning or use case, but many potential meanings and potential use cases.
One of these use cases in a system with which I'm very familiar is Apache Kafka's topic partitions. In Kafka, each topic - essentially a distributed log - is broken into a number of partitions. While the topic data can be spread across many nodes in the cluster, each partition is constrained to a single log on a single node. Kafka provides scalability by adding new partitions to new nodes. When messages are produced to a Kafka topic, internally they're hashed to some specific partition on some specific node. When messages are consumed from a topic, the consumer switches between partitions - and thus nodes - as it consumes from the topic.
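For instance, a tiny Java producer (broker address and topic name are made up) makes that hashing visible: every send with the same key reports the same partition.

import java.util.Properties;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class PartitionDemo {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (String user : new String[] {"alice", "bob", "carol", "alice"}) {
                // The record key is hashed to choose the partition, so all messages for
                // the same key always land in the same partition (and thus the same node).
                RecordMetadata meta = producer
                        .send(new ProducerRecord<>("page-views", user, "clicked"))
                        .get();
                System.out.printf("key=%s -> partition %d%n", user, meta.partition());
            }
        }
    }
}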
This pattern generally applies to many scalability problems and is found in almost any HA distributed database (e.g. DynamoDB, Hazelcast), map/reduce (e.g. Hadoop, Spark), and other data or task driven systems.
The LinkedIn blog post about Helix actually gives a bunch of useful examples of the relationships between resources and partitions as well.
Suppose in your web application you need to do a number of redis calls to render a page, like, getting a bunch of user hashes. To speed this up you could wrap up your redis commands in a MULTI/EXEC section, thus using pipelining, so that you avoid doing many round-trips. But you also want to shard your data, because you have lots of it and/or you want to distribute writes. Then pipelining wouldn't work, because different keys would potentially live on different nodes, unless you have a clear idea of the data layout of your application and shard based on roles rather than using a hash function. So, what are the best practices to shard data across different servers without compromising performance too much due to many servers being contacted to complete a "conceptually unique" job? I believe the answer depends on the web application one is developing, and I'll eventually run some tests, but it'd be helpful to know how others have coped with the trade-offs I mentioned.
MULTI/EXEC and pipelining are two different things. You can do MULTI/EXEC without any pipelining and vice versa.
If you want to shard and pipeline at the same time, you need to group the operations to pipeline per Redis instance, and then use pipelining for each instance.
Here is a simple example using Ruby: https://gist.github.com/2587593
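For a JVM client, a roughly equivalent sketch with Jedis could look like this (hosts, ports and the naive modulo sharding are placeholders, not a recommendation):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;

public class ShardedPipelineExample {
    public static void main(String[] args) {
        // Two hypothetical shards; in practice use your own shard map or hash ring.
        List<Jedis> shards = Arrays.asList(new Jedis("redis-1", 6379), new Jedis("redis-2", 6379));
        List<String> keys = Arrays.asList("user:1", "user:2", "user:3", "user:4");

        // 1. Group the keys per Redis instance.
        Map<Integer, List<String>> keysPerShard = new HashMap<>();
        for (String key : keys) {
            int shard = Math.abs(key.hashCode()) % shards.size();
            keysPerShard.computeIfAbsent(shard, s -> new ArrayList<>()).add(key);
        }

        // 2. Pipeline the grouped operations on each instance.
        Map<String, Response<String>> replies = new HashMap<>();
        for (Map.Entry<Integer, List<String>> entry : keysPerShard.entrySet()) {
            Pipeline pipeline = shards.get(entry.getKey()).pipelined();
            for (String key : entry.getValue()) {
                replies.put(key, pipeline.get(key));
            }
            pipeline.sync(); // one round-trip per shard instead of one per key
        }

        replies.forEach((key, response) -> System.out.println(key + " -> " + response.get()));
        shards.forEach(Jedis::close);
    }
}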
One way to further improve performance is to parallelize the traffic on the Redis instances once the operations have been grouped (i.e. you group the operations, you send them to all instances in parallel, you wait for the answers from all instances).
This is a bit more complex, because an asynchronous, non-blocking client is required. For maximum performance, C/C++ should be used on the client side. This can easily be implemented by using hiredis + the event loop of your choice.