This is how I create my observable:
Observable.fromCallable(new EventObtainer())
    .flatMap(Observable::from)
    .subscribeOn(Schedulers.io())
    .repeat();
And after that, I'm trying to add different observers through HTTP requests. The thing is that if I have more than one observer, I can't predict which observer will receive an emitted item. Why doesn't the observable emit each item to every subscribed observer, instead of one item at a time to different observers?
I resolved this.
The Observable contract:
http://reactivex.io/documentation/contract.html
contains this information:
There is no general guarantee that two observers of the same
Observable will see the same sequence of items.
So I resolved it by making my observable a ConnectableObservable via publish(), and then invoking the connect() method on it:
ConnectableObservable<Event> observable = Observable.fromCallable(new EventObtainer()) // Event: whatever EventObtainer yields
    .flatMap(Observable::from)
    .subscribeOn(Schedulers.io())
    .repeat()
    .publish(); // publish() returns a ConnectableObservable; keep the result
observable.connect();
Now, even if I add more observers asynchronously, it emits each obtained item to every observer.
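For illustration, here is a minimal, self-contained sketch of that multicasting behavior (RxJava 1.x; the EventObtainer is replaced by a hypothetical callable returning a fixed list, and repeat() is omitted so the demo terminates):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import rx.Observable;
import rx.observables.ConnectableObservable;
import rx.schedulers.Schedulers;

public class MulticastDemo {
    public static void main(String[] args) throws InterruptedException {
        // stand-in for EventObtainer
        Callable<List<String>> eventObtainer = () -> Arrays.asList("a", "b", "c");

        ConnectableObservable<String> events = Observable.fromCallable(eventObtainer)
                .flatMap(Observable::from)
                .subscribeOn(Schedulers.io())
                .publish();

        // Both observers are registered before connect(), so every item is
        // delivered to each of them instead of being split between them.
        events.subscribe(item -> System.out.println("observer 1: " + item));
        events.subscribe(item -> System.out.println("observer 2: " + item));

        events.connect();
        Thread.sleep(500); // let the io() thread emit before the JVM exits
    }
}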
I have a question about akka-persistence and event migration. I have read the "Schema Evolution for Event Sourced Actors" chapter; however, it does not answer my question.
Given I have one persistent actor ChildActor that produces a Created event. Later, we discover that ChildActor should be a child of ParentActor, and that ParentActor has to update its state based on the creation of ChildActor (to maintain a collection of children).
We can add a new command CreateChild for ParentActor that will create the ChildActor. However, the parent will never receive the Created event emitted by its child, so it will not be able to update its state. Of course, ParentActor can create a ChildCreated event for itself.
But what about the Created events already persisted by ChildActor?
How can we "send" (and, ideally adapt) them to the ParentActor?
So, my question is:
Can we "route" persisted events from one actor to another?
Thanks
It is possible to watch the events persisted by a given persistence ID with the events by persistence ID query. Since this query is very much like what Akka Persistence must do in replaying events to rebuild a persistent actor's state, it's available in all the commonly used plugins: you'll need to check the documentation for your plugin for how to summon a ReadJournal. Once summoned, assuming that the ReadJournal is further an instance of EventsByPersistenceIdQuery, you would use (Scala):
readJournal.eventsByPersistenceId(childActorPersistenceId, fromOffset, Long.MaxValue)
which would give you an Akka Streams Source of events in order, starting at fromOffset. Your subscribing actor may (probably will) want to save the last-seen sequence number as part of its state, so that if it resumes it doesn't re-process events it has already handled (ideally, the event updating the sequence number would be in the same batch or otherwise an atomic part of the state update).
Note that there will be an observable delay from persisting the event to when ParentActor sees the event, though many of the recent iterations of plugins (e.g. Cassandra or R2DBC) can directly propagate the event or at least the notification that there's an event for the persistence ID to the query.
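To make that concrete, here is a minimal sketch using the Java API (Akka 2.6+). It is a sketch only: the plugin identifier "my-plugin.query" is a placeholder for whatever identifier your journal plugin documents, and the event handling is reduced to a print.

import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.persistence.query.EventEnvelope;
import akka.persistence.query.PersistenceQuery;
import akka.persistence.query.javadsl.EventsByPersistenceIdQuery;
import akka.stream.javadsl.Source;

class ChildEventSubscriber {
    static void subscribe(ActorSystem system, String childPersistenceId, long fromSeqNr) {
        // Summon the read journal; "my-plugin.query" is plugin-specific.
        EventsByPersistenceIdQuery readJournal =
            PersistenceQuery.get(system)
                .getReadJournalFor(EventsByPersistenceIdQuery.class, "my-plugin.query");

        // Stream the child's events in order, starting from the last sequence
        // number the subscriber has already processed.
        Source<EventEnvelope, NotUsed> events =
            readJournal.eventsByPersistenceId(childPersistenceId, fromSeqNr, Long.MAX_VALUE);

        events.runForeach(envelope -> {
            // Persist envelope.sequenceNr() together with ParentActor's state
            // update so a restarted subscriber resumes at the right offset.
            System.out.println("child event " + envelope.sequenceNr() + ": " + envelope.event());
        }, system);
    }
}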
This question is regarding rxpy.
I am trying to build a reactive system that handles messages from a source observable. In addition, I am trying to integrate it with a leader election system based on ZooKeeper.
This combination will allow only one leader in a farm of processes to handle the message stream. Below is the gist of the code I am trying to construct.
# event_source is an observable of messages
# manager.leaders is an observable of leader election events
# manager.followers is an observable of leader relinquish events
event_source\
.skip_until(manager.leaders)\
.take_until(manager.followers)\
.subscribe(observer)
It works fine and all, but I need to inject, between skip_until and take_until, a piece to handle backfill. This is designed to handle a potential gap between a leader process failing and another process assuming leadership. Every processed message leaves a record, so that a new leader can catch up on any missing messages before proceeding with the stream.
I tried the start_with operator without success. Am I trying to use it in a way it isn't meant to be used?
Ultimately, the solution I am looking for is to inject a specific number of items into the stream, triggered by an event from another stream.
What about this:
manager.leaders \
    .flat_map(lambda e: event_source
        .start_with(...)
        .take_until(manager.followers)) \
    .subscribe(observer)
Every time manager.leaders emits a message, event_source will be subscribed to, starting with the injected items, until manager.followers emits.
From https://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/NSNotificationCenter_Class/Reference/Reference.html:
You must invoke removeObserver: or removeObserver:name:object: before any object specified by
addObserverForName:object:queue:usingBlock: is deallocated
Why does it matter that I stop observing before the object whose notifications I'm observing is deallocated? I understand why I as the observer need to stop observing if I'm going to disappear and the block depends on my existence, but I don't understand why the lifetime of the observed object matters. Am I misinterpreting this?
I understand why I as the observer need to stop observing if I'm going to disappear and the block depends on my existence, but I don't understand why the lifetime of the observed object matters.
I think that a possible explanation is the following.
The documentation for addObserverForName:object:queue:usingBlock: says:
Adds an entry to the receiver’s dispatch table with a notification queue and a block to add to the queue, and optional criteria: notification name and sender.
"sender" in this context is just another name for the object parameter, which is described in the following terms:
The object whose notifications you want to add the block to the operation queue.
If you pass nil, the notification center doesn’t use a notification’s sender to decide whether to add the block to the operation queue.
So, object acts as a sort of filter: when a notification comes in, the notification center decides based on that value (if present) if the block must be added to the specified operation queue.
Now, consider this:
1. The observed object is deallocated without the observer being removed.
2. A different object, also able to post notifications, is created, and it happens to have the same address as the object deallocated at point 1.
3. The observer now reacts to notifications posted by the second object.
I admit it is a pretty rare case, but it might happen, so you'd better code against it.
If you don't remove the observer, it may lead to a situation where you have already destroyed an object but notifications are still sent to it; this will cause a "message sent to deallocated instance" error.
I am a big fan of the Observer pattern. In our code we use it in many places to decouple services from each other. However, I've seen it implemented poorly in many places since there is a lot to worry about:
Exception handling - don't want listeners throwing runtime exceptions around.
Long-running listeners holding up the main thread
Concurrent modification of the listener list as we are iterating through it.
What's more, we end up repeating this code all over the place. In the spirit of DRY, I want to pull out all Notification concerns into a single service. Some pseudo code:
Interface NotificationService
// register the listener to receive notifications from this producer
registerAsListener (NotificationProducer, NotificationListener)
// Sends a notification to listeners of this producer
sendNotification (NotificationProducer, Notification)
// Sends a notification in a background thread
sendAsynchNotification (NotificationProducer, Notification)
// Listener no longer receives messages from this producer
removeListener(NotificationProducer, NotificationListener)
My question is this: am I losing the original point of the observer pattern by doing this? Am I making a mistake by introducing another dependency on both sides of the pattern? Both the Listener and the Producer will now have an extra dependency on NotificationService.
What are your views?
You are right with your concerns and questions.
Implementing the observer pattern many times over amounts to plain repetition.
You're also right that the above solution loses the pattern's objective.
What you've just implemented is a (global?) event bus: a matrix of producers and listeners. That's useful for many applications (see GWT's event bus).
However, if you just want to minimize code duplication while maintaining the pattern, you can remove the coupling between the listeners and the service: use a minified version of the above interface as a member of the observed class, so that the logic of registration and notification is written once.
The observed class just delegates the registration and notification logic to the service.
class ObservedClass implements Observable {
    // the observed class delegates registration and notification to the service
    NotificationService notificationService = new NotificationServiceImpl(this);
    ...
}
interface NotificationService {
    // register the listener to receive notifications from this producer
    registerAsListener(NotificationListener)

    // sends a notification to listeners of this producer
    sendNotification(Notification)

    // sends a notification in a background thread
    sendAsynchNotification(Notification)

    // listener no longer receives messages from this producer
    removeListener(NotificationListener)
}
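To show how such a per-producer service could address the original three concerns (listener exceptions, long-running listeners, concurrent modification of the listener list), here is a minimal sketch. It is one possible implementation, not the canonical one: the Notification and NotificationListener types, the onNotification callback name, and the single-thread executor are all assumptions made for the example.

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class Notification { }

interface NotificationListener {
    void onNotification(Notification notification); // hypothetical callback name
}

class NotificationServiceImpl {
    // CopyOnWriteArrayList lets listeners be added or removed safely while
    // a notification is being dispatched (concern 3).
    private final List<NotificationListener> listeners = new CopyOnWriteArrayList<>();
    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final Object producer; // kept so notifications could be stamped with their source

    NotificationServiceImpl(Object producer) {
        this.producer = producer;
    }

    public void registerAsListener(NotificationListener listener) {
        listeners.add(listener);
    }

    public void removeListener(NotificationListener listener) {
        listeners.remove(listener);
    }

    public void sendNotification(Notification notification) {
        for (NotificationListener listener : listeners) {
            try {
                listener.onNotification(notification);
            } catch (RuntimeException e) {
                // a misbehaving listener must not break the others (concern 1):
                // log the exception and keep dispatching
            }
        }
    }

    public void sendAsynchNotification(Notification notification) {
        // long-running listeners run off the caller's thread (concern 2)
        executor.submit(() -> sendNotification(notification));
    }
}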
I'd like to pick someone's brain on this. I have a dedicated save NSManagedObjectContext and a GCD queue from which I operate on it. Whenever new data comes into my app, I save it on that context and merge the changes into the main context. My problem arises in telling the main thread what just happened. Right after I call save, my current context is up to date, but if I then fire a method on the main thread, the main context isn't up to date yet. If I wait for the NSManagedObjectContextDidSave notification and I save three times, I now have three queued delegate calls but no way to match them to the notifications coming in. Does anyone know of a good way to get around this?
EDIT
What I ended up doing was creating a new context for each save operation and attaching a block to be called when the save notification arrived. It looks like this: http://pastie.org/2068084
From your answer to my comment above, I see that you pass along the managedObjectContext in the notification. I'm not that confident about asynchronous stuff yet, but I do think that you're violating some concurrency rule, if I correctly interpret this quote from the NSManagedObjectContext Class Reference:
Concurrency
Core Data uses thread (or serialized queue) confinement to protect managed objects and managed object contexts (see “Concurrency with Core Data”). A consequence of this is that a context assumes the default owner is the thread or queue that allocated it—this is determined by the thread that calls its init method. You should not, therefore, initialize a context on one thread then pass it to a different thread. Instead, you should pass a reference to a persistent store coordinator and have the receiving thread/queue create a new context derived from that.
I'd say, try passing along the persistent store coordinator in the notification and recreate the managed object context in the block.
I'm not sure what you mean by "...if I then fire a method on the main thread, the main context isn't up to date yet. If I wait for the NSManagedObjectContextDidSave..." That implies that you are not waiting until the contexts have been merged. If so, that is why you can't access the data; it's just not in the front context yet.
Do you call mergeChangesFromContextDidSaveNotification: from the front context after it receives the notification?