I have a cold Observable that might OnError when it is subscribed to. How can I create a cold Observable that either returns a single element (an object that receives the source Observable as a dependency) or propagates the OnError of the source?
Using the Publish operator, the OnError handler is not called.
Private Shared Sub Test()
    Dim source = Observable.Throw(Of Integer)(New Exception)
    ' Dim source = Observable.Range(0, 9)
    Dim obs = source.Publish(Function(published)
                                 Return Observable.Return(New ObjectThatConsumesXs(published))
                             End Function)
    obs.Subscribe(Sub(a)
                  End Sub,
                  Sub(ex)
                  End Sub,
                  Sub()
                  End Sub)
End Sub
Private Class ObjectThatConsumesXs
    Private _subscription As IDisposable

    Public Sub New(source As IObservable(Of Integer))
        _subscription = source.Subscribe(Sub(x)
                                         End Sub,
                                         Sub(ex)
                                         End Sub,
                                         Sub()
                                         End Sub)
    End Sub
End Class
EDIT:
This is going to be a somewhat long description.
I have a device that is essentially a CAN bus scanner. This device has a serial port, and upon receiving a Start command it starts mirroring whatever messages it captures on the CAN bus until it receives a Stop command. The relayed messages are wrapped in a variant of the PPP protocol to mitigate errors, given that the serial port baud rate is about 1 MBaud.
I want to design a desktop application that connects to the scanner, sends commands to it, and receives the captured CAN messages. It should display the received messages in a ListBox/ListView, with the ability to live-filter what is displayed by some criteria. It should also group messages by the IDs embedded in them, and display a list of encountered IDs with their total occurrence counts, along with a total of distinct IDs and a total count of messages.
What is received between a Start and a Stop command is a collection of messages that represents a record. The application should be capable of recording multiple times in a session and must provide a way to persist records on disk along with applied filters, a user-defined name, etc. The same application should be capable of importing those records for offline analysis.
The aforementioned ObjectThatConsumesXs is my record, which exposes its contained messages as an Observable that replays them to its subscribers (Replay operator).
I am using ReactiveUI - MVVM/WinForms/Reactive Extensions, and among other things I have managed to design a service that exposes a GetRecordUntil function that returns an IObservable(Of Record). Upon subscription the observable emits a single Record that is updated with messages received from the scanner.
I am open to suggestions regarding the design of the application, but I am afraid that my question should be at least re-tagged, if not renamed.
In general, I would suggest a design that doesn't have you passing IObservable(Of T) in as parameters.
The Observable interfaces provide a way for something you depend on (i.e. something that has no knowledge of you) to call you back.
However, if you are passing an observable sequence to something, then you clearly know about it, and it clearly is expecting to be called (or to react to stimulus). Why not just call methods on that dependency directly when the events happen?
Regardless, the current design you have will not pass on the OnError.
You guarantee that the outer sequence will only ever OnNext a single ObjectThatConsumesXs and then complete.
Internally, that ObjectThatConsumesXs will subscribe to the published sequence, receive the error, but have no way to propagate that back to the other code path.
As another note, you have a type called ObjectThatConsumesXs, but you then go on to consume the outer sequence directly within the other method.
Why the double handling?
If you can explain what it is that you are trying to do (not how you are trying to solve it), then I am sure the community can point you to a more appropriate design.
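That said, if you do want the source's OnError to reach the outer subscriber, one sketch (using the types from your question; untested against your project) is to emit the record first and then forward only the termination of the published sequence:

```vb
Dim obs = source.Publish(Function(published)
                             Return published.
                                 IgnoreElements().
                                 Select(Function(x) DirectCast(Nothing, ObjectThatConsumesXs)).
                                 StartWith(New ObjectThatConsumesXs(published))
                         End Function)
```

Here IgnoreElements drops the Integer values (the record still consumes them itself via published), but OnError and OnCompleted pass through, so the outer Subscribe sees the source's error as well as the single record.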
I have a hierarchy of classes: one base class B and several classes D derived from B.
There is a protected member m_treeID, which holds the ID of the tree control inside each derived class.
In the base class, I want to fill the message map like
ON_NOTIFY(NM_CLICK, m_treeID, OnNMClickTree)
instead of going for each D to do
ON_NOTIFY(NM_CLICK, TREE_A, OnNMClickTree)
ON_NOTIFY(NM_CLICK, TREE_B, OnNMClickTree)
... and so on.
Is it possible?
If I understand you right, have you looked at using ON_NOTIFY_RANGE?
If you need to process the same WM_NOTIFY message for a set of controls, you can use ON_NOTIFY_RANGE rather than ON_NOTIFY. For instance, you may have a set of buttons for which you want to perform the same action for a certain notification message.
When you use ON_NOTIFY_RANGE, you specify a contiguous range of child identifiers for which to handle the notification message by specifying the beginning and ending child identifiers of the range.
ClassWizard does not handle ON_NOTIFY_RANGE; to use it, you need to edit your message map yourself.
It explains how to use it in the article. As long as TREE_A, TREE_B etc. are sequentially numbered then you can have one message handler for all of them.
There are methods that must deal with special situations.
For example, method Print must deal with situations where printing is manually canceled by the user (Canceled) or the printer is out of paper (OutOfPaper).
These situations are not errors and not exceptions, because they are part of the business logic.
I see two variants of method implementation.
Variant 1:
Public Enum PrintResult
    Ok
    Canceled
    OutOfPaper
End Enum

Public Function Print() As PrintResult
End Function
Method Print has a distinct result type PrintResult which contains a report of what happened during method execution.
Consumer calls method Print, obtains the result, analyses the result and decides what to do next.
Variant 2:
Public Sub Print(canceledAction As Action, outOfPaperAction As Action)
End Sub
Method Print doesn't have a distinct result type; instead, the behavior for special situations is passed to the method by means of callbacks/delegates/interfaces.
When calling method Print, the consumer provides the methods to use in special situations.
Questions:
Are there other variants?
When is it better to use each variant?
I vote for returning a result. Passing delegates implies that the method knows much more about the situation than it should.
Ideally, the method should not know that any follow-up actions are even available. It just returns its state; whether there is any action to take in response is not its business.
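A minimal sketch of Variant 1 from the consumer's side (the Select Case shape follows the enum above; ShowStatus and PromptToReloadPaper are illustrative handler names):

```vb
Select Case printer.Print()
    Case PrintResult.Ok
        ' Nothing special to do.
    Case PrintResult.Canceled
        ShowStatus("Printing was canceled by the user.")
    Case PrintResult.OutOfPaper
        PromptToReloadPaper()
End Select
```

The method stays ignorant of the UI; the consumer decides what each outcome means.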
The code below represents a singleton that I use in my application. Let's assume that _MyObject = New Object represents a very expensive database call that I do not want to make more than once under any circumstance. To ensure that this doesn't happen, I first check whether the _MyObject backing field is null. If it is, I enter a SyncLock to ensure that only one thread can get in here at a time. However, in the event that two threads get past the first null check before the singleton is instantiated, thread B would end up sitting at the SyncLock while thread A creates the instance. After thread A exits the lock, thread B would enter the lock and recreate the instance, which would result in that expensive database call being made again. To prevent this, I added an additional null check of the backing field which occurs within the lock. This way, if thread B manages to end up waiting at the lock, it will get through and do one more null check to ensure that it doesn't recreate the instance.
So is it really necessary to do two null checks? Would getting rid of the outer null check and just starting out with the SyncLock be just the same? In other words, is acquiring the lock just as fast as letting multiple threads read the backing field simultaneously? If so, the outer null check is superfluous.
Private Shared synclocker As New Object
Private Shared _MyObject As Object = Nothing

Public Shared ReadOnly Property MyObject As Object
    Get
        If _MyObject Is Nothing Then ' superfluous null check?
            SyncLock synclocker
                If _MyObject Is Nothing Then _MyObject = New Object
            End SyncLock
        End If
        Return _MyObject
    End Get
End Property
This will probably be better as an answer rather than a comment.
So, using Lazy(Of T) to implement "do the expensive operation only once, then return a reference to the created instance":
Private Shared _MyObject As Lazy(Of Object) = New Lazy(Of Object)(AddressOf InitYourObject)

Private Shared Function InitYourObject() As Object
    Return New Object()
End Function

Public Shared ReadOnly Property MyObject As Object
    Get
        Return _MyObject.Value
    End Get
End Property
This is a very simple and thread-safe way of doing on-demand one-time initialization. The InitYourObject method handles whatever initialization you need to do and returns an instance of the created class. The initialization method is called on the first access to _MyObject.Value; subsequent requests return the same instance.
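For illustration, callers just read the property; the first read triggers InitYourObject and later reads reuse the cached instance:

```vb
Dim a = MyObject ' first access: InitYourObject runs here
Dim b = MyObject ' same cached instance, no second initialization
Debug.Assert(Object.ReferenceEquals(a, b))
```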
You're absolutely right to have added the inner If statement (you would still have a race condition without it, as you correctly noted).
You are also correct that, from a purely-logical point of view, the outer check is superfluous. However, the outer null check avoids the relatively-expensive SyncLock operation.
Consider: if you've already created your singleton, and you happen to hit your property from 10 threads at once, the outer If is what prevents those 10 threads from queueing up to essentially do nothing. Synchronising threads isn't cheap, and so the added If is for performance rather than for functionality.
MS reference: http://msdn.microsoft.com/en-us/library/3a86s51t(v=vs.71).aspx
"The type of the expression in a SyncLock statement must be a reference type, such as a class, a module, an interface, array or delegate."
Scenario: Multiple threads reading and editing a list.
I know this will avoid a race condition:
SyncLock TheList
    TheList.item(0) = "string"
End SyncLock
But will this?
SyncLock TheList.item(0)
    TheList.item(0) = "string"
End SyncLock
No, your second snippet is fundamentally wrong: you are replacing the very object that you lock on, so another thread will take its lock on a different object, and you have no thread safety at all. A lock only works if all threads use the exact same object to store the lock state.
Notable too is the kind of object you take the lock on. Your second snippet locks on an interned string. That is very, very bad, since it is likely to cause deadlock: any other code anywhere else might be wrong in the same way and also take a lock on a string literal. If that literal happens to be "string" as well, you can easily get a completely undiagnosable deadlock.
There is also a problem with your first snippet: other code might be taking a lock on the TheList object, since it is probably public, producing deadlock for the same reason. The boilerplate solution is to always use a dedicated object to store the lock state, one that isn't used for anything else and only ever appears in code that accesses the list:
Private ListLock As Object = New Object
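Used like this (a sketch; every reader and writer of TheList must go through the same ListLock for the lock to mean anything):

```vb
SyncLock ListLock
    TheList.Item(0) = "string"
End SyncLock
```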
Option Strict On

Public Class UtilityClass
    Private Shared _MyVar As String

    Public Shared ReadOnly Property MyVar() As String
        Get
            If String.IsNullOrEmpty(_MyVar) Then
                _MyVar = System.Guid.NewGuid.ToString()
            End If
            Return _MyVar
        End Get
    End Property

    Public Shared Sub SaveValue(ByVal newValue As String)
        _MyVar = newValue
    End Sub
End Class
While locking is a good general approach to adding thread safety, in many scenarios involving write-once quasi-immutability, where a field should become immutable as soon as a non-null value is written to it, Threading.Interlocked.CompareExchange may be better. Essentially, that method reads a field and, before anyone else can touch it, writes a new value if and only if the field matches the supplied "compare" value; it returns the value that was read in either case. If two threads simultaneously attempt a CompareExchange, with both threads specifying the field's present value as the "compare" value, one of the operations will update the value and the other will not, and each operation will "know" whether it succeeded.
There are two main usage patterns for CompareExchange. The first is most useful for generating mutable singleton objects, where it's important that everyone see the same instance.
If _thing Is Nothing Then
    Dim NewThing As New Thingie() ' Or construct it somehow
    Threading.Interlocked.CompareExchange(_thing, NewThing, Nothing)
End If
This pattern is probably what you're after. Note that if a thread enters the above code between the time another thread has done so and the time it has performed the CompareExchange, both threads may end up creating a new Thingie. If that occurs, whichever thread reaches the CompareExchange first will have its new instance stored in _thing, and the other thread will abandon its instance. In this scenario, the threads don't care whether they win or lose; _thing will have a new instance in it, and all threads will see the same instance there. Note also that because there's no memory barrier before the first read, it is theoretically possible that a thread which has examined the value of _thing sometime in the past might continue seeing it as Nothing until something causes it to update its cache, but if that happens the only consequence will be the creation of a useless new instance of Thingie which will then get discarded when the Interlocked.CompareExchange finds that _thing has already been written.
The other main usage pattern is useful for updating references to immutable objects, or, with slight adaptations, for updating certain value types like Integer or Long.
Dim NewThing, WasThing As Thingie
Do
    WasThing = _thing
    NewThing = WasThing.WithSomeChange()
Loop While Threading.Interlocked.CompareExchange(_thing, NewThing, WasThing) IsNot WasThing
In this scenario, assuming there is some means by which, given a reference to a Thingie, one may cheaply produce a new instance that differs in some desired way, it's possible to perform any such operation on _thing in a thread-safe manner. For example, given a String, one may easily produce a new String which has some characters appended. If one wished to append some text to a string in a thread-safe manner (such that if one thread attempts to add Fred and another tries to add Joe, the net result would be to append either FredJoe or JoeFred, and not something like FrJoeed), the above code would have each thread read _thing, generate a version with its text appended, and try to update _thing. If some other thread updated _thing in the meantime, the thread abandons the string it just constructed, builds a new string based upon the updated _thing, and tries again.
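The string-append example can be made concrete by substituting String for Thingie in the loop above (a sketch; _text and AppendText are illustrative names):

```vb
Private Shared _text As String = ""

' Append suffix atomically: if another thread changed _text in the
' meantime, the CompareExchange fails and the loop retries.
Private Shared Sub AppendText(suffix As String)
    Dim wasText As String, newText As String
    Do
        wasText = _text
        newText = wasText & suffix
    Loop While Threading.Interlocked.CompareExchange(_text, newText, wasText) IsNot wasText
End Sub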
Note that while this approach isn't necessarily faster than the locking approach, it does offer an advantage: if a thread which acquires a lock gets stuck in an endless loop or is otherwise waylaid, all threads will be forever blocked from accessing the locked resource. By contrast, if the WithSomeChange() method above gets stuck in an endless loop, other users of _thing won't be affected.
With multithreaded code, the relevant question is: Can state be modified from several threads? If so, the code isn’t thread safe.
In your code, that’s the case: there are several places which mutate _MyVar and the code is therefore not thread safe. The best way to make code thread safe is almost always to make it immutable: immutable state is simply thread safe by default. Furthermore, code that doesn’t modify state across threads is simpler and usually more efficient than mutating multi-threaded code.
Unfortunately, it’s impossible to tell without context whether (or how) your code could be made immutable. So we need to resort to locks, which are slow, error-prone (see the other answer for how easy it is to get this wrong) and give a false sense of security.
The following is my attempt to make the code correct with using locks. It should work (but keep in mind the false sense of security):
Public Class UtilityClass
    Private Shared _MyVar As String
    Private Shared ReadOnly _LockObj As New Object()

    Public Shared ReadOnly Property MyVar() As String
        Get
            SyncLock _LockObj
                If String.IsNullOrEmpty(_MyVar) Then
                    _MyVar = System.Guid.NewGuid.ToString()
                End If
                Return _MyVar
            End SyncLock
        End Get
    End Property

    Public Shared Sub SaveValue(ByVal newValue As String)
        SyncLock _LockObj
            _MyVar = newValue
        End SyncLock
    End Sub
End Class
A few comments:
We cannot lock on _MyVar since we change the reference of _MyVar, thus losing our lock. We need a separate dedicated locking object.
We need to lock each access to the variable, or at the very least every mutating access. Otherwise all the locking is for naught since it can be undone by changing the variable in another place.
Theoretically we do not need to lock if we only read the value – however, that would require double-checked locking which introduces the opportunity for more errors, so I’ve not done it here.
Although we don’t necessarily need to lock read accesses (see previous two points), we might still have to introduce a memory barrier somewhere to prevent reordering of read-write access to this property. I do not know when this becomes relevant because the rules are quite complex, and this is another reason I dislike locks.
All in all, it’s much easier to change the code design so that no more than one thread at a time has write access to any given variable, and to restrict all necessary communication between threads to well-defined communication channels via synchronised data structures.
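One way to apply that advice here, sketched with hypothetical names (_updates, WriterLoop): confine all writes to a single dedicated thread and let other threads communicate with it through a synchronised queue such as BlockingCollection.

```vb
' Hypothetical sketch: one writer thread owns _MyVar; other threads never
' touch it directly and instead post new values to a thread-safe queue.
Private Shared ReadOnly _updates As New System.Collections.Concurrent.BlockingCollection(Of String)()

Private Shared Sub WriterLoop()
    ' Runs on one dedicated thread; the only place _MyVar is ever written.
    For Each newValue In _updates.GetConsumingEnumerable()
        _MyVar = newValue
    Next
End Sub

Public Shared Sub SaveValue(newValue As String)
    _updates.Add(newValue) ' Thread-safe; any thread may call this.
End Sub
```

With this shape there is no shared mutable access to reason about at all, which is exactly the point of the last paragraph.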