Is foreach the only way to consume a BlockingCollection<T> in C#? - .net-4.0

I'm starting to work with TPL right now. I have seen a simple version of the producer/consumer model utilizing TPL in this video.
Here is the problem:
The following code:
BlockingCollection<Double> bc = new BlockingCollection<Double>(100);
IEnumerable<Double> d = bc.GetConsumingEnumerable();
returns an IEnumerable<Double> which can be iterated (and automatically consumed) using a foreach:
foreach (var item in d)
{
    // do anything with item
    // in the end of this foreach,
    // will there be any items left in d or bc? Why?
}
My questions are:
If I get the IEnumerator<Double> dEnum = d.GetEnumerator() from d (to iterate over d with a while loop, for instance), would dEnum.MoveNext() consume the list as well? (My answer: I don't think so, because dEnum is not linked with d, if you know what I mean. So it would consume dEnum, but not d, nor even bc.)
May I loop through bc (or d) in some way other than the foreach loop, still consuming the items? (the while loop cycles much faster than the foreach loop and I'm worried about performance issues in scientific computing problems)
What exactly does consume mean in the BlockingCollection<T> type?
E.g., code:
IEnumerator<Double> dEnum = d.GetEnumerator();
while (dEnum.MoveNext())
{
    // do the same with dEnum.Current as
    // I would with item in the foreach above...
}
Thank you all in advance!

If I get the IEnumerator<Double> dEnum = d.GetEnumerator() from d (to iterate over d with a while loop, for instance), would dEnum.MoveNext() consume the list as well?
Absolutely. That's all that the foreach loop will do anyway.
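For example, this sketch (using the d from the question) is essentially what the compiler generates for the foreach, and each MoveNext() call removes one item from bc:

using (IEnumerator<Double> dEnum = d.GetEnumerator())
{
    while (dEnum.MoveNext())
    {
        // dEnum.Current has already been taken out of bc here
        Double item = dEnum.Current;
    }
}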
May I loop through bc (or d) in some way other than the foreach loop, still consuming the items? (the while loop cycles much faster than the foreach loop and I'm worried about performance issues in scientific computing problems)
If your while loop is faster, that suggests you're doing something wrong. They should be exactly the same - except the foreach loop will dispose of the iterator too, which you should do...
If you can post a short but complete program demonstrating this discrepancy, we can look at it in more detail.
An alternative is to use Take (and similar methods).
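For example, a minimal sketch of a Take-based consumer (assuming some producer calls bc.CompleteAdding() when it's done); TryTake with Timeout.Infinite blocks until an item arrives, and returns false once the collection is empty and marked complete:

Double item;
while (bc.TryTake(out item, Timeout.Infinite))
{
    // process item; the loop exits on its own
    // once bc is empty and adding is complete
}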
What exactly does consume mean in the BlockingCollection type?
"Remove the next item from the collection" effectively.

There is no "performance issue" with foreach. Using the enumerator directly is not likely to give you any measurable improvement in performance compared to just using a foreach loop directly.
That being said, GetConsumingEnumerable() returns a standard IEnumerable<T>, so you can enumerate it any way you choose. Getting the IEnumerator<T> and enumerating through it directly will still work the same way.
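If you want to verify that on your own data, a quick harness like this sketch (names are illustrative; assumes the usual System.Linq and System.Diagnostics imports) should show the two loops performing essentially identically:

var data = Enumerable.Range(0, 10000000).Select(i => (double)i).ToList();
double sum = 0;

var sw = Stopwatch.StartNew();
foreach (var x in data) sum += x;
Console.WriteLine("foreach:    " + sw.ElapsedMilliseconds + " ms");

sum = 0;
sw.Restart();
using (var e = data.GetEnumerator())
{
    while (e.MoveNext()) sum += e.Current;
}
Console.WriteLine("enumerator: " + sw.ElapsedMilliseconds + " ms");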
Note that, if you don't want to use GetConsumingEnumerable(), you could just use ConcurrentQueue<T> directly. By default, BlockingCollection<T> wraps a ConcurrentQueue<T>, and really just provides a simpler API (GetConsumingEnumerable()) to make Producer/Consumer scenarios simpler to write. Using a ConcurrentQueue<T> directly would be closer to using BlockingCollection<T> without using the enumerable.
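A minimal sketch of that more direct style; unlike GetConsumingEnumerable(), TryDequeue never blocks, so you must decide for yourself how to wait when the queue is momentarily empty:

var queue = new ConcurrentQueue<double>();
queue.Enqueue(1.0);

double value;
while (queue.TryDequeue(out value))
{
    // process value; TryDequeue returns false immediately
    // when the queue is empty instead of blocking
}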

Related

How do I remember the root of a binary search tree in Haskell

I am new to Functional programming.
The challenge I have is regarding the mental map of how a binary search tree works in Haskell.
In other languages (C, C++) we have something called a root. We store it in a variable. We insert elements into it, do balancing, etc.
The program takes a break, does other things (maybe processes user inputs or creates threads), and then figures out it needs to insert a new element into the already created tree. It knows the root (stored as a variable) and invokes the insert function with the root and the new value.
So far so good in other languages. But how do I mimic such a thing in Haskell? That is:
I see functions for converting a list to a Binary Tree, inserting a value, etc. That's all good.
I want this functionality to be part of a bigger program, so I need to know what the root is so that I can use it to insert again. Is that possible? If so, how?
Note: is it not possible at all, because data structures are immutable and so we cannot use the root to insert something? In such a case, how is the above situation handled in Haskell?
It all happens in the same way, really, except that instead of mutating the existing tree variable we derive a new tree from it and remember that new tree instead of the old one.
For example, a sketch in C++ of the process you describe might look like:
int main(void) {
    Tree<string> root;
    while (true) {
        string next;
        cin >> next;
        if (next == "quit") exit(0);
        root.insert(next);
        doSomethingWith(root);
    }
}
A variable, a read action, and a loop with a mutate step. In Haskell, we do the same thing, but using recursion for looping and a recursion variable instead of mutating a local.
main = loop Empty
  where loop t = do
          next <- getLine
          when (next /= "quit") $ do
            let t' = insert next t
            doSomethingWith t'
            loop t'
If you need doSomethingWith to be able to "mutate" t as well as read it, you can lift your program into State:
main = loop Empty
  where loop t = do
          next <- getLine
          when (next /= "quit") $ do
            loop (execState doSomethingWith (insert next t))
Writing an example with a BST would take too much time but I give you an analogous example using lists.
Let's invent an updateListN function which updates the n-th element of a list.
updateListN :: Int -> a -> [a] -> [a]
updateListN i x l = take (i - 1) l ++ x : drop i l
Now for our program:
list = [1,2,3,4,5,6,7,8,9,10] -- The big data structure we might want to use multiple times

main = do
  -- only for show
  print $ updateListN 3 30 list -- [1,2,30,4,5,6,7,8,9,10]
  print $ updateListN 8 80 list -- [1,2,3,4,5,6,7,80,9,10]
  -- now some illustrative complicated processing
  let list' = foldr (\i l -> updateListN i (i*10) l) list list
  -- list' = [10,20,30,40,50,60,70,80,90,100]
  -- Our crazily complicated illustrative algorithm still needs `list`
  print $ zipWith (-) list' list
  -- [9,18,27,36,45,54,63,72,81,90]
See how we "updated" list but it was still available? Most data structures in Haskell are persistent, so updates are non-destructive. As long as we have a reference to the old data around, we can use it.
As for your comment:
My program is trying the following: a) convert a list to a Binary Search Tree, b) do some I/O operation, c) ask for a user input to insert a new value in the created Binary Search Tree, d) insert it into the already created list. This is what the program intends to do. Not sure how to get this done in Haskell (or am I stuck in the old mindset?). Any ideas/hints welcome.
We can sketch a program:
data BST
readInt :: IO Int; readInt = undefined
toBST :: [Int] -> BST; toBST = undefined
printBST :: BST -> IO (); printBST = undefined
loop :: [Int] -> IO ()
loop list = do
  int <- readInt
  let newList = int : list
  let bst = toBST newList
  printBST bst
  loop newList

main = loop []
"do balancing" ... "It knows the root" nope. After re-balancing the root is new. The function balance_bst must return the new root.
Same in Haskell, but also with insert_bst. It too will return the new root, and you will use that new root from that point forward.
Even if the new root's value is the same, in Haskell it's a new root, since one of its children has changed.
See "How to 'think functional'" here.
Even in C++ (or other imperative languages), it would usually be considered a poor idea to have a single global variable holding the root of the binary search tree.
Instead code that needs access to a tree should normally be parameterised on the particular tree it operates on. That's a fancy way of saying: it should be a function/method/procedure that takes the tree as an argument.
So if you're doing that, then it doesn't take much imagination to figure out how several different sections of code (or one section, on several occasions) could get access to different versions of an immutable tree. Instead of passing the same tree to each of these functions (with modifications in between), you just pass a different tree each time.
It's only a little more work to imagine what your code needs to do to "modify" an immutable tree. Obviously you won't produce a new version of the tree by directly mutating it, you'll instead produce a new value (probably by calling methods on the class implementing the tree for you, but if necessary by manually assembling new nodes yourself), and then you'll return it so your caller can pass it on - by returning it to its own caller, by giving it to another function, or even calling you again.
Putting that all together, you can have your whole program manipulate (successive versions of) this binary tree without ever having it stored in a global variable that is "the" tree. An early function (possibly even main) creates the first version of the tree, passes it to the first thing that uses it, gets back a new version of the tree and passes it to the next user, and so on. And each user of the tree can call other subfunctions as needed, with possibly many of new versions of the tree produced internally before it gets returned to the top level.
Note that I haven't actually described any special features of Haskell here. You can do all of this in just about any programming language, including C++. This is what people mean when they say that learning other types of programming makes them better programmers even in imperative languages they already knew. You can see that your habits of thought are drastically more limited than they need to be; you could not imagine how you could deal with a structure "changing" over the course of your program without having a single variable holding a structure that is mutated, when in fact that is just a small part of the tools that even C++ gives you for approaching the problem. If you can only imagine this one way of dealing with it then you'll never notice when other ways would be more helpful.
Haskell also has a variety of tools it can bring to this problem that are less common in imperative languages, such as (but not limited to):
Using the State monad to automate and hide much of the boilerplate of passing around successive versions of the tree.
Function arguments allow a function to be given an unknown "tree-consumer" function, to which it can give a tree, without any one place both having the tree and knowing which function it's passing it to.
Lazy evaluation sometimes negates the need to even have successive versions of the tree; if the modifications are expanding branches of the tree as you discover they are needed (like a move-tree for a game, say), then you could alternatively generate "the whole tree" up front even if it's infinite, and rely on lazy evaluation to limit how much work is done generating the tree to exactly the amount you need to look at.
Haskell does in fact have mutable variables, it just doesn't have functions that can access mutable variables without exposing in their type that they might have side effects. So if you really want to structure your program exactly as you would in C++ you can; it just won't really "feel like" you're writing Haskell, won't help you learn Haskell properly, and won't allow you to benefit from many of the useful features of Haskell's type system.

VB.NET, best practice for sorting concurrent results of threadpooling?

In short, I'm trying to "sort" incoming results from thread pooling as they finish. I have a functional solution, but there's no way in the world it's the best way to do it (it's prone to huge pauses). So here I am! I'll try to hit the bullet points of what's going on/what needs to happen and then post my current solution.
The intent of the code is to get information about files in a directory, and then write that to a text file.
I have a list (Counter.ListOfFiles) that is a list of the file paths sorted in a particular way. This is the guide that dictates the order I need to write to the text file.
I'm using a threadpool to collect the information about each file and create a StringBuilder with all of the text ready to write to the text file. I then call a procedure (SyncUpdate, included below), sending the StringBuilder (strBld) from that thread along with the path of the file that particular thread just wrote about (Xref).
The procedure includes a SyncLock to hold all the other threads until it finds a thread passing the correct information. That "correct" information is when the Xref passed by the thread matches the first item in my list (FirstListItem). When that happens, I write to the text file, delete the first item in the list, and do it again with the next thread.
The way I'm using the monitor is probably not great; in fact, I have little doubt I'm using it in an offensively wanton manner. Basically, while the Xref (from the thread) <> the first item in my list, I'm doing a PulseAll on the monitor. I originally was using Monitor.Wait, but it would eventually just give up trying to sort through the list, even when using a pulse elsewhere. I may have just been coding something awkwardly. Either way, I don't think it's going to change anything.
Basically the problem comes down to the fact that the monitor will pulse through all of the items it has in the queue, when there's a good chance the item I am looking for got passed to it earlier in the queue, so it now sorts through all of the items again before looping back around to find a criterion that matches. The result is that my code will hit one of these and take a huge amount of time to complete.
I'm open to believing I'm just using the wrong tool for the job, or just not using the tool I have correctly. I would strongly prefer some sort of threaded solution (unsurprisingly, it's much faster!). I've been messing around a bit with the Parallel Task functionality today, and a lot of the stuff looks promising, but I have even less experience with that vs. threadpool, and you can see how I'm abusing that! Maybe something with queue? You get the idea. I am directionless. Anything someone could suggest would be much appreciated. Thanks! Let me know if you need any additional information.
Private Sub SyncUpdateResource(strBld As Object, Xref As String)
    SyncLock (CType(strBld, StringBuilder))
        Dim FirstListitem As String = Counter.ListOfFiles.First
        Do While Xref <> FirstListitem
            FirstListitem = Counter.ListOfFiles.First
            'This makes the code much faster for reasons I can only guess at.
            Thread.Sleep(5)
            Monitor.PulseAll(CType(strBld, StringBuilder))
        Loop
        Dim strVol As String = Form1.Volname
        Dim strLFPPath As String = Form1.txtPathDir
        My.Computer.FileSystem.WriteAllText(strLFPPath & "\" & strVol & ".txt", strBld.ToString, True)
        Counter.ListOfFiles.Remove(Xref)
    End SyncLock
End Sub
This is a pretty typical multiple producer, single consumer application. The only wrinkle is that you have to order the results before they're written to the output. That's not difficult to do. So let's let that requirement slide for a moment.
The easiest way in .NET to implement a producer/consumer relationship is with BlockingCollection, which is a thread-safe FIFO queue. Basically, you do this:
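A sketch of that shape (ProcessFile and WriteResult are placeholders for your own code):

var queue = new BlockingCollection<string>(100);

// Producer(s): process files and enqueue the results.
// Add blocks when the queue is full, keeping memory bounded.
var producer = Task.Factory.StartNew(() =>
{
    foreach (var path in Counter.ListOfFiles)
        queue.Add(ProcessFile(path));
    queue.CompleteAdding(); // with N producers, complete after the last one finishes
});

// Single consumer: drains the queue until CompleteAdding is seen.
foreach (var result in queue.GetConsumingEnumerable())
    WriteResult(result);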
In your case, the producer threads get items, do whatever processing they need to, and then put the item onto the queue. There's no need for any explicit synchronization--the BlockingCollection class implementation does that for you.
Your consumer pulls things from the queue and outputs them. You can see a really simple example of this in my article Simple Multithreading, Part 2. (Scroll down to the third example if you're just interested in the code.) That example just uses one producer and one consumer, but you can have N producers if you want.
Your requirements have a little wrinkle in that the consumer can't just write items to the file as it gets them. It has to make sure that it's writing them in the proper order. As I said, that's not too difficult to do.
What you want is a priority queue of some sort onto which you can place an item if it comes in out of order. Your priority queue can be a sorted list or even just a sequential list if the number of items you expect to get out of order isn't very large. That is, if you typically have only a half dozen items at a time that could be out of order, then a sequential list could work just fine.
I'd use a heap because it performs well. The .NET Framework doesn't supply a heap, but I have a simple one that works well for jobs like this. See A Generic BinaryHeap Class.
So here's how I'd write the consumer (the code is in pseudo-C#, but you can probably convert it easily enough).
The assumption here is that you have a BlockingCollection called sharedQueue that contains the items. The producers put items on that queue. Consumers do this:
var heap = new BinaryHeap<ItemType>();
int expectedSequenceKey = 0; // or whatever the first key will be

foreach (var item in sharedQueue.GetConsumingEnumerable())
{
    if (item.SequenceKey == expectedSequenceKey)
    {
        // output this item
        // then check the heap to see if other items need to be output
        expectedSequenceKey = expectedSequenceKey + 1;
        while (heap.Count > 0 && heap.Peek().SequenceKey == expectedSequenceKey)
        {
            var heapItem = heap.RemoveRoot();
            // output heapItem
            expectedSequenceKey = expectedSequenceKey + 1;
        }
    }
    else
    {
        // item is out of order
        // put it on the heap
        heap.Insert(item);
    }
}
// if the heap still contains items after everything is processed,
// then some error occurred.
One glaring problem with this approach as written is that the heap could grow without bound if one of your producers crashes or goes into an infinite loop, so that an expected sequence key never arrives. But then, your other approach probably would suffer from that as well. If you think that's an issue, you'll have to add some way to skip an item that you think won't ever be forthcoming. Or kill the program. Or something.
If you don't have a binary heap or don't want to use one, you can do the same thing with a SortedList<ItemType>. SortedList will be faster than List, but slower than BinaryHeap if the number of items in the list is even moderately large (a couple dozen). Fewer than that and it's probably a wash.
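Here's a sketch of that alternative using SortedDictionary as the out-of-order buffer (ItemType and SequenceKey as in the pseudo-code above; pending.First() needs System.Linq): it buffers each arriving item and then drains whatever run of items is now consecutive:

var pending = new SortedDictionary<int, ItemType>();
int expectedSequenceKey = 0;

foreach (var item in sharedQueue.GetConsumingEnumerable())
{
    pending.Add(item.SequenceKey, item);
    // emit every buffered item that is next in sequence
    while (pending.Count > 0)
    {
        var first = pending.First(); // smallest key
        if (first.Key != expectedSequenceKey)
            break;
        // output first.Value
        pending.Remove(first.Key);
        expectedSequenceKey++;
    }
}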
I know that's a lot of info. I'm happy to answer any questions you might have.

What is the quickest way to iterate through an Iterator in reverse

Let's say I'd like to iterate through a generic iterator in reverse, without knowing about the internals of the iterator, essentially not cheating via untyped magic, and assuming this could be any type of iterable which serves an iterator; can we optimise the reverse of an iterator at runtime, or even via macros?
Forwards
var a = [1, 2, 3, 4].iterator();
// Actual iteration below
for (i in a) {
    trace(i);
}
Backwards
var a = [1, 2, 3, 4].iterator();
// Actual reverse iteration below
var s = [];
for (i in a) {
    s.push(i);
}
s.reverse();
for (i in s) {
    trace(i);
}
I would assume that there has to be a simpler way, or at least a faster way, of doing this. We can't know a size, because the Iterator class doesn't carry one, so we can't invert the push onto the temp array. But we can remove the reverse, because we do know the size of the temp array.
var a = [1, 2, 3, 4].iterator();
// Actual reverse iteration below
var s = [];
for (i in a) {
    s.push(i);
}
var total = s.length;
var totalMinusOne = total - 1;
for (i in 0...total) {
    trace(s[totalMinusOne - i]);
}
Are there any more optimisations that could be used to remove the need for the array?
It bugs me that you have to duplicate the list, though... that's nasty. I mean, the data structure would ALREADY be an array, if that was the right data format for it. A better thing (less memory fragmentation and reallocation) than an Array (the "[]") to copy it into might be a linked List or a Hash.
But if we're using arrays, then Array Comprehensions (http://haxe.org/manual/comprehension) are what we should be using, at least in Haxe 3 or better:
var s = [for (i in a) i];
Ideally, at least for large iterators that are accessed multiple times, s should be cached.
To read the data back out, you could instead do something a little less wordy, but quite nasty, like:
for (i in 1 - s.length ... 1) {
    trace(s[-i]);
}
But that's not very readable and if you're after speed, then creating a whole new iterator just to loop over an array is clunky anyhow. Instead I'd prefer the slightly longer, but cleaner, probably-faster, and probably-less-memory:
var i = s.length;
while (--i >= 0) {
    trace(s[i]);
}
First of all, I agree with Dewi Morgan: duplicating the output generated by an iterator in order to reverse it somewhat defeats its purpose (or at least some of its benefits). Sometimes it's okay, though.
Now, about a technical answer:
By definition a basic iterator in Haxe can only compute the next iteration.
On why iterators are one-sided by default, here's what we can notice:
if all iterators could run backwards and forwards, the Iterator classes would take more time to write.
not all iterators run on collections or series of numbers.
E.g. 1: an iterator running on the standard input.
E.g. 2: an iterator running on a parabolic or more complicated trajectory for a ball.
E.g. 3: slightly different, but think about the performance problems of running an iterator on a very large singly-linked list (e.g. the class List). Some iterators can be interrupted in the middle of the iteration (Lambda.has() and Lambda.indexOf(), for instance, return as soon as there is a match), so you normally don't want to think of what's iterated as a collection, but more as an interruptible series or process iterated step by step.
While this doesn't mean you shouldn't define two-way iterators if you need them (I've never done it in Haxe but it doesn't seem impossible), in general having two-way iterators isn't that natural, and enforcing all Iterators to be like that would complicate coding one.
An intermediate and more flexible solution is to simply have a ReverseXxIter where you need it, for instance ReverseIntIter, or Array.reverseIter() (via a custom ArrayExt class used as a static extension). So it's left to every programmer to write their own; I think that's a good balance. While it takes more time and frustration in the beginning (everybody probably had the same kind of questions), you end up knowing the language better, and in the end there are just benefits for you.
Complementing Dewi Morgan's post: you can use for (let i = a.length; --i >= 0;) i; if you wish to simplify the while() method. If you really need the index values, I think for (let i = a.length, k = keys(a); --i in k;) a[k[i]]; is the best you can do while keeping performance. There is also for (let i of keys(a).reverse()) a[i];, which reads more cleanly, but its iteration cost increases by about 1n because of the .reverse().

How to optimize computing the difference between two arrays / lists of objects

Premise:
I am using ActionScript with two ArrayCollections containing objects with values to be matched...
I need a solution for this (if there is a library in the framework that does it better); otherwise, any suggestions are appreciated...
Let's assume I have two lists of elements A and B (no duplicate values) and I need to compare them and remove all the elements present in both, so at the end I should have:
in A all the elements that are in A but not in B
in B all the elements that are in B but not in A
Now I do something like this:
for (var i:int = 0; i < a.length;)
{
    var isFound:Boolean = false;
    for (var j:int = 0; j < b.length;)
    {
        if (a.getItemAt(i).nome == b.getItemAt(j).nome)
        {
            isFound = true;
            a.removeItemAt(i);
            b.removeItemAt(j);
            break;
        }
        j++;
    }
    if (!isFound)
        i++;
}
I cycle through both arrays, and if I find a match I remove the items from both of them (and don't increase the loop index, so the for loop progresses correctly).
I was wondering if there is (and I'm sure there is) a better (and less CPU-consuming) way to do it...
If you must use a list and you don't need the abilities of ArrayCollection, I suggest simply converting it to use AS3 Vectors. The performance increase according to this (http://www.mikechambers.com/blog/2008/09/24/actioscript-3-vector-array-performance-comparison/) is 60% compared to Arrays. I believe Arrays are already 3x faster than ArrayCollections, from some article I once read. Unfortunately, this solution is still O(n^2) in time.
As an aside, the reason why Vectors are faster than ArrayCollections is because you provide type-hinting to the VM. The VM knows exactly how large each object is in the collection and performs optimizations based on that.
Another optimization on the Vectors is to sort the data first by nome before doing the comparisons. You add another check to break out of the inner loop if the nome from list B simply wouldn't be found further down in list A due to the ordering.
If you want to go MUCH faster than that, use an associative array (an Object in AS3). Of course, this may require more refactoring effort. I am assuming object.nome is a unique string/id for the objects. Simply use the value of nome as the key in objectA and objectB. By doing it this way, you might not need to loop through each element in each list to do the comparison.
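To illustrate the shape of that approach, here is a sketch in C# terms (hypothetical Item class with a nome field): one pass builds a lookup from B, a second pass partitions A, so the whole diff is O(n) instead of O(n^2):

var inB = new Dictionary<string, Item>();
foreach (var item in b)
    inB[item.nome] = item;

var onlyA = new List<Item>();
foreach (var item in a)
{
    if (inB.ContainsKey(item.nome))
        inB.Remove(item.nome); // present in both: drop from both results
    else
        onlyA.Add(item);       // present only in A
}
// onlyA now holds the A-only items; inB.Values holds the B-only items
var onlyB = new List<Item>(inB.Values);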

Why is there no 'forall' in std.parallel?

I've been going over the new std.parallel library. I'm not a language or library designer, so forgive my ignorance, but would it not be beneficial if there was a forall statement in the language, or at least in std.parallel?
For example, instead of this:
auto logs = new double[1_000_000];
foreach (i, ref elem; taskPool.parallel(logs)) {
    elem = log(i + 1.0);
}
we could write this:
auto logs = new double[1_000_000];
forall!((x){ return log(x + 1.0); })(logs);
foreach is sequential by nature and we can break out of it at any time, whereas forall is a guarantee that all elements will be processed. Is that a correct statement? Is it only a matter of time before forall is implemented, or is there a good reason for not having it?
I think that you're misunderstanding what std.parallelism is doing with foreach. If you look at the documentation, it specifically states that
Breaking from a parallel foreach loop via a break, labeled break, labeled continue, return or goto statement throws a ParallelForeachError.
So, you can't break out of it at any time unless you throw an exception - which is exactly what the case would be with forall. When you use foreach with parallel, you're telling it to dole out the iterations of that loop to separate threads. They're almost certainly doled out in sequential order, but they're executed in parallel, and you don't really care about the order. If you did, you couldn't do them in parallel. So, adding a forall wouldn't buy you anything here.
D is by its very nature a sequential language just like most programming languages. It provides some powerful features which relate to threading (such as defaulting to thread-local storage), but I expect that it would require a fair bit of redesign to put something like forall directly in the language. And as it turns out, it's not necessary. The language is powerful enough to allow for the parallelism to be built on top of it. std.parallelism is effectively giving you forall. It's just that it's doing it by using the existing language feature foreach rather than having to have the language altered to understand and contain forall as a built-in feature.
And, as CyberShadow notes, a new module, std.parallel_algorithm, is in the works which will have parallel versions of many of the functions in std.algorithm so that you get that parallelism for free. Overall, std.parallelism seems to be doing a good job of giving easy to use but powerful parallelism features for D.
How about this?
auto logs = array(taskPool.amap!`log(a + 1.0)`(iota(0, 1_000_000)));
I should note that std.parallel_algorithm is in the works.