I have been looking through the Topshelf code, and notice that it is using an assembly called 'stact.dll'. There does not seem to be a lot of information around on this. It seems to be a library for building concurrent applications using actors and 'channels'. I find the Topshelf code a bit hard to follow, but I am interested in finding out more about this style of programming. Has anyone had any experience with this library? How did you go about learning how to use it?
Stact is currently only really used internally. It's something we've built up from our experience writing concurrent software, and it is mostly the work of Chris Patterson (https://github.com/phatboyg/Stact).
The simplest example I can think of that's out there is from Cashbox.
https://github.com/Cashbox/Cashbox/blob/v1.0/src/Cashbox/Engines/FileStorageEngine.cs
You have a channel which passes messages. On one end of that channel you set up the message subscriptions. Line 72 builds the subscriptions, setting a handler action for each message type it expects. The HandleOnFiber(_fiber) is forcing all messages to be processed on the same thread and they are queued up as they are received. There are other handle calls and hopefully the API is rather discoverable.
Now, this example hides all the channels and fibers inside one class; you might instead have channels connecting different classes, in which case a reference to the channel in question would have to be passed around.
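To make the pattern a little more concrete, here is a rough sketch of a class that owns a channel and a fiber, modelled on the Cashbox engine linked above. The type and method names (ChannelAdapter, PoolFiber, AddConsumerOf, UsingConsumer, HandleOnFiber) are recalled from that source and may not match the current Stact API exactly; the message types are invented for illustration.

```csharp
using System;
using Stact;

public class RequestMessage { public string Text { get; set; } }
public class ShutdownMessage { }

public class MessageProcessor
{
    // All handlers are queued onto this fiber, so they run one at a time.
    readonly Fiber _fiber = new PoolFiber();
    readonly UntypedChannel _channel = new ChannelAdapter();

    public MessageProcessor()
    {
        // One subscription per expected message type, as described above.
        _channel.Connect(x =>
        {
            x.AddConsumerOf<RequestMessage>()
                .UsingConsumer(msg => Console.WriteLine("Got: " + msg.Text))
                .HandleOnFiber(_fiber);

            x.AddConsumerOf<ShutdownMessage>()
                .UsingConsumer(msg => _fiber.Shutdown(TimeSpan.FromSeconds(30)))
                .HandleOnFiber(_fiber);
        });
    }

    // Anything holding a reference to the channel can push messages into it.
    public void SendRequest(string text)
    {
        _channel.Send(new RequestMessage { Text = text });
    }
}
```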
Stact is really an Actor library. There aren't any great examples, at the moment, of using it to write actors. I hope this helps.
I'm new to NServiceBus (4.7.5) and have just implemented an NSB host.exe hosted service (implementing IWantToRunWhenBusStartsAndStops) that detects changes to database tables and notifies subscribing web apps by publishing events, e.g. "CustomerDataWasUpdatedEvent". In the future we will obviously perform the actual updates through message handlers receiving commands, but at the moment this publishing service just polls the database.
It all works well. However, approaching production, I noticed that David Boike, in his latest edition of "Learning NServiceBus", states that classes implementing IWantToRunWhenBusStartsAndStops are really mostly for development and rarely used in production. I set up my database change detection in the Start method and it works nicely. Does anyone know why this is discouraged?
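For context, the shape of such a startup class is roughly the following (a minimal sketch against the NServiceBus 4.x interface; the event type, polling interval and detection logic are placeholders of mine):

```csharp
using System;
using System.Threading;
using NServiceBus;

// Hypothetical event published when a change is detected.
public class CustomerDataWasUpdatedEvent : IEvent
{
    public int CustomerId { get; set; }
}

public class DatabaseChangePublisher : IWantToRunWhenBusStartsAndStops
{
    // Injected by the NServiceBus container.
    public IBus Bus { get; set; }

    Timer _timer;

    public void Start()
    {
        // Poll the database on a fixed interval (placeholder: every 30 seconds).
        _timer = new Timer(Poll, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    public void Stop()
    {
        if (_timer != null)
            _timer.Dispose();
    }

    void Poll(object state)
    {
        // Placeholder: query a change-tracking table, then publish for each changed row.
        Bus.Publish(new CustomerDataWasUpdatedEvent { CustomerId = 42 });
    }
}
```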
Here is the comment in the actual book:
https://books.google.se/books?id=rvpzBgAAQBAJ&pg=PA110&lpg=PA110&dq=nservicebus+iwanttorunwhenbusstartsandstops+in+production+david+boike&source=bl&ots=U6sNII0nm3&sig=qIXffOVFhcy-_3qDnSExRpwRlD4&hl=sv&sa=X&ei=lHWRVc2_BKrWywPB65fIBw&ved=0CBsQ6AEwAA#v=onepage&q=nservicebus%20iwanttorunwhenbusstartsandstops%20in%20production%20david%20boike&f=false
The actual quote is:
...it isn't common to have widespread use of them in a production system.
Uncommon is not the same thing as discouraged.
That said, I do think the author intends here to highlight the point made further up the page: this is not a good place to be doing lots of coding, as an unhandled exception can cause the whole process to fail.
The author actually does go on to mention a possible use case: when you may want to load a resource (or resources) to do work within the handler.
OK, maybe it's just that this scenario of ours is a bit uncommon.
Agreed - there is nothing fundamentally wrong with your approach. I recently did the same thing as you for wiring up SqlDependency to listen for database events and then publish a message as a result. In these scenarios there is literally nothing else you can do other than to use IWantToRunAtStartup.
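For anyone curious, that kind of wiring looks roughly like this (a sketch only: the connection string, query and event type are placeholders, SqlDependency needs Service Broker enabled on the database, and the usual query-notification restrictions apply to the SELECT statement):

```csharp
using System.Data.SqlClient;
using NServiceBus;

public class CustomerDataChangedEvent : IEvent { }

public class CustomerChangeListener : IWantToRunWhenBusStartsAndStops
{
    const string ConnectionString =
        "Data Source=.;Initial Catalog=AppDb;Integrated Security=True"; // placeholder

    public IBus Bus { get; set; }

    public void Start()
    {
        SqlDependency.Start(ConnectionString);
        Subscribe();
    }

    public void Stop()
    {
        SqlDependency.Stop(ConnectionString);
    }

    void Subscribe()
    {
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand(
            "SELECT CustomerId, Name FROM dbo.Customers", connection))
        {
            var dependency = new SqlDependency(command);
            dependency.OnChange += (sender, e) =>
            {
                Subscribe(); // notifications are one-shot, so re-subscribe first
                Bus.Publish(new CustomerDataChangedEvent());
            };

            connection.Open();
            command.ExecuteReader().Dispose(); // executing the command registers the notification
        }
    }
}
```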
Also, David himself often trawls the nservicebus tag; maybe he'll provide a more definitive answer than mine.
I'll copy the answer I gave in the Particular Software Google Group...
I'll quote myself directly here:
An implementation of IWantToRunWhenBusStartsAndStops is a great place to create a quick interface in order to test messages during debugging by allowing you to send messages based on the console input. Apart from this, it isn't common to have widespread use of them in a production system. One possible production use case will be to provision a resource needed by the endpoint at startup and then tear it down when the endpoint stops.
I think if I could add a little bit of emphasis it would be to "widespread use". I'm not trying to say you won't/can't have an IWantToRunWhenBusStartsAndStops in production code or that avoiding them is a best practice. I am trying to say that having a ton of them is probably a code smell.
Above that paragraph in the book, I warn about IWantToRunWhenBusStartsAndStops not having any ambient transactions or try/catch stuff going on. THAT is really the key part. If you end up throwing an exception in an IWantToRunWhenBusStartsAndStops, you can run into big problems. If you use something like a .NET Timer and then throw an exception, you can crash your process!
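To make the hazard concrete: with System.Threading.Timer, an exception thrown from the callback escapes onto a thread-pool thread and terminates the process, so the callback has to guard itself. A minimal sketch:

```csharp
using System;
using System.Threading;

public class GuardedPoller
{
    readonly Timer _timer;

    public GuardedPoller()
    {
        _timer = new Timer(Tick, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    void Tick(object state)
    {
        try
        {
            DoWork(); // placeholder for the actual polling/scheduling logic
        }
        catch (Exception ex)
        {
            // Without this catch, the exception would escape on the thread-pool
            // thread and crash the whole endpoint process.
            Console.Error.WriteLine("Poll failed: " + ex);
        }
    }

    void DoWork() { /* ... */ }
}
```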
Let me tell you how I screwed up on this in my first-ever NServiceBus system. The system (still in use today, from what I hear) is responsible for ingesting more than 3000 RSS feeds (probably a lot more than that now) into a CMS. So processing each feed, breaking it up into items, resizing images, encoding attached video for mobile ... all those things were handled in NServiceBus message handlers, which were scaled out to multiple servers, and that was all fantastic.
The problem was the scheduler. I implemented that as an IWantToRunWhenBusStartsAndStops (well, actually IWantToRunAtStartup at that time) and it quickly turned into a mess. I kept a whole table's worth of feed information in memory so that I could calculate when to fire off the next ProcessFeed command. I was using the .NET Timer class, and IIRC, I eventually had to use threading primitives like ManualResetEvent in order to coordinate the activity. And because I was using .NET Timer, if the scheduler threw an exception, that endpoint failed and had to restart. There were lots of weird edge cases, and it was always a quagmire of bugs. Plus, this was now a singleton "commander app", so while the feed/item processors could be scaled out, the scheduler could not.
As I got more experienced with NServiceBus, I realized that each feed should have been a saga, starting from a FeedCreated event, controlled through PauseProcessing and ResumeProcessing commands, using timeouts to control the next processing time, and finally (perhaps) ended via a FeedRemoved event. This would have been MUCH more straightforward and everything would have executed inside transactionally-controlled message handlers.
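A rough sketch of that saga shape, for illustration only: the message types and property names are invented, this uses the synchronous handler style of the NServiceBus 4/5 era, and exact signatures (for example ConfigureHowToFindSaga) differ between versions:

```csharp
using System;
using NServiceBus;
using NServiceBus.Saga;

// Hypothetical messages for illustration.
public class FeedCreated : IEvent { public Guid FeedId { get; set; } }
public class FeedRemoved : IEvent { public Guid FeedId { get; set; } }
public class ProcessFeed : ICommand { public Guid FeedId { get; set; } }
public class ProcessNextFeedItems { } // timeout message

public class FeedSagaData : ContainSagaData
{
    [Unique]
    public Guid FeedId { get; set; }
}

public class FeedSaga : Saga<FeedSagaData>,
    IAmStartedByMessages<FeedCreated>,
    IHandleMessages<FeedRemoved>,
    IHandleTimeouts<ProcessNextFeedItems>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<FeedSagaData> mapper)
    {
        mapper.ConfigureMapping<FeedCreated>(m => m.FeedId).ToSaga(s => s.FeedId);
        mapper.ConfigureMapping<FeedRemoved>(m => m.FeedId).ToSaga(s => s.FeedId);
    }

    public void Handle(FeedCreated message)
    {
        Data.FeedId = message.FeedId;
        RequestTimeout<ProcessNextFeedItems>(TimeSpan.FromMinutes(15));
    }

    public void Timeout(ProcessNextFeedItems state)
    {
        // Each timeout sends a transactionally handled ProcessFeed command
        // and schedules the next run.
        Bus.Send(new ProcessFeed { FeedId = Data.FeedId });
        RequestTimeout<ProcessNextFeedItems>(TimeSpan.FromMinutes(15));
    }

    public void Handle(FeedRemoved message)
    {
        MarkAsComplete();
    }
}
```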
That experience led me to be a little bit distrustful/skeptical of IWantToRunWhenBusStartsAndStops. Not saying it's bad, just something to be aware of. Always be prepared to consider if what you're trying to do couldn't be better accomplished in another way.
I understand that Mule has three thread pools and how they work; however, I am amazed at the lack of documentation around numberOfConcurrentTransactedReceivers. There is virtually nothing that talks about it directly, not even Dossot's book.
There is one blog post which indirectly mentions it, but nothing concrete.
This answer here calls it a hidden feature :). Can someone please shed some light on it, and explain how it is related to the threading profile, maxActiveThreads and so on?
After a fair bit of looking around this is what I have found...
numberOfConcurrentTransactedReceivers is important and undocumented!
The behavior depends on the connector it is being used with, so this may not be a complete answer; however, it is my attempt at starting something. I am happy to mark a new answer as correct if it is more complete.
Only transactional message sources use numberOfConcurrentTransactedReceivers. It defines the number of threads that will be receiving messages from the message source at the same time.
The threading profile's maxThreads is not taken into account by these transports, so configuring it is useless. Nevertheless, if you explicitly set the receiver threading profile's doThreading attribute to false, it will disable the use of numberOfConcurrentTransactedReceivers.
For example, take the JMS transport:
For queues which are not using XA transactions, use numberOfConsumers.
For queues using XA transactions, use numberOfConcurrentTransactedReceivers.
For topics, use neither of them, as Mule will always create a single consumer.
I fear I may be displaying my ignorance with this question, but here goes...
I would like to use WCF to implement interprocess communication between a .NET app and a third-party app written in Qt. The Qt app has a plugin architecture that, if I choose to, can be used to bootstrap some .NET classes to handle WCF cleanly at both ends, but I'd rather keep the codebase native and therefore I'm thinking of ways to make sure that whatever I send down the wire with WCF, I can reassemble at the other end using classes available in Qt.
Qt has a SOAP message class, so I figured the preferable solution - and the one that's closest to the one we've hacked together already - is to send SOAP messages and pick them up off a QLocalSocket. Question is, is it possible to force WCF to encode messages as SOAP over a NetNamedPipeBinding, and if so, is it wise to do so?
I'm feeling rather wary at this point that my question might not make complete sense due to my shaky understanding of the technology involved. If this is the case, please take the time to explain why instead of just saying 'no'.
edit #1: I figure an update is warranted, as I've investigated some and should report my findings.
Firstly, I have found that Qt is a pig. The QtSoapMessage class I mentioned, it turns out, doesn't exist in the current version, and is available only as an after-market source package that you have to compile yourself. It took me many hours of googling to find out why this wasn't working. The Qt documentation is utterly dreadful, Qt Creator is counterintuitive in the extreme, and I've all but lost patience with it, so I haven't pursued this idea any further as yet. Furthermore, it isn't obvious how exactly I am to pass the socket data into the SOAP message constructor, which takes a QDomDocument, whereas the API for reading XML from a socket uses a QXmlStreamReader or some such. There doesn't seem to be any conversion between them.
You actually have a different problem to the one you think you have.
WCF will by default exchange SOAP messages over the NetNamedPipeBinding.
However, the message exchange is layered over some Microsoft proprietary protocols for transaction flow, message framing and encoding, which means that if on the Qt side you pick up a byte stream directly from a QLocalSocket you will have a lot of work to do to implement these underlying protocols before you will be able to get at the SOAP infoset itself.
It is possible to configure the NetNamedPipe binding to remove some of these protocol layers, but not all of them - the framing protocol will always be there, for example.
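For illustration, a custom binding along these lines swaps WCF's default binary encoder for a text (SOAP) encoder over the named pipe transport; as noted above, the framing protocol still wraps the stream. A minimal sketch, with the service shape and address being placeholders of mine:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

class PipeBindingExample
{
    static void Main()
    {
        // Text (SOAP 1.2) message encoding instead of the default binary encoder.
        var encoding = new TextMessageEncodingBindingElement
        {
            MessageVersion = MessageVersion.Soap12WSAddressing10
        };

        // Named pipe transport; the .NET message framing protocol still wraps
        // the stream, so a raw QLocalSocket reader must handle framing before
        // it sees the SOAP envelope.
        var transport = new NamedPipeTransportBindingElement();

        var binding = new CustomBinding(encoding, transport);

        var host = new ServiceHost(typeof(EchoService), new Uri("net.pipe://localhost/demo"));
        host.AddServiceEndpoint(typeof(IEchoService), binding, "echo");
        host.Open();

        Console.WriteLine("Listening. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```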
You might like to read my blog for a lot more detail on this.
Why use a message queuing system such as ZeroMQ, as opposed to writing your own library?
We're working on a project here that will be a self-dividing server pool: if one section grows too heavy, the manager will divide it and put it on another machine as a separate process. It will also alert all affected connected clients to connect to the new server.
I am curious about using ZeroMQ for inter-server and inter-process communication. My partner would prefer to roll his own. I'm looking to the community to answer this question.
I'm a fairly novice programmer myself and just learned about message queues. As I've googled and read, it seems everyone is using message queues for all sorts of things, but why? What makes them better than writing your own library? Why are they so common and why are there so many?
what makes them better than writing your own library?
When rolling out the first version of your app, probably nothing: your requirements are well defined and you will develop a messaging system that fits them: a small feature list, small source code, etc.
Those tools are very useful after the first release, when you actually have to extend your application and add more features to it.
Let me give you a few use cases:
your app will have to talk to a big-endian machine (SPARC/PowerPC) from a little-endian machine (x86, Intel/AMD). Your messaging system made some byte-ordering assumptions: go and fix it (a small framing sketch follows this list)
you designed your app around a non-binary protocol/messaging system and now it is very slow because you spend most of your time parsing messages (the number of messages increased and parsing became a bottleneck): adapt it so it can transport a binary/fixed encoding
at the beginning you had 3 machines inside a LAN, with no noticeable delays and everything getting to every machine. Your client/boss/pointy-haired-devil-boss shows up and tells you that you will install the app on a WAN you do not manage, and then you start having connection failures, bad latency, etc. You need to store messages and retry sending them later on: go back to the code and plug this stuff in (and enjoy)
some messages sent need to have replies, but not all of them: you send some parameters in and expect a spreadsheet as a result, instead of just a send and an acknowledgement. Go back to the code and plug this stuff in (and enjoy)
some messages are critical, and their reception/sending needs proper backup/persistence. Why, you ask? Auditing purposes
And many other use cases that I forgot ...
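To make the byte-ordering point above concrete, here is a small sketch of length-prefixed framing written in network byte order, so big- and little-endian machines agree on the wire format; the frame layout itself is invented for illustration:

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

static class Wire
{
    // Writes [4-byte big-endian length][UTF-8 payload].
    public static void WriteFrame(Stream stream, string payload)
    {
        byte[] body = Encoding.UTF8.GetBytes(payload);
        int lengthInNetworkOrder = IPAddress.HostToNetworkOrder(body.Length);
        stream.Write(BitConverter.GetBytes(lengthInNetworkOrder), 0, 4);
        stream.Write(body, 0, body.Length);
    }

    public static string ReadFrame(Stream stream)
    {
        byte[] lengthBytes = new byte[4];
        ReadExactly(stream, lengthBytes);
        int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(lengthBytes, 0));

        byte[] body = new byte[length];
        ReadExactly(stream, body);
        return Encoding.UTF8.GetString(body);
    }

    static void ReadExactly(Stream stream, byte[] buffer)
    {
        int offset = 0;
        while (offset < buffer.Length)
        {
            int read = stream.Read(buffer, offset, buffer.Length - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
    }
}
```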
You can implement it yourself, but do not spend much time doing so: you will probably replace it later on anyway.
That's very much like asking: why use a database when you can write your own?
The answer is that using a tool that has been around for a while and is well understood in lots of different use cases, pays off more and more over time and as your requirements evolve. This is especially true if more than one developer is involved in a project. Do you want to become support staff for a queueing system if you change to a new project? Using a tool prevents that from happening. It becomes someone else's problem.
Case in point: persistence. Writing a tool to store one message on disk is easy. Writing a persistor that scales and performs well and stably, in many different use cases, and is manageable, and cheap to support, is hard. If you want to see someone complaining about how hard it is then look at this: http://www.lshift.net/blog/2009/12/07/rabbitmq-at-the-skills-matter-functional-programming-exchange
Anyway, I hope this helps. By all means write your own tool. Many many people have done so. Whatever solves your problem, is good.
I'm considering using ZeroMQ myself - hence I stumbled across this question.
Let's assume for the moment that you have the ability to implement a message queuing system that meets all of your requirements. Why would you adopt ZeroMQ (or other third party library) over the roll-your-own approach? Simple - cost.
Let's assume for a moment that ZeroMQ already meets all of your requirements. All that needs to be done is to integrate it into your build, read some doco and then start using it. That's got to be far less effort than rolling your own. Plus, the maintenance burden has been shifted to another company. Since ZeroMQ is free, it's like you've just grown your development team to include (part of) the ZeroMQ team.
If you ran a Software Development business, then I think that you would balance the cost/risk of using third party libraries against rolling your own, and in this case, using ZeroMQ would win hands down.
Perhaps you (or rather, your partner) suffer, as so many developers do, from the "Not Invented Here" syndrome? If so, adjust your attitude and reassess the use of ZeroMQ. Personally, I much prefer the benefits of a "Proudly Found Elsewhere" attitude. I'm hoping I can be proud of finding ZeroMQ... time will tell.
EDIT: I came across this video from the ZeroMQ developers that talks about why you should use ZeroMQ.
what makes them better than writing your own library?
Message queuing systems are transactional, which is conceptually easy to use as a client, but hard to get right as an implementor, especially considering persistent queues. You might think you can get away with writing a quick messaging library, but without transactions and persistence, you'd not have the full benefits of a messaging system.
Persistence in this context means that the messaging middleware keeps unhandled messages in permanent storage (on disk) in case the server goes down; after a restart, the messages can be handled and no retransmit is necessary (the sender does not even know there was a problem). Transactional means that you can read messages from different queues and write messages to different queues in a transactional manner, meaning that either all reads and writes succeed or (if one or more fail) none succeeds. This is not really much different from the transactionality known from interfacing with databases and has the same benefits (it simplifies error handling; without transactions, you would have to assure that each individual read/write succeeds, and if one or more fail, you have to roll back those changes that did succeed).
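To make the transactional read-and-write idea concrete, here is a rough sketch of a consume-and-forward step using the channel transactions in the RabbitMQ .NET client (written against a recent, 6.x-era client; the queue names and processing are placeholders, and real systems often prefer publisher confirms over channel transactions):

```csharp
using System;
using RabbitMQ.Client;

class TransactionalForwarder
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // placeholder broker
        using (IConnection connection = factory.CreateConnection())
        using (IModel channel = connection.CreateModel())
        {
            channel.TxSelect(); // enable channel-level transactions

            BasicGetResult incoming = channel.BasicGet("work-in", false); // no auto-ack
            if (incoming == null) return; // nothing to do

            try
            {
                byte[] result = Process(incoming.Body.ToArray());

                // The ack of the input and the publish of the output are part of
                // the same transaction: either both take effect or neither does.
                channel.BasicAck(incoming.DeliveryTag, false);
                channel.BasicPublish("", "work-out", null, result);
                channel.TxCommit();
            }
            catch (Exception)
            {
                channel.TxRollback(); // the message stays on "work-in", nothing is published
                throw;
            }
        }
    }

    static byte[] Process(byte[] input) { return input; } // placeholder
}
```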
Before writing your own library, read the 0MQ Guide here: http://zguide.zeromq.org/page:all
Chances are that you will either decide to install RabbitMQ, or else you will make your library on top of ZeroMQ since they have already done all the hard parts.
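As a rough sense of how little plumbing is left once you lean on the library, here is a minimal request/reply sketch, assuming the NetMQ binding for .NET (the address and message strings are placeholders):

```csharp
using System;
using NetMQ;
using NetMQ.Sockets;

class ReqRepExample
{
    static void Main()
    {
        // In NetMQ connection strings, "@" binds and ">" connects.
        using (var server = new ResponseSocket("@tcp://127.0.0.1:5557"))
        using (var client = new RequestSocket(">tcp://127.0.0.1:5557"))
        {
            client.SendFrame("ping");

            string request = server.ReceiveFrameString();
            server.SendFrame(request + " -> pong");

            Console.WriteLine(client.ReceiveFrameString()); // "ping -> pong"
        }
    }
}
```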
If you have a little time, give it a try and roll out your own implementation! The lessons of this exercise will convince you of the wisdom of using an already-tested library.
Could anyone point me to a resource that explains WCF with pictures and simple code snippets. I am tired of googling and finding the same "ABC" articles in all search results.
WCF is a very complex technology that in my opinion is very poorly documented. It is incredibly easy to get up and running with, but the performance tuning needed to run a large-scale app can be very complicated and involve a lot of trial and error. One day everything is working fine, and then you find out that only a single channel is kept waiting for a new connection, and that there is a config setting you need to adjust on a custom binding to allow more channels to be waiting, so that calls don't fail in between when a channel is used and the next channel is spun up.
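The setting isn't named above, but it sounds like the transport's pending-accept limit, which on older runtimes defaulted to a single pending accept. As a hedged illustration only (this is my guess at the setting being described, and the values are placeholders), it can be raised in code on a custom binding:

```csharp
using System.ServiceModel.Channels;

static class TunedTcpBinding
{
    public static CustomBinding Create()
    {
        var transport = new TcpTransportBindingElement
        {
            MaxPendingAccepts = 16,     // more channels kept waiting to accept new connections
            MaxPendingConnections = 48, // connections accepted but not yet dispatched
            ListenBacklog = 64          // OS-level pending-connection queue
        };

        return new CustomBinding(new BinaryMessageEncodingBindingElement(), transport);
    }
}
```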
In general, Nicholas Allen's blog is a gold mine of information. However, WinDbg has been my best friend in trying to explain some very bizarre behavior coming from WCF.
Here's a really simple example. It's specific to CE/Mobile devices, but the concept is no different PC to PC.
I found the following two books to be really good for getting up to speed on WCF:
Programming WCF Services (Lowy - O'Reilly)
Pro WCF (Peiris, Mulder - Apress)
They both start with more of a conceptual description of WCF, so you understand the concepts and terms. This is really useful, because it allows you to narrow any google searches to more specific concepts.
And this is an article that breaks down understanding WCF and why it was developed in a simple, bulleted list.