What are the consequences of leaking resources? (IDisposable)

Obviously, we should clean up after ourselves as a matter of principle. And those of us around before the Windows 2000 era know the pain that memory leaks inflict on users. But I'm curious as to what the consequences of leaking handles to other system resources might be.
This would be things like unclosed files or database connections; really, anything that would be IDisposable in .NET. We're a Windows shop, but I would be interested in other OSes as well.
What arguments can I use to get team members to take this more seriously, or are there bigger fish to fry on modern systems?

Instead of thinking of resources as "things" to be "released", it's better to think of the acquisition of an IDisposable object as a responsibility to be carried out. Many kinds of IDisposable objects ask outside entities to do things on their behalf until they notify those entities that their services are no longer needed; by doing so, they acquire a responsibility to ensure that those outside entities are in fact given such notice. When Dispose is called on an IDisposable, it can carry out that responsibility by notifying everything whose services it had been using that those services are no longer required.
Objects can request notification if the system notices that they've been abandoned. Objects that receive such notification can then generally assume that their services are no longer needed, and notify anyone whose services they had been using accordingly. This mechanism (finalization) works okay in some cases, but it should not be considered reliable, since a variety of factors may prevent the system from noticing that an object has effectively been abandoned.
As for the consequence of failing to call Dispose, it's very simple: things that were supposed to happen as a consequence of an object's services no longer being required, won't happen. If an object was supposed to notify other objects or entities that their services are no longer required, and they were in turn supposed to notify other objects or entities, none of those notifications will happen.
Except in a few cases where code will be using a managed resource for the life of the program, and the OS can be relied upon to recognize termination of the program as an indication that its services are no longer needed, it will generally be easier to call Dispose on things that are no longer needed, whether or not they really "care", than to try to identify the cases where major problems would be caused by the resulting failure of entities to notify everything that cares that their services are no longer needed.
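In C# this responsibility is usually discharged with a using block, which guarantees that the Dispose notification is delivered even if an exception unwinds the stack. A minimal sketch (file names invented for illustration):

```csharp
using System;
using System.IO;

class Example
{
    static void Main()
    {
        // "using" guarantees Dispose runs when the block exits,
        // even if WriteLine throws; the file handle is released promptly.
        using (var writer = new StreamWriter("log.txt"))
        {
            writer.WriteLine("work done");
        } // writer.Dispose() is called here

        // Equivalent expansion: "using" is just try/finally around Dispose.
        StreamWriter w = new StreamWriter("log2.txt");
        try
        {
            w.WriteLine("work done");
        }
        finally
        {
            w.Dispose(); // the notification the object promised to deliver
        }
    }
}
```

The second form shows there is nothing magical about using; it is simply the try/finally pattern written for you.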

It really depends on what the resource is.
Some resources affect almost only your own process. Open file handles are limited within your process, but leaking them does not affect the overall system much. Cleaning them up is important if you have a long-running process such as a server or a GUI application, but for a run-once job it's not that important: when your process shuts down, these resources get cleaned up anyway.
Some resources affect other processes. Databases typically have connection limits that are quite low (sometimes due to licensing restrictions). If you don't properly close your connections when you're done with them, you will run out very quickly. Open connections also use up resources on the database server, potentially slowing it down for all users. Worse, such resources may not be reclaimed on process shutdown either, because the OS is not aware of them; the connections might eventually time out on the server, but that timeout may be considerably longer than the lifetime of your process.
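To make the connection-limit point concrete, here is a sketch against ADO.NET's SQL Server provider; the connection string is made up, and the client-side pool limit of 100 is just the SqlClient default (the server's own limits can be far lower):

```csharp
using System.Data.SqlClient;

class LeakDemo
{
    const string Cs = "Server=.;Database=Test;Integrated Security=true";

    static void Leaky()
    {
        // Each iteration opens a pooled connection and never disposes it.
        // After ~100 iterations (the default pool size), Open() starts
        // throwing InvalidOperationException: pool exhausted.
        for (int i = 0; i < 200; i++)
        {
            var conn = new SqlConnection(Cs);
            conn.Open();
            // ... run a query ... but no conn.Dispose()
        }
    }

    static void Correct()
    {
        for (int i = 0; i < 200; i++)
        {
            using (var conn = new SqlConnection(Cs))
            {
                conn.Open();
                // ... run a query ...
            } // returned to the pool here; the loop never exhausts it
        }
    }
}
```

Note that the pool limit makes the leak bite the leaking process first; it is the database server's own connection cap that makes it everyone else's problem.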

Possible to share information between an add-on to an existing program and a standalone application?

I'm looking at building a Cocoa application on the Mac with a back-end daemon process (really just a mostly-headless Cocoa app, probably), along with 0 or more "client" applications running locally (although if possible I'd like to support remote clients as well; the remote clients would only ever be other Macs or iPhone OS devices).
The data being communicated will be fairly trivial, mostly just text and commands (which I guess can be represented as text anyway), and maybe the occasional small file (an image possibly).
I've looked at a few methods for doing this but I'm not sure which is "best" for the task at hand. Things I've considered:
Reading and writing to a file (…yes), very basic but not very scalable.
Pure sockets (I have no experience with sockets, but I gather I can use them to send data both locally and over a network; it seems cumbersome to do everything in Cocoa this way, though)
Distributed Objects: seems rather inelegant for a task like this
NSConnection: I can't really figure out what this class even does, but I've read of it in some IPC search results
I'm sure there are things I'm missing, but I was surprised to find a lack of resources on this topic.
I am currently looking into the same questions. For me the possibility of adding Windows clients later makes the situation more complicated; in your case the answer seems to be simpler.
About the options you have considered:
Control files: While it is possible to communicate via control files, you have to keep in mind that the files need to be communicated via a network file system among the machines involved. So the network file system serves as an abstraction of the actual network infrastructure, but does not offer the full power and flexibility the network normally has. Implementation: Practically, you will need at least two files for each client/server pair: a file the server uses to send a request to the client(s) and a file for the responses. If each process can communicate both ways, you need to duplicate this. Furthermore, both the client(s) and the server(s) work on a "pull" basis, i.e., they need to revisit the control files frequently and see if something new has been delivered.
The advantage of this solution is that it minimizes the need for learning new techniques. The big disadvantage is that it has huge demands on the program logic; a lot of things need to be taken care of by you (Will the files be written in one piece or can it happen that any party picks up inconsistent files? How frequently should checks be implemented? Do I need to worry about the file system, like caching, etc? Can I add encryption later without toying around with things outside of my program code? ...)
If portability were an issue (which, as far as I understood from your question, is not the case), then this solution would be easy to port to different systems and even different programming languages. However, I don't know of any network file system for iPhone OS, though I am not very familiar with that platform.
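To illustrate how much of the "pull" logic lands on you, here is a minimal sketch of such a file poller (written in C# for concreteness, since this thread spans platforms; the paths and polling interval are invented):

```csharp
// Hypothetical pull-based control-file responder: polls a request file
// on a shared network volume and writes a reply file.
using System;
using System.IO;
using System.Threading;

class ControlFilePoller
{
    const string RequestPath  = @"\\shared\ctl\request.txt";   // assumed shared path
    const string ResponsePath = @"\\shared\ctl\response.txt";

    static void Main()
    {
        DateTime lastSeen = DateTime.MinValue;
        while (true)
        {
            // "Pull" basis: we must revisit the file; nothing notifies us.
            if (File.Exists(RequestPath))
            {
                var stamp = File.GetLastWriteTimeUtc(RequestPath);
                if (stamp > lastSeen)
                {
                    lastSeen = stamp;
                    string request = File.ReadAllText(RequestPath);
                    File.WriteAllText(ResponsePath, "ACK: " + request);
                }
            }
            Thread.Sleep(500); // the polling interval is a tuning problem in itself
        }
    }
}
```

Even this toy version silently assumes the writer produces the file atomically; that is exactly the kind of consistency question listed above that you must answer yourself.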
Sockets: The programming interface is certainly different; depending on your experience with socket programming, it may mean that you have more work learning it first and debugging it later. Implementation: Practically, you will need similar logic to before, i.e., client(s) and server(s) communicating via the network. A definite plus of this approach is that the processes can work on a "push" basis, i.e., they can listen on a socket until a message arrives, which is superior to checking control files regularly. Network corruption and inconsistencies are also not your concern. Furthermore, you (may) have more control over the way the connections are established, rather than relying on things outside of your program's control (again, this is important if you decide to add encryption later on).
The advantage is that a lot of things are taken off your shoulders that would bother an implementation of option 1. The disadvantage is that you still need to change your program logic substantially in order to make sure that you send and receive the correct information (file types etc.).
In my experience portability (i.e., ease of transitioning to different systems and even programming languages) is very good since anything even remotely compatible to POSIX works.
[EDIT: In particular, as soon as you communicate binary numbers, endianness becomes an issue and you have to take care of this problem manually; this is a common (!) special case of the "correct information" issue I mentioned above. It will bite you, e.g., when you have a PowerPC talking to an Intel Mac. This special case disappears with solutions 3+4, together with all of the other "correct information" issues.]
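For illustration, explicit byte-order handling can be as small as this (sketched in C# for concreteness; in Cocoa you would reach for htonl()/ntohl() or the CFSwapInt32* functions to the same effect):

```csharp
// Sketch: sending a 32-bit integer in network byte order (big-endian)
// so that a little-endian Intel machine and a big-endian PowerPC
// machine agree on the wire format.
using System;
using System.Net;

class Wire
{
    static byte[] Encode(int value) =>
        BitConverter.GetBytes(IPAddress.HostToNetworkOrder(value));

    static int Decode(byte[] wire) =>
        IPAddress.NetworkToHostOrder(BitConverter.ToInt32(wire, 0));

    static void Main()
    {
        // Round-trips to the same value on any architecture.
        Console.WriteLine(Decode(Encode(0x12345678)) == 0x12345678);
    }
}
```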
3.+4. Distributed objects: The NSProxy class cluster is used to implement distributed objects. NSConnection is responsible for setting up remote connections as a prerequisite for sending information around, so once you understand how to use this system, you also understand distributed objects. ;^)
The idea is that your high-level program logic does not need to be changed (i.e., your objects communicate via messages and receive results, and the messages together with the return types are identical to what you are used to from your local implementation), without your having to bother about the particulars of the network infrastructure. Well, at least in theory. Implementation: I am also working on this right now, so my understanding is still limited. As far as I understand, you do need to set up a certain structure, i.e., you still have to decide which processes (local and/or remote) can receive which messages; this is what NSConnection does. At this point, you implicitly define a client/server architecture, but you do not need to worry about the problems mentioned in 2.
There is an introduction with two explicit examples on the GNUstep project server; it illustrates how the technology works and is a good starting point for experimenting:
http://www.gnustep.org/resources/documentation/Developer/Base/ProgrammingManual/manual_7.html
Unfortunately, the disadvantages are a total loss of compatibility with other systems (although you will still do fine with the setup you mentioned of Macs and iPhone/iPad only) and a loss of portability to other languages. GNUstep with Objective-C is at best code-compatible, but there is no way to communicate between GNUstep and Cocoa; see my edit to question number 2 here: CORBA on Mac OS X (Cocoa)
[EDIT: I just came across another piece of information that I was unaware of. While I have checked that NSProxy is available on the iPhone, I did not check whether the other parts of the distributed objects mechanism are. According to this link: http://www.cocoabuilder.com/archive/cocoa/224358-big-picture-relationships-between-nsconnection-nsinputstream-nsoutputstream-etc.html (search the page for the phrase "iPhone OS") they are not. This would rule out this solution if you need to support iPhone/iPad at this moment.]
So to conclude, there is a trade-off between the effort of learning (and implementing and debugging) new technologies on the one hand and hand-coding lower-level communication logic on the other. While the distributed objects approach takes the most load off your shoulders and incurs the smallest changes in program logic, it is the hardest to learn and also (unfortunately) the least portable.
Disclaimer: Distributed Objects are not available on iPhone.
Why do you find distributed objects inelegant? They sound like a good match here:
transparent marshalling of fundamental types and Objective-C classes
it doesn't really matter whether clients are local or remote
not much additional work for Cocoa-based applications
The documentation might make it sound like more work than it actually is, but all you basically have to do is use protocols cleanly and export, or respectively connect to, the server's root object.
The rest should happen automagically behind the scenes for you in the given scenario.
We are using ThoMoNetworking and it works fine and is fast to set up. Basically it allows you to send NSCoding-compliant objects in the local network, but of course it also works if client and server are on the same machine. As a wrapper around the Foundation classes, it takes care of pairing, reconnections, etc.

Why use Queueing systems such as RabbitMQ

I am not a senior programmer, but I have been deploying applications for a while and have developed small complete systems.
I am starting to hear about queueing systems such as RabbitMQ. Maybe I have never developed any system that had to use a queueing system, but I am worried that I am not using one simply because I have no idea what to do with it. I have read the RabbitMQ tutorial on their site, but I am not sure what I would use it for. I am not sure whether any of it could not be achieved with conventional programming, no additional components, and regular databases or similar.
Can someone please explain why I would use a queueing system, with a small example? I mean not a hello-world example, but a practical scenario.
Thanks a lot for your time
RM
One of the key uses of middleware like message queues is to be able to send data between non-homogeneous systems. The messages themselves can be many things. Strings are the easiest for different languages on different systems to understand, but are often less useful for transferring more meaningful data. As a result, JSON and XML are very popular for the messages. These are just structured strings that can be converted into objects in the language of choice at the consumer end.
Additional useful features:
In some MQ systems, like RabbitMQ (not true of all MQ systems), the client library handles the communication side of things very nicely.
The messages can be asynchronous. If the consumer goes down, the messages will remain until the consumer is back online.
The MQ system can be set up with varying degrees of message durability. Messages can be removed from the queue once read, or remain until they are acknowledged. They can be persistent, so that even if the MQ system goes down, messages will not be lost.
Here goes with some possibly contrived examples. A Java program on a local system wants to send a message to a system connected through the internet. The local system has a server connected to the internet. Everything coming from the internet is blocked except a connection to the MQ. The Java program can publish the message to the MQ without needing access to the internet. The message sits on the queue until the external system picks it up. The Java program publishes a message, let's say XML, and the consumer could be a Perl program. As long as they have some way of understanding the XML, with a predefined way of serialization and deserialization, it will be fine.
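The consumer side of that picture might look like the following sketch, using the .NET RabbitMQ client for concreteness (the queue name and host are made up; any client library has the same shape):

```csharp
using System;
using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Consumer
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Durable queue: survives a broker restart.
            channel.QueueDeclare(queue: "work", durable: true,
                                 exclusive: false, autoDelete: false,
                                 arguments: null);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, ea) =>
            {
                var text = Encoding.UTF8.GetString(ea.Body.ToArray());
                Console.WriteLine("received: " + text);
                // Explicit ack: the message stays queued until we confirm
                // it, so crashing mid-handler does not lose it.
                channel.BasicAck(ea.DeliveryTag, multiple: false);
            };
            channel.BasicConsume(queue: "work", autoAck: false,
                                 consumer: consumer);
            Console.ReadLine(); // keep the process alive while consuming
        }
    }
}
```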
MQ systems tend to work best in "fire-and-forget" scenarios. If an event happens and others need to be notified of it, but the source system has no need for feedback from the other systems, then MQ might be a good fit.
If you understand the pros and cons of MQ and still don't understand why it would be a good fit for a particular system, then it probably isn't. I've seen systems where MQ was used but not needed, and the result was not pretty.
Most of the scenarios I've seen where it's worked out well involve integration between unrelated systems (usually out-of-the-box type systems). Let's say you have one system that takes orders, and a different system that fills the orders and ships them. In that scenario, the order system can use an MQ to notify the fulfillment system of the order, but the order system has no interest in waiting until the fulfillment system receives the order. So it puts a message in a queue and keeps going.
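A sketch of that order/fulfillment hand-off with the same .NET client (queue name and payload invented):

```csharp
using System.Text;
using RabbitMQ.Client;

class OrderSystem
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "orders", durable: true,
                                 exclusive: false, autoDelete: false,
                                 arguments: null);

            var props = channel.CreateBasicProperties();
            props.Persistent = true; // message also survives a broker restart

            var body = Encoding.UTF8.GetBytes("{\"orderId\": 42}");

            // Fire-and-forget: publish and keep going. The fulfillment
            // system consumes from "orders" whenever it happens to be up.
            channel.BasicPublish(exchange: "", routingKey: "orders",
                                 basicProperties: props, body: body);
        }
    }
}
```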
This is a very simplified answer, but it gives the general ideas.
Let's think about this in terms of telephone vs. email. Pretend for a minute that email does not exist. To get work done, you must phone everyone. When you communicate with someone via telephone, you need to have them at their desk in order to reach them (assume they are in a factory and can't hear their cell phone ring) :-) If the person you wish to reach isn't at the desk, you are stuck waiting until they return your call (or far more likely, you call them back later). It's the same with you - you don't have any work to do until someone calls you up. If multiple people call at once, you don't know about it because you can only handle one person at a time.
However, if we have email, it is possible for you to "queue" your requests with someone else, to answer (but more likely ignore) at their convenience. If they do ignore your email, you can always re-send it. You don't have to wait for them to be at the desk, and they don't have to wait until you are off the phone. The workload evens out and things run much more smoothly. As an added bonus, you can forward messages that you don't want to deal with to your peons.
In systems engineering, we use the term "closely coupled" to define programs (or parts of programs) that work like the telephone scenario above. They depend very closely upon each other, often sharing implementations among various parts of the program. In these programs, data is processed in serial order, one at a time. These systems are typically easy to build, but there are a few important drawbacks to consider: (1) changing any part of the program likely will cause cascading changes throughout the code, and this introduces bugs; (2) the system is not very scalable, and typically must be scrapped and rebuilt as needs grow; (3) all parts of the system must be functioning simultaneously or the whole system will not work.
Basically, closely-coupled programs are good if the program is very simple or if there is some specialized reason to use a closely-coupled program.
In the real world, things are much more complex. Programs cannot be that simple, and it becomes a nightmare to develop enterprise applications in a closely-coupled manner. Therefore, we use the term "loosely-coupled" to define large systems that are composed of many smaller pieces. The pieces have very well-defined boundaries and functions, so that changes to the system can be accomplished more easily. It is the essence of object-oriented design. Message queues (like RabbitMQ) allow email-like communication to take place among various programs and parts of programs, thus making the workflow much more like it would be with people. Adding extra capacity then becomes a simple matter of starting up an additional computer wherever you need it.
Obviously, this is a gross simplification, but I think it conveys the general idea. Building applications that use message queuing enables you to deploy massively scalable applications leveraging cloud service providers. Here is an article that talks about designing for the cloud:
http://blogs.msdn.com/b/silverlining/archive/2011/08/23/designing-and-building-applications-for-the-cloud.aspx

Distributed Objects + Grand Central Dispatch

Not a specific question as such, I'm more trying to test the waters. I like distributed objects, and I like grand central dispatch; How about I try to combine the two?
Does that even make sense? Has anyone played around in these waters? Would I be able to use GCD to help synchronize object access across machines? Or would it be better to stick to synchronizing local objects only? What should I look out for? What design patterns are helpful and what should I avoid?
As an example, I use GCD queues to synchronize access to a shared resource of some kind. What can I expect to happen if I make this resource public via distributed objects? Questions like: how nicely do blocks play with distributed objects? Can I expect to use everything as normal across machines? If not, can I wrangle it to do so? What difficulties can I expect?
I very much doubt this will work well. GCD objects are not Cocoa objects, so you can't reference them remotely. GCD synchronization primitives don't work across process boundaries.
While blocks are objects, they do not support NSCoding, so they can't be transmitted across process boundaries. (If you think about it, they are not much more than function pointers. The pointed-to function must have been compiled into the executable. So, it doesn't make sense that two different programs would share a block.)
Also, Distributed Objects depends on the connection being scheduled in a given run loop. Since you don't manage the threads used by GCD, you are not entitled to add run loop sources except temporarily.
Frankly, I'm not even sure how you envision it working, even theoretically. What do you hope to do? How do you anticipate it working?
Running across machines -- as in a LAN, MAN, or WAN?
In a LAN, distributed objects will probably work okay as long as the server you are connecting to is operational. However, most programmers you meet will probably raise an eyebrow and just ask you, "Why didn't you just use a web server on the LAN and just build your own wrapper class that makes it 'feel' like Distributed Objects?" I mean, for one thing, there are well-established tools for troubleshooting web servers, and it's easier and often cheaper to hire someone to build a web service for you rather than a distributed object server.
On a MAN or WAN, however, this would be slow and a very bad idea for most uses. For that type of communication, you're better off using what everyone else uses -- REST-like APIs with HTTPS/HTTP, sending either XML, JSON, or key/value data back and forth. So, you could make a class wrapper that makes this "feel" sort of like distributed objects. And my gut feeling tells me that you'll need to use tricks to speed this up, such as caching chunks of data locally on the client so that you don't have to keep fetching from the server, or even caching on the server so that it doesn't have to interact with a database as often.
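A sketch of what such a wrapper might look like, with a naive client-side cache to cut round trips (the class, endpoint, and caching policy are all invented for illustration):

```csharp
using System.Collections.Concurrent;
using System.Net.Http;
using System.Threading.Tasks;

// Hypothetical wrapper that makes a REST API "feel" a little like a
// remote object: callers just ask for values by key.
class RemoteStore
{
    private readonly HttpClient http = new HttpClient();
    private readonly ConcurrentDictionary<string, string> cache =
        new ConcurrentDictionary<string, string>();

    public async Task<string> GetAsync(string key)
    {
        // Serve repeat requests locally instead of hitting the server.
        if (cache.TryGetValue(key, out var cached))
            return cached;

        var value = await http.GetStringAsync(
            "https://example.com/api/values/" + key); // assumed endpoint
        cache[key] = value;
        return value;
    }
}
```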
GCD, Distributed Objects, Mach Ports, XPC, POSIX Message Queues, Named Pipes, Shared Memory, and many other IPC mechanisms really only make the most sense on local, application to application communication on the same computer. And they have the added advantage of privilege elevation if you want to utilize that. (Note, I said POSIX Message Queues, which are workstation-specific. You can still use a 'message queue service' on a LAN, MAN, or WAN -- there are many products available for that.)

Testing fault tolerant code

I'm currently working on a server application where we have agreed to try to maintain a certain level of service. The level of service we want to guarantee is: if a request is accepted by the server and the server sends an acknowledgement to the client, we want to guarantee that the request will happen, even if the server crashes. As requests can be long-running and the acknowledgement time needs to be short, we implement this by persisting the request, then sending an acknowledgement to the client, then carrying out the various actions to fulfill the request. As actions are carried out they too are persisted, so the server knows the state of a request on start-up, and there are also various reconciliation mechanisms with external systems to check the accuracy of our logs.
This all seems to work fairly well, but we have difficulty saying so with any conviction, as we find it very difficult to test our fault-tolerant code. So far we've come up with two strategies, but neither is entirely satisfactory:
Have an external process watch the server code and then try and kill it off at what the external process thinks is an appropriate point in the test
Add code to the application that will cause it to crash at certain known critical points
My problem with the first strategy is that the external process cannot know the exact state of the application, so we cannot be sure we're hitting the most problematic points in the code. My problem with the second strategy, although it gives more control over where the fault takes place, is that I do not like having code to inject faults within my application, even with conditional compilation etc. I fear it would be too easy to overlook a fault-injection point and have it slip into a production environment.
I think there are three ways to deal with this. First, I would suggest a comprehensive set of integration tests for these various pieces of code, using dependency injection or factory objects to produce broken actions during the integrations.
Secondly, running the application with random kill -9's, and disabling of network interfaces may be a good way to test these things.
I would also suggest testing file system failure. How you would do that depends on your OS, on Solaris or FreeBSD I would create a zfs file system in a file, and then rm the file while the application is running.
If you are using database code, then I would suggest testing failure of the database as well.
Another alternative to dependency injection, and probably the solution I would use, is interceptors: you can enable crash-test interceptors in your code, and these would know the state of the application and introduce the failures listed above (or any others you may want to create) at the correct time. It would not require changes to your existing code, just some additional code to wrap it.
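A sketch of that wrapping idea in C# (the IOrderAction interface and all names here are hypothetical; the point is that the crash logic lives entirely in test-only wrapper code):

```csharp
using System;

// Hypothetical action interface used by the server's request pipeline.
interface IOrderAction
{
    void Execute(string requestId);
}

// Test-only interceptor: wraps the real action and crashes the process
// at a chosen point, without any fault-injection code in the
// production classes themselves.
class CrashingAction : IOrderAction
{
    private readonly IOrderAction inner;
    private readonly int crashOnCall;
    private int calls;

    public CrashingAction(IOrderAction inner, int crashOnCall)
    {
        this.inner = inner;
        this.crashOnCall = crashOnCall;
    }

    public void Execute(string requestId)
    {
        // Simulate a hard crash just before the Nth action runs,
        // i.e., after the request was persisted and acknowledged.
        if (++calls == crashOnCall)
            Environment.FailFast("injected crash before action " + requestId);
        inner.Execute(requestId);
    }
}
```

The test harness composes CrashingAction around the real action, runs the server, and then verifies that the recovery path completes the request on restart.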
A possible answer to the first point is to multiply the experiments with your external process so that the probability of hitting problematic parts of the code is increased. You can then analyze the core dump to determine where the code actually crashed.
Another way is to increase observability and/or commandability by stubbing library or kernel calls, i.e., without modifying your application code.
You can find some resources on the Fault Injection page of Wikipedia, in particular in the Software Implemented Fault Injection section.
Your concern about fault injection is not a fundamental one. You merely need a foolproof way to prevent such code from ending up in deployment. One way to do so is to design your fault injector as a debugger, i.e., the faults are injected by a process external to your process. This already provides a level of isolation. Furthermore, most OSes provide some kind of access control which prevents debugging unless it is specifically enabled. In the most primitive form, it is limited to root; other operating systems require a specific "debug privilege". Naturally, nobody will have that in production, and thus your fault injector cannot even run there.
Practically, the fault injector can set breakpoints at specific addresses, i.e., at a function or even a line of code. You can then react to that, e.g., by terminating the process after a certain breakpoint has been hit three times.
I was just about to write the same as Justin :)
The component I would suggest replacing during testing is the logging component (if you have one; if not, I'd strongly suggest implementing one...). It's relatively easy to replace it with code that generates errors, and the logger usually gets enough information to know the current application state.
It also seems feasible to make sure that the testing code doesn't go into production. I would discourage conditional compilation, though, and rather go with some configuration file to select the logging component.
Using "random" kills might help to detect errors, but it is not well suited for systematic testing because of its non-determinism. Therefore I wouldn't use it for automatic tests.

What would a multithreaded UI api look like, and what advantages would it provide?

Or, equivalently, how would you design such an API. Expected/example usage would be illustrative as well.
My curiosity comes directly from the comments (and subsequent editing on my part) on this answer. Similar questions/discussions in the past provide a bit of inspiration for actually asking it.
Executive summary:
I don't feel a multithreaded UI api is possible in a meaningful way, nor particularly desirable. This view seems somewhat contentious and being a (relatively) humble man I'd like to see the error of my ways, if they actually are erroneous.
Multithreaded is defined pretty loosely in this context; treat it however makes sense to you.
Since this is pretty free-form, I'll be accepting whichever answer has the most coherent and well supported answer in my opinion; regardless of whether I agree with it.
OK, perhaps more clarification is necessary.
Pretty much every serious application has more than one thread. At the very least, they'll spin up an additional thread to do some background task in response to a UI event.
I do not consider this a multithreaded UI.
All the UI work is still being done on a single thread. I'd say, at a basic level, a multithreaded UI API would have to do away with (in some way) thread-based ownership of UI objects or the dispatching of events to a single thread.
Remember, this is about the UI API itself, not the applications that make use of it.
I don't see how a multithreaded UI API would differ much from existing ones. The major differences would be:
(If using a non-GC'd language like C++) Object lifetimes are tracked by reference-counted pointer wrappers such as std::tr1::shared_ptr. This ensures you don't race with a thread trying to delete an object.
All methods are reentrant, thread-safe, and guaranteed not to block on event callbacks (therefore, event callbacks shall not be invoked while holding locks)
A total order on locks would need to be specified; for example, the implementation of a method on a control would only be allowed to invoke methods on child controls, except by scheduling an asynchronous callback to run later or on another thread.
With those changes, you can apply this to almost any GUI framework you like. There's not really a need for massive changes; however, the additional locking overhead will slow it down, and the restrictions on lock ordering will make designing custom controls somewhat more complex.
Since this is usually a lot more trouble than it's worth, most GUI frameworks strike a middle ground: UI objects can generally only be manipulated from the UI thread (some systems, such as Win32, allow multiple UI threads with separate UI objects), and to communicate between threads there is a thread-safe method to schedule a callback to be invoked on the UI thread.
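For example, in WinForms that "thread-safe method to schedule a callback" is Control.BeginInvoke; a minimal sketch:

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

class MainForm : Form
{
    private readonly Label status =
        new Label { Dock = DockStyle.Top, Text = "working..." };

    public MainForm()
    {
        Controls.Add(status);
        // Start the worker once the window handle exists.
        Load += (s, e) => new Thread(Work) { IsBackground = true }.Start();
    }

    private void Work()
    {
        Thread.Sleep(1000); // simulate background work off the UI thread
        // UI objects may only be touched on the thread that owns them,
        // so schedule a callback instead of setting status.Text directly.
        BeginInvoke(new Action(() => status.Text = "done"));
    }

    [STAThread]
    static void Main() => Application.Run(new MainForm());
}
```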
Most GUIs are multithreaded, at least in the sense that the GUI runs in a separate thread from the rest of the application, often with one more thread for an event handler. This has the obvious benefit that complicated backend work and synchronous IO won't bring the GUI to a screeching halt, and vice versa.
Adding more threads tends to be a proposition of diminishing returns, unless you're handling things like multi-touch or multi-user. However, most multi-touch input seems to be handled threaded at the driver level, so there's usually no need for it at the GUI level. For the most part you only need 1:1 thread to user ratio plus some constant number depending on what exactly you're doing.
For example, pre-caching threads are popular: the thread can burn any extra CPU cycles doing predictive caching, to make things run faster in general. Animation threads: if you have intensive animations but want to maintain responsiveness, you can put the animation in a lower-priority thread than the rest of the UI. Event-handler threads are also popular, as mentioned above, but are usually provided transparently to the users of the framework.
So there are definitely uses for threads, but there's no point in spawning large numbers of threads for a GUI. However, if you were writing your own GUI framework you would definitely have to implement it using a threaded model.
There is nothing wrong with, nor particularly special about, multithreaded UI apps. All you need is some sort of synchronization between threads and a way to update the UI across thread boundaries (BeginInvoke in C#, SendMessage in a plain Win32 app, etc.).
As for uses, pretty much everything you see is multithreaded, from internet browsers (they have background threads downloading files while a main thread takes care of displaying the parts already downloaded, again making use of heavy synchronization) to Office apps (the save function in Microsoft Office comes to mind) to games (good luck finding a single-threaded big-name game). In fact, the C# WinForms UI spawns a new thread for the UI out of the box!
What specifically do you think is not desirable or hard to implement about it?
I don't see any benefit, really. Let's say the average app has 3 primary goals:
Rendering
User input / event handlers
Number crunching / Network / Disk / Etc
Dividing these into one thread each (several for #3) would be pretty logical, and I would call #1 and #2 the UI.
You could say that #1 is already multithreaded and divided across tons of shader processors on the GPU. I don't know if adding more threads on the CPU would really help (at least if you are using standard shaders; IIRC some software ray tracers and other CGI renderers use several threads, but I would put such applications under #3).
The user input methods (#2) should only ever be really, really short, and should invoke stuff from #3 if more time is needed, so adding more threads here wouldn't be of any use.