Implementations of SASL: Cyrus SASL vs GNU SASL vs Dovecot SASL?

I am trying to understand the main differences between these implementations of SASL. Actually, I have to admit that I am very far from understanding the internal structure, so I would be very glad for any further references besides the respective specifications. I was skimming through the internal documentation, but as I am not an expert, it is difficult for me to understand what is happening.

SASL decouples authentication mechanisms from application protocols, which means the application ultimately has to link against a SASL implementation. If an application supports multiple SASL implementations, the distribution ultimately chooses which one is used.
The choice really comes down to flexibility/robustness versus performance/simplicity, or at least that is how it tends to work out in practice.
That said, I would personally prefer Dovecot SASL whenever possible, for reasons of performance and simplicity, assuming it provides all the features you require and you find its configuration intuitive. In fact, the Dovecot team should really consider making Dovecot SASL its own project and promoting it the way Cyrus does.
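To make the coupling concrete: Postfix, for example, can delegate SMTP authentication to Dovecot's SASL service with a few configuration lines. A minimal sketch, assuming a stock Dovecot 2.x and Postfix setup (the socket path below is the commonly documented default, not something mandated by SASL itself):

    # dovecot.conf: expose an authentication socket for Postfix
    service auth {
      unix_listener /var/spool/postfix/private/auth {
        mode = 0660
        user = postfix
        group = postfix
      }
    }

    # main.cf (Postfix): use Dovecot SASL instead of Cyrus SASL
    smtpd_sasl_type = dovecot
    smtpd_sasl_path = private/auth
    smtpd_sasl_auth_enable = yes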

Related

Asynchronous Messaging Protocol compatibility outside Python (and twisted)

The Asynchronous Messaging Protocol (AMP) is a simple protocol in python-twisted. I have a fairly complete app (Python, Twisted, Kivy) using it. The client-server architecture implements a view-controller sort of relationship, with almost all business logic server-side and the UI code simply reflecting changes in model state (sent by the server) and sending the appropriate AMP messages.
Here is a list of implementations of the AMP protocol in other languages, but some seem unfinished, and most don't seem to be used for anything serious.
The use-case I'm looking at is a fully Python app which currently works on Windows, Linux, and Android (possibly iOS if I ever get round to building that). And possibly, in the future, replacing the View/UI bit with a "native" language (Java/Swift on Android, for instance) while keeping the business bits in Python and Twisted.
So I have two main questions:
Is it accurate to say that AMP is only really used within python-twisted and those programs that use it?
Are there other, more generally useful network protocols which are both implemented and fairly easy to use in twisted as well as being non-specific (e.g. jabber is really only for chat)? Preferably which don't require a server like WAMP/autobahn do (if I understand correctly) so it can be self-contained within any device which can run python.
This isn't entirely accurate; Twisted just happens to use it the most. Other languages make use of AMP, it's just that AMP never became very popular, given the popularity of more robust messaging options such as AMQP (implemented by brokers like RabbitMQ), ZeroMQ, or WebSphere MQ.
AMP is about as simple as it can get. Also, it's unlikely you will find a solution without a server.
AMP is not locked to Twisted or Python. There are implementations in other languages, but like you said, some are not used in a "serious" manner and often go unmaintained. Don't let that scare you off: because the protocol is so simple, there often isn't much left to do after it has been implemented. You will be happy to know that the actual protocol hasn't changed much and isn't very difficult to implement in any language if you follow the design. If you want something more generic and cross-platform with assured compatibility, then consider plain HTTP requests.
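For reference, the canonical Twisted AMP server fits in a screenful; this sketch (the Sum command and the port number are illustrative) defines one remotely callable command and serves it:

    from twisted.internet import reactor
    from twisted.internet.protocol import Factory
    from twisted.protocols import amp

    class Sum(amp.Command):
        # Wire-level argument names; AMP encodes these as simple
        # length-prefixed key/value pairs, which is why the protocol
        # is easy to reimplement in other languages.
        arguments = [(b'a', amp.Integer()), (b'b', amp.Integer())]
        response = [(b'total', amp.Integer())]

    class Math(amp.AMP):
        @Sum.responder
        def sum(self, a, b):
            return {'total': a + b}

    factory = Factory.forProtocol(Math)
    reactor.listenTCP(1234, factory)  # arbitrary port for the example
    reactor.run()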

What does "Opinionated API" mean?

I came across the term "Opinionated API" while reading about the ssl.create_default_context() function introduced in Python 3.4. What does it mean? What's the style of such an API? Why do we call it an "opinionated" one?
Thanks a lot.
It means that the creator of the API makes some choices for you that are, in her opinion, the best.
For example, a web application framework could choose to work best with (or even bundle or work exclusively with) a selection of lower-level libraries (for stuff like logging, database access, session management) instead of letting you choose (and then have to configure) your own.
In the case of ssl.create_default_context(), some security experts have thought about reasonably secure defaults for configuring SSL connections. In particular, it limits the available algorithms to those that are still considered secure, at the expense of complete compatibility with legacy systems; a trade-off that is beneficial in their (and my) opinion.
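For instance, getting a verified TLS connection takes two calls with the opinionated defaults (a minimal sketch; example.org stands in for any TLS server):

    import socket
    import ssl

    # Opinionated defaults: certificate verification, hostname checking,
    # and a restricted set of protocol versions and ciphers are all on.
    context = ssl.create_default_context()

    with socket.create_connection(("example.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="example.org") as tls:
            print(tls.version())  # e.g. 'TLSv1.3'

Compare that with the number of decisions you would have to make (and could get wrong) when assembling an SSLContext by hand.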
Essentially they are saying "we have a lot of experience in this domain, and we really think you should do things in the following way".
I suppose this is a response to "enterprise" APIs that claim to work with every implementation of as many standard interfaces as possible (at the expense of complexity in configuration and combination, requiring costly consultants to set everything up).
Or a natural extension of "Convention over Configuration".
Things should work very well out-of-the-box, so that you only have to twiddle around with expert settings in special cases (and by then you should know what you are doing), as opposed to even a beginner having to make informed decisions about every aspect of the application (which can end in disaster).

Possible to share information between an add-on to an existing program and a standalone application? [duplicate]

I'm looking at building a Cocoa application on the Mac with a back-end daemon process (really just a mostly-headless Cocoa app, probably), along with 0 or more "client" applications running locally (although if possible I'd like to support remote clients as well; the remote clients would only ever be other Macs or iPhone OS devices).
The data being communicated will be fairly trivial, mostly just text and commands (which I guess can be represented as text anyway), and maybe the occasional small file (an image possibly).
I've looked at a few methods for doing this but I'm not sure which is "best" for the task at hand. Things I've considered:
Reading and writing to a file (…yes), very basic but not very scalable.
Pure sockets (I have no experience with sockets, but I think I can use them to send data locally and over a network), though it seems cumbersome to do everything in Cocoa
Distributed Objects: seems rather inelegant for a task like this
NSConnection: I can't really figure out what this class even does, but I've read of it in some IPC search results
I'm sure there are things I'm missing, but I was surprised to find a lack of resources on this topic.
I am currently looking into the same questions. For me the possibility of adding Windows clients later makes the situation more complicated; in your case the answer seems to be simpler.
About the options you have considered:
Control files: While it is possible to communicate via control files, you have to keep in mind that the files need to be shared via a network file system among the machines involved. So the network file system serves as an abstraction of the actual network infrastructure, but does not offer the full power and flexibility the network normally has. Implementation: Practically, you will need at least two files for each client/server pair: a file the server uses to send a request to the client(s) and a file for the responses. If each process can communicate both ways, you need to duplicate this. Furthermore, both the client(s) and the server(s) work on a "pull" basis, i.e., they need to revisit the control files frequently to see if something new has been delivered.
The advantage of this solution is that it minimizes the need to learn new techniques. The big disadvantage is that it places huge demands on the program logic; a lot of things need to be taken care of by you (Will the files be written in one piece, or can it happen that a party picks up inconsistent files? How frequently should checks be implemented? Do I need to worry about the file system, e.g. caching? Can I add encryption later without toying around with things outside of my program code? ...)
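To give one concrete answer to the "written in one piece" question: the usual trick is to write to a temporary file and rename it into place, since a rename is atomic on POSIX file systems. A sketch in Python (paths and format are hypothetical), though note that this guarantee gets murkier on network file systems:

    import json
    import os
    import tempfile

    def write_control_file(path, obj):
        # Write to a temp file in the same directory, then rename it
        # into place, so a reader never sees a half-written file.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
        with os.fdopen(fd, "w") as f:
            json.dump(obj, f)
        os.replace(tmp, path)  # atomic on POSIX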
If portability were an issue (which, as far as I understood from your question, is not the case) then this solution would be easy to port to different systems and even different programming languages. However, I don't know of any network file system for iPhone OS, though I am not familiar with that platform.
Sockets: The programming interface is certainly different; depending on your experience with socket programming it may mean that you have more work learning it first and debugging it later. Implementation: Practically, you will need a similar logic as before, i.e., client(s) and server(s) communicating via the network. A definite plus of this approach is that the processes can work on a "push" basis, i.e., they can listen on a socket until a message arrives which is superior to checking control files regularly. Network corruption and inconsistencies are also not your concern. Furthermore, you (may) have more control over the way the connections are established rather than relying on things outside of your program's control (again, this is important if you decide to add encryption later on).
The advantage is that a lot of things are taken off your shoulders that would bother an implementation in option 1. The disadvantage is that you still need to change your program logic substantially in order to make sure that you send and receive the correct information (file types etc.).
In my experience portability (i.e., ease of transitioning to different systems and even programming languages) is very good since anything even remotely compatible to POSIX works.
[EDIT: In particular, as soon as you communicate binary numbers, endianness becomes an issue and you have to take care of this problem manually; this is a common (!) special case of the "correct information" issue I mentioned above. It will bite you, e.g., when you have a PowerPC talking to an Intel Mac. This special case disappears with solution 3.+4., together with all of the other "correct information" issues.]
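To make the framing and endianness points concrete in a language-neutral way, here is a sketch in Python (the 4-byte big-endian length prefix is a common convention, not something sockets impose):

    import socket
    import struct

    def send_msg(sock: socket.socket, data: bytes) -> None:
        # Length-prefix framing: a 4-byte length in network (big-endian)
        # byte order, then the payload. Both ends must agree on this.
        sock.sendall(struct.pack("!I", len(data)) + data)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed mid-message")
            buf += chunk
        return buf

    def recv_msg(sock: socket.socket) -> bytes:
        (length,) = struct.unpack("!I", recv_exact(sock, 4))
        return recv_exact(sock, length)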
3.+4. Distributed objects: The NSProxy class cluster is used to implement distributed objects. NSConnection is responsible for setting up remote connections as a prerequisite for sending information around, so once you understand how to use this system, you also understand distributed objects. ;^)
The idea is that your high-level program logic does not need to change (i.e., your objects communicate via messages and receive results, and the messages together with the return types are identical to what you are used to from your local implementation) and you do not have to bother with the particulars of the network infrastructure. Well, at least in theory. Implementation: I am also working on this right now, so my understanding is still limited. As far as I understand, you do need to set up a certain structure, i.e., you still have to decide which processes (local and/or remote) can receive which messages; this is what NSConnection does. At this point, you implicitly define a client/server architecture, but you do not need to worry about the problems mentioned in 2.
There is an introduction with two explicit examples on the GNUstep project server; it illustrates how the technology works and is a good starting point for experimenting:
http://www.gnustep.org/resources/documentation/Developer/Base/ProgrammingManual/manual_7.html
Unfortunately, the disadvantages are a total loss of compatibility with other systems (although you will still do fine with the Macs-and-iPhone-only setup you mentioned) and a loss of portability to other languages. GNUstep with Objective-C is at best code-compatible, but there is no way to communicate between GNUstep and Cocoa; see my edit to question number 2 here: CORBA on Mac OS X (Cocoa)
[EDIT: I just came across another piece of information that I was unaware of. While I have checked that NSProxy is available on the iPhone, I did not check whether the other parts of the distributed objects mechanism are. According to this link: http://www.cocoabuilder.com/archive/cocoa/224358-big-picture-relationships-between-nsconnection-nsinputstream-nsoutputstream-etc.html (search the page for the phrase "iPhone OS") they are not. This would exclude this solution if you demand to use iPhone/iPad at this moment.]
So to conclude, there is a trade-off between the effort of learning (and implementing and debugging) new technologies on the one hand and hand-coding lower-level communication logic on the other. While the distributed-objects approach takes most of the load off your shoulders and incurs the smallest changes in program logic, it is the hardest to learn and also (unfortunately) the least portable.
Disclaimer: Distributed Objects are not available on iPhone.
Why do you find distributed objects inelegant? They sound like a good match here:
transparent marshalling of fundamental types and Objective-C classes
it doesn't really matter whether clients are local or remote
not much additional work for Cocoa-based applications
The documentation might make it sound like more work than it actually is, but all you basically have to do is use protocols cleanly and export, or respectively connect to, the server's root object.
The rest should happen automagically behind the scenes for you in the given scenario.
We are using ThoMoNetworking and it works fine and is fast to set up. Basically, it allows you to send NSCoding-compliant objects over the local network, but of course it also works if client and server are on the same machine. As a wrapper around the Foundation classes, it takes care of pairing, reconnections, etc.

Should I default the environment for someone using my library?

I have been having this debate with a friend where I have a library (it's Python, but I didn't include that as a tag, as the question is applicable to any language) that has a few dependencies. The debate is whether to provide a default environment in the initialization or force the user of the code to explicitly set one.
My opinion is to force the user, as it's explicit, will avoid confusion and make it clear what they are pointing to.
My friend thinks it is safer and more convenient to default to an environment and let the user override it if he wants to.
Thoughts? Are there any good references or examples/patterns in popular libraries that support either of our arguments? Also, are there any popular blogs or articles that discuss this API design point?
I don't have any references, but here are my thoughts as a potential user of said library.
I think it's good to have a default configuration available to allow developers to quickly evaluate the library. I don't want to have to go through a bunch of configuration just to see if the library will do what I need. Once I'm happy that the library will do what I need it to do, then I'm happy to configure it the way I want.
A good example is Microsoft's ASP.NET MVC framework. When you create a new MVC project, it hooks in a default authentication and membership provider, which allows the developer to very quickly get a functioning application up and running. It is also easy to configure different providers if the default ones don't meet the requirements of the application in question.
As a slightly different example, Atlassian Confluence is wiki software which supports many different back-end databases. Atlassian could have chosen to ship no default DB configuration, but instead Confluence ships with a default, simple, file-based database to allow users to evaluate the software. For production installations you can then hook up Oracle, SQL Server, MySQL or whatever else you like.
There may be instances where a default configuration for a library doesn't really make sense, but I think that would be a special case, rather than a general rule.
It depends. If you can provide sensible defaults, you might want to do that: it will make life easier for the occasional user of the library, as they can set only the relevant settings rather than the whole environment (possibly including settings whose implications they don't fully understand yet). You are correct that in some situations this leads to frustration and confusion, because the defaulted settings might cause behavior the (inexperienced) user does not expect. You have to weigh the convenience of defaults against the price of not-understood defaults for each setting that could be defaulted, and that choice might affect the choice for other, related settings as well.
On the other hand, if there is no sensible default (e.g. DB credentials, remote address), you should require the user to provide those settings.
The key in both cases is to provide enough information in the documentation of the library and in the error messages (either for missing settings or conflicting ones) that the user can figure out what those settings actually mean/control without having to read through the source code of the library. This part is hard because (1) it is usually tedious from the point of view of the library developer (so it is often skimped on) and (2) the documentation has to be written from the mindset of a newcomer to the library, which is often different from the library developer's mindset: the latter knows the implicit connections/implications, while the former has to be told about them in an understandable way.
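A minimal sketch of that split (all names here are hypothetical, purely for illustration): default what is safe to default, require what is not, and make the error message explain itself:

    class Client:
        DEFAULT_TIMEOUT = 30  # seconds; harmless to default

        def __init__(self, db_credentials=None, timeout=None):
            # There is no sensible default for credentials, so they
            # are required, and the error message says why.
            if db_credentials is None:
                raise ValueError(
                    "db_credentials is required: there is no sensible default"
                )
            self.db_credentials = db_credentials
            self.timeout = self.DEFAULT_TIMEOUT if timeout is None else timeout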
Although not exactly identical in terms of problem domain, this strikes me as the Convention over Configuration argument.
There has been quite a lot of momentum behind CoC in recent years, and in my mind, it makes a whole lot of sense. As long as flexibility is not lost, you have everything to gain. Lower-friction development is what we are all after, and if I've got to configure every aspect of your API in order to get it working, I'm less inclined to use it over another API of equal functionality.
I happen to like Hanselman's podcasts, so if you want a little light listening, check out this podcast.
I think your question needs some clarification. For starters, I don't think a library should have any runtime configuration. In terms of dependencies, library dependencies should be handled in a manner appropriate to the environment they are being written for. In Python, those dependencies should be declared in the setup.py file (under install_requires), and ultimately that file should meet the requirements of whatever service you plan on making the library available on (e.g. PyPI for Python).
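In Python terms, that means something like the following in setup.py (names and versions are placeholders):

    # setup.py (minimal sketch)
    from setuptools import setup

    setup(
        name="mylib",             # placeholder package name
        version="0.1.0",
        packages=["mylib"],
        install_requires=[
            "requests>=2.0",      # runtime dependencies are declared
        ],                        # here, not configured at runtime
    )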
For applications, it is completely okay to require runtime configuration, but you should try to have sensible defaults. If your application depends on libraries, that dependency should be handled in the same way a library dependency would be handled, even though that information may be redundant in the context of an installer (if needed). For the most part, first-run scripts and their ilk should be a part of the installer/RPM.
For Web Frameworks, it is typical that your app would carry configuration with it, and likely that it would need to be installed in a different way than traditional applications. Here, about the only thing you can do is try to follow the conventions of whatever framework you are writing in.

Use erlang as/instead of expect script

I would like to reset passwords on a bunch of boxes over SSH. Any pointers on how Erlang could be used for this purpose?
Erlang is indeed a well-suited choice for this problem.
You should have a look at the ssh module. Start a connection with

    {ok, ConnectionRef} = ssh:connect(Host, Port, Options).

Then use the ssh_connection module (with that ConnectionRef) to execute the right passwd command (hint: start a shell first) and log out.
Edit: The above is mostly wrong; this blog post might get you started faster.
You can even write a simple server that does all of these things on several hosts in parallel, resulting in the most multicore-capable multi-host ssh password changer on this very planet. Weekend project idea: make a web app out of it.
Simply don't use Erlang for such a thing.
Reading from here:
What sort of applications is Erlang particularly suitable for?

Distributed, reliable, soft real-time concurrent systems:
- Telecommunication systems, e.g. controlling a switch or converting protocols.
- Servers for Internet applications, e.g. a mail transfer agent, an IMAP-4 server, an HTTP server or a WAP stack.
- Telecommunication applications, e.g. handling mobility in a mobile network or providing unified messaging.
- Database applications which require soft real-time behaviour.

Erlang is good at solving these sorts of problems because this is the problem domain it was originally designed for. Stating the above in terms of features:
- Erlang provides a simple and powerful model for error containment and fault tolerance (supervised processes).
- Concurrency and message passing are fundamental to the language. Applications written in Erlang are often composed of hundreds or thousands of lightweight processes. Context switching between Erlang processes is typically one or two orders of magnitude cheaper than switching between threads in a C program.
- Writing applications which are made of parts which execute on different machines (i.e. distributed applications) is easy. Erlang's distribution mechanisms are transparent: programs need not be aware that they are distributed.
- The OTP libraries provide support for many common problems in networking and telecommunications systems.
- The Erlang runtime environment (a virtual machine, much like the Java virtual machine) means that code compiled on one architecture runs anywhere. The runtime system also allows code in a running system to be updated without interrupting the program.

What sort of problems is Erlang not particularly suitable for?

People use Erlang for all sorts of surprising things, for instance to communicate with X11 at the protocol level, but there are some common situations where Erlang is not likely to be the language of choice.

The most common class of 'less suitable' problems is characterised by performance being a prime requirement and constant factors having a large effect on performance. Typical examples are image processing, signal processing, sorting large volumes of data and low-level protocol termination.

Another class of problem is characterised by a wide interface to existing C code. A typical example is implementing operating system device drivers.

Most (all?) large systems developed using Erlang make heavy use of C for low-level code, leaving Erlang to manage the parts which tend to be complex in other languages, like controlling systems spread across several machines and implementing complex protocol logic.
As suggested by Andrzej, you should look in other directions. Maybe a different question on Stack Overflow asking "which language would be good for..." could be the first step...
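For instance, if a different language is an option, the whole task is a few lines in Python with the third-party paramiko library. A sketch under the assumptions that you can log in as root and that the boxes have chpasswd (host names, user names and passwords below are placeholders):

    import paramiko

    hosts = ["box1.example.com", "box2.example.com"]  # placeholder inventory
    new_password = "s3cret"  # in practice, read this from a secure source

    for host in hosts:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username="root", password="old-root-password")
        # chpasswd reads "user:password" lines on stdin and must run as root.
        stdin, stdout, stderr = client.exec_command("chpasswd")
        stdin.write(b"alice:" + new_password.encode() + b"\n")
        stdin.channel.shutdown_write()
        print(host, "exit status:", stdout.channel.recv_exit_status())
        client.close()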
UPDATE
If you still intend to use Erlang to reset your passwords, you might want to have a look at the Erlang SSH channel behaviour as well.
Reading from the doc:
SSH services are implemented as channels that are multiplexed over an SSH connection and communicate via the SSH connection protocol. This module provides a callback API that takes care of generic channel aspects, such as flow control and close messages, and lets the callback functions take care of the service-specific parts.