Asynchronous Messaging Protocol compatibility outside Python (and Twisted)

The Asynchronous Messaging Protocol (AMP) is a simple protocol included in python-twisted. I have a fairly complete app (Python, Twisted, Kivy) using it. The client-server architecture implements a view-controller sort of relationship, with almost all business logic server-side and the UI code simply reflecting changes in the state of models (sent by the server) and sending the appropriate AMP messages.
Here is a list of implementations of the AMP protocol in other languages, but some seem unfinished, and most don't seem to be used for anything serious.
The use case I'm looking at is a fully Python app which currently works on Windows, Linux, and Android (possibly iOS if I ever get round to building that). And possibly, in the future, replacing the view/UI bit with a 'native' language (Java/Swift on Android, for instance) while keeping the business bits in Python and Twisted.
So I have two main questions:
Is it accurate to say that AMP is only really used within python-twisted and those programs that use it?
Are there other, more generally useful network protocols which are both implemented and fairly easy to use in Twisted, as well as being non-specific (e.g. Jabber is really only for chat)? Preferably ones that don't require a server like WAMP/Autobahn do (if I understand correctly), so the whole thing can be self-contained on any device which can run Python.

This isn't entirely accurate. Twisted just happens to use it the most. Other languages make use of AMP; it's just that AMP hasn't become very popular given the popularity of other, more robust options like AMQP brokers (e.g. RabbitMQ), IBM WebSphere MQ, or ZeroMQ.
AMP is about as simple as it can get. Also, it's unlikely you will find a solution without a server.
AMP is not locked to Twisted or Python. There are implementations in other languages, but as you said, some are not used in a "serious" manner and often go unmaintained. Don't let that scare you off: because the protocol is so simple, there often isn't much left to do after it has been implemented. You will be happy to know that the actual protocol hasn't changed much and isn't very difficult to implement in any language if you follow the design. If you want something more generic, cross-platform, and with guaranteed compatibility, then consider HTTP requests.
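To give a sense of how small the protocol surface is, here is a minimal Twisted AMP sketch; the Sum command and its handler are made-up examples for illustration, not part of the library:

```python
from twisted.protocols import amp

# A hypothetical command: two integer arguments, one integer result.
class Sum(amp.Command):
    arguments = [(b'a', amp.Integer()),
                 (b'b', amp.Integer())]
    response = [(b'total', amp.Integer())]

class Math(amp.AMP):
    # The responder decorator wires the Sum command to this method.
    @Sum.responder
    def sum(self, a, b):
        return {'total': a + b}
```

On the client side, calling callRemote(Sum, a=13, b=29) on a connected AMP instance returns a Deferred that fires with {'total': 42}; that call/response pair is essentially the whole programming model.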

Related

What does "Opinionated API" mean?

I came across the term "Opinionated API" when reading about the ssl.create_default_context() function introduced in Python 3.4. What does it mean? What is the style of such an API? Why do we call it "opinionated"?
It means that the creator of the API makes some choices for you that are, in her opinion, the best.
For example, a web application framework could choose to work best with (or even bundle or work exclusively with) a selection of lower-level libraries (for stuff like logging, database access, session management) instead of letting you choose (and then have to configure) your own.
In the case of ssl.create_default_context some security experts have thought about reasonably secure defaults to configure SSL connections. In particular, it limits the available algorithms to those that are still considered secure, at the expense of complete compatibility with legacy systems, a trade-off that is beneficial in their (and my) opinion.
Essentially they are saying "we have a lot of experience in this domain, and we really think you should do things in the following way".
I suppose this is a response to "enterprise" APIs that claim to work with every implementation of as many standard interfaces as possible (at the expense of complexity in configuration and combination, requiring costly consultants to set everything up).
Or a natural extension of "Convention over Configuration".
Things should work very well out-of-the-box, so that you only have to twiddle around with expert settings in special cases (and by then you should know what you are doing), as opposed to even a beginner having to make informed decisions about every aspect of the application (which can end in disaster).
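To make the ssl.create_default_context() case concrete, compare the one-line opinionated call with a hand-rolled context. This is only a sketch, and the manual version is deliberately incomplete:

```python
import ssl

# Opinionated: one call applies the experts' choices for you
# (protocol versions, cipher restrictions, certificate verification).
ctx = ssl.create_default_context()

# Unopinionated: every security decision is yours to make (and to get wrong).
# PROTOCOL_TLS_CLIENT is the modern constant; older Pythons spell it differently.
manual = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
manual.check_hostname = True
manual.verify_mode = ssl.CERT_REQUIRED
manual.load_default_certs()
```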

Possible to share information between an add-on to an existing program and a standalone application? [duplicate]

I'm looking at building a Cocoa application on the Mac with a back-end daemon process (really just a mostly-headless Cocoa app, probably), along with 0 or more "client" applications running locally (although if possible I'd like to support remote clients as well; the remote clients would only ever be other Macs or iPhone OS devices).
The data being communicated will be fairly trivial, mostly just text and commands (which I guess can be represented as text anyway), and maybe the occasional small file (an image possibly).
I've looked at a few methods for doing this but I'm not sure which is "best" for the task at hand. Things I've considered:
Reading and writing to a file (…yes), very basic but not very scalable.
Pure sockets (I have no experience with sockets, but I seem to think I can use them to send data locally and over a network; though it seems cumbersome if doing everything in Cocoa)
Distributed Objects: seems rather inelegant for a task like this
NSConnection: I can't really figure out what this class even does, but I've read of it in some IPC search results
I'm sure there are things I'm missing, but I was surprised to find a lack of resources on this topic.
I am currently looking into the same questions. For me the possibility of adding Windows clients later makes the situation more complicated; in your case the answer seems to be simpler.
About the options you have considered:
1. Control files: While it is possible to communicate via control files, you have to keep in mind that the files need to be communicated via a network file system among the machines involved. So the network file system serves as an abstraction of the actual network infrastructure, but does not offer the full power and flexibility the network normally has. Implementation: Practically, you will need at least two files for each client/server pair: a file the server uses to send a request to the client(s) and a file for the responses. If each process can communicate both ways, you need to duplicate this. Furthermore, both the client(s) and the server(s) work on a "pull" basis, i.e., they need to revisit the control files frequently and see if something new has been delivered.
The advantage of this solution is that it minimizes the need for learning new techniques. The big disadvantage is that it has huge demands on the program logic; a lot of things need to be taken care of by you (Will the files be written in one piece or can it happen that any party picks up inconsistent files? How frequently should checks be implemented? Do I need to worry about the file system, like caching, etc? Can I add encryption later without toying around with things outside of my program code? ...)
If portability were an issue (which, as far as I understood from your question, is not the case), then this solution would be easy to port to different systems and even different programming languages. However, I don't know of any network file system for iPhone OS, though I am not very familiar with that platform.
2. Sockets: The programming interface is certainly different; depending on your experience with socket programming, it may mean that you have more work learning it first and debugging it later. Implementation: Practically, you will need similar logic as before, i.e., client(s) and server(s) communicating via the network. A definite plus of this approach is that the processes can work on a "push" basis, i.e., they can listen on a socket until a message arrives, which is superior to checking control files regularly. Network corruption and inconsistencies are also not your concern. Furthermore, you (may) have more control over the way the connections are established rather than relying on things outside of your program's control (again, this is important if you decide to add encryption later on).
The advantage is that a lot of things are taken off your shoulders that would bother an implementation in 1. The disadvantage is that you still need to change your program logic substantially in order to make sure that you send and receive the correct information (file types etc.).
In my experience, portability (i.e., ease of transitioning to different systems and even programming languages) is very good, since anything even remotely compatible with POSIX works.
[EDIT: In particular, as soon as you communicate binary numbers, endianness becomes an issue and you have to take care of this problem manually; this is a common (!) special case of the "correct information" issue I mentioned above. It will bite you e.g. when you have a PowerPC talking to an Intel Mac. This special case disappears with solution 3.+4., together with all of the other "correct information" issues.]
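The endianness pitfall is not specific to Cocoa. As a sketch in Python, for instance, the struct module forces you to state the byte order explicitly:

```python
import struct

# '!' means network byte order (big-endian): safe to send between machines.
payload = struct.pack('!I', 0x12345678)
(value,) = struct.unpack('!I', payload)   # 0x12345678 on any host

# No prefix means native byte order: this is what bites you when a
# little-endian machine talks to a big-endian one (e.g. Intel vs. PowerPC).
native = struct.pack('I', 0x12345678)
```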
3.+4. Distributed objects: The NSProxy class cluster is used to implement distributed objects. NSConnection is responsible for setting up remote connections as a prerequisite for sending information around, so once you understand how to use this system, you also understand distributed objects. ;^)
The idea is that your high-level program logic does not need to be changed (i.e., your objects communicate via messages and receive results, and the messages together with the return types are identical to what you are used to from your local implementation) without having to bother about the particulars of the network infrastructure. Well, at least in theory. Implementation: I am also working on this right now, so my understanding is still limited. As far as I understand, you do need to set up a certain structure, i.e., you still have to decide which processes (local and/or remote) can receive which messages; this is what NSConnection does. At this point, you implicitly define a client/server architecture, but you do not need to worry about the problems mentioned in 2.
There is an introduction with two explicit examples on the GNUstep project server; it illustrates how the technology works and is a good starting point for experimenting:
http://www.gnustep.org/resources/documentation/Developer/Base/ProgrammingManual/manual_7.html
Unfortunately, the disadvantages are a total loss of compatibility with other systems (although you will still do fine with the Macs-and-iPhone-only setup you mentioned) and a loss of portability to other languages. GNUstep with Objective-C is at best code-compatible, but there is no way to communicate between GNUstep and Cocoa; see my edit to question number 2 here: CORBA on Mac OS X (Cocoa)
[EDIT: I just came across another piece of information that I was unaware of. While I have checked that NSProxy is available on the iPhone, I did not check whether the other parts of the distributed objects mechanism are. According to this link: http://www.cocoabuilder.com/archive/cocoa/224358-big-picture-relationships-between-nsconnection-nsinputstream-nsoutputstream-etc.html (search the page for the phrase "iPhone OS") they are not. This would rule out this solution if you need to support iPhone/iPad at this point.]
So to conclude, there is a trade-off between the effort of learning (and implementing and debugging) new technologies on the one hand and hand-coding lower-level communication logic on the other. While the distributed objects approach takes most of the load off your shoulders and incurs the smallest changes in program logic, it is the hardest to learn and also (unfortunately) the least portable.
Disclaimer: Distributed Objects are not available on iPhone.
Why do you find distributed objects inelegant? They sound like a good match here:
transparent marshalling of fundamental types and Objective-C classes
it doesn't really matter whether clients are local or remote
not much additional work for Cocoa-based applications
The documentation might make it sound like more work than it actually is, but all you basically have to do is use protocols cleanly and export, or respectively connect to, the server's root object.
The rest should happen automagically behind the scenes for you in the given scenario.
We are using ThoMoNetworking and it works fine and is fast to set up. Basically, it allows you to send NSCoding-compliant objects within the local network, but of course it also works if client and server are on the same machine. As a wrapper around the Foundation classes, it takes care of pairing, reconnections, etc.

Preferred python networking framework/library for desktop app

I want to write P2P file-sharing software in Python. It will mainly be used on Windows, but it should also work on Linux. I've tried some frameworks/libraries such as Twisted, Gevent, and Tornado (though Tornado may not be a good fit for a Windows desktop client).
But I don't know which one to choose.
Twisted is a little big, I think...
I think Gevent is more useful on *nix platforms.
Tornado is a web server, so it may not be suitable for a desktop app.
Twisted is the most suited of these to the development of network applications. It contains the most support code for implementing protocols. Twisted also includes the best GUI library integration out of these. It works with Gtk (on Windows, too) and Qt3 and Qt4. It may also work with wxWidgets (though this is less well supported than Gtk or Qt3/4). It can integrate with the Windows GUI event loop as well.
Of course, it would be ridiculous to suggest that Twisted is the best suited library for your needs, given the extremely minimal (almost non-existent) description of your needs. I think it's likely that Twisted is at least as well suited, if not better suited, to the needs of an arbitrary network application than the other options you've listed (and, indeed, any of the other options available in Python). However, whether it is best suited to your particular case, I can't say.
I think the default underlying event loop for all of these on Windows will be based on select() (although it appears that at least Twisted has platform-specific support for IOCP).
Someone with a better understanding of the differences should probably comment, but from the developer's perspective the choice will largely come down to preferred syntax. Twisted implements everything through a reactor pattern, while gevent uses coroutines. I'd take a look at some simple examples of each and see which is better suited to your sensibility.
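To get a feel for the reactor style mentioned above, here is the canonical minimal Twisted echo server (the port number is arbitrary):

```python
from twisted.internet import protocol, reactor

class Echo(protocol.Protocol):
    # Called by the reactor whenever bytes arrive on this connection.
    def dataReceived(self, data):
        self.transport.write(data)

factory = protocol.ServerFactory()
factory.protocol = Echo
reactor.listenTCP(8000, factory)  # 8000 chosen only for illustration
reactor.run()
```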

If I write a framework that gets information from the Internet, should I make a delegate or use blocks?

Say I'm writing a publicly available framework for the Vimeo API. This framework needs to get information from the Internet. Because this can take some time, I need to use threading to prevent the UI from hanging. Foundation uses delegates for this, like NSURLConnectionDelegate. However, GameKit uses blocks as callback functions.
What is the recommended way of doing this? I know blocks aren't supported in standard GCC versions, but they require much less code from whoever uses my framework.
Delegates, on the other hand, are real methods and when protocols are used, I'm sure the methods are implemented.
I really like blocks but I would be tempted to use a delegate protocol in this case. Network connections can fail in a large number of ways and their delegates tend to keep a fair amount of stateful information about them. I find that that maps well to a delegate protocol with a number of optional methods.
If you're providing a very simplified API for accessing network data, then a success/failure pair of blocks might be sufficient. Personally, I find that I have to deal with a lot of different cases which use many delegate methods on a stateful delegate object. For example: should I retry failed connections immediately or later, does the relative priority of failed connections change, can I make use of a partial response, should I switch a connection to wifi when it becomes available, do I offer a user a chance to authenticate if prompted, do I display incremental progress in a connection? You could handle all of those with blocks, but I find that I would rather have a delegate class managing the connection.
Without knowing more about what data you intend for your interface to fetch, I can't be more specific, but I would be tempted to allow users of the API to manage their own connection state if possible.
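The shape of the trade-off is not Cocoa-specific. As a language-neutral sketch (Python here, with made-up names), compare what a caller has to provide in each style:

```python
# Callback ("block") style: ideal for simple one-shot requests.
def fetch(url, on_success, on_failure):
    """Hypothetical fetch API taking a success and a failure callback."""
    ...

# Delegate style: one stateful object fields the many stages and failure
# modes of a long-lived connection (retries, partial data, auth, progress).
class FetchDelegate:
    def did_receive_data(self, chunk): ...
    def did_fail_with_error(self, error): ...
    def should_retry(self, error): return False
    def did_finish(self): ...
```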
It all depends on who your target audience is. If you want people writing apps for OS X 10.5 or iOS 3.x, then you need to use delegates. Otherwise, go ahead and use blocks.
It's quite a subjective question since both are valid options, but Apple seems to be shifting further towards using blocks for "throw-away" methods.
The main question would be your target audience.
Blocks are limited to Snow Leopard (and iOS 4, if I remember correctly).
If you want your framework to be usable by previous operating systems, you can't use blocks.
If you're happy with the OS limitations, then go with blocks and NSOperationQueue; it's really good and simple to use.
Better yet, you could offer both options.
I would recommend using blocks, and if you do it right, you can support 10.5 at the same time.
Check out the open-source PLBlocks runtime, it allows you to seamlessly use blocks on both 10.5 and 10.6.

Cross platform IPC [closed]

I'm looking for suggestions on possible IPC mechanisms that are:
Cross platform (Win32 and Linux at least)
Simple to implement in C++ as well as the most common scripting languages (Perl, Ruby, Python, etc.).
Finally, simple to use from a programming point of view!
What are my options? I'm programming under Linux, but I'd like what I write to be portable to other OSes in the future. I've thought about using sockets, named pipes, or something like D-Bus.
In terms of speed, the best cross-platform IPC mechanism will be pipes. That assumes, however, that you want cross-platform IPC on the same machine. If you want to be able to talk to processes on remote machines, you'll want to look at using sockets instead. Luckily, if you're talking about TCP at least, sockets and pipes behave pretty much the same way. While the APIs for setting them up and connecting them are different, they both just act like streams of data.
The difficult part, however, is not the communication channel, but the messages you pass over it. You really want to look at something that will perform verification and parsing for you. I recommend looking at Google's Protocol Buffers. You basically create a spec file that describes the object you want to pass between processes, and there is a compiler that generates code in a number of different languages for reading and writing objects that match the spec. It's much easier (and less bug prone) than trying to come up with a messaging protocol and parser yourself.
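As a sketch of what that looks like with the Python protobuf bindings (the Command message and the ipc.proto file are illustrative, not a real spec):

```python
# Given a spec file ipc.proto, compiled with `protoc --python_out=. ipc.proto`:
#
#   syntax = "proto3";
#   message Command {
#     string name = 1;
#     repeated string args = 2;
#   }

from ipc_pb2 import Command  # generated module; name follows protoc's convention

cmd = Command(name="open", args=["file.txt"])
wire = cmd.SerializeToString()    # compact bytes, ready for a pipe or socket

received = Command()
received.ParseFromString(wire)    # parsing and validation done for you
```

The same spec file generates matching classes for C++, Java, Python, and more, which is what makes this attractive for mixed-language IPC.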
For C++, check out Boost IPC.
You can probably create or find some bindings for the scripting languages as well.
Otherwise, if it's really important to be able to interface with scripting languages, your best bet is simply to use files, pipes, or sockets, or even a higher-level abstraction like HTTP.
Why not D-Bus? It's a very simple message passing system that runs on almost all platforms and is designed for robustness. It's supported by pretty much every scripting language at this point.
http://freedesktop.org/wiki/Software/dbus
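For instance, with the dbus-python binding, a quick smoke test against the session bus daemon itself looks roughly like this:

```python
import dbus

# Connect to the per-user session bus and call a standard method on the
# bus daemon itself, so no custom service is needed for this test.
bus = dbus.SessionBus()
proxy = bus.get_object('org.freedesktop.DBus', '/org/freedesktop/DBus')
iface = dbus.Interface(proxy, 'org.freedesktop.DBus')
print(iface.ListNames())  # bus names currently owned on the session bus
```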
If you want a portable, easy-to-use, multi-language, LGPL-licensed solution, I would recommend ZeroMQ:
Amazingly fast, scales almost linearly, and still simple.
Suitable for simple and complex systems/architectures.
Very powerful communication patterns available: REQ-REP, PUSH-PULL, PUB-SUB, PAIR.
You can configure the transport to make it more efficient depending on whether you are passing messages between threads (inproc://), processes (ipc://) or machines ({tcp|pgm|epgm}://), with a smart option to shave off some of the protocol overhead when connections run between VMware virtual machines (vmci://).
For serialization, I would suggest MessagePack or Protocol Buffers (which others have already mentioned as well), depending on your needs.
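A minimal pyzmq REQ-REP sketch; both ends are shown in one process for brevity, and the endpoint is arbitrary:

```python
import zmq

ctx = zmq.Context()

# Server side: a reply socket bound to a TCP endpoint
# (ipc:// or inproc:// would work the same way).
rep = ctx.socket(zmq.REP)
rep.bind("tcp://127.0.0.1:5555")

# Client side: a request socket connected to it.
req = ctx.socket(zmq.REQ)
req.connect("tcp://127.0.0.1:5555")

req.send(b"ping")
print(rep.recv())  # b'ping'
rep.send(b"pong")
print(req.recv())  # b'pong'
```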
You might want to try YAMI; it's very simple yet functional, portable, and comes with bindings for a few languages.
I suggest the plibsys C library. It is very simple, lightweight and cross-platform. Released under the LGPL. It provides:
named system-wide shared memory regions (System V, POSIX and Windows implementations);
named system-wide semaphores for access synchronization (System V, POSIX and Windows implementations);
named system-wide shared buffer implementation based on the shared memory and semaphore;
sockets (TCP, UDP, SCTP) with IPv4 and IPv6 support (UNIX and Windows implementations).
It is an easy-to-use library with quite good documentation. As it is written in C, you can easily make bindings from scripting languages.
If you need to pass large data sets between processes (especially if speed is essential), it is better to use shared memory to pass the data itself and sockets to notify the other process that the data is ready. You can do it as follows:
one process puts the data into a shared memory segment and sends a notification via a socket to another process; as the notification is usually very small, the time overhead is minimal;
the other process receives the notification and reads the data from the shared memory segment; after that, it sends a notification back to the first process that the data was read, so it can feed more data.
This approach can be implemented in a cross-platform fashion.
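A cross-platform sketch of that scheme using Python's multiprocessing.shared_memory (3.8+); a socketpair stands in for the notification channel between two processes, and the segment name and sizes are made up:

```python
import socket
from multiprocessing import shared_memory

# The shared memory segment carrying the payload (name/size illustrative).
shm = shared_memory.SharedMemory(create=True, size=1024, name="demo_blob")

# A socketpair stands in for the notification socket between two processes.
producer, consumer = socket.socketpair()

# Producer: write the payload into shared memory, then send a tiny note.
shm.buf[:5] = b"hello"
producer.sendall(b"demo_blob:5")  # just the segment name and length

# Consumer: receive the note, attach to the segment, read the payload.
name, length = consumer.recv(64).split(b":")
view = shared_memory.SharedMemory(name=name.decode())
print(bytes(view.buf[:int(length)]))  # b'hello'

view.close()
shm.close()
shm.unlink()
```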
How about Facebook's Thrift?
Thrift is a software framework for scalable cross-language services development. It combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, Smalltalk, and OCaml.
I think you'll want something based on sockets.
If you want RPC rather than just IPC, I would suggest something like XML-RPC/SOAP, which runs over HTTP and can be used from any language.
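Python's standard library already covers both ends of XML-RPC; a sketch (port and method name arbitrary):

```python
# Server side (run in its own process):
from xmlrpc.server import SimpleXMLRPCServer

server = SimpleXMLRPCServer(("127.0.0.1", 8001))
server.register_function(lambda a, b: a + b, "add")
# server.serve_forever()  # blocks; uncomment when running for real

# Client side (could just as well be Perl, Ruby, C++, ...):
import xmlrpc.client
proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8001/")
# print(proxy.add(2, 3))  # -> 5 once the server is running
```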
YAMI - Yet Another Messaging Infrastructure is a lightweight messaging and networking framework.
If you're willing to try something a little different, there's the ICE platform from ZeroC. It's open source, and is supported on pretty much every OS you can think of, as well as having language support for C++, C#, Java, Ruby, Python and PHP. Finally, it's very easy to drive (the language mappings are tailored to fit naturally into each language). It's also fast and efficient. There's even a cut-down version for devices.
Distributed computing is usually complex, and you are well advised to use existing libraries or frameworks instead of reinventing the wheel. Previous posters have already enumerated a couple of these libraries and frameworks. Depending on your needs, you can pick either a very low-level one (like sockets) or a high-level framework (like CORBA). There cannot be a generic "use this" answer. You need to educate yourself about distributed programming, and then you will find it much easier to pick the right library or framework for the job.
There is a widely used C++ framework for distributed computing called ACE, and the CORBA ORB TAO (which is built upon ACE). There are very good books about ACE at http://www.cs.wustl.edu/~schmidt/ACE/ so you might take a look. Take care!
TCP sockets to localhost FTW.
It doesn't get simpler than using pipes, which are supported on every OS I know of and can be accessed in pretty much every language.
Check out this tutorial.
Python has a pretty good IPC library: see https://docs.python.org/2/library/ipc.html
Xojo has built-in cross-platform IPC support with its IPCSocket class. Although you obviously couldn't "implement" it in other languages, you could use it in a Xojo console app and call it from other languages, making this option perhaps very simple for you.
These days there is a very easy-to-use, C++1x-compliant, well-documented, Linux- and Windows-compatible, open-source "CommonAPI" library: CommonAPI C++.
The underlying IPC system is D-Bus (libdbus) or SomeIP if one wishes. Application interfaces are specified using Franca IDL, a simple language tailored for this purpose.