Networking in Mono

All,
I'm attempting to estimate the effort to port an app developed on Windows (.NET) to Linux (Mono). I came across the MoMA tool, which looks through my .exe and flags potential areas of incompatibility. Most of my issues appear to center around getting and setting network settings and retrieving network info (e.g., ManagementBaseObject.get_Item and set_Item).
In almost all of the cases, the Mono functionality is listed as "ToDo". For estimation purposes, is it safe to assume most/all of these have some kind of workaround? I would imagine this type of basic networking support must be included in the latest version of Mono. Or should I assume none of this is currently available and I would be stuck waiting for it to be implemented (or be forced to implement it myself)?
Thanks,
Dan

First, see Mono Compatible Networking/Socket Library. Also, take a look at Cross-Platform Network Applications with Mono. You can start with the C# Network Library.
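To give a concrete feel for the kind of workaround that usually exists: WMI (System.Management, where ManagementBaseObject lives) is Windows-only, but reading basic network information works cross-platform through System.Net.NetworkInformation, which Mono implements. A minimal read-only sketch (writing settings generally still means shelling out to platform tools, so treat this as illustration only):

```csharp
using System;
using System.Net.NetworkInformation;

class NetInfo
{
    static void Main()
    {
        // Cross-platform replacement for WMI-style adapter queries:
        // enumerate interfaces and their IP configuration.
        foreach (NetworkInterface nic in NetworkInterface.GetAllNetworkInterfaces())
        {
            Console.WriteLine("{0} ({1}): {2}",
                nic.Name, nic.NetworkInterfaceType, nic.OperationalStatus);

            foreach (UnicastIPAddressInformation addr in
                     nic.GetIPProperties().UnicastAddresses)
            {
                Console.WriteLine("  {0}", addr.Address);
            }
        }
    }
}
```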

Related

C# Mono inter-process, inter-application, cross-platform messaging implementation (how to)

I am developing applications in C#; at the moment I work on projects for the Windows platform only. However, I am planning to move to Mono so that my programs can work on Linux, Mac OS and Windows.
One of the features I am implementing is the ability for my programs to communicate with each other (i.e. a console-type program that can interact with a GUI program by sending commands and receiving reply messages, logging messages, signals, etc.). On the Windows .NET Framework I was looking at anonymous pipes, but now I am checking whether Mono.Unix.UnixPipes will do the job and let me implement inter-process messaging with little to no adjustment under Linux, Mac OS and Windows.
I am a little new to this kind of feature, and I am now reading the documentation (though the class and object documentation is not helping me much yet). I am also browsing some of the inter-process messaging questions posted here on Stack Overflow.
If anyone has a link to a tutorial-style document or an example of how to do this, it would be a great help.
Thank you.
I highly recommend running MongoDB (easily scalable from a dev box to hundreds of servers) and using the library https://github.com/dominionenterprises/mongo-queue-csharp on top of it for the messaging. It has the ability to query for messages, which enables some really nice patterns. It's also compatible with Mono, and there are implementations for other languages if needed down the road.
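If a MongoDB queue is heavier than you need, one mechanism that behaves identically under Mono on Linux/Mac OS and .NET on Windows is a plain loopback TCP socket. A minimal sketch of the command/reply pattern described in the question (the port number and message format are made up for illustration):

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class LoopbackIpcDemo
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 5500);
        listener.Start();

        // "GUI" side: accept one connection and send a reply per command line.
        Task server = Task.Run(() =>
        {
            using (TcpClient peer = listener.AcceptTcpClient())
            using (var reader = new StreamReader(peer.GetStream()))
            using (var writer = new StreamWriter(peer.GetStream()) { AutoFlush = true })
            {
                string command = reader.ReadLine();
                writer.WriteLine("ack: " + command);
            }
        });

        // "Console" side: send a command and read the reply.
        using (var client = new TcpClient("127.0.0.1", 5500))
        using (var reader = new StreamReader(client.GetStream()))
        using (var writer = new StreamWriter(client.GetStream()) { AutoFlush = true })
        {
            writer.WriteLine("status");
            Console.WriteLine(reader.ReadLine()); // prints "ack: status"
        }

        server.Wait();
        listener.Stop();
    }
}
```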

What are the advantages of using OSGi on the target side in a Remote Software Provisioning System?

I am developing a Remote Software Provisioning system that should be able to handle all deployment, installation, uninstallation and upgrades of software components. Software can be in any language (Java, .NET, C/C++, etc.) and the target can be a PC, an embedded system or a smartphone.
I have found Apache ACE to be a good candidate for developing this system.
I want to know if there is any advantage or necessity in using OSGi on the target side, since Apache ACE can provision software to non-OSGi targets as well.
Having a modular framework like OSGi at the client side is a huge advantage when doing remote management, because it gives you much insight into what's happening inside - installed bundles, dependencies, states of the bundles, available services etc. This helps a lot when you have to solve a problem remotely. Another advantage is that OSGi basically forces programmers to develop proper modular and dynamic systems, which makes (remote) updating much easier.
So, if you have to decide now what language and framework to use for the client side, I strongly recommend OSGi for the embedded and mobile clients. For the PCs (I guess you mean desktop PCs?) this is probably not the best choice; it depends a lot on what you want to achieve there. If you want to install MS Office remotely, OSGi won't bring you forward ;)
However, if you already have existing programs at the client side and are discussing whether to convert them to OSGi, I would recommend investing some time first to see whether they can be converted easily. Some software packages could give you a lot of trouble converting to OSGi, not because OSGi is complex, but because the program itself is not modular and makes a lot of assumptions about the static nature of the environment (e.g. nothing ever disappears, parts of the system never get updated, etc.). The irony in the matter is that these are exactly the programs that will give you the most trouble later anyway, no matter which remote provisioning system you choose.
If you have OSGi at some of the targets be sure to use a remote provisioning system which gives you access to the full OSGi functionality and not only the most basic and simple install and update functions. I haven't yet used Apache ACE, but I have experience with another provisioning system - mPower Remote Manager. Here are some snapshots from the documentation which can give you a feeling what is possible with OSGi as a base - you can draw your own conclusions whether it will be useful for your case or not.
I've given some examples in the other question you asked:
What are the non-osgi targets with which Apache ACE can work
You can write your own management agent that talks to the ACE server and installs artifacts. There actually are a couple of places where you could hook in your own code and protocol. Is there a concrete language/environment you're thinking of using, or are you just exploring the possibilities right now?
Well, the advantages of OSGi haven't changed, so for that I can refer you to the standard page.
To be a bit more constructive, I'll read the question as 'Should I bother converting my application to OSGi, as it is not necessary for ACE?'
I think that depends on what 'kind' of updating mechanism you're after. If you have a monolithic application (at least from the provisioning perspective) which you deploy and update only as a whole (like an iOS app), then there isn't much to gain for provisioning purposes by using OSGi.
For the rest I can tell you the same as I tell anybody else: converting an application to OSGi isn't hard, but modularizing code can be a nightmare, and it's something you'll need to face at some point, OSGi or not. If your code is modularized already, using OSGi should be a piece of cake.

Developing web application using Objective-C on FreeBSD

I saw that the Clang 3.0 port includes Objective-C as a development language, and I also found the ports "libobjc2-1.6" (a replacement Objective-C runtime supporting Obj-C 2 features) and "ofc-0.8.1_5" (the Objective-C Foundation Classes library).
Let's say we are considering using Objective-C on FreeBSD to develop a web-based application (vs. using Java and running it on Tomcat/GlassFish). How do we approach it?
Does Objective-C development actually work on FreeBSD (9.0)?
What are the things (frameworks/libraries) to download and install?
What IDE?
As mentioned, let's say we intend to develop a web application: what are the libraries? We also saw that there is "GNUstepWeb" (a successor to WebObjects). Is this the web library we should consider, and is it the only one, or are there other alternatives? Further, can GNUstep/GNUstepWeb compile under Clang 3.0 or make use of the Objective-C ports ("libobjc2-1.6" and "ofc-0.8.1_5") mentioned above? Are those ports relevant?
Has anyone successfully developed a web application on FreeBSD using Objective-C (and deployed it on FreeBSD)?
Note: "web-based application" means it takes in HTTP (RESTful) calls and talks to a database (traditional and/or NoSQL).
There is http://cocotron.org, a port (more like a rewrite) of Apple's runtime for Objective-C.
I would still advise against using ObjC for a web stack. I have done it before, and I must say it involves implementing a big chunk of pretty common code just to get basic HTTP server functionality.
Also, Cocotron is not really that fast (as a runtime). It's OK for desktop applications, but the web world is much more restrictive.
I am writing a library, CGIKit (https://github.com/xcvista/CGIKit), that supports this by using FastCGI to interface with the server; it works on GNUstep instead of Cocotron.
You may look at SOPE and SOGo: http://sope.opengroupware.org/en/build/thirdparty.html
Someone seems to be having success building Objective-C programs for FreeBSD 9.x.
You don't need to worry about the IDE if you don't mind using Apple hardware: you could write on a Mac and run on FreeBSD (personally I think this is the best of both worlds). IMO, if there's ever a server OS with Objective-C ready out of the box, FreeBSD will be the first one.
The more serious problem is libraries and frameworks. We don't have many options in Objective-C for web server development, even on OS X. But we can wrap existing C/C++ libraries (just as many great Node.js, Python and Ruby libraries do), and I think we could get a bunch of options with a small effort.
Some people worry about security, and I always wonder how many foundational programs on the network are written in C/C++ and other languages anyway.
In his blog post “Using Objective-C on the server”, Graham Lee describes how to set up a minimal GNUstep web app. Obviously the build instructions for GNUstep-make would differ, but other than that this seems like a nice starting point.
He wrote several other posts (jQuery, AJAX) further exploring GSW.

Should I use CORBA, MessagePack RPC or Thrift, or something else entirely?

I'm writing software for a new hardware device, which I want any new third-party application to be able to access if it wants to.
The software will be a native process (C++) that should be pollable by 3rd-party games and applications that want to support the hardware device. Those 3rd-party apps should also be able to receive events from the native process on a subscription basis. So aside from the native process, I'll also supply "connector" libraries to the 3rd-party developers, for all platforms/languages they might choose (Java, C++, Python, etc.), to embed in their apps so they can easily connect to the device with hardly any extra code to write. I want to target all desktop/laptop OS platforms, and I have a pretty good idea of what functions I want to expose, but ideally I don't want to be too locked in (i.e. I want it to be elegantly scalable from both the client and server perspectives).
I'm looking for reliability going forward, performance, maintainability going forward, and cross-platform/language flexibility going forward, and ease of development, in that order.
What should I use?
CORBA, MessagePack-RPC, Thrift, or something else entirely?
(I've omitted ICE because of its licensing.)
Thrift or MessagePack is the best option going forward. Both are sleek and lightweight and do not add much latency to your process. They support most of the common languages and are in active development. At the current stage I would personally prefer Thrift, but MessagePack does seem to promise a lot of features.
Though Thrift might not be as Windows-friendly as we'd want, people are using it on Windows.
This is a starter guide for thrift on windows.
http://wiki.apache.org/thrift/ThriftInstallationWin32
Only obtaining and installing the Thrift compiler can be troublesome on Windows. Using the generated files depends on the language you choose, and a lot of languages have good support for them via importable Thrift libraries (in Java it is very easy: there's a Maven artifact).
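To give a feel for how little code the generated files require, here is a rough C# client sketch; DeviceService and its poll() method are hypothetical stand-ins for whatever `thrift --gen csharp` would emit from your own IDL:

```csharp
using Thrift.Protocol;
using Thrift.Transport;

class ThriftClientDemo
{
    static void Main()
    {
        // Plain blocking socket transport to a Thrift server on port 9090.
        TTransport transport = new TSocket("localhost", 9090);
        TProtocol protocol = new TBinaryProtocol(transport);

        // DeviceService.Client is what the compiler would generate for a
        // hypothetical `service DeviceService { string poll() }` definition.
        var client = new DeviceService.Client(protocol);

        transport.Open();
        try
        {
            System.Console.WriteLine(client.poll());
        }
        finally
        {
            transport.Close();
        }
    }
}
```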
There is a discussion on the RPC frameworks available at RPC frameworks available?
CORBA, in my opinion, is old, cumbersome and very heavyweight.
If ancient and heavyweight don't put you off, obsolete definitely should. Regardless, I can tell you that we've been using Google Protocol Buffers at work recently, and they're pretty easy to use.
From the developer's perspective, all you need to do is have a build of GPB (which really isn't that difficult), and then it will generate source files for you. The end result is a cross-platform binary message-passing interface (think XML and limited RMI, not MPI-like functionality).
We use it on Windows to talk to an ARM-based Linux system (TS-7200s from Embedded ARM) running the same software. To my knowledge, it is compatible with many languages.
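As a sketch of that workflow: given a hypothetical definition like `message DeviceEvent { int32 id = 1; string payload = 2; }`, protoc emits a C# class, and round-tripping it looks roughly like this (shown against the Google.Protobuf runtime; all names are illustrative):

```csharp
using Google.Protobuf;

class ProtobufDemo
{
    static void Main()
    {
        // DeviceEvent is the class protoc would generate from the
        // hypothetical .proto definition mentioned above.
        var evt = new DeviceEvent { Id = 42, Payload = "button-press" };

        // Serialize to a compact, cross-platform binary wire format...
        byte[] wire = evt.ToByteArray();

        // ...and parse it back on any platform/language sharing the same .proto.
        DeviceEvent decoded = DeviceEvent.Parser.ParseFrom(wire);
        System.Console.WriteLine(decoded.Payload); // button-press
    }
}
```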
CORBA is the only free "RPC" option that would work for my system right now, even though it scales very badly. Thrift isn't Windows-friendly yet, and MessagePack-RPC isn't yet available in all languages and OSes, even though it's still in development. If CORBA were elegantly scalable, it probably wouldn't have become obsolete at all.
Protocol Buffers and messaging would work, though I'd have to develop both a client and a service implementation for every platform/language. It would also be very scalable. I've decided on this.
I'm currently using Apache Thrift for a Hospital Manager project. It is better than CORBA in many areas, not to mention that it is lightweight and much easier to implement and understand. The learning curve for Thrift is definitely gentler than CORBA's, but the documentation is Thrift's weakest point.
I'm using a Ruby Thrift server to which Obj-C and Java clients connect. The Thrift parser or "compiler" does a pretty good job of generating source files for the languages you want, although its output is far too verbose. I would definitely look into Thrift, or Google Protocol Buffers, if I were starting a new project, since CORBA is really outdated and might not adopt new technologies in the future; there are also vulnerabilities and exploits targeting CORBA that will not get patched since it's no longer in development, presenting serious security holes for your new project.
Thrift supports many programming languages: C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Objective-C, JavaScript, Node.js, Smalltalk, OCaml and Delphi as of this writing. Supporting multiple languages is key, I think, for the purpose of your project.

Is Mono appropriate for developing servers?

Is Mono appropriate for developing server applications, or only desktop applications? I'd like to develop server applications in C# for Linux. I want to write a first-person shooter (FPS) game in C#/XNA, and I have a Linux dedicated server. But the question applies generally to all types of server applications...
Mono handles ASP.NET (including ASP.NET MVC) quite well. Most other server implementations work very well, as well. It does depend, slightly, on what exactly you are trying to serve, and how you are going to use it.
Mono also supports WCF directly in the core, which allows most non-web service applications to be written very effectively.
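Beyond ASP.NET and WCF, Mono also implements System.Net.HttpListener, so a minimal self-hosted HTTP service needs no separate web server at all; a rough sketch (the port and response are arbitrary):

```csharp
using System;
using System.Net;
using System.Text;

class MiniServer
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://*:8080/");
        listener.Start();
        Console.WriteLine("Listening on :8080 ...");

        while (true)
        {
            // Blocks until a request arrives, then writes a plain-text reply.
            HttpListenerContext ctx = listener.GetContext();
            byte[] body = Encoding.UTF8.GetBytes("hello from Mono\n");
            ctx.Response.ContentType = "text/plain";
            ctx.Response.ContentLength64 = body.Length;
            ctx.Response.OutputStream.Write(body, 0, body.Length);
            ctx.Response.Close();
        }
    }
}
```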
Edit:
Given your edit, and your desire to handle the server side of a multiplayer FPS game, Mono should work fine. You will likely want to avoid the high-level interfaces like WCF and ASP.NET and go straight to the System.Net namespace (this depends a bit on how many players you'll be synchronizing, but if the number is large, you'll want speed over ease here). Mono supports this quite well.
That being said, Mono's support of the System.Net namespace is very good, and quite mature, so you should have no problems using it for the server side of a multiplayer FPS game.
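For the FPS case specifically, "going straight to System.Net" usually means a UDP receive loop, since FPS state updates favor low latency over guaranteed delivery. A minimal echo-style sketch (the port number is arbitrary, and real code would decode packets into game state):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class GameServer
{
    static void Main()
    {
        // UDP is typical for FPS traffic: low latency, loss-tolerant.
        using (var udp = new UdpClient(27015))
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                // Receive one datagram (blocks); 'remote' is filled in
                // with the sender's address so we can reply to it.
                byte[] packet = udp.Receive(ref remote);

                // Here we just echo the packet back as an acknowledgement.
                udp.Send(packet, packet.Length, remote);
            }
        }
    }
}
```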
I don't see why not. I believe FogBugz uses Mono to deploy to Apache servers.
Here is a conversation about running the FogBugz application on mono as an example of having a server app running on it.
It looks like your needs cover a broad range of different applications.
I think the overall answer would be yes, Mono is appropriate for developing server applications.
As others have pointed out, Mono has ASP.NET support as well as WCF built-in.
You also have the ability of working directly down to the Socket level if you need to squeeze every last bit of performance out of your server application (although you'll have to figure out how to persist state if the need is there).
I'd definitely be interested in seeing the performance difference of something like that between the two platforms (I wouldn't expect much difference...it's possible that Mono might even get slightly better performance because of the rest of the *NIX stack).