REBOL3 - what is the difference between the different branches?

What are the differences between the different Rebol 3 branches, especially with the new REN branch?
Is it the platforms they'll run on, the feature set, code organization, the C standard compliance?

This is an answer destined to become outdated, hence set to Community Wiki. This information is as of Sep-2015. So if updating this answer after some time has passed, please modify the date as well.
Binary download of Rebol3 from rebol.com
Last build was 5-Mar-2011 and pre-dates the open source release.
No GUI support, no HTTPS support, no serial port support, no UDP support, no smart console...
No 64-bit builds. Binaries are for Windows x86, OS/X (PPC or x86), Linux (x86 or PPC), FreeBSD x86.
While Rebol2 binaries are archived for many "esoteric" systems (BeOS, AIX, Windows DEC Alpha, QNX, Solaris...) similar binaries were not provided for Rebol3. The only "weird" build is for Amiga, and only an OS4 PowerPC Amiga. No successful builds of Rebol3 for Amiga emulators have been reported.
Open source release of Rebol3 on Github rebol/rebol
Open-sourcing was on 12-Dec-2012.
The rebol.com binary downloads were not rebuilt as part of this release. However, a community member (#earl here on SO) created a build farm at rebolsource.net that follows this GitHub master whenever it updates. Given that GitHub's rebol/rebol master hasn't been updated since March 2014, this dynamism is currently underused.
Building the source at the time of release produced an executable essentially indistinguishable in functionality from the builds of 5-Mar-2011. This suggests few changes were made to the source besides some cleanup and Apache-licensing edits to prepare for publication.
Minor patches and bugfixes were integrated sporadically, with most PRs sitting idle. The last PR accepted at the time of writing was on Mar 3, 2014, over a year ago.
The most noticeable "breaking" PR that did get approved was to repurpose the FUNCTION name. It was considered to be worth breaking the old arity 3 form to let the word be taken for the much more useful implementation as locals-gathering FUNCT. (This also brought Rebol in alignment with Red, whose FUNCTION is arity 2 and acts similarly.) FUNCT was kept around as-is for legacy code.
The most major non-breaking PR that was taken is probably not requiring blocks around IF, UNLESS, or EITHER bodies. This has been received well among those who know it's there, as fitting the freeform and non-boilerplate philosophy of the language. It allows some code constructs to get "prettier" and gives programmers more choice, while not seeming to cause any more problems than anything else. It's certainly less of a speedbump than if [condition] [...]; in fact, it seems almost no one knows this feature got added, so it must not be biting anyone. (If anyone can bend ears over at Red to make sure it gets IF and IF/ONLY, then that would be ideal.)
RETURN/REDO was removed. The rationale was that it permitted functions to effectively behave with variable arity, and that this was unnecessary and took terra firma away by no longer being able to predict a function's arity from its spec. Perhaps this stance warrants a second look...as Lisp users who are pressuring for the addition of Lisp-style macros seemingly aren't worried about that very much. (Here in the StackExchange universe, this provoked a Programmers.SE question, Would Rebol (or Red) benefit from Lisp-style Macros?, which hasn't gotten much in the way of answers yet.)
The fork by Saphirion: "Saphir"
Prior to the open-sourcing of Rebol, Saphirion AG had a special relationship with REBOL Technologies. They had access to the source and were taking responsibility for most of the development work for Rebol3 GUI features. They also added several other things, like HTTPS.
Saphir is available as a binary download from their website, but only provided for 32-bit Windows. There was at one time an experimental .APK for Android from Saphirion.
Some (but not all) of Saphir's source was released after the open-sourcing. Notable omissions were the android build and some Rebol3 code for encapping...a way of injecting compressed scripts and resources into binaries of the interpreter without needing to recompile it.
(Note: Under Apache2 license there is no requirement to release source code for one's derived work.)
"Community" Integration at Rebolsource on GitHub
With the GitHub rebol/rebol being held up on integrations, a fork at rebolsource/r3 was established to be a "community build" where work could be staged.
Rebolsource changes were conservative, seemingly aimed toward showing process for how GitHub's rebol/rebol might adopt changes "in the spirit in which Rebol was conceived" should that repository be delegated to the community. (For that spirit, see this.) Hence it integrated non-controversial bugfixes and tweaks, instead of large third-party cryptography libraries for implementing HTTPS. Also: no allowance for adding build dependencies besides a C compiler (no GNU autotools, for instance).
Binaries for the community build were produced on an as-needed basis for those requesting them who could not build it themselves.
Atronix Engineering's Rebol "3.0" at Github zsx/r3
Atronix is an industrial automation solutions provider that uses Rebol. How they do so is described in a video here by David den Haring, director of Engineering, and their ZOE software is built on their version of Rebol.
After the open sourcing, Atronix partnered with Saphirion to port the GUI to Linux. Atronix publishes their source publicly as it is developed, and David den Haring notes in the video above that they have only one proprietary component they developed (an industrial control driver). Other than that they are happy to share the source for all Rebol development they do.
Atronix integrated the 64-bit patches from Rebolsource, created a Windows 64-bit target, and offer up-to-date binaries of their development branch for Windows and Linux x86/x64, as well as Linux ARMv7.
Besides having the features of Saphir, the Atronix build added support for CALL with /INPUT, /OUTPUT, /ERROR. It also added a Foreign Function Interface, implementing LIBRARY!, ROUTINE! and STRUCT! for communicating with non-Rebol dynamic libraries. It brings in encapping support as well on Windows and Linux.
Rebol's "religion" was at times at odds with expedience, so the Rebol-based build process was replaced when needed by hand-edited makefiles and Visual Studio projects. The FFI library introduced a dependency on GNU autotools to build.
All Atronix builds include the GUI, so there is no "Core" build. And again, only Linux and Windows.
Ren-C
(Bias Note: This fork is the initiative #HostileFork started, knows the most about, and will speak most enthusiastically about.)
Ren-C started as an extraction of a Core build out of Atronix's codebase. That gave it features like HTTPS, the enhanced CALL, and the Foreign Function Interface on essentially all the platforms that Rebolsource was able to build for. Updates (Jul/Sep-2015): Ren/C supports line continuations in the console, user infix functions, several bugfixes...
Ren-C makes large-scale changes and fixes fundamental issues in R3-Alpha, which are tracked on a Trello that provides more information. There is a new FAQ as a GitHub wiki. Critical issues like definitionally-scoped returns have been solved, with continuous work on other outstanding problems.
Though Atronix's R3/View required some additional dependencies, Ren/C pushed back to being able to be built with nothing besides a C compiler, and eliminated all handmade makefiles/projects.
Beyond Windows, Linux and Mac in both 32-bit and 64-bit variants, Ren/C has also been built for smaller players like HaikuOS and yes, even Syllable. This is interesting more for the demonstration of how broadly turnkey builds of the C89 code work (simply as make -f makefile.boot) as opposed to there being a particularly large userbase of those particular OSes!
From the point of view of language rigor, Ren/C is pushing on modern techniques. Although it can still build as C89, it can be built as C99 and C11 as well. It has also been verified to build as C++98 through C++14, and with some strategic modifications under #ifdef __cplusplus it can take advantage of modern C++ as a kind of static analysis tool over the C code. Warning levels are raised, type errors have all been fixed up, and the code is "const correct". The necessary changes were carefully considered to make Rebol's baseline C code not just more correct but cleaner and clearer source across the board.
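To illustrate the kind of dual-compilation trick that makes this possible, here is a minimal sketch (an invented example, not actual Ren-C source; the cast_int name is hypothetical): a cast that stays an ordinary C cast under C89, but becomes type-checked when the same sources are run through a C++ compiler.

    /* Hypothetical sketch of an #ifdef __cplusplus guard; not actual
       Ren-C source. Compiling the same code as C++ turns a silent C
       cast into something the compiler can verify. */
    #ifdef __cplusplus
        /* C++ build: a template cast; passing a type that cannot
           sensibly convert to int is a compile-time error. */
        template <typename T>
        inline int cast_int(T value) { return static_cast<int>(value); }
    #else
        /* C89 build: fall back to a plain cast, zero overhead. */
        #define cast_int(v) ((int)(v))
    #endif

The shipped C build is unchanged; the C++ build exists purely to let a stricter compiler audit the code.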
From a point of view of C developers, Ren/C should be stable, organized, and commented enough for anyone who knows C to "modify with confidence" and try new features. That means being able to implement definitionally scoped returns (actually written, but not pushed), or try developing features like NewPath.
From a point of view of architecture, Ren/C is intended to not have an executable at all...but to be a library for embedding a Rebol interpreter into other programs. It is now the basis for Ren/C++, which was designed to anticipate working with Red as well.
From a point of view of testing, Ren/C intends to whip everything into shape for engineering rigor and zero bug tolerance. This means avoiding practices like zero-filling memory to obscure uninitialized memory accesses, using Address Sanitizer, Valgrind, and a test suite that can pass the highest settings on both.
While enabling all the extra functionality has made Ren/C's executable nearly twice the size of Rebolsource's, there has not yet been any audit to see how this can be brought down. It has been confirmed that there are, for instance, duplicate copies of Zlib and of PNG encoding/decoding (Saphirion included LodePNG, likely because working around a bug in the existing PNG code was easier than fixing it, yet the previous code was not mothballed). Also, being able to do a build which selectively integrates only the codecs you want to use is on the agenda.
Ren/C currently has the stakeholders from Atronix and Rebolsource participating in its development and direction, which strengthens the likelihood that it may evolve into "the" Rebol Core. It is now being linked in as the code backing Ren Garden, and using a similar approach it may be set up as the library used by Atronix's R3/View...then Rebolsource...and perhaps ultimately rebol/rebol itself.
The fork by Oldes
(Bias Note: this edit is added 28-Feb-2019 by Oldes himself)
Forked from the community branch. The main focus is on keeping the code close to Carl's original release, without blindly taking everything from Atronix/Saphirion, but still slowly picking up the good things from those branches.
Unlike Ren-C, this version is not trying to introduce new syntax, but rather to stay closer to the original Rebol2 and the new Red language.

Related

Will Xcode 8 support plugins (-> Alcatraz)

Apple introduced Xcode source editor extensions with Xcode 8.
Will Xcode 8 still support plugins served via Alcatraz?
Xcode 8 prohibits code injection (the way plugins used to load) for security reasons. You can circumvent this by removing the code signing on Xcode. Both of these tools are capable of simplifying that:
https://github.com/inket/update_xcode_plugins
https://github.com/fpg1503/MakeXcodeGr8Again
To work on Xcode 8+ without removing the code signing, plugins will have to be rewritten as Xcode Source Editor Extensions. Unfortunately, the APIs for these extensions only allow for text replacement at the moment, so they are not an adequate replacement.
I've filed a report on Radar; do not hesitate to speak your mind as well:
Xcode is a primary tool for development on all Apple platforms. People can love it or hate it; the fact is that it's still the most powerful development tool around.
Lots of its power and usefulness has been achieved by 3rd-party plugins, later covered by the Alcatraz project, which is the number one extension management system for Xcode, as vital and needed as, for example, npm is for Node.js. It's all based on a fair, aware community developing helpful open-source extras and publishing them on GitHub. It's not a code-injecting ghetto aimed at infecting stuff. It's a community within a community.
Xcode 8 tends to drop support for these plugins, most often narrated as a security step to prevent distribution of injected stuff. This is false; you simply can't prevent that, because there's always someone who will find a way. This step simply makes Xcode less usable, more complicated, and not as feature-rich. There are many important plugins which developers love, contribute to, and move forward to make Xcode even better - honestly, often better than you could in a short period.
The community needs powerful stuff, way more powerful than basic source-editing magic. Please reconsider this step in a spirit of community and support for your developers.
In recent years, there's been a move towards closing your platform. First it was shutting down Spotlight plugins and the great Flashlight plugin manager, which is simply great and now requires me to disable Rootless to use it. Now it's Xcode plugins. You're doing more and more to make developers and power users feel sad, and feel they don't have their computing devices in their own hands.
There's a detailed discussion on the Alcatraz repo; it says everything:
https://github.com/alcatraz/Alcatraz/issues/475
I'm attaching a list of great plugins I simply can't spend a day without:
AxeMode – Xcode issues patching
Backlight – active line highlighting
ClangFormat – code formatter
DerivedData Exterminator – daily need for getting rid of bad stuff
FuzzyAutocomplete – name says it all, still more powerful than Xcode completion
HighlightSelectedString
MCLog – console log filtering, including regexes
OMColorSense
Polychromatic – variable colouring, cute stuff
RSImageOptimPlugin – processing PNG files before committing
SCXcodeMinimap – love this SublimeText-thingy!
XCFixin_FindFix – fixing Find features
XcodeRefactoringPlus – patching Refactor functionality, still buggy, but less so than Xcode without the plugin
XToDo – TODOs collection
ZLGotoSandbox – 'cause dealing with your folders would be hell without it
Most of them are not source-code-related, and thus deserve a way to be loaded and work like a charm again.
You can certainly load all your plugins by re-code-signing Xcode 8.0. All credit to the XVim team; they seem to have solved this problem.
https://github.com/XVimProject/XVim/blob/master/INSTALL_Xcode8.md
The Most Important Step From The Solution
There is no support and we can't expect any. Apple decided to shut down the ecosystem around the Alcatraz package manager before having an API ready (extensions) that can do what the plugins did before. The extensions are currently limited to the text frame, which does not allow much.
The main reason announced by Apple is security, and we can now disable code signing, with some effort, to get back the most important features that were missing in Xcode.
Bad day for the community, bad decision from Apple.
I also recommend the discussion on Alcatraz here: https://github.com/alcatraz/Alcatraz/issues/475
Most importantly, if you want to support Alcatraz, file a bug at http://bugreport.apple.com to make them aware that many people are suffering from this change.
I did the same (openradar.appspot.com/28423208):
I'm attaching a list of great plugins I simply can't spend a day without:
AutoHighlightSymbol - add highlights to the currently selected token
ClangFormat – code formatter
DerivedData Exterminator – daily need for getting rid of bad stuff
FuzzyAutocomplete – name says it all, still more powerful than Xcode completion
KZLinkedConsole - click a link in the console to open the relevant file and debug faster
PreciseCoverage - a nicer GUI than Xcode provides for viewing coverage
XcodeColors - shows colors in the console depending on log level (how else should a console be used?)
Most of them are not source-code-related, and thus deserve a way to be loaded and work like a charm again.
If you do not make a fast step to support your community, I'm sure we will find another platform to work with.
Seems like this should work. Found some answers here:
https://github.com/alcatraz/Alcatraz/issues/475
The key seems to be removing code signing in order to get existing plugins to work.
Apparently not :'(
https://github.com/alcatraz/Alcatraz/issues/475
We have to wait until someone converts the plugins into the new Xcode Extensions.

alternative IDE for squeak/pharo

I've been using Smalltalk for a while now and I love the language and the concept. What I just hate is the System Browser. This tool doesn't even resemble a modern IDE. How am I supposed to code without tabs, outlines, and handy shortcuts? I often find myself implementing a selector and noticing that it would be nice to isolate a piece of code in a separate (private) selector, just for readability's sake, but I don't, because it takes like 5 mouse clicks and I have to navigate away from the selector I am working on, and navigate back to it. Oh wait, I can't! Because it has syntax errors, because I haven't finished it yet! Kills me. And I don't have a 24 inch display to open 3 browsers.
Sorry for a little rant. My question is, is there a real IDE (Eclipse, Net.Beans, VS) for smalltalk? Maybe for some commercial version of smalltalk?
You might want to check out tODE. It's at a very early stage, but it is an attempt to provide a Smalltalk IDE in the web-browser and is a break from the traditional Smalltalk IDE. With that said, I don't think you'd want to start using tODE right away, but you can keep an eye on it as it evolves.
Dale
Pharo is trying to have the Nautilus Browser ready for Pharo 1.4. I suspect there will be a flood of awesome new tools as the system stabilizes over the next few releases.
There's the Glamorous Inspector.
Spoon has been mounted as a WebDAV filesystem, so you can use whatever tool you like. Spoon is not another Smalltalk, but a testbed for revolutionary Smalltalk technology, which can be incorporated into any other Smalltalk (it's currently on top of Squeak)
There is the Tiling Window Manager to help you organize
Since Squeak and Pharo are live, dynamic, open systems powered only by volunteers, anyone sufficiently motivated can create the next generation of tools ;-)
p.s. I feel your pain. The 20-browsers-open thing is a drag. Let's invent the future!
Historically, the real "IDE" is the Smalltalk one, and one could claim that the others are just an adaptation to the limits of traditional textual programming languages (not rhetorical; just check out the evolution of typical development environment UIs and how they are adding features that have existed in Smalltalk from the very beginning, like the senders and references in VS).
Just a side note: actually, more than 2000 open-source projects in the SqueakSource repository were coded without tabs, outlines, and shortcuts (I think in Squeak you can still cross-reference any text by selecting it and pressing Alt-6). I can't tell you how sad I feel when I must go back to file-based development; I still don't understand why most developers love to sweep text, mess with line numbers, and page up and down through files in directories. The good news for you is that you have many options:
There is an alternative browser called BobsBrowser (works in Pharo 1.3) which lets you browse
Class hierarchy windows exploring each class
System Category window
Unsaved edits
Recent classes
Recent methods
Method categories for instances and classes
Unsent methods
Driller relating every structural information
etc.
The advantage over the Whisker browser is that in Whisker the hierarchical lists are attached to a window, while in BobsBrowser you can detach them.
It all depends on the different activities you're performing when you're developing. With some experience in Smalltalk you'll find that you prefer some browsers for exploratory insights and others for refactorings, etc. BobsBrowser, for example, is good for knowledge organization or custom navigation of Smalltalk classes and categories; the hierarchies you can see are the organizations from the Smalltalk reflective meta-architecture at any level (classes, senders, implementors), and they are expandable/collapsible (in the classic system browsers you can only expand the system categories and subcategories).
The instance variables were shown historically in the Smalltalk/V flavors, and there is an old goodie (from Squeak 2.7, IIRC) to enable that again, but almost nobody today maintains the classic System Browser in Squeak/Pharo. Adding that feature to OmniBrowser would be more complicated, though, because it is a browser framework (like every serious framework, it takes some time to learn at first). Although the effort of the Squeak/Pharo community is absolutely incredible, the Smalltalk community still needs more developers.
You have also a commercial Smalltalk which isn't public (downloadable) yet but includes IDE-like features of traditional programming environments
And I don't have a 24 inch display to open 3 browsers.
You could give the Whisker Browser a try. It lays out the methods side-by-side so that you don't have to position all these windows manually.
I played with it a few years ago but I'm not sure what state it's in right now.
I don't know how mature it is, but the Etoile project has an IDE called CodeMonkey for writing Smalltalk applications. It's not specifically for Squeak, and instead uses their own smalltalk implementation, but it may be worth looking into. Unfortunately, it's only available in their SVN repository, so it's a pain to compile and install.

Why use an IDE? [closed]

This may be too opinionated, but what I'm trying to understand is why some companies mandate the use of an IDE. In college all I used was vim, although on occasion I used NetBeans for use with Java. NetBeans was nice because it did code completion and had some nice templates for configuration of some of the stranger services I tried.
Now that my friends are working at big companies, they are telling me that they are required to use eclipse or visual studio, but no one can seem to give a good reason why.
Can someone explain to me why companies force their developers into restricted development environments?
IDE vs Notepad
I've written code in lots of different IDEs and occasionally in Notepad. You may totally love Notepad, but at some point using Notepad is industrial sabotage, kind of like hiring a gardener who shows up with a spoon instead of a shovel and a thimble instead of a bucket. (But who knows, maybe the most beautiful garden can be made with a spoon and a thimble, but it sure isn't going to be fast.)
IDE A vs IDE B
Some IDE's have team and management features. For example, in Visual Studio, there is a screen that finds all the TODO: lines in source code. This allows for a different workflow that may or may not exist in other IDEs. Ditto for source control integration, static code analysis, etc.
IDE old vs IDE new
Big organizations are slow to change. Not really a programming related problem.
Because companies standardize on tools, as well as platforms--if your choice of tools is in conflict with their standards then you can either object, silently use your tool, or use the required tool.
All three are valid; provided your alternative doesn't cause other team-members issues, and provided that you have a valid argument to make (not just whining).
For example: I develop in Visual Studio 2008 as required by work, but use VS2010 whenever possible. Solutions/Projects saved in 2010 can't be opened in 2008 without some manual finagling--so I can't use the tool of my choice because it would cause friction for other developers. We also are required to produce code according to documented standards which are enforced by Resharper and StyleCop--if I switched to a different IDE I would have more difficulty in ensuring the code I produced was up to our standards.
If you're good at using vim and know everything there is to know about it, then there is no reason to switch to an IDE. That said, many IDEs will have lots of useful features that come standard. Maintaining an install of Eclipse is a lot easier than maintaining an install of Vim with plugins X, Y, and Z in order to simulate the same capabilities.
IntelliSense is incredibly useful. I realize that vim has all sorts of auto-completion, but it doesn't give me a list of overloaded methods and argument hints.
Multiple panes to provide class hierarchies/outlines, API reference, console output, etc.. can provide you more information than is available in just multiple text buffers. Yes, I know that you have the quickfix window, but sometimes it's just not enough.
Compile as you type. This doesn't quite work for C++, but is really nice in Java and C#. As soon as I type a line, I'll get feedback on correctness. I'm not arrogant enough as a programmer to assume that I never make syntax errors, or type errors, or forget to have a try/catch, or... (the list goes on)
And the most important of all...
Integrated Debuggers. Double click to set a break point, right click on a variable to set a watch, have a separate pane for changing values on the fly, detailed exception handling all within the same program.
I love vim, and will use it for simple things, or when I want to run a macro, or am stuck with C code. But for more complicated tasks, I'll fire up Eclipse/Visual Studio/Wing.
Sufficiently bad developers are greatly assisted by the adoption of an appropriately-configured IDE. It takes a lot of extra time to help each snowflake through his own custom development environment; if somebody doesn't have the chops to maintain their own dev environment independently, it gets very expensive to support them.
Corporate IT shops are very bad at telling the difference between "sufficiently bad" and "sufficiently good" developers. So they just make everybody do the same thing.
Disclaimer: I use Eclipse and love it.
Theoretically, it would decrease the amount of training needed to get an inexperienced developer to deal with the problems of a particular IDE if the whole team uses that one tool.
Anyway, most of the top companies don't force developers to use some specific IDE for now...
I agree with this last way of thinking: you don't need your team to master one particular tool; having team knowledge of many will improve your likelihood of knowing better ways to solve particular problems.
For me, I use Visual Studio with ReSharper. I cannot be nearly as productive (in .NET) without it. At least, nobody has ever shown me a way to be more productive... Vim is great, and you can run Vim inside of Visual Studio + R# and get all the niceties that the IDE provides, like code navigation, code completion, and refactoring.
Same reason we use a hammer to nail things instead of rocks. It's a better tool.
Now if you are asking why you are forced to use a specific IDE over another, well that's a different topic.
A place that uses .NET will use Visual Studio 99% of the time, at least that's what I've seen. And I haven't found anything out there that is better than Visual Studio for writing .NET applications.
There is much more to an IDE than code completion:
debugging facilities
XML validation
management of servers
automatic imports
syntax checking
graphical modeling
support of popular technologies like Hibernate, TestNG or Spring
integration of source code management
indexing of file names for quick opening
follow "links" in code: implementation, declaration
searching for classes or methods
code formatting
process monitoring
one click/button debugging/building
method/variable/field/... renaming
etc
Nothing to do with incompetence from the programmers. Anybody would be A LOT less productive using vim for developing a big Java EE application.
How big were your projects at college? A couple of classes in a couple of files? Or rather a couple of hundred classes in a couple of hundred files?
Today I had the "honor" of looking at a file in a rather large project where the programmer opted to use vi (yes vi, not vim) and a handcrafted command-line compiler call (no make). The file contained one function spanning about 900 lines, with a series of if-else-if-else constructs (because that way you have all your code in one place!!!!!!). Macho programmer at his finest.
OK there are very good reasons for enforcing a particular toolset within a production environment:
Companies want to standardize everything so that if an employee leaves they can replace that person with minimal effort.
Commercial IDEs provide a complex enough environment to support a single interface for a variety of development needs and supporting varying levels of code access. For instance the same file-set could be used by the developer, by non-programmers (graphics designers etc.) and document writers.
Combine this with integrated version control and code management without the need of someone learning a particular version control system, all of a sudden IDEs start to look nicer and nicer.
It also streamlines maintenance of build systems in a multi-homed environment.
It is easier to give tutorials for an IDE via phone or video, and IDEs probably come with some.
etc. etc. and so forth.
The business decision making behind enforcing a standardized environment goes beyond the preference of a single programmer or for that matter perhaps the understanding of the programming team.
Using an IDE helps an employee to work with huge projects with minimal training. Learn a few key combos - and you will comfortably work with multi-thousand-file project in Eclipse, IDE handles most of the work for you under the hood. Just imagine how many years of learning it takes to feel comfortable developing such projects in Vim.
Besides, with an IDE it is easy to support common coding standards across the entire team: just set a couple of options and an IDE will force you to write code in a standardized way.
Plus, an IDE gives a few added bonuses like refactoring tools (especially good in Eclipse), integrated debugging (especially good in Visual Studio), IntelliSense, integrated unit tests, integrated version control, etc.
The advantages and disadvantages of using an IDE also greatly depends on the development platform. Some platforms are geared towards the use of IDEs, others are not. As a rule of thumb, you should use IDE for Java and .Net development (unless you're extremely advanced); you should not use IDE for ruby, python, perl, LISP etc development (unless you're extremely new to these languages and associated frameworks).
Features like these aren't available in vim:
Refactoring
Integrated debuggers
Knowing your code base as an integrated whole (e.g., change a Java class name; have the change reflected in a Spring XML configuration)
Being able to run an app server right inside the IDE so you can deploy and debug your code.
Those are the reasons I choose IntelliJ. I could go back to sticks and bones, but I'd be a lot less productive.
As said before, the question about using an IDE is basically about productivity. However, there are some questions that should be considered by the company when choosing a specific IDE, including:
Company culture
Standardized use of tools, making them accessible to all developers. That eases training, reduces costs, and speeds up the learning curve.
Requirements from specific contracts. For example, some development packages are fully supported (i.e., with plugins) by one IDE and not by others, so if you are working under a support contract you will want to work with the supported IDE. A concrete example is when you are working with an uncommon OS like VxWorks, where you work with Workbench (which is really Eclipse with a lot of specific plugins).
Company policy (and also I include the restriction on company budget)
Documentation relating to the IDE
Community (a strong one can contribute to and develop the IDE still further, and help you with your doubts)
Installed base (no one wants to be the only human in the world using that IDE)
Support from manufacturers (an IDE about to be discontinued probably will not be a good option)
Requirements of the IDE itself (e.g., cross-platform support, or hardware requirements that are incompatible with some of the company's machines)
Of course, there is a lot more. However, I think this short list helps you see that there are some decisions that are not so easy to make when we are talking about money and larger companies.
And if you start using your own IDE, think what a mess it will be when another developer starts doing maintenance on your code. How do you think the application will end up looking in the version manager? Now think about a company with 30+ developers, each using their own IDE (each with its own configuration files, version, and all that stuff)...
http://xkcd.com/378/
Real programmers use the best tools available to get the job done. Some companies have licenses for tools but there's nothing saying you can't license/use another IDE and then just have the other IDE open to copy/paste what you've done in your local IDE.
The question is a bit open-ended, perhaps you can make it community wiki...
As you point out, the IDE can be useful, or even a must-have, for some operations, like refactoring, or even project exploring: I use Eclipse at my work on Java projects, and I find it very useful to get a list of all occurrences of the usage of a public method or a class in a project. Likewise, I appreciate being able to rename it from where it is defined, and having all these occurrences automatically updated.
The fact I have the JavaDoc displayed when hovering over a name is very nice too. Like autocompletion, jump to a class name, etc.
And, of course, debugging facilities...
Now, usage of Eclipse isn't mandatory in our shop! Some years ago some people used the Delphi IDE (forgot its name), I tried NetBeans, etc. But I think we de facto standardized on Eclipse, though it was a natural evolution rather than a company policy. And we often just open files in a text editor when we need a quick update...

Building Cross Platform app - recommendation

I need to build a fairly simple app but it needs to work on both PC and Mac.
It also needs to be redistributable on a disc or usb drive as a standalone desktop app.
Initially I thought AIR would be perfect for this (it ticks all the API requirements), but the difficulty is making it distributable, as the app would require the AIR runtime to be installed to run.
I came across Shu Player as an option as it seems to be able to package the AIR runtime with the app and do a (silent?) install.
However this seems to break the T&C from Adobe (as outlined here) so I'm not sure about the legality.
Another option could be Zinc but I haven't tested it so I'm not sure how well it'll fit the bill.
What would you recommend or suggest I check out?
Any suggestion much appreciated
EDIT:
There are a few more discussions on Mono usage (though no real conclusion):
Here and Here
EDIT2:
Titanium could also fit the bill maybe, will check it out.
Any more comments from anyone?
EDIT3 (one year on): It's actually been almost a year since I posted this question, but it seems some people still come across it every now and then, and even contribute an answer.
Thought I'd update the question a bit. I did not get around to trying the Tcl/Tk option in the end; time constraints and uncertainty about compatibility across different OS versions led me to discard it as an option.
I did try Titanium for a bit, but though first impressions were OK, they are really pushing the mobile platform more than anything, and IMHO the desktop implementation suffers from that lack of attention. There were also some reports of problems with a Visual Studio runtime on some OSes (can't remember the details now, though). So I discarded that too.
I ended up going with XULRunner. The two major appeals were:
Firefox seems to work out of the box on most OS versions, so I took it on good faith that a XULRunner app would likely be compatible with most systems. That saved me a lot of testing, and it turned out that it did run really well on all platforms; there hasn't been a single report of the app failing to start.
It's JavaScript, baby! The language learning curve was minimal. The main thing to work out is what the additional XPCOM interfaces are and how to query them.
On the down side:
Troubleshooting errors was sometimes a difficult task; the Venkman debugger is kinda clunky, and I ended up using the console more than anything.
The SQLite interface is a great asset for a desktop app, but I often struggled to find relevant error info when something didn't work - maybe I was doing it wrong.
It took a little while to work out how to package the app as a standalone app for both PC and Mac. The final approach was to have a "shell" Mac app and a shell PC app, plus a couple of "compile" scripts that would copy the shells and add the custom source code in the correct location.
One last potential issue for some: due to the nature of XULRunner apps, your source code will be deployed with the app. You can use obfuscation if you want, but that's something to keep in mind if you want to protect your intellectual property.
All in all, great platform for a cross-platform app. I'd highly recommend it.
Tcl/Tk has one of the best packaging solutions out there. You can easily wrap a cross-platform application (implemented in a fully working virtual filesystem) with a platform-specific binary to get a single file executable for just about any modern desktop system. Search google for the terms starkit, starpack and tclkit. Such wrapped binaries are tiny in comparison to many executables these days.
Many deride Tk as being "old" or "immature" but it's one of the oldest, most stable toolkits out there. It uses native widgets when such widgets exist.
One significant drawback of Tcl/Tk, however, is that it lacks any sort of printing support. If your application needs to print you'll have to be a bit creative. There are platform-specific solutions, and the ability to generate postscript documents, and libraries to create pdfs, but it takes a little extra effort.
Java is probably your best bet, although not all Windows PCs will necessarily have Java (most should). JavaFX is new enough you can't count on it - you'll probably find a lot of machines running Java 1.5 or (shudder) 1.4. I believe recent Mac OS still ships with 1.5 (latest version may have changed to 1.6).
Consider JavaFX
It would run everywhere with a modern JRE!
AIR could be an option, but only if you don't mind distributing two different files (the offline runtime installer and your app), and expecting the user to run one and then the other. You do have to submit an online form at Adobe's site saying you agree to distribute the offline installer as-is, rather than digging out individual DLLs or whatever, before they give you the installer.
Unfortunately there's currently no way to get both an AIR app and the runtime to install from one file though. I'm not sure what the deal with Shu is, or whether it's doing anything that isn't kosher.
I would recommend Zinc. It has all the functionality you require for desktop; however, the last time I used it, it was a bit glitchy.
I got hung up trying to write a 6 MB file to disk; I thought it through and changed the code to write 512 KB chunks at a time (3 minutes of work, fast).
It probably still has some little annoying glitches, like making you think at root level, but the ease of use and the features are just way too sweet to ignore.

Will static linking on one unix distribution work but not another?

If I statically link an executable in Ubuntu, is there any chance that that executable won't work within another distribution such as Mint or Fedora? I know processor types are affected, but other than that, is there anything else I have to be wary of? Sorry if this is a dumb question. Thanks for any help.
There are a few corner cases, but for the most part, you should be in good shape with static linking. The one that comes to mind is libnss. This particular library is essentially impossible to link statically, because of the way it does its job (permissions, authentication, security tasks). As long as the glibc versions are similar, you should be OK on this issue, though.
If your program needs to work with subtle features of the kernel, like volume managers, you've got a pretty slim chance of getting your program to work, statically linked, across distros, because the kernel interfaces may change slightly.
Most typical applications (the kind for which it even makes sense to discuss portability, like network services, GUI applications, and language tools such as compilers or interpreters) won't have a problem with any of this.
If you statically link a program on one computer and then move it to another computer in which the system basically runs the same way, then it should work just fine. That's the point of static linking; that there are no other files the program depends on - it's entirely self-contained, so as long as it can run at all, it will run the same way it does on its "host" system.
This contrasts with dynamic linking, in which the program incorporates elements of other files (libraries) at runtime. If you move a dynamically linked program to another system where the libraries it depends on are different (or nonexistent), it won't work.
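A quick way to see the difference for yourself is a minimal sketch like the following (the commands assume GCC on a Linux system with the static library archives installed):

    /* hello.c - a minimal program for testing binary portability.
       Build it both ways and inspect the results:
           gcc -static hello.c -o hello_static
           gcc hello.c -o hello_dynamic
           ldd hello_dynamic   # lists the shared libraries it needs
           ldd hello_static    # reports "not a dynamic executable"
       The static binary can then be copied to another distro on the
       same architecture and run without those libraries present. */
    #include <stdio.h>

    int main(void) {
        printf("hello from a self-contained binary\n");
        return 0;
    }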
In most cases, your executable will work just fine. As long as your executable doesn't depend on anything unusual being present for it to function, there will be no problem. (And, if it does depend on something unusual being present, then you'll have the same issue even if you dynamically link.)
Statically linking is usually safer than dynamically linking for compatibility between different UNIX environments, as long as the same CPU is in use.
To have a statically linked binary fail, again assuming the same processor architecture, you would have to do something such as link on a system using the a.out binary format and try to execute it on a system running ELF, in which case the dynamically linked version would fail just as badly.
So why do people not routinely link statically? Two reasons:
It makes the executable larger, sometimes MUCH larger, and
If bugs in the libraries are fixed, you'll have to relink your program to get access to the bug fixes. If a critical security bug is fixed in the libraries, you have to relink and redistribute your exe.
On the contrary. Whatever your chances are of getting a binary to work across distributions or even OSes, those chances are maximized by static linking. Static linking makes an executable self-contained in terms of libraries. It can still go wrong if it tries to read a file that's not there on another system.
For even better chances of portability, try linking against dietlibc or some other libc. An article at Linux Journal mentions some candidates. A smaller, simpler libc is less likely to depend on things in the filesystem that differ from distro to distro.
I would, for the reasons noted above, avoid statically linking something unless you absolutely must.
That being said, it should work on any other similar kernel of the same architecture. Be aware, though, that if you statically link on a machine running Linux 2.4.x, the VDSO is going to be different on Linux 2.6 (the VDSO being the virtual dynamic shared object, a shared object that the kernel exposes to every process, containing loader code).
Other pitfalls include things in /etc not being where you'd think, logs being in different places, system utilities being absent or different (ubuntu uses update-rc.d, RHEL uses chkconfig), etc.
Sometimes you just have no choice. I was writing a program that talked to LVM2's string-based cmdlib interface in favor of using execv(); lo and behold, 30% of the distros I needed to support did NOT include that library and offered no way of getting it. So I had to link against the static object when producing binary packages.
If you are using glibc, you can be confident that stuff like getpwnam() and friends will still work; just make sure to watch any hard-coded paths (better yet, make them configurable at run time).
As long as you can guarantee it'll only be executed on a similar version of the OS on similar hardware, your program will work fine if it's statically linked. So, if you build for a 2.6 Linux and statically link, you will be fine to run on (almost) all 2.6 Linux distributions.
Be warned you can't statically link some parts of GLIBC so if you're using them you'll have to dynamically link anyway. From memory the name service stuff (nss) parts required dynamic linking when I was investigating it.
You can't statically link a program for (say) Linux then expect it to run on BSD or Windows. BSD and Unix don't present or handle their system calls in the same way Linux does. I tell a slight lie because the BSDs have a Linux emulation layer that can be enabled, but out of the box it won't work.
No, it will not work. Static linking for distribution independence is a concept from the old Unix ages and is not recommended. In fact, you often can't, as many libraries are not available as static libraries anyway.
Follow the Linux Standard Base way, this is your only chance to get as much cross distribution portability as possible.
The LSB also works fine if you program for FreeBSD and Solaris.
There are two compatibility questions at issue here: library versions and library inventory.
You don't say what libraries you are using.
If you have no '-l' options, then the only 'library' is glibc itself, which serves as the interface to the kernel. Glibc versions are upward compatible. If you link on a glibc 2.x system you can run on a glibc 2.y, for y > x. The developers make a firm commitment to this.
If you have -l options, static linking is always safe. If you are dynamically linked, you have to ensure that (1) the library is present on the target system, and (2) has a compatible version. Your Mileage Might Vary as to whether the target distro has what you need.
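To check the glibc side of this in practice, here is a minimal sketch; it assumes glibc, since gnu_get_libc_version() is a glibc-specific call that does not exist on other C libraries:

    /* glibc_check.c - print the glibc version the program is running
       against, useful when relying on the "link on 2.x, run on 2.y,
       for y > x" guarantee described above.
           gcc glibc_check.c -o glibc_check && ./glibc_check
       To list the versioned glibc symbols a dynamically linked binary
       requires:  objdump -T ./some_binary | grep GLIBC_ */
    #include <stdio.h>
    #include <gnu/libc-version.h>

    int main(void) {
        printf("running against glibc %s\n", gnu_get_libc_version());
        return 0;
    }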