Is it possible to embed LLVM Interpreter in my software and does it make sense?

Suppose I have a piece of software and I want to make cross-platform plugins for it. You compile the plugin for a virtual machine, and any platform running my software would be able to run the code.
I am wondering if it is possible to use the LLVM interpreter and LLVM bytecode for this purpose. I am also wondering whether it makes sense to use LLVM for this instead of something else, i.e. is this what LLVM was made for?

I'm not sure that LLVM was designed for it. However, I doubt there is anything that hasn't been done using LLVM [1].
Other virtual-machine-based script engines are created specifically for this job:
Lua is very popular
Wikipedia lists some other Extension/embeddable languages under the Scripting language entry
If you're looking for embeddable virtual machines:
IKVM supports embedding JVM and CLR in a bridged mode (interoperable)
Parrot supports embedding (and includes a Python interpreter; mind you, you can just run Python bytecode images)
Perl has a similar architecture and supports embedding
JavaScript supports embedding (not sure about the architecture of V8, but I guess it uses a virtual machine)
Mono's CLR engine supports embedding: http://www.mono-project.com/Embedding_Mono
[1] Including compiling C++ code to JavaScript to run in your browser...

There is VMIR (https://github.com/andoma/vmir), which is an LLVM bitcode interpreter / JIT engine that's intended to be embedded into other apps.
Disclaimer: I'm the author of it and it's still work in progress, but it works reasonably well.

In theory, there exists a limited subset of LLVM IR which can be portable across various platforms. You must not specify alignments, you must not bitcast pointers to integral types, you must avoid intrinsics, etc. This means you cannot immediately use code generated by a stock C compiler (llvm-gcc, Clang, whatever) unless you specify a limited target for it and implement sanitising LLVM passes. Another issue is that the bitcode format from different LLVM versions is not guaranteed to be compatible.
In practice, I would not go there. Mono is a reasonably small, embeddable, fast VM, and the whole .NET stack of tools is available for it. The VM itself is pretty low-level (as long as you do not care about verifiability).

LLVM includes an interpreter, so if you can build this interpreter for your target platforms, you can then evaluate LLVM bitcode on the fly.
It's apparently not so fast though.
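For illustration, here is a rough sketch of what embedding the interpreter via LLVM's C++ API can look like (the API changes between LLVM versions, and the bitcode file name and entry symbol are hypothetical placeholders):
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/GenericValue.h"
#include "llvm/ExecutionEngine/Interpreter.h"   // pulls in the interpreter engine
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/SourceMgr.h"
#include <memory>
#include <string>
#include <vector>

int main() {
    llvm::LLVMContext ctx;
    llvm::SMDiagnostic diag;
    // "plugin.bc" stands in for bitcode produced by your plugin build.
    std::unique_ptr<llvm::Module> mod = llvm::parseIRFile("plugin.bc", diag, ctx);
    if (!mod) return 1;
    std::string err;
    llvm::ExecutionEngine *ee = llvm::EngineBuilder(std::move(mod))
                                    .setEngineKind(llvm::EngineKind::Interpreter)
                                    .setErrorStr(&err)
                                    .create();
    if (!ee) return 1;
    // "plugin_entry" is a hypothetical function the plugin is assumed to export.
    if (llvm::Function *fn = ee->FindFunctionNamed("plugin_entry")) {
        std::vector<llvm::GenericValue> args;   // no arguments in this sketch
        ee->runFunction(fn, args);
    }
    delete ee;
    return 0;
}
Since the interpreter walks the IR instruction by instruction, this is convenient but noticeably slower than the JIT engine kinds.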

In their classic discussion about LLVM vs. libJIT (one you do not want to miss if you're a fan of open source, LLVM, and compilers), which happened long before LLVM became famous and established, the author of libJIT, Rhys Weatherley, raised this particular issue: he stated that LLVM is not suitable for embedding. Chris Lattner, the author of LLVM, stated otherwise: that it is modular and you can use it in any possible fashion, including embedding only the parts you need.

Related

How to make a normal C library work with embedded environment?

I was recently asked about how to use a C library (Cello in this case) in an embedded environment, but I'm not sure how to go about that.
Is it correct to say that if a library can be compiled in the embedded environment, it can be used?
Should I care about making the library more lightweight or something like that?
Any suggestions are appreciated.
To have it compile is the bare minimum. Notably, most embedded systems are freestanding systems, such as microcontroller and RTOS applications. Compilers for freestanding systems need not provide all standard library headers; the only mandatory ones are (C17 4/6):
<float.h>, <iso646.h>, <limits.h>, <stdalign.h>, <stdarg.h>, <stdbool.h>,
<stddef.h>, <stdint.h>, <stdnoreturn.h>
In addition, the embedded system need not support floating point arithmetic. Some systems implement software floating point support, but using that is very bad practice. If your MCU does not have an FPU, you should not be using floating point arithmetic, or you picked the wrong MCU for the task, period.
"I need to represent this number with decimals internally or to the user" is not a valid reason for using floating point. Fixed point arithmetic should be used for that. You only need floating point if you are to use math libraries like math.h and more advanced math.
Traditionally, embedded system compilers have been slow to adopt the latest C standard. It's been quite a while since the C11 release now though, so at the moment all useful compilers have caught up with it (C17 only contains minor changes, so we can likely ignore that one). Historically, embedded compilers have been horribly bad at this though, so remain sceptical. There shouldn't be any reason to pick a compiler without C11 support for new product development.
Summary for getting the lib to compile (bare minimum):
Does the library use hosted system headers, and if so does the embedded compiler support them?
Does the library use floating point, and if so, does the target system have an FPU, or at least a software floating point lib?
Does the library rely on the latest C standards and if so does the embedded compiler support them?
With that out of the way, you have to consider whether the library is at all written to be portable. Did they take care with things like integer types, enums and alignment? Are they using stdint.h, or are they using "sloppy typing" with int all over the place? Did they consider endianness? Is the lib using dynamic allocation, which is banned in most embedded systems? Is it compatible with industry standards like MISRA-C? And so on.
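To make the stdint.h and endianness points concrete, here is a small hypothetical sketch of the portable style that survives such a port: fixed-width types and byte-wise access instead of endianness-dependent pointer casts:
#include <cstdint>
#include <cstdio>

// Assemble a 32-bit value from little-endian wire bytes; the result is the
// same regardless of the host CPU's endianness.
std::uint32_t read_le32(const std::uint8_t *p) {
    return  static_cast<std::uint32_t>(p[0])
         | (static_cast<std::uint32_t>(p[1]) << 8)
         | (static_cast<std::uint32_t>(p[2]) << 16)
         | (static_cast<std::uint32_t>(p[3]) << 24);
}

int main() {
    const std::uint8_t wire[4] = {0x78, 0x56, 0x34, 0x12};   // 0x12345678, little-endian
    std::printf("%08lx\n", static_cast<unsigned long>(read_le32(wire)));
}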
Then there are optimizations to consider on top of that. Optimizing code for microcontrollers is very different from optimizing code for PC CPUs.
A brief glance at the various "compiler switches" (#ifdef) present usually gives a clue about how portable the code is. Looking (very briefly) at this Cello lib, they seem to have considered porting between mainstream x86 systems, but that's it. You would have to rewrite pretty much the whole lib if you were to port it to an embedded system. The work effort depends on how alien the target CPU is compared to x86. Porting to a high-end little-endian Cortex-A might not require much effort. Porting to some low-end crap MCU would require a monumental effort.
Code portability is a big topic and requires very competent C programmers. Making the very same code run on, for example, an x86-64 and a crappy 8-bit MCU is not a trivial task.
Professional libs like protocol stacks usually come with a system port for a specific MCU, where they have taken not just generic portability into account, but also the specific system.
Not all libraries that can be compiled can be used in embedded environments. Libraries that use malloc and free (or their C++ counterparts) are dangerous and should therefore be handled with care. These libraries can result in non-deterministic behaviour because memory allocations can fail.
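As a sketch of the statically allocated alternative that embedded code typically prefers (the pool and slot sizes are arbitrary for the example):
#include <cstddef>
#include <cstdint>

// A fixed-size, statically allocated pool instead of malloc/free:
// allocation failure is explicit and memory use is known at link time.
constexpr std::size_t kPoolSlots = 8;
constexpr std::size_t kSlotSize  = 64;
static std::uint8_t pool[kPoolSlots][kSlotSize];
static bool in_use[kPoolSlots];

void *pool_alloc() {
    for (std::size_t i = 0; i < kPoolSlots; ++i) {
        if (!in_use[i]) { in_use[i] = true; return pool[i]; }
    }
    return nullptr;   // out of slots: the caller must handle this deterministically
}

void pool_free(void *p) {
    for (std::size_t i = 0; i < kPoolSlots; ++i) {
        if (pool[i] == p) { in_use[i] = false; return; }
    }
}

int main() {
    void *a = pool_alloc();
    pool_free(a);
}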
It is possible that the standard C library could be wholly compiled for embedded devices, but that doesn't mean you'll have much use for printf or scanf. So a better question than whether you can compile it is whether you should use it. Cello seems like a fun experiment but isn't a stable platform to develop something real on. It can be done, though; an example of that is the Espruino.
Most of the time it is a bad idea to rewrite a library to be "lightweight" or, more importantly for an embedded environment, statically allocated. You are probably not as smart as the original authors, or you won't put in the time needed to create a complete, functional embedded fork that is as stable as the original, or even better. Don't be dissuaded from a fun little side project, but don't depend on it for a real project.
Another problem could be that the library is too big for your microcontroller. The ATmega32A only has 32 KB of programmable flash. To take a C++ example off the top of my head: Boost won't fit in that space, for all the highly useful tools that it provides.

Is there any high-level, natively-compiled object-oriented language in wide use?

There are lots of OOP languages, but I couldn't find any that have conveniences like garbage collection yet compile natively to machine code. Sort of in between C and Java/C#. One interesting language I found was Vala, but that's limited to the GNOME platform and is not that well known.
Go is probably closest.
But why on earth do you want it natively compiled anyway?
JIT compilation of portable bytecode has proved to be an extremely effective strategy. It compiles down to native code at runtime (so you get up to the performance of native code after the first few iterations) and it avoids the issues of having to build and manage platform-specific compiled binaries.
Are you thinking about C++? That is in high usage and can be compiled on nearly any (major) platform.
If you want to use an OO language that compiles down to native code, you will "always" have to use header files and the like, because the ELF format doesn't support OO (there is no class information in ELF). If you want to use classes from external libs, you need to make the compiler aware somehow of the fact that there are classes, functions, etc. declared outside of your project. In C++ this is solved by the use of header files. So that is, I think, a major drawback of native object-oriented languages. To resolve that issue, a few tweaks would need to be made to the ELF format, the loader and the linker in order to support features like "linking" against "classes". Mechanisms for garbage collection, on the other hand, can be implemented even in native languages, although that seems like a poor fit for OS implementation.
There are C++ libs to do that for userspace apps, like:
Boehm collector
Smart pointers (see the sketch below)
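For example, a minimal smart-pointer sketch in plain standard C++ (nothing library-specific), showing scope-based, reference-counted cleanup in place of a garbage collector:
#include <iostream>
#include <memory>
#include <string>

struct Session {
    std::string user;
    ~Session() { std::cout << "session for " << user << " released\n"; }
};

int main() {
    // Reference-counted ownership: the object is destroyed automatically
    // when the last shared_ptr goes out of scope, with no explicit delete.
    auto s = std::make_shared<Session>();
    s->user = "alice";
    {
        std::shared_ptr<Session> copy = s;   // reference count is now 2
    }                                        // back to 1 here
    return 0;
}   // count reaches 0, the destructor runs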

Can I use variadic templates (but none of the other c++0x features) in g++?

The thinking is that since variadic templates are a compile time feature, there will be little ABI impact or runtime behaviour change. Is this possible?
I specifically want the benefit of faster compile times for boost::mpl::vector and boost::mpl::string.
Rephrasing the question...
Is it possible to mix C++03 and C++11 code by separating them into libraries? I.e. we use quite a few 3rd-party C++ libraries which are compatible with gcc 4.3, but we are moving on to gcc 4.7 and intend to use C++11 features where possible / where it makes sense. Or is it impossible to mix C++11 and C++03?
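For reference, here is a minimal sketch of the kind of C++11 variadic-template code in question (illustrative only, not taken from the original post):
#include <iostream>

// Everything below is resolved at compile time, which is the basis of the
// "little ABI impact" argument in the question.
template <typename T>
T sum(T value) { return value; }

template <typename T, typename... Rest>
T sum(T first, Rest... rest) { return first + sum(rest...); }

int main() {
    std::cout << sum(1, 2, 3, 4) << '\n';   // prints 10
}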
You should compile and link everything using the same tools running in compatible modes. You can't cherry-pick features like this.
The ABI impact comes in the form of, for example, increased virtual function tables for standard I/O classes. It is not safe to mix things around.
I can't give a qualified answer, but from what I understand, lots of people would be concerned if this kind of backward compatibility were broken. As far as I understand, there is nothing in the new C++11 that makes it necessary to rebuild everything. Thus, it could only be your specific compiler that would make that necessary. For GCC I don't expect it, although the different libstdc++ versions could create "issues".
My strong guess is that on typical (Intel) Linuxes you should be able to create two independent libs with different, decently new versions of gcc (maybe > 4.x) and use/link them into a final program. You may have some things in there twice, though. I had some minor, solvable issues with threads in 4.7.0 and <thread>. I don't know if they would create a good or bad mix with other thread libs (e.g. Boost). However, you don't want to use gcc 4.7.0 for your production code yet, and before a final gcc compiler is out, only a statement from the responsible project's team can give you certainty.

Will static linking on one unix distribution work but not another?

If I statically link an executable in Ubuntu, is there any chance that that executable won't work within another distribution such as Mint or Fedora? I know processor types are a factor, but other than that is there anything else I have to be wary of? Sorry if this is a dumb question. Thanks for any help.
There are a few corner cases, but for the most part, you should be in good shape with static linking. The one that comes to mind is libnss. This particular library is essentially impossible to link statically, because of the way it does its job (permissions, authentication, security tasks). As long as the glibc-versions are similar, you should be ok on this issue, though.
If your program needs to work with subtle features of the kernel, like volume managers, you've got a pretty slim chance of getting your program to work, statically linked, across distros, because the kernel interfaces may change slightly.
Most typical applications, the kind for which it even makes sense to discuss portability, like network services, GUI applications, and language tools (compilers/interpreters), won't have a problem with any of this.
If you statically link a program on one computer and then move it to another computer in which the system basically runs the same way, then it should work just fine. That's the point of static linking; that there are no other files the program depends on - it's entirely self-contained, so as long as it can run at all, it will run the same way it does on its "host" system.
This contrasts with dynamic linking, in which the program incorporates elements of other files (libraries) at runtime. If you move a dynamically linked program to another system where the libraries it depends on are different (or nonexistent), it won't work.
In most cases, your executable will work just fine. As long as your executable doesn't depend on anything unusual being present for it to function, there will be no problem. (And, if it does depend on something unusual being present, then you'll have the same issue even if you dynamically link.)
Statically linking is usually safer than dynamically linking for compatibility between different UNIX environments, as long as the same CPU is in use.
To have a statically linked binary fail, again assuming the same processor architecture, you would have to do something such as link on a system using the a.out binary format and try to execute it on a system running ELF, in which case the dynamically linked version would fail just as badly.
So why do people not routinely link statically? Two reasons:
It makes the executable larger, sometimes MUCH larger, and
If bugs in the libraries are fixed, you'll have to relink your program to get access to the bug fixes. If a critical security bug is fixed in the libraries, you have to relink and redistribute your exe.
On the contrary. Whatever your chances are of getting a binary to work across distributions or even OSes, those chances are maximized by static linking. Static linking makes an executable self-contained in terms of libraries. It can still go wrong if it tries to read a file that's not there on another system.
For even better chances of portability, try linking against dietlibc or some other libc. An article at Linux Journal mentions some candidates. A smaller, simpler libc is less likely to depend on things in the filesystem that differ from distro to distro.
I would, for the reasons noted above, avoid statically linking something unless you absolutely must.
That being said, it should work on any other similar kernel of the same architecture. But note, for example, that if you statically link on a machine running Linux 2.4.x, the loader VDSO will be different on Linux 2.6 (the VDSO being the virtual dynamic shared object, a shared object that the kernel exposes to every process and that contains loader code).
Other pitfalls include things in /etc not being where you'd think, logs being in different places, system utilities being absent or different (ubuntu uses update-rc.d, RHEL uses chkconfig), etc.
Sometimes you just have no choice. I was writing a program that talked to LVM2's string-based cmdlib interface instead of using execv()... lo and behold, 30% of the distros I needed to support did NOT include that library and offered no way of getting it. So I had to link against the static object when producing binary packages.
If you are using glibc, you can be confident that stuff like getpwnam() and friends will still work... just make sure to watch out for any hard-coded paths (better yet, make them configurable at run time).
As long as you can guarantee it'll only be executed on a similar version of the OS on similar hardware, your program will work fine when statically linked. So, if you build for a 2.6 Linux kernel and statically link, you will be fine to run on (almost) all 2.6 Linux distributions.
Be warned that you can't statically link some parts of glibc, so if you're using them you'll have to dynamically link anyway. From memory, the name service (NSS) parts required dynamic linking when I was investigating it.
You can't statically link a program for (say) Linux and then expect it to run on BSD or Windows. BSD and other Unixes don't present or handle their system calls in the same way Linux does. I tell a slight lie because the BSDs have a Linux emulation layer that can be enabled, but out of the box it won't work.
No, it will not work. Static linking for distribution independence is a concept from the old Unix ages and is not recommended. In fact, you often can't, since many libraries are not available as static libraries anyway.
Follow the Linux Standard Base way; this is your only chance to get as much cross-distribution portability as possible.
The LSB also works fine if you program for FreeBSD and Solaris.
There are two compatibility questions at issue here: library versions and library inventory.
You don't say what libraries you are using.
If you have no '-l' options, then the only 'library' is glibc itself, which serves as the interface to the kernel. Glibc versions are upward compatible. If you link on a glibc 2.x system you can run on a glibc 2.y, for y > x. The developers make a firm commitment to this.
If you have -l options, static linking is always safe. If you are dynamically linked, you have to ensure that (1) the library is present on the target system, and (2) it has a compatible version. Your mileage may vary as to whether the target distro has what you need.

Why do we need other JVM languages

I see here that there are a load of languages aside from Java that run on the JVM. I'm a bit confused about the whole concept of other languages running in the JVM. So:
What is the advantage in having other languages for the JVM?
What is required (in high level terms) to write a language/compiler for the JVM?
How do you write/compile/run code in a language (other than Java) in the JVM?
EDIT: There were 3 follow-up questions (originally comments) that were answered in the accepted answer. They are reprinted here for legibility:
How would an app written in, say, JPython, interact with a Java app?
Also, can that JPython application use any of the JDK functions/objects?
What if it was Jaskell code, would the fact that it is a functional language not make it incompatible with the JDK?
To address your three questions separately:
What is the advantage in having other languages for the JVM?
There are two factors here. (1) Why have a language other than Java for the JVM, and (2) why have another language run on the JVM, instead of a different runtime?
Other languages can satisfy other needs. For example, Java has no built-in support for closures, a feature that is often very useful.
A language that runs on the JVM is bytecode compatible with any other language that runs on the JVM, meaning that code written in one language can interact with a library written in another language.
What is required (in high level terms) to write a language/compiler for the JVM?
The JVM reads bytecode (.class) files to obtain the instructions it needs to perform. Thus any language that is to be run on the JVM needs to be compiled to bytecode adhering to the Sun specification. This process is similar to compiling to native code, except that instead of compiling to instructions understood by the CPU, the code is compiled to instructions that are interpreted by the JVM.
How do you write/compile/run code in a language (other than Java) in the JVM?
Very much in the same way you write/compile/run code in Java. To get your feet wet, I'd recommend looking at Scala, which runs flawlessly on the JVM.
Answering your follow up questions:
How would an app written in, say, JPython, interact with a Java app?
This depends on how the implementation chooses to bridge the language gap. In your example, the Jython project has a straightforward means of doing this (see here):
from java.net import URL
u = URL('http://jython.org')
Also, can that JPython application use any of the JDK functions/objects?
Yes, see above.
What if it was Jaskell code, would the fact that it is a functional language not make it incompatible with the JDK?
No. Scala (link above) for example implements functional features while maintaining compatibility with Java. For example:
object Timer {
  def oncePerSecond(callback: () => Unit) {
    while (true) { callback(); Thread sleep 1000 }
  }
  def timeFlies() {
    println("time flies like an arrow...")
  }
  def main(args: Array[String]) {
    oncePerSecond(timeFlies)
  }
}
You need other languages on the JVM for the same reason you need multiple programming languages in general: different languages are better at solving different problems... static typing vs. dynamic typing, strict vs. lazy... declarative, imperative, object-oriented... etc.
In general, writing a "compiler" for another language to run on the JVM (or on the .Net CLR) is essentially a matter of compiling that language into java bytecode (or in the case of .Net, IL) instead of to assembly/machine language.
That said, a lot of the extra languages being written for the JVM aren't compiled, but are rather interpreted scripting languages...
Turning this on its head, consider you want to design a new language and you want it to run in a managed runtime with a JIT and GC. Then consider that you could:
(a) write your own managed runtime (VM) and tackle all sorts of technically difficult issues that will doubtless lead to many bugs, bad performance, improper threading and a great deal of portability effort
or
(b) compile your language into bytecode that can run on the Java VM, which is already quite mature, fast and supported on a number of platforms (sometimes with more than one choice of vendor implementation).
Given that the Java VM bytecode is not tied so closely to the Java language as to unduly restrict the type of language you can implement, it has been a popular target environment for languages that want to run in a VM.
Java is a fairly verbose programming language that is quickly being outdated by all of the fancy new languages/frameworks that have come out in the past 5 years. To support all the fancy syntax that people want in a language AND preserve backwards compatibility, it makes more sense to add more languages to the runtime.
Another benefit is that it lets you run web frameworks written in Ruby, à la JRuby (i.e. Rails), or Grails (essentially Groovy on Rails), etc. on a proven hosting platform that is likely already in production at many companies, rather than having to use Ruby hosting environments that are not nearly as tried and tested.
To compile the other languages, you are just converting them to Java bytecode.
I would answer, “because Java sucks” but then again, perhaps that's too obvious … ;-)
The advantage to having other languages for the JVM is much the same as the advantage to having other languages for computers in general: while all Turing-complete languages can technically accomplish the same tasks, some languages make some tasks easier than others, while other languages make other tasks easier. Since the JVM is something we already have the ability to run on all (well, nearly all) computers, and a lot of computers in fact already have it, we can get the "write once, run anywhere" benefit, but without requiring that one uses Java.
Writing a language/compiler for the JVM isn't really different from writing one for a real machine. The real difference is that you have to compile to the JVM's bytecode instead of to the machine's executable code, but that's really a minor difference in the grand scheme of things.
Writing code for a language other than Java in the JVM really isn't different from writing Java except, of course, that you'll be using a different language. You'll compile using the compiler that somebody writes for it (again, not much different from a C compiler, fundamentally, and pretty much not different at all from a Java compiler), and you'll end up being able to run it just like you would compiled Java code since once it's in bytecode, the JVM can't tell what language it came from.
Different languages are tailored to different tasks. While certain problem domains fit the Java language perfectly, some are much easier to express in alternative languages. Also, for a user accustomed to Ruby, Python, etc, the ability to generate Java bytecode and take advantage of the JDK classes and JIT compiler has obvious benefits.
Answering just your second question:
The JVM is just an abstract machine and execution model. So targeting it with a compiler is just the same as targeting any other machine and execution model, whether implemented in hardware (x86, CELL, etc.) or software (Parrot, .NET). The JVM is fairly simple, so it's actually a fairly easy target for compilers. Also, implementations tend to have pretty good JIT compilers (to deal with the lousy code that javac produces), so you can get good performance without having to worry about a lot of optimizations.
A couple of caveats apply. First, the JVM directly embodies Java's module and inheritance system, so trying to do anything else (multiple inheritance, multiple dispatch) is likely to be tricky and require convoluted code. Second, JVMs are optimized to deal with the kind of bytecode that javac produces. Producing bytecode that is very different from that is likely to hit odd corners of the JIT compiler/JVM, which will likely be inefficient at best (at worst, it can crash the JVM or at least give spurious VirtualMachineError exceptions).
What the JVM can do is defined by the JVM's bytecode (what you find in .class files) rather than the source language. So changing the high level source code language isn't going to have a substantial impact on the available functionality.
As for what is required to write a compiler for the JVM, all you really need to do is generate correct bytecode / .class files. How you write/compile code with an alternate compiler sort of depends on the compiler in question, but once the compiler outputs .class files, running them is no different than running the .class files generated by javac.
The advantage for these other languages is that they get relatively easy access to lots of Java libraries.
The advantage for Java people varies depending on the language -- each has a story to tell Java coders about what it does better. Some will stress how they can be used to add dynamic scripting to JVM-based apps, others will just talk about how their language is easier to use, has a better syntax, or so forth.
What's required are the same things to write any other language compiler: parsing to an AST, then transforming that to instructions for the target architecture (byte code) and storing it in the right format (.class files).
From the user's perspective, you just write code and run the compiler binaries, and out come .class files that you can mix in with those your Java compiler produces.
The .NET languages are more for show than actual usefulness. Each language has been so butchered, that they're all C# with a new face.
There are a variety of reasons to provide alternative languages for the Java VM:
The JVM is multiplatform. Any language ported to the JVM gets that as a free bonus.
There is quite a bit of legacy code out there. Antiquated engines like ColdFusion perform better while offering customers the ability to slowly phase their applications from the legacy solution to the modern solution.
Certain forms of scripting are better suited to rapid development. JavaFX, for example, is designed with rapid Graphical development in mind. In this way it competes with engines like DarkBasic. (Processing is another player in this space.)
Scripting environments can offer control. For example, an application may wish to expose a VBA-like environment to the user without exposing the underlying Java APIs. Using an engine like Rhino can provide an environment that supports quick and dirty coding in a carefully controlled sandbox.
Interpreted scripts mean that there's no need to recompile anything. No need to recompile translates into a more dynamic environment. e.g. Despite OpenOffice's use of Java as a "scripting language", Java sucks for that use. The user has to go through all kinds of recompile/reload gyrations that are unnecessary in a dynamic scripting environment like Javascript.
Which brings me to another point. Scripting engines can be more easily stopped and reloaded without stopping and reloading the entire JVM. This increases the utility of the scripting language as the environment can be reset at any time.
It's much easier for a compiler writer to generate JVM or CLR byte-codes. They are a much cleaner and higher level abstraction than any machine language. Because of this, it is much more feasible to experiment with creating new languages than ever before, because all you have to do is target one of these VM architectures and you will have a set of tools and libraries already available for your language. They let language designers focus more on the language than all the necessary support infrastructure.
Because the JSR process is rendering Java more and more dead: http://www.infoq.com/news/2009/01/java7-updated
It's a shame that even essential and long known additions like Closures are not added just because the members cannot agree on an implementation.
Java has accumulated a massive user base over seven major versions (from 1.0 to 1.6). Its capability to evolve is limited by the need to preserve backwards compatibility for the uncountable millions of lines of Java code running in production.
This is a problem because Java needs to evolve to:
compete with newer programming languages that have learned from Java's successes and failures.
incorporate new advances in programming language design.
allow users to take full advantage of advances in hardware - e.g. multi-core processors.
fix some cutting edge ideas that introduced unexpected problems (e.g. checked exceptions, generics).
The requirement for backwards compatibility is a barrier to staying competitive.
If you compare Java to C#, Java has the advantage in mature, production ready libraries and frameworks, and a disadvantage in terms of language features and rate of increase in market share. This is what you would expect from comparing two successful languages that are one generation apart.
Any new language has the same advantage and disadvantage that C# has compared to Java to an extreme degree. One way of maximizing the advantage in terms of language features, and minimizing the disadvantage in terms of mature libraries and frameworks is to build the language for an existing virtual machine and make it interoperable with code written for that virtual machine. This is the reason behind the modest success of Groovy and Clojure; and the excitement around Scala. Without the JVM these languages could only ever have occupied a tiny niche in a very specialized market segment, whereas with the JVM they occupy a significant niche in the mainstream.
They do it to keep up with .NET. .NET allows C#, VB, J# (formerly), F#, Python, Ruby (coming soon), and C++. I'm probably missing some. Probably the big one in there is Python, for the scripting people.
To an extent it is probably an 'Arms Race' against the .NET CLR.
But I think there are also genuine reasons for introducing new languages to the JVM, particularly when they will be run "in parallel": you can use the right language for the right job. A scripting language like Groovy may be exactly what you need for your page presentation, whereas regular old Java is better for your business logic.
I'm going to leave someone more qualified to talk about what is required to write a new language/compiler.
As for how to write the code, you do it in Notepad/vi as usual! (Or use a development tool that supports the language if you want to do it the easy way.) Compiling will require a special compiler for the language that will interpret and compile it into bytecode.
Since Java also produces bytecode, technically you don't need to do anything special to run it.
The reason is that the JVM platform offers a lot of advantages.
Giant number of libraries
Broader degree of platform implementations
Mature frameworks
Legacy code that's already part of your infrastructure
The languages Sun is trying to support with their scripting spec (e.g. Python, Ruby) are up-and-comers largely due to their perceived productivity enhancements. Running Jython allows you to, in theory, be more productive and to leverage the capabilities of Python to solve problems more suited to Python, while still being able to integrate, at the runtime level, with your existing codebase. The classic implementations of Python and Ruby offer the same ability for C libraries.
Additionally, it's often easier to express some things in a dynamic language than in Java. If this is the case, you can go the other way; consume Python/Ruby libraries from Java.
There's a performance hit, but many are willing to accept that in exchange for a less verbose, clearer codebase.