Cross-compiling vs virtualization

I want my app to run on Windows and Ubuntu, in both 32- and 64-bit modes, so I have to compile four times and also test four times. The question is whether it's better to cross-compile or to compile inside a virtual machine (VM) such as VirtualBox.
I know cross-compiling is hard the first time, but that way I can keep the testing VMs "clean", with no development tools installed that might hide missing files on the end user's PC. On the other hand, compiling directly in a VM is much simpler.
So I ask:
What are other pros/cons for each method?
Which is the right way?
Which is the most used way and why?

TL;DR: In this case, skip cross-compilation. Build and test on each target platform directly.
Details: If you need to ship your software on these 4 platforms, you will need either physical or virtual manifestations of them, regardless of whether you cross-compile or compile natively on the target platform.
Why? Because you will want to run tests on every target platform, not just one.
Why? Because your cross-compiler could have bugs on one platform but not another, and because 32-bit vs. 64-bit, like Linux vs. Windows, are different enough that a bug can show up on one and not the others. For example, if you only run tests on Ubuntu 32-bit but cross-compile to Windows 64-bit and ship it, you might discover a problem only once it reaches the customer.
Cross-compiling is hard to set up and hard to maintain. Given that you're going to want to test the code, the installation, etc. on every one of these platforms, you might as well skip cross-compilation and just run builds and tests on each target platform directly.
Speaking of keeping VM state "clean": don't set the VMs up manually; create them from scratch every time. Use tools like Packer and Vagrant to automate the VM builds, and start from a clean VM on every run so the process is reproducible and hermetic.
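For illustration, here is a minimal Python sketch of that kind of automation, driving the Vagrant command line. The target names, directory layout, and build/test scripts are placeholders, not anything from the question; a Windows guest would also need the WinRM communicator rather than vagrant ssh.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: rebuild a clean VM per target and run the build inside it.
Assumes each ./targets/<name> directory contains a suitable Vagrantfile."""
import subprocess

TARGETS = ["ubuntu32", "ubuntu64", "windows32", "windows64"]  # made-up names

def clean_build(target: str) -> None:
    vm_dir = f"./targets/{target}"
    # Throw away any leftover VM so every run starts from the pristine base box.
    subprocess.run(["vagrant", "destroy", "-f"], cwd=vm_dir, check=False)
    subprocess.run(["vagrant", "up"], cwd=vm_dir, check=True)
    # Build and test inside the guest; /vagrant is the default synced project folder.
    subprocess.run(
        ["vagrant", "ssh", "-c", "cd /vagrant && ./build.sh && ./run_tests.sh"],
        cwd=vm_dir, check=True)
    # Tear the VM down again so nothing accumulates between runs.
    subprocess.run(["vagrant", "destroy", "-f"], cwd=vm_dir, check=True)

if __name__ == "__main__":
    for target in TARGETS:
        clean_build(target)
```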

Related

MXE: compile on Linux for Linux

So I saw this project, http://mxe.cc/, and tried it; it seems very easy to compile things for Windows with it. I tried to hack it a little to compile binaries for Linux instead, because if it compiles for another system so easily, how hard can it be to compile for the host? About 90% of the packages seem to build out of the box, but there are some errors, so I cannot finish the build. My question is: how should I configure MXE to build for the Linux host? I know this is not supported, but I don't think it should be that hard, since everything is built from source anyway, and the downloaded sources need next to no modifications (for a Windows build, at least).
For those wondering why I don't want to use shared libraries, I basically want to have two options:
dpkg package for user with dependencies specified (the linux way)
single standalone static executable
Any suggestions? Or is there perhaps another guide on how to build things from scratch on Linux (without a lot of manual work, the way MXE automates it for Windows)?

Native standalone executable with Smalltalk?

I really like the live Smalltalk environment (though I have only experimented a bit with Pharo), but there is one thing that keeps me from using it for everyday development: it seems impossible to create a native standalone executable from a Smalltalk system. By a native standalone executable I mean a single executable file (PE on Windows, ELF on Linux, Mach-O on macOS) that a user can run by double-clicking it, without having to install any additional execution environment. Am I missing something, or is it in fact possible to create a native standalone executable with Smalltalk?
To talk about Pharo specifically: I know that Pharo includes an efficient just-in-time compiler (which generates true native code from the VM bytecode), and I know that the image can be stripped down by cutting out code my application will never need. So we basically already have almost everything (except the linker, I guess) needed to create native standalone executables. Cross-compilation shouldn't be a problem either, if we put all the code-generation machinery (for all target processors) into the image.
I know that in the Smalltalk world it is considered a good thing to deliver the whole image separately from the runtime environment, so that users can hack on the software they are using. However, I don't see any good reason why it shouldn't be possible to deliver Smalltalk software as a fully compiled native standalone executable. Could you explain why this is not a common thing to do in the Smalltalk world? Is there any good Smalltalk implementation that allows it?
To sum up: I dream of a live Smalltalk environment where I could develop and test my software, and then (when the software is actually ready for delivery) cross-compile it to native executables for Windows, Linux and macOS from my single development machine. That would be really awesome.
Ironically enough, there is one thing an exe does need to have preloaded: your OS. The reason C/C++ executables can be so light is that your OS already acts much the way the image does, with a ton of preloaded libraries; you spend several GB of memory just to get a simple calculator started. Your OS is essentially a collection of C/C++ libraries.
Things are no prettier with other languages like Python, Java, etc. Even if the app is smaller, it still depends on those libraries, and they come with quite big runtimes that need to be installed whether your app uses them or not.
Pharo, and Smalltalk in general, is a different case, because it aspires to be a virtual OS in itself. The difference from a real OS is that the Smalltalk image is designed to be easily hacked by the user.
That said, you can rename the Pharo executable, change its icon, and disable the IDE tools inside Pharo so your user sees only the GUI of your app. Applications like Dr. Geo and Phratch already do this.
Compiling a Pharo project to a native executable would not make much difference, because a) the Pharo VM is already a native executable, and b) you don't need one anyway, since Pharo is already standalone and does not even need to be installed.
My advice is stop worrying about things that do not really matter and enjoy learning how powerful and fun Pharo can be.
Not Pharo, but a native, compiled (through ANSI C or its own JIT) Smalltalk, with the ability to load pre-compiled DLLs or [JIT-]compile code on demand:
Smalltalk/X
http://www.exept.de/en/products/smalltalk-x.html
or (unofficial enhanced development version):
Smalltalk/X JV branch - https://swing.fit.cvut.cz/projects/stx-jv/
Nearly completely open-source. Everything except librun (the core runtime: memory management, JIT, etc.) and the stc (Smalltalk-to-C) compiler is already open-source. Claus Gittinger / Exept have also promised/confirmed that they would release the remaining sources if they were ever to stop further development (AFAIK, those parts are closed only because of concerns of existing clients).
I highly recommend checking it out; it is a wonder that such a great implementation is so little known.
You might also check out Dolphin Smalltalk from Object Arts. It is Windows-only, but the very best IDE, bar none. If you do anything in Smalltalk, you should buy a copy. (They also have a free non-commercial version, but you will want to support the kind of craftsmanship behind it by buying the Pro version. An absolutely kick-ass product, IMHO.)
It will produce a standalone exe, if that's what you want. I made an exe of a medium-featured wiki with it, and the exe was less than 1 MB. That is not a typo.
-Jim
The problem is that, in this respect, Pharo cannot be compared to natively compiled languages like C or C++, but rather to Java, Python, Ruby and other languages built around a virtual machine.
In these languages, you produce jars, eggs or gems to distribute your project.
In Pharo, you produce a "production image" using the techniques you already mentioned. But nothing prevents you from delivering an artifact that also includes the Pharo VM (it is only about 2 MB, after all), and you can set up your app to detect and open your production image automatically (without asking for it).
It is about as practical with Smalltalk as with other languages: not very. As soon as you create a somewhat larger application, you start depending on other libraries/applications being installed. If you compile them statically into the application, you have created a much larger application that takes longer to download and needs to be updated as soon as a security problem is found in any of the dependencies. If you don't, your application is no longer startable with a double-click.
There are two directions for solutions: web applications, and installers and package managers.
Squeak still maintains its one-click installer, allowing the same set of files to work on Windows, Mac and Linux. Pharo used to have that too, but moved to separate builds per platform. The need hasn't been great enough for the one-click build to be reinstated; it is mostly seen as useful for carrying a cross-platform environment around on a USB stick. With the move to the 64-bit Spur VM, the dependency problems will lessen, as more of the needed libraries come pre-installed on those platforms.
Dolphin Smalltalk can produce a standalone .exe for Windows.
This is a key feature of the Pro version.
Your dream has been around since the mid-80's and it is called Smalltalk/X.

Is Google's Golang interpreted or compiled?

I have been researching Golang and I see that it has a compiler.
But is it compiling Go down to assembly-level machine code, or just converting it into bytecode and calling that compilation? I mean, even in PHP we can convert code into bytecode and get faster performance.
Is Golang a replacement for system-level programming and compiled languages?
Go really is compiled (in fact there are two official compiler toolchains), and it produces totally self-sufficient executables. You don't need any supplementary library or any kind of runtime to execute the program on your server; you just have to compile it for your target computer architecture.
From the documentation:
There are two official Go compiler tool chains. This document focuses on the gc Go compiler and tools (6g, 8g etc.). For information on how to work on gccgo, a more traditional compiler using the GCC back end, see Setting up and using gccgo.
The Go compilers support three instruction sets. There are important differences in the quality of the compilers for the different architectures.
amd64 (a.k.a. x86-64); 6g,6l,6c,6a
A mature implementation. The compiler has an effective optimizer (registerizer) and generates good code (although gccgo can do noticeably better sometimes).
386 (a.k.a. x86 or x86-32); 8g,8l,8c,8a
Comparable to the amd64 port.
arm (a.k.a. ARM); 5g,5l,5c,5a
Supports only Linux binaries. Less widely used than the other ports and therefore not as thoroughly tested.
Except for things like low-level operating system interface code, the run-time support is the same in all ports and includes a mark-and-sweep garbage collector, efficient array and string slicing, and support for efficient goroutines, such as stacks that grow and shrink on demand.
The compilers can target the FreeBSD, Linux, NetBSD, OpenBSD, OS X (Darwin), and Windows operating systems. The full set of supported combinations is listed in the discussion of environment variables below.
On a server you'll usually target the amd64 platform.
Note that Go is well known for its compilation speed. When deploying my server programs, I don't build for the different platforms on the development computer: I deploy the sources and compile directly on the production servers. Since Go 1, I have never had code that compiles on one platform but not on the others.
On Windows, I had no problem making an exe on my development computer and simply sending that exe to people who had never installed anything Go-related.
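As an aside, with a current Go toolchain you can also cross-compile from a single machine by setting the GOOS and GOARCH environment variables. Below is a small sketch of such a build helper in Python; it assumes the go tool is on PATH, and the output names and target list are placeholders.

```python
#!/usr/bin/env python3
"""Sketch: build one Go main package for several OS/architecture targets.
Assumes a modern Go toolchain on PATH and a main package in the current dir."""
import os
import subprocess

TARGETS = [("linux", "386"), ("linux", "amd64"),
           ("windows", "386"), ("windows", "amd64")]

for goos, goarch in TARGETS:
    # go build honours GOOS/GOARCH and emits a self-contained native binary.
    env = dict(os.environ, GOOS=goos, GOARCH=goarch)
    suffix = ".exe" if goos == "windows" else ""
    output = f"myapp-{goos}-{goarch}{suffix}"  # placeholder name
    subprocess.run(["go", "build", "-o", output, "."], env=env, check=True)
    print("built", output)
```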
Go compiles quickly to machine code yet has the convenience of garbage collection and the power of run-time reflection. It's a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language.
Source - golang.org
Go is a compiled language; it can easily be compiled on the development computer for any targeted system, such as Linux or Mac.
Once compiled, a Go project becomes a self-sufficient executable that can be run on the targeted system without anything additional. That's because the Go compiler turns your code into native machine code, ready to execute on the target just like a compiled C program.

Language for cross-platform install script

In writing an install script, I quickly found that I'd have cross-platform issues, and bash scripts are hard to maintain. I decided to look for a cleaner solution that's more cross-platform.
The goal is to have an intelligent script sniff out components of the user's system and have as little user interaction as possible. That being stated, I thought about these languages:
Python - cross-platform, and many other programs rely on it, so it may already be present
JavaScript - Node.js is required by part of my application, but it's a little clunky for exec calls
Are there any languages that would be a better fit for this application?
Requirements:
Available on all platforms
May be distributed as part of my application if small enough
Little to no version variation, so Ruby is out
*nix only for now, but eventually will be run on Windows
Maintainable
Clear syntax (Perl is out)
Modular (if I sniff the OS, I can include separate OS-specific code)
Capable of downloading files (unmet dependencies)
Capable of relatively complex scripting tasks
Testing for used HTTP ports
Reading and parsing files for configuration data
Checking for permissions and dealing with directories that have insufficient privileges
Open source
Python can do all of those things (a short sketch follows the list):
Available on all platforms (Mac, Linux, Windows, and more)
May be distributed as part of my application if small enough (You can make binaries with cx_freeze, if needed)
Little to no version variation, so Ruby is out (Python is pretty static when it comes to version changes)
*nix only for now, but eventually will be run on Windows (It comes pre-installed on Mac, and ships with just about any Linux distro. Binaries don't need the interpreter to run)
Maintainable
Clear syntax (Perl is out) (Python is very easy to read, but that's up to you to decide)
Modular (if I sniff the OS, I can include separate OS-specific code) (Modules are just files in Python)
Capable of downloading files (unmet dependencies) (urllib2 takes care of that, and it's pre-installed)
Open source (Yep)
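To make the above concrete, here is a minimal Python 3 sketch of those checks; note that urllib2 became urllib.request in Python 3. The URL, port and path are placeholders, not values from the question.

```python
#!/usr/bin/env python3
"""Sketch of typical install-script checks using only the standard library."""
import os
import platform
import socket
import urllib.request

def detect_os() -> str:
    # 'Linux', 'Darwin' or 'Windows'; use it to pick OS-specific modules.
    return platform.system()

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    # connect_ex returns 0 when something is already listening on the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1)
        return sock.connect_ex((host, port)) == 0

def download(url: str, dest: str) -> None:
    # Fetch an unmet dependency over HTTP(S).
    urllib.request.urlretrieve(url, dest)

def writable(path: str) -> bool:
    # Permission check before installing into a directory.
    return os.access(path, os.W_OK)

if __name__ == "__main__":
    print("OS:", detect_os())
    print("port 8080 in use:", port_in_use(8080))
    print("/usr/local writable:", writable("/usr/local"))
    # download("https://example.com/dep.tar.gz", "/tmp/dep.tar.gz")  # placeholder URL
```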
Ant will do what you need. It is OS-independent and can handle both compilation and installation.

Host-target development model

I am quite new to embedded Linux programming and do not really understand this concept very well.
Can anyone explain the essence of the "host-target" relation? Is this model specific to cross-compilation? Is it used just because the executable code will be run in another environment? And what is the role of the Linux kernel on the target? For example, the "Building Embedded Linux Systems" book mentions this model, but does not explain the motivation or goal of this type of development.
Thanks a lot.
The 'motivation' for this model is that an embedded target is seldom a suitable platform for development. It may be resource-constrained, have no operating system, no compiler that will run on the target, no filesystem for source files, no keyboard or display, no networking, and it may be relatively slow or lack anything else you might need to develop effectively.
If your embedded system is suited to running Linux, possibly not all of the above limitations apply, but almost certainly enough of them do to make you want to avoid developing directly on the target. If that were not the case, it would hardly qualify as an embedded system.
http://www.landley.net/writing/docs/cross-compiling.html
Seems pretty clear. What specific questions do you have?
Linux has, since its very origin, been written in a very portable way. It runs on a whole range of machines with very different CPUs, and it is considered the Good Thing to write your code portably, so that, for example, a package maintainer can easily port your program to some embedded ARM, or Cygwin, or Amiga, or...
So, yes, the model is "only" specific to cross-compilation, but in fact nearly every compilation on Linux is a (variant of) cross-compilation; it's just that, by default, build, host and target are automatically set to the same value, the machine you are running on.
Still, even then, you can take a compiler built for Linux-i386, plus its sources, and "cross-compile" it for Linux-amd64, and the resulting binary will run much faster on a 64-bit CPU.
It IS quite essential in embedded programming, though, mostly because you write programs for weak CPUs that cannot run a compiler at all, or would run it at a snail's pace. So you take a cross-compiler on a fast CPU (say, some multi-core Intel) and cross-compile for the embedded CPU (say, some low-end ARM).
"Run in another environment" is putting it very mildly. When cross-compiling for embedded, you are working with an entirely different instruction set, different memory access modes, different resource access methods, and so on: a machine of entirely different construction from the build host. Your build host may be a Windows PC running Cygwin; your target may be a chip inside a smartphone. The binary will look nothing like Cygwin .exe files.
As a direct consequence, -everything- must be compiled for the target from scratch: the kernel, the system utilities, the system libraries, all the tools the target must run. The thing is, if the target is a ticket-selling booth, there is really no sense in cross-compiling Eclipse, GCC and GNOME for it and then developing in a "local" environment, typing your code on the ticket-booth keyboard. Instead, you cross-compile just the essentials of the OS plus your specific applications. You keep the development environment on the build machine, and cross-compile everything the embedded device needs.
[In practice, you take a Linux distro built for the target and just cross-compile whatever you need modified.]
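To illustrate the host-target split, here is a tiny hedged Python sketch that runs a cross-compiler on the build host to produce an ARM binary. The arm-linux-gnueabihf- prefix is the one Debian/Ubuntu ships; the source file and output name are placeholders.

```python
#!/usr/bin/env python3
"""Sketch: compile a C file for an ARM Linux target from an x86 build host."""
import subprocess

CROSS_CC = "arm-linux-gnueabihf-gcc"  # cross-compiler: runs on the host, emits ARM code
SOURCES = ["main.c"]                  # placeholder source list

# Build on the (fast) host machine...
subprocess.run([CROSS_CC, "-O2", "-o", "app-arm", *SOURCES], check=True)

# ...then copy 'app-arm' to the target's filesystem (scp, NFS, flash image, ...)
# and run it there. It will not execute on the x86 host itself.
```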