While looking at the Adobe AIR packaging options for iOS, I came across these targets for the ADT packager.
What is the difference between target ipa-test and ipa-ad-hoc?
According to the Adobe AIR ADT command documentation:
-target The type of package to create. The supported package types are:
iOS package targets:
ipa-ad-hoc — an iOS package for ad hoc distribution.
ipa-app-store — an iOS package for Apple App store distribution.
ipa-debug — an iOS package with extra debugging information. (The SWF files in the application must also be compiled with debugging support.)
ipa-test — an iOS package compiled without optimization or debugging information.
ipa-debug-interpreter — functionally equivalent to a debug package, but compiles more quickly. However, the ActionScript bytecode is interpreted and not translated to machine code. As a result, code execution is slower in an interpreter package.
ipa-test-interpreter — functionally equivalent to a test package, but compiles more quickly. However, the ActionScript bytecode is interpreted and not translated to machine code. As a result, code execution is slower in an interpreter package.
Source: iOS package targets
Does ad-hoc mean better performance?
I suspect ad-hoc is the closest thing to a release/distribution IPA. Maybe even the same, just with a different distribution policy?
Yes, ad hoc means better performance. It is also a method of distribution.
Also keep in mind to compile the SWF in release mode if the build is going out for release distribution, or if you are going to test actual performance.
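For reference, the target is just one switch on the packaging command. A minimal sketch, with hypothetical certificate, profile and file names (the option layout follows Adobe's documented adt -package syntax):

    adt -package -target ipa-ad-hoc -storetype pkcs12 -keystore dist_cert.p12 -provisioning-profile adhoc.mobileprovision MyApp.ipa MyApp-app.xml MyApp.swf

Swapping ipa-ad-hoc for ipa-app-store (and the ad hoc profile for a distribution profile) is essentially all that changes for an App Store build.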
Related
I am totally new to CMake, and to compiled languages for that matter. I have seen this question and answer, but I still don't fully understand what CMake is.
I am coming from a Node.js/JavaScript environment, so if I could relate CMake to an equivalent in the Node.js/JavaScript world, it would really help me understand what it is. So... is CMake an equivalent of npm?
No, citing from Wikipedia:
CMake is a cross-platform free and open-source software tool for managing the build process of software using a compiler-independent method. It supports directory hierarchies and applications that depend on multiple libraries. It is used in conjunction with native build environments such as Make, Qt Creator, Ninja, Apple's Xcode, and Microsoft Visual Studio. It has minimal dependencies, requiring only a C++ compiler on its own build system.
JavaScript is an interpreted language, which means Node.js or the browser reads the code and executes it directly. C, by contrast, is built via a compiler (which reads and analyzes the code before execution) into machine code (which needs no further interpretation, because it is the native language of your processor) and therefore executes faster. CMake simplifies calling the compiler and linking libraries (somewhat like setting up require) across all your files. That said, running babel, webpack and other tools via npm run is also sometimes called 'building'.
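To make the analogy concrete, here is a minimal sketch (project and file names are hypothetical). Where package.json declares scripts and dependencies, CMakeLists.txt tells CMake how to drive the compiler and linker:

    # CMakeLists.txt
    cmake_minimum_required(VERSION 3.10)
    project(hello C)

    # compile main.c into a native executable named `hello`
    add_executable(hello main.c)

    # link the system math library -- loosely the equivalent
    # of pulling in a require()'d dependency
    target_link_libraries(hello m)

Running cmake . and then cmake --build . plays roughly the role of npm install plus npm run build, except the output is a native executable instead of interpreted JavaScript.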
I really like the live Smalltalk environment (though I have only experimented a bit with Pharo), but there is one thing that keeps me from using it for everyday development. It seems that it is not possible to create a native standalone executable from a Smalltalk system. By native standalone executable I mean a single executable file (PE on Windows, ELF on Linux, Mach-O on macOS) that a user could run by double-clicking it, without the need to install any additional execution environment. Am I missing something, and is it in fact possible to create a native standalone executable with Smalltalk?
If we talk about Pharo specifically: I know that Pharo's environment includes an efficient just-in-time compiler (which generates true native code from Pharo's VM bytecode), and I know that the VM image can be stripped down by cutting out the code that my application won't ever need. So basically we already have almost everything (except the linker, I guess) needed to create native standalone executables. Cross-compilation shouldn't be a problem either, if we put all the code-generation machinery (for all target processors) in the image.
I know that in the Smalltalk world it is considered a good thing to deliver the whole VM image separately from the runtime environment, so the user can hack on the software he/she is using. However, I don't see any good reason why it shouldn't be possible to deliver your Smalltalk software as a fully compiled native standalone executable. Could you please explain why this is not a common thing to do in the Smalltalk world? Is there any good Smalltalk implementation that allows doing it?
To sum all this up: I dream of a live Smalltalk environment in which I could develop and test my software, and then (when the software is actually ready for delivery) cross-compile it to native executables for Windows, Linux and macOS from my single development machine. That would be really awesome.
Ironically enough, there is one thing an exe needs preloaded: your OS. The thing is that C/C++ executables can be so light because your OS already acts much the way the image does, with a ton of preloaded libraries. You need to waste several GB of memory just to get a simple calculator started. Your OS is a collection of C/C++ libraries.
Things are not any prettier with other languages like Python, Java etc. Even if the app is smaller, it still depends on those libs, and they come with quite big libraries that need installation whether your app uses them or not.
Pharo, and Smalltalk in general, is a different case, because it aspires to be a virtual OS by itself. The difference from a real OS is that the Smalltalk image is made to be hacked the easy way by a user.
That said, you can rename the Pharo executable, change its icon, and disable the IDE tools inside Pharo so your user sees only the GUI of your app. Applications like Dr. Geo and Phratch already do this.
Compiling a Pharo project to a native executable would not make much difference, because (a) Pharo is already compiled to a native executable and (b) you don't need to do that, since Pharo is already standalone and does not even need to be installed.
My advice is to stop worrying about things that do not really matter, and enjoy learning how powerful and fun Pharo can be.
Not Pharo, but a native, compiled (through ANSI C or its own JIT) Smalltalk, with the ability to load pre-compiled DLLs or [JIT-]compile code on demand:
Smalltalk/X
http://www.exept.de/en/products/smalltalk-x.html
or (unofficial enhanced development version):
Smalltalk/X JV branch - https://swing.fit.cvut.cz/projects/stx-jv/
Nearly completely open-source. Everything except librun (the core runtime: memory management, JIT, etc.) and the stc (Smalltalk-to-C) compiler is already open-source. Claus Gittinger / Exept have also promised/confirmed that they would release the remaining sources were they to stop further development (AFAIK, those parts are closed only because of concerns of existing clients).
I highly recommend everyone check it out; it is a wonder that such a great implementation is so little known.
You might also check out Dolphin from Object Arts.
It is Windows-only, but the very best IDE, bar none.
If you do anything in Smalltalk, you should buy a copy.
(They also have a free non-commercial version, but you will want to support the kind of craftsmanship behind it by buying the Pro version. An absolutely kick-ass product, IMHO.)
It will produce a standalone exe, if that's what you want. I made an exe of a medium-featured wiki with it - the exe was less than 1 MB. That is not a typo.
-Jim
The problem is that Pharo, in that case, cannot be compared to native compilers like C or C++, but rather to Java, Python, Ruby and other languages built around a virtual machine.
In those languages, you produce jars, eggs or gems to distribute your project.
In Pharo, you produce a "production image" following the techniques you already mentioned. But nothing prevents you from delivering an artefact that also includes the PharoVM (it is about 2 MB, after all), and you can prepare your app to detect and open your production image (without having to ask for it).
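As a rough sketch of such an artefact (all names hypothetical), the delivery can be little more than a folder with the VM, the production image and a launcher:

    MyApp/
        pharo           <- the PharoVM executable (about 2 MB)
        MyApp.image     <- your stripped production image
        run.sh          <- launcher, essentially: ./pharo MyApp.image

The user runs the launcher and never needs to know about the image/VM split.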
It is about as practical with Smalltalk as with other languages: not very much. As soon as you create a somewhat larger application, you start depending on other libraries/applications being installed. If you compile them statically with the application, you have now created a much larger application that takes longer to download, and needs to be updated at least as soon as a security problem is found in one of the dependencies. If not, your application is no longer double-click startable.
There are two directions for solutions: web applications, and installers and package managers.
Squeak still maintains its one-click installer, allowing the same set of files to work on Windows, Mac and Linux. Pharo used to have that too, but moved to having separate builds. The need hasn't been so large that the one-click build has been reinstated; it is mostly seen as useful for carrying a cross-platform environment around on a USB stick. With the move to the 64-bit Spur VM, the dependency problems will lessen, as more of the needed libraries will come pre-installed on those platforms.
Dolphin Smalltalk can produce a standalone .exe for Windows.
This is a key feature of the Pro version.
Your dream has been around since the mid-80s, and it is called Smalltalk/X.
I have been researching Golang and I see that it has a compiler.
But is it compiling Go into assembly-level code, or is it just converting it into bytecode and calling that compilation? I mean, even in PHP we are able to convert code into bytecode and get faster performance.
Is Go a replacement for system-level programming and compiling?
This is really a compiler (in fact it embeds two compilers), and it makes totally self-sufficient executables. You don't need any supplementary library or any kind of runtime to execute it on your server. You just have to have it compiled for your target computer architecture.
From the documentation:
There are two official Go compiler tool chains. This document focuses on the gc Go compiler and tools (6g, 8g etc.). For information on how to work on gccgo, a more traditional compiler using the GCC back end, see Setting up and using gccgo.
The Go compilers support three instruction sets. There are important differences in the quality of the compilers for the different architectures.
amd64 (a.k.a. x86-64); 6g,6l,6c,6a — A mature implementation. The compiler has an effective optimizer (registerizer) and generates good code (although gccgo can do noticeably better sometimes).
386 (a.k.a. x86 or x86-32); 8g,8l,8c,8a — Comparable to the amd64 port.
arm (a.k.a. ARM); 5g,5l,5c,5a — Supports only Linux binaries. Less widely used than the other ports and therefore not as thoroughly tested.
Except for things like low-level operating system interface code, the run-time support is the same in all ports and includes a mark-and-sweep garbage collector, efficient array and string slicing, and support for efficient goroutines, such as stacks that grow and shrink on demand.
The compilers can target the FreeBSD, Linux, NetBSD, OpenBSD, OS X (Darwin), and Windows operating systems. The full set of supported combinations is listed in the discussion of environment variables below.
On a server you'll usually target the amd64 platform.
Note that Go is well known for its speed of compilation. When deploying my server programs, I don't build for the different platforms on the development computer: I deploy the sources and compile directly on the production servers. Since Go 1, I have never had code that compiled on one platform and failed to compile on the others.
On Windows I had no problem making an exe on my development computer and simply sending this exe to people who had never installed anything Go-related.
Go compiles quickly to machine code yet has the convenience of garbage collection and the power of run-time reflection. It's a fast, statically typed, compiled language that feels like a dynamically typed, interpreted language.
Source - golang.org
Go is a compiler-based language; it can easily be compiled on the development computer for any target system, such as Linux or Mac.
Once compiled, a Go project becomes a self-sufficient executable that can be run on the target system without anything additional. That's because the Go compiler turns your code into native machine code, the same kind of output a compiled C program runs as.
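As a minimal sketch of that (file names hypothetical), and noting that since Go 1.5 cross-compiling only requires setting two environment variables:

    // hello.go -- the resulting binary needs no Go
    // installation on the machine that runs it
    package main

    import "fmt"

    func main() {
        fmt.Println("hello from a self-sufficient Go binary")
    }

    go build hello.go                            # native binary for this machine
    GOOS=linux GOARCH=amd64 go build hello.go    # cross-compile for 64-bit Linux
    GOOS=windows GOARCH=amd64 go build hello.go  # produces hello.exe for Windows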
The Mono installation provided at mono-project.com comes packed with several libraries, such as GTK, making the installation quite large (~280 MB or so).
Is there any way to provide users with an installation of "just" the Mono environment? I am targeting Windows, Mac OS X and Linux.
You might be looking for mkbundle, which merges your application, the libraries it uses, and the Mono runtime into a single executable image. See under "Bundles" on that page.
If you do this, though, it may mean that you are distributing LGPL code, with all that this implies.
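A minimal sketch (assembly name hypothetical): --deps pulls in the assemblies your app references, and --static links the runtime statically, which is exactly where the LGPL point above comes in:

    mkbundle -o MyApp MyApp.exe --deps            # bundle app + referenced assemblies
    mkbundle -o MyApp MyApp.exe --deps --static   # also statically link the Mono runtime (mind the LGPL)

With --static the result is a single native executable that starts without a separate Mono installation.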
If you are distributing your app for Mac OS X, there is also MonoMacPackager which offers more advanced support for this. I think they have done a bunch of work in the linker to really minimize the amount of Mono that you have to include. It will actually remove unused code from assemblies that you reference.
I am a newbie to embedded development. I have a small ARM board, an AT91SAM7-EX256, and a JTAG programmer dongle too. I am using Linux (Ubuntu x86_32) on my notebook and desktop machines, and CodeSourcery Lite for cross-compiling to ARM Linux.
Am I right that I can't use this Linux-target cross-compiler to make binary or hex files for the small ARM board (it comes without any operating system)? Should I use the version called ARM EABI instead?
As I see it, it's a "generic" ARM compiler. I've read some docs, and there are lots of options to specify the processor type and instruction set (Thumb, etc.), so there will be no problem with that. But how can I tell the compiler how the image (bin/hex) should look for the specific board (startup, code/data blocks, etc.)? (In assemblers, there are the org and load directives for this.)
What software do I need to capture some debug messages from the board on my PC? I don't want to do on-board debugging; I just need some detailed run-time signals, more than just blinking LEDs.
I have the option to use MS Windows; I can get a dedicated machine for it. Do you recommend it? Is it much easier?
Can I use inline assembly somehow in my C code? I don't know anything about that. Can I use C++, or just C?
I also have a question which doesn't need an answer: are there really 4096 kinds of GNU compilers and cross-compilers (Linux_x86_32 -> Linux_x86_32, Linux_x86_32 -> Linux_ARM, OSX -> Linux_ARM, PPC_Linux -> OSX), and 16 different GNU compiler sources (as many as there are target platforms/processors)? The signs say "yes", but I can't believe it. Correct me, and show me the GNU compiler which can produce an object file for any platform/processor, and the universal linker which can produce an executable for any platform.
While Windows is not a "better" platform to do this kind of embedded development on, it may be easier to start with, since you can get a pre-built environment to work with. For example, Yagarto (which I would recommend).
Setting up an embedded development environment on Linux can require a considerable amount of knowledge, but it's not impossible.
To answer your questions:
Your Linux cross-compiler comes with libraries to build executables for a Linux environment. You have hinted that you want to build a bare-metal executable for this board. While you can do this with your compiler, it will just confuse things. I recommend building a bare-metal cross-compiler. Since you're building your own bare-metal executable (and thus you are the operating system), the ABI doesn't matter, because you're generating all of the code and not interoperating with other previously built code.
There are several versions of the ARM instruction set (and Thumb). You need to generate code for your particular processor. If you generate code for a newer version of the instruction set, you will likely produce code which triggers a reserved-instruction exception. Most prebuilt gcc cross-compiler toolchains for ARM are "multilib" and will build for a variety of architectures, in both ARM and Thumb.
Not sure exactly what you're looking for here. This is a bare-metal platform. You can use the debugger channel to send messages if you're debugging on target, or you'll need to build your own communication channel into the firmware you write (i.e., UART support).
See above.
Yes. See here for details on gcc's extended inline assembly syntax; a short sketch follows at the end of this answer. You can do this in both C and C++. You can also simply link in pure assembly files.
There is no universal gcc compiler / linker. You need a uniquely built compiler for each host / target combination you use.
Finally, please take a look at Atmel's documentation. They have a wealth of information on developing for this target as well as a board package with the needed linker directives and example programs. Note of course the package is for Atmel's own eval board, but it will get you started.
http://sam7stuff.blogspot.com/
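Here is that inline-assembly sketch. The build line is an assumption based on a typical arm-none-eabi toolchain (the AT91SAM7 parts are ARM7TDMI cores):

    /* add.c -- gcc extended inline assembly on ARM.
       Build with e.g.: arm-none-eabi-gcc -mcpu=arm7tdmi -c add.c */
    int add_asm(int a, int b)
    {
        int result;
        /* "r" constraints let gcc pick the registers; the template
           reads %1 and %2 and writes their sum into %0 */
        asm("add %0, %1, %2"
            : "=r" (result)        /* output operand */
            : "r" (a), "r" (b));   /* input operands */
        return result;
    }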
I use either of the CodeSourcery Lite versions. But I have no use for the gcc library nor a C library; I just need a compiler.
In the gcc 3 days newlib was great: modify two files' worth of system support (simple open, close, read, putc type stuff) and you could compile just about anything. But with gcc 4.x you cannot even go back and cross-compile gcc 3.x; you have to install an old Linux distro in a virtual machine.
To get the gcc library, yes, you probably want to use the eabi version, not the version with linux gnueabi in the file names.
You might also consider llvm (if you don't need a C library; you will still need binutils). Hmm, I wonder if newlib compiles with llvm.
I prefer to avoid getting trapped in sandboxes: learn the tools, and how to manipulate the linker, etc., to build your binaries. (A sketch of a minimal linker script follows.)
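To make the linker part concrete: for a part like the AT91SAM7X256, a minimal GNU ld script might look like the sketch below. The memory origins and sizes come from the datasheet (256 KB flash at 0x00100000, 64 KB SRAM at 0x00200000); the section layout and the .vectors name are simplifying assumptions of the example startup code:

    MEMORY
    {
        flash (rx)  : ORIGIN = 0x00100000, LENGTH = 256K  /* on-chip flash */
        ram   (rwx) : ORIGIN = 0x00200000, LENGTH = 64K   /* on-chip SRAM  */
    }

    SECTIONS
    {
        .text : { *(.vectors) *(.text*) *(.rodata*) } > flash  /* code + constants */
        .data : { *(.data*) } > ram AT > flash  /* initialized data, copied out of flash at startup */
        .bss  : { *(.bss*) } > ram              /* zero-initialized data */
    }

This plays the role of the org and load directives asked about above: > flash and > ram say where each block is loaded and where it runs.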