Where is there good information about low-level PC booting?

I'm interested in writing a boot loader for USB sticks that looks for a directory of ISOs and gives you the option to boot one of them as if it were a bootable CD. This is basically so I can have a menu-driven program that lets me install one of several different distributions from a USB stick.
Where would I go to figure out how to make this work? Do I need to install some kind of BIOS hack to allow remapping of CD blocks to blocks in the filesystem? How would that work once the system booted from the CD image had enough marbles to start accessing the device directly?

Are you looking to learn how boot loading works? If so, you could check out the GRUB docs here: http://www.gnu.org/software/grub/
Otherwise, if you're trying to create a USB boot loader that can load a variety of operating systems, this tutorial looks like it covers it quite nicely: http://www.freesoftwaremagazine.com/articles/grub_intro/
I don't think it will dynamically build a menu of available operating systems by reading the contents of a directory, though.
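Actually, GRUB 2's scripting language can do exactly that: it supports loops, tests, and loopback-mounting of ISO files, so the menu can be built at boot time from whatever ISOs sit in a directory. A minimal sketch of a grub.cfg for the stick; the kernel/initrd paths and boot arguments inside each ISO are distro-specific (the casper paths below are Ubuntu-style and purely illustrative):

```
# Build one menu entry per ISO found on the stick.
for isofile in /isos/*.iso; do
    if [ -e "$isofile" ]; then
        menuentry "Boot $isofile" "$isofile" {
            loopback loop "$2"    # present the ISO as a virtual CD device
            # Kernel/initrd paths and arguments vary per distribution:
            linux (loop)/casper/vmlinuz boot=casper iso-scan/filename="$2" noprompt
            initrd (loop)/casper/initrd
        }
    fi
done
```

Inside the menuentry block, "$2" is the extra argument passed after the title (the ISO's path). The iso-scan/filename parameter tells the distro's initramfs where to re-find the ISO once its own kernel is running, which addresses the "enough marbles" part of the question: no BIOS hack is needed as long as the distro's early userspace knows how to loop-mount the image itself.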

Embedded Linux and Cross Compiling

I'm just now starting to learn embedded Linux system development and I'm wondering how to go about a few things. Mainly, I have questions about cross-compiling. I know what cross-compiling is, but I'm wondering how to actually go about the whole process when it comes to writing the makefile and deploying the application to the board (mainly the makefile part, though).
I've researched a good amount online and found that a ton of different things have to be set, whether in regards to the toolchain, the processor, etc. Are there any good resources for learning and mastering this topic, or could anyone explain the best way to go about it?
EDIT:
I'm not wondering about how to cross-compile in general. I'm wondering about cross-compiling already existing applications (e.g. OpenCV, Samba) for a target system from the host system, especially when the application itself provides no documented cross-compilation support, which is common.
Basically you just need a specialized embedded Linux build system that will take care of the cross-compilation process. Take a look at Buildroot, for example. In its package folder you'll find examples of package recipes.
For your own software's build process you can take a look at CMake; the libuci recipe shows how to use CMake-based projects in Buildroot.
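To give a feel for the CMake side, here is a minimal, hypothetical toolchain file; the compiler names and sysroot path assume an arm-linux-gnueabihf cross toolchain and will differ for your board (Buildroot can also generate an equivalent file for you):

```cmake
# arm-toolchain.cmake -- illustrative cross-compilation settings.
set(CMAKE_SYSTEM_NAME Linux)                      # target OS
set(CMAKE_SYSTEM_PROCESSOR arm)                   # target CPU
set(CMAKE_C_COMPILER   arm-linux-gnueabihf-gcc)   # cross compilers on PATH
set(CMAKE_CXX_COMPILER arm-linux-gnueabihf-g++)
set(CMAKE_FIND_ROOT_PATH /opt/arm-sysroot)        # target headers and libs
# Look in the sysroot for headers/libraries, never for host programs.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

You would then configure with cmake -DCMAKE_TOOLCHAIN_FILE=arm-toolchain.cmake . and build as usual.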
This answer is based on my own experience, so judge for yourself whether it suits your needs.
I learned everything about embedded Linux from these guys: http://free-electrons.com/
They not only offer free docs but also courses for successfully running your box with a custom Linux distro. In my case, I managed to embed uClinux on a board with an MMU-less 32-bit CPU and 32 MB of RAM; the Linux image occupied just 1 MB.

Is there an emulator of the MSP430 chip that works without the actual chip and integrates with Code Composer Studio?

I need to learn to program the MSP430, but I don't have the actual chip yet. All configurations that I've tried in Code Composer Studio (except Snapshot, but that doesn't count, right?) require something connected to my USB port. How do I learn to program the chip without the chip?
And what is an emulator that requires USB hardware?
Online emulator (tested in Chrome): http://www.msp430emulator.com
This MSP430 emulator is open source and can be used directly online without downloading anything. It is still under construction but has a good debug interface. Unfortunately there is no integration with CCS.
It is on the TI Open source page: TI Open Source Project Page
"The MSP430 Online Emulator provides a complete software model of the MSP430 16-bit instruction set. It is an interactive debugger for advanced development and in depth firmware/hardware analysis. Peripherals include UART, GPIO Ports, BCM+, Timer_A, and more! Open source, and absolutely free - access to the TI MSP430 Launchpad allows you to effectively build and debug firmware. No hardware setup, emulate anytime anywhere!"
It is open source on GitHub:
https://github.com/RudolfGeosits/MSP430-Emulator
If you need something implemented, you can add to the code yourself and run a local emulation server for real-time applications.
This emulator is pretty awesome once you get it running. Note that it claims GDB support, which likely means you can get a pure Eclipse CDT C project and a CDT GDB hardware-debugging session up and running against it (making sure to compile with the MSP430 toolchain, of course).
There is also the openMSP430 project: http://opencores.org/project,openmsp430
As far as a full software simulator goes, the answer is really "no". I would like to be wrong on that... but consider for a moment the number of variants of the MSP430, the peripherals, and so on. I'm not sure any company could justify that kind of cost, especially when the Launchpad and similar boards are so cheap and fast to get started with.
If you can afford £10, then the Launchpad is the way to go just to teach yourself about the MSP430. You can use either IAR Embedded Workbench or Code Composer Studio, both of which come in code-size-limited versions that will be plenty big enough to learn with. I don't like either, but of the two the IAR one is, IMHO, the better bet as it's not Eclipse-based. If you don't mind Java and Eclipse, then CCS is a viable option. One huge advantage of CCS is that it runs on Linux, but really, it's still not a patch on Rowley CrossWorks, which also runs on Linux and has a cheap educational licence.
As far as the emulator-and-USB question is concerned, it's maybe slightly pedantic, but that's not an emulator, it's a debug interface. There is a debugger built into the chip that enables you to load code into the chip, set breakpoints, and single-step through code.
This kit is a great way to start because the debug interface is built into the kit: you can access pins on the processor, see LEDs come on, and all that good stuff that gives you the warm feeling you're programming a chip properly. For the sake of £10 you'd be mad not to!
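Once you have the board (or the emulator above), a first program is only a few lines. A minimal sketch, assuming the common MSP430G2553 Launchpad with its red LED on P1.0; it should build in CCS or with msp430-gcc:

```c
#include <msp430.h>

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;    // stop the watchdog timer
    P1DIR |= BIT0;               // configure P1.0 (red LED) as an output

    for (;;) {
        P1OUT ^= BIT0;           // toggle the LED
        __delay_cycles(100000);  // crude busy-wait delay
    }
}
```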

Is there a cross-platform scripting language that requires no installation?

The application I'm working on has simple functionality, but one requirement is giving me trouble. The application will be run from a thumb drive and requires access to write a file to said thumb drive, so browser-based JavaScript/HTML is out.
My ideal goal is to have a single script that can be double-clicked from the Mac's Finder or Windows Explorer and will kick off the update of this file stored on the USB drive. Is this possible?
I've come across similar questions (OK Programming language from USB stick with no installation), but everything I've found would still require separate starting points for each platform. For example, if I put Lua binaries on the USB stick, I will have to have a separate launch script for each platform I want to support.
I really think it's not possible; otherwise Java, Adobe AIR, and other platforms wouldn't have been created in the hope of a cross-platform runtime. Besides, Mac, Linux, and Windows have different "executable" file types.
How bad can three starting points (Mac, Windows, Linux) be? They could all operate on the same file anyway.
It is possible to create Java jar files which are startable with a double-click. No platform-specific scripts are required. See this question and its linked material.
Of course, this only works if a JRE has been installed correctly on each computer.
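As a sketch of that approach: a tiny main class that locates the jar it was launched from (rather than relying on the working directory, which differs between OSes when double-clicked) and updates a file next to it on the thumb drive. Class and file names are illustrative, and Files.writeString needs Java 11+:

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;

public class UpdateTool {
    public static void main(String[] args) throws Exception {
        // Find the jar file itself, so we write to the thumb drive no
        // matter what working directory the OS used for the double-click.
        File jar = new File(UpdateTool.class.getProtectionDomain()
                                .getCodeSource().getLocation().toURI());
        Path target = jar.getParentFile().toPath().resolve("state.txt");
        Files.writeString(target, "updated\n");
    }
}
```

Packaged with Main-Class: UpdateTool in the jar manifest, the result is double-clickable on any machine where a JRE is associated with .jar files.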

Host-target development model

I am quite new to embedded Linux programming and do not really understand this concept very well.
Can anyone explain the essence of the "host-target" relation? Is this model only specific to cross-compilation? Is it used just because the executable code will be run in another environment? And what is the role of the Linux kernel on the target? The "Building Embedded Linux Systems" book mentions this, for example, but does not explain the motivation or goal of this type of development.
Thanks a lot.
The 'motivation' for this model is that an embedded target is seldom a suitable platform for development. It may be resource constrained, have no operating system, have no compiler that will run on the target, have no filesystem for source files, have no keyboard or display, no networking, and may be relatively slow, or lack anything else you might need to develop effectively.
If your embedded system is suited to running Linux, it is possible that not all of the above limitations apply, but almost certainly enough of them do to make you want to avoid developing directly on the target. If that were not so, then it would hardly qualify as an embedded system, perhaps.
http://www.landley.net/writing/docs/cross-compiling.html
Seems pretty clear. What specific questions do you have?
Linux has from its very origin been written in a very portable way. It runs on a whole range of machines with very different CPUs, and it is considered the Good Thing to write in a portable way, so that, for example, a package maintainer can easily port your program to some embedded ARM, or Cygwin, or Amiga, or...
So, yes, the model is "only" specific to cross-compilation, but in fact just about every compilation on Linux is a variant of cross-compilation; it's just that by default build, host, and target are all automatically set to the same value, the machine you run on.
Still, even then, you can take a compiler binary built for Linux-i386, plus its sources, and "cross-compile" it for Linux-amd64, and the resulting binary will run much faster on a 64-bit CPU.
It IS quite essential in embedded programming, though, mostly because you write programs for weak CPUs that are not capable of running a compiler, or would run it at a snail's pace. So you take a cross-compiler on a fast CPU (say, some multi-core Intel) and cross-compile for the embedded CPU (say, some low-end ARM).
"In different environment" is putting things very mildly. What you're doing when cross-compiling for embedded is working with entirely different instruction set, different memory access modes, different resource access methods and so on and so on. A machine of entirely different construction than the build host. Your build host may be a Windows PC running Cygwin. Your target may be a chip inside a smartphone. The binary will look nothing like the Cygwin .exe files.
As a direct consequence, -everything- must be compiled for the target from scratch. The kernel, the system utilities, the system libraries, all the tools the target must be running. Thing is, if the target is a ticket selling booth, there is really no sense cross-compiling Eclipse, GCC and Gnome for it, then developing in "local" environment, typing your code on a ticket booth keyboard. Instead, you just cross-compile the essentials of the OS, and your specific applications. You keep the development environment on the build machine, and cross-compile everything you need on the embedded device.
[in practice, you get a Linux distro for the target, and just compile whatever you need modified].
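To make the workflow concrete, here is a sketch assuming an arm-linux-gnueabihf toolchain installed on the host and SSH access to the target (all names illustrative):

```sh
# On the (fast) build host: compile for the (slow) ARM target.
arm-linux-gnueabihf-gcc -O2 -o hello hello.c

# The result is an ARM binary -- it will not run on the x86 host.
file hello        # reports something like "ELF 32-bit LSB executable, ARM"

# Copy it to the target and run it there.
scp hello root@target:/usr/local/bin/
ssh root@target hello
```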

Can you freeze a C/C++ process and continue it on a different host?

I was wondering if it is possible to generate a "core" file, copy it to another machine, and then continue execution from that core file on the other machine?
I have seen the gcore utility that will make a core file from a running process, but I do not think GDB can continue execution based on a core file.
Is there any way to just dump the heap/stack and restore those at a later point?
It's called process migration.
MOSIX and openMosix used to be able to do that. Nowadays it's easiest to migrate a whole VM.
On modern systems, not from a core file, no you can't. For freezing and restoring an individual process on Linux, CryoPID and the newer kernel-based checkpoint/restart work are in progress, but their abilities are currently quite limited. OpenVZ and other virtualization-like software can freeze and restore an entire system.
Also check out the Condor project. Condor can do this with parallel jobs as well. Condor also includes monitors that can automatically migrate your process when, for example, someone starts using their workstation again. It's really designed for utilizing spare cycles in networked environments.
This won't, in general, be sufficient to let an arbitrary process continue on another machine. In addition to the heap and stack state, there may also be open I/O handles, allocated hardware resources, etc.
Your options are either to explicitly write your software in a way that lets it dump its state on a signal and later resume from the dumped state, or to run your software in a virtual machine and migrate that to the alternate host. Xen and VMware both support freeze/restore as well as live migration.
That said, CryoPID attempts to do precisely this and occasionally succeeds.
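A minimal sketch of the first option above: the program serialises its own state when signalled and reloads it at startup. The struct, file name, and "work" are illustrative, and a raw binary dump like this only restores correctly on a machine with the same architecture and ABI:

```c
#include <signal.h>
#include <stdio.h>

/* Illustrative program state -- a real application would serialise
 * everything it needs to resume, in a portable format if the hosts
 * may differ. */
struct state {
    long   iteration;
    double partial_result;
};

static volatile sig_atomic_t dump_requested = 0;

static void on_sigusr1(int sig)
{
    (void)sig;
    dump_requested = 1;           /* only set a flag; do I/O in main() */
}

int main(void)
{
    struct state st = { 0, 0.0 };

    FILE *f = fopen("checkpoint.bin", "rb");
    if (f) {                      /* resume from an earlier dump */
        if (fread(&st, sizeof st, 1, f) != 1)
            return 1;
        fclose(f);
    }

    signal(SIGUSR1, on_sigusr1);

    for (;; st.iteration++) {
        st.partial_result += 1.0; /* stand-in for real work */

        if (dump_requested) {     /* checkpoint requested: dump and exit */
            f = fopen("checkpoint.bin", "wb");
            if (!f || fwrite(&st, sizeof st, 1, f) != 1)
                return 1;
            fclose(f);
            return 0;             /* copy the file, restart elsewhere */
        }
    }
}
```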
As of February 2017, there's a fairly stable and mature tool called CRIU. It depends on Linux kernel features added in version 3.11 (released in September 2013, so most modern distros should have them in their kernels).
On Debian-based systems it can be installed simply by calling sudo apt-get install criu.
Instructions on how to use it are in the CRIU documentation.
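Usage is roughly as follows (a sketch; the PID and paths are illustrative, it must run as root, and the destination host needs a compatible kernel and the same architecture):

```sh
# Checkpoint a running process (tree) into an images directory.
sudo criu dump -t 1234 -D /tmp/ckpt --shell-job

# Copy the images to the other machine...
scp -r /tmp/ckpt otherhost:/tmp/

# ...and resume the process there.
sudo criu restore -D /tmp/ckpt --shell-job
```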
In some cases, this can be done. For example, part of the Emacs build process is to load up all the Lisp libraries and then dump the memory image on disk for quick loading. Some other language interpreters do that too (I'm thinking of Lisp and Scheme implementations, mostly). However, they're specially designed for that kind of use, so I don't know what special things they have to do to allow that to work.
I think this would be very hard to do for a random program, but if you wrote a framework where all objects supported serialisation/deserialisation, you could then serialise all the objects used by your program, ship them elsewhere, and deserialise them at the other end.
The other answers about virtualisation are spot on, too.
It depends on the machine. It's very doable in a very small embedded system, for instance. I think something like it is also implemented in Beowulf clusters and other supercomputer-like applications.
There are lots of reasons you can't do what you want very easily. For example, when you restore the core file on the other machine, how do you resolve file descriptors that your process had open? What about sockets, named pipes, semaphores, or any other OS-level resource? Basically, unless your system is specifically designed to handle such an operation, you can't naively dump a core file and move it to another machine.
I don't believe this is possible. However, you might want to look into virtualization software, e.g. Xen, which makes it possible to freeze and move entire system images from one machine to another.