I'm a newly graduated electronics engineer, and one of my first tasks at my new job is to import code into the Mbed compiler.
I'm trying to run the Mbed Blinky example on my custom hardware with an LPC1769 chip. I've exported the Blinky app to GNU Eclipse from the online Mbed compiler and imported it into the IDE.
The Mbed Blinky code runs fine when I set the appropriate LED pin (changing LED1 in PinNames.h from 1.10 to 2.13 for my hardware) and flash it directly. So Mbed and my custom HW aren't the problem. However, my firm has a custom bootloader, and any application is required to use it. The custom bootloader requires that the program start at 0x4000.
For this, my firm previously added this line to their code, flashed the bootloader, and uploaded the IDE's output .bin file to the board with a custom FW-loading program.
SCB->VTOR = (0x4000) & 0x1FFFFF80;
When I try to follow the same steps, compiler builds without any complaints, but I see no blinks when I upload the program to my bootloader.
I suspect I have to make some changes to the built-in CMSIS library, and/or the startup_LPC17XX.o and system_LPC17xx.o files that come with the Mbed export, but I'm confused. Any help would be much appreciated.
Also, I'm using the automatically generated makefile, in case anyone wonders.
Most importantly, you need to adjust the code location in the linker script, for example:
MEMORY {
FLASH : ORIGIN = 0x4000, LENGTH = 0x7C000
}
Check the startup code and linker script for any further absolute addresses in flash memory.
Adjusting the VTOR is required for interrupts to work, unless the bootloader already does that. The & operation looks weird; it should be sufficient to simply write 0x4000, or, even better, something like:
SCB->VTOR = (uint32_t) &_IsrVector;
This assumes you have defined _IsrVector in your linker script or startup code to refer to the very first byte of the vector table, i.e. the definition of the initial stack pointer. This way you don't have to adjust the code if the memory layout is changed in the linker script, and you avoid magic numbers.
How do we exactly define tasks in embedded programming?
I mean, what are the criteria that have to be fulfilled to call a function a task?
A task is a more general term than "process" as in Linux or "thread" as in Java. Those terms have very specific meanings in those contexts. The term "task" is meant to be less well-defined.
A task is a piece of code run by an operating system that is given control of the processor for a time determined by the OS. From the point of view of the task, it is the only code running on the processor, and interacts with other tasks through the operating system.
To manage the running of tasks, the OS must keep track of code and context for the task. That is, when the task is interrupted, the OS must be able to restore the processor not only to the point on the code where the task was running, but also the state of the processor itself.
While an OS may require that a task be specified in the form of a C function, this is not necessary in general. For example, in Linux, processes can be shell scripts or executable programs. In Java, threads are the run() method of a class.
In a system where tasks are specified by functions, the function alone does not describe the task. For example, several tasks could be running the code of the same function. Instead, the defining feature of task versus function is that some form of OS or scheduler must exist and is used to create and control tasks, whether the code of the task is given by a function or not.
I'm working on a project which will entail multiple devices, each with an embedded (ARM) processor, communicating. One development approach which I have found useful in the past with projects that only entailed a single embedded processor was develop the code using Visual Studio, divided into three portions:
Main application code (in unmanaged C/C++ [see note])
I/O-simulating code (C/C++) that runs under Visual Studio
Embedded I/O code (C), which Visual Studio is instructed not to build, runs on the target system. Previously this code was for the PIC; for most future projects I'm migrating to the ARM.
Feeding the embedded compiler/linker the code from parts 1 and 3 yields a hex file that can run on the target system. Running parts 1 and 2 together yields code which can run on the PC, with the benefit of better debugging tools and more precise control over I/O behavior (e.g. I can make the simulation code introduce certain types of random hiccups more easily than I can induce controlled hiccups on real hardware).
Target code is written in C, but the simulation environment uses C++ so as to simulate I/O registers. For example, I have a PortArray data structure; the header file for the embedded compiler includes a line like unsigned char LATA @ 0xF89; and my header file for simulation includes #define LATA _IOBIT(f89,1) which in turn invokes a macro that accesses a suitable property of an I/O object, so a statement like LATA |= 4; will read the simulated latch, "or" the read value with 4, and write the new value. To make this work, the target code has to compile under C++ as well as under C, but this mostly isn't a problem. The biggest annoyance is probably with enum types (which behave as integers in C, but have to be coaxed to do so in C++).
Previously, I've used two approaches to making the simulation interactive:
Compile and link a DLL with target-application and simulation code, and have VB code in the same project which interacts with it.
Compile the target-application code and some simulation code to an EXE with one instance of Visual Studio, and use a second instance of Visual Studio for the simulation UI. Have the two programs communicate via TCP, so nearly all "real" I/O logic is in the simulation program. For example, the aforementioned `LATA |= 4;` would send a "read port 0xF89" command to the TCP port, get the response, process the received value, and send a "write port 0xF89" command with the result.
I've found the latter approach to run a tiny bit slower than the former in some cases, but it seems much more convenient for debugging, since I can suspend execution of the unmanaged simulation code while the simulation UI remains responsive. Indeed, for simulating a single target device at a time, I think the latter approach works extremely well. My question is how I should best go about simulating a plurality of target devices (e.g. 16 of them).
The difficulty I have is figuring out how to make each simulated instance get its own set of global variables. If I were to compile to an EXE and run one instance of the EXE for each simulated target device, that would work, but I don't know any practical way to maintain debugger support while doing that. Another approach would be to arrange the target code so that everything would compile as one module joined together via #include. For simulation purposes, everything could then be wrapped into a single C++ class, with global variables turning into class-instance variables. That would be a bit more object-oriented, but I really don't like the idea of forcing all the application code to live in one compiled and linked module.
What would perhaps be ideal would be if the code could load multiple instances of the DLL, each with its own set of global variables. I have no idea how to do that, however, nor do I know how to make things interact with the debugger. I don't think it's really necessary that all simulated target devices actually execute code simultaneously; it would be perfectly acceptable for simulation instances to use cooperative multitasking. If there were some way of finding out what range of memory holds the global variables, it might be possible to have the 'task-switch' method swap out all of the global variables used by the previously-running instance and swap in the contents applicable to the instance being switched in. Although I'd know how to do that in an embedded context, I'd have no idea how to do it on the PC.
Edit
My questions would be:
Is there any nicer way to allow simulation logic to be paused and examined in VS2010 debugger, while keeping a responsive UI for the simulator front-end, than running the simulator front end and the simulator logic in separate instances of VS2010, if the simulation logic must be written in C and the simulation front end in managed code? For example, is there a way to tell the debugger that when a breakpoint is hit, some or all other threads should be allowed to keep running while the thread that had hit the breakpoint sits paused?
If the bulk of the simulation logic must be source-code compatible with an embedded system written in C (so that the same source files can be compiled and run for simulation purposes under VS2010, and then compiled by the embedded-systems compiler for use in real hardware), is there any way to have the VS2010 debugger interact with multiple simulated instances of the embedded device? Assume performance is not likely to be an issue, but the number of instances will be large enough that creating a separate project for each instance would likely be annoying in the absence of any way to automate the process. I can think of three somewhat-workable approaches, but don't know how to make any of them work really nicely. There's also an approach which would be better if it's possible, but I don't know how to make it work.
Wrap all the simulation code within a single C++ class, such that what would be global variables in the target system become class members. I'm leaning toward this approach, but it would seem to require everything to be compiled as a single module, which would annoyingly affect the design of the target system code. Is there any nice way to have code access class instance members as though they were globals, without requiring all functions using such instances to be members of the same module?
Compile a separate DLL for each simulated instance (so that e.g. if I want to run up to 16 instances, I would include 16 DLL's in the project, all sharing the same source files). This could work, but every change to the project configuration would have to be repeated 16 times. Really ugly.
Compile the simulation logic to an EXE, and run an appropriate number of instances of that EXE. This could work, but I don't know of any convenient way to do things like set a breakpoint common to all instances. Is it possible to have multiple running instances of an EXE attached to a single debugger instance?
Load multiple instances of a DLL in such a way that each instance gets its own global variables, while still being accessible in the debugger. This would be nicest if it were possible, but I don't know any way to do so. Is it possible? How? I've never used AppDomains, but my intuition would suggest that might be useful here.
If I use one VS2010 instance for the front-end, and another for the simulation logic, is there any way to arrange things so that starting code in one will automatically launch the code in the other?
I'm not particularly committed to any single simulation approach; while it might be nice to know if there's some way of slightly improving the above, I'd also like to know of any other alternative approaches that could work even better.
I would think that you'd still have to run 16 copies of your main application code, but that your TCP-based I/O simulator could keep a different set of registers/state for each TCP connection that comes in.
Instead of a bunch of global variables, put them into a single structure that encompasses the I/O state of a single device. Either spawn off a new thread for each socket, or just keep a list of active sockets and dedicate a single instance of the state structure for each socket.
The simulators I have seen that handle multiple instances of the instruction set/processor are designed that way: a structure contains a complete set of registers, and an array of these structures (or pointers to them) is used to multiply them into multiple instances of the processor.
The Fibonacci sequence is a great 'hello world' app when starting with a new language. I want to make a pure machine program that will execute just that, without wasting any resources on an intermediary VM, unnecessary memory management, etc.
The best solution seems to be writing assembly code and compiling it to a native binary. But I've never worked with assembly language, so where is the best place to start?
I'm using iMac 64-bit dual-core x86 system.
It's fun working with assembly language, and it's a great way to learn more about the internal machinery. I'm not sure you are wasting that many resources using Objective-C to compute the Fibonacci sequence, but maybe you can prove me wrong.
To learn assembly, start with something really simple, then add more functions, inputs, and outputs to understand the system calls and function-call sequences, and then get more creative.
Be sure to document each line, as it's hard to maintain assembly.
For Mac OS X
Create a file called simple.asm :-
; simple.asm - exit
section .text
global simple ; make the main function externally visible
simple:
mov eax, 0x1 ; system call number for exit
sub esp, 4 ; OS X (and BSD) system calls need "extra space" on the stack
int 0x80 ; make the system call
Compile and Link it :-
nasm -f macho simple.asm
ld -o simple -e simple simple.o
Run it :-
asm $ ./simple
asm $ echo $?
1
There are a lot of free resources online for x86 assembly as well as the Intel 64-bit specific details.
http://en.wikibooks.org/wiki/X86_Assembly
http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html
Have a look at resources on system calls for the BSD kernel and the Mach kernel for OS X-specific system calls.
http://osxbook.com
http://www.freebsd.org/doc/en/books/developers-handbook/x86-system-calls.html
http://peter.michaux.ca/articles/assembly-hello-world-for-os-x
Have a look at linkers and loaders if you want to create libraries.
When using VxWorks as a development platform, we can't write our application with the standard main() function. Why can't we have a main function?
Before version 6.0, VxWorks supported only the kernel execution
environment for tasks and did not support processes, which are the
traditional application execution environment on OSes like Unix or
Windows. Tasks have an entry point, which is the
address of the code to execute as a task. This address corresponds to
a C or assembly function. It can be a symbol named "main" but there
are C/C++ language assumptions about the main() function that are not
supported in the kernel environment (in particular the traditional
handling of the argc and argv parameters). Furthermore, prior to
VxWorks 6.0, all tasks execute kernel code. You can picture the kernel
as a common repository of code all linked together and then you'll see
that you cannot have several symbols of the same name ("main") since
this would create name collisions.
Now this is accurate only if you link your application code to the
kernel image. If you were to download your application code, then the
module loader will accept loading several modules, each with a main()
routine. However the last "main" symbol registered in the system
symbol table is the only one you can access via the target shell. If
you want to start tasks executing the code of one of the first loaded
modules you'd have to use the addresses of the previous main()
function. This is possible but not convenient. It is far more
practical to give different names to the entry points of tasks (maybe
like "xxxStart" where "xxx" is a name meaningful for what the task is
supposed to do).
Starting with VxWorks 6.0 the OS supports a process environment. This
means, among many other things, that you can have a traditional main()
routine and that its argc and argv parameters are properly handled,
and that the application code is executing in a context (user context)
which is different from the kernel context, thus ensuring the
isolation between application code (which can be flaky) and kernel
code (which is not supposed to be flaky).