As I understand it, a BSP (Board Support Package) contains a bootloader, a kernel, and device drivers that help an OS work on the hardware. But I'm confused, because an OS also contains a kernel. So what is the difference between the kernel in the OS and the kernel in the BSP?
What a BSP comprises depends on context; generically, it is code or libraries to support a specific board design. That may be provided as generic code from the board supplier for use in a bare-metal system or for integration with an OS, or it may be specific to a particular OS, or it may even include an OS. In any case, it provides board-specific support for higher-level software.
A kernel is board-agnostic (though often processor-architecture specific) and makes no direct access to hardware that is not intrinsic to the processor architecture on which it runs. Typically an OS or application will require a Hardware Abstraction Layer (HAL); the HAL may well be built using the BSP, or the BSP may in fact be the HAL. A vendor may even package a HAL and OS together and refer to that as a BSP.
The term means whatever it means to whoever is using it - context is everything. For example, in VxWorks, Wind River uses the term BSP to refer to the layer that supports the execution of a VxWorks-based application on a specific hardware design. A board vendor, on the other hand, may provide a complete Linux distribution ported to the board and refer to that as a BSP.
However, and to whatever extent, a particular vendor or developer chooses to support a board, that support is a board support package, regardless of how much or how little it contains.
The definition of a BSP is broad: it is a supporting software package for a specific board. A BSP for a tiny microcontroller probably contains just hardware drivers for its peripherals. For a larger embedded CPU, on the other hand, it may contain hardware drivers, a bootloader, an OS kernel, and more.
So the kernel in a BSP (board support package) is just a specific version of an OS kernel that has been ported to your board.
I'm probably just repeating what has already been said.
You have a chip and/or board product you want to sell to other (software) developers. A reference design (board) with the chip(s) in question is used. The BSP is a vague term for the software provided to you, the software developer, to ideally make your life easier in using that product (chip and/or board) and adding your software to it or developing for it.

So if it is a platform capable of running Linux, an RTOS, or some other operating system, and the vendor (providing the BSP) believes that users want a specific operating system, then instead of you having to port the OS to that target, they do it for you. For something open source like Linux, either you are told which Linux sources to download and the BSP supplies the patches, or the BSP contains the complete sources with all patches already applied - plus drivers and applications as the vendor deems necessary. Multiple operating systems may be supported if the vendor feels that is needed to attract customers to the board/chip product.
The whole package of software that you get from them to turn that chip/board into your own product is the BSP.
The VxWorks kernel you run on a board contains the VxWorks core kernel plus "other components", which may change from one environment to another.
The core kernel contains essential facilities such as the scheduler, memory manager, basic file systems, security features, etc.
These "other components", which are part of the BSP, may be optional or may vary from system to system, and they support the core kernel features.
In simple words, the image displays the definition of a BSP. Please correct me if I'm wrong.
I would say that in a well-structured code base, the application layer should be abstracted from the lower layers by a HAL layer. This keeps the application layer portable if we want to migrate the system to a new board. If you find board- or CPU-specific logic in your application layer, you know you have broken that portability.
The bodies of the HAL functions should contain the board-specific code; this is where the BSP layer comes into play. When we port the system to a new board, code changes should happen in the HAL function bodies, while the HAL function declarations stay the same, so the application layer remains unchanged.
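A minimal sketch of that idea in C (all the names and the register address here are hypothetical, made up for illustration - not from any particular BSP):

```c
/* hal_led.h - HAL interface; these declarations never change across boards */
#ifndef HAL_LED_H
#define HAL_LED_H
void hal_led_init(void);
void hal_led_set(int on);
#endif

/* hal_led_boardA.c - board-specific bodies; swap this file when porting.
   BOARDA_LED_REG is a made-up memory-mapped register for illustration. */
#include "hal_led.h"
#define BOARDA_LED_REG (*(volatile unsigned int *)0x40020014u)
void hal_led_init(void) { BOARDA_LED_REG = 0u; }
void hal_led_set(int on) { BOARDA_LED_REG = on ? 1u : 0u; }

/* app.c - application layer; compiles unchanged for every board */
#include "hal_led.h"
int app_main(void)
{
    hal_led_init();
    hal_led_set(1); /* no board knowledge leaks in here */
    return 0;
}
```

Porting to board B then means writing a new hal_led_boardB.c; app.c is untouched.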
I have studied with interest the hardware virtualization extensions added by Intel and AMD to the x86 architecture (known as VMX and SVM, respectively). While this is still a relatively recent addition to x86 CPUs, my understanding is that the mainframe architecture has made extensive use of virtualization since the 1970s-80s, for instance in the form of the venerable z/VM operating system. Even nested virtualization has been used.
My question is: is there public documentation of the hardware facilities provided by the z/Architecture that the z/VM operating system uses to implement this virtualization? I.e., the control registers and data structures that the hardware implements to allow the hypervisor to simulate guest state and trap the necessary instructions? Another thing I am curious about is whether the z/Architecture supports second-level address translation (which was added later to VMX and SVM).
Just to get it out of the way: System/370 and all its descendants support virtualization as-is (they satisfy the virtualization requirements). In that sense, no special hardware support has ever been needed, as opposed to the Intel architecture.
The performance improvements for VM guests on System/370, XA, ESA, etc., all the way through z/Architecture, have traditionally been implemented using the DIAG (diagnose) instruction as well as microcode (now millicode) assists. In modern terms, this is more like paravirtualization. The facilities are documented; you can start here, for instance.
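To make the paravirtualization point concrete, here is a hedged sketch of one such DIAG-based hypervisor hint (assuming an s390/s390x guest built with the GNU toolchain; DIAGNOSE is a privileged instruction, so this only makes sense in supervisor-state code, and behaviour outside a hypervisor that implements this function code is environment-dependent):

```c
/* Sketch: DIAGNOSE 0x44 ("voluntary time-slice end") is a classic VM
 * guest hint - the guest tells the hypervisor it has nothing useful to
 * do right now (e.g. it is spinning on a lock held by another virtual
 * CPU), so the hypervisor can dispatch another guest instead of
 * burning the rest of this guest's time slice. */
static inline void vm_yield_timeslice(void)
{
    asm volatile("diag 0,0,0x44");
}
```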
Update - after reading extensive comments, a few notes and clarifications.
S/370 and its descendants never needed specialized hardware virtualization support to correctly run guest operating systems - not because virtualization was part of the initial design and requirements (it wasn't), but because the architecture was properly designed to support a secure multiuser environment. Popek and Goldberg's virtualization requirements are actually very weak - in essence, that only privileged instructions can affect the system configuration. This property was present even in S/370's predecessor, System/360, well before the first virtualized systems appeared.
Performance improvements for VM guests proceeded along two lines.
First, paravirtualization approach - essentially developing well-architected API for guest-hypervisor communication. It's been used not only for performance, but for a wide variety of other services such as inter-VM communication. The API is documented in the manual referred to above.
Second, microcode extensions (VM microcode assists) that performed some performance-sensitive hypervisor logic at the microcode level - essentially the hardware level. That is not paravirtualization; it is hardware virtualization support proper. But in early 370 machines this support was not architected, meaning it was model-dependent and subject to change. With 370/XA, IBM introduced a proper architectural way to support high-performance virtualization: the Start Interpretive Execution (SIE) instruction. This instruction is not documented in the Principles of Operation, but rather in a separate publication, IBM System/370 XA Interpretive Execution. (This document is referenced multiple times in the Principles of Operation. The link refers to the first version of the document; you can download version 2 here. I am not sure whether that publication was ever updated - this is probably the latest version.) Additionally, the I/O subsystem provided VM assists too.
I failed to mention the SIE instruction and the manual that documents it in my original answer, which is a crucial part of the story. I am grateful to the author of the question and to the extensive comments that prodded me to check my memory and realize that I had skipped an important bit of technical background. This presentation provides an excellent overview of z/VM core facilities, covering additional aspects including memory management, I/O, networking, etc.
The SIE instruction is how virtualization software accesses the z/Architecture Interpretive Execution Facility (IEF). The exact details of the interface have not been published since the early 1990s.
This is a hardware-based capability. IEF provides two levels of virtualization. The first level is used by firmware (via the SIE instruction) to create partitions. In each partition you can run an operating system. One of those operating systems is z/VM. It uses the SIE instruction (running within the context of the first level SIE instruction) to run virtual machines. You can run any z/Architecture operating system in a virtual machine, including z/VM itself.
The SIE instruction takes as input the description of a virtual server (partition or virtual machine). The hardware then runs the virtual server's instruction stream, stopping only when it needs help from whatever issued the SIE instruction, whether it be the partition hypervisor or the z/VM hypervisor.
I'm doing a project in C that runs on a target with the VxWorks operating system.
I would like to run my code on a PC as well, for two reasons:
The target hardware is not available yet, and I want to start testing my software.
Even when the target is ready, it will be easier for me to perform testing and simulations on a PC.
Is there some interesting way to do it?
Thanks.
You have three choices:
Use the VxWorks Simulator (vxsim) - it's part of the Workbench and can be accessed like a real target
Pros:
Easy to use
Integrated into workbench
Debug functionality and good control of the system
Doesn't need any further hardware
Documentation (check Wind River VxWorks Simulator User's Guide)
Cons:
Not the real target system (but this is a con for all the options here)
Use an x86 machine and boot e.g. through FTP
Pros:
You can test booting via the network, and the network itself
Cons:
The system may lack drivers
You may have to change the kernel
Debugging is not as good as with vxsim
The difference from your target may be really big
Use a virtual machine
Pros:
Runs on the same PC - no further hardware required
Possible to test several bootloaders
Cons:
Not possible to simulate the target CPU etc.
A VM is not the best way to test VxWorks
Like Archie, I recommend the VxWorks Simulator too.
A third way is to abstract the hardware and OS into a separate layer in your application architecture, and provide both PC and VxWorks versions of this layer; see the sketch below.
This is rather costly, of course, but it has other advantages, e.g. insulation from vendor instability (as when pSOS support was dropped years ago...). It might also nudge you in the direction of a nice, layered architecture.
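A minimal sketch of such a layer (the OSAL names are hypothetical; the VxWorks branch assumes the classic taskSpawn() API from taskLib with integer arguments, the PC branch uses POSIX threads):

```c
/* osal.h - tiny OS abstraction layer: one API, two implementations */
typedef void (*osal_entry_t)(void *arg);
int osal_task_spawn(const char *name, int priority,
                    osal_entry_t entry, void *arg);

#ifdef __vxworks
/* VxWorks implementation (classic 32-bit taskLib API assumed) */
#include <vxWorks.h>
#include <taskLib.h>
int osal_task_spawn(const char *name, int priority,
                    osal_entry_t entry, void *arg)
{
    /* taskSpawn() takes up to 10 integer args; the context pointer is
       passed as the first one. The 8192-byte stack size is arbitrary. */
    int tid = taskSpawn((char *)name, priority, 0, 8192,
                        (FUNCPTR)entry, (int)arg,
                        0, 0, 0, 0, 0, 0, 0, 0, 0);
    return (tid == ERROR) ? -1 : 0;
}
#else
/* PC (POSIX) implementation - priority is ignored for simplicity */
#include <pthread.h>
#include <stdlib.h>

struct trampoline { osal_entry_t entry; void *arg; };

static void *trampoline_run(void *p)
{
    struct trampoline t = *(struct trampoline *)p;
    free(p);                 /* heap copy no longer needed */
    t.entry(t.arg);          /* call the portable entry point */
    return NULL;
}

int osal_task_spawn(const char *name, int priority,
                    osal_entry_t entry, void *arg)
{
    pthread_t tid;
    struct trampoline *t = malloc(sizeof *t);
    (void)name; (void)priority;
    if (!t) return -1;
    t->entry = entry; t->arg = arg;
    return pthread_create(&tid, NULL, trampoline_run, t) ? -1 : 0;
}
#endif
```

Application code then calls only osal_task_spawn() and stays identical on both platforms.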
I am wondering if anyone has any information on development boards where you can utilize ARM TrustZone? I have the BeagleBoard-xM, which uses TI's OMAP3530 with a Cortex-A8 processor that supports TrustZone; however, TI confirmed that they have disabled the function on this board, as it is a general-purpose (GP) device.
Further research led me to the PandaBoard, which uses the OMAP4430, but there has been no response from TI and there is very little information on the internet. How do you learn how to use TrustZone?
Best Regards
Mr Gigu
As far as I know, all the OMAP processors you can get off the shelf are GP devices, i.e. with the TrustZone functions disabled (or else they are processors in production devices such as off-the-shelf mobile phones, for which you don't get the keys). The situation is similar with other SoC manufacturers. Apart from ARM's limited publications (which only cover the common ARM features anyway, and not chip-specific features such as memory-management details, booting, and loading trusted code), all documentation about TrustZone features comes under NDA. This is a pity, because it precludes independent analysis of these security features, or their leverage by open-source software.
I'm afraid that if you want to program for a TrustZone device, you'll have to contact a representative of TI or one of their competitors, convince them that your application is something they want to happen, and obtain HS devices, the keys to sign code for your development boards, and the documentation without which you'll have a very hard time.
As of today, OP-TEE runs on quite a few devices (see the list of OP-TEE supported platforms), and several of them are readily available development boards. To name a few: HiKey, Raspberry Pi 3, the ARM Juno board, Freescale i.MX6 variants, etc. Either pick up one of those, or simply try it all under QEMU, which is very well supported in OP-TEE.
You can get a 45-day trial version of ARM Fast Models. The Raspberry Pi is supposed to support TrustZone too. www.openvirtualization.org has a full open-source implementation for ARM TrustZone. ARM is moving away from its proprietary TrustZone APIs to the GlobalPlatform APIs. GlobalPlatform also defines the APIs for inter-process communication, etc.
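For a feel of what the GlobalPlatform side looks like from the Normal world, here is a hedged sketch using the standard TEE Client API; the UUID and command ID are made-up placeholders (a real trusted application publishes its own), everything else is the standard API:

```c
#include <stdio.h>
#include <tee_client_api.h>  /* GlobalPlatform TEE Client API */

/* Placeholder UUID - substitute the UUID of a real trusted application. */
static const TEEC_UUID ta_uuid =
    { 0x12345678, 0x1234, 0x1234, { 0, 0, 0, 0, 0, 0, 0, 1 } };

int main(void)
{
    TEEC_Context ctx;
    TEEC_Session sess;
    TEEC_Operation op = { 0 };
    uint32_t origin;

    if (TEEC_InitializeContext(NULL, &ctx) != TEEC_SUCCESS)
        return 1;
    if (TEEC_OpenSession(&ctx, &sess, &ta_uuid, TEEC_LOGIN_PUBLIC,
                         NULL, NULL, &origin) != TEEC_SUCCESS) {
        TEEC_FinalizeContext(&ctx);
        return 1;
    }

    /* One value parameter in; the result comes back in the same slot. */
    op.paramTypes = TEEC_PARAM_TYPES(TEEC_VALUE_INOUT, TEEC_NONE,
                                     TEEC_NONE, TEEC_NONE);
    op.params[0].value.a = 42;
    if (TEEC_InvokeCommand(&sess, 0 /* hypothetical command ID */,
                           &op, &origin) == TEEC_SUCCESS)
        printf("TA returned %u\n", op.params[0].value.a);

    TEEC_CloseSession(&sess);
    TEEC_FinalizeContext(&ctx);
    return 0;
}
```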
There are a few select boards at this time that do allow development with TrustZone. As far as general-purpose boards go, the FriendlyARM board is a good start (http://www.friendlyarm.net). Also, any board with a Cortex-A15 processor must have TrustZone available, due to the fact that the virtualization extensions can only be utilized from the Normal world. There may still be the question of whether the manufacturer has its own code running in the Secure world, but you can always try. The Arndale is a good development board, but unfortunately Samsung already has code running in the Secure world, so by the time you get control, you're running in the Normal world. So if you need Secure-world access, look for non-Samsung Cortex-A15 processors. That would be your best bet.
It's also worth noting that TI did not technically disable TrustZone. Instead, the boot ROM code transitions the processor into the Normal world before switching execution to U-Boot. So it is actually using TrustZone to move to the Normal world, but then doesn't provide a mechanism for moving back to the Secure world. To prove this, just try to read the SCR: you'll get an undefined-instruction exception, which is what typically happens in the Normal world. However, if you perform an SMC call, it will execute just as expected (i.e., it switches to the Secure world, but then just switches right back to the Normal world), so it looks like nothing happened.
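A hedged sketch of that probe (assuming an ARMv7-A core with the Security Extensions, GCC inline assembly, and privileged execution - e.g. from a small kernel module or bare-metal test, not user space):

```c
/* Reading SCR (Secure Configuration Register): MRC p15, 0, <Rt>, c1, c1, 0.
 * In the Normal world this raises an undefined-instruction exception,
 * which is exactly the behaviour described above. */
static inline unsigned int read_scr(void)
{
    unsigned int scr;
    __asm__ volatile("mrc p15, 0, %0, c1, c1, 0" : "=r"(scr));
    return scr;
}

/* SMC traps to the Secure-world monitor. On the OMAP parts described
 * above, the monitor simply returns, so to the caller it looks like
 * nothing happened. (.arch_extension sec tells GNU as to accept smc.) */
static inline void probe_smc(void)
{
    __asm__ volatile(".arch_extension sec\n\tsmc #0" ::: "memory");
}
```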
Regarding OpenVirtualization: it can be ported to ARM development boards such as the Samsung Exynos 4xxx series.
You will have access to all of the source code, including the secure OS, if you use OpenVirtualization.
But if you just want to develop programs that use TrustZone, I wonder whether that is necessary. Maybe there are standard drivers or APIs that let you do it without worrying about compiling your own secure OS?
The best thing you can do is contact parties like Gemalto and the people who brought us MobiCore. Note that they will indeed ask you to sign an NDA.
Secondly, you can buy the ARM DS-5 development suite. It comes with a lot of documentation, including some on TrustZone.
You should really take a look at the USB armory from Inverse Path: http://www.inversepath.com/usbarmory.html
It's built on open hardware and open source, with full access to TrustZone (you can blow an in-die fuse to enable secure boot): https://github.com/inversepath/usbarmory
They have successfully run Genode within the TrustZone Secure world and Linux in the Normal world.
I am involved in embedded software development for the telecom industry. I have no prior experience with such embedded hardware devices.
I got a network processor board, which features switching pipeline engines.
Besides the board, there is also an accessory board called a "piggy" (it seems to be for the Ethernet connection), and another serial line connection.
I am completely lost about these boards and serial line connections. What are they used for? I tried to use Google to find some useful introductions or materials but failed. Can anyone point out what this piggy board is used for? Any good references or books that explain this?
Thanks a lot!!
To develop for your embedded system you will need a development host (a PC or workstation that hosts the development tools including cross-compiler, platform libraries, debugger etc.), and a debug connection to the target (typically an in-circuit emulator or JTAG debugger, but in some cases debug over serial, USB or Ethernet may be supported via software running on the target - though that is less reliable since the code you are debugging may corrupt or break the debug stub running on the same target).
When you have got that together and can build, load and run code on the target, you may then be in a position to ask a more specific question. Writing code for this platform will depend on many things such as processor type, programming language, target operating system (if any), real-time performance requirements, regulatory standards, product type standards etc.
With respect to how to access your specific hardware, then no one can tell you that without access to the documentation and hardware schematics, and you cannot do anything with it yourself without that. Some knowledge of electronics will be a distinct advantage in most cases.
I know little about MCUs and embedded systems.
A year ago we contracted a company to design a special-purpose MP4 device based on the SigmaTel STMP3650 kit. Now we have all the source code for the firmware (code and resources, around 1 GB).
My questions are:
Can we use this code to run on other STMP3xxx-family devices (with acceptable modification, of course)? What about other ARM9-based devices?
"ARM9" defines the processor core (and even then there are variants; yours is the ARM926EJ-S), but most on-chip peripherals and support hardware, including clocks, PLLs, and the interrupt controller, are vendor-specific, so you would have to port your hardware initialisation and device-driver code, and make sure you choose a device with a peripheral set comparable to the one your current code uses.
Moreover, if the code is written in C or C++ rather than assembler, much of it may be portable to other architectures, particularly if the application layer and hardware abstraction layer are well defined.
Another issue may be whether your existing implementation relies on any particular OS or RTOS; you may need to select a device that supports the same OS in order to reduce the porting effort.
Finally, a non-programming point, but just to keep you out of trouble: you need to be sure that you own the rights to the code you intend to reuse, and that the original client has no claim on it.
Your logical successor chip is the i.MX233 from Freescale, for a couple of reasons.
The STMP3650 led to the STMP3780 at SigmaTel - same CPU core (ARM9EJ-S), mostly the same architecture and registers. Then SigmaTel was sold to Freescale, and they simply copied the STMP3780 to... the i.MX233. Identical silicon.
We have a fully fledged MP3/MP4 player based on the STMP3650 (see the bones.ch website) and are now transferring our R&D to the i.MX233. How is your project doing by now? How well did the STMP3650-based design run? Do you have any chip stock left?