How to find a pre-defined constant's value? - bbc-microbit

I'm trying to find the values of several C++ constants that are useful for coding the V1 micro:bit's ADC, such as ADC_ENABLE_ENABLE_Enabled (in this article).
I believe #include "MicroBit.h" refers to this repo, https://github.com/lancaster-university/microbit/tree/master/inc, but I can't find the ADC definitions!

The definitions for these macros come from deep in the low-level libraries for the microcontroller used on the BBC micro:bit, the Nordic nRF51822. Higher-level libraries typically include these files (or the build system might) to implement more intuitive and easier-to-use APIs. I am not sure where these files are on your system, as that depends on your build environment, but the source is on GitHub:
https://github.com/NordicSemiconductor/nrfx/blob/7eca6c2dc02b24cbdaa3ba0e63a7195e34ebe07c/mdk/nrf51_bitfields.h#L163
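For reference, those headers define each register field as a *_Pos shift amount, a *_Msk mask and a set of named values; you combine them by shifting the named value into place. A minimal sketch of typical usage (register and macro names as they appear in Nordic's nrf51.h and nrf51_bitfields.h; verify against your SDK version):

    #include "nrf51.h"            /* NRF_ADC peripheral definition */
    #include "nrf51_bitfields.h"  /* ADC_* field macros */

    /* Configure and enable the nRF51 ADC using the bitfield macros.
     * Each named value is shifted into place with its matching _Pos macro. */
    static void adc_init(void)
    {
        NRF_ADC->CONFIG = (ADC_CONFIG_RES_10bit << ADC_CONFIG_RES_Pos)
                        | (ADC_CONFIG_REFSEL_VBG << ADC_CONFIG_REFSEL_Pos)
                        | (ADC_CONFIG_PSEL_AnalogInput2 << ADC_CONFIG_PSEL_Pos);

        NRF_ADC->ENABLE = ADC_ENABLE_ENABLE_Enabled << ADC_ENABLE_ENABLE_Pos;
    }

(ADC_ENABLE_ENABLE_Enabled itself is just the integer 1; the matching *_Pos macro says which bit it occupies.)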

Related

How to make a normal C library work with embedded environment?

I was recently asked about how to use a C library (Cello in this case) in an embedded environment, but I'm not sure how to go about that.
Is it correct to say that if a library can be compiled in the embedded environment, it can be used?
Should I care about making the library more lightweight or something like that?
Any suggestions are appreciated.
To have it compile is the bare minimum. Notably, most embedded systems are freestanding implementations, such as microcontroller and RTOS applications. Compilers for freestanding systems need not provide all standard library headers; the only mandatory ones are (C17 4/6):
<float.h>, <iso646.h>, <limits.h>, <stdalign.h>, <stdarg.h>, <stdbool.h>,
<stddef.h>, <stdint.h>, <stdnoreturn.h>
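As a quick litmus test, code that sticks to those headers ports easily, and you can detect a hosted environment with the standard __STDC_HOSTED__ macro. A minimal sketch:

    #include <stdint.h>   /* freestanding: always available */
    #include <limits.h>   /* freestanding: always available */

    #if __STDC_HOSTED__
    #include <stdio.h>    /* hosted only: may simply not exist on bare metal */
    #endif

    /* Uses only mandatory headers, so it compiles on both hosted and
     * freestanding implementations. */
    uint32_t saturating_add_u32(uint32_t a, uint32_t b)
    {
        return (b > UINT32_MAX - a) ? UINT32_MAX : a + b;
    }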
In addition, the embedded system need not support floating-point arithmetic. Some systems implement software floating point, but relying on that is very bad practice. If your MCU does not have an FPU, you should not be using floating-point arithmetic, or you picked the wrong MCU for the task, period.
"I need to represent this number with decimals internally or to the user" is not a valid reason for using floating point. Fixed point arithmetic should be used for that. You only need floating point if you are to use math libraries like math.h and more advanced math.
Traditionally, embedded-system compilers have been slow to adopt the latest C standard. It's been quite a while since the C11 release now, though, so at the moment all useful compilers have caught up with it (C17 only contains minor changes, so we can likely ignore that one). Historically, embedded compilers have been horribly bad at this, so remain sceptical. There shouldn't be any reason to pick a compiler without C11 support for new product development.
Summary for getting the lib to compile (bare minimum):
Does the library use hosted system headers, and if so does the embedded compiler support them?
Does the library use floating point, and if so, does the target system have an FPU, or at least a software floating-point lib?
Does the library rely on the latest C standards and if so does the embedded compiler support them?
With that out of the way, you have to consider whether the library is written to be portable at all. Did they take care with things like integer types, enums and alignment? Are they using stdint.h, or "sloppy typing" with int all over the place? Did they consider endianness? Is the lib using dynamic allocation, which is banned in most embedded systems? Is it compatible with industry standards like MISRA C? And so on.
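One concrete instance of the endianness point: a portable library never dumps a multi-byte integer or a struct straight onto the wire, but serializes it byte by byte, so the result is identical on big- and little-endian targets. A sketch (the function name is mine):

    #include <stdint.h>

    /* Write a 32-bit value in big-endian (network) order, one byte at a
     * time. Behaves identically on little- and big-endian CPUs, and has
     * no alignment requirements on the destination buffer. */
    static void put_u32_be(uint8_t *out, uint32_t v)
    {
        out[0] = (uint8_t)(v >> 24);
        out[1] = (uint8_t)(v >> 16);
        out[2] = (uint8_t)(v >> 8);
        out[3] = (uint8_t)(v >> 0);
    }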
Then there are optimizations to consider on top of that. Optimizing code for microcontrollers is very different from optimizing code for PC CPUs.
A brief glance at the various "compiler switches" (#ifdef) present usually gives a clue about how portable the code is. Looking (very briefly) at this Cello lib, they seem to have considered porting between mainstream x86 systems, but that's it. You would have to rewrite pretty much the whole lib to port it to an embedded system. The work effort depends on how alien the target CPU is compared to x86. Porting to a high-end, little-endian Cortex-A might not require much effort. Porting to some low-end crap MCU would require a monumental effort.
Code portability is a big topic and requires very competent C programmers. Making the very same code run on, say, both an x86-64 and a crappy 8-bit MCU is not a trivial task.
Professional libs like protocol stacks usually come with a system port for a specific MCU, where they have taken not just generic portability into account, but also the specific system.
Not all libraries that can be compiled can be used in embedded environments. Libraries that use malloc and free (or their C++ counterparts) are dangerous and should therefore be handled with care: they can result in nondeterministic behaviour when memory allocations fail.
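The usual embedded alternative is a statically allocated pool with a fixed number of slots, so the worst-case memory use is known at link time and "allocation" can only fail in one well-defined, testable way. A minimal sketch (the type and names are illustrative):

    #include <stddef.h>

    #define MAX_CONNECTIONS 8

    struct connection { int id; /* ... */ };

    /* All storage reserved at compile time: no heap, no fragmentation. */
    static struct connection pool[MAX_CONNECTIONS];
    static unsigned char in_use[MAX_CONNECTIONS];

    /* Returns a free slot, or NULL once the fixed budget is exhausted;
     * the caller must handle that case explicitly. */
    static struct connection *connection_alloc(void)
    {
        for (size_t i = 0; i < MAX_CONNECTIONS; ++i) {
            if (!in_use[i]) {
                in_use[i] = 1;
                return &pool[i];
            }
        }
        return NULL;
    }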
It is possible that the standard C library could be wholly compiled for embedded devices, but that doesn't mean you'll have much use for printf or scanf. So a better question than whether you can compile it is whether you should use it. Cello seems like a fun experiment but isn't a stable platform to develop something real on. It can be done, though; an example of that is the Espruino.
Most of the time it is a bad idea to rewrite a library to be "lightweight" or, more importantly for an embedded environment, statically allocated. You are probably not as smart as those people, or you won't put in the time needed to create a complete, functional embedded fork that is as stable as the original, let alone better. Don't be dissuaded from a fun little side project, but don't depend on it for a real project.
Another problem could be that the library is simply too big for your microcontroller. The ATmega32A only has 32 KB of programmable flash. To take a C++ example off the top of my head: Boost won't fit in that space, for all the highly useful tools it provides.

What is the difference between libc++ and libc++abi library in LLVM?

I saw that the two projects are quite related, but what are the differences between them? The official webpage doesn't say much about it.
I know that an ABI (Application Binary Interface) provides a low-level binary interface across platforms. So is libc++abi used to provide different implementations for different platforms, and a general interface for libc++?
It would be better to give some specific examples, e.g. what is included in libc++abi and what in libc++.
Thanks.
The ABI library is intended to provide certain low-level functions from which to build the C++ standard library: exception handling, RTTI support, name demangling and the like. It is a supporting library, a separate component from the actual standard library. Along with libc++abi, you may also come across PathScale's libcxxrt or GCC's libsupc++.
libc++, on the other hand, is an implementation of the C++ standard library itself, and it can be built on top of any of the three ABI libraries mentioned above.
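To make the split concrete, here is a small C++ sketch that touches both layers: the container comes from libc++, while the demangler, declared in <cxxabi.h>, is implemented by the ABI library.

    #include <cxxabi.h>   // ABI-library interface (libc++abi, libsupc++, libcxxrt)
    #include <typeinfo>
    #include <cstdio>
    #include <cstdlib>
    #include <vector>     // implemented by libc++, the standard library proper

    int main()
    {
        std::vector<int> v;   // libc++ code

        // abi::__cxa_demangle is implemented by the ABI library, not libc++.
        int status = 0;
        char *name = abi::__cxa_demangle(typeid(v).name(), nullptr, nullptr, &status);
        if (status == 0) {
            std::printf("%s\n", name);  // e.g. std::__1::vector<int, std::__1::allocator<int>>
            std::free(name);            // buffer is malloc'd by the demangler
        }
        return 0;
    }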

Import compiled code into C/C++ source code for microcontroller

We'd like to offer a compiled library that implements a protocol layer, to be imported into C/C++ source-code projects for microcontrollers, eventually exposing a set of compiled functions to the client project; let's say a sort of "DLL". Is there any known technique to realize something similar?
While it is possible to provide functions via a library, generally in the microcontroller/embedded realm it quickly becomes impractical.
Each microcontroller core will have a unique instruction set. Further, micros from the same family may have a variety of extensions which may or may not be supported... So you're left providing a library file for each individual microcontroller (from each vendor) that you'd like to support.
But...
In my experience, calling conventions between compilers are not the same, so a library compiled by one toolchain cannot be linked against object files created by another toolchain.
That leads you to providing a library for each individual micro, from each vendor, for each toolchain someone might use. Ick. Oh, and don't rely on OS calls either, as you don't know what you'll be linked with...
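If you do ship binaries despite all that, one common mitigation is to keep the public API plain C, with extern "C" linkage, opaque handles and fixed-width types, since a C ABI is far more stable across toolchains than a C++ one. A sketch of such a header (all names invented for illustration):

    /* protocol.h - public API of the compiled protocol library.
     * Plain C: no C++ types, no structs passed by value, only opaque
     * handles, to minimize ABI differences between toolchains. */
    #include <stdint.h>
    #include <stddef.h>

    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct proto_ctx proto_ctx;   /* opaque: layout stays private */

    proto_ctx *proto_open(void);
    int        proto_send(proto_ctx *ctx, const uint8_t *buf, size_t len);
    void       proto_close(proto_ctx *ctx);

    #ifdef __cplusplus
    }
    #endif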
A more conventional approach is the one RTOS vendors tend to use: provide the source, and protect your IP with licensing terms. The reality is that if your end users want to, they can step through the assembly and figure out exactly what is happening, so you're not hiding your implementation all that well anyway.

Creating your own custom libraries in iOS?

I'm fairly new to programming and want to start programming more efficiently. Try as I may, I often find myself straying from the MVC model.
I was wondering: are there any tips or methods for keeping your code organized when coding in Xcode with Objective-C? To be more specific (I know you guys like that :) I want to
Be able to write libraries or self-contained code that I can bring from one project to another
Share my code with others as open sourced projects
Prevent myself from writing messy code that does not follow proper structure
Use a high warning level. Build cleanly.
Remove all static analyzer issues.
Write some unit tests.
Keep the public interfaces small.
Specify your library's dependencies (e.g. minimum SDK versions and dependent libraries).
Compile against multiple/supported OS versions regularly.
Learn to create and manage static library targets. This is all you should need to support and reuse the library in another project (unless you drag external resources into the picture, which becomes a pain).
No global state (e.g. singletons, global variables).
Be precise about support in multithreaded contexts (more commonly, that concurrency shall be the client's responsibility).
Document your public interface (maybe your private one too…).
Define a precise and uniform error model.
You can never have enough error detection.
Set very high standards -- Build them for reuse as reference implementations.
Determine the granularity of the libraries early on. These should be very small and focused.
Consider using C or C++ implementations for your backend/core libraries (that stuff can be stripped); see the sketch after this list.
Do establish and specify prefixes for your library's Objective-C classes and categories. Use good prefixes, too.
Minimize visible dependencies (e.g. don't #import tons of frameworks which could be hidden).
Be sure it compiles without the client needing to add additional #imports.
Don't rely on clients putting things in specific places, or that resources will have specific names.
Be very conservative about memory consumption and execution costs.
No leaks.
No zombies.
No slow blocking operations on the main thread.
Don't publish something until it's been well tested and has been stable for some time. Bugs break clients' code, and clients are less likely to reuse your library if it keeps breaking their programs.
Study, use, and learn from good libraries.
Ask somebody (ideally, who's more experienced than you) to review your code.
Do use/exercise the libraries wherever appropriate in your projects.
Fix bugs before adding features.
Don't let that scare you -- it can be really fun, and you can learn a lot in the process.
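Regarding the C/C++ core-library point above: the usual shape is a small C-linkage facade over a C++ implementation, which Objective-C code can call without any C++ types leaking into public headers. A minimal sketch (names are illustrative):

    // geometry_core.h - plain C facade, usable from Objective-C as-is.
    #ifdef __cplusplus
    extern "C" {
    #endif
    double gc_polygon_area(const double *xs, const double *ys, int n);
    #ifdef __cplusplus
    }
    #endif

    // geometry_core.cpp - the C++ implementation behind the facade.
    #include <cmath>

    extern "C" double gc_polygon_area(const double *xs, const double *ys, int n)
    {
        // Shoelace formula over the polygon's vertices.
        double acc = 0.0;
        for (int i = 0, j = n - 1; i < n; j = i++)
            acc += (xs[j] + xs[i]) * (ys[j] - ys[i]);
        return std::fabs(acc) / 2.0;
    }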
There are a number of ways you can reuse code:
Store the code in a common directory and include that directory in your projects. Simple, but can have versioning issues.
Create a separate project which builds a static iOS library, and then create a framework from it. More complex to set up, because it involves scripting to build the framework directory structure, but easy to use in other projects, and it can handle versioning and combined device/simulator libs.
Create a separate project which builds a static iOS library and then include this as a subproject in other projects. Avoids having to build frameworks and the results can be more optimised.
Those are the basic three; there are of course a number of variations on them and on how you go about them. A lot of what you decide to do will come down to who you are doing this for. For example, I like subprojects for my own code, but for code I want to make available to others, I think frameworks are better, even if they are more work to create. Plus, I can then wrap them up with docsets of the API documentation and upload the whole lot as a DMG to GitHub for others to download.

Is it possible to embed LLVM Interpreter in my software and does it make sense?

Suppose I have a piece of software and I want to make cross-platform plugins for it. You compile the plugin for a virtual machine, and any platform running my software would be able to run this code.
I am wondering if it is possible to use the LLVM interpreter and bitcode for this purpose. Also, I am wondering whether it makes sense to use LLVM for this rather than something else, i.e. is this what LLVM was made for?
I'm not sure that LLVM was designed for it. However, I doubt there is anything that hasn't been done using LLVM.¹
Other virtual-machines based script engines are specifically created for the job:
Lua is very popular
Wikipedia lists some other Extension/embeddable languages under the Scripting language entry
If you're looking for embeddable virtual machines:
IKVM supports embedding JVM and CLR in a bridged mode (interoperable)
Parrot supports embedding (and includes a Python interpreter; mind you, you can just run python bytecode images)
Perl has similar architecture and supports embedding
JavaScript supports embedding (I'm not sure about the architecture of V8, but I'd guess it uses a virtual machine)
Mono's CLR engine supports embedding: http://www.mono-project.com/Embedding_Mono
¹ including compiling C++ to JavaScript to run in your browser...
There is VMIR (https://github.com/andoma/vmir) which is a LLVM bitcode interpreter / JIT engine that's intended to be embedded into other apps.
Disclaimer: I'm the author, and it's still a work in progress, but it works reasonably well.
In theory, there exists a limited subset of LLVM IR that can be portable across various platforms: you must not specify alignments, you must not bitcast pointers to integral types, you must avoid intrinsics, etc. This means you cannot immediately use code generated by a stock C compiler (llvm-gcc, Clang, whatever) unless you specify a limited target for it and implement sanitising LLVM passes. Another issue is that the bitcode format from different LLVM versions is not guaranteed to be compatible.
In practice, I would not go there. Mono is a reasonably small, embeddable, fast VM, and the whole .NET stack of tools is available for it. The VM itself is pretty low-level (as long as you do not care about verifiability).
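For scale, the core Mono embedding sequence is only a few calls (per the embedding API linked above; simplified, with no error handling, and the assembly name is a placeholder):

    #include <mono/jit/jit.h>
    #include <mono/metadata/assembly.h>

    int main(int argc, char *argv[])
    {
        // Initialize the runtime and load the plugin assembly.
        MonoDomain   *domain   = mono_jit_init("plugin-host");
        MonoAssembly *assembly = mono_domain_assembly_open(domain, "Plugin.dll");

        // Run the assembly's entry point (Main), passing argv through.
        int rc = mono_jit_exec(domain, assembly, argc, argv);

        mono_jit_cleanup(domain);
        return rc;
    }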
LLVM includes an interpreter, so if you can build this interpreter for your target platforms, you can then evaluate LLVM bitcode on the fly.
It's apparently not very fast, though.
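For orientation, embedding that interpreter through the stable LLVM-C API looks roughly like this (error handling trimmed; the bitcode file name and function name are placeholders):

    #include <llvm-c/Core.h>
    #include <llvm-c/BitReader.h>
    #include <llvm-c/ExecutionEngine.h>

    int main(void)
    {
        LLVMLinkInInterpreter();  // force the interpreter to be linked in

        // Load a bitcode file, e.g. one produced by `clang -emit-llvm -c`.
        LLVMMemoryBufferRef buf;
        char *err = 0;
        LLVMCreateMemoryBufferWithContentsOfFile("plugin.bc", &buf, &err);

        LLVMModuleRef mod;
        LLVMParseBitcode2(buf, &mod);

        // Evaluate a function from the module in the interpreter.
        LLVMExecutionEngineRef ee;
        LLVMCreateInterpreterForModule(&ee, mod, &err);

        LLVMValueRef fn = LLVMGetNamedFunction(mod, "plugin_entry");
        LLVMGenericValueRef result = LLVMRunFunction(ee, fn, 0, 0);

        (void)result;  // inspect via LLVMGenericValueToInt() etc.
        LLVMDisposeExecutionEngine(ee);
        return 0;
    }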
In their classic discussion about LLVM vs. libJIT (one you do not want to miss if you're a fan of open source, LLVM and compilers), which happened long before LLVM became famous and established, the author of libJIT, Rhys Weatherley, raised this particular issue: he stated that LLVM is not suitable for embedding, while Chris Lattner, the author of LLVM, stated otherwise: that it is modular and can be used in any possible fashion, including embedding only the parts you need.