How to write sensor libraries from scratch [closed] - embedded

Can someone explain to me how I can write a sensor library from scratch? I read the datasheet and some Arduino libraries, but I did not understand how they were written.

It's not a trivial task to write a library for embedded projects. Most of the time, it's almost impossible to write a completely generic one that satisfies everyone's needs.
Don't let Arduino library examples fool you. Most of them are not designed and optimized for real world applications with strict timing constraints. They are useful when reading that sensor is the only thing your embedded system does. You can also use them sequentially in a master loop when blocking read operations are not a concern.
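A minimal sketch of that sequential master-loop style, with hypothetical read_temperature() and read_pressure() helpers standing in for blocking Arduino-style library calls and printf standing in for user communication:

#include <stdint.h>
#include <stdio.h>

/* hypothetical blocking reads - each waits until the sensor answers */
static uint16_t read_temperature(void) { return 231; }   /* e.g. 23.1 C   */
static uint16_t read_pressure(void)    { return 1013; }  /* e.g. 1013 hPa */

int main(void)
{
    for (;;) {
        uint16_t t = read_temperature();  /* nothing else can run while this blocks */
        uint16_t p = read_pressure();
        printf("T=%u P=%u\n", t, p);      /* stands in for the user communication   */
    }
}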
Complex embedded applications don't fit into this scheme. Most of the time, you need to execute more than one task at the same time, and you use interrupts and DMA to handle your sensor streams. Sometimes you need to use an RTOS. Timing constraints can be satisfied by using the advanced capabilities of STM32 Timer modules.
Connecting timers, DMAs, interrupts and communication (or GPIO) modules together so that they work in harmony is not easy (add an RTOS on top, if you use one), and it's almost impossible to generalize. Here is a list of examples that come to mind:
You need to allocate channels for DMA usage. Your library must be aware of the channel usage of other libraries to avoid conflicts.
TIM modules are not all the same. They may have different numbers of I/O pins. Some specific peripherals (like the ADC) can be triggered by some TIM modules but not by others. There are constraints if you want to chain them; you can't just take one timer and connect it to any other one.
The library user may want to use DMAs or interrupts. Maybe even an RTOS. You need to create different API calls for all possible situations.
If you use RTOS, you must consider different flavors. Although the RTOS concepts are similar, their approaches to these concepts are not the same.
HW pin allocation is a problem. In Arduino libraries, the library user just says "use pins 1, 2, 3 for the SPI". You can't do this in a serious application. You need to use pins which are connected to hardware modules, but you also need to avoid conflicts with other modules.
Devices like the STM32 have a clock tree, which affects the clocks of each peripheral module. Your library must be aware of the clock frequency of the module it uses. Low-power modes can change these settings and break a library which isn't flexible enough for such changes. Some communication modules have more complicated timing settings - the CAN bus module, for example, needs a complex calculation for both bit rate and bit sampling position.
[And probably many more reasons...]
This is probably why the uC vendors provide offline configuration & code generation tools, like CubeMX for the STM32. Personally I don't like them and I don't use them, but I must admit that I still use the CubeMX GUI to determine HW pin allocations, even though I don't use the code it generates.
It's not all hopeless if you only want to create libraries for your own use and your own programming style, because you can define the constraints precisely from the start. I think creating libraries is easier in C++ compared to C. While working on different projects, you slowly create and accumulate your own code snippets, and with some experience these can evolve into easily configurable libraries. But don't expect that someone else will benefit from them as much as you do.
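As a rough illustration of that last point, here is a minimal sketch of the kind of personal library that can grow out of accumulated snippets. The sensor, its registers and the bus callbacks are all hypothetical; the only idea is that the bus access is injected by the application, so the same library code can sit on top of a polled, interrupt-driven or DMA-fed transfer:

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    /* supplied by the application: how bytes actually move on this board */
    bool (*xfer)(const uint8_t *tx, uint8_t *rx, uint16_t len);
    void (*cs_select)(bool selected);            /* chip-select control */
} sensor_bus_t;

typedef struct {
    sensor_bus_t bus;
} sensor_t;

/* blocking register read - fine in a simple super-loop */
bool sensor_read_regs(sensor_t *s, uint8_t reg, uint8_t *out, uint16_t len)
{
    uint8_t cmd = (uint8_t)(reg | 0x80u);        /* hypothetical "read" bit */
    bool ok;

    s->bus.cs_select(true);
    ok = s->bus.xfer(&cmd, NULL, 1) &&           /* send the register address */
         s->bus.xfer(NULL, out, len);            /* clock out the reply       */
    s->bus.cs_select(false);
    return ok;
}

A non-blocking variant would take a completion callback instead of returning the data directly, which is where the API choices discussed above start to multiply.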

Related

About embedded firmware development [closed]

In the past few days I have learned how important the RTOS layer on top of the embedded hardware is.
My question is: is there any bifurcation between a device driver (written in C and burned directly onto the microcontroller) and a Linux device driver?
This question is a little broad, but an answer, a little broad itself, can be given.
The broadness comes from the fact that "embedded hardware" is not a precise term. That hardware ranges from 4-bit microcontrollers, or 8-pin ones, up to big CPUs which have many points in common with the processors typically used on Linux machines (desktops and servers). Linux itself can be tailored up to the point that it no longer resembles a normal operating system.
Anyway, a few generally acceptable points can be made. Linux is not, in its "plain" version, a real-time operating system - with the term RTOS, instead, the "real time" part is implied. So this can be one bifurcation. But the most important thing, I think, is that embedded firmware tries to address the hardware and the task to be done without anything else added. The Linux OS instead is general purpose - it offers a lot of services and functionality that, in many cases, are not needed and only bring more cost, lower performance, and more complication.
Often, in a small or medium embedded system, there is not even a "driver": the hardware and the application talk directly to each other. Of course, when the hardware is (more or less) standard (like a USB port, an Ethernet controller, a serial port), the programming framework can provide ready-to-use software that is sometimes called a "driver" - but very often it is not a driver, just a library with a set of functions to initialize the device and exchange data. The application uses those library routines to manage the device directly. The OS layer is not present or, if the programmer wants to use an RTOS, they must check that there are no problems.
A Linux driver is not targeted at the application, but at the kernel. And the application seldom talks to the driver - it uses instead a uniform language (typically the "file system idiom") to talk to the kernel, which in turn calls the driver on behalf of the application.
A simple example I know very well is a serial port. Under Linux you open a file (maybe /dev/ttyS0), use some IOCTLs and the like to set it up, and then start to read and write to the file. You don't even care that there is a driver in the middle, and the driver was written without knowledge of the application - the driver only interacts with the kernel.
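For concreteness, this is roughly what that looks like in C - a minimal sketch with error handling trimmed, and the device path and baud rate only as examples:

#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                       /* raw bytes, no line discipline */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    write(fd, "hello\r\n", 7);             /* the driver behind the fd does the real work */
    char buf[64];
    ssize_t n = read(fd, buf, sizeof buf);
    printf("got %zd bytes\n", n);
    close(fd);
    return 0;
}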
In many embedded cases, instead, you set up the serial port by writing directly to the hardware registers; you then write two interrupt routines which read from and write to the serial port, getting and putting data from/into RAM buffers. The application reads and writes data directly to those buffers. Special events (or not so special ones) can be signaled directly from the interrupt handlers to the application. Sometimes I implement the serial protocol (checksum, packets, sequences) directly in the interrupt routine. It is faster, simpler, and uses fewer resources. But clearly this piece of software is no longer a "driver" in the common sense.
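A bare-metal sketch of that scheme: the RX interrupt fills a ring buffer and the application drains it. UART_DATA and its address are hypothetical placeholders - every part has its own register map and its own way of clearing the interrupt:

#include <stdint.h>

#define UART_DATA (*(volatile uint8_t *)0x40001004u)  /* hypothetical data register */

#define RX_BUF_SIZE 64u                               /* power of two for cheap wrap */
static volatile uint8_t  rx_buf[RX_BUF_SIZE];
static volatile uint32_t rx_head, rx_tail;

/* hooked into the vector table; runs once per received byte */
void uart_rx_isr(void)
{
    uint8_t byte = UART_DATA;                         /* reading clears the request   */
    uint32_t next = (rx_head + 1u) & (RX_BUF_SIZE - 1u);
    if (next != rx_tail) {                            /* drop the byte if buffer full */
        rx_buf[rx_head] = byte;
        rx_head = next;
    }
}

/* called by the application; returns -1 when nothing is waiting */
int uart_getc(void)
{
    if (rx_tail == rx_head)
        return -1;
    uint8_t byte = rx_buf[rx_tail];
    rx_tail = (rx_tail + 1u) & (RX_BUF_SIZE - 1u);
    return byte;
}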
Hope this answer explains at least a part of the whole picture, which is very large.

Udacity: Functional Hardware Verification. What are the implementations? [closed]

I've just finished watching the first week of the Functional Hardware Verification course at Udacity. The promo says it requires both electrical engineering and software engineering skills, which is why it intrigued me. But I got the impression that it is only for chip designers. Am I wrong?
Can it also be useful for embedded software development?
For example, how can I utilize this information with a Raspberry Pi and/or an Android device? Is it possible, or am I wasting my time with this course?
I'll be happy if someone could give me some insight.
1) Is this a Stack Overflow question? Not sure, we'll see what happens.
2) Is this a waste of time? I would argue no; it's free, just watch and learn something.
3) Does it apply to embedded on a Raspberry Pi or Android? Probably not. It depends on your definition of embedded, first off: if you are making API/library calls to an operating system or environment, that is just writing applications. If you are digging down into the bare metal, it gets closer, but not quite. It becomes much more relevant if you work (or will work) somewhere where software developers work hand in hand with the people designing chips, FPGAs, CPLDs, etc., and the company is willing to move into the 1990s or 2000s and let the software developers develop against the RTL in simulation, with access to the simulator licenses (very expensive; it doesn't take many Cadence licenses to shadow your salary).
During the chip development phase (versus the post-silicon phase, where you sell the chips for a while before the next chip starts) we build our own sims using the kinds of things I assume this class is teaching - but perhaps not, since the traditional testing methods are not quite what I am talking about. We sim portions of the chip or the whole chip, and use the "foreign language interface" or whatever the vendor calls it to interface software to the simulated hardware. We do it using an abstraction layer in such a way that a high percentage of the code we write against sim will run against silicon using a different abstraction layer/shim. This can give months to a year or more of head start on the software, and can find bugs in the logic design as well as in the design of the hardware/software interface (are interrupts a good idea, can we poll fast enough, use DMA, etc.).
Cadence is of course going to push their product and their ways of doing things, even though their products support a wide range of features. There are other tools. I am a fan of Verilator: open source, free. But it is very particular about the Verilog it supports, mostly synthesizable only, and that is fine by me; so, depending on the Verilog author you are relying on, they may not have habits that match what Verilator is looking for. Icarus Verilog is much more forgiving but very slow. Verilator is like 10 times slower than Cadence, Icarus is slower than that. But free is free... There are a number of things at OpenCores, for example, that you can play with if you want to see this in action; amber, mpx, and altor32 are ones I have played with, for example.
If you land one of these chip/board company jobs, then familiarity with simulators like Cadence and ModelSim and the free tools (Verilator, Icarus, GTKWave, GHDL, etc.) is valuable. So is being able to read Verilog and/or VHDL (which is not hard if you are already a programmer; the only new thing is that some of the code is truly "parallel", which, with multi-core processors today, is not actually new for a programmer). If you are able to interface software to the simulator, then you are an asset to that company, because you can facilitate this development against the simulated hardware and save the company money in units of multiples of your salary, with bugs found before silicon and the schedule shortened by many months.
Lastly, being able to look at waveforms and see your code execute is an addictive experience.
Just like learning bare metal, or assembly, getting familiar with hardware at this next lower level can only help you as a programmer, even if the experience is with logic or processors that are not the ones you are programming. Remember that, just as with programmers, take any N designers given the same task and they may come up with anywhere up to N different solutions. Just because one implementation of a MIPS clone has certain details does not mean all MIPS clones, nor all processors, look like that on the inside.

Microcontroller interfacing [closed]

I'm new to the world of embedded programming and I'm looking for information about interfacing with a microcontroller using I2C, USB, UART, CAN, etc. Does anybody know any good links, books, or tutorials about this subject? Since I'm a real newb on this subject, I'd prefer something as basic as possible.
Since you are already a desktop developer, you can probably jump in somewhere in the middle. Download the user manual for the controller you plan to use. Get the example code from the manufacturer's site (and similar sources) for one of the simpler peripherals - the UART is a good one.
Get a development board and an Eclipse/Crossworks/whatever development system that supports your board, and get something to work - a flashing LED, a UART echo or the like. Don't try to use a multitasker and interrupt-driven drivers first off - just poll the UART with as few lines of code as possible. Why? Because just getting a development setup to compile, link, download and run one line of code is a considerable exercise in itself, without any complications from complex code. You have your development setup and hardware to debug first, before you can effectively write/debug any code for the controller. Just getting 'blinky' code that merely flashes an on-board LED to work is a huge step forward :)
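A sketch of that first polled-UART program - the register addresses and bit names below are made up, so take the real ones from your controller's user manual:

#include <stdint.h>

#define UART_STATUS (*(volatile uint32_t *)0x40011000u)  /* hypothetical */
#define UART_DATA   (*(volatile uint32_t *)0x40011004u)  /* hypothetical */
#define RXNE        (1u << 5)    /* "receive not empty" flag, hypothetical */
#define TXE         (1u << 7)    /* "transmit empty" flag, hypothetical    */

int main(void)
{
    for (;;) {
        if (UART_STATUS & RXNE) {            /* byte waiting?         */
            uint32_t c = UART_DATA;
            while (!(UART_STATUS & TXE)) {}  /* wait for transmitter  */
            UART_DATA = c;                   /* echo it straight back */
        }
    }
}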
Most controllers have dedicated groups/blogs, either on the uC manufacturers site or linked from it - join them.
If you want to get into this efficiently, get a board and try to get it to do something - there is no better way. Once you can get a LED to flash or a UART to issue a string of chars, you're off to the races:)
Developing simple 'blinky' or UART-polling functions is not wasted effort - you can continue to use them later on when you have more complex code. Blinking an LED (with the delay loop replaced by an OS sleep) is such a good indicator that the code is sorta running that I've always kept it in on delivered systems. The UART poll is useful too - it will run without any interrupts, so you can call it from the data/prefetch/whatever abort vectors to issue the many 'CRITICAL ERROR - SYSTEM HALTED' messages that you will get during ongoing development :)
Wikipedia really is as good a place to start as any to learn the basics surrounding each of these mechanisms.
Once you've got a grasp, I'd recommend looking at an actual datasheet and user's guide for a microcontroller that has some of these features. For example, this 16-bit PIC has a dedicated UART, I2C and SPI bus. Looking at the relevant sections of the documentation, armed with your new knowledge of the underlying principles, you'll start to see how to design a system that uses them.
The next step would be to buy a development board for such a device and then, using example code (of which there is tons), code yourself up some datalinks. Incidentally, a UART is certainly the easiest to test, since all PCs can transmit using the RS-232 protocol from a terminal, so in this case you can code a loopback to transmit and receive characters with relative ease. With I2C and SPI however, I think you would need to buy a dedicated host dongle to allow you to transmit using the protocols, although I think Windows 8 might be introducing native support (but don't quote me on that).
I haven't implemented a datalink using CAN, so I can't comment specifically, although I just did a quick Google search and there's a PIC family that supports it, so I'm sure you could follow a similar approach. As for USB, I consider it a bit of a black art, so I'll leave someone else to answer that one - although I do know you can get a USB software stack for PICs, so again, a similar approach could probably be followed.
On a final note, I've only mentioned PICs as an example microcontroller. If you're not familiar with microcontrollers in general, you'll soon realise that all the major microcontroller families from companies like Microchip, TI and Atmel generally follow the same design, so I would have a browse and pick the family whose documentation you're most comfortable with (or least uncomfortable with, as suggested by Martin James!).
For the most part the data sheet or reference manual for the microcontroller you intend to use is what you need. Each vendor will implement these interfaces in different ways, so that is your only definitive source for low-level programming information.
I2C and UART are relatively simple, with no standard higher-level protocol stack; that would be defined by you or by the device you might be connecting to. Your microcontroller vendor will almost certainly have example code and/or application notes.
CAN is more complex, and typically CAN networks use a higher-level application protocol, of which there are several for different application domains, such as CANopen, NMEA 2000, DeviceNet, SAE J1939 and more. Most often a third-party library is the most cost-effective way of implementing an application protocol stack, but if the network comprises only devices you are implementing, then that complexity may not be necessary. Again, your microcontroller vendor is likely to have examples.
USB is very strongly defined by the USB Implementers Forum, and protocol stacks are non-trivial. Most microcontroller vendors that have on-chip USB interfaces will provide sample code and application notes for at least device-class USB. USB Host stacks are more complex, and again the use of a third-party library (which are relatively expensive) is the quickest way to market. Fees you would otherwise be obliged to pay the USB IF may make writing your own host stack no less expensive. Even for a USB Device interface you will strictly need a USB Vendor ID at $2000 from USB IF. Some microcontroller and library vendors will allow you to use their Vendor ID for a defined subset of Product IDs at little or no cost, but you would probably have to be a significant customer in terms of volumes. For internal experimentation, you can probably get away without an official VID, but such a product could not be released commercially.

Why would I consider using an RTOS for my embedded project?

First the background, specifics of my question will follow:
At the company I work at, the platform we currently work on is the Microchip PIC32 family, using the MPLAB IDE as our development environment. Previously we've also written firmware for the Microchip dsPIC and TI MSP families for this same application.
The firmware is pretty straightforward in that the code is split into three main modules: device control, data sampling, and user communication (usually with a user's PC). Device control is achieved via some combination of GPIO bus lines and at least one part needing SPI or I2C control. Data sampling is interrupt-driven, using a Timer module to maintain the sample frequency and more SPI/I2C and GPIO bus lines to control the sampling hardware (i.e. the ADC). User communication is currently implemented via USB using the Microchip App Framework.
So now the question: given what I've described above, at what point would I consider employing an RTOS for my project? Currently I'm thinking of these possible trigger points as reasons to use an RTOS:
Code complexity? The code base architecture/organization is still small enough that I can keep all the details in my head.
Multitasking/Threading? Time-slicing the module execution via interrupts suffices for now for multitasking.
Testing? Currently we don't do much formal testing or verification past the HW smoke test (something I hope to rectify in the near future).
Communication? We currently use a custom packet format and a protocol that pretty much only does START, STOP, SEND DATA commands with data being a binary blob.
Project scope? There is a possibility in the near future that we'll be getting a project to integrate our device into a larger system with the goal of taking that system to mass production. Currently all our projects have been experimental prototypes with quick turn-around of about a month, producing one or two units at a time.
What other points do you think I should consider? In your experience what convinced (or forced) you to consider using an RTOS vs just running your code on the base runtime? Pointers to additional resources about designing/programming for an RTOS is also much appreciated.
There are many many reasons you might want to use an RTOS. They are varied & the degree to which they apply to your situation is hard to say. (Note: I tend to think this way: RTOS implies hard real time which implies preemptive kernel...)
Rate Monotonic Analysis (RMA) - if you want to use Rate Monotonic Analysis to ensure your timing deadlines will be met, you must use a pre-emptive scheduler
Meet real-time deadlines - even without using RMA, with a priority-based pre-emptive RTOS, your scheduler can help ensure deadlines are met. Paradoxically, an RTOS will typically increase interrupt latency due to critical sections in the kernel where interrupts are usually masked
Manage complexity -- definitely, an RTOS (or most OS flavors) can help with this. By allowing the project to be decomposed into independent threads or processes, and using OS services such as message queues, mutexes, semaphores, event flags, etc. to communicate & synchronize, your project (in my experience & opinion) becomes more manageable. I tend to work on larger projects, where most people understand the concept of protecting shared resources, so a lot of the rookie mistakes don't happen. But beware, once you go to a multi-threaded approach, things can become more complex until you wrap your head around the issues.
Use of 3rd-party packages - many RTOSs offer other software components, such as protocol stacks, file systems, device drivers, GUI packages, bootloaders, and other middleware that help you build an application faster by becoming almost more of an "integrator" than a DIY shop.
Testing - yes, definitely, you can think of each thread of control as a testable component with a well-defined interface, especially if a consistent approach is used (such as always blocking in a single place on a message queue). Of course, this is not a substitute for unit, integration, system, etc. testing.
Robustness / fault tolerance - an RTOS may also provide support for the processor's MMU (in your PIC case, I don't think that applies). This allows each thread (or process) to run in its own protected space; threads / processes cannot "dip into" each others' memory and stomp on it. Even device regions (MMIO) might be off limits to some (or all) threads. Strictly speaking, you don't need an RTOS to exploit a processor's MMU (or MPU), but the 2 work very well hand-in-hand.
Generally, when I can develop with an RTOS (or some type of preemptive multi-tasker), the result tends to be cleaner, more modular, more well-behaved and more maintainable. When I have the option, I use one.
Be aware that multi-threaded development has a bit of a learning curve. If you're new to RTOS/multithreaded development, you might be interested in some articles on Choosing an RTOS, The Perils of Preemption and An Introduction to Preemptive Multitasking.
Lastly, even though you didn't ask for recommendations... In addition to the many numerous commercial RTOSs, there are free offerings (FreeRTOS being one of the most popular), and the Quantum Platform is an event-driven framework based on the concept of active objects which includes a preemptive kernel. There are plenty of choices, but I've found that having the source code (even if the RTOS isn't free) is advantageous, esp. when debugging.
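To make the "decompose into threads that block on a message queue" idea concrete, here is a minimal FreeRTOS-flavoured sketch. The stack sizes, priorities and the sampling task itself are placeholders, not a recommended configuration:

#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

static QueueHandle_t sample_q;

static void sampler_task(void *arg)
{
    (void)arg;
    uint32_t sample = 0;
    for (;;) {
        sample++;                              /* stand-in for an ADC read */
        xQueueSend(sample_q, &sample, 0);      /* never block the producer */
        vTaskDelay(pdMS_TO_TICKS(10));         /* roughly 100 Hz sampling  */
    }
}

static void comms_task(void *arg)
{
    (void)arg;
    uint32_t sample;
    for (;;) {
        /* the single, well-defined blocking point of this task */
        if (xQueueReceive(sample_q, &sample, portMAX_DELAY) == pdTRUE) {
            /* format and push the sample to the host here */
        }
    }
}

int main(void)
{
    sample_q = xQueueCreate(32, sizeof(uint32_t));
    xTaskCreate(sampler_task, "sample", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
    xTaskCreate(comms_task,   "comms",  configMINIMAL_STACK_SIZE, NULL, 1, NULL);
    vTaskStartScheduler();                     /* never returns if all went well */
    for (;;) {}
}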
An RTOS, first and foremost, permits you to organize your parallel flows into a set of tasks with well-defined synchronization between them.
IMO, the non-RTOS design is suitable only for a single-flow architecture where your whole program is one big endless loop. If you need multiple flows - a number of tasks running in parallel - you're better off with an RTOS. Without an RTOS you'll be forced to implement this functionality in-house, re-inventing the wheel.
Code re-use -- if you code drivers/protocol-handlers using an RTOS API they may plug into future projects easier
Debugging -- some IDEs (such as IAR Embedded Workbench) have plugins that show nice live data about your running process such as task CPU utilization and stack utilization
Usually you want to use an RTOS if you have any real-time constraints. If you don’t have real-time constraints, a regular OS might suffice. RTOS’s/OS’s provide a run-time infrastructure like message queues and tasking. If you are just looking for code that can reduce complexity, provide low level support and help with testing, some of the following libraries might do:
The standard C/C++ libraries
Boost libraries
Libraries available through the manufacturer of the chip that can provide hardware specific support
Commercial libraries
Open source libraries
Additional to the points mentioned before, using an RTOS may also be useful if you need support for
standard storage devices (SD, Compact Flash, disk drives ...)
standard communication hardware (Ethernet, USB, Firewire, RS232, I2C, SPI, ...)
standard communication protocols (TCP-IP, ...)
Most RTOSes provide these features or are expandable to support them

What skill set should a low-level programmer possess?

I am an embedded SW Engineer, with less than 3 yrs of experience. I aim to "sharpen the saw" continuously. I was wondering if there was anything specific to low level programming that C/C++ coders should be proficient with.
What comes to my mind is familiarity with the hardware's architecture and instruction set. Knowing how to fiddle with bits is also important; resource management and performance have been part of my job. Is there anything else?
EDIT: I work with an in-house customized RTOS, not embedded Linux.
I see a lot of high-level operating system answers here, but you specifically said low-level.
Some scattered thoughts:
Design for test. As you work through a problem only change one thing at a time per test.
You need to understand buses and interfaces: SPI, I2C, USB, Ethernet, etc. The number one interface, today, yesterday, and tomorrow, is the UART - plain serial.
The steps involved in programming a flash.
Tricks to avoid making the product easily brickable.
Bootloaders in general.
Bit-banging the above interfaces on various families of parts (different chip vendors have different ideas about I/O pins, pull-ups, direction controls, etc.).
Board and chip bring-up: you certainly never want to boot a program of many tens of thousands of lines of code on the first power-up (think LED on, LED off).
How to debug a product without using too much test equipment (logic analyzers and scopes). At the same time, you have to learn to use a scope for debugging, but you are far more valuable if you don't HAVE TO have a tech or engineer in the lab with you.
How would you reprogram the unit in the field? What would you do to minimize human error when allowing the user to field-upgrade the unit? Remember field downgrades as well.
What would you do to discourage hacking (binaries, etc).
Efficient use of the flash/rom (don't wear out one bank or section, spread the wear around, or see if the flash is doing it for you).
How and when to use a watchdog timer.
State machines, very useful with byte streams (serial and Ethernet): design packet structures that stream well and are tailored to a state machine, and that have a header and checksum or other structure that ensures you do not interpret partial packets or random data as a good packet (a minimal parser sketch follows).
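A minimal sketch of such a byte-stream state machine, for a made-up packet format (0xA5 header, length byte, payload, 8-bit checksum over length and payload). The point is that partial packets and random data fall out at the resync and checksum steps:

#include <stdint.h>
#include <stdbool.h>

#define MAX_PAYLOAD 32u

static enum { WAIT_HDR, WAIT_LEN, WAIT_DATA, WAIT_SUM } state = WAIT_HDR;
static uint8_t payload[MAX_PAYLOAD];
static uint8_t len, idx, sum;

/* feed one received byte at a time; returns true when a good packet is complete */
bool packet_rx_byte(uint8_t b)
{
    switch (state) {
    case WAIT_HDR:
        if (b == 0xA5u) { sum = 0; state = WAIT_LEN; }
        break;
    case WAIT_LEN:
        if (b == 0u || b > MAX_PAYLOAD) { state = WAIT_HDR; }   /* junk: resync */
        else { len = b; idx = 0; sum = (uint8_t)(sum + b); state = WAIT_DATA; }
        break;
    case WAIT_DATA:
        payload[idx++] = b;
        sum = (uint8_t)(sum + b);
        if (idx == len) state = WAIT_SUM;
        break;
    case WAIT_SUM:
        state = WAIT_HDR;                    /* hunt for the next header      */
        return sum == b;                     /* bad checksum: packet rejected */
    }
    return false;
}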
Specific concepts like:
Endianness (there is an old but good Linux Journal article on this; a small byte-order check follows this list)
Effective use of multithreading architectures (the Embedded site is good in general)
Debugging embedded and multithreaded systems
Understand, Learn and Follow good programming techniques (the link is very old and the point very generic and subjective, but think about it)
Other things (this IBM page on embedded linux sums up most of the other points I want to make)
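As promised above, a tiny byte-order check - endianness bites exactly when you move raw structs or multi-byte fields between a host and a target:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t word = 0x11223344u;
    uint8_t *bytes = (uint8_t *)&word;

    /* little-endian targets store 0x44 first in memory, big-endian store 0x11 */
    printf("first byte in memory: 0x%02X -> %s-endian\n",
           bytes[0], bytes[0] == 0x44 ? "little" : "big");
    return 0;
}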
One more thing -- never underestimate testing! Or planning test cases!
Use the reference links I give as introductions to the concepts; please follow up further for deeper knowledge.
I'd study the electronics of the actual chips. Learn how they work internally (such as architecture), interface with peripherals, electrical and timing characteristics, etc.
Basically, read the data sheet start to finish a few times and dig into anything you've not seen/used before.
By the way, what chips do you work with?
Similar to what Brian said, learn how to create unit tests and automated builds.
These skills are good for software engineers at all levels to be proficient in. They will help improve the quality of your code while also making it easier to refactor and improve the code base.
If you haven't yet I think every Software Engineer should read The Pragmatic Programmer and Code Complete. I know these are not specific to low level programming, but have a large wealth of knowledge in them that applies to all sub disciplines.
Great familiarity with pointers, with the checks these languages don't do (like buffer overflows and the like), and with digital electronics. Operating system internals might also help.
Get to know how stuff is represented internally, especially ready-made data structures (supposing you won't build your own).
Above all, practice a lot. Doing it brings much more to you than just reading about it ;)
bit operations
processor architectures (caches, etc)
WCET (worst-case execution time) analysis
scheduling
Edit: What I forgot to mention is model-based development.
Today, control algorithms are often implemented as some kind of automaton from which C code is generated afterwards.
Commercially available tools are, for example, MATLAB/Simulink, ASCET or SCADE.
Get yourself a copy of the MISRA-C book. It was originally written by members of the automotive industry, and attempts to make software written in C more robust by applying a number (quite a large number!) of rules and guidelines.
Then, buy PC-Lint (or another static analysis tool) to check your code for MISRA and other rules.
These are particularly relevant to low-level and embedded C, as between them they deal with the causes of a lot of bugs in such software, such as issues relating to pointers, memory leaks, integer promotion (there's a whole chapter on that in the MISRA book), endianness, and undefined behaviour.
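One classic integer-promotion trap of the kind those rules guard against (an illustrative example, not a quote from the MISRA book): arithmetic on uint8_t operands is performed in signed int.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 10, b = 20;

    /* (a - b) is promoted to int and evaluates to -10, not to 246 */
    if ((a - b) > 0)
        printf("never printed\n");

    /* forcing the intended 8-bit wraparound makes the intent explicit */
    uint8_t diff = (uint8_t)(a - b);
    printf("diff = %u\n", diff);              /* prints 246 */
    return 0;
}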
Good question. Some that haven't been mentioned...
Learn your various options for achieving low-level multitasking. From basic round-robin (non-preemptive) schedulers, with timing ticks from a hardware timer, up to a preemptive RTOS. Learn why you might need an RTOS, and why you might not. If you use an RTOS, learn that beginners with a PC background probably tend to want to create too many tasks.
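A minimal sketch of that basic non-preemptive approach, assuming a hypothetical 1 kHz timer interrupt hooked to tick_isr(); each task must return quickly and never block:

#include <stdint.h>

static volatile uint32_t ticks;               /* advanced by the timer interrupt */

void tick_isr(void) { ticks++; }              /* install in the vector table */

typedef struct {
    void (*run)(void);                        /* must return quickly, no blocking */
    uint32_t period;                          /* in ticks                         */
    uint32_t next;                            /* next time this task is due       */
} task_t;

static void blink_task(void)  { /* toggle an LED here        */ }
static void sample_task(void) { /* kick off an ADC read here */ }

static task_t tasks[] = {
    { blink_task,  500, 0 },                  /* every 500 ms */
    { sample_task,  10, 0 },                  /* every 10 ms  */
};

int main(void)
{
    for (;;) {
        for (uint32_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
            if ((int32_t)(ticks - tasks[i].next) >= 0) {  /* wrap-safe compare */
                tasks[i].next = ticks + tasks[i].period;
                tasks[i].run();
            }
        }
    }
}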
Getting visibility into the internals for debugging can be a challenge. There's no screen typically, so no throwing in "printf" calls wherever you want. An emulator or JTAG interface is ideal--you can set breakpoints and step through your program (as long as halting the micro doesn't make hardware go crazy, like swinging a robot arm around at full speed!). If emulator/JTAG is not available, learn how to use a spare serial port (or maybe even bit-bash a pin to make a serial port) for a debug channel, with some simple memory peek/poke commands.
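And a sketch of the "bit-bash a pin into a serial port" idea, as a transmit-only debug channel: one start bit, eight data bits LSB first, one stop bit. set_pin() and delay_one_bit() are hypothetical stubs to be replaced with real GPIO access and a delay tuned to your baud rate:

#include <stdint.h>

static void set_pin(int level)  { (void)level; /* write the GPIO data register here */ }
static void delay_one_bit(void) { for (volatile int i = 0; i < 100; i++) {} /* tune to 1/baud */ }

void debug_putc(uint8_t c)
{
    set_pin(0);                      /* start bit */
    delay_one_bit();
    for (int i = 0; i < 8; i++) {    /* data bits, least significant first */
        set_pin((c >> i) & 1);
        delay_one_bit();
    }
    set_pin(1);                      /* stop bit (and idle level) */
    delay_one_bit();
}

void debug_puts(const char *s)
{
    while (*s)
        debug_putc((uint8_t)*s++);
}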