The problem plaguing every firmware development project is not lack of code, but lack of skill in working with that code. That is why I have compiled a list of 100 key skills that every embedded firmware developer must master in order for the whole team to make progress.
This is by no means an exhaustive list - firmware development is a complex and multi-disciplinary subject. These are just the skills that I use most often in my own consulting work and also the skills that I try to cover as much as possible in all of my training materials as well.
Why your firmware project is behind schedule
Firmware development is the act of designing software architecture that makes our modern connected IoT world function.
In many respects it is like writing a book. We can even draw this analogy further by tabulating how it would compare to writing a book:
|Firmware development|Book authoring|
|---|---|
|Programming language|Letters of the alphabet|
|Functions in a C file|Words of a language|
|One C file|Sentences and paragraphs|
|Libraries of C files|Chapters|
|Software stack|A library of books|
As a Development Manager or CTO, your job is to organize this book writing operation of yours - the team - so that it turns words into books and libraries of books that you can then offer for sale to your customers.
You can then see how ridiculous it seems that the highest emphasis when hiring programmers is placed on their knowledge of a programming language. That would be like hiring writers to write a book based on their knowledge of the English alphabet.
It is no wonder, then, that your project easily slides off schedule when your programmers do not possess sufficient skill to organize everything between the alphabet and the finished product.
100 Skills Of An Embedded Firmware Engineer
The purpose of learning is to acquire a skill. The only way to acquire a skill is to practice. Practicing things without the correct knowledge simply means that you are not actually acquiring a skill.
If I were to give you a guitar and you were to practice without any knowledge about music, chances of you learning how to play the guitar are pretty slim.
If, however, I were to give you a guitar along with knowledge about the "patterns" (chords) of guitar playing, and you were then to practice these chords, your chances of learning to play really well would increase dramatically (10x, 20x and so on).
Knowledge of what to practice is key to mastering any field.
This is why I have created this list of 100 skills that every developer needs to practice in order to really do well in this field of embedded engineering.
I have tried to compile this list in the order of importance, but keep in mind that all these skills are important to have in a team. While it is not always necessary for every team member to have all the skills, there is certainly nothing negative about making sure that every developer has at least some degree of command of all of them.
The C language itself is a very simple language, which is why I also compared it to the "alphabet" from which everything else is built. You need to command this language well and know how to build software from these small building blocks.
We organize this higher level knowledge as "C programming design patterns". Design patterns are like the study of correct sentence structure in conventional languages. If you know how to build sentences well, then you can move on to writing paragraphs and books. If there are any blanks in your understanding of sentence structure, you will struggle to express higher level concepts at all times.
Step 1 therefore is for you to learn the language constructs really well. Then once you have done that, step 2 is to understand design patterns so that you can organize your code.
Python has become a powerful system level language. While a lot of things can be accomplished in CMake and using bash scripts, python lends itself very well to the level above shell scripts.
With Python we can more easily automate the whole build process, and tools like twister, which is used for build and test automation in Zephyr, are written in Python.
Python is also tremendously useful when combined with Robot Framework and used for test automation at the simulation level.

An embedded developer should be able to smoothly switch from C to Python and back while writing Python scripts as part of their daily work.
While it is relatively rare today to write assembly code, there are still important things to understand about it if you are interfacing with hardware.
- Expert level C programming: understanding what to do in C to get specific behavior in assembly, and being able to verify it. For example, `__attribute__((musttail)) return g(x);` translates to a single `jmp` instruction, which completely skips adding a new stack frame and can be a useful optimization for very fast data processing at gigabit speeds.
- Power management: most CPUs support special instructions that are used to stop the CPU and wait for an event (WFE: wait for event). You must understand how this works when working with low power systems.
- Memory barriers: these instructions ensure that data has been committed to memory before the code proceeds to next instruction. They are useful in implementing synchronization primitives at system level.
- High speed interrupts: if you need to use a low end CPU and have a requirement to make a particular interrupt very very quick to meet hard deadlines of your application, you may occasionally need to implement these interrupts in assembly.
Even if you will not be using assembly language most of the time in your daily work, knowing how the CPU does its computations is helpful in being able to work at a higher level of writing C code.
The Linux shell, bash, is an ultimate "job control" tool. It was designed to be extended using command line utilities. The utilities themselves form what we call "rootfs" or "root file system" and have been packaged up in software packages like "busybox". These utilities form the "programming API" of bash.
Effectively, bash is a way of connecting many different standard programs into scripts that process information as it flows from one utility to the next. This flow happens either through files or through pipes - streams from the stdout of one program to the stdin of another.
Bash differs from Python and other scripting languages in that its whole concept is predominantly based around stringing together multiple command line utilities to get the job done.
You will be using bash a lot in your embedded work. Most often you will use it for aggregation of more complex commands and build steps.
You need to be skilled in writing bash scripts - including control constructs such as `while` loops. Using bash, you can then aggregate other, more complex operations written in Python or in C.
When we talk about code optimization, we don’t necessarily talk about assembly level optimization. Compilers today are very good at optimizing code at the machine level. However, no compiler will complain if your algorithm is horribly inefficient.
Thus the skill of code optimization is more about algorithm design than about the low level details. You need to understand how your algorithm scales and how you can do things faster.
If we were to draw the analogy back to our conventional language comparison, then code optimization would be equivalent to knowing how to write your paragraphs so that they are easy to understand.
The study of code optimization is centered around data structures and algorithms for manipulating these data structures efficiently on a machine.
Once you know about data structures, you can practice and acquire the skill of code optimization by using the correct data structures in the right places in your application, making sure that your application scales effectively as the amount of work it does grows.
Data Structures And Algorithms
Algorithms are standardized ways in which we do operations on data, and data structures define the way in which we store data for easy access.
This knowledge applies to all programming languages and pretty much everything we ever do with data in our code - from packing network buffers to packing pixels on the screen and rendering polygons. Everything in code is a form of algorithm.
Algorithms differ from design patterns in that algorithms are standardized ways to do something specific with information - while design patterns are architectural patterns for structuring code.
The primary purpose of documentation is to communicate what to do in order to achieve certain results with the code that has been developed. The skill of documenting code is in part the ability to communicate what the architecture of the software actually is, and in part the ability to provide a brief description of every method: what it does and how to use it to achieve a result.
Documentation is intimately connected to the CI process because it is through CI process that we can enforce that documentation is created. Documentation needs to be built in full and checks must be placed in the CI automation to verify that documentation can be generated without errors.
It is also very convenient for other team members that code is documented because modern code completion tools are able to parse documentation and present it as a popup when developers write code - making it very easy for new team members to get started.
As you make it into a habit to document what you are doing, you will also be improving your architectural skills because describing what you are doing is a great way to find errors in your judgement and go back and fix them before the accumulated errors become unmanageable.
One of the biggest problems with conventional word processors (Google Docs, Word, LibreOffice etc) is that documents created with these programs are extremely difficult to subdivide into parts and organize in a hierarchical manner while still retaining the ability to identify errors (such as missing links). Not to mention how hard it is to track versions.
ASCII document formats in conjunction with git completely solve all of these problems.
They also allow you to compile your document into any other format - including HTML, PDF, Doc and even slides. Another benefit is the natural subdivision of text into multiple parts that can be linked together using include statements - just like code.
Several formats exist for ASCII documents, AsciiDoc and Markdown being the two most common ones.
You need to be skilled in partitioning your text into ASCII documents, version controlling them in git, and compiling them into multiple formats just like code. You will use this a lot for all kinds of 'non-code' content.
An embedded engineer needs to understand how to write code that is secure in how it parses and handles unknown values. At the very basic level this means using bounded standard library functions like snprintf instead of sprintf to avoid risk of unintentional buffer overflow.
However, this extends to all code that is part of firmware that parses data coming from network and other user interfaces.
The skill of safe programming involves understanding buffer overflows, designing robust protocols (for example using a protobuf parser generator rather than writing the parser by hand) and using existing protocols as much as possible to avoid bugs.
In networking in particular, protocol testing is often complemented by "fuzzing", where the network message parser is fed randomly generated packets until it misbehaves. The identified problematic packets are then saved into a database of packets that the protocol must handle without problems, and the search continues.
Knowledge of how to test the parsers thoroughly ensures that firmware does not offer ways to trigger internal bugs through carefully crafted network messages.
Functional programming is one of the most powerful concepts that a programmer can learn and apply.
By 'functional' we mean that you build your code through the act of composing and applying functions. It's a thinking pattern. You need to become very skilled at thinking about code in terms of stringing functions together.
The result is that you will be writing your C code following a similar mindset - which will tremendously simplify your work because you will naturally avoid introducing "impurities" into the code and you will be writing very 'functional' code indeed.
Even as an embedded developer, it is difficult to avoid working with databases. You may occasionally need to cache data coming from sensors on your Linux gateway before passing it further upstream to the cloud.
For organizing tables of data we use databases. This can be an embedded, file-based database such as SQLite, or a full client-server SQL database.

Databases make it simple to work with data efficiently and provide query languages - such as SQL - to accomplish this.

You need to at least understand this language and be able to build simple table-based databases so that you can apply this knowledge whenever you need to store and query tabular data.
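As a minimal sketch of the gateway caching scenario above (table and column names are hypothetical):

```sql
-- Cache sensor readings locally before uploading to the cloud
CREATE TABLE readings (
    sensor_id  INTEGER NOT NULL,
    taken_at   INTEGER NOT NULL,   -- unix timestamp
    value      REAL    NOT NULL
);

INSERT INTO readings (sensor_id, taken_at, value)
VALUES (1, 1700000000, 21.5);

-- Query the most recent timestamp per sensor before uploading
SELECT sensor_id, MAX(taken_at) AS latest
FROM readings
GROUP BY sensor_id;
```

This runs unchanged in SQLite on the gateway or in a full SQL server upstream, which is exactly why knowing the language once pays off everywhere.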
Knowing multiple languages means that you can automate all your common repetitive tasks and use your editing environment to the fullest. Not to mention that knowing many programming languages makes it easy for you to generalize concepts across all of them at once.

Regular expressions are absolutely essential for both editing and CI scripts.
Regular expressions are a standardized way to match and replace text. They use a special syntax that makes it easy to perform complex replacements in code and to search for patterns in the output of other programs.
It would be all too easy to omit regular expressions from this list of skills because we take them for granted.
Debugging And Testing
The skill of debugging is the ability to narrow down and find a bug quickly. This can involve connecting to the microcontroller using JTAG and inspecting the contents of the memory at particular places in the program. But as a developer you must know where to inspect and what to inspect.
Testing is an excellent way to avoid the need for debugging altogether by thinking through the code in detail and putting all design decisions in writing by writing tests that verify that these decisions are in place.
Testing involves thorough understanding of how to:
- Write unit tests to verify the logic of each function in the code.
- Write integration tests to verify the functionality of larger modules.
- Write simulation tests that verify the firmware itself through direct interaction with a simulated version of the firmware running in an emulator like Renode.
You will be spending most of your embedded development time on Linux.
You must be comfortable using Linux for your day to day work and skilled in organizing your desktop environment for maximum productivity (for example by using a tiling window manager instead of a conventional one).
When teams lack basic skills in working in Linux they tend to stumble on the fact that nearly every build system is optimized to work on Linux (usually Ubuntu).
It is highly desirable that all team members use Linux as their primary desktop environment to accelerate the learning curve and reduce inefficiencies associated with using virtualization solutions.
Once you have written your code, you must build it and produce a firmware image. This involves many steps, and being skilled in build systems like CMake makes you very proficient at automating this task.
Automation is part of an efficient CI/CD cycle and you will have to learn how to automate nearly every part of build process.
In build automation we use Python, YAML, Bash, CMake and a wide variety of tools and utilities that build, sign and check the firmware. As a developer creating features in the firmware, you will inevitably also have to use the full range of tools necessary for efficient build automation.
Build automation is the "implementation" of the DevOps process, but in your repository, in code and in a way that is directly useful for efficient development.
You need to be skillful at using the toolchain to its full potential. If you build C code with default compiler flags, the compiler will accept a great deal of nonsense and still compile it.
You need to know the compiler flags and how you can use them to catch potential errors. At the very least this involves using the `-pedantic` option, but there are many more special compiler flags that will help you build better software.
You also need to know how to build your code using different toolchains, because it is very common to write portable embedded code that must compile on both Zephyr and OpenWRT, for example. While both use GCC, the OpenWRT project compiles its own GCC from scratch. Thus building the compiler itself is also part of the build process.
There are also a lot of very useful tools, like objdump, that you can use to inspect the binaries produced by the toolchain. These tools can reveal information about the variables and functions included in the final executable - making it easy to find out which parts of your firmware perhaps should not be included or are taking up too much space.
A linker script determines what gets included into the firmware image and how it is placed in memory.
Whenever you need to structure the firmware image in a custom way, you may need to add extensions to the default linker script used for building your image.
Understanding how to use linker scripts to place things in flash and in the special core-coupled memory of a particular CPU gives you full freedom of expression when assembling the firmware image.

This is also connected to understanding the layout of your firmware, which is useful when you want to inspect its contents (for example to see which variables take up too much space in flash or RAM).
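As a sketch of what such an extension can look like (memory region name, origin and section name are placeholders - they vary per SoC and SDK):

```ld
/* Hypothetical fragment: define a core-coupled RAM region and a
 * dedicated section so selected buffers can be placed there. */
MEMORY
{
    CCM (rw) : ORIGIN = 0x10000000, LENGTH = 64K
}

SECTIONS
{
    .ccm_bss (NOLOAD) :
    {
        *(.ccm_bss)
    } > CCM
}
```

On the C side, a buffer is then routed into that region with a section attribute, e.g. `__attribute__((section(".ccm_bss"))) static uint8_t dma_buf[1024];`.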
Your IDE is the software you will spend most of your professional life in, and it's important to develop your skill with it to such a degree that it becomes almost an extension of you.
Many programmers, including myself, choose to make up an IDE out of independent components. I personally use VIM and EMACS for editing and a wide variety of command line tools for accomplishing other tasks.
An IDE is never just the text editor - it is the whole programming environment that you are using every day. Understanding and practicing workflows that speed up your ability to edit code is key to being efficient in expressing your ideas in code.
GDB is a fantastic debugging tool upon which many other debugging systems operate (such as IDE debugging features).
GDB itself is a fully standalone debugger capable of connecting with hardware debugging functionality over a "GDB server" interface.
Most JTAG debug probes operate this way in that you start a GDB server and then connect to it using GDB. This is called remote debugging. The same concept of GDB server also extends to simulated CPUs like the ones emulated by Renode or QEMU (Renode can start a GDB server and then you can just connect to it and debug your firmware running in the simulator).
If you are skilled in GDB, you can do any debugging operation that is doable through the hardware debug interface - including streaming data from the chip, logging access to memory, dumping and writing memory from files etc.
It is a very powerful tool to master because once you have mastered it, you will not be limited by the features offered solely through the graphical debugging interface that is part of your IDE.
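A typical remote debugging session, as described above, looks something like this (port, symbol name and addresses are hypothetical - they depend on your probe software and firmware):

```gdb
# Attach to a GDB server started by a JTAG probe, Renode or QEMU
target extended-remote localhost:3333
monitor reset halt          # probe-specific command (e.g. OpenOCD)
load                        # program the ELF given to gdb into flash
break main
continue
x/8xw &system_status        # inspect memory behind a variable
dump binary memory ram.bin 0x20000000 0x20001000
```

The same commands work unchanged whether the server is backed by real hardware or a simulated CPU, which is what makes GDB fluency so transferable.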
Kconfig is a configuration format that has been created specifically for configuring cross platform, large scale C projects. Most notably the Linux kernel.
It is one of the most flexible ways to handle large numbers of hierarchical build time configuration options with dependencies between them.
It consists of a config file notation which is automatically translated into C headers and CMake variables by the build system.
As an embedded engineer you must utilize Kconfig as much as possible and avoid configuring your application through custom preprocessor definitions.
Teams that do not use Kconfig tend to end up with very complex configuration header files and highly fragile ifdef statements all across the code base, never knowing exactly what gets included into the build and when.
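A minimal sketch of the notation (option names here are made up for illustration):

```kconfig
config APP_TELEMETRY
	bool "Enable telemetry reporting"
	help
	  Periodically report device health to the backend.

config APP_TELEMETRY_INTERVAL_SEC
	int "Telemetry interval in seconds"
	default 60
	depends on APP_TELEMETRY
```

After configuration, the options appear in C as `CONFIG_APP_TELEMETRY` and `CONFIG_APP_TELEMETRY_INTERVAL_SEC`, with the dependency guaranteeing that the interval can never be set while telemetry is disabled - exactly the kind of invariant that hand-rolled ifdefs fail to enforce.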
If a team wants to stay agile, productive and organized, they need to apply continuous delivery. This means that everything that is done manually must also be automated.
This becomes a problem when we have complex development environments consisting of many packages that need to be installed, configured and updated using a long build process.
The solution to this is Docker. Docker allows a developer to completely reproduce their whole build environment from scratch using a Dockerfile.

Understanding how Docker works and how to build images for it is an essential skill for maintaining a continuous delivery build pipeline.

Docker usage can be extended to cover the complete IT infrastructure, defined as a docker compose file and brought up in seconds, with the full ability to reproduce the whole system from scratch retained at all times.
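A minimal sketch of such a Dockerfile (the package list is a placeholder - pin the exact versions your SDK needs for true reproducibility):

```dockerfile
FROM ubuntu:22.04

# Install the toolchain and build utilities in one layer and clean
# the apt cache to keep the image small
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake ninja-build git python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /work
```

A developer then builds inside the container by mounting the source tree, e.g. `docker run --rm -v "$PWD":/work fw-build cmake -B build`, so every team member and every CI runner uses an identical environment.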
When writing unit tests for C code, the primary way to know what has been tested and what needs attention is through code coverage.
A developer needs to understand how to generate a code coverage report and be skilled in implementing this in CI, so that code coverage can be continuously exposed as a metric of how well the source code is tested.
This includes understanding of code coverage tools like gcovr and lcov as well as compiler flags for generating code coverage.
The final code coverage report should be inspected by CI scripts to ensure that all source files are present and that coverage is above the minimum acceptable percentage defined for the project.
It is not uncommon to run into issues with any part of the work environment at any given time. A developer needs to understand how to troubleshoot every part of the system. This includes build scripts, the CI pipeline, Docker images, C source code, linker scripts and a wide range of other steps involved in building embedded firmware.
For Linux firmware, this process also includes troubleshooting the software packages that are part of the firmware, as well as their configurations. This often involves reading through documentation, understanding concepts like packet routing and filtering, and having a deep understanding of debugging techniques (including JTAG).
The skill of troubleshooting code is a whole area in itself. While you can tremendously decrease your need to troubleshoot by making sure you have unit tests and clear functional requirements for each part of the software architecture, even then you may need to dig deep into the implementation to find bugs and fix them.
This requires knowing multiple programming languages, ability to use existing Linux tools to diagnose problems as well as ability to find the right documentation and understand it.
Sphinx is a powerful system for organizing documentation from many different sources. docs.swedishembedded.com uses Sphinx to organize documentation from multiple projects and present it in an easy to navigate HTML format.
However, Sphinx doesn't just generate HTML documentation. It is capable of generating very nice PDF documentation as well.
To make the most of it, you write Python plugins that are added to the Sphinx configuration of your project. These plugins let you do advanced documentation processing, such as including Kconfig variables in the documentation and checking for errors.
Sphinx is a fantastic tool for the whole team to unify Doxygen and textual documentation from multiple projects.
When you are writing code, you are implicitly making hundreds of decisions about what the application should do. There is no way that you can keep track of the most important of these decisions without test automation.
The purpose of test automation is to ensure that all the decisions you have made are not randomly reverted as you continue working on and modifying your code.
Once you understand how to structure the testing of your code and how to ensure that your tests run automatically as part of your CI pipeline then you can practice this skill by making sure that every important decision you or your team mates make is always committed to the test code so that it is verified at all times afterwards.
This way you can make sure that your code base remains in good shape even as development continues.
Keep in mind however, that this requires good knowledge of C language and design patterns because without this knowledge you will be writing tests for poorly structured code and will have to do extensive refactoring over time - severely limiting the benefit of testing in the process.
Given the vast variation of hardware in use today, it has become more and more important to reuse firmware across multiple versions of hardware. This has led to the adoption of a data-driven approach to hardware configuration: a special textual notation called the "device tree", which outlines configuration options for hardware devices and enables code reuse across multiple architectures.
As a firmware engineer you will be expected to use device tree to configure options for hardware instead of hard coding these options into the code.
The device tree source itself is often pre-processed with the C preprocessor to enable the use of C macros in device tree definitions. You need to understand how this works so that you can keep your device tree clean and avoid unnecessary duplication of options (using include directives instead).
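A small sketch of the notation, describing a sensor on an I2C bus instead of hard-coding its bus and address in C (node label and bus name are placeholders; `ti,tmp112` is a real compatible string used here for illustration):

```dts
&i2c0 {
    status = "okay";

    temp_sensor: sensor@48 {
        compatible = "ti,tmp112";
        reg = <0x48>;          /* I2C address */
        status = "okay";
    };
};
```

Moving the sensor to another bus or address now means editing one overlay file, while the driver code stays untouched - which is the whole point of the data-driven approach.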
The bootloader is used in most of today's systems because it allows firmware to be reliably upgraded in the field.
An embedded engineer must be skilled at utilizing existing bootloader functionality: verifying the signature on a new firmware image being uploaded, making sure a proper response is in place for when a firmware upgrade fails, and using the bootloader as much as possible to finalize an over-the-air upgrade instead of re-implementing the same functionality in the firmware itself.
MCUBoot is a standard bootloader that we use in the Swedish Embedded SDK with Zephyr RTOS and it provides support for secure firmware upgrades, considerably simplifying the upgrade process.
If your application requires a very quick response to certain events in the hardware, then your whole application, including system level components, must be designed to enable that quick response. This is because things like spinlocks can disable interrupts for a short while, and while interrupts are disabled, the response to incoming events is delayed by the same amount of time. This creates jitter in the response that varies depending on what is happening in the system when the event arrives.
The skill of working with realtime constraints involves a solid understanding of the hardware, the scheduler, synchronization primitives, interrupts and interrupt response.
A realtime operating system scheduler is the primary design pattern that enables an application to respond to complex series of events concurrently without complicating the code itself in the process.
The skill of working with an RTOS and partitioning work that needs to be done into tasks (threads) helps the developer structure the code in a clean way while not sacrificing performance.
Many approaches have been explored over the years to replace the basic RTOS scheduler with other constructs, such as event-driven designs and protothreads. But the original concept from the 1980s remains the most versatile and flexible approach, on which all the other approaches can be built.
Not understanding how a Realtime Operating System works and what problems it is designed to solve often leads development teams to implement substandard and highly fragile systems that break down at scale.
The primary way in which a realtime operating system partitions CPU use between different pieces of code is through threads.
A firmware engineer must understand how threads work and how to use scheduling primitives like spinlocks, semaphores and mutexes to control how and when the RTOS will allow other code to have access to the CPU.
Without an intimate understanding of threading, programmers tend either to not use it at all - reimplementing scheduling in the worst possible way - or to use it without any understanding, which leads to tangled code that breaks randomly and is very difficult to maintain.
Additionally, given the concurrent nature of hardware itself, device drivers often have to cooperate with hardware interrupts, where the hardware forces code to execute in response to an event. This also requires an understanding of threading, because an interrupt is essentially a simplified, hardware driven threading mechanism: the CPU can be forcibly taken away from the code that is currently running and given to the code that must run in response to the interrupt.
If a programmer does not understand how to defer interrupt execution using RTOS constructs like spinlock, the project is at high risk of introducing bugs that are very difficult to debug.
RTOS systems are often generic operating system design concepts simplified for use with microcontrollers. This does not change the fact that an engineer working with embedded firmware must understand operating system concepts such as the scheduler, timers, delays and CPU internals.
An operating system implements a lightweight software layer on top of limited hardware resources to allow those resources to be shared between different pieces of code. This way, the CPU can be shared between tasks, a single hardware timer can be shared between an unlimited number of software timers, and the same I2C peripheral can be used to communicate with multiple devices independently.
As an embedded engineer you need to possess the skill of using existing operating system constructs to build your software so that you don’t end up reinventing common patterns all over again.
If power efficiency is an important aspect of a product, then attention to power management must be done on all levels of the software architecture. Device drivers must correctly respond to system power management requests and put devices that they control into a safe low power state. Dependencies must be managed between devices.
All of this requires understanding of hardware, understanding of the RTOS architecture, and understanding of electronics so that the correct sequence can be configured based on how things are connected on the circuit board (regulators and nested devices that depend on each other).
Another aspect is power management of the CPU itself - which may involve specifically configuring GPIO pins into lowest power state when saving power - but this must be done very carefully because other devices on the board depend on connections to the CPU for proper operation.
Cross compilation is a standard occurrence in firmware development today. Most of the code that you write will have to compile for at least a few platforms (at the very least your target platform and native POSIX for testing).
Even if your project has no plans to be used on other platforms, cross compilation is a good way to discover some potential bugs since different compilers give different warnings for the same code. Building code designed for a 32 bit microcontroller on a 64 bit platform will highlight assumptions made about sizes of integers so that you can fix them.
As a firmware engineer you need to be at least familiar with the ARM, MIPS, RISC-V and x86 architectures.
Luckily, toolchains usually share a lot of similarities and accept the same options, so cross compilation is more a matter of structuring the source code repository in such a way that it can easily be compiled for any supported platform by changing a single build option.
In embedded firmware development, we often have to write code that interacts with hardware. To do this effectively you need to understand how the microcontroller is put together and how peripherals process data and send signals (interrupts) back to the main CPU core.
When writing time critical code it becomes even more important to pick the right pathways for data. This is something that even the hardware engineer must know when designing the PCB - so that he can utilize the right peripherals in such a way that the software can then configure the hardware to do as much as possible in hardware. But the software part of this work is entirely up to the firmware engineer.
The firmware engineer thus must know how the CPU works and fully understand the data sheets in order to configure the hardware the right way.
When programming embedded systems, it is not uncommon to run into problems that occur because of hardware and software interaction and which are not present in conventional application programming. One example is a hard fault, which can occur upon executing an invalid instruction. This type of fault is especially hard to debug on microcontroller systems because it can be the result of a series of errors in the software which eventually cause the processor to execute the invalid instruction.
A programmer can avoid such problems to a degree by applying design patterns such as always clearing memory before using it (so that an uninitialized pointer reads as a clean NULL instead of some random value), by testing, and by fixing potential bugs early through the use of static analysis - but this may still not be enough.
In that case, the programmer needs to understand how to debug the MCU directly over JTAG, and how to use manufacturer-supplied SVD files that describe the register layout to inspect bits in CPU status registers and work out what the conditions were when the hard fault occurred. This requires knowledge of the CPU architecture so that the programmer knows where to look.
The embedded engineer needs to be skillful at using hardware communication peripherals to communicate with external components on the circuit board. The most important of these are I2C, UART, SPI, SD-Card and Ethernet peripherals.
Even if the programmer is using existing software support for these peripherals, it is still important to understand how they are implemented on the CPU in order to make sure that the system as a whole continues to function smoothly.
Not understanding how these peripherals work leads to highly inefficient software that may end up polling for data and wasting CPU resources when in fact configuring the hardware properly and responding to the peripheral events correctly could have been a much more robust approach.
The ability to read and understand data sheets for the peripherals you are working with is a primary skill that enables the programmer to implement solutions the right way. This ties into the skill of creating smooth hardware-software interaction. Data sheets are an extremely valuable source of information about the inner workings of peripherals and the processor itself.
Even if you are working on application code at a higher level, it is quite common to have to step through low level peripheral access code when debugging and sometimes it is necessary to adjust configuration options in the device tree or add completely new options in order to make use of some valuable hardware feature.
You can practice this skill by looking through the data sheets and finding good solutions for your application based on the information you get from the data sheet.
ADC and DAC
Whenever you are dealing with measurements of analog signals (even if you are just measuring battery voltage) you will have to use an ADC. The ADC peripheral needs a certain amount of time to sample the signal before the sample is available to the code.
There will also be situations where you will need to synchronize measurements of multiple signals. Microcontrollers provide the ability to trigger the ADC from a timer, making it easy to sample the signals at precisely the same time.
As a developer you need to understand how to configure the ADC and how to get data from it in the way most suitable for the problem at hand - which may involve configuring DMA (direct memory access) so that data can be moved from the ADC directly into a memory buffer in hardware.
You will need to understand how to register the ADC interrupt, how to respond to DMA events and how to configure the ADC to be triggered from the optimal source (a timer, another ADC or another peripheral).
Timers and Counters
In digital electronics, hardware timers are the primary way of generating complex wave forms and driving external hardware. An advanced timer peripheral can be used to generate arbitrary wave forms, to capture signals with precision, to control other peripherals through internal chip connections and to enable the software to implement higher level timing functionality on top.
Hardware timers are often highly configurable and serve as the primary way to meet the hard realtime deadlines of the application.
The skill of being able to use multiple timers to solve complex control problems is a very valuable area for an embedded engineer to master.
Since you are working with embedded systems, it is very common that you will be looking at schematics and board layouts. You need to be able to understand the schematics and use them for configuring your firmware (for example through device tree configuration).
Having hardware design skills doesn’t just help you to understand schematics, but also makes you better at using simulation tools for testing your firmware code. When you develop plugins for Renode it will help you a lot if you think of the devices you are implementing from a hardware perspective.
Logic-level hardware design is like programming - except different rules apply (things happen in parallel, not sequentially as they do in software). Hardware design skills will help you make the most of programmable hardware such as FPGAs, allowing you to extend your design with custom peripherals and to implement them yourself in code.
Reading sensors is easy, but doing it efficiently and on time can be a little more complicated.
To do it effectively you need to understand how to utilize hardware as much as possible (using DMA, hardware timers etc) as well as how to structure your firmware code such that sensor data can be read in parallel.
Even such a simple sensor as a push button may require code for de-bouncing - which in turn requires you to know how to implement this cleanly on an RTOS without hogging the CPU.
Other sensors operate over I2C, SPI or analog inputs. These also need to be accessed effectively - particularly if you have more than one sensor that needs to be read within a certain time frame. The faster you can read all sensors, the less uncertainty you will be introducing into your control loop.
The biggest roadblock for a software-only embedded engineer is complete dependence on existing hardware. Learning to build your own circuit boards removes this limitation and lets you be truly flexible with your creativity.
This overlap of skill is especially important for teams that do full product development because software engineers know and understand how they want the hardware to behave in order to make software simpler, while hardware engineers are experts in hardware but not necessarily in the details of software.
If you are able to quickly spot potential issues when reviewing hardware, it helps the team to produce a better product much faster.
A basic understanding of public/private key cryptography is an absolute necessity. These concepts are used in the bootloader for firmware signing and can also be used to secure the communication channel to the cloud.
The embedded engineer needs to have skill in applying these concepts to make sure that all sensor messages in a large network of devices can at least be verified through an asymmetric cryptographic signature, and possibly encrypted to avoid leaking sensor data outside of the sensor network when devices are deployed in public places.
Thus the engineer needs to understand cryptographic key pairs, signing of keys using a certificate and public key access control for sensors.
An incorrectly implemented system can easily give a false sense of security: cryptography is being used, but it is being used incorrectly.
It is amazing how many IoT systems are transmitting data in plain sight - possibly broadcasting important information in the open.
Networking is a lot more important than most development teams think. Networking protocols have been developed for over 30 years to ensure reliable data transfers between vast networks of devices.
Networking in embedded systems can take the form of transferring network packets over a serial line between a microcontroller and a Linux CPU. It can also be implemented for efficient data transfer over USB - again using standardized implementations without having to re-implement everything from scratch.
Networking protocols like CoAP, LwM2M, MQTT and CANopen ensure that networks of devices can be connected together over a variety of physical media (USB, CAN bus, Ethernet etc). These protocols have been well tested in the field and can be readily used.
Not understanding networking often leads to developer teams implementing their own "networking" and "message passing" protocols between nodes which almost certainly results in sub-optimal implementation that costs vast amounts of time to maintain and debug.
An efficient network of IoT nodes can be built, configured and connected to the cloud services using entirely the existing set of networking protocols and their time tested implementations.
In a steadily more connected world we have many available wireless protocols, each with its own benefits and drawbacks. Some are fast, some are slow but very reliable. Some include built-in encryption as part of the implementation, some don’t.
Developers need to have skill in using the wireless medium effectively for data transmission and the ability to integrate it into a full system that uses existing networking protocols to communicate.
When a developer team does not have full knowledge of networking protocols, they tend to use the wireless medium for direct data transmission - which runs into problems as soon as payloads exceed the medium's maximum packet size.
If the team further acts on their lack of knowledge about networking protocols, they may end up implementing their own packet fragmentation and assembly code which again tries to reinvent existing concepts only because the team does not understand how to layer other protocols on top of the new transmission medium.
As the scale of embedded systems grows with potentially hundreds of embedded devices on a single premises, so does the need for easy access to this data.
This means that the "product" is rapidly becoming a "swarm" of devices which are all part of a single whole - the API that gives customers access to the data and control interface of all devices combined.
As an embedded engineer you need to be able to work with the API development team and have enough skills in the area of API creation to suggest sensible improvements to the API. You will also find yourself working with integrating your embedded firmware with existing APIs, parsing messages coming from them and sending messages back.
A solid understanding of how to create an efficient and safe API is therefore needed.
An embedded engineer needs to understand how to build an embedded Linux image from source using Yocto or OpenWrt and how to deploy it to a board with a Linux-capable processor. This process has a lot in common with RTOS firmware development - it is just more complex, with a lot more software involved.
Microcontroller systems are powerful but only form a small portion of the embedded system infrastructure. Sensors often communicate with a gateway device which forwards sensor messages to the cloud. This gateway device very often runs a Linux environment with a standardized routing setup so that the microcontroller firmware does not need to include long-range wireless connectivity and full IP networking support.
In such scenarios, the embedded engineer needs to be skilled in Linux firmware development in order to add the necessary software to the Linux gateway and establish a connection to the embedded microcontroller system over one of the available interfaces such as UART, USB, Bluetooth, ZigBee or LoRaWAN.
Embedded Linux build systems like OpenWrt provide a fully customized build process that builds everything - including the cross-compiler toolchain, the Linux kernel and the Linux applications - and then packages everything up into a binary firmware image that can be stored in flash on the Linux-capable gateway board.
POSIX is a standard for writing portable applications that need to interact with the outside world. On Linux, it defines the standard library interface between system calls and the user application.
An embedded engineer working on Linux applications must have a good understanding of the POSIX APIs - including network sockets, threading and device access - in order to build software that runs on Linux quickly.
This will be the absolute basis for everything related to signal processing and control systems design (and also for machine learning!).
You need to be able to solve problems with linear algebra by using matrix operations and linear algebra algorithms.
Linear algebra is the root of all modern computing, machine learning, signal processing, game development and countless other areas of software development.
The fact that computers are digital and process data in discrete time steps, while the world around us runs continuously, poses a constantly recurring problem.
Every natural process that we want to replicate on a computer must be transformed to run in discrete time.
Discrete mathematics deals with sums while continuous mathematics (calculus) deals with integrals. Concepts present in one branch can usually be transformed to the other using tools like the bilinear transform.
As an embedded engineer you will be working predominantly in the digital domain, so you must understand how to work in discrete time and how to transform frequency-domain concepts into the discrete domain so that they can be easily implemented in code.
Other areas of discrete mathematics can also be useful to understand but from the perspective of controlling real systems the area you will use the most is the discrete Z-domain representations of dynamic systems.
Signal Processing (DSP)
Even the most simple signal filtering requires basic knowledge of signal processing to implement effectively.
A developer should never try to "invent" filtering algorithms manually in code. Instead he must be skilled in designing filters using mathematics and then converting the resulting discrete-domain equations into source code. This is a much more robust approach which yields far better performance without running the risk of instability in the filtering.
When developers don’t understand signal processing and filter design, they invent ways to filter the signal and can sometimes even create code that can easily become unstable. The only way to avoid instability is to prove mathematically that the filter is stable given the sampling frequency of the signal.
Digital signal processing knowledge is also a prerequisite for all forms of digital control such as controlling motors, reading sensors and developing feedback control.
When the firmware needs to control some dynamic system, the developer needs to be skilled in control theory to be able to design a good control strategy for controlling the system precisely. When doing this in a firmware application, all calculations must be done in discrete time - taking into account the refresh rate of the control loop.
The developer needs to be able to analyze existing sensor measurements, potentially identify the dynamic system behavior and then mathematically design a control algorithm that will make the system behave as needed.
This requires deep knowledge of control theory and how a system governed by equations behaves when it is being controlled with a digital controller developed in C code.
Hype aside, machine learning simply means solving the parameter optimization problem (with potentially billions of parameters).
At the most basic level, machine learning can be represented by the Kalman filter system identification: recursively identifying the underlying matrices of a dynamic system from input and output data sequence.
At more advanced levels we tend to speak about "tensors" - generalizations of matrices to an arbitrary number of dimensions. We also refer to system identification as "training". This allows us to generalize machine learning to highly complex problems by building our machine learning systems out of much larger building blocks: neural network layers.
It is becoming more and more useful to understand how to evaluate a pre-trained model on resource-constrained systems using libraries such as TensorFlow-Lite.
So even if you are not directly working with machine learning as part of your embedded firmware development work, chances are that you will be coming in close contact with it as its applications become more and more widespread in our society.
When working together as a team developers need to set goals and plan their work in chunks in advance. They then have to look at where they are after completing each chunk of tasks, set new goals together and then repeat this process.
The formal way to organize this kind of feedback cycle is scrum. Of course, scrum is a well defined methodology that is a lot more involved than this, but the basic premise of scrum is being "agile" - meaning that project requirements can change at any time and the team is able to reorganize themselves quickly. This is accomplished through continuous goal setting and review.
When the development team is not familiar with ways of working that keep them agile, the project tends to continue along a pre-planned route regardless of what information is uncovered along the way. This can be disastrous - especially if the project management style mandates that the team must continue along the same pre-planned path regardless of what information comes along.
By training the team to instead set a clear desired state and then find every way possible to make it real, the team is able to stay agile and creatively find ways to accomplish what is important.
Every developer must apply scrum inspired ideas even to their personal work.
This involves managing your own tasks in some way. You don’t need to complicate this too much, even a text file in Emacs Org mode is better than nothing.
Every week you should work according to your own "sprint" that you have set for yourself based on team goals. And you should plan what you will do each day by applying the same principles as the team applies when they utilize scrum.
You just do it on a personal level and this also improves your ability to apply it at team level as well.
Everything that runs on the CPU has in some form started as text. Version control is a way of making changes to this text continuously while keeping the integrity of the whole text intact.
Things get even more interesting when we consider that all of our infrastructure and deployment workflows are text. Version control, when used correctly, becomes a powerful tool in project management since all changes can be completed in chunks, verified and committed to the product through a well defined and scripted process. We achieve this using CI infrastructure and docker.
Version control is what keeps these vast amounts of text in good order as hundreds of developers make changes to parts of this huge body of text.
The skill of version control comes down to knowing how to rebase, merge and fix conflicts in changes that are represented by commits.
When developers don’t understand version control, it tends to become a barrier to fast communication. The main purpose of workflows such as "trunk based development" is to reduce the communication lag between team members so that everybody is always synchronized as much as possible as they work on new features.
Since everything is text, version control is also an excellent way of tracking productivity.
In a continuous delivery, trunk based workflow, the role of code review is to make sure that everybody can quickly suggest improvements and highlight problems in code that is being committed.
The skill of code review involves being able to quickly browse through the diff of a merge request and highlight important points while moving quickly across the merge request. If code review becomes about arguments between developers and if some developers do their code review as a debate session then the code review can easily become a bottleneck in development.
As a developer you must learn how to view code review in relation to the goals of the current sprint and use mentoring sessions for items that you see during code review which seem to indicate that the author of the merge request is in need of additional training on the relevant subjects.
Attention To Detail
Technical debt is the accumulated errors in judgement perpetuated by developers as they work on a project. You start with a simple shortcut somewhere and then that shortcut forces you to make further shortcuts and before you know it, you have been bogged down by complexity.
These shortcuts are like weeds and mold. You must clean them up continuously.
Attention to detail means paying close attention to even such innocent things like return values from functions. Once you establish clear guidelines for such things and follow them, you can easily avoid introducing shortcuts on an ongoing basis and over time the code is kept in good shape making it easier for the whole team to stay agile and fast in implementing new features that make use of existing code.
This also ties into documentation, because the skill of documenting code also encourages looking over return values and even the names of function parameters to make them easier to understand and use.
The team needs to be skilled at closing the gap between starting work on a small task and the results being deployed. Continuous delivery methodology enables the team to continuously add value in small increments while making sure that the firmware is always in a deployable state.
This absolutely requires that the team is skilled in unit testing, test automation, CI infrastructure and agile planning using collaboration tools like GitLab and JIRA - as well as version control, since continuous delivery depends on effective use of version control in combination with merge requests and code review.
Continuous delivery is a skill that must be applied to everything that the team is doing. This means that every single task that can be automated must be automated, and that every single change that is deemed completed must be automatically verified for integrity and quality so that once it is integrated, the software can be deployed safely.
A proper CI infrastructure enables continuous delivery.
It is not enough to just study and practice embedded firmware development skills. In order to truly close the cycle one has to teach it as well.
In fact, "study → practice → teach" is one of the core values of Swedish Embedded Consulting Group. Almost everything we do has been written down and taught to someone at one point or another. And things that have not been written down yet, will be.
Every developer should mentor others and try to put their experience in writing. This is a very powerful way to organize one's own understanding of the subject as well.
If the skill of mentoring is readily practiced by everyone in the team then communication and ideas tend to flow freely and the team becomes stronger.
If the team is unable to mentor, or creates an atmosphere unfriendly to mentoring or sharing knowledge then developers will tend to isolate themselves from their coworkers and create knowledge silos. This will inevitably slow down the whole team and frustrate the developers that are more experienced than others.
Especially if a team makes a lot of decisions "by committee", the effectiveness of such a team will be reduced to the effectiveness and knowledge of the slowest developer.
Team collaboration tools such as GitLab and JIRA considerably improve the effectiveness of the team - if they are used correctly.
As a developer you need to be skilled at using them correctly - so that the team always has a list of organized tasks to work on, as well as a well maintained backlog of other tasks that will need to be done later.
As the team works through each sprint, new tasks can be scheduled for next sprints and in this way the team never forgets to do something - because it is all in the backlog.
However, for this to work well, each team member must be skilled in maintaining the backlog and creating meaningful tasks as new work items come into view.
This also includes chat and meeting tools like Slack, Discord, Teams, and Zoom.
Software code is very abstract by nature. We are taking highly abstract mental concepts and structuring them as text - and this is hard for some people to grasp.
As you increase your skill of mentoring your team, you will also need to present concepts to them in an easy to understand way. To do this effectively, you need to be able to communicate the structure of the program through diagrams.
This includes not just tools like draw.io but also special diagram formats such as PlantUML diagrams (for documentation), packet diagrams (also using text format), dot diagrams etc.
Text-based diagram creation is particularly useful because it allows you as a developer to still express everything as text - while the other person viewing the diagram sees an image instead.