
My Introduction to Embedded Rust series is being released, and last November I taught an Intro to Embedded Rust workshop at Hackaday Supercon. The goal wasn’t to convince anyone that Rust is the future of embedded systems. I simply wanted participants to get hands-on experience running Rust on real hardware so they could decide for themselves whether it was worth pursuing. Building the demos and teaching the workshop forced me to confront the kinds of problems that show up in real firmware: interrupts, shared state, hardware drivers, and long-lived codebases.
After all that work, I keep coming back to the same question: is embedded Rust ready for primetime?
My answer is less dramatic than either Rust enthusiasts or skeptics might expect. Embedded Rust is a good candidate for some embedded projects today, but it’s probably not ready to replace C any time soon.
What “Primetime Ready” Actually Means
When people ask whether a technology is “ready,” they often mean very different things. For some, primetime-ready means replacing the dominant tool overnight. For others, it means being stable enough for production use.
For this discussion, I’m using a more practical definition. A technology is ready for primetime if engineers can use it to build and maintain real products without taking on unreasonable risk.
By that definition, embedded Rust has clearly crossed an important threshold. You can build real firmware today using Rust. The ecosystem supports a number of modern microcontrollers, and the tooling is mature enough that experienced developers can be productive.
But “ready for production” is not the same thing as “the obvious default choice.” C still holds that position in embedded systems, and for good reasons.
Where Embedded Rust Clearly Works
The strongest argument for Rust in embedded systems is safety. Rust’s ownership and borrowing rules eliminate entire classes of bugs that are notoriously difficult to catch in C.
In safe Rust code, you don’t get use-after-free errors or accidental data races. Memory lifetimes are explicit, and the compiler enforces rules that embedded developers normally rely on discipline and code reviews to maintain. These guarantees are particularly appealing in systems that must run reliably for years at a time.
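To make the ownership model concrete, here is a host-runnable sketch (the `Led` type and `take_led` helper are my own mock, not any HAL’s API) of the singleton “take” pattern that embedded crates use to hand out exactly one handle to each peripheral:

```rust
use std::sync::Mutex;

// Mock peripheral; in a real HAL this would be a handle to hardware.
struct Led {
    on: bool,
}

// Singleton storage, modeled after the Peripherals::take() pattern.
static LED_SINGLETON: Mutex<Option<Led>> = Mutex::new(Some(Led { on: false }));

// The first caller gets the one-and-only handle; later callers get None.
fn take_led() -> Option<Led> {
    LED_SINGLETON.lock().unwrap().take()
}

fn main() {
    // The first take succeeds and transfers ownership of the one handle.
    let led = take_led().expect("first take succeeds");
    assert!(!led.on);

    // Any second attempt observes that the singleton is gone.
    assert!(take_led().is_none());

    // Moving `led` into a consumer ends our access; using `led` again
    // after the move would be a compile error, not a runtime bug.
    drop(led);
}
```

Because there is only ever one owner of the handle, “two parts of the program both drive this pin” becomes a type error rather than a field debugging session.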
This is an interesting contrast with approaches like MISRA C. MISRA provides a set of coding rules intended to reduce the risk of defects in C programs. Those rules are enforced through a combination of static analysis tools, documented exceptions, and structured code reviews. Rust, by comparison, enforces many of the same safety properties directly in the language. MISRA tries to make C behave like a safe language, whereas Rust is a safe language that still allows low-level control.
Rust also brings a modern development environment. Cargo handles building and dependency management in a way that feels much more predictable than traditional Makefile-based systems. Reproducible builds and version-locked dependencies are straightforward, and testing tools are built directly into the ecosystem. After years of wrestling with embedded build systems, this alone can feel like a major improvement.
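As a sketch of what that looks like in practice (crate names and versions are illustrative; check the current releases for your target), a minimal manifest for a Cortex-M project fits in a few lines, and the generated Cargo.lock pins every dependency version for reproducible builds:

```toml
[package]
name = "blinky"
version = "0.1.0"
edition = "2021"

[dependencies]
# Exact versions are recorded in Cargo.lock for reproducible builds.
cortex-m = "0.7"
cortex-m-rt = "0.7"
panic-halt = "0.2"

[profile.release]
opt-level = "s"   # optimize for size, a common choice on microcontrollers
lto = true
```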
Just as important, the ecosystem is now large enough to support real work. Rust runs well on platforms like the RP2040/RP2350 family, ESP32 devices, STM32 microcontrollers, and Nordic wireless chips. Hardware abstraction layers and peripheral access crates exist for many popular targets, and frameworks such as Embassy and RTIC offer structured ways to handle concurrency.
At this point, embedded Rust is no longer experimental. You can build real firmware with it today.
Friction With Rust
Despite these strengths, working with embedded Rust exposes some real tradeoffs. The first is what I’ve come to think of as a kind of “safety tax,” which is the extra complexity required to express patterns that are trivial in C.
Consider a simple periodic timer interrupt that toggles an LED. In C, the timer peripheral and GPIO pin would typically be stored in global variables and accessed directly from the interrupt handler:
Timer alarm;
Gpio led;

void TIMER_ISR(void) {
    alarm_clear_interrupt(&alarm);
    gpio_toggle(&led);
    alarm_schedule(&alarm, 500);
}
In Rust, the same pattern requires a more structured approach. Because peripherals must have a single owner and shared access must be synchronized, the timer and LED need to be placed in global mutex-protected storage and accessed through a critical section:
static G_ALARM: Mutex<RefCell<Option<Alarm0<CopyableTimer0>>>> =
    Mutex::new(RefCell::new(None));
static G_LED: Mutex<RefCell<Option<LedPin>>> =
    Mutex::new(RefCell::new(None));

critical_section::with(|cs| {
    let mut alarm_ref = G_ALARM.borrow(cs).borrow_mut();
    let mut led_ref = G_LED.borrow(cs).borrow_mut();
    if let (Some(alarm), Some(led)) = (alarm_ref.as_mut(), led_ref.as_mut()) {
        alarm.clear_interrupt();
        let _ = led.toggle();
        let _ = alarm.schedule(MicrosDurationU32::millis(500));
    }
});
This version is safer: the ownership rules make it impossible for multiple parts of the program to modify the peripherals at the same time, and the critical section ensures that access from the interrupt handler is well defined. However, the cost is more code and more structure, which can be harder to read and understand. What would be a few lines of global state in C becomes a combination of mutexes, interior mutability, and critical sections in Rust.
Rust forces you to make correctness explicit, but that explicitness comes with extra complexity.
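The shape of that pattern can be sketched on a desktop. In this host-runnable version (my own substitutions: `std::sync::Mutex` stands in for `critical_section::Mutex`, and a mock `LedPin` stands in for the real GPIO type), the flow is the same: initialize the shared state once, then have the handler lock, check for initialization, and act:

```rust
use std::sync::Mutex;

// Mock GPIO pin standing in for the HAL's LedPin type.
struct LedPin {
    high: bool,
}

impl LedPin {
    fn toggle(&mut self) {
        self.high = !self.high;
    }
}

// Shared storage: the pin is created at startup and parked in
// mutex-protected storage so the "interrupt handler" can reach it.
static G_LED: Mutex<Option<LedPin>> = Mutex::new(None);

// Stand-in for the ISR body: lock, check initialization, act.
fn timer_isr() {
    if let Some(led) = G_LED.lock().unwrap().as_mut() {
        led.toggle();
    }
}

fn main() {
    // Startup code initializes the shared state exactly once.
    *G_LED.lock().unwrap() = Some(LedPin { high: false });

    // Later, the "interrupt" fires and toggles the pin.
    timer_isr();
    assert!(G_LED.lock().unwrap().as_ref().unwrap().high);
}
```

The extra moving parts are the price of making the C version’s implicit assumption (“nothing else touches these globals while the ISR runs”) into something the compiler can check.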
Another reality is that Rust does not eliminate unsafe code. Low-level embedded work inevitably involves operations that the compiler cannot verify: manipulating registers, configuring interrupts, or managing DMA buffers. These tasks require unsafe blocks. For example, rp-hal contains many of them, like this one:
pub fn enable_tick_generation(&mut self, cycles: u16) {
    // now we have separate ticks for every destination
    // we own the watchdog, so no-one else can be writing to this register
    let ticks = unsafe { &*pac::TICKS::ptr() };
    for ticker in ticks.tick_iter() {
        // TODO: work out how to rename proc0_cycles to cycles in the SVD patch YAML
        ticker
            .cycles()
            .write(|w| unsafe { w.proc0_cycles().bits(cycles) });
        ticker.ctrl().write(|w| w.enable().set_bit());
    }
}
At first glance, this can feel disappointing. If unsafe code is unavoidable, why not just use C?
The difference is that Rust encourages you to isolate unsafe operations into small, well-defined regions. In other words, you have to explicitly opt in to unsafe code. Ideally, a thin layer of hardware drivers contains the unsafe code, while the rest of the system remains safe. In practice this works well, but it still means embedded Rust developers must understand when and how to cross that boundary.
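As an illustration of that boundary, here is a toy register wrapper (my own construction, not rp-hal’s API) that confines the unsafe volatile access to two small methods and exposes a safe interface; on a host it is backed by an ordinary variable instead of a memory-mapped address:

```rust
use std::ptr::{read_volatile, write_volatile};

/// Toy register wrapper: the unsafe volatile accesses live here and
/// nowhere else; callers only see the safe read/write methods.
struct Reg {
    addr: *mut u32,
}

impl Reg {
    /// Caller promises `addr` stays valid for the register's lifetime;
    /// on real hardware this would be a fixed memory-mapped address.
    unsafe fn new(addr: *mut u32) -> Self {
        Reg { addr }
    }

    fn write(&mut self, value: u32) {
        // Sound because `new` required a valid address.
        unsafe { write_volatile(self.addr, value) }
    }

    fn read(&self) -> u32 {
        unsafe { read_volatile(self.addr) }
    }
}

fn main() {
    let mut backing: u32 = 0; // host stand-in for a hardware register
    let mut reg = unsafe { Reg::new(&mut backing as *mut u32) };
    reg.write(0x1);
    assert_eq!(reg.read(), 0x1);
}
```

The unsafe promise is made once, in one place, and everything built on top of `Reg` stays in safe Rust.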
The ecosystem also has some rough edges. Vendor support varies widely, and debugging tools are not always as polished as their C counterparts. Compile times can be noticeably longer than equivalent C builds, which affects the edit–build–flash cycle that embedded developers rely on.
None of these problems are fatal, but they do add a significant amount of friction.
Rust vs Disciplined C
A more realistic comparison is Rust versus disciplined C. Many commercial embedded systems use strict coding standards such as MISRA C, along with static analysis tools and formal development processes.
MISRA itself is not a certification, but it is widely used to support certifications such as ISO 26262 for automotive systems or IEC 61508 for industrial safety. These environments have decades of experience with C toolchains and analysis tools, and auditors understand how to evaluate MISRA-based development processes.
Rust approaches safety differently. Instead of relying primarily on process and static analysis, Rust encodes many safety guarantees in the language itself. Safe Rust code cannot contain data races or certain forms of memory corruption.
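A small host example shows the mechanism: handing a bare mutable reference to multiple threads is rejected at compile time, so the version the compiler accepts must go through a synchronization primitive, and every update is accounted for:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared state must be wrapped in a synchronization primitive
    // before threads can touch it; the compiler enforces this.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..8)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1000 {
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    // No torn or lost updates: 8 threads x 1000 increments.
    assert_eq!(*counter.lock().unwrap(), 8_000);
}
```

In C, the unsynchronized version compiles silently and fails intermittently; in Rust it never compiles at all.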
But the surrounding ecosystem still favors C. Certified toolchains, mature static analysis tools, and established workflows make disciplined C the safer organizational choice for many teams, especially in certification-heavy industries.
Technically, Rust-based products can obtain the same certifications as C-based products. In practice, doing so usually requires more up-front engineering effort because the certification ecosystem is still built around C.
A Practical Adoption Strategy
One of the most effective ways to adopt Rust in embedded systems is to build on top of existing low-level layers rather than replacing them outright. Frameworks like Zephyr or established hardware abstraction layers (e.g. rp-hal, esp-hal, nrf-hal) can handle the details of hardware initialization and driver support. Rust can then be used for the parts of the system where complexity is highest and bugs are most expensive: protocol handling, state machines, data processing, and concurrency. This approach keeps the unsafe boundary small and allows teams to gain experience with Rust without rewriting their entire platform.
Moving Rust deeper into the stack makes sense once the team is comfortable and prepared to maintain that low-level code. Starting there is usually much harder.
So Is Embedded Rust Ready?
After spending a few months building projects and video lessons, my conclusion is this:
Embedded Rust is a good candidate for some embedded projects today, but it’s probably not ready to replace C any time soon.
Rust is no longer experimental, and it offers real advantages in safety, tooling, and long-term maintainability. For the right projects and teams, those advantages can outweigh the costs. That being said, Rust isn’t a drop-in replacement for C. It’s a different way of building firmware, one that trades simplicity and maturity for stronger guarantees and modern development practices.
For now, Rust belongs in the embedded toolbox alongside C rather than in place of it.

Rust solves problems that experienced, disciplined and quality-conscious engineers never had.
cool review ! imho it is still easier to work in ‘disciplined’ c vs rust, at least for the cleaner more readable code.
another thing is about c++: rust doesn’t do classes and inheritance, etc, which incidentally makes code more complex.
but I found it valuable e.g. when implementing libraries where the implementation changes between hardware
https://github.com/ag88/stm32duino_spi_dma
e.g. for this spi library that has dma implementations, virtual functions etc abstract away implementation dependencies so that the higher level function calls remains the same across different stm32 series.
Smdh. Pick an ugly abstraction, get ugly code. Pick a PAC that doesn’t mark creating peripherals as unsafe and you won’t write unsafe code. Pick a HAL that abstracts timers in a pleasant way and write this instead:
let mut led = Output::new(p.PB7, Level::High, Speed::Low);

loop {
    led.toggle();
    Timer::after_millis(300).await;
}
Pick a different abstraction and write this:
let mut delay = cortex_m::delay::Delay::new(core.SYST, clocks.system_clock.freq().to_Hz());
let pins = rp_pico::Pins::new(pac.IO_BANK0, pac.PADS_BANK0, sio.gpio_bank0, &mut pac.RESETS);
let mut led_pin = pins.led.into_push_pull_output();

loop {
    led_pin.set_high().unwrap();
    delay.delay_ms(500);
    led_pin.set_low().unwrap();
    delay.delay_ms(500);
}
Life is what you make of it.