At the end of March, I’ll be attending the annual ECEDHA conference, where I’ll be on a panel with ECE department heads and other industry leaders discussing the future of edge AI curriculum. Before I walk into a room full of department chairs, I’d like to hear from the people actually building products: I would very much appreciate your help in answering the question:
How should universities prepare embedded engineers for a world where AI is increasingly part of the job?
If you’re working in embedded systems today (e.g. hiring engineers, mentoring junior developers, or just trying to ship real hardware), what would you tell universities? What are they getting right? Where are they still missing the mark? Please let me know in the comments!
Two Ways to Teach Embedded AI
In a previous post, I wrote about two approaches to teaching embedded AI.
The first is bottom-up. Build strong foundations in electronics, signals, microcontrollers, real-time systems, and machine learning. Then, once students have depth in each area, bring them together in an advanced elective or capstone. This produces engineers who can reason about tradeoffs. They understand memory hierarchies. They know what latency budgets mean. They appreciate why quantization exists. It’s rigorous and intellectually honest. However, many students never make it that far, as it requires a lot of foundational knowledge.
The second approach is top-down: introduce embedded AI early using scaffolding and high-level tools, let students collect data, train a simple model, and deploy inference to a microcontroller within a few weeks. Build intuition first, then worry about deeper theory later. This approach provides early wins that hook students and create momentum. However, it does not produce experts.
In reality, strong programs probably need both: early exposure to spark interest and later synthesis to build competence.
Where I need your help is understanding this:
What exactly should that competence look like?
What I See in Industry
Over the past several years, I’ve worked with embedded engineers across startups, mid-sized companies, and larger organizations. I’ve noticed a common trend: most new engineers are smart, creative, and motivated. They understand underlying concepts and can apply formulas, but when asked to build an end-to-end system, they can struggle.
Embedded AI is an extension of this systems-level thinking: you need enough probability and statistics to analyze data, enough machine learning to at least work with AI experts, and a deep understanding of embedded constraints (memory limits, inference time, task scheduling, update mechanisms, and failure modes). The AI model is just one component of a much larger system. If students haven’t practiced building systems that cross course boundaries, they will struggle here, too.
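To make the “model as one component” point concrete, here is a minimal sketch of the kind of budget arithmetic an embedded engineer does before a model ever ships. All the numbers, names, and budgets below are hypothetical, chosen only for illustration; real projects pull them from the part’s datasheet and profiling.

```python
# Hypothetical budget check: every figure here is illustrative, not from
# any real MCU or model. The point is that the model is one line item in
# a larger system budget shared with the rest of the application.

def fits_budget(model_flash_kb, model_ram_kb, inference_ms,
                flash_budget_kb=1024, ram_budget_kb=256, latency_budget_ms=100,
                app_flash_kb=600, app_ram_kb=128):
    """Return (ok, report) for a model sharing an MCU with the rest of the app."""
    checks = {
        "flash": app_flash_kb + model_flash_kb <= flash_budget_kb,
        "ram": app_ram_kb + model_ram_kb <= ram_budget_kb,
        "latency": inference_ms <= latency_budget_ms,
    }
    return all(checks.values()), checks

# A 300 kB quantized model with 96 kB of tensor arena fits this budget...
ok, report = fits_budget(model_flash_kb=300, model_ram_kb=96, inference_ms=42)
print(ok, report)

# ...but doubling the arena blows the RAM line item, even though the
# model's accuracy never changed.
ok, report = fits_budget(model_flash_kb=300, model_ram_kb=200, inference_ms=42)
print(ok, report)
```

A model that fails any one of these checks fails the product, regardless of how well it scored in a notebook, which is exactly the tradeoff reasoning the bottom-up curriculum is meant to produce.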
There’s also a workflow gap. University projects are often tightly scoped and relatively clean. Professional engineering is messy; it often involves factors outside the engineer’s control (or outside the scope of any university project):
- Inheriting a legacy codebase
- Dealing with incomplete documentation
- Debugging issues created by another engineer/team
- Being forced to use a particular language or SDK
- Working in a team
Version control, build systems, testing, and documentation can seem like boring, extraneous tasks, but they are crucial to creating robust, industrial products. AI workflows add another layer: datasets, validation splits, model versioning, reproducibility.
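As one small, hedged example of what that extra AI layer looks like in practice: a deterministic train/validation split. The sample IDs below are made up, but the technique — hashing each ID instead of shuffling with an unpinned seed — is a common way to keep splits reproducible across runs and as the dataset grows.

```python
# Minimal sketch of a reproducible train/validation split. Hashing each
# sample's ID makes the assignment stable across runs and machines,
# unlike an unseeded random shuffle. The file names are hypothetical.
import hashlib

def split_of(sample_id: str, val_fraction: float = 0.2) -> str:
    """Deterministically assign a sample to 'train' or 'val' by hashing its ID."""
    digest = hashlib.sha256(sample_id.encode()).digest()
    bucket = digest[0] / 255.0  # map the first hash byte to [0, 1]
    return "val" if bucket < val_fraction else "train"

samples = [f"clip_{i:04d}.wav" for i in range(1000)]
splits = {s: split_of(s) for s in samples}
val_count = sum(1 for v in splits.values() if v == "val")
print(val_count)  # stable: the same samples land in 'val' on every run
```

Note that adding new recordings never reshuffles existing samples between splits, which is the property that makes results comparable between experiments — exactly the kind of reproducibility habit that rarely shows up in an isolated homework assignment.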
Are we teaching students how to operate in that kind of environment? Or are we mostly grading isolated assignments?
What Does “AI Literacy” Actually Mean?
There’s a temptation to say: “We need more AI in the curriculum.” But what does that mean?
Does every embedded engineer need to understand backpropagation in detail? Probably not. Embedded engineers can rely on AI researchers and engineers to help with the finer details of building and training models.
Do they need to understand when machine learning is appropriate versus when a deterministic algorithm is better? Absolutely.
They need to understand what model accuracy really means. They need to appreciate constraints: memory, latency, power, etc. They need to recognize that a 95% accurate model can still be unusable in a real product.
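The “95% accurate but unusable” claim is easy to demonstrate with a toy example. The numbers below are synthetic: on any stream where the event of interest is rare, a model that never fires can still post a high accuracy score.

```python
# Toy illustration with synthetic numbers: when the event of interest
# occurs only 5% of the time, a "detector" that always predicts the
# majority class scores 95% accuracy while catching zero real events.

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def recall(preds, labels, positive=1):
    tp = sum(p == positive and y == positive for p, y in zip(preds, labels))
    actual = sum(y == positive for y in labels)
    return tp / actual

labels = [1] * 5 + [0] * 95   # 5% positive events (e.g. a fault condition)
always_negative = [0] * 100   # model that never fires

print(accuracy(always_negative, labels))  # 0.95
print(recall(always_negative, labels))    # 0.0 -- every real event missed
```

This is why an embedded engineer needs to ask which metric the product actually cares about (recall, false-alarm rate, latency under load) rather than accepting a single accuracy number.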
To me, AI literacy for embedded engineers means evaluating whether ML is the right tool, understanding the cost of deploying it, integrating it into a constrained system, and being able to articulate and mitigate (where possible) the risks of using machine learning.
That’s different from training large models from scratch, and I’m honestly not sure we’ve clearly defined that distinction in many ECE programs yet.
Preparing Engineers for the Next Decade
When I think about what will matter most for embedded engineers over the next ten years, it comes down to graduating students with adaptability and a drive to keep learning. Particular frameworks, tools, and languages will change quickly. Programmers are starting to lean heavily on LLMs to assist with various coding tasks. If we anchor curriculum too tightly to specific tools, we risk training students for a snapshot in time rather than for a career.
Higher education should absolutely teach the fundamentals and offer hands-on projects that let students see those fundamentals in action, which helps keep them engaged with the material. Students focusing on embedded systems will need to quickly learn new concepts (e.g. languages, frameworks, AI models, hardware), think at a systems level when tackling machine learning tasks (e.g. data analytics, hardware constraints, timing), and still be able to debug complex interactions when multiple subsystems fail.
In addition to these hard skills, good engineers should be able to communicate technical tradeoffs to teammates and stakeholders, collaborate in a team environment, and be willing to learn new skills (whether through courses, books, videos, tutors, etc.).
So, I’m Asking You…
If you hire embedded engineers:
- What do new grads consistently struggle with?
- What do you wish they knew on day one?
- Where do you spend most of your mentoring time?
If you’re an embedded engineer early in your career:
- What surprised you most when you entered industry?
- What do you wish you’d practiced more in school?
- Where did you feel unprepared?
And if you’re working directly with edge AI:
- What does “AI-ready embedded engineer” mean to you?
- What skills actually matter in practice?
I’ll be taking these perspectives to ECEDHA and sharing them directly with department heads who are actively revisiting their curricula. Embedded engineering has always worked best when academia and industry talk to each other honestly. AI adds a new twist to the field, much like DSP did in the 1990s. If you have thoughts, I’d genuinely love to hear them in the comments here, on LinkedIn, or directly.

I teach embedded & embedded AI. To me, the two suggested approaches are identical to performing embedded systems design: either from the ground up, by first understanding logic circuits, FPGAs, simple read-modify-write register operations, compilers, etc., and then applying C; or the alternative of jumping right into Arduino examples without understanding the underlying platform. You can make embedded AI work using Edge Impulse without any prerequisites, get something useful, and integrate it deep into your embedded systems design. And then you can learn embedded AI using any of the many toolchains on top of your embedded skills. No-code or low-code tools are great for planting seeds among non-embedded engineers. The greatest revelation, seen from my perspective as a teacher, is when students see the value of their domain knowledge in whatever field they chose to solve a problem with edge AI: the edge AI becomes just a tool to reveal insights not possible using a regular programming paradigm.
Thank you for your insights! I really like the part about having students apply their particular domain knowledge to solve a problem. Do you have students from different backgrounds? Do you find that some domains are more popular than others where students get excited about using edge AI?
I’ve worked as a practitioner in the industry for 15 years, and now I’m teaching embedded AI as an external university lecturer. In the industry, I saw that many juniors were inclined toward either theory or practice: some were theoretically inclined and loved inventing advanced, intricate solutions that weren’t practically feasible or useful, while others were practically inclined but incapable of understanding and analyzing the underlying causes when a model didn’t work. So I agree that we need to teach both, and we need students to see the connection between the two. Inspired by the Hands-On books by Aurélien Géron, I like to teach theory and practice in conjunction for each topic, for example with the pattern: problem (low accuracy, slow inference, etc.) -> underlying cause -> relevant math or principles -> solution -> fix in PyTorch/Keras code.
A specific topic that I think is criminally underrated in most university courses and textbooks is data quality. In the real world, bad data is maybe the number one reason why machine learning projects fail, at least in my experience. I guess many university researchers haven’t experienced such failures and therefore think it’s fine to let students train on ready-made, high-quality datasets. In my course, the students learn how to collect their own dataset and ensure that it represents the real-world distribution, which requires domain knowledge, imagination, and a lot of practical (but often fun) work.
When it comes to messy systems engineering realities, I’m not so sure that a university course is the right ground to teach this. The industry would surely love highly professional and fully trained engineers right out of university, but I think it’s fair to let the university courses focus on fundamentals and basic skills and leave that further training to the industry.
Hi Gustaf,
Thank you for the input! I was chatting with a professor a few years ago at a conference, and we discussed the importance of letting students collect their own datasets, as most ML exercises use pre-made (often toy) datasets. If you don’t mind sharing: what kinds of data do you have students collect? What seems to pique their interest the most?
I’m happy to share! The default course project is a personal face recognizer, so they need to collect photos of themselves (positive class) as well as others (negative class), using the ESP32-S3 Sense camera. But the students are also allowed to come up with their own image classification application, and about half of the teams choose to develop their own idea, which is obviously more fun and motivating, both for them and for me.
Hi Gustaf,
That sounds like a great approach: give them a prompt but let teams modify it if they want to. Have any of the non-face-identification projects stood out to you? Also, what kinds of biases do you focus on when demonstrating this project? In other words, how do you turn misclassifications and bad accuracy into learning experiences?
Good questions. It’s the first time we’re running this course, and the students haven’t actually started training any models yet, so I have yet to see how it turns out. But the course has plenty of scheduled time for project work where we can supervise the students when they run into trouble along the way, so I hope we can help the students turn problem solving into learning experiences on a case-by-case basis. I’d be happy to share when I learn more.
Hi Gustaf,
Good to know, thank you! I’d love to learn more as you find out what works and what doesn’t work.