When new System-on-Modules (SoMs) arrive, the default sentiment can be reduced to, “New Chip X + NPU = Better Edge AI.” Geniatech’s i.MX95-based modules warrant a closer look: the important details aren’t the individual components, but how the entire system architecture is optimized for intelligent edge workloads.
Beyond the NPU: GPU, ISP, and More
The NPU remains the headline component of most edge AI architectures, but it is far more effective when paired with other specialized blocks.
- NPU: eIQ Neutron → Designed for energy-efficient matrix computation (convolutions and other tensor operations).
- GPU: Arm Mali → The Mali GPU can often handle pre- and post-processing of image data (resizing, color-space conversion) and can even take on lighter workloads, freeing the NPU and CPU for more demanding tasks.
- ISP: Image Signal Processor → The “silent hero” of vision AI. A weak ISP hands the model degraded frames, while a good one applies the critical corrections (denoising, exposure, lens correction) before a frame ever reaches the AI, so the model sees clean data and performs better in real-world conditions.
Alone, each block does little, but chained in the right way they form a choreographed pipeline: camera sensor → ISP → GPU (pre-processing) → NPU (inference) → CPU (decision). The main application CPU is touched only at the final step. A sketch of this staging appears below.
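To make the staging concrete, here is a minimal Python sketch of that flow. The stage functions, tensor shapes, and the stubbed NPU call are illustrative assumptions; on real hardware each stage would run on its dedicated block via the platform’s camera and inference stacks (e.g., NXP’s eIQ tooling for the NPU).

```python
import numpy as np

def isp_stage(raw: np.ndarray) -> np.ndarray:
    """ISP: crude black-level correction and exposure normalization."""
    frame = raw.astype(np.float32)
    frame -= frame.min()                    # black-level correction
    return frame / max(frame.max(), 1.0)    # normalize exposure to [0, 1]

def gpu_stage(frame: np.ndarray, size: int = 224) -> np.ndarray:
    """GPU: resize and convert to the tensor layout the model expects."""
    h, w, _ = frame.shape
    ys = np.linspace(0, h - 1, size).astype(int)   # nearest-neighbor resize
    xs = np.linspace(0, w - 1, size).astype(int)
    resized = frame[np.ix_(ys, xs)]
    return resized.transpose(2, 0, 1)[None]        # HWC -> NCHW, batch of 1

def npu_stage(tensor: np.ndarray) -> np.ndarray:
    """NPU: stand-in for the real inference call (e.g., a quantized CNN)."""
    return np.random.rand(1, 10)  # fake class scores

def cpu_stage(scores: np.ndarray) -> int:
    """CPU: the only step the application processor really needs to run."""
    return int(scores.argmax())

raw_frame = np.random.randint(0, 4096, (1080, 1920, 3))  # fake 12-bit sensor data
decision = cpu_stage(npu_stage(gpu_stage(isp_stage(raw_frame))))
print(f"decision: class {decision}")
```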
The Real-Time Cores: Not an Afterthought, but a Co-Processor
The companion Cortex-M7 and Cortex-M33 cores are often glossed over as being “for real-time control.” In an AI-driven system, their role is more strategic.
- Deterministic Data Acquisition: They manage timing and control for industrial I/O and system sensors with microsecond precision, so the data handed to the AI is accurate and in sync.
- Safety & Supervision: They can serve as a “watchdog,” monitoring the main A55 Linux system and the AI inference results; if an anomaly is detected, they trigger an independent safe response. This matters for functional safety (hinted at by the ASIL B support); see the heartbeat sketch after this list.
- Always-On, Low-Power Sensing: In battery-powered systems, the M-cores service low-power sensors and wake the main AI complex only when a significant event is detected.
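The supervision pattern is easy to see in miniature. The sketch below shows the heartbeat-plus-sanity-check logic in Python purely for readability; on the actual module this would be C firmware on the M-core, talking to the Linux side over a channel such as RPMsg. All thresholds and the safe_state() action are invented for illustration.

```python
import time

HEARTBEAT_TIMEOUT_S = 0.5   # max silence tolerated from the Linux/AI side
SCORE_MIN, SCORE_MAX = 0.0, 1.0  # inference results outside this range are anomalous

last_heartbeat = time.monotonic()

def on_heartbeat(inference_score: float) -> None:
    """Called whenever the A55 side reports progress plus its latest result."""
    global last_heartbeat
    last_heartbeat = time.monotonic()
    if not (SCORE_MIN <= inference_score <= SCORE_MAX):
        safe_state("inference result out of sane range")

def supervise_once() -> None:
    """One pass of the watchdog loop; runs independently of the main OS."""
    if time.monotonic() - last_heartbeat > HEARTBEAT_TIMEOUT_S:
        safe_state("Linux/AI complex stopped responding")

def safe_state(reason: str) -> None:
    """Independent safe response: e.g., de-energize actuators, raise a fault."""
    print(f"entering safe state: {reason}")

# Example: one healthy beat, then a simulated hang on the Linux side.
on_heartbeat(0.87)
time.sleep(0.6)
supervise_once()  # -> entering safe state: Linux/AI complex stopped responding
```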
System-Level Implications for Geniatech’s Design
Geniatech’s task was to unlock the potential of this silicon, and several of its design choices do exactly that.
- “Energy Flex” Framework: This likely involves fine-grained power-domain control, powering the NPU, GPU, or real-time cores on and off independently based on load; that is how a 3-8W typical power target becomes realistic (a back-of-envelope illustration follows this list).
- Memory Architecture: LPDDR5 support matters. AI inference is frequently memory-bound, and LPDDR5’s higher bandwidth and better energy efficiency keep the bandwidth-hungry NPU and GPU fed; streaming a large model’s weights every frame can by itself demand gigabytes per second.
- Thermal Design: A compact SoM performs only as well as its thermal design allows. Sustaining AI inference without throttling is something Geniatech must demonstrate as a key differentiator.
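The power claim in the first bullet can be sanity-checked with simple duty-cycle arithmetic. In the sketch below, every wattage and duty cycle is an invented illustrative number, not a measured figure for the i.MX95 or the Energy Flex framework:

```python
# Back-of-envelope model: average power = sum of (domain power x duty cycle).
domains = {
    # name: (active power in watts, fraction of time powered on)
    "A55 cluster": (2.0, 0.50),
    "NPU":         (2.5, 0.30),
    "GPU":         (1.5, 0.20),
    "M-cores":     (0.2, 1.00),  # always-on sensing and supervision
    "base/memory": (1.0, 1.00),
}

average_w = sum(p * duty for p, duty in domains.values())
peak_w = sum(p for p, _ in domains.values())

print(f"average: {average_w:.2f} W")  # 3.25 W with domains duty-cycled
print(f"peak:    {peak_w:.2f} W")     # 7.20 W with every domain powered
```

Gating the NPU and GPU when idle is what pulls the average toward the bottom of the 3-8W window while leaving headroom for bursts.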
The Bottom Line
When assessing these modules for an edge AI project, don’t simply ask, “How many TOPS does the NPU have?” Ask instead, “How is the whole system designed, from sensor input to reliable output, to handle my intelligent workload from prototype to deployment?” Geniatech’s offering appears to be built on exactly this system-level thinking.

