Posted by hasheddan 5 days ago
For "bigger" devices, it's usually a Cortex inside a system-on-chip or system-on-module, 32 bits single core and a few Mb of RAM for low-end (enough to run regular Linux distro instead of uClinux for instance), 64 bits multicore for high-end devices that deal with audio/video. That kind of business is often resource-hungry in every way.
I work with that kind of stuff, and to me these "microcontrollers" are just monsters that I hesitate to call "micro" when some of my coworkers work on much smaller chips with only a few K of RAM available.
Wouldn't it be advantageous if we used ONNX for everything? https://onnx.ai/
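For what it's worth, here is a minimal sketch of what inference looks like with ONNX Runtime's C++ API on a host-class device (the model path and the "input"/"output" tensor names are placeholders for whatever your exported model uses, and error handling is omitted):

```cpp
#include <onnxruntime_cxx_api.h>
#include <cstdio>
#include <vector>

int main() {
  // One environment and one session per loaded model.
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "demo");
  Ort::SessionOptions opts;
  Ort::Session session(env, "model.onnx", opts);  // placeholder model path

  // Dummy input: a 1x4 float tensor (shape depends on your model).
  std::vector<float> input_data{0.1f, 0.2f, 0.3f, 0.4f};
  std::vector<int64_t> shape{1, 4};
  Ort::MemoryInfo mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
  Ort::Value input = Ort::Value::CreateTensor<float>(
      mem, input_data.data(), input_data.size(), shape.data(), shape.size());

  // Tensor names must match the exported model; these are placeholders.
  const char* in_names[]  = {"input"};
  const char* out_names[] = {"output"};
  auto outputs = session.Run(Ort::RunOptions{nullptr},
                             in_names, &input, 1, out_names, 1);

  const float* result = outputs[0].GetTensorData<float>();
  std::printf("first output value: %f\n", result[0]);
  return 0;
}
```

Note this is ONNX Runtime on a Linux-class board; on the tiny end of the spectrum you would convert the ONNX model to something else rather than run this runtime directly.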
Apart from that, the author implemented the model the traditional way in C, but on ESP32s it's often more convenient to use TF-Lite Micro together with the Berry scripting language (see the sketch at the end of this comment).
However, since I have never used ONNX in this kind of project, I can't speak to its advantages, so comparisons are difficult from my perspective. But as I said, TF-Lite Micro offers benefits like easy integration, good optimization, and, as the name implies, the TensorFlow toolchain behind it.
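To make the TF-Lite Micro point concrete, here is roughly what the C++ setup looks like on an ESP32 (a minimal sketch: the model array name, arena size, and op list are placeholders, and the exact constructor signature varies a bit between TFLM versions):

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Flatbuffer produced by the TF-Lite converter, embedded as a C array
// (e.g. via `xxd -i model.tflite`). Placeholder name.
extern const unsigned char g_model_data[];

// Scratch memory for tensors; size it to your model.
constexpr int kArenaSize = 10 * 1024;
static uint8_t tensor_arena[kArenaSize];

void run_inference(const float* features, int n) {
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // Register only the ops the model actually uses to save flash.
  static tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();

  // In a real application you would build the interpreter once and reuse it.
  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  interpreter.AllocateTensors();

  TfLiteTensor* input = interpreter.input(0);
  for (int i = 0; i < n; ++i) input->data.f[i] = features[i];

  interpreter.Invoke();

  TfLiteTensor* output = interpreter.output(0);
  // output->data.f[0] holds the first result.
  (void)output;
}
```

The main design choices are the static tensor arena (no heap allocation at inference time) and the op resolver listing only the kernels the model needs, which is what keeps the footprint small enough for MCU-class parts.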