I'm interested in playing with this, but I'm not interested in paying $3000
for a high-end Nvidia NPU. In principle it seems possible to split the task
into smaller subtasks that could be distributed across a Beowulf-style Pi
cluster, and I hear several enclosures exist that let you connect multiple
Compute Modules over a high-speed bus.
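I imagine the general shape would be something like the MPI sketch below --
mpi4py is just my assumption, since Beowulf clusters traditionally use MPI,
and evaluate() is a placeholder for whatever per-node work a real model would
actually need -- but I could be entirely wrong about that.

    # Minimal sketch: scatter chunks of a workload across cluster nodes,
    # process each chunk locally, and gather the results on the root node.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    def evaluate(chunk):
        # Placeholder for the real per-node work (e.g. part of a model).
        return float(np.sum(chunk ** 2))

    if rank == 0:
        # Root node splits the full workload into one chunk per node.
        data = np.arange(1000, dtype=np.float64)
        chunks = np.array_split(data, size)
    else:
        chunks = None

    # Each node receives its chunk, processes it, and results are gathered.
    my_chunk = comm.scatter(chunks, root=0)
    my_result = evaluate(my_chunk)
    results = comm.gather(my_result, root=0)

    if rank == 0:
        print("combined result:", sum(results))

Presumably you'd run it with something like
"mpirun -np 4 --hostfile hosts python sketch.py" once the nodes can see each
other, though I haven't actually tried it.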
I have minimal experience with Raspberry Pi (not zero, but minimal), none
with setting up Beowulf clusters, and none with decomposing machine learning
tasks and distributing them among processors. So I'm wondering whether there
is an existing project I could learn from and maybe eventually contribute to,
even if only as a tester.
Thanks.
--
Carl Fink
carl@finknetwork.com  https://reasonablyliterate.com  https://nitpicking.com
If you want to make a point, somebody will take the point and stab you with it.
-Kenne Estes