
Inference only. (So this is competing with Google TPUv1; a few years late and way more expensive, but with more memory)


1. This isn't inference-only: it has the full capabilities of a normal GPU, just small and low-power (and therefore much slower than normal GPUs).

2. TPUv1 is a matrix-multiply ASIC that requires a host CPU to do anything. This thing is a SoC that includes both a CPU and a GPU. The CPU is pretty fast for what it is - much faster than e.g. a Raspberry Pi's; see https://www.phoronix.com/scan.php?page=article&item=nvidia-j....

3. Not sure how you know whether this is more expensive than a TPUv1, since the TPUv1 was never sold or made available outside of Google.

A much better comparison would be between this and the Edge TPU development board.


You can't put a TPUv1 in a car because Google doesn't sell them.


It can do training with its GPU, though it's not the fastest thing in the world.
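To make "it can do training" concrete: any device with a general-purpose GPU and a backprop-capable framework can run an ordinary gradient-descent loop; speed is the only question. Here's a minimal sketch of such a loop (plain NumPy on the CPU as a stand-in - on the actual board you'd use a CUDA-backed framework, but the structure is the same):

```python
import numpy as np

# Toy regression problem: learn y = sum of the 4 input features.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
y = X.sum(axis=1, keepdims=True)

W = np.zeros((4, 1))  # model weights
lr = 0.05             # learning rate

def mse(W):
    return float(np.mean((X @ W - y) ** 2))

initial_loss = mse(W)
for _ in range(200):
    # Gradient of mean squared error w.r.t. W
    grad = 2 * X.T @ (X @ W - y) / len(X)
    W -= lr * grad
final_loss = mse(W)
# final_loss should be far below initial_loss after 200 steps
```

Whether 200 such steps on a big model take seconds or hours is what separates "checkbox training" from real training hardware.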


So it looks like it can do "checkbox training" (add an iota of training capability just so you can check the box labelled "it does training").

Got it.



