
Yeah, it’s getting harder to tell. At some point the difference won’t be in the voice itself, but in how the conversation flows.


Even now, I think they're quick enough, but they interrupt at the wrong times; humans can tell whether they have enough context yet.

So, I agree. But I believe the problem is pretty solvable with enough tokens.
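
A minimal sketch of the kind of gating I have in mind, in Python. The punctuation-based scorer is a naive stand-in for whatever model actually estimates end-of-turn, and the thresholds are made-up numbers:

    import time

    SILENCE_THRESHOLD_S = 0.6  # minimum pause before we even consider replying
    EOT_CONFIDENCE = 0.8       # semantic confidence required at that threshold

    def score_end_of_turn(transcript: str) -> float:
        # Stand-in scorer: a real system would ask a small LLM or a
        # classifier for P(speaker is finished) over the partial transcript.
        text = transcript.rstrip()
        if text.endswith(("?", ".", "!")):
            return 0.9
        if text.endswith((",", "and", "but", "so")):
            return 0.2  # trailing connective: almost certainly mid-thought
        return 0.5

    def should_respond(transcript: str, last_speech_time: float) -> bool:
        silent_for = time.time() - last_speech_time
        if silent_for < SILENCE_THRESHOLD_S:
            return False  # still a natural pause, don't barge in
        # The longer the silence, the less semantic confidence we demand.
        required = max(0.3, EOT_CONFIDENCE - 0.2 * (silent_for - SILENCE_THRESHOLD_S))
        return score_end_of_turn(transcript) >= required

The point is that silence duration and semantic completeness trade off against each other, rather than either one triggering a response on its own.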


Impressive. Running a 400B model on-device, even at low throughput, is pretty wild.
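
Back-of-the-envelope, assuming 4-bit quantized weights and memory-bandwidth-bound decoding (both guesses about their setup; the bandwidth figure is illustrative):

    params = 400e9
    bytes_per_param = 0.5  # 4-bit quantized weights
    weights_gb = params * bytes_per_param / 1e9
    print(f"weights: ~{weights_gb:.0f} GB")  # ~200 GB just for the model

    # Decode is roughly memory-bandwidth bound: each generated token
    # streams the active weights once. For a dense (non-MoE) model:
    bandwidth_gbps = 800  # e.g. a high-end unified-memory machine
    tok_per_s = bandwidth_gbps / weights_gb
    print(f"~{tok_per_s:.1f} tok/s")  # ~4 tok/s

So single-digit tokens per second is about what you'd expect for dense decode; anything faster suggests sparsity or speculative tricks.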



