Hacker News
Show HN: ASL Classifier Built with CoreML and Roboflow (github.com/narner)
43 points by narner on Feb 1, 2021 | hide | past | favorite | 16 comments


Neat! I would qualify that this is an ASL fingerspelling classifier, rather than a general classifier for the ASL language.


Great job Narner! I'm David Lee, and I'm glad that you have found my dataset helpful!


Thanks David! It was fun to make something with it :)


For sure!


Did you find the model to perform better / worse when the background varied? I see it's all wood table examples in the gifs.


Yeah; I found that if there were a lot of other objects in view (laptop/phone/notebook/pens/etc.) then it had a hard time (I'm not great at signing anyway, ha)

It may have performed better if I had done some image segmentation processing before running inference.
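(Not from the repo — a hypothetical sketch of that preprocessing idea, using simple background subtraction against a reference frame; the function name and threshold are made up. A real pipeline would use a proper segmentation model before handing frames to the classifier.)

```python
import numpy as np

def mask_background(frame, background, threshold=30):
    """Zero out pixels close to a reference background frame,
    so the classifier mostly sees the hand region.

    frame, background: HxWx3 uint8 arrays; threshold: per-pixel
    summed-channel difference below which a pixel counts as background.
    """
    # Per-pixel absolute difference, summed over RGB channels
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=-1)
    mask = diff > threshold  # True where the frame differs from background
    return frame * mask[..., None]  # broadcast mask over channels
```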


> I'm not great at signing anyways, ha

You mean well, but it looks like David Lee isn't fluent either. Have y'all included Deaf folks in this? I'm not deaf, but I learned from folks who are... and well-meaning hearing folks getting it wrong was a consistent theme in learning about Deaf culture.

The dataset is questionable, and I note that his classifier has problems recognizing the g and h handshapes because...

(quoth wikipedia) > In most drawings or illustrations of the American Manual Alphabet, some of the letters are depicted from the side to better illustrate the desired hand shape. For example, the letters G and H are frequently shown from the side to illustrate the position of the fingers. However, they are signed with the hand in an ergonomically neutral position, palm facing to the side and fingers pointing forward.

David's example shows him holding his wrist in a position that would quickly cause RSI (but his son gets it right). Consider: it takes me under a second to spell "gerontology", and I lost fluency years ago.

This chart depicts those handshapes more accurately:

http://www.queerasl.com/wp-content/uploads/2018/01/ASL-alpha...


Yeah; this was definitely just a very quick project to demonstrate what was possible. If I were trying to build an actual production-level/commercial project, I would work with a Deaf person on this to develop a more comprehensive dataset.


Fun (possibly local dialect) tidbit: the zz in pizza can be spelled in one go; similar to the V handshape but following the Z motion. If somebody is gonna take this to production level, it had better include more than one Deaf person (and, just like facial recognition... folks of diverse skin tone, gender expression and BMI)


Oh interesting; thanks! And yes - I agree


Very cool project.

I love these applications of CV for accessibility. This is exactly what technology is made for.


Thanks Lenny! And yes, agreed. You should check out Oz Ramos' work: https://handsfree.js.org/


pretty awesome - I wonder if the algo could be used to teach people how to sign the alphabet - new twist on machine "learning"


You could probably do that!


Is the app available on the App Store?


Not at the moment! Just an open-source demo for now; but I think this could be a very cool app for whenever the Apple glasses get released...



