Hacker News

That's one way of looking at it, I suppose. But it does require stretching the definition of "token" a little if you are, e.g., using regexes to identify arbitrarily long multiplications. My preferred perspective is that you are using regexes to parse regular sublanguages, and then something more powerful (e.g. a pushdown automaton, though there are even more powerful parsers than that) to bind those sub-parsers together.
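Concretely, that division of labor might look like this sketch (a hypothetical Python example, not from the thread): each regular sublanguage gets its own regex alternative in the lexer, and a recursive-descent parser — equivalent in power to a pushdown automaton, since the call stack plays the role of the PDA's stack — binds the resulting token stream together and handles the nesting a regex alone cannot.

```python
import re

# Lexer: each regular sublanguage (numbers, operators, parens) is one
# named alternative. The regex only chops the input into tokens.
TOKEN_RE = re.compile(
    r"\s*(?:(?P<NUM>\d+)|(?P<MUL>\*)|(?P<ADD>\+)|(?P<LP>\()|(?P<RP>\)))"
)

def tokenize(src):
    tokens, pos = [], 0
    src = src.rstrip()
    while pos < len(src):
        m = TOKEN_RE.match(src, pos)
        if not m:
            raise SyntaxError(f"bad input at position {pos}")
        tokens.append((m.lastgroup, m.group(m.lastgroup)))
        pos = m.end()
    return tokens

# Parser: recursive descent over the token stream. The recursion depth
# tracks paren nesting -- the context-free part regexes can't express.
def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos][0] if pos < len(tokens) else None

    def expr():  # expr := term ('+' term)*
        nonlocal pos
        v = term()
        while peek() == "ADD":
            pos += 1
            v += term()
        return v

    def term():  # term := atom ('*' atom)*
        nonlocal pos
        v = atom()
        while peek() == "MUL":
            pos += 1
            v *= atom()
        return v

    def atom():  # atom := NUM | '(' expr ')'
        nonlocal pos
        kind, text = tokens[pos]
        if kind == "NUM":
            pos += 1
            return int(text)
        if kind == "LP":
            pos += 1
            v = expr()
            if peek() != "RP":
                raise SyntaxError("expected ')'")
            pos += 1
            return v
        raise SyntaxError(f"unexpected token {kind}")

    return expr()
```

Here the regexes never see the whole grammar; they only recognize the flat pieces, and the stack-based layer above decides how those pieces nest, e.g. `parse(tokenize("2 * (3 + 4)"))` evaluates to `14`.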

