All these language generation models, in short, predict the next word based solely on the previous words, right? I'd expect that such generators could also be conditioned on some fact (a statement in first-order logic, say) so that they express something I want. That would be roughly the inverse of, for example, Natural Language Understanding.

Does anything like this exist?



I'm fairly sure that these models don't work solely from the previous word, but are able to retain some amount of information from the earlier history.

Otherwise, you'd hit a word like 'and' and couldn't possibly follow it with anything that logically continues the earlier part of the sentence.
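
To illustrate: in a Transformer language model, the next-token distribution is computed from the whole context window, not just the last token. A minimal sketch with the Hugging Face transformers library (GPT-2 is only an illustrative choice):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # The model sees the entire context, not just the last word.
    context = "The cat chased the mouse and"
    ids = tokenizer(context, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # scores for the next token
    print(tokenizer.decode(logits.argmax().item()))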


That's why I said 'words', plural :-).

My point is that these generation models should be conditioned on something more than just word history, such as a fact or intent they are instructed to express.
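
Concretely, the crudest approximation I can think of is prefix conditioning: prepend the fact to the context and let the model continue from it (control-code models like CTRL are built around this idea). A rough sketch, again with transformers; the model choice and the "Fact: ... Statement:" prompt format are just placeholders, not a standard:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Prepend the fact to be expressed; the model conditions on it
    # the same way it conditions on any other history.
    prompt = "Fact: water boils at 100 degrees Celsius. Statement:"
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=30, do_sample=True, top_k=50,
                         pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(out[0], skip_special_tokens=True))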



