In short, all these language generation models predict the next word based solely on the previous words, right?
I'd expect that such generators could be conditioned on, say, some fact (expressed in first-order logic, for instance) so that the output conveys something I want.
This would be roughly the inverse of, for example, Natural Language Understanding.
My point is that these generation models should be conditioned on something more than just the word history: something they want, or are instructed, to express.
Does anything like this exist?
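To make the idea concrete, here's a toy sketch of what I mean (entirely made up, not a real model or library): the next-word distribution depends not only on the word history but also on a supplied fact, which re-weights the candidates so the output stays consistent with it.

```python
def next_word(history, fact):
    """Pick the most probable next word given the history AND a fact.

    `fact` is a hypothetical condition, here just a dict attribute;
    a real system might encode a logical form or a knowledge triple.
    """
    # On word history alone, the model has no preference between candidates.
    base = {"blue": 0.5, "red": 0.5}
    if history[-1] == "is":
        # Conditioning step: boost the word asserted by the fact.
        probs = {w: (0.9 if w == fact["sky_color"] else 0.1) for w in base}
    else:
        probs = base
    return max(probs, key=probs.get)

# The same history yields different continuations under different facts:
print(next_word(["the", "sky", "is"], {"sky_color": "blue"}))  # blue
print(next_word(["the", "sky", "is"], {"sky_color": "red"}))   # red
```

A pure history-based model would have to pick arbitrarily between "blue" and "red"; the conditioned version is steered by the fact it is meant to express.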