I recently tried to use TF to train an RNN and ended up using Keras, a layer on top of TF/Theano that made it much simpler. I also hear Theano's RNN implementation is better at the moment anyway.
One of the things I realized when trying to improve my Keras model is that Keras has a lot of good defaults, so it's easier to get something working quickly and then dig into all the options, rather than having to configure everything from scratch.
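To give a sense of what I mean by good defaults, here's a rough sketch (the task, layer sizes, and vocab size are made up for illustration) — you only name the layers, and Keras fills in sensible initializers, activations, and so on:

```python
from keras.models import Sequential
from keras.layers import Embedding, SimpleRNN, Dense

# A toy binary classifier over integer token sequences.
# Every unspecified knob (weight initializers, RNN activation,
# learning-rate details of rmsprop, etc.) falls back to a default.
model = Sequential([
    Embedding(input_dim=10000, output_dim=32),  # default uniform initializer
    SimpleRNN(32),                              # default tanh activation
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="rmsprop", loss="binary_crossentropy")
```

From there you can override individual defaults one at a time as you tune, instead of specifying everything up front.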
I'm not familiar with Neural Tensor Networks, so I'm not super sure. From the bit of reading I did, it seems like they're related to recursive neural networks somehow.
I couldn't find any implementations of either concept on top of Keras, so I probably wouldn't recommend Keras for this.
Your best bet is probably to find an existing implementation and study it in whatever language/framework it's in.
If recursive neural networks turn out to be closely related, I found a few implementations of those in Theano and elsewhere, and I'd probably suggest starting there.
If they're not closely related, Keras may still be a good option, since the learning curve to get started isn't steep. But you'll probably have to assemble the architecture yourself, and since Keras is an abstraction, a lot will depend on how easily NTNs can be expressed in that abstraction.
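For what it's worth, the core scoring function you'd need to express isn't huge. Here's a rough NumPy sketch of the bilinear tensor layer as I understand it from the NTN paper (Socher et al. 2013) — the function name and argument shapes are my own choices, not from any library:

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Sketch of the NTN scoring function: u^T tanh(e1^T W[1:k] e2 + V[e1;e2] + b).

    e1, e2 : entity vectors of dimension d
    W      : (k, d, d) tensor, one bilinear slice per output unit
    V      : (k, 2d) standard feed-forward weights
    b      : (k,) bias
    u      : (k,) output weights
    """
    # e1^T W[slice] e2 for each of the k slices
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)
    linear = V @ np.concatenate([e1, e2])
    return u @ np.tanh(bilinear + linear + b)
```

In Keras that bilinear term is the awkward part — the stock layers are all of the `V @ x + b` shape, so the tensor product would have to live in a custom layer.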
This is the most straightforward TF example I could find, but it uses Google's skflow library: https://github.com/tensorflow/tensorflow/blob/master/tensorf...