Haha, I checked out Tree Notation in more detail just now, and I actually agree completely with one of its underlying hypotheses: that whitespace is essential for languages.
You take it to the extreme, though, by focusing on one particularly easy-to-parse use of whitespace: the one that directly corresponds to trees where inner nodes can carry data too.
My work is closely related; I was also inspired by the observation that whitespace is essential. I have developed a grammar semantics that is more expressive than both context-free grammars and parsing expression grammars, and that unifies lexing and parsing by treating terminal and nonterminal symbols as two sides of the same coin. It is essentially a generalisation of Earley parsing; some of the puzzle pieces fell into place for me a few months ago while I was writing a grammar for a Markdown-like language. One problem is that it possibly inherits the performance problems of Earley parsing, but I think this can be fixed by keeping the top level(s) in a simple tree notation like yours, which can be parsed in parallel.
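For anyone unfamiliar with Earley parsing, here is a minimal recognizer sketch in Python. To be clear, this is just the classic textbook algorithm, not my generalisation, and it skips empty rules for simplicity:

```python
# Classic Earley recognizer (illustrative only; no empty-rule handling).
# A grammar maps each nonterminal to a list of alternative bodies; any
# symbol not in the grammar is a terminal matched literally against tokens.

def earley_recognize(grammar, start, tokens):
    # An item is (head, body, dot, origin).
    chart = [set() for _ in range(len(tokens) + 1)]
    for body in grammar[start]:
        chart[0].add((start, tuple(body), 0, 0))
    for i in range(len(tokens) + 1):
        changed = True
        while changed:
            changed = False
            for head, body, dot, origin in list(chart[i]):
                if dot < len(body):
                    sym = body[dot]
                    if sym in grammar:  # predict
                        for b in grammar[sym]:
                            new = (sym, tuple(b), 0, i)
                            if new not in chart[i]:
                                chart[i].add(new)
                                changed = True
                    elif i < len(tokens) and tokens[i] == sym:  # scan
                        chart[i + 1].add((head, body, dot + 1, origin))
                else:  # complete
                    for h2, b2, d2, o2 in list(chart[origin]):
                        if d2 < len(b2) and b2[d2] == head:
                            new = (h2, b2, d2 + 1, o2)
                            if new not in chart[i]:
                                chart[i].add(new)
                                changed = True
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[len(tokens)])
```

The worst-case cubic behaviour comes from the complete step scanning back over earlier chart entries, which is exactly the cost I am hoping a parallel-friendly top level can sidestep.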
You can also write grammars that accept any input, so error checking becomes a matter of checking whether certain grammar symbols corresponding to errors appear in the parse tree; the parsing itself will never fail. I don't yet have an automatic way of checking whether a particular grammar has that property, though.
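To make the never-fail idea concrete, here is a toy sketch (not my actual tool, just an illustration): every token the "grammar" doesn't recognize is wrapped in an Error node, so diagnostics become a query over the tree rather than a parse failure:

```python
import re

def tolerant_parse(text):
    # Toy "accept anything" parser: identifiers and numbers are recognized,
    # and any other token is wrapped in an Error node instead of failing.
    children = []
    for tok in re.findall(r"\S+", text):
        if re.fullmatch(r"[A-Za-z_]\w*", tok):
            children.append(("Ident", tok))
        elif re.fullmatch(r"\d+", tok):
            children.append(("Number", tok))
        else:
            children.append(("Error", tok))
    return ("Root", children)

def has_errors(tree):
    # Error checking is just a tree query: did any Error nodes appear?
    _, children = tree
    return any(label == "Error" for label, _ in children)
```

For example, `tolerant_parse("foo 42 $$")` still produces a complete tree, and `has_errors` reports that the `$$` token was unrecognized.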
I also agree with you that trees are really important :-)
I am working on a tool and libraries for working with my grammars, and I've decided to standardise all output as trees in a simple manner, by classifying grammar symbols along two axes: a) flat vs. nested, and b) auxiliary vs. visible. Based on that classification, parse trees are created automatically. This keeps things much simpler for now, and (I hope) allows for more composable and modular grammar design.
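Roughly, the shaping works like this (the names and the node representation here are made up for illustration, not my tool's actual design): auxiliary symbols are dropped from the output, flat symbols become leaves that keep only their matched text, and everything else becomes a nested, visible node:

```python
# Hypothetical symbol classification (not the real tool's configuration).
AUXILIARY = {"ws"}          # hidden from the output tree
FLAT = {"word", "number"}   # leaves: keep matched text, discard structure
# every other symbol is treated as nested and visible

def shape(node):
    # node = (symbol, matched_text, children); returns a list of output
    # nodes so that auxiliary symbols can disappear entirely.
    sym, text, children = node
    if sym in AUXILIARY:
        return []
    if sym in FLAT:
        return [(sym, text)]
    shaped = [out for child in children for out in shape(child)]
    return [(sym, shaped)]
```

So a raw parse node like `("list", "a 1", [word, ws, number])` is shaped into a list node containing just the word and number leaves, with the whitespace gone.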