The search engine gets all the content, at the same URL and IP address, but in a way that's optimized for it. There are no redirects or other cloaking methods involved.
I've tried browsing as 'Googlebot'. The text returned is better for a crawler than nothing (or just JavaScript), but it isn't the sort of functional, dense link structure that most helps site rankings.
Also, the crawler-friendly URLs are different from the URLs the search engines will see reported by users' toolbars or discover on inlinks from other sites. So various link- and traffic-based contributions to rankings are likely to suffer on NOLOH-style sites.
There can be a dense link structure, depending on several factors. For example, NOLOH itself generates a file that keeps track of the possible paths through your application. Once we upload a newer copy of that file to the live server, NOLOH will expose more links to the search engines. Also, I'm not sure I understand what you meant by your last sentence, but the links intended for search engines can be used by users too.
I browsed a few clicks in as 'Googlebot'. Rather than typical website links with many targets and useful anchor text, each page had only one substantive link, with minimal, query-string-like anchor text (such as "section=features").
Meanwhile, when crawlers discover inlinks from other sites that users have copied and pasted, like "http://www.noloh.com/#/section=whoweare", they will see only a link to the root page, since the fragment after '#' is never sent to the server.
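To illustrate the point (a minimal sketch using Python's standard urllib.parse, not anything NOLOH-specific): the part of the URL after '#' is a client-side fragment, so the effective link target a crawler or server sees is just the root page.

```python
from urllib.parse import urlsplit

# A user-copied NOLOH-style URL, with the application state in the fragment.
url = "http://www.noloh.com/#/section=whoweare"
parts = urlsplit(url)

# Browsers never transmit the fragment in the HTTP request, so the
# effective link target is only scheme + host + path:
effective = f"{parts.scheme}://{parts.netloc}{parts.path or '/'}"

print(effective)        # http://www.noloh.com/
print(parts.fragment)   # /section=whoweare -- invisible to the server
```

So two inlinks pointing at entirely different sections of the application both resolve, from a crawler's perspective, to the same root URL.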
Your PageRank is going to be diluted over these arbitrarily different URLs, and traffic analysis via toolbar reports won't boost key target pages as strongly as it would in an application with traditional, stable URLs.