I think this is a valid direction for a large category of problems--many complex applications on the web probably should use almost nothing but JavaScript for the "view" layer in the MVC paradigm, from a maintenance, usability and performance perspective. While I'm a big fan of using the simplest possible solution to any problem, and HTML is the simplest solution to almost every website problem, there's also a large class of problems where HTML templates just don't make sense--they're just an unmaintainable mess.
Applications where the UI is always dynamically generated based on the user, the state of the application and data, etc. are probably far more comfortable using JavaScript generated by code. Obviously others are going this route, as well. GWT uses programmatic generation of UI elements, for example, though it does so by compiling Java down to JavaScript (kinda).
I'm not going to rush right out and start using NOLOH (I don't enjoy working in PHP, and most of my work is on existing apps), but I certainly see value in the idea. And, in fact, most of my work lately has been on converting a huge application to be friendly to using almost nothing but JavaScript for the front-end (it's an installable systems management app, so I don't have to worry about spiders at all, and the UI was never template-based as it is dynamically generated based on all of the criteria mentioned above).
I'm noticing some snarky comments, which seems to be a theme now when people post their creations. I don't think HN is the best place for those. Valid, constructive criticism seems a lot more appropriate here.
Guilty as charged. I apologise to asnyder for my negative tone; to his credit he's done a great job of answering the critics. The marketing of the product touched a nerve, seeing as I'm part of a large movement of developers who embrace HTML and client-side development in general as disciplines that are worth investing significant effort into.
The reddit tone is starting to predominate. There have been about four threads addressing the SEO issues, when reading the existing comments first, voting up, and answering in the thread would have been the appropriate thing to do.
The share of non-technology-related stories has increased tremendously, and when some people point this out as "non-HN", they are answered with the Community card (forgetting that Community and Common are two very close words). PG is non-existent as a vocal moderator who states the rules when it's appropriate to do so (his absence is eminently suspect).
More importantly, a lot of the old users keep their mouths shut now, and some have completely disappeared for obvious reasons. Most of them are probably still reading HN, but the idea of posting a comment has become tiring for them. The battle is already lost, but not for everybody.
You're right. We have something in the works that will make back and forward refresh only the necessary parts, but that's a feature for a future build. Thanks for the compliments!
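One common way to get back/forward to refresh only the necessary parts (my own sketch of the general technique, not the feature NOLOH actually shipped) is to key each view off the URL hash and re-render only when the section actually changes:

```javascript
// Hypothetical sketch: derive the current section from the hash and
// redraw only when it differs from what is already on screen.
let currentSection = null;

function sectionFromHash(hash) {
  const m = /section=([^&]*)/.exec(hash || '');
  return m ? m[1] : 'home'; // assumed default section name
}

function onHashChange(hash, renderFn) {
  const next = sectionFromHash(hash);
  if (next === currentSection) return false; // nothing to refresh
  currentSection = next;
  renderFn(next); // redraw only this section's container
  return true;
}

// Browser wiring (commented out so the helpers stay portable):
// window.addEventListener('hashchange',
//   () => onHashChange(location.hash, renderSection));
```

The browser fires a history entry for each hash change, so back and forward simply replay these partial refreshes instead of reloading the whole page.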
Any chance you could show the PHP source to the doc viewer page http://www.noloh.com/Docs/#/section=apireference ? I'm assuming that's a NOLOH application, which would give everyone a more substantial idea of what it's really like to work with your class hierarchy.
We actually plan to open up the entire noloh.com site and the NOLOH developer zone as soon as we finish documenting and adding features. The current developer zone was thrown together in a few minutes.
Throughout this month you'll see it evolve into what we think will be the easiest and most intuitive developer documentation experience, at which point it will be open for all to see and use.
Constructive comment: I can't middle click links to open them in new tabs.
P.S. You're not getting too much enthusiasm for it here because of the choice of PHP. I can assure you, though, if this takes off, a lot of those "can someone build me a Joomla or Drupal site?" on eLance will rapidly turn into "can someone build me a Noloh app?"
You're right about the tabs, that will be implemented in the next update to the website. We're planning to release Ruby and Python versions of NOLOH at some point.
If the framework does all you promise and starts getting adopted, you could make the language translation an open source project, maybe. That way it will both be quicker to completion and less error-prone (since you're not locking yourself in). In any case, it doesn't seem like turning this into Ruby or Python should be too much of an issue, since they're all OOP languages anyway.
In your application you can specify that it be based on whatever units you prefer. We chose px as the default and use it in most places, but you could easily change the base unit to whatever you wish.
A completely SEO-friendly version of the application is rendered to search engines. NOLOH applications know when a search engine is visiting and when it's a normal user. This is all described in detail in the features section.
The search engine gets all the content, at the same url and ip address, but in a way that's optimized for it. There's no redirects, or other cloaking methods involved.
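The mechanism described above can be sketched as simple server-side content negotiation on the User-Agent header. This is my own illustration of the general idea, not NOLOH's actual code, and the bot signatures and markup are assumptions:

```javascript
// Hypothetical same-URL content negotiation: one URL, one IP, no
// redirect -- only the wrapping around the content differs.
const BOT_SIGNATURES = ['googlebot', 'bingbot', 'slurp', 'duckduckbot'];

function isSearchEngine(userAgent) {
  const ua = (userAgent || '').toLowerCase();
  return BOT_SIGNATURES.some((sig) => ua.includes(sig));
}

function render(userAgent, content) {
  if (isSearchEngine(userAgent)) {
    // Crawlers get plain content: text, images, links -- no scripts.
    return `<html><body>${content}</body></html>`;
  }
  // Normal users get the script-driven version of the same content.
  return `<html><body><script src="/app.js"></script>` +
         `<noscript>${content}</noscript></body></html>`;
}
```

The key property is that `content` is identical in both branches, which is the argument made in this thread for why this is not deceptive cloaking.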
I've tried browsing as 'Googlebot'. The text returned is better for a crawler than nothing (or just Javascript), but isn't the sort of functional and dense link-structure that most helps site rankings.
Also, the crawler-friendly URLs are different from the URLs the search engines will see reported by users' toolbars or discover on inlinks from other sites. So various link- and traffic- based contributions to rankings are likely to suffer on Noloh-style sites.
There can be a dense link structure, depending on several factors. For example, NOLOH itself generates a file that keeps track of possible paths through your application. Once we upload a newer copy of that file to the live server, NOLOH will expose more links to the search engines. Also, I'm not sure I understand what you meant by your last sentence, but links for search engines can be used by users too.
I browsed a few clicks in as 'Googlebot'. Rather than typical website links with many targets, and useful anchor-text, each page had only one substantive link, with minimal query-string-like anchor-text (like "section=features").
Meanwhile, if crawlers discover inlinks from other sites that users have copied and pasted, like "http://www.noloh.com/#/section=whoweare", a crawler will only see this as a link to the root page.
Your pagerank is going to be diluted over these arbitrarily different URLs, and traffic analysis via toolbar reports is not going to boost key target pages as strongly as in an application with traditional stable URLs.
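The root-page collapse described above follows from how URLs work: everything after `#` is a fragment, which browsers and crawlers never send to the server, so every such link resolves to the same HTTP request. A quick demonstration with the standard `URL` class:

```javascript
// A pasted NOLOH-style link, split into what the server sees versus
// what only client-side script can see.
const shared = new URL('http://www.noloh.com/#/section=whoweare');

console.log(shared.pathname); // "/" -- the only part a crawler requests
console.log(shared.hash);     // "#/section=whoweare" -- never transmitted
```

So any inlink using a fragment URL credits the root page, regardless of which section the user meant to share.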
Yeah, it should. Also, all navigational elements could've also been made links. We chose not to, but might change that in the future. NOLOH as a platform allows you to create whatever elements you wish and have them do whatever you like. So if you wanted the navigation tabs to be links, you could easily instantiate them as such.
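Making the navigation tabs real links while keeping in-app behavior is a standard progressive-enhancement pattern. A minimal sketch (my own illustration, not NOLOH's API; `loadSection` is a hypothetical in-app loader):

```javascript
// Tabs are real <a href> elements, so middle-click and "open in new
// tab" work natively; only an unmodified left click is intercepted.
function shouldIntercept(evt) {
  return evt.button === 0 && !evt.ctrlKey && !evt.metaKey && !evt.shiftKey;
}

// Browser-only wiring, guarded so the helper above stays portable.
if (typeof document !== 'undefined') {
  document.querySelectorAll('a.nav-tab').forEach((a) => {
    a.addEventListener('click', (evt) => {
      if (!shouldIntercept(evt)) return; // let the browser handle it
      evt.preventDefault();
      loadSection(a.getAttribute('href')); // hypothetical loader
    });
  });
}
```

Middle clicks and ctrl/cmd/shift clicks fall through to the browser, which fixes the "can't middle-click links" complaint without giving up scripted navigation.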
I have no real idea how search engines work when they index a page, but I would assume they do not execute any of the Javascript, just grab the text based file from the server and analyze it. (does anyone have any insight into this?)
So I was pretty excited to try it until I realised that, if that is the case, this is probably the worst-SEOed site I have ever seen; it literally has no SEO.
It's a good idea, but personally, after a site's usability, its being nicely SEOed is really, really important.
NOLOH generates only the content (text, images) for search engines so they won't get any JavaScript or anything extraneous like the styles for their text.
It looks interesting, but what's the deal with Bubble2.0-ish "Register for the Beta program"? Why not just let anyone download the source and install it, assuming that's the way people will eventually use this?
You don't need to commit to any type of support, you'll probably get a lot of useful feedback and, if it's good, some free publicity too.
We're currently doing a beta program hosted on our server so that we can provide support to the users, as well as make sure they always have the latest version of the NOLOH Kernel for their applications. This will help us communicate with the sandbox users, and get quality feedback from them. This is especially necessary because our documentation is far from perfect, so while we're actively working to create more docs, examples, and videos, we can still get valuable insight and feedback from a small group of serious developers.
Eventually is this going to be some sort of hosted service or a framework users download, like Rails or Zend Framework?
If it's the former, please ignore my comments :) If the latter, I for one would be playing around with the code right now if there was a download link... A few examples in a README file would do fine for now. Release early and often, as they say ;)
It would be hard for users to do significant work with NOLOH when the code is hosted on your own server and updated automatically, possibly breaking code they added.
We plan to offer free hosted, free download, as well as commercial hosting and download options. Currently NOLOH is being used in several major companies (ex. CIG) for their website in addition to their internal applications.
The Beta Program is setup to allow us to work closely with a small group of developers so that we can help them develop their applications in NOLOH. If any updates during the beta program break an application we will work with the developer at our cost to fix the problems.
We have lots of documentation, articles, videos, etc., to put together before we can offer it to non-commercial users, which is what we hope the beta program will help us accomplish.
BTW the register form is pretty discouraging... Why not let users sign up with email and password and fill up their info/settings later on, as they go along?
If you have JS off you should be directed to an error page. Could you please elaborate on "With JS on, there is nothing displayed"? Viewing source on a NOLOH application will not give you anything of interest.
The very definition of cloaking is serving different pages to bots than to users. So the 'optimized' stuff is essentially cloaking... What prevents googlebot from using a different header to check if the site is cloaked?
Btw, i prefer haXe (haxe.org) as a true unified web language ;-)
Yes, cloaking is giving a user and a search engine different pages, but the purpose of cloaking is to deceive search engines, and that's why it's evil. With NOLOH, search engines aren't deceived; they are simply given the content in a way that they can understand, since they do not understand JavaScript. They're still given the same content (same text, images, etc...) but they're not told to modify the DOM since they can't. So it is a helping hand to search engines, not a deception in any sense. Why would googlebot use a different header to pretend that it is a user in order to receive a broken page? That would not benefit any party.
haXe is only unified in the sense that it has support for JS, Flash, and a few other languages. But going through their tutorials (for example, http://www.haxe.org/doc/js/ajax) you still see that you have to write mark-up. So all of the traditional problems that NOLOH is designed to address still persist in haXe. Mark-up is static, error-prone, not intuitive for application development, interpreted by browsers differently, and the list goes on and on...
> Why would googlebot use a different header to pretend that it is a user in order to receive a broken page? That would not benefit any party.
Well... it would benefit Google (and its users), because they could detect and punish cloaking. How would Google detect cloaking? A clerk examining the two served versions, or automagically comparing the two versions of served content? How smart would their comparing algorithm be? Could it determine that your site was not being deceptive? How would you write such an algorithm (and it better be generic, and difficult to exploit)?
Basically, I have no idea what the hell Google does and does not do, but I am scared (and ignorant) enough to always serve search engines exactly what I serve users.
I once built an app similar to this, where all the dom elements were built by the script. Good concept, but I thought the search engines handed out penalties for displaying different facets to robots vs users?
It's the same content that is sent to spiders and users, but the robots just get it optimized and without anything extra like JavaScript or styling elements. They get text, images, and links, which is all they want anyway.
With JS on in FF3, I see the front page... but nothing that I click brings up anything else... nor do I have the right-click open-in-tab option I prefer.
Still no luck for me -- even quit and restarted, and cleared cache. Oddly, though, it's working on a FF3rc1 install I have on a Ubuntu machine.
Tried viewing traffic in Live Headers and script in Firebug to see what was going wrong; attempting a direct fetch of one JS URL came back with a script that was just a comment: "/~NScript~/".
We're working on a cookieless version. Over the past 4 years it wasn't an issue with any of our clients, as it's expected for a user to have cookies enabled. However, we are aware of this issue and it will be addressed in a future release.
Anyway, looks pretty neat.