I just ran through the tutorial and it was a real breath of fresh air: everything ran out of the box, there was no need for an IDE or a build process, and changes were live the second I saved them in my editor (it apparently ships with the on-the-fly compiler from Eclipse). Really impressed.
Agreed. I still think it's a little cumbersome compared to Rails, but it's definitely the best Java web framework I've seen yet, and about as close to Rails as you can probably get with Java. Groovy for the template language is a good choice too.
The example app was very useful. Including some real-world jQuery Ajax functionality was a big bonus and a great way to compare functionality with other frameworks (render methods and routing specifically).
"The Play framework makes it easier to build Web applications with Java"
Maybe you should not be building web applications with Java or C# or <insert language of choice>.
Maybe you should be building web services with these languages and using Ajax, a client-side toolkit, JavaScript, HTML and CSS to build a UI that communicates with said services.
Maybe this decouples your UI implementation from your business logic implementation so you can go mobile, IVR, Mind Jacking, without having to build the back end again.
Maybe server-side templating was wrong and a CMS is your best choice for static templating, given that assembling templates at run time, as opposed to build time, seems to be a waste of resources. A CMS can deliver already-templated, flat HTML files to the web server at build time, and Ajax is your best bet for delivering dynamic information.
It sure separates your concerns quite nicely, while allowing each developer to work in their domain of expertise.
In my experience, breaking everything down into a bunch of uncoupled services, à la SOA, adds a significant amount of performance overhead from making all those web service calls to build up a semi-complex page. Performing all the logic within a single process/JVM is MUCH more efficient.
And that's with LAN-based WS calls. I can't imagine doing that over the internet. If a page has 20 independent dynamic blocks, do you really want to make 20 HTTP request/response calls over a 60+ ms latency connection? Do you really want your server cluster to have to handle 20 connections instead of one?
Maybe for a simple and/or low traffic site, but for a moderately complex website/webapp, you're better off serving out a single page.
No one said that they have to be micro services. One should provide composite services that provide all of the data for a workflow, then use a pub-sub, event-based model to notify the UI of available data. This way components stay loosely coupled and are only listening for an event that injects the relevant data they need. This results in a single call to a composite service, yet the micro services remain available should a different UI need a more microscopic flow or data load (for example, limited memory on a mobile device).
Another common mistake I see is services calling services to build a composite service. If at all possible, services should be the facade into the system, with the logic wrapped up in a jar, DLL, etc. All services, if possible, should rely on said library and not on loosely coupled services for composition; intra-system communication is different from disparate-system communication.
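A minimal sketch of that pub-sub model. The topic names, components and payload shape here are hypothetical; a real app would use an established event library (e.g. Dojo's topic system) rather than a hand-rolled bus:

```javascript
// Minimal pub-sub event bus: loosely coupled UI components subscribe
// to topics, and one composite-service response is fanned out as events.
const bus = {
  topics: {},
  subscribe(topic, handler) {
    (this.topics[topic] = this.topics[topic] || []).push(handler);
  },
  publish(topic, data) {
    (this.topics[topic] || []).forEach((h) => h(data));
  },
};

// Stand-ins for real rendering code (names are made up for the example).
const rendered = [];
function renderHeader(name) { rendered.push("header:" + name); }
function renderOrderList(orders) { rendered.push("orders:" + orders.length); }

// Each component only listens for the event that carries its data.
bus.subscribe("customer.loaded", (c) => renderHeader(c.name));
bus.subscribe("orders.loaded", (o) => renderOrderList(o));

// A single call to the composite service; its payload is published
// as separate events, so the components never know about each other.
function onCompositeResponse(payload) {
  bus.publish("customer.loaded", payload.customer);
  bus.publish("orders.loaded", payload.orders);
}
```

The point of the indirection is that a mobile UI could subscribe to only one of the topics, or call a finer-grained micro service directly, without the components changing.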
Building/maintaining that sounds way more complex than writing a JSP.
In my experience, if you architect your server-side components well, if/when you want to expose something as a web service you can prop up a web service that's basically just a thin wrapper around the relevant backend component(s). It avoids the "premature optimization" of building everything as a bunch of micro and aggregate services that you may or may not actually make use of in the future.
For building a complex web page for a normal browser (assuming you're using Java on the backend), a JSP built using various local component objects is almost certainly the fastest-performing solution, both in server-side performance and in client-side perceived performance.
For building other interfaces (iPhone app, dashboard widget, etc...), until you build them you're unlikely to know exactly what calls will make the most sense for them. When you do, just build those calls as wrappers to your existing components. Very fast to do. I've done it many times against Seam and ATG eCommerce applications.
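The thin-wrapper idea looks roughly like this; the component, its data and the handler name are all hypothetical, and in a Java stack the wrapper would be a servlet or JAX-RS resource rather than a plain function:

```javascript
// An existing backend component, normally used in-process by the
// page-rendering flow (stubbed here instead of hitting a DB/cache).
const catalogComponent = {
  getProduct(id) {
    return { id: id, name: "Widget", price: 9.99 };
  },
};

// Thin web-service wrapper: no business logic of its own, it just
// translates request input/output to calls on the existing component.
function handleGetProduct(query) {
  const product = catalogComponent.getProduct(query.id);
  return { status: 200, body: JSON.stringify(product) };
}
```

Because the wrapper holds no logic, exposing a new call for a new client is mostly plumbing.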
If you are a one-man show, maybe, but if you are a team it is far more maintainable, because each concern is clearly delineated. JSP is not easier for an HTML designer, nor is ASP or PHP; each is an amalgamation of languages composited into a document. It is only easier for a code developer, and that is the crux. Usually a company starts out as a one-man shop with a code developer and then graduates to a team. Having to teach designers JSP is a waste of time and resources. The entrance barrier to web development is so high because of this bastardization of development domains. It is only easier because you are reasoning from your viewpoint. To a new developer just learning HTML, the list of technologies is insurmountable, all because we coupled back-end technologies with the front end.
I typically work with largish teams on very large websites (cingular.com, jcrew.com, ulta.com, pcavote.com, etc...).
With good frameworks most JSPs are very close to normal HTML, and we usually have: designers/UX folks who provide PS docs/images/etc., front-end devs (JSP/JS/Ajax impl folks) who take those and make real working JSP pages out of them, and back-end devs who write the Java backing components.
You seem to be expecting these new developers who are just learning HTML to know how to build complex pages using aggregate web service calls via JS, based on a non-standard in-house system.
Even if you don't have separate designers and front-end devs, and I've worked at several places like that, helping HTML designers figure out JSP using a good framework (ATG, JSF) really isn't that bad, it's just a few new tags which are really well documented all over the web.
While I am a Java dev/arch, my "view point" is typically that of a lead architect, team lead, or project lead, so making sure people can get stuff done, regardless of what role they're in is key.
Do you have any live examples of websites built using your proposed method?
"You seem to be expecting these new developers who are just learning HTML to know how to build complex pages using aggregate web service calls via JS, based on a non-standard in-house system."
Sorry, I missed hitting on this one. I do not expect a designer to be doing service calls; that is the point. You write a JavaScript controller that grabs references to any UI components it needs (you do not write JavaScript in the HTML), so the designer works in pure HTML and CSS. The JavaScript developer is the one orchestrating the workflow logic in a code-behind manner. Each person has their domain, and once a developer feels comfortable they can graduate to the next domain. Just as I don't expect an HTML designer to know Java constructs, I don't expect an Objective-C developer to know HTML constructs. It's not about constraining people; it's about reusing reusable components and isolating non-reusable ones.
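A sketch of that code-behind split, with hypothetical element ids and a hypothetical customerService; `byId` stands in for document.getElementById so the example runs outside a browser:

```javascript
// Lives in its own .js file. The designer's HTML contains only markup
// with ids/classes; all behavior is wired up here, code-behind style.
function CustomerController(byId, customerService) {
  const nameField = byId("customer-name");
  const saveButton = byId("save-button");

  // Workflow logic stays in the controller, never in the HTML.
  saveButton.onclick = () => {
    customerService.addCustomer({ name: nameField.value });
  };
}
```

In a browser you would call it as `CustomerController((id) => document.getElementById(id), realService)` once the page loads; the designer never sees that file.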
You are not taking into account distributed development teams, or completely outsourced development of certain portions of the overall solution, in which you have no control over the skill level of the developers.
As well, I am not proposing anything more non-standard than, say, JSP or Struts and Tiles. There are plenty of Ajax frameworks that provide all of the bolts to erect this style of system without building an in-house one. Further, JSP is only standard for web development; it does not take into account the variety of UI possibilities, and that is the point of the matter. The UI, much like the database, is only a portion of an overall system. Just as you would not put your data storage in your logic tier, you should not be coupling the UI tier to your business logic.
There are plenty of sites that use this development methodology. Pick almost any mature "Web 2.0" site and they will be somewhere along in this evolution: Google, Amazon, Yahoo, et al.
I apologize in advance for being vague, but I would prefer not to lose my anonymity on this site, and referencing my work would do so.
"building everything as a bunch of micro and aggregate services that you may or may not actually make use of in the future"
I wanted to touch on this real quick. The point of letting the client deal with its concern is that you can prototype and user-acceptance-test your application before you ever write a service. Then you only have to build what you need to support your UI and whatever batch processing you need. It is a far more agile process. Not only do you have a working UI, but you have a service inventory of functionality that needs to be implemented to support your UI. From there you can build out your service tier and, if you so choose, look at the next likely UI and do some due diligence to see if you are going to need more micro services to support your next endeavor. You may not choose to build them, but you can at least architect the solution with the next endeavor in mind.
YES! You seem to have gotten a lot of disagreement here, but I know exactly what you're talking about. Someone suggested that this would be bad for scalability, but in fact I was pushed in this direction precisely for scalability purposes. Consider this:
Say you have a page that shows basically the same content for everybody, but has a handful of user-specific elements in there. In a typical MVC setup you'll have a controller as the gatekeeper with overall knowledge of the request; it will assemble the data needed for the request and pass it to the view. The view may be one template, or it may be comprised of many, including a layout and several sub-templates. In order to fetch all the content efficiently, you want to know everything up front, so there is no n+1 problem or similar. This works pretty well for the gross content of the page, but not nearly so well for the minutiae. Whether a specific piece of data is needed is often not reasonably known until deep inside a nested template.
So you have a conundrum: do you make assumptions about, or replicate, the view logic in your controller in order to efficiently load everything? Or do you suffer the performance penalties of looking things up on an ad-hoc basis within the templates at the point they're needed?
The answer is simple: you leverage the client. The controller loads the gross content, and the templates put placeholder values for all the other data (classes and maybe some custom attributes). Then you have a JavaScript library that scrapes up the references to the needed data and requests it directly from the server. This effectively converts your page into a static page that can be served to everybody, cached to your heart's content, etc., and it offloads a good chunk of your processing to client machines, which of course by definition scale in constant time.
I've done this in Rails using resources and JSON, and the performance and scalability benefits are amazing. You just need to know how to write good JavaScript and be aware of the security concerns. The only thing that's needed to make this uber-efficient in Rails is some kind of request bundling, so that if you have more than 2 or 3 resource types needed you don't have to send a separate request for each one.
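One way that request bundling might look on the client; the transport is stubbed, and a real implementation would POST the batch to a single hypothetical endpoint like /batch:

```javascript
// Collect resource requests made in the same tick and send them
// as one batch call instead of one HTTP request per resource.
function makeBatcher(transport) {
  let queue = [];
  let scheduled = false;
  return function request(resource, onData) {
    queue.push({ resource, onData });
    if (!scheduled) {
      scheduled = true;
      // setTimeout(…, 0) lets every request in this tick accumulate
      // before the batch is flushed.
      setTimeout(() => {
        const batch = queue;
        queue = [];
        scheduled = false;
        transport(batch.map((r) => r.resource), (results) => {
          batch.forEach((r, i) => r.onData(results[i]));
        });
      }, 0);
    }
  };
}
```

Callers just ask for the resources they need; the batcher guarantees at most one wire call per tick regardless of how many page blocks made requests.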
To all the doubters: you need to open your mind about this. I'm guessing most of you just don't like JavaScript, to which I simply direct you to Douglas Crockford.
"YES! You seem to have gotten a lot of disagreement here"
Yes, and I am somewhat regretting stirring this up, but I understand it; I was on that side of the fence for a long time, until I was dragged over to the other side.
The other posters are being very pragmatic, and that is a good trait to have. The problem is that we are creatures of habit, and when we find something that works, no matter how inefficient, we are hesitant to discard it.
I will tell you that when rich internet applications first came into popular use, I was one of the biggest detractors, for the exact reasons that I am defending their use today.
My initial reaction was that we had just added more convolution to the mix of technology soup that ASP, JSP and PHP have become. Now, not only does a developer need to know a server model, but they have to understand a client model.
I thought to myself, this is the worst of both worlds. It was not until someone more visionary than I explained my wrong-headed thinking to me: you have to abandon the server model for the picture of rich internet applications to become clear.
In the end, we have had years of indoctrination into a certain model, and it is hard to reason outside of that known reality. I was one of the most guilty of doing this, given that I have been doing this since the beginning.
"To all the doubters, you need to open your mind about this. I'm guessing most of you just don't like JavaScript"
This is a huge concern that I see time and time again, but what is funny is that this liberates the technology selection for so many other systems. No longer do I have to worry about trade-offs like "Rails is faster to market but Java has a bigger library." With this form of architecture those decisions are decoupled. Services can be implemented in whatever language best fits the problem domain. You can use one or you can use many; it is a choice for each development house. Further, you can choose to have third parties provide whole sections of your application. This works very well at Google: some systems are implemented in Python, some in Java, and yet others in that team's language of choice.
As for JavaScript, as of now this is a limiting reality. Many of the browser vendors are working on alternative languages, and there are converters that will allow you to write Ruby or Python or whatever and have it converted into JavaScript. But the reality is, yes, web client programming is done in JavaScript. For me, it is not a big deal, as I find that UI development is best supported by a rapid prototyping language, and JavaScript fits the bill. I am continually amazed at the flexibility of the language. Conversely, I prefer strongly typed languages for services.
I liked it when we figured out how to POST to a C application, and then Perl came along and ruined the party. I had to learn a new language just to keep doing CGI. After that, no one but Yahoo would hire a CGI C developer.
> A CMS can deliver already templated, flat HTML files, to the web server during build time.
This is actually how Jekyll works, which is what I use for my blog. It's pretty awesome, actually. My pages load basically instantly, and Disqus plus some other Ajaxy stuff works for what absolutely has to be dynamic.
Neither is an iPhone app, but I can guarantee you that most of the logic I use for my "Web Application" will be reused by my iPhone application, with the exception of my templating, workflow and UI-specific logic. So it raises the question: why am I implementing my templating, workflow and UI-specific logic in my "Web Application" (it's not reusable)? Unless it really is just a web UI to my enterprise application, in which case my UI should be worried about its own concern. I see so many new companies fall into this trap and then have to do major rewrites when they realize they have boxed themselves into a "Web Application".
I've seen companies fall into the other extreme ... making the interface "totally" decoupled, so that they could replace the UI anytime they wanted. That's why monstrosities like XSLT appeared.
But I haven't seen any example of a good application yet that does this successfully, because whether you like it or not, the web UI will require backend logic that your iPhone app does not, and vice-versa. And you'll either create a common denominator, or two different branches.
Also, if your iPhone app is so similar, maybe it shouldn't be an app in the first place, but an iPhone-optimized website. It's Apple's fault if they aren't making the integration between websites and the iPhone easier.
Don't get me wrong, web services have their place, but make no mistake ... a website that assembles the page from web services is a lot trickier to scale (assuming you're the one providing those services).
No, XSLT was an attempt to make the UI generic. Either you accept that UIs are custom-tailored to be optimized for the platform, or you accept the dumb-terminal philosophy, where you try to generalize the UI to adapt to all platforms.
The fact remains that human preference is for tailored UIs, and not until Ajax took hold was the desktop's dominance as a platform threatened. Time and time again, tailored UIs are preferred. It is the reason for Apple's return to stardom as well as the rise of Web 2.0.
Developers fight this mentality so they can keep working in their boxed solution, and I understand it, but you have to ignore clear evidence to the contrary to believe that users prefer applications developed in that manner.
So if that is the case, why not design an architecture that accounts for this reality and makes the jobs easier for the people whose responsibility it is to assist the user, instead of enforcing a developer's domain on them?
As well, I do not agree that different UIs require different back-end logic; they require different workflow logic, and workflow is not reusable. If they require different back-end logic for, say, getCustomer, addCustomer or validateAddress, then there is a core architectural flaw in the solution. How a customer is then displayed is a problem for the UI, e.g. do I show all info on one screen, or do I show pivots to make room for more screen real estate?
It is not Apple's fault; that is faulty logic. Apple is giving their users what they want, and their last quarter clearly reflects that.
Web sites that rely on web services are not tricky to scale; look at the Amazon document I referenced. It is essential to scaling. Google, Amazon, et al. are doing this precisely for that reason: scalability, and not being locked into back-end or front-end technologies. Not to mention you are distributing all workflow and dynamic templating to the clients, which in practice has a large impact on resource utilization.
I know that they are using lots of web services, but the assembly is still mostly done on their servers (not in the browser), and you're talking about something else (or have I understood you wrong?).
I've seen a talk from an Amazon fellow. They are using web services for parallel processing ... their frontend servers are sending asynchronous requests to those services, which is a huge win.
But in a browser, for an Ajax frontend that happens to periodically request updates, one thing that happens is that the number of requests made from a single client is a lot larger. Surely it doesn't reload all the static files, but those are non-issues even for a reasonably high-traffic website ... just put something like Varnish in front to serve them and make sure they have expiration headers set in the future.
My company did consultancy work on scalability for http://www.erepublik.com/ and after we placed Varnish in front (it also functions as a load balancer for the Apache servers) it could handle thousands of concurrent requests for static files. And their biggest problem was definitely not the load on their Apache servers (it was MySQL).
Different apps from these companies are further along in their utilization of a decoupled architecture than others, but almost all are working to this end. Many of Google's "applications" completely rely on this approach. I cannot comment on their mainstay (search), but most Google apps are assembled in the browser (Gmail, Docs, Maps, Wave); it's core to their strategy, hence entering the browser market and the rapid push for HTML5. As for Amazon, most ancillary portions of their offerings use a decoupled strategy. The last understanding I had was that they were waiting for a concrete SEO solution before moving their main asset, .com, over. These are large companies with legacy code bases, so it is natural that this architecture emerges in their new product offerings and acquired products first and migrates its way back into the legacy code. In another post I provide referential links on the subject.
OK, you have a point, but I always was under the impression that Google is using Ajax in Gmail, Docs and Maps for user-experience, not for scalability.
I think there are two important points here. First, that is at best a beta level proposal, and I wouldn't bet the well being of my family on it. I'll be very happy when it stabilizes, but I'd stick to outputting plain html for the engines.
Second, this sounds far more complicated than simply outputting html from your web app. This thread is about decoupling UI from the backend for a clean design. Using this approach is far less clean than outputting the html directly.
If you are building a dynamic web client correctly, you should already be reflecting a distinct UI state in a URL fragment. Having a web proxy run on a separate box from your production server and routing requests from a search bot is not that difficult. There is no custom logic involved; you are just proxying through a headless browser. It's not beta; it is just that Google is trying to standardize the practice. Where you get in trouble is if your proxy gives the bot something different than what the end user gets. It really is as simple as: if a request comes from a bot agent, service it from the proxy box; it's a router rule.
Not if you are supporting multiple clients and want to separate out your concerns. The point is that the UI should be decoupled from your core application logic. The advantages that brings negate the few hours one will spend setting up an out-of-the-box server and proxy to support SEO. There are many web apps that don't even have SEO in their problem domain.
No, I am arguing that setting up a route based on an agent does not require custom logic and therefore removes the problem from being a development issue.
It would be just as valid to argue that a firewalled web environment is more complicated than one that is not firewalled; while true, that ignores a host of realities, and it is accepted that the advantages of security outweigh the complexity of a firewall.
What I am saying is that, much like a firewall, it is a task of configuration and not custom development, and therefore, given its ease of implementation, the advantages outweigh the marginal complexity (I am arguing that it is marginal) that it brings.
As well, I am arguing that, much like a firewall, ejecting this problem domain out of the application creates a simpler solution. You can implement access security in your application, but it is commonly accepted that rolling your own is a bad idea. The advantages of dynamic user applications are clear, and I am arguing that those advantages clearly outweigh the cost of this simple solution, if SEO is even in the problem domain of your application; it is not for some.
Odd; I've done numerous startups that have succeeded due to SEO, i.e. 80+% of sales were from organic search traffic, 100k+ unique visitors a day from organic search traffic, etc.
Please, leave this fantasy to me then, because I like real money :)
The "way we've always done it" in my brain does not like the thought of putting all the client stuff in, well, the client. I'd like to hear more about this style of development. Do you have examples of this being done, postmortems, blogs espousing this, more info, etc?
http://highscalability.com/amazon-architecture Being a large company, Amazon is in different stages of a transition to this form of architecture, but for the most part all of the large companies are moving towards it: IBM is embracing it as a UI development philosophy, Google pioneered it, and Yahoo is well on its way.
The majority prefer jQuery. I will try not to impose my bias on you, but to answer your question as best I can: I prefer Dojo. Out of all of them, I would say try both of these and see which one is for you. Given my preference for a decoupled UI, Dojo provides the complete toolbox, but jQuery is far easier to learn. There is an elegance to both, so I urge you to do the due diligence and spend at least a week working with each before you make your decision.
A difference is that the routes, model, controller and template/renderer are in separate directories. It's a bit awkward for the simple project of the one in the video, but essential for more complicated projects.
It even includes a captcha generator, and their examples seem to be of real (though very simple) applications - as opposed to toy academic ones. I get the feeling that its developers are very familiar with the real needs and problems of actual web development.
It almost makes me want to get into web development! The only thing is that combining HTML, Groovy templating, JavaScript and jQuery looks a little complex - but they're probably very straightforward for common tasks.
Well, at my day job, not having to throw out 10 years worth of legacy code would be considered "a bit of a plus". We also have 1,200 employees and precisely two engineers who have ever done a Rails application more complicated than the 15-minute-blog demo. I think we have one guy who coded a Python app once.
Our favorite contractors and outsourcing providers also have (comparatively speaking) deep skills in Java and the related stacks. Some of them have forgotten more than I ever want to know about, e.g., serializing Java beans.
If you pick a JVM language as suggested, why would you need to throw out any code?
When I worked at ITA Software which writes much of their stuff in Lisp (not the web stuff though), there was often worry that it meant new programmers would have to learn the language and so wouldn't be productive at first. But learning the code base takes a lot longer than learning the language, at least enough of the language to get significant stuff done.
If you have 10 years worth of legacy code written by 12,000 employees, I assume you're in the same boat.
Java itself is extremely mature, on a well-documented and expansive platform. Fusing the rapid development aspect of languages like Ruby or PHP with the maturity of Java is a pure win for web developers.
Java IDEs are mature and make it easy to refactor your application. (I realize that part of the necessity of this stems from the Java language itself. The tools are still good.) This advantage may be temporary if IDEs for other languages become highly sophisticated.
If you have developers that know Java and nothing else, which is not all that uncommon, it's really your only option. And since any developer who can't or won't learn additional languages is probably not very good, using a language like Java provides the fluff and bloat necessary to keep these untalented devs from screwing too many things up. Java's primary advantage in general is interchangeability of engineers.
If you have developers who know other JVM languages, or who can learn, then I would say Java has no advantage, because any language on the JVM can refer to objects built in any other. This is why things like Jython, JRuby, Rhino, and Clojure are cool.
http://m.mlb.com/ uses a hybrid Freemarker / Groovy / Java framework called MVC+GF. During the 2009 season it was able to scale without issues during peak hours (9th innings, etc). That's hundreds of requests a second per server - pretty cool.
Groovy is used because it has closures (Java doesn't) and is simpler and easier than Scala.
Side question: is there a CPAN or Gems repository for similar open-source Java libraries? I know that Apache has lots of Java libs, but beyond that, I have no idea where to look.
From what I understand, Maven is a build dependency manager, but not a package manager the way CPAN is. I'm looking for a central place to find packages that do everything. "ohh you need to interact with AWS, here's the package" type stuff.
Heh - I'm working on building such a monster now. :)
Until mine is ready, you can try mvnrepository.com. The reason that I'm building my own is because mvnrepository.com is often down, slow, and is not very easy to use. But it's there until I'm finished with mine!
There's a bunch of maven repositories (usually googling '+"library name" +"maven" +"pom.xml" works), but I haven't yet found anything like CPAN's search website.
mvn:create can bring up an interface that's very similar to CPAN's shell. What needs to be done is a shell that allows functionality such as "search /amazon/" or "search guice, jersey" or "info google-collections". Scala would be a great choice for making a sort of "REPL" shell :-)
> "Play is designed to operate in a ‘share nothing’ architecture. The idea is to keep the application completely stateless. By doing this you will allow your application to run on as many server nodes as needed at the same time.
> What are the common traps you should avoid to keep the model stateless? Do not store any object on the Java heap for multiple requests"
If that's all they're talking about when they say "stateless", that's no different than a lot of other "stateless" MVC frameworks like Pylons, Django, Rails, etc.
I haven't done any real dev with the frameworks you list, so I'm wondering: how do you manage session state? User profile, shopping cart, breadcrumbs, multi-request flows, etc...
Do you write/read all that into the db/memcached for each request?
Yes. PHP does the same thing. Actually, most frameworks do that; very few keep state within the application server between requests. Statelessness, in that respect, improves scalability.
Yes, but most people who make serious use of them hook up a custom session handler that writes session information out to the database (the default implementation that ships with PHP uses the file system). Sessions that are stored in memory by the application server instantly prevent you from load balancing requests across multiple servers and hence kill your scalability - unless you implement sticky sessions which adds yet more complexity. Personally I try to avoid using session storage whenever possible - I've recently started using signed cookies in their place.
Interesting. I mostly build J2EE apps and sticky sessions support is built in. Complex apps require a fair bit of session state (user profile, shopping cart, custom catalog, breadcrumbs, user promos, etc...) in general and the idea of having to persist and reload that data over a network (db or memcached) for each request would absolutely kill page response times. I guess it depends on how heavy the session data is for your app.
jshen: sticky sessions make it a non-issue. Sites I work on run on 3-100 server clusters with each server handling a few thousand active sessions.
I know memcached is very fast. However, it's at least an order of magnitude, or more, slower than in-memory lookups. Given how simple sticky sessions are, I'm not sure why you'd want to incur the overhead of re-building your state via external network calls and object serialization/marshalling/etc. for every request, and doing it all in reverse at the end of every request.
I'm quite surprised - I've never even considered building a stateful web app that doesn't talk to a database or caching layer at all for most requests. It's fascinating to me how different the approaches taken in the Java world are from the LAMP world I'm accustomed to.
From the Java world, that seems odd :) Scaling your DB and/or caching layer is a lot harder than scaling sticky-session-based app instances with minimal db interaction.
There's also the option to pack the entire solution into a WAR for deployment within a web app container. Though the lightweight server is a cool approach.
I'd like to see the Play server support fcgi rather than being deployed behind a web server acting as a proxy for load-balanced deployment.
Nice collection of web tools, especially the ready-made libraries for validation, captcha, etc., and a built-in template engine.
But I am not convinced by 'on the fly' compilation, because most editors already do a better job there, and refreshing the web page to look for compilation errors does not seem productive.
Hmm, too much code in the template library, too much magic in general. I do like the general packaging, the automatic compilation and the snazzy error and testing interfaces. Seems a bit hard to change from their Groovy template system to something like StringTemplate, though.
Although I think that the built-in template system is very, very nice, it would be really easy to write a Play module that provides any template engine you want.
Well, both systems are basically diametrical opposites. StringTemplate is very strong on MVC separation, whereas the Play templates feature a fully-fledged programming language.
I'll have to dive deeper into play to see how much more I would have to do in the Controller part to prepare the output for the template...
Yep, there is one step of bytecode enhancement during the compilation phase. But as the compilation phase itself is totally transparent, you can't call that a 'preprocessing step'.
I just watched the screencast. I'm sure that the static methods on the controllers and the public attributes will drive purists up the wall, but the speed of development appears excellent. Looks most definitely faster than running glassfish in debug mode and reloading classes/redeploying all the time.
I can assure you that ASP.NET MVC has its share of copied features. And you know what? That's a good thing. Good ideas should be copied and propagated, and mixed in with whatever makes the next platform unique.
Update: before you do any stupid down-voting: JNI sucks. It was not designed for today's high-load applications. The JVM sucks for IO, even with its NIO2. It's not portable any more; there is NO JVM for ARM-based platforms, and even Python is more portable. Java is the corporate platform for internal business-related projects (same as .NET). It is not a universal or even portable platform. The game is over.
We've been using Grails (Groovy) at our startup with great success. A few warts early on with regard to how Grails wants to handle packages and URLs, but we push a lot of transactions through this system (sorry, I don't have exact numbers) and Grails has held up remarkably well. It also cut our initial development time by about half.
Stripes is a great framework that promotes the "convention over configuration" mentality, and is quite minimalistic. But it doesn't support any specific ORM out-of-the-box, and doesn't have scripts to autogenerate a lot of stuff, so I'm not sure I'd call it "Rails-like." But it's been one of my favorite Java web frameworks to use.