It's written on the JVM for the simple reason that if they had written it in Go or Erlang, no enterprise would adopt it: there isn't a CTO at a non-tech Fortune 500 who has ever heard of Erlang or Go, and they wouldn't know the first thing about hiring developers for it. Remember, the jobs written for MapReduce are (typically) done in the same language as the MapReduce code itself.
It is written in Scala with the Akka framework, not pure Java, so it uses the same computational model as Erlang: actors communicating by message passing. Being Java-compatible also makes it easier to integrate with other popular Big Data tools, e.g. Hadoop or Hive (see Shark).
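To make the "same computational model" point concrete, here is a minimal sketch of the actor idea in plain Java (illustrative only, not Akka's or Erlang's actual API): an actor is essentially a mailbox plus a single thread that processes one message at a time, so other threads interact with it only by sending messages, never by locking shared state.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ActorSketch {
    public static void main(String[] args) throws InterruptedException {
        // The actor's mailbox: messages queue up here.
        BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();

        // The actor: one thread draining the mailbox, one message at a time.
        Thread actor = new Thread(() -> {
            try {
                while (true) {
                    String msg = mailbox.take(); // block until a message arrives
                    if (msg.equals("stop")) return;
                    System.out.println("processed: " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        actor.start();

        // Senders communicate only via messages, never by touching
        // the actor's internal state directly.
        mailbox.put("hello");
        mailbox.put("world");
        mailbox.put("stop");
        actor.join();
    }
}
```

Erlang processes and Akka actors add supervision, location transparency, and cheap scheduling on top of this, but the core model is the same: isolated state plus asynchronous messages.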
Erlang is not that performant in the general case compared to C, Java, etc. While the conceptual overhead of distribution is much lighter in Erlang, actual computation in an idiomatic implementation tends to be slower than in Scala/Akka.
Of course, there are tasks for which an Erlang implementation would be faster, but as others have mentioned, most organizations would prefer to write Scala.
Go is nowhere near Scala's adoption right now, and its compiler has a lot of catching up to do to reach the level of performance Scala gets on top of the Oracle JVM.
Considering there are already lots of Big Data tools in the Java ecosystem (Hadoop, Hive, Pig, Mahout, etc.), Scala looks like a very reasonable choice.
Can someone explain this to me?