Google's Chiller-Less Data Center (datacenterknowledge.com)
78 points by 1SockChuck on July 15, 2009 | hide | past | favorite | 20 comments


I love their strategy for when it's too hot--turn it all off and go home. Not a lot of companies have that amount of redundancy, so I don't see this being the "next big thing", but it's a nice low-tech "hack".


For companies with global operations it may prove more profitable to create more data centers around the world and shift workloads between them than to build one monolithic data center in the Midwest (for example).

Costs of hardware have fallen enough that other costs are starting to dominate data center planning.


I wonder if the latency would be too terrible if you sprinkled your datacenters around the globe at roughly the latitude of (or slightly south of) the Arctic Circle... Alaska, Nunavut (Canada), Norway, Finland, and Russia.

Seems like the fairly persistent cool/cold temperatures would be attractive from a cooling standpoint. Whether the other necessities (cheap power and ample bandwidth) would be hard to provide in those locations is another matter, I suppose.

Edit: Another thought would be putting them under water. The ocean is a humongous heat sink, and also happens to be conveniently located near major population centers. It's not like you'd have to go very far out from shore (or very deep) to effectively cool a significant heat source. Many power plants use a lake source for water, and while they definitely heat up the lake, the rise in temperature is not significant, and the ocean is much bigger than a lake.


Or high mountains near cities. For instance, Spain is a hot country, but it is also the second-highest country in Europe after Switzerland.

So, Google could place datacenters a bit into the mountains where temperatures are very low.


Google did actually file a patent based around floating data centers. There is an article on it here: http://blogs.zdnet.com/BTL/?p=9937


It seems like a lot of these "rules" for systems were created decades ago with different hardware. Not only has the hardware design changed since then, but the economics have changed as well.

With a single big-iron mainframe it probably made sense to spend a lot of money on cooling because any failure was very expensive. With many commodity servers it may be cheaper to let them run hotter and replace any failures, though an Intel study with free-air cooling in Mexico found no significant difference in failure rate.

I'm reminded of hot spares in RAID configurations. With today's hard drive sizes it can take so long to rebuild onto a hot spare that you're better off just increasing the RAID level and keeping every drive online.
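A rough back-of-envelope calculation shows why rebuilds take so long; the drive size and rebuild throughput below are illustrative assumptions, not measurements:

```python
# Why merging a hot spare takes so long: capacity has outgrown throughput.
capacity_tb = 2            # assumed drive size (large for 2009)
rebuild_mb_per_s = 50      # assumed sustained rebuild rate under live load

seconds = capacity_tb * 1_000_000 / rebuild_mb_per_s   # 1 TB ~ 10^6 MB
hours = seconds / 3600
print(f"~{hours:.1f} hours to rebuild one drive")      # ~11.1 hours
```

Hours of degraded redundancy per rebuild is exactly the window where a second failure hurts, which is the argument for a higher RAID level instead.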


Really, you're better off not using any block-level mirroring at all! At scale it makes much more sense to store each chunk of data (2^24-2^27 bytes, i.e. 16-128 MB) on at least 3 independent servers in each facility that has a copy of the dataset (see GFS).
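A toy sketch of that kind of chunk-level replication; the server names and hash-based placement are made up for illustration (the real GFS master balances load and rack diversity rather than hashing):

```python
import hashlib

CHUNK_SIZE = 64 * 2**20   # 64 MB chunks, as in GFS
REPLICAS = 3
SERVERS = [f"cs{i:02d}" for i in range(12)]   # hypothetical chunkservers

def place_chunks(file_id, file_size):
    """Map each chunk index of a file to 3 distinct replica servers."""
    n_chunks = (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE  # ceil division
    placement = {}
    for idx in range(n_chunks):
        # Hash (file, chunk) to pick a starting server, then take the
        # next two servers as the other replicas.
        h = int.from_bytes(hashlib.sha1(f"{file_id}/{idx}".encode()).digest(), "big")
        start = h % len(SERVERS)
        placement[idx] = [SERVERS[(start + k) % len(SERVERS)] for k in range(REPLICAS)]
    return placement

layout = place_chunks("webcrawl-2009-07", 300 * 2**20)  # 300 MB file -> 5 chunks
for idx, servers in layout.items():
    print(idx, servers)
```

Losing any one server then costs only one replica of some chunks, instead of half of a mirrored pair.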


I love the "follow the moon" idea; I picture the globe spinning and sparks of data processing jumping from node to node to stay out of the sun.

It's pretty. Uh, in my head.


Except that the latency for those following the sun would suck pretty much all the time.


You don't serve user loads from infrastructure that "follows the moon". Google does a ton of batch processing and it doesn't care where it computes so long as it has data (either migrating with it or available via tubes).
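A minimal sketch of what "follow the moon" scheduling could look like: route batch work to whichever data centers are currently in local night (cool air, off-peak power). The DC names, longitudes, and night window are assumptions for illustration:

```python
from datetime import datetime, timezone

# Hypothetical data centers and their approximate longitudes.
DCS = {"oregon": -121.0, "belgium": 4.5, "taiwan": 121.0}

def local_solar_hour(longitude, when):
    """Approximate local solar time: 15 degrees of longitude per hour."""
    return (when.hour + when.minute / 60 + longitude / 15.0) % 24

def night_dcs(when, start=22, end=6):
    """Data centers whose local solar time falls inside the night window."""
    out = []
    for name, lon in DCS.items():
        h = local_solar_hour(lon, when)
        if h >= start or h < end:
            out.append(name)
    return out

now = datetime(2009, 7, 15, 12, 0, tzinfo=timezone.utc)  # noon UTC
print(night_dcs(now))  # ['oregon']
```

As long as the input data is already replicated everywhere, the batch job genuinely doesn't care which of these it lands on.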


Interesting article. I wish that it had given an estimate for the magnitude of power savings, however.


Very interesting. Some DCs weigh the cost of running at a higher temperature against the cost of purchasing replacement hardware and the strain that downtime places on customer relationships. With a global presence you can do smart DC distribution: based on the traffic received from each region, spin down servers at night or during hotter days. I wonder if they reuse the heat (energy) produced.


This actually makes sense; Google's new policy is to work on problems at a scale that no one else can. Too hot in the data center? Ah, just turn it off and redirect the traffic! :)


Are data centers going to be the junkyards of the information age?


They already are; look at all the worthless Twitter, blog, and forum posts that Google has stored in its data centers.


It's way crazier than that: Google has the full text of the web stored in RAM three times over (think about how snippets work on SERPs), in each of the dozens of DCs used for search around the globe.

Think about how epically huge the indexes are into all that data...


'Free cooling' works well in colder climates such as Belgium's. A larger fan on the server might be a cheaper solution. Directing air into the racks rather than the room can also be more effective. Since evening air is normally cooler, it can be exploited too by adding more thermal 'mass' to the room. Anyone thought of putting these datacenters on mountains?


I'd like them to go one step further and use the free heated water for hot-water needs.


Wouldn't it be better if the boxes were painted white or made reflective?


The parts that fail most often are hard drives and fans, so eliminate those. Maybe an embedded PC (think router) with a huge SSD+RAM is stable at ambient room temperature and almost never breaks (the AC power adapter will normally break first).

If Google can subsidize internet access with its routers (with huge SSD+RAM), it basically outsources the power and maintenance to local people around the globe.

If such a router can answer queries from its huge SSD+RAM cache (think Squid + LRU?), then search can be faster.
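The cache the comment imagines could be as simple as an LRU map: recently served queries stay in the router's fast local store, and the least-recently-used entry is evicted at capacity. A minimal sketch (the query keys and capacity are made up):

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache of the Squid-style kind an edge router might hold."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()   # insertion order tracks recency

    def get(self, key):
        if key not in self._store:
            return None                    # miss: forward upstream
        self._store.move_to_end(key)       # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("q=hn", "results-a")
cache.put("q=google", "results-b")
cache.get("q=hn")                     # touch: q=hn is now most recent
cache.put("q=chiller", "results-c")   # over capacity: evicts q=google
print(cache.get("q=google"))          # None (evicted)
print(cache.get("q=hn"))              # results-a (still cached)
```

Any query that hits the local store never leaves the router, which is where the latency win would come from.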



