Hacker News

True, if you look at Intel's flash drives they do this internally: you've got the flash 'controller' which sits astride a bunch of actual flash chips being written in parallel, and it's the controller doing the wear leveling. I've played a bit with Intel's PCIe-attached flash and can vouch for it being quite a bit faster than going through the SATA port. Now if we can get them to move it to the FSB so that I can talk to it via the L1/L2/L3 cache, I'll be really happy.
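The controller behavior described above (striping writes across parallel flash dies while evening out wear) can be sketched roughly like this. Everything here is illustrative, not Intel's actual design: the class, die counts, and the "pick the least-worn die" policy are assumptions standing in for a real controller's far more complex firmware.

```python
class FlashController:
    """Toy model: stripe logical blocks across parallel flash dies,
    steering each write toward the least-worn die (dynamic wear leveling)."""

    def __init__(self, num_dies=4, blocks_per_die=1024):
        self.num_dies = num_dies
        self.erase_counts = [0] * num_dies   # wear accumulated per die
        self.mapping = {}                    # logical block -> (die, physical block)
        self.next_free = [0] * num_dies      # naive free-block cursor per die

    def write(self, logical_block, data):
        # Pick the least-worn die: the simplest dynamic wear-leveling policy.
        die = min(range(self.num_dies), key=lambda d: self.erase_counts[d])
        phys = self.next_free[die]
        self.next_free[die] += 1
        self.erase_counts[die] += 1          # stand-in for a program/erase cycle
        self.mapping[logical_block] = (die, phys)

    def read(self, logical_block):
        # Returns where the block landed; a real controller would read the die.
        return self.mapping[logical_block]
```

With four dies, eight writes land two per die, which is the point: the host sees one device, while the controller spreads the work (and the wear) across all the chips in parallel.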


That's an interesting concept, but there would still be a need for the much faster (though volatile) DRAM. At that point you need to distinguish between the two and decide, for each data structure, where you want to locate it.

The latency of NAND flash will be so much higher than DRAM's that you'd have to use DRAM as a cache layer in front of the flash, in which case you could just use one of the existing block-caching schemes coupled with a PCIe SSD and get most of the benefit with very little disruption at the architecture level.
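The block-caching scheme mentioned above can be sketched as a small write-back LRU cache: hot blocks live in (fast, volatile) DRAM, and dirty blocks are only written back to the (slow, non-volatile) flash when evicted. This is a minimal sketch under assumed names; `backing` is just a dict standing in for the SSD.

```python
from collections import OrderedDict

class BlockCache:
    """Toy write-back LRU block cache: DRAM tier in front of a flash tier."""

    def __init__(self, backing, capacity=64):
        self.backing = backing         # dict standing in for the flash device
        self.capacity = capacity       # how many blocks fit in "DRAM"
        self.cache = OrderedDict()     # block -> (data, dirty); order = LRU

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)        # LRU touch on hit
            return self.cache[block][0]
        data = self.backing.get(block)           # miss: slow flash read
        self._insert(block, data, dirty=False)
        return data

    def write(self, block, data):
        self._insert(block, data, dirty=True)    # absorb the write in DRAM

    def _insert(self, block, data, dirty):
        if block in self.cache:
            self.cache.move_to_end(block)
        self.cache[block] = (data, dirty)
        if len(self.cache) > self.capacity:
            old, (old_data, old_dirty) = self.cache.popitem(last=False)
            if old_dirty:
                self.backing[old] = old_data     # write-back only on eviction
```

The design choice that matters for flash is write-back rather than write-through: writes are absorbed at DRAM speed and only hit the flash on eviction, which both hides the latency and reduces wear.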


Absolutely. There will always be a place for DRAM (as bulk 'fast' memory) and for static RAM (in caches); this is just a new (for this generation) tier beyond DRAM.



