I think latency of a network will always lag behind that of local storage.
Even traveling at the speed of light, going around the circumference of the earth takes over 100ms. Obviously not all network requests go around the globe, but the fact that local storage is physically closer to your computer will always be a sizeable advantage.
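The 100ms figure checks out as a back-of-the-envelope calculation (a sketch; the circumference and speed-of-light constants are standard values, not from the comment):

```python
# One-way trip around the earth's equator at the speed of light, in vacuum.
C_EARTH_KM = 40_075          # equatorial circumference of the earth, km
C_LIGHT_KM_S = 299_792.458   # speed of light in vacuum, km/s

one_way_ms = C_EARTH_KM / C_LIGHT_KM_S * 1000
print(f"{one_way_ms:.0f} ms")  # ~134 ms one way
```

And that's the best case: light in fiber travels at roughly 2/3 of c, and real routes are never great circles, so actual round-the-world latency is considerably worse.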
Right now at home I'm sending 7TB from my server to my NAS, and it's taking ages over my internal 1Gb/s ethernet network.
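For a sense of why it feels so slow, here's a rough estimate (the ~80% effective-throughput figure is an assumption to account for protocol overhead, not something from the comment):

```python
# How long does 7TB take over 1Gb/s ethernet?
data_bits = 7e12 * 8     # 7 TB (decimal) in bits
link_bps = 1e9 * 0.8     # 1 Gb/s link at an assumed ~80% efficiency
hours = data_bits / link_bps / 3600
print(f"{hours:.1f} hours")  # ~19.4 hours
```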
Am I right in thinking that there are (still) no SOHO network switches that handle faster speeds (at least 2Gb/s), don't have active fans, don't run hot, and aren't super-expensive? The last time I checked, about a year ago, I didn't manage to find anything.
I've not used one and can't speak to their quality but:
> The CRS305 is a compact yet very powerful switch, featuring four SFP+ ports, for up to 10 Gbit per port. The device has a 1 Gbit copper ethernet port for management access and two DC jacks for power redundancy. The device is a very sleek and compact metallic case without any fans, for silent operation. [0]
Thank you - it looks interesting, but the specs don't say whether it has an active fan or not, right? (e.g. I see in the pics that this random model has fans https://mikrotik.com/product/crs328_4c_20s_4s_rm#fndtn-downl... but they're not mentioned in the specs)
Anything with multi-Gig or 10GbE is still quite expensive, unless you score a good deal on used enterprise gear that will definitely have screaming fans. There are a few switches that have mostly 1GbE ports and a few 10G ports and are fanless.
Define expensive? 10GBASE-T is not that much anymore. A number of vendors also support 2.5/5G speeds. Ubiquiti has some reasonable kit. I'm planning on dropping a 10G card into my FreeNAS box and getting a Thunderbolt 10G adapter for my MacBook. The idea that a 10GBASE-T PHY can work in an adapter is pretty cool. Back in the day I worked at a startup that did one of the first 48x10GBASE-T switches, and the PHYs were 5 watts each x 48. Sorting out the cooling was fun. As a DC switch the noise was fine, but working with the prototypes at your desk or in the lab was quite loud.
For switches of around 8 ports, 1GbE is about $2-3 per port. 10GbE over copper is about $70 per port, way higher than justified by the bandwidth increase alone. 10GbE is getting cheaper, sure, but it definitely isn't cheap yet. A 10GbE switch is still more expensive than all the equipment required for a 1GbE+WiFi home network.
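Spelling out that comparison for an 8-port switch (the prices are the rough figures from the comment above, not quotes):

```python
gbe_switch_price = 20       # typical 8-port 1GbE switch, ~$2-3/port
tengbe_switch_price = 560   # 8-port 10GBASE-T switch at ~$70/port
ports = 8

print(gbe_switch_price / ports)     # ~$2.50 per 1GbE port
print(tengbe_switch_price / ports)  # $70.00 per 10GbE port

# 28x the price for 10x the bandwidth: ~2.8x more per unit of bandwidth.
cost_per_bandwidth_multiple = (tengbe_switch_price / gbe_switch_price) / 10
print(cost_per_bandwidth_multiple)  # 2.8
```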
What do they mean when they write "...with 4 10Gb SFP+ Uplinks"? Are they meant only to aggregate the traffic that comes from the 1GbE-ports or can they be used as well to exchange traffic between 4 servers, each one using 10GbE?
That switch should be able to do regular switching between its four 10Gb ports, but first you need to buy SFP+ transceivers to plug into those ports. 10G Fiber transceivers start around $20 per port, but 10G transceivers with RJ-45 ports for ordinary twisted-pair copper cabling are $40-70 per port. So to get that switch equipped to actually do 10Gb switching over copper would drive the price per port over $100.
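Putting those figures together for a 4-port SFP+ switch (the switch price here is a hypothetical round number for illustration; the transceiver price is the top of the $40-70 range quoted above):

```python
switch_price = 150   # assumption: rough street price for a 4xSFP+ switch
rj45_module = 70     # upper end of the quoted copper transceiver range
ports = 4

per_port = switch_price / ports + rj45_module
print(f"${per_port:.2f} per port")  # $107.50 per port
```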
Electro-optic conversion is expensive in terms of power, so you'd better be sure it's necessary. There are still some people looking at hybrid computers with both optics and electronics. To be practical, you'd need both to be realized on the same platform, but they don't exactly work at the same scales, and laser integration is a big issue.
Honestly, I'd love to have my hands on a terabyte drive with 1 TB/s speeds.