In terms of computation and storage, maybe, but in terms of the network bandwidth, perhaps not.
The current physical topology of the Internet is already heavily biased towards backbone-to-edge connectivity, and is directionally asymmetric.
By comparison, edge-to-edge connectivity that doesn't go through any backbone is much more limited, and edge nodes often don't have the upload bandwidth to effectively deliver content.
Plus, CDNs and other distribution networks already effectively put easily cachable content closer to the edges to relieve backbone congestion.
I think edge-to-edge connectivity being limited is mostly a software (routing protocol architecture) issue.
For example, I live in an apartment and the ISP has a router installed in the building. Coax cables run to each apartment. I can see 15+ wifi networks from my home. Yet to fetch the exact same front page of a news site my neighbor is reading and has a cached copy of, I am routed through the backbone probably halfway around the world, or to a CDN at the other end of my country at best.
I don't think photons really care which way they travel, but the shorter the path, the faster they get there, and the fewer photons we send, the cheaper it ends up being.
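The path-length point can be put in rough numbers. A back-of-envelope sketch, assuming signals in fiber propagate at roughly two-thirds the speed of light (~200,000 km/s); the distances are illustrative guesses, not measurements:

```python
# Propagation delay in fiber at ~2/3 c. Distances below are illustrative
# assumptions: halfway around the world, a cross-country CDN, a next-door hop.
FIBER_SPEED_KM_S = 200_000

def one_way_delay_ms(distance_km: float) -> float:
    return distance_km / FIBER_SPEED_KM_S * 1000

print(one_way_delay_ms(20_000))  # backbone halfway around the world: 100 ms
print(one_way_delay_ms(2_000))   # CDN at the other end of the country: 10 ms
print(one_way_delay_ms(0.1))     # a hypothetical hop to the neighbor: 0.0005 ms
```

Of course, real latency is dominated by routing, queuing, and the number of round trips rather than raw propagation, but the distance term alone already spans five orders of magnitude here.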
If the argument is that this saves on network hardware costs, then you'd think ISPs would have some incentive to invest in decentralization as well.
But I'm not so sure about the thesis that it's just a software issue. It seems wasteful to sling electrons through a backbone and back when those bits are (physically) close by, but the cost here is not the distance those bits traveled, but the marginal cost of the physical hardware to deliver those bits to your doorstep.
It's probably much cheaper, on a cost-per-bandwidth basis, to beef up the existing backbone, because significant economies of scale exist.
OTOH, there's a significant fixed cost every time you try to widen edge links (it's very expensive to dig up and replace residential cables).
My ISP would certainly love it if my whole local area (building/street/district/whatever) only fetched each unique piece of data once through their network and then distributed it locally via mesh networks. It's just that they can't make it happen by themselves. It needs to be supported by network protocols, which need support from content providers and content consumers as well, so it goes way beyond the reach of ISPs.
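The "fetch once per neighborhood" idea can be sketched in a few lines. This is a toy simulation under made-up names, not any real protocol: each node hashes the URL into a content key, asks local peers for it before going upstream, and only hits the backbone on a neighborhood-wide miss:

```python
# Toy sketch of neighborhood-level caching: ask mesh peers before the backbone.
# MeshNode, content_key, and the upstream callback are all hypothetical names.
import hashlib

def content_key(url: str) -> str:
    return hashlib.sha256(url.encode()).hexdigest()

class MeshNode:
    def __init__(self, upstream):
        self.cache = {}           # content_key -> bytes
        self.peers = []           # other MeshNodes reachable locally
        self.upstream = upstream  # simulated fetch through the ISP/backbone
        self.upstream_fetches = 0

    def fetch(self, url: str) -> bytes:
        key = content_key(url)
        if key in self.cache:
            return self.cache[key]
        for peer in self.peers:              # ask the neighbors first
            if key in peer.cache:
                self.cache[key] = peer.cache[key]
                return self.cache[key]
        self.upstream_fetches += 1           # only then go upstream
        self.cache[key] = self.upstream(url)
        return self.cache[key]

origin = lambda url: f"<page for {url}>".encode()
a, b = MeshNode(origin), MeshNode(origin)
a.peers, b.peers = [b], [a]

a.fetch("https://news.example/front")  # neighborhood miss: goes upstream
b.fetch("https://news.example/front")  # served from neighbor a's cache
print(a.upstream_fetches + b.upstream_fetches)  # 1
```

A real design would also have to verify content integrity (hash the content itself, not just the URL) and deal with privacy, since your neighbor shouldn't learn what you read, which is part of why this needs protocol and content-provider support rather than just ISP cooperation.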
> The current physical topology of the Internet is already heavily biased towards backbone-to-edge connectivity, and is directionally asymmetric.
Couldn't agree more. The internet as we have it today is set up for decentralized consumers and centralized producers.
There is one untapped benefit of the current system, which is that latency at the edge is low. I'm not sure what the killer app to make use of that would be, though.
Is it possible we already have an entrenched system in place that squashes the possibility of a decentralized internet before it can get off the ground?
Not only is no one economically motivated to share traffic towards the edge, no one will be building physical network capacity for it either.
I wonder what the projected bandwidth numbers look like (if, for instance, YouTube or Netflix were completely distributed) - is the edge network close to being able to handle this?