It would be nice to corroborate this reason with another source, because my understanding is that clock synchronization was not a factor in determining the MTU, which seems more like an OSI layer 2/3 consideration.
I am surprised the PLLs could not maintain the correct clocking signal, since the signal encodings for early ethernet were "self-clocking" [1,2,3] (so even if you transmitted all 0s or all 1s, you'd still see plenty of transitions on the wire).
Note that this is different from, for example, the color burst at the beginning of each line in an analog color TV transmission [4]. That burst is used to "train" the PLL that demodulates the color signal, but once the burst is over, the PLL has nothing left to synchronize to. By contrast, 10BASE2/5/etc. carry a signal with transitions throughout the entire transmission.
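To illustrate the "self-clocking" point, here is a minimal sketch of Manchester encoding as used by 10 Mbit/s Ethernet (assuming the IEEE 802.3 convention of 0 = high-then-low, 1 = low-then-high within each bit period; the helper names are mine, not from any spec). Even an all-zeros or all-ones payload produces a level transition in the middle of every bit cell, which is what the receiver's PLL locks onto:

```python
def manchester_encode(bits):
    """Return two half-bit line levels (0 = low, 1 = high) per input bit,
    using the IEEE 802.3 convention: 0 -> high,low; 1 -> low,high."""
    out = []
    for b in bits:
        out += [1, 0] if b == 0 else [0, 1]
    return out

def count_transitions(levels):
    """Count level changes between adjacent half-bit periods."""
    return sum(1 for a, b in zip(levels, levels[1:]) if a != b)

# Even degenerate payloads keep the line busy: 8 bits of all zeros
# (or all ones) still yield a transition inside every bit cell.
all_zeros = manchester_encode([0] * 8)
all_ones  = manchester_encode([1] * 8)
print(count_transitions(all_zeros))  # 15
print(count_transitions(all_ones))   # 15
```

The guaranteed mid-cell transition is exactly why, in principle, a PLL should never lose lock mid-frame regardless of the data pattern.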
I would agree: I worked on an Ethernet chipset back in 1988/9, and keeping the PLL synced was not a problem. I can't remember what maximum packet size we supported (my guess is 2048), but that limit was more a matter of buffering to SRAM and needing more space for counters.
The datasheet for the NS8391 has no such requirement for PLL sync.
[1] https://en.wikipedia.org/wiki/Ethernet_physical_layer#Early_...
[2] https://en.wikipedia.org/wiki/10BASE2#Signal_encoding
[3] http://www.aholme.co.uk/Ethernet/EthernetRx.htm
[4] https://en.wikipedia.org/wiki/Colorburst