The article covers how the 1500-byte MTU came about, but doesn't mention that the clock-recovery problem was solved with block codes such as 4b/5b and 8b/10b when sending Ethernet over twisted-pair wiring. These encodings also keep the signal DC-balanced (no sustained voltage bias on the line).
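To make the clock-recovery point concrete, here is a small sketch using the standard 4B5B data-symbol table (the one used by FDDI and 100BASE-X). Every 5-bit code begins with at most one 0 and ends with at most two 0s, so no encoded stream ever has more than three 0s in a row, guaranteeing frequent transitions for the receiver's clock recovery. The `encode` helper and its nibble ordering are illustrative, not a faithful PHY model.

```python
# Standard 4B5B data symbols (FDDI / 100BASE-X code table).
# Each 5-bit code starts with at most one 0 and ends with at most
# two 0s, so concatenated codes never produce more than three
# consecutive 0s -- enough transitions to recover the clock.
FOUR_B_FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode(data: bytes) -> str:
    """Encode bytes as a 4B5B bit string (nibble order illustrative)."""
    bits = []
    for byte in data:
        bits.append(FOUR_B_FIVE_B[byte & 0xF])   # low nibble
        bits.append(FOUR_B_FIVE_B[byte >> 4])    # high nibble
    return "".join(bits)

# Verify the clock-recovery guarantee over all 256 byte values:
# the longest run of 0s in the encoded stream is 3.
worst_run = max(len(run) for run in encode(bytes(range(256))).split("1") if run)
print(worst_run)  # 3
```

Note that 4b/5b by itself doesn't fully DC-balance the line; 100BASE-TX adds scrambling and MLT-3 signaling on top, while 8b/10b (used on fiber variants) enforces balance directly via running disparity.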
EDIT: As pointed out below, in my original assertion that Ethernet over twisted-pair was only 80% efficient due to the encoding, I failed to account for the clock rate being 25% faster than the bit rate (see below).
You're absolutely correct ... it's been a long time since I was designing fiber transceivers, but I should have remembered this. Efficiency is also affected by other layers of the protocol stack (UDP versus TCP headers, for example), which is another reason larger frames can be more efficient. In the early days of RTP and RTSP, there were many discussions about frame size, how it affected contention and prioritization, and whether super-frames actually helped if the intermediate networks were splitting and recombining the frames anyway.
I'm not a hardware engineer, but from some quick research it appears that 100-megabit Ethernet ("Fast Ethernet") signals on the wire at 125 MBaud. So the 100-megabit figure describes the usable bit rate, not the rate of electrical pulses on the wire.
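The relationship between the two rates is just the 4B5B overhead, which a couple of lines of arithmetic confirm:

```python
# 100BASE-TX: the MAC delivers 100 Mbit/s of data, but 4B5B maps
# every 4 data bits onto 5 line bits, so the symbol rate on the
# wire is 5/4 of the data rate.
data_rate_mbps = 100
line_rate_mbaud = data_rate_mbps * 5 / 4   # 4B5B overhead

print(line_rate_mbaud)                     # 125.0
print(data_rate_mbps / line_rate_mbaud)    # 0.8 -> the code is 80% efficient
```

This is the source of the "80% efficient" confusion above: the 20% overhead is paid by clocking the wire 25% faster, not by shaving bits off the usable 100 Mbit/s.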
Gigabit Ethernet is more complicated: it uses five voltage levels (PAM-5) and drives all four pairs of wires bidirectionally at once, so it is not just a single serial stream of on/off pulses.
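A sketch of the 1000BASE-T arithmetic shows how it reaches 1000 Mbit/s without raising the symbol clock beyond Fast Ethernet's 125 MBaud (the "2 data bits per symbol" figure is the usual simplification; PAM-5's extra capacity goes to error coding):

```python
from math import log2

# 1000BASE-T: four wire pairs, each at the same 125 MBaud symbol
# clock as Fast Ethernet, but with five voltage levels (PAM-5)
# instead of a binary signal.
pairs = 4
symbol_rate_mbaud = 125
levels = 5

raw_bits_per_symbol = log2(levels)   # ~2.32 bits of raw capacity per symbol
data_bits_per_symbol = 2             # 2 carry data; the rest is coding margin

print(pairs * symbol_rate_mbaud * data_bits_per_symbol)  # 1000
```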