"Apparent multiplication of bandwidth" = multiple pairs of systems, using multiple pairs of ports, can communicate at the same time ... several logical circuits, as if each pair were connected directly together (each member of the pair in session with the other). Each of those pairs is communicating at full speed/bandwidth.
As soon as more than one system attempts to communicate to the same port (whether there is one system attached to that port, or it's a "gateway" out, like to a cable modem), that single port is shared bandwidth ... just like a hub ... except the buffering that the switch does to handle contention adds latency to the traffic. Since a system connected to a hub cannot transmit until it senses the line is clear, there is no buffering latency and no chance whatsoever that the packet(s) will time out in the hub, or due to latency induced by the hub (which is negligible to the point of being null).
Switches will flood all traffic out all ports for a broadcast (just like a hub), multicast (just like a hub), or if it doesn't recognize the MAC of the destination system (hubs don't care).
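That learn-or-flood behavior can be sketched in a few lines. This is an illustrative toy (the `Switch` class and its method names are my own invention, not any real product's API): learn the source MAC on the incoming port, then either forward a known unicast out one port or flood everything else.

```python
# Toy sketch of a switch's MAC-learning / flooding decision.
# Class and method names are illustrative, not from any real device.

BROADCAST = "ff:ff:ff:ff:ff:ff"

class Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # learned MAC address -> port number

    def forward(self, src_mac, dst_mac, in_port):
        """Return the list of ports the frame goes out of."""
        # Learn: remember which port the source MAC was seen on.
        self.mac_table[src_mac] = in_port
        # Broadcast or unknown destination: flood every other port,
        # just like a hub (multicast is handled the same way in a
        # basic switch with no snooping).
        if dst_mac == BROADCAST or dst_mac not in self.mac_table:
            return [p for p in range(self.num_ports) if p != in_port]
        # Known unicast: forward out the single learned port.
        return [self.mac_table[dst_mac]]
```

For example, the first frame from a new station floods all other ports; once the switch has seen a MAC as a source, traffic to it goes out only the learned port.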
Switches still have to wait for a clear line (just like a hub); it's part of the Ethernet spec. Lots of broadcasts, multicasts, or unknown MACs will cause the same delay in a switch that they will in a hub. Even if the switch is configured for full duplex, the broadcasts, multicasts, or flooding will add to the latency of the traffic behind them (buffered behind the BC, MC, or flood traffic).
The bandwidth of a hub is shared only in that a station must wait for a clear line to begin transmitting. On the rare occasion when two stations talk at the same time, they'll fall back and try again. No big deal in the grand scheme of things. When the stations do talk, they'll talk at full bandwidth supported by the hub; other stations will wait till they hear the line is clear, then the next station will transmit at full bandwidth.
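The "fall back and try again" part is Ethernet's truncated binary exponential backoff: after the n-th collision a station waits a random number of slot times drawn from 0 to 2^min(n,10) - 1. A minimal sketch, assuming classic 10 Mb/s Ethernet's 51.2 µs slot time (the function name and parameters here are illustrative):

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mb/s Ethernet, in microseconds

def backoff_delay(collision_count, rng=random):
    """Wait time (µs) after the given number of consecutive collisions,
    per truncated binary exponential backoff: pick a random number of
    slots in 0 .. 2**min(n, 10) - 1."""
    k = min(collision_count, 10)
    slots = rng.randrange(2 ** k)
    return slots * SLOT_TIME_US
```

So after one collision a station waits 0 or 1 slot; after a second, 0 to 3 slots; and so on. The randomness is what makes two colliding stations unlikely to collide again on the retry.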
For a couple of stations, operating properly, with common applications (including many/most games), there will be no performance difference between a hub and a switch of similar configuration ... except the hub will have slightly less latency. There are, as one would expect, several "official" definitions of latency; in the case of a hub none of the "official" definitions make any difference (the variance in definitions is usually applied to cut-through versus store-and-forward to make that particular vendor's stats look better).
Certainly, a switch properly implemented can offer some advantages. The way most people talk about using switches, they could just as easily use a hub and see the same performance.
Now I gotta go look up the definition of "Pedantic" ....
OH, and BTW: the traffic moves at ~66% the speed of light on UTP, and even through most fiber-optic cable. Check out the "velocity factor" of the cabling.
Seeya
Scott