Munki's post was a bit of an over-simplification, and not quite correct. You can't simply take 10Mbps and divide it by 8 computers and say that each computer gets 1.25Mbps. That would only be true if all 8 computers were putting the exact same amount of traffic on the network at the same time. That's not usually the case.
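To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python (the figures and function name are just illustrative):

```python
# Hypothetical sketch: on a shared-medium hub, a host's effective share
# depends on how many hosts are actively transmitting at that moment,
# not on how many are plugged in.

HUB_MBPS = 10  # total shared bandwidth on a 10Mbps hub

def per_host_share(active_hosts: int) -> float:
    """Rough per-host throughput when active_hosts are all transmitting at once."""
    return HUB_MBPS / max(active_hosts, 1)

for active in (1, 2, 8):
    print(f"{active} active host(s): ~{per_host_share(active):.2f} Mbps each")
# A lone active host gets the full 10Mbps; only when all 8 are busy
# simultaneously does the share drop to the naive 10/8 = 1.25Mbps.
```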
Likewise, you can't say that plugging 16 computers into a switch gives each of them their own dedicated 100Mbps. It depends on what they're talking to. For example, if each of 15 computers is downloading a file from a server connected to the 16th port (as is common), the server only has 100Mbps to work with to talk to all 15, about 6.7Mbps each (100/15). Not 1500Mbps if you do the math the other way around. The most you can hope for is 100Mbps between any TWO devices that talk to each other. If anyone else tries to talk to one of those two, the bandwidth has to be shared. Granted, the instantaneous bandwidth will be 100Mbps (because a device can only talk to one other device at any point in time), but looking at the big picture, the bandwidth is shared.
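The same sort of arithmetic shows why the server's single port is the choke point (again, purely illustrative numbers):

```python
# The server's lone 100Mbps port is shared by everyone downloading from it,
# no matter how many switched ports the clients have to themselves.

SERVER_LINK_MBPS = 100
CLIENTS = 15

per_client = SERVER_LINK_MBPS / CLIENTS
print(f"Each of {CLIENTS} clients gets ~{per_client:.2f} Mbps from the server")
# ~6.67Mbps each -- nowhere near a dedicated 100Mbps, and certainly not 1500.
```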
The big difference between the three devices is this:
Hub: Transmits all traffic to all ports. If it's dual speed (10/100), all ports must be running at the same speed.
Switching Hub: Transmits all traffic to all ports. Uses switching technology to allow ports to be at different speeds (10 or 100).
Switch: Transmits traffic only to the port for which it is destined (see the sketch below for how it figures that out). This effectively creates separate 'collision domains'. If dual speed, ports may be at different speeds.
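If it helps to see the forwarding difference concretely, here's a toy Python sketch (the class name, ports, and addresses are all made up for illustration). A hub floods every frame out all other ports; a switch learns which address lives on which port and, once it knows, sends traffic only there:

```python
def hub_forward(in_port: int, ports: list[int]) -> list[int]:
    """A hub floods every frame to all ports except the one it arrived on."""
    return [p for p in ports if p != in_port]

class Switch:
    def __init__(self, ports: list[int]):
        self.ports = ports
        self.mac_table: dict[str, int] = {}  # learned MAC -> port mapping

    def forward(self, src_mac: str, dst_mac: str, in_port: int) -> list[int]:
        # Learn: remember which port the source address was seen on.
        self.mac_table[src_mac] = in_port
        # Forward: if we know the destination's port, send only there...
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        # ...otherwise flood like a hub until the destination is learned.
        return hub_forward(in_port, self.ports)

sw = Switch(ports=[1, 2, 3, 4])
print(sw.forward("aa", "bb", in_port=1))  # dst unknown: floods to [2, 3, 4]
print(sw.forward("bb", "aa", in_port=2))  # "aa" was learned: sends only to [1]
```

That learned table is what keeps everyone else's conversations off your wire, which is where the separate collision domains come from.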
Bottom line, a switch will always give you better performance, but for your application I don't know how noticeable it will be.