"Speed" can be a misleading term. If by "speed" you mean throughput, then the extra bandwidth can improve things considerably; if you're talking about the time it takes to get a packet through the network (latency), then media speed (as most manufacturers use the term) doesn't mean squat.
If a packet enters the system/component at a gigabit today and leaves the system tomorrow, it is still considered a gigabit component (because you are indeed clocking traffic/throughput at gigabit speeds). The latency (as a PING time would demonstrate) would be horrendous, but the "speed" is still gigabit.
Bandwidth and latency are two separate critters. The amount of traffic passed is not necessarily related to the time it takes to get through the system/component.
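A quick back-of-the-envelope sketch of that distinction (the link speeds and delays below are made-up illustrative numbers, not measurements from any real gear): the time for a payload to arrive is roughly one-way latency plus size divided by bandwidth, so a tiny PING packet is dominated by latency while a bulk transfer is dominated by bandwidth.

```python
def transfer_time(size_bytes, bandwidth_bps, latency_s):
    """Rough time for the last bit to arrive: propagation delay + serialization.
    Ignores protocol overhead, queuing, and retransmits."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

ping = 64           # a small PING-sized packet, in bytes
bulk = 100_000_000  # a 100 MB file

# Fast pipe over a long path vs. slow pipe across the room (hypothetical):
fast_far  = 1_000_000_000, 0.250   # gigabit, 250 ms latency
slow_near = 10_000_000,    0.001   # 10 Mbit/s, 1 ms latency

# Small packet: latency dominates, so the "slow" link delivers it sooner.
print(transfer_time(ping, *fast_far))    # ~0.2500005 s
print(transfer_time(ping, *slow_near))   # ~0.0010512 s

# Bulk transfer: bandwidth dominates, so the gigabit link wins easily.
print(transfer_time(bulk, *fast_far))    # ~1.05 s
print(transfer_time(bulk, *slow_near))   # ~80.0 s
```

Same two links, opposite winners, which is exactly why "gigabit" alone tells you nothing about PING times.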
In many cases, pushing higher bandwidth into an already congested system makes the traffic more likely to be dropped: packets are buffered/stored and subject to time expiration. Latency through a congested system is higher than through an uncongested one, so adding components that amplify the congestion will amplify the latency as well.
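Here's a toy sketch of that buffering effect (all rates and buffer sizes are hypothetical): a link drains packets at a fixed rate, anything arriving faster piles up in the buffer, the buffer eventually overflows and drops, and every packet that does get through first waits out the whole queue ahead of it.

```python
def run_queue(arrival_pps, drain_pps, duration_s, buffer_pkts):
    """Simulate a single output queue one second at a time.
    Returns (final backlog, total packets dropped, queuing delay in seconds
    seen by a packet at the back of the final backlog)."""
    backlog = 0
    dropped = 0
    for _ in range(duration_s):
        backlog += arrival_pps            # packets arriving this second
        backlog -= min(backlog, drain_pps)  # packets the link can serialize
        if backlog > buffer_pkts:         # buffer overflow -> tail drop
            dropped += backlog - buffer_pkts
            backlog = buffer_pkts
    return backlog, dropped, backlog / drain_pps

# Uncongested: arrivals below the drain rate, queue stays empty.
print(run_queue(800, 1000, 10, 500))    # (0, 0, 0.0)

# Congested: arrivals above the drain rate, buffer pins full,
# drops pile up, and the queue adds a steady 0.5 s of latency.
print(run_queue(1500, 1000, 10, 500))   # (500, 4500, 0.5)
```

Upgrading the inbound link in this model only raises `arrival_pps`, which makes both the drops and the queuing delay worse, not better.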
The point being: as with just about anything else, some planning and design is always a good thing. Deciding what the system/network is intended to do, then using the proper components to accomplish those design goals, will give better overall performance than just dropping in random components because they're "faster" or have a high IWBC ("It Would Be Cool") factor.
FWIW / .02
Scott