I'm curious whether any of you have noticed an occurrence like this.
I've been getting ready to upgrade my network to Gig-E and to move to Exchange 2003.
While reading the Microsoft white paper on OTG's move to Exchange 2003, I noticed a brief mention that they had performance degradation problems with their Gig-E NICs, and that they actually swapped them out for 100BT after they moved to the SAN, since they no longer needed the throughput and it meant less administrative overhead.
Here is the actual comment from the white paper:
"OTG?s experience with Gigabit Ethernet showed a gradual trend of network adapter performance degradation. The administration effort required to manage and resolve the degradation was quite time and resource consuming. ... Moreover, the 100 Mbps Ethernet adapters required much less maintenance overhead."
I asked the author of the paper if he had any more details on this (he's a friend of mine), and here is what I got back:
"As I recall during the interviews I had, Gigabit Ethernet adapters regularly suffered from a progressive degradation of throughput. The servers had to be reset to restore full performance. The manufacturers could do nothing to resolve the issue, despite much effort, so since the high-speed throughput was not necessary with the new configuration, OTG went back to 100 Mbps network adapters."
Has anyone else run across this sort of thing in a production environment? Does it maybe have to do with particular adapters, or some bug in Windows?
