Old 03-29-2007, 01:18 PM   #1
robmurphy
Senior Member
 
Join Date: Feb 2007
Posts: 376
Default PCI Bus with ATI SB400/SB450

I have been having trouble getting good performance out of some Intel Pro gigabit Ethernet cards in the PCI bus of 3 machines using the above southbridges. I did notice that the performance on the machine using the SB400 was only half that of the machines on the SB450. Using jumbo frames and Iperf I get a maximum of 300 Mbps between the 2 SB450-equipped machines, and 300/150 Mbps between the SB400 machine and one of the SB450 machines. I did think that the difference was down to the Intel NIC, but I'm growing suspicious that the real problem lies with the PCI bus performance of these southbridges. There were supposed to be speed improvements between the SB400 and SB450.
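
(For anyone reproducing this, a quick sanity check that the jumbo-frame path is actually active end to end is a don't-fragment ping from Windows; this is a sketch, the 8972-byte payload assumes the 9014-byte frame setting mentioned later in the thread, and 10.0.0.3 is just an example address.)

rem 8972 bytes of ICMP payload + 28 bytes of ICMP/IP headers = a 9000-byte IP MTU
ping -f -l 8972 10.0.0.3
rem If this reports "Packet needs to be fragmented but DF set.", jumbo frames are
rem not active somewhere on the path (NIC setting, switch, or the far end).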

I started suspecting something as the USB performance on these SBs is known to be slow. Other posts on the net have mentioned that the SATA performance is not great either. As well as the USB and the PATA/SATA, the SB400/450 controls the PCI bus. I then saw this on the net; please read the last post on the topic.

http://www.ocworkbench.com/ocwbcgi/ulti...bb.cgi?ubb=get_topic;f=29;t=001168;p=4

Have any forum members out there had any experience with the PCI bus being slow on these south bridges?

Thanks in advance.

Rob Murphy
Old 03-29-2007, 02:01 PM   #2
Matthias99
Diamond Member
 
Join Date: Oct 2003
Posts: 8,808
Default PCI Bus with ATI SB400/SB450

Are you using anything else on the PCI bus (note that onboard sound and Ethernet may be on this bus as well, depending on the exact board)? PCI is about 1Gbps total bandwidth even in the best circumstances, so anything else chewing up clock cycles will significantly limit your throughput.
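
(For reference, the 1 Gbps figure is just the 32-bit/33 MHz PCI spec arithmetic; a rough sketch, not a measurement from this thread:)

rem 33 MHz x 4 bytes per transfer = ~133 MB/s, shared by everything on the bus
set /a "pci_mbps=133*8"   & rem ~1064 Mbps theoretical peak
rem Arbitration and burst-length overhead typically leave something like 60-80%
rem of that usable, so a single busy gigabit PCI NIC rarely sees much above
rem 600-800 Mbps even with nothing else contending for the bus.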

It's definitely possible that the SB400 might not perform as well, though. The chipsets are underwhelming in general in terms of peripheral performance (SATA/IDE controller, USB, etc.)
Old 03-29-2007, 02:24 PM   #3
airhd823
Member
 
Join Date: Jan 2007
Posts: 90
Default PCI Bus with ATI SB400/SB450

I agree..
Old 03-30-2007, 12:31 PM   #4
robmurphy
Senior Member
 
Join Date: Feb 2007
Posts: 376
Default PCI Bus with ATI SB400/SB450

I'm not using any other cards in the PCI slots, and I was not using any of the USB ports, apart from the card readers built into the machines. The card readers did not have any cards in them. The onboard sound is on the southbridge, and given that the onboard Ethernet is 10/100 I would expect it is hanging off the southbridge as well.

I did try disabling the 10/100 onboard Ethernet but it did not make any difference. The traffic over the gigabit cards was just Iperf and whatever file-sharing traffic there is when no files are being transferred. Each end of the link was configured with static IP addresses, and the Windows firewall was turned off.

The limit seems to be in the transmit direction. I had reconnected all the machines to the Netgear GS608 switch, as a direct connection using cables made no difference compared to going through the switch. I decided to try from one desktop machine to my laptop and back using Iperf. The gigabit Ethernet on the laptop (HP/Compaq NC8000) does not support jumbo frames, so I disabled them on the desktop machine used for the test. The results were that the laptop could send at 400+ Mbps. It received at 300 Mbps, but that is all the desktop will send at. I may try the test with 2 desktops connecting to the laptop and see what happens. To be honest, 400 Mbps from a laptop with no jumbo frames and no setting up, i.e. all default settings, is pretty good.
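
(A sketch of the direction-by-direction test described above; the laptop address of 10.0.0.5 is made up, and the 64k window just matches what is used later in the thread rather than anything stated for this particular run.)

rem Desktop transmit test: server on the laptop, client on the desktop
rem On the laptop (assumed 10.0.0.5):
iperf -s -w 64k
rem On the desktop:
iperf -c 10.0.0.5 -w 64k
rem Then swap the two roles to measure the desktop's receive side instead.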

I've also tried a D-Link card. This only supports jumbo frames up to 7K. I set the Intel card to the same and the D-Link maxed out at about 180 Mbps in the transmit direction.

I have raised support queries with MSI and AMD/ATI over this, but I will not be holding my breath.

What's the betting the FireWire interface on motherboards with these 2 southbridges will be slow as well? I do not have a PC-to-PC FireWire connection so I will not be testing it.

I'll try to keep this updated, and post anything useful from AMD or MSI.

Rob Murphy
Old 03-31-2007, 10:17 AM   #5
Remedy
Diamond Member
 
Join Date: Dec 1999
Posts: 3,981
Default PCI Bus with ATI SB400/SB450

Have you checked for a new BIOS update? I know the SB400 was poor performance-wise if the BIOS wasn't mature, but certain BIOS updates patched a few performance issues.
Old 04-01-2007, 09:47 AM   #6
robmurphy
Senior Member
 
Join Date: Feb 2007
Posts: 376
Default PCI Bus with ATI SB400/SB450

The BIOS was updated a month or so back, before upgrading the CPU to an X2 4600.

The machine with the SB400 will only send data at 140 Mbps using Iperf. I swapped the Intel Pro card with the one from another machine, and used a different PCI slot. It would appear that there are major differences between the SB400 and SB450. The SB450 is running at twice the rate of the SB400.

I tried quickly sending from 2 machines to one at the same time, and both sending machines showed the same bandwidth. The receiving machine was one with the SB450. This would seem to indicate that the SB450 is limited on sending data from memory to the PCI bus. I have not been able to max out the receiving side yet. I need to read up on Iperf.

Does anyone know of any PCI diagnostic software that might reveal what is going on?

Rob Murphy.
Old 04-01-2007, 10:34 AM   #7
Madwand1
Diamond Member
 
 
Join Date: Jan 2006
Posts: 3,304
Default PCI Bus with ATI SB400/SB450

What iperf options are you using?
Old 04-01-2007, 11:42 AM   #8
robmurphy
Senior Member
 
Join Date: Feb 2007
Posts: 376
Default PCI Bus with ATI SB400/SB450

The commands used, as far as I can remember, were:

iperf -s -w 64k            (receiving machine)
iperf -c 10.0.0.3 -w 64k   (sending machine)

The receiving machine has the SB450.

The receiving machine with the SB450 has an X2 4600 CPU, 1 GB of dual-channel RAM, and XP Pro.

The sending machine with the SB450 has a Sempron 3400 CPU, 512 MB of single-channel RAM, XP Home, and WinTV PVR-150 cards in the other 2 PCI slots.

The sending machine with the SB400 gets 143 Mbps, and the one with the SB450 gets 290 Mbps.

What I want to do is get Iperf to run the test for longer, i.e. use 2 GB of data instead of 100 MB. I can then run the tests with multiple senders. I have 4 machines with the Intel Pro cards in them, so I could send from 3 and receive on 1 machine. This should give the max for the receiving bandwidth.
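
(For the longer runs, iperf 1.7/2.x can run either for a fixed time with -t or for a fixed amount of data with -n; the values below are just examples against the same 10.0.0.3 server.)

rem Time-based run: 100 seconds, with an interim report every 10 seconds
iperf -c 10.0.0.3 -w 64k -t 100 -i 10
rem Amount-based run: send 2000 MB rather than the default 10-second burst
iperf -c 10.0.0.3 -w 64k -n 2000M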

The 4th machine mentioned is an old PC with Win2k, a 1.2 GHz Tualatin Celeron CPU, and 256 MB of SDRAM. This has the Intel Pro card to keep the 10.0.0.0 network at 9014-byte jumbo frames and avoid the switch having to split up frames.

Old 04-01-2007, 10:50 PM   #9
Madwand1
Diamond Member
 
 
Join Date: Jan 2006
Posts: 3,304
Default PCI Bus with ATI SB400/SB450

Even 300 Mb/s is pretty slow for raw gigabit with decent CPUs, NICs, etc.

I suggest trying to get the iperf test "right" first.

I use iperf 1.7 as more recent versions have given me misleading results under Windows.

My "standard" parameters are as follows:

server: iperf -s
client: iperf -c server -l 64k -t 12 -i 3 -r

In your case, I'd suggest bumping up the -l parameter to 1M just to see what happens (hopefully it'll increase the synthetic throughput to the limit). You might also try -l 256k. (This is also the value that AT uses in their reviews.)
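
(Spelled out, with 10.0.0.3 standing in for the server's address, the suggested variants look like this:)

rem Baseline from above, then progressively larger -l write sizes
iperf -c 10.0.0.3 -l 64k -t 12 -i 3 -r
iperf -c 10.0.0.3 -l 256k -t 12 -i 3 -r
iperf -c 10.0.0.3 -l 1M -t 12 -i 3 -r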

I assume that what you want to see here is the best network speed, and thereby infer something about the PCI bus, etc., first, instead of getting the most representative network performance for a particular application (for which -l 64k might be better).

I suggest working with just the two SB450's to start, to get their performance up, and thereafter trying similar techniques on the SB400 as far as they go. Try enabling flow control and adaptive interrupt moderation, increasing the buffers, etc., among the NIC properties to see if they make a difference.

I'd expect a minimum of 500-600 Mb/s under iperf, hopefully closer to 800 Mb/s. Under Vista, I've seen an Intel PCI NIC exceed 900 Mb/s. With PCI / other inefficiency assumptions applying on both ends, we should lower our throughput expectations further, but 300 Mb/s still looks too low.
Old 04-02-2007, 06:23 AM   #10
robmurphy
Senior Member
 
Join Date: Feb 2007
Posts: 376
Default PCI Bus with ATI SB400/SB450

Before going any further I rebooted the machine with the SB400, disabled the 1394 port, and removed the legacy support for the USB ports. There was not an option to disable the USB ports.

I used Iperf with the 64k window and the default send time of 10 seconds between the 2 machines with the SB450. I varied the performance settings; the major difference was the number of TX and RX descriptors. I found the best throughput, and then tried backing them off. I ended up with 290 Mbps send from the machine with the Sempron CPU, and 25-40% CPU occupancy for Iperf. I could get 295-300, but it was running at 75-99% occupancy. I did the same exercise on the older machine and ended up with about 240 Mbps send and 290-300 receive; again, using the performance settings this ended up with 30-40% CPU occupancy (it is only a P3 1.2 GHz Celeron with 256 KB L2 cache). I did the same again with the machine with the SB400, and got 155 Mbps send and 290 Mbps receive.

I had not read the above post when doing these tests, but apart from the different Iperf settings I think the settings for the cards are correct now. You can get better speed on the old machine using a smaller TCP window, but the other machines then do not go as quick. 64k gives about the best performance for the HP machines, at a slight penalty for the older machine.

The NICs are all set to use flow control (generate and respond) and adaptive interrupt moderation. I varied the TX and RX descriptors; I was using 2048 on all of them. Using Iperf I found I got better throughput by reducing these. I found you could get 95-98% of the performance using 50% less CPU, so I tuned them accordingly.

Once I had got all the systems to the best I could, I used a longer runtime to confirm it. I ran Iperf for 100 seconds in both directions using the -r option. This confirmed the settings.

I then tried using 2 machines sending to one machine. The receiving machine was the one I had been using as a server: SB450, X2 4600, 1 GB DDR dual-channel RAM. The sending machines were the old one and the other machine with the SB450. This was done using a long test (100 sec) and the receiving machine was receiving 550-570 Mbps. The graph on the network monitor in Task Manager was straight, and showed link usage of 55%. I tried the same test adding a third machine and the result was worse performance on the receiving machine. The graph varied from 25% to 45%. Adding up the combined sending rates gave a receive bandwidth of between 410 and 510 Mbps. It was not particularly stable.
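
(For reference, the two-sender test above amounts to something like this, assuming the receiving SB450/X2 4600 machine is at 10.0.0.3 and the clients are started at roughly the same time:)

rem On the receiving machine
iperf -s -w 64k
rem On each sending machine, for 100 seconds; the receiver's aggregate is the
rem sum of what the individual client reports show
iperf -c 10.0.0.3 -w 64k -t 100 -i 10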

There may be more receive bandwidth, but I've pushed it as far as I can without a quicker pair of machines to send the data. The max receive bandwidth obtained is very close to the sum of the max send bandwidths of the 2 machines used to send data.

I'll try the different Iperf settings when I get a chance.

Rob Murphy