VIA Chipset PCI latency issue


Pacinamac23

Senior member
Feb 23, 2001
585
0
0
The fact that this problem is still around is PATHETIC. The problem was epidemic on the KT133 boards; everyone and their mother was having problems with data corruption and so on. The fact that VIA hasn't fixed it yet says a lot about their company.
 

oldfart

Lifer
Dec 2, 1999
10,207
0
0
I keep my systems 100% VIA free. Never have to worry about all this patch nonsense to try and work around poor hardware.
 

Vette73

Lifer
Jul 5, 2000
21,503
9
0
Originally posted by: oldfart
Promise IDE controllers ride the IDE bus and are affected by the latency bug.


From what I have seen the Highpoint RAID is not as bad as the Promise. If you read the last review by Anandtech, you would see the performance of the Highpoint was about the same across all boards, but compare that to the Promise boards and it is all over the place.

I run a Shuttle AK35GTR board and my RAID channel is very fast. My roommate had an MSI board with Promise and it was a pile of S__T.

Get a KT333 board with Highpoint and install the 19d patch and you will be rocking.

The best bang for your buck board right now is the Shuttle AK35GT2R

 

Pabster

Lifer
Apr 15, 2001
16,986
1
0
oldfart wrote:

"I keep my systems 100% VIA free. Never have to worry about all this patch nonsense to try and work around poor hardware."

Ditto :)
 

NicColt

Diamond Member
Jul 23, 2000
4,362
0
71
>The fact that this problem is still around is PATHETIC.

The problem is not around any more; it's only kept alive by the SiS FUD mongers who don't know any better.

>The problem was epidemic on the kt133 boards

The problem was so minor that even Anandtech's own tests showed the fastest SiS board could only tie VIA, problem and all.

>everyone and their mother was having a problem with data corruption and so on.

People who still think that this is a problem need to have their heads examined. The data corruption problem was not caused by VIA.

>The fact that VIA hasn't fixed it yet says a lot about their company.

The true fact is that VIA did officially fix it, even though it was so minor that no one would ever see any benefit from it in "real world" performance.

Plz let this die once and for all.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Well said, NicColt.

The TRUE problem here is that the PCI bus as we know it is more than saturated by everything we'd like to have on it.

So a chipset is either set up for good single device burst throughput at the expense of bandwidth distribution fairness among multiple cards - or the other way round.

No matter which way you tweak it, something IS going to suck.
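To put rough numbers on that saturation point, here's a quick sketch. The peak figure is the standard theoretical burst rate of a 32-bit/33 MHz PCI bus; the per-device demands are illustrative assumptions of a circa-2002 card mix, not measurements from this thread.

```python
# Back-of-the-envelope PCI bandwidth: why a shared 32-bit/33 MHz bus
# saturates. Peak figure is the standard theoretical burst rate; the
# per-device demands below are illustrative assumptions only.
CLOCK_HZ = 33_333_333   # conventional PCI clock
BUS_BYTES = 4           # 32-bit data path

peak_mb_s = CLOCK_HZ * BUS_BYTES / 1_000_000   # ~133 MB/s theoretical

demands_mb_s = {        # rough sustained demands for a hypothetical mix
    "ATA/100 RAID burst": 100.0,
    "100 Mbit NIC":        12.5,
    "PCI sound card":       0.5,
}
total = sum(demands_mb_s.values())
print(f"peak ~{peak_mb_s:.0f} MB/s, combined demand ~{total:.0f} MB/s")
```

A single busy RAID card alone can approach the bus's real-world ceiling, which is exactly why the arbitration tradeoff matters.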

regards, Peter
 

mechBgon

Super Moderator / Elite Member
Oct 31, 1999
30,699
1
0
Are VIA-based motherboards unstable? Mine have been quite stable under harsh loads for days on end. There may be performance bugs, but stability? I guess everyone's mileage may vary...

It's worth noting that Intel has their own pet performance bug, a problem with their ICH2 which limits the PCI bus to 90MB/sec. This hub is used in i845/i845D, i850 and i850e (edit: as Peter points out, the i845/i845D is not affected, my bad). According to their documentation, they have no plans to try to fix it. Nobody's perfect, eh? ;) Not that it is likely to make the board a bad board, unless you're addicted to benchmarks for their own sakes.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
That's not an ICH2 problem. It's an i850 northbridge problem: it doesn't keep up in keeping the Hublink FIFO filled for outgoing DMA. This starves the PCI bridge in the ICH2, which subsequently has no choice but to take bus ownership away from the active PCI bus master until more data arrive from upstream.

ICH2 combined with other northbridges (845 series) doesn't exhibit the problem.

regards, Peter
 
Dec 18, 2001
82
0
0
I think mechBgon makes a very good point. I think we get a bit too wrapped up in benchmark results sometimes. My major concern in starting this thread was the stability issue. Given all the potential bottlenecks in a system, it stands to reason that something isn't going to work perfectly. It does seem odd that VIA hasn't been able to completely correct this though, despite the length of time the problem has been around.
 

WetWilly

Golden Member
Oct 13, 1999
1,126
0
0
This is definitely not FUD. VIA has admitted that the southbridge PCI controller in ALL southbridges up to the VT8233 doesn't implement bus parking - something which a lot of high-throughput cards seem to use. The question that I haven't heard resolved was whether bus parking was a mandatory or optional part of the PCI 2.1 specs. I know there were people hunting this one down since I'd heard grumblings elsewhere about suing VIA for misrepresenting PCI 2.1 compliance in their chipsets. Regardless, it seems to be utter stupidity on VIA's part not to implement it. If Intel (and obviously SiS and AMD) implemented it in their PCI controllers, why the heck didn't VIA do it? Worse, they've ignored this for years and done damage to the Athlon's reputation since an awful lot of people don't differentiate between CPUs and chipsets when they have problems.

BTW, VIA has admitted the issue will be addressed/fixed (depending on your religion on the issue) with a revised PCI controller in the VT8235 southbridge. If there's not an issue, I don't know why they'd be revising the PCI controller.
 

mechBgon

Super Moderator / Elite Member
Oct 31, 1999
30,699
1
0
Peter, re-reading the PDF file on the subject, I see that you are correct, this affects the MCH in i850 and i850e, and the ICH2 is merely the victim. ;)
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
WetWilly, bus parking is an optional feature, even in PCI 2.2. PCI bus arbiters may or may not implement it. The difference is that on an idle bus with no parking, all the address/data line states are undefined (because no one drives them), and any device requesting the bus will get to own it after one clock cycle. With parking, the device where bus ownership is parked drives the address/data lines, and requesting devices will win the bus after two cycles (yes, it takes longer!) - except if it's the device that already owns the previously parked bus anyway, in which case it's zero cycles.
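A minimal sketch of the clock counts Peter describes (a toy model of idle-bus grant latency only; the agent names are made up for illustration):

```python
# Toy model of idle-bus arbitration latency, per the clock counts in
# the post above: an undriven idle bus grants in 1 clock; a parked bus
# grants in 0 clocks to the parked agent but 2 clocks to anyone else.
def idle_bus_grant_latency(requester, parking_enabled, parked_at=None):
    """Clocks from REQ# assertion to bus ownership on an idle bus."""
    if not parking_enabled:
        return 1            # undriven idle bus: one clock for anyone
    if requester == parked_at:
        return 0            # requester already owns the parked bus
    return 2                # parked elsewhere: hand-over costs an extra clock

print(idle_bus_grant_latency("NIC", parking_enabled=False))    # 1
print(idle_bus_grant_latency("NIC", True, parked_at="NIC"))    # 0
print(idle_bus_grant_latency("NIC", True, parked_at="RAID"))   # 2
```

Note how the no-parking case is the fair one: every requester pays the same single clock.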

VIA didn't "admit" anything, they just said the reason for those cards not working is that they rely on bus parking being done. It's the cards' fault, it's just that VIA identified it. Any PCI device must be ready to cope with a truly idle PCI bus anytime. This is not at all optional.

There's plenty of PCI bridge gear out there that doesn't do bus parking, even in the high end 66-MHz/64-bit area. Cards that rely on it are broken by design, no point in kicking VIA for their choice. The butt to kick is in the opposite corner.

Yes, VIA is being nice to the broken cards out there and will apparently implement bus parking in the 8235. Praise them for that, after all they're changing their previously correct chip to allow other people's stuff to remain faulty and work nonetheless.

regards, Peter
 

WetWilly

Golden Member
Oct 13, 1999
1,126
0
0
Peter,

The difference is that on idle bus condition with no parking ...

That's precisely the point. VIA's arbiter isn't waiting for an idle bus condition to deassert a device's GNT#; it deasserts when the latency timers expire. With this scheme, it's blatantly obvious that high bandwidth devices are at the mercy of the latency timers, which makes it even more inexcusable that it's taken them YEARS to come out with the latency patches. In the meanwhile, users who experience this very real limitation are referred to as anything ranging from "VIA bashers" to delusional. Also, didn't it enter anybody's head at VIA that bus parking was there for high bandwidth devices? What was their thinking process? "Let's see, I'm implementing a PCI bridge, but I'm not going to implement that optional portion of the spec that facilitates high bandwidth transfers, even though this is for a system board chipset. And the fact that Intel implemented it is totally irrelevant, too."

Cards that rely on it are broken by design

Why are they broken? It's part of the PCI spec, and every system board PCI bridge/controller VIA competes with supports it. VIA isn't making add-in cards - they're making system chipsets, and it seems the onus for broad compatibility ought to be on them. I still find it utterly inconceivable that VIA hasn't implemented bus parking yet, regardless of PCI specs, because it would have removed virtually all complaints about VIA southbridges and ultimately been in VIA's own self-interest. This may be news to VIA's marketing department, but VIA isn't Intel, and everybody isn't going to put themselves through the grief of figuring out where/how VIA deviated from the industry standard PCI implementation, which is definitely Intel's. It's not like VIA is great at documenting this stuff, either.

BTW, accurate or not, I haven't ever heard a card called "broken" when it works perfectly on Intel, AMD and SiS chipsets but fails on a VIA chipset. It's usually the VIA chipset that's called "broken."

There's plenty of PCI bridge gear out there that doesn't do bus parking, even in the high end 66-MHz/64-bit area

The dedicated PCI bridges' datasheets I've seen would indicate that far more PCI bridges implement it than don't. But the issue isn't whether plenty of PCI bridge gear do/don't do it, it's that:

1) lots of high bandwidth PCI cards do use bus parking, and
2) more significantly, Intel bridges support it and have for a long time. If VIA is selling PC chipsets that compete with Intel, and Intel implements a non-proprietary superset of the PCI specs, it seems like common sense that VIA would implement Intel's superset. Obviously it's not common sense to VIA.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
WetWilly, you're confusing matters.

Bus parking or not is NOTHING to do with whether or not the PCI arbiter actively takes away bus ownership from a bus master mid-transfer. Nothing. That's two entirely different matters.

Bus parking or not makes a difference WHILE the bus is idle; it's nothing to do with WHY it is. Cards that can't cope with the AD lines floating during the bus idle phase are broken, ain't no discussing that. It's also nothing to do with the performance level of a certain card. A non-parked bus also benefits multi-card situations - you get 1-clock arbitration for everyone from a non-parked bus, vs. 2-clock arbitration from a bus parked elsewhere (0 for the card where it's parked). Again, it's a choice of fairness vs. single-card throughput.

Now to the arbitrating scheme - PCI bridges that take bus ownership away from a bus master that transfers for long periods may limit that one device's throughput, but in turn they maintain a certain level of fairness. Question for you: why do you think Intel implemented exactly that feature in the 8xx ICH (after they attempted to in the i440BX but then didn't document its existence)? (Refer to the ICH datasheet, "MTT Multi Transaction Timer" register, if you're curious.) Answer: because quite a few kinds of cards can't cope with having to wait too long (NICs, sound and TV cards, any other kind of realtime equipment). The only thing to complain about is that most BIOSes did (and some still do) set the VIA PCI bridge up to very short allowed burst periods. This is easily set up differently in software (that's what VIA's PCI performance patch does, no more, no less), so what?
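The burst-length vs. fairness tradeoff can be sketched with a toy arbiter model. Assumed here: one clock of re-arbitration overhead per grant and a simple round-robin between masters; the numbers are illustrative, not measured on any chipset.

```python
# Toy model: an arbiter that forces re-arbitration after `burst_cap`
# clocks (in the spirit of a latency timer / MTT limit). Each grant
# costs one clock of turnaround, so short caps waste more clocks on
# overhead, while long caps make the other masters wait longer.
def simulate(burst_cap, masters=2, total_clocks=100_000, turnaround=1):
    slot = burst_cap + turnaround        # clocks per grant incl. overhead
    grants = total_clocks // slot
    useful = grants * burst_cap          # clocks actually spent transferring
    worst_wait = (masters - 1) * slot    # worst-case wait before re-grant
    return useful / total_clocks, worst_wait

for cap in (8, 32, 128):
    eff, wait = simulate(cap)
    print(f"burst cap {cap:3d} clocks: bus efficiency {eff:.3f}, "
          f"worst-case wait {wait} clocks")
```

Raising the cap raises total efficiency but also the worst-case latency a realtime card sees before its next grant, which is exactly the tension Peter describes.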

Re broken cards vs. broken chipsets ... try running the SB!Live (original series, not 5.1) on an SiS chipset. You'd be surprised about how it doesn't work there either. This is a hardware bug - Creative fixed it in the 5.1 series.

On the bottom line, we have one hardware bug at Creative, none at VIA, and a pointless discussion on whether single card burst throughput or multi-card fairness is to be preferred. Pointless it is because when total bus bandwidth hits the ceiling, you're just managing the lack of it this or that way. Either way something IS going to suck then, you choose whether this is burst throughput or multi-card fairness.

regards, Peter
 

onelin

Senior member
Dec 11, 2001
874
0
0
I'm still running an original SBLive! Value myself; it works just fine on my VIA chipset boards (Asus P3V4X, now Asus A7V333). It was an issue on Creative's side which seems to be fixed in the recent cards on the market. Want to know why I think so? The only time I ever encountered the BSOD mayhem in Windows (98 or 98SE at the time) was on an Intel 440BX chipset (Abit BX6 rev 2.0, I believe) running a Celeron 400 maybe ~3 years ago, and it had to do with Creative's drivers. Just thought I'd share.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
YMMV ... the issue was a glitch on a PCI signal line. Depending on chipset and mainboard design, the glitch may or may not confuse the chipset. So on mainboards that happen to dampen this particular signal more than others, the SB!Live will work even with a chipset that is susceptible to being disturbed by this glitch.
Besides, the VIA recommended "alternate" PCI bridge setup to be used when an SB!Live is present has found its way into many BIOSes by now. You just shouldn't install the "PCI performance patch" on such a setup.

regards, Peter
 

Insane3D

Elite Member
May 24, 2000
19,446
0
0
Peter -

I was under the impression that the SB Live/686B bug was also due to a flawed ACPI header implementation in Creative's drivers? Regardless, lots of good info in this thread! :)
 

WetWilly

Golden Member
Oct 13, 1999
1,126
0
0
Peter,

Before I reply, I do need to correct something - when editing my last post, I left off one part. After VIA deasserts GNT#, they apparently "park" the bus at the CPU. This isn't true bus parking according to the PCI spec. If you've got a PDF to confirm or deny this, I'd like to see it. That said,

Bus parking or not is NOTHING to do with whether or not the PCI arbiter actively takes away bus ownership from a bus master mid-transfer. Nothing. That's two entirely different matters.

Here is a quote from the PCI Local Bus Specification 2.2 (Dec 1998):

Implementation Note: Bus Parking

When no REQ#s are asserted, it is recommended not to remove the current master's GNT# to park the bus at a different master until the bus enters its Idle state. If the current bus master's GNT# is deasserted, the duration of the current transaction is limited to the value of the Latency Timer. If the master is limited by the Latency Timer, it must rearbitrate for the bus which would waste bus bandwidth (my emphasis). It is recommended to leave GNT# asserted at the current master (when no other REQ#s are asserted) until the bus enters its Idle state. When the bus is in the Idle state and no REQ# are asserted, the arbiter may park the bus at any agent it desires.


From that quote:

1) I'd say bus parking is obviously affected by the bus' Idle state
2) Removing the current master's GNT# and bus parking are obviously interrelated

Again, it's a choice of fairness vs. single-card throughput.

Exactly. VIA clearly chose not to follow the spec's recommendation, which explicitly says that not following it wastes bandwidth. No one in the VIA cheering section has offered a good excuse for why it has taken several YEARS to implement a latency patch when the PCI spec from 1998 explicitly states If the master is limited by the Latency Timer, it must rearbitrate for the bus which would waste bus bandwidth. Like you said, it's a choice. If you have cards that need high throughput, don't choose VIA.

Re broken cards vs. broken chipsets ... try running the SB!Live (original series, not 5.1) on an SiS chipset.

My definition re broken cards vs. broken chipsets was I haven't ever heard a card called "broken" when it works perfectly on Intel, AMD and SiS chipsets but fails on a VIA chipset. Obviously the SBLive! would be broken by that definition. I'd be the first in line to call Creative cards broken.

On the bottom line, we have one hardware bug at Creative, none at VIA, and a pointless discussion on whether single card burst throughput or multi-card fairness is to be preferred. Pointless it is because when total bus bandwidth hits the ceiling, you're just managing the lack of it this or that way.

1) I've never said that VIA had a hardware bug; I've said VIA has made poor design decisions that ultimately cause users needless grief. BTW, if you think VIA is fixing this "issue" in the VT8235 southbridge out of the goodness of their hearts, think again. Guess which northbridge the VT8235 was designed for? Not the KT400, but the K8HTA. VIA had an awful lot of slack in the Athlon market; they were able to keep their customers from adopting SiS chips, and nVidia shot themselves in the foot with late, overpriced chipsets and the very late appearance of the 220D and 415D chipsets. When it's Hammer time, they're starting out on equal footing with nVidia and SiS. Worse for them, all they're providing is an AGP controller and southbridge. nVidia has a far better reputation, and SiS will always be able to undercut VIA on cost. That doesn't leave VIA in a great spot, and they don't need "issues" with PCI performance hovering around when the competition is that tight.

2) This discussion isn't pointless because for YEARS, users have been lambasted when they've posted problems with high bandwidth transfers on VIA boards that don't happen with other chipsets. Check the forums, it's still happening now. I don't see why it's pointless either when users find that the same high bandwidth transfers that choke VIA chipsets don't choke the BX or other chipsets. In fact, based on the PCI specs, VIA chipsets behave exactly as should be expected.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
OK, we're getting somewhere ...

Regarding GNT# deassertion. What you emphasized in your quote from the PCI specification is of course true - this single device loses bandwidth - and I never said anything else. Again, you need to consider that there are two factors that make a usable PCI system (with multiple agents): bandwidth and fairness. The latter becomes increasingly important as the bus saturates. Several things are being done. The latency timer mechanism, by which the active master yields the bus by itself after a certain number of clocks, was introduced years ago (in PCI 2.1). At about the same time, arbiters became programmable to force the active master off the bus after a certain amount of time. Intel has had that since the i440BX (officially since the 8xx series), VIA has it, and other less recognized chipsets (the Cyrix MediaGX, for example) have it too.

The quoted recommendation means one shouldn't preemptively grant the bus to someone else while the current transfer is in its finishing state ... but this is only recommended if no one else wants the bus anyway (no REQs asserted). Good idea, definitely, because there is a certain chance that this device might want to follow up right away, and if no one else has work to do, then why not stay with the current device. Whether or not the VIA chipsets do this would have to be examined. However, this is not the _idle_ bus parking we're discussing here. The one important sentence for us (and for the affected sound cards, apparently) is the last one in that paragraph. The keyword is "may". By the wording conventions, this means the arbiter doesn't have to park the bus at any PCI agent at all - and apparently it's this completely idle bus state that some cards can't cope with.

The only valid complaint against VIA here is that they let their OEMs get away with BIOSes that leant way too far towards the fairness side until recently. Being a BIOS engineer myself, I feel ashamed for the poor job these people made. One should think that with a prototype on your desk and a chipset manual on your knees, you've got all it takes to put a couple of brain cells into optimizing your register setup ... doesn't happen everywhere it seems.

Intel's chipsets (i850 aside) approached from the other side - they've always had good bandwidth (and no programmability hence no BIOS engineer wannabees screwing it up either :)), and added the MTT fairness feature much later.

Oh, and start queuing up for calling the SB!Live broken. Please note, only the original SB!Live is affected - everything above and below was and is OK as far as one can tell from the outside.

regards, Peter
 

WetWilly

Golden Member
Oct 13, 1999
1,126
0
0
Peter,

The only valid complaint against VIA here is that they let their OEMs get away with BIOSes that leant way too far towards the fairness side until recently. Being a BIOS engineer myself, I feel ashamed for the poor job these people made. One should think that with a prototype on your desk and a chipset manual on your knees, you've got all it takes to put a couple of brain cells into optimizing your register setup ... doesn't happen everywhere it seems.

This is the fundamental problem. Obviously we could (and from the looks of it, we have) go back and forth repeatedly about about VIA's PCI implementation. What astounds me about VIA is that you CAN'T easily find out exactly what they're doing. With Intel, you've at least got a shot at finding a relevant datasheet or errata document. I'm not a BIOS engineer, but if I was looking at the PCI specs about bus parking I'd be going through the following thought process.

1) Read "If master is limited by the latency timer ... waste bus bandwidth" in the specs.
2) Make decision that bus parking must be evaluated for implementation
3) Look at what Intel implemented in their chipsets
4) Document whatever gets implemented for developers

VIA apparently ignores step 4. The killer is that they have a knack for doing this even when it hurts VIA's marketing and reputation. Look at the 4-way memory interleave option back on the Apollo Pro 133A chipsets. Considering the Asus P3V4X was the first board to take advantage of it and their competitors didn't, it looks like Asus, not VIA, showed the practical use of the option. This, even though the chipsets developed a reputation for sub-BX performance until 4-way interleaving reduced a good part of that deficit.

The other thing is that problems arising from an undocumented, alternative implementation that's within the specs, but has issues that aren't or can't be easily and quickly fixed due to the lack of clear documentation, would be called "broken" by most typical users. As a BIOS engineer, you can appreciate the nuance between those two things; a user with poor RAID performance accompanied by occasional data corruption won't. And I'll repeat for the umpteenth time - there's NO excuse for the number of years it took to get the latency patches from VIA. The first patches didn't even come from VIA. It wasn't until the "hacked" patches got publicity that suddenly VIA shows up with their own.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
One more time ... there is no occasional data corruption being caused by VIA's chipsets. Back when everyone jumped on it and made copy-and-verify tests with gigabytes of data, no one bothered to look at the statistical error rate of the HDDs themselves - which is typically one uncorrectable error in 10^13 bits ...
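For scale, a quick sketch of what that error rate implies. This assumes the ~1-in-10^13 uncorrectable-bit figure cited above and that every copied byte is read twice (once to copy, once to verify); both are simplifying assumptions for illustration.

```python
# Expected drive-level read errors in a copy-and-verify run, using
# an uncorrectable rate of ~1 error per 1e13 bits read. The point:
# big enough test runs will occasionally see errors from the disks
# alone, chipset or no chipset.
BIT_ERROR_RATE = 1e-13          # uncorrectable errors per bit read

def expected_errors(gigabytes):
    bits_read = gigabytes * 1e9 * 8 * 2   # read once to copy, once to verify
    return bits_read * BIT_ERROR_RATE

# Verifying a 100 GB copy reads ~1.6e12 bits:
print(f"{expected_errors(100):.2f} expected errors")
```

Roughly one run in six over 100 GB would hit a drive-level error, which is plenty to explain scattered "corruption" reports without blaming the chipset.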

Regarding the performance configuration not being done right - I totally agree ... but it's my fellow BIOS engineers at the mainboard companies who are to blame here. Got an un-configurable Intel chipset? OK, be lazy, no one will notice. Got a chipset that allows (and requires making) certain choices? Then you better sit down and do your job RIGHT. Hats off to the ASUS folks for getting it right :)

The information is and has been there - VIA do hand out the full datasheets, all you need to do to get them is name a good reason and sign an NDA. I've just been looking into the (now public) MVP3 datasheet from back in 1998. It's all in there - SDRAM 4-bank interleave, PCI tweaking options, everything. Read and use, dear fellow BIOS engineers.

btw, this document shows that the active force-into-arbitration strategy can be disabled altogether (pg. 31, device 0 reg. 75) ... need I say more?

regards, Peter