We've touched on this topic many times.
On the matter of thermal equilibrium for overclocks, the THG "C2D Temperature Guide" notes that ORTHOS at "priority 9" puts the processor under 88% of the stress experienced under Intel TAT. It suggests that 10 minutes of ORTHOS will give you a fairly accurate temperature profile.
This, of course, doesn't get you to stability testing. For the process of adjusting voltage and speed, some suggest running ORTHOS for two hours between adjustments to see if stability is likely or possible.
Various opinions, mine included, suggest running ORTHOS for six, eight, twelve, sixteen or twenty-four hours to guarantee that a setting is stable. Yet I have occasionally seen a nine-hour run suddenly terminate with a "STOPPED ERROR."
This doesn't mean that an eight-hour ORTHOS run is a bad rule, but it suggests what might be a Poisson-distributed failure rate under stress-testing. That distribution piles its mass near zero at the left end of the axis, with a long tail to the right, and has been used to model things like the flaws in a bolt of cloth at a textile mill, the interarrival times of customers at a grocery-store checkout counter in a queueing model, and other phenomena.
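If errors really do arrive as a Poisson process, the chance that a run of a given length finishes clean has a simple closed form: exp(-λt). A minimal sketch, where the error rate λ is a purely hypothetical number chosen for illustration:

```python
import math

def p_zero_errors(rate_per_hour: float, hours: float) -> float:
    """P(no errors in `hours` of testing) if errors arrive as a
    Poisson process with the given mean rate per hour."""
    return math.exp(-rate_per_hour * hours)

# Hypothetical: a marginal setting that averages one error per 20 run-hours.
lam = 1 / 20
print(p_zero_errors(lam, 8))   # chance an 8-hour run finishes 0,0 (≈ 0.67)
print(p_zero_errors(lam, 24))  # a 24-hour run is a much harder bar (≈ 0.30)
```

This is why a nine-hour failure after many clean eight-hour runs isn't surprising: under this model a clean run is only ever probable, never guaranteed.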
I would also suggest a sort of Bayesian logic or conditional probability for stress-testing: "If it completes one hour with 0 errors, 0 warnings, the probability is 'high' that it will go two hours at 0,0." "If it completes four hours at 0,0, the probability is 'high' that it will go eight hours at 0,0," and so forth. Maybe we could measure exactly what those conditional probabilities are, but the results would be so specific to the hardware choices, the motherboard settings themselves, the accuracy of room-ambient measurements and so many other things that it would seem like a waste of effort. So we settle for concepts like "high" and "low."
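That conditional logic can be made concrete. Under the same hypothetical Poisson assumption the error process is memoryless: the probability of reaching eight hours clean, given that the run is already clean at four hours, equals the probability of a fresh four-hour run finishing clean. A sketch, again with an invented rate:

```python
import math

def p_clean(rate_per_hour: float, hours: float) -> float:
    # Unconditional probability of a clean run of the given length.
    return math.exp(-rate_per_hour * hours)

def p_clean_given(rate_per_hour: float, total: float, survived: float) -> float:
    # P(clean at `total` hours | already clean at `survived` hours),
    # by the definition of conditional probability.
    return p_clean(rate_per_hour, total) / p_clean(rate_per_hour, survived)

lam = 1 / 20  # hypothetical error rate, for illustration only
# Memorylessness: having survived 4 hours, the next 4 look like a fresh 4.
print(p_clean_given(lam, 8, 4))  # equals p_clean(lam, 4), ≈ 0.82
```

The numbers here are invented, but the shape of the argument matches the "if it went four hours, it will probably go eight" intuition.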
Even so, "probability" is always a gamble (there's something almost funny and redundant about putting it that way), which suggests why statistical methods have come late to acceptance in the court system for both criminal and civil litigation. Judges have trouble accepting that "Your honor, we're 99.9% confident that the bank records show the defendant stole the plaintiff's briefcase from the ATM booth" is "beyond reasonable doubt."
So I can accept a 13-hour ORTHOS run as "proving" the stability of my motherboard settings. I can also understand why someone might want to impose an arbitrary rule on the "Quad OVerclock Thread!" that "all results must be validated with a 24-hour ORTHOS or Prime95 test."
But I won't insist on it, and I won't suggest that all the data thus far is "unreliable" because of different test standards.