After weeks of searching the web, I came to one conclusion

devilchrist

Member
Feb 11, 2008
After searching for days on end about what causes in-place large FFT failures, I found no conclusive data pointing to anything.

The reason I've been searching is that I've tested small FFTs on my E650 at 3.7GHz on 1.48V for 9 hours without errors. It also passed Blend and Memtest86+ for 6 hours each.

Yet it fails the in-place large FFT test at random times: sometimes immediately, sometimes after 4 hours, sometimes after 1 hour. I've also run it for 10 hours without error before.

I can't seem to pin down exactly what's causing the error in Prime95.

On a side note:

I've read that in-place large FFT uses the same memory addresses for the entire duration, stressing the link between the CPU and the RAM.

Now, if my MB were the weak link, wouldn't it fail pretty much immediately every time?

The only thing I can think of is that it randomly picks a specific but different address range on the RAM in different tests, so certain chips on the RAM aren't able to handle the 920MHz speed.

Anyone else know a secret they'd like to share about in-place large FFTs?
 

nerp

Diamond Member
Dec 31, 2005
Are you seeing system instability anywhere else? Any crashing, BSODs or program errors? Ever get display glitches?
 

pm

Elite Member Mobile Devices
Jan 25, 2000
FFT is "Fast Fourier Transform" - it's a mathematical technique for computing the discrete Fourier transform with a "fast" algorithm that fits well with the way microprocessors work. FFTs and DFTs are used in a variety of applications, such as signal processing and solving differential equations. "In place" means that it doesn't require additional buffer memory for the calculation - rather than keeping the original data and making a copy as it executes, it executes in the same memory space. This reduces the memory overhead required. "Large" could refer to the dataset size or the integer size - this is a bit more vague and I'm not sure what they mean by it, although I'd guess it's the size of the dataset.
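To make the "in place" idea concrete, here's a minimal sketch of a radix-2 in-place FFT in Python. This is an illustration of the general technique only, not Prime95's actual code (Prime95 uses hand-tuned assembly and very different FFT lengths); the point is just that the input buffer is overwritten, with no second output buffer allocated:

```python
import cmath

def fft_in_place(a):
    """Radix-2 Cooley-Tukey FFT computed in place: the input list `a`
    (length must be a power of two) is overwritten with its DFT, so no
    second output buffer is allocated."""
    n = len(a)
    # Bit-reversal permutation: reorder elements so the butterfly passes
    # below can combine adjacent pairs, still within the same buffer.
    j = 0
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Butterfly passes: each pass updates pairs of elements in place.
    length = 2
    while length <= n:
        w_step = cmath.exp(-2j * cmath.pi / length)
        for start in range(0, n, length):
            w = 1.0 + 0j
            for k in range(start, start + length // 2):
                u = a[k]
                v = a[k + length // 2] * w
                a[k] = u + v
                a[k + length // 2] = u - v
                w *= w_step
        length <<= 1
    return a
```

Because every read and write hits the same buffer, the same physical memory region is exercised over and over for the whole run, which is what makes the in-place test a memory-path stressor.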

As far as what is causing your failure... there are too many variables to know. I'd try to margin each component separately... like lowering the NB voltage to see if it fails faster, or raising the NB or memory voltage to see if the failure goes away. But do it one at a time.

Figure out which component is causing the problem by stressing each component individually until you find what causes it, or by relaxing them all one by one until you see the problem go away. A failure in a large in-place FFT doesn't really tell you much - you just know you have a chunk of code that's failing some speedpath somewhere. As you well know, it could be anything in the system that's causing it. You need to narrow down the problem by looking at each component separately from the others.
 

devilchrist

Member
Feb 11, 2008
Originally posted by: nerp
Are you seeing system instability anywhere else? Any crashing, BSODs or program errors? Ever get display glitches?

I think out of the 20+ large FFT tests I ran, the only time I crashed was when I accidentally set the CPU voltage too low. Other than that, it just gets the usual "Rounding was 0.5, expected less than 0.4" error, something along those lines.

Originally posted by: pm
Figure out which component is causing the problem by stressing each component individually until you find what causes it, or by relaxing them all one by one until you see the problem go away. A failure in a large in-place FFT doesn't really tell you much - you just know you have a chunk of code that's failing some speedpath somewhere. As you well know, it could be anything in the system that's causing it. You need to narrow down the problem by looking at each component separately from the others.

I've tried everything from raising the FSB voltage to 1.5V to raising the NB up to 1.6V. None of it produces any solid result pointing to a culprit. Error times are random: sometimes it will test for an hour, sometimes 10 minutes.

Originally posted by: pm
"In place" means that it doesn't require additional buffer memory for the calculation - rather than keeping the original data and making a copy as it executes, it executes in the same memory space. This reduces the memory overhead required.

Doesn't what you're saying mean that when the test runs, it sticks to only the memory it needs? I believe it uses about 150MB. And does that mean it uses only a portion of the RAM and stays on that portion?
If so, when it picks a less stable RAM chip it will be more likely to fail, whereas when it picks a better chip it will not fail as quickly, or not at all. BUT...
When I keep my CPU at 3.2GHz (460x7) and keep everything else the same, it passes the in-place large FFT. That result points to the CPU... but the CPU passes small FFTs...

Can it be that in-place large FFT stresses the CPU more than the small FFT?


Originally posted by: pm
"Large" could refer to the dataset size or the integer size - this is a bit more vague and I'm not sure what they mean by it, although I'd guess it's the size of the dataset.
I've read in quite a few places that the "large" dataset does not fit into the CPU cache and must be accessed from RAM, so it stresses the CPU-to-RAM interface the most. But that would implicate the NB, and increasing the NB voltage does nothing for stability; it doesn't change the random error occurrence.
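To put rough numbers on that cache argument, here's an illustrative sketch. The FFT lengths, the 4MB L2 figure, and the 8-bytes-per-point assumption are my own placeholders for a Core 2-class chip, not Prime95's exact values:

```python
# Rough estimate of why a "large" FFT's working set spills out of cache.
# The L2 size, FFT lengths, and bytes-per-point below are illustrative
# assumptions, not the exact values Prime95 uses.

L2_CACHE_BYTES = 4 * 1024 * 1024   # e.g. a Core 2-class shared L2 cache
BYTES_PER_POINT = 8                # one double-precision value per FFT point

def working_set_bytes(points):
    """In-place FFT working set: just the data buffer, no extra copy."""
    return points * BYTES_PER_POINT

def fits_in_l2(points):
    return working_set_bytes(points) <= L2_CACHE_BYTES

for fft_len_k in (32, 256, 1024, 4096):   # FFT length in units of 1024 points
    points = fft_len_k * 1024
    where = ("fits in L2 (mostly stresses the CPU core)" if fits_in_l2(points)
             else "spills to RAM (stresses the CPU-to-RAM link)")
    print(f"{fft_len_k:>5}K FFT: ~{working_set_bytes(points) // 1024}KB -> {where}")
```

Under these assumptions a small FFT stays inside the cache (so only the core and caches are exercised), while a large one forces constant traffic across the FSB and NB to the RAM, which fits with large FFTs failing while small FFTs pass.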

I should be happy that I can stabilize at 3.6GHz on my first OC attempt. But I don't like unanswered questions and unsolved problems.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
What's your PSU?

PS. Welcome to AT
 

devilchrist

Member
Feb 11, 2008
Well, there are a few things I can still try:

1. Try different RAM slot positions on the mobo.
2. Try removing the five 120mm case fans to reduce power draw.
3. Remove the 120mm CPU fan from the MB header and connect it directly to the PSU.
4. Switch to PC2-8500; I have Corsair 8500C5D modules on the way, so we'll see if that helps any.

I have a 600W, 80%-efficiency Antec power supply, so I don't think I'm underpowered, even with the six 120mm fans.