JEDEC Standard

Gamingphreek

Lifer
Mar 31, 2003
Not quite sure if this belongs in highly technical, but it is kinda borderline.

The fastest JEDEC standard for DDR is PC3200 (DDR400). We have memory that EASILY reaches DDR433, 466, 500, 533, and 566, and some can crack 600.
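The numbers above mix two naming schemes. A quick sketch of the conversion (my illustration, not part of the original post): the PC rating is the peak bandwidth of a 64-bit module, i.e. the transfer rate in MT/s times 8 bytes per transfer.

```python
def pc_rating(mt_per_s):
    # Peak bandwidth in MB/s for a 64-bit (8-byte-wide) DDR module.
    # Commercial names round this (e.g. DDR266 is sold as "PC2100").
    return mt_per_s * 8

print(pc_rating(400))  # 3200 -> hence "PC3200" for DDR400
```

So "3200" and "400" name the same JEDEC speed grade from two angles: bandwidth versus transfer rate.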

Why don't they raise the standard? Is there some way they determine it that is more rigorous than just overclocking?

Again, if this doesn't belong in Highly Technical I apologize, but it's borderline so I wasn't sure, and I'm sure there is some technical substance in the standard.

-Kevin
 

Peter

Elite Member
Oct 15, 1999
That's because all the toying around with DDR-1 above 200 MHz is deep, deep into YMMV territory, and the limitations of doing so make it rather useless as a common standard. Remember that the JEDEC PC3200U standard already specifies a single DIMM at a raised voltage, because signal integrity is fairly far down the drain at /that/ speed?
 

Gamingphreek

Lifer
Mar 31, 2003
What is YMMV?

As for the rest of the post: so it's just pointless to standardize beyond that? Nothing else would support it (CPU FSBs, etc.), and signal quality is already being pushed at PC3200?

-Kevin
 

sao123

Lifer
May 27, 2002
YMMV means Your Mileage May Vary.

I.e., at speeds above PC3200, the chip is under so much stress from the raised voltage that it is much more likely to fail sooner than a chip running at the standard speed.
 

Gamingphreek

Lifer
Mar 31, 2003
Well, couldn't the same be said for PC3200 and PC2700? They are both above PC1600, yet manufacturers still make them, so why don't they just give the chips more voltage?

-Kevin
 

BEL6772

Senior member
Oct 26, 2004
ICs are very complicated devices. If all it took to make a chip go fast was to raise the voltage supply, I'm sure we'd all be talking about how to get more Kilovolts into our computers:D

Actually, as circuits get smaller and smaller, they have a smaller and smaller voltage ceiling. There was a large family of logic ICs that ran at 5 V. You can still find those, but more modern chips run at 3.3 V; those are mostly general-purpose logic chips. When you start optimizing for speed, you start looking at really small devices and much lower voltages.

One reason for the low voltage is slew rate: it takes less time for a given device to swing from 0 V to 1.5 V than it does to swing from 0 V to 3.3 V.

Another reason has to do with the physical properties of the devices. Transistors are at the heart of almost all ICs. Remember that smaller devices are faster, so fast memory is made of the smallest transistors possible. But the smaller a transistor is, the lower its breakdown voltage; in other words, a smaller voltage is enough to break down a smaller transistor.

Yet another problem with raising the voltage is power dissipation. The more 'juice' you give a chip, the hotter it gets. Once the temperature gets too high, performance gets erratic. Raise the temperature even more and you risk permanent damage.
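The power-dissipation point can be put in rough numbers. Dynamic switching power in CMOS scales approximately as P = C·V²·f, so raising the voltage hurts quadratically. A sketch with made-up round figures (the capacitance value is purely illustrative, not from the thread):

```python
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Approximate dynamic switching power in watts: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

C = 1e-9   # hypothetical effective switched capacitance, 1 nF
f = 200e6  # 200 MHz, DDR400's base clock

p_stock = dynamic_power(C, 2.5, f)  # 2.5 V, the JEDEC DDR supply voltage
p_oc    = dynamic_power(C, 2.9, f)  # a typical overvolted setting

print(f"stock: {p_stock:.2f} W, overvolted: {p_oc:.2f} W")
print(f"ratio: {p_oc / p_stock:.2f}x")  # (2.9/2.5)^2, about 1.35x the heat
```

Even a modest bump from 2.5 V to 2.9 V costs roughly a third more heat before the frequency is raised at all, which is part of why "just add voltage" doesn't scale as a standard.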

So, the standards process looks at what is currently possible. They look at yield, performance, and device characteristics. They want standards that are realistic and have a high probability of yielding devices that work. They could 'specify' memory a hundred times faster than anything on the market now, but nobody would be able to manufacture it.

By selecting the best chips rolling off the assembly lines, manufacturers can meet the current specs. Some chips can run even faster, hence the YMMV comment above. It is pointless (right now, anyway) to set up a spec beyond what we already have. Of course, manufacturers are hard at work making smaller, faster, better chips. When the technology matures, a standard will emerge that will keep all the devices happily working together.