My MOB Design

FreemanHL2

Member
Dec 20, 2004
33
0
0
Here is a design for a dual-processor motherboard I have been thinking about:

http://n.1asphost.com/FreemanHL2/DESIGN%201.jpg

The beauty of this design is that you only need one processor in the machine; the second is an optional upgrade. The new control unit can access the RAM at any time through a DIRECT pin connection and send data to the CPU instantaneously. Because the control unit is now monitoring the information going in and out of the CPU, it is only logical to combine it with the northbridge to create a new type of chipset that monitors and directs all data flow through the PC.

I also have a new concept for memory. I think that we should get rid of current memory and shift to "direct pin memory" by integrating the RAM directly onto the board using pins (like a CPU). This would greatly increase the speed of the memory, and placing it directly next to the combined northbridge would ensure direct data communication by eliminating the need for a front-side bus, thus tackling current memory bottlenecks. The memory itself is simply current memory modules embedded into a heat sink (15 modules in total), giving the RAM greater protection and heat management than current solutions. And it obviously has pins instead of the current gold edge connectors. The idea is NOT that the pins make it faster, but that they allow FASTER ACCESS TO THE NORTHBRIDGE.
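To put rough numbers on the "eliminate the bus hop" argument, here is a back-of-the-envelope sketch in Python. All latency figures are made up purely for illustration, not measured values:

```python
def access_latency_ns(hops):
    """Total memory access latency as the sum of each hop's delay.
    All figures passed in below are hypothetical illustrative numbers."""
    return sum(hops)

# hypothetical path today: CPU -> FSB -> northbridge -> DRAM
with_fsb = access_latency_ns([10, 15, 50])     # 75 ns total

# hypothetical direct-pin path: CPU -> northbridge -> DRAM
without_fsb = access_latency_ns([15, 50])      # 65 ns total
```

Whatever the real numbers are, dropping a hop can only remove its share of the total, which is the whole appeal of the idea.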

I also have this idea of a multi-purpose PCI card that handles the printer, sound, USB and PS/2 ports, saving room on the board and making room for the second CPU (after all, none of these ports transmit data faster than standard gold pin connectors can transfer information). The board also has 2 power connectors to allow both CPUs to access enough power and prevent an overload on any transistors. I don't know much about motherboard architecture, so the rest of my diagram is very primitive to say the least. However, logic dictates that this concept can be achieved.

I reckon one memory chip (15 modules) could be 2.25GB and both CPUs could probably reach 5GHz.
 

Bassyhead

Diamond Member
Nov 19, 2001
4,545
0
0
It seems that you are trying to accomplish too much at one time. The CPU, motherboard, RAM, and power design are all different areas. I would suggest you pick up a book on computer organization ("computer" as in the microprocessor), as most of the components in today's PCs are designed around the CPU. Honestly, I don't think any of your ideas are feasible. Companies like Intel likely spend hundreds of millions optimizing every little bit of their CPU designs.

I don't understand why you would want to remove the control unit from the processor and place it in the northbridge. The control unit, well, controls the operation of a processor. It is driven by software instructions, which are executed by the processor, so it makes no sense to put it in a separate chip.

I would be interested in hearing others' comments about putting memory in packages like processors use. I don't believe it makes much difference, though. Intel is perfectly content using a slot mechanism for PCI Express, which runs at a pretty high bandwidth. A large chunk of the pins on processor packages is allocated to power delivery; RAM doesn't use a whole lot of power, so that many pins wouldn't be necessary.
 

FreemanHL2

Member
Dec 20, 2004
33
0
0
Obviously none of these are feasible because... I'm not Intel, and I wouldn't know the first thing about MOB architecture. I was just sitting down yesterday wondering about our current memory bottlenecks and ways of getting around them. I thought I'd do a primitive diagram of some of the concepts I thought would work...

I think that MOBs need to be redesigned somehow to eliminate the front-side bus. After all, almost ALL of our latency occurs in the FSB. My intention was not to REMOVE the control unit, but rather to create ANOTHER control unit to coordinate the movement of data between both CPUs, which is similar to the job of the northbridge, so combining the two makes sense.

I have no doubt Intel knows what they are doing; I'd rather not touch the CPUs. I was thinking more of MOB design... and the massive amount of latency that occurs when data is going around the board.
 

Bassyhead

Diamond Member
Nov 19, 2001
4,545
0
0
Take a look at AMD's Athlon 64/Opteron line, then. These CPUs feature an integrated memory controller, eliminating some latency. Additionally, the socket 939 and 940 CPUs feature multiple HyperTransport links that allow each CPU to connect directly to another CPU over a HyperTransport link, eliminating any "coordinating" you have to do between CPUs.

I know there are other CPUs with similar features but can't remember them all. I know that Sun's SPARC CPUs also feature integrated memory controllers.
 

BEL6772

Senior member
Oct 26, 2004
225
0
0
Seems like what you're really after is a 'system-on-a-chip': one piece of silicon that contains the video core, memory, memory controller, I/O drivers, sound processor, and CPU.

IIRC, Intel played with the SoC idea before they abandoned it and launched the Centrino platform instead. AMD used part of the same idea and brought the memory controller onto the CPU, and that has done wonders for their systems' performance.

Even if you put everything onto a single piece of silicon, you would still have some type of 'FSB' to get all of the different sub-systems talking and sharing data. Any time you have multiple users on a data bus, you have to have some type of control system, which adds latency. Most modern systems use cache schemes to keep their CPUs from 'starving'.

If the FSB gets eliminated, you would have to replace it with a bunch of dedicated, independent busses. The problem would still arise that all of those fast, low-latency busses would have one common point (the CPU or some controller). You would still have to choose which bus to talk to and let the others sit there, waiting their turn. There would have to be a lot of traces on the MOB to support all the busses, which would add a lot of cost. I guess you'd have to serialize all the busses to keep things at least somewhat reasonable. Of course, then layout would be a nightmare, trying to route a slew of high-frequency busses around the board.
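The "multiple users on one bus need a control system, which adds latency" point can be sketched in a few lines of Python. This is purely illustrative; the device names and the one-transfer-per-grant round-robin scheme are made up to show where the waiting comes from:

```python
from collections import deque

def arbitrate(requests):
    """Round-robin bus arbiter: each requester gets the bus for one
    transfer per grant; everyone still queued accumulates wait cycles."""
    queue = deque(requests)            # device names wanting the bus
    schedule = []                      # (cycle, device) grant order
    waits = {dev: 0 for dev in requests}
    cycle = 0
    while queue:
        dev = queue.popleft()          # grant the bus to the head of the line
        schedule.append((cycle, dev))
        for other in queue:            # every other requester sits and waits
            waits[other] += 1
        cycle += 1
    return schedule, waits

sched, waits = arbitrate(["cpu", "gpu", "nic"])
# the last requester ("nic") stalls for two cycles before its grant
```

More requesters on the shared bus means more waiting for whoever is last in line, which is exactly the latency an arbiter imposes.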

The FSB issue is currently being worked on by 'big industry'. The result, so far, is the new PCIe bus. I'm sure more is on the way.

 

jagec

Lifer
Apr 30, 2004
24,442
6
81
I see a major problem with your motherboard design.

It's using a SiS chipset :p
 

FreemanHL2

Member
Dec 20, 2004
33
0
0
Originally posted by: jagec
I see a major problem with your motherboard design.

It's using a SiS chipset :p

No one asked for your opinion!:p

Anyway, I don't see any reason why the RAM and the northbridge cannot be placed next to each other and given direct access, and for that matter why the CPU and northbridge cannot be placed next to each other and given direct access.

I'm just talking theoretically, don't take any of this seriously.:D

The idea is to have ONE independent northbridge that acts as the core coordinator of data going between the RAM and the 2 CPUs, thus eliminating the need for any other buses.
 

BigPoppa

Golden Member
Oct 9, 1999
1,930
0
0
If you say "MOB" one more time, I'm gonna do something horrible to some poor unsuspecting person in a fit of rage.

WTF is a MOB? Man-on-base? MB is at least a semi-acronym of MotherBoard. And it uses one less letter!
 

elecrzy

Member
Sep 30, 2004
184
0
71
I don't really understand your MB design, but I can tell you this: the trend is to dump parallel signaling and adopt serial LVDS, like SATA for storage, HyperTransport for chip-to-chip links, PCIe for internal/external peripherals and connections to another PC, FB-DIMM for RAM, and USB/FireWire for external peripherals. Also, PS/2, FDD, and printer ports are going to go out too, leaving USB and FireWire to do their work. I also like the idea of FB-DIMM, since it reduces the number of CPU and RAM socket changes every time you upgrade to a new RAM type.
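The serial-over-parallel trend looks counter-intuitive (fewer wires, more bandwidth?) until you run the numbers. A quick comparison of classic parallel PCI against a single PCIe 1.0 serial lane, using the publicly specified figures:

```python
# classic PCI: 32-bit parallel bus at a nominal 33.33 MHz,
# with the total bandwidth SHARED by every device on the bus
pci_mb_s = 32 / 8 * 33.33                      # ~133 MB/s total

# PCIe 1.0: one serial lane at 2.5 GT/s with 8b/10b encoding
# (8 data bits carried in every 10 bits on the wire),
# dedicated per device, per direction
pcie_lane_mb_s = 2.5e9 * (8 / 10) / 8 / 1e6    # 250 MB/s per lane
```

So even a single serial lane beats the whole shared parallel bus, and an x16 slot multiplies that by sixteen, which is why the industry is moving that way.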
 

GoHAnSoN

Senior member
Mar 21, 2001
732
0
0
Originally posted by: BigPoppa
If you say "MOB" one more time, I'm gonna do something horrible to some poor unsuspecting person in a fit of rage.

WTF is a MOB? Man-on-base? MB is at least a semi-acronym of MotherBoard. And it uses one less letter!

MOB ??
 

TuxDave

Lifer
Oct 8, 2002
10,571
3
71
Hahahaha... I'm sorry, but exactly what technical background have you used to build this motherboard? Like, what are those little blue and red circles scattered around the motherboard without names, along with those other grey rectangles? I admire your diligence in drawing out all those lines, though.
 

MetalStorm

Member
Dec 22, 2004
148
0
0
Actually, to correct my initial statement:

There are so many things WRONG with that design, and with the things you've said, that it isn't even funny.
 

Malak

Lifer
Dec 4, 2004
14,696
2
0
At the very least, you could free up room on it by removing the other PCIe slot. The SLI fad won't last.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
I also have this idea of a multi-purpose PCI card that handles the printer, sound, USB and PS2 ports

Well, we already have something like that. They are called AMR/ACR riser cards. No need to create another one.

At the very least, you could free up room on it by removing the other PCIe slot. The SLI fad won't last.

Unless Nvidia suddenly somehow declares bankruptcy, it's going to be around for a long time.

As for the 5GHz theory with that memory: memory isn't what is holding us back. The "gates" can only open and close so fast on the processor; electrons can only move so fast. Additionally, haven't you learned yet that clock frequency isn't everything? OPC (operations per clock) and IPC (instructions per clock) are what matter. That is why AMD beats Intel at a lower clock speed.
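To put rough numbers on the clock-versus-IPC point, useful work is roughly clock rate times instructions retired per clock. The figures below are hypothetical, chosen only to show a slower-clocked, higher-IPC chip winning; they are not real benchmark numbers:

```python
def throughput(clock_ghz, ipc):
    # instructions per second = clock rate (Hz) * instructions per clock
    return clock_ghz * 1e9 * ipc

# hypothetical chips: the 2.4 GHz part with higher IPC outruns
# the 3.4 GHz part with lower IPC
slow_clock_high_ipc = throughput(2.4, 1.5)   # 3.6 billion instr/s
fast_clock_low_ipc = throughput(3.4, 1.0)    # 3.4 billion instr/s
```

That is the whole "clock frequency isn't everything" argument in two lines of arithmetic.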

I think you have some great ideas but not a thorough understanding of how everything works. Things might seem apparent, but you need to know far more than what you are planning. I mean, where do you plan on putting all the capacitors and the traces? On top of that, how do you expect to cool this? There is a reason that dual-CPU boards place one CPU here and the other way over there.

-Kevin
 

Malak

Lifer
Dec 4, 2004
14,696
2
0
Nvidia is already moving on to new tech, and there are many reasons why the average Joe shouldn't choose SLI. It's just a fad.
 

BigPoppa

Golden Member
Oct 9, 1999
1,930
0
0
Originally posted by: Gamingphreek
I also have this idea of a multi-purpose PCI card that handles the printer, sound, USB and PS2 ports

Well, we already have something like that. They are called AMR/ACR riser cards. No need to create another one.

At the very least, you could free up room on it by removing the other PCIe slot. The SLI fad won't last.

Unless Nvidia suddenly somehow declares bankruptcy, it's going to be around for a long time.

As for the 5GHz theory with that memory: memory isn't what is holding us back. The "gates" can only open and close so fast on the processor; electrons can only move so fast. Additionally, haven't you learned yet that clock frequency isn't everything? OPC (operations per clock) and IPC (instructions per clock) are what matter. That is why AMD beats Intel at a lower clock speed.

I think you have some great ideas but not a thorough understanding of how everything works. Things might seem apparent, but you need to know far more than what you are planning. I mean, where do you plan on putting all the capacitors and the traces? On top of that, how do you expect to cool this? There is a reason that dual-CPU boards place one CPU here and the other way over there.

-Kevin

Just like Alienware's SLI system is still around, right? You could even use 2 completely different cards with that system.
 

Gamingphreek

Lifer
Mar 31, 2003
11,679
0
81
Well, it's debatable. They are having a lot of driver problems with that right now. I mean, sure, you can run two cards in them, but you cannot SLI them unless Alienware has some heavily modified drivers up their sleeves.

-Kevin
 

elecrzy

Member
Sep 30, 2004
184
0
71
What's wrong with having 2 PCIe x16 slots? Some people actually have a use for 2+ monitors. Also, PCIe isn't just for graphics; you can actually use the slots for something else, like RAID.
 

Peter

Elite Member
Oct 15, 1999
9,640
1
0
Shall we REALLY get started on how far the design is from the simple facts of life? For one, PCIe isn't a bus; you need point-to-point connections from the root complex to each individual slot. Secondly, do the CPUs not get any power? Look at how much board space CPU power circuitry actually consumes.