How Exactly Do the CPU, RAM, and Peripherals See One Another?

chrstrbrts

Senior member
Aug 12, 2014
522
3
81
Hello,

In modern x86-64 computers, how exactly do the hardware subsystems, e.g. the CPU, RAM, NIC, USB ports, SD card reader, etc., all see each other?

I'm asking because I've got half a dozen or so ideas in my head about how everything is connected.

Unfortunately, this is one of those things where the more you learn the more questions you have.

I understand the concept of control, address, and data buses, PCI and ISA buses, as well as point-to-point paradigms like QPI, PCI express, etc.

I understand memory-mapped I/O vs. isolated I/O, programmed I/O vs. DMA.

It's also my understanding that the point-to-point systems use some kind of internal packet-switching protocol.

But, which of these systems is used today in modern computers to link hardware units together?

Which are obsolete?

Thanks.

Moved from Programming -- Programming Moderator Ken g6
 
Last edited by a moderator:

Ken g6

Programming Moderator, Elite Member
Moderator
Dec 11, 1999
16,665
4,604
75
I found this nice quote about how the hardware is arranged these days:
The northbridge has been integrated into the CPU for quite some time now :) The vestigial "chipset" is basically the old southbridge- it's just a bunch of connections for devices, hanging off a glorified PCIe link.
RAM is connected directly to the CPU by an Integrated Memory Controller. On Intel consumer boards, the GPU(s) get their own PCIe x16 connection to the CPU. Pretty much everything else gets a PCIe interface to the chipset, which itself has a PCIe x4 interface (called "DMI", but it's basically PCIe) to the CPU.

Here is a diagram:

[Intel Z170 platform block diagram]
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
I was going to write a long-winded thing, but I decided it was better to just link to an article that does a better job of explaining things:

http://xillybus.com/tutorials/pci-express-tlp-pcie-primer-tutorial-guide-1

How communication works depends on the interface, so that is just one example, for PCI Express. It is relatively similar for the other subsystems you mentioned. The only real difference is RAM. Right now, the CPU and RAM communicate directly via the IMC on the CPU: the CPU will literally push bits on lines connected directly to the DRAM, and the RAM will respond on lines connected directly back to the CPU, with the IMC doing the translation. That used to happen over a northbridge, which allowed CPUs to support many types of RAM (in particular, at one time Intel supported both DDR and RDRAM).

AFAIK, pretty much all communication has turned serial. There are few things nowadays that communicate over parallel data lines, so pretty much all the lines going to and from the CPU are serial data lines.
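To make the "packet" part concrete, here is a rough sketch (mine, not from the article) of the header of the simplest kind of TLP, a 3DW memory read request, laid out in C. Real TLPs are assembled by the PCIe link hardware, software never builds these bytes by hand, so this is purely illustrative:

#include <stdint.h>
#include <string.h>

/* Rough sketch of a 3DW PCIe Memory Read Request TLP header
 * (32-bit address, single-dword read).  Illustrative only. */
void build_mrd32_header(uint8_t hdr[12], uint16_t requester_id,
                        uint8_t tag, uint32_t addr)
{
    memset(hdr, 0, 12);
    hdr[0]  = 0x00;                 /* Fmt=000 (3DW, no data), Type=00000 (MRd) */
    hdr[2]  = 0x00;                 /* TC=0, Length[9:8]=0                      */
    hdr[3]  = 0x01;                 /* Length[7:0] = 1 dword                    */
    hdr[4]  = requester_id >> 8;    /* Requester ID: bus number                 */
    hdr[5]  = requester_id & 0xFF;  /* Requester ID: device/function            */
    hdr[6]  = tag;                  /* Tag: matched against the completion      */
    hdr[7]  = 0x0F;                 /* Last DW BE = 0000, First DW BE = 1111    */
    hdr[8]  = (addr >> 24) & 0xFF;  /* Address[31:2]; low 2 bits must be zero   */
    hdr[9]  = (addr >> 16) & 0xFF;
    hdr[10] = (addr >> 8)  & 0xFF;
    hdr[11] = addr & 0xFC;
}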
 

chrstrbrts

Senior member
Aug 12, 2014
522
3
81
I was going to write a long-winded thing, but I decided it was better to just link to an article that does a better job of explaining things:

http://xillybus.com/tutorials/pci-express-tlp-pcie-primer-tutorial-guide-1

Thanks for the link.

I'm curious though.

How are isolated I/O and memory mapped I/O addresses assigned?

I found this quote from the posted link:

There are no MAC addresses, but we have the card’s physical (“geographic”) position instead to define it, before it’s allocated with high-level means of addressing it (a chunk in the I/O and address space).

The author doesn't say, though, how these high-level I/O addresses are assigned.

Also, judging from that quote, even before higher-level memory-mapped I/O addresses are assigned, the system apparently has some very low-level understanding of what's plugged into the PCIe system and where.

What is this functionality?

How does it work?

Thanks.
 

zir_blazer

Golden Member
Jun 6, 2013
1,251
565
136
I think that what he is asking is not how things are connected, but how things "talk" to each other. It's rather complex to explain, since during the lifetime of the x86 architecture, you had different mainstream methods.


The earliest methods available on x86 (basically on the 8088 IBM PC, since we're still sort of IBM PC compatible) were IRQs for the devices to signal something to the Processor, and I/O Ports to talk back to them, plus an early form of ISA DMA to allow a device to talk directly to memory.
Also relevant, the BIOS provided Software Interrupt services (INT XXh in Assembler), which were the functions you had to call to talk to the Hardware. DOS-type OSes extended that by providing more INT functions for Software-related tasks. At the time of the original IBM PC, you could say that the BIOS was effectively a sort of Hardware Abstraction Layer and provided Driver-like functionality. However, for things like games, it was rendered obsolete rather fast, since performance was much lower than implementing your own basic Driver to do things like writing to video memory, which could be done rather easily since it sat at a fixed memory address.
The beginning of the clone wars meant that the BIOS providing its own Drivers was no longer viable; it was better to have discrete Drivers than to bloat the hard-to-upgrade BIOS with support for a ton of different devices (though initially most were functional clones and more or less fully compatible). At most, OEM vendors developed custom BIOS INT functions in systems shipping with non-standard Hardware out of the box.
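To give a feel for what the I/O Ports half of that looks like from software, here is a minimal Linux/x86 user-space sketch that reads the seconds register of the legacy CMOS RTC through the classic index/data port pair 0x70/0x71 (needs root for ioperm()):

#include <stdio.h>
#include <stdlib.h>
#include <sys/io.h>   /* inb/outb/ioperm, x86 Linux + glibc only */

int main(void)
{
    /* Ask the kernel for access to the two legacy CMOS/RTC ports. */
    if (ioperm(0x70, 2, 1) != 0) {
        perror("ioperm (run as root)");
        return EXIT_FAILURE;
    }

    outb(0x00, 0x70);                 /* index port: select register 0 (seconds)   */
    unsigned char sec = inb(0x71);    /* data port: read it back (BCD on most RTCs) */

    printf("RTC seconds register: 0x%02x\n", sec);
    return 0;
}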

During the 386 era, x86 got an MMU, which initially was just to implement Paging (Virtual memory address to Physical memory address translations); the 286 only had segmented protected mode, no paging.
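As a quick sketch of what that translation involves (classic 32-bit paging, 4 KB pages, no PAE), the MMU just slices the virtual address into indexes:

#include <stdint.h>
#include <stdio.h>

/* Classic 32-bit x86 paging, 4 KB pages, no PAE: a virtual address splits
 * into a page-directory index, a page-table index, and an offset. */
int main(void)
{
    uint32_t vaddr = 0x0804A123;                   /* arbitrary example address */
    uint32_t dir_index   = (vaddr >> 22) & 0x3FF;  /* bits 31..22 */
    uint32_t table_index = (vaddr >> 12) & 0x3FF;  /* bits 21..12 */
    uint32_t offset      =  vaddr        & 0xFFF;  /* bits 11..0  */

    printf("PDE %u, PTE %u, offset 0x%03x\n", dir_index, table_index, offset);
    return 0;
}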

During the 486 era, x86 got the APIC, which could supersede the older traditional IRQ system. The later PCI 2.2 included MSI, which relies on the APIC: the device writes to a specific memory address that is watched by another unit (in practice the CPU's local APIC), and that then signals the Processor. I think this is mostly what we use today.
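For the curious, on x86 the address/data pair a device gets programmed with for MSI has a documented layout (Intel SDM, "Message Signalled Interrupts"). A sketch of how an OS would compose it, with made-up example values:

#include <stdint.h>

/* Sketch of composing an x86 MSI address/data pair.  The OS writes these
 * into the device's MSI capability registers; the device later "raises an
 * interrupt" by doing a memory write of `data` to `addr`, which the CPU's
 * local APIC decodes.  Destination ID and vector here are arbitrary. */
void compose_msi(uint8_t dest_apic_id, uint8_t vector,
                 uint32_t *addr, uint16_t *data)
{
    *addr = 0xFEE00000u                      /* fixed upper bits: local APIC range */
          | ((uint32_t)dest_apic_id << 12);  /* destination APIC ID                */
    *data = vector;                          /* delivery mode 000 (fixed) + vector */
}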

Things have been rather stagnant in this area, though there were some successors to the previous things. The APIC was superseded by the xAPIC (Pentium 4), x2APIC (Nehalem) and APICv (Ivy Bridge-E, not sure if consumer Haswell/Broadwell/Skylake have it). PCI MSI was superseded by MSI-X in PCI 3.0, and PCI Express devices should universally support it. The only groundbreaking thing has been the IOMMU, introduced in the Core 2 Duo Wolfdale-era Chipsets.


A legacy of all this is that you have some things, like the Video BIOS and the way the VGA protocol is used, that are always expected to be at a predefined memory place, just like in the early days. Actually, I think that is the most notorious ultra-legacy part that you can still commonly see. Unless you have all three of a Motherboard with UEFI, a Video Card whose Firmware has UEFI GOP support, and an OS supporting UEFI Boot and UEFI GOP, you will be doing the entire boot process using the legacy VGA protocol (including loading its VBIOS), at least until the OS loads the proper GPU Drivers.
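The VGA text buffer at 0xB8000 is the classic example of such a predefined memory location. On bare metal (or anywhere that physical range is identity-mapped), putting a character on screen is just a memory-mapped store; this obviously won't run as a normal user-space program:

#include <stdint.h>

/* Bare-metal sketch: the legacy VGA text buffer lives at physical 0xB8000.
 * Each cell is 16 bits: low byte = ASCII code, high byte = attribute
 * (0x0F = white on black).  Only works where that range is mapped 1:1. */
void vga_putc_at(int row, int col, char c)
{
    volatile uint16_t *vga = (volatile uint16_t *)0xB8000;
    vga[row * 80 + col] = (uint16_t)((0x0F << 8) | (uint8_t)c);
}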
 
Last edited:

chrstrbrts

Senior member
Aug 12, 2014
522
3
81
I think that what he is asking is not how things are connected, but how things "talk" to each other. It's rather complex to explain, since during the lifetime of the x86 architecture, you had different mainstream methods.

Thanks for your reply.

It seems that you've answered my original question.

But, my second question remains:

How are memory-mapped I/O and isolated I/O addresses assigned at boot?

Also, PCIe is plug-and-play, right?

I think that means that you can plug a peripheral card into a slot and start using it immediately, without rebooting.

How, then, are memory-mapped and isolated I/O addresses assigned dynamically on-the-fly?

Thanks.
 

exdeath

Lifer
Jan 29, 2004
13,679
10
81
Well, PCIe is pretty easy, since addressing is all in-band and software-defined. The PCIe controller and the device negotiate a range of addresses the device wants, and when that device is being addressed, the PCIe controller alerts that particular lane/device/etc. based on the prior auto-negotiation. It's a hub-and-spoke topology, more like a network than a CPU bus.

PNP in the general sense: think of it as programmable registers that can be used to map the device into a certain address space via software, rather than hard-wiring it with jumpers.

Remember that even on a shared parallel bus, any device can see all traffic and data all the time. Each device is purposely designed to only respond and latch or drive data when its own chip-select logic is activated by the proper address-decoding logic. Whether this logic and the exact address are defined with jumpers on the card, hard-wired to a fixed address, or chosen by re-configurable registers, it all works the same: the device only responds when it sees its address, and tri-states when it doesn't.

Most of this functionality in a modern computer would traditionally be associated with Northbridge functions, e.g. the Pentium-to-PCI bridge, in conjunction with a PNP BIOS that has software written to manipulate this bridge and its root ports, including following protocols to initiate a PNP scan (like a hello broadcast to discover all devices on the bus, aka PNP enumeration). PCI had dedicated out-of-band pins for PNP functions, if I recall, until the device could be assigned I/O and memory-map addresses to communicate with it normally. With PCIe, everything is in-band over the same wires.

This of course means the PCI/PCIe bridge itself has memory and I/O addresses as part of the chipset in order to manipulate the bus controller itself rather than a device on said bus.

Most of this stuff is going to be in IO address ranges above 0x0400 in the x86 IO map.
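For example, the legacy PCI configuration mechanism sits at ports 0xCF8/0xCFC: you pick a bus/device/function "geographically" by number, poke it into 0xCF8, and read the selected config dword back from 0xCFC. A minimal enumeration sketch (Linux/x86 user space, needs iopl() and root; modern kernels use the memory-mapped ECAM region instead, but the idea is the same):

#include <stdio.h>
#include <stdint.h>
#include <sys/io.h>   /* outl/inl/iopl, x86 Linux + glibc only */

/* Legacy PCI "configuration mechanism #1": the geographic bus/device/function
 * address goes into port 0xCF8, the selected config dword is read at 0xCFC. */
static uint32_t pci_cfg_read32(int bus, int dev, int fn, int offset)
{
    uint32_t address = 0x80000000u            /* enable bit */
                     | ((uint32_t)bus << 16)
                     | ((uint32_t)dev << 11)
                     | ((uint32_t)fn  << 8)
                     | (offset & 0xFC);
    outl(address, 0xCF8);
    return inl(0xCFC);
}

int main(void)
{
    if (iopl(3) != 0) { perror("iopl (run as root)"); return 1; }

    /* Walk function 0 of every device on bus 0 and print whatever responds:
     * this is "enumeration" in its simplest form. */
    for (int dev = 0; dev < 32; dev++) {
        uint32_t id = pci_cfg_read32(0, dev, 0, 0x00);  /* vendor/device ID  */
        if ((id & 0xFFFF) != 0xFFFF)                    /* 0xFFFF = no device */
            printf("bus 0 dev %2d: vendor %04x device %04x\n",
                   dev, id & 0xFFFF, id >> 16);
    }
    return 0;
}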

Time to start learning about APIC and ACPI if you want to go further.
 
Last edited:

MongGrel

Lifer
Dec 3, 2013
38,466
3,067
121
I think that means that you can plug a peripheral card into a slot and start using it immediately, without rebooting.

No.

I'd recommend not ever doing that, if your computer is powered on.

Or even plugged in at all.

You should maybe read up on ESD a bit to begin with, if you are doing things along those lines.

You can damage hardware from ESD just by handling it wrong outside the case, before you ever install it, let alone by plugging it into a live board.
 
Last edited:

exdeath

Lifer
Jan 29, 2004
13,679
10
81
A legacy of all this is that you have some things, like the Video BIOS and the way the VGA protocol is used, that are always expected to be at a predefined memory place, just like in the early days. Actually, I think that is the most notorious ultra-legacy part that you can still commonly see. Unless you have all three of a Motherboard with UEFI, a Video Card whose Firmware has UEFI GOP support, and an OS supporting UEFI Boot and UEFI GOP, you will be doing the entire boot process using the legacy VGA protocol (including loading its VBIOS), at least until the OS loads the proper GPU Drivers.

Just to add on to some of chrstrbrts' OS posts in other forums.

When you question odd things like why page tables are set up a certain way or wonder why process entry points don't start at 0, this is one of many reasons why.

The 0 to 1 MB and 0 to 16 MB ranges are pretty special for legacy reasons, for PC/XT and AT device compatibility. Even modern operating systems still treat these ranges as "sacred".
 

chrstrbrts

Senior member
Aug 12, 2014
522
3
81
PNP in the general sense: think of it as programmable registers that can be used to map the device into a certain address space via software, rather than hard-wiring it with jumpers.

These are called BARs. Base Address Registers. I looked it up.
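From the same reading, it looks like the way firmware or the OS figures out how big a region each BAR wants is a neat trick: write all 1s to the BAR, read it back, and the bits that refused to stick tell you the size. Roughly, in C, where cfg_read32/cfg_write32 are assumed helpers for config-space access (e.g. the 0xCF8/0xCFC pair sketched earlier):

#include <stdint.h>

/* Sketch of BAR sizing as done during enumeration.  The helpers below are
 * hypothetical; any PCI config-space accessor would do.  bar_off is the
 * config-space offset of the BAR: 0x10, 0x14, ... */
extern uint32_t cfg_read32(int bus, int dev, int fn, int off);
extern void     cfg_write32(int bus, int dev, int fn, int off, uint32_t val);

uint32_t bar_size(int bus, int dev, int fn, int bar_off)
{
    uint32_t orig = cfg_read32(bus, dev, fn, bar_off);

    cfg_write32(bus, dev, fn, bar_off, 0xFFFFFFFFu);   /* write all 1s     */
    uint32_t readback = cfg_read32(bus, dev, fn, bar_off);
    cfg_write32(bus, dev, fn, bar_off, orig);          /* restore original */

    /* Mask off the flag bits (bit 0 = I/O vs memory; memory BARs also use
     * bits 1-3), then the size is the two's complement of what is left. */
    uint32_t mask = (orig & 1) ? 0xFFFFFFFCu : 0xFFFFFFF0u;
    return ~(readback & mask) + 1;
}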

No.

I'd recommend not ever doing that, if your computer is powered on.

Or even plugged in at all.

You should maybe read up on ESD a bit to begin with, if you are doing things along those lines.

You can damage hardware from ESD just by handling it wrong outside the case, before you ever install it, let alone by plugging it into a live board.

I think that I confused 'plug-and-play' with 'hot pluggable'.

USB devices are hot pluggable.

But, I guess you should shut down your computer before plugging I/O cards into the PCIe network lest you send a static discharge that fries your board.

Just to add on to some of chrstrbrts' OS posts in other forums.

When you question odd things like why page tables are set up a certain way or wonder why process entry points don't start at 0, this is one of many reasons why.

The 0 to 1 MB and 0 to 16 MB ranges are pretty special for legacy reasons, for PC/XT and AT device compatibility. Even modern operating systems still treat these ranges as "sacred".

OK. So, I/O space is reserved for all concurrent processes in the same virtual space?