Parallel VS serial - I don't get it

yak8998

Member
May 2, 2003
135
0
0
I understand the basics of how each works, and of course it seems like parallel should be faster. So why are we still using a lot of serial tech, and even releasing new serial technologies (SATA, USB)?

if this isn't technical enough, my apologies.


thanks
 

jagilbertvt

Senior member
Jun 3, 2001
653
0
76
The main reason for releasing serial technology is that it's easier to implement. This in turn makes for cheaper hardware. Anyone else care to add to this? :)
 

Fencer128

Platinum Member
Jun 18, 2001
2,700
1
91
I'm guessing that, apart from the reasons already mentioned, the timing issues with parallel make it harder to up the bandwidth/speed than it is with current serial tech. When the current serial tech starts to max out - run it in parallel... ad infinitum?

Andy
 

KalTorak

Member
Jun 5, 2001
55
0
0
Problem is, the '80s definitions of parallel (more than one bit at a time) and serial (one bit at a time) really don't apply any longer. PCI Express, for instance, is clearly a serial bus, even though it can run 32 bits in each direction at once.

A more modern definition of serial, I'd argue, is a bus where there are no hardware-defined command, address, or data signals - just bits getting passed, which could be any one of those three things depending on context and regardless of which wires they're on. Parallel, by contrast, is a bus where a particular wire is pre-determined in hardware to send command or address or data signals.

By my rule, then, PCI Express is clearly serial, as are SATA, USB, and Ethernet. The DDR memory interface, PCI, PCI-X, and AGP are all parallel interfaces.

Fencer's got the right idea then, about why serial's taking over the world. Parallel busses are limited in their switching speeds by the need to maintain timing relationships between the various signals (the command signals, for instance, need to do their thing when a particular unit of data is on the data lines). Serial doesn't have that restriction; it can just switch the bujeezus out of the signals (2.5Gbps for first generation PCI Express, and it'll go up to the signaling limits of copper), and let the state machines at each end figure out the context of each bit and what to do with it.

Those state machines are NOT simple - they take a lot of transistors, which is part of why we haven't been high-speed serial all along. Now we're getting the transistor budgets to do that sorta thing, which is why serial's now taking over the landscape.
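
To put rough numbers on that (back-of-the-envelope only, and the figures are my assumptions - a first-gen PCI Express lane at 2.5Gbps with 8b/10b encoding, against plain 32-bit/33MHz PCI):

```python
# Back-of-the-envelope comparison (ballpark assumptions, not exact specs):
# a first-generation PCI Express lane at 2.5 Gbps with 8b/10b encoding,
# versus 32-bit / 33 MHz PCI limited by its shared, skew-bound clock.

pcie_raw_bps = 2.5e9                           # raw signalling rate, one direction
pcie_MBps    = pcie_raw_bps * 8/10 / 8 / 1e6   # 8b/10b costs 20% -> ~250 MB/s

pci_clock_hz = 33e6                            # skew-limited shared clock
pci_MBps     = pci_clock_hz * 32 / 8 / 1e6     # 32 bits wide -> ~132 MB/s

print(f"PCIe x1 lane : ~{pcie_MBps:.0f} MB/s each direction")
print(f"PCI 32/33    : ~{pci_MBps:.0f} MB/s total, shared by every device")
```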
 

Mark R

Diamond Member
Oct 9, 1999
8,513
16
81
I think some of it has to do with balance of cost. Making a connection wider is relatively easy, until it gets to a critical level when it starts to get much harder. Making a connection faster was expensive and is now cheap.

In the early days of PCs, the high-speed busses were relatively narrow (e.g. the memory bus was 8 bits and the peripheral bus (ISA) was 8 bits). When the next generation of PCs came out, they had new, higher-bandwidth busses (the memory bus took a major leap in width to 32 bits - you had to install four matched SIMMs at a time - and ISA was widened to 16 bits).

The alternative would have been to increase the clock speeds of the busses, but there were too many problems with making high-speed circuits cheap enough at the chip and circuit-board level. Adding another 10-20 wires to the ISA bus, or another 30-40 to the memory bus, was much simpler. Motherboards were by necessity large, and at the relatively low speeds used, it didn't really matter that some wires were a bit longer than others.

As technology has progressed, busses have got both faster and wider. There are big problems with making the busses much wider, because they are already huge. The dual-channel DDR memory connection on modern motherboards is nearly 400 wires. You want to double the bandwidth again? You're going to need another 400 wires - and where are they going to go? At the high speeds used, timing and interference are critical. Making such a system work requires very expensive design and construction techniques. Modern graphics cards don't use laser-drilled 16-layer circuit boards for decoration - it's the only way to get an 800-wire bus from the GPU to the RAM.

Narrow busses are easier to manage and mean smaller chip packages (fewer pins) and less PCB area for the traces. Smaller circuit boards mean lower cost and open the door for smaller form-factor devices. With current technology, GHz frequencies are no longer the realm of esoteric non-silicon semiconductors; they can readily be achieved on standard production processes with affordable price tags. The result is that there is little cost difference between Serial ATA and conventional ATA - even when you have to pay for the advanced double-shielded, impedance-matched, balanced twisted-pair SATA cable.
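
To put the per-wire economics in rough numbers (ballpark assumptions only - dual-channel DDR400 and first-gen SATA, using the figures from the post):

```python
# Rough bandwidth-per-wire comparison. Assumptions: dual-channel DDR400
# peaks around 6.4 GB/s over roughly 400 wires; first-gen SATA runs
# 1.5 Gbps (8b/10b encoded) over 4 signal wires.

ddr_MBps   = 6400                        # 2 channels x 64 bits x 400 MT/s / 8
ddr_wires  = 400                         # data, address, command, strobes...
sata_MBps  = 1.5e9 * 8/10 / 8 / 1e6      # ~150 MB/s after encoding overhead
sata_wires = 4                           # two differential pairs

print(f"DDR : ~{ddr_MBps / ddr_wires:.0f} MB/s per wire")
print(f"SATA: ~{sata_MBps / sata_wires:.1f} MB/s per wire")
```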
 

Fencer128

Platinum Member
Jun 18, 2001
2,700
1
91
Originally posted by: KalTorak
Yeap, clock skew is a big problem with wide parallel interfaces at high clock frequencies. But parallel data transfer should win in the end...

Uh... why?

Because once you get to your theoretically perfect serial line that cannot be improved upon, you'll have to run it as best you can in parallel to up the bandwidth. Hence parallel wins "in the end".

Andy
 

KalTorak

Member
Jun 5, 2001
55
0
0
Because once you get to your theoretically perfect serial line that cannot be improved upon, you'll have to run it as best you can in parallel to up the bandwidth. Hence parallel wins "in the end".

Andy

If you define parallel to mean "more than one signal ganged together", then sure, that's true, but it's also kinda a "duh" - more signals mean more bandwidth than fewer signals.

It's more useful to ask, given N signals, what do you do with them to transfer data at the highest bandwidth? And this is where I refer to the definitions of parallel and serial I gave above (parallel = dedicated control/data/address signals), and conclude that using all N signals as basically independent serial lines gives the most bandwidth. (That's basically what PCI Express _is_, in fact.)
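
A toy calculation makes the point; the rates below are illustrative assumptions, not specs for any particular bus:

```python
# Given a fixed budget of N wires: clock them together as one parallel bus
# (held back by skew between the lines), or run each one as an independent
# serial lane near the limits of the copper. Both rates are assumed.

n_wires        = 16
parallel_clock = 133e6     # skew-limited shared clock (assumed)
lane_rate_bps  = 2.5e9     # per-lane raw rate, 8b/10b encoded (PCIe-like)

parallel_MBps = n_wires * parallel_clock / 8 / 1e6
lanes_MBps    = n_wires * lane_rate_bps * 8/10 / 8 / 1e6

print(f"{n_wires} wires, one parallel bus  : ~{parallel_MBps:.0f} MB/s")
print(f"{n_wires} wires, independent lanes : ~{lanes_MBps:.0f} MB/s")
```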
 

Fencer128

Platinum Member
Jun 18, 2001
2,700
1
91
Originally posted by: KalTorak
Because once you get to your theoretically perfect serial line that cannot be improved upon, you'll have to run it as best you can in parallel to up the bandwidth. Hence parallel wins "in the end".

Andy

If you define parallel to mean "more than one signal ganged together", then sure, that's true, but it's also kinda a "duh" - more signals mean more bandwidth than fewer signals.

It's more useful to ask, given N signals, what do you do with them to transfer data at the highest bandwidth? And this is where I refer to the definitions of parallel and serial I gave above (parallel = dedicated control/data/address signals), and conclude that using all N signals as basically independent serial lines gives the most bandwidth. (That's basically what PCI Express _is_, in fact.)

OK, so we're really saying the same thing - if we use each other's definitions. You're saying (in my less precise terminology) parallel; in your terminology, multiple independent serial lines? Have I got you right?

Cheers,

Andy
 

Mday

Lifer
Oct 14, 1999
18,647
1
81
You do realize that the serial of today is several hundred to several thousand times faster than the serial of old? And the parallel of old is pretty much a few serial links ganged together, which means it was only a few times faster than the serial of old.

Current serial is used because it's easier to handle, and ICs are fast enough that the serialized data ends up faster than what exists today in parallel. Of course, future versions of SATA will become parallelized with multiple I/O lines. USB is USB; it doesn't need any parallelization for its applications.

This is a repost, btw - someone has already mentioned this.

If you have a delivery service on roads where 10 mph is the limit and you drive exactly 10 mph, then with 10 cars you get 100 mph effective. If the limit is 100 mph and you drive 100 mph, you get 100 mph - and guess what, you have 9 fewer cars to take care of.
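
In code form, since the analogy is just multiplication (same numbers as above, nothing added):

```python
# Aggregate throughput is links x speed-per-link; one fast link can match
# ten slow ones, with nine fewer "cars" (signals) to keep in sync.

def throughput(links, speed_per_link):
    return links * speed_per_link

old_parallel = throughput(links=10, speed_per_link=10)    # 100 effective
new_serial   = throughput(links=1,  speed_per_link=100)   # 100 effective

print(old_parallel, new_serial)   # same total, far less to manage
```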
 

sgtroyer

Member
Feb 14, 2000
94
0
0
The terminology really is at fault. We associate parallel with multiple lines and serial with single lines. So to call PCI-Express serial is misleading, because there are multiple lines. Really, modern serial means the clock is included with the data, removing the need for clock-data synchronization. We need a new word.
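
A toy example of what "clock included with the data" can look like - Manchester coding, as old 10 Mbps Ethernet used. Modern links like PCIe and SATA use 8b/10b instead, but the idea is the same: guarantee transitions so the receiver can recover timing from the data stream itself.

```python
# Manchester coding: every bit cell contains a transition, so the receiver
# can lock onto the data's own edges instead of needing a separate clock
# wire kept in step with the data lines. (Convention here: 1 = low-to-high.)

def manchester_encode(bits):
    # Each bit becomes two half-cells: 1 -> (0, 1), 0 -> (1, 0).
    return [half for b in bits for half in ((0, 1) if b else (1, 0))]

def manchester_decode(halves):
    # The direction of the mid-cell transition recovers the original bit.
    return [1 if halves[i] < halves[i + 1] else 0
            for i in range(0, len(halves), 2)]

data = [1, 0, 1, 1, 0, 0, 1]
line = manchester_encode(data)
assert manchester_decode(line) == data
print(line)   # never more than two identical half-cells in a row
```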
 

sgtroyer

Member
Feb 14, 2000
94
0
0
And then I read the whole thread and realize that KalTorak said almost exactly the same thing, near the beginning. Apologies for redundancy.