
x16 PCI-Express Video Card in a x1 slot?

CyberZenn

Senior member
As the title says, I'm wondering if it's possible to put a x16 PCIe card into a x1 slot. I have seen this successfully done on some nForce4 Ultra boards to enable "fake" SLI with an x4 slot.

I have no intention of doing that (using Radeons), but I would like to use the second card (an x300) to give me an extra DVI output to connect to my HDTV's HDMI port. If I were to cut out the plastic obstruction at the rear of the x1 slot, the card would fit; I just don't know if it would actually work.

Note that a PCIe x1 link provides 250MB/s in each direction, so performance-wise it should be fine for what I am looking to do. Any ideas?

Thanks!
 
No, the card physically doesn't fit into the x1 slot.

And yes, it would perform pretty well, much better than you can do with a 33MHz, 32-bit PCI slot.

And yes, this is one of the reasons for my disappointment in PCIe. I wanted lots of slots and cards, with everything fitting into every slot, just at different speeds. As it is now, the situation is actually worse than with a PCI/AGP board.
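For reference, the raw numbers behind that comparison can be sketched out. These are the standard published figures (not measurements), assuming first-generation PCIe at 2.5 GT/s per lane with 8b/10b encoding:

```python
# Back-of-the-envelope bandwidth comparison: first-gen PCIe vs. legacy PCI.
# Figures are the published spec rates, not benchmark results.

def pcie1_bandwidth_mb_s(lanes):
    """Usable one-way bandwidth of a PCIe 1.x link in MB/s.
    2.5 GT/s per lane, 8b/10b encoding -> 80% of raw bits are data."""
    raw_bits_per_s = lanes * 2.5e9
    return raw_bits_per_s * (8 / 10) / 8 / 1e6  # data bits -> bytes -> MB

def pci_bandwidth_mb_s(clock_mhz=33.33, width_bits=32):
    """Peak bandwidth of a conventional PCI bus in MB/s (shared by
    every device on the bus, unlike PCIe's point-to-point links)."""
    return clock_mhz * 1e6 * width_bits / 8 / 1e6

print(pcie1_bandwidth_mb_s(1))   # ~250 MB/s each way for an x1 link
print(pcie1_bandwidth_mb_s(16))  # ~4000 MB/s each way for an x16 link
print(pci_bandwidth_mb_s())      # ~133 MB/s, shared across the whole bus
```

So even a single PCIe 1.x lane roughly doubles what the entire shared PCI bus offers, which is why an x1 link would be plenty for a secondary display card.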
 
Originally posted by: MartinCracauer
No, the card physically doesn't fit into the x1 slot.

And yes, it would perform pretty well, much better than you can do with a 33MHz, 32-bit PCI slot.

And yes, this is one of the reasons for my disappointment in PCIe. I wanted lots of slots and cards, with everything fitting into every slot, just at different speeds. As it is now, the situation is actually worse than with a PCI/AGP board.

Actually, any PCIe card can run at any link width it supports (or at least at any lower width than its maximum), provided the motherboard supplies slots of the right physical size. It's just that the first-gen boards have pretty much all gone with one x16 slot and a couple of x4 or x1 slots (using the physically smaller connectors). Frankly, I think they'd be a lot smarter to have two full-size slots running at x8, plus a couple of x4 or x1 slots (or, frankly, four or five full-size slots running at x4, since even graphics cards can't come close to saturating that). I suspect you'll see more designs like this once more expansion cards (RAID controllers, video capture boards, sound cards, etc.) become available in PCIe.

Did you see AT's article on the new NForce4 Pro chipset? It supports up to four physical PCIe connectors, and has 20 lanes worth of PCIe bandwidth that can be configured to run to any of the connectors.
 
Originally posted by: MartinCracauer
No the card physically doesn't fit into the x1 slot.

As I mentioned, I actually have seen x16 cards put into smaller slots (x4 and x8) by carefully cutting away the plastic on the end of the slot. I suspect it might work with the x1 slot as well... I just don't want to hack away at my new mobo with an X-Acto knife unless I'm reasonably sure I'm not going to fry something...
 
Originally posted by: CyberZenn
Originally posted by: MartinCracauer
No the card physically doesn't fit into the x1 slot.

As I mentioned, I actually have seen x16 cards put into smaller slots (x4 and x8) by carefully cutting away the plastic on the end of the slot. I suspect it might work with the x1 slot as well... I just don't want to hack away at my new mobo with an X-Acto knife unless I'm reasonably sure I'm not going to fry something...

In theory, yes (at least in terms of data; I'm not sure if x1 slots can provide the same amount of power as the larger ones).

In practice? No clue. You're in uncharted waters here.
 
Power: No, they don't.

There have been mainboards on show that wire only four lanes into x16 slot mechanics. THAT works fine, since every PCIe device is supposed to negotiate the link width.
 
Originally posted by: Peter
Power: No, they don't.

There have been mainboards on show that wire only four lanes into x16 slot mechanics. THAT works fine, since every PCIe device is supposed to negotiate the link width.

Hey Peter, any idea why they didn't just go for a standard with multiple notches in the card's edge connector, delimiting PCIe X1, X2, X4, X8, X16 (so an X16 card's edge connector would run the full length, but with four notches), and then allow one to plug it into any sized socket? Power issues could be taken care of with a separate jack on the card and a cable from the PSU, I would think. That just makes so much more sense to me. Kind of like how 64-bit PCI cards will (mostly, voltage permitting) run in a 32-bit PCI slot, but in 32-bit mode?

I also swear that I saw some early design diagrams where the little "stub" PCIe X1 connector was nestled directly behind a regular PCI slot, so that card position could be used for either a PCIe X1 card or a 32-bit PCI card. (Obviously, you wouldn't be able to plug a PCIe card with a longer card-edge connector than an X1 into that slot position, though.) Whatever happened to that capability?

It just seems like the current situation hampers both decent performance of PCIe multi-mon rigs, and limits the choice/amount of card expandability. I guess the first could be solved by mobo makers, if they built boards with all X16-sized PCIe slot connectors, and then implemented "steerable" PCIe lanes, but that just sounds insanely-messy from a board-layout/engineering/timing-margin sort of perspective. Obviously cost and chipset limitations prevent a board from implementing all fully-wired PCIe X16 slots right now.

Maybe computers in the future will switch to all fiber-optic interconnections, and computers will look like a bunch of bubbles with hoses coming out of them, kind of like advanced alien technology in anime flicks. 😛

 
Hey Peter, any idea why they didn't just go for a standard, with multiple notches in the card's edge connector, delimiting PCIe X1, X2, X4, X8, X16 (so an X16 card's edge connector would run the full length, but with 4 notches), and then allow one to plug it into any sized socket?

Wouldn't that make the socket noticeably longer? Part of the impetus behind the design is to reduce the motherboard real estate needed for expansion cards. The issue here is that a) video cards are only available in x16 sizes, which is stupid because that's far more bandwidth than even the fastest cards today can use, and b) motherboard makers are skimping and only putting in one full-size slot, which is stupid because the only reason to use PCIe right now is for video cards (which only fit in x16 slots).

Power issues could be taken care of with a separate jack on the card and a cable from the PSU, I would think.

Well, PCIe supplies a lot more power through the slot than AGP (~75W for an x16 slot, IIRC). The objective is to NOT have to run separate cables from your PSU for expansion cards, not to add more of them!
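The per-slot power budgets (as I recall them from the PCIe Card Electromechanical spec; the exact figures should be checked against the spec itself) differ a lot by connector size, which is exactly why an x16 card can't count on a smaller slot's supply:

```python
# Approximate per-slot power limits from the PCIe 1.x Card
# Electromechanical spec (rounded from memory; verify against the spec).
SLOT_POWER_W = {
    "x1":  10,   # full-height x1 card
    "x4":  25,
    "x8":  25,
    "x16": 75,   # high-power graphics card limit
}

# An x16 graphics card expecting up to 75W would be starved in an
# x1 slot by roughly:
shortfall = SLOT_POWER_W["x16"] - SLOT_POWER_W["x1"]
print(f"{shortfall} W short")
```

So even if the link trained at x1 and the bandwidth were adequate, a power-hungry card could brown out or damage the slot's supply circuitry, which is the real risk in the cut-the-slot experiment.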

I also swear that I saw some early design diagrams where the little "stub" PCIe X1 connector was nestled directly behind a regular PCI slot, so that card position could be used for either a PCIe X1 card or a 32-bit PCI card. (Obviously, you wouldn't be able to plug a PCIe card with a longer card-edge connector than an X1 into that slot position, though.) Whatever happened to that capability?

AFAIK, this is still possible, but nobody did it on the first-generation boards that I've seen.

It just seems like the current situation hampers both decent performance of PCIe multi-mon rigs, and limits the choice/amount of card expandability. I guess the first could be solved by mobo makers, if they built boards with all X16-sized PCIe slot connectors, and then implemented "steerable" PCIe lanes, but that just sounds insanely-messy from a board-layout/engineering/timing-margin sort of perspective. Obviously cost and chipset limitations prevent a board from implementing all fully-wired PCIe X16 slots right now.

NVIDIA's NForce4Pro chipset is supposed to be able to do that (see article on AT front page), splitting 20 lanes over up to four physical connectors in any configuration. Can we wait for more than the very first-gen boards before we jump all over how bad PCIe is? 😛
 
Originally posted by: Matthias99
NVIDIA's NForce4Pro chipset is supposed to be able to do that (see article on AT front page), splitting 20 lanes over up to four physical connectors in any configuration. Can we wait for more than the very first-gen boards before we jump all over how bad PCIe is? 😛

Thanks for the reply. I guess that's what I was really trying to ask: are those issues inherent limitations of the PCIe spec itself, or just limitations of current mobo implementations of PCIe? If it's the latter, then I guess I shouldn't be so concerned, although I would definitely prefer a board with five combo 32-bit PCI/PCIe X1 slots, for maximum expansion possibility.

I don't like the fact that they are trying to push so much power through what is essentially an I/O interface. I would worry about induction issues with signal traces routed nearby, and can you imagine the kind of ground-plane hum you might get between onboard or PCI audio and a high-powered PCIe video card rhythmically drawing its 75W peak, once per rendered frame, 60-75 times a second? I would personally much prefer a (shielded) power lead from the PSU directly to the video card, although it is an additional hassle for those building the PC. That was, after all, the engineering solution Intel chose for their recent CPUs/boards, which is why PSUs now have a separate power cable just for the CPU.

Maybe I'm just being a bit paranoid, but I'm all too familiar with noise/interference effects with onboard audio on cheaper boards. Pushing ever more power through the mobo and expansion slots seems like it will only make the problem worse, and you know that cheaper boards won't waste the expensive board real estate to give those signals the proper isolation distance they deserve in the routing; they'll just leave the customer to discover the problems once they upgrade to a high-end video card on an el-cheapo mobo.
 