Newegg: Intel gigabit ethernet network card, $49 shipped

glenn1

Lifer
Sep 6, 2000
25,383
1,013
126
Newegg link

Not too bad IMHO, since quite a few people routinely spend $25 or so for a plain old used 3Com 10/100 card like the 3C905C-TX...
 

mindless1

Diamond Member
Aug 11, 2001
8,733
1,746
126
That is a nice deal for an Intel, but I think Pricewatch has the D-Links slightly cheaper... now if only the switches with more than a single gigabit port would drop to under $200.

Anybody know if a standard Cat5e crossover cable would work on a couple of these?
 

Lichee

Senior member
Jan 2, 2001
645
0
71
Originally posted by: mindless1
That is a nice deal for an Intel, but I think Pricewatch has the D-Links slightly cheaper... now if only the switches with more than a single gigabit port would drop to under $200. Anybody know if a standard Cat5e crossover cable would work on a couple of these?


Don't you need Cat6 for gigabit??
 

sjwaste

Diamond Member
Aug 2, 2000
8,757
12
81
I'm pretty sure regular ol' Cat5 will work. I wish I could dig up the article I found comparing Cat5 and Cat5e. I remember that one drawback to Cat5e was its ability to catch fire faster :)
 

Lichee

Senior member
Jan 2, 2001
645
0
71
That would make it a hot deal, right?! ;)

Anyway, I'm sure you can use Cat5, but to get the full bandwidth you paid for with this hardware, don't you need Cat6?

Ahh wait, you are right. Cat5e is fine for gigabit, but Cat6 is going to replace Cat5e (maybe because of the fire issue?) and Cat7 is on the way as well.

good read
 

ChadPage

Junior Member
Jul 30, 2001
20
0
0
Originally posted by: mindless1
That is a nice deal for an Intel, but I think Pricewatch has the D-Links slightly cheaper... now if only the switches with more than a single gigabit port would drop to under $200.

The D-Link 500 card (NatSemi chip) is not in the same league as Intel's gigabit cards performance-wise. That hasn't stopped people from buying Realtek cards en masse, but if you're going to pay $40+ for a NIC, you might as well pay $10 more and get this. BTW, the D-Link 550 card (64-bit) is much better than the 500 card, but it costs more.

I just ordered two of the boxed cards myself, should be getting them Monday. I play around with clustering stuff every so often so I can use this to connect up my two Athlons... :)
 

RGN

Diamond Member
Feb 24, 2000
6,623
6
81
The Cat6 and Cat7 standards have not been completed yet; Cat5e is the highest rating to date.
 

ozone13

Senior member
Apr 5, 2001
498
0
0
I want to buy two to connect my main rig to my server, since I do a lot of file transfers between the two. I want to keep my current 100Mb network intact and just use a crossover cable between the two gigabit cards (since the gigabit hubs are too expensive right now)... it would work, right? I'd just change the order in which the computers communicate (e.g., give the gigabit cards higher priority than the 100Mb cards currently installed). Anyone else have any suggestions?
 

RGN

Diamond Member
Feb 24, 2000
6,623
6
81
I saw a review, I think linked from Slashdot, that shows gig-E performance. Using a 33MHz PCI card, your max was something like 300Mbps, not true gigabit. I don't think the 66MHz/64-bit card reached much more than 500Mbps.

Sooo, you need to ask yourself if the performance increase is worth the headache of switching connections.
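
To put that in perspective, here's a rough back-of-the-envelope comparison of how long a 1GB copy would take at 100Mbps, at the ~300Mbps people report on a 32-bit/33MHz bus, and at a true gigabit. This is a sketch only: it assumes the raw line rate with no protocol overhead, so real copies will take longer.

```python
# Rough transfer-time comparison; assumes the ideal line rate with no
# protocol overhead, so real-world copies will be somewhat slower.
def transfer_time_seconds(size_gb: float, rate_mbps: float) -> float:
    size_bits = size_gb * 1024**3 * 8      # file size in bits
    return size_bits / (rate_mbps * 1e6)   # rate in bits per second

for label, rate in [("100Mbps (Fast Ethernet)", 100),
                    ("~300Mbps (gigabit on 32-bit/33MHz PCI)", 300),
                    ("1000Mbps (true gigabit)", 1000)]:
    print(f"1GB file at {label}: {transfer_time_seconds(1, rate):.0f}s")
```

That works out to roughly 86 seconds, 29 seconds, and 9 seconds respectively, so even the PCI-limited case is a real improvement over Fast Ethernet.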
 

zervun

Member
Aug 8, 2001
118
0
0
I did gigabit testing at work with two brand-spanking-new Dell dual-processor servers with Intel gigabit fiber cards chained together back to back, running Red Hat Linux 7.1. The maximum throughput we could get through these was just over 600Mbps. In real-world applications, gigabit cards are going to give you about 300Mbps or so, because the computers can't keep up with the massive transfers these cards can put out. The servers have SCSI arrays, Red Hat running on dualies, 1GB of DDR, etc. There isn't that much of a point in throwing 1000BASE-T cards into a home system; 10/100 cards do just fine.
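
For anyone who wants to try the same kind of back-to-back measurement at home, here's a minimal sketch of the idea in Python. This is not the tool used in the test above (assume a dedicated benchmark for real numbers), and the port number, chunk size, and 512MB test size are arbitrary choices.

```python
# Minimal TCP throughput test: run "recv" on one box and "send <host>" on
# the other, over the link you want to measure. A rough sketch only; a
# real benchmark tool does far more (multiple streams, warm-up, etc.).
import socket
import sys
import time

PORT = 5001
CHUNK = 256 * 1024            # 256KB per send
TOTAL = 512 * 1024 * 1024     # 512MB total test size

def recv_side():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    received, start = 0, time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    secs = time.time() - start
    print(f"{received * 8 / secs / 1e6:.1f} Mbps over {secs:.1f}s")

def send_side(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\x00" * CHUNK
    sent = 0
    while sent < TOTAL:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()

if __name__ == "__main__":
    if len(sys.argv) >= 3 and sys.argv[1] == "send":
        send_side(sys.argv[2])
    else:
        recv_side()
```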
 

Stefan2000

Member
Jan 12, 2001
133
0
0
Does anyone know if these cards support 66MHz operation? Based on the pictures of the card at Newegg, they do seem to be keyed for 3.3V slots, which most newer 66MHz PCI slots are, but do they actually operate at 66MHz in a 66MHz slot, or default down to 33MHz? The reason I'm asking is that I'd like to use one in the 66MHz slots on my Supermicro P4DC6, but I have another 66MHz card (a RAID card) in one of those slots and I wouldn't want this card to slow the 66MHz PCI bus down to 33MHz.
 

JameyF

Senior member
Oct 5, 2001
845
0
76
Originally posted by: zervun
I did gigabit testing at work with two brand-spanking-new Dell dual-processor servers with Intel gigabit fiber cards chained together back to back, running Red Hat Linux 7.1. The maximum throughput we could get through these was just over 600Mbps. In real-world applications, gigabit cards are going to give you about 300Mbps or so, because the computers can't keep up with the massive transfers these cards can put out. The servers have SCSI arrays, Red Hat running on dualies, 1GB of DDR, etc. There isn't that much of a point in throwing 1000BASE-T cards into a home system; 10/100 cards do just fine.


I'd buy a gigabit solution even if it's only 300Mbps. That would be more than three times faster, 'cause 100Mbps systems aren't 100% efficient either. The only issue for me so far is price. I wouldn't pay 10 times as much, which is where it's at (or close to) now.
 

EXman

Lifer
Jul 12, 2001
20,079
15
81
OK, I am stoopid; I thought for gigabit you needed a 64-bit PCI slot, which most home PCs are lacking?
 

docinthebox

Golden Member
Jun 9, 2000
1,118
0
0
Originally posted by: zervun
There isn't that much of a point in throwing 1000BASE-T cards into a home system; 10/100 cards do just fine.

That's a good point. In most usage scenarios, the raw network throughput is not the bottleneck. For example, if you're doing a disk-to-disk data transfer, then the disk transfer rate is likely to be your bottleneck, unless you're striping across many many high-end SCSI disks. Even for memory-to-memory data transfer, the actual throughput you get still depends on things like how multi-threaded your application is, whether you're using TCP or UDP, etc.

Also, if you're serious about getting high throughput, I think it's pretty important to use a faster PCI bus than a 33MHz 32-bit one, which can only provide slightly more than 1 gigabit/second of theoretical throughput. Considering this bus is shared among your PCI, EIDE, and USB devices, and in most cases by the north bridge as well, it's hard for a gigabit ethernet card to achieve anything close to the advertised gigabit/second. Upgrading to 66MHz or 64-bit, or both, will give you much more PCI bandwidth to play with.
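
For reference, the raw numbers behind that claim (theoretical peaks only; PCI arbitration and other protocol overhead eat a noticeable slice of these figures in practice):

```python
# Theoretical peak PCI bandwidth = clock rate x bus width, ignoring
# arbitration and other protocol overhead (real throughput is lower).
for width_bits in (32, 64):
    for clock_mhz in (33, 66):
        mbit_per_s = clock_mhz * width_bits          # MHz x bits = Mbit/s
        print(f"{width_bits}-bit @ {clock_mhz}MHz: "
              f"~{mbit_per_s} Mbit/s ({mbit_per_s // 8} MB/s)")
```

So a 32-bit/33MHz bus tops out around 1056 Mbit/s (~132 MB/s) shared across everything on it, while a 64-bit/66MHz bus gives roughly 4224 Mbit/s (~528 MB/s).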

Edit: Stefan2000, I looked up the card (Intel PRO/1000 MT Desktop adapter) at Intel's website and it supports both 33MHz and 66MHz, but only 32-bit, not 64-bit. Its sibling, the Intel PRO/1000 MT Server adapter, supports all of the above plus 64-bit as well. Another thing I noticed is that it has drivers for all the favorite Unix variants, including Linux 2.4.x and FreeBSD, which is good.
 

zervun

Member
Aug 8, 2001
118
0
0
Originally posted by: JameyF
I'd buy a gigabit solution even if it's only 300Mbps. That would be more than three times faster, 'cause 100Mbps systems aren't 100% efficient either. The only issue for me so far is price. I wouldn't pay 10 times as much, which is where it's at (or close to) now.


Well, what I'm saying is that on your home system, unless you are running 64-bit PCI and have an immense SCSI RAID, you might not even get that. If you've got money to drop then it's all cool; I just don't want people to go out thinking this is going to give a breath of fresh air to their network. Unless you are doing some serious transfers between PCs it's pretty pointless. Latency is a little better on fiber, but on copper Cat5e, going from 100BASE-T to 1000BASE-T is going to yield pretty much the same results. There is a little more optimization on the gigabit cards, but networking tech is pretty cut and dried when it comes to latency; it's already quite low. This also isn't going to affect your internet speeds, of course, unless you have a pretty stellar connection (> T1 in latency and bandwidth).

So the only reason to hit up 1000BASE-T cards would be if you're frequently transferring gigs between computers, and then, with only a 32-bit bus, it might not even be that much faster than a good 100BASE-T card, depending on what else you have riding on the bus. This also assumes you chain the computers together directly. I'm not quite sure what a 1000BASE-T switch is going for right now; I know the HP switches at work just got 1000BASE-T copper blades (fiber has been out for a while for them), and each blade is running upwards of $1.5k, with the switch hovering around $15k I think. We won't even talk about the Cisco prices =) (upwards of $15k for a single blade on some of them).


Oh yeah, the cards I was using were the Pros: 64-bit, with only one fiber connection on them.
 

Steve Guilliot

Senior member
Dec 8, 1999
295
0
0
Cat 6 specs were just finalized within the last couple weeks. RFC period ends tomorrow, so we should see genuine Cat 6 products showing up soon. Note that Cat 6e is a marketing term, and is no better (sometimes worse) than Cat 6. Gigabit (802.3ab) is supported on Cat 5, but won't work well on damaged or poor quality cables that no longer (or never did) meet Cat 5 spec.
 

Stefan2000

Member
Jan 12, 2001
133
0
0
Originally posted by: mindless1
Here are some Gb NIC benchmarks, including a couple of systems with 33MHz 32-bit PCI:
http://www.syskonnect.com/syskonnect/performance/gig-over-copper.htm

Yes, I've seen that review before and I find it quite interesting. However, there is something that puzzles me about it. It seems that in that review all of the cards perform significantly better in a 64-bit slot than in a 32-bit slot. Now, you might say that this is to be expected. However, if you notice, some of the cards reviewed are 32-bit cards, like the D-Link DGE-500T and the Ark Soho-GA2500T. What seems odd is that these cards also show much greater performance when used in a 64-bit slot.

It says this about the D-Link card's performance:

http://www.syskonnect.com/syskonnect/performance/gig-over-copper.htm#D-Link DGE-500T|outline


Peak throughput while operated in a 32bit bus was 192.21 Mbps. This was achieved in the Dell systems. The Athlon systems only obtained a peak of 172.21 Mbps when these cards were inserted into the 32-bit bus. Both systems show a slight drop in throughput but eventually level out. Peak throughput while operated in a 64bit bus running at 33Mhz was 315.96 Mbps.

Also note that the graph actually shows higher numbers than "315.96 Mbps". The graph seems to show its peak somewhere in the 606 to 607Mbps range. So, which is it?


Also, it tells us the Ark Soho-GA2500T is a 32bit PCI card.

http://www.syskonnect.com/syskonnect/performance/gig-over-copper.htm#Ark Soho-GA2500T|outline

The Ark Soho-GA2500T is also a 32-bit PCI card design. Like the D-Link DGE-500T and the Asante GigaNIX cards

Then it tells us this about its performance as well:

The peak throughput achieved while in a 32bit 33Mhz bus was in the Dell system: 192.62 Mbps. While the Athlon system in the same bus setup only reached 172.19 Mbps. As before, there is a performance drop at the 1Kb and 5-10Kb packet sizes.

Peak throughput while operated in a 64bit bus running at 33Mhz was 610.83 Mbps and 609.98 Mbps when running at 66Mhz respectively.

Now, keeping in mind that these are 32-bit cards and are thus only using 32 bits of the 64-bit slot they are plugged into, how can this be? One possible explanation might be that it has to do with how the systems are configured. What I mean is that the 64-bit PCI bus and the 32-bit PCI bus in these systems may be two separate buses, and it may be that most or all of the other PCI devices are on the 32-bit bus while the 64-bit bus has only the gigabit cards installed. If so, what we are really seeing is that when the gigabit cards share a bus with other PCI devices, they don't perform as well as they do when they have a PCI bus all to themselves.