
MIT Breakthrough could make internet 1000 times faster

I don't really buy it. The latency added by fiber serialization is tiny, a small fraction of the overall end-to-end latency. Overall the article makes no sense. There are already direct links between major cities.
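To put numbers on that, here's a quick back-of-the-envelope sketch (my own example figures, not anything from the article) comparing serialization delay on a 10 Gb/s link with propagation delay across a cross-country fiber run:

```python
# Back-of-the-envelope only; all numbers below are assumptions.
PACKET_BITS = 1500 * 8        # full-size Ethernet frame
LINE_RATE_BPS = 10e9          # 10 Gb/s wavelength
PATH_KM = 4000                # roughly New York to Los Angeles
LIGHT_IN_FIBER_M_S = 2e8      # ~2/3 c, due to fiber's refractive index

serialization_s = PACKET_BITS / LINE_RATE_BPS          # time to clock the bits out
propagation_s = (PATH_KM * 1000) / LIGHT_IN_FIBER_M_S  # time of flight

print(f"serialization: {serialization_s * 1e6:.1f} us")      # ~1.2 us
print(f"propagation:   {propagation_s * 1e3:.1f} ms")        # ~20.0 ms
print(f"ratio: 1 : {propagation_s / serialization_s:,.0f}")  # ~1 : 16,667
```

Serialization is microseconds and propagation is tens of milliseconds, so whatever this scheme saves, it isn't coming out of serialization.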

Also, that's a Cisco Catalyst 6500, pretty outdated technology. Maybe I need to have a talk with this guy about using distributed CEF switching instead of the older flow-based switching. I don't see a supervisor in slot 5, which means it's running the ancient Sup1/Sup2.
 
> I don't really buy it. The latency added by fiber serialization is tiny, a small fraction of the overall end-to-end latency. Overall the article makes no sense. There are already direct links between major cities.
>
> Also, that's a Cisco Catalyst 6500, pretty outdated technology. Maybe I need to have a talk with this guy about using distributed CEF switching instead of the older flow-based switching. I don't see a supervisor in slot 5, which means it's running the ancient Sup1/Sup2.

You're assuming that picture even has anything to do with the article/"researcher" (please note the quotes there). The entire article and its pictures are pure shit (as is much of everything at DT lately).

Why do they even mention BitTorrent and file sharing? This new "discovery" (please note the quotes there) would have ZERO impact on the end user(s).

States Dan Olds, an analyst at Gabriel Consulting Group Inc., "If this can truly jack up Internet data speeds by 100 times, that would have a huge impact on the usability of the Net. We'd see the era of 3D computing and fully immersive Internet experiences come much sooner.... If this turns out to be practical, it could be a very big step forward."

Remind me never to use that Consulting Group
 
> You're assuming that picture even has anything to do with the article/"researcher" (please note the quotes there). The entire article and its pictures are pure shit (as is much of everything at DT lately).

You mean you don't enjoy all the pandering to that Adrian Lamo fellow? 😛
 

Thanks. That makes more sense. It is an optical switching management and provisioning system. That does indeed make it pretty ingenious.

It's going to manage the different lambdas on existing optical networks. That is a radical shift from wavelengths being dedicated to endpoints that are "nailed up" point to point.

It's almost like the RSVP (Resource Reservation Protocol) concept. The control protocols ask for bandwidth and the network obliges, but in this case they are asking for the actual optical wavelength resources.

It sounds like it could get pretty damn complicated. Instead of using BGP as your layer-3 path decision, you're introducing a layer-1 optical switching path decision. How the two would work together would be my main concern.
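To make the RSVP analogy concrete, here's a toy sketch of how a lambda-reservation control plane might grant an end-to-end wavelength (purely illustrative; the class names and example lambdas are my own, not anything from the MIT system):

```python
# Toy model of RSVP-style wavelength reservation. Illustrative only;
# names and lambda values are invented for this sketch.

class OpticalLink:
    def __init__(self, name, lambdas=("1550.12nm", "1550.92nm", "1551.72nm")):
        self.name = name
        self.free = set(lambdas)   # wavelengths not yet nailed up on this span

class LambdaReservation:
    """Grants one wavelength across a precomputed path.

    The path itself has to come from somewhere else (e.g. the layer-3
    routing decision), which is exactly the BGP-vs-layer-1 coordination
    problem mentioned above.
    """
    def reserve(self, path):
        # Wavelength continuity: the same lambda must be free on every hop,
        # unless you pay for optical-electrical-optical conversion mid-path.
        candidates = set.intersection(*(link.free for link in path))
        if not candidates:
            return None            # blocked; fall back to packet switching
        chosen = min(candidates)
        for link in path:
            link.free.discard(chosen)
        return chosen

# Usage: reserve a lambda across a three-hop core path.
path = [OpticalLink("bos-nyc"), OpticalLink("nyc-chi"), OpticalLink("chi-lax")]
print(LambdaReservation().reserve(path))   # e.g. 1550.12nm
```

The intersection step is the hard part: because the same wavelength has to be free on every link of the path (the wavelength-continuity constraint), blocking probability climbs quickly as the network fills, which is one reason this could get complicated fast.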

This part makes a lot of sense. There really isn't a huge demand or need for it.
But, Gerstel says, it’s not clear that there’s currently enough demand for a faster Internet to warrant that expense. “Flow switching works fairly well for fairly large demand — if you have users who need a lot of bandwidth and want low delay through the network,” Gerstel says. “But most customers are not in that niche today.”
 
ISPs would still both throttle everything and charge you a per-MB fee once you pass some small monthly GB cap on your data plan, simply because they can.
 
> ISPs would still both throttle everything and charge you a per-MB fee once you pass some small monthly GB cap on your data plan, simply because they can.

This thread is about technology. Keep your conspiracy theories out of it, please. This has nothing to do with the usual choke points between the distribution and access layers; it's more about core-to-core or long-haul SONET on existing fiber. It's about squeezing more capacity out of in-place fiber without adding wavelengths.
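As a rough illustration of what "squeezing more capacity out of in-place fiber" could mean (the utilization figures here are my own assumptions, not measurements):

```python
# Illustrative arithmetic only; utilization figures are assumed.
WAVELENGTH_GBPS = 10
NUM_LAMBDAS = 40          # a typical DWDM system on one fiber pair
DEDICATED_UTIL = 0.20     # assumed average load on nailed-up circuits
SCHEDULED_UTIL = 0.80     # assumed achievable load with on-demand scheduling

dedicated = WAVELENGTH_GBPS * NUM_LAMBDAS * DEDICATED_UTIL
scheduled = WAVELENGTH_GBPS * NUM_LAMBDAS * SCHEDULED_UTIL

print(f"useful throughput, dedicated lambdas: {dedicated:.0f} Gb/s")
print(f"useful throughput, scheduled lambdas: {scheduled:.0f} Gb/s")
print(f"gain from the same fiber: {scheduled / dedicated:.0f}x")
```

The win isn't faster photons; it's filling wavelengths that otherwise sit mostly idle because they're dedicated to a single endpoint pair.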
 
> Thanks. That makes more sense. It is an optical switching management and provisioning system. That does indeed make it pretty ingenious.
>
> It's going to manage the different lambdas on existing optical networks. That is a radical shift from wavelengths being dedicated to endpoints that are "nailed up" point to point.
>
> It's almost like the RSVP (Resource Reservation Protocol) concept. The control protocols ask for bandwidth and the network obliges, but in this case they are asking for the actual optical wavelength resources.
>
> It sounds like it could get pretty damn complicated. Instead of using BGP as your layer-3 path decision, you're introducing a layer-1 optical switching path decision. How the two would work together would be my main concern.
>
> This part makes a lot of sense. There really isn't a huge demand or need for it.

I could see it reducing WAN costs while also lowering latency and jitter, which would be great for businesses, especially small/medium shops that want to set up DR and BC (disaster recovery and business continuity) solutions.
 
> I could see it reducing WAN costs while also lowering latency and jitter, which would be great for businesses, especially small/medium shops that want to set up DR and BC (disaster recovery and business continuity) solutions.

But is there really a market for that? Jitter and latency aren't much of a concern anymore thanks to MPLS and QoS. I read the first comment on the MIT link provided and tend to agree: this is moving backwards, trying to push the intelligence down to layer 1. We don't really have an optical capacity problem at the core-to-core level.
 
> But is there really a market for that? Jitter and latency aren't much of a concern anymore thanks to MPLS and QoS. I read the first comment on the MIT link provided and tend to agree: this is moving backwards, trying to push the intelligence down to layer 1. We don't really have an optical capacity problem at the core-to-core level.

Two years ago I was designing a DR solution using HP SANs and Continuous Access. Even with QoS on the MPLS circuit, we still saw enough jitter and latency to make the VMs running on the SANs crash, even in async replication mode.

Maybe things have improved since then or the WAN provider just sucked at the time.
 
Too bad we'll still have 10/40 GB caps. We'll just use them up in a week now.

At this point, this new fiber-switching technology would probably be so expensive initially that only the largest data centers in the world (think Google, IBM, large research complexes, etc.) would be able to afford it.

If you want cheap residential broadband that doesn't suck, however, we'll need another competitor to shake up the existing duopoly that the phone and cable companies have over residential access. My hopes are still on wireless broadband, assuming a huge telco doesn't buy up all of the frequency spectrum the next time the FCC holds an auction.
 