
How do download managers increase dl speed?

dfi

Golden Member
Just started using freedownloadmanager, and noticed that my download speed has improved dramatically. How are they able to do this?
 
Some web servers are set up so that each user can open, for example, two connections, each limited to maybe 30 KB/s. What download managers do is open both of those connections, thereby doubling the speed. Some also search for mirrors of the file you are downloading and fetch it from those sites as well.
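To make the multi-connection idea concrete, here's a minimal sketch (in Python, my choice; the thread doesn't name a language) of the piece of a download manager that splits a file into byte ranges, one per connection. A real client would then issue an HTTP Range request (`Range: bytes=start-end`) per segment; the function name `split_ranges` is hypothetical.

```python
def split_ranges(total_size, parts):
    """Split a file of total_size bytes into contiguous byte ranges,
    one per parallel connection (each becomes an HTTP Range header)."""
    base = total_size // parts
    ranges = []
    start = 0
    for i in range(parts):
        # The last part absorbs any remainder so nothing is dropped.
        end = total_size - 1 if i == parts - 1 else start + base - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

print(split_ranges(10_000, 4))
# → [(0, 2499), (2500, 4999), (5000, 7499), (7500, 9999)]
```

Each range is fetched on its own connection and the pieces are written into the file at their offsets, which is also what makes resuming an interrupted download easy.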
 
Originally posted by: dfi
Just started using freedownloadmanager, and noticed that my download speed has improved dramatically. How are they able to do this?

Also, TCP has horrible recovery after scaling back on congested links. Some of these tools simply open new TCP sessions and resume the transfer when they see this happening.
 
Originally posted by: bsobel
Originally posted by: dfi
Just started using freedownloadmanager, and noticed that my download speed has improved dramatically. How are they able to do this?

Also, TCP has horrible recovery after scaling back on congested links. Some of these tools simply open new TCP sessions and resume the transfer when they see this happening.
Actually, that depends on the TCP protocol used. There are several versions of the TCP protocol that implement different algorithms for handling congestion. The more recent ones (which I believe are widely used now, but I'm not positive about this) are quite good at recovering from congestion issues.

To reply to the OP, most download managers open several TCP sessions to the server which may or may not actually help depending on what the source of the congestion is. If the problem is that the server's link is congested, it will help (a lot). In effect, it does this by stealing bandwidth from everyone else.

If we take a simplified example of what is happening, let n clients with identical latency and unlimited bandwidth each connect via a single TCP connection to a server with finite bandwidth B. Each client will eventually end up with bandwidth B/n. Now if you use a download manager, it will open several connections to the server, thereby increasing your bandwidth at everyone else's expense. Let's say you open 5 threads instead of one. Each TCP connection will then get B/(n+4) bandwidth. Since you have 5 threads open, you are receiving 5*B/(n+4), which for large n means you will receive nearly 5 times the bandwidth! Everyone else, however, will only be receiving B/(n+4) instead of B/n.
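The arithmetic above can be checked with a few lines of Python (a sketch under the post's own fair-sharing assumption; `per_client_share` is a made-up name):

```python
def per_client_share(B, n, extra_threads):
    """Server bandwidth B split evenly per TCP connection.
    One client opens (1 + extra_threads) connections; the other
    n - 1 clients keep one connection each."""
    total_conns = n + extra_threads        # n single-conn clients + extras
    per_conn = B / total_conns             # fair share per connection
    greedy_client = (1 + extra_threads) * per_conn
    everyone_else = per_conn
    return greedy_client, everyone_else

# B = 100 units shared by n = 100 clients; one client opens 5 threads.
greedy, others = per_client_share(100.0, 100, 4)
print(greedy, others)   # greedy gets 5*B/(n+4), others get B/(n+4)
```

With B = 100 and n = 100, the single-connection clients drop from 1.0 to about 0.96 each, while the 5-thread client jumps to about 4.8, matching the 5*B/(n+4) figure in the post.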

Once more and more people start using several threads it becomes a sort of "arms race" in that whoever is using more threads gets more bandwidth. In the end, all of this is very poor netiquette because each TCP thread requires overhead both in the network and in the server. This is why I don't like it when people mention that we should use Firefox to increase speed by using more threads.

On the other hand, if the congestion is caused by some overloaded router between you and the server that's dropping packets, a download manager may not make much of a difference at all. Same thing if the limiting factor in the transfer rate is the connection between you and your ISP. Remember that an Internet transfer is like a bucket brigade: the slowest person determines the rate at which buckets are passed along.
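The bucket-brigade point can be stated as a one-liner: no matter how many connections you open, your total rate is capped by the slowest link on the path, typically your own last-mile connection. A tiny illustrative sketch (names are hypothetical, not from the thread):

```python
def effective_rate(server_share_per_conn, conns, last_mile_rate):
    """Opening more connections multiplies your share of the
    server's bandwidth, but the total can never exceed the
    slowest link on the path (here, your access link)."""
    return min(conns * server_share_per_conn, last_mile_rate)

# 5 connections at 2 units each, but a 6-unit access link:
print(effective_rate(2.0, 5, 6.0))    # capped at 6.0, not 10.0

# Same 5 connections with a fast access link:
print(effective_rate(2.0, 5, 100.0))  # now the full 10.0 arrives
```

This is why a download manager shines when the server is the bottleneck and does nothing when your own connection is.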

Getting back to the download managers: as Czar said, some of them look for mirrors of the file you are requesting. This is obviously a preferable approach to what I've explained above. It's also possible that the download manager changes your TCP settings because, IIRC, the Windows defaults are tuned for LAN connections (low latency, high bandwidth).
 
Actually, that depends on the TCP protocol used. There are several versions of the TCP protocol that implement different algorithms for handling congestion. The more recent ones (which I believe are widely used now, but I'm not positive about this) are quite good at recovering from congestion issues.

Reno, Tahoe, etc., it doesn't really matter. Their recovery is still very poor; a lot of work remains to be done in this area.

Bill

 
Originally posted by: bsobel
Reno, Tahoe, etc., it doesn't really matter. Their recovery is still very poor; a lot of work remains to be done in this area.

Bill
Well, I guess it depends on your definition of "very poor" and what conditions you're working under. Getting a simple protocol to work well under all circumstances in a heterogeneous, decentralized network is quite challenging. Still, I think the biggest issue is the increasing amount of UDP traffic, which has no congestion control at all and throws a monkey wrench into the problem.
 