
Distributed (P2P) website hosting?

mike3411

Member
Has anyone developed software capable of distributed or P2P website hosting? Something along the lines of a little program that could serve up a website to other users; it could use a scalable cache (maybe 5–500 GB) to serve a content-rich, heavy-strain website, such as Project Gutenberg or Everything2-type sites. By sharing the bandwidth, sites like this could operate much faster. Anyone thought of this/done it?
-Mike
 
There does exist such software.

A friend of mine worked for a company that went belly-up when the dot-com bubble burst, though they did complete a product that did exactly that. In any case, there exist similar technologies, for example BitTorrent, and there's one other big one too, though I can't quite remember the name.
 
Would all the hosts have a full copy of the site, though? Certainly there would have to be some redundancy; otherwise, if even one computer goes down, a portion of the website is missing.

It should be quite easy to do... nothing more than distributing the data and coming up with a way to track it, so a click leads to the correct content.
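One way to sketch that "distribute and track" idea, with the redundancy the previous post asks about, is rendezvous hashing: every peer can deterministically compute which hosts hold a given page, so no central index is needed. This is only an illustrative sketch; the peer names, paths, and replication factor are made-up assumptions, not anything from an actual product.

```python
import hashlib

def assign_hosts(path: str, hosts: list[str], replicas: int = 3) -> list[str]:
    """Pick `replicas` distinct hosts for a page, deterministically.

    Rank hosts by a hash of (host, path) and take the top `replicas`.
    Any peer can recompute this locally, so a click on `path` can be
    routed to any of the returned hosts; if one goes down, copies on
    the other replicas keep the page available.
    """
    ranked = sorted(hosts,
                    key=lambda h: hashlib.sha1((h + path).encode()).hexdigest())
    return ranked[:replicas]

# Hypothetical peer list and page path, purely for illustration.
peers = ["peer-a", "peer-b", "peer-c", "peer-d", "peer-e"]
print(assign_hosts("/etext/1342.txt", peers))
```

A nice property of this scheme is that adding or removing one peer only moves the pages that hashed to that peer, rather than reshuffling everything.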
 
interesting idea

If it grew too big, wouldn't there be a problem with updating a website? I think it would model the DNS servers, which take a few days to let changes flush through the system. I guess you wouldn't use it for time-sensitive information.
 
It isn't that simple, especially if you want to do it properly.

I know that the product that said company developed would also send the content from different hosts at the same time (for example, different parts of the same image would be delivered by different locations).

It can seriously improve available bandwidth.

Oh, and the best part... nothing to download other than a small ActiveX control (about 80 KB).
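The "different parts of the same image from different hosts" trick can be sketched as a parallel range download: split the file into one byte range per host and fetch the ranges concurrently. Real peers would serve HTTP byte ranges over the network; in this sketch each "host" is simulated as a local function, so the hosts and file contents are stand-in assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

FILE = b"pretend these bytes are a large image on the site"

def make_host(data: bytes):
    """Simulate a peer that can serve an arbitrary byte range of the file."""
    def serve(start: int, end: int) -> bytes:
        return data[start:end]
    return serve

def fetch_parallel(hosts, size: int) -> bytes:
    """Split [0, size) into one range per host and download concurrently."""
    n = len(hosts)
    bounds = [(i * size // n, (i + 1) * size // n) for i in range(n)]
    with ThreadPoolExecutor(max_workers=n) as pool:
        # map preserves order, so the parts join back in the right sequence
        parts = pool.map(lambda hb: hb[0](*hb[1]), zip(hosts, bounds))
    return b"".join(parts)

hosts = [make_host(FILE) for _ in range(4)]
assert fetch_parallel(hosts, len(FILE)) == FILE
```

The bandwidth win comes from the slowest single link no longer being the bottleneck: each peer only has to push its own slice.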
 
What's the difference between this and existing caching/load balancing proxy servers minus the proxy?

Why couldn't you:
client contacts the main server and requests a webpage
main server polls multiple mirrors for utilization
client is redirected to the mirror with the lowest current utilization

For updates:
author uploads the site to the main server
mirrors poll the main server for updates when their utilization drops below a threshold
main server publishes available updates to requesting mirrors
once a certain number of mirrors have the site, the main server redirects update requests from other mirrors to those that already have it
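The redirect step above boils down to the main server tracking each mirror's reported utilization and sending the client to the least-loaded one. A minimal sketch, with made-up mirror names and load figures:

```python
def pick_mirror(utilization: dict[str, float]) -> str:
    """Return the mirror with the lowest current utilization (0.0-1.0)."""
    return min(utilization, key=utilization.get)

# Hypothetical utilization reports from three mirrors.
mirrors = {"mirror-1": 0.82, "mirror-2": 0.35, "mirror-3": 0.61}
print(pick_mirror(mirrors))  # → mirror-2
```

In practice the utilization numbers would be stale by the time the client connects, which is why real load balancers often mix in randomness (e.g. pick the less-loaded of two random mirrors) instead of always choosing the single minimum.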
 
rayster,

There is a big difference between a client and a mirror site. Most clients only have 5.6K for uploads, and redirecting that way would leave a lot of potential bandwidth unutilized.
 