This whole SETI bandwidth thing is really beginning to suck...


conjur

No Lifer
Jun 7, 2001
58,686
3
0
Won't even mention how my team did against the Hoosiers.

But, we did spank OSU earlier this season!! :p

Just give Pitino another coupla years...about the time Berkeley's bandwidth issues are solved... :Q
 

RaySun2Be

Lifer
Oct 10, 1999
16,565
6
71


<< Now I've got to go. Heading down to Bloomington to see the Indiana Hoosiers destroy the evil Ohio State Buckeyes in a battle for the Big 10 regular season title! GO HOOSIERS! >>



In your dreams! :p



<< Guess again, HoosierBoy!!!
Although I must admit you're not nearly as much fun to hate since Booby Knight got canned. (No, that's not a typo...) >>



Hehe@HoosierBoy, GO BUCKS! :D (not that big on basketball, but hey, I AM in Columbus, gotta stick up for em. ;))

:D

T-Shirts! :D
Hey Poof, Wet T-Shirt contest at the Hot-Tub! :D:D


 

Assimilator1

Elite Member
Nov 4, 1999
24,165
524
126
Gul Dukat & Yield
Don't forget SETI survives on volunteers, donations & some sponsorship money.

Let's hope they get more bandwidth donated from another site.
 

CraigRT

Lifer
Jun 16, 2000
31,440
5
0


<< Let's hope they get more bandwidth donated from another site. >>



and hopefully that comes soon... this is getting silly :rolleyes:
 

Shuxclams

Diamond Member
Oct 10, 1999
9,286
15
81
For the people who have uploaded results to the scarieville Q, those WU's have been moved into your accounts.

SHUX
 

Rattledagger

Elite Member
Feb 5, 2001
2,994
19
81
I just saw one possible solution.

"Anyone having a direct connection to CalREN2, or who is at a University
connected to "Abilene" (aka Internet 2) shouldn't be rate limited to the server.
They'll still need to fight for the limited connections (~1500) on the server,
but once they get a connection, the transfer should be rapid. In theory
they should be able to set up a publicly available proxy, or a SETIQueue
which would also not be rate limited.

Of course, they need to get permission from whomever is paying for their
bandwidth before doing so.

Eric"
 

Poof

Diamond Member
Jul 27, 2000
4,305
0
0
LOL@Ray & JWmiddleton!!! :D

I guess we have to survive this mess first and then print up some T-shirts!!
(/me gets ready to practice wearing a wet "I survived the SETI Bandwidth problems" T-shirt in the hot tub behind the bushes... lol :p)
 

Eponymous

Golden Member
Jun 7, 2001
1,186
0
0
Rattledagger,

For some reason I don't think it works that way. :(

I'm on an Internet 2 site and I still can't move any WU up or down from Berkeley.

Now it may be that the connection gets broken after every WU, so I'm constantly trying to win the connection port lottery. However, either way I'm getting squat.

Interesting plan though.

:D

--------------------------------------
P.S. If you look at the Berkeley switch information, you will notice they are cut down to only 2Mb/s either way. OUCH@#!! :(
Looks like it has been like this for a few hours now. :( :(

(For those of you who don't follow along.... They normally run above 30Mb/s when life is good.)

 

Derango

Diamond Member
Jan 1, 2002
3,113
1
0
I agree about this SETI bandwidth thing! I have my 99th and 100th WU's waiting to be sent to Berkeley, and they're not going through. Argh! :)
 

Robor

Elite Member
Oct 9, 1999
16,979
0
76
What is the UB solution to this? How are we supposed to participate in their project if we can't download WU's to process or upload completed results? Are they seeking bandwidth from an outside source? Are they working on a new (slowed-down) client?

I've got 525 completed results ready to upload and 292 WU's to process. My queue hasn't connected in nearly 4 days and if this trend continues I'll be out of WU's by Saturday. When that happens my clients will sit idle until UB works out their problems. I'm not going to jump through hoops to d/l WU's that I can't even upload.

I wonder how much of this bandwidth problem is related to the faster processors? I mean, we've gone from a "fast" system 2 years ago doing 3-4 WU's per day to today's dualie AMD's churning out 14 WU's per day. Just look at how everyone's production has gone up. I don't have more clients than I did a year ago, I just have faster clients. I'm sure it's the same for most everyone. On one side I can see that UB has limited funds for the project but the other side of the issue is they should've seen this coming and done some planning to avoid it. I just hope the solution isn't another slowed down client.
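
Here's a rough back-of-the-envelope sketch of that growth (the ~350 KB per WU download and ~10 KB per result upload are just my guesses, not official numbers):

# Rough per-client SETI@home traffic estimate.
# Assumed transfer sizes (my guesses, not official numbers):
WU_DOWNLOAD_KB = 350    # one work unit download
RESULT_UPLOAD_KB = 10   # one completed result upload

def kb_per_day(wus_per_day):
    # Data a single client moves in a day at the given WU rate.
    return wus_per_day * (WU_DOWNLOAD_KB + RESULT_UPLOAD_KB)

then = kb_per_day(3.5)   # a "fast" box two years ago: 3-4 WU/day
now = kb_per_day(14)     # today's dualie AMD: ~14 WU/day
print(f"then ~{then:.0f} KB/day, now ~{now:.0f} KB/day ({now / then:.1f}x per client)")

If those sizes are anywhere near right, the same number of clients is pushing roughly four times the traffic it did two years ago.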
 

m2kewl

Diamond Member
Oct 7, 2001
8,263
0
0


<< ...(/me gets ready to practice wearing a wet "I survived the SETI Bandwidth problems" T-shirt in the hot tub behind the bushes... lol :p) >>



LMAO! :D
 

networkman

Lifer
Apr 23, 2000
10,436
1
0
<< On one side I can see that UB has limited funds for the project but the other side of the issue is they should've seen this coming and done some planning to avoid it. I just hope the solution isn't another slowed down client. >>

It's always hard when you're a victim of your own success. :rolleyes:

 

KCjeeper

Senior member
Sep 29, 2000
210
0
0
I don't have a problem with UB making the WU's take longer. When I started doing this about 1.5 years ago, it was taking about 24-hours/WU. Now I'm averaging closer to 5 for all of my rigs. Sure, it won't be as exciting, but at least we would get plenty of WU's. It would also provide additional incentive to upgrade our hardware.
 

Russ

Lifer
Oct 9, 1999
21,093
3
0


<< I've got 525 completed results ready to upload and 292 WU's to process. My queue hasn't connected in nearly 4 days and if this trend continues I'll be out of WU's by Saturday. >>



Same here, except make that 1031 completed results and 482 waiting.

Russ, NCNE
 

whpromo

Junior Member
Jan 21, 2002
15
0
0


<< I don't have a problem with UB making the WU's take longer. When I started doing this about 1.5 years ago, it was taking about 24-hours/WU. Now I'm averaging closer to 5 for all of my rigs. Sure, it won't be as exciting, but at least we would get plenty of WU's. It would also provide additional incentive to upgrade our hardware. >>



I really don't see how taking longer to process WUs is going to help in the long run. It will still be a bandwidth problem once those WUs start queuing up to critical mass again, especially if, as you say, we upgrade our equipment and they do not upgrade their bandwidth. I saw great celebration when we brought more team members on board, but I'm sorry to say that this seems to have added to the problem. The only way I can see them solving this is with additional bandwidth or having fewer clients processing. And the latter solution is a sad one to contemplate indeed. :(

P.S. I'm going to be out of WUs to process by the end of the day. Switched from direct to Force's queue, but still haven't gotten any WUs uploaded.
 

Tarca

Platinum Member
Sep 6, 2001
2,200
0
0
We need to have some hope...


I wish they (Berkeley) would give us more info. Today was a good day though. My queue was able to send up 25 more units than I produced. So, some relief in sight?...

If the bandwidth stays like this, my queue should be free of completed units in the morning. :)
 

Rattledagger

Elite Member
Feb 5, 2001
2,994
19
81
Eponymous, there's another problem also:

"The problem is that we are bandwidth limited to the rest of the world,
and connection limited overall. The low rate we're getting to the rest
of the world (15 Mb/s or less most of the time), and the server limit
of 1500 active connections combine to make connections difficult regardless
of where you connect from. All 1500 connections are continuously in use
pumping out data at a low rate to the rest of the world, leaving no local
high-speed connections available.

Eric J. Korpela"

In other words, all the high-ping, slow-modem users that manage to get a connection are blocking out the gigabit Internet2 connectors.
But I'd guess there's a much higher probability of getting a connection via Internet2, and when you get one, you'll only tie up the connection for 1 second. :)
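
A quick sketch of what those numbers work out to (the 15 Mb/s and 1500 connections are from the quote above; the ~350 KB work-unit size is my own assumption):

# Per-connection math from the numbers quoted above.
TOTAL_MBPS = 15          # bandwidth to the rest of the world (from the quote)
MAX_CONNECTIONS = 1500   # server connection limit (from the quote)
WU_KB = 350              # work-unit size -- my assumption

per_conn_kbps = TOTAL_MBPS * 1000 / MAX_CONNECTIONS   # ~10 kbit/s per connection
minutes_per_wu = (WU_KB * 8 / per_conn_kbps) / 60     # time one WU ties up a slot
print(f"~{per_conn_kbps:.0f} kbit/s per connection; one WU holds a slot for ~{minutes_per_wu:.1f} min")

So a modem-speed transfer can hold a slot for minutes per WU, while an Internet2 client would be done in about a second if it weren't stuck waiting for one of those 1500 slots.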

Looking at the graphs, it's obvious SETI was down for some hours yesterday.
After something was changed at the packetstrangler last evening, it seems the campus connection is back to normal. The last couple of days, it was 5-10 Mbit below normal...
Today at around 12:00 it looks like they've done some more changes, resulting in 5 Mbit more.


Oh, and I've come across another comment that doesn't bode well for the future:

"Unfortunately, throwing more bandwidth at the problem only postpones the problem, whether or not P2P clients are dealt with.

At last week's NAG meeting, Cliff Frost showed us some graphs based on network usage over the past three months, and they were revealing.
The summary:

* P2P clients consume 20-30% of the 70Mbps pipe from the main campus. Total campus traffic, including the dorms, is more like 50% P2P, but much of it is under the dorms' separate 40Mbps cap. (Dorm traffic is something like 80% P2P, but the students pay for it with their own fees, and to some extent they are free to decide to do that.)

* Last November, http traffic was in the 300-350 gigabytes per day range. Now, it's 400-450 gigabytes per day. kazaa+gnutella combine to be 150-250 gigabytes per day (they are spikier than http traffic). (I may have the units wrong, I'm doing this from memory). The other big bandwidth usages on campus are SETI@Home, NNTP, and FTP. The first two aren't as important, because they both are configured to use mostly excess bandwidth. FTP is small compared to HTTP; it looked like about half of kazaa.

What this means is, killing all the kazaa/gnutella users, even if it were possible, wouldn't solve the problem for long. In six months, our http traffic will have swelled by 150-250 gigabytes per day--the trend line is obvious. And what percentage of the http traffic is any more appropriate than kazaa/gnutella?

There are rarely technological solutions to social problems.

Let's say we do try to place significant technical restrictions on kazaa/gnutella traffic. Maybe we use the plan suggested here of "whitelisting" http, ssh, and ftp traffic, and sectioning off everything else. The first thing that happens is we spend a lot of time and effort troubleshooting things, like IPSEC, or Corporate Time, that we missed on the initial whitelist. Then, some kid comes up with the bright idea of using port 80 for gnutella, and then what do we do? And even if everything works perfectly, 6 months down the road we'll be hitting the cap with whitelisted traffic alone.

I think eventually the campus will be forced to start charging users for bandwidth usage over a certain baseline. There are provisions for that in the network funding model which spawned the node bank. CNS is now capable of tracking traffic down to a single IP, but they need quite a bit more data before they can start billing down to a single IP; there are a lot of thorny issues around that. It's really similar to phone billing, but people aren't used to the idea of network billing, and it will take a while before usage habits change.

Peer-to-peer filesharing is a good example of the tragedy of the commons; a collection of individuals, each acting in his own best interests, creates a situation that is negative for everyone involved. (Other examples are the morning commute and the parking situation near campus, which also lack technical solutions, and which most proposed remedies actually make worse.)

The original "The Tragedy of the Commons" article by Garrett Hardin in Science is available at . The example of a communal pastureland for grazing is instructive to our case.
--
Tom Holub (tom_holub@LS.Berkeley.EDU, 510-642-9069)
College of Letters & Science
249 Campbell Hall"

 

Poof

Diamond Member
Jul 27, 2000
4,305
0
0
:Q

Sorry! :eek:

And I've been good at manually changing those too!!! :p

Thanks NWM! :D