Abstract

In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes".

Keywords

Unix · Network congestion · Computer science · The Internet · Bandwidth (computing) · Throughput · Computer network · Operating system · Computer security · Network packet

Publication Info

Year
1988
Type
article
Pages
314-329
Citations
2447 (OpenAlex)
Access
Closed

Cite This

Van Jacobson (1988). Congestion avoidance and control. 314-329. https://doi.org/10.1145/52324.52356

Identifiers

DOI
10.1145/52324.52356