Computer Scientist Shares Strategy for a Light-Speed Internet


The internet is nowhere near as fast as it can be, and the right changes could lead to a giant leap in productivity, a computer scientist told a gathering at the Lincoln Center campus.

In “The Internet at the Speed of Light,” Bruce Maggs, Ph.D., the Pelham Wilder Professor of Computer Science at Duke University and vice president for research at Akamai Technologies, described why slower internet speeds hurt consumers and businesses, and how the system often thwarts efforts to speed up the delivery of data from computer to computer.

To illustrate the challenge, Maggs and his team downloaded the first 20 kilobytes from the 500 most popular websites in each of 103 countries—roughly 28,000 distinct websites. The median download was 35 times slower than it would have been at the speed of light.
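The comparison above can be made concrete with a back-of-the-envelope calculation. This is not Maggs’ actual methodology; the distance and fetch time below are hypothetical, chosen only to show how a measured download compares against the speed-of-light lower bound for the same path.

```python
C_VACUUM_KM_S = 299_792.458  # speed of light in a vacuum, km/s

def c_latency_ms(distance_km: float) -> float:
    """Minimum one-way travel time, in milliseconds, at the speed of light."""
    return distance_km / C_VACUUM_KM_S * 1000

def slowdown(measured_ms: float, distance_km: float) -> float:
    """How many times slower a measured fetch was than light-speed."""
    return measured_ms / c_latency_ms(distance_km)

# Hypothetical numbers: a 4,000 km path, fetched in 470 ms.
print(round(c_latency_ms(4000), 1))  # ≈ 13.3 ms at light speed
print(round(slowdown(470, 4000)))    # ≈ 35x slower
```

A fetch that takes nearly half a second over a path light could cross in about 13 milliseconds illustrates the scale of the gap Maggs measured.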

This is nothing new, he said. Amazon.com has determined that every tenth of a second customers spend waiting for a website to load, a seemingly trivial amount of time, costs the company 1 percent of its revenue.

“Whether we say we do or not or whether [we] perceive it or not, people browsing interactive web pages get frustrated and quit if the results aren’t fast,” he said.

Bruce Maggs being introduced by Joseph M. McShane, SJ, president of Fordham. Maggs said it was a homecoming of sorts, as the very first university lecture he delivered was at Fordham, in 1989.

Maggs said there are both technical and economic challenges related to the speed of data transmission. On the economic side, Internet Service Providers promote bandwidths as high as 500 megabits per second, but ignore “latency,” or how many milliseconds it takes data to travel between its source and destination.

On the technical side, internet traffic travels on optical fiber cables, in which light moves about 1.5 times slower than it does in a vacuum. More vexing, however, is the way the internet is designed as a “network of networks,” as Maggs dubbed it: internet protocols were designed to allow independent organizations to run their own networks and to route traffic as they see fit, he said.
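The fiber penalty follows from the refractive index of glass: light in fiber travels at roughly c divided by n, where n is about 1.5. A minimal sketch, using an assumed 5,570 km path (roughly the New York–London distance), shows the effect on one-way latency:

```python
C_KM_S = 299_792.458  # speed of light in a vacuum, km/s
N_FIBER = 1.5         # approximate refractive index of optical fiber

def one_way_ms(distance_km: float, medium_speed_km_s: float) -> float:
    """One-way travel time, in milliseconds, at a given propagation speed."""
    return distance_km / medium_speed_km_s * 1000

# Hypothetical 5,570 km path:
vacuum_ms = one_way_ms(5570, C_KM_S)
fiber_ms = one_way_ms(5570, C_KM_S / N_FIBER)
print(round(vacuum_ms, 1))  # ≈ 18.6 ms in a vacuum
print(round(fiber_ms, 1))   # ≈ 27.9 ms in fiber, about 1.5x more
```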

In an experiment, he sent data from one machine in Warsaw, Poland, to another just across the city. It arrived, but not before it took a 660-mile detour through Frankfurt, Germany.

“There are thousands of internet service providers around the world all running networks. Each one is self-contained, but if it ever needs to talk to the rest of the world, there has to be a place where two networks peer and exchange traffic,” he said. “Although these two networks had a presence in Warsaw, they didn’t actually have a connection in Warsaw” and needed to make the “exchange” through Germany.
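The cost of a detour like the one Maggs observed can be estimated directly: extra fiber distance translates into extra one-way latency. The refractive index below is an assumption, and the calculation covers propagation delay only, not routing or queuing overhead.

```python
C_KM_S = 299_792.458  # speed of light in a vacuum, km/s
N_FIBER = 1.5         # assumed refractive index of optical fiber
MILES_TO_KM = 1.609344

def detour_penalty_ms(detour_miles: float) -> float:
    """Added one-way latency, in milliseconds, from extra fiber distance."""
    detour_km = detour_miles * MILES_TO_KM
    return detour_km / (C_KM_S / N_FIBER) * 1000

# The 660-mile Warsaw-Frankfurt-Warsaw detour:
print(round(detour_penalty_ms(660), 1))  # ≈ 5.3 ms added, each way
```

A few extra milliseconds per packet may sound small, but it compounds across the many round trips a single page load requires.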

To get around these problems, Maggs suggested that networks work together more closely and create incentives for faster delivery—perhaps by offering a separate subscription rate to those willing to pay for it. If the number of steps computers have to go through to establish a connection could be lowered, speeds could improve.

Another option, Maggs said, is the use of microwave towers like the system that high-frequency traders have built between Chicago and New York. The latest such network achieves 95 percent of the speed of light, and a second one is being considered between Chicago and Seattle, where a trans-oceanic cable runs to Tokyo.

Maggs said his research showed a system like it could be established among the 120 largest U.S. cities, on 3,000 existing towers. It would have limited capacity, however, and thus would only be useful to those willing to pay a premium—gamers, content providers, and content delivery networks, for example.

“You’d still have to bundle this with traditional service, and you’d have to arbitrate between traffic that goes over the fast network versus the traditional network,” he said.

But by “shaving latency off interactions with their users, they’d make more profit.”

Maggs’ appearance was part of the Clavius Distinguished Lecture series.
