A paper co-authored by Prof. Harsha Madhyastha has been awarded an Applied Networking Research Prize by the Internet Engineering Task Force (IETF). In “Engineering Egress with Edge Fabric: Steering Oceans of Content to the World,” Madhyastha and researchers from Facebook, University of Southern California, Columbia University, and Universidade Federal de Minas Gerais presented a system that large content providers can use to smartly direct traffic over the internet to their users. Facebook uses this system to serve over two billion users across six continents.
To serve millions of users at once, large websites have to build access points around the world, each connecting to tens or hundreds of networks. Even then, peak traffic demands can be hard for the provider to handle while balancing two major constraints: the bandwidth limit of any particular route, and the latency each route imposes on users.
Different routes have different bandwidth limits – the provider can’t send every user’s traffic down one route, or it risks that traffic getting throttled and dropped. Likewise, different routing options result in different latencies for users, since some routes are longer than others. To complicate matters further, these decisions are made mostly blind: the Internet’s routing protocol tells providers nothing about these constraints up front.
“When Facebook learns about these routes, it just knows which different routes exist to choose from,” says Madhyastha. “It doesn’t know how much bandwidth there is on any one or how much latency traffic will incur on each route.”
This default protocol, called Border Gateway Protocol (BGP), is now decades old, and it makes it difficult for providers to make the best use of their extensive connectivity. To remedy this, Madhyastha and collaborators designed Edge Fabric, a new system that addresses these issues in real time.
Edge Fabric offers two features to providers – a real-time performance analysis of different routes, effectively outlining the bandwidth and latency of different options, and a way to incorporate this data into routing decisions.
“On the one hand, you’re trying to pack as many users as possible onto each route without creating any bottlenecks, and on the other hand you’re trying to ensure you pick a route for each user that minimizes latency,” Madhyastha says.
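This tradeoff can be illustrated with a minimal sketch (hypothetical route names, capacities, and latencies – not Facebook’s actual algorithm): prefer the lowest-latency route for each user’s traffic, but spill over to the next-best route once a preferred route’s capacity is exhausted.

```python
def assign_traffic(routes, demands):
    """Greedily assign each demand to the lowest-latency route with spare capacity.

    routes:  dict mapping route name -> {"latency_ms": ..., "capacity_mbps": ...}
    demands: list of (user, mbps) pairs
    Returns a dict mapping user -> chosen route (or None if every route is full).
    """
    remaining = {name: info["capacity_mbps"] for name, info in routes.items()}
    # Try lower-latency routes first.
    by_latency = sorted(routes, key=lambda name: routes[name]["latency_ms"])
    assignment = {}
    for user, mbps in demands:
        for route in by_latency:
            if remaining[route] >= mbps:
                remaining[route] -= mbps
                assignment[user] = route
                break
        else:
            # No uncongested route left: this traffic would be throttled or dropped.
            assignment[user] = None
    return assignment

# Hypothetical example: a fast but small peering link and a slower, bigger transit link.
routes = {
    "peer_A":    {"latency_ms": 10, "capacity_mbps": 100},
    "transit_B": {"latency_ms": 25, "capacity_mbps": 1000},
}
demands = [("u1", 60), ("u2", 60), ("u3", 60)]
print(assign_traffic(routes, demands))
# u1 fits on the fast peering link; u2 and u3 overflow to the transit link.
```

The key point the sketch makes is that neither objective can be optimized alone: sending everyone down the fastest route would congest it, while spreading traffic purely by capacity would inflate latency for users who could have taken a shorter path.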
The team’s goal was to efficiently use the many interconnections available to a company like Facebook without congesting them and degrading users’ performance. The paper, which was presented at the ACM SIGCOMM conference in 2017, was awarded the Applied Networking Research Prize by the IETF for its scientific excellence and substance, timeliness, relevance, and potential impact on the Internet.
Posted April 30, 2019