Kevin Webb: Networking and Distributed Systems
My general research areas are networking and distributed systems, with a current focus on resource provisioning in large networks.
Large data centers host thousands of services and end-user applications "in the cloud." Such services include user-facing systems like web search and email, as well as pay-per-use cloud providers from whom anyone can rent computing resources. For efficiency, data center operators multiplex many services (tenants) across a shared physical infrastructure, leveraging server virtualization to carve out isolated units of CPU, memory, and storage. In contrast, their network platforms take a tenant-agnostic, one-size-fits-all approach to supporting data center services -- tenants typically receive only loose, qualitative descriptions of network performance and, at best, an ad hoc set of additional network functionality.
My work on a system named Blender addresses this disparity by providing an "App Store" framework for network operators. Blender enables operators to improve tenant performance by tailoring the network's behavior to each tenant's needs. Tenants augment their portion of the network with the specific features they require, and Blender composes their selections so that they execute simultaneously across the shared network infrastructure. The current Blender prototype leverages software-defined networking to provide tenants with performance isolation, failure recovery, and flow scheduling, backed by a graph-based resource allocation model.
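The "App Store" composition idea can be illustrated with a minimal sketch. This is not Blender's actual API or design; the catalog contents, tenant names, and `compose` function are all hypothetical, and the real system composes behaviors on shared switches rather than simply validating selections.

```python
# Hypothetical sketch of the "App Store" model: tenants pick network
# features from an operator-provided catalog, and the controller composes
# each tenant's selections into a per-tenant policy. (Illustrative only;
# not Blender's real interface.)

CATALOG = {"performance_isolation", "failure_recovery", "flow_scheduling"}

def compose(selections):
    """Map each tenant to the validated set of features it selected."""
    policies = {}
    for tenant, features in selections.items():
        unknown = set(features) - CATALOG
        if unknown:
            raise ValueError(f"{tenant} requested unknown features: {unknown}")
        policies[tenant] = set(features)
    return policies

policies = compose({
    "web_search": ["performance_isolation", "flow_scheduling"],
    "email": ["failure_recovery"],
})
```

The key point the sketch captures is that feature sets are chosen per tenant but installed over one shared infrastructure, so the controller must reconcile every tenant's selections together.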
I'm also interested in research that advances our understanding of how best to teach computer science. Specifically, I'm involved in research on assessment that aims to iteratively and openly develop concept inventories (CIs) for CS education. CIs have been widely adopted in other sciences (particularly physics), and their results have motivated pedagogical transformations that led to substantial learning gains. My hope is that we can similarly improve CS teaching methods by better understanding how students conceptualize our classroom material.
My past research projects include:
- Distributed Rate Limiting (DRL): Provisioning and accounting for resource usage in cloud computing environments is a challenging technical problem. Distributed Rate Limiting provides a cost-control mechanism in which multiple traffic limiters cooperate to enforce a global rate limit across multiple sites. My work involved building a fully functional implementation of DRL for use on the PlanetLab research testbed.
- Continuous Bulk Processing (CBP): Many large-scale distributed processing systems ("big data!") like MapReduce re-execute entire computations when new data arrives. CBP's goal is to support incremental data processing, so that new data can be incorporated without re-processing previously seen data.
- Delay/Disruption Tolerant Networking (DTN): The goal of DTN is to provide reliable network connectivity in the absence of typical communication infrastructure. Long ago, I built cross-platform DTN software for experimental use on PDAs, phones, and other hand-held equipment.
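The cooperative enforcement idea behind DRL (above) can be sketched in a few lines. This is an assumption-laden simplification, not the DRL algorithm itself: here each site reports its local demand and the global limit is split proportionally, whereas the real system must do this continuously with stale, partial information.

```python
# Simplified illustration of distributed rate limiting: several sites
# share one global limit, dividing it in proportion to local demand so
# aggregate traffic never exceeds the global cap. (Not the actual DRL
# implementation.)

GLOBAL_LIMIT = 100.0  # e.g., Mbps allowed across all sites combined

def allocate_limits(demands):
    """Split GLOBAL_LIMIT across sites in proportion to local demand."""
    total = sum(demands.values())
    if total == 0:
        # No demand anywhere: split evenly so idle sites can ramp up.
        share = GLOBAL_LIMIT / len(demands)
        return {site: share for site in demands}
    return {site: GLOBAL_LIMIT * d / total for site, d in demands.items()}

limits = allocate_limits({"siteA": 30.0, "siteB": 90.0, "siteC": 0.0})
# The per-site limits always sum to GLOBAL_LIMIT, so the limiters jointly
# behave like a single rate limiter placed in front of all sites.
```

A busy site thus receives a larger slice of the budget than an idle one, which is the behavior a single centralized limiter would produce.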
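The incremental-processing idea behind CBP (above) can also be sketched with a toy word count. This is purely illustrative, not CBP's model: the point is only that state carried across batches lets new data be folded in without re-reading old data.

```python
# Toy contrast with MapReduce-style re-execution: instead of recomputing
# over the full input whenever data arrives, keep persistent state and
# fold in only the new records. (Illustrative; not CBP's actual design.)

from collections import Counter

state = Counter()  # running per-word counts, preserved across batches

def process_increment(new_records):
    """Incorporate only the newly arrived records into the running state."""
    for record in new_records:
        for word in record.split():
            state[word] += 1
    return state

process_increment(["a b a"])         # first batch
result = process_increment(["b c"])  # later batch: "a b a" is not re-read
# result reflects both batches (a=2, b=2, c=1) even though the second
# call touched only the new records.
```

The cost of each update scales with the size of the new batch rather than the size of all data seen so far, which is the efficiency CBP targets.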