Tiny Packet Programs
1 What is a Tiny Packet Program?
A Tiny Packet Program is a packet with an embedded program that can read from and (optionally) write to switch memory, using values stored within the packet. It's a highly restricted, yet useful, formulation of the 'Active Networks' vision.
The key idea is to empower end-hosts with explicit network information to evolve data plane functions in software. For now, TPPs are only meant to be used in private networks, and not across the Internet.
Figure 1: A TPP is just a regular packet encapsulated with a TPP header. The TPP header contains a few instructions that primarily LOAD from and STORE into switch memory.
Figure 2: A visualisation of how end-hosts can send TPPs that collect statistics from the network, giving a detailed breakdown of queueing latencies at every hop.
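To make this concrete, here is a minimal sketch, in Python, of how an end-host might construct a queue-size-collecting TPP like the one in Figure 2. The opcode values, header layout, destination address, and port below are hypothetical placeholders for illustration; the actual TPP encoding is specified in the SIGCOMM'14 paper.

    import socket
    import struct

    # Hypothetical encodings for one TPP instruction: PUSH [QSize]
    # reads the egress queue occupancy at each hop and appends it to
    # the packet. The real instruction set and statistics addresses
    # are defined in the SIGCOMM'14 paper; these are placeholders.
    OP_PUSH = 0x1
    ADDR_QSIZE = 0x0010

    def build_tpp(num_hops):
        """Build a TPP that collects the queue size at every hop."""
        insn = struct.pack('!HH', OP_PUSH, ADDR_QSIZE)
        scratch = b'\x00' * (4 * num_hops)            # one 32-bit slot per hop
        header = struct.pack('!HH', 1, len(scratch))  # insn count, scratch size
        return header + insn + scratch

    # Each TPP-enabled switch on the path would execute the embedded
    # program, filling the next free scratch slot as the packet passes.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(build_tpp(num_hops=5), ('10.0.0.2', 0x6666))  # placeholders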
1.1 Why?
Today, several network 'functions' are needed to make any network work. These functions can be broadly divided into control plane and data plane functions on the basis of the timescales at which they operate. Control plane functions typically operate at timescales of seconds to minutes (e.g., routing, traffic engineering), whereas data plane functions operate at packet and round-trip timescales (e.g., congestion control).
With SDN, the path to introducing new control plane functions is conceptually clear: Write new software running on commodity servers, controlling commodity switches through an open interface such as OpenFlow.
For new data plane functions that typically need hardware support for performance reasons, the path is less clear. You are stuck with whatever your vendor gives you. Moreover, for tasks such as congestion control for which there is no one right answer, what do you do even if you own an ASIC team?
We think TPPs strike a delicate balance: they stay within what is feasible in hardware, across vendors and device characteristics, while being sufficiently powerful to break the impasse in introducing new data plane tasks.
1.2 What does it help you do?
In the paper, we show how you can implement the following tasks simply by deploying new software at the end-hosts that sends specially crafted TPPs:
- Congestion control: Rate Control Protocol (max-min fair version) and Rate Control Protocol (proportional fair version); a sketch of the end-host rate computation appears after this list.
- CONGA: A distributed network traffic load balancing mechanism available in Cisco's next-generation ASICs. If you have a TPP-enabled network, you can get benefits similar to CONGA just by using TPPs.
- NetSight: A platform for detailed diagnosis of packet provenance in a large network. You can learn more about NetSight in our NSDI 2014 paper (pdf).
- Performance diagnosis: Identifying links that have high queue occupancy for a short period of time (also known as 'micro-bursts').
- High speed, low-overhead monitoring: A detailed breakdown of per-link latency for any specific packet.
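To give a flavour of the congestion control entries above, the following Python sketch shows an RCP-style max-min rate computation that an end-host could run over per-link statistics (capacity, incoming rate, queue size) returned by a TPP. The control law and gain constants below are illustrative assumptions, not the paper's exact algorithm.

    # An RCP-style rate update computed at the end-host from per-link
    # statistics that a TPP collected on its round trip. The gains are
    # assumed values for illustration, not the paper's parameters.
    ALPHA, BETA = 0.5, 0.25

    def rcp_fair_share(rate, capacity, input_rate, queue_bytes, rtt, interval):
        """One RCP-style update of a link's advertised fair-share rate."""
        spare = capacity - input_rate   # unused bandwidth on the link
        drain = queue_bytes / rtt       # rate needed to drain the queue
        delta = (interval / rtt) * (ALPHA * spare - BETA * drain)
        return max(0.0, rate * (1.0 + delta / capacity))

    def flow_rate(per_link_stats, rtt, interval):
        """A flow runs at the minimum fair share across the links it crosses."""
        return min(rcp_fair_share(r, c, y, q, rtt, interval)
                   for (r, c, y, q) in per_link_stats)

    # Example: stats for two hops as (current_rate, capacity, input_rate, queue).
    stats = [(5e9, 10e9, 8e9, 30_000), (5e9, 10e9, 11e9, 90_000)]
    print(flow_rate(stats, rtt=100e-6, interval=50e-6))  # ~4.82 Gb/s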
We think TPPs are a good fit for a number of other data plane functions, but we haven't had a chance to implement them yet:
- Other congestion control schemes: XCP, FCP, QCP, MCP, EyeQ, or perhaps feeding richer signals into Remy.
- Fast network updates: If TPPs take half a round-trip time to update network state (once the new state is computed, of course), can we mitigate the effects of transient network state inconsistencies?
- DeTail: Reducing tail latency for flows by quickly switching to alternate least-congested paths (pdf).
- Packet drop location: Figuring out exactly where a packet was dropped, either due to congestion, or due to corrupted bits.
- Dynamic flow prioritisation: Today, flows are statically prioritised into bins based on packet headers. With TPPs, one can collect finer-grained statistics, such as link utilisation and per-link flow size distributions, to set flow prioritisation thresholds dynamically on a per-link basis; a toy sketch follows this list. With a low-latency interface, these configurations could potentially be updated quickly in response to traffic changes.
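As a toy illustration of the dynamic prioritisation idea, the sketch below picks a per-link priority threshold from a sampled flow size distribution. The percentile choice and the helper are assumptions for illustration, not a mechanism from the paper.

    # A toy sketch of dynamic flow prioritisation: given a sample of
    # flow sizes observed on a link (gathered, say, by TPPs reading
    # link counters), pick the priority threshold as a percentile of
    # the distribution so that most flows finish in the high-priority
    # queue. The 90th-percentile default is an assumption.
    def priority_threshold(flow_sizes_bytes, short_flow_fraction=0.9):
        """Size below which a flow is treated as high priority."""
        ordered = sorted(flow_sizes_bytes)
        idx = int(short_flow_fraction * (len(ordered) - 1))
        return ordered[idx]

    samples = [2_000, 9_000, 15_000, 40_000, 1_200_000, 80_000, 6_000]
    print(priority_threshold(samples))  # 80000; most sampled flows fall below it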
If you have new ideas, please let me know!
1.3 What can't you do?
Despite the benefits, the TPP is not a panacea. You cannot do the following tasks using TPPs, although there might be ways you can get reasonably close to achieving similar functionality.
- Per-packet scheduling: TPPs do not affect packet scheduling at a per-packet level, so you cannot implement, say, deficit round robin.
- Event-driven notifications: The TPP specification does not provide notifications whenever the network changes its state. For instance, you can't get a notification and take action when a switch updates a forwarding entry in hardware. However, if your goal is to take timely action on changes to network state, you can simply poll the switch state (see the sketch after this list).
- Define new header schemes: You can't use TPPs to tell the switch how to interpret new header formats (say IPv7). This paper might be a better fit for that task: forwarding metamorphosis.
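For the event-driven-notifications point above, polling might look roughly like this sketch: periodically send a TPP that reads a register of interest and react when its value changes. The register address and the send_tpp_and_wait helper are hypothetical placeholders.

    import time

    # A sketch of polling switch state with TPPs in lieu of event-driven
    # notifications: periodically send a TPP that LOADs a register of
    # interest (here, a hypothetical forwarding-table version counter)
    # and react when the returned value changes. send_tpp_and_wait is a
    # placeholder for sending the TPP and parsing the reply.
    FWD_TABLE_VERSION = 0x0042

    def poll_switch_state(send_tpp_and_wait, period_s=0.001):
        last = None
        while True:
            value = send_tpp_and_wait(FWD_TABLE_VERSION)
            if last is not None and value != last:
                print('forwarding state changed:', last, '->', value)
            last = value
            time.sleep(period_s)   # millisecond-granularity polling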
2 People
- Vimalkumar Jeyakumar
- Mohammad Alizadeh (Cisco Systems)
- Changhoon Kim (Barefoot Networks)
- Yilong Geng
- David Mazières
3 Papers and Presentations
- Tiny Packet Programs for low-latency network control and monitoring (HotNets'13): A position paper articulating the basic ideas behind TPPs.
- Millions of Little Minions: Using Packets for Low Latency Network Programming and Visibility (SIGCOMM'14): A longer version of the paper with a full system design and evaluation.
- Paper: http://www.scs.stanford.edu/~jvimal/tpp-sigcomm14.pdf.
- Talk: (coming soon!)
- An extended version of the SIGCOMM paper: http://arxiv.org/abs/1405.7143.
4 Code
Coming soon!