SENIC Documentation

Table of Contents

1 What is SENIC?

SENIC is a new hardware/software co-designed network stack for the datacenter. SENIC supports thousands of rate limiters in hardware, which are managed by the operating system.

For more details, check:

People who worked on SENIC:

2 Getting started

2.1 Repository quick links

There are four main repositories for SENIC.

The following are additional tools we used when evaluating SENIC.

2.2 Software Prototype

The SENIC software prototype (SENIC-s) consists of a queueing discipline that implements rate limiters. It differs from the current Linux network stack in the following ways:

  • SENIC-s parallelizes packet handoff: each CPU enqueues in its own local transmit queue.
  • SENIC-s serializes egress scheduling: a single CPU core computes transmit schedules and hands off packets to the NIC.
  • SENIC-s caches packet classification at the kernel socket level.

We have compared SENIC-s with today's rate limiters (htb) and a parallel rate limiter (ptb) that was implemented in EyeQ; see our paper for the comparison. ptb can be downloaded from this repository: You can find more info about installing ptb here:

2.2.1 Step 1: Installing the Kernel

The modified kernel that implements the parallel packet handoff is here: The relevant commits from Sivasankar are listed on this page:

# git clone
# cd linux-rl
# make menuconfig

... select your options
... use your favourite installation method.  I prefer make-kpkg.
# export CONCURRENCY_LEVEL=$(grep ^processor /proc/cpuinfo | wc -l)

# make-kpkg --initrd --append-to-version $APPEND_TO_VERSION kernel_image
# make-kpkg --append-to-version $APPEND_TO_VERSION kernel_headers

... The above commands will produce two .deb files which you can
... install using dpkg -i /path/to/deb
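The packages land in the parent directory; installing them is just dpkg, as noted above. A minimal sketch (the wildcard package names are illustrative and depend on your kernel version and $APPEND_TO_VERSION):

```shell
# Install the freshly built kernel image and headers
# (package names are illustrative)
sudo dpkg -i ../linux-image-*.deb ../linux-headers-*.deb
sudo reboot

# After rebooting, confirm the new kernel is running
uname -r
```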

2.2.2 Step 2: Downloading SENIC-software

We built SENIC-s on top of the QFQ packet scheduling algorithm; Linux already ships a QFQ qdisc (sch_qfq). The modified qdisc is available here:

# git clone

The following iproute2 package incorporates additional per-class statistics for debugging, such as the class's virtual time, system virtual time, etc.

# git clone
# cd iproute2
# ./configure
# make -j10
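Once built, running the patched tc straight from the build tree avoids picking up the stock system binary. A sketch of querying the extra per-class statistics (the device name eth0 is an assumption):

```shell
# Dump per-class statistics, including the additional debugging
# fields (class virtual time, system virtual time, ...) from this fork
./tc/tc -s -d class show dev eth0
```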

2.2.3 Step 3: Trying it out

Reserving a CPU

SENIC-s works best if the CPU core it uses is reserved exclusively for egress scheduling. To do this, isolate one CPU core (CPU 2 by default, set in sch_qfq.c) from the kernel scheduler.

# vi /path/to/grub/menu.lst
.. search for kernel boot options, e.g.:
module          /vmlinuz-x.y.z root=/dev/sda2 ro console=tty0

.. change the above to:
module          /vmlinuz-x.y.z root=/dev/sda2 ro console=tty0 isolcpus=2

If you boot into the kernel and run top or htop, you should not see any activity on that CPU.
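To double-check the isolation after reboot, inspect the kernel command line (and, on kernels that expose it, the isolated-CPU list):

```shell
# The boot parameter should appear verbatim in the command line
grep -o 'isolcpus=[0-9,]*' /proc/cmdline

# Newer kernels also report the isolated set here
cat /sys/devices/system/cpu/isolated 2>/dev/null
```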

Configuring link rate

SENIC-s is configured for a 10Gb/s NIC by default. If you want to run it at 1Gb/s, you will have to do the following:

# cd qfq-rl
# vi sch_qfq.c +118
 * Link speed in Mbps. System time V will be incremented at this rate and the
 * rate limits of flows (still using the weight variable) should be also
 * indicated in Mbps.
 * This value should actually be about 9843Mb/s but we leave it at
 * 9800 with the hope of having small queues in the NIC.  The reason
 * is that with a given MTU, each packet carries an Ethernet preamble
 * (8B, including the start-of-frame delimiter), a frame check
 * sequence (4B), and a minimum recommended inter-packet gap
 * (0.0096us for 10GbE = 12B).  Thus the max achievable data rate is
 * MTU / (MTU + 24), which is 0.98425 with MTU = 1500B and 0.99734
 * with MTU=9000B.
--> #define LINK_SPEED              9800    // 10Gbps link
#define QFQ_DRAIN_RATE          ((u64)LINK_SPEED * 125000 * ONE_FP / NSEC_PER_SEC)

.. edit LINK_SPEED to match your link rate, following the comment above
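The arithmetic behind these efficiency numbers is easy to check; this sketch just evaluates MTU / (MTU + 24) and scales it to a 10Gb/s link:

```shell
# Per-packet overhead beyond the MTU: preamble + FCS + minimum
# inter-packet gap = 24B.  Efficiency = MTU / (MTU + 24).
for mtu in 1500 9000; do
    awk -v mtu="$mtu" 'BEGIN {
        frac = mtu / (mtu + 24)
        printf "MTU=%d  efficiency=%.5f  LINK_SPEED ~ %.0f Mbps\n", mtu, frac, frac * 10000
    }'
done
```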

The following commands load the new qfq module and create two rate limiters at 1Gb/s and 2Gb/s. Rate limits are expressed as weights, so if your NIC only supports 1Gb/s, its capacity will be divided in the ratio 1000:2000 between the two rate limiters.

$ cat

tc qdisc del dev $dev root
rmmod sch_qfq

cd qfq-rl
insmod ./sch_qfq.ko
tc qdisc add dev $dev root handle 1: qfq

# Create rate limiters
tc class add dev $dev parent 1: classid 1:1 qfq weight 1000 maxpkt $mtu
tc class add dev $dev parent 1: classid 1:2 qfq weight 2000 maxpkt $mtu
tc filter add dev $dev parent 1: protocol all prio 1 u32 match ip dport 5001 0xffff flowid 1:1
# This filter matches all pkts
tc filter add dev $dev parent 1: protocol all prio 2 u32 match u32 0 0 flowid 1:2
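To make the weight semantics concrete: on a 1Gb/s link, the 1000:2000 weights above split capacity roughly as follows (a back-of-the-envelope check, ignoring framing overhead):

```shell
# Share of each class = link * weight / sum(weights)
awk 'BEGIN {
    link = 1000                 # link capacity in Mb/s
    w1 = 1000; w2 = 2000        # the two class weights
    printf "class 1:1 -> %.1f Mb/s\n", link * w1 / (w1 + w2)
    printf "class 1:2 -> %.1f Mb/s\n", link * w2 / (w1 + w2)
}'
```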

If you want to add a new rate limiter at 100Mb/s, you need to create two things: (1) a new class, and (2) a new filter.

tc class add dev $dev parent 1: classid 1:$classid qfq weight 100 maxpkt $mtu
tc filter add dev $dev parent 1: ..(filter string).. flowid 1:$classid
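For instance, a hypothetical 100Mb/s limiter for traffic to TCP port 5002 (the classid, port, and filter below are illustrative, following the same u32 pattern used earlier) might look like:

```shell
# Hypothetical example: new classid 1:3, matching destination port 5002
tc class add dev $dev parent 1: classid 1:3 qfq weight 100 maxpkt $mtu
tc filter add dev $dev parent 1: protocol all prio 1 u32 \
    match ip dport 5002 0xffff flowid 1:3
```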

2.3 Hardware Prototype

We also have a proof-of-concept hardware prototype (SENIC-h), which our coauthor Yilong built on top of the NetFPGA-10G platform.

The verilog code and experiments are here:

2.4 NSDI 2014 experiments

The scripts used in all our experiments in the NSDI paper are available online in the test repository.

We used the following traffic generators for our experiments.

  1. Trafgen – to generate sustained UDP and TCP traffic:
  2. mcperf – a fork of Twitter's mcperf utility that can generate sustained load on a memcached server. We added a few features to report statistics at 100-microsecond granularity.

Author: Vimalkumar, Sivasankar

Date: 2014-02-28 21:13:06 PST

Generated by Org version 7.4 with Emacs version 24
