Google’s Data Center Switching Fabric


Google, on its Cloud Platform Blog and at the 2015 Open Networking Summit (ONS), disclosed for the first time some details about the architecture of its latest data center switching fabric, code-named Jupiter. We knew that Google’s network is SDN-based from end to end (for more on that, read the following articles from previous Google lectures that I attended: “Providing Large Scale Cloud Services by Virtualizing Network Services over Software Defined Networks” and “How Google Manages its SDN Network”) and that Google has been building its own switches running standard Linux, but beyond the fact that the switch’s control plane is SDN-based, few details about the switch itself were known.

But at this year’s ONS, Google Fellow Amin Vahdat gave a deeper dive…

“The Jupiter fabrics can deliver more than 1 Petabit/sec of total bisection bandwidth. To put this in perspective, such capacity would be enough for 100,000 servers to exchange information at 10Gb/s each, enough to read the entire scanned contents of the Library of Congress in less than 1/10th of a second…
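The quoted figure is easy to sanity-check: 100,000 servers each sending at 10 Gb/s adds up to exactly 1 Pb/s. A quick back-of-the-envelope check (all numbers are from the talk, not measurements):

```python
# Back-of-the-envelope check of the 1 Pb/s bisection-bandwidth figure
# quoted above.
servers = 100_000
per_server_gbps = 10

total_gbps = servers * per_server_gbps   # 1,000,000 Gb/s
total_pbps = total_gbps / 1_000_000      # Gb/s -> Pb/s

print(total_pbps)  # 1.0, i.e. 1 Pb/s of bisection bandwidth
```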

We used three key principles in designing our datacenter networks:

  • We arrange our network around a Clos topology, a network configuration where a collection of smaller (cheaper) switches are arranged to provide the properties of a much larger logical switch.

  • We use a centralized software control stack to manage thousands of switches within the data center, making them effectively act as one large fabric.

  • We build our own software and hardware using silicon from vendors, relying less on standard Internet protocols and more on custom protocols tailored to the data center.”
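The first principle, a Clos topology, is what lets small commodity switches behave like one big switch: each leaf switch splits its ports between hosts and uplinks, and every pair of leaves is connected through every spine. A minimal sketch of the arithmetic behind a two-stage (leaf-spine) Clos fabric; the parameter names are illustrative and not Google's:

```python
# Minimal sketch of a two-stage (leaf-spine) Clos fabric built from small
# identical switches. Parameter names are illustrative, not Google's.
def clos_fabric(radix, leaves, spines):
    """Return (host count, equal-cost paths between leaves, oversubscription)."""
    # Each leaf splits its radix between host-facing ports and spine uplinks;
    # here every leaf keeps one uplink to every spine.
    uplinks_per_leaf = spines
    host_ports_per_leaf = radix - uplinks_per_leaf
    hosts = leaves * host_ports_per_leaf
    # Any two leaves can reach each other through every spine,
    # giving 'spines' equal-cost paths to spread traffic over.
    paths_between_leaves = spines
    oversubscription = host_ports_per_leaf / uplinks_per_leaf
    return hosts, paths_between_leaves, oversubscription

# 32-port switches, 16 leaves, 16 spines -> a non-blocking (1:1) fabric
hosts, paths, ratio = clos_fabric(radix=32, leaves=16, spines=16)
print(hosts, paths, ratio)  # 256 1.0:1 oversubscription, 16 paths
```

Scaling the logical switch then becomes a matter of adding leaves and spines rather than buying a bigger chassis, which is exactly the property the quote describes.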

Jupiter is the fifth generation of Google’s in-house-built switches:



Mr. Vahdat gave a few details about the fabric itself:


The datacenter switches are not running OSPF or IS-IS. Instead, Google has designed its own routing protocol for the following reasons:

  • Limited support for multi-path forwarding

  • No high-quality open source stacks

  • Poor scalability of broadcast-based protocols

  • Difficulty of managing a network of individually configured switches

As OSPF computes only shortest paths, Google felt that multi-path forwarding was better suited to optimizing the aggregate bandwidth of the datacenter. I suspect that Google never considered MPLS as a way to add multi-path forwarding on top of OSPF, probably because of the high level of configuration that would be required to set up the various paths.
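The usual way a fabric spreads traffic over many equal-cost paths is ECMP: hash each flow's 5-tuple so all packets of one flow take the same path (avoiding reordering) while different flows spread across all paths. An illustrative sketch of that selection logic; this is a generic ECMP example, not Google's actual mechanism:

```python
# Illustrative ECMP (equal-cost multi-path) selection: hash a flow's
# 5-tuple to pick one of N equal-cost paths. Generic example only,
# not Google's actual load-balancing mechanism.
import hashlib

def ecmp_path(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
    """Deterministically map a flow to one of num_paths next hops."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % num_paths

# Every packet of a given flow maps to the same path...
p1 = ecmp_path("10.0.0.1", "10.0.1.1", 40000, 80, "tcp", num_paths=16)
p2 = ecmp_path("10.0.0.1", "10.0.1.1", 40000, 80, "tcp", num_paths=16)
assert p1 == p2  # deterministic per flow, so no packet reordering
# ...while many distinct flows statistically spread across all 16 paths.
```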

However, Google’s proprietary routing protocol, Firepath, is link-state like OSPF, and it uses many of the standard routing messages found in OSPF and BGP: keepalives, link-state database exchanges, and interface state updates.
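What any link-state protocol, whether OSPF or Firepath as described here, ultimately computes is the same thing: each switch floods its adjacencies into a shared link-state database, then runs a shortest-path-first (Dijkstra) computation over it. A minimal sketch of that SPF step over a toy four-switch topology (the topology and switch names are illustrative):

```python
# Sketch of the shortest-path-first computation that any link-state
# protocol (OSPF, or Firepath as described) runs over its link-state
# database. The four-switch topology here is illustrative only.
import heapq

# Link-state database: switch -> {neighbor: link cost}
lsdb = {
    "s1": {"s2": 1, "s3": 1},
    "s2": {"s1": 1, "s4": 1},
    "s3": {"s1": 1, "s4": 1},
    "s4": {"s2": 1, "s3": 1},
}

def spf(lsdb, root):
    """Dijkstra over the link-state database, rooted at one switch."""
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in lsdb[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

print(spf(lsdb, "s1"))  # {'s1': 0, 's2': 1, 's3': 1, 's4': 2}
```

Note that s4 is reachable from s1 via two equal-cost paths (through s2 or s3), which is exactly the multi-path structure a Clos fabric produces everywhere.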

Firepath can also interface with the external BGP routers that Google uses outside its data center networks.


And Google has adapted the Linux stack on the switches to support Firepath:


Jupiter switches are running an updated version of Firepath.

For more on the switching architecture and key networking technologies, please watch Mr. Vahdat’s complete talk:

Update: Google has now published a detailed paper (43837) about Jupiter’s architecture and routing protocol.

Note: The picture above shows Google’s Jupiter switches.

Copyright © 2005-2015 by Serge-Paul Carrasco. All rights reserved.
Contact Us: asvinsider at gmail dot com.

Categories: Networking