Annual SC Conference Powered Again by SCinet Network

Ultra High Capacity SCinet Network to Accommodate Multiple Terabytes of Data for Annual Conference on High Performance Computing, Networking and Storage

For seven days, the Oregon Convention Center will be home to the most powerful network in the world – SCinet. Built each year for the annual SC conference, SCinet brings to life a highly sophisticated, extreme-performance local and wide area network fabric that can support the revolutionary applications and network experiments that have become a trademark of the conference.

SC09, held November 14-20, 2009, focuses on the latest advances in high performance computing, networking, storage and analysis. With a massive 400 gigabits per second (Gbps) of bandwidth capacity – more than most networks in the world – SCinet enables exhibitors from industry and academia to demonstrate their most aggressive supercomputing and networking applications, whether in production or in an experimental or pre-commercial state.
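As a back-of-the-envelope illustration (not part of the original announcement), the 400 Gbps aggregate figure can be put in perspective by computing how quickly it could, in principle, move a terabyte of data, ignoring all protocol overhead:

```python
# Illustrative arithmetic only: time to move data at SCinet's quoted
# 400 Gbps aggregate capacity, assuming the full rate and no overhead.

CAPACITY_GBPS = 400  # SCinet aggregate capacity quoted in the article

def seconds_to_move(terabytes, gbps=CAPACITY_GBPS):
    """Seconds to move `terabytes` (decimal TB) at `gbps`, overhead-free."""
    bits = terabytes * 1e12 * 8   # 1 TB = 10^12 bytes = 8 x 10^12 bits
    return bits / (gbps * 1e9)

print(seconds_to_move(1.0))   # 20.0 seconds per terabyte at full capacity
```

In other words, at full capacity the network could in theory drain a terabyte in about 20 seconds – which is why "multiple terabytes" of conference traffic is well within its reach.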

SCinet is designed and built by over 140 volunteers from universities, government and industry and leverages $20 million in donated equipment from leaders in the technology sector who seek the opportunity to showcase their products in this highly advanced network environment. This year, the team has deployed over 200 miles of fiber optic cable in the Oregon Convention Center and is utilizing over 34 miles of fiber in the regional Portland area to make these network capabilities possible.

SCinet links the conference center to research and commercial networks around the world, such as the Department of Energy’s ESnet, the Internet2 Network and National LambdaRail. SCinet will support the exa-floods of data from SC09 exhibitors and attendees, measuring and monitoring every aspect of the network's performance to give the public a unique real-time window into the inner workings and operations of this powerful network. Live network traffic data will be available online at: http://measurement.sc09.org/public

“The SCinet team has worked diligently over the past twelve months to design and build what we consider to be the most robust network capable of supporting the leading-edge applications of the conference’s exhibitors and attendees, who are known for pushing network and computing resources to the extreme each year,” said Ralph McEldowney, Chief, Advanced Technologies Section, Air Force Research Laboratory Supercomputing Resource Center and SC09 SCinet committee chair.

Highlights of the SCinet infrastructure include:

Showfloor Infrastructure
Made possible through routing and switching equipment from contributors including Juniper Networks, Cisco and Brocade, SCinet’s showfloor infrastructure supports more than 150 industry and research exhibitors with network connectivity ranging from one Gigabit Ethernet (GbE) to 10 GbE. This creates a high performance infrastructure where exhibitors will showcase their latest systems, services, research and scientific achievements.

In addition, the SCinet team will enable Wi-Fi wireless connectivity throughout the convention center for exhibitors and attendees using equipment provided by the Oregon Convention Center and Xirrus. High-speed access to the commercial Internet will be provided for all SC09 participants through SCinet’s collaboration with the Network for Education and Research in Oregon (NERO) and 360networks Inc.

Wide Area Connectivity
The SCinet wide area network (WAN) team is led by engineers from NERO, which is providing the critical regional networking to connect the convention center to the Internet and research networks around the world. This year, several collaborators will leverage the wide area network to showcase new advances in 40 and 100 Gigabit per second (Gbps) network transport technology, which would at least quadruple the average capacity of most networks today.

Vendors contributing equipment for wide area network connectivity include: Nortel, Ciena, Infinera, Fujitsu, and Cisco. Fiber optic cable between the Oregon Convention Center and downtown Portland is provided by Qwest and Level 3 Communications. Internet exchange facilities used by SCinet in Portland are provided by Level 3.

OpenFabrics
For the past four years, SCinet’s OpenFabrics team has built an InfiniBand network infrastructure at SC, providing connectivity to numerous organizations and vendors in support of OpenFabrics-enabled demonstrations. InfiniBand is a standards-based interconnect technology that delivers the low-latency, high-bandwidth capabilities typical of high performance computing environments like those demonstrated at SC. These capabilities will enable a variety of application and storage services, allowing exhibitors to use the InfiniBand network for cloud computing, server-to-server processing, and visualization. For SC09, the InfiniBand network will be built using 12X InfiniBand Quad Data Rate (QDR) circuits – 120 gigabits per second (Gbps) of signaling bandwidth per link, or 96 Gbps of usable data throughput after 8b/10b encoding – throughout the entire network. The InfiniBand network is built on technology and services provided by: Avago Technologies, Bay Microsystems, Finisar, Intel, Luxtera, Mellanox, NERSC, and QLogic.
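The two InfiniBand figures that appear in this article (120 Gbps and 96 Gbps) come from the same link: a QDR lane signals at 10 Gbps, and 8b/10b line coding means only eight of every ten bits carry payload. A small sketch of that arithmetic, as an editorial aside rather than anything from the press release:

```python
# Sketch of the InfiniBand QDR link-rate arithmetic: per-lane signaling
# rate times lane count gives the raw rate; 8b/10b coding gives the
# usable data rate.

QDR_LANE_SIGNALING_GBPS = 10   # QDR signaling rate per lane
ENCODING_EFFICIENCY = 8 / 10   # 8b/10b line coding overhead

def link_rates(lanes):
    """Return (signaling_gbps, data_gbps) for an InfiniBand QDR link."""
    signaling = lanes * QDR_LANE_SIGNALING_GBPS
    return signaling, signaling * ENCODING_EFFICIENCY

print(link_rates(12))   # (120, 96.0) -- the two figures quoted for 12X QDR
print(link_rates(4))    # (40, 32.0)  -- the more common 4X QDR link
```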

Extreme Networks (Xnet)
Since 1999, SCinet has provided a capability called Xnet for research and industry leaders to demonstrate emerging, pre-commercial, cutting-edge breakthroughs in high performance networking technologies. This year Xnet will highlight three experimental technology demonstrations including:

  • 12X InfiniBand QDR (96 Gbps): SCinet’s OpenFabrics initiative has teamed up with the HPC Advisory Council to make use of the 12X InfiniBand QDR infrastructure and will demonstrate several applications among all InfiniBand network participants, including Remote Desktop over InfiniBand (RDI) for live desktop sharing among 21 unique users.
  • Multi-Layer Application-Aware Network Provisioning and Grooming Using OpenFlow: Ciena, Stanford University, and Internet2 will demonstrate a unified control plane for packet and circuit networks based on the OpenFlow architecture, which separates the data and control planes, provides a common flow-based data-path abstraction for layers L0-L4, and exposes a control-plane API to the network.
  • The High Performance Digital Media Network (HPDMnet) is an experimental network research initiative that is designing and implementing the first international high performance service created for high quality, large-scale digital media, including support for extremely high volume flows.

The Bandwidth Challenge
At every SC conference since 2000, teams of scientists and engineers have competed in the Bandwidth Challenge to see who could make the most productive use of the huge bandwidth provided by SCinet. And while no group has achieved the unstated goal of flooding the network to the breaking point, each year has seen creative applications that move record amounts of data across the network. In today’s petaflop era of high performance computing, this year's challenge asks participants to demonstrate how large a data set they can transmit in a fixed period of time and to demonstrate new techniques for moving data sets for scientific projects. Five teams representing multiple institutions from around the world will compete in the Challenge, and the winner will be announced on Thursday, November 19, 2009.
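The quantity the Challenge asks entrants to maximize – data set size moved in a fixed window – is simple to estimate for any sustained rate. The numbers below are purely illustrative (the contest's actual rates and time window are not stated in this article):

```python
# Illustrative only: decimal terabytes transferable at a sustained rate
# over a fixed time window -- the quantity Bandwidth Challenge entrants
# try to maximize. The 100 Gbps / 15 min figures are hypothetical.

def terabytes_in_window(gbps, minutes):
    """Decimal TB transferable at `gbps` sustained over `minutes`."""
    bits = gbps * 1e9 * minutes * 60
    return bits / 8 / 1e12

print(terabytes_in_window(100, 15))   # 11.25 TB in a 15-minute window
```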

Sustainability
As the entire SC09 conference focuses on the theme of sustainability, SCinet will also play a role. For SC09, SCinet engineers will focus on collecting quantitative sustainability measurements to provide a benchmark for future conferences, as well as research data for other energy efficiency and sustainability research projects. SCinet engineers will collect detailed power consumption data for the main SCinet Network Operations Center and display those measurements through the SCinet measurement infrastructure (http://measurement.sc09.org/public). SCinet will also provide power measurement and network infrastructure to other sustainability efforts in the conference that wish to directly measure and quantify their carbon impact from electrical power usage.

Collaborators
SCinet is the result of the hard work and significant contributions of many government, research, education and corporate collaborators. Collaborators in SCinet for 2009 include:

Air Force Research Laboratory DoD Supercomputing Resource Center, AMD, Ames Laboratory, Apparent Networks, Argonne National Laboratory, Arista, Army Research Laboratory DoD Supercomputing Resource Center, Army Space and Missile Defense Command, Avago Technologies, AVETeC, Bay Microsystems, Bivio, Brocade, CA Labs, Ciena, Cinch, Cisco, Clemson University, Computer Sciences Corporation, Corporation for Education Network Initiatives in California, Darkstrand, Electronic Visualization Laboratory, EMCORE, Energy Sciences Network, Finisar, Florida LambdaRail, Force10, Fujitsu, Gigamon, HPC Advisory Council, Indiana University, Infinera, InfiniBand Trade Association, InMon, Intel, Internet2, Juniper Networks, Lamprey Networks, Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Level 3 Communications, Lockheed Martin, Lonestar Education and Research Network, Los Alamos National Laboratory, Louisiana Optical Network Initiative, Luxtera, Mellanox, National Center for Supercomputing Applications, National Energy Research Scientific Computing Center, National LambdaRail, Nemean, Network for Education and Research in Oregon, Nortel, Oak Ridge National Laboratory, Obsidian, Ohio Academic Resources Network, Ohio Supercomputer Center, OpenFabrics Alliance, Pacific Northwest National Laboratory, Pacific Wave, Purdue University, QLogic, Qwest, RAID Inc., San Diego Supercomputer Center, Sandia National Laboratories, SARA Computing and Networking Services, Solera Networks, South Carolina Computing, Starlight, Sun Microsystems, Texas A&M University, Translight, Tyco Electronics, University of California-San Francisco, University of Amsterdam, University of Delaware, University of Florida, University of New Hampshire InterOperability Laboratory, University of Texas, University of Utah, University of Wisconsin, Voltaire, Zarlink and Ziti.

Source: http://sc09.supercomputing.org/
