High-Performance Supercomputing at Mail Order Prices

The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign (UIUC) announced today that it has successfully run a complex astrophysical hydrodynamics computation on a Windows NT supercluster built from several hundred processors of mass-market PCs. The National Computational Science Alliance (Alliance) and NCSA, its leading-edge site, have found a way to tightly couple commercial PCs into superclusters, which promise eventually to reduce the cost of running many supercomputing applications by an order of magnitude, said Larry Smarr, director of the Alliance and NCSA.

Andrew Chien, a professor in the University of Illinois Department of Computer Science and a member of the Alliance Parallel Computing Team, and his research group, working with staff from NCSA, recently completed construction of a 256-processor Windows NT supercluster for high-performance computing research. The supercluster consists of 32 Compaq and 96 Hewlett-Packard Windows NT PC workstations. These 128 workstations, connected by a Myrinet network, use Chien's Illinois Fast Messages middleware to create a powerful 256-processor supercluster. NCSA plans to upgrade the cluster to 512 processors next year, and perhaps eventually to thousands of processors.
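As a quick check of the figures above, the node counts quoted in this release combine as follows (a trivial sketch; the counts and clock speeds are taken directly from the text):

```python
# Cluster composition as described in the release; each workstation
# holds two Pentium II processors, so 128 nodes yield 256 processors.
compaq_nodes = 32    # Compaq Professional Workstation 6000, dual 333 MHz
hp_nodes = 96        # HP Kayak XU, dual 300 MHz
cpus_per_node = 2

total_nodes = compaq_nodes + hp_nodes
total_cpus = total_nodes * cpus_per_node
print(total_nodes, total_cpus)  # prints "128 256"
```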

This week, ZEUS-MP, a computational fluid dynamics code written by the UIUC Laboratory for Computational Astrophysics and used for simulations of astrophysical phenomena, ran on 192 processors of the new supercluster using the Message Passing Interface (MPI) standard. The ZEUS code is one of the major application codes used by the Alliance Cosmology Team. The supercluster achieved this high level of parallelism running a sophisticated application only six weeks after the initial delivery of the PCs. Furthermore, unlike earlier national experiments, the NCSA cluster employs the widely used Microsoft Windows NT operating system.

In addition to ZEUS, ISIS++, a portable object-oriented framework for solving sparse systems of linear equations such as those found in large-scale finite element analysis codes, also ran on 192 processors. The code was developed at Sandia National Laboratories. Other supercomputer codes that will run on the NT supercluster during the next month include applications in elementary particle physics, materials science, and fluid turbulence. While it is still too early to predict how many types of applications will parallelize efficiently on the NT supercluster or what the precise price-performance will be, NCSA researchers believe it is encouraging that these supercomputer codes have been ported so easily to this new environment.

"NCSA and the Alliance believe we are at the beginning of the third wave in supercomputing," said Smarr. "The first wave started in the mid-'70s with the Cray 1, when supercomputers were created from specialized vector processors and used proprietary operating systems. The second wave began in the early '90s with the rise of UNIX/RISC microprocessor-based scalable systems, such as those which will remain NCSA's premier supercomputers for the next five years. The third wave represents the commoditization of supercomputing, in which we grow arbitrarily large clusters of NT/Intel PCs. The Alliance is creating the software systems needed to begin to match such clusters with the needs of high-end science and engineering users in the country."

The NT supercluster will be demonstrated at Alliance'98, a conference that brings together Alliance partners on the University of Illinois campus April 27-30.

While the supercluster does not have the more sophisticated Distributed Shared Memory architecture of either NCSA's SGI/Cray Origin2000 or Hewlett-Packard SPP-2000, it should run many applications that currently use those systems at NCSA. The cluster uses software called High Performance Virtual Machine, or HPVM, which synthesizes clusters of Windows NT processors into a high-performance environment. HPVM, developed by Chien and his students in the Concurrent Systems Architecture Group, enables each node of an NT cluster to communicate at a bandwidth of just under 80 megabytes per second and a latency under 11 microseconds using Myricom's Myrinet interconnect. As a result, it will offer users in the national scientific and engineering community a low-cost alternative to conventional high performance machines used to carry out high-end computational research. The demo will involve 192 processors of the supercluster working together as one HPVM machine with 50 gigabytes of memory, 400 gigabytes of disk space and almost 4 gigabytes per second of bisection bandwidth across a Myrinet network.
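The quoted figures of just under 80 megabytes per second of bandwidth and under 11 microseconds of latency suggest the familiar linear "alpha-beta" cost model for a point-to-point message. The sketch below illustrates that model using the quoted numbers; it is an estimate for illustration, not an NCSA benchmark:

```python
# Rough per-message cost from the quoted HPVM figures: latency of
# about 11 microseconds, bandwidth of about 80 MB/s. The linear
# alpha-beta model below is a standard approximation, not a measurement.

LATENCY_S = 11e-6      # per-message startup cost, seconds (quoted figure)
BANDWIDTH_BPS = 80e6   # sustained bandwidth, bytes/second (quoted figure)

def transfer_time(message_bytes: float) -> float:
    """Estimated time to deliver one message of the given size."""
    return LATENCY_S + message_bytes / BANDWIDTH_BPS

# Small messages are latency-bound; large ones are bandwidth-bound.
for size in (1_000, 100_000, 10_000_000):
    print(f"{size:>10} bytes -> {transfer_time(size) * 1e3:.3f} ms")
```

For a 1 KB message the startup latency dominates, while a 10 MB message is almost entirely bandwidth-limited, which is why tightly coupled codes like ZEUS-MP benefit from Myrinet's low latency.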

Using HPVM on the NT cluster will allow users to immediately port MPI codes to the cluster. Further, a version of High Performance Fortran (HPF) that takes advantage of HPVM's Fast Messages Application Programming Interface (API) is expected from the Portland Group in several months.

"By creating HPVM, Chien added the missing ingredient that allows scientists to build huge parallel computations on Windows NT clusters," said Jim Gray, senior researcher at Microsoft Corp. "The 192-processor ZEUS example shows what is possible -- supercomputer performance at mail-order prices. NCSA's Windows NT supercluster is pioneering the use of commodity software and hardware for large-scale scientific computation. The software that NCSA offers, like HPVM and Symera, promises to make it much easier to program and manage these superclusters."

"The NCSA team has done it again!" said George Spix, chief architect of Microsoft's Consumer Platforms Division. "From the days of the Illiac IV to today's leveraging of the New Computing Industry's solutions, the University of Illinois at Urbana-Champaign has provided not only leading supercomputing platforms, but the tools, applications, and business alliances necessary for their productive use and widespread adoption."

Besides providing users with familiar programming and run-time services through the HPVM software, the Alliance NT cluster research team is working to provide a supercluster that is large enough and balanced enough to be useful for application scientists doing real scientific research.

"Much of what we expect to learn with this initial cluster will come from working with real applications, not just benchmarking tools," said Charlie Catlett, NCSA's Senior Associate Director of Science and Technology. "We hope to use most or all of these processors on a single application. This really is scalable, commodity-based supercomputing."

The supercluster workstations are the Compaq Professional Workstation 6000 model, which features dual Intel 333 MHz Pentium II processors, and the HP Kayak XU PC workstation, featuring dual 300 MHz Intel Pentium II processors. Myrinet interconnects from Myricom network the individual workstations together and provide the high bandwidth and low latency needed for high-performance computing.

"This is important work," said Dick Lampman, director of Enterprise Systems and Solutions Research Center at HP Labs, HP's central research organization in Palo Alto, Calif. "NCSA and the Alliance are pioneering the use of NT clusters for scalable, high-performance computing."

"NCSA's work in delivering supercomputing capabilities with Windows NT-based platforms like Compaq Professional Workstations proves that NT and NT-based systems are eminently scalable and powerful," said Les Crudele, Vice President and General Manager of Compaq's Workstation Division. "Their work with our high-performance, open standards-based and affordable platforms will make supercomputing power available to a much broader range of scientific and engineering application vendors. Ultimately, the possible benefits to scientists and engineers in scalability, cost savings and productivity are truly impressive."

The National Computational Science Alliance is a partnership to prototype an advanced computational infrastructure for the 21st century and includes more than 50 academic, government and industry research partners from across the United States. The Alliance receives core funding from the National Science Foundation and cost-sharing at partner institutions.

The National Center for Supercomputing Applications is the leading-edge site for the Alliance. NCSA is a leader in the development and deployment of cutting-edge high-performance computing, networking, and information technologies. The National Science Foundation, the state of Illinois, the University of Illinois, industrial partners, and other federal agencies fund NCSA.

The Laboratory for Computational Astrophysics (LCA) is a joint project of the National Computational Science Alliance and the Department of Astronomy at the University of Illinois at Urbana-Champaign. The LCA develops and disseminates theoretical modeling software for astrophysics research. The LCA is directed by Michael L. Norman, Senior Research Scientist at NCSA and Professor in the Department of Astronomy.




NCSA Access ©1998 Board of Trustees of the University of Illinois.
All rights reserved. Do not copy or redistribute in any form.
Published by the NCSA Communications Group Staff. Send comments to