Virtual Reality and Networks

Why visualization experts need to know about networks.

An interview with Tom DeFanti.

Until recently most computer graphics people didn't particularly care how their data got from one computer to another, but that was before distributed computing forced networking into a higher profile. Now optimizing the links between computers can be as critical to the performance of a visualization as is the software that converts the bits into images.

In a recent interview with Heide Foley of Mondo 2000 magazine, Tom DeFanti, associate director of NCSA's Virtual Environments Graphics Division and director of UIC's Electronic Visualization Laboratory, talked about the implications of these converging technologies. He discussed the lessons he learned from I-WAY at SC'95 as well as the networking hurdles virtual reality (VR) is trying to overcome.

HF: What's the big deal about ATM?

TD: ATM -- Asynchronous Transfer Mode -- networking is roughly 10,000 times faster than your PC's modem. Another 4- to 64-times speedup is in the works. Getting email 10,000 times faster is not particularly important, but let's look at what is worth a $10,000 per month phone bill and why.
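To put rough numbers on that claim, here is a back-of-the-envelope sketch. The specific rates are my illustrative assumptions, not figures from the interview: a 14.4 kbps modem versus an OC-3 ATM link at 155 Mbps.

```python
# Back-of-the-envelope check on the "10,000 times faster" figure.
# Illustrative assumptions: a 14.4 kbps modem vs. an OC-3 ATM link (155 Mbps).
modem_bps = 14_400
atm_bps = 155_000_000
ratio = atm_bps / modem_bps          # roughly 10,000x

# What that buys you: time to move a 100-megabyte dataset over each link.
size_bits = 100 * 8 * 1_000_000
modem_hours = size_bits / modem_bps / 3600
atm_seconds = size_bits / atm_bps

print(f"{ratio:.0f}x faster; 100 MB: {modem_hours:.1f} h vs. {atm_seconds:.1f} s")
```

The ratio lands near 10,000, and the dataset-transfer comparison (hours versus seconds) is what makes the difference matter for visualization rather than email.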

HF: Clearly this is all aimed at virtual worlds . . .

TD: And mass suspension of disbelief, which requires more power and infrastructure than anyone can afford. The genius of ATM is not so much that it works (people have been doing even faster networking for years now), but that it works using the same equipment that switches voice. This means that when the switch that handles the phone calls in Anytown, USA, is upgraded, it will automatically be able to handle both voice and high-speed data. It also means that the computing power you need to warp reality is not limited to what you have onsite. It can be anywhere. In fact, computing power may migrate into the switches themselves; after all, they're just computers. The dream [for high-performance computing] is to access computer power like you do electrical power; that is, you plug in, draw as much as you want, and pay for it by the unit. Nobody cares where the power is coming from.

The wonderful thing about fiber optics is that the fiber itself is not the limiting factor in transmitting data -- we have barely touched its potential capacity. The bottleneck is the electronics -- the routers and switches -- that take information on and off the fiber. They are slow, but they are getting faster. Perhaps photonics -- optical processing -- will take over. Replacing electronics with photonics is easier and far, far cheaper than redoing the fiber. This is very good news and a reason for optimism.

HF: So I-WAY at SC'95 was a stress test for hardware-software collaboration, right?

TD: That's for sure. The pieces were all there; no one had put them together. We wanted to connect thousands of fast processors over high-speed networks. I-WAY (way in the sense of extreme, as in way cool) became a hardware-software networking experiment totally tuned to driving virtual reality with high-end computational science.

HF: How big can such a network get? What are the limiting factors of scalability of parallel processing over networks?

TD: I-WAY was designed to help ferret out the right questions and to be a cyberlaboratory for working out the answers. The basic problem with massively scalable parallel processing is breaking down, or decomposing, the problem so that individual processors can do significant work in parallel with many others yet keep the data in sync. Certain problems decompose elegantly; some very badly. Most are in between. They exhibit sensitivity to the number of processors, the amount of memory in each processor, and the efficiency with which the processors cross-communicate. The communication mechanisms are the focus of much study now, and I-WAY is an ideal national facility for such experiments. The VR we do -- the CAVE and so on -- was developed specifically to stress test the networks and supercomputers and to provide a human-computer interface intense enough to display and navigate through the incredible number of parameters.
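The sensitivities Tom lists -- serial fraction, processor count, communication efficiency -- can be sketched with Amdahl's-law-style arithmetic. This is a textbook model with made-up numbers, not an I-WAY measurement:

```python
def speedup(serial_fraction, n_procs, comm_overhead=0.0):
    """Amdahl-style speedup: the serial part doesn't scale, and a
    per-step communication cost grows with the processor count."""
    parallel_fraction = 1.0 - serial_fraction
    time = serial_fraction + parallel_fraction / n_procs + comm_overhead * n_procs
    return 1.0 / time

# A problem that is 5% serial, with a small per-processor comm cost:
# speedup rises with processor count, then falls as communication dominates.
for n in (1, 16, 256, 4096):
    print(f"{n:5d} processors -> speedup {speedup(0.05, n, comm_overhead=1e-5):.1f}")
```

With these numbers, 256 processors beat 4,096 -- the non-monotone curve is exactly the "some decompose elegantly, some very badly" sensitivity in miniature.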

HF: The buzz is that supercomputing is going away.

TD: I was once asked what would replace supercomputing. I answered, "superduper computing." In the last 10 years, computational science has become a reality. Funding and machines have become generally available outside the major government high-energy physics labs. This is the revolution that will change our lives. The Web, by comparison, is simply a data structure.

HF: What? Certainly the Web is more than a data structure!

TD: Ultimately it will be. Adding intelligence to the Web is the direction in which many of us are going. Java applets, for instance, are downloadable code pieces that, among other things, do simulations (equations, basically). This advances the Web beyond being a huge storage disk, and it is why Java is so hot. Applets, though, are limited to the processing power on your desktop machine. Right now the Web stores information, like the sine and cosine tables you had in math class. But you really want to compute the sines and cosines directly from equations so that you don't have to interpolate between values. Some things should be computed directly, like spreadsheets. The goal of I-WAY is to let you click on an applet to run computations and simulations on dozens of major machines.
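Tom's table analogy is worth making concrete. A toy sketch of my own (nothing here is I-WAY code): a coarse stored table forces you to interpolate between entries, while computing from the equation gives the value directly.

```python
import math

# A coarse lookup table, like the printed tables Tom mentions:
# sine sampled every 10 degrees from 0 to 90.
TABLE = {d: math.sin(math.radians(d)) for d in range(0, 91, 10)}

def sin_interpolated(deg):
    """The 'stored data' route: linear interpolation between table entries."""
    lo = (deg // 10) * 10
    hi = min(lo + 10, 90)
    if lo == hi:
        return TABLE[lo]
    t = (deg - lo) / 10
    return TABLE[lo] * (1 - t) + TABLE[hi] * t

# The 'compute it' route needs no table and has no interpolation error.
deg = 37
print(sin_interpolated(deg), "vs.", math.sin(math.radians(deg)))
```

The interpolated value is close but not exact; the direct computation is. Scale "sine table" up to "simulation output" and you have the case for clicking an applet that runs the computation itself.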

HF: Can't workstations do computations well enough already?

TD: The lines are blurring, which is why classical supercomputing is vanishing. Workstations now have the memory and speed (and 64-bit hardware and software) that Cray supercomputers had just a few years ago. Jo(e) Scientist can do truly significant simulations on the desktop. However, computing in real time is different. I cannot imagine having enough computing power for that. We will get major advances through parallelism. Supercomputers will not disappear, but they will be constructed differently. For instance, frames in the movie Toy Story likely took 10,000 to 1 million times longer than real time to compute. Pixar used massive workstation parallelism to finish the job. If Toy Story had been done on one processor, it wouldn't have made it to theaters in your lifetime.
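Those figures imply a simple feasibility calculation. Taking the low end of Tom's range and a few illustrative assumptions of my own (a roughly 77-minute film at 24 frames per second, a 100-machine farm):

```python
# Back-of-the-envelope render-farm arithmetic, with illustrative figures.
slowdown = 100_000                 # low end of "10,000 to 1 million times"
movie_minutes = 77                 # roughly feature length (assumption)
fps = 24
frames = movie_minutes * 60 * fps
cpu_seconds = frames * (1 / fps) * slowdown   # movie seconds x slowdown
one_cpu_years = cpu_seconds / (3600 * 24 * 365)
farm_days = cpu_seconds / 100 / (3600 * 24)   # spread across 100 machines

print(f"one processor: {one_cpu_years:.0f} years; "
      f"100 machines: {farm_days:.0f} days")
```

Even at the low end of the range, a single processor needs on the order of fifteen years; spread across a modest farm, the job fits in a production schedule. That is the parallelism argument in four lines of arithmetic.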

HF: Is massive parallelism a way to give the Web consciousness?

TD: We wish! We really do. Maybe it is the word consciousness that's difficult to deal with, as opposed to smartness. If there is consciousness to be had, though, it would probably be a good idea to be able to document and recall it somehow.

One major gap we have is in recording virtual reality worlds and our paths through them. Remember that writing was invented to preserve culture and knowledge. TV and movies are used in the same way. VR needs to have the capability for recording, editing, and playing back experiences for it to be taken seriously as a cultural transformation mechanism. Another way of saying it is "if you can't reproduce it, it ain't science." Recording is a fundamental part of science.

HF: Why can't you record VR?

TD: An argument we have had with industry for a long time is about why visualization systems should be video compatible and why HDTV and workstation screens ought to be the same, or at least interoperable. The TV manufacturing industry has its own serious legacy problems. Recording VR is not a matter of getting lots of high-resolution screens on and off tape, because too much important geometrical information is lost in the projection onto the frame. Since VR works at gigabits per second, you have to somehow compress the data to get it over networks or even off a local SCSI disk. We are working on compressing VR experiences by saving only the geometrical data and a user's path through it. Remember, a path is really a series of points. Just as a movie is a series of still images that you buzz through, a path is a series of perspective points that show you where you're pointing. If you preserve the paths as geometries, you can interact with them.
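The "path as a series of perspective points" idea translates directly into a small data structure. A minimal sketch, assuming timestamped samples and linear interpolation on playback -- the names and format here are hypothetical, not EVL's actual recording format:

```python
from dataclasses import dataclass

@dataclass
class PathSample:
    """One perspective point along a user's path through a virtual world."""
    t: float           # timestamp in seconds
    position: tuple    # (x, y, z) viewer location
    direction: tuple   # unit vector: where the viewer is pointing

# Recording is just appending samples as the user moves.
path = [
    PathSample(0.0, (0.0, 1.7, 0.0), (0.0, 0.0, -1.0)),
    PathSample(0.5, (0.1, 1.7, -0.4), (0.1, 0.0, -1.0)),
]

def replay_position(path, t):
    """Toy playback: linearly interpolate the viewer position at time t."""
    for a, b in zip(path, path[1:]):
        if a.t <= t <= b.t:
            u = (t - a.t) / (b.t - a.t)
            return tuple(pa + u * (pb - pa)
                         for pa, pb in zip(a.position, b.position))
    return path[-1].position
```

Because the path is stored as geometry rather than rendered pixels, playback is not locked to the original viewpoint -- you can re-enter the world and interact with the recorded path, which is exactly the property Tom is after.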

What you don't want to do in virtual reality is transmit bit maps -- you want to send them as geometries. The bandwidth of the CAVE is something like 8 gigabits a second, which is absurd. So you wouldn't record them as bit maps anyway -- at least not until you can recreate everything you could possibly want in real time -- which, as you know, is the frontier.
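The contrast between shipping bit maps and shipping geometry is easy to put in numbers. A back-of-the-envelope sketch with illustrative figures of my own -- four 1280x1024 walls at 60 Hz and a 100,000-triangle scene, chosen to land near the 8-gigabit figure:

```python
# Bitmap route: every pixel of every wall, every frame.
walls = 4
width, height = 1280, 1024
bits_per_pixel = 24
fps = 60
bitmap_bps = walls * width * height * bits_per_pixel * fps   # ~7.5 Gbit/s

# Geometry route: send the scene once, then only a viewpoint per frame.
triangles = 100_000
scene_bits = triangles * 3 * 3 * 32      # 3 vertices x (x, y, z) x 32-bit floats
viewpoint_bps = fps * 6 * 32             # position + pointing direction per frame

print(f"bitmaps:  {bitmap_bps / 1e9:.1f} Gbit/s, forever")
print(f"geometry: {scene_bits / 1e6:.1f} Mbit once, then {viewpoint_bps} bit/s")
```

Gigabits per second continuously versus a one-time scene download and a trickle of viewpoint updates -- five orders of magnitude, which is why the geometric representation is the one worth transmitting.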

HF: What did you discover from the SC'95 stress test?

TD: The need for quality human-computer interfaces to the ATM networking gear is extreme. I-WAY was designed to bring together networking and computing philosophies in very deep ways. Computer scientists know advanced methods of presenting information via graphical user interfaces, and they know how to efficiently manage large databases. Networking experts need these tools implemented. Virtual reality -- which amazingly enough seems to be a mature technology in 1996 -- is likely to provide the best graphical computer interface. The amount of spatial data and its real-time demands far exceed the capacity of current Web-browser technology.

Portrait of Tom DeFanti by Jeff Carpenter, NCSA.


NCSA: The National Center for Supercomputing Applications
access / Summer 1996 issue


Last Modified: July 17, 1996