country into 7.8 million square cells, each with an area of one square kilometer. For
each cell we had to consider as many as 25 variables, ranging from average monthly
precipitation to the nitrogen content of the soil. A single PC or workstation could not
accomplish the task. We needed a parallel-processing supercomputer–and one that we
could afford!
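As a rough back-of-the-envelope illustration (the multiplication follows from the figures above; the interpretation is ours): 7.8 million cells times 25 variables comes to roughly 195 million data values, and any analysis that must make repeated passes over a data set of that size quickly outgrows a single mid-1990s PC or workstation.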
Our solution was to construct a computing cluster using obsolete PCs that
ORNL would have otherwise discarded. Dubbed the Stone SouperComputer because
it was built essentially at no cost, our cluster of PCs was powerful enough to produce
ecoregion maps of unprecedented detail. Other research groups have devised even
more capable clusters that rival the performance of the world’s best supercomputers
at a mere fraction of their cost. This advantageous price-to-performance ratio has
already attracted the attention of some corporations, which plan to use the clusters for
such complex tasks as deciphering the human genome. In fact, the cluster concept
promises to revolutionize the computing field by offering tremendous processing
power to any research group, school or business that wants it.
Beowulf and Grendel
The notion of linking computers together is not new. In the 1950s and 1960s the
U.S. Air Force established a network of vacuum-tube computers called SAGE to
guard against a Soviet nuclear attack. In the mid-1980s Digital Equipment
Corporation coined the term "cluster" when it integrated its mid-range VAX
minicomputers into larger systems. Networks of workstations, generally less
powerful than minicomputers but faster than PCs, soon became common at research
institutions. By the early 1990s scientists began to consider building clusters of PCs,
partly because their mass-produced microprocessors had become so inexpensive.
What made the idea even more appealing was the falling cost of Ethernet, the
dominant technology for connecting computers in local-area networks.
Advances in software also paved the way for PC clusters. In the 1980s Unix
emerged as the dominant operating system for scientific and technical computing.
Unfortunately, the operating systems for PCs lacked the power and flexibility of
Unix. But in 1991 Finnish college student Linus Torvalds created Linux, a Unix-like
operating system that ran on a PC. Torvalds made Linux available free of charge on
the Internet, and soon hundreds of programmers began contributing improvements.
Now wildly popular as an operating system for stand-alone computers, Linux is also
ideal for clustered PCs.
The first PC cluster was born in 1994 at the NASA Goddard Space Flight
Center. NASA had been searching for a cheaper way to solve the knotty
computational problems typically encountered in earth and space science. The space
agency needed a machine that could achieve one gigaflops–that is, perform a billion
floating-point operations per second. (A floating-point operation is equivalent to a
simple calculation such as addition or multiplication.) At the time, however,
commercial supercomputers with that level of performance cost about $1 million,
which was too expensive to be dedicated to a single group of researchers.
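As a back-of-the-envelope illustration (ours, combining NASA's target with the ORNL figures cited earlier): at one gigaflops, applying a single floating-point operation to each of the roughly 195 million values in the ecoregion data set would take about 0.2 second, so even an analysis requiring thousands of operations per value, repeated over many runs, becomes practical at that level of performance.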