Contrary to the opinions of some commentators, it’s hardly a disaster for IBM that the Department of Energy (DOE) chose Cray to build its next-generation El Capitan supercomputer at the agency’s Lawrence Livermore National Laboratory (LLNL). Nor was it a disaster that the DOE chose Cray to develop the Frontier system for Oak Ridge National Laboratory and the Shasta-based exascale system for Argonne National Laboratory.
Disappointing? Sure. It would have been nice for IBM to follow the triumphs of its CORAL-class Summit and Sierra systems with another DOE win. But in supercomputing, as in most things, it’s wiser to look forward and plan for future opportunities than to dwell on minor setbacks. And there is every reason to believe that IBM will have future opportunities to build more world-class supercomputers.
Why do I say that? First and foremost, the journey up to and down from peak supercomputing performance follows a notoriously slippery slope. Once you reach the top be sure to enjoy the view while you can, because it won’t last for long. This is a reality that IBM knows and understands far better than most vendors. In addition, there are technologies under development or on the horizon that are likely to take supercomputers to whole new levels.
Not surprisingly, IBM is working to develop and evolve those new technologies and appears to be well-ahead of much of the competition. Let’s consider these points and what they mean to IBM, its customers and the supercomputing industry.
Life at the Top
To get a sense of just how fragile success in high-end supercomputing can be, let’s take a look at the systems featured in the twice-yearly Top500.org lists of leading supercomputing installations. In the lists posted on the group’s website (1993-present), the length of time a No. 1-ranked supercomputer has held that position varies from six months (a single list) to three years.
There are a few notable exceptions, with some countries enjoying exceptional runs. Installations in Japan dominated the Top500 lists in the mid-1990s (the Fujitsu-built Numerical Wind Tunnel) and early 2000s (the NEC-built Earth Simulator). China’s Tianhe-2 at the National Super Computer Center in Guangzhou took the No. 1 position in June 2013 and held it until June 2016, when it was replaced by the Sunway TaihuLight at the National Supercomputing Center in Wuxi, which remained on top through 2017.
U.S. vendors have also done well. Intel’s ASCI Red installation at the DOE’s Sandia National Laboratories held the No. 1 position from 1997 to the fall of 2000. IBM’s BlueGene/L system at LLNL hit the top spot on the November 2004 list and remained there until it was bumped off in 2008 by another IBM supercomputer, Roadrunner, at Los Alamos National Laboratory. Roadrunner stayed in the lead until the November 2009 list was published.
Overall, IBM has developed five systems that Top500.org has ranked No. 1 since it began publishing: ASCI White (2000-2001), BlueGene/L (2004-2007), Roadrunner (2008-2009), the BlueGene/Q-based Sequoia (2012) and the Summit system at the DOE’s Oak Ridge National Laboratory, which has held the top spot since June 2018. Along with Summit, IBM has the No. 2-, No. 10-, No. 11- and No. 13-ranked systems on the current Top500 list, as well as three others in the top 100.
Also notable is that three of the four most highly ranked IBM systems on the Top500 also appear in the top 10 of the latest Green500 list of the most energy-efficient supercomputers: No. 2 (Summit), No. 6 (Pangea III) and No. 7 (Sierra). That’s a critical point, especially when one considers the potential impact of climate change on traditional energy sources.
That efficiency can also deliver significant financial benefits to IBM customers. For example, Total, a global energy supermajor that commissioned Pangea III, reported that the new system consumes less than 10 percent of the energy per petaflop of its predecessor, Pangea, an SGI/HPE system that currently ranks 38th on the Top500 list and 172nd on the Green500. IBM’s ability to blend top-line performance with energy efficiency is likely to help supercomputing move more swiftly into commercial applications and use cases.
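To see what an “energy per petaflop” comparison like Total’s amounts to, here is a back-of-the-envelope sketch. The power and performance figures below are purely illustrative placeholders, not the actual numbers for Pangea or Pangea III:

```python
# Energy-per-unit-of-performance is just sustained power draw divided by
# sustained performance; a lower figure means a more efficient system.

def megawatts_per_petaflop(power_mw: float, perf_pflops: float) -> float:
    """Sustained power (MW) divided by sustained performance (petaflops)."""
    return power_mw / perf_pflops

# Hypothetical numbers chosen only to illustrate the shape of the comparison.
old_system = megawatts_per_petaflop(power_mw=4.5, perf_pflops=5.0)
new_system = megawatts_per_petaflop(power_mw=1.4, perf_pflops=25.0)

ratio = new_system / old_system
print(f"new system uses {ratio:.0%} of the energy per petaflop")
```

With these placeholder figures the newer machine comes in well under 10 percent of its predecessor’s energy per petaflop, which is the kind of result Total reported.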
IBM’s Supercomputing Innovations
It’s worth noting how often IBM’s supercomputers have featured unique technologies and designs. ASCI White leveraged the company’s POWER3 processors, while the BlueGene systems featured low-frequency, low-power embedded PowerPC cores with floating-point accelerators. Roadrunner, the first hybrid supercomputer to reach No. 1, combined the Cell processors developed by IBM, Toshiba and Sony with AMD Opteron CPUs. Summit is another hybrid system, pairing IBM POWER9 processors with NVIDIA Tesla GPUs over a Mellanox EDR InfiniBand interconnect.
Why are these unique elements and design shifts important? Because they reflect the changing nature of supercomputing. These systems aren’t simply built to be big and fast. They’re highly complex tools designed to perform specific kinds of astoundingly difficult work: answering questions most people consider impenetrable and making possible tasks once considered impossible. Just as supercomputers and supercomputing have evolved, so too have the jobs and questions they’re used to address.
The Collaboration of Oak Ridge, Argonne and Livermore (CORAL) program of which Summit is a part was designed to develop heterogeneous systems that blended classic supercomputing with AI and deep learning capabilities. That addressed a key practical issue—that ever-larger systems were creating massive volumes of data whose analysis was beyond the capabilities of traditional tools and applications.
Not surprisingly, lessons learned in developing Summit and Sierra are finding their way into other IBM efforts. For example, IBM researchers and geoscientists from Eni, a global energy company, built an AI-based augmented intelligence platform to enable so-called “cognitive discovery” functions supporting Eni’s oil and gas exploration efforts. Using public and proprietary data, combined with knowledge derived from numerical simulations and experimental setups, cognitive discovery aims to enhance initial assessments of potential drilling sites and to identify viable opportunities for oil and gas exploration.
The cognitive-enabled heterogeneous supercomputing model that the CORAL program envisioned will continue to evolve in the Aurora, Frontier and El Capitan systems, but what comes after that? A still-emerging area that seems particularly intriguing is quantum computing (see photo above), where qubit-based systems are enabling researchers to explore and experiment with quantum concepts. Areas where quantum solutions could be advantageously deployed include materials science and discovery, risk analysis, financial services and machine learning.
IBM has been active in quantum computing development for nearly four decades, resulting in the IBM Q quantum systems that the company makes available via the IBM Q Experience. That online service provides access to two 5-qubit processors and a 16-qubit processor, where researchers can explore tutorials and simulations and run algorithms and experiments (more than 100,000 experiments have been run to date). IBM is also developing larger quantum systems, including the 20-qubit IBM Q System One announced in January and a 50-qubit prototype system.
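The sort of small-scale experiments researchers run on services like the IBM Q Experience often start with the basics of superposition and entanglement. As a rough illustration of those concepts, here is a toy two-qubit state-vector simulation in plain Python, using the standard Hadamard and CNOT gates to prepare a Bell state; this is an illustrative sketch under simplified assumptions, not IBM’s software or the service’s actual API:

```python
import math

# A 2-qubit state is a list of 4 complex amplitudes, indexed by basis
# state |b1 b0> (qubit 0 is the least-significant bit of the index).

def h_on_qubit0(state):
    """Apply a Hadamard gate to qubit 0 of a 2-qubit state vector."""
    s = 1 / math.sqrt(2)
    new = [0j] * 4
    for i in range(4):
        i0, i1 = i & ~1, i | 1       # partner indices with qubit 0 = 0 / 1
        sign = -1 if (i & 1) else 1  # H puts a minus sign on the |1> branch
        new[i] = s * (state[i0] + sign * state[i1])
    return new

def cnot_q0_controls_q1(state):
    """Flip qubit 1 exactly when qubit 0 is 1 (CNOT, control = qubit 0)."""
    return [state[i ^ 2] if (i & 1) else state[i] for i in range(4)]

# Start in |00>, then prepare the Bell state (|00> + |11>) / sqrt(2):
state = [1 + 0j, 0j, 0j, 0j]
state = h_on_qubit0(state)        # superposition on qubit 0
state = cnot_q0_controls_q1(state)  # entangle the two qubits

probs = [abs(a) ** 2 for a in state]
print([round(p, 3) for p in probs])  # -> [0.5, 0.0, 0.0, 0.5]
```

Measuring this state yields 00 or 11 with equal probability and never 01 or 10 — the qubits’ outcomes are correlated, which is the entanglement that algorithms run on real qubit hardware exploit.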
In short, IBM appears well-positioned to continue developing advanced IBM Q systems and to pursue and capture commercial opportunities in related areas, including business applications and hybrid quantum/supercomputing systems.
Over the past quarter-century, IBM and its strategic partners have been at the forefront of supercomputing systems and achievements. The company has enjoyed an extraordinary amount of time at the peak of the Top500.org list. Like every other supercomputing vendor, IBM has seen its industry-leading solutions pursued and eventually overtaken by newer systems. But rather than retire from the field, the company has responded to those setbacks with fresh innovations that put it back on top. Given that history, it is entirely likely that IBM will continue to add to its notable record of supercomputing success.
Charles King is a principal analyst at PUND-IT and a regular contributor to eWEEK. © 2019 Pund-IT, Inc. All rights reserved.