For the past few years, for example, ATPESC has included an entire day focused on training attendees to use machine learning and deep learning methods for science.
“All too often, students and postdocs join research groups and accumulate an ad-hoc knowledge of high performance computing that is mainly focused on the methods already in use by their group,” Loy added. “ATPESC provides exposure to a very broad range of topics and serves as a core curriculum. It plays an important role in priming the pipeline of the next generation of computational scientists. Some of our attendees have gone on to lead prominent DOE computing projects or become faculty who have, in turn, sent their students to ATPESC. We even had ATPESC alumni as speakers this year.”
One such alum was Suyash Tandon, a software system design engineer at AMD. He was an ATPESC attendee last year and returned this year as a speaker.
“ATPESC is a great venue for the scientific computing community to meet and learn from one another,” said Tandon. “The ATPESC workshop brings the ‘bleeding tip’ of the developments in the HPC realm to the attendees, and they get to ‘dip their hands in grease’ with hands-on sessions.”
As part of the training on hardware architectures, attendees learned about DOE’s upcoming exascale supercomputers as well as leading-edge AI platforms, including the Cerebras, SambaNova, Groq and Habana systems being deployed at Argonne. ATPESC provided an opportunity for DOE computing facilities and AI companies to connect with and further educate the community about emerging technologies that are redefining scientific computing.
“We hope that the attendees learned about the differentiation and value that SambaNova’s complete software and hardware solutions bring to running large-scale deep learning and AI for science applications with ease of use at the highest levels of performance, accuracy and scale,” said Marshall Choy, vice president of products at SambaNova.
While this year’s program was held virtually to keep attendees safe during the pandemic, Loy and his team hope to return in person next year when ATPESC marks its 10th anniversary.
“While we have continually tuned and updated the curriculum, we will be conducting a more thorough review in the coming year,” said Loy. “Additionally, we are thinking about ways to engage the growing community of ATPESC alumni, which now numbers more than 700.”
The ALCF, OLCF, and NERSC are DOE Office of Science User Facilities located at Argonne, Oak Ridge, and Lawrence Berkeley national laboratories, respectively.
See original with photos here: https://www.alcf.anl.gov/news/attendees-worldwide-learn-supercomputing-skills-annual-argonne-training-program
The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. Supported by the U.S. Department of Energy’s (DOE’s) Office of Science, Advanced Scientific Computing Research (ASCR) program, the ALCF is one of two DOE Leadership Computing Facilities in the nation dedicated to open science.
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future. With employees from more than 60 nations, Argonne is managed by UChicago Argonne, LLC for the U.S. Department of Energy’s Office of Science.
The U.S. Department of Energy’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science.
Source: Argonne Leadership Computing Facility