Collaboration to develop distributed training of Large Language Models

[Image (https://www.r-ccs.riken.jp/en/fugaku/about/). Credit: RIKEN]

Tokyo Institute of Technology (Tokyo Tech), Tohoku University, Fujitsu Limited, and RIKEN today announced that they will embark on the research and development of distributed training of large language models (LLMs) [1] on the supercomputer Fugaku in May 2023, within the scope of the initiatives for the use of Fugaku defined by Japanese policy.

LLMs are deep learning AI models that serve as the core of generative AI, including ChatGPT [2]. By disclosing the results of this R&D in the future, the four organizations aim to improve the environment for creating LLMs that can be widely used by academia and companies, contribute to improving AI research capabilities in Japan, and increase the value of utilizing Fugaku in both academic and industrial fields.

Background

While many anticipate that LLMs and generative AI will play a fundamental role in the research and development of technologies for security, the economy, and society overall, advancing and refining these models will require high-performance computing resources that can efficiently process large amounts of data.

Tokyo Tech, Tohoku University, Fujitsu, and RIKEN are undertaking an initiative to this end that will focus on research and development toward distributed training of LLMs.

Implementation period

From May 24, 2023 to March 31, 2024 (*period of the initiative for the use of Fugaku defined by Japanese policy)

Roles of each organization and company

The technology developed through this initiative will allow the organizations to efficiently train large language models in the highly parallel computing environment of the supercomputer Fugaku. The roles of each organization and company are as follows:

  • Tokyo Institute of Technology: Oversight of overall processes, parallelization and acceleration of LLMs
  • Tohoku University: Collection of training data, selection of models
  • Fujitsu: Acceleration of LLMs
  • RIKEN: Distributed parallelization and communication acceleration of LLMs, acceleration of LLMs
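
For readers unfamiliar with distributed training, the sketch below illustrates the basic data-parallel pattern such work builds on: each process holds a replica of the model, trains on its own batch of data, and gradients are averaged across processes after every backward pass. This is a minimal, hypothetical example using PyTorch's DistributedDataParallel; the announcement does not specify the actual frameworks, model, or communication backend used on Fugaku, so the model, data, and backend shown here are placeholders.

```python
# Minimal data-parallel training sketch (illustrative only; not the project's actual code).
# Launch with, e.g.: torchrun --nproc_per_node=4 train_sketch.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # "gloo" is a CPU-friendly backend; the backend actually used on Fugaku is an assumption here.
    dist.init_process_group(backend="gloo")
    rank = dist.get_rank()

    # Placeholder "language model": a tiny embedding + linear head instead of a real LLM.
    vocab_size, hidden = 1000, 64
    model = torch.nn.Sequential(
        torch.nn.Embedding(vocab_size, hidden),
        torch.nn.Linear(hidden, vocab_size),
    )
    model = DDP(model)  # gradients are all-reduced (averaged) across ranks automatically
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(10):
        # Synthetic next-token prediction batch; each rank sees different random data.
        tokens = torch.randint(0, vocab_size, (8, 32))
        logits = model(tokens)                          # (batch, seq, vocab)
        targets = torch.roll(tokens, shifts=-1, dims=1) # shift by one position as targets
        loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))

        optimizer.zero_grad()
        loss.backward()   # DDP hooks average gradients across all processes here
        optimizer.step()
        if rank == 0:
            print(f"step {step}: loss {loss.item():.3f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

In practice, training an LLM at Fugaku's scale combines this kind of data parallelism with model (tensor and pipeline) parallelism and communication tuning, which is where the parallelization, acceleration, and communication roles listed above come in.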

Future plans

To support Japanese researchers and engineers in developing LLMs in the future, the four organizations plan to publish the research results obtained within the scope of the initiatives for the use of Fugaku defined by Japanese policy on GitHub [3] and Hugging Face [4] in fiscal 2024. They also anticipate that many researchers and engineers will participate in improving the base model and in new applied research, creating efficient methods that lead to the next generation of innovative research and business results.
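
Once models are published on Hugging Face, researchers and engineers would typically be able to load and run them with a few lines of code. The sketch below uses the Hugging Face transformers library with a hypothetical repository name, since no actual model identifier has been announced.

```python
# Illustrative only: "example-org/fugaku-llm-base" is a hypothetical placeholder,
# not an announced repository name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/fugaku-llm-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "スーパーコンピュータ「富岳」は"  # "The supercomputer Fugaku is ..."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The actual repository names, licenses, and usage conditions will only be known once the results are released.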

The four organizations will additionally consider collaborations with Nagoya University, which develops data generation and learning methods for multimodal applications in industrial fields such as manufacturing, and CyberAgent, Inc., which provides data and technology for building LLMs.

Comments

Comment from Toshio Endo, Professor, Global Scientific Information and Computing Center, Tokyo Institute of Technology:

“The collaboration will integrate parallelization and acceleration of large-scale language models using the supercomputer “Fugaku” by Tokyo Tech and RIKEN, Fujitsu’s development of high-performance computing infrastructure software for Fugaku and performance tuning of AI models, and Tohoku University’s natural language processing technology. In collaboration with Fujitsu, we will also utilize the small research lab we established under the name of “Fujitsu Collaborative Research Center for Next Generation Computing Infrastructure” in 202X. We look forward to working together with our colleagues to contribute to the improvement of Japan’s AI research capabilities, taking advantage of the large-scale distributed deep learning capabilities offered by “Fugaku”.”

Comment from Kentaro Inui, Professor, Graduate School of Information Sciences, Tohoku University:

“We aim to build a large-scale language model that is open-source, available for commercial use, and primarily based on Japanese data, with transparency in its training data. By enabling traceability of the training data, we anticipate that this will facilitate research robust enough to scientifically verify issues related to the black-box problem, bias, misinformation, and the so-called “hallucination” phenomena common to AI. Leveraging the insights into deep learning and Japanese natural language processing developed at Tohoku University, we will construct large-scale models. We look forward to contributing to the enhancement of AI research capabilities in our country and beyond by sharing the results of this research with researchers and developers.”

Comment from Seishi Okamoto, EVP, Head of Fujitsu Research, Fujitsu Limited:

“We are excited for the chance to leverage the powerful, parallel computing resources of the supercomputer Fugaku to supercharge research into AI and advance the research and development of LLMs. Going forward, we aim to incorporate the fruits of this research into Fujitsu’s new AI Platform, codenamed “Kozuchi,” to deliver paradigm-shifting applications that contribute to the realization of a sustainable society.”

Comment from Satoshi Matsuoka, Director, RIKEN Center for Computational Science:

“The A64FX [5] CPU is equipped with an AI acceleration function known as SVE (Scalable Vector Extension). However, software development and optimization are essential to maximize its capabilities and to utilize it for AI applications. We feel that this joint research will play an important role in bringing together experts in LLMs and computer science in Japan, including RIKEN R-CCS researchers and engineers, to advance techniques for building LLMs on the supercomputer “Fugaku”. Together with our collaborators, we will contribute to the realization of Society 5.0.”

Project name

Distributed Training of Large Language Models on Fugaku (Project Number: hp230254)

[Terms]

[1] Large-scale language models: Neural networks with hundreds of millions to billions of parameters that have been pre-trained using large amounts of data. Recently, GPT in language processing and ViT in image processing have become known as representative large-scale pre-trained models.

[2] ChatGPT: A large-scale language model for natural language processing developed by OpenAI that supports tasks such as interactive dialogue and automatic sentence generation with high accuracy.

[3] GitHub: A platform used to publish open-source software around the world.

[4] Hugging Face: A platform used to publish AI models and datasets around the world.

[5] A64FX: An Arm-based CPU developed by Fujitsu and installed in the supercomputer Fugaku.

 

About Tokyo Institute of Technology

Tokyo Tech stands at the forefront of research and higher education as the leading university for science and technology in Japan. Tokyo Tech researchers excel in fields ranging from materials science to biology, computer science, and physics. Founded in 1881, Tokyo Tech hosts over 10,000 undergraduate and graduate students per year, who develop into scientific leaders and some of the most sought-after engineers in industry. Embodying the Japanese philosophy of “monotsukuri,” meaning “technical ingenuity and innovation,” the Tokyo Tech community strives to contribute to society through high-impact research. https://www.titech.ac.jp/english/

Media Contact

Emiko Kawaguchi
Tokyo Institute of Technology
kawaguchi.e.aa@m.titech.ac.jp
Office: +81-3-57342975
