Scientists Develop New Scientific GPTs with Ethics and Trust

A group of scientists recently announced plans to develop their own trillion-parameter digital brain, trained exclusively on scientific information.

To do this, they have launched the Trillion Parameter Consortium (TPC), with the National Center for Supercomputing Applications (NCSA) as a founding member. The consortium brings together scientists from the world’s most prestigious research institutes, federal laboratories, academia, and industry to tackle the challenge of building large-scale artificial intelligence systems and advancing trustworthy, reliable AI for scientific discovery.

According to Cybernews, the name “Trillion Parameter Consortium” reflects the ambition of building state-of-the-art LLMs for science and engineering. The idea for the collaboration emerged several years ago, when the scientific community realized it should join forces, since training LLMs demands enormous amounts of machine time and effort.

The TPC website reads: “It became clear that while the community could develop a number of smaller models independently and compete for cycles, a broader ‘AI for Science’ community must work together if we are to create models that are at the scale of the largest private models.”

The scientists hope that their AI models will be trustworthy and reliable; for them, trillion-parameter models represent “the frontier of large-scale AI.” Rick Stevens, Argonne associate laboratory director for computing, environment, and life sciences, explained that at his laboratory and at a growing number of partner institutions around the world, teams are beginning to develop frontier AI models for scientific use and are preparing enormous collections of previously untapped scientific data for training.

The NCSA is reportedly developing its own AI-focused advanced computing and data resource, called DeltaAI, which is expected to play an instrumental role in the TPC’s efforts. According to the press release, DeltaAI is set to come online in 2024, tripling NCSA’s AI-focused computing capacity and greatly expanding the capacity available within the NSF-funded advanced computing ecosystem.

Another AI model being developed by a founding member is Argonne National Laboratory’s AuroraGPT, which, after months of training, could ultimately become a massive brain for scientific researchers.

Ultimately, the TPC collaboration aims to leverage global efforts, identify and prepare high-quality training data, design and evaluate model architectures, and develop innovations in model evaluation strategies with respect to bias, trustworthiness, and goal alignment.