Insider Brief
Multiverse Computing has received funding and supercomputer time to build a large-scale language model (LLM) for the AI-BOOST Large AI Grand Challenge, an open challenge program designed to be a benchmark for the European artificial intelligence (AI) community. Funded by the European Commission, AI-BOOST awarded Multiverse Computing 800,000 supercomputer hours.
PRESS RELEASE — Multiverse Computing, a global leader in AI and quantum software solutions, has been awarded funding and supercomputer time to build a large-scale language model (LLM) for the Large AI Grand Challenge by AI-BOOST, an open challenge program designed to benchmark the European artificial intelligence (AI) community.
AI-BOOST, funded by the European Commission, has awarded Multiverse Computing 800,000 hours of supercomputer time to build and train the LLM from the ground up using quantum and quantum-inspired techniques.
“Winning this competition is recognition of Multiverse Computing’s strength in using quantum and quantum-inspired technologies to build faster, more energy-efficient LLMs,” said co-founder and CEO Enrique Lizaso Olmos. “We are proud that EU leaders have placed their trust in our expertise and capabilities to create a new class of LLMs with faster training, smaller datasets, and lower operational costs.”
The winning team of the Large AI Grand Challenge will have 12 months to develop a large-scale AI model with a minimum of 30 billion parameters and train it on one of Europe’s supercomputers.
The award also represents an important step in the European Commission’s efforts to apply quantum and quantum-inspired technologies in the field of AI, particularly to large language models. The field of quantum AI is attracting increasing attention, as it is recognised as a potential solution to the growing demand for large-scale computational and energy resources in AI.
Multiverse Computing’s new CompactifAI software uses quantum-inspired techniques to make LLMs 95% smaller and 50% cheaper to train and run, while maintaining high accuracy. A new study from the Multiverse Computing science team compared CompactifAI’s performance to common compression techniques and found that CompactifAI delivers comparable performance while pruning redundant elements of the model.
As described in the paper, CompactifAI reduces the number of parameters in the model by 70-80% and cuts memory requirements by over 95%. This significant reduction cuts both training and inference time by at least half. All these changes result in only a 2% loss in accuracy. With a typical LLM currently costing over $100 million to train, these savings point to a clear path toward making LLMs cheaper and greener.
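CompactifAI’s actual method relies on proprietary tensor-network decompositions, which are not detailed here. As a rough illustration of the general idea behind such quantum-inspired compression, the sketch below (an assumption for illustration, not Multiverse Computing’s implementation) factors a dense weight matrix into two low-rank matrices via truncated SVD, trading a small reconstruction error for a large drop in parameter count:

```python
import numpy as np

def compress_layer(W: np.ndarray, rank: int):
    """Factor an m x n weight matrix W into A (m x rank) and
    B (rank x n) via truncated SVD, so W ~ A @ B.
    Parameter count drops from m*n to rank*(m + n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]  # absorb singular values into A
    B = Vt[:rank, :]
    return A, B

# Toy example: a 1024 x 1024 layer truncated to rank 64
W = np.random.randn(1024, 1024).astype(np.float32)
A, B = compress_layer(W, rank=64)

original = W.size                 # 1024 * 1024 parameters
compressed = A.size + B.size      # 64 * (1024 + 1024) parameters
print(f"parameter reduction: {1 - compressed / original:.0%}")
```

In a real model the truncation rank would be tuned per layer, and the compressed network briefly retrained ("healed") to recover most of the lost accuracy, which is consistent with the small accuracy loss reported above.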
“Our recent benchmark paper shows that our software significantly reduced the size of a large language model with 7 billion parameters,” said Román Orús, chief scientific officer at Multiverse Computing. “We are excited to rise to the challenge of using quantum-inspired techniques in developing our models from scratch, and we also have detailed plans to use quantum computers in the near future to further accelerate the LLM training process.”
The winner of the Large-Scale AI Grand Challenge will also have the opportunity to collaborate with the European Commission’s AI and Robotics Group and the European High Performance Computing Joint Undertaking.
Challenge applicants submitted a detailed project plan for developing a large-scale AI model from scratch, a justification for the use of HPC, and a demonstration of their team’s expertise in using HPC systems to train the underlying model.
The overall goal of AI-BOOST is to attract talent from across the EU and associated countries to drive scientific progress in AI. The project will foster collaboration between key players in the AI community to define compelling AI challenges that have the potential to lead to reliable, human-centric, real-world solutions.