Multiverse Computing Secures Funding for Energy-Efficient Quantum AI

In an era dominated by the rapid growth of artificial intelligence (AI), the escalating energy demands of large language models (LLMs) like ChatGPT and Bard pose significant sustainability challenges. To address this issue, Multiverse Computing, a Spanish quantum AI software startup, is leveraging quantum-inspired tensor networks to create smaller, energy-efficient AI models. Its platform, CompactifAI, promises to drastically reduce energy consumption while maintaining model performance.

Multiverse Computing recently secured a significant, undisclosed investment from CDP Venture Capital, a Rome-based firm that backs early-stage ventures across Italy. The investment follows an oversubscribed €25 million funding round closed nine months earlier. The new funds will enable the company to enhance CompactifAI and expand its commercial presence in Italy.


Strategic Expansion into Italy

Multiverse Computing’s decision to grow its footprint in Italy is a strategic move to strengthen its presence in Europe and tap into a G7 market. Speaking to EE Times Europe, Gianni Del Bimbo, COO of Multiverse Computing, emphasized the company’s intent to collaborate with both public and private entities in Italy.

“We chose to expand into Italy to strategically reinforce our presence in Europe and enter a new G7 market,” said Del Bimbo. He noted the potential for partnerships with top Italian corporations and public institutions. Additionally, the company aims to build strong connections with Italian universities to access local talent.

This expansion includes growing Multiverse’s Milan office and creating new public-private collaborations, enabling the company to enhance its quantum AI solutions while fostering innovation within Italy’s technology ecosystem.


CompactifAI: An Energy-Efficient Quantum AI Platform

Multiverse Computing’s flagship technology, CompactifAI, is a large language model compression technique based on quantum-inspired tensor networks (TNs). By truncating the correlations within a model’s self-attention and multi-layer perceptron layers, CompactifAI achieves significant reductions in model size without compromising accuracy.

How CompactifAI Works:

  • Tensorization: Replaces selected LLM layers with compact tensor-network factorizations.
  • Correlation Truncation: Discards weak correlations within the model’s weights, shrinking the network while preserving performance.
  • Customizable Compression: Tunes the level of compression by adjusting the bond dimension of the tensor networks.
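As a rough illustration of the idea (not Multiverse’s proprietary method, which is not public), a single weight matrix can be “correlation-truncated” with a singular value decomposition, where the number of singular values retained plays the role of the bond dimension:

```python
import numpy as np

# Illustrative sketch only: this is generic low-rank truncation, the simplest
# relative of tensor-network compression, not CompactifAI itself.

def compress_layer(W, bond_dim):
    """Approximate weight matrix W with two smaller factors of rank bond_dim."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep only the bond_dim largest singular values (the strongest correlations)
    A = U[:, :bond_dim] * S[:bond_dim]   # shape (m, bond_dim)
    B = Vt[:bond_dim, :]                 # shape (bond_dim, n)
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))
A, B = compress_layer(W, bond_dim=128)

original = W.size
compressed = A.size + B.size
print(f"parameters: {original} -> {compressed} "
      f"({100 * (1 - compressed / original):.0f}% reduction)")
```

With a 1,024×1,024 layer and a bond dimension of 128, the two factors hold 262,144 parameters instead of 1,048,576, a 75% reduction, in the same range as the parameter cuts CompactifAI reports.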

CompactifAI by the Numbers:

  • Parameter Reduction: Cuts LLM parameters by 70% to 80%.
  • Memory Savings: Reduces memory requirements by 93%.
  • Training Efficiency: Shortens training time by 50%.
  • Inference Speed: Improves inference time by 25%.

These optimizations not only reduce the computational resources required but also significantly lower the energy consumption of LLMs.
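To put those percentages in perspective, here is a back-of-the-envelope calculation applying them to a hypothetical 7-billion-parameter model stored in 16-bit precision (the model size and precision are our assumptions for illustration, not figures from Multiverse):

```python
# Hypothetical 7B-parameter model at 2 bytes per parameter (fp16).
params = 7_000_000_000
bytes_per_param = 2

base_memory_gb = params * bytes_per_param / 1e9      # ~14 GB uncompressed
compressed_memory_gb = base_memory_gb * (1 - 0.93)   # quoted 93% memory savings

compressed_params_low = params * (1 - 0.80)   # quoted 80% parameter cut
compressed_params_high = params * (1 - 0.70)  # quoted 70% parameter cut

print(f"memory: {base_memory_gb:.1f} GB -> {compressed_memory_gb:.2f} GB")
print(f"parameters: {params / 1e9:.0f}B -> "
      f"{compressed_params_low / 1e9:.1f}B-{compressed_params_high / 1e9:.1f}B")
```

Under these assumptions, a model that needs roughly 14 GB of memory would fit in about 1 GB, small enough for on-premises or edge hardware.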


Benchmarking with Leonardo Supercomputer

As part of its efforts to validate CompactifAI, Multiverse Computing has collaborated with the European High-Performance Computing Joint Undertaking (EuroHPC JU) and the Leonardo supercomputer.

The Leonardo supercomputer, operated by Cineca, ranks 9th on the TOP500 list and plays a crucial role in Europe’s high-performance computing ecosystem. The system consists of two primary partitions:

  1. Data-Centric Module: Comprising 1,536 compute nodes with Intel Sapphire Rapids CPUs.
  2. Booster Module: Featuring 3,456 compute nodes powered by Nvidia A100 GPUs.

Multiverse Computing has been allocated GPU node hours on Leonardo to benchmark CompactifAI’s energy efficiency and performance. This collaboration aims to compare CompactifAI against industry-leading models, such as Meta’s LLaMA family.

Additionally, Cineca is preparing to introduce a new AI-focused partition, LISA (Leonardo Improved Supercomputer Architecture), by early 2025. While Multiverse Computing is not yet involved in the LISA upgrade, Del Bimbo hinted at potential future collaborations.


Addressing AI’s Energy Consumption Problem

Large-scale LLMs are notorious for their high computational demands. For example, generating a single 100-word email using GPT-4 requires 0.14 kilowatt-hours (kWh) of electricity, equivalent to powering 14 LED bulbs for one hour.
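The bulb comparison is easy to verify if one assumes a typical 10 W LED bulb (the wattage is our assumption; it is not stated in the figure above):

```python
# Sanity check: 0.14 kWh vs. running LED bulbs for one hour.
email_energy_kwh = 0.14   # quoted energy for one 100-word GPT-4 email
led_bulb_watts = 10       # assumed typical LED bulb wattage
hours = 1

bulbs_equivalent = email_energy_kwh * 1000 / (led_bulb_watts * hours)
print(f"{email_energy_kwh} kWh ≈ {bulbs_equivalent:.0f} LED bulbs for {hours} hour")
```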

The energy-intensive nature of LLMs has even driven companies like Microsoft to reopen shuttered nuclear power plants to meet their energy needs. Microsoft’s agreement to purchase all the energy generated by a Pennsylvania-based nuclear plant underscores the scale of the challenge.

CompactifAI addresses these concerns by enabling smaller, more efficient models that can run on-premises, reducing reliance on cloud servers and cutting operational costs.


Funding and Partnerships

The latest investment in Multiverse Computing was facilitated by CDP Venture Capital as part of a Series A round. The financing came through the Corporate Partners I fund, which focuses on energy and technology sectors. Limited partners include prominent Italian corporations such as:

  • Baker Hughes
  • BNL BNP Paribas
  • Edison
  • GPI
  • Italgas
  • Snam
  • Terna Forward

These partnerships highlight Multiverse Computing’s strategic alignment with energy and technology leaders to drive innovation in AI.


Future Vision and Global Impact

Multiverse Computing envisions a future where AI models are both powerful and sustainable. By reducing the energy consumption of LLMs, CompactifAI sets a new standard for efficiency in AI. The company’s expansion into Italy marks the beginning of its journey to revolutionize AI across Europe and beyond.

With CompactifAI, Multiverse Computing is not just addressing the computational and energy challenges of AI but also paving the way for broader adoption of quantum-inspired solutions in both public and private sectors.

FAQs

  1. What is CompactifAI?
    CompactifAI is a quantum-inspired AI compression technology by Multiverse Computing that reduces the size, energy use, and memory requirements of large language models.
  2. How does CompactifAI work?
    It uses tensor networks to compress layers within LLMs, maintaining performance while drastically cutting computational and energy demands.
  3. What is the goal of CompactifAI?
    To create smaller, more energy-efficient AI models that can operate on-premises without relying on cloud servers.
  4. How much energy does CompactifAI save?
    CompactifAI reduces memory requirements by 93%, training time by 50%, and inference time by 25%, significantly lowering energy consumption.
  5. Why is Multiverse Computing expanding into Italy?
    The company aims to strengthen its presence in Europe, forge partnerships with Italian firms, and tap into the local talent pool.
  6. What is the Leonardo supercomputer?
    Leonardo is a European pre-exascale supercomputer operated by Cineca. It supports high-performance computing and AI research projects.
  7. Is Multiverse Computing part of the Leonardo upgrade?
    Not currently, but the company is open to future collaborations involving the upcoming LISA partition.
  8. Who are Multiverse Computing’s key investors?
    Their recent funding round involved CDP Venture Capital and several Italian corporations, including Baker Hughes and Edison.
  9. Why are LLMs energy-intensive?
    LLMs require significant computational resources for training and inference, leading to high energy consumption.
  10. What industries can benefit from CompactifAI?
    Industries such as finance, healthcare, energy, and education can leverage CompactifAI for efficient and sustainable AI solutions.