Run:AI, an Israeli startup that provides orchestration and virtualization software for artificial intelligence, announced that it has raised an additional $30 million in Series B funding, which will be used to fund rapid expansion.

The growth of artificial intelligence in recent years, the Run:AI team explained, is directly linked to the availability of high computing power. As technological challenges grow in complexity, artificial intelligence models trained on huge data sets require ever more computing power.

The most advanced form of artificial intelligence, deep learning, typically uses graphics processing units (GPUs) or other specialized hardware to train deep learning models.

Run:AI cites an OpenAI study finding that the compute demanded by deep learning networks has doubled every 3.5 months since 2012. To meet this demand, huge artificial intelligence clusters are being deployed on-premises, in public cloud environments and even at the edge.

Due to this rapidly increasing demand, Run:AI argues, inefficiencies in computing infrastructure are slowing companies' ability to bring practical artificial intelligence solutions to market.

When GPUs are assigned statically to researchers, the Run:AI team points out, resources sit idle even as GPU demand grows. In addition, most artificial intelligence is developed on cloud-native infrastructure that was originally built to run workloads on CPUs, not GPUs, and that lacks many of the compute scalability capabilities artificial intelligence needs.

To make things worse, Run:AI points out, GPUs are not virtualized and cannot be shared between multiple applications or users. The result is typical utilization of around 25 percent in artificial intelligence clusters and low productivity for data science teams.
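The utilization gap described above can be illustrated with a toy model. The numbers and scenario below are purely illustrative assumptions, not Run:AI data: a small cluster where each researcher owns a fixed GPU quota versus one where idle GPUs are lent to whoever has pending work.

```python
# Toy model of static GPU assignment vs. pooled scheduling.
# All numbers here are illustrative assumptions, not Run:AI figures.
GPUS = 8
DEMAND = [2, 0, 4, 0]  # GPUs each of 4 researchers actively needs right now
QUOTA = 2              # static model: each researcher owns 2 GPUs

# Static assignment: a researcher can never use more than their quota,
# and GPUs owned by idle researchers sit unused.
static_busy = sum(min(d, QUOTA) for d in DEMAND)
static_util = static_busy / GPUS

# Pooled scheduling: any idle GPU can serve any pending workload.
pooled_busy = min(sum(DEMAND), GPUS)
pooled_util = pooled_busy / GPUS

print(f"static utilization: {static_util:.0%}")  # 50%
print(f"pooled utilization: {pooled_util:.0%}")  # 75%
```

Even though total demand (6 GPUs) fits comfortably in the cluster, static quotas strand capacity: the busy researcher is capped while idle researchers' GPUs go unused.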

Run:AI aims to address these challenges. The Israeli startup has developed an orchestration and virtualization software layer tailored to the unique needs of artificial intelligence workloads running on GPUs and similar chipsets.

The platform is the first to bring OS-level virtualization software to GPU-based workloads, an approach inspired by the CPU virtualization and management techniques that revolutionized computing in the 1990s.

Run:AI's Kubernetes-based container platform for artificial intelligence clouds efficiently pools and shares GPUs, automatically assigning the necessary amount of resources, from fractions of a single GPU to multiple GPUs spanning multiple nodes.
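The fractional-allocation idea described above can be sketched as a simple bin-packing routine. This is a minimal, hypothetical illustration, not Run:AI's actual implementation (which runs as a scheduler on top of Kubernetes); all class and function names here are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A cluster node; each entry in `gpus` is the free fraction left
    on one GPU (1.0 = fully free). Hypothetical model for illustration."""
    name: str
    gpus: list

def allocate(nodes, request: float):
    """Place a request for `request` GPUs onto the cluster. The request
    may be a fraction of one GPU (e.g. 0.5) or span several whole GPUs
    across several nodes. Returns (node, gpu_index, fraction) placements."""
    placements = []
    remaining = request
    for node in nodes:
        for i, free in enumerate(node.gpus):
            if remaining <= 0:
                return placements
            take = min(free, remaining, 1.0)
            if take > 0:
                node.gpus[i] -= take       # reserve this slice
                placements.append((node.name, i, take))
                remaining -= take
    if remaining > 0:
        raise RuntimeError("not enough GPU capacity in the cluster")
    return placements

cluster = [Node("node-a", [1.0, 1.0]), Node("node-b", [1.0])]
print(allocate(cluster, 0.5))  # half a GPU on one device
print(allocate(cluster, 2.0))  # spans leftover fractions across both nodes
```

The point of the sketch is the granularity: a single pool serves requests below one GPU and above one GPU with the same mechanism, which is what lets shared clusters absorb demand that static per-GPU assignment cannot.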

Businesses and large research centres use Run:AI to optimize resources for both training and inference; better use of their computing infrastructure lets them bring artificial intelligence solutions to market more quickly.

Since its launch in 2019, Run:AI has built a global customer base, especially in the automotive, financial, defence, manufacturing and healthcare sectors. According to data provided by the company, customers using Run:AI record an average increase in GPU utilization from 25 to 75 percent; one customer saw the speed of its experiments increase by 3,000 percent after installing the Run:AI platform.
