Run:AI is an Israeli startup that has developed a compute management platform of the same name, designed to orchestrate and accelerate artificial intelligence and machine learning workloads.

The company has now announced the launch of a new Researcher UI, as well as integrations with machine learning tools including Kubeflow, MLflow and Apache Airflow.

The new UI option is part of the Run:AI platform.

There are dozens of data science tools used to run experiments, Run:AI notes, and naturally some data scientists are more comfortable with one tool and others with another.

Run:AI dynamically allocates GPUs to data science jobs across an entire organization, regardless of the machine learning tools users rely on to build and manage their models.
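As a rough sketch of what that tool-agnosticism looks like from the data scientist's side, the Python snippet below is an ordinary training loop instrumented with standard MLflow tracking calls; it contains no scheduling logic at all, and the GPUs it runs on are assumed to be assigned by an external orchestrator such as Run:AI. The train_one_epoch helper is invented for the illustration.

```python
# Minimal sketch: a training script that logs to MLflow.
# The script knows nothing about GPU scheduling; an external
# orchestrator decides which GPUs it receives.
# `train_one_epoch` is a hypothetical stand-in for real training code.
import random

import mlflow


def train_one_epoch(epoch: int) -> float:
    """Placeholder for a real training step; returns a fake loss."""
    return 1.0 / (epoch + 1) + random.random() * 0.01


def main() -> None:
    with mlflow.start_run(run_name="demo-training-run"):
        mlflow.log_param("epochs", 5)
        for epoch in range(5):
            loss = train_one_epoch(epoch)
            mlflow.log_metric("loss", loss, step=epoch)


if __name__ == "__main__":
    main()
```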

Teams may have guaranteed quotas, but their workloads can also use any idle GPU resource that is available: the platform creates logical GPU fractions, spreads jobs across multiple GPUs and multiple GPU nodes for distributed training, and maximizes the utilization of the GPU infrastructure.
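The quota model described above can be illustrated in a few lines of Python. The sketch below captures only the "guaranteed quota plus borrowing of idle GPUs" idea; it is not Run:AI's actual scheduling algorithm, and the team names and GPU counts are invented.

```python
# Simplified sketch of "guaranteed quota plus opportunistic borrowing":
# every team is always entitled to its guaranteed quota (a scheduler
# can preempt borrowers to honour it), and on top of that it may borrow
# any GPUs currently idle in the cluster. Illustration only, not
# Run:AI's actual algorithm; all names and numbers are invented.

TOTAL_GPUS = 16
QUOTAS = {"team-a": 8, "team-b": 4, "team-c": 4}   # guaranteed shares
IN_USE = {"team-a": 2, "team-b": 4, "team-c": 1}   # GPUs busy right now


def allocatable(team: str) -> int:
    """GPUs the team could still be granted: it can always reclaim its
    unused guaranteed quota, and beyond that it may borrow whatever is
    idle cluster-wide."""
    idle = TOTAL_GPUS - sum(IN_USE.values())
    guaranteed_headroom = max(QUOTAS[team] - IN_USE[team], 0)
    return max(guaranteed_headroom, idle)


if __name__ == "__main__":
    for team in QUOTAS:
        print(f"{team}: up to {allocatable(team)} additional GPUs available")
```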

With the new integrations, data science teams can keep working in whichever tools they already use while Run:AI handles the allocation of the underlying GPU resources.

As Omri Geller, CEO of Run:AI, pointed out, some data scientists like Kubeflow, some prefer MLflow, and others prefer to work with YAML files. Some companies use as many as 50 different data science tools.

With Run:AI, Geller adds, there is no need to force every data science team onto a specific machine learning tool in order to take advantage of the GPU orchestration platform.

Instead, each team can run its workloads its own way.
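To make the "each team its own way" point concrete, the sketch below wraps a hypothetical train.py script in an Apache Airflow DAG using only standard Airflow 2.x constructs; another team could just as well submit the same script through Kubeflow or a hand-written YAML spec, with GPU placement assumed to be handled by the orchestration layer underneath.

```python
# Sketch: one team schedules the same training script with Airflow,
# while another team might submit it through Kubeflow or a YAML spec.
# Standard Airflow 2.x constructs only; train.py is a hypothetical
# script, and GPU placement is assumed to be handled by the
# orchestration layer underneath.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_training",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    train = BashOperator(
        task_id="train_model",
        bash_command="python train.py --epochs 5",
    )
```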
