Hot on the heels of Uber’s open-sourcing of an AI debugging tool this morning, Lyft announced the release of Flyte, which it describes as a structured and distributed platform for concurrent, scalable, and maintainable machine learning workflows. The company says that Flyte has served AI model training and data processing internally for over three years, becoming the tool of choice for teams working on Lyft’s pricing, locations, estimated time of arrival (ETA), mapping, and self-driving products. In fact, Lyft says that Flyte now manages over 7,000 unique workflows totaling over 100,000 monthly executions, 1 million tasks, and 10 million containers.
“With data now being a primary asset for companies, executing large-scale compute jobs is critical to the business, but problematic from an operational standpoint. Scaling, monitoring, and managing compute clusters becomes a burden on each product team, slowing down iteration and subsequently product innovation. Moreover, these workflows often have complex data dependencies,” wrote Lyft in a blog post. “Flyte’s mission is to increase development velocity for machine learning and data processing by abstracting this overhead.”
Flyte is a multi-tenant service, enabling teams to work in separate repositories and deploy them without affecting the rest of the platform. Code is versioned and containerized with its dependencies, ensuring all executions remain reproducible. (Container images are bound to a task.) Workflows can be parameterized and carry rich data lineage, enabling a developer to, for instance, invoke a workflow with different parameters (such as the hyperparameters that govern how a model trains) on each run. And Flyte intelligently reuses cached outputs from previous executions, saving both time and compute.
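The article doesn’t show Flyte’s internals, but the caching idea it describes — keying stored outputs on a task’s version and its inputs so that identical, unchanged work is never repeated — can be sketched in plain Python. Every name below (`run_cached`, `_cache`, and so on) is illustrative, not Flyte’s actual API:

```python
import hashlib
import json

# Illustrative cache: maps (task name, task version, input hash) -> stored output.
_cache = {}

def _input_key(inputs):
    # Hash the JSON-serialized inputs so identical calls map to the same key.
    return hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()

def run_cached(task_name, version, task_fn, **inputs):
    """Return a cached output when this (task, version, inputs) triple was seen before."""
    key = (task_name, version, _input_key(inputs))
    if key not in _cache:
        _cache[key] = task_fn(**inputs)  # cache miss: actually execute the task
    return _cache[key]

calls = []  # track real executions to show the second run is skipped

def normalize(values):
    calls.append("normalize")
    top = max(values)
    return [v / top for v in values]

# The first run executes; the second identical run is served from the cache.
a = run_cached("normalize", "v1", normalize, values=[2, 4, 8])
b = run_cached("normalize", "v1", normalize, values=[2, 4, 8])
```

Note that bumping the version string (say, to `"v2"`) changes the cache key, so a modified task re-executes instead of returning stale output — which is presumably why versioning and caching work hand in hand.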
Workflows in Flyte can link tasks together and pass data between them with a Python-based domain-specific programming language. Plus, because every entity in Flyte is immutable (with every change explicitly captured as a new version), workflows can be iterated or rolled back quickly and versioned tasks can be shared across workflows.
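To make the pattern concrete without reproducing flytekit’s real decorators, here is a minimal stand-in in plain Python: tasks are decorated functions, and a workflow links them by passing one task’s output into the next. The `task` decorator and both task functions are hypothetical, for illustration only:

```python
from functools import wraps

def task(fn):
    # Hypothetical stand-in for a task decorator: marks a function as a
    # workflow step (a real platform would also version and containerize it).
    @wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    wrapper.is_task = True
    return wrapper

@task
def fetch_fares(city):
    # Stand-in for a data-loading step.
    return [1.2, 3.4, 2.2] if city == "SF" else []

@task
def average_fare(fares):
    # Stand-in for a downstream aggregation step.
    return sum(fares) / len(fares)

def ride_stats_workflow(city):
    # The workflow links tasks together: fetch_fares feeds average_fare.
    fares = fetch_fares(city)
    return average_fare(fares)

result = ride_stats_workflow("SF")
```

In a real DSL of this kind, the decorators would capture the data dependencies between steps rather than executing them inline, letting the platform schedule each task in its own container.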
Flyte tasks can be arbitrarily complex, Lyft says — anything from a single container execution to a remote query against a Hive cluster. Better yet, they can be extended with FlyteKit extensions and backend plugins, which allow contributors to provide integrations with third-party services and systems while affording fine-grained control over resources.
“Flyte is built to power and accelerate machine learning and data orchestration at the scale required by modern products, companies, and applications,” wrote Lyft. “Together, Lyft and Flyte have grown to see the massive advantage a modern processing platform provides, and we hope that in open sourcing Flyte you too can reap the benefits.”
Flyte’s launch comes two years after Lyft made available a few of the tools it uses to simulate the results of machine learning algorithms. More recently, the transportation giant open-sourced a large data set for autonomous vehicle development.