The project has been under development for 18 months and launches the same day as PyTorch 1.0, which adds deeper Caffe2 and ONNX integration, along with support from cloud providers like Google Cloud and Azure Machine Learning and hardware providers like Intel and Qualcomm.
“Fastai is the first deep learning library to provide a single consistent interface to all the most commonly used deep learning applications for vision, text, tabular data, time series, and collaborative filtering. This is important for practitioners, because it means if you’ve learnt to create practical computer vision models with fastai, then you can use the same approach to create natural language processing (NLP) models, or any of the other types of model we support,” Fast.ai cofounder Jeremy Howard said in a Medium post today.
In addition to being used by researchers and developers alike, fastai incorporates recent advances from the Fast.ai team, which it used to train ImageNet in less than 30 minutes.
The first version of fastai was released in September 2017 and has since been used for things like transfer learning in computer vision, art projects like style transfer, and Clara, a music-generating neural net created by an OpenAI research fellow.
Fastai v1 can work with preinstalled datasets on Google Cloud; it also works with AWS SageMaker and with pre-configured environments with the AWS Deep Learning AMIs.
Fastai is free and available via GitHub, conda, and pip, with support for AWS coming soon.
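For readers who want to try it, installation is a one-liner; this is a minimal sketch assuming the package name fastai on PyPI (conda users would pull the same package from a conda channel instead):

```shell
# Install the fastai package from PyPI into the current Python environment
pip install fastai
```

The library depends on PyTorch, which the installer pulls in automatically.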
Fast.ai seeks to democratize access to deep learning with tutorials, tools, and state-of-the-art AI models. More than 200,000 people have taken Fast.ai's seven-week course, Practical Deep Learning for Coders.