Machine learning is becoming increasingly popular as applications multiply across the many sectors that rely on this artificial intelligence (AI) approach to improve their processes.
However, as stressed by Isabelle Guyon, a pioneering AI researcher and professor at Paris-Saclay University, machine learning models are not simply machines. They rely on the expertise of specialists to perform a range of tasks, such as preparing data, selecting learning algorithms, and setting training parameters. The complexity of these tasks often places them beyond the reach of non-specialists.
AutoML aims to simplify all the steps of machine learning to make it more accessible without compromising on the models’ quality.
The rapid growth of machine learning applications and the scarcity of talent in this area have therefore created strong demand for ready-to-use methods that require no specialist knowledge. This has led to the emergence of a new area of research: automated machine learning, or AutoML.
Automating all the steps of machine learning
AutoML is the process of automating the tasks involved in developing and deploying machine learning models to solve various kinds of problems. It potentially covers all the steps of machine learning, the idea being to take over the many decisions that AI researchers and engineers have to make when designing new models. In deep learning, for example, this includes choosing the number of hidden layers in an artificial neural network and the number of nodes in each layer.
In a traditional approach, to make the data usable by machine learning, engineers have to apply data pre-processing and feature engineering methods, which consist of extracting particular, measurable properties or attributes of a phenomenon from raw data.
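As a toy illustration (the features and signal are invented for the example, and no particular library's API is implied), feature engineering might turn a raw sensor signal into a handful of measurable attributes:

```python
import statistics

def extract_features(signal):
    """Turn a raw list of numeric readings into a small feature vector:
    the mean, standard deviation, and peak-to-peak range of the signal."""
    return {
        "mean": statistics.fmean(signal),
        "std": statistics.pstdev(signal),
        "range": max(signal) - min(signal),
    }

raw = [1.0, 2.0, 4.0, 2.0, 1.0]   # hypothetical raw readings
features = extract_features(raw)
print(features)
```

A downstream model would then learn from such feature vectors rather than from the raw readings.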
They must then select and configure the learning algorithms. Finding the optimal configuration of an algorithm's hyperparameters is known as "hyperparameter optimization", a process that is tedious, time-consuming, and error-prone.
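The idea can be sketched with random search, one of the simplest hyperparameter optimization strategies. The objective function below is an invented stand-in for "train a model with this configuration and measure its validation accuracy":

```python
import random

def validation_score(learning_rate, regularization):
    """Hypothetical stand-in for training a model and measuring validation
    accuracy; here it simply peaks at learning_rate=0.1, regularization=0.01."""
    return 1.0 - (learning_rate - 0.1) ** 2 - (regularization - 0.01) ** 2

random.seed(0)
best_config, best_score = None, float("-inf")
for _ in range(200):  # random search: sample configurations, keep the best
    config = {
        "learning_rate": random.uniform(0.0, 1.0),
        "regularization": random.uniform(0.0, 0.1),
    }
    score = validation_score(**config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, round(best_score, 3))
```

Real AutoML systems replace this blind sampling with smarter strategies (Bayesian optimization, for instance), but the loop — propose a configuration, evaluate it, keep the best — is the same.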
For deep learning, the neural network architecture must also be chosen. This is the focus of a field of research called Neural Architecture Search (NAS), a set of techniques that aim to automatically discover high-performing neural network architectures.
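In its simplest form, NAS is a search over discrete architectural choices. The sketch below enumerates small network shapes (depth and width) and keeps the best; the scoring function is invented for illustration, standing in for the expensive step of actually training and evaluating each candidate:

```python
import itertools

def evaluate_architecture(depth, width):
    """Hypothetical stand-in for training the candidate network and measuring
    its validation accuracy; real NAS would train (or estimate) each one.
    Here accuracy simply peaks at 3 hidden layers of 64 nodes."""
    return 1.0 - abs(depth - 3) * 0.05 - abs(width - 64) / 1000

# Search space: number of hidden layers x nodes per layer.
search_space = itertools.product([1, 2, 3, 4], [16, 32, 64, 128])
best = max(search_space, key=lambda arch: evaluate_architecture(*arch))
print(best)
```

Even this exhaustive toy search hints at the cost problem: real spaces contain billions of candidates, and each evaluation means training a network.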
All of these steps can be complex and computationally intensive, which constitutes an entry barrier and thus a major obstacle to the widespread adoption of machine learning. AutoML therefore aims to simplify these steps and make the technology more accessible to "novices", but without compromising on the AI models' quality.
Algorithm selection and meta-learning
Built on the open-source machine learning library scikit-learn, auto-sklearn automates, for example, the second step: the learning algorithm selection and configuration phase. Given a dataset, it determines the best algorithm to solve the problem and optimizes its hyperparameters. It makes use of recent advances in various machine learning methods, and in meta-learning in particular.
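Choosing an algorithm and its hyperparameters together is known as the combined algorithm selection and hyperparameter optimization (CASH) problem. The toy sketch below (plain Python, not auto-sklearn's actual API) conveys the principle on a tiny one-dimensional regression task: fit several candidate models, score each on held-out validation data, and keep the winner:

```python
def fit_mean(train):
    """Baseline candidate: always predict the mean of the training targets."""
    m = sum(y for _, y in train) / len(train)
    return lambda x: m

def fit_knn(train, k):
    """Candidate family: k-nearest-neighbour regression (k is a hyperparameter)."""
    def predict(x):
        nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
        return sum(y for _, y in nearest) / k
    return predict

# Invented data for illustration: y = 2x, with held-out validation points.
train = [(x, 2 * x) for x in range(10)]
val = [(x + 0.5, 2 * (x + 0.5)) for x in range(9)]

candidates = [("mean", fit_mean(train))] + [
    (f"knn(k={k})", fit_knn(train, k)) for k in (1, 3, 5)
]

def val_error(model):
    return sum((model(x) - y) ** 2 for x, y in val) / len(val)

best_name, best_model = min(candidates, key=lambda c: val_error(c[1]))
print(best_name)
```

auto-sklearn tackles the same selection problem, but over scikit-learn's full catalogue of estimators and with far more sophisticated search.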
Meta-learning aims to build models that can (quickly) learn new tasks by extracting knowledge from previous ones; the performance of these models improves with experience. The most well-known example is probably MAML (Model-Agnostic Meta-Learning), proposed in 2017 by Chelsea Finn, who at the time was a doctoral student at the University of California, Berkeley.
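MAML itself differentiates through the inner adaptation step; the first-order sketch below, closer in spirit to Reptile (a related, simpler algorithm), conveys the core loop on deliberately trivial one-parameter tasks: adapt to each task with a few gradient steps, then nudge the shared initialization toward the adapted parameters:

```python
# Each "task" is to minimize (theta - c)^2 for its own target c; the
# meta-learner seeks a single initialization that adapts quickly to all of them.
task_targets = [1.0, 2.0, 3.0, 4.0, 5.0]

def adapt(theta, c, steps=5, lr=0.1):
    """Inner loop: a few gradient-descent steps on one task's loss (theta - c)^2."""
    for _ in range(steps):
        theta -= lr * 2 * (theta - c)
    return theta

theta = 0.0  # shared initialization (the meta-parameter)
for _ in range(50):  # outer loop: meta-updates across the task family
    adapted = [adapt(theta, c) for c in task_targets]
    # Move the initialization toward the average of the adapted parameters.
    theta += 0.5 * (sum(adapted) / len(adapted) - theta)

print(round(theta, 3))  # settles near 3.0, the centre of the task family
```

The learned initialization sits where a few gradient steps suffice to reach any task's optimum, which is exactly the "learning to learn" effect meta-learning is after.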
Simpler and better-performing solutions
The gradual automation of machine learning gives users with little expertise in the area the ability to employ machine learning models and techniques.
Google, for example, offers a solution called "AutoML", whose commercial promise is to enable developers with little machine learning experience to create and train high-quality custom models. This approach has sparked increasing interest, and the Internet giants have certainly joined the race.
End-to-end automation of the machine learning process provides several advantages: simpler solutions, faster development, and better-performing models (AutoML helps reduce the approximations inherent in manual processes). NAS methods, for example, can considerably speed up the development of deep learning, as developers no longer need to painstakingly evaluate different architectures. Reportedly, they can even produce architectures whose performance rivals that of hand-crafted models, in terms of both accuracy and inference speed.
What’s more, AutoML can fall within a frugal AI approach, through the development of techniques that consume fewer computing resources and are therefore more energy-efficient. Designing more efficient NAS methods is one example. Indeed, neural architecture search involves testing many architectures with different parameters to identify the one that yields the best result, which requires high computing power.
Some researchers are exploring alternative algorithms called "zero-cost proxies" to reduce computing times considerably (a few seconds versus several days, with roughly the same accuracy). These techniques are still in their early stages, but machine learning researchers are confident in their potential.