Google’s EfficientNet Offers up to a 10x Boost in Image Analysis Efficiency
Google has earned a reputation for pushing out new AI technologies and upgrades at a remarkable pace, and its announcement of EfficientNet is the latest example. Leveraging its work with AutoML, Google's researchers developed a scaling method that offers up to a tenfold increase in network efficiency.
The company writes: “The conventional practice for model scaling is to arbitrarily increase the CNN depth or width, or to use larger input image resolution for training and evaluation. While these methods do improve accuracy, they usually require tedious manual tuning, and still often yield suboptimal performance. What if, instead, we could find a more principled method to scale up a CNN to obtain better accuracy and efficiency?”
Google software engineer Mingxing Tan explains the new development:
Unlike conventional approaches that arbitrarily scale network dimensions, such as width, depth and resolution, our method uniformly scales each dimension with a fixed set of scaling coefficients. Powered by this novel scaling method and recent progress on AutoML, we have developed a family of models, called EfficientNets, which superpass [sic] state-of-the-art accuracy with up to 10x better efficiency (smaller and faster).
These networks are well-suited to tasks such as image classification and facial recognition, which benefits high-usage scenarios and enables more accurate and efficient models on mobile devices. Like many computer vision systems of its kind, EfficientNet builds on pre-trained CNNs (convolutional neural networks) designed for image-related tasks, using one as a base network. These base networks can learn from a range of more general visual datasets, allowing more specialized models to be created quickly with limited training data.
While the standard arbitrary scaling process still yields functional results, EfficientNet instead first conducts a grid search on the base network to determine the relationships between the network's different scaling dimensions (width, depth, and input resolution), taking into account both the size of the model and the available computational resources. EfficientNet then scales up the base network using the fixed set of coefficients this search produces. Results from initial testing indicate higher accuracy and speed in the majority of circumstances.
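The idea behind this compound scaling can be sketched in a few lines. The sketch below is illustrative, not Google's implementation: the per-dimension constants (alpha, beta, gamma) are the values reported for EfficientNet-B0's grid search in the EfficientNet paper, and real implementations also round depth and width to valid layer and channel counts.

```python
# Minimal sketch of EfficientNet-style compound scaling.
# alpha/beta/gamma are the grid-searched constants from the EfficientNet
# paper; phi is the single compound coefficient that scales all three
# dimensions together instead of tuning each one by hand.

ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution multipliers

def compound_scale(phi, base_depth, base_width, base_resolution):
    """Scale depth, width, and resolution together by one coefficient phi."""
    depth = base_depth * (ALPHA ** phi)            # number of layers
    width = base_width * (BETA ** phi)             # channels per layer
    resolution = base_resolution * (GAMMA ** phi)  # input image size
    return depth, width, resolution

# Example: scale a hypothetical base network (18 layers, 32 channels,
# 224x224 input) up by one step of the compound coefficient.
print(compound_scale(1, 18, 32, 224))
```

Because alpha * beta**2 * gamma**2 is approximately 2, each increment of phi roughly doubles the network's computational cost (FLOPS) while keeping the three dimensions in balance.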
EfficientNet also performed exceptionally well on more than half of the eight most commonly used image datasets, including CIFAR-100 (91.7% accuracy) and Flowers (98.8%). Because this new method may significantly improve computer vision tasks across the board, Google has open-sourced EfficientNet, with access available through GitHub.
Given that image recognition models have a bit of a reputation for making strange mistakes, EfficientNet may help mitigate that problem across the board as AI developers build on Google’s recent efforts.