Recently, the Google Brain team challenged its AutoML system to create a "child" network that outdid all of its human-designed counterparts, using an approach called reinforcement learning. AutoML acts as a controller neural network that generates a "child" network to execute a specific task. Called NASNet, the child AI was given the task of recognizing objects in a real-time video feed, such as people, cars, traffic lights, handbags, and backpacks. The child model trains on the task and is evaluated by AutoML's controller neural net, which learns from that feedback and refines the child model, repeating the cycle until it produces a superior version of NASNet.

After many rounds of this tweaking and improvement, NASNet was tested on the ImageNet image classification and COCO object detection datasets, described by Google as "two of the most respected large-scale academic data sets in computer vision." According to Google, NASNet outperformed all other computer vision systems, reports Futurism. On ImageNet image classification, NASNet achieved a prediction accuracy of 82.7% on the validation set, 1.2% higher than any previously published result, according to the researchers. On COCO object detection, the system achieved 43.1% mean Average Precision (mAP), 4% better than the previous published state of the art. Additionally, a less computationally demanding version of NASNet outperformed the best comparable models designed for mobile platforms by 3.1%.

The Google researchers noted that the image features NASNet learned on ImageNet and COCO may be reused for many other computer vision applications. As a result, they have open-sourced NASNet for inference on image classification and for object detection in the Slim and Object Detection TensorFlow repositories. "We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined," the researchers wrote in their blog post.
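The controller-and-child loop described above can be sketched in miniature. The code below is an illustrative toy, not Google's implementation: the real controller is a recurrent network searching a convolutional-cell space, whereas here the controller is a simple tabular softmax policy over a tiny hypothetical search space, and "training the child" is replaced by a made-up scoring function. Only the overall REINFORCE-style loop (sample an architecture, score it, push probability mass toward above-baseline choices) mirrors the described approach.

```python
import math
import random

random.seed(0)

# Toy search space: the controller picks one option per decision slot.
# These decisions (filter count, kernel size, depth) are illustrative only.
SEARCH_SPACE = {
    "filters": [32, 64, 128],
    "kernel": [3, 5, 7],
    "layers": [2, 4, 6],
}

# Controller "policy": one logit vector per decision (a stand-in for the
# RNN controller used in the real work).
logits = {k: [0.0] * len(v) for k, v in SEARCH_SPACE.items()}

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample_architecture():
    """Sample one child architecture (a choice index per decision)."""
    choice = {}
    for key in SEARCH_SPACE:
        probs = softmax(logits[key])
        choice[key] = random.choices(range(len(probs)), weights=probs)[0]
    return choice

def child_reward(choice):
    """Stand-in for training the child and measuring validation accuracy.
    Purely synthetic: rewards bigger nets and a kernel size of 5."""
    arch = {k: SEARCH_SPACE[k][i] for k, i in choice.items()}
    return (arch["filters"] / 128
            + arch["layers"] / 6
            + (5 - abs(arch["kernel"] - 5)) / 5) / 3

def reinforce_step(lr=0.5, baseline=0.5):
    """One controller update: sample a child, score it, and nudge the
    policy toward choices whose reward beats the baseline (REINFORCE)."""
    choice = sample_architecture()
    advantage = child_reward(choice) - baseline
    for key in SEARCH_SPACE:
        probs = softmax(logits[key])
        for i in range(len(probs)):
            grad = (1.0 if i == choice[key] else 0.0) - probs[i]
            logits[key][i] += lr * advantage * grad
    return choice

for _ in range(300):
    reinforce_step()

# Read off the controller's current favorite architecture.
best = {k: SEARCH_SPACE[k][max(range(len(l)), key=l.__getitem__)]
        for k, l in logits.items()}
print(best)
```

The key design point the sketch preserves is that the controller never inspects the child's internals; it only sees a scalar reward per sampled architecture, which is what lets the same loop improve the child over successive generations.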
While there are many possible uses for AutoML and NASNet, there are also ethical concerns around AI. For instance, what if AutoML creates AI systems at such a speed that society simply cannot keep up with them? Or what if the parent AI passes unwanted biases down to its child? To keep these systems under human control, it is important to implement stricter regulations and stronger ethical standards that prevent the use of AI for malicious purposes. Source: Futurism