MIT’s new artificial intelligence model speeds up high-resolution computer vision by up to nine times. The system could improve image quality in streaming video or help self-driving cars detect road hazards in real time.
According to Hoshio, an autonomous vehicle must quickly and accurately recognize the objects it encounters, from a delivery truck parked on a corner to a bicyclist approaching an intersection.
To do this, the car might use a powerful computer vision model to categorize each pixel in a high-resolution image of the scene, so it doesn’t miss objects that might be hidden in a lower-resolution image. But this task, known as semantic segmentation, is complex and requires a large amount of computation when the image is of high resolution.
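The per-pixel nature of the task can be sketched in a few lines. The snippet below is a toy illustration (not the MIT model): each pixel gets a score for every class, and the highest-scoring class becomes that pixel's label, so the work grows with the number of pixels. The three class names are hypothetical examples.

```python
import numpy as np

# Toy sketch of semantic segmentation (illustrative, not the MIT model):
# every pixel receives a score ("logit") for each class, and the label map
# picks the highest-scoring class at each pixel.
H, W, num_classes = 4, 4, 3  # tiny image; classes e.g. road/vehicle/cyclist
rng = np.random.default_rng(0)
logits = rng.standard_normal((H, W, num_classes))  # per-pixel class scores
label_map = logits.argmax(axis=-1)                 # one class per pixel

print(label_map.shape)  # (4, 4): a label for every pixel
# Doubling the resolution in each dimension quadruples the pixels to label:
print((2 * H) * (2 * W) // (H * W))  # 4
```

This is why high-resolution inputs are costly: the number of per-pixel decisions scales with the pixel count, before any model overhead is considered.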
Researchers at MIT, the MIT-IBM Watson AI Lab, and elsewhere have developed a more efficient computer vision model that greatly reduces the computational complexity of this task. Their model can accurately perform semantic segmentation in real time on a device with limited hardware resources, such as the on-board computers that enable an autonomous vehicle to make split-second decisions.
State-of-the-art semantic segmentation models directly learn the interaction between each pair of pixels in an image, so their computations grow quadratically as image resolution increases. This makes these models very accurate but too slow to process high-resolution images in real time on a device such as a sensor or mobile phone.
The MIT researchers designed a new component for semantic segmentation models that has similar capabilities to these advanced models, but with only linear computational complexity and hardware-efficient operations.
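The gap between the two approaches can be made concrete with a small sketch. The code below is illustrative and not the authors' exact module: it uses a simple ReLU kernel in place of softmax attention to show how reassociating the matrix product turns an N × N pairwise-interaction matrix into a d × d intermediate whose size is independent of the number of pixels N.

```python
import numpy as np

# Why linear attention helps (illustrative sketch, not the MIT component).
# Standard self-attention materializes an N x N matrix of pairwise pixel
# interactions, so cost grows quadratically with the number of pixels N.
# With a simple kernel (here ReLU) instead of softmax, the product can be
# reassociated so the big intermediate is only d x d, independent of N.
rng = np.random.default_rng(1)
N, d = 64, 8  # N pixels (tokens), d channels
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))

# Quadratic path: the (N x N) pairwise-interaction matrix is explicit.
A = np.maximum(Q, 0) @ np.maximum(K, 0).T  # shape (N, N)
out_quadratic = A @ V

# Linear path: reassociate (Q K^T) V as Q (K^T V); intermediate is (d x d).
KV = np.maximum(K, 0).T @ V                # shape (d, d)
out_linear = np.maximum(Q, 0) @ KV

assert np.allclose(out_quadratic, out_linear)  # same result, cheaper route
```

Because matrix multiplication is associative, both paths give the same answer; only the order of operations changes, which is what turns the quadratic cost into a linear one.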
The result is a series of new models for high-resolution computer vision that perform up to nine times faster than previous models when deployed on a mobile device. Importantly, the new model series matches or exceeds the accuracy of these alternatives.
Not only could this technique be used to help self-driving cars make real-time decisions, but it could also improve the performance of other high-resolution computer vision tasks, such as medical image segmentation.
While researchers have long used traditional vision transformers and obtained impressive results, this work asks people to pay attention to the efficiency of these models as well. “Our work shows that the computations can be drastically reduced so that this real-time image segmentation can happen locally on a device,” said Song Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS).