At its I/O 2021 event, Google unveiled a new technology called Multitask Unified Model (MUM) to expand the capabilities of its search engine. Then, in September of the same year, the company demonstrated that this feature lets Google Lens users search with images and text at the same time. Now, after months of waiting, the feature has been activated as a test for a group of users.
According to Google, users in the United States can try this feature in the Android or iOS version of the Google Lens application. Put simply, it allows users to ask questions about an image or perform a more precise search. For example, you can take a picture of a bicycle derailleur, the component that moves the chain across the front and rear chainrings, and then use that photo to search for instructions on how to repair it. By combining text and images, Google makes precise searches much easier: many cyclists don't know which part of the bike the derailleur is, and so they struggle to search for its repair by name alone.
Similarly, suppose you see a dress with an attractive pattern and want to find socks with a similar design. Such patterns are often hard to describe in words, but by taking a picture of the dress and then searching for socks with a similar design, you can get much better results. Google has said that, for now, the feature works best for searches across online stores, and that its capabilities may be expanded later.
This feature is rolling out now and will likely improve over the coming months. Google has not yet said when it will become available to users in other countries.
Source: 9To5Google