New Feature: Google Assistant Recommends Dishes in Restaurants – Google Maps & Google Lens

Google Maps contains an incredible amount of information about almost every place in the world, in both text and image form. Maps should not only hold this information, but also offer it at the right moment. Now the long-announced integration of Google Lens into Google Maps is rolling out, and its first step is to make choosing a meal in a restaurant easier: the eyes of the Google Assistant become a menu assistant.

Who doesn't know the feeling? You arrive at a restaurant, you are overwhelmed by the long menu and have no idea what to order. So that you don't always end up with the same staple dishes available in almost every restaurant, Google's tools can now help: the interplay of Lens and Maps can make individual dishes stand out from the menu and recommend them to you.

The whole thing works directly via Google Lens: start the camera, point it at the menu, and the rest happens automatically. As the animation above shows, Lens marks recommended dishes on the menu with a star and has additional information and photos ready. The photos come from the numerous Google Maps users, so they tend to be more true to life than the glossy pictures on the menu or at the restaurant's entrance.

Google first announced this feature a few months ago, but never gave the official go-ahead. However, because it is now appearing for more and more users of the Android app and has been announced again between the lines, one can assume that the feature is now rolling out. Because of the data it requires, it is not yet available worldwide, and it is currently limited to Android; it is not available on iOS.

So if you visit a restaurant in the next few days, simply try the feature out. Even if the result looks very plain and the offering seems simple, an impressive technology is at work in the background, one that finally moves in the direction Google promised for Lens over two years ago. To appreciate this, consider how much data has to be queried and combined to make it possible:

Google Lens needs to know the user's location and match it against Google Maps to determine which restaurant the user is in. Then the layout of the menu must be captured, the text recognized and the names of the dishes extracted. These names are then matched against Google Maps data, the best-rated dishes are selected, and the corresponding text in the live image is virtually highlighted. All of this happens practically in real time – impressive.
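To make those steps a bit more concrete, here is a minimal, purely illustrative sketch of how such a pipeline could be wired together – in Kotlin, since this is an Android feature. Everything in it (the Restaurant and MenuLine types, locateRestaurant, recommend, the mock data) is a hypothetical simplification, not Google's actual implementation; in reality, each step would be backed by Maps place data and on-device text recognition.

```kotlin
// Hypothetical sketch of the pipeline described above.
// None of these types or functions are Google APIs; all data is mocked.

data class Restaurant(val name: String, val dishRatings: Map<String, Double>)
data class MenuLine(val text: String, val x: Int, val y: Int) // OCR result with screen position
data class Highlight(val dish: String, val rating: Double, val x: Int, val y: Int)

// Step 1: match the device location against a places database (here: a mock list).
fun locateRestaurant(
    lat: Double,
    lng: Double,
    places: List<Pair<Pair<Double, Double>, Restaurant>>
): Restaurant? =
    places.minByOrNull { (pos, _) ->
        val (pLat, pLng) = pos
        (pLat - lat) * (pLat - lat) + (pLng - lng) * (pLng - lng)
    }?.second

// Steps 2-4: take the OCR'd menu lines, match the dish names against the
// restaurant's rating data, and mark the best-rated dishes in the live image.
fun recommend(menu: List<MenuLine>, restaurant: Restaurant, minRating: Double = 4.0): List<Highlight> =
    menu.mapNotNull { line ->
        restaurant.dishRatings[line.text]
            ?.takeIf { it >= minRating }
            ?.let { Highlight(line.text, it, line.x, line.y) }
    }.sortedByDescending { it.rating }

fun main() {
    val places = listOf(
        (48.137 to 11.575) to Restaurant(
            "Trattoria Esempio",
            mapOf("Margherita" to 4.6, "Carbonara" to 4.2, "Tiramisu" to 3.4)
        )
    )
    val restaurant = locateRestaurant(48.1371, 11.5752, places) ?: return
    val ocrLines = listOf( // in reality: output of live text recognition on the camera image
        MenuLine("Margherita", 40, 120),
        MenuLine("Carbonara", 40, 160),
        MenuLine("Tiramisu", 40, 200)
    )
    recommend(ocrLines, restaurant).forEach {
        println("★ ${it.dish} (${it.rating}) at (${it.x}, ${it.y})")
    }
}
```

The interesting part is the final matching step: the OCR output only becomes useful once it is joined with the restaurant's rating data, which is why Lens needs the camera image and the location at the same time.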
