Google Search is getting a slew of new features, many of which will deliver richer, more visually focused results, the company announced at its “Search On” event. “We are going far beyond the search box and creating search experiences that work more like our minds – that are as multi-dimensional as people. As we enter this new era of search, you will be able to find exactly what you’re looking for. We say this makes search more natural and intuitive,” Prabhakar Raghavan, Google’s senior vice president of search, said in the keynote.
First, Google is expanding its multisearch feature, which launched in beta in April this year, to English globally, and will add 70 more languages over the next few months. Multisearch lets users search for multiple things at the same time by combining images and text, and it works with Google Lens. According to Google, users rely on Lens nearly 8 billion times a month to search for what they see.
By combining Lens with multisearch, users will be able to take a photo of an item and add the phrase “near me” to find it nearby. This “new way of searching will help users find and connect with local businesses,” Google said. “Multisearch near me” will be available in English in the US later this fall.
“This is made possible by a deep understanding of local and product inventory, informed by millions of images and reviews on the web,” Raghavan said of multisearch and Lens.
Google is also improving how translations are displayed on images. According to the company, people use Google to translate text on images more than 1 billion times a month, across more than 100 languages. With this new feature, Google will be able to “blend translated text into complex images” so the result looks and feels more natural. Translated text will appear seamless, as though it were part of the original image, rather than standing out from it. According to Google, it is using “generative adversarial networks (also known as GAN models), which help power the technology behind Magic Eraser on Pixel,” to deliver this experience. The feature will roll out later this year.
Google is also improving its iOS app, adding shortcuts below the search bar so users can shop using their screenshots, translate any text with their camera, find a song and more.
Google Search results will also be visually richer when users browse for information about a place or topic. In the example Google showed, a search for a city in Mexico surfaced videos, images, and other information about the place in the first set of results. Google says this means users won’t have to open multiple tabs when trying to learn more about a place or topic.
In the coming months, Google will also provide more relevant information as users start typing a question, offering “keyword or topic options to help” them formulate it. It will also showcase content from creators on the open web for some of these topics (such as cities), along with travel tips and more. The company’s blog post states that it will display “the most relevant content from a variety of sources, regardless of the format of the information — whether it’s text, images, or video.”
When searching for food, whether a specific dish or an item from a restaurant, Google will show visually richer results, including photos of related dishes. It is also expanding “the reach of digital menus, making them more visually rich and reliable.”
According to the company, it is combining “menu information provided by people and businesses, as well as menu information found on restaurant websites using open standards data sharing,” and relying on its “image and language understanding technologies, including the Multitask Unified Model,” to power these new results.
“These menus will showcase the most popular dishes and call out different dietary options, starting with vegetarian and vegan,” Google said in a blog post.
Google will also adjust the way shopping results are displayed in Search, making them more visual, pairing them with links, and letting users shop for a “complete look.” Search results will also support 3D shopping for sneakers, allowing users to view these items in a 3D view.
Google Maps is also getting new features that will provide more visual information, although most of them will be limited to select cities. For one, users will be able to check out a “neighborhood vibe,” meaning they can see at a glance where to eat, what to visit and more in a particular area. This will be especially useful for tourists trying to get a feel for an area. Google said it is using “artificial intelligence derived from the local knowledge of Google Maps users” to provide this information. Neighborhood vibe starts rolling out globally in the coming months on Android and iOS.
Google is also expanding its immersive view feature, letting users see 250 photorealistic aerial views of global landmarks, from the Tokyo Tower to the Acropolis. According to Google’s blog post, it is using “predictive modeling” so immersive view automatically learns a place’s historical trends. Immersive view will roll out on Android and iOS in Los Angeles, London, New York, San Francisco and Tokyo in the coming months.
Users will also be able to surface useful information with the Live View feature. Search with Live View helps users find places around them, such as markets or stores, as they move around. It will be available in London, Los Angeles, New York, San Francisco, Paris and Tokyo in the coming months on Android and iOS.
Google has also extended its eco-friendly routing feature, which launched earlier in the US, Canada and Europe, to third-party developers via the Google Maps Platform. Google wants companies in other industries, such as delivery or ride-sharing services, to have the option to enable eco-friendly routing and measure fuel consumption in their apps.