- Google Search Live is now available globally in more than 200 countries and 98 languages
- Search Live uses the new Gemini 3.1 Flash Live audio and voice model to enable a “more natural” conversational search
- Audio responses have links to the information source
Google has rolled out its AI-powered conversational search tool, Search Live, globally to more than 200 countries and territories, in 98 languages. First launched in the US in September 2025, Search Live lets you point your phone or tablet’s camera at something and ask the AI tool about it out loud – for example, what model of washing machine you have and how to use it.
The AI then responds with an audio answer that’s also, handily, captioned, and will continue listening for any clarifications and follow-up questions to emulate a natural conversation.
You can access Search Live through the Google app on Android or iOS by tapping the “Live” button under the search bar, placed between the AI Mode and Nano Banana buttons. It can also be accessed through Google Lens and the dedicated Gemini app.
Google has said the expansion has been made possible by the launch of a new audio and voice model called Gemini 3.1 Flash Live, which it says is “inherently multilingual”. The company also claims the model responds to queries faster and aims to deliver “more natural and intuitive conversations”.
Analysis: Good but not perfect
Search Live uses query fan-out — an information retrieval technique that broadens the search by looking at related answers beyond a specific question — to provide a more comprehensive response and double down on the conversational aspect.
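To make the idea concrete, here’s a minimal sketch of how query fan-out works in principle: one question is expanded into several related sub-queries, each is answered, and the results are merged into one broader response. All of the function names and the hard-coded expansions below are hypothetical illustrations, not Google’s actual implementation.

```python
# Illustrative sketch of query fan-out. In a real system, an LLM would
# generate the related sub-queries and a search index would answer them;
# both steps are stubbed out here for illustration.

def fan_out(query: str) -> list[str]:
    # Hypothetical expansion table standing in for LLM-generated sub-queries.
    expansions = {
        "how do I use this washing machine": [
            "what model is this washing machine",
            "washing machine cycle settings explained",
            "washing machine detergent drawer compartments",
        ]
    }
    return [query] + expansions.get(query, [])

def retrieve(sub_query: str) -> str:
    # Stand-in for a web search or index lookup.
    return f"result for: {sub_query}"

def answer(query: str) -> list[str]:
    # Answer every sub-query, then combine into one broader response.
    return [retrieve(q) for q in fan_out(query)]

results = answer("how do I use this washing machine")
print(len(results))  # the original query plus its three expansions
```

The point of the technique is the breadth: because related answers come back alongside the literal one, the final response isn’t boxed into a single angle on the question.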
When we tried Search Live in June last year, we noted how the tool keeps working in the background using query fan-out, and my colleague Eric Hal Schwartz said the answers “didn’t feel boxed into a single form of response, even on relatively straightforward queries”.
I took it for a spin myself, testing it on my bike. While Search Live was good at identifying the exact model, its year of release and the reason for its paint job, it failed to recognize that I had swapped the stock wheelset for a third-party set, and thought the bike still had the integrated handlebars it originally came with. It also failed to correctly identify the accessories on the bike, such as my rear light, water bottle and bottle cages.
(Image credit: Future | Nico Arboleda)
In a similar test, it failed to identify the Nothing Phone 4a Pro that was on my desk, calling it the Nothing Phone 2a instead. I compared the results with the same question on Gemini Live, and I received identical answers.
It’s understandable why some of the results were incorrect: the AI assistant draws from existing sources online, and new products won’t necessarily have information for the model to learn from. As it stands, though, it can handle a fair few general queries.
According to Google, over 1.5 billion people were using Google Lens to identify objects around them as of June 2025, and there are about 750 million Gemini Live users. It will be interesting to see what the uptake of Search Live is globally, and whether this becomes the default way to search for information online.

