While Pixel 9 and Samsung Galaxy S25 owners have been able to use Gemini Live's camera feature for a while, Google announced at its I/O conference earlier this month that the feature has started rolling out to both Android and iOS users. The big news here is that iPhone users now have access to one of the best AI capabilities we've seen in a while, especially since all other Android users reportedly got access to the camera feature back in April.
If you're unaware of what the camera feature is, let's put it in plain English: Google essentially gave Gemini the ability to see, as it can identify objects you place in front of your camera.
It's not just a party trick, either. Not only can it detect and recognize objects, but you can ask questions about them as well, and it works quite well for the most part. In addition, you can share your screen with Gemini so it can identify things you surface on your device's display. When you start a live session with Gemini, you can now enable live camera access, letting you talk to the chatbot and ask questions about anything the camera sees.
I spent some time with it when it first showed up on my Pixel 9 Pro XL in early April and was very impressed overall. What amazed me most was when I lost my scissors during one of my tests and asked Gemini where they went.
"I just spotted your scissors on the table, right next to the package of pistachios. Do you see them?"
Gemini Live's chatty new camera feature was right. My scissors were exactly where it said, and all I had done was pass my camera in front of them at some point during a 15-minute live session in which I walked around my apartment showing Gemini my things.
When the new camera feature popped up on my phone, I didn't hesitate to try it out. During one of my longer tests, I turned it on and panned around my room, asking Gemini what it saw. It identified some fruit, ChapStick and a few other everyday items without any issues. Then it found my scissors, and I was wowed.
That's because I hadn't even mentioned the scissors. Gemini had quietly identified them somewhere along the way and then recalled their location with precision. It felt so much like the future that I had to do more testing.
My experiment with Gemini Live's camera feature followed the lead of the video Google released last summer when it first showed off these live video AI capabilities. In that demo, Gemini reminded the host where they had left their glasses, and it seemed too good to be true. As I later learned, however, it was indeed very true.
Gemini Live will recognize a whole lot more than household odds and ends. Google says it can help you figure out the filling of a pastry or navigate a crowded train station. It can also give you deeper details about artwork, such as where an object originated and whether it's a limited edition piece.
It's more than just a souped-up Google Lens. You talk with it, and it talks back. I didn't need to speak to Gemini in any particular way; it was as casual as any conversation. Way better than talking with the old Google Assistant that the company is quickly phasing out.
Google also released a new YouTube video about the feature for its April 2025 Pixel Drop, and there's a dedicated page for it on the Google Store.
You can activate the camera, go live with Gemini, and start talking right away. That’s it.
Gemini Live is a follow-on to Google's Project Astra, first unveiled last year as one of the company's biggest "we're in the future" moments: an experimental next step for generative AI capabilities beyond chatbots like ChatGPT, Claude or Gemini. It comes as AI companies continue to significantly improve their AI tools, from video generation to raw processing power. Similar to Gemini Live, there's Apple's Visual Intelligence, which the iPhone maker released in beta form late last year.
My main takeaway is that a feature you can point at almost anything has the potential to change how we interact with the world around us, blending our digital and physical lives.
I put Gemini Live to a real test
The first time I tried it, Gemini was shockingly accurate when I placed a very specific gaming collectible of a stuffed rabbit in my camera's view. The second time, I showed it to a friend in an art gallery. It identified a tortoise on a cross (don't ask me) and then immediately identified and translated the kanji right next to the tortoise, giving both of us chills and leaving us more than a little creeped out. In a good way, I think.
I began thinking about how I could stress-test the feature. I tried to screen-record it in action, but it consistently fell apart at that task. So what would happen if I left it to figure things out on its own? I'm a huge fan of the horror genre, spanning movies, TV shows and video games, so I have plenty of collectibles and trinkets on hand. How well would it do with more obscure stuff, like my horror-themed collectibles?
First, let me say that Gemini can be both ridiculously amazing and ridiculously frustrating in the same round of questions. I asked Gemini to identify roughly 11 objects, and it would sometimes get worse as a live session went on, so I had to keep sessions to just one or two objects. My guess is that Gemini tried to use contextual information from previously identified objects to guess at new ones put in front of it, which sort of makes sense, but ultimately neither I nor it benefited from this.
There were plenty of times when Gemini landed the correct answer without much fuss or confusion, and this happened more often with newer or better-known objects. For instance, I was surprised when it immediately recognized that one of my test items was a limited edition from a seasonal event the year before.
Other times, Gemini would be off the mark, and I would need to give it more hints to nudge it toward the right answer. And occasionally it seemed as though Gemini was using context from my prior live sessions, identifying various objects as coming from Silent Hill when they weren't. I have a display case dedicated to the game series, so I could see why it would be tempted to dip into that territory.
Gemini can also get full-on bugged out at times. On more than one occasion, it misidentified one of my items as a made-up character from the unreleased Silent Hill: f game, blending elements from various titles into something that never existed. Another bug I frequently ran into: Gemini would give me an incorrect answer, I would correct it and ask it to take a closer look, and it would repeat the same wrong answer as if it were a new guess. When that happened, I would close the session and start a new one, which wasn't always helpful.
I did discover one trick: some conversations ended up being more successful than others. If I went through my Gemini conversation list, tapped an old chat that had gotten a specific item correct, and then went live again from that chat, it could identify the items without issue. That's not necessarily surprising, but it was interesting to see that some conversations worked better than others, even when I used the same language.
Google didn't respond to my requests for more details about how Gemini Live works.
Because I wanted Gemini to successfully answer my sometimes highly specific questions, I provided plenty of hints to get it there. The nudges sometimes helped, but not always. Below are a number of objects I tried to identify and learn more about.