When Apple’s senior vice president of software engineering, Craig Federighi, started talking about the Visual Intelligence feature in iOS 26 at WWDC 2025, I hoped for significant changes beyond its existing ability to tell you about the places and objects you point your camera at on recent iPhones. Instead, we got the somewhat underwhelming news that Visual Intelligence options would soon be available directly in the iOS screenshot interface.
I can’t deny that these capabilities are practical (if a bit unexciting). But Visual Intelligence still falls short of Google’s Gemini Live and Microsoft’s Copilot Vision in that it can’t converse with you out loud about what you see. This sort of live interactivity isn’t necessarily vital, but it does feel exciting and natural to use. The foundation of Visual Intelligence is solid, but I still want Apple to push things forward in a way that aligns with its measured approach to AI.
What Does Visual Intelligence Do Well?
Like many of iOS’s best features, Visual Intelligence is a core part of the OS and works seamlessly with its default apps. That means you don’t need to open a separate app and upload an image to have the AI analyze it.
And the new ability to access the tool whenever you snap a screenshot certainly extends its usefulness. Related options appear along the bottom of the screenshot interface: Ask, which sends the image out to ChatGPT for analysis, and Search, which keeps scans on-device. With the latter, Visual Intelligence can, for example, look for information about an event and create a calendar entry with all the important details.
Left to right: Apple Visual Intelligence and ChatGPT integration in iOS 26 (Credit: Apple/PCMag)
You can also draw over a part of the image to identify it, such as an article of clothing that catches your eye. Visual Intelligence can recognize it and either search for it on Google or take you directly to its product page in a shopping app, such as Etsy. Apple is making an API available to app developers so Visual Intelligence can open dedicated apps when it detects relevant content or products.
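For developers curious what that integration involves, it runs through Apple’s App Intents framework. The sketch below is a rough outline rather than Apple’s sample code: the ProductEntity type and its catalog logic are hypothetical stand-ins, and while the SemanticContentDescriptor and IntentValueQuery names come from Apple’s WWDC 2025 developer materials, the exact signatures here should be treated as approximate.

```swift
import AppIntents
import VisualIntelligence

// Hypothetical entity representing a product in a shopping app's catalog.
struct ProductEntity: AppEntity {
    static var typeDisplayRepresentation: TypeDisplayRepresentation = "Product"
    static var defaultQuery = ProductEntityQuery()

    var id: String
    var name: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }
}

// Standard App Intents lookup so the system can resolve products by their IDs.
struct ProductEntityQuery: EntityQuery {
    func entities(for identifiers: [ProductEntity.ID]) async throws -> [ProductEntity] {
        // Placeholder: a real app would fetch these from its own catalog or backend.
        identifiers.map { ProductEntity(id: $0, name: "Product \($0)") }
    }
}

// Visual Intelligence hands the app a descriptor of what the camera or screenshot
// captured; the app returns matching entities for the results view to display.
struct ProductVisualSearchQuery: IntentValueQuery {
    func values(for input: SemanticContentDescriptor) async throws -> [ProductEntity] {
        // Placeholder: a real app would match the descriptor's captured image
        // against its own catalog here and return the closest products.
        return []
    }
}
```

In a shipping app, a query like this would presumably also be paired with an intent that opens the matched item’s page, which is how Visual Intelligence can jump you straight into the app rather than just listing results.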
Falling Short on the Cool Factor
All that said, I still feel like Visual Intelligence is missing a level of interactivity I can get with other tools. On either my Android phone or iPhone, I can converse back and forth with Copilot Vision or Gemini Live about what I’m looking at via the camera app. When I pointed my phone’s camera out a motel window recently, for example, Gemini Live identified the tree in the courtyard as an olive tree. I could then continue to ask related questions, such as where the tree species was native. This ability to point my camera at something and simply chat with an AI about it feels orders of magnitude cooler than anything Visual Intelligence currently does. And more importantly, it feels like something I expect an AI assistant to be able to do.
I understand that Apple is prioritizing on-device AI, which isn’t yet capable of such feats, but it seems like the company should be able to build a similar feature, given how much emphasis it puts on its Private Cloud Compute technology. We can only hope it catches up with its competitors before their AI tools take an even greater leap ahead.
