Federico Viticci

9577 posts on MacStories since April 2009

Federico is the founder and Editor-in-Chief of MacStories, where he writes about Apple with a focus on apps, developers, iPad, and iOS productivity. He founded MacStories in April 2009 and has been writing about Apple ever since. Federico is also the co-host of AppStories, a weekly podcast exploring the world of apps; Unwind, a fun exploration of media and more; and NPC: Next Portable Console, a show about portable gaming and the handheld revolution.


Apple Vision Glasses Will Be Irresistible

I found myself nodding in agreement from beginning to end with this story by Lachlan Campbell, who, after a year of Vision Pro, imagines what future Apple Vision glasses may be able to do and how they’d reshape our societal norms:

I’ve written about my long-term belief in spatial computing, and how visionOS 2 made small but notable progress. The pieces have clicked into place more recently for me for what an AR glasses version of Apple Vision would look like, and how it will change us. We don’t have the technology, hardware-wise, to build this product today, or we’d already be wearing it. We need significant leaps in batteries, mobile silicon, and displays to make this product work. Leaps in AI assistance, cameras, and computer vision would make this product better, too. But the industry is hard at work at all of these problems. This product is coming.

The basic pitch: augmented reality glasses with transparent lenses that can project more screen than you could ever own, wherever you are. The power of real software like iPad/Mac, an always-on intelligent assistant, POV photos/video/audio, and listening to audio without headphones. Control it like Apple Vision Pro with your eyes, hands, and voice, optionally pairing accessories (primarily AirPods and any of stylus/keyboard/trackpad/mice work for faster/more precise inputs). It’s cellular (with an Apple-designed modem) and entirely wireless. It combines the ideas of ambient computing that Humane (RIP) and Meta Ray-Bans have begun, including a wearable assistant, POV photography, and ambient audio with everything you love about your current Apple products.

I may be stating the obvious here, but I fundamentally believe that headsets are a dead end and glasses are the ultimate form factor we should be striving for. Or let me put it another way: every time I use visionOS, I remember how futuristic everything about it still feels…and how much I wish I were looking at it through glasses instead.

There’s a real possibility we may have Apple glasses (and an Apple foldable?) by 2030, and I wish I could just skip ahead five years now. As Lachlan argues, we’re marching toward all of this.

Permalink

One AI to Rule Them All?

I enjoyed this look by M.G. Siegler at the current AI landscape, evaluating the positions of all the big players and trying to predict who will come out on top based on what we can see today. I’ve been thinking about this a lot lately. The space is changing so rapidly, with weekly announcements and rumors, that it’s challenging to keep up with all the latest models, app integrations, and reasoning modes. But one thing seems certain: with 400 million weekly users, ChatGPT is winning in the public eye.

However, I was captivated by this analogy, and I wish I’d thought of it myself:

Professionals and power users will undoubtedly pay for, and get value out of, multiple models and products. But just as with the streaming wars, consumers are not going to buy all of these services. And unlike that war, where all of the players had differentiating content, again, the AI services are reaching some level of parity (for consumer use cases). So whereas you might have three or four streaming services that you pay for, you will likely just have one main AI service. Again, it’s more like search in that way.

I see the parallels between different streaming services and different AI models, and I wonder if it’s the sort of diversification that happens before inevitable consolidation. Right now, I find ChatGPT’s Deep Research superior to Google Gemini, but Google has a more fascinating and useful ecosystem story; Claude is better at coding, editing prose, and following complex instructions than any other model I’ve tested, but it feels limited by a lack of extensions and web search (for now). As a result, I find myself jumping between different LLMs for different tasks. And that’s not to mention the more specific products I use on a regular basis, such as NotebookLM, Readwise Chat, and Whisper. Could it be that, just like I’ve always appreciated distinct native apps for specific tasks, maybe I also prefer dedicated AIs for different purposes now?

I continue to think that, long term, it’ll once again come down to iOS versus Android, as it’s always been. But I also believe that M.G. Siegler is correct: until the dust settles (if it ever does), power users will likely use multiple AIs in lieu of one AI to rule them all. And for regular users, at least for the time being, that one AI is ChatGPT.

Permalink

Chrome for iOS Adds ‘Circle to Search’ Feature

Circle to Search in Chrome for iOS.

Jess Weatherbed, writing for The Verge:

Google is rolling out new search gestures that allow iPhone users to highlight anything on their screen to quickly search for it. The Lens screen-searching feature is available on iOS in both the Google app and Chrome browser and provides a similar experience to Android’s Circle to Search, which isn’t supported on iPhones.
[…]
To use the new Lens gestures, iPhone users need to open the three-dot menu within the Google or Chrome apps and select “Search Screen with Google Lens.” You can then use “any gesture that feels natural” to highlight what you want to search. Google says a new Lens icon for quickly accessing the feature will also be added to the address bar “in the coming months.”

This is a nifty addition to Chrome for iOS, albeit a far cry from how the same integration works on modern Pixel phones, where you can long-press the navigation handle to activate Circle to Search system-wide. In my tests, it worked pretty well on iPhone, and I especially appreciate the haptic feedback you get when circling something. Given the platform constraints, it’s pretty well done.¹

I’ve been using Chrome a bit more lately, and while it has a handful of advantages over Safari², it lacks a series of foundational features that I consider table stakes in a modern browser for iOS and iPadOS. On iPad, for whatever reason, Chrome does not support pinned tabs and can’t display the favorites bar at all times, both of which are downright nonsensical decisions. Also, despite the existence of Gemini, Chrome for iOS and iPadOS cannot summarize webpages, nor does it offer any integration with Gemini in the first place. I shouldn’t be surprised that Chrome for iOS doesn’t offer any Shortcuts actions, either, but that’s worth pointing out.

Chrome makes sense as an option for people who want to use the same browser across multiple platforms, but there’s something to be said for the productivity gains of Safari on iOS and iPadOS. While Google is still shipping a baby version of Chrome, UI- and interaction-wise, Safari is – despite its flaws – a mature browser that takes the iPhone and iPad seriously.


  1. Speaking of which, I think holding the navigation handle to summon a system-wide feature is a great gesture on Android. Currently, Apple uses a double-tap gesture on the Home indicator to summon Type to Siri; I wouldn’t be surprised if iOS 19 brings an Android-like holding gesture to do something with Apple Intelligence. ↩︎
  2. For starters, it’s available everywhere, whereas Safari is nowhere to be found on Windows (sigh) or Android. Plus, Chrome for iOS has an excellent widget to quickly search from the Home Screen, and I prefer its tab group UI with colorful folders displayed in the tab switcher. ↩︎

Gemini 2.0 and LLMs Integrated with Apps

Busy day at Google today: the company rolled out version 2.0 of its Gemini AI assistant (previously announced in December) to more users, with a variety of new and updated models. From the Google blog:

Today, we’re making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI. Developers can now build production applications with 2.0 Flash.

We’re also releasing an experimental version of Gemini 2.0 Pro, our best model yet for coding performance and complex prompts. It is available in Google AI Studio and Vertex AI, and in the Gemini app for Gemini Advanced users.

We’re releasing a new model, Gemini 2.0 Flash-Lite, our most cost-efficient model yet, in public preview in Google AI Studio and Vertex AI.

Finally, 2.0 Flash Thinking Experimental will be available to Gemini app users in the model dropdown on desktop and mobile.
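If you’re curious what “generally available via the Gemini API” looks like in practice, here’s a minimal sketch of calling 2.0 Flash from Python. This assumes the google-generativeai package and an API key from Google AI Studio; the model ID below matches Google’s announced naming, but it’s worth double-checking in AI Studio.

```python
# Minimal sketch: one Gemini API call against the 2.0 Flash model.
# Assumes `pip install google-generativeai` and an API key from Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; use your own key

# "gemini-2.0-flash" follows Google's announced naming; confirm the exact
# model ID in the AI Studio model list before relying on it.
model = genai.GenerativeModel("gemini-2.0-flash")
response = model.generate_content("In one paragraph, what's new in Gemini 2.0?")
print(response.text)
```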

Read more


The Many Purposes of Timeline Apps for the Open Web

Tapestry (left) and Reeder.

Writing at The Verge following the release of The Iconfactory’s new app Tapestry, David Pierce perfectly encapsulates how I feel about the idea of “timeline apps” (a name that I’m totally going to steal, thanks David):

What I like even more, though, is the idea behind Tapestry. There’s actually a whole genre of apps like this one, which I’ve taken to calling “timeline apps.” So far, in addition to Tapestry, there’s Reeder, Unread, Feeeed, Surf, and a few others. They all have slightly different interface and feature ideas, but they all have the same basic premise: that pretty much everything on the internet is just feeds. And that you might want a better place to read them.
[…]
These apps can also take some getting used to. If you’re coming from an RSS reader, where everything has the same format — headline, image, intro, link — a timeline app will look hopelessly chaotic. If you’re coming from social, where everything moves impossibly fast and there’s more to see every time you pull to refresh, the timeline you curate is guaranteed to feel boring by comparison.

I have a somewhat peculiar stance on this new breed of timeline apps, and since I’ve never written about them on MacStories before, allow me to clarify and share some recent developments in my workflow while I’m at it.

Read more


Six Colors’ Apple in 2024 Report Card

Average scores from the 2024 Six Colors report card. Source: Six Colors.

For the past 10 years, Six Colors’ Jason Snell has put together an “Apple report card” – a survey to assess the current state of Apple “as seen through the eyes of writers, editors, developers, podcasters, and other people who spend an awful lot of time thinking about Apple”.

The 2024 edition of the Six Colors Apple Report Card has been published, and you can find an excellent summary of all the submitted comments along with charts featuring average scores for the different categories here.

I’m grateful that Jason invited me to take part again and share my thoughts on Apple’s 2024. As you’ll see from my comments below, last year represented the end of an interesting transition period for me: after years of experiments, I settled on the iPad Pro as my main computer. Despite my personal enthusiasm, however, the overall iPad story remained frustrating with its peculiar mix of phenomenal M4 hardware and stagnant software. The iPhone lineup impressed me with its hardware (across all models), though I’m still wishing for that elusive foldable form factor. I was very surprised by the AirPods 4, and while Vision Pro initially showed incredible promise, I found myself not using it that much by the end of the year.

I’ve prepared the full text of my responses for the Six Colors report card, which you can find below.

Read more


Doing Research with NotebookLM

Fascinating blog post by Vidit Bhargava (creator of the excellent LookUp dictionary app) about how he worked on his master’s thesis with the aid of Google’s NotebookLM.

I used NotebookLM throughout my thesis, not because I was interested in it generating content for me (I think AI generated text and images are sloppy and classless); but because it’s a genuinely great research organization tool that provides utility of drawing connections between discrete topics and helping me understand my own journey better.

Make sure to check out the examples of his interviews and research material as indexed by the service.

As I explained in an episode of AppStories a while back, and as John also expanded upon in the latest issue of the Monthly Log for Club members, we believe that assistive AI tools that leverage modern LLM advancements to help people work better (and less) are infinitely superior to whatever useless slop generative tools produce.

Google’s NotebookLM is, in my opinion, one of the most intriguing new tools in this field. For the past two months, I’ve been using it as a personal search assistant for the entire archive of 10 years of annual iOS reviews – that’s more than half a million words in total. Not only can NotebookLM search that entire library in seconds, but it does so with even the most random natural language queries about the most obscure details I’ve ever covered in my stories, such as “When was the copy and paste menu renamed to edit menu?” (It was iOS 16.) It’s becoming increasingly challenging for me, after all these years, to keep track of the growing list of iOS-related minutiae; from a personal productivity standpoint, NotebookLM has to be one of the most exciting new products I’ve tried in a while. (Alongside Shortwave for email.)

Just today, I discovered that my read-later tool of choice – Readwise Reader – offers a native integration to let you search highlights with NotebookLM. That’s another source that I’m definitely adding to NotebookLM, and I’m thinking of how I could replicate the same Readwise Reader setup (highlights are appended to a single Google Doc) with Zapier and RSS feeds. Wouldn’t it be fun, for instance, if I could search the entire archive of AppStories show notes in NotebookLM, or if I could turn starred items from Feedbin into a standalone notebook as well?
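To make that idea concrete, here’s a rough sketch of the RSS half of the plumbing in Python, with the feedparser package pulling a feed (say, a Feedbin starred-items feed) and a plain text file standing in for the single Google Doc that NotebookLM would ingest. The feed URL and file names are placeholders, not a real setup.

```python
# Rough sketch: append new RSS entries to one text file that can be
# uploaded to NotebookLM as a source. Assumes `pip install feedparser`;
# the feed URL and file paths below are placeholders.
import feedparser

FEED_URL = "https://example.com/starred.xml"  # e.g. a Feedbin starred feed
ARCHIVE_PATH = "notebooklm_source.txt"
SEEN_PATH = "seen_ids.txt"

# Remember which entries we've already archived across runs.
try:
    with open(SEEN_PATH) as f:
        seen = set(f.read().split("\n"))
except FileNotFoundError:
    seen = set()

feed = feedparser.parse(FEED_URL)
with open(ARCHIVE_PATH, "a") as archive, open(SEEN_PATH, "a") as seen_file:
    for entry in feed.entries:
        entry_id = entry.get("id") or entry.get("link", "")
        if not entry_id or entry_id in seen:
            continue
        # One block per item: title, link, then the summary text.
        archive.write(f"{entry.get('title', 'Untitled')}\n")
        archive.write(f"{entry.get('link', '')}\n")
        archive.write(f"{entry.get('summary', '')}\n\n")
        seen_file.write(entry_id + "\n")
```

Run on a schedule, something like this would produce exactly the kind of single, ever-growing document that NotebookLM can treat as one source.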

I’m probably going to have to sign up for NotebookLM Plus when it launches for non-business accounts, which, according to Google, should happen in early 2025.

Permalink

“I Live My Life a Quarter Century at a Time”

Two days ago was the 25th anniversary of Steve Jobs unveiling the Aqua interface for Mac OS X for the first time at Macworld Expo. James Thomson published a great personal retrospective on one particular item of the Aqua UI that was shown off at the event: the original dock.

The version he showed was quite different to what actually ended up shipping, with square boxes around the icons, and an actual “Dock” folder in your user’s home folder that contained aliases to the items stored. I should know – I had spent the previous 18 months or so as the main engineer working away on it. At that very moment, I was watching from a cubicle in Apple Cork, in Ireland. For the second time in my short Apple career, I said a quiet prayer to the gods of demos, hoping that things didn’t break. For context, I was in my twenties at this point and scared witless.

James has told this story before, but there are new details I wasn’t familiar with, as well as some links worth clicking in the full story.

Permalink

NVIDIA Announces GeForce NOW Support Coming to Safari on Vision Pro Later This Month

With a press release following an otherwise packed keynote at CES (which John and Brendon, my NPC co-hosts, attended in person last night), NVIDIA announced that their streaming service GeForce NOW is going to natively support the Apple Vision Pro…well, sort of.

There aren’t that many details in NVIDIA’s announcement, but the gist of it is that Vision Pro users will be able to stream games by visiting the GeForce NOW website when a new version launches “later this month”.

Get immersed in a new dimension of big-screen gaming as GeForce NOW brings AAA titles to life on Apple Vision Pro spatial computers, Meta Quest 3 and 3S and Pico virtual- and mixed-reality headsets. Later this month, these supported devices will give members access to an extensive library of games to stream through GeForce NOW by opening the browser to play.geforcenow.com when the newest app update, version 2.0.70, starts rolling out later this month.

This is all NVIDIA said in their announcement, which isn’t much, but we can speculate on a few things based on the existing limitations of visionOS.

For starters, the current version of Safari on visionOS does not support adding PWAs to the visionOS Home Screen. Given that the existing version of GeForce NOW requires saving a web app to begin the setup process, this either means that a) NVIDIA knows a visionOS software update in January will add the ability to save web apps or b) GeForce NOW won’t require that additional step to start playing on visionOS. The latter option seems more likely.

Second, as we covered last year, there is a workaround to play with GeForce NOW on visionOS, and that is the Nexus⁺ app. I’ve been using the Nexus⁺ app on my Vision Pro to stream Indiana Jones and other games from the cloud, and while the resolution is good enough¹, what bothers me is the lack of HDR and Spatial Audio support (which should work with the Web Audio API in Safari for visionOS 2.0) in GeForce NOW when accessed from Nexus⁺’s built-in web browser.

The Nexus⁺ app supports ultra-wide aspect ratios, but HDR is nowhere to be found.

With all this in mind, I’m going to guess that, at a minimum, NVIDIA will support a PWA-free installation method in Safari for visionOS. I’m less optimistic about HDR and Spatial Audio, but as I gravitate more and more toward cloud streaming rather than local PC streaming², I’d be happily proven wrong here.

My only question is: with the App Store’s “new” rules, why isn’t NVIDIA making a native GeForce NOW app for Apple platforms?


  1. I’d love to know from people who know more about this stuff than I do whether Safari 18’s support for the WebRTC HEVC RTP payload format (RFC 7798) makes a difference for GeForce NOW streaming or not. ↩︎
  2. I’m actually thinking about selling my 4090 FE GPU in an effort to go all-in on cloud streaming and SteamOS in lieu of Windows in 2025. But this is a future topic for NPC. ↩︎