Apple Intelligence to Expand to New Languages, the EU, and Vision Pro in April

Apple has announced that its personal intelligence system, Apple Intelligence, will debut in additional languages in April, when iOS 18.4, iPadOS 18.4, and macOS 15.4 are released.

In addition to the currently supported languages, Apple will add French, German, Italian, Portuguese (Brazil), Spanish, Japanese, Korean, Chinese (simplified), and localized English for Singapore and India. Developers running the latest betas can start testing their apps in these languages today.

The April updates will also bring Apple Intelligence to the iPhone and iPad for users in the EU, as well as to Apple Vision Pro.

It’s great to see that Apple Intelligence will expand in so many ways soon. For the feature to be widely adopted, localization is important, as is availability in the EU and on as many devices as possible, including the Vision Pro.


Apple Vision Glasses Will Be Irresistible

I found myself nodding in agreement from beginning to end with this story by Lachlan Campbell, who, after a year of Vision Pro, imagines what future Apple Vision glasses may be able to do and how they’d reshape our societal norms:

I’ve written about my long-term belief in spatial computing, and how visionOS 2 made small but notable progress. The pieces have clicked into place more recently for me for what an AR glasses version of Apple Vision would look like, and how it will change us. We don’t have the technology, hardware-wise, to build this product today, or we’d already be wearing it. We need significant leaps in batteries, mobile silicon, and displays to make this product work. Leaps in AI assistance, cameras, and computer vision would make this product better, too. But the industry is hard at work at all of these problems. This product is coming.

The basic pitch: augmented reality glasses with transparent lenses that can project more screen than you could ever own, wherever you are. The power of real software like iPad/Mac, an always-on intelligent assistant, POV photos/video/audio, and listening to audio without headphones. Control it like Apple Vision Pro with your eyes, hands, and voice, optionally pairing accessories (primarily AirPods and any of stylus/keyboard/trackpad/mice work for faster/more precise inputs). It’s cellular (with an Apple-designed modem) and entirely wireless. It combines the ideas of ambient computing that Humane (RIP) and Meta Ray-Bans have begun, including a wearable assistant, POV photography, and ambient audio with everything you love about your current Apple products.

I may be stating the obvious here, but I fundamentally believe that headsets are a dead end and glasses are the ultimate form factor we should be striving for. Or let me put it another way: every time I use visionOS, I remember how futuristic everything about it still feels…and how much I wish I was looking at it through glasses instead.

There’s a real possibility we may have Apple glasses (and an Apple foldable?) by 2030, and I wish I could just skip ahead five years now. As Lachlan argues, we’re marching toward all of this.

Permalink

One AI to Rule Them All?

I enjoyed this look by M.G. Siegler at the current AI landscape, evaluating the positions of all the big players and trying to predict who will come out on top based on what we can see today. I’ve been thinking about this a lot lately. The space is changing so rapidly, with weekly announcements and rumors, that it’s challenging to keep up with all the latest models, app integrations, and reasoning modes. But one thing seems certain: with 400 million weekly users, ChatGPT is winning in the public eye.

However, I was captivated by this analogy, and I wish I’d thought of it myself:

Professionals and power users will undoubtedly pay for, and get value out of, multiple models and products. But just as with the streaming wars, consumers are not going to buy all of these services. And unlike that war, where all of the players had differentiating content, again, the AI services are reaching some level of parity (for consumer use cases). So whereas you might have three or four streaming services that you pay for, you will likely just have one main AI service. Again, it’s more like search in that way.

I see the parallels between different streaming services and different AI models, and I wonder if it’s the sort of diversification that happens before inevitable consolidation. Right now, I find ChatGPT’s Deep Research superior to Google Gemini, but Google has a more fascinating and useful ecosystem story; Claude is better at coding, editing prose, and following complex instructions than any other model I’ve tested, but it feels limited by a lack of extensions and web search (for now). As a result, I find myself jumping between different LLMs for different tasks. And that’s not to mention the more specific products I use on a regular basis, such as NotebookLM, Readwise Chat, and Whisper. Could it be that, just like I’ve always appreciated distinct native apps for specific tasks, maybe I also prefer dedicated AIs for different purposes now?

I continue to think that, long term, it’ll once again come down to iOS versus Android, as it’s always been. But I also believe that M.G. Siegler is correct: until the dust settles (if it ever does), power users will likely use multiple AIs in lieu of one AI to rule them all. And for regular users, at least for the time being, that one AI is ChatGPT.

Permalink

Apple to Eliminate Advanced Data Protection for iCloud Accounts in the UK

A couple of weeks ago, I linked to a report from The Washington Post, which said that the UK government had demanded that Apple create a back door to access its customers’ encrypted iCloud data. Today, instead of creating that access, Apple announced it will remove Advanced Data Protection – the feature that allows users to end-to-end encrypt their iCloud data – for its UK customers.

In a statement to 9to5Mac, Apple said:

Apple can no longer offer Advanced Data Protection (ADP) in the United Kingdom to new users and current UK users will eventually need to disable this security feature. ADP protects iCloud data with end-to-end encryption, which means the data can only be decrypted by the user who owns it, and only on their trusted devices. We are gravely disappointed that the protections provided by ADP will not be available to our customers in the UK given the continuing rise of data breaches and other threats to customer privacy. Enhancing the security of cloud storage with end-to-end encryption is more urgent than ever before. Apple remains committed to offering our users the highest level of security for their personal data and are hopeful that we will be able to do so in the future in the United Kingdom. As we have said many times before, we have never built a backdoor or master key to any of our products or services and we never will.

This is a real shame to see and something I hope doesn’t spread to other countries, but I’m not optimistic that will be the case.

UK users who have enabled Advanced Data Protection will need to disable it to continue using their iCloud accounts. More details on the process and time frame for doing so are expected from Apple soon.

Permalink

My Latest Mac Hacks Column: Using Google Gemini with Read-Later and Listen-Later Services for Research

A Google Gemini report on the Sony PlayStation Portable.

Yesterday, I published the latest installment of my Mac Hacks column, an exclusive perk of Club MacStories+ and Club Premier, covering how I use Google Gemini combined with read- and listen-later services to do preliminary research for projects.

What started as a way to reduce distractions when doing research with the help of Google Gemini quickly evolved into something more. As I explain in the conclusion:

The result of this workflow is that I can generate a Gemini report for an ongoing project and then read it at my leisure somewhere other than at my desk, whether I’m using my laptop, an iPad, or an e-ink device. I also have the option of heading out to my local coffee shop for a change of scenery and listening to a report as I walk. On a busy day, it’s a nice way to get some exercise and knock out some research at the same time. That flexibility, combined with fewer up-front distractions, has proven to be a great productivity boost.

Research is a universal task that touches every sort of project. It’s also a place where it’s easy to get bogged down. If you’re interested in streamlining the process, don’t miss the latest Mac Hacks.

Discounts are just one of the many Club MacStories perks.

Mac Hacks is just one of the many perks that Club MacStories+ and Club Premier members enjoy. Others include:

  • weekly and monthly newsletters,
  • a sophisticated web app with search and filtering tools to navigate eight years of content,
  • customizable RSS feeds,
  • bonus columns,
  • an early and ad-free version of MacStories Unwind, our Internet culture and media podcast,
  • a vibrant Discord community of smart app and automation fans who trade a wealth of tips and discoveries every day, and
  • live Discord audio events after Apple events and at other times of the year.

On top of that, Club Premier members get AppStories+, an extended, ad-free version of our flagship podcast that we deliver early every week in high-bitrate audio.

Use the buttons below to learn more and sign up for Club MacStories+ or Club Premier.

Join Club MacStories+:

Join Club Premier:

Permalink

Chrome for iOS Adds ‘Circle to Search’ Feature

Circle to Search in Chrome for iOS.

Jess Weatherbed, writing for The Verge:

Google is rolling out new search gestures that allow iPhone users to highlight anything on their screen to quickly search for it. The Lens screen-searching feature is available on iOS in both the Google app and Chrome browser and provides a similar experience to Android’s Circle to Search, which isn’t supported on iPhones.
[…]
To use the new Lens gestures, iPhone users need to open the three-dot menu within the Google or Chrome apps and select “Search Screen with Google Lens.” You can then use “any gesture that feels natural” to highlight what you want to search. Google says a new Lens icon for quickly accessing the feature will also be added to the address bar “in the coming months.”

This is a nifty addition to Chrome for iOS, albeit a far cry from how the same integration works on modern Pixel phones, where you can long-press the navigation handle to activate Circle to Search system-wide. In my tests, it worked pretty well on iPhone, and I especially appreciate the haptic feedback you get when circling something. Given the platform constraints, it’s pretty well done.[1]

I’ve been using Chrome a bit more lately, and while it has a handful of advantages over Safari,[2] it lacks a series of foundational features that I consider table stakes in a modern browser for iOS and iPadOS. On iPad, for whatever reason, Chrome does not support pinned tabs and can’t display the favorites bar at all times, both of which are downright nonsensical decisions. Also, despite the existence of Gemini, Chrome for iOS and iPadOS cannot summarize webpages, nor does it offer any integration with Gemini in the first place. I shouldn’t be surprised that Chrome for iOS doesn’t offer any Shortcuts actions, either, but that’s worth pointing out.

Chrome makes sense as an option for people who want to use the same browser across multiple platforms, but there’s something to be said for the productivity gains of Safari on iOS and iPadOS. While Google is still shipping a baby version of Chrome, UI- and interaction-wise, Safari is – despite its flaws – a mature browser that takes the iPhone and iPad seriously.


  1. Speaking of which, I think holding the navigation handle to summon a system-wide feature is a great gesture on Android. Currently, Apple uses a double-tap gesture on the Home indicator to summon Type to Siri; I wouldn’t be surprised if iOS 19 brings an Android-like holding gesture to do something with Apple Intelligence. ↩︎
  2. For starters, it’s available everywhere, whereas Safari is nowhere to be found on Windows (sigh) or Android. Plus, Chrome for iOS has an excellent widget to quickly search from the Home Screen, and I prefer its tab group UI with colorful folders displayed in the tab switcher. ↩︎

The Latest from AppStories and Ruminate

Enjoy the latest episodes from MacStories’ family of podcasts:

AppStories

This week, Federico and I each pick two apps you may or may not have heard of or considered using and explain why you should give them a try.

On AppStories+, we extend our picks with several more apps we’ve been testing recently.

This episode is sponsored by:

  • Memberful – Easy-to-Use Reliable Membership Software
  • Incogni – Take your personal data back with Incogni! Use code APPSTORIES with this link and get 60% off an annual plan.

Ruminate

A live snack test, some new task managers, and I bought a 15-year-old handheld.

Read more


Apple Reveals New iPhone 16e with Face ID and 48MP Camera

Source: Apple.

Today, Apple unveiled the iPhone 16e, which replaces the iPhone SE. The new iPhone tracks with the rumors that have been circulating for months, but for those who don’t follow rumors closely, it’s worth running down the specs of Apple’s most affordable iPhone, because the changes are significant.

Let’s start with the design. With this update, the phone moves from its iPhone 8-era look to a style that fits in better with today’s iPhones. Like the iPhone 14, which debuted a couple of years ago, the new 16e includes a notch at the top of the screen that houses the front-facing camera and other sensors. The screen has also been expanded to 6.1” and upgraded to a Super Retina XDR OLED panel.

The new 16e ditches the Home button for Face ID, which goes a long way toward refreshing its look. The new budget phone doesn’t include the Camera Control like the iPhone 16, but it does feature the Action button, which debuted on the iPhone 15 Pro.

Source: Apple.

Notwithstanding the lack of Camera Control, the new iPhone 16e ushers in a significant camera upgrade. The single rear-facing camera now features a 48MP sensor, first introduced with the iPhone 14 Pro. That’s a big step up from the iPhone SE’s 12MP sensor. For photographers, the camera upgrade pairs nicely with the 16e’s new USB-C port, which is compatible with a wider range of accessories than Lightning, such as external storage.

The new iPhone 16e is powered by an A18 processor, making it capable of running Apple Intelligence. I’m not sure that’s a huge selling point yet, but the increased processor power and memory headroom should make the 16e far more capable at tasks like transcoding and editing video, too.

A less welcome change is the 16e’s price, which is significantly more than the discontinued iPhone SE. The SE started at $429, but upgrading to this model will cost you at least $599 with 128GB of storage (twice what the SE offered). The price isn’t surprising considering the many updates included in this generation, but it will make it harder for some consumers to justify the purchase.

Another strange omission is the lack of MagSafe. That not only limits how the device can be charged, but it also rules out a wide variety of third-party accessories.

That said, I’m intrigued by the iPhone 16e and may buy one – not because I need a new phone, but because I want a new camera for shooting multicam video with Final Cut Pro for iPad. It’s such an incredibly efficient workflow for shooting videos for the MacStories YouTube channel that I’ve resorted to using my iPad mini’s 12MP camera alongside my iPhone 16 Pro Max. That has worked reasonably well, but the iPad mini’s camera can’t match my iPhone’s. With the 16e, I’d have a lightweight, highly portable option that’s perfect for my needs. Still, the price and lack of MagSafe are issues that make me hesitate.

The new iPhone 16e will be available for preorder starting February 21, with deliveries and in-store availability beginning Friday, February 28.


From a Turntable to an iPad Home Dashboard: My First Experience with Vinyl

This month, amidst the increasingly chaotic rumblings of the world in the news, I found myself looking for a new distraction. Since Civilization VII wasn’t out just yet, my eyes quickly turned to the vinyl record player that my partner and I had been storing in its cardboard box for months without ever taking the time to set it up. It’s not even ours; we’ve been keeping it safe, along with a sizable vinyl collection, for a close friend who unfortunately doesn’t have enough space for it in their current home.

This turntable is definitely not fancy – it’s even quite affordable compared to similar models – but it looks pretty, and our friend graciously gave us permission to set it up for ourselves in the living room. While I’m sure they only pitied my desperate need for a new distraction, I took them up on the offer and opened the turntable’s box for the first time.

At the risk of sounding like a total youngster, I must disclose that until three weeks ago, I had never interacted with vinyl before. All I had were some presumptuous preconceptions: “Doesn’t music sound worse on vinyl? Also, why should I bother with large, fragile music discs and a whole record player when I already have Apple Music in my pocket with lossless audio and Dolby Atmos?”

Still, I was not only intrigued, but also motivated to solve the main problem that setting up this record player posed: how can I make it work when our audio gear at home consists only of a handful of HomePod minis, one pair of wired headphones, and several pairs of Bluetooth headphones? While some turntables ship with built-in Bluetooth connectivity, ours can only output audio over USB or RCA with the help of a sound amplifier, and it definitely can’t broadcast audio to AirPlay devices like our HomePod minis.

Allow me to spoil the ending of this story for you: in the end, unboxing this turntable escalated into a legitimately awesome tech upgrade to our living room. It’s now equipped with a docked 11” iPad Pro that acts as a shared dashboard for controlling our HomeKit devices, performing everyday tasks like checking the weather and setting timers, and, of course, broadcasting our vinyl records to any HomePod mini or Bluetooth device in the apartment. This setup is amazing, and it works perfectly; however, getting there was a tedious process that reinforced my long-standing frustrations with Apple’s self-imposed software limitations.

Let me tell you how it all went.

Read more