Yesterday, Apple released developer beta 1 of iOS 26.4, which, among other things, adds a feature to the Music app that uses Apple Intelligence to generate a playlist from a short description of what the user wants to hear. That immediately reminded Federico and me of The Sentence, a Beats Music feature that sadly didn’t survive the app’s acquisition by Apple.
The Sentence let subscribers describe the music they wanted to hear by filling in a Mad Libs-style sentence. Every sentence was structured as “I’m [location] & feel like [mood] with [person/group] to [music genre].” The feature was a fantastic innovation that made playlist creation fun and easy. As Federico described it in 2014:
It’s The Sentence, though, that steals the spotlight in how it combines regular, Pandora-like song shuffling with a context/mood-based menu to tell Beats what you want to listen to. The Sentence, as the name implies, lets you construct a sentence using variable tokens for location, mood, user, and music genre. You can request things like “I’m at my computer and feel like dancing with myself to pop”, “I’m in the car and feel like driving with my friends to indie”, or more absurd contexts such as “I’m underpaid and I feel like shoveling snow with my lover to metal”. As reported by Re/code [Ed. note: This is a dead link], Beats explained that “the content, and the filters, are selected and tuned by humans, and an algorithm generates the playlist from your choices”.