Posts in stories

From a Turntable to an iPad Home Dashboard: My First Experience with Vinyl

This month, amidst the increasingly chaotic rumblings of the world in the news, I found myself looking for a new distraction. Since Civilization VII wasn’t out just yet, my eyes quickly turned to the vinyl record player that my partner and I had been storing in its cardboard box for months without ever taking the time to set it up. It’s not even ours; we’ve been keeping it safe, along with a sizable vinyl collection, for a close friend who unfortunately doesn’t have enough space for it in their current home.

This turntable is definitely not fancy – it’s even quite affordable compared to similar models – but it looks pretty, and our friend graciously gave us permission to set it up for ourselves in the living room. While I’m sure they merely took pity on my desperate need for a new distraction, I took them up on this offer and opened the turntable’s box for the first time.

At the risk of sounding like a total youngster, I must disclose that until three weeks ago, I had never interacted with vinyl before. All I had were some presumptuous preconceptions. “Doesn’t music sound worse on vinyl? Also, why should I bother with large, fragile music discs and a whole record player when I already have Apple Music in my pocket with lossless audio and Dolby Atmos?”

Still, I was not only intrigued, but also motivated to solve the main problem that setting up this record player posed: how can I make it work when our audio gear at home consists only of a handful of HomePod minis, one pair of wired headphones, and several pairs of Bluetooth headphones? While some turntables ship with built-in Bluetooth connectivity, ours can only output audio over USB or RCA with the help of a sound amplifier, and it definitely can’t broadcast audio to AirPlay devices like our HomePod minis.

Allow me to spoil the ending of this story for you: in the end, unboxing this turntable escalated into a legitimately awesome tech upgrade to our living room. It’s now equipped with a docked 11” iPad Pro that acts as a shared dashboard for controlling our HomeKit devices, performing everyday tasks like consulting the weather and setting up timers, and of course, broadcasting our vinyls to any HomePod mini or Bluetooth device in the apartment. This setup is amazing, and it works perfectly; however, getting there was a tedious process that dramatically reinforced my long-standing frustrations with Apple’s self-imposed software limitations.

Let me tell you how it all went.

Read more


DefaultSMS Lets You Choose Your Default Messaging App

In iOS 18.2, Apple introduced the ability for users to set their default apps for messaging, calling, call filtering, passwords, contactless payments, and keyboards. Previously, it was only possible to specify default apps for mail and browsing, so this was a big step forward.

While apps like 1Password quickly took advantage of these new changes, there have been few to no takers in the calling, contactless payments, and messaging categories. Enter DefaultSMS, a new app that, as far as I can tell, seems to be the first to make use of the default messaging app setting.

DefaultSMS is not a messaging app. What it does is use this new setting to effectively bounce the user into the messaging app of their choice when they tap on a phone number elsewhere within iOS. Telegram, WhatsApp, and Signal are the options currently supported in the app.

Initial setup is quick. First, you select the messaging app you would like to use within DefaultSMS. Then, you head to Settings → Apps → Default Apps → Messaging and select DefaultSMS instead of Messages.

When you tap on a phone number, DefaultSMS will launch your chosen messaging app to start a conversation.

Now, whenever you tap on a phone number from a website, email, note, or other source within iOS, the system will recognize the sms:// link and open a new message to that number in your default messaging app, now specified as DefaultSMS. The app will then bounce you into your messaging app of choice to start the conversation. The developer says the process is 100% private, with DefaultSMS retaining none of this information.
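
As a rough illustration of how this bounce could work, here’s a minimal Swift sketch. To be clear, this is an assumption about the mechanics, not DefaultSMS’s actual source: it presumes iOS hands the tapped sms: URL to whichever app is set as the default messaging app, and that the app then rewrites it into a third-party deep link (the wa.me format is WhatsApp’s publicly documented click-to-chat link scheme).

```swift
import UIKit

// Hypothetical scene delegate for an app set as the default messaging app.
// Assumption: iOS delivers tapped sms: URLs here once the app is selected
// under Settings → Apps → Default Apps → Messaging.
final class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
        guard let url = URLContexts.first?.url, url.scheme == "sms" else { return }

        // An sms: URL looks like "sms:+15551234567"; keep only digits and "+".
        let number = url.absoluteString
            .dropFirst("sms:".count)
            .filter { $0.isNumber || $0 == "+" }

        // Bounce into the user's preferred app. A real app would read the
        // user's saved choice instead of hardcoding WhatsApp here.
        if let whatsapp = URL(string: "https://wa.me/\(number)") {
            UIApplication.shared.open(whatsapp)
        }
    }
}
```

Opening an https://wa.me link hands the conversation off to WhatsApp the same way tapping such a link in Safari would, which is consistent with the “bounce” behavior described above.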

It’s worth pointing out a few things about the app:

  • You can only message someone who already has the app you are messaging from.
  • If someone sends you an SMS, it will still be delivered to the Messages app.
  • Once you start a conversation, you will be messaging from the app you have chosen (such as WhatsApp), not via SMS.

So why does this app exist? I put this question to the developer, Vincent Neo, who said, “The focus of the app is more towards countries where a significant part of the population already prefers a specific platform very frequently, such that users are very likely to prefer that over other platforms (including SMS), similar to your case, where everyone you know has WhatsApp.”

Quite simply, DefaultSMS allows you to choose which app you want to use to start a conversation when you tap a phone number, rather than always reverting to Messages. The app also highlights a flaw in the phrase “default messaging app”: there are still no APIs for apps to receive SMS messages. Until those are added, we will have to rely on clever third-party utilities like DefaultSMS to get us halfway there.

DefaultSMS is available on the App Store for $0.99.


Gemini 2.0 and LLMs Integrated with Apps

Busy day at Google today: the company rolled out version 2.0 of its Gemini AI assistant (previously announced in December) to more users, along with a variety of new and updated models. From the Google blog:

Today, we’re making the updated Gemini 2.0 Flash generally available via the Gemini API in Google AI Studio and Vertex AI. Developers can now build production applications with 2.0 Flash.

We’re also releasing an experimental version of Gemini 2.0 Pro, our best model yet for coding performance and complex prompts. It is available in Google AI Studio and Vertex AI, and in the Gemini app for Gemini Advanced users.

We’re releasing a new model, Gemini 2.0 Flash-Lite, our most cost-efficient model yet, in public preview in Google AI Studio and Vertex AI.

Finally, 2.0 Flash Thinking Experimental will be available to Gemini app users in the model dropdown on desktop and mobile.
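
If you’re wondering what “generally available via the Gemini API” means in practice, here’s a minimal sketch of a 2.0 Flash request, assuming the REST endpoint and JSON shape from Google’s AI Studio quickstart and a GEMINI_API_KEY environment variable; the prompt is just an example.

```swift
import Foundation

// Script-style snippet (uses top-level await). The endpoint and request
// body follow Google's published generateContent quickstart.
let apiKey = ProcessInfo.processInfo.environment["GEMINI_API_KEY"] ?? ""
let endpoint = "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent?key=\(apiKey)"

var request = URLRequest(url: URL(string: endpoint)!)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try JSONSerialization.data(withJSONObject: [
    "contents": [
        ["parts": [["text": "In one sentence, what's new in Gemini 2.0 Flash?"]]]
    ]
])

// The response is JSON containing the model's candidates.
let (data, _) = try await URLSession.shared.data(for: request)
print(String(decoding: data, as: UTF8.self))
```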

Read more


The Many Purposes of Timeline Apps for the Open Web

Tapestry (left) and Reeder.

Writing at The Verge following the release of The Iconfactory’s new app Tapestry, David Pierce perfectly encapsulates how I feel about the idea of “timeline apps” (a name that I’m totally going to steal, thanks David):

What I like even more, though, is the idea behind Tapestry. There’s actually a whole genre of apps like this one, which I’ve taken to calling “timeline apps.” So far, in addition to Tapestry, there’s Reeder, Unread, Feeeed, Surf, and a few others. They all have slightly different interface and feature ideas, but they all have the same basic premise: that pretty much everything on the internet is just feeds. And that you might want a better place to read them.
[…]
These apps can also take some getting used to. If you’re coming from an RSS reader, where everything has the same format — headline, image, intro, link — a timeline app will look hopelessly chaotic. If you’re coming from social, where everything moves impossibly fast and there’s more to see every time you pull to refresh, the timeline you curate is guaranteed to feel boring by comparison.

I have a somewhat peculiar stance on this new breed of timeline apps, and since I’ve never written about them on MacStories before, allow me to clarify and share some recent developments in my workflow while I’m at it.

Read more


Six Colors’ Apple in 2024 Report Card

Average scores from the 2024 Six Colors report card. Source: Six Colors.

For the past 10 years, Six Colors’ Jason Snell has put together an “Apple report card” – a survey to assess the current state of Apple “as seen through the eyes of writers, editors, developers, podcasters, and other people who spend an awful lot of time thinking about Apple”.

The 2024 edition of the Six Colors Apple Report Card has been published, and you can find an excellent summary of all the submitted comments along with charts featuring average scores for the different categories here.

I’m grateful that Jason invited me to take part again and share my thoughts on Apple’s 2024. As you’ll see from my comments below, last year represented the end of an interesting transition period for me: after years of experiments, I settled on the iPad Pro as my main computer. Despite my personal enthusiasm, however, the overall iPad story remained frustrating with its peculiar mix of phenomenal M4 hardware and stagnant software. The iPhone lineup impressed me with its hardware (across all models), though I’m still wishing for that elusive foldable form factor. I was very surprised by the AirPods 4, and while Vision Pro initially showed incredible promise, I found myself not using it that much by the end of the year.

I’ve prepared the full text of my responses for the Six Colors report card, which you can find below.

Read more


MacStories Won’t Stand for Meta’s Dehumanizing and Harmful Moderation Policies

Just over two years ago, MacStories left Twitter behind. We left when Elon Musk began dismantling the company’s trust and safety infrastructure, allowing hateful speech and harassment on the platform. Meta is now doing the same thing with Threads and Instagram, so we’re leaving them behind, too.

We were initially optimistic about Threads because of its support for federation and interoperability with Mastodon. The relatively young service has never done as much as it should to protect its users from hateful content, as Niléane documented last year. Yet as bad as it already was for LGBT people and others, things took a much darker turn this week when Meta announced a series of new policies that significantly scaled back moderation on Threads and Instagram.

Meta has abandoned its relationships with third-party fact-checking organizations in favor of a “community notes” approach similar to X. The company has also eliminated filters it had in place to protect users from a wide variety of harmful speech. As Casey Newton reported yesterday, the internal Meta documents that implement these new policies now allow for posts like:

“There’s no such thing as trans children.”
“God created two genders, ‘transgender’ people are not a real thing.”
“This whole nonbinary thing is made up. Those people don’t exist, they’re just in need of some therapy.”
“A trans woman isn’t a woman, it’s a pathetic confused man.”
“A trans person isn’t a he or she, it’s an it.”

Newton also reports:

So in addition to being able to call gay people insane on Facebook, you can now also say that gay people don’t belong in the military, or that trans people shouldn’t be able to use the bathroom of their choice, or blame COVID-19 on Chinese people, according to this round-up in Wired. (You can also now call women household objects and property, per CNN.) The company also (why not?!) removed a sentence from its policy explaining that hateful speech can “promote offline violence.”

For more on Meta’s new policies and their impact, we encourage MacStories readers to read both of Casey Newton’s excellent Platformer articles linked above.

This is ugly, dehumanizing stuff that has no place on the Internet or anywhere else and runs counter to everything we believe in at MacStories. We believe that platforms should protect all of their users from harm and harassment. Technology should bring people together, not divide and dehumanize them, which is why we’re finished with Threads and Instagram.

I’d like to think other media companies will join us in taking similar action, but we understand why many won’t. Meta’s social networks drive a significant amount of traffic to websites like MacStories, and walking away from that isn’t easy in an economy where media companies are under a lot of financial pressure. We’ll be okay thanks to the support of our readers who subscribe to Club MacStories, but many others don’t have that, which is why it’s important for individuals to do what they can to help too.

We know that in times like these, it’s often hard to know what to do because we’ve felt that way ourselves. One way you can help is to make a donation to groups that are working to support the rights of LGBT people who increasingly find themselves threatened by the actions of companies, governments, and others. With Niléane’s assistance, we have identified organizations you can donate to in the U.S., E.U., and U.K. that are working to protect the rights of LGBT people:

Thanks to all of you who donate. The world of tech is not immune from the troubles facing our world, but with your help, we can make MacStories a bright spot on the tech landscape where people feel safe and welcome.

– Federico and John


What’s in My CES Bag?

Packing for CES has been a little different than packing for WWDC. The biggest differences are the huge crowds at CES and the limits the conference puts on the bags you can carry into venues.

My trusty Tom Bihn Synapse 25 backpack isn’t big, but it’s too large for CES, so the first thing I did was look for a bag that was small enough to meet the CES security rules but big enough to hold my 14” MacBook Pro and 11” iPad Pro, plus accessories. I decided on a medium-sized Tomtoc Navigator T24 sling bag, which is the perfect size. It holds 7 liters of stuff and has built-in padding to protect the corners of the MacBook Pro and iPad as well as pockets on the inside and outside to help organize cables and other things.

Tomtoc’s medium Navigator T24 sling bag. Source: Tomtoc.

I don’t plan to carry my MacBook Pro with me during the day. The iPad Pro will be plenty for any writing and video production I do on the go, but it will be good to have the power and flexibility of the MacBook Pro when I return to my hotel room. For traveling to and from Las Vegas, I appreciate that the Tomtoc bag can fit everything I’m bringing.

A surprising amount of stuff fits in the T24. Source: Tomtoc.

With little room to spare, my setup is minimal. I’ll write on the iPad Pro and MacBook Pro, carrying the iPad with me tethered to my iPhone for Internet access. That’s a tried-and-true setup I already use whenever I’m away from home.

Read more


iPad Pro for Everything: How I Rethought My Entire Workflow Around the New 11” iPad Pro

My 11” iPad Pro.

For the past two years since my girlfriend and I moved into our new apartment, my desk has been in a constant state of flux. Those who have been reading MacStories for a while know why. There were two reasons: I couldn’t figure out how to use my iPad Pro for everything I do, specifically for recording podcasts the way I like, and I couldn’t find an external monitor that would let me both work with the iPad Pro and play videogames when I wasn’t working.

This article – which has been six months in the making – is the story of how I finally did it.

Over the past six months, I completely rethought my setup around the 11” iPad Pro and a monitor that gives me the best of both worlds: a USB-C connection for when I want to work with iPadOS at my desk and multiple HDMI inputs for when I want to play my PS5 Pro or Nintendo Switch. Getting to this point has been a journey, which I have documented in detail on the MacStories Setups page.

This article started as an in-depth examination of my desk, the accessories I use, and the hardware I recommend. As I was writing it, however, I realized that it had turned into something bigger. It’s become the story of how, after more than a decade of working on the iPad, I was able to figure out how to accomplish the last remaining task in my workflow, but also how I fell in love with the 11” iPad Pro all over again thanks to its nano-texture display.

I started using the iPad as my main computer 12 years ago. Today, I am finally able to say that I can use it for everything I do on a daily basis.

Here’s how.

Read more


Apple Intelligence in iOS 18.2: A Deep Dive into Working with Siri and ChatGPT, Together

The ChatGPT integration in iOS 18.2.

Apple is releasing iOS and iPadOS 18.2 today, and with those software updates, the company is rolling out the second wave of Apple Intelligence features as part of their previously announced roadmap that will culminate with the arrival of deeper integration between Siri and third-party apps next year.

In today’s release, users will find native integration between Siri and ChatGPT, more options in Writing Tools, a smarter Mail app with automatic message categorization, generative image creation in Image Playground, Genmoji, Visual Intelligence, and more. It’s certainly a more ambitious rollout than the somewhat disjointed debut of Apple Intelligence with iOS 18.1, and one that will garner more attention if only by virtue of Siri’s native access to OpenAI’s ChatGPT.

And yet, despite the long list of AI features in these software updates, I find myself mostly underwhelmed – if not downright annoyed – by the majority of the Apple Intelligence changes, but not for the reasons you may expect coming from me.

Some context is necessary here. As I explained in a recent episode of AppStories, I’ve embarked on a bit of a journey lately in terms of understanding the role of AI products and features in modern software. I’ve been doing a lot of research, testing, and reading about the different flavors of AI tools that we see pop up on almost a daily basis now in a rapidly changing landscape. As I discussed on the show, I’ve landed on two takeaways, at least for now:

  • I’m completely uninterested in generative products that aim to produce images, video, or text to replace human creativity and input. I find products that create fake “art” sloppy, distasteful, and objectively harmful for humankind because they aim to replace the creative process with a thoughtless approximation of what it means to be creative and express one’s feelings, culture, and craft through genuine, meaningful creative work.
  • I’m deeply interested in the idea of assistive and agentic AI as a means to remove busywork from people’s lives and, well, assist people in the creative process. In my opinion, this is where the more intriguing parts of the modern AI industry lie:
    • agents that can perform boring tasks for humans with a higher degree of precision and faster output;
    • coding assistants to put software in the hands of more people and allow programmers to tackle higher-level tasks;
    • RAG-infused assistive tools that can help academics and researchers; and
    • protocols that can map an LLM to external data sources such as Claude’s Model Context Protocol.

I see these tools as a natural evolution of automation and, as you can guess, that has inevitably caught my interest. The implications for the Accessibility community in this field are also something we should keep in mind.

To put it more simply, I think empowering LLMs to be “creative” with the goal of displacing artists is a mistake, and also a distraction – a glossy facade largely amounting to a party trick that gets boring fast and misses the bigger picture of how these AI tools may practically help us in the workplace, healthcare, biology, and other industries.

This is how I approached my tests with Apple Intelligence in iOS and iPadOS 18.2. For the past month, I’ve extensively used Claude to assist me with the making of advanced shortcuts, used ChatGPT’s search feature as a Google replacement, indexed the archive of my iOS reviews with NotebookLM, relied on Zapier’s Copilot to more quickly spin up web automations, and used both Sonnet 3.5 and GPT-4o to rethink my Obsidian templating system and note-taking workflow. I’ve used AI tools for real, meaningful work that revolved around me – the creative person – doing the actual work and letting software assist me. And at the same time, I tried to add Apple’s new AI features to the mix.

Perhaps it’s not “fair” to compare Apple’s newfangled efforts to products by companies that have been iterating on their LLMs and related services for the past five years, but when the biggest tech company in the world makes bold claims about their entrance into the AI space, we have to take them at face value.

It’s been an interesting exercise to see how far behind Apple is compared to OpenAI and Anthropic in terms of the sheer capabilities of their respective assistants; at the same time, I believe Apple has some serious advantages in the long term as the platform owner, with untapped potential for integrating AI more deeply within the OS and apps in a way that other AI companies won’t be able to. There are parts of Apple Intelligence in 18.2 that hint at much bigger things to come in the future that I find exciting, as well as features available today that I’ve found useful and, occasionally, even surprising.

With this context in mind, in this story you won’t see any coverage of Image Playground and Image Wand, which I believe are ridiculously primitive and perfect examples of why Apple may be two years behind their competitors. Image Playground in particular produces “illustrations” that you’d be kind to call abominations; they remind me of the worst Midjourney creations from 2022. Instead, I will focus on the more assistive aspects of AI and share my experience with trying to get work done using Apple Intelligence on my iPhone and iPad alongside its integration with ChatGPT, which is the marquee addition of this release.

Let’s dive in.

Read more