
Posts tagged with "iOS 18"

iOS and iPadOS 18.2: Everything New Besides Apple Intelligence

Today, Apple is releasing iOS and iPadOS 18.2, the second major updates to the iPhone and iPad’s latest operating system versions. Once again, this release’s main highlight is a wave of new Apple Intelligence features that are now available to the public. And just like in October, we’re covering these new AI features separately in a special story for MacStories readers. Be sure to check out Federico’s story, which goes over the new Apple Intelligence features included in iOS and iPadOS 18.2.

But besides another batch of Apple Intelligence features, this release also includes a series of changes to the system, from updates to Safari, Find My, and Photos to the arrival of new system-wide settings for Default Apps and more. Here’s a roundup of everything new besides Apple Intelligence in iOS and iPadOS 18.2.



Apple Intelligence in iOS 18.2: A Deep Dive into Working with Siri and ChatGPT, Together

The ChatGPT integration in iOS 18.2.


Apple is releasing iOS and iPadOS 18.2 today, and with those software updates, the company is rolling out the second wave of Apple Intelligence features as part of their previously announced roadmap that will culminate with the arrival of deeper integration between Siri and third-party apps next year.

In today’s release, users will find native integration between Siri and ChatGPT, more options in Writing Tools, a smarter Mail app with automatic message categorization, generative image creation in Image Playground, Genmoji, Visual Intelligence, and more. It’s certainly a more ambitious rollout than the somewhat disjointed debut of Apple Intelligence with iOS 18.1, and one that will garner more attention if only by virtue of Siri’s native access to OpenAI’s ChatGPT.

And yet, despite the long list of AI features in these software updates, I find myself mostly underwhelmed – if not downright annoyed – by the majority of the Apple Intelligence changes, but not for the reasons you may expect coming from me.

Some context is necessary here. As I explained in a recent episode of AppStories, I’ve embarked on a bit of a journey lately in terms of understanding the role of AI products and features in modern software. I’ve been doing a lot of research, testing, and reading about the different flavors of AI tools that we see pop up on almost a daily basis now in a rapidly changing landscape. As I discussed on the show, I’ve landed on two takeaways, at least for now:

  • I’m completely uninterested in generative products that aim to produce images, video, or text to replace human creativity and input. I find products that create fake “art” sloppy, distasteful, and objectively harmful for humankind because they aim to replace the creative process with a thoughtless approximation of what it means to be creative and express one’s feelings, culture, and craft through genuine, meaningful creative work.
  • I’m deeply interested in the idea of assistive and agentic AI as a means to remove busywork from people’s lives and, well, assist people in the creative process. In my opinion, this is where the more intriguing parts of the modern AI industry lie:
    • agents that can perform boring tasks for humans with a higher degree of precision and faster output;
    • coding assistants to put software in the hands of more people and allow programmers to tackle higher-level tasks;
    • RAG-infused assistive tools that can help academics and researchers; and
    • protocols that can map an LLM to external data sources such as Claude’s Model Context Protocol.

I see these tools as a natural evolution of automation and, as you can guess, that has inevitably caught my interest. The implications for the Accessibility community in this field are also something we should keep in mind.

To put it more simply, I think empowering LLMs to be “creative” with the goal of displacing artists is a mistake, and also a distraction – a glossy facade largely amounting to a party trick that gets boring fast and misses the bigger picture of how these AI tools may practically help us in the workplace, healthcare, biology, and other industries.

This is how I approached my tests with Apple Intelligence in iOS and iPadOS 18.2. For the past month, I’ve extensively used Claude to assist me with the making of advanced shortcuts, used ChatGPT’s search feature as a Google replacement, indexed the archive of my iOS reviews with NotebookLM, relied on Zapier’s Copilot to more quickly spin up web automations, and used both Sonnet 3.5 and GPT-4o to rethink my Obsidian templating system and note-taking workflow. I’ve used AI tools for real, meaningful work that revolved around me – the creative person – doing the actual work and letting software assist me. And at the same time, I tried to add Apple’s new AI features to the mix.

Perhaps it’s not “fair” to compare Apple’s newfangled efforts to products by companies that have been iterating on their LLMs and related services for the past five years, but when the biggest tech company in the world makes bold claims about their entrance into the AI space, we have to take them at face value.

It’s been an interesting exercise to see how far behind Apple is compared to OpenAI and Anthropic in terms of the sheer capabilities of their respective assistants; at the same time, I believe Apple has some serious advantages in the long term as the platform owner, with untapped potential for integrating AI more deeply within the OS and apps in a way that other AI companies won’t be able to. There are parts of Apple Intelligence in 18.2 that hint at much bigger things to come in the future that I find exciting, as well as features available today that I’ve found useful and, occasionally, even surprising.

With this context in mind, in this story you won’t see any coverage of Image Playground and Image Wand, which I believe are ridiculously primitive and perfect examples of why Apple may think they’re two years behind their competitors. Image Playground in particular produces “illustrations” that you’d be kind to call abominations; they remind me of the worst Midjourney creations from 2022. Instead, I will focus on the more assistive aspects of AI and share my experience with trying to get work done using Apple Intelligence on my iPhone and iPad alongside its integration with ChatGPT, which is the marquee addition of this release.

Let’s dive in.



Apple Reveals A Partial Timeline for the Rollout of More Apple Intelligence Features

Last week, Apple released the first developer betas of iOS 18.2, iPadOS 18.2, and macOS 15.2, which the press speculated would be out by the end of the year. It turns out that was a good call because today, Apple confirmed that timing. In its press release about the Apple Intelligence features released today, Apple revealed that the next round is coming in December and will include the following:

  • Users will be able to describe changes they want made to text using Writing Tools. For example, you can have text rewritten with a certain tone or in the form of a poem.
  • ChatGPT will be available in Writing Tools and when using Siri.
  • Image Playground will allow users to create images with Apple’s generative AI model.
  • Users will be able to use prompts to create Genmoji, custom emoji-style images that can be sent to friends in iMessage and used as stickers.
  • Visual Intelligence will be available via the Camera Control on the iPhone 16 and iPhone 16 Pro. The feature will allow users to point the iPhone’s camera at something and learn about it from Google or ChatGPT. Apple also mentions that Visual Intelligence will work with other unspecified “third-party tools.”
  • Apple Intelligence will be available in localized English in Australia, Canada, Ireland, New Zealand, South Africa, and the U.K.

Apple’s press release also explains when other languages are coming:

…in April, a software update will deliver expanded language support, with more coming throughout the year. Chinese, English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, Vietnamese, and other languages will be supported.

And Apple’s Newsroom in Ireland offers information on the Apple Intelligence rollout in the EU:

Mac users in the EU can access Apple Intelligence in U.S. English with macOS Sequoia 15.1. This April, Apple Intelligence features will start to roll out to iPhone and iPad users in the EU. This will include many of the core features of Apple Intelligence, including Writing Tools, Genmoji, a redesigned Siri with richer language understanding, ChatGPT integration, and more.

It’s a shame it’s going to be another six months before EU customers can take advantage of Apple Intelligence features on their iPhones and iPads, but it’s nonetheless good to hear when it will happen.

It’s also worth noting that the timing of other pieces of Apple Intelligence is unclear. There is still no word on precisely when Siri will gain knowledge of your personal context or perform actions in apps on your behalf, for instance. Even so, today’s reveal is more than Apple usually shares, which is both nice and a sign of the importance the company places on these features.


You Can Use Clean Up with a Clear Conscience

I enjoyed this take on Apple Intelligence’s Clean Up feature by Joe Rosensteel, writing for Six Colors last week:

The photographs you take are not courtroom evidence. They’re not historical documents. Well, they could be, but mostly they’re images to remember a moment or share that moment with other people. If someone rear-ended your car and you’re taking photos for the insurance company, then that is not the time to use Clean Up to get rid of people in the background, of course. Use common sense.

Clean Up is a fairly conservative photo editing tool in comparison to what other companies offer. Sometimes, people like to apply a uniform narrative that Silicon Valley companies are all destroying reality equally in the quest for AI dominance, but that just doesn’t suit this tool that lets you remove some distractions from your image.

It’s easy to get swept up in the “But what is a photo” philosophical debate (which I think raises a lot of interesting points), but I agree with Joe: we should also keep in mind that, sometimes, we’re just removing that random tourist from the background and our edit isn’t going to change the course of humankind’s history.

Also worth remembering:

For some reason, even the most literal of literal people is fine with composing a shot to not include things. To even (gasp!) crop things out of photos. You can absolutely change meaning and context just as much through framing and cropping as you can with a tool like Clean Up. No one is suggesting that the crop tool be removed or that we should only be allowed to take the widest wide-angle photographs possible to include all context at all times, like security camera footage.


iOS and iPadOS 18.1: Everything New Besides Apple Intelligence

Today, Apple released iOS and iPadOS 18.1, the first major updates to the operating system versions that launched in September and were reviewed by Federico.

As you may know, the main highlight of this new release is the first wave of Apple Intelligence features available to the public. AI has arrived, and for better or for worse for Apple’s platforms, this is only the beginning. Be sure to check out John’s review of all the new Apple Intelligence features included in iOS and iPadOS 18.1 (as well as macOS Sequoia 15.1) for the details.

Fortunately, Apple Intelligence isn’t the only highlight of this release. It also includes a series of changes to the system, from Control Center and the Camera app to Shortcuts and the arrival of new health features for AirPods Pro 2 users.

Here’s a roundup of everything new besides Apple Intelligence in iOS and iPadOS 18.1.



New Developer Betas Released for iOS, iPadOS, and macOS with Image Playground, ChatGPT Integration, and More Apple Intelligence Features

iOS 18.1, iPadOS 18.1, and macOS 15.1 aren’t quite out the door, but Apple has already updated its developer betas with the next round of upcoming Apple Intelligence features. Developer betas of iOS 18.2, iPadOS 18.2, and macOS 15.2 are now available for download and include the following:

  • image generation in the form of Image Playground and Image Wand;
  • Genmoji (iOS and iPadOS only);
  • Visual Intelligence (iPhone 16 line only);
  • ChatGPT integration with Siri; and
  • new text manipulation features.
Image Playground. Source: Apple.


Image Playground is a feature that allows you to create images in two styles using in-app themes and other tools. Image Playground is available in apps like Messages, Freeform, Pages, and Keynote, but it’s also a standalone app. Regardless of where you use it, Image Playground looks like it’s designed to make it easy to create animated and sketch-style images using a variety of tools such as suggested concepts that pull from the context the image is created in, like a Messages thread. Creations can be previewed, there’s a history feature that allows you to undo changes made to images, and images are saved to an Image Playground Library that syncs across devices via iCloud.

Image Wand. Source: Apple.


Image Wand, which appears in the Apple Pencil tool palette, takes a rough hand-drawn sketch, photo, or note and turns any of them into an image similar to one created by Image Playground. The resulting image can be further refined by adding text, and if you circle a blank space, Image Wand will use the surrounding text to build an image.

Also, Genmoji – which is only in the iOS and iPadOS betas for now – allows you to create emoji-style images that can be used in Messages and other apps as decorative stickers. Inputs can include a text description, people in your contacts, friends and family recognized in Photos, and characters created from whole cloth.

Visual Intelligence has been added to the Camera Control on the iPhone 16 line too. The feature lets you look up details about a place and work with text, copying, reading, summarizing, and translating it.

The next betas also integrate ChatGPT into Siri. As demoed at WWDC, you can opt to pose queries to ChatGPT without disclosing your identity or IP address and without the prompts being used to train OpenAI’s large language models. The ChatGPT integration is free and does not require an account with OpenAI either.

Writing Tools lets you describe your text changes in iOS 18.2, iPadOS 18.2, and macOS 15.2.


Finally, Apple has built a new Writing Tool that provides additional flexibility when manipulating text. From the Writing Tools UI, you’ll be able to submit a prompt to alter any text you’ve written. For instance, you could have Apple Intelligence make you sound more excited in your message or rewrite it in the form of a poem, neither of which is possible with the Writing Tools found in iOS and iPadOS 18.1 or macOS 15.1.

For developers, there are also new APIs for Writing Tools, Genmoji, and Image Playground.

As we’ve covered before, Apple’s AI models have been trained on a mix of licensed data and content from the web. If you’re a publisher or a creator who doesn’t want to be part of those models, you can opt out, but not retroactively: opting out won’t remove any data already ingested by Apple’s web crawlers; it only applies going forward.

I’m not a fan of generative AI tools, but I am looking forward to finally going beyond tightly controlled demos of these features. I want to see how well they work in practice and compare them to other AI tools. Apple appears to have put a lot of guardrails in place to avoid some of the disasters that have befallen other tech companies, but I’m pretty good at breaking software. It will be interesting to see how well these tools hold up under pressure.


Using Apple Journal to Track Home Screen Setups

I love this idea by Lee Peterson: using Apple’s Journal app (which got some terrific updates in iOS 18) to track your Home Screen updates over time.

Every so often, I see screenshots from people on Threads or Mastodon showing their Home Screens from over a decade ago. I routinely delete screenshots from my Photos library, and it bums me out that I never kept a consistent, personal archive of my ever-changing Home Screens over the years. Lee’s technique, which combines Journal with the excellent Shareshot app, is a great idea that I’m going to steal. Here’s my current Home Screen on iOS 18:

My iOS 18 Home Screen.


As you can see, I’m trying large icons in dark mode and there are some new entries in my list of must-have apps. The Home Screen is similar, but a bit more complex, on iPadOS, where I’m still fine-tuning everything to my needs.

I plan to write about my Home Screens and Control Center setup in next week’s issue of MacStories Weekly. In the meantime, I’m going to follow Lee’s approach and begin archiving screenshots in Journal.


Control Center and Lock Screen Controls for iOS 18: A Roundup of My Favorite Indie Apps

This week, Apple released iOS and iPadOS 18 to the world. One of the main new features this year is the ability to fully customize Control Center. And not only is Control Center customizable, but it now also supports controls from third-party applications. If you open the new Controls Gallery in iOS and iPadOS 18, you will find controls and toggles from some of your favorite indie apps that have been updated to support the new release.

In addition to being available in Control Center, every one of these third-party controls can be mapped to the Action button on the iPhone 15 Pro or newer, and they can be used to replace the two default controls at the bottom of the Lock Screen – Flashlight and Camera – which have been there since the introduction of the iPhone X in 2017.

While you may think at first that there’s only so much you can do with a simple toggle in Control Center, the range of possibilities that this enables is actually pretty wide. That is why, today, I’m taking a look at a selection of apps that have been updated to offer their own controls for Control Center and the Lock Screen. They’re all unique, and some of them are unexpectedly powerful.

Let’s jump in.



Chris Lawley’s iOS and iPadOS 18 Walkthrough

It’s been an unprecedented week for Apple’s OSes, with updates to every one of them landing simultaneously at the beginning of the week. Today we’ll publish our fourth and final OS review with Devon Dundee’s visionOS review, which means I’m finally getting a chance to catch my breath and enjoy what others have to say about Apple’s OSes.

If you haven’t seen it, Chris Lawley, co-host of Comfort Zone here on MacStories, has a fantastic walkthrough of iOS and iPadOS 18 that covers everything from Home and Lock Screen customization and the all-new Control Center to updates to system apps like Freeform, Shortcuts, Safari, and Messages. The video is especially good if you’ve had a busy week and want to get up to speed on iOS and iPadOS 18 quickly.

Chris has included a lot of excellent lesser-known tips in his video that will help you get the most out of the OS updates too.
