This Week's Sponsor:

Incogni

Put an End to Spam, Scams, and Robocalls on Your iPhone


Stupid Companies Make AI Promises. Smart Companies Have AI Policies. [Sponsor]

It seems like every company is scrambling to stake their claim in the AI goldrush–check out the CEO of Kroger promising to bring LLMs into the dairy aisle. And front line workers are following suit–experimenting with AI so they can work faster and do more.

In the few short months since ChatGPT debuted, hundreds of AI-powered tools have come on the market. But while AI-based tools have genuinely helpful applications, they also pose profound security risks. Unfortunately, most companies still haven’t come up with policies to manage those risks. In the absence of clear guidance around responsible AI use, employees are blithely handing over sensitive data to untrustworthy tools. 

AI-based browser extensions offer the clearest illustration of this phenomenon. The Chrome store is overflowing with extensions that (claim to) harness ChatGPT to do all manner of tasks: punching up emails, designing graphics, transcribing meetings, and writing code. But these tools are prone to at least three types of risk.

  1. Malware: Security researchers keep uncovering AI-based extensions that steal user data. These extensions play on users’ trust of the big tech platforms (“it can’t be dangerous if Google lets it on the Chrome store!”) and they often appear to work, by hooking up to ChatGPT et al’s APIs. 
  2. Data Governance: Companies including Apple and Verizon have banned their employees from using LLMs because these products rarely offer a guarantee that a user’s inputs won’t be used as training data.
  3. Prompt Injection Attacks: In this little known but potentially unsolvable attack, hidden text on a webpage directs an AI tool to perform malicious actions–such as exfiltrate data and then delete the records. 

Up until now, most companies have been caught flat-footed by AI, but these risks are too serious to ignore. 

At Kolide, we’re taking a two-part approach to governing AI use.

  1. Draft AI policies as a team. We don’t want to totally ban our team from using AI, we just want to use it safely. So our first step is meeting with representatives from multiple teams to figure out what they’re getting out of AI-based tools, and how we can provide them with secure options that don’t expose critical data or infrastructure.
  2. Use Kolide to block malicious tools. Kolide lets IT and security teams write Checks that detect device compliance issues, and we’ve already started creating Checks for malicious (or dubious) AI-based tools. Now if an employee accidentally downloads malware, they’ll be prevented from logging into our cloud apps until they’ve removed it.

Every company will have to craft policies based on their unique needs and concerns, but the important thing is to start now. There’s still time to seize the reins of AI, before it gallops away with your company’s data.

To learn more about how Kolide enforces device compliance for companies with Okta, click here to watch an on-demand demo.

Our thank to Kolide for sponsoring MacStories this week.

Access Extra Content and Perks

Founded in 2015, Club MacStories has delivered exclusive content every week for nearly a decade.

What started with weekly and monthly email newsletters has blossomed into a family of memberships designed every MacStories fan.

Learn more here and from our Club FAQs.

Club MacStories: Weekly and monthly newsletters via email and the web that are brimming with apps, tips, automation workflows, longform writing, early access to the MacStories Unwind podcast, periodic giveaways, and more;

Club MacStories+: Everything that Club MacStories offers, plus an active Discord community, advanced search and custom RSS features for exploring the Club’s entire back catalog, bonus columns, and dozens of app discounts;

Club Premier: All of the above and AppStories+, an extended version of our flagship podcast that’s delivered early, ad-free, and in high-bitrate audio.