TechCrunch’s AI Roundup: Apple’s Groundbreaking AI Push

Welcome to TechCrunch’s regular AI newsletter.

This Week in AI: Apple Takes Center Stage

This week at Apple’s Worldwide Developers Conference (WWDC), the company made a major announcement that has caught the attention of tech enthusiasts everywhere.

Apple has officially launched “Apple Intelligence,” an ecosystem-wide push into generative AI. This new feature powers a variety of upgrades, including an improved Siri, AI-generated emojis, and advanced photo-editing tools that can remove unwanted elements from images.

According to CEO Tim Cook, “Apple Intelligence is being built with safety at its core, as well as highly personalized experiences.”

“It has to understand you and be grounded in your personal context, like your routine, your relationships, your communications and more,” Cook noted during the keynote. “All of this goes beyond artificial intelligence. It’s personal intelligence, and it’s the next big step for Apple.”

Apple has a reputation for concealing the technical details behind its user-friendly features, and this time is no different. However, as someone who frequently covers the inner workings of AI, I can’t help but wish Apple were more transparent about how this “sausage” gets made.

The company’s model training practices, for example, remain a mystery. In a blog post, Apple revealed that it trains the AI models behind Apple Intelligence on a combination of licensed datasets and public web data, and that publishers can opt out of future training. But what about independent artists and creators? And what if their work has already been swept into Apple’s initial models?
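Apple points publishers to standard web crawler controls for those opt-outs. As a rough sketch of how a publisher might audit their own setup — assuming the crawler honors robots.txt, and using the “Applebot-Extended” user-agent token Apple has described for AI-training opt-outs (verify the exact token against Apple’s documentation) — Python’s standard library is enough:

    # Minimal sketch: check whether a site's robots.txt blocks an
    # AI-training crawler. Assumes the crawler honors robots.txt;
    # "Applebot-Extended" is the opt-out token Apple has described,
    # but confirm the exact name against Apple's documentation.
    from urllib.robotparser import RobotFileParser

    def allows_ai_training(site: str, agent: str = "Applebot-Extended") -> bool:
        """Return True if `agent` may crawl the site's homepage per robots.txt."""
        parser = RobotFileParser()
        parser.set_url(f"{site.rstrip('/')}/robots.txt")
        parser.read()  # fetches and parses the live robots.txt
        return parser.can_fetch(agent, site)

    # Example: a publisher auditing their own (hypothetical) domain.
    print(allows_ai_training("https://example.com"))

Note that this only governs future crawls; it says nothing about content already baked into a trained model — which is exactly the gap raised above.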

This lack of transparency could invite legal challenges, particularly with regard to copyright. The courts have yet to determine whether companies like Apple can use public data for AI training without crediting or compensating the creators. Apple’s secretive approach may be a way to head off potential lawsuits, but it also raises questions about its values as a company.

A little more explanation and transparency from Apple would go a long way. For now, though, it seems unlikely we’ll get answers unless a legal battle forces the company to reveal more about its AI training practices.

In the News

  • Apple’s top AI features: This week, Apple unveiled a range of new AI features during the WWDC keynote, including an improved Siri and deep integrations with OpenAI’s ChatGPT.
  • OpenAI hires new executives: OpenAI has brought on two new executives, Sarah Friar and Kevin Weil, to serve as its CFO and chief product officer, respectively.
  • New AI capabilities for Yahoo Mail: Yahoo has updated Yahoo Mail with new AI capabilities, including AI-generated email summaries. Google recently introduced a similar summarization feature, but it is behind a paywall.
  • Controversial views: A recent study from Carnegie Mellon University found that not all generative AI models are equally capable or ethical, especially when it comes to handling polarizing subjects.

Research Paper of the Week

Google is taking steps toward a generative AI model for personal health with its new project, the Personal Health Large Language Model (PH-LLM).

In a blog post, Google researchers describe the model, a fine-tuned version of its Gemini model. PH-LLM is designed to offer recommendations for improving sleep and fitness based on data collected from wearables like smartwatches.
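Google hasn’t published how PH-LLM is invoked, but the described setup — wearable metrics in, coaching text out — maps onto a familiar pattern. Here is a purely hypothetical sketch of that pattern; the field names and the generate() placeholder are inventions for illustration, not Google’s API:

    # Hypothetical sketch of the PH-LLM pattern: serialize wearable
    # metrics into text and ask a language model for coaching advice.
    # None of these names reflect an actual Google API.
    from dataclasses import dataclass

    @dataclass
    class SleepDay:
        date: str
        total_sleep_min: int  # total sleep duration
        deep_sleep_min: int   # time spent in deep sleep
        resting_hr: int       # resting heart rate, bpm

    def build_prompt(days: list[SleepDay]) -> str:
        history = "\n".join(
            f"{d.date}: slept {d.total_sleep_min} min "
            f"({d.deep_sleep_min} min deep), resting HR {d.resting_hr} bpm"
            for d in days
        )
        return ("You are a sleep coach. Given this wearable data, "
                f"suggest one concrete improvement:\n{history}")

    prompt = build_prompt([
        SleepDay("2024-06-10", 380, 55, 62),
        SleepDay("2024-06-11", 410, 70, 60),
    ])
    # response = generate(prompt)  # stand-in for the fine-tuned model call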

To test the model’s effectiveness, researchers created nearly 900 case studies with U.S. participants. While PH-LLM’s suggestions were not quite as accurate as those given by human sleep experts, they still showed promise. It’s possible that we may see this technology integrated into Google Fit or other fitness-focused apps in the future.

Model of the Week

Apple has shared very little about the capabilities of its new AI models, despite devoting considerable blog space to them. Still, we can make some educated guesses based on the details provided.

The on-device model, used for tasks that can run offline on Apple devices like the iPhone 15 Pro, contains 3 billion parameters. That puts it in the same class as Google’s on-device Gemini Nano models, which come in 1.8-billion- and 3.25-billion-parameter sizes.
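Parameter count matters on a phone because it translates almost directly into memory. A quick back-of-the-envelope calculation (weights only, ignoring activations and cache; the precisions are common choices, not Apple’s disclosed settings):

    # Rough RAM footprint of model weights alone, at two common
    # precisions. Excludes activations and KV cache, so real usage
    # runs higher; the precisions are illustrative, not Apple's.
    def weights_gb(params_billion: float, bits_per_weight: int) -> float:
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    for params in (1.8, 3.0, 3.25):
        for bits in (16, 4):  # fp16 vs. 4-bit quantized
            print(f"{params}B params @ {bits}-bit: ~{weights_gb(params, bits):.1f} GB")

A 3-billion-parameter model needs roughly 6 GB at 16-bit precision but only about 1.5 GB quantized to 4 bits, which is why on-device models lean so heavily on quantization.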

The server model, on the other hand, is larger and more capable. Apple has not revealed its exact size but claims it “compares favorably” to OpenAI’s older GPT-3.5 Turbo model.

Apple also claims its models are less likely than comparable models to produce toxic or offensive output. Until we can test them for ourselves, though, it’s difficult to judge.

Grab Bag

This week marked the sixth anniversary of the release of GPT-1, the precursor to OpenAI’s latest flagship model, GPT-4o. The milestone is a reminder of just how far AI technology has come in such a short time.

To put things in perspective, GPT-1 was trained on a dataset of just 4.5 gigabytes of text, while GPT-3 is nearly 1,500 times its size by parameter count and vastly more capable — yet, by one published estimate, could be trained in about 34 days on roughly a thousand modern GPUs. The pace of that growth is impressive, to say the least.
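That “nearly 1,500 times” figure checks out against the published parameter counts:

    # GPT-1 vs. GPT-3 by published parameter counts.
    gpt1_params = 117e6  # 117 million (GPT-1, 2018)
    gpt3_params = 175e9  # 175 billion (GPT-3, 2020)
    print(f"GPT-3 is ~{gpt3_params / gpt1_params:,.0f}x the size of GPT-1")  # ~1,496x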

What made GPT-1 so groundbreaking was its approach to training. Unlike previous models, which relied heavily on manually labeled data, GPT-1 used mostly unlabeled data to “learn” how to perform various tasks. While many experts don’t expect to see a similar leap in AI technology anytime soon, no one saw GPT-1 coming either, so who knows what the future holds.
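That “learning from unlabeled data” boils down to next-token prediction: the text itself supplies the training targets. A minimal PyTorch sketch of the objective, with a toy model standing in for GPT-1’s transformer:

    # Minimal sketch of the self-supervised objective GPT-1 popularized:
    # predict each token from the ones before it. No human labels --
    # the targets are just the input shifted by one position.
    # The toy model here stands in for a real transformer.
    import torch
    import torch.nn as nn

    vocab_size, embed_dim = 100, 32
    model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                          nn.Linear(embed_dim, vocab_size))

    tokens = torch.randint(0, vocab_size, (1, 16))   # stand-in for real text
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one

    logits = model(inputs)  # (batch, seq, vocab)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()  # gradients flow without any labeled data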

And just like that, we’ve reached the end of this week’s AI newsletter. Keep an eye out for our next edition, and until then, keep exploring the fascinating world of AI!

Ava Patel

Ava Patel is a cultural critic and commentator with a focus on literature and the arts. She is known for her thought-provoking essays and reviews, and has a talent for bringing new and diverse voices to the forefront of the cultural conversation.
