Apple AI model update: on-device model only matches Google and Alibaba peers, server model trails OpenAI's year-old GPT-4o

June 11 news: Apple announced on June 9 local time an update to the AI models that power its Apple Intelligence features across iOS, macOS, and other systems. However, according to Apple's own published data, the new models still do not perform as well as some competitors' older models, including products from tech giants such as OpenAI.


1AI notes that in the blog post, Apple says the quality of text generated by its latest "Apple On-Device" model (which runs locally on devices such as the iPhone and does not require an Internet connection) was judged by human testers to be "comparable" to, but not better than, Google's and Alibaba's models of the same size. Apple's other, more powerful model, "Apple Server", which is designed to run in the company's data centers, lagged behind OpenAI's GPT-4o, launched a year ago, in the same tests.

In another test, Apple's models also failed to stand out in image analysis. According to Apple's own data, human evaluators preferred Meta's Llama 4 Scout model over Apple Server, a surprising result given that Llama 4 Scout itself trailed the leading models from AI labs such as Google, Anthropic, and OpenAI in several tests.

These benchmark results further corroborate previous reports that Apple's AI research department is lagging behind competitors in the fierce AI race. Apple's AI efforts have fallen flat in recent years, with the much-anticipated personalized Siri upgrade delayed indefinitely. Some users have even filed lawsuits against Apple, accusing the company of marketing products on the strength of AI capabilities that have not materialized.

The updated "Apple On-Device" model has about 3 billion parameters, which are mainly used for text generation, summarization and text analysis. The number of parameters roughly corresponds to the model's problem-solving ability, and generally the more parameters, the better the model performs. Starting Monday, third-party developers can access the model through Apple's Foundation Models framework.

Apple says the "Apple On-Device" and "Apple Server" models improve on their predecessors in terms of tool usage and efficiency, and are able to understand about 15 languages. This is largely due to its expanded training dataset, which contains images, PDF files, documents, manuscripts, charts, tables and graphs, among other types of data.
