Comment by CharlesW a day ago
> I'm oversimplifying but this effectively turns the iPhone into a dumb terminal for Google's brain, wrapped in Apple's privacy theater.
Setting aside the obligatory HN dig at the end, LLMs are now commodities and the least important component of the intelligence system Apple is building. The hidden-in-plain-sight thing Apple is doing is exposing all app data as context and all app capabilities as skills. (See App Intents, Core Spotlight, Siri Shortcuts, etc.)
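For anyone who hasn't looked at these frameworks, here's a minimal sketch of what "capabilities as skills" means in practice. This is an illustrative App Intent, not code from any real app (NoteStore and CreateNoteIntent are made up):

    import AppIntents

    // Hypothetical app-side store; stands in for whatever model layer a real app has.
    final class NoteStore {
        static let shared = NoteStore()
        private(set) var notes: [String] = []
        func add(_ text: String) { notes.append(text) }
    }

    // A minimal App Intent exposing one app capability ("create a note") to the
    // system, so Siri/Shortcuts, and by extension whatever model Apple plugs in,
    // can discover and invoke it.
    struct CreateNoteIntent: AppIntent {
        static var title: LocalizedStringResource = "Create Note"
        static var description = IntentDescription("Creates a note with the given text.")

        @Parameter(title: "Text")
        var text: String

        func perform() async throws -> some IntentResult & ProvidesDialog {
            // In a real app this would call into the app's own model layer.
            NoteStore.shared.add(text)
            return .result(dialog: "Created a note.")
        }
    }

Once an app ships intents like this, the "intelligence system" gets a typed, invocable catalog of what the app can do, independent of which LLM sits behind Siri.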
Anyone familiar with Apple's rabid aversion to being bound by a single supplier understands that they've tested this integration against all the major foundation models, that they can swap Google out for another vendor at any time, and that they have a long-term plan to eliminate this dependency as well.
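To make the "swap the vendor out" point concrete: the OS only has to depend on an abstraction, not on Gemini itself. A hypothetical Swift sketch, where every name is invented and this is just the shape of the argument:

    // The OS talks to a protocol, not to a specific vendor's model.
    protocol FoundationModelBackend {
        func complete(prompt: String, context: [String]) async throws -> String
    }

    struct GeminiBackend: FoundationModelBackend {
        func complete(prompt: String, context: [String]) async throws -> String {
            // A real implementation would call Google's hosted model here.
            return "stubbed response for: \(prompt)"
        }
    }

    struct AssistantRouter {
        // Swapping vendors becomes a configuration change, not a rewrite.
        var backend: FoundationModelBackend

        func answer(_ query: String, appContext: [String]) async throws -> String {
            try await backend.complete(prompt: query, context: appContext)
        }
    }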
> Apple admitted that the cost of training SOTA models is a capex heavy-lift they don't want to own.
I'd be interested in a citation for this (Apple introduced two multilingual, multimodal foundation language models in 2025). In any case, anything you hear from Apple publicly is what they want you to think for the next few quarters, not an indicator of their actual 5-, 10-, and 20-year plans.
My guess is that this creates bigger lock-in than it might seem on paper.
Google and Apple will together post-train Gemini to Apple's specification. Google has the know-how as well as the infrastructure, and will happily do this (more or less for free) to continue the mutually beneficial relationship, as well as to lock out competitors that asked for more money (Anthropic).
Once this goes live, provided Siri improves meaningfully, switching to a different provider becomes quite an expensive experiment.
For any single user, the switching costs to a different LLM are next to nothing. But at Apple's scale, they need to be extremely careful and confident that any switch is an actual improvement.