Apple picks Gemini to power Siri
(cnbc.com)
907 points by stygiansonic a day ago
> Gemini is objectively a good model (whether it's #1 or #5 in ranking aside) so Apple can confidently deliver a good (enough) product
Definitely. At this point, Apple just needs to get anything out the door. It was nearly two years ago that they sold a phone with features that still haven't shipped, on the promise that Apple Intelligence would come in two months.
Yes but they also haven’t generated spicy deep fakes and talked kids into suicide with their products.
It’s just how Apple does things: They still have no folding phone, under-screen finger print scanner, under-screen front-cam, etc.
Apple is always behind on industry trends, but when they adopt them eventually, they become mainstream and cool. This is what will happen with the folding phones this year, if rumors are true.
> At this point, Apple just needs to get anything out the door
To the extent Cupertino fucked up, it's in having had this attitude when they rolled out Apple Intelligence.
There isn't currently a forcing function. Apple owns the iPhone, and that makes it an emperor among kings. Its wealth is also built on starting with user problems and then working backwards to the technology, versus embracing whatever's hot and trying to shove it down our throats.
Lately, they've arguably been starting from their own priorities (i.e. pushing and protecting their "services" revenue at all cost) and working backwards to an acceptable user experience from there.
> Its wealth is also built on starting with user problems and then working backwards to the technology, versus embracing whatever's hot and trying to shove it down our throats.
Then again, remember millimeter wave? But yes, as a general rule I think your point still stands.
> Its wealth is also built on starting with user problems and then working backwards to the technology
Since when?
> versus embracing whatever's hot and trying to shove it down our throats
I agree here, to a degree. It's just that Apple tells its customers what's hot and then shoves it down their throats.
> Definitely. At this point, Apple just needs to get anything out the door
They don't though. Android is clearly ahead in AI integration (even Samsung is running TV ads mocking the iPhone's AI capability), yet iPhone sales are still breaking records: the majority of their phone buyers still prefer an iPhone over an AI-capable alternative.
They can take their time to develop AI integration that others can't deploy (secure/private, deep integration with iCloud, location services, processing on device, etc.), which will provide the product moat to increase sales.
The iPhone 16 was released 16 months ago, not “nearly” 24 months.
I consider Apple to be practical. Also, Apple will be running Gemini on its own hardware. This is better than buying Perplexity and running the Chinese model Perplexity runs on. Training models is a money-burning game; it's better to rent models than train your own. If everyone is training models, they are going to become a commodity, and this isn't the final architecture anyway.
I'll bite
1. Have a user interface. Sometimes I'll ask a question and Siri actually provides a good enough answer, and while I'm reading it, the Siri response window just disappears. Siri is this modal popup with no history, no App, and no UI at all really. Siri doesn't have a user interface, and it should have one so that I can go back to sessions and resume them or reference them later and interact with Siri in more meaningful ways.
2. Answer questions like a modern LLM does. Siri often responds with very terse web links. I find this useful when I'm sitting with friends and we don't remember if Liam Neeson is alive or not: basic fact-checking. This is the only use case I've found where it's useful, when I want to peel my attention away for the shortest period of time. If ChatGPT could be bound to a power-button long-press, then I'd cease to use Siri for this use case. Otherwise Siri isn't good for long questions, because it doesn't have the intelligence and, as mentioned before, has no user interface.
3. Be able to do things conversationally, based on my context. Today, when I "Add to my calendar Games at Dave's house" it creates a calendar entry called "Games" and sets the location to a restaurant called "Dave's House" in a different country. My baseline expectation is that I should be able to work with Siri, build its memory and my context, and over time it becomes smarter about the things I like to do. The day Siri responds with "Do you mean Dave's House the restaurant in another country, or Dave, from your contacts?" I'll be happy.
I'm sorry, I can't answer that right now.
Would you like to click this button which takes what you said and executes it as a Google search in Safari?
Siri to function above the level of Dragon NaturallySpeaking '95
ANY ability to answer simple questions without telling me to open Safari and read a webpage for myself...?
I should be able to completely control my phone with voice and ask it to do anything it is capable of and it should just work:
"Hi Siri, can you message Katrina on WhatsApp that Judy is staying 11-15th Feb and add it to the shared Calendar, confirm with me the message to Kat and the Calendar start and end times and message."
Counterpoint: iOS’s biggest competitor is Android. They are now effectively funding their competition on a core product interface. I see this as strategically devastating.
Counterpoint: Google is paying Apple $20b/year to keep themselves as the default search engine in iOS. Android's biggest competitor is iOS. They are now effectively funding their competition on a core product interface. I see this as strategically devastating.
It's strategically devastating because no small number of users choose Apple because they do not trust Google and now they have no choice but to have Google AI on-board their machines.
I respect Google's engineering, and I'm aware that fundamental technologies such as Protocol Buffers and FlatBuffers are unavoidably integrated into the software fabric, but this is avoidable.
I'm surprised Google aren't paying Apple for this.
> no small number of users choose Apple because they do not trust Google
Unfortunately, it probably actually is a small number comparatively. Or at least I would need to see some sort of real data to say anything different.
I feel like people who distrust Google probably wouldn't trust Apple enough to give them their data either? Why would you distrust one but not the other?
Is Android really iOS's competition? I feel like the competition is less Android and more the vendors who use Android. Every Android phone feels different. Android doesn't even compete on performance anymore; the chips are quite behind. The target audiences of the two feel different lately.
> Is Android really iOS's competition?
It ISN'T in this day and age. People don't switch back and forth between iOS and Android like it's still 2010. They use whatever they got locked into initially with their first smartphone, or where Apple's green/blue-bubble issue pushed them, or what their family handed down, or what their close friend groups used to have.
People who've been using iOS for 6+ years will 98% stick to iOS for their next purchase and won't even bother to look at Android, no matter what features Android were to add.
The Android vs iOS war is as dead as the console war. There's no competition anymore, it's just picking one from a duopoly of vendor lock-ins.
Even if the EU were to break some of the lock-ins, people have familiarity bias and will stick with the inertia of what they're used to, so it won't move the market-share needle one bit.
Of course Android is iOS's competition. Android is also 75% of the market, which Apple surely wants a bigger piece of.
Performance? We are many years past the point where anybody cared about performance. I am writing this on an iPhone 11 Pro and the experience is almost exactly the same as on current iOS.
You know what's not the same? Android became a pretty great OS. I recently got an older Pixel to see how GrapheneOS works and was surprised by Android (which I hadn't seen for a decade). iOS, on the other hand, has recently gone through a very bad UI redesign for no reason.
Imho the main thing Apple has going for it is that Google is a spyware company and Apple is still mainly a hardware company. But if Apple decides to pull their users' data into Gemini… well, good luck.
Nothing about OpenAI is clean. Their entire org is controlled by Altman, who was able to rehire himself after he was fired.
Anthropic doesn't have a single data centre, they rent from AWS/Microsoft/Google.
It was $20 billion years ago, in 2022. There's little doubt it's closer to $25B now, perhaps more.
True. Also Gemini is the boring model, heavily sanitised for corporate applications. At least it admits this if you press it. It fits Apple here very well.
Personally I wouldn't use it; it still belongs to an advertiser specialised in extracting user information. Not that I expect the other AI companies to value privacy much more highly. But a clean smell also means a bland smell.
Google, as the designer of the original transformer, is also the designer of the original "mechanism" for inserting ads into a prompt answer in real time for the highest bidder, so it makes sense from that angle too.
Given my stance about AI, I'll definitely not use it, but I understand Apple's choice. Also this choice will give them enough time to develop their infrastructure and replace parts of it with their own, if they are planning to do it.
> Not that I expect that other AI companies value privacy much higher.
Breaching privacy and using it for its own benefit is AI's business model. There are no ethical players here, neither from a training perspective nor from the perspective of respecting their users' privacy. It's just the next iteration of what social media companies do.
I suspect you're exactly right about it being the most sanitized model.
I don't however like the idea of having Google deeply embedded in my machine and Siri will definitely be turned off when this happens. I only use Siri as an egg timer anyway.
This seems like an odd move for a company that sells privacy.
>If nothing else, this was likely driven by Google being the most stable of the AI labs.
I don't think the model makes that much of a difference if they thought Siri was half decent for so long.
Judging from the past 10 years, I would say this is more likely driven by part of a bigger package deal with Google Search Placement and Google Cloud Services, everything else being roughly equal.
Instead of raising the price again and paying Apple even more per user: how about we pay a little less, but throw in Gemini with it?
Apple has been very good, if not the best, at picking one side and letting the others fight for its contract. They don't want Microsoft to win the AI race; at the same time, Apple is increasing its use of Azure just in case. Basically playing the game of leverage at its best. In hindsight, they were probably so deep into it that they forgot what all this leverage is really for: not cost savings, but ultimately a better-quality product.
> I would say this is more likely driven by part of a bigger package deal with Google Search Placement and Google Cloud Services.
Can the DOJ and FTC look into this?
Google shouldn't be able to charge a fee for accessing every registered trademark in the world. They use Apple to get the last 30% of "URL bars", I mean, Google Search middlemen.
Searching Anthropic gets me a bidding war, which I'm sure is bleeding Google's competition dry.
We need a "no bare trademark (plus edit distance) ads or auto suggest" law. It's made Google an unkillable OP monster. Any search monopoly or marketplace monopoly should be subject to not allowing ads to be sold against a registered trademark database.
I agree with your point about Google being a more stable company than the rest, so the decision probably makes sense. But there was a study done by multiple news companies in Czechia, asking about news topics, and Gemini was consistently the worst in citations and in being straight-up incorrect (76% of its answers had "issues"; I don't have the exact specification of the issues).
OpenAI aren’t using their cloud directly, but have signed data center partnerships with them that are effectively huge amounts of debt not backed up with revenue. That’s all liability that Google doesn’t really have because they have revenue from other areas.
The writing was on the wall the moment Apple stopped trying to buy their way into the server-side training game like what three years ago?
Apple has the best edge inference silicon in the world (neural engine), but they have effectively zero presence in a training datacenter. They simply do not have the TPU pods or the H100 clusters to train a frontier model like Gemini 2.5 or 3.0 from scratch without burning 10 years of cash flow.
To me, this deal is about the bill of materials for intelligence. Apple admitted that the cost of training SOTA models is a capex heavy-lift they don't want to own. Seems like they are pivoting to becoming the premium "last mile" delivery network for someone else's intelligence. Am I missing the elephant in the room?
It's a smart move. Let Google burn the gigawatts training the trillion parameter model. Apple will just optimize the quantization and run the distilled version on the private cloud compute nodes. I'm oversimplifying but this effectively turns the iPhone into a dumb terminal for Google's brain, wrapped in Apple's privacy theater.
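The "optimize the quantization" step mentioned above is conceptually simple. Here is a minimal sketch of generic int8 affine quantization in plain Python; nothing here is Apple's actual pipeline, and the function names and 255-level scheme are purely illustrative:

```python
def quantize_int8(w):
    """Map float weights onto the int8 range [-128, 127] with an affine scheme."""
    w_min, w_max = min(w), max(w)
    scale = (w_max - w_min) / 255.0           # float units per quantization step
    zero_point = round(-w_min / scale) - 128  # int8 code that represents ~0.0
    q = [max(-128, min(127, round(x / scale) + zero_point)) for x in w]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats; per-weight error is at most half a step."""
    return [(v - zero_point) * scale for v in q]

weights = [0.013 * i - 1.5 for i in range(256)]   # stand-in for real weights
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
# int8 storage is 4x smaller than float32, at the cost of ~scale/2 error
print(max_err <= scale)
```

Real deployments layer per-channel scales, distillation, and hardware-aware kernels on top of this idea, but the storage-versus-accuracy trade is the same.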
> I'm oversimplifying but this effectively turns the iPhone into a dumb terminal for Google's brain, wrapped in Apple's privacy theater.
Setting aside the obligatory HN dig at the end, LLMs are now commodities and the least important component of the intelligence system Apple is building. The hidden-in-plain-sight thing Apple is doing is exposing all app data as context and all app capabilities as skills. (See App Intents, Core Spotlight, Siri Shortcuts, etc.)
Anyone with an understanding of Apple's rabid aversion to being bound by a single supplier understands that they've tested this integration with all foundation models, that they can swap Google out for another vendor at any time, and that they have a long-term plan to eliminate this dependency as well.
> Apple admitted that the cost of training SOTA models is a capex heavy-lift they don't want to own.
I'd be interested in a citation for this (Apple introduced two multilingual, multimodal foundation language models in 2025), but in any case anything you hear from Apple publicly is what they want you to think for the next few quarters, vs. an indicator of what their actual 5-, 10-, and 20-year plans are.
My guess is that this is bigger lock-in than it might seem on paper.
Google and Apple together will post-train Gemini to Apple's specification. Google has the know-how as well as the infra, and will happily do this (for free-ish) to continue the mutually beneficial relationship, as well as lock out competitors that asked for more money (Anthropic).
Once this goes live, provided Siri improves meaningfully, it is quite an expensive experiment to then switch to a different provider.
For any single user, the switching costs to a different LLM are next to nothing. But at Apple's scale they need to be extremely careful and confident that the switch is an actual improvement
It's a very low baseline with Siri, so almost anything would be an improvement.
> provided Siri improves meaningfully
Not a high bar…
That said, Apple is likely to end up training their own model, sooner or later. They are already in the process of building out a bunch of data centers, and I think they have even designed in-house servers.
Remember when iPhone maps were Google Maps? Apple Maps have been steadily improving, to the point they are as good as, if not better than, Google Maps, in many areas (like around here. I recently had a friend send me a GM link to a destination, and the phone used GM for directions. It was much worse than Apple Maps. After a few wrong turns, I pulled over, fed the destination into Apple Maps, and completed the journey).
> what their actual 5-, 10-, and 20-year plans are
Seems like they are waiting for the "slope of enlightenment" on the Gartner hype curve to flatten out. Given you can just lease or buy a SOTA model from leading vendors, there's no advantage to training your own right now. My guess is that the LLM/AI landscape will look entirely different by 2030, and any 5-year plan won't be in the same zip code, let alone the same playing field. Leasing an LLM from Google with a support contract seems like a pretty smart short-term play as things continue to evolve over the next 2-3 years.
This is the key. The real issue is that you don't need superhuman intelligence in a phone AI assistant; you don't need it most of the time, in fact. Current SOTA models do a decent job of approximating college-grad-level human intelligence, let's say 85% of the time, which is helpful and cool but clearly could be better. But the pace at which the models are getting smarter is accelerating, AND they are getting more energy- and memory-efficient. So if something like DeepSeek is roughly 2 years behind the SOTA models from Google and the other frontier labs, then in 2030 you can expect 2028-level performance out of open models. There will come a time when a model capable of college-grad-level intelligence 99.999% of the time will run on a $300 device. If you are Apple, you do not need to lead the charge on a SOTA model; you can just wait until one is available for much cheaper. Your product is the devices and services consumers buy. If you are OpenAI, you have no other products. You must become THE AI to have, in an industry that will in the next few years be dominated by good-enough open models, or close up shop, or come up with another product that has more of a moat.
> LLMs are now commodities and the least important component of the intelligence system Apple is building
If that were even remotely true, Apple, Meta, and Amazon would have SOTA foundational models. Are you not aware that all of the above have invested billions trying to train a SOTA foundational model?
That's not an "obligatory HN dig" though; you're watching, in medias res, X escape removal from the App Store and Play Store. Concepts like privacy, legality, and high-quality software are all theater. We have no altruists defending these principles for us at Apple or Google.
Apple won't switch Google out as a provider for the same reason Google is your default search provider. They don't give a shit about how many advertisements you're shown. You are actually detached from 2026 software trends if you think Apple is going to give users significant backend choices. They're perfectly fine selling your attention to the highest bidder.
There are second-order effects of Google or Apple removing Twitter from their stores.
Guess who's the bestie of Twitter's owner? Any clues? Could that be a vindictive old man with unlimited power and no checks and balances to temper his tantrums?
Of course they both WANT Twitter the fuck out of the store, but there are very very powerful people addicted to the app and what they can do with it.
Caveat: as long as it doesn’t feel like you’re being sold out.
Which is why privacy theatre was an excellent way to put it
An Apple-developed LLM would likely be worse than SOTA, even if they dumped billions on compute. They'll never attract as much talent as the others, especially given how poorly their AI org was run (reportedly). The weird secrecy will be a turnoff. The culture is worse and more bureaucratic. The past decade has shown that Apple is unwilling to fix these things. So I'm glad Apple was forced to overcome their Not-Invented-Here syndrome/handicap in this case.
Apple might have gotten very lucky here ... the money might be in finding uses, and selling physical products rather than burning piles of cash training models that are SOTA for 5 minutes before being yet another model in a crowded field.
My money is still on Apple and Google to be the winners from LLMs.
Apple has also never been big on the server side of both software and hardware. Don't they already outsource most of their cloud stack to Google via GCP?
I can see them eventually training their own models (especially smaller and more targeted / niche ones) but at their scale they can probably negotiate a pretty damn good deal renting Google TPUs and expertise.
Yeah… there’s this “bro— do you even business?” vibe in the tech world right now pointed at any tech firm not burning oil tankers full of cash (and oil, for that matter) training a giant model. That money isn’t free: the economic consequences of burning billions to make a product that will be several steps behind, at best, are giant. There’s a very real chance these companies won’t recoup that money if their product isn’t attractive to hordes of users willing to pay more money for AI than anyone currently is. It doesn’t even make them look cool to regular people; their customers hate hearing about AI. Since there are viable third-party options available, I think Apple would have to be out of their goddamned minds to try and jump into that race right now. They’re a product company. Nobody is going to not buy an iPhone because it uses a third-party model.
Reportedly, Meta is paying top AI talent up to $300M for a 4 year contract. As much as I'm in favor of paying engineers well, I don't think salaries like this (unless they are across the board for the company, which they are of course not) are healthy for the company long term (cf. Anthony Levandowski, who got money thrown after him by Google, only to rip them off).
So I'm glad Apple is not trying to get too much into a bidding war. As for how well orgs are run, Meta has its issues as well (cf the fiasco with its eponymous product), while Google steadily seems to erode its core products.
Why would paying everyone $300M across the board be healthier than using it as a tool to (attempt to) attract the best of the best?
Is the training cost really that high, though?
The Allen Institute (a non-profit) just released the Molmo 2 and Olmo 3 models. They trained these from scratch using public datasets, and they are performance-competitive with Gemini in several benchmarks [0] [1].
AMD was also able to successfully train an older version of OLMo on their hardware using the published code, data, and recipe [2].
If a non-profit and a chip vendor (training for marketing purposes) can do this, it clearly doesn't require "burning 10 years of cash flow" or a Google-scale TPU farm.
[0]: https://allenai.org/blog/molmo2
No, of course the training costs aren't that high. Apple's ten years of future free cash flow is greater than a trillion dollars (they are above $100b per year). Obviously, the training costs are a trivial amount compared to that figure.
What I'm wondering - their future cash flow may be massive compared to any conceivable rational task, but the market for servers and datacenters seems to be pretty saturated right now. Maybe, for all their available capital, they just can't get sufficient compute and storage on a reasonable schedule.
I have no idea what AI involves, but "training" sounds like a one-and-done - but how is the result "stored"? If you have trained up a Gemini, can you "clone" it and if so, what is needed?
I was under the impression that all these GPUs and such were needed to run the AI, not only ingest the data.
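To the question above: "training" is indeed essentially one-and-done, and the entire result is stored as the model's weights, a (very large) array of numbers. Cloning a trained model is just copying that file, and running it (inference) needs far less compute than producing it. A toy sketch in Python, where the model, training loop, and file format are all invented for illustration:

```python
import json
import os
import tempfile

def train(data, steps=2000, lr=0.05):
    """Fit y = w*x + b by gradient descent; the learned (w, b) ARE the model."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w, b = w - lr * gw, b - lr * gb
    return {"w": w, "b": b}

def predict(weights, x):
    """Inference only needs the weights, not the training data or compute."""
    return weights["w"] * x + weights["b"]

data = [(x, 3.0 * x + 1.0) for x in range(-5, 6)]
weights = train(data)                       # the expensive part, done once

path = os.path.join(tempfile.mkdtemp(), "model.json")
with open(path, "w") as f:
    json.dump(weights, f)                   # "storing" the model is writing the weights

with open(path) as f:
    clone = json.load(f)                    # "cloning" is just copying the file
assert predict(clone, 10.0) == predict(weights, 10.0)
```

The GPU fleets are still needed for inference at scale, but that's a throughput problem (millions of users), not a requirement to redo the training.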
My prediction is that they might switch once the AI craze simmers down to some more reasonable level.
Yea, I think it’s smart, too. There are multiple companies who have spent a fortune on training and are going to be increasingly interested in (desperate to?) see a return from it. Apple can choose the best of the bunch, pay less than they would have to to build it themselves, and swap to a new one if someone produces another breakthrough.
100%. It feels like Apple is perfectly happy letting the AI labs fight a race to the bottom on pricing while they keep the high-margin user relationship.
I'm curious if this officially turns the foundation model providers into the new "dumb pipes" of the tech stack?
As if they really have a choice though. Competing would be a billion dollar Apple Maps scenario.
> I'm oversimplifying but this effectively turns the iPhone into a dumb terminal for Google's brain
I feel like people probably said this when Google became the default search engine for everyone...
> I'm oversimplifying but this effectively turns the iPhone into a dumb terminal for Google's brain, wrapped in Apple's privacy theater.
This sort of thing didn't work out great for Mozilla. Apple, thankfully, has other business bringing in the revenue, but it's still a bit wild to put a core bit of the product in the hands of the only other major competitor in the smartphone OS space!
I dunno, my take is that Apple isn’t outsourcing intelligence rather it’s outsourcing the most expensive, least defensible layer.
Down the road, Apple has an advantage here: a super large training data set that includes messages, mail, photos, calendar, health, app usage, location, purchases, voice, biometrics, and your behaviour over YEARS.
Let's check back in 5 years and see if Apple is still using Gemini or if Apple distills, trains and specializes until they have completed building a model-agnostic intelligence substrate.
Seems like there is a moat after all.
The moat is talent, culture, and compute. Apple doesn't have any of these 3 for SOTA AI.
It is more that Apple has no need to spend billions on training with questionable ROI when it can just rent from one of the commodity foundation model labs.
I don't know why people automatically jump to Apple's defense on this.... They absolutely did spend a lot of money and hired people to try this. They 100% do NOT have the open and bottom-up culture needed to pull off large scale AI and software projects like this.
Source: I worked there
It’s such a commodity that there are only 3 SOTA labs left and no one can catch them. I’m sure it’ll be consolidated further in the future and you’re going to be left with a natural monopoly or duopoly.
Apple has no control over the most important change in tech. They have ceded that control to Google.
Is it that surprising? They're a hardware company after all.
Google says: "Apple Intelligence will continue to run on Apple devices and Private Cloud Compute, while maintaining Apple's industry-leading privacy standards."
So what does it take? How many actual commitments to privacy does Apple have to make before the HN crowd stops crowing about "theater"?
I always think about this, can someone with more knowledge than me help me understand the fragility of these operations?
It sounds like the value of these very time-consuming, resource-intensive, and large scale operations is entirely self-contained in the weights produced at the end, right?
Given that we have a lot of other players enabling this in other ways, like Open Sourcing weights (West vs East AI race), and even leaks, this play by Apple sounds really smart and the only opportunity window they are giving away here is "first to market" right?
Is it safe to assume that eventually the weights will be out in the open for everyone?
> and the only opportunity window they are giving away here is "first to market" right?
A lot of the hype in LLM economics is driven by speculation that eventually training these LLMs is going to lead to AGI and the first to get there will reap huge benefits.
So if you believe that, being "first to market" is a pretty big deal.
But in the real world there's no reason to believe LLMs lead to AGI, and given the fairly lock-step nature of the competition, there's also not really a reason to believe that even if LLMs did somehow lead to AGI that the same result wouldn't be achieved by everyone currently building "State of the Art" models at roughly the same time (like within days/months of each other).
So... yeah, what Apple is doing is actually pretty smart, and I'm not particularly an Apple fan.
> is entirely self-contained in the weights produced at the end, right?
Yes, and the knowledge gained along the way. For example, the new TPUv4 that Google uses requires rack and DC aware technologies (like optical switching fabric) for them to even work at all. The weights are important, and there is open weights, but only Google and the like are getting the experience and SOTA tech needed to operate cheaply at scale.
> The writing was on the wall the moment Apple stopped trying to buy their way into the server-side training game like what three years ago?
It goes back much further than that - up until 2016, Apple wouldn't let its ML researchers add author names to published research papers. You can't attract world-class talent in research with a culture built around paranoid secrecy.
That was their goal, but in the past couple years they seem to have given up on client-side-only ai. Once they let that go, it became next to impossible to claw back to client only… because as client side ai gets better so does server side, and people’s expectations scale up with server side. And everybody who this was a dealbreaker for left the room already.
Apple thinks they can get a best-of-both-worlds approach with Private Cloud Compute. They believe they can secure private servers specialized to specific client devices in a way that the cloud compute effort is still "client-side" from a trust standpoint, but still able to use extra server-side resources (under lock and key).
I don't know how close to that ideal they've achieved, but especially given this announcement is partly based on an arrangement with Google that allows them to run Gemini on-device and in Private Cloud Compute, without using Google's more direct Gemini services/cloud, I'm excited that they are trying and I'm interested in how this plays out.
For some context with numbers, in mid-2024 Apple publicly described 3B parameter foundation models. Gemini 3 Pro is about 1T today.
https://machinelearning.apple.com/research/apple-intelligenc...
That 3B model is a local model that eventually got built into macOS 26. Gemini 3 Pro is a frontier model (cloud). They're very different things.
Sure. And the same paper describes a ‘larger’ cloud-served model.
Rumor has it that they weren't trained "from scratch" the way US labs' models were, i.e. Chinese labs benefitted from government-"procured" IP (the US $B models) in order to train their $M models. I also understand there to be real innovation in the many-MoE architecture on top of that. Would love to hear a more technical take from someone who does more than repeat rumors, though.
We don't really know how much it cost them. Plenty of reasons to doubt the numbers passed around and what it wasn't counting.
(And even if you do believe it, they also aren't licensing the IP they're training on, unlike American firms, who are now paying quite a lot for it.)
It also lets them keep a lot of the legal issues regarding LLM development at arms length while still benefiting from them.
> Seems like they are pivoting to becoming the premium "last mile" delivery network for someone else's intelligence.
They have always been a premium "last mile" delivery network for someone else's intelligence, except that "intelligence" was always IP until now. They have always polished existing (i.e., not theirs) ideas and made them bulletproof and accessible to the masses. Seems like they intend to just do more of the same for AI "intelligence". And good for them, as it is their specialty and it works.
It’s also a bet that the capex cost for training future models will be much lower than it is today. Why invest in it today if they already have the moat and dominant edge platform (with a loyal customer base upgrading hardware on 2-3 year cycles) for deploying whatever future commoditized training or inference workloads emerge by the time this Google deal expires?
Could you elaborate a bit on why you've judged it as privacy theatre? I'm skeptical but uninformed, and I believe Mullvad are taking a similar approach.
Mullvad is nothing like Apple. For Apple devices:
- need real email and real phone number to even boot the device
- cannot disable telemetry
- App Store apps only, even though many key privacy-preserving apps are not available
- /etc/hosts is not your own; DNS control in general is extremely weak
- VPN apps on iDevices have artificial holes
- can't change push notification provider
- can only use WebKit for browsers, which lacks many important privacy-preserving capabilities
- need to use an app you don't trust but want to sandbox it from your real information? Too bad, no way to do so
- the source code is closed, so Apple can claim X but do Y; you have no proof that you are secure or private
- without control of your OS you are subject to Apple complying with the government and pushing updates to serve them, not you, which they are happy to do to make a buck
Mullvad requires nothing but an envelope with cash in it and a hash code and stores nothing. Apple owns you.
Agreed on most points, but you can set up a pretty solid device-wide DNS provider using configuration profiles. Similar to how iOS can be enrolled in corporate MDM - but under your control.
Works great for me with NextDNS.
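For anyone curious what that looks like in practice: a minimal sketch of a DNS-over-HTTPS configuration profile, using Apple's documented DNSSettings payload type. The NextDNS ServerURL and the UUIDs/identifiers here are placeholders, not real values.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>PayloadContent</key>
  <array>
    <dict>
      <!-- Apple's managed DNS settings payload (iOS 14+/macOS 11+) -->
      <key>PayloadType</key>
      <string>com.apple.dnsSettings.managed</string>
      <key>PayloadIdentifier</key>
      <string>com.example.dns</string>
      <key>PayloadUUID</key>
      <string>00000000-0000-0000-0000-000000000001</string>
      <key>PayloadVersion</key>
      <integer>1</integer>
      <key>DNSSettings</key>
      <dict>
        <!-- Encrypted DNS over HTTPS; ServerURL is a placeholder -->
        <key>DNSProtocol</key>
        <string>HTTPS</string>
        <key>ServerURL</key>
        <string>https://dns.example.com/your-config-id</string>
      </dict>
    </dict>
  </array>
  <key>PayloadDisplayName</key>
  <string>Encrypted DNS</string>
  <key>PayloadIdentifier</key>
  <string>com.example.profile</string>
  <key>PayloadType</key>
  <string>Configuration</string>
  <key>PayloadUUID</key>
  <string>00000000-0000-0000-0000-000000000002</string>
  <key>PayloadVersion</key>
  <integer>1</integer>
</dict>
</plist>
```

Install via Settings → Profile Downloaded; providers like NextDNS generate a signed version of roughly this for you.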
Orion browser - while also based on WebKit - is also awesome, with great built-in ad blocking and supposedly privacy-respecting ideals.
Apple has records that you are installing that, probably putting you on a list.
And it works until it's made illegal in your country and removed from the app store. You have no guarantees that anything that works today will work tomorrow with Apple.
Apple is setting us up to be under a dictator's thumb one conversion at a time.
They transitioned from “nobody can read your data, not even Apple” to “Apple cannot read your data.” Think about what that change means. And even that is not always true.
They also were deceptive about iCloud encryption where they claimed that nobody but you can read your iCloud data. But then it came out after all their fanfare that if you do iCloud backups Apple CAN read your data. But they aren’t in a hurry to retract the lie they promoted.
Also if someone in another country messages you, if that country’s laws require that Apple provide the name, email, phone number, and content of the local users, guess what. Since they messaged you, now not only their name and information, but also your name and private information and message content is shared with that country’s government as well. By Apple. Do they tell you? No. Even if your own country respects privacy. Does Apple have a help article explaining this? No.
If you want to turn on full end-to-end encryption you can, if you want to share your pubkey so that people can't fake your identity on iMessage you can, and there's still a higher tier of security than that presumably for journalists and important people.
It's something a smart niece or nephew could handle in terms of managing risk, but the implications could mean getting locked out of your device which you might've been using as the doorway to everything, and Apple cannot help you.
>Also if someone in another country messages you, if that country’s laws require that Apple provide the name
I don't mean to sound like an Apple fanboy, but is this true just for SMS or iMessage as well? It's my understanding that for SMS, Apple is at the mercy of governments and service providers, while iMessage gives them some wiggle room.
Anecdotal, but when my messages were subpoenaed, it was only the SMS messages. US citizen fwiw
Because Apple makes privacy claims all the time, but all their software is closed source and it is very hard or impossible to verify any of their claims. Even if messages sent between iPhones are E2E encrypted, for example, the client apps and the operating system may be backdoored (and likely are).
All user data is E2E encrypted, so the government literally cannot force this. This has been the source of numerous disputes [0, 1] that either result in the device itself being cracked [0] (due to weak passwords or vulnerabilities in device-level protection) or governments attempting to ban E2E encryption altogether [1].
[0] https://en.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption_d...
> They simply do not have the TPU pods or the H100 clusters to train a frontier model like Gemini 2.5 or 3.0 from scratch without burning 10 years of cash flow.
Why does Apple need to build its own training cluster to train a frontier model, anyway?
Why couldn't the deal we're reading about have been "Apple pays Google $200bn to lease exclusive-use timeslots on Google's AI training cluster"?
Personally I also think it's a very smart move - Google has TPUs and will do it more efficiently than anyone else.
It also lets Apple stand by while the dust settles on who will out-innovate in the AI war - they could easily enter the game in a big way much later on.
absolutely, right now they can avoid the risk while still getting the benefits as they regroup
>Am I missing the elephant in the room?
Everyone using Siri is going to have their personality data emulated and simulated as a ”digital twin” in some computing hell-hole.
Seems like the LLM landscape is still evolving, and training your own model provides no technical benefit as you can simply buy/lease one, without the overhead of additional eng staffing/datacenter build-out.
I can see a future where LLM research stalls and stagnates, at which point the ROI on building/maintaining their own commodity LLM might become tolerable. Apple has had Siri as a product/feature, and they've proven for the better part of a decade that voice assistants are not something they're willing to build proficiency in. My wife has had an iPhone for at least a decade now, and I've heard her use Siri perhaps twice in that time.
The trouble is this seems to me like a short-term fix. Longer term, once the models are much better, Google can just lock Apple out, take everything for themselves, and leave Apple nowhere and even further behind.
Of course there is going to be an abstraction layer - this is like Software Engineering 101.
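To make the "abstraction layer" point concrete: a minimal sketch of a provider-abstraction interface (all names hypothetical, in Python for brevity) where client code never depends on a specific vendor, so the backing model can be swapped later.

```python
from abc import ABC, abstractmethod


class ModelProvider(ABC):
    """Hypothetical interface assistant code would depend on,
    so the backing model (Gemini, in-house, etc.) is swappable."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class GeminiProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        # In reality this would call the vendor's API; stubbed here.
        return f"[gemini] {prompt}"


class LocalProvider(ModelProvider):
    def complete(self, prompt: str) -> str:
        # A future on-device model would slot in without touching callers.
        return f"[on-device] {prompt}"


def assistant_reply(provider: ModelProvider, prompt: str) -> str:
    # Callers never name a vendor; swapping providers is a one-line change
    # at the injection point.
    return provider.complete(prompt)
```

The point is that if the Google deal expires or a better model appears, only the concrete provider changes, not the assistant logic built on top of it.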
Google really couldn't care less about Android being good. It is a client for Google search and Google services - just like the iPhone is a client for Google search and apps.
Agreed, especially since this is a competitive space with multiple players, with a high price of admission, and where your model is outdated in a year, so it's not even capex as much as recurring expenditure. Far better to let someone else do all the hard work and wait and see where things go. Maybe someday this will be a core competency you want in-house, but when that day comes you can make that switch, just like with Apple silicon.
> without burning 10 years of cash flow.
Sorry to nitpick, but Apple's free cash flow is ~$100B/yr. Training a model to power Siri would not cost more than a trillion dollars. Of all the companies to survive a crash in AI unscathed, I would bet on Apple the most.
They are the only ones who do not have large debts off (or on) the balance sheet or aggressive long-term contracts with model providers, and their product demand/cash flow is least dependent on the AI industry's performance.
They will still be affected by general economic downturn but not be impacted as deeply as AI charged companies in big tech.
They have the largest free cash flow (over $100 billion a year). Meta and Amazon have less than half that a year, and Microsoft/Nvidia are between $60b-70b per year. The statement reflects a poor understanding of their financials.
> To me, this deal is about the bill of materials for intelligence. Apple admitted that the cost of training SOTA models is a capex heavy-lift they don't want to own. Seems like they are pivoting to becoming the premium "last mile" delivery network for someone else's intelligence. Am I missing the elephant in the room?
Probably not missing the elephant. They certainly have the money to invest, and they do like vertical integration, but putting massive investment into a bubble that could pop or flatline at any point seems pointless when they can just pay to use the current best. In the future they can switch to something cheaper or buy some of the smaller AI companies that survive the purge.
Given how AI-capable their hardware is, they might just move most of it locally too
Calling the Neural Engine the best is pretty silly. It's the best, perhaps, of what is uniformly a failed class of IP blocks - mobile inference NPU hardware. Edge inference on Apple is dominated by CPUs and Metal, which don't use the NPU.
best inference silicon in the world generally or specialized to smaller models/edge?
Not even an Apple fan, but from what I've been testing for my dev use case (only up to 14B) it absolutely rocks for general models.
That I can absolutely believe but the big competition is in enterprise gpt-5-size models.
Perhaps spending it on inference that will be obsoleted in 6 months by the next model is not a good idea either.
Edit: especially given that Apple doesn’t do b2b so all the spend would be just to make consumer products
The cash pile is gone; they have been active in share repurchases.
They still generate about ~$100 billion in free cash per year, that is plowed into the buybacks.
They could outspend every other industry competitor. It's ludicrous to say that they would have to burn 10 years of cash flow on a (relatively) trivial investment in model development and training. That statement reflects a poor understanding of Apple's cash flow.
> Am I missing the elephant in the room?
Apple is flush with cash and other assets, they have always been. They most likely plan to ride out the AI boom with Google's models and buy up scraps for pennies on the dollar once the bubble pops and a bunch of the startups go bust.
It wouldn't be the first time they went for full vertical integration.
If nothing else, this was likely driven by Google being the most stable of the AI labs. Gemini is objectively a good model (whether it's #1 or #5 in ranking aside) so Apple can confidently deliver a good (enough) product. Also for Apple, they know their provider has ridiculously deep pockets, a good understanding and infrastructure in place for large enterprises, and a fairly diversified revenue stream.
Going with Anthropic or OpenAI, despite on the surface having that clean Apple smell and feel, carries a lot of risk on Apple's part. Both companies are far underwater, liable to take risks, and liable to drown if they fall even a bit behind.