rootusrootus a day ago

This is good for Siri, in many ways. But I was kind of hoping we would see a time soon when phone hardware became good enough to do nearly 100% of the Siri-level tasks locally rather than needing Internet access.

Someone1234 a day ago

I suspect we'll see that; but Siri is in such a bad state of disrepair that Apple really needs something now, while they continue to look for small-scale LLMs that can run well enough locally. The two things aren't mutually exclusive.

The biggest thing Apple has to do is get a generic pipeline up and running that can support both cloud and non-cloud models down the road, and that integrates with a bunch of local tools for agent-style workloads (e.g. "restart", "audio volume", "take screenshot" as tools that agents, backed by different cloud or local models, can call on-device).
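The pipeline idea above can be sketched roughly like this: a model-agnostic tool registry where the backend (cloud or local) only emits tool names and arguments, and the device executes the tool locally. This is a minimal illustration, not Apple's API; all names here ("audio_volume", "take_screenshot", etc.) are invented for the example.

```python
# Minimal sketch of a model-agnostic tool registry: the agent backend
# (cloud or local model) only sees tool names and JSON-able arguments,
# so the same on-device tools work regardless of where the model runs.
from typing import Any, Callable


class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Any]] = {}

    def register(self, name: str):
        # Decorator that registers a function under a tool name.
        def wrap(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = fn
            return fn
        return wrap

    def call(self, name: str, **kwargs: Any) -> Any:
        # Dispatch a tool call emitted by whichever model is in use.
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)


registry = ToolRegistry()


@registry.register("audio_volume")
def set_volume(level: int) -> str:
    # Placeholder for the real on-device action.
    return f"volume set to {level}"


@registry.register("take_screenshot")
def take_screenshot() -> str:
    return "screenshot saved"


# A model (cloud or local) emits a tool call; the device executes it locally.
print(registry.call("audio_volume", level=30))
```

The point of the indirection is that swapping the model behind the agent loop doesn't touch any of the tools.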

layer8 a day ago

I don’t think there’s a clear boundary of “Siri-level” tasks. In particular, properly determining whether a task is “Siri-level” or not is likely to require off-device AI.

  • rootusrootus a day ago

    I'd hope it could be the other way around. Some stuff should be relatively straightforward -- summarizing notifications and emails, setting timers, things like that should obviously be on-device. Beyond that, I would hope the on-device AI can make the determination of whether it needs to go to a datacenter AI for a better answer.

    But you may be right, maybe on-device won't be smart enough to decide it isn't smart enough. Though it does seem like the local LLMs have gotten awfully good.
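The routing the comment above describes can be sketched as a small on-device gate: handle obviously-local intents locally when the model is confident, escalate everything else. The intent names and threshold here are hypothetical, purely for illustration.

```python
# Hedged sketch of on-device routing: known-simple intents stay local
# when confidence is high enough; everything else escalates to a
# datacenter model. Intent names and the threshold are invented.
LOCAL_INTENTS = {"set_timer", "summarize_notification", "summarize_email"}


def route(intent: str, confidence: float, threshold: float = 0.8) -> str:
    """Return which backend should handle the request."""
    if intent in LOCAL_INTENTS and confidence >= threshold:
        return "on-device"
    # The local model isn't sure it can do this well: hand off.
    return "datacenter"


print(route("set_timer", 0.95))     # simple, high-confidence intent
print(route("plan_my_trip", 0.95))  # unknown intent, escalate
```

The failure mode layer8 points out below maps onto exactly this branch: a miscalibrated confidence score means the local path claims a task it can't actually do.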

    • layer8 a day ago

      I can see them going that route, but it would cause breaks in the flow similarly annoying to current Siri offering to delegate to ChatGPT, or to on-device Siri deciding it can do the task but then failing or doing it wrong. It certainly wouldn't be an "it just works" experience.