Is that necessarily a blocker? As others in this thread have pointed out, this probably becomes possible only once sufficient compute is available for some form of non-public retraining at the individual-user level. In that case (and hand-waving away just how far off that is), does a model need to retain its generality?
Hypothetically (and perhaps more plausibly), a continually learning model that adapts to the context of a particular org / company / codebase / etc., could even be desirable.