Comment by Workaccount2 10 hours ago
If I'm understanding this correctly, this is pretty damn cool. I've only put about 15 minutes of research into it, but there's no better way to get corrected than to be wrong on the internet.
Essentially it seems that they can use statistical magic to "fuzz" the training set in such a way that it becomes very difficult for the model to leak information from the training set, while still providing the same output whether or not that exact info was in the training set. So I suppose the goal would be something like the ability to train on medical data while making it so the model can't complete the prompt "Workaccount2 has a serious medical condition called ______", and would give the same response regardless of whether or not I was present in the database.
Yes.
    prob(training_process(data)("Workaccount2 has a serious medical condition called") = anaemia)
        <= e^epsilon * prob(training_process(data without that piece of information)("Workaccount2 has a serious medical condition called") = anaemia) + delta
Here epsilon = 2 and delta is small. Basically, there is a theoretical guarantee that if it had trained on that sentence, it would be no more than ~7.4x (e^2) as likely to output it in response to any prompt, compared to when it hadn't trained on that sentence at all. A "sentence" here is defined to be 1024 tokens long [1].
You might think ~7x is not that big a deal, but note that this is a worst-case theoretical guarantee (and with some extra mathematics it's possible to get an even tighter bound; see Rényi DP). In practice, actually getting private data out of a DP-trained model is difficult even for epsilon = 8 (which corresponds to e^8, roughly 3000x!).
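To make those factors concrete, the multiplicative bound is just e^epsilon; a quick sanity check (plain Python, nothing model-specific):

    import math

    for eps in (2, 8):
        # e^epsilon is the worst-case likelihood ratio between training
        # with and without the sentence (ignoring the small delta term).
        print(f"epsilon = {eps}: at most {math.exp(eps):,.1f}x as likely")

    # epsilon = 2: at most 7.4x as likely
    # epsilon = 8: at most 2,981.0x as likely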
Edit: [1] This can be problematic: if a piece of information longer than 1024 tokens gets split across two sequences, there is no theoretical guarantee spanning both. However, this is an implementation detail of this model; I've yet to see the effect of increasing this number to a more reasonable value.
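For intuition about where a guarantee like this comes from: DP training is usually done with DP-SGD, which clips each example's gradient and adds calibrated Gaussian noise before the update. Here's a minimal NumPy sketch of one step; it's illustrative, not this model's actual implementation, and the names and defaults are made up:

    import numpy as np

    def dp_sgd_step(params, per_example_grads, clip_norm=1.0,
                    noise_multiplier=1.1, lr=0.1, rng=None):
        """One DP-SGD step: clip per-example gradients, average, add noise."""
        rng = rng or np.random.default_rng()
        clipped = []
        for g in per_example_grads:
            # Bound any single example's influence: scale its gradient
            # so the L2 norm is at most clip_norm (the "sensitivity").
            norm = np.linalg.norm(g)
            clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
        # Gaussian noise calibrated to the clipping bound hides whether
        # any one example was in the batch; the noise level (together
        # with batch size and step count) determines epsilon and delta
        # via a privacy accountant.
        noise = rng.normal(0.0, noise_multiplier * clip_norm,
                           size=params.shape)
        return params - lr * (np.sum(clipped, axis=0) + noise) / len(clipped)

The per-sequence caveat in [1] falls out of what counts as "one example" here: privacy is accounted at the level of whatever unit gets clipped and noised, in this case a 1024-token sequence.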