Comment by Borealid
I think a passkey is a good example of how, when a trusted third party grants the user limited rather than unlimited permission (they can use a secret with the site that created it, but cannot extract the raw secret and send it to an arbitrary site), it is possible to make them immune to a particular type of phishing.
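To make the origin-binding point concrete, here is a minimal sketch of the idea. This is not real WebAuthn (which uses asymmetric signatures and a browser-mediated ceremony); the `Authenticator` and `RelyingParty` names are illustrative, and HMAC stands in for the signature scheme just to keep the sketch self-contained. The key property is the same: the device signs over the origin it saw, so an assertion produced for a phishing site fails verification at the real one.

```python
import hashlib
import hmac

class Authenticator:
    """Holds the raw secret; never reveals it, only signs (challenge, origin)."""
    def __init__(self) -> None:
        self._secret = b"device-held secret, not extractable by the user"

    def sign(self, challenge: bytes, origin: str) -> bytes:
        # The origin the browser observed is bound into every assertion.
        return hmac.new(self._secret, challenge + origin.encode(),
                        hashlib.sha256).digest()

class RelyingParty:
    def __init__(self, origin: str, authenticator: Authenticator) -> None:
        self.origin = origin
        self._auth = authenticator  # stands in for the registered credential

    def verify(self, challenge: bytes, assertion: bytes) -> bool:
        # Recompute for *our* origin; anything signed for another origin fails.
        expected = self._auth.sign(challenge, self.origin)
        return hmac.compare_digest(expected, assertion)

auth = Authenticator()
site = RelyingParty("https://real.example", auth)
challenge = b"nonce-123"

# Legitimate login: the browser reports the genuine origin.
ok = site.verify(challenge, auth.sign(challenge, "https://real.example"))

# Phishing attempt: the victim "authenticates" on the attacker's page, but
# the assertion is bound to the attacker's origin and is useless elsewhere.
phished = site.verify(challenge, auth.sign(challenge, "https://evil.example"))

print(ok, phished)  # True False
```

The user never holds a secret they could be tricked into typing into the wrong site, which is exactly the "limited instead of unlimited permission" being described.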
As an example of mitigating another type of phishing: if the user can only log in to a web site from a particular device or country, an attacker who tricks them into providing their password gets a much less useful win.
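That context-limited-login idea can be sketched in a few lines. This is a hypothetical policy check, not any real product's API; the field names and allowed values are made up for illustration. The point is just that a correct, phished password is still rejected when the request's context falls outside the allowed scope.

```python
# Hypothetical access policy: which contexts may use this account at all.
ALLOWED = {"device_ids": {"laptop-7f3a"}, "countries": {"NZ"}}

def login_allowed(password_ok: bool, device_id: str, country: str) -> bool:
    """A password alone is not enough; the request context must also match."""
    return (password_ok
            and device_id in ALLOWED["device_ids"]
            and country in ALLOWED["countries"])

# The real user, from their own laptop:
print(login_allowed(True, "laptop-7f3a", "NZ"))    # True

# An attacker with the stolen password, from an unknown machine abroad:
print(login_allowed(True, "rented-vps-01", "RU"))  # False
```

The stolen credential still "works" in the sense that `password_ok` is true, but its scope of use has been narrowed to contexts the attacker cannot easily occupy.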
You could argue they have the "right to do" less in that situation. Sure, that's a reasonable perspective, and I'm not passing moral judgement here. But I think it is factually true that phishing vulnerabilities can be mitigated (and even entirely prevented) by giving end users devices with stronger security policies - policies written by the device creator and not editable by the end users themselves.
I think this principle applies to every single type of social engineering attack. Limiting the context of permissions lessens the risk of a confused deputy.
I am not sure what you are trying to say.
Security is a gradient. At some point, adding security means reducing freedom, and where you stop is a societal choice. If you put every human in your country in jail, each in a separate cell, never let them out and just bring them food, then there will be no crime in your country. But nobody wants that.
> I think this principle applies to every single type of social engineering attack. Limiting the context of permissions lessens the risk of a confused deputy.
A confused deputy is a computer program. We're talking about phishing.