Comment by tptacek
You'll be using an idiosyncratic definition the rest of the industry does not use, but you do you.
What I think is happening here is another instance of a pattern that recurs all the time in communities like this: a term of art, "memory safety", was coined to describe languages that don't have buffer overflows, integer overflows, use-after-frees, double frees, controllable uninitialized pointers, and all the other memory lifecycle vulnerabilities. People unfamiliar with the state of the art heard the term, liked it, and axiomatically derived their own definition for it. They like their definition better, and are not open to the idea that the term exists to serve a purpose orthogonal to their arguments.
Another recent instance of the same phenomenon: "zero trust".
Just as happened in the Zero Trust Wars of 2022, people, hearing the industry definition and intent of the term, scramble to reconcile their axiomatic definition with the state of the art, convincing themselves they were right all along.
The problem they have in this particular argument is: where are the vulnerabilities? Go is not a niche language. It is a high-profile target and has been for over a decade. I saw Go security talks at OWASP Chicago(!) in 2012(!). People have all sorts of hypotheses about how a memory corruption vulnerability --- not "memory corruption", but a vulnerability stemming from it, implying valuable attacker control over the result of whatever bad thing happened --- might sneak into a Go program. Practitioners hear those axiomatic arguments, try to reconcile them with empirical reality, and: it just doesn't hold up.
Just for whatever it's worth to hear this, if at Black Hat 2025 someone does to Go what James Kettle does to web frameworks every year and introduces a widespread repeatable pattern of memory exploitability in Go race conditions, about half of my message board psyche will be really irritated (I'll have been wrong!), but the other half of my message board psyche will be fucking thrilled (there will be so much to talk about!) and all of my vulnerability researcher psyche will be doing somersaults (there will be so many new targets to hit!). On net, I'm rooting for myself being wrong. But if I had to bet: we're not going to see that talk, not at BH 2025, or 2026, or 2027. I'm probably not wrong about this.
> You'll be using an idiosyncratic definition the rest of the industry does not use, but you do you.
What definition are you using that you seem to think is the one definition of memory safety that is canonical?
> don't have buffer overflows, integer overflows, use-after-frees, double frees, controllable uninitialized pointers, and all the other memory lifecycle vulnerabilities
Any guarantees about this depend on the language not having undefined behavior in its safe subset. Once you have undefined behavior, any other guarantees made about memory safety are significantly weakened.
> where are the vulnerabilities?
I don't know of any other than code written to demonstrate the concept (sketched below). But I imagine if you look at any large Golang codebase you will find race condition bugs. So the fact that you have potential undefined behavior resulting from an extremely common coding error seems like something to be concerned about (to me at least), especially given how little Golang helps you write safe concurrent code.
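For concreteness, here's roughly what that demonstration code looks like. This is a sketch of the well-known proof-of-concept pattern (racing on an interface value), not a reliable exploit; all type and variable names are illustrative, and whether it actually crashes depends on the compiler, runtime, and architecture. It assumes the usual two-word interface representation (type descriptor word + data word): aligned single-word stores generally don't tear on common hardware, but the two-word pair can be interleaved, so an unsynchronized reader can observe one type's descriptor paired with the other type's data word.

```go
package main

type iface interface{ M() }

// holder's method treats f as a valid pointer and dereferences it.
type holder struct{ f *int }

func (h *holder) M() { _ = *h.f }

// confused has the same field layout, but f is an attacker-chosen integer.
type confused struct{ f uintptr }

func (c *confused) M() {}

var shared iface // unsynchronized shared variable: this is the bug

func main() {
	good := &holder{f: new(int)}
	evil := &confused{f: 0xdeadbeef} // arbitrary address

	go func() {
		for {
			shared = good // writes (descriptor, data) as two separate stores
		}
	}()
	go func() {
		for {
			shared = evil
		}
	}()
	for {
		if v := shared; v != nil {
			// A torn read can dispatch (*holder).M with evil as the
			// receiver, dereferencing 0xdeadbeef: memory unsafety reached
			// from "safe" Go, with no unsafe or cgo in sight.
			v.M()
		}
	}
}
```

To be fair, `go run -race` flags this immediately, and turning a torn interface read into controlled exploitation in a real program is exactly the step nobody has publicly demonstrated at scale.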
That's not to say that Go is therefore totally useless and everyone should stop using it now because it's "insecure". But it also seems ... unwise ... to me to dismiss it entirely just because it is hard to exploit, or because we don't have any (known) examples of it being exploited.