Comment by addaon
I could write (and have in the past written) a long explanation of my experience with this, but…
Redundancy is a tool for reducing the probability of encountering statistical errors, which come from things like SEUs (single-event upsets).
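To make the statistical case concrete, here's a minimal Python sketch (the numbers are made up, and upsets are assumed independent): a 2-of-3 majority-voting triple, the classic redundancy pattern, only produces a wrong output when two or more units are upset at once, which drives the failure probability from p down to roughly 3p².

    # Failure probability: simplex unit vs. 2-of-3 voting triple,
    # assuming independent upsets with per-unit probability p (made up).
    p = 1e-6                            # per-unit upset probability, assumed
    p_simplex = p                       # a single unit fails on any upset
    p_tmr = 3 * p**2 * (1 - p) + p**3   # voter wrong iff >= 2 units upset
    print(p_simplex, p_tmr)             # ~1e-6 vs ~3e-12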
Dissimilarity is a tool for reducing the “probability” of encountering non-statistical errors, aka defects, bugs. But it’s a bit of a category error to discuss the probability of a non-probabilistic event: either the bug exists or it does not. At best you can talk about the state coverage that corresponds to its observability, but we don’t sample state space uniformly.
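For flavor, here's a toy sketch of what dissimilarity buys you (names and implementations are mine, purely illustrative): two independently written versions of the same spec, cross-checked at runtime. A disagreement exposes a defect in one version; a bug that both teams wrote passes the check silently, which is exactly the common-mode concern below.

    import heapq

    def median_a(xs):
        # version A: sort-based (illustrative)
        return sorted(xs)[len(xs) // 2]

    def median_b(xs):
        # version B: heap-based (illustrative, deliberately dissimilar)
        return heapq.nsmallest(len(xs) // 2 + 1, xs)[-1]

    def median_checked(xs):
        # cross-check the dissimilar versions; catches a defect only
        # if it lives in just one of them
        a, b = median_a(xs), median_b(xs)
        if a != b:
            raise RuntimeError("dissimilar versions disagree")
        return a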
There has been a trend in the past few decades, somewhat informed by NASA studies, to favor redundancy as the (only, effective) tool for mitigating statistical errors, but to lean against heavy use of dissimilarity for software development in particular. This reflects a belief that (a) independent software teams implement the same bugs anyway, and (b) an hour spent on a duplicate implementation is better spent on testing. But at the absolute highest levels of safety, where development hours are a relatively low cost compared to verification hours, I know dissimilarity is still used; and I don’t know how the hardware folks’ philosophy has evolved.