Comment by Neywiny
I think I'll still stand by my viewpoint. They said:
> On the Cerebras side, the effective die size is a bit smaller at 46,225mm2. Applying the same defect rate, the WSE-3 would see 46 defects. Each core is 0.05mm2. This means 2.2mm2 in total would be lost to defects.
So, OK, they claim they should see (46225-2.2)/46225 = 99.995% yield. Doing the same math with their Nvidia numbers gives 99.4%. And yet in practice neither approach gets anywhere near those numbers. I just feel like the whole article is all theory and numbers and math about how much better they are, when in practice it's meaningless.
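For what it's worth, here's that arithmetic spelled out (a quick sketch using only the figures quoted above; the 99.4% comes from plugging the article's H100 figures, which I haven't reproduced here, into the same formula):

```python
# Figures quoted in the article for the WSE-3
wse3_area_mm2 = 46_225   # effective die size
wse3_lost_mm2 = 2.2      # total area the article says is lost to defects

# "Usable silicon" fraction the article is implicitly claiming
wse3_yield = (wse3_area_mm2 - wse3_lost_mm2) / wse3_area_mm2
print(f"WSE-3 theoretical yield: {wse3_yield:.3%}")  # ~99.995%
```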
So what I'm not seeing is why it'd be impossible to interconnect all the H100s on a wafer and call it a day. You'd presumably get 92/93 = 98.9% of the performance and, here's the kicker, no need to switch to another architecture. I don't know where your 0% number came from. Nothing in this article says that a competitor doing the same scaling to wafer scale would get 0%, just a marginal decrease in how many cores make it through fab.
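And here's where my 98.9% comes from (my own rough numbers, not the article's): roughly 93 H100-sized dies fit on a wafer, and you lose about one of them to defects.

```python
# Rough wafer-scale-H100 thought experiment (my numbers, not the article's)
dies_per_wafer = 93   # roughly how many H100-sized dies fit on a 300 mm wafer
good_dies = 92        # assume one die is lost to a defect

usable_fraction = good_dies / dies_per_wafer
print(f"Usable H100 dies: {usable_fraction:.1%}")  # ~98.9%
```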
Fundamentally, I'm not convinced by this article that Cerebras has done something in their design that makes wafer scale possible where it otherwise wouldn't be. All I'm seeing is that it'd perform about 1% faster.
Edit: thinking about it a bit more, to me it's like they said TSMC has a guy with a sledgehammer who smashes all the wafers, and their architecture snaps a tiny bit cleaner. But they haven't said anything about firing the guy with the sledgehammer. The paragraph before their final table says this whole exercise is pretty much meaningless, because the numbers they use for competitors are made up and aren't even the right numbers to be using. Then the table backs up my paraphrase.