Comment by soganess
Can someone tell me what I am missing here?
This seems to suffer from a finite-size effect. Wolfram's machines have a tiny state space (s ≤ 4, k ≤ 3). For many NP problem classes, that is too small to encode complex algorithms, and low-dimensional enough that it is unlikely to encode hard ("worst case") instances of the class at all. The solution space simply cannot support them.
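For concreteness, here is the back-of-envelope count I have in mind (just a sketch, assuming the standard (s states, k colors) rule-table format where each (state, color) pair maps to a (write, move, next state) triple):

    # Rough count of (s-state, k-color) Turing machine rule tables.
    # Each of the s*k (state, color) pairs picks one of 2*s*k possible
    # (write color, move left/right, next state) actions.
    def rule_count(s, k):
        return (2 * s * k) ** (s * k)

    for s, k in [(2, 3), (3, 3), (4, 3)]:
        print(f"s={s}, k={k}: {rule_count(s, k):.3e} machines")

The s = 2, k = 3 case gives ~3 million machines, and even s = 4, k = 3 is only ~4e16, i.e. every machine in the range is describable in well under 100 bits.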
In this regime, hard problem classes only produce easy instances; think random k-SAT below the satisfiability threshold, where algorithms like Coja-Oghlan's Fix find satisfying assignments in polynomial time w.h.p. In random k-SAT the hardness only emerges near the phase transition, and by analogy (watch my hand wave in the wind so free) I can imagine hard instances simply not existing at small scales. Almost like the opposite of the overlap gap property.
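To make the k-SAT half of the analogy concrete, here is a toy experiment (emphatically not Coja-Oghlan's Fix, just a naive Schöning-style random walk) on random 3-SAT at densities below and near the ~4.27 threshold:

    import random

    def random_3sat(n, alpha, seed=0):
        # int(alpha*n) random clauses, each with 3 distinct signed variables.
        rng = random.Random(seed)
        return [tuple(v if rng.random() < 0.5 else -v
                      for v in rng.sample(range(1, n + 1), 3))
                for _ in range(int(alpha * n))]

    def random_walk_sat(clauses, n, max_flips=20_000, seed=0):
        # Repeatedly flip a random variable from a random unsatisfied clause.
        rng = random.Random(seed)
        assign = [rng.random() < 0.5 for _ in range(n + 1)]  # 1-indexed
        sat = lambda lit: assign[abs(lit)] == (lit > 0)
        for _ in range(max_flips):
            unsat = [c for c in clauses if not any(sat(l) for l in c)]
            if not unsat:
                return True
            v = abs(rng.choice(rng.choice(unsat)))
            assign[v] = not assign[v]
        return False

    n = 100
    for alpha in (2.0, 3.0, 4.2):
        print(f"alpha={alpha}: solved={random_walk_sat(random_3sat(n, alpha), n)}")

The point is just that even this dumb heuristic should dispatch the low-density instances quickly, while near the threshold it starts to flail; that is the easy-away-from-the-transition behaviour I mean.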
Wolfram's implicit counter-claim seems to be that the density of irreducibility among small machines approximates the density in the infinite limit (...or something? Via his "Principle of Computational Equivalence"), but I'm not following that argument. I am sure someone has brought this up to him! I just don't understand his response. Is there some way of characterizing or capturing the complexity floor of a given problem (for an NP-hard problem P, the machine's description space needs to be at least some size S before it can, w.h.p., encode even a few hard instances)?
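The crude version of that question I can actually compute is information-theoretic (very much a sketch, with made-up encoding constants, and it ignores that an instance could live on the tape rather than in the rule table):

    import math

    def machine_bits(s, k):
        # Bits to write down an (s-state, k-color) rule table:
        # s*k entries, each one of 2*s*k possible actions.
        return s * k * math.log2(2 * s * k)

    def threshold_3sat_bits(n, alpha=4.27):
        # Bits to write down a near-threshold random 3-SAT instance:
        # ~alpha*n clauses of 3 signed variable indices each.
        return alpha * n * 3 * (math.log2(n) + 1)

    print(f"s=4, k=3 machine: ~{machine_bits(4, 3):.0f} bits")
    for n in (20, 100, 1000):
        print(f"3-SAT near threshold, n={n}: ~{threshold_3sat_bits(n):.0f} bits")

The largest machine in Wolfram's range fits in ~55 bits, well short of even spelling out a near-threshold instance on 20 variables (~1400 bits), which is the "floor" intuition I'm groping for.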
The cynic in me says those interesting but ultimately barren long-form articles are just content marketing for Mathematica.