Comment by b0a04gl
if models shift behavior based on eval cues, and most fine-tuning datasets are built from prior benchmarks or prompt templates, aren't we just reinforcing eval-aware behavior with each new iteration? at some point we're not tuning general reasoning anymore, we're just optimizing response posture. wouldn't surprise me if that's already skewing downstream model behavior in subtle ways that won't show up until you run tasks with zero pattern overlap with the training data
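
to make that "zero pattern overlap" bit concrete, here's a rough sketch of how you might filter an eval set before trusting it — just a word-level n-gram surface-overlap check between eval prompts and a fine-tuning corpus. all the names and the 0.2 cutoff are invented for illustration; real contamination audits use fuzzier matching, but the idea is the same:

```python
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Return the set of word-level n-grams in a string."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_score(eval_prompt: str, finetune_corpus: list[str], n: int = 8) -> float:
    """Fraction of the eval prompt's n-grams that also appear somewhere in the corpus."""
    prompt_grams = ngrams(eval_prompt, n)
    if not prompt_grams:
        return 0.0
    corpus_grams: set[tuple[str, ...]] = set()
    for doc in finetune_corpus:
        corpus_grams |= ngrams(doc, n)
    return len(prompt_grams & corpus_grams) / len(prompt_grams)

# arbitrary cutoff for this sketch: prompts scoring above it share enough
# surface patterns with the training data that "passing" them may just
# reflect the learned response posture, not general reasoning
CONTAMINATION_THRESHOLD = 0.2

def clean_eval_set(eval_prompts: list[str], finetune_corpus: list[str]) -> list[str]:
    """Keep only prompts with (near-)zero n-gram overlap with the corpus."""
    return [p for p in eval_prompts
            if overlap_score(p, finetune_corpus) < CONTAMINATION_THRESHOLD]
```

anything that survives a filter like this is closer to the "zero pattern overlap" regime where the skew would actually show up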