Robin_Message 3 months ago

The weights encode the end goal, etc. But the model does not have meaningful access to those weights within its chain of thought.

So the model thinks ahead but cannot reason about its own thinking in a real way. It is rationalizing, not rational.

  • senordevnyc 3 months ago

    So the model thinks ahead but cannot reason about its own thinking in a real way. It is rationalizing, not rational.

    My understanding is that we can’t either. We essentially make up post-hoc stories to explain our thoughts and decisions.

  • Zee2 3 months ago

    I, too, have no access to the patterns of my neurons' firing; I can only think and observe as a result of them.