Comment by xg15
I once wrote a toy program in which an agent was supposed to navigate on its own through a 2D platformer environment: jump over obstacles and holes, climb stairs, etc.
The idea was that the agent would first receive a goal like "go to tile (15, 28)", then use Dijkstra's algorithm to build a "movement plan" - like "move 2 tiles to the right, trigger a jump, move 3 tiles to the left while in the air" - and then execute that plan.
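The plan-making stage can be sketched roughly like this - a minimal Python sketch assuming plain 4-directional moves on a blocked/free tile grid (the actual program also had jump actions and gravity, which is exactly where the trouble started):

```python
import heapq

def dijkstra_plan(grid, start, goal):
    """Dijkstra over a tile grid; returns a list of moves from start to goal.
    grid[y][x] == 1 means the tile is blocked. All moves cost 1, so this is
    effectively BFS, but Dijkstra generalizes once jumps get different costs."""
    moves = {(1, 0): "right", (-1, 0): "left", (0, 1): "down", (0, -1): "up"}
    dist = {start: 0}
    prev = {}  # node -> (predecessor, move name that got us here)
    pq = [(0, start)]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            break
        if d > dist[(x, y)]:
            continue  # stale queue entry
        for (dx, dy), name in moves.items():
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                nd = d + 1
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    prev[(nx, ny)] = ((x, y), name)
                    heapq.heappush(pq, (nd, (nx, ny)))
    if goal != start and goal not in prev:
        return None  # unreachable
    # walk predecessors backwards to recover the movement plan
    plan, node = [], goal
    while node != start:
        node, name = prev[node]
        plan.append(name)
    plan.reverse()
    return plan
```

On a 3x3 grid with the center tile blocked, `dijkstra_plan(grid, (0, 0), (2, 2))` returns a 4-move plan around the obstacle.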
My main takeaway was that even in this small toy world, with clearly defined goals, very simple deterministic "physics", complete information, and an "executor" that is 100% reliable and never gets tired or distracted, it didn't work.
The simplified "physics" assumptions baked into the plan-making stage didn't match how the environment actually behaved, so the agent ended up somewhere other than planned after only a few steps.
What did work was to execute only the first few steps of the plan, throw the rest away, make a new plan from the new location, and repeat.
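That fix is basically a receding-horizon loop. A sketch, where `plan` and `execute_step` are hypothetical stand-ins for the Dijkstra planner above and the game's actual physics (the whole point being that `execute_step` may not land where the plan predicted):

```python
def receding_horizon(state, goal, plan, execute_step, horizon=3, max_iters=100):
    """Replan from the current state, execute only the first `horizon`
    steps of the plan, discard the rest, and repeat until the goal is
    reached or we give up."""
    for _ in range(max_iters):
        if state == goal:
            return state
        steps = plan(state, goal)
        if not steps:
            return state  # planner found no route from here
        for step in steps[:horizon]:
            # The environment, not the plan, decides where we actually end up.
            state = execute_step(state, step)
    return state
```

With a well-behaved `execute_step` this just walks the plan in chunks; the payoff comes when execution drifts, because every replan starts from where the agent really is rather than where the old plan assumed it would be.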
If this stuff didn't even work in a toy world, on a computer, I can't imagine that a detailed step-by-step plan for a 5-year period in the real world would work, with the planner having even less knowledge about the world to base their plan on.
Isn't the A* algorithm something that kinda helps with this? Basically you recalculate the route every "step".