This article reminds me of tree search. One result from AI work on game playing is that a good policy selects the action that maximizes the value of a future state; a good value estimate for an action reflects where that action would eventually leave the agent. The article points out that a person gets better results by expanding the tree to a depth beyond one, which is usually true. Interestingly, humans operate in real time, so we also experience contexts in which it is not: expansion takes precious time. That is one source of the cognitive biases that sometimes steer us away from more optimal answers.
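To make the depth point concrete, here's a minimal sketch on a made-up toy game tree (entirely hypothetical, not from the article): a depth-1 greedy policy picks the child with the best immediate value, while expanding one ply further can reveal that the other branch pays off more in the end.

```python
# Each node is (value_of_this_state, list_of_children).
# Branch 0 looks good immediately but has weak continuations;
# branch 1 looks worse now but leads to stronger states.
tree = (0, [
    (5, [(1, []), (2, [])]),
    (3, [(9, []), (8, [])]),
])

def best_value(node, depth):
    """Best state value reachable after expanding up to `depth` more plies."""
    value, children = node
    if depth == 0 or not children:
        return value
    return max(best_value(c, depth - 1) for c in children)

def pick_action(node, depth):
    """Index of the child whose expanded value is highest."""
    _, children = node
    return max(range(len(children)),
               key=lambda i: best_value(children[i], depth - 1))

print(pick_action(tree, 1))  # depth-1 greedy: branch 0 (immediate value 5)
print(pick_action(tree, 2))  # deeper expansion: branch 1 (reaches value 9)
```

The trade-off the comment describes is visible here: the deeper answer is better, but it costs more calls to `best_value`, which is exactly the time a real-time agent may not have.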