"We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach."
What if the meta bitter lesson is that data scaling is just a more extreme form of the human-centric approach of building knowledge into agents? After all, we're telling the model what to say, how to think, and how to behave.
A true general method wouldn't rely on humans at all! Human data would be worthless beyond bootstrapping!
The author fundamentally misunderstands the bitter lesson.
[0] https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson...