I wonder if we'll eventually realize, much as with Solow's productivity paradox, that whatever efficiency "gains" we get from AI are simply cancelled out by the increased need for fact-checking, a higher incidence of major errors being blindly trusted, and suboptimal outcomes (e.g. being bamboozled by good copy or fake reviews into paying for an inferior product). And all that is on top of the opportunity cost of the brainpower and energy currently being poured into multiple largely comparable models.
The beautiful thing about our late-stage world is that the appropriate resources will be spent on testing and validation for those who can afford it, while literally every other creature on the planet will have to bear the externalities.
Better yet: someday, the rich will be able to afford the high-quality LLMs that hallucinate somewhat less often, while the plebs will have to make do with lower-quality LLMs that hallucinate worse than someone on an acid trip.