Yes. Tools like Khan Academy help lots of talented kids progress through the curriculum beyond what their physical classrooms offer.
There are simply not enough teachers who can provide such an ideal, imagined education, at least not at current teacher salaries (and we're very far off). An educational strategy has to scale to real people, real teachers and real students as they are in the flesh, not some ivory-tower pipe dream. We've had decades of this "we should teach how to think, not what to think".
Alternatively, if you don't care about scale, as in rolling out a system to the population at large, then yeah, this kind of advanced education exists; it's just very selective, found in advanced extracurriculars or obtained through private tutors.
This also assumes that universal education is a sensible aim. I think that's doubtful and that it contributes to these sorts of burdens and waters down the quality of education in the process.
As a concrete example, for a few decades now, we've been pushing primary school students toward university education quite aggressively and broadly. It was quite common to scare students toward university by claiming that without a university degree, they would be flipping burgers at McDonalds. This, of course, is completely false, and it is disgraceful that such dishonest and manipulative tactics were used. Today, because of rising university costs and the dubious value of most university education, we're seeing this idea challenged at the level of the university. Gen Z's interest in trades has increased by something like 1500%. I don't see this as a negative. In Germany, for instance, there is a more balanced distribution across trades and university.
Now, I admit that the situation is a bit different in the case of primary education, but here, too, I think we would do well to think in terms of reform rather than in terms of technology patching up a pedagogically and administratively broken system. The American education system spends an inordinate amount of money on each student with little to show for it. If, for instance, those funds were allocated wisely, then a number of problems would likely go away or become smaller issues.
Of course, what does "allocate wisely" mean? Education systems require a principled grasp of what education is for. If you don't have a sound anthropological grasp of what it means to be human and how education is supposed to enable one's humanity and serve human persons, then you are in no position to run an education system or decide school curricula. I cannot stress this enough. Our education system today is very "pragmatist"; we're constantly told we're being prepared for a career and a job market. That's not education: it's job training. Of course, schools are quite mediocre as training facilities, because they're sort of a halfway house between training and whatever residue of classical education still lingers. So that's one distinction: training vs. education. Now, if we simply accept this distinction, we should ask: how should one organize training on the one hand and education on the other to enable each to be successful within its own circumscribed domain? And what if we keep things as local and decentralized as possible? I guarantee you would not see the inept system we have today.
So, with this...
> There are simply not enough teachers who can provide such an ideal, imagined education
...I agree, but again, my view is that at best we are buying time with these sorts of technological gimmicks. We're also social animals. We cannot keep isolating ourselves behind technology under the pretext of "practicality".
Yes, Germany has different educational tracks that are decided fairly early, at 10 or 12 years old (with some opportunity to change tracks). I don't think Americans like this idea.
Still, in Germany 40% of young adults have a tertiary degree (https://www.oecd.org/en/publications/2025/09/education-at-a-...) while it's 47% in the US, so I wouldn't say it's a huge difference. And it's not just a US thing: Denmark stands at 45%. So I wouldn't spin too big a narrative around this.
Education is a field where, decade after decade, they try some new fad which is basically the old fad re-dressed, and never really learn much. That's because teachers and their methodologies don't really have that big of an effect. A stable, non-chaotic learning environment and access to the learning material through any kind of presentation, plus books, gets you pretty much as good as it gets. To have a real effect, you need private tutoring for the gifted, or very small talent-nurturing groups that go far beyond the default curriculum. But again, these don't fit the current zeitgeist, so they will keep on pushing "critical thinking" and "how to think", no matter how much they fail.
If you think that "2 days" makes it sound like a lot... You'd be surprised how long it takes to actually make learning materials. I don't want to be too harsh, in case you're a high school student etc. I see it's in good faith, but do note the reaction here.
I read a couple of good analogies to predict how you and others will feel about your AI content: 1) telling people at the breakfast table about the dream you just had, 2) showing all your loose acquaintances the photos of your newborn baby.
That is, it's very precious and interesting to you, but it really isn't to anyone else. This is true about generated text, images and songs. I've generated a lot of what I think of as bangers with Suno but learned quickly that they have zero value to anyone else. Part of the value to me is the thrill and dopamine hits of having generated it. This simply doesn't translate to anyone else. It will take a while until society internalizes this.
This is not to say that AI can't have any role in the creative process. But the effort will still be high, and original human thinking, intent, and input are still very important.
It's a worthwhile lesson, thank you. There was a great deal of effort on my part, but not in the prose. You've taught me something and I appreciate it.
It's not that you're teaching the AI; it's that you're framing the conversation around a piece of reference material and having a conversation about it. Exploring a problem with referential framing, like a white paper or a dense blog post, is a useful cognitive hack. You just have to be careful to pin extraordinary claims to extraordinary evidence.
Just read a good textbook instead of this LLM-written stuff. For example those by Murphy or Prince or Bishop. Or one of many YouTube lecture series from MIT or Stanford. There are many primer 101 tutorials and Medium posts. But if you actually want to learn instead of procrastinating, pick up a real textbook or work through a course.
I've bounced off of many good textbooks. Even Karpathy's YouTube series was too dense for me. I'm trying to come in at a more palatable level.
This was a two day exploration where I provided the syllabus and ran through it with Claude Code, asking questions, trying to anchor it to stuff I understand well. I feel like the artifact has value.
I think chatting with an LLM alongside a textbook can be helpful, but producing learning material when you yourself are a novice is not really that valuable.
I agree that it's worrying that we're moving more and more towards implicit and opaque state. Hiding what exactly is getting edited, very limited tooling to check what the subagents are doing exactly, setting up scheduled and recurring tasks without it being obvious etc.
It's tending more and more towards pushing the user to treat the whole thing as a pure chat-interface magic black box, instead of a rich dashboard that lets you keep precise track of what's going on and gives you affordances to intervene. So it's less a tool and more a magic agent, where the user is not supposed to even think about what the thing is doing. Just trust the process. If you want to know what it did, just ask it. If you want to know whether it deleted all the files, just ask it in the chat. Or don't. Caring about files is old school. Just care about the chat messages it sends you.
Here in SF I talk to people all day who see this as a feature, not a bug, and that's the persona Claude Code and Codex are selling to.
It started being proposed as a thought experiment "why should we care about the files if AI is going to do the edits", then as Opus got better and the hype built up, the rhetorical part of that dropped and now there are plenty of people who swear they don't write code at all anymore and don't see why anyone would.
I think we're in a feedback loop caused by the fact you can totally get away with not writing code anymore for some reasonably complex topics. But that doesn't account for the long term maintainability of the result, and it doesn't account for people who think they're not writing code, but are relying heavily on the fact we haven't fully magicked away the actual code. They're watching the agents like a hawk, doing small bits and pieces at a time, hitting stop when it starts thinking about the wrong thing, etc.
My worry is the market taking the wrong lesson out of the trends and prematurely trying to force the agent-first future well before the tools or the people are ready.
For example the first frontpage post I read just now (I haven't checked others) is I'm fairly sure written with the use of AI (I would guess based on a human draft): https://qht.co/item?id=47566442
I can't prove it but I'm comfortable enough in my judgment to say it.
It's rumors based on vibes. There are attempts to track and quantify this with repeated model evaluations multiple times per day, but no sawtooth pattern has emerged as far as I know.
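The kind of tracking described above can be sketched in a few lines: run the same fixed eval set against the model on a schedule, log the pass rate, and flag sudden drops relative to the running baseline. This is a minimal illustration, not any specific tracker's implementation; `query_model` is a hypothetical stand-in for a real API call.

```python
# Sketch of a repeated-eval tracker: same prompts, scored over time,
# with a simple check for regressions against the running mean.
from statistics import mean

EVAL_SET = [
    ("How many 'r's are in 'strawberry'?", "3"),
    ("What is 17 * 24?", "408"),
]

def query_model(prompt: str) -> str:
    # Hypothetical stand-in: a real tracker would call the vendor's API here.
    canned = {
        "How many 'r's are in 'strawberry'?": "3",
        "What is 17 * 24?": "408",
    }
    return canned.get(prompt, "")

def pass_rate(eval_set) -> float:
    # Fraction of prompts the model answers exactly right.
    hits = [query_model(p).strip() == expected for p, expected in eval_set]
    return sum(hits) / len(hits)

def flag_regressions(history: list[float], drop: float = 0.15) -> list[int]:
    # Indices where the pass rate fell more than `drop` below the
    # mean of all earlier measurements (a crude sawtooth detector).
    flags = []
    for i in range(1, len(history)):
        if mean(history[:i]) - history[i] > drop:
            flags.append(i)
    return flags

if __name__ == "__main__":
    print(pass_rate(EVAL_SET))                       # 1.0 with the canned stub
    print(flag_regressions([0.9, 0.91, 0.88, 0.6]))  # [3]
```

Of course, as the next comment points out, a vendor who can see the prompts could in principle special-case exactly these kinds of fixed eval sets.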
I don't want to go too far down the conspiracy rabbit hole, but the vendors know everyone's prompts so it would be trivial for them to track the trackers and spoof the results. We already know that they substitute different models as a cost-saving measure, so substituting models to fool the repeated evaluations would be trivial.
We also already know that they actively seek out viral examples of poor performance on certain prompts (e.g. counting Rs in strawberry) and then monkey-patch them out with targeted training. How can we be sure they're not trying to spoof researchers who are tracking model performance? Heck, they might as well just call it "regression testing."
If their whole gig is an "emperor's new clothes" bubble situation, then we can expect them to try to uphold the masquerade as long as possible.
Valuable ideas have always been those that others find unintuitive, and it's kinda hard to get people on board, because they are skeptical and need a long-form, tailored explanation to be convinced. If a short elevator pitch convinces them to go home and try to build it, it's probably already being considered by others.