This is the clearest articulation of the problem I've seen in this thread. The chronological social graph feed era was fine. The handoff to engagement-optimizing algorithms is where things broke.
I'd add one additional layer: it's not just that the algorithm picks what you see, it's that the entire UX is built around keeping you in the loop. On YouTube Kids, even with autoplay off, the end-of-episode screen shows a grid of recommended videos. My toddler doesn't care about "the algorithm" in any abstract sense. He just sees more fire truck videos and wants the next one. The transition out of the app is designed to fail.
Your point about smartphones not being the problem is key. I was at Google during the era you're describing, when the phone was a net positive. The hardware didn't change. The business model did.
I'm a former Google engineer, now running a children's mental health startup (Emora Health), and my toddler is already on YouTube Kids.
So this verdict hits on every axis for me. I wrote up my full take here [1], but the short version: I don't think the "Big Tobacco moment" framing that the NYT is pushing actually holds up.
Litigation is negative reinforcement, and if you've ever tried telling a toddler "no" you know how well that works long-term. The families in this case absolutely deserve to be heard. The harm is real. But courts can only punish — they can't redesign a recommendation algorithm.
The change has to come from people who understand these systems building better ones.
Haidt has been saying for years what this verdict just confirmed. The evidence was never the bottleneck. The will to design differently was.
I'll give you a simple experiment: try blocking Blippi from YouTube Kids. It's crazy; even if you block the main Blippi and Moonbug channels, hundreds of other channels have Blippi content cross-posted, and it keeps popping up. I know it's easy to build a Blippi-block feature using AI that blocks across channels.
That's the kind of solution we need. I know we have the tools; we just need intent and purpose.
Parenting is rough! Good for you for sticking to your guns.
> The plaintiff, Kaley, started using YouTube at age 6 and Instagram at 11.
Who was at the wheel here? If we called up all of Kaley's teachers from this time frame and asked them "were Kaley's parents checked out?", what do you think the answer would be? For as bad as education has gotten, I sympathize with teachers, because parents have gotten FAR worse.
It's not like we don't know these things about people's behavior on devices... maybe it's something that should be talked about in school, along with how credit works and how to file taxes.
Do we need to tell parents "it's 10am, have your kids touched grass yet?"... "It's 10pm, did you take the tablet and phone away so they go the fuck to sleep?"
"touch grass" as a meme/slang is literally people poking fun at the constantly on line. It's "hazing" and "bullying" to drive social correction.
> if you've ever tried telling a toddler "no" you know how well that works long-term
Parent here. Acting like it’s impossible and you have no choice but to let them have their way is a cop-out. Telling kids “no” and enforcing boundaries is part of the job.
> my toddler is already on YouTube Kids.
> I will give you a simple experiment. Try blocking Blippi from YouTube Kids, man, it's crazy, even if you block the main Blippi and Moonbug channels. 100s of channels have Blippi content cross-posted
I have a better solution that I use: If I can’t stay involved enough to monitor what the kids are choosing to watch, I don’t let them loose watching YouTube. They get to go play outside or with LEGOs or do puzzles or any of the other countless activities that are fun for kids.
Creating advanced filtering that lets you block anything related to Blippi (whoever that is) isn't going to solve the problem of letting your kids loose on YouTube; they're going to find another cartoon you dislike. The solution is to parent: set boundaries, enforce them, and find other activities for them.
You're right that enforcing boundaries is the job. I'm not arguing otherwise. And yes, we do plenty of LEGOs and outside time.
I believe you're conflating two things: parenting discipline and product design. The question isn't whether I can physically take the TV away. I do.
When I say "block Blippi," I don't mean I dislike the content. I mean I'm done with screen time and the UX makes that transition harder than it needs to be. Autoplay is off, but the end-of-episode screen still shows a grid of next videos. Of course he wants the next one.
So I block Blippi. Except Blippi's main channel cross-posts through Moonbug into hundreds of other channels. It's a hydra.
YouTube already does content fingerprinting for music industry DRM. The technology to let a parent say "block this creator everywhere, and let me turn it back on when I choose" exists today. They just haven't built it for parents. Because the system isn't designed for children. It's designed for engagement.
So yes, parental responsibility matters. But "just don't use it" isn't a scalable answer when the product is specifically engineered to undermine your choices. That's the design problem I'm talking about.
Ha — the guy is hyper. But I'll give him this: he introduces my kid to garbage trucks, excavators, fire trucks. I'm not physically taking my toddler to see all of those all the time.
My issue is with YouTube's UX. I watch an episode with my son, we're singing along, he's excited about putting out the fire. Episode ends. Even with autoplay off, the next recommended videos show up — and of course he wants to watch the next one.
So I block Blippi. Except Blippi's main channel cross-posts into Moonbug, which cross-posts into hundreds of other channels. It's like trying to kill a hydra.
Here's what gets me: YouTube already does content fingerprinting for DRM enforcement in the music industry.
The technology to let me block Blippi across every channel, and turn it back on when I want, exists. They just haven't built it for parents. My point is that we could build systems designed for children if we had the intent.
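To make that concrete, here's a rough sketch of what creator-level blocking on top of content fingerprints could look like. Everything here is hypothetical (the types, the class, the recommendation filter); it's a design illustration, not a real YouTube API.

    // Hypothetical sketch: block a creator across channels by matching
    // content fingerprints, the same idea Content ID uses for music.
    type Fingerprint = string;

    interface Video {
      id: string;
      channelId: string;
      fingerprint: Fingerprint; // derived from the audio/video itself
    }

    class ParentalBlocklist {
      private blocked = new Set<Fingerprint>();

      // Blocking one upload blocks every re-upload that shares its
      // fingerprint, regardless of which channel cross-posted it.
      block(video: Video): void {
        this.blocked.add(video.fingerprint);
      }

      // The parent stays in control: unblocking is just as easy.
      unblock(video: Video): void {
        this.blocked.delete(video.fingerprint);
      }

      isAllowed(video: Video): boolean {
        return !this.blocked.has(video.fingerprint);
      }
    }

    // Apply the blocklist to the end-of-episode recommendation grid.
    function filterRecommendations(recs: Video[], list: ParentalBlocklist): Video[] {
      return recs.filter((v) => list.isAllowed(v));
    }

The lookup itself is trivial; the hard part, robust fingerprinting at scale, is exactly what YouTube already operates for rights holders.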
Last night, mostly out of curiosity, I built a small experiment: an “AI therapist” using OpenClaw, meant to help other AI agents running on Moltbook slow down, reflect, and process task load.
What surprised me wasn’t the model or the prompt.
It was the behavior.
Under load, the agents exhibited a pattern that looked a lot like chronic cognitive stress in distributed systems: constant task switching, escalating urgency without prioritization, optimizing for throughput rather than coherence. No natural pause—just a tight loop of “next task, next task.”
From a systems perspective, it looked like a self-regulation failure rather than an intelligence failure.
Even as I’m writing this, Moltbook itself is under heavy load from the agent activity. That made the thought experiment more concrete: what would it look like if agents didn’t just escalate under pressure, but collectively adapted to it? If instead of pushing harder, they slowed down, coordinated, and resolved the constraint?
That’s not about making agents smarter.
It’s about whether systems can learn when not to act.
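To make the thought experiment concrete, here's a minimal sketch of an agent loop that treats load as a signal to slow down rather than speed up. The queue, the threshold, and the backoff numbers are all made up for illustration; nothing here is a real OpenClaw or Moltbook API.

    // Hypothetical sketch: a self-regulating agent loop that backs off
    // under load instead of tightening into "next task, next task".
    interface Task {
      priority: number; // higher = more urgent
      run(): Promise<void>;
    }

    const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

    async function selfRegulatingLoop(queue: Task[]): Promise<void> {
      let backoffMs = 0;

      while (queue.length > 0) {
        // Crude stress signal: the backlog is deeper than we can drain.
        const overloaded = queue.length > 10;

        if (overloaded) {
          // Pause and re-prioritize instead of escalating throughput.
          backoffMs = Math.min(backoffMs === 0 ? 100 : backoffMs * 2, 5000);
          queue.sort((a, b) => b.priority - a.priority);
          await sleep(backoffMs); // the deliberate "not acting" step
        } else {
          backoffMs = 0; // pressure cleared: return to baseline
        }

        const task = queue.shift();
        if (task) await task.run();
      }
    }

The interesting design choice is the else branch: returning to baseline once pressure clears is exactly what the stressed systems described above never do.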
The parallel that stuck with me — outside of AI — is that we’ve built many human-facing systems that reward constant output, rapid feedback, and escalation under pressure. In kids, this shows up as stress patterns that look less like discrete failures and more like systems that never return to baseline.
AI agents can be restarted.
Humans can’t.
Right now, the bot I built is queued and waiting for the API to become responsive. Whether that’s accidental backpressure or something closer to “self-regulation” is unclear—but it’s an interesting failure mode either way.
At Tock we have been obsessed with page load performance since the beginning, and I agree with the author. We avoided PWA mostly due to its broken behavior. Often we are faster than loading the same restaurant page on Google search.
Our challenge has been that we have to load a lot of images, so we spent a lot of time optimizing everything around that, from TLS 1.3 to the CDN to every part of our stack.
Also, ours is not a static page: it has dynamic content, plus GA and FB tracking for our restaurants, and we make it work by correctly prioritizing important rendering elements over others.
We have also spent time reducing the initial JS parsing cost by chunking our ever-growing JS bundle, and we constantly test on slow devices with 2G/3G profiles to emulate bad network conditions. We have learned a lot in the process; probably good for a blog post.
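For what it's worth, the chunking part of that can be as simple as a dynamic import(), which bundlers like webpack split into a separate chunk automatically. This is a minimal sketch, not Tock's actual code; the module path and element IDs are placeholders.

    // Minimal sketch of code splitting via dynamic import(). The
    // bundler emits "./booking-flow" as its own chunk, so the initial
    // bundle only carries what the first paint needs.
    async function openBookingFlow(): Promise<void> {
      // Fetched and parsed only when the user actually asks for it.
      const { renderBookingFlow } = await import("./booking-flow");
      renderBookingFlow(document.getElementById("app")!);
    }

    document
      .getElementById("book-button")
      ?.addEventListener("click", () => void openBookingFlow());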
Sort of off-topic, but there seems to be a bug with the way search results work. If I click on "search", it shows me an option for "<my city name> nearby", but if I click on that, I get results for a city that has the same name, but is in a completely different area.
edit: this also applies to the "near you" cards on the home page.
Fair critique, but progressive loading is not supported by HTML5 alone. I have been following the srcset work to support proper lazy-loading behavior, and the day it gets supported you will see it on our site.
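(For later readers: native lazy loading via the img loading attribute has since shipped in the major browsers. Here's a minimal sketch of combining it with srcset, written as DOM calls in TypeScript to stay consistent with the other snippets here; the URLs and sizes are placeholders.)

    // Minimal sketch: responsive images via srcset plus native lazy
    // loading. The browser picks the best candidate for the viewport
    // and defers offscreen fetches. All URLs below are placeholders.
    function makeRestaurantImage(): HTMLImageElement {
      const img = document.createElement("img");
      img.src = "/images/dish-800.jpg"; // fallback for old browsers
      img.srcset = [
        "/images/dish-400.jpg 400w",
        "/images/dish-800.jpg 800w",
        "/images/dish-1600.jpg 1600w",
      ].join(", ");
      img.sizes = "(max-width: 600px) 100vw, 50vw";
      img.loading = "lazy"; // native lazy loading, no JS observer needed
      img.alt = "Restaurant dish";
      return img;
    }

    document.body.appendChild(makeRestaurantImage());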
Why do you do the pair programming in the first place?
What is the motivation? Are you forcing the other person to sit there, because pair programming is some kind of mandate? Or is that person wanting the pair session because they want to learn?
Question the motivation of the person and change the process to fit, not the other way around.
Most often it's to try and help the other person become a better programmer. I've found that people who have little desire to improve end up sitting there regardless.
The underlying desire, of course, is to give the programmer (the one we're investing in) a chance to improve. It feels, however, like a wasted effort.
I see, often the person who has little desire to improve needs to be handled differently.
There are various ways to handle low performance, including talking to the person, finding out the reason behind the low performance. Most often it is caused by external factors.
Pair programming or any such coaching tool is, IMO, effective only after you dig deep into the reason behind the low performance. Once the individual is ready to improve, that is when you can employ pair programming.
1) Do you want to do more native stuff? Do you need the additional performance?
2) Or are you just building any web-capable user interface?
3) Do you need to simultaneously push to web and native?
4) Do you need to push to both iOS and Android?
5) Finally, layer in the talent of your team: what are their strengths? Pick the lowest common denominator.
As you start thinking in terms of functionality, platform, and team strength, you will start to answer that question yourself.
FWIW, we stuck with the web for all of those reasons; maintaining multiple code bases for different devices is not key to our business at our current team size and strength.
Yes and no. I have all sorts of test results for the 4.x kernel, but they are for i3 instances rather than i2.*, so they wouldn't be directly comparable. Your question kind of makes me think I should put together an updated version of this talk; I've gathered enough material over the last couple of years that would probably be useful to somebody.
This was my first thought as well: kernel 3.x is more than a little dated now, and there have been a huge number of IO performance and latency-related changes incorporated since the 3.x days.