Education research is really low quality. Like so many other fields in the social sciences, the results rarely generalize beyond the direct findings, and only support the hypotheses in the mildest way. It cannot robustly guide decision making.
The fact that studies on screens vs books cannot get a consistent answer says enough. I checked #3 of your links, and the amount of bullshit is astonishing. The cited articles offer vague, unresearched explanations for contradictory findings, or point at differences in the stimuli, something which should obviously never have happened. After some cherry picking, article #3 treats the remaining studies as equal and reliable enough to throw in a big bag, as if that solves the problem.
Think of it like this: the replication crisis in cognitive psychology was found trying to replicate some of the better studies. The average education research study is several levels below that. It'll have a replicability of 0.1 or worse.
Yep. Part of the reason is the ethical problems with experimenting on children.
And part of the problem is that there is a ton of money to be made in education, so there is a lot of incentive to create or cherry pick data promoting one’s preferred (most profitable) policies.
Why do you think children will learn anything from a remark on a specific problem? If it were that simple, teaching would be easy. (Notice that teaching smart kids is easy).
Much of education requires making errors until you get it right a few times in a row, and paying attention to those errors. Getting an explanation of your errors is only part of that process. No LLM can provide the rest of it.
I use a very simple encryption plus some padding (fluff in the article), but the email address gets updated by JS. This requires JS plus evaluating the resulting DOM. If you don't evaluate JS, the address will be something like "please@activate.javascript". Or you could use "potus@whitehouse.gov", in which case clueless scrapers end up spamming the US government.
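The idea above can be sketched in a few lines. This is not the author's actual code: ROT13 as the "very simple encryption" and `#` as the padding character are assumptions for illustration.

```javascript
// Sketch of the obfuscation scheme described above (assumed details:
// ROT13 as the "simple encryption", '#' characters as the padding).

// ROT13: rotate letters by 13 places, leave other characters alone.
function rot13(s) {
  return s.replace(/[a-zA-Z]/g, (c) => {
    const base = c <= "Z" ? 65 : 97;
    return String.fromCharCode(((c.charCodeAt(0) - base + 13) % 26) + base);
  });
}

// Decode: strip the padding "fluff", then undo the encryption.
function decodeEmail(obfuscated) {
  return rot13(obfuscated.replace(/#/g, ""));
}

// A scraper that doesn't run JS only ever sees the decoy address:
//   <a id="contact" href="mailto:please@activate.javascript">contact</a>
// With JS enabled, the real address is swapped into the DOM on load:
//   document.addEventListener("DOMContentLoaded", () => {
//     const a = document.getElementById("contact");
//     a.href = "mailto:" + decodeEmail("nyv#pr@rkn#zcyr.pbz");
//   });
```

A scraper would have to execute the script and re-read the DOM to get the real address, which most bulk harvesters don't bother doing.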
The best works of Bach and Beethoven are from later in their lives, although neither lived to be 85 (they died at 65 and 56, respectively), and both also wrote great works in their younger years. Bruckner kept improving with age. There are also composers who lost it at a later age: Ravel, famously. Classical music is difficult, so experience does allow a better overall view, something which a lot of short works (such as pop songs) don't need.
If I remember correctly, Bach had about 20 children, and he dedicated a lot of his time to their education. A few became very successful musicians. It is an example that later in life much of our value lies not so much in doing as in helping form the next generation.
Ravel wrote his most famous work, Bolero, after age 50, and suffered a traumatic head injury a few years later. Not a good example, except perhaps that the odds of bad things happening increase with longevity.
He wasn't happy with the Bolero, and it certainly wasn't his best work. The piano concerto in G was also late, and that's definitely better. I didn't know about the head trauma.
Weather forecasts are notoriously iffy, and accuracy drops with time, but we understand the physics behind it (to a large extent). There's also a lot of fine-grained data available. For some arbitrary time series, there's only one data sequence, and the model is unknown. Extrapolation then becomes a lot more magical.
Axios has a long history and is included in a lot of code, often as an indirect dependency. Just check its npm page: it has 174025 dependents as of this moment, including a lot of new packages (I see openclaw and mcp-related packages in the list).
And with LLMs generating more and more code, the risk of copying old setups increases.
To me, the music is a bland mixture of game and elevator music. It sounds entirely like sequenced music played without any expression. Minimal, in this case, just describes how little control was exercised over the outcome, not an art philosophy.
This tool, however, is specifically built for mass surveillance. It serves no other purpose. The tool is broken, and everybody knows it. The tool makers are at least as guilty as those who use it.
The tool is unethical, not broken. And unfortunately it remains legal for the time being. In that sense it's a social or political problem, and one that can be fixed.
I think it would be too easy to create an uncontrollable cascade of function calls, causing terrible performance. IMO, it's best to keep concerns separated. Perhaps the current JS/DOM interface is a bit cumbersome, but it gets a lot done. What is your reason for merging?
Good question. I personally think that separating by concerns is good. But when problems arise, like boundaries getting crossed or preprocessors like Sass adding language features to CSS, maybe that proves those things are actually not two concerns but one.
Lately I have been using Catch2 (a C++ testing framework) and wanted to benchmark some code. My first instinct was to look for a benchmarking framework, but to my surprise Catch2 has one included!
Most people would argue that a testing framework should not include a benchmarking framework. But using it myself showed me that the concerns of testing and benchmarking for performance regressions are similar. Similar enough that I would prefer to have them together.
Most people, me included, ask: "Should this be split into more?" But we seldom ask: "Should this be merged into one?"