No, they totally grew enough calories for themselves. My grandparents lived like that. They farmed around 15 hectares, which was actually quite a lot. You can easily grow enough calories for your family on 5 hectares, or even less if you have access to modern cultivars and artificial fertilizer. It's just that even poor people like variety, and will trade some of their crops for things they cannot make at home efficiently, like sugar, fish, or candy.
Prior to the Industrial Revolution, nobody could go hunt in the woods, because the woods were the King's, and poaching the King's game carried the death penalty. The situation was similar on the continent: the tiny slivers of remaining woodland were off limits.
Granted, things were different in the New World, as a result of the mass depopulation event following the Columbian exchange. But even there, the megafauna was hunted to extinction soon after humans first arrived.
Anyway, the point is that no, prior to the Industrial Revolution, the world was full of scarcity, not abundance.
If this is the case, then why doesn't everyone get the top score? The answer is, of course, that it's not so simple, and you can't just learn to the test.
That’s just like with sports: anyone can learn how to train himself, and anyone can improve with training, but in the end, some people will end up faster, and some people will end up slower.
My point was exactly that the chances are NOT the same for everyone. A kid from an affluent family might have both better tutoring and fewer troubles in life that could deter them from learning.
But of course, in addition to that, there is always also a genetic component, as in sports.
The question is what you're measuring. You can have a test that gives you whatever distribution of scores you like. But is the thing it measures competency in the subjects it covers, general intellectual ability, familiarity with the test format, or something else? The worst negative outcome is usually the subordination of learning itself to preparing for the exam, which can happen even when the gatekeeping function of the exam still works perfectly.
All scientific research on this topic points to the conclusion that standardized test results are the single best predictor of subsequent academic performance. Some studies suggest that using GPA in addition to test results improves the prediction accuracy, but the marginal increase is very small, and it increases variance.
Everyone is well familiar with the downsides of standardized tests, but so far, nobody has proposed any alternative that works better. Learning to the test is not great, but what's the alternative? It's not like anyone knows how to teach in a way that results in more actual knowledge and skills despite lower test scores.
> All scientific research on this topic points to the conclusion that standardized test results are the single best predictor of subsequent academic performance.
And academic performance is measured how? With standardized tests?
Obviously, yes. This is not circular: it is by no means tautological that people who did well on test X will do well on a completely different test Y that tests different knowledge and skills. The fact that they do is strong evidence for the value of using these tests for admission.
It depends on what skills are necessary to succeed at test X and Y.
While the subject matter or its details might differ, it's possible that things like "knowing how to learn to the test (i.e. cramming)" or "reading between the lines of what the teacher/professor says is relevant" belong to these skills. And these can absolutely be transferred between test X and test Y.
So the question is, how much do these tests actually test skills in the subject matter and how much do they test "meta skills"?
The research shows that the tests are predictive almost precisely to the extent they test “meta skills”. We can even measure how much individual questions test meta skills vs specific skills. You can learn about this by searching for the terms “factor analysis” and “g-loading”.
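For anyone who hasn't met those terms, here is the single common-factor model in its minimal form (my notation, not tied to any particular paper): standardized subtest scores x_i are modeled as

```latex
x_i = \lambda_i g + \varepsilon_i, \qquad
\operatorname{Corr}(x_i, x_j) = \lambda_i \lambda_j \quad (i \neq j)
```

where g is the common factor with unit variance, the \varepsilon_i are mutually uncorrelated test-specific terms, and \lambda_i is the "g-loading" of subtest i: the closer it is to 1, the more that subtest measures the shared factor rather than anything specific.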
The main problem I raised with the Gaokao isn't that it's biased, but that it has negative effects on the way education is conducted prior to university.
It's not difficult to find first-hand accounts of this; go browse social media posts by teachers in mainland China if you're curious.
There are similar problems of "teaching to the test" in other contexts, too.
I'm not categorically opposed to standardized testing and I never said I was.
Traditional defense contractors have low profit margins because of the cost-plus pricing on their contracts. They are literally only allowed to charge the cost they incur plus some fixed profit percentage. As such, they have an incentive to drive up costs, so that their profit, while a low percentage, is taken on a high base: 10% of a $10 billion program is ten times the profit of 10% of a $1 billion one.
SpaceX wouldn't need to do that. Companies like Anduril are already trying to win contracts on a fixed-price model, and if they succeed, they'll have much higher profit margins than Raytheon et al.
The estimates that put Golden Dome at anything close to a trillion dollars are premised on the assumption that it will be much more expensive to build than the administration believes. If it ends up as fixed-price bids and costs less than people think, it will be well under $200 billion.
That's right. And Golden Dome (which is definitely a multi-trillion-dollar program if space-based weapons are employed) has a bunch of convenient oligarch-friendly properties, like built-in planned obsolescence from orbital decay, which amplifies a launch monopoly.
His political goals seem to align pretty well with the goals of the democratically elected governments, which are perfectly happy to buy products and services from him. You might not agree with their goals, but it's absurd to suggest that this should make him ineligible for clearance. Clearance is not some kind of "good boy with the right politics" certification; it's rather "is this person trustworthy enough to depend on in matters of national security".
On modern machines, looking things up can be slower than recomputing them, when the computation is simple. This is because memory is much slower than the CPU, which means you can often compute something many times over before the answer from memory arrives.
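As a hypothetical illustration of the trade-off (the function and table here are mine, chosen only because byte reversal is a classic LUT candidate): a miss all the way to main memory can cost hundreds of cycles, while the arithmetic version costs a handful of ALU ops every single time.

```c
#include <stdint.h>

/* A classic lookup-table candidate: reversing the bits of a byte. */
static uint8_t rev_table[256];

static void init_rev_table(void) {
    for (int i = 0; i < 256; i++) {
        uint8_t b = (uint8_t)i, r = 0;
        for (int k = 0; k < 8; k++) {
            r = (uint8_t)((r << 1) | (b & 1));
            b >>= 1;
        }
        rev_table[i] = r;
    }
}

/* One load: fast while the table is cache-hot, but a miss to main
   memory can cost hundreds of cycles. */
static uint8_t reverse_lut(uint8_t b) { return rev_table[b]; }

/* A few ALU ops and zero memory traffic: roughly constant cost
   regardless of what the rest of the program does to the caches. */
static uint8_t reverse_alu(uint8_t b) {
    b = (uint8_t)(((b & 0xF0) >> 4) | ((b & 0x0F) << 4));
    b = (uint8_t)(((b & 0xCC) >> 2) | ((b & 0x33) << 2));
    b = (uint8_t)(((b & 0xAA) >> 1) | ((b & 0x55) << 1));
    return b;
}
```

Which one wins depends entirely on whether the table stays resident, which is the point of the comments below.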
Not just modern machines: the Nintendo 64 was memory-bound under most circumstances, and as such many traditional optimizations (lookup tables, loop unrolling) can be slower on the N64. The loop-unrolling case is interesting: because the CPU has to fetch more instructions, it puts more strain on the memory bus.
If you're curious: on the N64, the graphics chip is also the memory controller, so everything the CPU can do to stay off the memory bus has an additive effect, letting the graphics hardware do more graphics. This is also why the N64 has weird 9-bit RAM: it let them use an 18-bit pixel format while still taking only two bytes per pixel. For CPU requests, the memory controller ignored the ninth bit, presenting a normal 8-bit byte.
They were hoping that by having high-speed memory (250 MHz, while the CPU ran at 90 MHz) it could provide for everyone, and it did OK; there are some very impressive games on the N64. But on most of them the CPU is running fairly light, gotta stay off that memory bus.
> This is also why the N64 has weird 9-bit RAM: it let them use an 18-bit pixel format while still taking only two bytes per pixel. For CPU requests, the memory controller ignored the ninth bit, presenting a normal 8-bit byte.
The Ensoniq EPS sampler (the first version) used 13-bit RAM for sample memory. Why 13 and not 12? Who knows? Possibly because they wanted it "one louder", possibly because their big rival, the E-Mu Emulator series, used μ-law codecs, which have the same effective dynamic range as 13-bit linear.
Anyway, you read a normal 16-bit word using the 68000's normal 16-bit instructions, but only the upper 13 bits were actually valid data from the RAM; the rest were tied low. Haha, no code space for you!
The N64 was a particularly unbalanced design for its era so nobody was used to writing code like that yet. Memory bandwidth wasn't a limitation on previous consoles so it's like nobody thought of it.
Unless your lookup table is small enough to only use a portion of your L1 cache and you're calling it so much that the lookup table is never evicted :)
Even that is not strictly necessary; I have gotten major speedups from LUTs even as large as 1 MB, because the lookup distribution was not uniform. Modern CPUs have high cache associativity and faster transfers between L1 and L2.
L1D caches have also gotten bigger, as big as 128 KB. A Deflate/zlib implementation, for instance, can use a brute-force, full 32K-entry LUT for the 15-bit Huffman decoding on some chips, no longer needing the fast small table.
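A sketch of what such a brute-force table looks like; the names and layout here are illustrative, not zlib's actual internals. Because a Deflate code is at most 15 bits, every possible 15-bit lookahead can be precomputed, so decoding one symbol becomes a single load:

```c
#include <stdint.h>

#define MAXBITS 15  /* longest possible Deflate code */

typedef struct {
    uint16_t symbol;  /* decoded symbol */
    uint8_t  length;  /* bits this code actually consumes */
} Entry;

/* Fill the direct-indexed table for one code. Deflate sends codes
   starting from the low bits, so any lookahead whose low `length`
   bits match `code` decodes to the same symbol; replicate the entry
   across all 2^(MAXBITS - length) such slots. */
static void fill_code(Entry *table, uint32_t code,
                      uint8_t length, uint16_t symbol) {
    for (uint32_t hi = 0; hi < (1u << (MAXBITS - length)); hi++)
        table[(hi << length) | code] = (Entry){ symbol, length };
}

/* Decoding is then: peek MAXBITS bits, do one lookup, emit
   entry.symbol, and consume entry.length bits from the stream. */
```

The table is 32768 entries, so it only pays off on chips where that fits comfortably in L1D, which is exactly the point above.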
Interesting. About 20 years ago, it must have been the other way around because I remember this paper [1] where the authors were able to speed up the log function by making use of a lookup table in the CPU cache.
I make things faster all the time by leveraging various CPU caches, sometimes even disk or networked disks. As a general principle though, memory lookups are substantially slower than CPU (and that has indeed changed over time; a decade or three ago they were close to equal), and even cache lookups are fairly comparatively slow, especially when you consider whole-program optimization.
That isn't to say that you can't speed things up with caches, but that you have to be replacing a lot of computations for even very small caches to be practically better (and even very small caches aren't helpful if the whole-program workload is such that you'll have to pull those caches from main RAM each time you use them).
As to your paper in particular: their technique still assumes reasonably small tables that you access constantly (so that you never have to reach out to main RAM), even for the hardware of the time, and part of what makes it faster is that it's nowhere near 1-ULP accuracy.
Logarithms are interesting because especially across their entire domain they can take 40-120 cycles to compute, more if you're not very careful with the implementation. Modern computers have fairly fast floating-point division and fused multiply-add, so something I often do nowadays is represent them as a ratio of two quadratics (usually rescaling the other math around the problem to avoid the leading coefficient on one of those quadratics) to achieve bounded error in my domain of interest. It's much faster than a LUT (especially when embedded in a larger computation and not easily otherwise parallelizable) and much faster than full-precision solutions. It's also pretty trivially vectorizable in case your problem is amenable to small batches. Other characteristics of your problem might cause you to favor other solutions.
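A minimal sketch of the idea. The coefficients below are just the [2/2] Padé approximant of ln(x) about x = 1, picked because they're easy to check by hand; in practice you'd fit all the coefficients (e.g. with a Remez exchange) over your actual input range to get the bounded error described above:

```c
#include <math.h>
#include <stdio.h>

/* ln(x) as a ratio of two quadratics: one divide plus a few
   multiply-adds, both cheap on modern hardware. */
static double log_ratquad(double x) {
    return 3.0 * (x * x - 1.0) / (x * x + 4.0 * x + 1.0);
}

int main(void) {
    /* Error is tiny near x = 1 and grows toward the domain edges,
       which is why you refit per domain of interest. */
    for (double x = 0.5; x <= 4.0; x *= 2.0)
        printf("x=%4.1f  approx=%+.6f  libm=%+.6f\n",
               x, log_ratquad(x), log(x));
    return 0;
}
```

The rescaling trick mentioned above amounts to normalizing away one coefficient so the whole thing is two FMAs per quadratic plus one divide.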
Logarithms are interesting because there's hardware to approximate them built into every modern processor as part of floating point. If you can accept the error, you can abuse it to compute logs with a single FMA.
An example of an exp and a log respectively from my personal library of bit hacks:
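(The snippets themselves didn't make it into the thread. For reference, here is a common public version of the same trick, not the commenter's own code; the constant 126.94269504 is one conventional choice for centering the error.)

```c
#include <stdint.h>

/* Reinterpreting a positive float's bits as an integer gives
   2^23 * (biased exponent + mantissa fraction), i.e. a piecewise-
   linear approximation of log2(x); one multiply-subtract (an FMA)
   recovers it. Absolute error in log2 stays below about 0.06. */
static float fast_log2(float x) {
    union { float f; uint32_t i; } u = { x };
    return (float)u.i * (1.0f / (1 << 23)) - 126.94269504f;
}

/* The inverse trick: build the bit pattern directly. Valid roughly
   for -126 < x < 128; relative error is a few percent at worst. */
static float fast_exp2(float x) {
    union { float f; uint32_t i; } u;
    u.i = (uint32_t)((x + 126.94269504f) * (1 << 23));
    return u.f;
}
```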
It’s a delicate balance and really hard to benchmark. You can write a micro benchmark that keeps the lookup table in cache but what if your function isn’t the only thing being done in a loop? Then even if it’s in the hotpath, there’s insufficient cache to keep the table loaded the entire way through the loop and lookup is slower.
TL;DR: it depends on the usage, and we really should have multiple functions specialized to the properties of the caller's needs, where the caller can choose between a cache-based and a compute-based approach.
I don't think these functions are implemented the way they look in their math form. Atan2 is something like a rotating line-to: if the given point is close to any pixel of the line-to, it returns how many times the line-to was rotated. It is almost a motion, but an algorithm. This is why I'm saying it is brute force.
> I think it is stored like sintable[deg]. The degree is index.
I can think of a few reasons why this is a bad idea.
1. Why would you use degrees? Pretty much everybody uses and wants radians.
2. What are you going to do about fractional degrees? Some sort of interpolation, right? (See the sketch after this list.)
3. There's only so much cache available, are you willing to spend multiple kilobytes of it every time you want to calculate a sine? If you're imagining doing this in hardware, there are only so many transistors available, are you willing to spend that many thousands of them?
4. If you're keeping a sine table, why not keep one half the size, and then add a cosine table of equal size? That way you can use the double- and sum-angle formulae to get the original range back and pick up cosine along the way. Reflection formulae let you cut it down even further.
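Here's the sketch promised in point 2, combining the objections above (table size and names are arbitrary): index by fraction of a full turn instead of degrees, and linearly interpolate the fractional index. Even so, it costs about 1 KB of cache and yields only a few digits of accuracy:

```c
#include <math.h>

#define TABLE_BITS 8
#define TABLE_SIZE (1 << TABLE_BITS)     /* 256 entries = 1 KB of float */

static float sin_table[TABLE_SIZE + 1];  /* extra entry so i + 1 never wraps */

static void init_sin_table(void) {
    for (int i = 0; i <= TABLE_SIZE; i++)
        sin_table[i] = (float)sin(6.283185307179586 * i / TABLE_SIZE);
}

/* Index by turns (fractions of a full circle) rather than degrees,
   and linearly interpolate between adjacent entries. */
static float sin_lut(float radians) {
    float turns = radians * (float)(1.0 / 6.283185307179586);
    turns -= floorf(turns);              /* wrap into [0, 1) */
    float pos  = turns * TABLE_SIZE;
    int   i    = (int)pos;
    float frac = pos - (float)i;
    return sin_table[i] + frac * (sin_table[i + 1] - sin_table[i]);
}
```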
There's a certain train of thought that leads from (2).
a. I'm going to be interpreting values anyway
b. How few support points can I get away with?
c. Are there better choices than evenly spaced points?
d. Wait, do I want to limit myself to polynomials?
Following it you get the answers "b: just a handful", "c: oh yeah!", and "d: you can if you want, but you don't have to". Then if you do a bunch of thinking you end up with something very much like what everybody else in these two threads has been talking about.
It isn't a good idea to store such values in code. I think it is something that is computed when a programming environment starts up, e.g. when you run "python", or when you install "python".
I'm trying to understand how Math.sin works. There is Math.cos; it is sin shifted by 90 degrees. So not all of them are pieces that complete a big puzzle.
There's no nice way of saying this, and I mean no malice here, but I think you're exceptionally confused or ignorant, and I don't think it would be rewarding for either of us to continue this conversation.
This is not about being correct. The posted article looks like clickbait. I am digging into what it is really about. I would like to dig more, imho. I am looking for work here.
And why do you think Congress passed this law? What prompted them to micromanage the military in this manner? I encourage you to research this topic; "McNamara's folly" will serve as a good starting keyword. Spoiler: it has everything to do with the unsuitability of low-IQ enlistees.
FWIW, the ASVAB is an IQ test. Any intelligence researcher will tell you so, because it exhibits the usual positive manifold, you find the usual g factor in it, and it shows high correlation with other IQ tests. The military doesn't usually call it that for political reasons, but will happily admit in private that the ASVAB and the WAIS measure the same thing: https://web.archive.org/web/20200425230037/https://www.rand....
Sorry, what exactly do you mean by "is representative of general intelligence"? This is a very abstract statement. What does this mean in scientific, empirical terms? What kind of facts would we observe in a world where this is true? What empirical observations would we make in a world where it's false?
> Sorry, what exactly do you mean by "is representative of general intelligence"? This is a very abstract statement.
No need to apologize. Perhaps my g is too low to describe my thoughts properly.
> "is representative of general intelligence"?
This factor that is derived from the positive correlations, g, is called general intelligence. So g is nominally general intelligence, but is g actually what the name implies? One can take any number of positively correlated but independent things, and there will always be some factor that can be derived from them. However, that does not mean the underlying factor is necessarily causal.
> This is a very abstract statement.
We are discussing abstract concepts.
> What does this mean in scientific, empirical terms?
That causality would be scientifically and empirically verifiable.
> What kind of facts would we observe in a world where this is true? What empirical observations would we make in a world where it's false?
Alas, that is precisely the point I was trying to paraphrase from Shalizi. Whether g is true or false, the result wouldn't look any different. The methodology being used cannot determine what is true or false, and that is the crux of this entire problem.
> One can take any number of positively correlated but independent things, and there will always be some factor that can be derived from them.
I hope you understand that your vague question cannot be seen as equivalent to this rather more concrete statement. That’s why I asked for clarification, and your patronizing comments were really not called for.
In any case, Shalizi is very wrong, probably because he is entirely unfamiliar with the literature. He is wrong on multiple counts.
First, yes, any number of positively correlated measurements will yield a common factor. However, when talking about g, this is not an artifact of how we constructed IQ tests. Shalizi says:
> What psychologists sometimes call the "positive manifold" condition is enough, in and of itself, to guarantee that there will appear to be a general factor. Since intelligence tests are made to correlate with each other, it follows trivially that there must appear to be a general factor of intelligence.
But this is just not true. Tests are not made to correlate with each other. Any time anyone attempts to construct a test of general mental ability, we always find the same g factor, even if they explicitly attempt to make a battery that measures distinct, uncorrelated mental aptitudes. Observe how Shalizi fails to provide a single example of a test that does not exhibit the positive manifold with other tests.
Second, contrary to Shalizi, we know that g is the predictive component of IQ tests. IQ predicts real-world outcomes very well, but what is really interesting is that the predictive power of the individual subtests of an IQ test is almost perfectly correlated with the g-loadings of those subtests. This would be very surprising if g were just a statistical artifact.
Shalizi says:
> So far as I can tell, however, nobody has presented a case for g apart from thoroughly invalid arguments from factor analysis; that is, the myth.
But this is just baffling if you have any familiarity with the literature.
> Whether g is true or false, the result wouldn't look any different. The methodology being used cannot determine what is true or false, and that is the crux of this entire problem.
That's just not true. For example, if g were a statistical artifact, one of the hundreds of intelligence tests devised would not have exhibited the positive manifold with all the others. It would not be correlated with heritability. It would not be correlated with phenotype features like reaction time. A world where g is a statistical artifact would look much different from ours.
There's no debate on the construct validity of IQ among the experts in the field. The consensus position is that IQ tests measure something real, that the tests enjoy extremely high measurement invariance (which implies construct validity), and that the results have extremely high predictive validity (relative to literally anything else in the entire field of psychology). The current debate is more along the lines of whether the contribution of genes to variance in IQ is closer to 30% or to 80%.
Wait, this comment starts out with an assertion about one scientific question (the construct validity of a quantitative psychological metric) and ends with a statement about the range of a totally different question, and it's one studied by different fields than the former question.
I'm really struggling to understand what your point is. The person I replied to was wrong as to where the current debate among experts is, so I pointed it out, and gave an example of where the debate currently is. Is that really such a strange thing to say?
Yes, but since the heritability is high, the average IQ of the children will be close to the average IQ of the parents, despite the fact that it will tend to regress towards the mean.
That’s exactly how it works in the standard additive model of heritability, and we have lots of empirical evidence that heritability of intelligence matches that model very well.
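Concretely, under the standard additive model this is just the parent-offspring form of the breeder's equation (μ is the population mean, h² the narrow-sense heritability):

```latex
\mathbb{E}[\text{child}] = \mu + h^2 \left( \frac{\text{father} + \text{mother}}{2} - \mu \right)
```

With h² high, the expected child value stays close to the midparent value while still regressing part of the way toward μ.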