[flagged] No Fair Sex in Academia: Evidence of Discrimination in Hiring to Editorial Board (openpsych.net)
37 points by temp8964 on May 31, 2022 | 25 comments


As someone who has gone through quite a few application processes in academia, I can say that the results of this (subjective) survey:

> We followed up our research with a survey of 231 academics, asking for their attitudes towards discrimination in hiring to editorial boards. Although two-thirds of academics supported no bias, for every 1 academic who supported discrimination in favour of men, 11 supported discrimination in favour of women. Our results were consistent with the hypothesis that academics and journal editors are biased in favour of women, rather than against women

do not surprise me at all, and qualitatively the bias favoring women has seemed true in my experience.

HOWEVER, it's very hard to give this paper any credibility when the authors are willing to casually drop a statement like

> As mentioned, the variance in intelligence is higher amongst males, and their average also seems to be somewhat higher

on the third page. I'm aware there have been one or two studies to this effect, but a quality like "intelligence" is so amorphous, any attempt to measure it is beset by confounding variables, and even when handled with statistical care it is such a controversial topic that it really just makes me feel the authors performed this study with a certain agenda / chip on their shoulder. You may notice that both authors are men.


Indeed, it seems they went into the study with preconceived notions about the superiority of men. Garbage in, garbage out.

Edit: just looked up the author. yikes... https://rationalwiki.org/wiki/Emil_O._W._Kirkegaard


I almost confused your link with Wikipedia...


Wikipedia policy about biographies of living people prevents this kind of “expose” type article from being written.


With good reason in most cases.


I’m not very familiar with the reasoning behind Wikipedia’s decisions. In this case, the controversies seem highly relevant to the papers being churned out. I appreciate the effort the authors of the rationalwiki article put forth. It’s distributed citizen journalism.


> it really just makes me feel like the authors performed this study with a certain agenda / chip on their shoulder. You may notice that both authors are men.

I highly recommend you look up OpenPsych and Kirkegaard, and even the Ulster Institute of Social Research (and its president Richard Lynn). You will see the extremely obvious bias of the authors.


From the abstract:

> Our results were consistent with the hypothesis that academics and journal editors are biased in favour of women, rather than against women.


In my anecdotal experience working for a public university in the US, all VP-level or higher admin staff were women, and about 3/4 of our department heads were women.

My university was probably an outlier. But it was funny doing our yearly diversity training and getting to the section about women's underrepresentation in management.


I thought it was an open secret that in most Western countries schools favor girls when it comes to grades. But in my experience this turns into a negative effect later in life for employment. Still, academia is certainly politically correct and discriminates against men just as schools do. The same goes for public services.


What a twist!


It's also worth noting that you can actually watch the peer review process for this paper here: https://openpsych.net/forums/5/thread/242/


It matches previous results like this one from 2015:

https://www.washingtonpost.com/news/morning-mix/wp/2015/04/1...

A more scholarly take on the same study:

https://www.pnas.org/doi/abs/10.1073/pnas.1418878112



I think the bias here should be pointed out. There is such a large and obvious agenda here that the data is inherently untrustworthy. Garbage in, garbage out.

1) OpenPsych isn't some prestigious scientific journal with rigorous peer review. It was founded by the author of this piece and has had a public and troublesome past as a pseudoscience journal [1]

2) The review process was literally done by anonymous users of that site. [2]

3) I doubt the sincerity of the author when considering his past thoughts on women.

3a) Reposting and saying he agrees with a literal 4chan post on why women are less creative: https://emilkirkegaard.dk/en/2012/03/quote-lit-anon-on-femal...

3b) Just a large rant on why women are inferior to men: https://emilkirkegaard.dk/en/2022/01/too-many-women-in-the-w...

> As women increasingly are hired into traditionally male jobs via affirmative action laws or indirect pressure via media, we see more and more incompetence

> Not only are women more less interested in these jobs to begin with, but they obviously lack talent compared to men.

> Of course, as we live in clown world with no adults in charge, the army is going forward with more women in these roles

3c) https://rationalwiki.org/wiki/Emil_O._W._Kirkegaard

[1] https://en.wikipedia.org/wiki/OpenPsych

[2] https://openpsych.net/person/profile/75/


Excuse a question from a total statistics illiterate:

> Using a transformation of the h-index as our indicator of research output, we find male research output to be 0.35 standard deviations (p < 0.001) above female research output. However, the gap falls to 0.13 standard deviations (p < 0.001) when years publishing is controlled for.

What does this mean? What does "years publishing is controlled for" mean?


Assume for a moment that research output is affected by time in the field; either older people publish less due to resting on their laurels, or older people publish more due to greater experience allowing them to publish with less effort; I don't know without reading the paper which is true.

Now consider what happens if men are (on average) more experienced because up until recently more men entered the field than women, but then the trend reversed. If you just compared female to male research output you could come to the wrong conclusion; the difference in experience could overwhelm the difference between sexes.

So there are various statistical tools you can use to measure the difference due to experience and subtract it out from the difference between sexes, to get a more accurate measure of the difference due to sex. Doing this is called "controlling for a variable".
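A toy sketch of what that looks like in practice, using made-up numbers (not the paper's data): one group is given more years of experience on average, output depends only on experience, and yet the naive group comparison shows a large gap that disappears once experience is included in the regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: output is driven only by years of experience,
# but group B happens to be more experienced on average.
n = 1000
group = rng.integers(0, 2, n)                 # 0 = group A, 1 = group B
years = rng.normal(10, 3, n) + 3 * group      # group B has ~3 extra years
output = 0.5 * years + rng.normal(0, 1, n)    # no direct group effect at all

# Naive comparison: raw gap between groups (confounded by experience).
raw_gap = output[group == 1].mean() - output[group == 0].mean()

# Controlled comparison: regress output on both group and years;
# the group coefficient is the gap "controlling for" years.
X = np.column_stack([np.ones(n), group, years])
beta, *_ = np.linalg.lstsq(X, output, rcond=None)
controlled_gap = beta[1]

print(f"raw gap:        {raw_gap:.2f}")   # large, ~1.5
print(f"controlled gap: {controlled_gap:.2f}")  # near zero
```

Here the raw gap is entirely an artifact of the experience difference; controlling for the confounder recovers the true (zero) group effect.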


It basically means that instead of measuring "papers published" they chose to measure "papers published per year".


No, they mean that they control for 'career length' ... e.g. an academic with 5 years of history is likely to have a better score than one with 2 years of history.

So they effectively compare women-with-1-year vs men-with-1-year, etc., rather than 'women with h-index 5' vs 'men with h-index 5'.

edit: should be "women with h-index of X after Y years" rather than just "women with h-index of X" (i.e. they control for Y years' publishing between men and women, assuming that time-in-academia is correlated with h-index; a quick glance at the paper suggests that time-in-academia has an R² of 0.62 with h-index)


Thanks. And what does the standard deviation of 0.13 mean? I just don't know how to interpret the result.
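One way to build intuition (an illustration assuming roughly normal distributions with equal spread, not a figure from the paper): a gap of 0.13 standard deviations between group means implies the two distributions overlap almost completely.

```python
from statistics import NormalDist

d = 0.13  # gap between the two group means, in standard deviations

# If X ~ N(d, 1) and Y ~ N(0, 1) independently, then X - Y ~ N(d, 2),
# so the probability that a randomly drawn member of the higher-scoring
# group out-scores a randomly drawn member of the other group is:
p = NormalDist().cdf(d / 2**0.5)

print(f"{p:.3f}")  # ~0.537: barely better than a coin flip
```

In other words, knowing only the group of two randomly chosen academics, you would guess the higher scorer correctly about 54% of the time.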


> However, the gap falls to 0.13 standard deviations (p < 0.001) when years publishing is controlled for.

This is a much smaller gap than I would have expected.


Actually it's expected to be negative.


How can a gap be negative?


I find this paper a bit confusing.

The authors use "women", "men", "male" and "female" interchangeably at the beginning. Later, to stay on the safe side, they use the word "sex". But try determining that from a person's name and images!

Perhaps the authors should first define what they are trying to measure, and then ask people about their sex and gender identity!

Guessing the core data makes this paper totally random and irrelevant!

From paper:

> In line with the practice of previous research on sex representation on editorial boards, we coded the sex of academics according to whether their names were clearly male or female (e.g. Ioannidou & Rosiana 2015). When this was not obvious we used Google Search to find their sex from pictures or left the sex variable missing when this was insufficient. Of the 5,625 editorial board members in our dataset, we were unable to determine the sex of 7 individuals.


This was actually pointed out in the "peer" (laughable to call it that) review of the paper. The data this rag was based on was flawed from the start.

> You should use the term ‘sex’ rather than ’gender’, because your use of categories (male vs. female) indicates it is biological sex you are considering. In fact, you inconsistently use sex 20 times and gender 20 times in the ms.

> The problem is tricky because our survey used the term gender rather than sex. We have only used the term gender when referring to the survey questions and have explained why in a footnote at the start of the survey section. We forgot to explain that we had made this change in our last communication with the reviewer, but we hope you find this approach reasonable. Admittedly, a few other ‘genders’ were added by accident in past reviews.



