
I have a much lower opinion of the average human's writing capabilities. Much of what we see online has been written by people who either love to write or are journalists. I think GPT-3 is already at the average human's writing level.


I'm not sure what's being discussed here exactly. If we're talking about vocabulary, spelling, and grammar, I agree with you. On the other hand, humans are able to express opinions and ideas and come up with novel things to say, not merely mimic an input.

If you gave me a huge corpus of Chinese texts and a very long time, I might be able to figure out which characters go with which others, find the various structures in the text, and then generate a somewhat convincing made-up Chinese text while still not understanding a word of it.
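
What you're describing is close to a character-level Markov chain, which you can build without understanding a word of the language. A minimal sketch in Python; "corpus.txt" is a hypothetical input file, and the order-2 context is an arbitrary choice:

    import random
    from collections import defaultdict

    # Learn which character tends to follow each short context,
    # purely from co-occurrence statistics.
    def train(text, order=2):
        table = defaultdict(list)
        for i in range(len(text) - order):
            table[text[i:i + order]].append(text[i + order])
        return table

    # Generate text that mimics the corpus's surface structure.
    def generate(table, order=2, length=200):
        out = random.choice(list(table))
        for _ in range(length):
            followers = table.get(out[-order:])
            if not followers:
                break
            out += random.choice(followers)
        return out

    text = open("corpus.txt", encoding="utf-8").read()
    print(generate(train(text)))

The output looks locally plausible for exactly the reason you give: the model has learned which symbols co-occur, not what any of them mean.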

These GPT-3 demos are impressive because they look like real text with proper syntax and grammar, but they still express absolutely nothing. They read like one long ramble that goes nowhere. There's no intent behind them.

It reminds me of those videos of apes imitating humans with their tools, banging hammers ineffectively. They can copy the appearance of the behavior, but not the reasoning behind it. They don't get why we bang hammers or what it achieves.


Have you read any business books? I used to read quite a few. For the most part, they take a central thesis and then repeat variations on the theme over and over again, sometimes with anecdotes of questionable veracity. I venture that many of them could be generated with GPT-3.

My point is, GPT-3 is operating at a human level in certain contexts. I think it would get passing grades on essays in a lot of US schools, for instance, just based on syntax and grammar.


This stuff is so new that HN threads may be where realistic potential applications first get mentioned; congratulations, I think you just found one. Having GPT-3 render a first draft of books in the archetype you mention (one simple idea stretched out over many pages) seems like a very profitable endeavor.


> Having GPT-3 render a first draft of books in the archetype you mention (one simple idea stretched out over many pages).

Given what I've seen so far with GPT-3, that simple idea would have to have already been discussed at length on internet forums and elsewhere in the training corpus.

Books usually use facts and studies as supporting points. Many of the connections they draw between the subject material and their thesis are unique, and those connections form the supporting argument. GPT-3 rearranges words and sentences to resemble structures it has seen before, but it does not produce novel facts.


So ideally it could work like a meta-study. Meta-studies combine results from multiple separate studies, finding correlations and drawing more confident conclusions. Most 'original' human ideas are just reinventions of older ideas, too.
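
For what it's worth, the pooling step of a meta-study is mechanical enough to sketch. A toy fixed-effect meta-analysis in Python, with invented effect sizes and variances:

    import math

    # Pool per-study effect estimates, weighting each study by the
    # inverse of its variance (more precise studies count for more).
    def pool(effects, variances):
        weights = [1.0 / v for v in variances]
        combined = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        std_err = math.sqrt(1.0 / sum(weights))
        return combined, std_err

    effect, se = pool([0.30, 0.45, 0.38], [0.04, 0.09, 0.02])
    print(f"pooled effect {effect:.2f} +/- {1.96 * se:.2f}")

The hard part is the judgment about which studies are comparable, which is where the understanding question comes back in.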

The interesting part is that GPT-3's leap in performance can be attributed to scaling, which is easier than inventing completely new approaches. Scale data, scale compute, scale money, and you get something you couldn't have invented directly.
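
Concretely, the scaling-law papers (Kaplan et al., 2020) fit test loss as a power law in model size. A rough sketch; the constants below are approximations of the published fits, for illustration only:

    # Test loss as a power law in non-embedding parameter count N:
    # L(N) ~ (N_c / N) ** alpha, per Kaplan et al. (2020).
    # N_c and alpha here are approximate, not authoritative values.
    def loss(n_params, n_c=8.8e13, alpha=0.076):
        return (n_c / n_params) ** alpha

    for n in (1e8, 1e9, 1e10, 1.75e11):  # last is roughly GPT-3's size
        print(f"{n:.0e} params -> predicted loss {loss(n):.2f}")

Once the curve is known, improvement becomes a budgeting exercise rather than a research breakthrough.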


A Chinese room can still be an interesting conversation participant.


That's a good point, but I feel like there's still a long way to go before the model has enough data to actually output insightful content. Right now it seems to mostly output grammatically correct Lorem Ipsum.


I completely agree. People have a hugely inflated opinion of themselves. GPT-3 is already better at writing articles than your average human.



