Hacker News | sampullman's comments

It's not possible in Taiwan's current political/social climate. I'm not so confident to say 50 years, but 20+ feels conservative.

You say this, but I’ve watched American political culture across the spectrum evolve a ton within that time in ways I’d never thought I’d see.

Agreed, I've blocked all notifications for years. Maybe it got worse recently, but I thought they were annoying since at least Big Sur.

I think there's a little more nuance than that, but it seems roughly correct.

Wouldn't it be better if apps/websites targeting kids didn't use A/B testing to be more addictive?


I think addiction is a red herring.

Pokemon is addictive, computer games are addictive. It's whether they are knowingly causing harm, and/or avoiding attempts to stop that harm.


Addictive patterns in games and other online activity are a bit less innocent than you're portraying them: knowingly causing harm is too low a standard. A lot of the profitability of online games, prediction markets, etc. comes from the whales. The whales are probably addicted. If your business is a whale hunt, you are possibly causing harm at least to the extent that addiction is dangerous.

They'd find another method. Why are we allowing this in the first place?

I don't have an answer to fix this whole mess, but it starts with our attitude towards addiction. We've built a system that rewards addiction in all sorts of places. Granted, every addiction is different, and I'm of the opinion that it's not (drug = bad), it's how you use it and react to it. We can control the latter, but we choose to ignore it because we're too busy with anything else. This is a tale as old as time...


> Why are we allowing this in the first place?

Exactly what I keep coming back to.

For me, it feels like you could cut this problem down substantially by eliminating section 230 protection on any algorithmically elevated content. Everywhere. Full stop.

If you write or have an algorithm created that pushes content to users, in ANY fashion, that is endorsement. You want that content to be seen, for whatever odd reason, and if it's harmful to your users, you should be held responsible for it. It's one thing if some random asshole messages me on Telegram trying to scam me; there's little Telegram can do (though a fucking "do not permit messages from people not in my contacts" setting would be nice) but there is nothing at all that "makes" Facebook shovel AI bullshit at people, apart from it juices engagement, either by genuine engagement or ironic/ragebaiting.

And AI bullshit is just annoying, I've seen "Facebook help" groups that are clearly just trawling to get people's account info, I've seen scam pages and products, all kinds of shit, and either it pisses people off so Facebook passes it around, or they give Facebook money and Facebook shoves it into the feeds of everyone they can.

It's fucking disgusting and there's no reason to permit it.


> algorithmically elevated

I don't see a good way to make a definite legal distinction between the icky stuff and normal, unobjectionable things which are, technically, also forms of elevation-by-algorithm:

    rank_by_age(items) // Good
    rank_by_age_and_poster_reputation(items) // Probably   
    rank_by_on_topic_ness(items, forum_subject)
    rank_by_likes(items)
    rank_by_engagement_likelihood(items) // Bad?
    rank_by_positive_sentiment_toward_clients(items) // Bad

Really, I see one right here:

    rank_by_age(items) // Good
    rank_by_age_and_poster_reputation(items) // Probably   
    rank_by_on_topic_ness(items, forum_subject)
    rank_by_likes(items)
    <-- here -->
    rank_by_engagement_likelihood(items) // Bad?
    rank_by_positive_sentiment_toward_clients(items) // Bad
Age is deterministic. When was the thing posted?

Poster reputation is deterministic. How many times has this poster received positive feedback based on their content?

On-topic-ness is deterministic, if a bit fuzzy. That said, I think the likes will reflect this: if you post a thread about cooking potatoes in the gopro subreddit, your post will be downvoted and probably removed via other means, in which case its presence in the feed is already null.

Likes are again, deterministic. How many people upvoted it?
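The deterministic rankings above can be sketched as plain sort keys. A minimal illustration (the record fields and weighting are hypothetical, just to show each ranking is a pure function of stored data):

```python
from datetime import datetime

# Hypothetical post records; field names are assumptions for illustration.
posts = [
    {"title": "a", "posted": datetime(2024, 1, 3), "likes": 5, "reputation": 10},
    {"title": "b", "posted": datetime(2024, 1, 1), "likes": 9, "reputation": 2},
    {"title": "c", "posted": datetime(2024, 1, 2), "likes": 7, "reputation": 7},
]

def rank_by_age(items):
    # Newest first: purely a function of when the thing was posted.
    return sorted(items, key=lambda p: p["posted"], reverse=True)

def rank_by_likes(items):
    # Most-upvoted first: purely a function of vote counts.
    return sorted(items, key=lambda p: p["likes"], reverse=True)

def rank_by_age_and_poster_reputation(items):
    # Blend accumulated feedback with recency; the ordering of the key
    # tuple (reputation first) is an arbitrary choice for this sketch.
    return sorted(items, key=lambda p: (p["reputation"], p["posted"]), reverse=True)
```

Given the same database rows, each of these always returns the same ordering, which is the sense of "deterministic" used here.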

In contrast:

Engagement likelihood is clearly a subjective, theoretical measure. An algorithm is going to parse a database for other posts like this, see how much attention it got, and say "is this likely to drive engagement." That's what I'm talking about.

And "positive sentiment toward clients" I can't quite read. I'm guessing you're referring to, like, community sponsors, but I'm not 100% certain. But that almost certainly is a subjective one too, and even if not, it's giving people with money the ability to put their thumb on the scale.


I don't think "deterministic" is the right term to capture this concept. An if-statement which bans posts containing a political phrase would be 100% deterministic, or one which prioritizes anything from a username on a list.

> On-topic-ness is deterministic, if a bit fuzzy.

If you permit that exception (even for good reasons) then it reveals how the original "algorithmic elevation" is too vague and unenforceable.

All someone needs is a ToS footnote like "this forum is provided for truthful international news and engaging with $COMPANY in a positive way." Poof, loophole. Anything the moderator (or moderator-algorithm) decides is "untrue" or "negative" becomes off-topic and can be pushed down.


Eliminating section 230 protections would heavily disfavor any kind of intellectually stimulating content, because it's hard for a platform to scalably verify that nobody's making defamatory claims. But pointless clickbait, heavily filtered Instagram models, etc. don't really have liability concerns on a video-by-video level. To me it seems like this makes the problem worse.

It’s not eliminating section 230 entirely, it’s eliminating it for algorithmically promoted content. If you have a site that has user content and you present that content in a neutral fashion, section 230 applies. If you pick and choose what content to present to users (manually or by algorithm), you’re no longer a neutral platform, and shouldn’t be getting the benefit of 230.

I understand that. My point is that this would mean algorithmic feeds can only contain vapid, pointless content with no liability concerns. To me, it doesn't improve the world to require that Instagram and Youtube exclusively serve slop, even if that might cause some number of people to abandon them for non-algorithmic platforms with better content.

Literally every social media site I'm aware of has had, in varying strengths and at varying times (many still currently), a movement among users asking for a fucking chronologically ordered feed. Just what my friends are saying, in the reverse order that they said it, displayed in a list.

Not only is this seemingly the most desired feed among end users, it was also the default one. MySpace didn't have a choice in the matter, they had to show a chronological timeline, because they didn't have a machine-learning algorithm nor a way to make one. They could tweak it based on engagement metrics but on the whole, it was just here's what all your friends have posted, in reverse order, scroll away. And then eventually you'd hit the end where it's like "you're up to date" and then you go on with your fucking day.

But of course platforms hate that. They want you there, all day, scrolling through an infinite deluge of bullshit, amongst which they can park ads. And we know they hate this, because not only have platforms refused to bring back chronological feeds, they actively removed them if they existed at one time. Not only is this doable, it's the most efficient way that requires the least compute from their servers, but platforms reliably chose the inverse... because it makes them more money.

Also specifically on this:

> My point is that this would mean algorithmic feeds can only contain vapid, pointless content

The vast majority of what's on these sites is vapid, pointless content RIGHT NOW, even if it attempts to convince you it isn't.


Literally every social media site I'm aware of has a chronologically ordered feed of people you've chosen to follow. Facebook does, Instagram does, Youtube does. It's just not the homepage, and most people don't care enough about what feed they get to go navigate to it every time they open the app. Would it be nice to make them let you put it on the homepage? Sure, I'd support that.

The current state of affairs is that Youtube and Instagram have brought back fascism and the measles, so if the complaint here is "it's impossible to moderate algorithmic content at scale and so the platforms would become incredibly risk averse," I think I'd take that alternative. I also don't think effectively forcing a breakup of the current online media monopolies is a bad thing either - if you can't actually mitigate the damage of your platform because you're too big, then maybe you shouldn't be that big.

> If you write or have an algorithm created that pushes content to users, in ANY fashion, that is endorsement

Yes. People make free speech arguments about this, but the list and order of stuff returned by algorithmic non-directed (+) lists is clearly a form of endorsement. Even more so is advertising, which undergoes a bidding process. Pages which show ads should be liable if those ads are fraudulent, especially if they're so obviously fraudulent that casual readers suspect them immediately.

(+) Returning a list of stuff in a user-specified query, on the other hand, is not endorsement. Chronological or alphabetical order or distance-based or even random is fine.

Note that section 230 is, of course, US specific and other countries manage without it.


In the span of how long it takes for law to catch up to what's going on, YouTube and Facebook have been around for a tiny amount of time.

They have been around long enough to have done unknowable damage to entire generations of humans.

As usual unfortunately laws are reactive.

"Free market" and "entrepreneur spirit" fetishism and fear of collective social action against individual drives.

For context, facebook is so dystopian when I login once every few years that I’m not sure I’ll ever use it again. And, I hate wading through the YouTube cesspool to find some educational content I like. But, I don’t think it makes sense to ban a/b testing or optimization in general. Some company could use it, for example, to figure out how to teach math to kids in a way that’s as engaging as possible. This would be “more addictive” technically.

That's a good point, I'm not 100% sure it's worth throwing away the potentially beneficial uses. There might not be a solution that's both feasible to implement and avoids banning useful things. In the end I usually come back to it being the parent's responsibility to monitor usage, limit screen time, etc., but it hasn't been working so well in practice.

> more nuance

Not enough to diffuse liability. 15 years ago when recommender algorithms were the new hotness, I saw every single group of students introduced to the idea immediately grasp the implication that the endgame would involve pandering to base instincts. If someone didn't understand this, it's because

> It is difficult to get a man to understand something, when his salary depends on his not understanding it. - Upton Sinclair


It's 1.5 miles in 14:25, I think most people can handle that. There are plenty of ways to exercise that aren't plain running. Biking, skating, swimming, Tai chi...

There is lossy PNG compression that works very well for images using a limited color palette (pngquant, lossypng, etc).
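For instance, pngquant's command line converts a PNG to an optimized 256-color (or smaller) palette. A sketch of typical usage, with placeholder filenames:

```shell
# Quantize to a limited palette; --quality takes a min-max acceptability
# range, and pngquant refuses to save if it can't meet the minimum.
pngquant --quality=60-80 --output screenshot-small.png -- screenshot.png
```

Savings depend heavily on the image; flat-color artwork and screenshots compress far better than photos.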


It can be harder, but it's specific to the country/system. Here in Taiwan you can walk into any clinic with stock and get a (NHI-covered) vaccine any time.

There are other things to complain about of course, but the rules for what's covered are generally logical. Non-covered medication is affordable too, which helps.


That's true for code editing, but it's nice to not have to reach for a different solution when editing huge files. Sometimes I like to open up big log files, JSON test data, etc.


Do you actually edit big log files?


I interactively pare down log files to just the parts I need. I rarely save the result.


I am always surprised even vim chokes on files with one massive line. That could be a useful optimization too.
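One editor-agnostic workaround (a sketch, not tied to vim specifically) is to pre-wrap the single giant line before opening it:

```python
def wrap_long_line(text: str, width: int = 1000) -> str:
    # Break one huge line into chunks of at most `width` characters, so the
    # editor's per-line handling never sees a single multi-megabyte line.
    return "\n".join(text[i:i + width] for i in range(0, len(text), width))

# Tiny demonstration at width=2: "abcde" becomes three short lines.
print(wrap_long_line("abcde", 2))
```

For single-line JSON specifically, `python -m json.tool big.json > pretty.json` gives you the line breaks the file never had.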


If you buy a game and can't tell it's made with AI, isn't that just as good?


Let me know if you ever find out!


If natural language is used to specify work to the LLM, how can the output ever be trusted? You'll always need to make sure the program does what you want, rather than what you said.


Just create a prompt so specific and so detailed that it effectively becomes a set of instructions, and you've come up with the most expensive programming language.


It's not great that it's the most expensive (by far), but it's also by far the most expressive programming language.


How is it more expressive? What is more expressive than Turing completeness?


This is a non-sequitur. Almost all programming languages are Turing complete, but I think we'd all agree they vary in expressivity (e.g. x64 assembly vs. TypeScript).

By expressivity I mean that you can say what you mean, and the more expressive the language is, the easier that is to do.

It turns out saying what you mean is quite easy in plain English! The hard part is that English allows a lot of ambiguity. So the tradeoffs of how you express things are very different.

I also want to note how remarkable it is that humans have built a machine that can effectively understand natural language.


>"You'll always need to make sure the program does what you want, rather than what you said."

Yes, making sure the program does what you want. Which is already part of the existing software development life cycle. Just as using natural language to specify work already is: it's where things start and return to over and over throughout any project. Further: LLMs frequently understand what I want better than other developers. Sure, lots of times they don't. But they're a lot better at it than they were 6 months ago, and a year ago they barely did so at all, save for scripts of a few dozen lines.


That's exactly my point, it's a nice tool in the toolbox, but for most tasks it's not fire-and-forget. You still have to do all the same verification you'd need to do with human written code.


You trust your natural language instructions a thousand times a day. If you ask for a large black coffee, you can trust that is more or less what you'll get. Occasionally you may get something so atrocious that you don't dare to drink it, but generally speaking you trust that the coffee shop knows what you want. If you insist on a specific amount of coffee brewed at a specific temperature, however, you need tools to measure.

AI tools are similar. You can trust them because they are good enough, and you need a way (testing) to make sure what is produced meets your specific requirements. Of course they may fail for you; that doesn't mean they aren't useful in other cases.

All of that is simply common sense.


More analogy.

What’s to stop the barista putting sulphuric acid in your coffee? Well, mainly they don’t because they need a job and don’t want to go to prison. AIs don’t go to prison, so you’re hoping they won’t do it because you’ve promoted them well enough.


* prompted


> All of that is simply common sense.

Is that why we have legal codes spanning millions of pages?


The person I'm replying to believes that there will be a point when you no longer need to test (or review) the output of LLMs, similar to how you don't think about the generated asm/bytecode/etc of a compiler.

That's what I disagree with - everything you said is obviously true, but I don't see how it's related to the discussion.


I don't necessarily think we'll ever reach that point and I'm pretty sure we'll never reach that point for some higher risk applications due to natural language being ambiguous.

There are however some applications where ambiguity is fine. For example, I might have a recipe website where I tell a LLM to "add a slider for the user to scale the number of servings". There's a ton of ambiguity there but if you don't care about the exact details then I can see a future where LLMs do something reasonable 99.9999% of the time and no one does more than glance at it and say it looks fine.

How long it is until we reach that point, and whether we'll ever reach it, is of course still up for debate, but I don't think it's completely unrealistic.


That's true, and I more or less already use it that way for things like one off scripts, mock APIs, etc.


I don't think the argument is that AI isn't useful. I think the argument is that it is qualitatively different from a compiler.


I did not find this to be the case, except with a few low quality vendors we ended up dropping.

It was mostly the same as anywhere else, you go talk to them in person, tour their facilities/processes, and see what else they've built.

I was warned strongly about IP theft and cost cutting, but didn't find that expectation quite met by reality. It may have been that our products were mostly un-copyable and we specified everything precisely, or we were just lucky.

