
It doesn't look like it in the full sense of "free". But part of how one pays for these services is by running a permissive modern browser that allows the corporation to spy on you even when you've already paid in currency. In a sense, by depriving them of the ability to easily spy on you, this workaround is closer to "free".

>My best guess is -- ChatGPT is running something in your browser to try to determine the best things to send down to the model API

There's no way this is worth it unless the models are absolutely tiny, in which case any benefit from offloading to the client is marginal and probably isn't worth the engineering effort.


It’s free as a loss leader. The trick is to upsell later. Unfortunately for OpenAI there are plenty of competitors with fungible products, so it might be hard to pull a classic monopoly rug-pull.

They already see everything I’m doing because I send my prompts to them. What “workaround” are you referring to?

They see everything you're doing because you send them the text. But this is about everything else about your computer system, information you would not normally be sending them or having involved at all. This workaround lets you avoid exposing unneeded information about your computer setup. It is not about avoiding sending prompt text.

And as for "but chatgpt isn't paid" (another commenter), well, then yes, that's even closer to free by removing this spying on your computer setup. But they spy on the paid users too.


But isn't ChatGPT access free through the browser? What do you mean already paid in currency?

If you want to send more than a few prompts each day, you have to pay. With currency.

Of course Googlebot, Bingbot, Applebot, Amazonbot, YandexBot, etc. from the major corps are HTTP user-agent spiders whose downloaded public content will be used by corporations for AI training too. Might as well just drop the "AI" and say "corporate scrapers".

What these corporations were trying to do is bad, and feasible to a degree. I think it's bad enough that regulation could apply. But there is an additional consideration that's really important in how we as a society deal with this.

Screens are not drugs. They are not somehow uniquely and magically addictive (like drugs actually are). The multi-media is not the problem, and the device is not what should be regulated. The corporate structure and motivations are the problem. This issue literally applies to any possible human perception, even outside of screens. Sport fishing itself uses random-interval operant conditioning in the same way corporations do. And frankly, with a boat, it's just as big a money and time sink.

We should not be passing judgements or making laws regulating screens themselves because we think screens are more addictive than, say, an enjoyable day out on the lake. They're not. You could condition a blind person over the radio with just audio. The radio is not the problem and radios are not uniquely addictive like drugs.

We can't treat screens like drugs. It's a dangerous metaphor because governments kill people over drugs.

Without this distinction, the leverage the "screens are drugs" perception gives governments will be incredibly dangerous as these cases proceed. If we instead acknowledge that corporations are the problem, and not something magical about screens, then there's a big difference in terms of the legislation used to mitigate the problem and the people to which it will apply. The Digital Markets Act in the EU is a good template to follow, since it only applies to large entities acting as gatekeepers.


It's not the screen, it's the format. It's an engineered gambling addiction where the currency is time, and instead of the house taking your money they arbitrage your time to an advertiser, often surreptitiously.

Worse than that, oftentimes the content that fosters the most engagement borders on propaganda that directly damages the social fabric over time. A lot of the extremist content (left, right, and otherwise) fits this description.

Screens on their own aren’t “uniquely and magically addictive”, but infinitely scrollable short form video delivered through that screen is, because a few companies spent billions on the smartest minds in the world to make it so.

So you would support banning any form of entertainment that people spend more time on than TikTok since it would be above the threshold of addiction?

More or less, yeah. There might be some nuance about the threshold for maladaptive behaviour, but if it’s all or nothing I’ll take all.

How would you get around the First Amendment difficulties?

There are plenty of public interest limitations on free speech. Food labels, cigarette warnings, deceptive ad laws. Regulating addictive social media isn't really an outlier here.

Even commercial speech regulations need a stronger basis than, “People spend a lot of time listening to it.”

The parent comment set up a false choice and then had to adapt to the response calling their bluff.

The issue isn’t with reading or consuming content, as was set up in the challenge above.

The issue is with designing feeds and surfacing content in ways that take advantage of our brains.

As an analogy, loot boxes in video games and slot machines come to mind. Both are designed to leverage behavioral psychology, and this design choice directly results in compulsive behavior amongst users.


I live in New Zealand, so I don't have to.

I didn’t mention time? From the Cambridge dictionary: ‘addiction: an inability to stop doing or using something, especially something harmful.’ I am in support of regulating things which are harmful and which people have trouble not doing.

Like potato chips?

If a specially designed endless bag of them were aggressively marketed, with chemicals added to induce appetite, then sure.

None of those attributes are necessary beyond those of an ordinary bag of Lays to meet the definition:

“things which are harmful and which people have trouble not doing”


It's a matter of degree.

I don't impulsively drive to the store to purchase another bag immediately after finishing the one I have whereas (for example) many people exhibit such behavior when it comes to tobacco.

In the case of social media the feed is intentionally designed to be difficult to walk away from and it is endless (or close enough as makes no practical difference). Even if it weren't endless, refreshing an ever changing page is trivial in comparison to driving to the store and spending money.


How would you contrast social media with Netflix in this regard?

An amusing question. Episodes are much longer and most shows only have one or a few seasons. I don't get the sense that streaming services optimize for difficulty to walk away and do something else any more or less than a good book does.

Maybe autoplay and immediately popping up a grid of recommendations should both be legally forbidden as tactics that blatantly prey on a well established psychological vulnerability. I'd likely support such legislation provided that it could be structured in such a way as to avoid scope creep and thus erosion of personal liberties.

In short I think Netflix is closer to a bag of Lays and modern social media closer to the cigarette industry of yore.


Screens are drugs. They are uniquely and magically addictive.

Try to take away a kid's tablet, a teen's phone, or an adult's phone. They will fight just like an addict.


This is not particularly insightful if you stop and think about it. Try to unilaterally snatch a book that someone is in the middle of reading and you will probably be met with a hostile reaction. Grab the tool someone is using to do a task, similar. What you're describing is the natural reaction to messing with someone else's possessions. Without further context it's blatantly toxic behavior even if you happen to have the authority to force the matter.

You aren’t reading or using a hammer for 6 hours a day. It’s hard to find a time when people aren’t using their phone, which makes it hard to take it away if the only appropriate moment is while they’re not using it.

Phones and computers are used for more than one thing; in that sense they aren't analogous to a single item such as a book or hammer but rather an entire closet filled with odds and ends. Keeping in contact with acquaintances, checking traffic and looking up other day to day information, reading a book during down time, these are three completely distinct activities that have all been nearly entirely subsumed by screens for me.

Motherfucker you try to take my fork while I'm eating and you're going to get a stabbed hand. Are forks addicting?

So… the choices as you see them in this issue, the lenses through which you view it, where one is extreme and the other appropriate… are either screens-as-drugs or sport fishing?

Some middle ground might be there somewhere. But if forced to choose… behavioral engineering funded by billions of dollars in research for over a decade, plus data harvesting on an unprecedented scale, all for the purpose of manipulating users:

Doesn’t sound a lot like fishing to me.


Maybe governments should stop killing people over drugs.

If anyone else was hoping this uses Q8 internally and that, converted to Q4, it could fit in 12GB of VRAM: unfortunately it's already at Q4_K_M (~9GB), and the 16GB requirement comes from other parts, not from the 14B@8bit + KV cache/etc you might guess.
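For anyone wanting to sanity-check sizes like that, here's a rough back-of-the-envelope sketch in Python; the bits-per-weight figures are approximate assumptions for typical quant formats, not exact values for this particular model:

    # Rough weight-size estimate for a quantized model. The bits-per-weight
    # numbers are ballpark assumptions, not exact GGUF figures.
    def weights_gib(params_billion: float, bits_per_weight: float) -> float:
        """Size of the weights alone in GiB; KV cache and runtime overhead are extra."""
        return params_billion * 1e9 * bits_per_weight / 8 / (1024 ** 3)

    for name, bpw in [("Q8_0 (~8.5 bpw)", 8.5), ("Q4_K_M (~4.8 bpw)", 4.8)]:
        print(f"14B at {name}: ~{weights_gib(14, bpw):.1f} GiB")

So even the Q4 weights alone land around 8 GiB before the KV cache and activations, which roughly matches the ~9GB figure above, while Q8 weights alone would already be near 14 GiB.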

Parent poster has some… interesting and popular but entirely false views on neuroscience. Specifically, an extremely outdated view on concepts like the role of dopamine and dopaminergic neuronal populations in human cognition. Rather than an understanding based on science and the idea that incentive salience and valence are modulated by such populations, he is attributing pleasure and enjoyment to them because of a meme.

Even beyond the dangerous legal precedent it sets, we're all cheering for a legal precedent that human persons don't have volition or free will and that multi-media can somehow bypass normal sensation pathways and act directly on want like drugs do. And that's simply not true. Believing that and setting up a legal precedent means that the government can now use violent force to regulate anything shown on a screen. This is going to cause incredible damage to our society as a whole and to individual people's lives. Government use of force is far more dangerous than unsupported memes/old wives' tales from the 1970s.

I too fear what governments will actually do in this area. But I think you may be underestimating the threat to personal agency.

Imagine you are trapped in a Groundhog Day-style time loop, but you are not the person who remembers previous loops. "Z" is. He tries to convince you to do something, over and over and over, thousands or millions of times, refining his approach based on your reactions while you remember nothing. Are you really confident that your free will protects you from being taken advantage of in this situation?

Now imagine that instead of a time loop, Z has a million clones of you. He tries his persuasion on one of them at a time, refining it until it works reliably before using it on you. You are just as vulnerable.

Now suppose he has a billion people, not identical to you but drawn from the same distribution. He has a harder computational problem, mapping the high dimensional manifold of their responses to create a model of you sufficiently accurate to manipulate you. But with enough data he can approximate the results of the previous case without more than a tiny fraction of his experimentation being visible to you.

Any relationship where one party gets to surveil and monitor not only the other party, but millions or billions of like parties, has the potential to be a deeply abusive one. We should not tolerate such situations whether the surveilling party is a government or not.


There are a few books I recommend for you, if you’re open to learning more about this subject.

The first is “Addiction by Design: Machine Gambling in Las Vegas” by Natasha Dow Schüll. The second, and arguably more direct and fascinating, is “The Age of Surveillance Capitalism” by Shoshana Zuboff. Both are incredibly eye-opening in their treatment of technology and how it is designed to influence behavior.


And for you, to help understand the vast gulf between drugs that directly modify incentive salience and ordinary perception of multi-media screens via our senses (which doesn't), https://sites.lsa.umich.edu/berridge-lab/selected-review-art...

I'm not seeing where the content you linked is supporting your argument.

It's background education in the basics so you can understand what drug addiction is and the neurological differences in the active populations for wanting versus liking. I guess I can spell it out.

Addictive drugs directly increase wanting by directly activating the downstream targets of dopaminergic populations, which predict the valence of stimuli and control wanting and motivation. By taking a chemically addictive drug you don't even have to enjoy the stimuli related to it. You will still be conditioned to want it and be motivated to re-experience the stimuli surrounding it.

This is vastly different in mechanism and result from simply seeing or hearing a screen. These things cannot directly increase incentive salience regardless of the actual valence of the stimuli. You have to actually enjoy the thing and the experiences to form habits.

Do you see the difference now? One thing, the chemical drugs, is addictive. The other is enjoyable. One will addict everyone because it is addictive. The other only leads to addiction-like behaviors in the context of, say, random-interval operant conditioning, and only if you actually enjoy the thing intrinsically first and are in the fairly small subset of people predisposed to behavioral addictions.


This strikes me as a distinction without a difference.

You're right in an important sense. There's not a complete difference in outcome between direct manipulation of wanting with drugs and using enjoyable stimuli in some form of unethical, non-consensual conditioning program (aka advertising). The difference is many orders of magnitude and a lot of abstraction, but it's still bad.

What I am trying to get across, and what I'd hoped all the conditionals and premises I laid out in my original comment made clear, is an additional consideration:

Screens are not drugs. They are not somehow uniquely and magically addictive (like drugs actually are). The multi-media is not the problem, and the device is not what should be regulated. The corporate structure and motivations are the problem. This issue literally applies to any possible human perception, even outside of screens. Sport fishing itself uses random-interval operant conditioning in the same way corporations do. And frankly, with a boat, it's just as big a money and time sink.

We should not be making laws regulating screens themselves because we think screens are more addictive than, say, an enjoyable day out on the lake. They're not. You could condition a blind person over the radio with just audio. The radio is not the problem and radios are not uniquely addictive like drugs.

I am saying it's important not to think of screens as the problem. The problem is the corporations' behavior and scale. That's a big difference in terms of the legislation used to mitigate the problem and the people to which it will apply. The Digital Markets Act in the EU is a good template to follow with it only applying to very large incorporated entities acting as gatekeepers.


>Credit cards are not documents. Many people don’t have them. Apple don’t provide any other way to verify your age because they are a stupid American company with American values in which you’re just as human as your credit score.

This is the way ID verification is going in the USA and the reasons for it seem clear. A human person is only useful to a corporation if they have money to give the corporation. If you don't have provable money, either through a third party corporate payment service willing to pay for you sometime later (a credit card) or by giving a corporation your login details to your bank account (ie, Plaid), then you're not a human.

It's clear what a bot is now: anything that doesn't have provable money.


I think it's also related to the fact that the US and the UK don't have ID documents the way a lot of EU countries do, and many people don't have passports. So the only other entity left that has an API and periodically checks that you are who you claim to be, before giving you a fresh credit card, is your bank.

I think it’s a lot simpler than that. Verifying a credit card is probably the easiest and cheapest reliable method to verify identity.

If you look at it this way: they’re trying to identify somebody, and they don’t want to do a massive amount of work in house. Do you go to a company that verifies identity? Or… you can use credit cards as a proxy for identity. Most of your users already have them.

Credit cards require no additional infrastructure, no additional corporate approval, no additional expenses, and no additional auditing. It’s good enough for the company and who cares if it’s good enough for the users.

Corporate greed is a massive problem, but you’re giving people too much credit to assume they have some kind of grand conspiracy for every decision. That requires far too much intelligence.

Corporate laziness is a far better explanation for this one.


And even better for companies: banks and credit card companies are completely unaccountable entities who've established they're willing to put up with 10000 false positives to block one false negative. They don't even have to get it right. And getting it wrong won't result in bad press or anything actionable for anyone. We're just ending up in a system where a good fraction of people are declared not people forever.

> A human person is only useful to a corporation if they have money to give the corporation.

This is spot on. This is the same tactic used by the affiliate marketers back in the day to qualify leads - Free book, just pay for shipping! Or, get this e-book for just $1 (so we can upsell you a $97 product later)


On my local computer, used only by me, because now I don't need a corporation to make them for me. In past decades I'd make maybe one or two full-blown applications for myself per decade. In the past year "I" (read: a corporate AI and I) have made dozens to scratch itches I've had for a very long time.

It's a great change for a human person. I'm not pretending I'm making something other people would buy nor do I want to. That's the point.


Opera is not 30. Opera is dead. Opera died and never went beyond version 12.

You're getting 10% of content? I get 0% because of the impassable Cloudflare wall.
