It's more that the "far left wing cluster" had something like a "we should all get up and leave Twitter for BlueSky" activist campaign. And the "far right wing cluster" didn't.
The closest thing "far right" had to that was Gab and Truth Social, and that's both more specific and less impactful overall.
Thus, BlueSky's userbase is biased towards the extreme left wing - it's basically the go-to place for far left wing nutjobs to go when they get too nutty for Twitter moderation, or feel like Twitter is not left wing enough for them.
>It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'.
That pretty much explains the "it's not real AI" hysteria that we observe today.
And what is the "AI effect", really? It's a coping mechanism. A way for silly humans to keep pretending that they are unique and special - the only thing in the whole world that can be truly intelligent. Rejecting an ever-growing pile of evidence pointing otherwise.
>there was a chorus of critics to say, 'that's not thinking'.
And they were always right... and the other guys... always wrong...
See, the question is not whether something is "real AI". The question is: what can this thing realistically achieve?
The "AI is here" crowd is always wrong because they assign a much, or should I say a "delusionaly" optimistic answer to that question. I think this happens because they don't care to understand how it works, and just go by its behavior (which is often cherry-pickly optimized and hyped to the limit to rake in maximum investments).
Anyone who says "I understand how it works" is completely full of shit.
Modern production grade LLMs are entangled messes of neural connectivity, produced by inhuman optimization pressures more than intelligent design. Understanding the general shape of the transformer architecture does NOT automatically allow one to understand a modern 1T LLM built on top of it.
We can't predict the capabilities of an AI just by looking at the architecture and the weights - scaling laws only go so far. That's why we use evals. "Just go by behavior" is the industry standard of AI evaluation, and for a good damn reason. Mechanistic interpretability is in the gutter, and we have to fight uphill for every little glimpse of insight it gives us. We don't understand AI. We can only observe it.
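For what it's worth, here's roughly what "just go by behavior" looks like as code - a minimal eval harness sketch, where ask_model and the toy cases are made-up placeholders, not any real API or benchmark:

    # Behavioral evaluation in miniature: score the model purely on its
    # outputs, with no reference to its weights or architecture.
    def ask_model(prompt: str) -> str:
        # Placeholder: substitute whatever inference call you actually have.
        raise NotImplementedError("plug in a real model call here")

    EVAL_SET = [  # toy cases; real evals use thousands of graded tasks
        {"prompt": "What is 17 * 23?", "expected": "391"},
        {"prompt": "What is the capital of Australia?", "expected": "Canberra"},
    ]

    def run_eval(cases) -> float:
        # "Capability" is just the observed success rate, nothing more.
        hits = sum(
            1 for c in cases
            if c["expected"].lower() in ask_model(c["prompt"]).lower()
        )
        return hits / len(cases)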
"What can this thing realistically achieve?" Beat an average human on a good 90% of all tasks that were once thought to "require intelligence". Including tasks like NLP/NLU, tasks that were once nigh impossible for a machine because "they require context and understanding". Surely it was the other 10% that actually required "real intelligence", surely.
The gaps that remain are: online learning, spatial reasoning and manipulation, long horizon tasks and agentic behavior.
The fact that everything listed has mitigations (i.e. long context + in-context learning + agentic context management = dollar store online learning) or training improvements (multimodal training improves spatial reasoning, RLVR improves agentic behavior), and the performance on every metric rises release to release? That sure doesn't favor "those are fundamental limitations".
Doesn't guarantee that those will be solved in LLMs, no, but it goes to show that it's a possibility that cannot be dismissed. So far, the evidence looks more like "the limitations of LLMs are not fundamental" than "the current mainstream AI paradigm is fundamentally flawed and will run into a hard capability wall".
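As an aside, that "dollar store online learning" stack is easy to sketch. A hedged toy version, with ask_model once again standing in for an arbitrary LLM call rather than any real API:

    # "Learning" without weight updates: keep a rolling buffer of past
    # outcomes and prepend it to every new request. Old entries fall out
    # of the window instead of being consolidated - hence "dollar store".
    def ask_model(prompt: str) -> str:
        raise NotImplementedError("plug in an actual LLM call")

    class InContextLearner:
        def __init__(self, max_entries: int = 50):
            self.max_entries = max_entries
            self.memory: list[str] = []  # stands in for learned weights

        def solve(self, task: str) -> str:
            context = "\n".join(self.memory[-self.max_entries:])
            return ask_model(f"{context}\n\nTask: {task}")

        def record(self, task: str, outcome: str) -> None:
            # Agentic context management reduced to its dumbest form:
            # remember what happened, replay it next time.
            self.memory.append(f"Task: {task} -> Outcome: {outcome}")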
Frankly, I don't buy that LeCun has much of use to say about modern AI. Certainly not enough to justify an hour-long podcast.
Don't get me wrong, he has some banger prior work, and the recent SIGReg did go into my toolbox of dirty ML tricks. But the JEPA line is rather disappointing overall, and his distaste for LLMs seems to be a product of his personal aesthetic preference in research direction rather than any fundamental limitation of transformers. There's a reason why he got booted out of Meta - and it's his failure to demonstrate results.
That talk of "true understanding" (define true) that he's so fond of seems to be a flimsy cover for "I don't like the LLM direction and that's all everyone wants to do these days". He kind of has to say "LLMs are fundamentally broken", because if they aren't, if better training is all it takes to fix them, then why the fuck would anyone invest money into his pet non-LLM research projects?
It is an uncharitable read, I admit. But I have very little charity left for anyone who says "LLMs are useless" in year 2026. Come on. Look outside. Get a reality check.
My opinions on the matter do not come from any experts; they come from my own reasoning. I didn't see that video before I came across that comment.
>"LLMs are useless" in year 2026
Literally no one is saying this. It is just that those words are put into the mouths of the people who do not share the delusional wishful thinking of the "true believers" of LLM AI.
To be honest, I would prefer "I over-index on experts who were top of the line in the past but didn't stay that way" over "my bad takes are entirely my own and I am proud of it". The former has so much more room for improvement.
>Literally no one is saying this.
Did you not just advise me to go watch a podcast full of "LLMs are literally incapable of inventing new things" and "LLMs are literally incapable of solving new problems"?
I did skim the transcript. There are some very bold claims made there - especially when LLMs out there roll novel math and come up with novel optimizations.
No, not reliably. But the bar we hold human intelligence to isn't that high either.
>"my bad takes are entirely my own and I am proud of it"
Sure, but the same could apply to you as well.
>"LLMs are literally incapable of inventing new things" and "LLMs are literally incapable of solving new problems"?
You keep proving that you have trouble resolving closely related ideas. Those two things that you mention do not imply that LLMs are "useless". They are a better search, and for software development they are useful for reviews (at least for a while). But it seems that people like you can only think in binary: either LLMs are god-like AI, or they are useless.
Mm... You seem to consider this to be some mystical entity, and I think that kind of delusional idea might be a good indication that you are experiencing the ELIZA effect...
>We don't understand AI. We can only observe it.
Lol what? Height of delusion!
> Beat an average human on a good 90% of all tasks that were once thought to "require intelligence".
This is done by mapping those tasks to some representation that a non-intelligent automation can process. That is essentially what part of unsupervised learning does.
It is there to ensure an animal is not experimented on unnecessarily or with excessive pain. Discussing a process like this might require you to look slightly further than one mostly clear-cut case.
Part of his filings will be actually stating the "terminally ill" part and having this approved by an ethics committee. Making a moral judgment here is the committee's actual role as not all cases are so "simple".
It could have been a single informal paper that says "the animal is terminally ill, my judgement call is that this is unlikely to cause excessive suffering and might help instead, even if the chances are low, and if my judgement is proven wrong and this appears to cause excessive suffering the animal will be put down humanely". Signed by the veterinarian and the owner.
Because the system is high speed, low drag, and trusts the veterinarian and the owner to make reasonably good calls about pet health and suffering - unless proven otherwise by overwhelming evidence. The system trusts people by default, and that 3-month-long process and an ethics board come into play when there's a suspicion that this trust may have been abused.
Of course, that's not the world we live in. Which is why we're having this conversation.
It depends on your point of view. For the person deciding on giving permission they will not be thanked for allowing it, but might well be blamed if something goes horribly wrong.
That's kind of the issue with a lot of bureaucratic oversight. It often produces systems that aren't at all interested in being streamlined, in letting things that should happen happen. It produces systems where compliance is a drag on the one doing things, and the default state is "forbidden".
Yes, but this is a classical principal-agent problem.
Theoretically, the bureaucracy works on your behalf, but only approximately so. If it makes a mistake that kills you, the decision maker does not pay any price.
Have you ever considered that "finding cures for disease" is really fucking hard to do?
Things that were easy to cure were already cured some time in the past century. What remains are the hard-to-crack nuts that resist simple scalable methods.
There's money to be had in curing HIV - but good luck pulling that off. Maybe someone will, this century.
Have you ever considered that once a disease is cured, the industry can no longer profit off of it being a disease? Treating a disease, rather than curing it, is a much more profitable venture.
How is there money to be had in curing HIV? It seems to me like it's much more profitable to continue selling expensive HIV treatments rather than curing the disease. Once a patient is cured, they no longer need to pay for expensive treatments.
And? Why would that be my problem? I'm in the business of selling HIV cures, not HIV treatments.
If I get to undercut your entire "HIV treatment" business AND line my pockets with your entire market share, then, good for me, bad for you. Sucks to suck. Should have cured HIV first if you didn't want me to do it.
There are many, many, many examples of "newer and better treatment X kills the market share of older and worse treatment Y" in the history of healthcare. Your conspiracy theory model predicts this never happening.
So you think that complicated diseases are easily curable and the entire scientific world, including very different countries like China, has just decided to hide the knowledge?
If your cynical take was correct, there would be no cures ever. And yet there are new ones all the time. For example, vaccines. There are way, way more vaccines developed in the 21st century than in the 250 years before that.
Vaccines against HPV have reduced incidence of cervical cancers to basically 0 in the cohorts that obtained them. How come? Shouldn't Big Cancer be interested in treating cervical cancers expensively and promoting relapses?
Even in cancers, your chances of surviving, say, Hodgkin's lymphoma, are now north of 90 per cent. The treatment is expensive, but time limited. You don't have to take pills for your entire life.
How does that square with your view of the medical system as a machine for prolonging diseases indefinitely?
Plus even if we posit nefarious forces, we should also account for nefarious forces which want the sickness gone.
If you're seriously sick, you aren't making money, because you can't work or all your money goes to Evil Pharma Co. The Evil Government doesn't like that, because they can't wring taxes out of you. (Which they'd prefer, since it's easier than fighting Evil Pharma Co.)
Meanwhile, The Shadow Government wants you to be healthy enough to work every day, or else they won't finish the navigation beacons for the alien invasion.
I mean, yes, I and many others have thought of that.
To counter: have you realized that HIV is an evolutionary entity, optimized to continue existing by not fucking dying? HIV mutates like crazy. I mean, there are other things like the flu that mutate, but because we have partial immunity to the flu, we can use that immunity to create new vaccines against it every year.
It doesn't take much research of your own to see that HIV is a rather insane virus, and if it had somehow been wildly contagious out of the gate, it could have wiped out humanity.
Cool it with the moral outrage. Even if I did believe that prediction markets are bad, "easily the worst things to grace the internet by far" is such a ridiculous hyperbole that it strains any belief.
A big part of the problem education systems are solving is not "how do we get knowledge to children", but "how do we get masses of children to learn without coercion of the ugliest kind".
Some children are innately motivated to learn. Some are motivated so strongly you could give them a smartphone and watch them learn all they need to learn in life. But those children aren't the norm - they're the freaky 1 in 1000 outliers. And education has to work with everyone.
Thus, peer pressure. That's what putting a whole bunch of students in the same room accomplishes.
I don't think I've ever met a single child that isn't excited about learning new stuff, but it really depends on what it is; it differs a lot! And they're all different as well: someone who's really into math might hate history, or vice versa. But they all want to learn something, in my experience.
The problem occurs when you place them all in one school, and force them to learn everything, even things they don't want to learn about, and that kind of ruins the other parts they actually find fun and engaging.
> The problem occurs when you place them all in one school, and force them to learn everything, even things they don't want to learn about
A difficult part is that children aren't really in the position to know what they want to learn most of the time.
Sure, many prefer sports over math, but covering a broad spectrum in pre-teen and teenage education is quite important to get them to develop these preferences and themselves as a person. They are given more agency/choice (electives etc.) as they grow up.
There are also topics you need to learn that aren't fun/engaging (especially as fun/engaging is quite subjective and depends on the individual), particularly when those topics are prerequisites to other potentially fun topics (you will have to learn the fundamentals before engaging with advanced topics in most subjects).
Lest you think there’s one simple solution, my kid went to a school for one year that deliberately eliminated all that stuff - no set curriculum, no specific academic goals, and students get the majority of the vote on the rules and anything about the whole setup. They could learn about anything they want to, with no pressure.
Most of the kids spent their whole days playing Xbox, Switch, or brainrot games like Roblox on tablets. (No, they weren’t “creatively building new worlds” on Roblox, just screwing around consuming what others had made in order to manipulate them into spending Robux).
Yep. This is the human condition for the vast majority of humans.
I grew up in a place where education and hard work weren't valued much by the community. Those who could scam some sort of government benefits did so, and they certainly were not working on art or helping out their communities with all their spare time. The best case was a state of pure consumption; the median was actively self-destructive behavior; and the worst was behavior that ruined the surrounding community.
This whole idea that on average humans would hit some utopia of creativity and community mindedness if only they could throw off the yoke of needing to work to survive goes against every single bit of my lived experience. And recent history.
The kids who went to the local public school my nieces went to basically did the bare minimum - usually just showing up is enough these days. Zero interest in learning or putting effort in. Only when they were removed from that environment and put with self-selecting (well, parent-selecting) peers that were curated beforehand did this fact change.
The vast majority of humans are not inherently motivated to better themselves in any way.
It's so sad that humans perform best when suffering. I adopted a super skinny, worn out street cat; all she did was sleep, eat, and poop. She never went outside: straight from the sofa to the food and back to the sofa, really, really slowly. For 4 years she did nothing but sleep, no exceptions. Then one day a different cat looked around the corner of the open door. In 0.3 seconds she launched from the sofa, covering an impressive distance, and chased it to the end of the street. Safe to say, if I hadn't moved for 4 years, I wouldn't be looking to pick a fight. But cats do get stupid if they don't have to work for food.
There were a few kids, primarily among the handful of high-school-aged ones, who seemed to be doing something vaguely resembling schoolwork: reports on some topic, or a project writing a video game mod.
I hear this often but I don't really buy it. Variety is good. If I had been routed into a field in first grade or whatever based on what I liked and was good at at the time my life would look completely different, but likely not better. I certainly never would have taken art history or design classes in college, both requirements that I wouldn't have otherwise considered, but among my favorite classes in retrospect.
>Some children are innately motivated to learn. Some are motivated so strongly you could give them a smartphone and watch them learn all they need to learn in life. But those children aren't the norm - they're the freaky 1 in 1000 outliers. And education has to work with everyone.
I worked as a teacher for a year. Children are innately motivated and curious (this is not just a cliché). If there was any laziness, it usually stemmed from fear of not being good enough, but they definitely all tried, even students who didn't know their 5 times table by age 10. Some students have greater perseverance than others, though; some can't handle being wrong and fear being seen as less than their peers. Others like to challenge themselves without such fear.
I believe that fear is not unwarranted. It's a learned behavior that helps one survive in their environment. I imagine many of those children were likely punished for mistakes or for not being good enough.
JSON just works. Every language worth giving a damn about has a half-decent parser, and the syntax is simple enough that you can write valid JSON by hand. You wouldn't hit the edgy edge cases or the need to use things like schemas until down the line, by which point you're already rolling with JSON.
XML doesn't "just work". There are like 4 decent libraries total, all extremely heavy, that have bindings in common languages, and the syntax is heavy and verbose. And by the time you could possibly get to "advanced features that make XML worth using", you've already bounced off the upfront cost of having to put up with XML.
Frontloading complexity ain't great for adoption - who would have thought.
That's my point. By the time you hit "until it doesn't", you're already doing JSON, and were for a while.
Also, is "parse well if there's a missing bracket" even a desirable property? If you get files with mangled syntax, something has already gone horribly wrong. And, chances are, there is no way to parse them that would be correct.
By "parses well" in that case I mean "can identify where the error is, and maybe even infer the missing closing tag if desirable;" i.e. error reporting and recovery.
If you've ever debugged a JSON parse error where the location of the error was the very end of a large document, and you're not sure where the missing bracket was, you'll know what I mean. (S-exprs have similar problems, BTW; LISPers rely on their editors so as not to come to grief, and things still sometimes go pear-shaped.)
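To make that concrete, here's a minimal sketch in Python of the failure mode - the document is synthetic, built just for the demo - showing how one bracket dropped early in a large array surfaces as an error at the very end:

    import json

    # Build a large JSON array of rows, then drop one inner closing
    # bracket early on. Everything after the mistake still parses as if
    # it belonged to the unterminated inner array, so the parser only
    # fails when it hits EOF with the outer array still open.
    rows = ",\n".join("[%d, %d, %d]" % (i, i + 1, i + 2) for i in range(1000))
    doc = "[\n" + rows + "\n]"
    broken = doc.replace("[3, 4, 5]", "[3, 4, 5", 1)  # the real bug is on line 5

    try:
        json.loads(broken)
    except json.JSONDecodeError as e:
        # Reports the last line of the document, roughly a thousand
        # lines away from the actual mistake.
        print(f"{e.msg} at line {e.lineno}, column {e.colno}")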
This seems like it has some potential, but is pretty much useless as it is.
Shame there are no weights released - let alone the "compiler" tool they used to actually synthesize computational primitives into model weights. It seems like a "small model" system that's amenable to low budget experiments, and I would love to see how far this approach can be pushed.
I disagree with the core premise - it's basically the old neurosymbolic garbage restated - but embedding predefined computational primitives into LLMs could have some uses nonetheless.
Basically, a holdover from the days of symbolic AI, from back when neural network ML wasn't the dominant AI paradigm.
Some people in the "symbolic AI" camp didn't take the loss well, so they pivoted towards "ML is not real AI and it needs a symbolic component to be a real AI", which is: the neurosymbolic garbage.
This work isn't exactly that, and I do think it can amount to something useful, but the justification for it reeks of something similar.
Full disclosure: all my published work is on symbolic machine learning (a.k.a. Inductive Logic Programming) :O
I think you're confusing various different things as "neurosymbolic AI". There is a NeSy symposium and I happen to have met many of the people there, and they are not GOFAI ideologues, rather they recognise the obvious limitations of neural nets (i.e. they're crap at deduction, though great at induction) and they look for ways to address them. Most of that crowd also has a predominantly statistical ML/ neural nets background, with symbolic AI as an afterthought.
I don't think I've ever heard anyone say that "ML is not real AI" and I mainly move in symbolic AI circles. I would check my sources, if I were you.
Anyway, honestly, this is 2026; there is no sensible reason to be polarised about symbolic vs. statistical AI (or whatever distinction anyone wants to make). An analogy I like to make is as follows: a jetliner is a flying machine, a helicopter is a flying machine. We can use both for their advantages and disadvantages, but a flying machine is something too useful to give up on any one kind for ideological reasons. The practical benefits overwhelmingly make up for any ideological concerns (e.g. "jets bad" or "propellers bad").
And just to be clear, symbolic AI is still in rude health: automated theorem proving, planning and scheduling, program verification and model checking, constraint satisfaction, discrete optimisation, SAT solving, all those are fields where symbolic approaches are dominant, and where neural nets have not made significant inroads in many decades; nor are they likely to, not any more than symbolic approaches are likely to make any inroads in e.g. machine vision, or speech recognition. And that's just fine: lots of tools, lots of problems solved.
I don't think symbolic approaches are completely useless. It's just that they're solving yesterday's problems 1.12% better. While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.
One is near the end of its potential, while the other is only picking up steam.
In many ways, the space ML dominates now is the space of "all the things symbolic approaches suck ass at". Which is a very wide space with many desirable things in it.
Well, neural nets do what neural nets do best (not ML in general, which is a broader field), so if a lot of funding is going to neural nets then we'll see a lot of progress on the stuff neural nets are best suited for. No surprise. If Google et al were spending billions on symbolic AI maybe we'd see equally spectacular results there too. Maybe not. But we won't know because they don't.
There's no sense in which symbolic AI is at the end of its life, and if you pay close attention you'll see that LLMs are trying to do all the things that symbolic AI is good at, the major examples being reasoning and planning from world models.
And as nextos says in the sibling comment most of the recent successes of LLMs in tasks that go beyond language generation, e.g. solving math olympiad problems, are the result of combining LLMs with symbolic verifiers.
>> While ML is cracking open entirely new fields - and might go all the way to AGI, the way it's going now.
I don't agree. Everything that neural nets do today, speech recognition, object identification in images, machine translation, language generation, program synthesis, game playing, protein folding, research automation, I mean every single thing really, is a task that comes from the depths of AI history. There's a big discussion to be had about why those tasks are "AI" tasks in the first place and what they have to do with "intelligence" in the broader sense (e.g. cats are intelligent but they can't generate any sort of text) but this discussion is constantly postponed as we all breathlessly run up the hill that neural nets are climbing. When we get to the top and find it was the wrong hill to climb, maybe we'll have that discussion at last, or maybe the entire industry, academia in tow, will run after the Next Big Thing in AI™ all over again. But- cracking open new fields? Nah. Not really.
AGI is not going to happen any time soon though. We have no idea what we're doing in terms of reproducing intelligence, that much is clear.
The whole notion of "we need to know what intelligence is exactly to reproduce it" is completely and utterly wrong.
It's also the kind of thinking that results in "neurosymbolic garbage is good actually".
What neural nets do today is basically "everything humans do". There is no longer a list of "things computers can't do" - just a list of things computers do worse than the top 1% of humans. Ever shrinking.
Well, for example a computer can't make me an omelette. There are tons of examples like that, pretty much everything humans "can do" with our bodies that computers can't - not just because they don't have bodies, but because even when we give them bodies, we can't program them to do the things we want them to. LLMs don't help at all here. They can easily fake knowing what to do, but the (not few) attempts people have made to connect an LLM to a robot, to drive it like a little AI brain, have... not really worked out? I guess? Not even self-driving cars use LLMs.
Speaking of self-driving cars' AIs, while they have plenty of machine learning components, e.g. for vision, SLAM, and so on, they are largely hand-coded, rule-based systems. Just like the good old days of GOFAI.
>> The whole notion of "we need to know what intelligence is exactly to reproduce it" is completely and utterly wrong.
Modern DRM for video and audio is such a strange construct.
It never stopped a thing. Clearly, it only exists to cover someone's arses and check some boxes off the requirement lists.
And yet, some people put actual effort into integrating it, and keep shipping mandatory DRM modules that run with deranged levels of privilege in places like TrustZone. They keep restricting some browsers and phones from being able to view Full HD content - despite that same Full HD footage being on every shady pirate streaming website that runs on ads for online slot machines and penis enlargement pills, and the 4K versions of that very same content being available day 1 in any torrent search engine. Because some cheeky madman somewhere has a few exploits, and one exploit is all it takes for DRM to stay broken forever.
Punish the legitimate users, and completely fail to deter the pirates. Security theater at its finest.
Every time I read of how modern economics eliminate waste and inefficiency, this kind of DRM stands out as a counterexample. It never worked and never will - nonetheless, here it is.
Some people keep writing and signing licensing deals with those stupid requirements that don't touch reality. Other people keep needing to fulfill them. And so, the strange useless cover-your-arse-ware lingers in every device, like the smell of stale piss in a public toilet. With no care for how unwelcome it is.
The tech people have done a terrible job of using their leverage.
They're inventing the cool hardware and infrastructure, but for some reason, they let the content people dictate terms. I want to see the LGs and Samsungs of the world announcing "we're making this amazing 16k OLED panel, and the only interface it has is unencrypted DisplayPort. If you don't want to show your precious movies on it, there are still stock traders who will buy it to fill with graphs, or programmers who will fill it with StackOverflow tabs and vim windows."
Sure, Sony could try to make some sort of sealed-box viewing system that never lets the raw bitstream out - by the darkness, they've spent the last 30 years trying - but most of the content firms don't have the technical chops or the market power to make it happen. I can fully imagine them trying to sell multiple different set-top boxes, each of which is only capable of decoding one studio's DRM.
On the other hand, what ended up liberating the music market wasn't some grand audiophile product, it was the market full of $29 no-brand/minor-brand "MP3 players". It was such a fragmented market, running random bare-metal firmware on the cheapest MCUs available, that nobody except Apple could possibly make a play out of selling anything but DRM-free content.
>and keep shipping mandatory DRM modules that run with deranged levels of privilege in places like TrustZone
What's "deranged" about TrustZone? It's just a way to allow code to be executed in a tamper-proof way. Advocates like Stallman might object to this on the basis of "freedom to tinker" and "user control", but it can't steal your data, which is what "deranged levels of privilege" sounds like.
Moreover it's not too hard to imagine DRM implemented in a way that doesn't have those issues. The most obvious example would be some sort of dongle that handles decryption and forwards it to a TV. In other words, a chromecast. It'll still be a black box, but I doubt anyone seriously cares. You can make a case about how your computer or smartphone should be "open", but the case is far less persuasive for a media dongle.
In ARM, TrustZone[0] is a higher level of privilege than hypervisors (EL3 vs. EL2); it's morally equivalent to x86 System Management Mode. That means it categorically can steal your data. There's nothing EL2 code can do to prevent inspection or manipulation from a malicious EL3.
A less awful design would have been to keep the security code at EL2 and have CPU hardware that can isolate two EL2s from one another[1]. This is ultimately what ARM wound up doing with S-EL2, but you still need to have EL3 code to define the boundary between the two. At best the SoC vendor can design a (readable/auditable!) boot ROM that occupies EL3 and enforces a boundary between secure and non-secure EL2s.
[0] Or, at least, TrustZone's secure monitor. TZ can of course run secure code at lower privilege levels, but that doesn't stop a TZ compromise from becoming a full system compromise.
[1] If you're wondering, this is morally equivalent to Apple's guarded exception levels.
Oh, definitely. They could put their foot down and end this pointless charade swiftly. But if Google and Apple had the balls to go against big media, the world would look very different.
They are, in fact, the big media themselves now. They have the power, and more than enough of it. No streaming service can afford to skip having an app on iOS or Android - all Apple has to do is crack the whip. Say "this DRM is no longer compliant with our device policy and will be phased out by 2030" and there goes that.
But they still act like they're weird web teens who can't raise a voice against the big media boys without getting bullied for it.
That, or they believe this DRM charade serves them - and user experience can go suck a dick.
FYI Samsung was paid by MS to add DRM to the Galaxy devices ~2010. Source: I was part of the team that had to implement the customer-facing part, carrier billing integration, and backend 4-way revshare accounting in zero time. Harder than it sounds, and probably never repeated since: unless you're preloading, it's against the terms to introduce payments on Android. IIRC the real heroes were the Indian embedded engineers who were 10x better than the Koreans.
If you're someone building an app/service that uses licensed content, you're going to be subject to their demands put forth in the agreement. If they say you use DRM, you use DRM. You can be against DRM, but you'll have an app/service with lame content.
Yes. The demands are stupid and useless. Nonetheless, some idiot put them down and into the licensing agreement. And that makes you the idiot tasked with fulfilling them.
> how modern economics eliminate waste and inefficiency, this kind of DRM stands out as a counterexample
Ironically it's a product of the made up concept we call intellectual property that legal teams like to "protect" because they can ask the government to enforce their monopoly over the idea.
How is it that articles about DRM have people like you commenting who are critical of IP (I'm not disagreeing with you on that), but articles about AI being trained on GPL code, books, and art are full of people complaining about IP rights not being respected? Why don't the commenters cross-pollinate, so the pro-IP guys can come here and say "We need DRM to protect artists' livelihoods" and the anti-IP guys can go there and say "AIs are doing God's work making information free like it wants to be"?
Because both are really just totems for an actual discomfort about rent-seeking capital operating in abstract realms that just tangentially touch some relatable life experience.
Oh. Now that explains basically all the popular but inconsistent or controversial beliefs people have. I guess it's why you can't reason with someone about a political issue - the issue isn't the thing that's important to them, it's something more abstract that it represents.
Most of those Stallmanesque nicknames feel kinda childish, but this one is so spot on that it always takes a moment for me to realize that it's not how it was originally named when I see it being used.
> It never stopped a thing. Clearly, it only exists to cover someone's arses and check some boxes off the requirement lists.
Yes, and traffic lights and speed limit signs have no physical mechanism of stopping a driver directly, those who violate them escape without consequences 99%+ of the time, and the 1% that get caught are only penalized after they physically did so already. Clearly, security theater at its finest.
To go even further, only 54% of murders get cleared. 46% are never cleared. Which shows, clearly, the rule against murder is ludicrously ineffective security theater. What's the point of a law that only works on a coin flip, am I right?
If your law enforcement does as much to prevent or deter murder as DRM does to prevent or deter video piracy, oh boy do I not want to live in your city. I'm not sure who would. Maybe all the serial killers?
Imperfect does not mean ineffective. Every time you make something more difficult it reduces the number of people who will do it.
Pardon my 1990s metaphor, but:
* If you have no DRM and people can just share the install disk, they will do that and piracy will be universal
* If you implement a CD check, yes, people with CD burners can bypass it but those are far fewer. Yes, industrial shops can mass-produce pirated CDs but not everyone is willing to buy those.
* If you implement even more stringent restrictions such that duplicating the CDs is significantly harder (to continue the metaphor, do something weird with the sectors that requires CloneCD instead of more generic ISO-ripping software) and now you're down to people with specialized hardware/software
* If you go further and implement software DRM checks, they can be bypassed, but now we're down to the portion of the market willing to download sketchy crack programs that totally aren't viruses, the host of the website swears. This is a *much smaller* group than those that would just grab an official install disc from their friends.
etc., etc. These measures do not have to be perfect to be effective. There can still be pirated copies available, but if the effort to get to them is sufficiently higher than buying the official copy (and that threshold is different for different people) they have served their purpose.
Most techie people I know ripped their DVD collections. Many ripped their Blurays but plenty didn't because it requires specialized software to get around the DRM. Only a handful of them have ripped their UHD discs which require specialized software AND specific hardware AND flashing a specific firmware on that hardware.
The vast majority of DRM protected content (or at least the majority by watch time) is available in UHD via torrent in a matter of hours.
People like to stay away from torrents, because it carries significant risk in many jurisdictions.
But the only reason UHD versions are only available via torrent and often not as streams or downloads is bandwidth cost.
I can't see how it has anything to do with DRM.
The only thing it maybe cuts down on is sharing within friend groups. But even then, it only takes one person to figure out how to set up a VPN for torrenting.
The whole point is that if it is so easy everyone can do it without asking, it will be more widespread than if there are hurdles in the way, no matter how minor.
"Cutting down sharing in friend groups" is exactly what they hope to achieve.
I have so many people watching off my Plex that I should start charging them for a second ISP line. And most of my friends are not technical people. This is my way of saying that streaming is available as well.
>>Most techie people I know ripped their DVD collections. Many ripped their Blurays but plenty didn't because it requires specialized software to get around the DRM. Only a handful of them have ripped their UHD discs which require specialized software AND specific hardware AND flashing a specific firmware on that hardware.
>The vast majority of DRM protected content (or at least the majority by watch time) is available in UHD via torrent in a matter of hours. People like to stay away from torrents, because it carries significant risk in many jurisdictions.
Sounds like you're proving his point? If stripping DRM were so trivial that anyone could pop in a bluray and rip it (like ripping CDs in iTunes), piracy would arguably be far worse. Pirates today have to brave shady torrent sites and the risk of getting C&D letters. Asking your friend to make a copy is far more accessible.
>No. The bottleneck isn't "getting the files", it's sharing them.
Is it that hard to upload a file to Google Drive and share a link? Is your model of the average person a bumbling idiot who struggles to do anything other than opening TikTok and flicking up?
With streaming content, the barrier to just copying it is already as high as pirating. You don't just have a file you can email to your friend -- you have to install and use software to capture the video and then handle the big file that results, on your phone, which is awkward. And that just gives you one movie which in isolation is barely worth anyone's attention to begin with. By the time you've figured out all that, you could have just figured out how to torrent, or even easier, find a free Chinese website that streams the pirated content to your browser just like the original service.
Pretty much. The path of "figure out how to screen capture the entire DRM-unprotected movie as a video and send that entire file" has about the same level of resistance as "find a link to a pirate streaming site that already has the movie on it and send that link". Maybe more.
>The path of "figure out how to screen capture the entire DRM-unprotected movie as a video and send that entire file" has about the same level of resistance [...]
The biggest flaw with this logic is that screen capturing tools specifically don't work on DRM protected content. Moreover if you're trying to imply making a screen recording is some sort of black magic to normies, you must be living in the 2010s. Nowadays both iOS and Android have built-in screen recorders, and on desktops you can use something like loom, which works off a browser.
The biggest flaw with your logic is the utter lack of it.
If I could rip K-Pop Demon Hunters with a screen capture app to obtain a file I could share with a friend, I still wouldn't do it. Because finding a torrent is simpler and faster. I would get a very similar file, but so much sooner, because I wouldn't have to keep the screen running at 1x speed for the full duration.
And finding a shady website that has it available is simpler and faster still.
>If I could rip K-Pop Demon Hunters with a screen capture app to obtain a file I could share with a friend, I still wouldn't do it.
Well no, because the lack of DRM wouldn't just mean you can manually screen record Netflix. It also means you (or someone else) can write an app that screen records Netflix for you, or skips that step altogether, similar to something like yt-dlp. After all, if somebody wants to rip YouTube (DRM-free), they don't screen record it, they find some random website/tool off Google.
YouTube is not just DRM-free, but cost-free. One of the things you can pay for is "enhanced bitrates", and while you can yt-dlp them if you auth (maybe?), you won't find the random download sites offering it.
Even if money is no object, if you want to watch bluray-quality 4K content your only choice is to buy the physical media and get it shipped to you (and then use some horrible proprietary player interface). I'm not aware of any streaming services offering the same bit-rates at any cost.
In my case, the in-browser DRM is what is making things more difficult. Whenever something uses the DRM checks, one or both of my monitors turn off. I am not interested in troubleshooting this beyond disabling DRM in my browsers. I don't generally pirate any media, but it might actually be easier than troubleshooting this hardware problem.