I’m deeply concerned about disinformation; it’s a major problem. Politics has always had spin, many issues are complex, and it can be easy to state things too emphatically to press your case. Accusations of lying are an everyday part of politics. But recently, flat-out knowingly lying with the specific intent of deceiving people has become normalised. It’s a serious threat.
This is precisely the wrong way to tackle it though. We cannot ever allow government to control what can or cannot be said, outside narrow limits such as incitement to violence. Making the case for the truth will just have to be done the hard way.
Fortunately it looks like this is only 2 congresscritters, not “House Democrats” generally. There are at least a handful of utter wing nuts on both party benches, so let’s put this in perspective.
The main problem with social media services is algorithms that drive engagement by turning people’s feeds into an ever more extreme echo chamber, whether it’s lefties being zombified into SJW snowflakes deplatforming people on campuses, or QAnon turning people into alt-right political flat earthers. That’s what they need to address; picking and choosing opinions to block is a fig-leaf move that’s more likely to backfire than improve anything. It’s a hard problem though. What do we do about these engagement algorithms? I’ve no clue.
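To make concrete what “engagement algorithm” means here, a deliberately toy sketch (not any platform’s actual code; the `stance` field and the scoring function are invented for illustration) of a feed ranker that sorts purely by predicted engagement, and therefore keeps surfacing whatever most resembles what the user already clicked:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    stance: float  # made-up ideological position in [-1.0, 1.0]

def predicted_engagement(post: Post, history: list[Post]) -> float:
    """Crude stand-in for an engagement model: posts close to the
    average stance of what the user already engaged with score higher."""
    if not history:
        return 0.0
    avg = sum(p.stance for p in history) / len(history)
    return 1.0 - abs(post.stance - avg)

def rank_feed(candidates: list[Post], history: list[Post]) -> list[Post]:
    # Pure engagement ranking with no diversity term: every refresh
    # favours content even closer to what the user clicked before,
    # which is the echo-chamber drift described above.
    return sorted(candidates,
                  key=lambda p: predicted_engagement(p, history),
                  reverse=True)

if __name__ == "__main__":
    history = [Post("clicked earlier", 0.6), Post("also clicked", 0.8)]
    candidates = [Post("moderate take", 0.0),
                  Post("similar take", 0.7),
                  Post("more extreme take", 1.0)]
    for p in rank_feed(candidates, history):
        print(f"{p.text}: {predicted_engagement(p, history):.2f}")
```

In this toy version the “similar take” always wins and the “moderate take” always loses, which is the narrowing effect being complained about; real ranking systems are far more complex, but the incentive is the same.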
Take the politics out of it — unaccountable, unquestioned mass communication is almost always bad.
Mass media needs the fairness doctrine back to take the carnival show out of the news. Social media is no exception.
The current model basically neuters editorial discretion and creates a “Team A” vs “Team B” environment that is bad for everyone. These problems started in niche media like talk radio and eventually became locked in because it’s an easy way to make money. The problem is it’s a race to the bottom, and outlets like OANN, RT, etc are really self-sustaining propaganda outlets. The NY Post has an editorial voice, but their news product isn’t fiction.
On the internet, if you give Facebook, Google, etc rules, they will develop algorithms to comply. IMO, regulation in the space would improve the quality of engagement and make them money. P&G won’t buy ads associated with flat earth people, and they pay more than the gold coin, prostate pills, crazy pillow people, etc.
> Mass media needs the fairness doctrine back to take the carnival show out of the news. Social media is no exception.
https://en.wikipedia.org/wiki/FCC_fairness_doctrine ("The fairness doctrine had two basic elements: It required broadcasters to devote some of their airtime to discussing controversial matters of public interest, and to air contrasting views regarding those matters. Stations were given wide latitude as to how to provide contrasting views: It could be done through news segments, public affairs shows, or editorials. The doctrine did not require equal time for opposing views but required that contrasting viewpoints be presented.").
I would love to see MSNBC airing opposing viewpoints on controversial issues. They could have someone on to explain that Obama was putting undocumented immigrant children in cages, defend Hobby Lobby, etc.
> *I would love to see MSNBC airing opposing viewpoints on controversial issues.*
ME TOO.
Something that drives me nuts about the current political climate is that some people are so sure of their views on seemingly every hot topic. I think this is because of a complete lack of discussion of any opposing viewpoints, which I believe is fundamental to actually understanding an issue. If you refuse to consider why people think differently, how can you possibly engage with them? Isn't the goal of any sort of political activism to get more people to vote the way you do?
Unfortunately I don't think it would work out very well, given the current media machinery. I find cable news completely ridiculous as a whole, but the rare cases where they do bring in someone to discuss an opposing viewpoint are really something. (One that comes to mind was Fox having a "union leader" on, sporting a full track suit, big cigar, and several giant rings on his hands, like he was a Sopranos character.)
What a cynical way to look at things. Anti-war activists in the '60s didn't want people to think? Pro-democracy protesters in Warsaw Pact countries, or in Hong Kong? Anti-capitalist activists today, or even vegan activists?
Go to a BLM or antifa protest and tell them you voted for Trump; see how that works out for you. Not every activist is against you thinking for yourself, but that doesn't change the fact that the majority are.
That is precisely the problem. The expectation is abnormal (for the modern age and standpoint of Enlightenment values) and harmful.
Look beyond the proverbial horizon of contemporary America, and see that in civilised societies, the appropriate response would be tolerance at the very least, possibly an exchange of minds in the form of inquiry or discussion.
It’s ridiculous for tolerance to be the very least response to all ideas when there are some ideas that shouldn’t be tolerated at all. Racism, for example, shouldn’t be tolerated by any means. Activists against racism shouldn’t be expected to tolerate the very thing they’re advocating against.
This is shifting the goalpost/improper generalisation, did you notice? Grand³-parent was about going "to a group of BLM or antifa activist protest and tell them you voted for Trump".
If you want to operate as a non-profit, do what you like.
But if you make >$x in profit on a regulated channel (radio, tv, streaming audio or video, or platform of same), then you have fairness requirements to satisfy.
Write it in safe harbor terms. If you do one of x, y, or z, then you gain protection for all of your content.
Where x might be "ensure your feed algorithms mix content according to physically local norms." y might be "produce educational content without a clear position, about an issue of interest to the public." Etc.
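As a purely hypothetical illustration of condition x above (the function name, the 30% ratio, and the "locally popular" pool are all invented for the example, not a proposal's actual mechanics), a compliant feed might reserve a fixed share of each page for content that is broadly popular in the user's locality, regardless of its personalised engagement score:

```python
def mix_feed(personalised: list[str], locally_popular: list[str],
             page_size: int = 10, local_share: float = 0.3) -> list[str]:
    """Blend a personalised ranking with locally popular content so a
    fixed fraction of every page reflects local norms (ratio is
    illustrative only)."""
    n_local = max(1, int(page_size * local_share))
    page = locally_popular[:n_local] + personalised[:page_size - n_local]
    return page[:page_size]

# Example: 3 of the 10 slots on this page come from the local pool.
print(mix_feed([f"personal-{i}" for i in range(20)],
               [f"local-{i}" for i in range(20)]))
```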
> outlets like OANN, RT, etc are really self-sustaining propaganda outlets
Absolutely. They're the "news" realization of the same system social media companies have optimized for.
If the system of rules you have in place incentivizes sewer creatures, change the system. Don't waste time trying to play whack-a-mole on evolutionarily fit species created by your environment.
"Since cable’s infrastructure is privately owned and cable channels can, in theory, be endlessly multiplied, the FCC does not put public interest requirements on that medium."
In the context of cable news and journalism on the internet (basically infinite supply), there's no version of the Fairness Doctrine that would hold up.
It absolutely does not do any such thing. Speech can be well-regulated, just like other constitutional rights. Your right to speak does not mandate a megaphone.
What I described was the law from the 1930s until the 1980s. Our predecessors saw what happened in fascist and communist states and wisely took measures to avoid that.
> Speech can be well-regulated, just like other constitutional rights
Broadly, the opinion of SCOTUS has been that speech cannot be regulated outside of very particular circumstances, and those circumstances have, in general, been shrinking over time (from undefined, to "clear and present danger", to "imminent lawless action", to clarifying that "imminent lawless action" really means right now, and not just relatively soon).
The Fairness Doctrine isn't a regulation on speech, it's a regulation on use of government licensed airwaves.
I think there's a legitimate difference whether the Fairness Doctrine was imposed as a condition of licensing the use of a limited, public resource (frequency spectrum allocation) or as an attempt to regulate freedom of speech or of the press.
Because of the way it was implemented, I believe it was a condition of the use of public spectrum, not a regulation on speech broadly.
First Amendment caselaw is a little muddled; however, content regulation falls between strict scrutiny and per se invalid depending on which way the wind is blowing. See, e.g., Simon & Schuster v. NY (invalidating the Son of Sam law). See also R.A.V. (invalidating a hate speech law).
Per Thomas Jefferson, people are "endowed by their Creator with certain unalienable rights," which has different nuances than "God-given." And it was in the Declaration of Independence as someone else said, only thematically connected to the Constitution.
If you want to cite the Declaration, you should probably quote that entire passage:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.--That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, --That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.
The unalienable rights are in the Declaration of Independence. The Bill of Rights is a set of human-made restrictions on government which attempt to protect those unalienable rights (correct me if I am wrong).
I didn't say that propaganda is solely Russian. But that technique is closely aligned with Russian propaganda, which we can all "admire" in the open (e.g. RT).
The firehose of falsehood is propaganda, sure, but its principles were consciously and deliberately developed by the USSR as a particular propaganda technique. It's partly based on the rhetorical strategies used by Lenin; one of his (Russian) biographers called him the godfather of post-truth politics.
"That’s what they need to address, picking and choosing opinions to block is a fig leaf move that’s more likely to backfire than improve anything."
Counterpoint: before social media, that is exactly how it worked for the previous 100 years. Newspapers, radio stations, and TV stations were picking and choosing opinions to block.
This entire problem is actually being caused by the total removal of editorial discretion from sane people.
I have no problem with platform owners editorialising. Well I do in some sense, if it's done in a shitty way, but that's an opinion and shouldn't be legislated.
Editorialising on a publishing platform you own is a right I'd not like to see infringed on. It's a matter for free citizens to decide. If the public disagree with the editorialising, they can use another platform. It's freedom all round. It's not ideal, but the alternatives are worse.
> This is precisely the wrong way to tackle it though. We cannot ever allow government to control what can or cannot be said
Can you please point me to a proposal for government to control what can or cannot be said?
Not a speculation about what might, in the future, be proposed based on what some people fear based on the questions in these letters, but an actual concrete proposal?
Otherwise, I don't see how “This is precisely the wrong way to tackle it” follows from “We cannot ever allow...” since the only possible thing “this” can refer to doesn't, at all, involve the thing we “cannot ever allow”.
The violent attack on the Capitol was the result of fake news media without anyone ever inciting violence. They simply needed to repeat over and over that the election was stolen; that caused the violence, and people died.
Incorrectly yelling Fire in a crowded theater is illegal and no one is inciting violence in that situation either. There are many commonalities between broadcasting fake news for profit and propaganda, and incorrectly yelling fire in a crowd. Both end up resulting in public safety hazards.
It is a difficult problem to deal with because there is always the possibility of corruption and a reduction in genuine free speech when there is regulation involved. But it is a problem that has to be solved.
It is also no longer social media only, it is Fox, OANN, NewsMax, Sinclair, etc that are increasingly filling up air time with lies solely to make a buck.
Do you have any other options? I don’t care who solves it, but when a company is run for the intent to produce propaganda it’s pointless to ask them to self regulate.
There is always the option to let the issue sort itself out. To allow space and time for a solution to emerge.
We should be careful not to fall into action bias, i.e. the feeling that we need to do something, anything, since that can lead to counterproductive solutions.
I've begun to look at information problems like this not too differently than viruses of thought. Right now these viruses are running rampant because we've never had to deal with anything like them before on such a wide scale. It seems perfectly possible to me that over time we will develop social standards that immunize us from these viruses. More and more people will begin to disregard clickbait, outrage-inducing headlines, etc. They will simply become less salient the more and more we experience them.
Reframing the question at hand around this metaphor: What would an effective vaccine look like for these thought viruses? I'm not at all sure, but I can't imagine any kind of partisan response that would work, since these viruses infect left and right alike, and many people will bend over backwards to argue otherwise. Until we can face that fact honestly, I don't see how we could even begin to have a productive conversation about a solution.
>Reframing the question at hand around this metaphor: What would an effective vaccine look like for these thought viruses? I'm not at all sure, but I can't imagine any kind of partisan response that would work, since these viruses infect left and right alike, and many people will bend over backwards to argue otherwise. Until we can face that fact honestly, I don't see how we could even begin to have a productive conversation about a solution.
Actually, there are effective ways to identify the credibility of information, from the well-known CRAAP[0] test to "lateral reading"[1] and a host of related[2][3][4][5] methods to clarify the credibility of online (or offline, for that matter) material. There are even curricula[6] that address these issues.
And no, none of these methods are partisan. Rather, they give the reader tools to help them determine the validity and credibility of information.
That many folks don't do so is definitely a problem. One of the less involved methods is "lateral reading" as described in [1]. I heartily recommend it, as well as other methods.
They're not partisan, no, but as you kind of allude to ("many folks don't do so") a prerequisite for using them properly is a certain ideological flexibility that is... less common these days. If someone is ideologically possessed, they will use these tools to skewer outgroup ideas but not apply them to ingroup ideas. As the letters in this very post demonstrate even our congress people can't apply them to their own thinking!
And as far as a governing apparatus, I'm not sure these tools really help provide the structure needed to declare any given piece of media misinformation or not.
>And as far as a governing apparatus, I'm not sure these tools really help provide the structure needed to declare any given piece of media misinformation or not.
If my post came across as suggesting that the methods I linked to should be used in some sort of [quasi]-governmental way to determine what is "good" or "bad" information, then I apologize.
My focus was strictly on answering GP's question[0] on an individual basis:
"What would an effective vaccine look like for these thought viruses?"
I was also trying to imply that there are already ways to "separate the wheat from the chaff" that are quite well-known and well thought out.
But they are just tools. And what use someone makes (or doesn't make) of such tools is up to them.
> If someone is ideologically possessed, they will use these tools to skewer outgroup ideas but not apply them to ingroup ideas.
Not just that, but ideologically-possessed people will flat-out reject a truth-finding methodology that results in conflicts with their worldview.
There's no point in giving someone the tools to find the truth if they're so wedded to their "truth" that evidence will not make them change their minds.
>There's no point in giving someone the tools to find the truth if they're so wedded to their "truth" that evidence will not make them change their minds.
Are you making the argument that because some folks won't use them, such tools/methods are useless?
>Perhaps useless in the sense that those who need them the most will either refuse to use them or misuse them.
I'd argue that such tools are valuable to everyone, even those who have no interest in verifying the credibility or veracity of information sources.
As the old saw goes, "you can lead a horse to water, but you can't make him drink." Or both more snarkily and (IMHO) more accurately, "you can lead a fool to knowledge, but you can't make him think."
>Going further... I think it might be fair to say that those tools just don't scale.
I'm not sure what you mean by "scale" in this context.
Determining for oneself the credibility/veracity of information or an information source is (and should be, IMHO) inherently an individual pursuit.
> Reframing the question at hand around this metaphor: What would an effective vaccine look like for these thought viruses?
Well, if stopping disinformation is too hard for various reasons, maybe we can focus on the problem from the other direction: we need to find ways for accurate information to be easier to find and to verify.
If you think of misinformation more like a bacteria, then one of the common causes of bacterial infections is that the regular good bacteria have been wiped out for one reason or another. Antibiotics might help, or they might make the problem worse.
I do think we have some serious institutional problems that are preventing the usual sources of accurate information from operating effectively. News that's become overtly partisan, and an economic model that selects for the most sensational headlines. Scientific research findings that aren't reproducible. Universities becoming increasingly run like profit-focused corporations, and too expensive for many to attend due to lack of public funding. Misinformation is always a problem even in the best of times, but it can also fill the void when there's a lack of accurate information.
I don't know what the solution is. I tend towards more distributed models of information sharing that have fewer institutional gatekeepers declaring who the experts are, but I don't know exactly what that looks like, or how to do that in a way that tends towards more credibility rather than less.
So in other words no, but you don’t think it’s a long term problem?
I honestly think your faith in humanity is refreshing. Personally, I think this is just reversion to the mean, where simply lying was historically the most common response.
> I honestly think your faith in humanity is refreshing. Personally, I think this is just reversion to the mean, where simply lying was historically the most common response.
It's your faith in humanity that's "refreshing" if you think giving the people with the guns the power to police speech is the proper solution.
Don’t put words in my mouth, I didn’t suggest government regulation of speech.
Though I will admit debate rules where each side gets equal time back to back to be somewhat humorous. That’s mostly my love of chaos and the spectacle of such an idea.
Odd, I was initially thinking in terms of a non-governmental organization to regulate the terms Reporter and News, much like how Doctor is a protected title. But that doesn’t seem viable.
The medical monopoly is more expansive than that. It regulates not just the use of the terms but the practice of medicine itself. And those rules are backed by the force of the government. If that’s what you’re suggesting for journalism, it’s even worse. It ultimately has the backing of government, but without the political accountability.
That’s one of many issues, however the peanut butter vs peanut spread line feels like a useful benchmark.
You could call yourself a current events organization and say anything, or call yourself a News organization and be held to some standard. That, IMO, avoids limitations on free speech, as the body of the message is what’s important, not the label. As you say, using government force to require organizations to change their name is distasteful.
However, coming up with a new term like whizphish that currently has no meaning but could gain meaning in this context should avoid stepping on any toes while achieving similar goals. LEED Platinum certified doesn’t directly have government backing, but a building falsely claiming such is simple fraud.
In my previous comment, I am roughly equating incorrectly yelling fire in a crowded theater with broadcasting fake news to millions of people.
If there were a way to clearly differentiate between free speech and fake news, then yes, I would support legal ramifications for spreading blatant intentional fake news created solely as profitable propaganda that causes harm, and treating that as intentionally lying about fire in a crowd.
I don't know what the best organization or process for setting that up would be. After a certain number of complaints, can we transparently look into the owners of the news media, their revenue streams, their involvement with foreign governments, to determine whether a company is a legitimate news source or not? Can we get non-profits and media-freedom watchdogs involved to ensure fairness? Can we get the fairness doctrine running again? I don't see why not.
It was made more specific in Brandenburg v. Ohio, but it was not overturned. I.e., if someone is falsely yelling fire in a crowded theater, which is "speech brigaded with action", then it is a situation where a person could be prosecuted for speech. They used that very example.
Why was this downvoted? garg is quite plainly correct:
> The example usually given by those who would punish speech is the case of one who falsely shouts fire in a crowded theatre.
> This is, however, a classic case where speech is brigaded with action. [...] They are indeed inseparable and a prosecution can be launched for the overt acts actually caused.
> Apart from rare instances of that kind, speech is, I think, immune from prosecution.
This couldn't be more explicit in saying that falsely shouting fire in a crowded theatre is a prosecutable offense. (As long as an injury occurred.)
And "speech brigaded with action" would still have to pass the muster of being "directed to inciting or producing imminent lawless action". One would have to prove a criminal element (almost always a mens rea) in addition to such speech rather than holding the presumption that the words themselves carry a distinguishing factor among other things. You're right to say it's prosecutable (although technically anything is prosecutable), however what you appear to allude to (and I could be wrong in assuming that of your claim) is that "yelling fire in a crowded theater" is prima facie unprotected speech. If so, then that hasn't been true since the Brandenburg test was instituted.
The violent attack on the capitol was the result of the sitting President of the United States claiming the election was stolen and telling them to march on the capitol.
That is decidedly not a social media thing.
Social media gave him the mob, but it was a man with a podium that incited the action.
> The violent attack on the capitol was the result of the sitting President of the United States claiming the election was stolen and telling them to march on the capitol.
If true, this would be much more convincing with a direct quotation and a source, rather than your interpretation.
> we're going to walk down to the Capitol, and we're going to cheer on our brave senators and congressmen and women, and we're probably not going to be cheering so much for some of them.
> I know that everyone here will soon be marching over to the Capitol building to peacefully and patriotically make your voices heard.
> outside narrow limits such as incitement to violence
Why can’t those narrow limits include “flat out knowingly lying with the specific intent of deceiving people?” Sure it’s a very human definition but it’s one with built-in limits on its scope. You can’t use it to ban wrongthink because it has to be from people who know that they’re lying.
> turning people’s feeds into an ever more extreme echo chamber
So yes but also this is done voluntarily. Those algorithms are keying on to the fact that I do not want specific kinds of content. If given the option I’ll even explicitly make my preferences known — I’ve blocked probably a thousand subreddits just to make my /r/all tolerable; Twitter is only usable if you confine yourself to niches. It’s #general or barrens chat that’s the cesspool of nonstop screaming.
Even if there were a simple way to define "lying" in this context and a simple, 100% effective way to prove it, it would only shift the problem, not solve it. You can already "lie" under oath if you formulate something as opinion and nothing contradicts your statement; it's that simple. If people can be sentenced for intentionally lying in writing online, that just puts a target on normal people and makes professional writers team up with lawyers to avoid ever writing anything that could be deemed a lie. That solves no problem at all. People will find a way to tell you the earth is flat anyway. Putting wrong speech closer to wrongdoing is a very dangerous idea in general. We should want more speech, not less, and we get that when speech is tolerated.
The "inciting violence" thing is already very very close to breaking the concept of free speech. And it can also be defeated simply by linguistic tricks. "Kill the ...." would incite violence but "I think we should kill the ..." expresses an opinion. Also this very example here used the same words as something that in fact could incites violence but clearly my post isn't. Now do we really want an AI to detect de difference? Or maybe real human? Moderators who are almost certainly not qualified to judge because a content moderator isn't a judge and should not be.
Precisely -- and let's be clear: the disinformation being discussed here breaks down along partisan lines.
We can barely get republicans and democrats to agree on a budget, what makes anyone think that they could reasonably come to an agreement on objective standards of truth in media? Let alone a process by which those standards are enforced? This is way, wayyyyy outside the realm of reality.
I agree with your point, though I think your example is a bit flawed: I think it's reasonable to disagree on what should be in a budget; there's no one "correct" budget where all other budgets are wrong.
On the other side, facts are facts. Assuming you actually have all the facts (which often we don't), there is only a single truth.
What people call a “fact” for these purposes is a lot broader than what epistemologically qualifies as a “fact.” You can see this with a lot of “fact checking” websites. The second item on the FactCheck.org website is whether reduced wind power caused the Texas electrical outages: https://www.factcheck.org/2021/02/wind-turbines-didnt-cause-...
The percentage drop in wind power megawatts is a “fact.” What “caused” the Texas power outage is a multi-variable system analysis that produces a conclusion, not a fact, under certain specified assumptions. (This is obvious to an engineer: the NTSB spends months investigating what “caused” a plane crash, and issues a report with conclusions, not facts. The notion that some journalist can in a day or two perform a similar analysis on a complex system like a power grid, and report the result as a “fact,” blinks reality.)
There's not only the issue of incomplete information, there's the issue of salience. There are an infinite number of true statements. Which ones do you focus on? Which ones are the right ones to focus on? You can detect bias in reporting not only based on what is said, but about what is not even mentioned.
The New York Times won't run a story sympathetic to liberal individuals pushing back against the excesses of critical race theory. Fox News won't report on how, even though there were anomalies in the election, none justified stopping the transfer of power to the Biden administration. Both are bullshit.
Then don’t have them. Having lie checkers on the internet is a moronic idea. This rule is to stop organized, coordinated disinformation campaigns. It’s to take down sites whose whole purpose is to literally make up news stories, present them as fact, and spread them on social media.
It was meant sarcastically. In case you haven't figured out how awful the fact-checkers are.
And no, if you ban "disinformation" you ban free speech. There is no way to figure out if a flat earther publishes something for disinformation purposes or if he really believes what he writes.
Disinformation is best fought by debunking it, not by removing it.
Most people have heard of the flat earth theory, but most don't believe it, because they can inform themselves. That's how it should be. No need to "save" everyone through authoritarian measures. The risk here is way higher than having to deal with the occasional forever flat earther.
I used to believe that as well. Then we did real world practical experiments over the last decade. It's clear most people don't give a shit about informing themselves and will readily believe just about anything.
Not saying the solution is regulating what can / cannot be said, but this idea that free speech is the ultimate thing isn't working when you have groups that can spend troves of cash making their disinformation legitimate enough for the masses.
Both you and I probably believe at least one, maybe more, of those things, by the way. It's not all outlandish nonsense; sometimes it's reasonable enough to believe at first glance, and you don't bother looking it up afterwards.
I accept this as unavoidable reality. The only way to fight this problem is education. I'm not worried much about the fact that everyone "believes" some stuff that is actually not true; this has been the case for as long as humans have been alive. In times when people had the opportunity to debate the different "truths", humanity made progress. In times or societies where this was not possible, progress was slow. We don't need, and will never get, the absolute truth, but we need the freedom to search for it. There is no guarantee that we will find it, and even the opinion of the majority can be wrong, and often is, but it will self-correct as long as pointing out the wrong is allowed. There will always come a time when the wrong becomes obvious to the majority. Pointing out the wrong will itself be labeled disinformation if the people who decide are in the wrong. We cannot take that risk.
So back to the start: you can't have an authority who decides. What's right or wrong has to be proved or debunked, and it cannot be removed afterwards, as that would invalidate the debunking.
This is a slow and inefficient process. We should probably focus on making it better, because it works; it just does not work as well or as fast as the modern world would require. The shortcut "solutions", however, will most likely not work at all and could potentially cause more harm than good.
I fully agree with you that education is the ultimate solution, but in the mean time, wtf do we do about the entities that have wealth and power, and are able to influence millions of people on just about whatever the fuck they want?
What do we do when the things they choose to influence the population on are no longer just "the rich getting richer" but become actual existential threats? When they get dictators elected, make climate change worse, endanger lives by producing healthcare misinformation, etc?
Does it matter that, over the long term, there are more idealistic freedom of speech ideals if we don't live long enough to even get to the long term?
I don't know a solution to all of this either. But I'm worried we'll make it worse with bad solutions.
Certainly you don't want to give these powerful people the tools to become more powerful by implementing an authoritarian system against disinformation. It's rather obvious that if these powerful people cannot circumvent that system, they will become the system. If they can manipulate millions, they can surely manipulate or replace the few "decision makers" too. Then you have powerful people spreading disinformation who also have the power to remove any critics simply by labeling them disinformation.
Wait no. That’s not how this works. There’s no determination of fact. It doesn’t matter whether what you said is true or false — this isn’t a rule against being wrong. It’s a rule against someone speaking something they know and believe a priori to be false with the intent to mislead people.
Like it’s literally the same idea as fraud, but applied to misinformation. If you believe that climate change is a hoax, then you’re fine, tell the world. But if you make up a study and data “disproving” climate change and then circulate it on Facebook, then you’re not.
> Unless you have a mind-reading device, there is no way to be sure what somebody believes.
In general, we are comfortable doing this in at least some contexts. The legal system in almost every case attempts to ascertain intention to satisfy the mens rea of a criminal act. They don't have mind reading devices, but they do have expert witnesses such as psychologists and doctors, and testimony.
> It’s a rule against someone speaking something they know and believe a priori to be false with the intent to mislead people.
If one doesn’t believe in the Holocaust, yet publishes erudite webpages with the intent to mislead others (at least from his POV) into thinking it really happened, would that be a problem?
If yes, you are consistent, albeit a bit crazy.
If no, your rule reduces, once again, to a focus on the falsity of the communication as opposed to the writer's intent.
It would be a problem! I don’t think people would care as much because it’s the same as stealing a balloon on free balloon day but you still have a guilty mind and had the intent to deceive people regardless of your success at it.
It becomes a bigger issue if the evidence on the site is made up but I won’t assume that.