In a time not so long ago, when the shape of the global computer network was still an open question and everything seemed possible in cyberspace, concern arose about what was being published online.
But there was also a fear that this concern could stifle the thriving idea that was the Internet.
The response included a United States law, which contained what are known as “the words that created the internet” in its section 230:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
It is, in essence, a liability shield: no matter what happens, who gets hurt, or what damage is done, tech companies cannot be held responsible for what happens on their platforms.
And it is also what has allowed the internet to be what it is, for better and for worse.
Section 230 is unique.
No other jurisdiction in the world has such broad immunity for online services.
And because, not coincidentally, the US is home to some of the largest interactive computer services on the planet, other sovereign countries have a hard time passing laws of their own to rein in the tech giants.
Over time, however, those 26 words began to cause discomfort at home.
Democrats believe they have allowed falsehoods and hate speech to spread online, while giving internet platforms no incentive to act swiftly against them and no way to hold them to account.
Republicans, for their part, blame Section 230 for censoring conservative views and giving online platforms too much power over the content featured on their sites.
From President Joe Biden, who called on Congress to “remove the special immunity for social media companies and put in much stronger enforcement,” to former President Donald Trump, who declared “We have to get rid of Section 230,” politicians of all stripes have railed against it.
However, those handful of words that created the internet are still intact.
How did all this come about?
2 cases, 2 verdicts, 1 result
As the internet went from being something only academics used to a window on the world for everyone, the first commercial online service providers appeared.
They were simple text-only bulletin boards where people could post and access information.
One of the main ones was Prodigy, and as more and more people went online to read news, share recipes or express opinions, its administrators noticed that obscene, insulting and false messages were also being posted.
Prodigy decided it should moderate, that is, delete the messages that crossed the line.
One day, an over-the-counter brokerage house called Stratton Oakmont sued Prodigy, claiming that someone had used the platform to smear it, branding the company a criminal organization and its president a thief involved in scams.
In court, the firm’s lawyers argued that because Prodigy employed moderators, it was responsible for every defamatory message users posted.
And the judge agreed: in his opinion, Prodigy exercised editorial control, much like a newspaper.
Prodigy had to pay US$100 million.
Years later, it would emerge that those posts were not defamatory at all.
Stratton Oakmont and its president defrauded many shareholders, several of its executives were jailed, and the firm closed in 1996.
But that was not yet known and, as far as what would happen with the internet, it did not matter.
What did matter was that, right around this time, there was another defamation lawsuit against another service called CompuServe, which also offered public forums.
But because CompuServe had decided not to moderate what was posted on its board, its case was dismissed.
In effect, the law was saying that if a website policed its content and enforced rules, it was responsible for everything each user posted, but if it turned a blind eye, it was off the hook.
When Chris Cox, then a Republican representative from California, learned of these rulings, it struck him as the wrong way to regulate a new medium of communication he knew would be vitally important.
So he turned to his friend Ron Wyden, then a Democratic representative from Oregon and now a senator, and together they set out to find a better way.
A couple of days later, they had finished drafting the paragraph containing the words that would prove essential to building the network we know.
no gatekeepers
“Today, our world is being rebuilt once again by an information revolution,” then-President Bill Clinton said at the Library of Congress signing ceremony for the Communications Decency Act of 1996.
“This historic legislation recognizes that with freedom comes responsibility. (…) It guarantees the diversity of voices on which our democracy depends. Perhaps, above all, it improves the common good,” he said.
And the controversy was triggered.
But not because of those 26 words that appeared in section 230, but because the law tried to regulate both indecency and obscenity in cyberspace.
Free speech advocates successfully argued that speech protected by the First Amendment, such as novels in print or the “seven dirty words” (which were bleeped out on TV), would suddenly become illegal when posted online.
Critics also pointed out that the law would have a chilling effect on the availability of medical information.
With online protests and court challenges, most of the Communications Decency Act was soon struck down.
But not the then-seemingly innocuous section 230.
At the time, almost no one understood its implications.
It was nothing more than an adaptation of a law that protected bookstore owners, who in the past had been held responsible for selling books containing “obscenity.”
In the 1950s, the Supreme Court ruled that since booksellers could not be expected to read all the books they stocked, it was unfair to prosecute them for something written in one of them.
But in the case of books there were publishers who could be sued and therefore acted as gatekeepers.
On the internet, those barriers to publication didn’t exist: not only were there no gatekeepers, there weren’t even doors.
quite a puzzle
Section 230 does not protect companies that violate federal criminal law, or those that create illegal or harmful content or violate intellectual property rights.
But it is a shield, because it basically says that when harmful speech occurs, the person responsible must be the author, not the service that hosts it.
And it is also a sword, since it allows content providers to moderate and decide what is allowed on their platforms, as long as they act “in good faith”.
In the face of exponential user growth, being able to moderate websites as well as possible without the constant threat of being dragged into court led to the creation of the vast online ocean we navigate today.
Nothing, from eBay and Wikipedia to Facebook, Airbnb, X (formerly Twitter) and Google, would be the way it is without that laissez-faire attitude.
But just as such freedom has fostered wonderful things, it has also given cruelty free rein.
And when you think in extremes, it’s easy to conclude that Section 230 should be amended, if not repealed outright.
Few would oppose the removal of everything that facilitates child sex trafficking or fake news, cyberbullying, bias against minorities, scams and much more.
This is why content providers are constantly under pressure to take action.
But defenders of Section 230 have always insisted that this be done without touching the statute, fearing that once it begins to be chipped away, it won’t end there: many will push for more.
And many have. Both politicians and ordinary people, in Congress and in the courts, even in the Supreme Court.
To understand why it remains firm despite so many attacks, it is useful to imagine what would happen if those 26 words were erased from history.
Once again, content companies would be faced with the dilemma of behaving like CompuServe or Prodigy decades ago.
One option would be to not moderate and open the doors wide to everything so as not to be responsible for anything.
But that probably wouldn’t sit well with those who provide the money that oils the wheels of the Internet: advertisers.
If they opted for moderation, given the sheer volume of content, firms would have no choice but to resort to blunt, broad and overly cautious algorithms that, regardless of context, would ban information based on keywords.
That would eliminate, for example, racist or misogynistic slurs, but it would also hinder movements like #BlackLivesMatter and #MeToo, since, with the blindness of artificial intelligence, those algorithms would flag banned terms in victims’ testimonies.
With such extreme moderation, many threads and forums would become a risk that might not be worth taking.
When you start judging what content is permissible, you can end up in a situation where everything from jokes and irony to opinions we may dislike, but which should be part of a free and open internet, simply disappears.
On the flip side, removing section 230 would also place a significant burden on smaller platforms like Etsy and Yelp that host user-generated content and lack the resources of sites like Google or Facebook.
Wikipedia, a non-profit platform, would probably be unfeasible.
Startups, the small companies that could become the next TikTok, would find it extremely difficult to compete, since they would need resources not only to guard against anything that offends anyone, but also to defend themselves whenever someone accuses them of failing to do so.
Thus, we would be left with the already known but possibly unrecognizable titans, since imposing barriers to what people publish could turn the Internet into something more similar to television or newspapers, in which communication is one-way.
For all of this and more, there is no clear way to address section 230 without destroying the internet as we know it.
It’s not that no one wants to change the situation; it’s that no one is quite sure how.
Those 26 words, which came into force 27 years ago, remain in force because they have something in common with democracy as described by Winston Churchill…
“No one pretends that democracy (or Section 230) is perfect or all-wise. Indeed, it has been said that democracy (and Section 230) is the worst form of government, except for all those other forms that have been tried from time to time.”
BBC-NEWS-SRC: https://www.bbc.com/mundo/articles/c06186yeg21o, IMPORTING DATE: 2023-08-26 18:10:06