
Section 230 of The Communications Decency Act


What is Section 230 of The Communications Decency Act?

“Platforms are more or less immune from liability arising out of user-generated content.”


It is important to preface this article by stating that Section 230 is currently one of the most controversial topics in America. Everyone has a different opinion on the future of the Internet and the role government regulation could potentially play in it. The top of this article shares my opinion on why big tech may ultimately have too much power and the role Section 230 plays in it. The second half of the article features an interview with a top attorney on Section 230.

My opinions are my own and do not constitute legal advice. I have made significant updates to the article since its initial publication to include opinions from critics of repeal who make the case in favor of Section 230, providing more insight into both sides of the debate.

Does Section 230 protect users?

Fact or Fiction: “Section 230 of The Communications Decency Act is a legal shield of immunity and protection for Social Media Companies.”

Fact. And Section 230 does protect users, though not to the same extent as it protects the Internet platforms themselves.

Section 230 has two operative provisions. Section 230(c)(1) says that a provider or user of an interactive computer service will not be treated as the publisher or speaker of content provided by another information content provider. This protects Twitter users from being sued for defamation, for example, simply because they engage with a thread in which defamatory content might be found. It also protects Twitter itself from those lawsuits.

Basically, Section 230 makes everyone on the Internet liable for what they say, but not liable for what other people say.

Should the legal protections given to social media platforms be limited?

TLDR: In my opinion, Section 230 protects the platforms more than the users. Legal immunity for social media companies does not protect individual users the way people think it does. It protects tech companies and interactive computer services that host any type of third-party content from getting sued.

Translation: someone trashes your business on Facebook. The social media platform, Facebook, is not liable.

Small businesses are killed by big tech giants every day because of the platforms' failure to moderate or their decisions to moderate incorrectly. Those who control the algorithm control the platform. Whoever programs the platform is the arbiter of truth in the public square.

The opposing viewpoint on this is that Section 230 does protect users from being held liable for the content they retweet and forward in emails. Bad 230 Takes on Twitter argues that Section 230 is not blanket legal immunity and that several court cases have defined the limits of Section 230 protection.

The heated debate on 230: “Section 230 is about liability, not the ability of big tech companies to set their own rules.”

How far away are we from public regulation of algorithms in the U.S.?

In the wake of the 2020 election and in particular, the events preceding the Presidential inauguration, the debate has focused on the role of social media companies’ content amplification algorithms, which generally speaking are optimized to drive engagement rather than civil discourse.
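To make "optimized to drive engagement" concrete, here is a minimal, purely illustrative sketch in Python. Every feature name and weight is hypothetical; this is not any platform's actual ranking system.

```python
# Illustrative only: a toy feed ranker. The feature names and weights
# are invented, not any platform's real model.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # model-estimated click probability
    predicted_shares: float   # model-estimated share probability
    predicted_outrage: float  # model-estimated emotional-arousal score

def engagement_score(post: Post) -> float:
    # An engagement objective rewards whatever drives interaction,
    # including outrage, because aroused users click and share more.
    return (0.4 * post.predicted_clicks
            + 0.4 * post.predicted_shares
            + 0.2 * post.predicted_outrage)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Amplification is just sorting: the highest-scoring posts are
    # the ones the algorithm surfaces first.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in that scoring function asks whether a post is true or civil; it only predicts interaction, which is precisely the publishing discretion the cases discussed next treat as protected.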

Amplification of particular posts doesn't turn companies like Facebook into speakers of the speech their algorithms amplify; rather, such algorithmic activity is an exercise of discretion in the nature of publishing, which Section 230 is expressly designed to protect. See Force v. Facebook. There are also constitutional issues with Congress attempting to legislate what opinions social media companies can and cannot amplify: the First Amendment guarantees freedom of speech and prevents the government from erecting prior restraints on speech, for individuals and corporations alike.

Has Section 230 been amended?

Yes. Section 230 was amended by FOSTA-SESTA in 2018 (see Section 230(e)(5)), which clarified that Section 230 does not apply to specific provisions relating to online sex trafficking and, among other things, broadened the definition of sex trafficking crimes to include "knowingly assisting, supporting, or facilitating" sex trafficking. The response was that most websites, to give FOSTA-SESTA as wide a berth as possible, began to censor sex-positive content and queer communities, removing virtually all sexual content.

There is a risk that further carve-outs proposed in the halls of Congress would result in platforms censoring even more content. For example, removing the protection for political material due to platform bias might result in platforms simply not allowing political speech, and repealing 230 completely might result in platforms pre-screening user comments for potentially offensive content before allowing them to post.

Personal Bias and Internet Law: where do you draw the line?

Critics of Section 230 argue that it allows online platforms to exhibit “bias” without consequence, and that “bias” on online publishing platforms should be regulated out of existence.

There are two major problems with this. First, there is a subjectivity issue: what does “bias” mean and who gets to decide whether a particular moderation action is driven by “bias” or by an objective and uniform standard?

Second, there is the issue of the First Amendment. Online websites like social media companies are modern-day printing presses. Why should the government be able to tell them what content they can and cannot host if the government is constitutionally barred from telling a printer what he or she can or cannot print?

Political bias is difficult to prove, easy to deny, and, at the end of the day, it’s a constitutional right.

AI & Algorithmic Bias

When the person or entity deciding what the facts are has preferences, a political agenda, and money to make from advertising, “facts” are nothing more than opinions. The facts will always be swayed toward where the ad dollars flow and what is most profitable. Bias exists within those who script the code and determine the rules of these social media platforms. This is true from a policy standpoint as well as within the larger context of machine-learning-driven content moderation rules.

Fact-checkers are people. People have political bias and deep-rooted opinions and beliefs. That will always shape what gets counted as a fact, and in many cases, small businesses will suffer because of it. This is much larger than politics; the future of business depends on it. The problem with “fact-checking” misinformation on social media platforms is that it is beholden to whoever the tech gods believe is telling the truth.

My personal opinion is that Google, Facebook, and Twitter should not have the authority to determine what is real vs. fake and what is dangerous vs. helpful. This choice should be left to the platform user. That being said, the choice must fall within legal guidelines and within the existing framework of the Digital Services Act.

Critics argue that no one will advertise on a platform that lets people post, for example, rants about how historical events never happened, and that there’s no legal framework to tell a private company what it can and can’t do outside of existing discrimination laws.

My opinion: Shouldn’t social media companies be open to liability for defamation and slander? How many businesses have been ruined because Meta refuses to do anything about negative reviews? Liability would give business owners their power back; they deserve that, rather than a one-way street.

The other side of the heated debate: Defenders of Section 230 argue that there would be no social media ecosystem as we know it today (no Twitter, no Facebook, no Yelp) if these companies had to carry full liability for what their users post. Repealing Section 230 would upend virtually every online platform’s business model.


47 U.S.C. § 230, a Provision of the Communications Decency Act

How does Facebook currently censor content and what role will Trump’s social media executive order have in altering that?

Facebook currently censors content for violence and other categories that can be found in its updated Community Standards document, which it has made publicly available. Facebook states that it is currently working to reduce the spread of fake news with machine learning that “predicts what stories may be false” and by reducing the distribution of content “rated as false” by independent third-party fact-checkers. There are several problems with this.

Every human being has a level of internal bias. So, whatever the fact-checkers say is false is flagged as false. Meaning: they are engaging in fact-checking, but everyone’s views of facts are distorted by their own personal opinion. What I see as a fact may not be what you see as a fact because of the theory of internal bias. This is why it is critical for Facebook and big tech companies to decide if they are truly publishers or if they are neutral parties. You can’t be both.

In the future, this problem will only get worse as artificial intelligence becomes the primary tool to execute social media content moderation at scale. AI will be used as a Trust and Safety proxy layer to determine fact from fiction. Currently, we are focused on the internal bias of humans. But what about the bias of AI if the AI is trained on skewed training data?
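As a toy illustration of skewed training data (the data, labels, and outcome below are hypothetical, and this sketch is nothing like a production moderation pipeline), consider a classifier trained on labels from human moderators who over-flagged one topic:

```python
# A minimal sketch of bias inherited from skewed training labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Suppose labelers flagged every training post mentioning a protest,
# regardless of whether the post was actually harmful (1 = remove).
posts = [
    "join our peaceful protest downtown",  # labeled: remove
    "protest signs available here",        # labeled: remove
    "cute cat photos from the weekend",    # labeled: allow
    "our bakery has fresh bread today",    # labeled: allow
]
labels = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)
model = LogisticRegression().fit(X, labels)

# A benign new post inherits the labelers' bias: the model learned
# that the topic itself, not the harm, is the signal.
new_post = vectorizer.transform(["students hold peaceful protest"])
print(model.predict(new_post))  # likely [1]: flagged, though benign
```

Scale that failure mode up to millions of posts and hundreds of models and you get exactly the wrongful takedowns described next.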

The other problem is that AI often gets things wrong. Machine learning is rarely perfect; the machines are learning on the job, often at your expense. We saw this several times during the pandemic, when content moderation staffing was down at Facebook and content was incorrectly flagged and taken down. It was later put back up, but we learned a powerful lesson: AI is still learning. Mistakes will be made while these machines are learning, and those mistakes will be made on you and your business until the machine gets the algorithm right.

But as we know, there are hundreds of algorithms and hundreds of machine learning models running at any given time. So what happens if the machine never gets it right? And what happens when the machine is more right than wrong? Who is liable for those mistakes?

What would happen if Section 230 went away? What limits does Section 230 of the Communications Decency Act put on libel suits against social media platforms? Can Trump revoke Section 230?

Trump’s social media executive order could impact social media platforms’ liability. Currently, social media platforms are not responsible for the content that appears on their platforms. Revising section 230 could potentially change that.

As it currently stands, many social media platforms are immune from liability arising out of user-generated content. Gutting Section 230 would remove this immunity. Social media bias lawsuits continue to fail in court largely because of the protection Section 230 offers. If it were altered, that could significantly change the trajectory of some of these legal cases in the future.

Could repealing these protections backfire?

Yes. One of the reasons Section 230 was enacted was to prevent varying state and local rules from stunting the development of an open and robust Internet. Section 230 creates a uniform and predictable framework for persons and companies large and small, from the comment section of a lowly WordPress blog to the fire hose of short-form thought that is Twitter, to facilitate open discourse without fear of falling afoul of an obscure state criminal law or tort, or of a foreign actor seeking to enforce a foreign judgment in a U.S. court.

Repealing Section 230 would “pull up the ladder” behind Big Tech companies like Google and Facebook, which can afford armies of lawyers to protect them from frivolous lawsuits, something small companies that want to challenge Big Tech cannot afford. Those who advocate repealing 230 to “punish” big tech are more likely giving big tech exactly what it wants: higher barriers to entry that will entrench Big Tech’s dominance.

Section 230 protects and promotes freedom of expression and a competitive Internet. Critics argue that it should not be repealed without first thinking very carefully about the significant repercussions.

My opinion: big tech companies are largely protected by CDA 230, so if the CDA were rewritten or repealed, the immunity these platforms enjoy could be stripped. If repeal backfired, it would backfire on them, not necessarily on the end user.

The largest backfire could be a deep reduction in Facebook’s market value. That may seem like an abstract concept, but removing Section 230 protections (not just from Facebook but from all social platforms) would expose these companies to potentially hundreds of billions of dollars in liability. The resulting cut in valuation would likely damage millions of pension plans and retirement accounts.

Big tech is now more powerful than the United States government. In a cancel culture era, this is extremely dangerous.

The other side of the debate in favor of Section 230:

Section 230 protects companies like Gab that are trying to compete with Facebook and the other incumbents. If we repeal 230, we will just entrench the tech giants.

Preventing censorship on social media platforms 

Where has Facebook gone wrong in its censorship trajectory?

A case study in failed leadership and brand positioning.

Facebook has become one of the largest news outlets and media publishers in the world.  Do they put users first? Are they a publishing platform? A media platform? An advertising company? Typically, companies are transparent with their positioning from the get-go. Facebook is a company that grew quickly and acquired more power than it ever knew what to do with.

What they started out as when they first launched is not who they are today. This is a common mistake that entrepreneurial companies make: they grow too fast and do not rebrand accordingly. Facebook started out in Mark Zuckerberg’s dorm room and now has the ability to potentially interfere with and impact global elections. It is okay to admit that your purpose or mission as a company has changed; what is not okay is to pretend it hasn’t. Facebook has lost trust with the public after a number of data breach scandals and it has been hard to regain that ever since.

It takes years to build a reputation, but only a second to destroy one.

Is it hypocritical for Facebook to ban protests pertaining to the recent stay-at-home orders when, last year, it profited off Iranian groups sharing “death to America” ads that encouraged protests in Baghdad?

This is a perfect example of why Facebook’s fact-checking standards are inconsistent. Why? Because of human bias. Facebook’s guidelines constantly change and are unreliable. As a New York-based social media marketing consultant, I always tell clients that social media platforms are rented space. You must diversify your assets.

You cannot build all of your virtual property on one social media marketing platform such as Facebook. Why? Because one change in their algorithm or one new community guideline could leave your business in shambles.

Business owners are better off creating owned content on their own platforms rather than relying solely on other people’s platforms. Remember, everything you have built on a social media platform can be ripped away from you at the drop of a hat simply because a content moderator says you violated their community standards or their AI incorrectly flags your page as spam.

The law has not kept up with technology.

We need up-to-date laws that take into account the power of big tech platforms. Updating laws to reflect the new digital ecosystem we live in would always be a good thing. Think of it this way: would a contract you wrote fourteen years ago be good for the way you conduct your business today? No. Why? Because times change. Processes change. People change. The way we do business changes. Contracts, or in this case the law, must be updated consistently to reflect those tech changes. If they aren’t, it hurts more than it helps, because we end up abiding by out-of-date agreements that were not created for the digital environment we live in today.

When people discuss the topic of social media censorship, they need to understand Google’s Quality Rater Guidelines, how they were written, and the role bias has played in them from the beginning. This is a far larger issue than Trump; it can impact regular small business owners, too.

The most pressing legal issue over the next ten years? AI & bias.

When we think about Artificial Intelligence, we must understand: whoever controls the algorithm controls the messaging.

The people who write social media content moderation policy or the QRG control the code, and the machine learning parameters they set ultimately dictate and control the algorithms.

This cannot be overlooked or ignored.
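Here is a hypothetical sketch of what controlling those parameters looks like. Every name and number is invented for illustration; no platform publishes its real configuration.

```python
# Illustrative only: the people who set these parameters decide
# what speech survives. Names and values are hypothetical.
MODERATION_POLICY = {
    "toxicity_removal_threshold": 0.90,  # lower it, and far more posts vanish
    "flagged_demotion_factor": 0.10,     # raise it, and flagged posts still spread
    "topics_always_demoted": {"elections", "vaccines"},
}

def distribution_weight(topic: str, toxicity: float, reach: float,
                        policy: dict = MODERATION_POLICY) -> float:
    """Return the reach a post is allowed (0.0 = removed entirely)."""
    if toxicity >= policy["toxicity_removal_threshold"]:
        return 0.0
    if topic in policy["topics_always_demoted"]:
        return reach * policy["flagged_demotion_factor"]
    return reach

# One edit to this dictionary and the same post goes from amplified
# to nearly invisible:
print(distribution_weight("elections", toxicity=0.5, reach=1.0))  # 0.1
```

Whoever can edit that dictionary, whether a policy team or a training pipeline, decides what the public square sees.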

Can the President shut down Twitter with a social media executive order?

The President can’t shut Twitter down, but he could potentially claim that it is abusing the Section 230 protections that have made it billions of dollars. Section 230 is a safe-harbor provision that says online platforms are not responsible for the content their users generate.

The executive order asked the federal government to look at possible enforcement action. Since it’s not an actual law, President Biden is free to ignore it. The executive order is still there unless the Biden administration formally revokes it.

President Biden revoked Trump’s Social Media Executive Order that targeted Section 230.

READ: “Revocation of Presidential Actions. The following Presidential actions are revoked: Executive Order 13925 of May 28, 2020 (Preventing Online Censorship).”

Can Biden revoke Section 230?

Section 230 is a legislative provision so it can only be revoked by an Act of Congress. If Section 230 disappeared, it would likely result in a deluge of lawsuits being filed against social media companies, who would accordingly begin to aggressively censor all but the most benign content in an effort to protect themselves from secondary liability for various torts or crimes.

How effective is Trump’s social media executive order? 

Getting people to take a hard look at Section 230 of The Communications Decency Act and whether it still makes sense to uphold is a good outcome, regardless of what changes (if anything does change). This law makes very little sense in the social media-driven world we now live in. If someone trashes your business on Facebook today, you can’t sue Facebook. Facebook is not legally responsible, even though they are hosting that content. In my opinion, people deserve the right to go after the person hosting content that is defamatory, and right now, they do not have that right. This isn’t only a political issue; it is an economic one.

Social media companies can control the outcome of your business. So many small business owners have lost the fight against big tech. They have tried to sue and were unsuccessful because the companies have immunity.

If I host a website, and someone trashes your business on my website, I should be responsible for that content. Someone should have the right to come after me for hosting it. As it stands now, Americans do not have that right.

The future of your business rests in the hands of social media companies who ultimately decide what is true and not true about you and your business. I urge business owners to take the power back and really think about the meaning behind this executive order, above and beyond any political party lines or divides.

How should Section 230 be changed?

Social media platforms play a critical role in shaping society.

The Future of Social Media 

Why should big tech platforms have an unfair business advantage and be immune from lawsuits when other businesses are not afforded this advantage? The core of this law has not changed in twenty-four years. Regardless of the outcome of Trump’s social media executive order, it is important that we review this policy to see if it still makes sense for the digital economy and ecosystem we live in. You can’t say on the one hand that you are not responsible for the content that appears on your platform, and on the other hand present a list of 50 reasons why you will delete someone’s content.

The law doesn’t make sense anymore in the current world we live in. The pandemic has pushed digital transformation forward at an epic pace; legal regulations must transform with the times. Contracts have to be revisited, and so do laws that pertain to Internet usage. Can you imagine changing your business contract with a client only once every twenty-four years? The contract you created twenty-four years ago would no longer be relevant to where your business is today. The same is true for social media platforms, which play a critical role in the dissemination of news today. Transparency disclosures and disclaimers are critical as we move forward in an AI-driven economy.


Social Media Regulation: Section 230, social media censorship, and the future of free speech on tech platforms

WATCH: Social Media Executive Order Section 230 Communications Decency Act

Kris Ruby was interviewed on Fox News about Social Media Censorship and Section 230

BONUS: PODCAST: Section 230 explained!

Commentary from two leading digital media experts on President Trump’s Executive Order on preventing Online Censorship from social media marketing expert Kris Ruby and legal analyst Preston Byrne.

“Section 230 serves as a way to protect and immunize web platforms from litigation over user content that is published on the site.”


Kris Ruby: Hi everyone and welcome to the Kris Ruby podcast, a new show on the politics of social media and big tech. I am so excited to be here today with my guest Preston Byrne. Preston, welcome. Preston is a partner at Anderson Kill.  Today we’re going to be talking about Section 230 of the Communications Decency Act, but first Preston, please, introduce yourself.

Preston Byrne:  My name is Preston Byrne and I’m a technology lawyer. I represent social media companies, bitcoin and cryptocurrency companies, and all kinds of other edgy tech companies with a range of issues from financial regulation to free speech issues to law enforcement relations. I deal with Section 230 on a pretty regular basis in practice, which is not something that a lot of lawyers can say.

Kris Ruby: What is section 230 of the Communications Decency Act?

Preston Byrne: Section 230 of the Communications Decency Act is a federal law that was designed to immunize Internet companies from liability for making the common-sense, day-to-day decisions you would expect a company to make if it’s running a web platform.

Back in the early days of the Internet, we had things like message boards, instant messenger, and forums. Congress recognized the problem with the traditional rule: if you had something like a newspaper, let’s say, and someone wrote a letter to the editor, and you republished that letter, and it said something like, “I think Preston Byrne is a scoundrel,” I could then sue the newspaper over the letter because they’ve published content that defamed me. It’s libel or slander, depending on whether it’s written or spoken, and because they’ve defamed me, I have a cause of action against the newspaper.

Congress wanted to prevent that from happening online because it recognized that the Internet was where a lot of people were going to talk. If everybody who conveyed or republished a message, from the moment someone pressed “send” on their keyboard to the moment it appeared on a website, could be sued as a publisher, the Internet couldn’t function; each of those transmissions is a publication in the classical sense of the term.

There are two pieces to Section 230.

There’s Section 230(c)(1), which essentially says that a provider of an online publishing platform isn’t going to be liable for content which is put there by other information content providers, i.e. users, or other platforms that are feeding data into it.

There’s also Section 230(c)(2), which says that the online publishing platforms can moderate, or they won’t be liable for good faith moderation activities which restrict access to offensive material.

So, what that means is if I put up a post saying, “I think that Joe Bloggs is a complete *** and I don’t like him,” and the site decides that content falls outside of its community rules, then it can remove my post without my being able to get money damages for the removal.

Essentially, Section 230 allows Internet companies to allow as much as they want to allow, and to remove as much as they want to remove so that they can craft the user experience in the way that they want.

How Section 230 Changed: Has CDA 230 been revised?

Kris Ruby: When is the last time that Section 230 was updated or changed?

Preston Byrne: Section 230 was updated fairly recently to address sexual abuse and sex trafficking material. There was a congressional act called FOSTA-SESTA (the Fight Online Sex Trafficking Act and Stop Enabling Sex Traffickers Act), which said that platforms could be liable under federal criminal law if they failed to police for sexual abuse content and related material on the platform. As a consequence, platforms like Tumblr, which I’ve never used but which apparently was a very sex-positive web platform, removed a lot of that content, which promptly hurt the company and ruined its user numbers. So that was one change that was made fairly recently, but there hasn’t really been much else that I’m aware of.

Kris Ruby: To my knowledge, Section 230 of the Communications Decency Act has not been changed in twenty-four years.

Preston Byrne: The core of Section 230 hasn’t been changed in the decades since it was passed. Those core immunities remain: Section 230 immunizes platforms from state criminal and civil liability and from federal civil liability. It does not immunize platforms under federal criminal law. So, if there’s a federal crime or criminal statute that pertains to certain types of online content, it will override Section 230. I think that’s what Congress is trying to do in recent discussions about Section 230: figure out whether there are certain additional carve-outs from the immunity that should be made.

The big carve-out that I’m aware of and encounter on a day-to-day basis is intellectual property law. There’s something called the Digital Millennium Copyright Act, where providers of online publishing platforms like Twitter, online forums, Reddit, and YouTube are immune from copyright infringement liability unless someone hits them with a notice of infringement or they have actual knowledge of the infringement. Under those circumstances, they’re then not able to go and say, “Well, Section 230…” because Section 230 is expressed not to have any impact on copyright law, intellectual property law, or intellectual property rights. It doesn’t immunize people from breaches of federal law in those areas. So, the Feds can always take things out of Section 230. They have already taken a few things out of Section 230 in terms of sex trafficking material, but they haven’t really gone after political content or anything like that yet.
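As a sketch of how that notice-and-takedown carve-out works in practice (the helper names below are hypothetical; the DMCA itself, not this code, defines the required steps):

```python
# Illustrative sketch of DMCA notice handling. The platform API is
# hypothetical; consult counsel for the actual legal procedure.
from dataclasses import dataclass

@dataclass
class DMCANotice:
    content_id: str    # the allegedly infringing item
    claimed_work: str  # the copyrighted work asserted
    complainant: str   # who sent the notice

def handle_dmca_notice(notice: DMCANotice, platform) -> None:
    # Once a platform receives a notice (or has actual knowledge), it
    # can no longer plead ignorance: it must act expeditiously to keep
    # the copyright safe harbor, which Section 230 does not provide.
    platform.remove_content(notice.content_id)           # hypothetical API
    platform.notify_uploader(notice.content_id, notice)  # enables counter-notice
    platform.log_notice(notice)                          # record-keeping
```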

CDA 230: WATCH THE VIDEO

Digital Defamation: Facebook and Glassdoor Removal of Negative Reviews

Kris Ruby: One of the issues that I have with Section 230 is let’s say, for example, someone decides to trash my business online. Right now, if they do that on Facebook, I can’t go after Facebook, and based on the current law, I cannot sue Facebook if someone else, say a third party, trashes my company on the website. Is that correct?

Preston Byrne: That’s correct.

Kris Ruby:  That’s crazy.

Preston Byrne: If you look at a country like England, just by way of comparison – in England, you could go after Facebook. You could go after a company like The Financial Times [an English newspaper]. For example, the FT has a comment section on their website and they view themselves as re-publishers of the content. There’s a reason why there are no big European Internet companies – and that’s part of the reason.

What you can do under those circumstances is sue the person who’s actually trashing your business. If you don’t know who they are, you can sue them as a John Doe and then try to ascertain their identity by serving a third-party subpoena on Facebook, asking it to hand over user data. We see a lot of that. You can’t go after Facebook for the content, but there’s still a remedy, so you’re not totally denied one if someone’s using those platforms [to trash your business]. Since those platforms suck up so much data, it’s certainly possible to use a subpoena to ascertain who you’re dealing with and track down the person who’s defaming your business.

But no, you can’t go after Facebook any more than you could go after a message board or a bulletin board on a sidewalk.

Kris Ruby: Tell me how Section 230 works with Glassdoor because every business owner I know has major issues with Glassdoor. This notion of fraudulent reviews that are written by people who are angry and make up different accounts to trash the business owner is a problem. Business owners really can’t do anything about it.

Preston Byrne: They can sue the person who made the statement, and they can try to go after Glassdoor to cough up the information about the identity of that user. I think Glassdoor uses Facebook logins as their Single Sign-On scheme, so it shouldn’t be too hard to find out who made the comments if you want to go through the trouble and expense of bringing a lawsuit.

The issue, of course, is that if you bring a lawsuit, there are a few things to consider.

First, it’s hugely expensive.

Second, in the United States, it’s very hard to prove damages in a defamation case, unless someone really says something egregious. We’ve actually seen some lawsuits against the Southern Poverty Law Center (SPLC) for defamatory statements succeed in the past. For business criticism, not so much.

Long story short, it’s hard to prove and you should be able to just go after the person who’s making the statement rather than Glassdoor. I agree, it’s difficult and it’s a pain, but ultimately Glassdoor is immune and they’re just providing a forum for other people to speak.

If you think it’s worth your while to go after them, go ahead, but in all likelihood, what you’re going to do is draw more attention to the negative comment.

That brings me to the third point I wanted to make earlier: the Streisand effect. You’re not going to get what you want in terms of monetary remuneration from suing someone who just doesn’t like your business, because chances are they’re just some schmoe on the Internet who doesn’t have a lot of money.

Moderating User Generated Content: Is It Legal Under Section 230?

Kris Ruby: Walk me through what would happen if Section 230 was changed or amended. How would that impact this scenario I just gave you with Glassdoor? Would that mean that then that business owner could sue the platform? Is that what you’re saying?

Preston Byrne: I don’t think so. The current proposals are proposals to change The Communications Decency Act. They generally come from the Trump administration or Josh Hawley, who is a junior senator from Missouri. The proposals say something along these lines:

“If a platform puts its thumb on the scale in terms of how it moderates content and if the platform decides to take a particular viewpoint or advance a particular viewpoint, then the Section 230 immunity that the platform enjoys in relation to the content will fall away in relation to that content specifically.”

There are some people who say that if you do any moderation of any kind that is biased at all, the immunity should fall away. To those people, I’d say, “Listen, how do you determine what bias is?” That’s a very, very broad determination that’s hard to make. It’s just too loose. It’s not specific enough for a legal standard.

The proposal basically works like this: let’s say you’re Facebook and you host a group started by Hamas, and for whatever reason, Hamas decides to pay Facebook to boost an advertisement for the group, which Hamas makes. As a consequence, people join the group and it grows and grows. Then let’s say someone is killed by Hamas, or by a member of that Facebook group who joined Hamas and found Hamas through the group. I think the argument would go that if Facebook promoted that in any way, Facebook should be liable for anything that flows from that act, for example, under a theory of providing material support to terrorism.

There’s a civil cause of action there for that, and there was a case that dealt with this called Force v. Facebook. What the court said [in Force] is, “Listen, these companies are allowed to do curation. They’re allowed to do content boosting. They’re allowed to let these groups form and do whatever they want on the platforms. They’re not actually doing anything which is legally culpable unless they are materially developing and advancing the content issue.”

If Facebook was writing advertisements for Hamas and saying, “Hamas, we noticed that you sent this ad and it wasn’t so good. We called up our ad consultants in Silicon Valley and they had a recommendation. Maybe if you add this image of a bazooka more people will click on it and go to your rally in Ramallah next week,” then that is the kind of thing where there could be legal liability.

At the moment, though, the question is: what happens if Facebook’s algorithms just promote engagement with that content? The answer is that the algorithm is simply promoting engagement or supporting one viewpoint or another; it is not materially developing the content.

For example, if Twitter decides it wants to herd people into echo chambers and only promote one viewpoint or another to certain groups of people, and there’s evidence that it does this, there’s no liability for that at the moment because Twitter isn’t materially developing the content. The users are still providing all of the information that’s going up on the platform.

Free Speech, Censorship, and Political Advertising on Twitter

“Most of what you hear on the Internet in terms of political rhetoric is designed to elicit outrage and the algorithms that big tech companies deploy are designed to elicit outrage. 

Outrage means eyeballs, eyeballs means engagement, and engagement means advertising dollars. 

All of these companies are competing for a very small and dwindling pool of advertising dollars that haven’t already been sucked up by Google.”

Kris Ruby:  Twitter is making a lot of changes and bold declarations. What are your thoughts on what Twitter is trying to do with free speech, censorship, and political advertising? I know those are three separate topics, but I want to know your point of view with Twitter, specifically, in the actions they have taken regarding social media censorship.

Preston Byrne: Free speech, censorship, and political advertising actually have pretty easy answers here. Twitter is a company; companies are made of people, and as such, they have First Amendment rights. Twitter is exercising its First Amendment rights to hold viewpoints and advance those viewpoints.

In terms of censorship, Twitter is also exercising its First Amendment rights to decide what content it does or does not want on its platform. It booted off Milo Yiannopoulos. It booted off Alex Jones. It has a right under contract law to determine what the rules are.

People continue to make use of its platforms and it has a First Amendment right to determine whether people will be granted continued access. It doesn’t necessarily mean that what they’re doing isn’t censorship. It is censorship, but the right to censor material on platforms you control as a private actor is part of what your First Amendment rights permit.

In terms of political advertising, for Twitter, there’s a weird distinction between paid political advertisement and political advertisement that is arising out of just ordinary run-of-the-mill engagement. I think any rational person would describe the President’s tweeting as political advertisements, or Brad Parscale’s tweeting as political advertisements, or any other politician posting a political video as a political advertisement – albeit one which isn’t paid for.

Twitter basically said, “Listen, we’re not going to accept money from campaigns to promote or boost content that they produce. They’re going to have to do that organically on their own,” and that’s a business decision that I don’t think we have any place to comment on. Although I think what we will see is that a lot of political advertising that is dressed up as something else will probably get through and Twitter will take the money. What they likely won’t do is just take political advertising direct from the Trump campaign and the Biden campaign.

Kris Ruby: What about Twitter censoring President Trump’s tweets? I know that caused a huge backlash. Should they have done that? Should they not have done that?

Preston Byrne: It’s not really a should or should not. They’ve staked out a position in the market and, as a consequence, they’re going to have to accept whatever the market’s consequences are. We’ve seen the growth of rival platforms, such as Gab, Parler, and a couple of others, that offer different moderation rules from Twitter’s: they are much more permissive about the type of content they allow and don’t wade into censorship of political figures and others.

They said, “Listen, that’s not our bailiwick, so we’re going to be pretty hands-off.” Twitter, I think, has staked out a market position and said, “Listen, our users want this, we want this and so this is what we’re going to do.” In the short term, it doesn’t seem to have much of an effect on their business, but in the long term, it may. The effect is to be determined.

Section 230 and social media legal liabilities.

Does President Trump’s Social Media Executive Order Have Legal Authority?

Could Donald Trump’s executive order drive action from lawmakers to reform Section 230?

Kris Ruby: What about President Donald Trump’s proposed social media executive order. What do you think is going to happen with that?

Preston Byrne: Yeah, that’s unconstitutional. The government can’t tell people what they can and can’t think. It can’t even investigate people’s thoughts and try to formulate plans as to how to control what people can and can’t think. The social media executive order is predicated on the false assumption that something in Section 230 or the Constitution requires publishers to be neutral.

In fact, the opposite is true. The First Amendment allows publishers to be neutral or non-neutral or whatever else, and prohibits the government from imposing any content-based requirement, including the requirement that people be politically neutral, on private citizens and private companies. The Social Media Executive Order is just signaling to the base. It’s not really anything which we should expect to be legally effective. There may be a bill that comes out of it, but at the moment, it’s just to drive some talking points for the President in an election year.

Kris Ruby:  Do you think that we’re going to see major changes to Section 230?

Preston Byrne: I doubt we will see any changes to it. It’s too important to the American tech companies that they have that protection in place. I strongly doubt that Congress will vote in favor of any substantial repeal or limitation of its provisions.

How Section 230 Shields Businesses from Liability for User Generated Content (UGC)

Preston Byrne: I’ve seen Section 230 come up on a pretty regular basis. If you’re running an online platform where there’s content on that platform, someone is going to object to some of that content and they are going to send you an email in legalese demanding that the content come down. Section 230 allows smaller platforms to take these notices and laugh at them irrespective of where they come from.

The company can basically sit back and shrug them off rather than sit there and worry that a court is going to enforce speech-unfriendly foreign edicts or unreasonable domestic edicts that have been served on the company.

Section 230 is extremely useful for early-stage tech companies. I would say it’s actually useful in an outsize fashion to early-stage tech companies because if you’re a company that operates on the Internet, someone’s going to want to get a piece of you and Section 230 of the Communications Decency Act really prevents them from doing that.

Legal Tip #1: Section 230 Protects You From (Almost) Anything

Don’t get too worked up over copyright trolls and other litigants who walk in the door. Section 230 has you covered for just about everything other than child sexual abuse content, for which you have to register with something called NCMEC, the National Center for Missing and Exploited Children, and deal with an automated reporting procedure involving the FBI.

That’s also something that you need to do in order to be immune from certain types of liability, so you should do that. Basically, Section 230 will usually protect you from just about everything else that’s going to come in the door and that includes any foreign requests, any civil litigation, that kind of stuff. Your legal expenses are not going to be huge, although you should expect legal notices to come in.

Legal Tip #2: Coordinate with Law Enforcement for User Data Retrieval 

How to deal with law enforcement. 

Have a plan for how you will interface with law enforcement and verify communications from them, and a way for them to get ahold of you in case they need to serve you with legal process, like a warrant or a subpoena. Those requests will come in, and the more popular your website is, the harder and faster they will come.

For example, during the recent riots, a lot of ANTIFA members were organizing on Twitter and on Discord. My guess is that when we look at the transparency reports for those websites next year, we’re going to see that a lot of data requests were made to those two platforms. They’re not going to be able to tell you who the requests concerned, but they are going to tell you that the requests exist.

My guess is we’re going to see a very high volume of requests. If there are users on your site who are particularly edgy or may present risks in the future, whether they’re very politically active or ANTIFA or whatever else, those users are probably going to create a problem for you in the future in terms of dealing with law enforcement.

You’re going to want to have a procedure where you can pull down user data quickly in a secure fashion, in a way that isn’t accessible to the open surface web because otherwise, you’ve got a big data exposure problem.
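A minimal sketch of that kind of internal retrieval procedure follows. All names and the data layer are hypothetical; a real implementation needs counsel and a security review.

```python
# Illustrative sketch: an internal-only, audited user-data export for
# verified legal process. The `db` data layer is hypothetical.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("lawful_process_audit")

def export_user_data(user_id: str, request_id: str, db) -> bytes:
    """Pull a user's records for a verified warrant or subpoena.

    Intended to run on an internal network only, never exposed to
    the open web, with every access written to an audit trail.
    """
    audit_log.info("export user=%s legal_request=%s at=%s",
                   user_id, request_id,
                   datetime.now(timezone.utc).isoformat())
    records = db.fetch_all_records(user_id)  # hypothetical data layer
    return json.dumps(records).encode("utf-8")
```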

Legal Tip #3: Register with the Copyright Office.

CDA 230 has you covered for most civil liability, but it does not cover copyright, so register a DMCA agent with the Copyright Office to preserve that safe harbor, and be prepared to deal with the FBI if you have edgy users.

All of those things sound scary and difficult. There are people who deal with these things for a living; we’re called lawyers. We can help you set up the appropriate policies and procedures to comply with your various legal obligations. People don’t usually think a social media company or interactive Internet company is subject to regulation or legal controls: “I start an app, I push ‘go,’ and I don’t have to deal with it again.” It’s not that easy. But if you get the right legal framework in place and you have the right advice, your compliance burden can be pretty low impact, at least until you start scaling the company, at which point you need to put a proper legal team in place.

ABOUT

Preston Byrne – New York Attorney 

Preston Byrne is a Partner at Brown Rudnick.  A corporate lawyer with extensive experience working with cryptocurrency and blockchain technologies, Preston is a member of the firm’s Technology, Media, and Distributed Systems group as well as its Corporate and Commercial Litigation Group. Preston writes and speaks about, and is quoted widely by print media on, technology law matters.

Connect with Preston


Subscribe to The Kris Ruby Show

Apple Podcasts | Stitcher | Spotify | TuneIn

ABOUT KRIS RUBY 

KRIS RUBY is the CEO of Ruby Media Group, an award-winning social media marketing agency based in New York.  Kris Ruby has more than 12 years of experience in the social media industry. She is a sought-after digital strategist and social media marketing consultant who delivers high-impact personal branding training programs for executives. Over the past decade, Ruby has consulted with small- to large-scale businesses, including Equinox and IHG Hotels. She has led the social media strategy for Fortune 500 companies as well as private medical practices and is a digital media strategist with 10-plus years building successful brands. Ruby creates strategic, creative, measurable targeted campaigns to achieve an organization’s strategic business-growth objectives. Ruby is also a national television commentator and political commentator. She has appeared on national TV programs over 150 times covering big tech bias, politics and social media. She is a trusted media source and frequent on-air commentator on social media, tech trends and crisis communications and frequently speaks on FOX News, CNBC, Good Morning America and other networks. Ruby is at the epicenter of the social media marketing world and speaks to associations leveraging social media to build a personal brand.  She graduated from Boston University’s College of Communication with a major in public relations and is a founding member of The Young Entrepreneurs Council.  For more information about Kris Ruby, visit https://www.krisruby.com and https://rubymediagroup.com

 

*Date last updated: August 24, 2023

The information in this article is not legal advice. For official legal advice on Section 230 please consult your attorney. 

When this post was originally written, Trump was in office and Parler was still live. Since the initial publication of this post, President Biden has taken office and Parler has been taken offline.

Disclaimer: I am not a lawyer and the information shared during this episode are for informational purposes only and not for the purposes of providing legal advice. You should contact your attorney to obtain advice.

RESOURCES: 

Additional Reading on Section 230 & related topics

AI Censorship: The Future of Free speech on the Internet

In the age of generative AI, should the legal protections given to social media platforms be limited?

The recent debates surrounding social media liability protections and Section 230 continue to expand as big tech companies host plagiarized content generated by artificial intelligence.

READ: Washington Post: AI chatbots may have a liability problem