
Section 230 of The Communications Decency Act

What is Section 230 of The Communications Decency Act?

“Platforms are more or less immune from liability arising out of user-generated content.”


Section 230 of The Communications Decency Act is a legal shield that gives social media companies immunity from liability.

Has Section 230 been amended?

Not at its core. The law's central immunities have not been rewritten in the roughly 24 years since it was passed, although the 2018 FOSTA-SESTA legislation added a carve-out for sex-trafficking content, as discussed in the podcast below.

Does Section 230 protect users?

No. Blanket legal immunity for social media companies doesn’t protect individual users the way people think it does. It protects tech companies from getting sued.

Translation: someone trashes your business on Facebook. The social media platform, Facebook, is not liable.

Small businesses are killed by big tech giants every day, whether through the platforms' failure to moderate or their decision to moderate incorrectly. Those who control the algorithm control the platform. Whoever programs the platform is the arbiter of truth in the public square.

Trump can't shut Twitter down, but he could potentially claim that it is violating the Section 230 provisions that have made it billions of dollars. Section 230 is a safe-harbor provision that says online platforms are not responsible for the content their users generate.

Personal Bias and Internet Law: where do you draw the line?

When the person or entity deciding what the facts are has clear preferences, a political agenda, and money to make from advertising, “facts” are nothing more than opinion. The facts will always be swayed towards where the ad dollars flow and what is most profitable.  Bias exists within those who script the code and determine the rules of these platforms.

Fact-checkers are people. People have political bias and deep-rooted opinions and beliefs. That will always color what counts as a "fact," and in many cases, small businesses will suffer because of it. This is much larger than politics; the future of business depends on it. The problem with "fact-checking" misinformation on social media platforms is that it is beholden to whomever the tech gods believe is telling the truth.

Google, Facebook, or Twitter should not have the authority to determine what is real vs. fake and what is dangerous vs. what is helpful. This choice should be left to the platform user.

Shouldn't social media companies be open to liability for outright defamation and slander? How many businesses have been ruined because Facebook refuses to do anything about negative reviews? Liability would give business owners their power back instead of the one-way street they face today.


47 U.S.C. § 230, a Provision of the Communications Decency Act

How does Facebook currently censor content and what role will Trump’s social media executive order have in altering that?

Facebook currently censors content for violence and a laundry list of other items that can be found in its updated community standards document, which is publicly available. Facebook states that it is currently working to reduce the spread of fake news with machine learning that "predicts what stories may be false" and by reducing the distribution of content "rated as false" by independent third-party fact-checkers. There are several problems with this.
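Before turning to those problems, here is a minimal sketch of what such a "predict-and-demote" pipeline could look like. Everything in it is a hypothetical assumption: the function names, keyword heuristic, weights, and thresholds are illustrative, not Facebook's actual system.

```python
# Hypothetical sketch only: names, weights, and thresholds are
# illustrative assumptions, not Facebook's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    base_score: float  # the post's normal feed-ranking score

def predict_false_probability(post: Post) -> float:
    """Stand-in for the ML model that 'predicts what stories may be false'.
    A real system would use a trained classifier, not keyword matching."""
    suspicious_phrases = ("miracle cure", "they don't want you to know")
    hits = sum(phrase in post.text.lower() for phrase in suspicious_phrases)
    return min(1.0, 0.4 * hits)

def rank_for_feed(post: Post, rated_false_by_fact_checkers: bool) -> float:
    """Demote rather than delete: flagged content keeps a fraction of
    its normal distribution instead of being removed outright."""
    score = post.base_score
    if rated_false_by_fact_checkers:
        score *= 0.2   # heavy demotion after a human fact-check rating
    elif predict_false_probability(post) > 0.7:
        score *= 0.5   # lighter demotion on model prediction alone
    return score
```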

Every human being has a level of internal bias, so whatever the fact-checkers say is false gets flagged as false. They are engaging in fact-checking, but everyone's view of the facts is distorted by personal opinion. What I see as a fact may not be what you see as a fact because of internal bias. This is why it is critical for Facebook and big tech companies to decide if they are truly publishers or if they are neutral parties. You can't be both.

The other problem is that AI often gets things wrong. Machine learning is rarely perfect; the machines are learning on the job, often at your expense. We saw this several times during the pandemic when content moderation staffing was down at Facebook and content was incorrectly flagged and taken down. It was later put back up, but we learned a powerful lesson: AI is still learning. Mistakes will be made while these machines are learning, and those mistakes will be made on you and your business until the machine gets the algorithm right.
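To make that concrete, here is a toy illustration, with made-up confidence scores, of how a single flagging threshold trades missed fake stories against wrongly flagged legitimate posts.

```python
# Toy illustration with made-up scores: how a flagging threshold
# trades missed fake stories against wrongly flagged legitimate posts.

def flagged(predicted_false_prob: float, threshold: float) -> bool:
    return predicted_false_prob >= threshold

legit_business_post = 0.62   # model is unsure about a real promotion
actual_fake_story = 0.71     # model is fairly confident this is false

for threshold in (0.9, 0.7, 0.6):
    print(f"threshold={threshold}: "
          f"legit flagged={flagged(legit_business_post, threshold)}, "
          f"fake flagged={flagged(actual_fake_story, threshold)}")

# threshold=0.9 -> misses the fake story (false negative)
# threshold=0.7 -> catches the fake story, spares the legit post
# threshold=0.6 -> catches the fake story but also takes down the
#                  legitimate post: the mistake made on your business
```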

What would happen if Section 230 went away? What limits does Section 230 of the Communications Decency Act put on libel suits against social media platforms? Can Trump revoke Section 230?

Trump’s social media executive order could impact social media platforms’ liability. Currently, social media platforms are not responsible for the content that appears on their platforms. Revising section 230 could change that.

As it currently stands, many social media platforms are immune from liability arising out of user-generated content. Gutting Section 230 would remove this immunity. Social media bias lawsuits continue to fail in court largely because of the protection and immunity offered by Section 230. If it were altered, that could significantly change the trajectory of some of these cases in the future.

Preventing censorship on social media platforms 

Where has Facebook gone wrong in its censorship trajectory?

Facebook has become one of the largest news outlets and media publishers in the world. They went wrong with their brand positioning. Do they put users first? Are they a publishing platform? A media platform? An advertising company? Usually, companies are transparent with their positioning from the get-go. Facebook is a company that grew quickly and acquired more power than it ever knew what to do with.

What they started out as when they first launched is not who they are today. This is a common mistake that entrepreneurial companies make: they grow too fast and do not rebrand accordingly. Facebook started out in Mark Zuckerberg’s dorm room and now has the ability to interfere with major elections. It is okay to admit that your purpose or mission as a company has changed; what is not okay is to pretend it hasn’t. Facebook has lost trust with the public after a number of data breach scandals and it has been hard to regain that ever since. It takes years to build a reputation, but only a second to destroy one.

Is it hypocritical for Facebook to ban posts about the protests against the recent stay-at-home orders when, just last year, it profited off Iranian groups sharing "death to America" ads that encouraged protests in Baghdad?

This is a perfect example of why Facebook's fact-checking standards are inconsistent. Why? Because of human bias. Facebook's guidelines constantly change and are unreliable. As a New York-based social media marketing consultant, I always tell clients that social media platforms are rented space. You must diversify your assets.

You cannot build all of your virtual “property” on one social media marketing platform such as Facebook. Why? Because one change in their algorithm or one new community guideline could leave your business in shambles.

Business owners are better off creating owned content on their own platforms rather than relying solely on other people’s platforms. Remember, everything you have built on a social media platform can be ripped away from you at the drop of a hat simply because a content moderator says you violated their community standards or their AI incorrectly flags your page as spam.

How effective is Trump’s social media executive order? 

Getting people to take a hard look at Section 230 of The Communications Decency Act and whether it still makes sense to uphold is a good outcome, regardless of what changes. People cannot see past their hatred for the President to realize this law makes very little sense in the social media-driven world we now live in. If someone trashes your business on Facebook today, you can’t sue Facebook. Facebook is not legally responsible, even though they are hosting that content. People deserve the right to go after the person hosting content that is defamatory, and right now, they do not have that right. This isn’t only a political issue; it is an economic one.

Social media companies can control the outcome of your business. So many small business owners have lost the fight against big tech. They have tried to sue and were unsuccessful because the companies have immunity.

If I host a website, and someone trashes your business on my website, I should be responsible for that content. Someone should have the right to come after me for hosting it. As it stands now, Americans do not have that right.

The future of your business rests in the hands of social media companies who ultimately decide what is true and not true about you and your business. I urge business owners to take the power back and really think about the meaning behind this executive order, above and beyond any political party lines or divides.

How should Section 230 be changed?

The future of social media 

Why should big tech platforms have an unfair business advantage and be immune from lawsuits when other businesses are not afforded this advantage? The core of this law has not been changed in twenty-four years. Regardless of the outcome of Trump's social media executive order, it is important that we review this policy to see if it still makes sense for the 2020 digital economy and beyond. You can't say on the one hand that you are not responsible for the content that appears on your platform, but on the other hand publish a list of 50 reasons why you will delete someone's content.

The law doesn't make sense anymore in the current world we live in. The pandemic has pushed digital transformation forward at an epic pace; legal regulations must transform with the times as well. Contracts have to be revisited, and so do laws that pertain to Internet usage. Can you imagine changing your business contract with a client only once every twenty-four years? The contract you created twenty-four years ago would no longer be relevant to where your business is today. The same is true for social media platforms. Social media platforms play a critical role in the dissemination of news today, and transparency disclosures and disclaimers are critical as we move forward.


Section 230 and the Social Media Executive Order: Resources

Kris Ruby was interviewed on Fox News about Social Media Censorship and Section 230

BONUS: PODCAST: Section 230 explained!

Commentary from two leading digital media experts on President Trump’s Executive Order on preventing Online Censorship from social media marketing expert Kris Ruby and legal analyst Preston Byrne.

“Section 230 serves as a way to protect and immunize user web platforms from litigation over user content that is published on the site.”


Kris Ruby: Hi everyone and welcome to the Kris Ruby podcast, a new show on the politics of social media and big tech. I am so excited to be here today with my guest Preston Byrne. Preston, welcome. Preston is a partner at Anderson Kill.  Today we’re going to be talking about Section 230 of the Communications Decency Act, but first Preston, please, introduce yourself.

Preston Byrne:  My name is Preston Byrne and I’m a technology lawyer. I represent social media companies, bitcoin and cryptocurrency companies, and all kinds of other edgy tech companies with a range of issues from financial regulation to free speech issues to law enforcement relations. I deal with Section 230 on a pretty regular basis in practice, which is not something that a lot of lawyers can say.

Kris Ruby: What is section 230 of the Communications Decency Act?

Preston Byrne: Section 230 of the Communications Decency Act is a federal law that was designed to immunize Internet companies from liability for making the common-sense, day-to-day decisions you would expect a company to make if it's running a web platform.

Back in the early days of the Internet, we had things like message boards, instant messenger, and forums. Congress recognized that in that setting, traditionally, if you had something like a newspaper, let's say, and someone wrote a letter to the editor saying something like, "I think Preston Byrne is a scoundrel," and you republished that letter, I could then sue the newspaper because it published a letter that defamed me. It's libel or slander, depending on whether it's written or spoken, and because they've defamed me, I have a cause of action against the newspaper.

Congress wanted to prevent that from happening because it recognized that the Internet was where a lot of people were going to talk. If everybody who conveyed or republished a message, from the moment someone pressed "send" on their keyboard to the moment it appeared on a website, were treated as a publisher in the classical sense of the term, liability would attach at every step of the chain.

There are two pieces to Section 230.

There’s Section 230(c)(1), which essentially says that a provider of an online publishing platform isn’t going to be liable for content which is put there by other information content providers, i.e. users, or other platforms that are feeding data into it.

There's also Section 230(c)(2), which says that online publishing platforms can moderate: they won't be liable for good-faith moderation activities that restrict access to offensive material.

So, what that means is if I put up a post saying, "I think that Joe Bloggs is a complete *** and I don't like him," and the site decides that content falls outside of its community rules, then it can remove my post without my being able to get money damages from it for having removed that content.

Essentially, Section 230 allows Internet companies to allow as much as they want to allow, and to remove as much as they want to remove so that they can craft the user experience in the way that they want.

How Section 230 Changed: Has CDA 230 been revised?

Kris Ruby: When is the last time that Section 230 was updated or changed?

Preston Byrne: Section 230 was updated fairly recently to address sexual abuse and sex trafficking material. There was a congressional act called FOSTA-SESTA (the Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act), which said that platforms could be liable under federal criminal law if they failed to police for sexual abuse content or other types of content on the platform. As a consequence, platforms like Tumblr, which I've never used, but which was apparently a very sex-positive web platform, saw a lot of that content disappear, which promptly cratered the company's user numbers. So that was one change that was made fairly recently, but there hasn't really been much else that I'm aware of.

Kris Ruby: To my knowledge, Section 230 of the Communications Decency Act has not been changed in twenty-four years.

Preston Byrne: The core of Section 230 hasn't been changed in the decades since it was passed. Those core immunities haven't been changed. It immunizes platforms from state criminal and civil liability and from federal civil liability. It does not immunize platforms under federal criminal law. So, if there's a federal criminal statute that pertains to certain types of online content, it will override Section 230. I think that's what Congress is trying to do in recent discussions about Section 230: figure out whether there are certain additional carve-outs from the immunity that should be made.

The big carve-out that I’m aware of and encounter on a day-to-day basis is intellectual property law. There’s something called the Digital Millennium Copyright Act, where providers of online publishing platforms like Twitter, online forums, Reddit, and YouTube are immune from copyright infringement liability unless someone hits them with a notice of infringement or they have actual knowledge of the infringement. Under those circumstances, they’re then not able to go and say, “Well, Section 230…” because Section 230 is expressed not to have any impact on copyright law, intellectual property law, or intellectual property rights. It doesn’t immunize people from breaches of federal law in those areas. So, the Feds can always take things out of Section 230. They have already taken a few things out of Section 230 in terms of sex trafficking material, but they haven’t really gone after political content or anything like that yet.


Digital Defamation: Facebook and Glassdoor Removal of Negative Reviews

Kris Ruby: One of the issues that I have with Section 230 is, let's say, for example, someone decides to trash my business online. Right now, if they do that on Facebook, I can't go after Facebook, and based on the current law, I cannot sue Facebook if someone else, say a third party, trashes my company on the website. Is that correct?

Preston Byrne: That’s correct.

Kris Ruby:  That’s crazy.

Preston Byrne: If you look at a country like England, just by way of comparison – in England, you could go after Facebook. You could go after a company like The Financial Times [an English newspaper]. For example, the FT has a comment section on their website and they view themselves as re-publishers of the content. There’s a reason why there are no big European Internet companies – and that’s part of the reason.

What you can do under those circumstances is you can sue the person who’s actually trashing your business. If you don’t know who they are, you can say it’s a John Doe and then you can try to ascertain their identity through service of a third-party subpoena on Facebook, asking them to hand over user data. We see a lot of that. You can’t go after Facebook for the content but there’s still a remedy, so you’re not totally denied a remedy if someone’s using those platforms [to trash your business]. Since those platforms suck up so much data, it’s certainly possible to use a subpoena on that platform to ascertain who you’re dealing with and try to track down the person who’s defaming your business.

But no, you can’t go after Facebook any more than you could go after a message board or a bulletin board on a sidewalk.

Kris Ruby: Tell me how Section 230 works with Glassdoor because every business owner I know has major issues with Glassdoor. This notion of fraudulent reviews that are written by people who are angry and make up different accounts to trash the business owner is a problem. Business owners really can’t do anything about it.

Preston Byrne: They can sue the person who made the statement, and they can try to go after Glassdoor to cough up the information about the identity of that user. I think Glassdoor uses Facebook logins as their Single Sign-On scheme, so it shouldn’t be too hard to find out who made the comments if you want to go through the trouble and expense of bringing a lawsuit.

The issue, of course, is that if you bring a lawsuit, there are a few things to consider.

First, it’s hugely expensive.

Second, in the United States, it’s very hard to prove damages in a defamation case, unless someone really says something egregious. We’ve actually seen some lawsuits against the Southern Poverty Law Center (SPLC) for defamatory statements succeed in the past. For business criticism, not so much.

Long story short, it’s hard to prove and you should be able to just go after the person who’s making the statement rather than Glassdoor. I agree, it’s difficult and it’s a pain, but ultimately Glassdoor is immune and they’re just providing a forum for other people to speak.

If you think it’s worth your while to go after them, go ahead, but in all likelihood, what you’re going to do is draw more attention to the negative comment.

Third, there's the Streisand effect. You're not going to get what you want in terms of monetary remuneration from suing someone who just doesn't like your business, because chances are they're just some schmoe on the Internet who doesn't have a lot of money.

 

Moderating User Generated Content: Is It Legal Under Section 230?

Kris Ruby: Walk me through what would happen if Section 230 was changed or amended. How would that impact this scenario I just gave you with Glassdoor? Would that mean that then that business owner could sue the platform? Is that what you’re saying?

Preston Byrne: I don’t think so. The current proposals are proposals to change The Communications Decency Act. They generally come from the Trump administration or Josh Hawley, who is a junior senator from Missouri. The proposals say something along these lines:

“If a platform puts its thumb on the scale in terms of how it moderates content and if the platform decides to take a particular viewpoint or advance a particular viewpoint, then the Section 230 immunity that the platform enjoys in relation to the content will fall away in relation to that content specifically.”

There are some people who say that if you do any moderation of any kind that is biased at all, the immunity should fall away. To those people, I'd say, "Listen, how do you determine what bias is?" That's a very, very broad determination that's hard to make. It's just too loose. It's not specific enough for a legal standard.

The proposal would basically work like this: let's say you're Facebook, and you host a group started by Hamas, and for whatever reason Hamas decides to pay Facebook to boost the group's advertisements. As a consequence, people join the group and it grows and grows. Then let's say someone is killed by Hamas, or by a member of that Facebook group who joined Hamas after finding it through the group. I think the argument would go that if Facebook promoted that content in any way, Facebook should be liable for anything that flows from it, for example, under a theory of providing material support to terrorism.

There’s a civil cause of action there for that, and there was a case that dealt with this called Force v. Facebook. What the court said [in Force] is, “Listen, these companies are allowed to do curation. They’re allowed to do content boosting. They’re allowed to let these groups form and do whatever they want on the platforms. They’re not actually doing anything which is legally culpable unless they are materially developing and advancing the content issue.”

If Facebook was writing advertisements for Hamas and saying, “Hamas, we noticed that you sent this ad and it wasn’t so good. We called up our ad consultants in Silicon Valley and they had a recommendation. Maybe if you add this image of a bazooka more people will click on it and go to your rally in Ramallah next week,” then that is the kind of thing where there could be legal liability.

At the moment, though, the question is: what happens if Facebook's algorithms just promote engagement with that content? The answer is that the algorithm is simply promoting engagement or supporting one viewpoint or another; it is not materially developing the content.

For example, if Twitter decides it wants to herd people into echo chambers and only promote one viewpoint or another to certain groups of people, and there’s evidence that it does this, there’s no liability for that at the moment because Twitter isn’t materially developing the content. The users are still providing all of the information that’s going up on the platform.
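As a rough illustration of the line the Force court drew, here is a hypothetical sketch of pure engagement ranking. The field names and weights are invented; the point is that the platform reorders user-authored posts without ever writing or editing them.

```python
# Hypothetical sketch of pure engagement ranking; field names and
# weights are invented. The platform reorders user-authored posts but
# never writes or edits them: amplification without authorship.
from dataclasses import dataclass
from typing import List

@dataclass
class UserPost:
    author: str
    text: str        # written entirely by the user
    likes: int
    replies: int
    reshares: int

def engagement_score(p: UserPost) -> float:
    # Illustrative weights: reshares spread content furthest, so they
    # count most toward predicted engagement.
    return 1.0 * p.likes + 2.0 * p.replies + 3.0 * p.reshares

def build_feed(posts: List[UserPost]) -> List[UserPost]:
    # Every post passes through unchanged; only the ordering is the
    # platform's own contribution.
    return sorted(posts, key=engagement_score, reverse=True)
```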

Free Speech, Censorship, and Political Advertising on Twitter

Kris Ruby: We see Twitter making a lot of changes and declarations. What are your thoughts around what Twitter is trying to do with free speech, censorship, and political advertising? I know those are three separate topics, but I want to know your point of view with Twitter, specifically, in what they’re doing with social media censorship.

Preston Byrne: Those actually have pretty easy answers. Twitter is a company; companies are made of people, and as such, they have First Amendment rights. Twitter is exercising its First Amendment rights to hold viewpoints and advance those viewpoints.

In terms of censorship, Twitter is also exercising its First Amendment rights to decide what content it does or does not want on its platform. It booted off Milo Yiannopoulos. It booted off Alex Jones. It has a right under contract law to determine what the rules are. People continue to make use of its platforms and it has a First Amendment right to determine whether people will be granted continued access. It doesn’t necessarily mean that what they’re doing isn’t censorship. It is censorship, but the right to censor material on platforms you control as a private actor is part of what your First Amendment rights permit.

In terms of political advertising, for Twitter, there's a weird distinction between paid political advertisements and political advertising arising out of ordinary, run-of-the-mill engagement. I think any rational person would describe the President's tweeting as political advertising, or Brad Parscale's tweeting as political advertising, or any other politician posting a political video as a political advertisement, albeit one which isn't paid for.

Twitter basically said, “Listen, we’re not going to accept money from campaigns to promote or boost content that they produce. They’re going to have to do that organically on their own,” and that’s a business decision that I don’t think we have any place to comment on. Although I think what we will see is that a lot of political advertising that is dressed up as something else will probably get through and Twitter will take the money. What they likely won’t do is just take political advertising direct from the Trump campaign and the Biden campaign.

Kris Ruby: What about Twitter censoring President Trump’s tweets? I know that caused a huge backlash. Should they have done that? Should they not have done that?

Preston Byrne: It's not really a should or should not. They've staked out a position in the market and, as a consequence of staking out that position, they're going to have to accept whatever the market's consequences are. We've seen the growth of rival platforms, such as Gab, Parler, and a couple of others, that offer different moderation rules than Twitter's: they are much more permissive about the types of content they allow and don't wade into censorship of political figures and others.

Those platforms said, "Listen, that's not our bailiwick, so we're going to be pretty hands-off." Twitter, I think, has staked out a market position and said, "Listen, our users want this, we want this, and so this is what we're going to do." In the short term, it doesn't seem to have had much of an effect on their business, but in the long term, it may. The effect is to be determined.


Does President Trump’s Social Media Executive Order Have Legal Authority?

Could Donald Trump’s executive order drive action from lawmakers to reform Section 230?

Kris Ruby: What about President Donald Trump's proposed social media executive order? What do you think is going to happen with that?

Preston Byrne: Yeah, that’s unconstitutional. The government can’t tell people what they can and can’t think. It can’t even investigate people’s thoughts and try to formulate plans as to how to control what people can and can’t think. The social media executive order is predicated on the false assumption that something in Section 230 or the Constitution requires publishers to be neutral.

In fact, the opposite is true. The First Amendment allows publishers to be neutral or non-neutral or whatever else, and prohibits the government from imposing any content-based requirement, including the requirement that people be politically neutral, on private citizens and private companies. The Social Media Executive Order is just signaling to the base. It’s not really anything which we should expect to be legally effective. There may be a bill that comes out of it, but at the moment, it’s just to drive some talking points for the President in an election year.

Kris Ruby:  Do you think that we’re going to see major changes to Section 230?

Preston Byrne: I doubt we will see any changes to it. It’s too important to the American tech companies that they have that protection in place. I strongly doubt that Congress will vote in favor of any substantial repeal or limitation of its provisions.

How Section 230 Shields Businesses from Liability for User-Generated Content (UGC)

Preston Byrne: I’ve seen Section 230 come up on a pretty regular basis. If you’re running an online platform where there’s content on that platform, someone is going to object to some of that content and they are going to send you an email in legalese demanding that the content come down. Section 230 allows smaller platforms to take these notices and laugh at them irrespective of where they come from.

The company can basically sit back and shrug them off rather than sit there and worry that a court is going to enforce speech-unfriendly foreign edicts or unreasonable domestic edicts that have been served on the company.

Section 230 is extremely useful for early-stage tech companies. I would say it’s actually useful in an outsize fashion to early-stage tech companies because if you’re a company that operates on the Internet, someone’s going to want to get a piece of you and Section 230 of the Communications Decency Act really prevents them from doing that.

Legal Tip #1: Section 230 Protects You From (Almost) Anything

Don't get too worked up over copyright trolls and other litigants who walk in the door. Section 230 has you covered for just about everything other than child sexual abuse content, for which you have to register with NCMEC, the National Center for Missing and Exploited Children, and deal with an automatic reporting procedure with the FBI. (Copyright claims are the main exception to Section 230; the Copyright Office registration in Legal Tip #3 handles those.)

That’s also something that you need to do in order to be immune from certain types of liability, so you should do that. Basically, Section 230 will usually protect you from just about everything else that’s going to come in the door and that includes any foreign requests, any civil litigation, that kind of stuff. Your legal expenses are not going to be huge, although you should expect legal notices to come in.
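For platforms automating that reporting procedure, here is a heavily hedged sketch of what a submission could look like. The endpoint, payload fields, and credential are placeholders, not NCMEC's real interface; NCMEC provides the actual reporting details when you register with them.

```python
# Heavily hedged sketch: the endpoint URL, payload fields, and
# credential below are placeholders, NOT NCMEC's real interface.
import json
import urllib.request

REPORTING_ENDPOINT = "https://example.invalid/cybertipline/report"  # placeholder
API_KEY = "issued-to-you-on-registration"                           # placeholder

def report_csam(post_id: str, uploader_id: str, file_sha256: str) -> None:
    payload = {
        "reporting_platform": "your-platform-name",   # placeholder
        "incident_type": "child_sexual_abuse_material",
        "content_id": post_id,
        "uploader_id": uploader_id,
        "file_sha256": file_sha256,  # preserve evidence; don't delete it yet
    }
    req = urllib.request.Request(
        REPORTING_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    urllib.request.urlopen(req)  # production code needs retries and error handling
```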

Legal Tip #2: Coordinate with Law Enforcement for User Data Retrieval 

How to deal with law enforcement. 

Have a plan for how you will interface with law enforcement and verify their communications, and a way for them to get ahold of you in case they need to serve you with legal process, like a warrant or a subpoena. Those requests will come in, and the more popular your website is, the harder and faster they will come.

For example, during the recent riots, a lot of ANTIFA members were organizing on Twitter and on Discord. My guess is that when we look at the transparency reports for those websites next year, we're going to see that a lot of data requests were made to those two platforms. They won't be able to tell you who the requests were made for, but they will tell you that the requests exist.

My guess is we’re going to see a very high volume of requests. If there are users on your site who are particularly edgy or may present risks in the future, whether they’re very politically active or ANTIFA or whatever else, those users are probably going to create a problem for you in the future in terms of dealing with law enforcement.

You’re going to want to have a procedure where you can pull down user data quickly in a secure fashion, in a way that isn’t accessible to the open surface web because otherwise, you’ve got a big data exposure problem.
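A minimal sketch of such a procedure, assuming a hypothetical internal data store: the export runs only inside the private network, logs every access, and hands the bundle over through a secure channel rather than any publicly reachable URL.

```python
# Minimal sketch, assuming a hypothetical internal data store (`db`).
# Key ideas: runs inside the private network, every access is logged,
# and the export never touches the open surface web.
import json
import logging
from datetime import datetime, timezone

log = logging.getLogger("legal_process")

def export_user_data(db, user_id: str, case_number: str, agent: str) -> bytes:
    """Bundle one user's records in response to a verified warrant or subpoena."""
    log.info("export user=%s case=%s requested_by=%s at=%s",
             user_id, case_number, agent,
             datetime.now(timezone.utc).isoformat())
    records = {
        "profile": db.get_profile(user_id),        # hypothetical accessors
        "posts": db.get_posts(user_id),
        "login_ips": db.get_login_history(user_id),
    }
    return json.dumps(records).encode()  # deliver via the secure channel only
```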

Legal Tip #3: Register with the Copyright Office.

Register a DMCA agent with the Copyright Office so the copyright safe harbor applies to you. Beyond that, CDA 230 has you covered for most civil liability; just be prepared to deal with the FBI if you have edgy users.

All of those things sound scary and difficult. There are people who deal with these things for a living; we're called lawyers. We can help you set up the appropriate policies and procedures to comply with your various legal obligations, because people don't usually think a social media company or interactive Internet company is subject to regulation or legal controls. They think: I start an app, I push "go," and I don't have to deal with it again. It's not that easy. But if you get the right legal framework in place and you have the right advice, your compliance burden can be pretty low impact, at least until you start scaling the company, at which point you need to put a proper legal team in place.

Preston Byrne – New York Attorney Bio  

Preston Byrne is a shareholder in the New York office of Anderson Kill.  A corporate lawyer with extensive experience working with cryptocurrency and blockchain technologies, Preston is a member of the firm’s Technology, Media and Distributed Systems group as well as its Corporate and Commercial Litigation Group.  Preston writes and speaks about, and is quoted widely by print media on, technology law matters.


Subscribe to The Kris Ruby Show

Apple Podcasts | Stitcher | Spotify | TuneIn

ABOUT KRIS RUBY 

KRIS RUBY is the CEO of Ruby Media Group, an award-winning social media marketing agency based in New York.  Kris Ruby has more than 12 years of experience in the social media industry. She is a sought-after digital strategist and social media marketing consultant who delivers high-impact personal branding training programs for executives. Over the past decade, Ruby has consulted with small- to large-scale businesses, including Equinox and IHG Hotels. She has led the social media strategy for Fortune 500 companies as well as private medical practices and is a digital media strategist with 10-plus years building successful brands. Ruby creates strategic, creative, measurable targeted campaigns to achieve an organization’s strategic business-growth objectives. Ruby is also a national television commentator and political commentator. She has appeared on national TV programs over 150 times covering big tech bias, politics and social media. She is a trusted media source and frequent on-air commentator on social media, tech trends and crisis communications and frequently speaks on FOX News, CNBC, Good Morning America and other networks. Ruby is at the epicenter of the social media marketing world and speaks to associations leveraging social media to build a personal brand.  She graduated from Boston University’s College of Communication with a major in public relations and is a founding member of The Young Entrepreneurs Council.  For more information about Kris Ruby, visit https://www.krisruby.com and https://rubymediagroup.com