Facebook, Flat Earth, and Fascism: What Wolf Wishes the Supreme Court Knew about Section 230 - Securing Sexuality Episode 25
Securing Sexuality is the podcast and conference promoting sex-positive, science-based, and secure interpersonal relationships. We give people tips for safer sex in a digital age. We help sextech innovators and toy designers produce safer products. And we educate mental health and medical professionals on these topics so they can better advise their clients. Securing Sexuality provides sex therapists with continuing education (CEUs) for AASECT, SSTAR, and SASH around cyber sexuality, social media, and more.
Links from this week's episode:
Analyzing the Gonzalez v. Google Case: Corporate Responsibility, the Henderson Test, the Rise of Flat Earth Theory, and Yelp Rankings
The debate over Section 230 of the Communications Decency Act (CDA) has been raging for years, with many arguing that it should be overturned in order to hold tech companies accountable for their content moderation decisions. Recently, the Supreme Court of the United States heard arguments in a case that could potentially overturn Section 230, and the implications of such a decision could have far-reaching consequences. In this article, we examine the potential impact of overturning Section 230 by looking at two key concepts: The Henderson Test and algorithmic responsibility.
The Henderson Test is a legal standard for determining when an online platform can be held liable for content on its service. It takes its name from the Fourth Circuit's 2022 decision in Henderson v. The Source for Public Data, L.P. Under that test, a platform keeps its Section 230 protection when it merely hosts content created by its users, but loses it when the platform is responsible, in whole or in part, for creating or developing the content at issue, for example through how it solicits, shapes, or presents that material. In other words, platforms are shielded as passive hosts of user-generated content, but not for their own contributions to it.
If Section 230 were overturned, even that limited shield would disappear, and platforms could be held liable for user-generated content regardless of how small their role in creating or presenting it was.
The second concept is algorithmic responsibility – namely, who is responsible when algorithms make mistakes?
Algorithms are increasingly being used to moderate online content, but they are far from perfect and can make mistakes that result in wrongful censorship or other forms of harm to users. If Section 230 were overturned, then platforms could potentially be held liable for any mistakes made by their algorithms as well as any other user-generated content they fail to moderate properly.
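To make the idea of an algorithmic moderation mistake concrete, here is a minimal sketch of a naive keyword-based filter. The blocklist, threshold, and example posts are all hypothetical, chosen only to illustrate how wrongful flagging happens:

```python
# Minimal sketch of a naive keyword-based moderation filter.
# The blocklist and threshold are hypothetical, for illustration only.
FLAGGED_TERMS = {"attack", "bomb", "shoot"}

def should_remove(post: str, threshold: int = 1) -> bool:
    """Flag a post when it contains at least `threshold` blocklisted terms."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return len(words & FLAGGED_TERMS) >= threshold

# A genuinely threatening post is caught...
print(should_remove("We will bomb the building"))        # True
# ...but so is harmless film criticism: a wrongful-censorship mistake.
print(should_remove("Critics will attack this movie"))   # True
```

Real platforms use far more sophisticated classifiers, but the failure mode is the same: any automated rule that generalizes will sometimes flag lawful speech, which is exactly the liability question at issue here.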
This would create an incentive for tech companies to invest more heavily in developing better algorithms and improving their moderation processes so as to avoid liability issues down the line. In conclusion, overturning Section 230 could have significant implications both legally and technologically speaking – from changing how courts view online platform liability through The Henderson Test to incentivizing tech companies to invest more heavily in developing better algorithms with greater accuracy when moderating online content.
Ultimately, only time will tell what effect overturning Section 230 would have on our digital landscape, but one thing is certain: it will shake things up.
Hello and welcome to Securing Sexuality, the podcast where we discuss the intersection of intimacy.
And information security. I'm Wolf Goerlich.
He's a hacker. And I'm Stefani Goerlich.
She's a sex therapist.
And together we're going to discuss what safe sex looks like in a digital age.
And today, today we're going to discuss the 26 words that made the internet.
Um, paywall, user subscription, password sharing.
Okay, that's six.
What am I missing?
Oh, probably one of your favorite ones.
Oh, what's this?
What's the internet for?
Porn! The internet is for porn. The internet is for porn. I don't remember the rest of the lyrics.
Porn, porn, porn. That would be an Avenue Q terrible rendition for those of you that are not sex and Broadway nerds, which actually is a surprisingly big community.
So, no, I want to talk today about section 230. So here's the thing. There are two fundamental protections that we had during the early internet. And by early internet, I mean like mid 90s to, you know, today, give or take a few years. So we had net neutrality and section 230. And I came up with a metaphor, baby. I came up with a metaphor for you to describe this that hopefully makes sense.
Oh, can I give it to you?
You can give me anything you want. I love presents.
Are you ready?
Yes. All right. So one of your other favorite things is a house party. It's true. You're going to have a nice dinner party. You're going to have people come over. It's going to be a good time. You're going to set out invites, those sorts of things. So net neutrality ensures that people can get to your place. All right? The streets, the roads: there's equal access at all times. No one's got preferential treatment.
We all have the same roads.
Now, Section 230 covers what happens when people are at our house. So someone's at our house, we're all having a good time. And you know that record-scratch moment when someone says something just unbelievably stupid? So net neutrality controls access. Section 230 controls what you have in your house.
It says, hey, look, if someone says something stupid at your place, you can't be held accountable for them, right?
They have made their own choices.
Also, if you then decide to say, you know what, you're done talking. We no longer want you at our party. You have the ability to do that. So those are the two protections.
What do you think?
So it's the difference between the city salting our sidewalks so that my guests can get into the house versus me deciding that my guests want to put Fox News on the TV and I'm asking them to leave?
Pretty much. Yeah. And if they put Fox News on the TV, or if they say something completely off color, you're not held accountable for it. So that's a little bit where you lose me, because I have been to parties where I was a guest and somebody else was hosting the party, and another guest was being incredibly rude or misogynistic, or just drunken tomfoolery.
And maybe I'm just a judgy Capricorn, but I absolutely gave side eye to those hosts.
And I would absolutely kind of be like, why did you let this person in?
Maybe I have misjudged your character if these are the kind of people you invite to your parties.
Well, see, and that's the thing. When I say you can't be held accountable, I mean from a government perspective, from a lawsuit perspective. But it's your house. It's your rules. So you're right.
I mean, we've all been on websites where we went, yeah, really?
You're going to let those comments stay posted. You're not going to stop the disinformation, or you're not going to filter that out.
So Section 230 both protects the party thrower from legal action, from lawsuit, from government oversight, but also enables the party thrower to say, yeah, you can stay or no, it's really time for you to leave.
OK, dumb question.
But why do they need a law to tell people get off my website?
Why can't they just say, I don't want this here. I'm banning you as a user. You are being a bad guest. Get out of my house. So to tell you that, I got to take it back to one of your favorite things, which is, and I almost want you to sing this song again.
Will you sing this song one more time or no?
Are you doing porn again?
I think you are going to dramatically, dramatically overestimate my affinity for porn. I do not watch as much porn as you are leading people to believe. I am simply a proponent of people's right to create erotic content.
Well, you told me when we were doing the early research and you were presenting on 10,000 years of cyber sex.
You told me the story of Prodigy in AOL, right?
What was that story?
Effectively, Prodigy owned the early Internet. That was my first exposure to the web. And it was monolithic for a hot minute. And they really intentionally tried to position themselves as being family friendly, or at least being averse to adult material. And because of that, they banned adult BBSs, they banned adult content, and very quickly their user base dropped off. And that contributed directly to the rise of AOL.
Because AOL figured out ways to hide the adult rooms.
I mean, I was, I figured out how to get in when I was 14, 15. But they made it difficult for minors to find, or at least not intuitive for people to find, without banning it outright. And so people naturally drifted to where they had more freedom of porn and also just freedom of expression and freedom of more adult conversations and encounters. Absolutely.
Now, one of the reasons why they blocked content and were so strict was because they wanted to be family friendly. Another reason why was because they got sued.
So the Wolf of Wall Street, right?
If you haven't seen the movie, you've seen the memes. The Wolf of Wall Street: that company was Stratton Oakmont. And Stratton Oakmont sued Prodigy because someone on one of Prodigy's message boards was trash-talking the firm, claiming that Stratton Oakmont had committed crimes and that its IPOs were fraudulent. And so they took Prodigy to court and sued them.
Companies during that time got very concerned about lawsuits because if I've got a public forum, if I've got a user group, if I've got people posting content, how do I know if that content is good?
How do I protect myself?
So that was one part of what led to the Communications Decency Act. The other part was that people were very concerned about the content that was being posted and wanted to make sure that websites were empowered and endowed with the ability to defend against that and to clean up their website. They thought that this would lead to a whole bunch of cleaning up of certain adult material.
Of course, it arguably had the opposite effect. But in 1996, the Communications Decency Act gets passed. And that is where Section 230 lives. So exactly what you're talking about: Prodigy ended up feeding into the story because of the Wolf of Wall Street.
Okay, but correct me if I'm wrong, they were criminals.
That was kind of the whole point of the movie, right?
How is telling the truth on the internet getting people sued?
And why does that result in new cases going to the Supreme Court now 30 plus years after the decline of Prodigy?
This is the double-edged sword of this law. It enables companies to have anyone post content, and it enables companies to do moderation. But at the same time, there are no real restrictions around what that content is and no real good controls around how that moderation works. So it was a really good law, I think, at a time when we didn't necessarily have good scaling.
And it allowed people to, again, throw house parties, to build websites.
I've talked in the past about Mastodon, right?
If I was to stand up a Mastodon site today, which I can, anyone can start their own social media. If anyone and their brother could post something stupid and sue me, I would not want to host that site. So it has allowed a lot of innovation to flourish, but there are certainly some downsides.
Which brings us, I guess, to now, because part of why we're having this conversation is that last week, the Supreme Court heard oral arguments in a case about Section 230. And it seems like everybody and their brother has an opinion about what SCOTUS should do about Section 230. It's been interesting for me as a policy nerd to kind of look at the strange alliances that have been formed in the amicus briefs.
And I'm curious if you can tell us a little bit about what this current case is and whether or not the Wolf of Wall Street is involved. Or if not them, at least the cocaine.
Is it the cocaine?
Look, that's all I remember from that movie was a lot of cocaine. Yes.
What is the Wolf of Wall Street connection?
So the court case is Gonzalez versus Google. And it's basically asking whether or not Google should be held liable for aiding and abetting terrorist organizations. So it isn't necessarily related to cocaine, but it is related to one of the four horsemen that we've talked about on this podcast, that Tim May warned us about years ago, that Bruce Schneier has brought up. Right.
If you want to convince anyone of a lot of change, you invoke one or more of the four horsemen. And those are drug dealers (so you would think cocaine, but not in this case), terrorists, kidnappers, and child pornographers. And in this example, Gonzalez versus Google, it's terrorists. Effectively, Google had ISIS videos, and these YouTube videos were allowed to stay up.
OK, we could argue whether or not they should be up, because, again, the flip side, the other side of the coin of Section 230, is that you're supposed to moderate and remove content. But they stayed up. And not only that, but there were algorithms in YouTube that were feeding this content to people.
And as you might imagine, smart algorithms feeding video content to users was not something on the mind of anyone in 1996.
Yeah, were there even really algorithms in 1996, such as we understand them today? I'm sure there were fundamental algorithms. But this idea of data science being used to deliver specific, tailored content based on what you're viewing and what you're interacting with: was that classification of algorithms around?
No, those are relatively new, within about the past decade. Fundamental algorithms have existed since the beginning of computing. But now I'm going to start splitting hairs, and you're going to be very bored with me. I think one of the things for me is, I'm reasonably smart and I like to read the news, and I consider myself to be moderately intelligent and reasonably well informed.
And yet I can't know enough about everything to have a fully fleshed out opinion.
So I tend to, for some of these cases when they go all the way to the Supreme Court, look at who is supporting both sides, and I use that to kind of inform my decision-making process. Because it's the "enemy of my enemy is my friend" sort of gambit, right? Like, if I see something pending before the court and a whole bunch of causes or organizations or people that I generally agree with are backing it, I probably would generally agree with it too.
And I think one of the things that's been so confusing for me is that I agree with a lot of people on both sides, and I really don't like a lot of people on both sides. So that's left me not really clear on what they're trying to argue and what I think should happen.
In part, it's because this rule has two sides. It has the legal protection, and it has the moderation: what some people at the time called the shield (you cannot sue me for what I have on my site) and the sword (I have the ability to moderate and cut things out).
And as you might imagine, in the day and age of disinformation, we want better moderation, and we want to reduce that shield. Because if you're not doing good moderation, if you're not doing safety work, if you're not removing disinformation, if you're not removing access to ISIS videos, that can have, and we've seen it have, really powerful negative impacts for our society.
On the other side, organizations and institutions are like, oh, they censored me; I want to have, you know, Free Speech with a capital F and a capital S. Those organizations are looking to tear down the shield: I want to be able to sue if my words don't get out; I want to be able to intimidate technology companies because I've got better lawyers or deeper pockets.
A couple decades on, in a way you're right: it has brought together a very unique bipartisan mix, both for and against. So I want to kind of lay that out a little bit. I pulled up a list on SCOTUSblog of the people who have filed briefs either supporting the petitioner, which is Gonzalez in this case, or supporting the respondent, which is Google.
And if you're somebody like me, it is a really confusing party to try and decide which side of the room I want to stand on. Backing Gonzalez, we have Common Sense Media and the Giffords Law Center to Prevent Gun Violence, and the Counter Extremism Project and the Anti-Defamation League. But then we have, like, Senator Josh Hawley and the National Police Association and the state of Tennessee.
And it's a mix. Ted Cruz, for God's sake. There's a whole bunch of "I really like them" and also "I want to punch their faces" on that side of the room. But then on the Google side, there's the Center for Democracy and Technology, but then there's the Chamber of Commerce. There's the Electronic Frontier Foundation and Twitter. There's Microsoft and national security experts.
It's just... Rick Santorum is backing Google, Ted Cruz is backing Gonzalez, the ACLU and Rick Santorum are agreeing about something. It leaves me very confused, as somebody that looks at the party mix of supportive briefs to figure out where I should fall on an issue. So let me ask the person I look to most in these situations.
Where do you fall baby?
What should I think about Section 230?
The reason why this is so split and divided is because it is not clear how a cut against Section 230 will roll out. You'll hear things like people saying if this happens, we will never again be able to provide personal recommendations for products or videos.
Sure, maybe. The flip side is there's the evolution of what's being called the Henderson test. So the Henderson test is being backed by Google, and it's most likely going to be the compromise. And the Henderson test is a way of saying, yes, you're not responsible for what any random person puts on your website.
However, you are responsible for how you display it, which may mean having more oversight of algorithms may mean having more control and oversight over moderation. Back to my analogy, it's one thing to say, you know, you can have a party and you're not going to get sued for what someone in your party says.
It's another thing to invite a whole bunch of people to your party by promising they'll be crazy people saying crazy things. Right? If one person ends up advocating terrorism over, you know, cocktails and a game of Cards Against Humanity, that's a problem.
But if you're saying, hey, come, we're going to talk about terrorism and put you with other people who are advocating for it, then suddenly that is on you, and it's a higher degree of burden. So the Henderson test, I think, is an interesting way to sort of split the middle and have a better set of guidance when it comes to interpreting Section 230.
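That "split the middle" idea can be sketched as a tiny decision function. To be clear, this is an illustration of the distinction Wolf describes (passive hosting versus soliciting or amplifying content), not actual legal doctrine; the field names and decision rule are invented:

```python
# Hypothetical sketch of the host-versus-contributor distinction.
# Field names and the decision rule are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Claim:
    user_generated: bool   # did a third party create the content?
    solicited: bool        # did the site deliberately solicit this material?
    amplified: bool        # did an algorithm actively push it to users?

def shield_applies(claim: Claim) -> bool:
    """Return True when, under this sketch, the platform keeps its immunity."""
    if not claim.user_generated:
        return False       # the platform's own speech is never shielded
    if claim.solicited or claim.amplified:
        return False       # the platform helped drive the content
    return True            # passive hosting: the shield holds

# A random partygoer saying something awful: not on the host.
print(shield_applies(Claim(user_generated=True, solicited=False, amplified=False)))  # True
# Promising a party full of people saying awful things: on the host.
print(shield_applies(Claim(user_generated=True, solicited=True, amplified=False)))   # False
```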
So what would that look like in real life for the everyday user? Does that mean that I could put online that I want to punch Josh Hawley in the face, or does that mean that he could then sue me? Would I still be able to get my Yelp rankings, or would that funnel me to ISIS videos? What would those sorts of revamps play out as in real life?
You could probably still post that about someone and not get sued.
However, do you remember Hunter Moore, right, of the IsAnyoneUp website? The Most Hated Man on the Internet, that Netflix documentary that was out in 2022. Speaking of punchable faces.
Yes, yes I do. So he was the one who was all about revenge porn. This is a really good example of how this would not pass the Henderson test. At the time, he was shielded by Section 230. You could not take Hunter Moore down, because he said, hey, I can't be held responsible for the pictures of women that other people, angry exes, are posting on my website.
He was eventually taken down because he was involved in criminal activity to obtain these images, but we'll put that aside for a minute. If the Henderson test were applied to IsAnyoneUp, the question would be: is Hunter Moore deliberately soliciting this information and deliberately posting it in a way that drives the website? Is the fundamental reason this material is on there because he is driving it and presenting it?
And the answer would be?
Yes, that's what he was all about. That's why he was the most hated man on the internet, and therefore, the section 230 shield would not have protected that site.
So, it almost sounds like what you're saying is that it's less about who shows up to the party and more about how the host lays out the buffet and who they invite.
So help me understand the "invites people" thing, because with most websites, people don't necessarily... you know, I mean, nobody from Twitter ever said, hey, Stef, you should join Twitter. I just kind of peer-pressured my way into it by hanging out with hackers. So how do we decide what they're inviting versus what they are just passively not preventing?
By the invitation, I'm drawing a parallel to the algorithm. If you go on to YouTube and it serves up video after video after video, and it keeps pulling you into ISIS videos, that's a problem. What were you telling me the other day about Facebook and flat earth? So I'm reading this book called Off the Edge, which discusses how flat earth became as popular as it is.
And one of the things that they've noticed is that it really could not have existed to the degree that it is outside of the internet, the last time there was a big flat earth craze was when printing became cheap and affordable and pamphlets started really proliferating.
So there was a late-1700s or 1800s sort of flat earth craze, with people publishing broadsheets about their poor science, and then it kind of faded away again.
And now, with the internet, and specifically with algorithms: part of why flat earth has gained prominence as an apparently socially acceptable theory right now is because of people who were already inclined towards disbelief in mainstream reality, people who were conspiracy-minded, people who might be curious about, you know, David Icke's lizard people or QAnon. When they went on Facebook to join groups talking about their initial conspiracy-minded thoughts.
The Facebook algorithms would say, oh, you might also be interested in this and pull up flat earth websites. And so it actually served kind of similar to the argument that Gonzalez is making about you know people being radicalized into terrorism through the YouTube algorithm.
The Facebook algorithm, this author suggests, is directly contributing to people being radicalized into pseudoscience, conspiracy theories, and flat-earthism, to the point where it has become a borderline mainstream community, which is really bizarre and disturbing. And the argument being made is that without the nudging of those algorithms, it wouldn't have happened.
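The nudging described here can be illustrated with a toy recommender. This is not any real platform's system; the item names and the "any overlap counts as similar" rule are invented, just to show how engagement-driven ranking pulls a curious user toward adjacent conspiracy content:

```python
# Toy engagement-driven recommender (hypothetical data and logic).
from collections import Counter

def recommend(user_history, all_histories, top_n=2):
    """Rank items the user hasn't seen by how often they co-occur with
    the user's history among 'similar' users (any overlap at all)."""
    scores = Counter()
    for other in all_histories:
        if set(user_history) & set(other):      # crude similarity test
            for item in other:
                if item not in user_history:
                    scores[item] += 1           # raw engagement signal
    return [item for item, _ in scores.most_common(top_n)]

histories = [
    ["moon-landing-doubts", "flat-earth-101"],
    ["moon-landing-doubts", "flat-earth-101", "lizard-people"],
    ["cooking-tips", "gardening"],
]
# One conspiracy view is enough to get nudged toward the rest:
print(recommend(["moon-landing-doubts"], histories))
# -> ['flat-earth-101', 'lizard-people']
```

Nothing in the ranking asks whether the content is true; it only asks what similar users engaged with, which is the amplification dynamic at the heart of the Gonzalez argument.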
The example you just laid out is a really good one in terms of algorithms, taking people down a certain path, which leads to some very weird views. And this idea in section 230 that hey, just because you're running a computer service doesn't mean you're a publisher or a speaker or held, you know, liable for the content that's published and spoken.
The argument is: is the algorithm itself content, or is the algorithm itself something the website is doing to draw people in, changing how the information is being presented? And I think that's a really important nuance.
Now, you asked me what I think SCOTUS should do and how they should rule. And this is very difficult for me. In the beginning, I thought it was easy: net neutrality is good, Section 230 is good, and those are the hills I'll die on.
And, by the way, we've got Bianca coming in another episode or two to talk to us about net neutrality so please hang out and if you're interested in that, come back and we'll have a great conversation around the impact of net neutrality in relationships. But that was my stance. But the more I get into this, the more I think about the conversation we had with Stephanie Hare on technology not being neutral.
When we invented Section 230, we invented IsAnyoneUp.com. And there have been so many examples where this shield-and-sword approach, this legal protection and moderation approach, has failed: people posting images of others in revenge porn, people uploading videos that they shouldn't, radicalizing folks. There clearly needs to be a change. And it should be pointed out, you know, this is a very US-centric problem.
So very US centric episode I know we have people all over the globe that tune in and listen to this. This is not necessarily the case in the EU right where they have the Digital Services Act, which has much stronger rules around illegal content and much stronger rules around disinformation. I'm not suggesting we need that level of strength, but I am suggesting that this is a very complex case.
I just know that the way Section 230 has played out from its original point of view in the 90s has led to some very great upsides but also some very real downsides.
The Court will rule on this later in the summer, usually around June; the more controversial the case, the later in the year they do the ruling. But I'm curious: what are you hoping to see happen? What is your best- and worst-case outcome for this? Because I suppose there's always the chance that they could punt it, just come up with a reason to not really rule at all, but nobody's really expecting that to truly happen.
So what do you think is likely, and what do you think would be best? I can't even begin to say what's likely, because, as you said, there are so many different voices vying.
The worst-case scenario for me is Section 230 being overturned, because I do think it's incredibly important that new websites, new people standing things up (heck, some of the projects you and I are involved in that have comment sections) don't reopen the world to a whole bunch of litigation.
My ideal preference would be for there to be some adjustment that is reasonable and allows for a degree of enforced or enhanced moderation of content by providers. Is that a statement, is that a position, that's going to get all your hacker friends mad at you? Are you being controversial today?
I mean, I don't disagree with you. I think that's kind of in trying to understand this and to figure this out for myself. That's where I've ended up falling is, I don't necessarily think that websites should be responsible for what users post.
But I absolutely think that they should be responsible for what they create, in the same way that if the New York Times publishes an article that is factually false and defamatory, people have legal recourse. I think that if a website or a media company or technology giant is creating algorithms that are doing harm to others, they should be responsible for that.
We can't police what other people bring to the party, but we can absolutely police the setting, the room that the party is hosted in. And I think that there needs to be more thoughtfulness and probably more regulation around algorithms these days, especially with porn, especially with the rise in what I'm going to call, as a therapist, antisocial thinking: these conspiracy theories and these us-versus-them sorts of mindsets that have become so dangerous in our country these days. Those are fed and reinforced by the push notifications that we get and by our media consumption.
And I guess where I have fallen on this is that I think that, for algorithms created by companies, the company should be responsible for what they create, but not necessarily for what other people create and put on their sites. I think that's balanced. It's a balanced response. And I tie it back to what we're talking about here: right now, we think about protecting relationships.
Right now, there's very little recourse against anyone stalking people online, anyone doing revenge porn online, anyone posting information to defame people online. And that has created a lot of pain, a lot of pain for many people.
We clearly need to address that, and in my mind, where it crosses the line is where it goes from "I have a comment board and someone just happened to do this" to "I built a website where I'm soliciting people to post this" or "I built an algorithm that is promoting and accelerating these types of posts." When it crosses that line, in my mind, that's where we need to really hit home.
The need for tighter moderation and tighter controls. And with that, we have solved Section 230. Thank you, America; our work here is done. We'll wait and see if the Supreme Court agrees with us. Exactly. Thank you so much for tuning in to Securing Sexuality, your source for the information you need to protect yourself and your relationships. Securing Sexuality is brought to you by the Bound Together Foundation, a 501(c)(3) nonprofit.
From the bedroom to the cloud, we're here to help you navigate safe sex in a digital age. Be sure to check out our website securingsexuality.com for links to more information about the topics we discussed here today, as well as our live conference in Detroit. And join us again for more fascinating conversations about the intersection of sexuality and technology. Have a great week.
Join us in Detroit! October 19 & 20, 2023
Proudly Sponsored by The Bound Together Foundation
An IRS approved 501(c)3 nonprofit organization
Michigan Charitable Solicitation Registration# 64801