Securing Sexuality is the podcast and conference promoting sex-positive, science-based, and secure interpersonal relationships. We give people tips for safer sex in a digital age. We help sextech innovators and toy designers produce safer products. And we educate mental health and medical professionals on these topics so they can better advise their clients. Securing Sexuality provides sex therapists with continuing education credits (CEs) for AASECT, SSTAR, and SASH on cybersexuality, social media, and more.
Links from this week's episode: Changing Societal Norms around Surveillance and Privacy: A Call for Community Engagement and Activism
In today's digital age, citizen surveillance apps have become increasingly prevalent, allowing individuals to monitor their communities and report suspicious activities. While these apps aim to enhance public safety and promote community involvement, their impact on mental health and community relationships cannot be overlooked. This article delves into the potential consequences of citizen surveillance apps on mental well-being and community dynamics, while also addressing privacy concerns and advocating for change.
With the proliferation of smartphones and the advent of social media, citizen surveillance apps have gained popularity among individuals concerned about their safety and community well-being. These apps, such as Nextdoor, Citizen, and Neighbors, provide a platform for users to report incidents, share information, and connect with their neighbors. The intentions behind these apps are commendable, as they aim to foster a sense of security and encourage community engagement.

While citizen surveillance apps offer a means of keeping communities informed, they can also have unintended consequences for mental health. Constant exposure to reports of crime, suspicious activity, and safety concerns can heighten anxiety and stress among app users. With real-time alerts, individuals may live in a constant state of vigilance and worry, negatively impacting their overall well-being. Moreover, the fear-mongering nature of some citizen surveillance apps can contribute to a culture of paranoia and mistrust within communities. When every minor incident is amplified and shared, it creates an environment of hyper-vigilance, leading to increased social isolation and decreased trust among neighbors.

Citizen surveillance apps also have the potential to disrupt the fabric of community relationships. Traditionally, strong communities have been built on trust, open communication, and a sense of belonging. These apps can inadvertently erode those foundations by promoting a culture of suspicion, anonymity, and fear. When community members primarily interact through a digital platform focused on crime reporting, it becomes difficult to foster genuine connections and meaningful relationships. In turn, this can lead to decreased social cohesion, limited support networks, and a decline in overall community well-being.

Privacy concerns surrounding citizen surveillance apps cannot be ignored. While these apps often claim to prioritize user privacy, there are inherent risks in sharing personal information and reporting incidents within a community. The potential for misuse of data, hacking, or the creation of digital profiles based on reported activities raises legitimate concerns about the privacy and security of app users. Furthermore, the anonymity these apps provide may embolden individuals to make false reports or engage in vigilantism, jeopardizing the integrity of the platform and the lives of innocent people.

To address the potential negative impacts of citizen surveillance apps, it is crucial to advocate for change. Here are some suggestions for striking a balance between promoting community safety and preserving mental health and community relationships:

- Prioritize digital well-being: design feeds and alerts so users stay informed without being kept in a constant state of alarm.
- Foster genuine community engagement: pair online reporting with offline events and relationships that build trust among neighbors.
- Enhance privacy measures: limit data collection, secure what is collected, and give users real control over how their reports are used.
- Educate users: teach people to evaluate reports critically and to recognize bias before amplifying a "suspicious activity" post.
- Collaborate with law enforcement transparently: set clear, publicly accountable rules for any data sharing between platforms and police.
Citizen surveillance apps have the potential to play a significant role in community safety and engagement. However, it is imperative to recognize and address their impact on mental health and community relationships. By prioritizing digital well-being, fostering community engagement, enhancing privacy measures, educating users, and collaborating with law enforcement, we can strike a balance that promotes safety while preserving the essential fabric of our communities. Together, we can advocate for change and create a future where citizen surveillance apps enhance community well-being and relationships.
Hello and welcome to Securing Sexuality, the podcast where we discuss the intersection of intimacy and information security. I'm Wolf Goerlich. He's a hacker. And I'm Stefani Goerlich. She's a sex therapist. And together we're going to discuss what safe sex looks like in a digital age, starting with... starting with, love of my life.
Why do you always ask me questions I don't know the answer to? I think you love doing that.

Well, partly it's because I like being the one that's more right between the two of us, so I like stumping you.

Well, partly because you're the tech dude, so I ask you questions I genuinely think you're going to be able to answer.

But "tech dude" is the key phrase there. I'm not a lawyer. Like in a recent episode, the one on the new Peeping Toms, you asked, what can we do legally about this? And I said, I don't know. We need an expert who can help us with this.

Well, to be fair, we talk to a lot of experts, and when I ask them what we can do, they still tell me they don't know. So we need an expert who knows. And so we bring you, this week, Albert Fox Cahn. How's it going, my friend?

It's going well. It's so good to be back.

It is great to have you back on. So for those of you who missed his first episode, Albert Fox Cahn is the founder and executive director of the Surveillance Technology Oversight Project (S.T.O.P.). And as a lawyer, a technologist, and an activist, Albert, you've become a leading voice in how we govern and build the technologies of the future.

I basically just point out all of the abundant creepiness of modern life. It's getting easier and easier by the day to find the examples.

You know, I love the visual that comes to mind when you say that, because I picture you running around New York with, like, stickers or a big rubber stamp, just going "creepy" and running away, "creepy" and running away.

No joke, I did look at doing a campaign of putting digital nutrition labels on all of the digital infrastructure around New York City, until we realized we would definitely go to jail if we put them on city property.

That's amazing, though. Love that. The question that we keep coming back to, Albert, is citizens spying on citizens. Now, obviously, this is done for the best of reasons, I would hope. We all want safer neighborhoods for our families, for our kids. But it's really ratcheted up with apps like Bless Every Home and Citizen. And of course, it started with Nextdoor. And I recall an article on STOP's site about Nextdoor a few years back.

Yeah, this is something that's been worrying me for years now. We put out a report called The Spy Next Door all the way back in 2021, and back then we were talking about this landscape of really unregulated apps where people were posting their wildest suspicions, oftentimes sparking panic, using not-so-coded dog whistles, all of this harmful activity. And flash forward to today, and it's still a hot mess. What I think tends to make me most upset about these apps is that while they have systematically failed to show that they can live up to their promises of keeping people safe, they are having a proven deleterious effect on people's mental health and on our communities. And I see it with my own family. I have relatives who watch these things and are constantly being fed this Mad Max vision of their own communities, where everyone's a threat and we're constantly at risk. And this is part of how we're in a society where, even when crime rates go down, people still think they're going up.

And I love the Mad Max statement, because it's not just the whole, like, Purge-style barricade-the-doors, everything's-going-to-hell.
But it feels like that's been true since, you know, Mad Max first came out, or we can go back to the seventies and, like, Soylent Green, all of the dystopian stuff that came out of, really, the Cold War. We have had fifty, sixty years of people believing that everything's going to hell and the world is going to end tomorrow. But it feels like there's this new variation where we're turning on each other, right? It used to be: everything's going to hell, I'm building a bomb shelter, and when the nukes come, my neighbors are all going in with me. And now I feel like the neighbors have become the nukes. Everybody's become very suspicious of each other, and so the bomb shelters are more isolationist. It's me and mine, and everybody else is suspicious and scary and evil. And maybe I'm beating this metaphor to death, but that's the vibe I get these days: nobody trusts each other anymore.

As you're describing that, I'm thinking back to the Twilight Zone episode about a bomb shelter, where a family was barricading themselves in and they only had enough provisions for themselves, and the neighbors break down the door and barge their way in. In that case, the neighbors were the threat, and it turned out everyone was freaking out because they thought it was the end of the world, but the Martians were just proving that with a few lights in the sky you could cause such mass panic that we would all tear ourselves apart. So I think there's always been this search for agency, this sense that if you only have enough provisions, if you only have enough guns, because this is America, then somehow you're going to be able to keep yourself safe. But what we see is that when the statisticians actually break down the data, none of these things are actually promoting safety the way we're told they will, especially not all of this neighborhood surveillance.

And in a way that makes sense, right? Because there's a cognitive bias that more information equals less risk: if I understand it better, then it must be less risky. We could go on and on with that; it could be a whole podcast episode about how maladaptive that is for today's world. So we know that human beings do this. One of the things that really bothers me about these apps is that the very paranoid person might say they play into these cognitive biases; the less paranoid and perhaps more optimistic lens is that they naively ignore these biases and don't put in good guardrails.

Well, I think that when you are building a platform, you are building a vehicle for user engagement. And when your user engagement is around observing stories about crime, the way you build quote-unquote success into your platform is to have more crime data, more stories, more content, more engagement. And certainly the biased narratives play into it as well, because we see so many examples, whether it's kids going trick-or-treating or neighbors folks don't recognize, where suddenly, when it's a Black person or a Hispanic person or an Asian person on camera, they are viewed through this racialized lens that labels them a threat simply for being in your neighborhood. A lens that views Black kids on Halloween as, you know, quote-unquote thugs and other horrible things, just because they're trying to go door to door for candy.
And these are the sorts of examples we see constantly on these apps. But then, because these are tech platforms, they are also able to augment the data you see coming from users, which is completely unverified most of the time, and then create these partnerships with police to tap into 911 data and create even more content to constantly bombard everyone with these different triggers for imagining the absolute worst.

So I feel like we need a deep, cleansing breath. The common theme of our conversations is that there are always these moments of: oh God, how bad is it? Listeners, yes, take that cleansing breath. We'll get to the police side in a moment. But I want to ask a slightly different question, which is: short of being on these apps, is there any way for anyone to know if they're being targeted, any way for anyone to know if they're caught up in this?

No. I mean, you don't have a duty to notify the people you're saying nasty things about. And really, there are a lot of people who have no choice but to constantly be surveilled by these apps. Think about a delivery driver who, day in, day out, is getting monitored not by one or two of these but sometimes dozens, hundreds of these, and their video, their likeness, their image is being used by other people for whatever purpose they want. You'll sometimes see people posting TikToks of folks without their consent, or doing more comedic stuff. But you'll also see unsubstantiated allegations of, well, I didn't get this delivered today, so I think this person is stealing from me. There's no evidence, nothing to substantiate it, but they'll just jump to that conclusion. And again, because of the power dynamics between the person who owns the camera and the person who is being recorded, the person being recorded often doesn't have a choice. The same goes for nannies, domestic workers, any number of people who work in these environments where they're being recorded, not because they choose to, but because the homeowners do.

And my first thought was, well, at least thank goodness there are things like Google Alerts, right? I might not know that somebody has recorded me, but I'd know when it hits the Internet. Except I wouldn't, because I'm not tagged in it. If somebody posts, look at this crazy person in her wild wardrobe on my block, I'm just the random crazy person. There's no way for me to know that that video has been posted.

Yeah, unless you want to get a subscription to a sketchy facial recognition website and constantly look for examples of your face showing up in the world, you're going to have a hard time with that. And even with those websites, you're not going to find every platform. You know, I do think there is this broader cultural norm we've created of surveillance as entertainment, surveillance as cultural connector, that is built around constantly turning other people's bodies into content for your hot takes. And I think the sartorial objections might be some of the less alarming stuff in the scheme of things, but it's still a little creepy just to think that, OK, my image, my likeness, my identity is being turned by this person, without my consent, into their content. And it's something that we've never really debated as a society as part of a new social contract.
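To make the monitoring gap Stefani describes concrete, here is a toy sketch in Python of why a text-based alert (Google Alerts or similar) never fires on untagged footage: it can only match strings, and a video that never mentions your name carries no string to match. All posts, names, and field names below are invented for illustration, not any real service's data model.

```python
# Hypothetical posts on a neighborhood app. Post 1 contains video of a
# person but never names her; post 2 mentions her by name.
posts = [
    {"id": 1, "text": "Look at this suspicious person on Elm St",
     "tags": [], "has_video": True},
    {"id": 2, "text": "Great article by Jane Doe on digital safety",
     "tags": ["Jane Doe"], "has_video": False},
]

def text_alert(posts, name):
    """Mimic an alerts-style match: fire only when the name appears in
    the text or tags. Untagged footage never triggers it."""
    hits = []
    for post in posts:
        if name.lower() in post["text"].lower() or name in post["tags"]:
            hits.append(post["id"])
    return hits

print(text_alert(posts, "Jane Doe"))  # [2] -- the video in post 1 is invisible
```

Closing that gap requires scanning the images themselves, which is exactly the "sketchy facial recognition subscription" trade-off Albert mentions.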
And we've seen examples of that, right? I think about people who were turned into memes, some of whom embraced it, while others said, I don't want you using my image everywhere. And there was a recent case in the news about a woman who went to a protest with a particular sign, and that picture was then used, I think it was in support of Palestine, used to push back against the Jewish community and in support of Palestinians, so she went and talked to the university about getting it taken down. So we've seen examples of both people leaning into it and people really pushing back on it. But more broadly: let's assume I do know about this. I've got a privacy service that I use, I love it, and it delivers me updates every once in a while when certain parts of my information are leaked, and I say, yes, do a takedown request. It doesn't necessarily cover photos and videos, though. So next time I talk to them, I'm going to say, hey, I've got an idea for you, but I want a five percent royalty.

For the record, I'll see if I can negotiate that. So what can an everyday person do to remove themselves from these apps, to remove their photos, these videos?

Well, look, you can walk around your neighborhood wearing a Freddy Krueger mask, but I don't think any of us want that to be the price of a little bit of privacy going forward. On an individual level, it really depends on the state. So, for example, Illinois has a law called BIPA, the Biometric Information Privacy Act, which goes further than most state laws in protecting the rights of individuals to keep their biometric data private, which can include this kind of data. That potentially can be a way to litigate against, if not the homeowners using these camera systems and posting to these platforms, then the platforms themselves, specifically thinking about potential litigation against Ring and their associated Neighbors platform. We also have litigation that we've been pursuing for a few years at STOP against some of the data brokers that are monetizing the data on these platforms, using California's right of publicity. And that, I think, will go a long way toward setting a new standard that you can't have your data monetized by third parties in this way without your consent. But when it comes to the way individuals just go on Nextdoor, go on Citizen, and use that data, use that photo without your consent, that's the hardest one. If we were to try to pass a law against that, you would come up against real First Amendment scrutiny; I think it would be hard to get it passed. And so that's where I take off my lawyer hat and put on my activist hat and talk about what community norms we're trying to promote. How do we, in our neighborhoods, push for a better vision of how we come together than what the neighborhood apps imagine for us? How do we reassert that there is something profoundly creepy about using someone else's image without their consent? Now, of course, if they're trying to take your face and make money off of it, that's a whole other thing. But most of the time, when it's someone taking a photo in a public place and posting it without an attempt to monetize the name or likeness, that goes beyond what our laws really can touch.
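As a small aside on the "do a takedown request" workflow Wolf mentions: much of what privacy services automate is simply generating and tracking opt-out requests, broker by broker. Here is a minimal sketch under that assumption; the broker names are hypothetical placeholders, and a real service maintains a constantly updated list of hundreds of brokers, each with its own opt-out procedure.

```python
from string import Template

# Placeholder opt-out letter. Real requests must follow each broker's own
# process and cite the law that actually applies to the requester's state.
OPT_OUT_TEMPLATE = Template(
    "To: $broker\n"
    "I request removal of all records relating to $name at $address, "
    "pursuant to applicable state privacy law (for example, deletion "
    "rights under the CCPA for California residents).\n"
)

BROKERS = ["ExampleBroker Inc.", "SampleData LLC"]  # hypothetical names

def generate_requests(name, address):
    """Produce one opt-out letter per broker on the list."""
    return [OPT_OUT_TEMPLATE.substitute(broker=b, name=name, address=address)
            for b in BROKERS]

for letter in generate_requests("Jane Doe", "123 Main St"):
    print(letter)
```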
So this is where I get to ask the question I've been asking in most of my episodes for the last month or twelve: what can we do?

Yes, well, there is a ton we can do. First off, we can get off the platforms. We can encourage our friends and families to get off the platforms. We can try to denormalize the constant surveillance-entertainment ecosystem we've developed over recent years. And as activists, we can push for new legislative protections. For example, Amazon has contracts with thousands of police departments and municipalities across the country, getting this 911 data to port into their app and also getting access to a lot of other municipal data, partnering with police departments to promote their product. We don't have to accept that. You can go to your local city council, if you're in a jurisdiction that partners with Amazon, and push for the local government to end that partnership. It's rarer with Citizen and the other apps to see those same sorts of partnerships, but if you see it, you can push back; you can push for those partnerships to end. And even if your jurisdiction doesn't have that type of agreement, you can push for a ban going forward, preventing the problem before it starts. Because without that 911 data constantly feeding into the app, constantly populating it with even more details, even more scary stories, it's a lot harder to maintain the sort of user engagement the platforms want, to remain a vibrant echo chamber of fear. And then we in the civil rights litigation space are always looking for new ways to sue these companies. For example, if someone does know that they've been targeted in a racist way, in a sexist way, in a discriminatory way along some other basis on these platforms, those are things we can potentially sue over. Because in many jurisdictions, different types of hate speech are protected, but when it rises to the level of harassment, it can be something you can still sue over. And there's one particular tort, intentional infliction of emotional distress, that is, I think, especially well suited for these sorts of platforms, where you see just terrible racist caricatures and grotesque statements being made. I know, I heard the sigh. I know that's not quite enough; I know none of these is going to be a cure-all, but we can push back incrementally.

Actually, the sigh was that that gives me a degree of hope, and it does leave me feeling a little more empowered that if somebody really, genuinely hates my sequin coat that much, enough to mock me online, I have recourse. But the question, that harm piece, is: how do I prove that they did it to be mean? This is why I'm so glad to have a friend who's a lawyer. One of the things I know we have to demonstrate in a legal case is intent, right? So how do we show that somebody was doing it to be a dick, as opposed to just doing it because, wow, look at this random thing I've never seen before, isn't that quirky? How do we prove that harm has occurred?

Well, the sequin outfit, look, I haven't seen it; I'm sure it's lovely; I personally hate whoever said these things about it. But like I always tell people, you have to take the Elon Musk perspective when you think about free speech issues. You have to think: who is the most obnoxious, deplorable, wealthy person who might have
the money and time to misuse this law if it were broader, and to defend against it. And if you're an Elon Musk fan, think of someone you hate who meets those criteria. Because with the sequin outfit, it's probably not enough to actually be something you sue over. And that's been the standard for most of American history. We have broad commentary rights; we have broad power under the First Amendment to be assholes, to be rude, to be obnoxious; and we see plenty of Americans who avail themselves of that right on a daily basis. But where the speech rises to a level of outrageousness so extreme as to cause severe emotional distress, that's where we start to see a carve-out. The classic example is the Westboro Baptist Church, which was successfully sued for intentional infliction of emotional distress for their protesting of funerals and their engagement in just grotesque hate speech, at a time when it was designed to cause maximum emotional harm. That is a carve-out from the general First Amendment protections. Of course, anything that's a threat is a carve-out. But then also, discrimination is an area where we've seen interest in creating additional protections against harassment and discrimination on the basis of a protected class. So where it's on the basis of race, religion, gender, sexual orientation, any number of legally protected categories, that's where you have broader rights to push back, especially in a city like New York, where I'm based, which has one of the broadest harassment laws in the country. You have a lot of power to sue when you're targeted on that basis. But if you're targeted because of something that's not protected, your clothing choices, the books on your bookshelf, your driving habits, any number of things, then unless it's truly outrageous, like the Westboro Baptist Church, it's less likely to end up in litigation.

So we've got this situation going on, right, where people have the right to be rude and obnoxious, systems are being built that are these dystopian echo chambers of distrust, and we've got a very concerning feedback loop there. But I want to take it one step further, which is what happens when this data starts to spread. Say someone took a picture of somebody and made a rude comment, or labeled them on any of the protected classes you identified, maybe was harassing them. And then that data gets bundled and, you know, the word is "enriched," but I don't like the word enriched in this context, by data brokers who say, oh yeah, and this person also is this, this, and this. And we also know this about their search history; we also know that about their location. There's a lot of behind-the-scenes work that happens in these systems, right? Not the system of the app; the system of the data broker.

I don't know, if we can't use "enriched," maybe "encreepified"? If we can make that a word, we'll sell the T-shirt. But I think that, yes, there's this broader ecosystem of monetizing almost every aspect of our digital lives and converting the analog portions into digitized assets that can be amalgamated into this profile of almost everything we do. And the lawsuit I mentioned earlier, this class action against Thomson Reuters: we allege that they basically are tracking every single person in the United States.
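A minimal sketch of what the "enrichment" Albert objects to reduces to technically: joining separately collected records on a shared identifier until a single profile emerges. Every record, identifier, and field name below is invented for illustration; no real broker's schema is implied.

```python
from collections import defaultdict

# Hypothetical record sets a broker might acquire from different sources.
employment = [{"pid": "p-001", "employer": "Acme Corp"}]
addresses  = [{"pid": "p-001", "address": "123 Main St"}]
locations  = [{"pid": "p-001", "last_seen": "clinic parking lot"}]

def enrich(*tables):
    """Merge every table keyed on the shared identifier into one profile."""
    profiles = defaultdict(dict)
    for table in tables:
        for record in table:
            profiles[record["pid"]].update(record)
    return dict(profiles)

print(enrich(employment, addresses, locations))
# {'p-001': {'pid': 'p-001', 'employer': 'Acme Corp',
#            'address': '123 Main St', 'last_seen': 'clinic parking lot'}}
```

The point of the sketch is how little machinery this takes: once records share any stable key, aggregation is a few lines, which is why the meaningful constraint has to be legal rather than technical.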
We essentially allege that they are collecting data on all of us: employment data, address data, associational data, data about our finances, data about our politics, data about our location and driving habits, all of these things, and selling it to the highest bidder, but selling the most invasive portions of it to the police. And so I definitely worry about the data broker effect here on a couple of levels. First of all, it allows police to amass volumes of data indirectly that they couldn't collect on their own. It would be unlawful for the local police department to say that everyone on your block needs to have a security camera that feeds into the police department. But there's nothing to stop them from reaching out to every one of those homeowners if they know they have a Ring camera, for example, and previously Ring had a portal for law enforcement that made it possible for them to do just that. And even if you're not handing over the footage, a lot of the time they can just subpoena Ring themselves and get that same footage. And so these platforms become force multipliers for police surveillance, and the public-private partnership dramatically increases the scale and decreases the cost of mass surveillance. But it also points to the fact that we can never know what the aggregate lifetime effect of any one piece of footage we post is going to be on the person we filmed. We cannot know what is going on in their life, and the way that having that data online, having that image of them posted, that video, could put them in danger of arrest, put them in danger of deportation, put them in danger in other ways. And this is something we keep highlighting right now in the context of abortion care and gender-affirming care, where we know, for example, that a lot of people who are coming to states like New York have heightened privacy needs, because they're seeking care here but risk prosecution when they return home. And so suddenly, a photo that proves they were in New York in that context can be a lot more dangerous than it would have been pre-Dobbs. So yeah, the police data broker situation is a hot mess. And this is one of the few areas where I think Congress will actually do something positive in our lifetime, hopefully sooner than that. There's a bill called the Fourth Amendment Is Not For Sale Act which, if it were passed, would make it illegal for police to use tax dollars to access data that they couldn't access under the force of law. Right now, today, if they're buying data rather than using a warrant, there are basically no checks on what data police can buy. Under this bill, the police would be limited to buying only the data they would be eligible to get pursuant to a warrant.

So I want to suss that out, restate it, and make sure I understand it correctly. There's a direct search, which is: I want to know more about Wolfgang, and therefore I'm going to go to a data broker and buy all the information about Wolfgang. And you would think, having watched all the police procedurals over the years, oh, that must clearly take a warrant. No, because they're purchasing publicly available information.

Yes, and that is completely right. If you want to get Wolfgang's information from a company that's unwilling to give it, from your employer, then you need to get a warrant and go to court.
But if you want to get that exact same data from a willing seller, as long as you're using dollars and not threats, you can get whatever data you want.

Which is terrifying. But also, that's a direct search. There's a reverse search, right? And I may not be using the right term, but it's basically a digital dragnet, where you say: I may not know whose information I want, but show me all the people who walked down this street during this time, or show me all the people who were in New York googling this particular search term, or show me all the people who watched a particular YouTube video from a particular geo.

Yep, and all of those things happen thousands of times a year. We've seen Google getting geofence warrants, which track every single person within a given geographic area. We've seen a growing number of keyword warrants, where anyone who has searched for a particular keyword, an address, a search term, will have their data given over to law enforcement. And recently there was a lot of controversy when the FBI demanded information on, I believe, thousands of different individuals who had accessed a YouTube video, an innocuous YouTube video, just because that video might be helpful in identifying a single suspect. And it's really important to note that I don't think any of these warrants would have been considered constitutional a decade or two ago. Historically, under the Fourth Amendment, we didn't just require probable cause for some of the people you wanted information on; you had to have probable cause for each and every individual you wanted to search. There's Supreme Court precedent that says if you go to a bar, and you know one person in that bar, for sure, is dealing drugs, you cannot pat down and search each and every person in that bar, because you do not have particularized probable cause for any of them. You just have this fuzzy, more generalized probable cause. But we've allowed, in the computer search context, in the digital search context, this generalized probable cause that allows for these digital dragnets, for these fishing expeditions. And as bad as these reverse warrants are, you do sometimes see judges push back and say, oh, this is too broad, or this is too many people, or this is too large a period. When you're going out and buying the data, there's no judge, there's no one pushing back, there's no one who even knows. All you have is a police officer, tax dollars, and a company willing to sell a lot of our data for a very high price.

Sometimes I do consulting witness work or expert witness work, and I know not as much as you do about rules of evidence and all of that cool stuff, because I know you guys, you fancy attorneys, take entire classes in it. But I know enough to know that some things are allowed in court and some things aren't. Some factors, some information, some identity things, all sorts of pieces where we have to ask: how important is this? How relevant is this? Is this information we're allowed to consider when we're evaluating somebody? And I'm even going to say, like, criminally or not, because just across the board, when the law is looking at somebody for a reason, there are things they're allowed to consider and there are things they aren't. And that, if I understand it, is supposed to inform what sort of information the investigators are gathering to begin with, right? Or no? Like, how the, um, prejudicial stuff isn't being gathered, isn't being included.
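To make the "digital dragnet" Albert describes concrete: a geofence-style reverse search is, at bottom, a filter over everyone's stored location pings by bounding box and time window, returning all people present rather than a particular suspect. The data, users, and field names below are invented; this is a toy sketch, not any vendor's real API.

```python
from datetime import datetime

# Hypothetical stored location pings.
pings = [
    {"user": "alice", "lat": 40.7130, "lon": -74.0060,
     "t": datetime(2024, 5, 1, 14, 5)},
    {"user": "bob",   "lat": 40.7135, "lon": -74.0055,
     "t": datetime(2024, 5, 1, 14, 20)},
    {"user": "carol", "lat": 41.0000, "lon": -75.0000,
     "t": datetime(2024, 5, 1, 14, 10)},
]

def geofence(pings, lat_range, lon_range, start, end):
    """Return every user observed inside the box during the window."""
    return {p["user"] for p in pings
            if lat_range[0] <= p["lat"] <= lat_range[1]
            and lon_range[0] <= p["lon"] <= lon_range[1]
            and start <= p["t"] <= end}

print(geofence(pings, (40.71, 40.72), (-74.01, -74.00),
               datetime(2024, 5, 1, 14, 0), datetime(2024, 5, 1, 15, 0)))
# {'alice', 'bob'} -- everyone nearby, with no particularized suspicion
```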
Things that shouldn't matter are now, all of a sudden, entering the conversation. I don't know if this question makes any sense, because I don't have the legal expertise to frame it up in my head.

No, no, it makes perfect sense. What doesn't make sense is the reality of our legal system, where the vast majority of these types of search tools go completely unregulated. In many cases, they aren't even disclosed to defendants, even though I believe that's unconstitutional, because police have broad discretion in amassing, quote, tips and leads in their pre-investigatory phase. And what you'll oftentimes see is a facial recognition search, or a cell-site simulator search, or these other types of things, and then you'll have all this data collected, or a bunch of data bought from a data broker, and based off of that, you'll develop the case. You'll find other evidence, and then you'll use that evidence you got down the road, but you'll never bring into court, never give to the grand jury, never introduce at trial, any of the things you bought earlier on in the case. And if you're not relying on it before the grand jury, well, it depends on the state, but here in New York there's some bad case law, for example, where people haven't been able to challenge facial recognition searches because the police were saying, oh, this wasn't the basis for the arrest; it was just a tip, it was just a lead. And it really violates everything we know about human psychology and how police actually make their decisions about who's a suspect. But it's a gaping chasm in our legal system. Even worse, we see a lot of evidence of abuse, with officers engaging in what's called parallel construction, which is the practice (and this is illegal, but people do it all the time) of coming up with a justification for how you found someone that doesn't involve the constitutionally questionable search tactics. So you won't admit to running a facial recognition scan or using a questionable vendor; you'll just say, oh, I had this officer who had arrested him seven years ago, who is based in a completely different part of the city, but he just happened to see this case file and ID that guy. And that's something I hear about all the time from public defenders. I'm also having flashbacks to my evidence law class, which was perhaps one of the least useful classes I ever took in law school, in that the heavily overeducated professor who taught it began the class by saying, "This is a class on Cartesian epistemology," and it just went downhill from there.

Oh, well, that just sounds like a rollicking good time. Who doesn't love Cartesian epistemology? I mean, I know that's what really starts the party.

I used a single Greek word in an entire 7,000-word chapter I'm writing, and Wolf has been mocking me for a week, so I am down for Cartesian... whatever that second word was. I've already forgotten it, because clearly we're here to learn.

I know that's going to come up later in the conversation. "This sounds a lot like Cartesian epistemology to me." And I'd be like, ah, that was Albert. Thank you, Albert. This is why we're friends. Where do we go from here? What is your vision for the future? How do we move on from the digital bunker,
the surveillance of our neighbors, the hyper-suspicion? I mean, at least back in, like, the Andy Griffith Show days, people trusted the cops. Then we realized that was a bad idea, so we stopped trusting the cops, but we trusted our neighbors. And now it feels like we don't trust anybody at all. How do we start to make changes to surveillance culture, to the culture as a whole, to pull back from that? I feel like we're kind of at a psychological edge. When I think about my clients, when I think about having conversations about vulnerability and trust and friendships and even just feeling safe moving through our communities, there's a lot of fear now that has come up because of technology. How do we start to pull back from that edge a little bit?

I wish I had a magic wand, but I do have a nonprofit that puts out a lot of reports, and we'll have one coming out in the next few weeks about a lot of the deceptive advertising practices that go into selling these products. I think one of the things that will be helpful is to see regulators like the Federal Trade Commission coming in and starting to hold companies accountable for the, you know, safety promises they're making people when they're actually selling stereotyping. And I think a lot of these companies are going to face really significant litigation going forward for selling us a bill of goods. And while we're waiting for the regulators to act, and while we're waiting for the chance to file more of these lawsuits, for everyone out there, this is one of those surveillance issues where each and every one of us has the power to make it a little bit better. Because this isn't the NSA setting up a massive server to copy our internet traffic; it's our parents, our cousins, our aunts and uncles, a friend who lives down the block. These are people in our lives who listen to our feedback, who can respond to our pushback when we critique the reliance on these platforms we've seen. And it's not always a fun conversation; I have this talk with my dad more often than I'd like about deleting these apps. But I think it's something where we can continue, in a friendly, open, nonjudgmental way, to highlight the ways these apps are actually deteriorating people's quality of life, to really remind them that you don't have to live a life constantly bombarded by triggers for how fearful you need to be. You can live a life that is actually much freer than that, much more generous, with a much more open view of the world around you: a view that's not rose-tinted glasses, just not the fire goggles that Citizen wants us to wear. And so, really, if you're looking for something to do every time you hear one of these stories, just think about the people in your life who use these platforms, and spend a bit of time talking with them about the ways these platforms are actually hurting them. I think that conversation can go a long way. And like I said, we'll have a new report coming out at STOP in the coming weeks that will give a bit more evidence to help fuel that particular fight.

That's such a strong point. You know, a safer world, much like safer sex, is achieved one uncomfortable conversation at a time.

It doesn't have to be an uncomfortable conversation. It could be a joyous one.

Well, this has been a joyous conversation.
So, Albert, thanks so much for carving out some time and coming back on, and for helping us navigate all these issues.

It's always such a pleasure.

Thank you again, and thank you for listening and tuning in to Securing Sexuality, your source for the information you need to protect yourself and your relationships. Securing Sexuality is brought to you by the Bound Together Foundation, a 501(c)(3) nonprofit. From the bedroom to the cloud, we're here to help you navigate what safe sex looks like in a digital age. Be sure to check out our website, SecuringSexuality.com, for links to more information about the topics we discussed today, as well as our live events. And join us again for more fascinating conversations about the intersection of sexuality and technology. Have a great week!