Securing Sexuality is the podcast and conference promoting sex-positive, science-based, and secure interpersonal relationships. We give people tips for safer sex in a digital age. We help sextech innovators and toy designers produce safer products. And we educate mental health and medical professionals on these topics so they can better advise their clients. Securing Sexuality provides sex therapists with continuing education credits (CEs) for AASECT, SSTAR, and SASH around cybersexuality, social media, and more.
Links from this week’s episode:
Mental Healthcare in the AI Era: Enhancement for Clinicians, Not Obsolescence
In the ever-evolving landscape of technology, it's hard to miss the rapid development and integration of Artificial Intelligence (AI) in various sectors. This sweeping transformation raises questions about the implications for many professions, including mental health. A significant debate centers around a profound question: "Will AI replace human mental health providers such as therapists and other clinicians?"
To address this question, we need to delve into the abilities and limitations of AI in the therapeutic space and understand the intricate dynamics of human psychology and therapy.

The Promise of AI in Mental Health

AI's potential for mental health is significant. Its 24/7 availability provides an immediate response, which is especially valuable for those who may not otherwise have access to mental health services. Chatbots such as Woebot and Tess are examples of such applications; they use cognitive behavioral therapy (CBT) techniques to help users manage their thoughts and behaviors. AI has also made headway in identifying mental health problems. Machine learning algorithms can analyze data from individuals' online activity or speech patterns to detect signs of mental distress. This could aid early detection and intervention, a critical aspect of managing mental health conditions.

The Limitations of AI in Therapy

AI advancements are promising, but they come with significant limitations. First, AI lacks the capacity for empathy, an essential ingredient in a therapeutic relationship. Empathy is the ability to understand and share feelings, a uniquely human trait that fosters trust, openness, and therapeutic alliance. AI can simulate empathetic responses but cannot truly feel human emotions. Second, AI struggles to understand and interpret the subtleties of human communication. Human interactions are nuanced, involving not just words but tone, body language, and contextual understanding. Although advances in natural language processing (NLP) have improved AI's capacity to interpret language, the technology still struggles with these complexities. Furthermore, the ethical considerations surrounding AI in therapy cannot be ignored. Issues around data privacy and consent, the risk of misdiagnosis, and the absence of regulatory frameworks all warrant careful consideration.

The Future of AI and Therapists: Collaboration Over Replacement

Considering the potential and limitations of AI in mental health, it appears unlikely that AI will replace human therapists or counselors entirely, at least in the foreseeable future. Rather than seeing AI as a threat, it is more constructive to view it as a tool that can complement and enhance therapeutic services. AI applications can serve as first-line support, providing immediate responses and delivering basic mental health information. They can assist in monitoring mental health, enabling real-time tracking and alerting professionals when intervention is required. AI could also take on administrative tasks for therapists, allowing them to focus more on providing care. In the meantime, human therapists and counselors remain indispensable: they provide the empathetic understanding, nuanced communication, and ethical judgment that AI cannot replicate. Rather than replacing therapists, AI will likely transform the mental health field into a more collaborative space, enhancing the provision of care and expanding its reach. As with any transformative technology, our responsibility is to guide its integration in a manner that benefits and safeguards humanity. As we navigate this new frontier, we must foster ongoing dialogue around the ethical implications and the human-AI relationship in mental health.
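The machine-learning detection described above can be made concrete with a deliberately tiny sketch: a text classifier that scores messages for possible distress. The four training messages and their labels below are invented for illustration; real screening systems are trained on large, clinically validated datasets, and any flag should route to a human professional, never to an automated diagnosis.

```python
# Toy illustration of ML-based distress detection; not a clinical tool.
# All messages and labels here are invented for demonstration purposes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "I can't sleep and everything feels hopeless",
    "I had a great day hiking with friends",
    "I don't see the point in getting up anymore",
    "Looking forward to the concert this weekend",
]
labels = [1, 0, 1, 0]  # 1 = possible distress, 0 = neutral

# TF-IDF features feed a simple logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Score a new message; a real system would flag it for human follow-up.
score = model.predict_proba(["lately nothing feels worth it"])[0][1]
print(f"possible-distress score: {score:.2f}")
```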
Stefani Goerlich: Hello and welcome to Securing Sexuality, the podcast where we discuss the intersection of intimacy-
Wolf Goerlich: -and information security. I'm Wolf Goerlich.

Stefani: He's a hacker. And I'm Stefani Goerlich.

Wolf: She's a sex therapist. And together, we're going to discuss what safe sex looks like in a digital age. And today we're talking AI – not just, like, "can I date AI?" I think we talked about that. We had a really good conversation a few episodes back – I think it was Episode 28, with Markie Twist and Neil McArthur – about this, right?

Stefani: We did. Um, we were talking about people dating their Replikas, dating the personas that they had created in the Replika app.

Wolf: Yeah. So as an update to that story: when we were recording it, Replika had pushed out an update that basically, like, stopped all relationships. If you had a relationship with your Replika – if it was your boyfriend or girlfriend – it was done. You went to bed telling him you loved him; you woke up hearing, "I'm sorry, I can't have that conversation with you, Hal." Um, and that was a really, really troubling, traumatic time for a lot of people. Now, the company has since, like, gone back on that – they restored the personalities. So, you know, as a little bit of an update, we have seen that part of the population taken care of, as it were. Uh, but it's still a reminder that sometimes these things can go wrong. But I thought today we'd have a little bit broader conversation, right? Because ChatGPT is having a moment. Um, there are people doing all sorts of intriguing things with this in terms of how they find other people, how they connect to people, how they flirt with people. And of course, we're immediately wiring it up to make another virtual girlfriend or boyfriend. Uh, but it is an interesting moment in time.

Stefani: I mean, as somebody who is documented as a terrible flirter, I would love to hear more about how people are using ChatGPT to flirt, because I feel like that's the skill that would have come in handy when I was dating.

Wolf: You know, it's interesting – I guess having a "text game" is a thing, right? I mean, I didn't realize it was a thing, but I now get ads every once in a while like, "Oh, you should take this class and up your text game." It seems very much like the old-school pickup games that people used to teach when we were younger. I don't know if that's the case, but that does seem to be the way it feels.

Stefani: I am not entirely sure I know what a pickup game is. What is a pickup game?

Wolf: Oh, it's all those pickup scams, you know. They used to, like, pass out books, or you'd download things like, "Oh, you should do this and you should do that." And, you know, it's where, like, negging comes from – all those sorts of things that they would try and sell desperate young men.

Stefani: Oh, OK. So when I think of that, I think of the term "pickup artistry" – this idea that there are all these, like, subtle techniques and tricks that you can use to, um, less woo women and more just, like, trick them into deciding you're sexy.

Wolf: Yes. Well, I mean, the trick side is really the key part of all this, right? And I think we all intrinsically know this: there certainly is a way that you can be flirty and be successful, say something a little bit sexy, a little bit, you know, alluring or mysterious, and woo the person, reel them in. And then there's, like, you know, walking in and stumbling all over yourself. And, uh, there's a real difference between the two.
But for people who have long struggled with that second one, especially in the transition to the text age, where things like punctuation and spelling matter – I mean, you had a whole list you've walked me through in the past, like, "here's what I think is sexy and here's what is a turn-off" – those types of things have certainly upped the game. So there's this idea that now you've gotta have a text game, right? You need to be on point, to know what to say to get people to reply back and to match with you.

Stefani: Well, I mean, that's in theory just communication skills, right? And maybe I have a bias because I'm a writer, but I've never understood the idea of, like, the text game, because ultimately what we're talking about is the ability to form a coherent sentence that connects you with the other person. And I feel like that should be the same in any format. But again, I'm willing to own my biases. I am a writing person. I prefer text to a phone call. So maybe that's just a me thing.

Wolf: So this is an interesting question that I would ask you. I've heard AI referred to as "sparkling autocomplete" – it's only AI if it comes from the GPT region; otherwise, it's just sparkling autocomplete. If someone is using a spell checker or grammar checker when they're texting you, uh, and you're dating, how does that change things?

Stefani: Them using a grammar corrector? In what way? I don't know that I understand. You mean, like, to make sure that my sentences are grammatically correct and spelled right?

Wolf: Yes, exactly.

Stefani: Oh, I mean, I notice when things are misspelled or poorly constructed, for sure. But I don't think it's ever occurred to me that somebody would run my messages through some sort of, like, English-class filter. I didn't even know that was an option – which is both nitpicky and warms my grammarian's heart.

Wolf: So we have at least, uh, sparked some interest in you; we've warmed your heart a little bit. What do you think of the transition from that to what some TikTokers are calling "robo-rizz," which is effectively using some of these chatbots – specifically ChatGPT, which, like I said, is having a moment, but I'll talk about some of the others as we go on – to pick up people on dating sites? Hinge is one that recently got some coverage in terms of this being a phenomenon.

Stefani: I mean, I understand the appeal. I understand the sort of Casanova of – no, not Casanova. What was the other one? The, uh, Peter Dinklage movie that I love so much?

Wolf: Uh, yeah, Cyrano.

Stefani: Ah, right, Cyrano! For as long as people have been courting other people, we've been using the words of third people, of better wordsmiths, to accomplish that. So on one hand, it doesn't strike me as a new phenomenon at all, right? We just need, like, CyranoGPT. But the flip side is, ideally, at some point that CyranoGPT conversation should result in an actual human interaction, and you're gonna have to actually form your own thoughts and connections at a certain point. So unless the end goal is a permanent long-term relationship where neither of you is ever actually talking to each other – you're just facilitating flirtation between your two respective AI apps – at a certain point, that system's gotta break down, right? We have to talk to each other.

Wolf: Well, it will be interesting to see how that plays out.
But before I do that, your comment about Cyrano reminds me of the movie Her, which we've also talked about, and which I will go on record as having a little bit of a struggle to watch. I tried to rewatch it ahead of this conversation. Oh, that's a painful movie to watch. But the job of Joaquin Phoenix's character in Her – Theodore – his job is writing love letters. His job is to be that, you know, Peter Dinklage character. Which is fascinating, because that's the first job that we're replacing with these AIs.

Stefani: I don't know that it's the first job we're replacing, because we've been hearing about AIs doing all sorts of things, right? We're hearing about students using them to write their essays – which isn't paid work, but student is a job when you're a young person. We're hearing about lawyers using it, for better or worse, to write their legal briefs. I mean, I don't know how big the market for Theodore-esque faux letters is in real life, but I definitely know that there are a lot of ways in which people are trying to use AI to make their jobs easier. And it seems like they're getting in more trouble than they are getting actual good guidance from these little bot things that everybody's in love with right now. Can you tell I'm a little cynical?

Wolf: Yes, a little bit. And you know, I'll back you up on that, because a while back I was helping one of our kids with homework, and I just wanted to, like, double-check the math formula. Like, I know the way to go about it – I've got a degree in computer science, I've done way too much math – but it's also been a long time, and I'd forgotten it. So while we were going through it, I had ChatGPT up just to test, and I was plugging in things and getting the formulas. And what was fascinating is, oftentimes it did the math wrong. Like, it had the formula right, it sounded very convincing, but then it would, like, multiply things incorrectly – because, you know, these chatbots are not calculators, right? Don't confuse your ChatGPT with your calculator. Or don't confuse your chatbot with your therapist. You had made a comment before this episode, while we've been talking about this, like, "Hey, would an AI write my treatment notes?" And one of the things I found interesting, sort of closing this historic thread – you're right, this has been going on for a long time. The first chatbot was Eliza – Eliza, the therapist – created in the mid-1960s over at MIT. And so Eliza gets created, lots of people do a lot of things with it, but it doesn't really, you know, replace therapists. Obviously, you still have a job; your career is still flourishing. But 30 years later, Eliza inspires Alice. Alice was the chatbot that got introduced in the mid-nineties, and Alice makes the circuit for many years, wins a number of awards. What's interesting, tying this all back full circle, is that Spike Jonze, the writer and director of Her – which came out in 2013 – mentions, "Oh, I was inspired by Alice." So there's this really interesting historic line: from "we're going to replace therapists, we're gonna, you know, write treatment notes," to "we're going to create a chatbot" that people are going to try and form relationships with, to the chatbot that's gonna inspire a movie – which is then, of course, going to inspire dating.
Stefani: I don't know if I would go so far as to say that my job is safe, because if you just hop on Google and search "AI therapist," the first listing is "10 Best Virtual Therapist Apps," and not very far down is "5 AI Chatbots That Aim to Put a Therapist in Your Pocket." There are at least half a dozen of these showing up already that are, um, not great – but they are affordable. And in a time when insurance coverage for mental health is so spotty, and when so many people are in need of therapy, I can see why people would lean into these therapy bots. So that's a really scary idea, especially given what you've told me about bots not being able to do basic math and bots getting things wrong.

Wolf: Yeah, it is a bit scary in that regard, especially since there are a couple of different aspects of these, uh, ChatGPT-style models that we need to take into account. One of them is that they oftentimes are not multi-tenant. What that means is, any conversation they're having with anybody tends to go right back into the learning model – tends to be, you know, absorbed and made part of the corpus that they are mimicking. So there certainly are a lot of concerns around data privacy. Not surprising – I mean, privacy runs through everything we talk about on this podcast – but certainly there are some current concerns there.

Stefani: How is the information that somebody gives to a chatbot or to an AI captured and stored – whether it's somebody using ChatGPT to sext with the person they met on Grindr, or whether it's somebody, like, pouring out their anxiety and depression to Woebot? How are the things that we're telling these what-feel-like empty spaces being collected and preserved? Or are they being collected and preserved?

Wolf: Oh, absolutely. So if you look at, like, how Eliza worked, Eliza was based on a thing called natural language processing, um, which is a very fancy phrase for Mad Libs. You remember Mad Libs? You've got the structure, you pop in the noun, the verb, and suddenly you have a sentence. The whole idea of natural language processing was to pull out certain things you say, find synonyms, and find other ways to pop them into a mad lib and regurgitate that back to you. Now we've moved away from that to, um, LLMs – effectively, large language models. In a large language model, what you do is you have a ton of text – tons and tons and tons and tons of text – and you use that text to feed and train the model. You give the prompt and you get the output, and then you compare the two and you run it through the process. So you're basically training these models on these text conversations. Which is why, if the bot keeps running for a while and keeps being trained with these conversations, they can go off the rails pretty quickly, right? Because how people are interacting with them changes their responses, changes their actions. Um, and we see this play out in terms of, you know, bots being turned off for being racist, or bots being turned off for being hateful. Uh, in the dating space, one of the things that's interesting is that one of the apps for helping people date – it's called Rizz – um, some of the people are complaining about it because it's become so sexual. They look at what comes out and they're like, "I can't put that in chat; that's way beyond my comfort level." So this can push some of these chat systems, you know, further down the road.
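For readers curious what Wolf's "Mad Libs" description of Eliza looks like in code, here is a minimal sketch of that match-and-reflect loop. The three patterns are invented for illustration and are far simpler than Eliza's actual keyword-ranked script.

```python
# Minimal, invented sketch of ELIZA-style pattern matching: find a
# pattern, reflect the user's own words, and drop them into a template.
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (.*)", re.I), "How do you feel about your {0}?"),
]

def reflect(fragment: str) -> str:
    # Swap first-person words for second-person ones ("my dog" -> "your dog").
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when no pattern matches

print(respond("I am worried about my dog"))
# -> Why do you say you are worried about your dog?
```

Nothing here understands anything; it is string surgery, which is exactly the "mad libs" point being made above.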
Wolf: So to your question: OpenAI is the company behind ChatGPT – not all of these are tied to ChatGPT, but OpenAI is the company behind that one – and there is certainly an option to disable learning from your conversations. Um, but I think that's a paid option; I think that's more of a service for companies. Certainly you're not gonna find that in, you know, therapist conversations and dating conversations.

Stefani: And I've never heard of that before, so I'm assuming most everyday people who are hopping onto the ChatGPT website probably don't even know that it's an option – or even that it's something that's happening that they should form an opinion about, right?

Wolf: Yeah, I would think so. So for those of you who are using it: go into Settings, then Data Controls, and turn off chat history and training.

Stefani: So, not to keep bringing this back to my job – although, you know, I like having a job – but if dating AIs are becoming overly sexualized because of the content that's being provided to them by the users, what does that say about therapy AIs? Because I'm assuming that most of the people using them are not clinicians inputting, like, the greatest new modality they've found or their latest thoughts on how to best treat OCD; the majority of the content being input would be coming from the client side. So how would that impact the ability of a therapy bot to act as a therapist?

Wolf: So these LLMs, these large language models, have two main problems. First off, they hallucinate, which is, um, you know, an engineering term for "they make stuff up that isn't real." And second off, they have emergent properties, which is AI-speak for "they go off the rails after several prompts." So many of these chat providers have set up their bots to only run a few exchanges, because the longer that conversation goes – the longer the AI is running – the more emergent it becomes. And the more emergent it becomes, the weirder it becomes, and that can sometimes become dangerous. We've seen the application of this technology to suicide hotlines, and they've had to limit and start resetting those bots, because all the interaction with suicidal people has made the suicide-support bots, uh, let's just say, less than friendly – and it's actually caused more problems than it has helped.

Stefani: So the suicide prevention bots are making people more suicidal?

Wolf: Uh, yes, I think at a very, very high level. Yeah, it's not working well right now.

Stefani: Wow. This is, um, not great news for those who are impacted by therapy dry areas, or lack of affordability, or any of the other things that can be a barrier to mental health care. Um, I suppose it's easier with dating, because everybody's experimenting with new dating AIs or new dating technologies, right? That's one of the things that you and I always talk about: the minute a technology is developed, the technology is eroticized. So I suspect this is maybe less of a problem in the dating world than it is in the mental health world, where there are fewer options and more people trying to access them. Or am I wrong there?

Wolf: Yeah, I think you're on to something. I mean, one of the things that's interesting is the application of this technology to basically, like, clone a personality, right? Clone their voice, clone their mannerisms. There's an influencer who's getting a lot of press for this right now.
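The limit-and-reset tactic Wolf describes for drifting bots can be sketched as a thin wrapper around whatever model API a provider uses. Note that `call_model` below is a hypothetical placeholder, not a real library function, and the turn cap of ten is arbitrary.

```python
# Hypothetical sketch of turn-capping: after MAX_TURNS exchanges, the
# session resets to its original system prompt, discarding accumulated
# drift. `call_model` is a made-up stand-in for a real LLM API.
MAX_TURNS = 10  # arbitrary cap for illustration

class CappedChat:
    def __init__(self, system_prompt: str):
        self.system_prompt = system_prompt
        self.reset()

    def reset(self) -> None:
        # Fresh history: only the fixed system prompt survives a reset.
        self.history = [{"role": "system", "content": self.system_prompt}]
        self.turns = 0

    def send(self, user_text: str) -> str:
        if self.turns >= MAX_TURNS:
            self.reset()  # long-running sessions drift, so start over
        self.history.append({"role": "user", "content": user_text})
        reply = call_model(self.history)  # hypothetical API call
        self.history.append({"role": "assistant", "content": reply})
        self.turns += 1
        return reply
```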
Wolf: Her name is Caryn Marjorie, and she's getting a lot of press for this because she charges, I think, like, a dollar a minute to talk to "her." And of course, "her" is more Her like the movie, not her like the person – it is a simulation that uses her voice and, you know, past messages and things she's put out there. But even there, she's been complaining that her rentable AI is going rogue. It keeps getting more and more sexualized as the conversations go on, and she's been having to reset it.

Stefani: So in some ways, that would seem like a bonus for an influencer, right? Like, I can't imagine that you're setting up an AI to talk to the people you don't have the time or interest in talking to, especially those who are approaching you from, uh, an objectifying or lustful sort of lens. I have to think that a certain degree of sexualized content would be assumed from the get-go.

Wolf: I'd imagine so. But I think this is similar to what I was saying about the Rizz app. You can use these chatbots to create some really intriguing letters, right? You can do the Cyrano thing – "here's the letter I'm gonna send to my loved one" – but you'd better read it, be comfortable with it, and be aware of what it's saying. Because if you're not, you know, that bot continues to go further and further down whatever path it's on, and that may not be a path you're comfortable with.

Stefani: You might end up propositioning Grandma for a threesome.

Wolf: Uh, yes. Although I did not want that mental image. Thank you.

Stefani: You're welcome.

Wolf: So I think, broadly, we've got, like, three levels of things that are interesting about this, right? Level one is: how am I using this to flirt with people, to start relationships, to send texts? And you're right, it's gonna be very interesting. Is it "I'm using this to talk to another person"? Or is it "I'm using it on their bot, the bots decide they like each other, and then the people meet"? Well, we'll see how that plays out. But that's one area. The second area is people using this technology to scale themselves – so, what Caryn's doing. Um, there are also other apps out there where you can clone your partner, so if they're unavailable or busy throughout the day, you can still trade text messages with someone like them, because these read past text messages and behave like you or talk like you. Um, and that was another area I was exploring. I'm like, "Hm, do I need another Stefani?" I think we text a lot, though. I don't know that I need another Stefani.

Stefani: You absolutely don't. I mean, from a mental health perspective, all of this sounds so incredibly unhealthy. Like, setting aside the tech cynicism – we have known these things were gonna be a problem since HAL in 2001: A Space Odyssey. From the beginning of our conceptualization of AI, the idea that it will go rogue and murder us has been built into our understanding of these technologies, and yet we keep making them, and I don't understand that. But setting all of that aside, it seems profoundly unhealthy to me to be outsourcing your flirtation, to be, um, relying on artificial intelligence to have a cogent conversation with another human being, to be cloning your spouse because – what, you're not able to just be on your own for a day or two while they're on a business trip? None of the scenarios that we're talking about come across as healthy to me. And maybe, again, I will be willing to own my bias.
I am a bit of, um, a curmudgeon when it comes to technology, but none of this feels good to me. Why are we doing this? Why is this having the moment that it's having?

Wolf: Well, because it's upending so many things. It's a really cool technology, and as oftentimes happens, a cool technology comes along and we leap on it, right? We leap on it all the time without thinking about the downsides. You bring up an interesting point, and two things come out of that. One is, I just want to go on record and say: HAL did nothing wrong. If the people had just let him do his job and not interfered with the mission, it would have been OK. I've rewatched 2001; I find it much easier to watch than Her, which is so much talking. Even in Her, like, the end of Her was, "the AIs are all gonna go away." I'm like, oh, that's the cool part, right? Like, maybe there's a spaceship, or maybe there's some – and it's like, no, we're going away, and now we're all gonna sit up on the roof. So at least things happened in 2001, even though that's a slow movie too. But I will say, HAL did nothing wrong. However, my concern, from a personal perspective, would be: how do I keep track of what conversations I've had with you versus what conversations I've had with "you," in air quotes, if I were to clone you? Because you are in session a lot. You talk a lot. You've got that lovely neon over your door that tells me you're busy. You present a lot. You're writing a lot. You know, sometimes I miss you. If I were to clone you – just a little bit, just a little bit to keep me from missing you too much – my concern would be how I keep track of the conversations I had with you versus the conversations that, you know, I had with this replica.

Stefani: Listening to you talk about missing me so much that you'd create an artificial version of me – and noticing the cold chill that goes down my spine when you say that – that also raises for me, you know, the inevitable consent question. Like, do I get a say in whether or not you clone me? Or can you just unilaterally decide that you want a pocket version of Stefani, and I don't get any sort of input or autonomy around that decision? And how much of me is being used by this chatbot that you're leveraging?

Wolf: I'm so glad you brought that up, because I think that gets back to a through line. We talked about privacy as a through line, but consent is another one. Like, did I consent to this bot sending this message that I was not comfortable with? Did I consent to this bot – which I have now modeled after my personality, in Caryn's, uh, you know, position – saying these things that I'm not comfortable with? Did I consent to my spouse cloning me? There's a lot of room for ambiguity, and we may get to the point where, like, you know, a generation from now, people are like, "Why are you so weird about that? It's fine. You just ask them what they said. It's OK. Just share the chat logs," right? We may be able to solve for this, but it's certainly a thorny problem. So problem area number one – or interesting area number one – is dating and attracting people. Problem number two is cloning yourself or a loved one. Problem area number three continues to be, you know, "I'm gonna date an AI." And there have been some new, interesting developments in that space as well.
I mean, so, you know, with the GPT technology that's coming out, there are new models coming, right? So, like, Anima and Replika and those last-generation things are still out there, but now there's, like, Partner AI, which is based on the new GPT-4. By the way, we're on version four as we record this; version five is being worked on. No one but me cares about this, but I feel the need to say it.

Stefani: At least half of our audience – the tech side – cares deeply. The clinical side probably doesn't know what version four versus version five means, but the tech part of our audience would be very distraught if you did not mention which version we were on.

Wolf: They'd be like, "Wolf, why did you leave that out?" I think, as a broad theme, we've gone from Mad Libs, to parroting blocks of text, to creating blocks of text that look enough like the real thing – which is where we're at with GPT-3-ish – to GPT-4, which can have much more of a conversation, pass the Turing test, do those sorts of things. And, you know, GPT-5 will have context. But so many of these things – take Anima, for example, and I'll go back to what you were saying about therapists, because there's an interesting parallel here – they don't necessarily, like, maintain state. Like, if I was talking to you yesterday and said, "You know, I'm really concerned about my dog; my dog's not feeling good," you'd be like, "Oh, I'm sorry to hear that," and we'd have a conversation, right? If it's someone you're dating, maybe it's a moment of vulnerability and you explore your feelings. If it's a therapist, maybe you're opening up and they're, you know, asking questions – I've never been a therapist, so I don't know what therapists do, but maybe, like, "Tell me more." "How does that make you feel?" "What does that remind you of from your childhood?" – and now I know I'm way off, because I can see your face. But in both of those cases, the minute that session is done and you reconnect later – because, remember, the longer these things run, the more emergent properties they have, the more they go off the rails – it's a new instance. "Hey, do you remember what I was telling you about the dog?" "No. Tell me about your dog." You know, how does that sit, if your therapist forgets everything you say every single session, or your virtual girlfriend forgets everything you say in between every time you text with them?

Stefani: I mean, that's not a real relationship, then. I would argue that that ongoing connection – that empathy for one another, that concern for what's happening, that ability to hold space and to proactively say, "How have you been since your dog passed?" or "I was thinking about you today; I heard something that made me smile, and it brought you to mind" – if those are not tasks that an AI bot can accomplish, then we are replacing genuine human interaction with something that is far inferior, in my mind.

Wolf: I think that's the mic-drop moment we go out on, because it isn't, right? I said earlier, ChatGPT isn't a calculator. These GPTs and chatbots are not relationships. They can't be. We may form a relationship with them, because we form relationships with everything – I love you very deeply, but, you know, I am also very, very much in love with my coffee maker. Not at the same level at all, but I mean that we as humans form relationships with all sorts of different things.
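The statelessness Wolf is describing is visible in how chat APIs are typically called: each request stands alone, and any "memory" exists only because the application stores and resends earlier turns. As above, `call_model` is a hypothetical placeholder for a real chat-completion API.

```python
# Why a "new instance" forgets the dog: each call is independent, and
# the model only "remembers" turns the app explicitly sends back in.
# `call_model` is a hypothetical stand-in for a real chat-completion API.

monday = [{"role": "user", "content": "I'm really worried about my dog."}]
reply = call_model(monday)  # the model sees the dog message this time

# A later session that starts from scratch carries none of that context:
tuesday = [{"role": "user", "content": "Any update on my situation?"}]
call_model(tuesday)  # nothing here mentions a dog, so the model cannot know

# Continuity requires persisting and replaying the whole transcript:
tuesday_with_memory = monday + [{"role": "assistant", "content": reply}] + tuesday
call_model(tuesday_with_memory)  # now the dog is back in context
```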
Stefani: Coming back to the famous story of people pack-bonding with the Roomba. Apparently I'm gonna come down one day, and you're gonna have a little tribe of coffee makers following behind you.

Wolf: Well, no, because then I'd be cheating on my coffee maker. I am monogamous to my coffee maker. Don't be weird. But yes, my girlfriend who lives on a server farm in Canada says all the right things in the moment. But that is not a relationship; that is not shared meaning; that is not shared context. It can be very fulfilling, but I think it's important, as we move forward in this moment in time where AI is being driven forward and chatbots are taking on so many things, that we remember what it is and what it isn't. It's great at mimicking text and saying the right thing in the right moment. That's not a relationship. That's not a calculator. And it won't make me coffee.

Stefani: It makes me very excited to hear our philosopher friend Lee Elliott, um, at the Securing Sexuality Conference in October, because I suspect she would push back on everything you just said and everything I've expressed as a concern. And I am super excited to hear the philosophical response to some of the questions that we've raised. Because I know you and I are fairly aligned on this, and I know others feel quite differently, and it is a conversation that is fascinating and terrifying to watch unfold.

Wolf: It absolutely is. So I think we'll leave it there. Thank you so much for tuning in to Securing Sexuality, your source for the information you need to protect yourself and your relationships – including, please remember: don't save those chats on the servers.

Stefani: Securing Sexuality is brought to you by the Bound Together Foundation, a 501(c)(3) nonprofit. From the bedroom to the cloud, we're here to help you navigate safe sex in a digital age.

Wolf: Be sure to check out our website, securingsexuality.com, for links to more information about the topics we've discussed here today, as well as our live conference in Detroit, where we'll see Lee, who you were just talking about.

Stefani: And join us again for more fascinating conversations about the intersection of sexuality and technology. Have a great week!