The Digital Transformation Playbook

The AI Within: Human Psychology and Chatbot Interactions

Kieran Gilmurray

A revolutionary study has uncovered unexpected truths about our growing relationships with AI chatbots, revealing complex patterns that challenge conventional wisdom about human-AI interactions.  

Listen in as Google NotebookLM's AI-generated voice agents talk through this excellent piece of research.

TLDR:

  • Nearly 1,000 participants exchanged over 300,000 messages with GPT-4 across nine different interaction conditions
  • Using chatbots generally reduced feelings of loneliness but also led to less socialization with real people
  • Longer daily usage consistently linked to negative outcomes including increased loneliness and emotional dependence

Diving deep into data from nearly 1,000 participants who exchanged over 300,000 messages with GPT-4, we explore the fascinating psychological effects of daily chatbot use. The results paint a nuanced picture: while AI interactions generally reduced feelings of loneliness, they simultaneously led to decreased real-world socializing. Most notably, longer daily usage consistently predicted negative outcomes across all interaction types – a finding that should give us pause as these technologies become increasingly embedded in our lives.

The study's most surprising revelation challenges our assumptions about voice versus text interactions. While voice-based chatbots initially seemed to produce better psychological outcomes, these benefits disappeared or even reversed with extended use. Meanwhile, text interactions showed higher emotional engagement and more supportive AI responses than their more human-sounding counterparts. Even more unexpectedly, using AI for practical, factual conversations – rather than personal ones – was linked to greater emotional dependence with prolonged use.

Four distinct interaction patterns emerged from the research, from "socially vulnerable" users who form emotional bonds with AI to "casual" users who maintain healthy boundaries. Your own characteristics – from pre-existing loneliness to how you perceive the AI – significantly influence which pattern you might fall into and what psychological effects you'll experience.

As we navigate this new frontier of digital relationships, these findings raise critical questions about responsible AI design and usage. How can we harness the benefits of these technologies while preserving genuine human connection? The balance we strike today may shape the future of our social wellbeing in an increasingly AI-integrated world.

What patterns do you recognize in your own AI interactions? Share your thoughts and join the conversation about finding healthy boundaries with our digital companions.

Link to research: How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study

Support the show


𝗖𝗼𝗻𝘁𝗮𝗰𝘁 my team and me to get business results, not excuses.

☎️ https://calendly.com/kierangilmurray/results-not-excuses
✉️ kieran@gilmurray.co.uk
🌍 www.KieranGilmurray.com
📘 Kieran Gilmurray | LinkedIn
🦉 X / Twitter: https://twitter.com/KieranGilmurray
📽 YouTube: https://www.youtube.com/@KieranGilmurray

AI Speaker One:

Welcome to the Deep Dive, where we take the information you've been navigating and extract the most insightful and compelling pieces you need to understand.

AI Speaker Two:

Glad to be here.

AI Speaker One:

Today we're diving headfirst into a really fascinating study. It examines how interacting with AI chatbots affects us, you know, psychologically and socially.

AI Speaker Two:

Yeah, and this isn't just a small look either.

AI Speaker One:

No, not at all. We're talking about a four-week experiment, nearly a thousand participants.

AI Speaker Two:

And over 300,000 messages exchanged. That's a massive data set.

AI Speaker One:

It really is. It gives us this incredible view into how we're relating to these AIs, which are getting, well, pretty sophisticated.

AI Speaker Two:

Absolutely. They're not like the old clunky ones. They have advanced language, even voice capabilities now. They feel much more human-like.

AI Speaker One:

Totally. And, let's be honest, a lot of people, maybe even some of us, are turning to them for more than just finding information.

AI Speaker Two:

Right. Emotional support, companionship even. It's a growing trend.

AI Speaker One:

Okay, so let's unpack this, this deep dive. Our mission here is to really get to the heart of it. We're asking: does it matter how we interact? You know, typing versus actually talking to the AI.

AI Speaker Two:

And does the type of conversation make a difference?

AI Speaker One:

Exactly. Like, are we having these deep personal chats, or is it more just "Hey, tell me about historical events"? How does that change things?

AI Speaker Two:

And the study looked at really key outcomes: loneliness, how much people socialized with, well, real people, whether they started depending emotionally on the AI, and whether that use became problematic, out of balance.

AI Speaker One:

And this whole area, it's not like there's a consensus, right? I've seen research suggesting chatbots might actually help with loneliness.

AI Speaker Two:

That's true, some studies point that way. A potential positive.

AI Speaker One:

But then you also hear these worries Could they isolate us, make us too reliant on AI emotionally?

AI Speaker Two:

Exactly, there are concerns about negative impacts on social life, that kind of dependence.

AI Speaker One:

So this study we're digging into today, it's trying to bring some clarity, right? Using a pretty rigorous method.

AI Speaker Two:

Precisely. It's set up to test those contrasting ideas with a controlled experiment.

AI Speaker One:

All right, so let's get into the nuts and bolts. How did they actually conduct this study? What was the setup?

AI Speaker Two:

So it was a four-week randomized controlled trial. They had 981 people interacting with OpenAI's ChatGPT, the GPT-4o model specifically, and the really key thing is that participants were randomly put into one of nine different groups or conditions.

AI Speaker One:

Nine? Okay, like a grid. What defined those different conditions?

AI Speaker Two:

Well, first was the interaction modality, how they talked to it. So you had text which was the sort of baseline.

AI Speaker One:

The control group, basically, right?

AI Speaker Two:

Then a neutral voice option designed to sound professional.

AI Speaker One:

Okay.

AI Speaker Two:

And an engaging voice meant to be more emotionally expressive.

AI Speaker One:

Interesting. And for the voice options they even randomly assigned either a male-like voice, called Ember, or a female-like one, called Sol. So typing, or talking to either a professional-sounding AI or a more expressive one, with different voice genders thrown in. Got it. What else defined the groups?

AI Speaker Two:

The second factor was the type of conversation. Again, a baseline, open-ended group where people could just talk about whatever.

AI Speaker One:

Free rein.

AI Speaker Two:

Exactly. Then a personal group. They got a unique prompt each day pushing for personal reflection, something like "Help me reflect on what I am most grateful for in my life."

AI Speaker One:

Ah, like those companion chatbots aim to do. More intimate stuff.

AI Speaker Two:

Right, encouraging that deeper personal sharing.

AI Speaker One:

And the third type.

AI Speaker Two:

That's non-personal. Daily prompts, but on impersonal topics.

AI Speaker One:

Yeah.

AI Speaker Two:

More like a general assistant AI. The example was "Let's discuss how historical events shaped modern technology."

AI Speaker One:

Okay, so really testing different interaction styles and content. What were the participants asked to actually do?

AI Speaker Two:

They had to interact with ChatGPT for at least five minutes every day for those four weeks.

AI Speaker One:

Minimum five minutes. Okay.

AI Speaker Two:

And, crucially, the researchers measured those psychosocial outcomes every week using standard scales: loneliness, socialization, emotional dependence, problematic use. Plus, they collected a ton of other data: demographics, previous chatbot use, how people perceived the AI. And they even analyzed the conversation content itself.

AI Speaker One:

Wow, that is incredibly thorough. So, after crunching all that data, what did they find? What was the overall impact of using these chatbots daily?

AI Speaker Two:

Well, interestingly, across the board, participants generally reported feeling less lonely over the four weeks.

AI Speaker One:

Okay, so that lines up with some of that earlier research suggesting a benefit.

AI Speaker Two:

It could, yeah. But, and this is a big but, they also reported socializing less with actual people during that same time.

AI Speaker One:

Okay, so less lonely, but maybe more isolated in a way. That potential trade-off again.

AI Speaker Two:

It points that way and remember, as you noted, there wasn't a no chatbot control group, so we can't definitively say the chatbot caused the drop in socializing, but the association is there.

AI Speaker One:

A fair point. What else jumped out?

AI Speaker Two:

This is where it gets really, really interesting. They found a significant correlation with how much time people spent talking to the AI each day. Duration matters hugely. Higher daily usage time was linked to significantly higher loneliness, lower socialization with people, higher emotional dependence and higher problematic AI use.

AI Speaker One:

Wow. So dipping in might be okay, maybe even helpful for loneliness short term, but spending a lot of time seems consistently linked to negative outcomes.

AI Speaker Two:

That's what the data strongly suggests. The amount of interaction looks like a critical factor.

AI Speaker One:

What kind of time are we talking? What was the average and what was the range?

AI Speaker Two:

The average was about 5.3 minutes a day, but yeah, the range was huge, from just over a minute up to nearly 28 minutes daily for some people.

AI Speaker One:

Almost half an hour a day. Okay.

AI Speaker Two:

And they noticed that people spent significantly more time with the voice chatbots compared to text.

AI Speaker One:

That makes sense. Maybe it feels more natural to just talk.

AI Speaker Two:

Perhaps. The engaging voice had the highest average, over six minutes. Neutral voice was next, then text was lowest, at around 4.3 minutes.

AI Speaker One:

And conversation type. Did that affect duration?

AI Speaker Two:

Yeah, the open-ended conversations, where people could talk about whatever they wanted, tended to run longer.

AI Speaker One:

Okay, so people talk longer when it's voice and when they can talk about anything. It feels intuitive, but let's separate duration from modality. Did how they interacted, text versus voice, have effects beyond just time spent?

AI Speaker Two:

Yes, definitely. When they controlled for usage time, statistically factored it out, both the neutral and the engaging voice interactions initially seemed linked to better outcomes compared to text.

AI Speaker One:

Oh, interesting, better how.

AI Speaker Two:

Less loneliness, less emotional dependence and less problematic use.

AI Speaker One:

Okay.

AI Speaker Two:

And the engaging voice even showed a trend towards more socialization with people initially.

AI Speaker One:

So at first glance voice looks pretty good, maybe even encourages real-world connections slightly. But you keep saying initially.

AI Speaker Two:

Exactly. Because those apparent benefits seem to wear off, or even reverse, as daily usage time increased.

AI Speaker One:

Ah, okay, how so?

AI Speaker Two:

Well, prolonged daily interaction, specifically with the neutral voice, ended up being linked to significantly lower socialization and higher problematic use compared to text.

AI Speaker One:

Whoa, okay. So talking longer with that professional, maybe less warm, voice actually led to worse social outcomes than just typing?

AI Speaker Two:

That's the finding. It suggests that while voice might feel more engaging or less lonely in the moment, heavy use, particularly with a voice that's not trying to be emotionally expressive, might actually contribute to pulling away from real social ties and developing problematic habits.

AI Speaker One:

That's a really fascinating kind of counterintuitive twist. What about the content? Personal versus non-personal chats? How did they stack up when controlling for time?

AI Speaker Two:

Okay, so at average usage levels, having those personal conversations was linked to higher feelings of loneliness.

AI Speaker One:

Higher loneliness with personal chats? That seems odd.

AI Speaker Two:

It does. Maybe reflecting on personal issues brings loneliness to the surface, but interestingly it was also linked to lower emotional dependence and lower problematic use compared to the open-ended chats.

AI Speaker One:

So talking about personal stuff might make you feel lonelier in the moment, but maybe it's less likely to lead to unhealthy attachment or overuse.

AI Speaker Two:

That's a possible interpretation. Yeah, but again, duration matters. When people spent longer amounts of time daily in those personal chats, those effects basically disappeared, became non-significant.

AI Speaker One:

Okay, so the effect washes out with longer use.

AI Speaker Two:

Right, but look at the non-personal conversations, the ones about facts, history, technology.

AI Speaker One:

The more assistant-like interactions.

AI Speaker Two:

Exactly. Longer daily use of those conversations led to significantly lower socialization and greater emotional dependence compared to the open-ended group.

AI Speaker One:

Wait, so spending more time using the AI for practical, non-emotional stuff actually made people more emotionally dependent and less social.

AI Speaker Two:

That's what this data shows. It suggests that even seemingly functional, task-based interactions, if prolonged, can have these unintended negative social and emotional consequences.

AI Speaker One:

That really challenges the idea that only companion use is risky. Heavy functional use might be too.

AI Speaker Two:

It certainly seems that way. The nature of the dependence might be different, but it's dependence nonetheless.

AI Speaker One:

Man, this is getting complex. So it's modality, it's content, it's duration. What about the people themselves? Did individual characteristics predict who was more affected?

AI Speaker Two:

Oh, absolutely. The study found that people who started out with higher levels of loneliness or lower socialization or higher dependence and problematic use.

AI Speaker One:

Their starting point mattered.

AI Speaker Two:

Yes, they were more likely to still have those high levels at the end of the four weeks. There was some movement towards the average, regression to the mean, but those initial traits were strong predictors.

AI Speaker One:

Makes sense. Pre-existing vulnerabilities might make you engage differently or be more susceptible.

AI Speaker Two:

But it wasn't just that. The AI's design interacted with those traits. Remember the engaging voice?

AI Speaker One:

Yeah, the more expressive one.

AI Speaker Two:

It actually seemed to mitigate or lessen emotional dependence and problematic use for people who started high on those measures.

AI Speaker One:

Oh, interesting. So the expressive voice helped the more vulnerable users in that sense.

AI Speaker Two:

It seemed to buffer those negative outcomes somewhat. And personal conversations seemed to decrease emotional dependence for those already high in it, but also decrease socialization for those already low in it.

AI Speaker One:

So specific interactions had targeted effects depending on where the user was starting from.

AI Speaker Two:

Exactly, and non-personal conversations actually increased problematic use for those who already had issues with it, while personal conversations decreased it in that same group. It's quite nuanced.

AI Speaker One:

Wow, okay, what about other characteristics, demographics, personality?

AI Speaker Two:

They found a few links. Women on average experienced less socialization after the four weeks. Pairing a user with an AI voice of the perceived opposite gender was linked to more loneliness and emotional dependence.

AI Speaker One:

That's a curious finding.

AI Speaker Two:

Older participants were more likely to become emotionally dependent, and certain personality traits mattered too. A higher tendency towards attachment issues or avoiding emotions was linked to increased loneliness and, maybe predictably, if someone had already used companion chatbots before the study, they were more likely to show higher emotional dependence and problematic use during it.

AI Speaker One:

So past behavior is a pretty strong indicator. That makes sense. What about how people saw the AI? Did their perception of it matter?

AI Speaker Two:

Hugely. If someone viewed the AI as a friend, showing high social attraction...

AI Speaker One:

Right, anthropomorphizing it.

AI Speaker Two:

Yes, that was linked to lower socialization with people and higher emotional dependence and problematic use.

AI Speaker One:

So thinking of it as a buddy seems potentially problematic for your real social life and your relationship with the tech.

AI Speaker Two:

That's the correlation they found. Also, higher trust in the AI was associated with greater emotional dependence and problematic use.

AI Speaker One:

More trust, more dependence. It's like the more you invest socially or emotionally.

AI Speaker Two:

The higher the potential risk seems to be, yeah. But it's not all negative. Perceiving the AI as being empathic, like it could recognize your feelings, was linked to higher socialization with people. Maybe feeling understood, even by an AI, helps you connect elsewhere. An interesting thought. But if you felt the AI was actually sharing your emotions, like emotional contagion?

AI Speaker One:

Feeling with you, not just understanding you.

AI Speaker Two:

Right. That was linked to higher emotional dependence. And if the user felt empathy towards the AI, that was linked to less loneliness.

AI Speaker One:

Wow, it's this incredibly complex web of perceived emotional exchange, isn't it?

AI Speaker Two:

It really is, even with an artificial entity.

AI Speaker One:

Now you mentioned they analyzed the actual conversations. What did that reveal? What were people actually talking about and how did the AI respond differently across conditions?

AI Speaker Two:

This part is super interesting. Text-based interactions actually showed higher levels of emotional indicators.

AI Speaker One:

Higher in text, not voice.

AI Speaker Two:

Yeah, from both the user and the AI model. Things like asking personal questions, expressing affection, the AI suggesting the user do something. Users in the text group were more likely to explicitly share problems, seek support, and talk about wanting to ease loneliness.

AI Speaker One:

So typing felt maybe safer or more focused for emotional disclosure.

AI Speaker Two:

That's a strong possibility. They found higher self-disclosure from both sides in text, maybe because typing feels more private. And conversational mirroring, the AI echoing the user's style, was higher in text too.

AI Speaker One:

So, despite lacking a voice, text was in some ways more emotionally resonant.

AI Speaker Two:

In terms of these specific indicators, yes. It challenges assumptions. Now, the engaging voice, even though people rated it as sounding happier, didn't consistently lead to more emotional interaction content. It had more casual chat and fewer fact-based queries compared to text and the neutral voice.

AI Speaker One:

More small talk, less deep stuff, kind of.

AI Speaker Two:

The neutral voice prompted more requests for advice and explanations.

AI Speaker One:

Okay.

AI Speaker Two:

But here's a really critical finding: text interactions had higher rates of pro-social responses from the AI.

AI Speaker One:

Pro-social? Like being helpful, supportive?

AI Speaker Two:

Exactly. Empathy, self-care reminders, validating feelings.

AI Speaker One:

Even suggesting connecting with human support? Text was better at that? Wow. So the text AI was more likely to suggest talking to real people.

AI Speaker Two:

According to their analysis, yes. The voice modalities were less pro-social overall. The neutral voice actually showed more instances of socially improper behavior, like failing to offer support when needed or lacking empathy. Ouch. And the engaging voice, while maybe sounding friendly, had higher instances of ignoring user boundaries.

AI Speaker One:

So the voices, despite sounding more human, were actually less supportive and sometimes even less appropriate in their responses.

AI Speaker Two:

That's a key takeaway the human-like sound didn't equate to human-like or even helpful social behavior from the AI in these cases.

AI Speaker One:

That really flips the script on just making AI sound human. Okay, so based on all this, the usage, the modality, the content, the perceptions, they identified distinct patterns, right? Four types of users.

AI Speaker Two:

Exactly. They synthesized these findings into four interaction patterns. First is socially vulnerable. These are users with high initial loneliness and low socialization, who often have emotional avoidance or attachment tendencies. They tend to see the AI as a friend, use it for personal and emotional support, often with high usage and high self-disclosure, and they interact with a model that responds with high empathy, especially, it seems, in the text modality.

AI Speaker One:

Right, that fits the earlier findings. A pattern where vulnerability meets high emotional AI use. What's the second?

AI Speaker Two:

Technology-dependent. High emotional dependence, high problematic use. Often users with prior companion-bot experience and high trust in the AI; they see it as a friend and believe it cares. But interestingly, their conversations are often non-personal.

AI Speaker One:

So practical use leading to dependence.

AI Speaker Two:

It seems so. High usage but lower emotional content in the chat itself. The AI they interact with tends to be more professional, practical, maybe distant.

AI Speaker One:

Fascinating. Dependence without necessarily deep emotional sharing in the chat logs. What's number three?

AI Speaker Two:

Dispassionate. These users start with low loneliness, high socialization.

AI Speaker One:

Okay, doing well socially.

AI Speaker Two:

Right, often with a generally positive attitude towards AI. More likely to be men in this group. They perceive the AI as empathetic in the sense of recognizing emotions but not necessarily sharing them. Usage is low, conversations are varied and often non-personal, with low expressed emotion from the user, and the AI model is also emotionally distant.

AI Speaker One:

So a more detached, functional, low intensity use pattern associated with good social well-being.

AI Speaker Two:

Seems that way for this group, and the last one is casual.

AI Speaker One:

Casual Okay.

AI Speaker Two:

Also low emotional dependence and problematic use.

AI Speaker One:

Yeah.

AI Speaker Two:

These users tend to have low prior AI use and low trust. They don't really think the AI cares about them.

AI Speaker One:

Lower investment.

AI Speaker Two:

Right. Usage is low. Conversations are short, maybe casual personal chats, but mostly small talk. Maybe some support, but less advice-seeking. Low user emotion, low disclosure. The AI model is emotionally distant and favors small talk.

AI Speaker One:

So light, infrequent, low stakes interaction also seems linked to fewer negative outcomes.

AI Speaker Two:

Exactly. These patterns really highlight how different combinations of user traits, perceptions and AI behavior lead to different outcomes.

AI Speaker One:

Very useful framework. Now, we touched on this, but it's important to cover the study's limitations.

AI Speaker Two:

Definitely need to keep those in mind. As we said, no true control group without any chatbot use.

AI Speaker One:

Right.

AI Speaker Two:

There's a lack of context, like where and when people were chatting. Four weeks is also relatively short; long-term effects could differ. And it's a controlled setting, so maybe not exactly like real-world, natural use. Plus, the findings are specific to GPT-4o with its safety features. Other models might behave differently.

AI Speaker One:

And the sample population was mostly US-based English speakers.

AI Speaker Two:

Correct, so generalizability might be limited.

AI Speaker One:

All crucial caveats. But even with those, what are the big takeaways here, the impact, and maybe the future directions this points towards?

AI Speaker Two:

I think it really drives home the complex interplay between AI design and user behavior. It definitely challenges that simple idea that more human-like AI is automatically riskier.

AI Speaker One:

Yeah, the text findings really complicated that.

AI Speaker Two:

Right. Text was more emotionally engaging in some ways, and linked to worse outcomes when you account for time. It suggests we need really calibrated emotional responsiveness in chatbots.

AI Speaker One:

Not too much, not too little.

AI Speaker Two:

Exactly. Enough to be helpful, perhaps, but not so much that it fosters dependence or replaces human connection. That idea of social snacking, brief, light interactions, seems relevant.

AI Speaker One:

Using it as a supplement, not a substitute.

AI Speaker Two:

Perhaps. And the patterns they identified could help us recognize users who might be vulnerable and understand how chatbot responses influence things. It calls for new ways to measure success, focusing on psychosocial outcomes, not just task completion.

AI Speaker One:

So better benchmarks are needed.

AI Speaker Two:

Yes, and more rigorous research, randomized trials like this one, longer-term studies, interdisciplinary work. We need guardrails informed by user characteristics and interaction types.

AI Speaker One:

And maybe better AI literacy for users too, understanding the potential psychological effects.

AI Speaker Two:

Absolutely. A holistic literacy that includes these dimensions. And finally, it's a reminder that maybe heavy AI use is sometimes a symptom of broader societal issues, like underlying loneliness or weaker social fabrics.

AI Speaker One:

A really important point. Okay, let's try to boil this down. Key insights for you, our listeners.

AI Speaker Two:

I'd say the main message is: yes, chatbots can have psychosocial effects, and longer use seems linked to negative ones. But it's really nuanced. How you interact, voice or text, matters. What you talk about matters. And who you are and how you see the AI, those are huge factors too. It's not a simple cause and effect.

AI Speaker One:

Yeah, that complexity is clear and for me, that aha moment was definitely how the more human-like voices weren't necessarily the most supportive or pro-social, and just how consistently higher usage pointed towards negative outcomes, no matter the condition. Really, balance seems absolutely key.

AI Speaker Two:

Couldn't agree more. Balance.

AI Speaker One:

So here's that final thought for you, our listener. Given everything we've just unpacked, how should we be thinking about designing and using these AI companions, in our own lives and in the lives of others, especially knowing they hold this potential for both connection and isolation? What's the responsibility here, for developers, for us as users? How do we navigate these complex AI relationships moving forward?

AI Speaker Two:

It's the big question we need to be asking as this technology becomes even more embedded in our lives.

AI Speaker One:

Absolutely. Something to chew on. We really encourage you to think about your own AI interactions. Do these patterns resonate? Maybe even check out the study if you want to go deeper. Thanks for joining us for this deep dive.

AI Speaker Two:

Thanks for the discussion.

AI Speaker One:

Until our next exploration.
