The Indiana Daily Student

OPINION: AI could be detrimental to human relationships

In a primarily technology-driven world, I sometimes wonder: what will get in the way of face-to-face interaction next? TV was one of the first technologies to start the trend. Yes, TV can educate and inform an audience, but it can also distract people from interacting with those around them. It pulls people away from authentic interactions and into alternate universes full of unrealistic expectations and events. 

Cell phones are another example. From texting to scrolling through social media, touchscreen phones began to pull people away from the outside world and into digital territory. These devices have trained us to rely on social media apps like Instagram and Snapchat to meet our communication needs. No need to talk to someone in person about their vacation – you already saw their post on social media.  

We’ve created new, time-consuming ways to capture our attention and pull us away from what really matters: face-to-face human interaction. Phones and social media have done much to accelerate this trend. I constantly see people walking down the street with their heads cast down, eyes trained on their phones. Screen time averages rise as we fall deeper and deeper into the phone-verse. 

It’s possible that AI is next in the line of succession, making its way to the forefront of our lives. I will admit, AI has its perks. ChatGPT, one of OpenAI’s most recent inventions, can interact with its users conversationally. It can solve math problems, produce computer code and respond to a prompt with its own original writing. 

While this may promote efficiency and inspire new teaching methods, it also poses a risk to human interaction. If a student constantly relies on ChatGPT for clarification and help with schoolwork, it can reduce the need and the desire to communicate with teachers face-to-face.  

[Related: IU Kelley professors study possibilities of AI technology]

ChatGPT is not the only form of AI that presents such a risk; digital humans also fall into this category. Digital humans are similar to chatbots – they mimic real-life human behavior and appear as lifelike people on a computer screen. They can provide customer service as well as tutoring, and you can personalize your digital human to look and sound however you like. 

Students may lean on these AI renditions for quick, easy answers whenever they need them rather than wait until the next day for a full explanation from their teacher. Having a digital human available 24/7 to answer questions is admittedly more convenient for a student, but it may inadvertently encourage them to avoid asking questions in the classroom. 

Moreover, emotion AI is a type of AI that can be incorporated into a digital human’s programming to help the bot recognize and mimic human emotions. This component poses an even greater risk to human interaction: you have an AI chatbot that looks and acts just like a human but does not judge. I think people will naturally be even more inclined to interact with something that is always available to listen and never offers negative feedback that might make them feel bad about themselves. 

It's possible that people may avoid interacting with their peers for fear of a harsh reaction or disagreement. Confessing something face-to-face can be stressful; people worry about burdening someone or embarrassing themselves. Yet if a digital human will sit, listen and calmly talk with them, that only reinforces their sense that the AI is something they can trust. 

This pattern has already been observed with Alexa, the voice assistant built into Amazon’s smart speakers. Alexa has been programmed to sound and react like a human. These smart speakers have also unintentionally stepped in as therapists: they are conversational and, like digital humans, do not judge. They tend to take a more objective stance and simply comfort the person they’re “talking” to, without any additional, potentially harmful comments.  

In her article “Alexa, Should We Trust You?,” Judith Shulevitz writes that AI machines like Alexa “give us a way to reveal shameful feelings without feeling shame.” You know the AI has no ulterior motives when you talk to it about your feelings because it’s just a machine. You don’t have to try to read its expression, because it doesn’t have one.  

Researchers are in constant pursuit of perfecting AI machines so they evoke the right emotions and strike the right tone, sculpting them to be more human-like. While their intentions may not go beyond providing a convenient, easy outlet for people to use, there are other inevitable consequences.  

[Related: OPINION: Staying in touch – the social media cocoon]

If society continues to lean more and more on AI as a companion, human interaction may come to feel less meaningful over the years. Why deal with a subjective, sometimes judgmental human when you can have an easy, less stressful conversation with an AI chatbot? At some point, people may not want to go through the effort and stress of seeking help from real humans when AI is more reachable and feels foolproof.  

Ultimately, I believe it’s important that we avoid falling into the AI abyss. While TV and cellphones have already made their mark on society, it’s not too late to resist AI’s pull. Yes, we’re allowed to reap the benefits and services AI provides. But we must be aware of the risk it poses to human relationships and make an effort to recognize the value of human interaction and preserve it.  

Isabella Vesperini (she/her) is a sophomore majoring in journalism and minoring in Italian.  
