Artificial Intelligence: The Human to Bot Handoff

Artificial intelligence (AI) is one of the most discussed and debated topics today. It is touching every industry – from consumer goods to automotive, finance, and healthcare – and its capabilities only seem to be growing.

Figures like Elon Musk and Stephen Hawking have warned of an “intelligence explosion,” a term referring to “an uncontrollable hyper-leap in the cognitive ability of AI.” The past couple of years have seen a dramatic increase in the accuracy and aptitude of AI technology, and these products have become front and center in our lives.

A key concern among critics of AI is how far those capabilities can go, the fear being that the intelligence of the machine will surpass that of the human, enabling the machine to act autonomously.

Though it is unlikely (at least in the near future) that we will encounter robots of the sci-fi variety, bent on eliminating the human race, AI does pose potential threats that we should be cautious of.

One potential threat is the bias that can be introduced when training AI systems. In medicine, for example, AI is often trained on data collected from clinical studies and randomized controlled trials, which can carry significant biases due to factors like differences in trial periods, locations, and patient populations. The biases present in the data are then perpetuated through the decision-making process of the AI model, further widening gaps of inequality and misrepresentation.

For example, an AI model being trained to recognize symptoms of a disease would draw on existing medical textbooks and studies. However, those texts often describe the typical symptoms seen in younger white males. The bot does not know that women may experience slightly different symptoms, or that a symptom of one ailment could also be associated with a different issue that accompanies age. AI is highly capable, but only within narrow domains: it knows only the data it is trained on and cannot make connections between different but related data sets. A simple simulation makes the point, as sketched below.
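The following is a minimal sketch, not a real diagnostic system: the data is simulated, and the “presentation shift” parameter is invented for illustration (it assumes scikit-learn and NumPy are available). It shows how a classifier trained mostly on one group’s symptom presentation loses accuracy on a group whose presentation differs.

```python
# A minimal sketch with simulated data: a model trained mostly on one
# group's symptom presentation performs worse on an under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, presentation_shift):
    """Simulate a single symptom score; 'presentation_shift' models a group
    whose symptoms present differently from the textbook description."""
    has_disease = rng.integers(0, 2, n)
    symptom = has_disease * (2.0 + presentation_shift) + rng.normal(0, 1, n)
    return symptom.reshape(-1, 1), has_disease

# Training data drawn almost entirely from the well-represented group.
X_a, y_a = make_patients(950, presentation_shift=0.0)   # "textbook" group
X_b, y_b = make_patients(50, presentation_shift=-1.5)   # under-represented
model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on balanced test sets: accuracy drops for the group whose
# presentation the model never really learned.
for name, shift in [("textbook group", 0.0), ("under-represented group", -1.5)]:
    X_t, y_t = make_patients(1000, shift)
    print(name, "accuracy:", round(model.score(X_t, y_t), 3))
```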

This is especially problematic in fields where one cannot act solely on data but must base an approach on interpersonal knowledge and an understanding of how varying factors affect one another.

In the mental health field, non-white students are significantly less likely than white students to seek mental health counseling, even though they tend to carry more stress. The stress non-white students experience also differs from that of their white counterparts: they cope with race- and minority-related anxiety on top of the pressures everyone faces in college. What’s more, most therapists are white, with only 10 percent being Black, 8 percent Asian, and 7 percent Hispanic.

If a bot were trained on data collected from therapists, patients, and current statistics, the data would skew toward white people, identifying their worries and problems as “typical.” The bot would not fully comprehend the feelings that other users were reporting and thus would not be as effective. A basic per-group audit, sketched below, is one way to surface that kind of skew before deployment.
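As a hedged illustration (the record format and group labels here are hypothetical), a per-group audit compares a model’s accuracy across demographic groups rather than trusting a single aggregate score, which can hide poor performance on an under-represented group:

```python
# A minimal sketch (hypothetical record format): aggregate accuracy can
# mask a model that systematically fails an under-represented group.
from collections import defaultdict

def audit_by_group(records):
    """records: iterable of dicts with 'group', 'label', 'prediction' keys.
    Returns per-group accuracy so skew is visible before deployment."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        hits[r["group"]] += int(r["prediction"] == r["label"])
    return {g: hits[g] / totals[g] for g in totals}

results = audit_by_group([
    {"group": "white", "label": 1, "prediction": 1},
    {"group": "white", "label": 0, "prediction": 0},
    {"group": "non-white", "label": 1, "prediction": 0},
    {"group": "non-white", "label": 0, "prediction": 0},
])
print(results)  # e.g. {'white': 1.0, 'non-white': 0.5}
```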

In the case of therapy, autonomously acting AI would be worrisome.  

Humans can understand that each person is unique and that emotions are nuanced; machines cannot. When dealing with sensitive and personal matters, it is human connection and empathy that we crave. We want to be heard and to know we are taken care of – we don’t want to risk being wrongly categorized or mistreated. Humans do not base therapy goals and techniques on hard data but rather on individualized approaches.

That said, AI has been immensely helpful in gathering and analyzing data at a speed no person could match, giving therapists an extra layer of insight that would not otherwise be easily accessible. It has brought innovation and progress to an industry that had largely been left out of the burst of technological advancement.

Still, it should not be used on its own as a replacement for actual therapy. It should instead be used as a tool to supplement the relationship between therapist and client and allow for a deeper, more connected experience. Smart tools should collect data and track patterns, then turn their findings over to a human who can interpret the data in terms of the individual to whom it pertains, as sketched below.
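As a rough illustration of that handoff (the function names, thresholds, and mood scale here are invented for the sketch), a supporting tool might track patterns and produce a summary for a clinician, flagging concerning signals but never acting on them autonomously:

```python
# A minimal sketch (invented names and thresholds) of a human-in-the-loop
# handoff: the tool collects and summarizes, the human interprets.
from dataclasses import dataclass

@dataclass
class MoodEntry:
    day: int
    score: int  # self-reported mood, 1 (low) to 10 (high)

def summarize_for_clinician(entries, alert_threshold=3):
    """Summarize tracked data and flag concerning patterns.
    The output is a report for a human therapist, not an automated action."""
    scores = [e.score for e in entries]
    low_days = [e.day for e in entries if e.score <= alert_threshold]
    return {
        "average_mood": round(sum(scores) / len(scores), 1),
        "low_mood_days": low_days,
        "needs_prompt_review": len(low_days) >= 2,  # a human decides what it means
    }

week = [MoodEntry(d, s) for d, s in enumerate([6, 5, 2, 3, 7, 4, 2], start=1)]
print(summarize_for_clinician(week))  # the therapist interprets this summary
```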

The speculation around the future of humans and bots is not misplaced, but it is perhaps misguided. We should not fear the skills of AI; we should use them in tandem with our own skillsets to achieve goals that neither would be able to accomplish alone.

Edited by Mandi Nowitz