Artificial Intelligence: The Human to Bot Handoff

Artificial intelligence (AI) is one of the most discussed and debated topics today. It is touching every industry – from consumer goods to automotive to finance to healthcare – and its capabilities only seem to be growing.

According to Elon Musk and Stephen Hawking, we are witnessing an “intelligence explosion,” a term that refers to “an uncontrollable hyper-leap in the cognitive ability of AI.” The past couple of years have seen a dramatic increase in the accuracy and aptitude of AI technology, and these products have moved front and center in our lives.

A key point of contention for critics of AI is how far those capabilities can go: they fear that the intelligence of the machine will surpass that of the human, enabling the machine to act autonomously.

Though it is unlikely (at least in the near future) that we will encounter sci-fi-style robots bent on eliminating the human race, AI does pose potential threats that we should be cautious of.

One potential threat is the bias that can be introduced when training AI systems. In healthcare, for example, AI is often trained on data collected from randomized controlled trials, which can carry substantial bias due to factors like differences in trial periods, locations, and patient populations. The biases present in the data are then perpetuated through the decision-making of the AI model, further widening gaps of inequality and misrepresentation.

For example, an AI model being trained to recognize the symptoms of a disease would draw on existing medical textbooks and studies. However, those texts largely describe the typical symptoms seen in younger white males. The bot does not know that women may experience slightly different symptoms, or that a symptom of one ailment could also be associated with a different issue that accompanies age. AI is highly capable, but only in narrow fields: it knows only the data it is trained on and cannot make connections between different but related data sets.
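This failure mode is mechanical rather than malicious, and it is easy to reproduce. The sketch below is a deliberately simplified, hypothetical illustration (synthetic data, a single invented “symptom score” feature, scikit-learn’s off-the-shelf logistic regression – not a real diagnostic model): a classifier trained almost entirely on one group performs well on that group and markedly worse on a group whose symptoms present differently.

```python
# Hypothetical sketch: how a skewed training set produces skewed predictions.
# The "symptom score" feature, thresholds, and cohort sizes are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, threshold):
    """Synthetic cohort: the disease is present when the symptom score
    exceeds a group-specific threshold (groups present differently)."""
    scores = rng.normal(loc=threshold, scale=1.0, size=n)
    labels = (scores > threshold).astype(int)
    return scores.reshape(-1, 1), labels

# Training data drawn almost entirely from one group (threshold 5.0),
# mirroring texts that describe symptoms as they appear in one population.
X_maj, y_maj = make_patients(950, threshold=5.0)
X_min, y_min = make_patients(50, threshold=6.5)   # presents differently
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.concatenate([y_maj, y_min]))

# On balanced held-out cohorts, the learned cutoff sits near the majority's
# threshold, so the underrepresented group is systematically misclassified.
for name, threshold in [("majority", 5.0), ("underrepresented", 6.5)]:
    X_test, y_test = make_patients(2000, threshold)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

Because the decision boundary is fit to the overrepresented cohort, patients whose scores fall between the two groups’ thresholds are consistently misread – exactly the perpetuation of bias described above.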

This is especially problematic in fields where you cannot act solely on data but must base your approach on interpersonal knowledge and an understanding of how varying factors affect each other.

In the mental health field, non-white students are significantly less likely than white students to seek mental health counseling, even though they tend to carry more stress. The stress that non-white students experience also differs from that of their white counterparts: they cope with race- and minority-related anxiety on top of the pressures everyone faces in college. What’s more, most therapists are white, with only 10 percent being Black, 8 percent Asian, and 7 percent Hispanic.

If a bot were trained on data collected from therapists, patients, and current statistics, the data would skew toward white people, identifying their worries and problems as “typical.” The bot would not fully comprehend the feelings that other users report and thus would not be as effective.

In the case of therapy, autonomously acting AI would be worrisome.  

Humans can understand that people are unique and that emotions are nuanced; machines cannot. When dealing with sensitive and personal matters, it is human connection and empathy that we crave. We want to be heard and to know we are taken care of – we don’t want to risk being wrongly categorized or mistreated. Therapists do not base therapy goals and techniques on hard data but on individualized approaches.

That said, AI has been immensely helpful in gathering and analyzing data at a speed no person could match, and it has given therapists an extra layer of insight that would not otherwise be easily accessible. It has brought innovation and progress to an industry that had largely been left out of the burst of technological advancement.

Still, it should not be used on its own as a replacement for actual therapy. It should instead be a tool that supplements the relationship between therapist and client and allows for a deeper, more connected experience. Smart tools should collect data and track patterns, then hand their findings to a human who can interpret them in terms of the individual to whom they pertain.
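In code, that handoff might look something like the sketch below. Every name in it is hypothetical (the MoodEntry record, flag_patterns, route_to_therapist – none of these come from a real product); the point is the shape of the design: the machine only collects and summarizes, and its output ends where a human’s judgment begins.

```python
# Hypothetical sketch of a human-in-the-loop handoff: the bot tracks
# self-reported mood, flags patterns, and routes them to a clinician.
from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class MoodEntry:
    day: date
    score: int  # 1 (very low) .. 10 (very high), self-reported

def flag_patterns(entries: list[MoodEntry], window: int = 7) -> list[str]:
    """Summarize recent trends for a clinician; never diagnose or reply."""
    recent = [e.score for e in entries[-window:]]
    flags = []
    if recent and mean(recent) <= 4:
        flags.append(f"average mood {mean(recent):.1f} over last {len(recent)} check-ins")
    last3 = recent[-3:]
    if len(last3) == 3 and last3[0] > last3[1] > last3[2]:
        flags.append("steady decline across the last three check-ins")
    return flags

def route_to_therapist(client_id: str, flags: list[str]) -> None:
    # Handoff point: the machine's output becomes input to a human's judgment.
    if flags:
        print(f"[review queue] client {client_id}: " + "; ".join(flags))

entries = [MoodEntry(date(2023, 1, d), s) for d, s in
           [(20, 5), (21, 5), (22, 4), (23, 4), (24, 4), (25, 3), (26, 2)]]
route_to_therapist("c-001", flag_patterns(entries))
```

Keeping the bot on the summarizing side of that line is what makes it a supplement rather than a substitute.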

The speculation around the future of humans and bots is not misplaced but is perhaps misguided. We should not fear the skills of AI but should use them in tandem with our own skillsets to achieve goals that neither would be able to accomplish alone.

Edited by Mandi Nowitz

