Artificial Intelligence: The Human to Bot Handoff

By Special Guest
Kouris Kalligas, CEO and co-founder of Therachat
September 21, 2017

Artificial intelligence (AI) is one of the most talked-about and hotly debated topics today. It is touching every industry – from consumer to automotive to financial to healthcare – and its capabilities only seem to be growing.

Elon Musk and Stephen Hawking have warned of an “intelligence explosion” – an uncontrollable hyper-leap in the cognitive ability of AI. The past few years have seen a dramatic increase in the accuracy and aptitude of AI technology, and these products have moved front and center in our lives.

A key concern among critics of AI is how far those capabilities can go; they fear that the intelligence of the machine will surpass that of the human, enabling the machine to act autonomously.

Though it is unlikely (at least in the near future) that we will encounter the sci-fi depiction of robots set on eliminating the human race, AI does pose potential threats that we should be cautious of.

One potential threat is the bias that can be introduced when training AI systems. In healthcare, for example, AI is often trained on data collected from randomized controlled trials, which can carry substantial biases stemming from differences in trial periods, locations and patient populations. The biases present in the data are then perpetuated through the model’s decision-making, further widening gaps of inequality and misrepresentation.

For example, if an AI model were trained to recognize the symptoms of a disease, it would draw on existing medical textbooks and studies. However, those texts often describe the typical symptoms seen in younger, white males. The bot does not know that women may experience somewhat different symptoms, or that a symptom of one ailment could also be associated with a different issue that accompanies age. AI is highly capable, but only in narrow fields: it knows only the data it is trained on and cannot make connections between different but related data sets.
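To make that failure mode concrete, here is a minimal sketch in Python. The “symptom severity” feature, the group labels and every number are synthetic assumptions invented for illustration, not data from any study; the point is only that a model trained almost entirely on one group can quietly underperform on another.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, mean_symptom):
    """Synthetic single 'symptom severity' feature; label 1 = has the disease."""
    healthy = rng.normal(0.0, 1.0, size=(n, 1))
    sick = rng.normal(mean_symptom, 1.0, size=(n, 1))
    X = np.vstack([healthy, sick])
    y = np.array([0] * n + [1] * n)
    return X, y

# Group A (the well-studied demographic) presents the symptom strongly;
# group B (under-represented in the literature) presents it more weakly.
X_a, y_a = make_patients(1000, mean_symptom=3.0)
X_b, y_b = make_patients(1000, mean_symptom=1.0)

# The training data is drawn entirely from group A, mirroring skewed trials.
model = LogisticRegression().fit(X_a, y_a)

print("accuracy on group A:", model.score(X_a, y_a))
print("accuracy on group B:", model.score(X_b, y_b))  # noticeably worse
```

The model never sees group B during training, so the decision threshold it learns from group A systematically misses group B’s milder presentation – the data gap becomes a care gap.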

This is especially problematic in fields where you cannot act solely on data but must base your approach on interpersonal knowledge and the understanding of how varying factors affect each other.

In the mental health field, non-white students are significantly less likely than white students to seek mental health counseling, even though they tend to carry more stress. The stress that non-white students experience also differs from that of their white counterparts: they cope with race- and minority-related anxiety on top of the pressures everyone faces in college. What’s more, most therapists are white, with only 10 percent being black, 8 percent Asian and 7 percent Hispanic.

If a bot were trained on data collected from therapists, patients and current statistics, the data would be skewed toward white people, identifying their worries and problems as “typical.” The bot would not be able to fully comprehend the feelings that other users were reporting and thus would be less effective.

In the case of therapy, autonomously acting AI would be worrisome.  

Humans can understand that every person is unique and that emotions are nuanced; machines cannot. When dealing with sensitive and personal matters, it is human connection and empathy that we crave. We want to be heard and to know we are being taken care of – we don’t want to risk being wrongly categorized or mistreated. Therapists do not base therapy goals and techniques on hard data alone but rather on individualized approaches.

That said, AI has been immensely helpful in gathering and analyzing data at a speed no person ever could, and it has given therapists an extra layer of insight that would not otherwise be easily accessible. It has brought innovation and progress to an industry that had previously been left out of the burst of technological advancement.

Still, it should not be used on its own as a replacement for actual therapy. It should instead be used as a tool that supplements the relationship between therapist and client and allows for a deeper, more connected experience. Smart tools should collect data and track patterns, as sketched below, but the results should then be turned over to a human who can interpret them in terms of the individual to whom they pertain.
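As a rough illustration of that handoff – every class, field name and threshold below is hypothetical, and none of it reflects Therachat’s actual implementation – a tool might aggregate self-reported data and flag a pattern for review, while leaving every clinical judgment to the therapist.

```python
from dataclasses import dataclass
from statistics import mean

# A hypothetical mood-tracking record; the fields and scale are illustrative.
@dataclass
class MoodEntry:
    day: int          # days since tracking began
    anxiety: float    # self-reported, 0 (calm) to 10 (severe)

def summarize_for_therapist(entries: list[MoodEntry], window: int = 7) -> dict:
    """Collect data and surface a pattern, but make no clinical decision.

    The output is a summary a human therapist interprets for the individual.
    """
    recent = [e.anxiety for e in entries[-window:]]
    earlier = [e.anxiety for e in entries[:-window]] or recent
    change = mean(recent) - mean(earlier)
    return {
        "recent_avg_anxiety": round(mean(recent), 2),
        "change_vs_baseline": round(change, 2),
        "flag_for_review": change > 1.5,  # threshold is an arbitrary example
    }

entries = [MoodEntry(d, a)
           for d, a in enumerate([3, 3, 4, 3, 4, 6, 7, 7, 8, 7, 8, 8])]
print(summarize_for_therapist(entries))
```

The key design choice is that `flag_for_review` routes the case to a person rather than triggering any automated intervention: the machine tracks the trend, and the human decides what it means for this particular client.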

The speculation around the future of humans and bots is not misplaced but is perhaps misguided. We should not fear the skills of AI; we should use them in tandem with our own to achieve goals that neither could accomplish alone.

Edited by Mandi Nowitz

