Artificial Intelligence: The Human to Bot Handoff


Artificial intelligence (AI) is one of today's most discussed and debated topics. It is touching every industry – from consumer electronics to automotive, finance and healthcare – and its capabilities only seem to be growing.

Figures such as Elon Musk and Stephen Hawking have warned that we are witnessing an “intelligence explosion,” a term that refers to “an uncontrollable hyper-leap in the cognitive ability of AI.” The past couple of years have seen a dramatic increase in the accuracy and aptitude of AI technology, and these products have become front and center in our lives.

A key concern among critics of AI is how far those capabilities can go; they fear that the intelligence of the machine will surpass that of the human, enabling the machine to act autonomously.

Though it is unlikely (at least in the near future) that we will encounter sci-fi-style robots set on eliminating the human race, AI does pose potential threats that we should be cautious of.

One potential threat is the bias that can be introduced when training AI systems. In medicine, for example, AI is often trained on data collected from randomized controlled trials, which frequently carry significant biases due to factors like differences in trial periods, locations and patient populations. The biases present in the data are then perpetuated through the decision-making process of the AI model, further widening gaps of inequality and misrepresentation.

For example, if an AI model were being trained to recognize the symptoms of a disease, it would draw on existing medical textbooks and studies. However, those texts largely describe the typical symptoms seen in younger white males. The bot does not know that women may experience slightly different symptoms, or that a symptom of one ailment could also be associated with a different issue that accompanies age. AI is highly capable, but only in narrow fields: it knows only the data it is trained on and cannot make connections between different but related data sets.
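To make the mechanism concrete, here is a minimal, hypothetical sketch of how skewed training data can degrade a model's accuracy for an under-represented group. The cohorts, sample sizes and symptom "shift" below are all invented for illustration; this is a toy model, not any real diagnostic system.

```python
# Toy illustration: a classifier trained mostly on one demographic
# group misreads patients from an under-represented group whose
# symptoms present slightly differently. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_patients(n, shift):
    """Synthetic cohort: one symptom score whose presentation is
    offset by `shift` for this (hypothetical) demographic group."""
    y = rng.integers(0, 2, n)                  # 0 = healthy, 1 = disease
    x = rng.normal(y * 2.0 + shift, 1.0)[:, None]
    return x, y

# Training data: 95% group A (shift 0.0), only 5% group B (shift 1.5),
# mirroring studies that over-sample one population.
xa, ya = make_patients(950, shift=0.0)
xb, yb = make_patients(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([xa, xb]), np.hstack([ya, yb]))

# Evaluate on fresh, equal-sized cohorts from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    x, y = make_patients(1000, shift)
    print(name, "accuracy:", (model.predict(x) == y).mean())
# Group A scores markedly higher than group B: healthy group B patients
# are often flagged as diseased, because the decision boundary was fit
# almost entirely to group A's symptom profile.
```

Notably, the model in this sketch is not "wrong" about its training data; it faithfully reproduces the skew it was given, which is exactly the concern raised above.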

This is especially problematic in fields where one cannot act solely on data but must base one's approach on interpersonal knowledge and an understanding of how varying factors affect each other.

In the mental health field, non-white students are significantly less likely than white students to seek mental health counseling, even though they tend to carry more stress. The stress non-white students experience also differs from that of their white counterparts, as they cope with race- and minority-related anxiety on top of the pressures everyone faces in college. What's more, most therapists are white, with only 10 percent being Black, 8 percent Asian and 7 percent Hispanic.

If a bot were trained on data collected from therapists, patients and current statistics, that data would be skewed toward white people, identifying their worries and problems as “typical.” The bot would not be able to fully comprehend the feelings other users were reporting and thus would be less effective.

In the case of therapy, autonomously acting AI would be worrisome.  

Humans can understand that every person is unique and that emotions are nuanced; machines cannot. When dealing with sensitive and personal matters, it is human connection and empathy that we crave. We want to be heard and to know we are taken care of, and we don't want to risk being wrongly categorized or mistreated. Therapists do not base therapy goals and techniques on hard data alone but on individualized approaches.

That said, AI has been immensely helpful in gathering and analyzing data at a speed no person ever could, and it has given therapists an extra layer of insight that would not be easily accessible without the technology. It has brought innovation and progress to an industry that had not previously shared in the burst of technological advancement.

Still, it should not be used on its own as a replacement for actual therapy. It should instead be a tool that supplements the relationship between therapist and client and allows for a deeper, more connected experience. Smart tools should collect data and track patterns, then turn the results over to a human who can interpret them in terms of the individual to whom they pertain.

The speculation around the future of humans and bots is not misplaced, but it is perhaps misguided. We should not fear the skills of AI; we should use them in tandem with our own to achieve goals that neither could accomplish alone.




Edited by Mandi Nowitz

