People all over the world have been getting a kick out of the new products powered by generative AI, such as ChatGPT and Bing’s AI. But what happens when bad actors – people bent on doing harm – harness the technology for their own negative uses?
At ITEXPO 2023, one presenter compared the move to generative AI with the introduction of automobiles. "It sure is exciting," she said, "but the horses are scared to death."
"It's amazing it's here, but we will need some traffic lights, and maybe some drivers' licenses," metaphorized Bärbel Wetenkamp, CEO of The Swiss Quality Consulting GmbH. Wetenkamp is also a professor in master studies at a university in Munich, Germany.
In 2021, just 1% of content on the internet was created using generative AI. That figure is expected to zoom to more than 10% of all content in two years. "Yes, I think there will be some lost jobs," she said, "but I also think there will be new jobs created."
"Generative AI will help us to scale our businesses and get more output," Wetenkamp asserted.
On the other hand, the possible use of generative AI to create despicable content is worrisome. "There will be abuse," she projected. "Imagine hard-working mothers and fathers pictured doing the unthinkable, and then their children see it online."
Wetenkamp's three top fears for the immediate future of generative AI include:
- Deep Fakes – “Right now, it's nearly impossible to determine whether an image is real or if it’s a Deep Fake image,” she said. As the victim herself of a photo that was altered and shared, she realizes firsthand the damage a Deep Fake can do.
- Copyright – What are the rules regarding copyright infringement committed by a robot? What is your recourse if you are slandered or libeled by a robot? “Copyright law as it is written is not ready for generative AI,” she said.
- Malicious Use – With generative AI, bullies and bad actors now have another avenue for damaging somebody's reputation. The malicious use of generative AI is another legal issue that hasn't yet been addressed.
Wetenkamp told the expo session’s audience she harbors no grudge against generative AI itself. "There's nothing wrong with it," she said. "We just need to mitigate the risk of using it."
Wetenkamp's three top tips for mitigating the immediate dangers posed by generative AI include:
- Ethics Adoption – Developers and users need to agree upon an ethical framework for employing generative AI, Wetenkamp said. “The recent report by Gartner on AI Trust, Risk and Security Management is a good place to begin.”
- Bill of Rights – People who are affected by AI – that is, all of us – need protection from bad actors. "Everyone who creates and uses generative AI should be held responsible for what happens,” Wetenkamp said.
- Awareness Campaigns – Businesses in the generative AI world should be required to fund awareness campaigns, just as tobacco companies were.
She noted that OpenAI, the company behind ChatGPT, said it is working on a way for users to distinguish between generative AI content and original human content. "I think it's a good thing. It’s something we need.”
Overall, Wetenkamp concluded, regulation needs to precede development, because the issue is so important to us all.
Edited by Alex Passett